NIST 2020
Open Speech Analytic Technologies Evaluation
(OpenSAT20)
OpenSAT20_KWS_Progress_Set
| TEAM | PROGRESS SET TWV | Best | Last | ASSOC. TEST SET TWV | Date & Time |
|---|---|---|---|---|---|
| Team_002 | 0.6280 | X | X | 0.7116 | Mon Aug 3 23:39:39 UTC 2020 |
| Team_008 | 0.5171 | X | X | 0.5859 | Sun Aug 16 15:16:08 UTC 2020 |
| Team_0011 | 0.5023 | X | | 0.4239 | Fri Aug 14 11:42:50 UTC 2020 |
| Team_0011 | 0.5018 | | X | 0.4274 | Fri Aug 14 11:45:28 UTC 2020 |
| Team_0014 | -0.417 | X | X | -0.523 | Sun Aug 16 18:10:37 UTC 2020 |
OpenSAT20_KWS_Test_Set
| TEAM | TEST SET TWV | Best | Last | ASSOC. PROGRESS SET TWV | Date & Time |
|---|---|---|---|---|---|
| Team_002 | 0.7116 | X | X | 0.6280 | Mon Aug 3 23:39:39 UTC 2020 |
| Team_008 | 0.5994 | X | | 0.4970 | Sat Aug 15 22:11:22 UTC 2020 |
| Team_008 | 0.5859 | | X | 0.5171 | Sun Aug 16 15:16:08 UTC 2020 |
| Team_0011 | 0.4274 | X | X | 0.5018 | Fri Aug 14 11:45:28 UTC 2020 |
| Team_0014 | -0.523 | X | X | -0.417 | Sun Aug 16 18:10:37 UTC 2020 |
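A note on the metric: Term-Weighted Value (TWV) is not defined on this page; assuming the standard NIST KWS formulation, it trades misses against false alarms at a decision threshold θ:

```latex
\mathrm{TWV}(\theta) = 1 - \left[ P_{\mathrm{Miss}}(\theta) + \beta \, P_{\mathrm{FA}}(\theta) \right]
```

Here P_Miss and P_FA are the keyword-averaged miss and false-alarm probabilities, and β is the false-alarm weight (999.9 in past NIST KWS evaluations). A perfect system scores 1.0, a system that outputs nothing scores 0.0, and negative values such as Team_0014's indicate that false-alarm penalties outweigh correct detections.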
OpenSAT20_ASR_Progress_Set
| TEAM | PROGRESS SET WER | Best | Last | ASSOC. TEST SET WER | Date & Time |
|---|---|---|---|---|---|
| Team_0017 | 8.5 | X | X | 13.3 | Sat Aug 15 02:52:31 UTC 2020 |
| Team_002 | 14.0 | | X | 9.5 | Sat Aug 15 04:19:02 UTC 2020 |
| Team_002 | 14.0 | X | | 9.4 | Sat Aug 15 03:48:54 UTC 2020 |
| Team_008 | 14.0 | X | | 9.9 | Sat Aug 15 22:12:03 UTC 2020 |
| Team_008 | 14.9 | | X | 10.9 | Sun Aug 16 19:34:02 UTC 2020 |
| Team_007 | 18.6 | X | | 12.7 | Tue Aug 11 10:07:43 UTC 2020 |
| Team_007 | 18.8 | | X | 12.7 | Tue Aug 11 15:44:52 UTC 2020 |
| Team_0020 | 19.2 | X | X | 28.8 | Mon Aug 10 12:03:34 UTC 2020 |
| Team_006 | 22.9 | X | | 14.4 | Thu Jul 30 08:32:03 UTC 2020 |
| Team_006 | 26.0 | | X | 17.9 | Fri Aug 14 10:54:04 UTC 2020 |
| Team_0014 | 107.4 | X | X | 107.5 | Sun Aug 16 10:48:37 UTC 2020 |
OpenSAT20_ASR_Test_Set
| TEAM | TEST SET WER | Best | Last | ASSOC. PROGRESS SET WER | Date & Time |
|---|---|---|---|---|---|
| Team_0017 | 13.3 | X | X | 8.5 | Sat Aug 15 02:52:31 UTC 2020 |
| Team_002 | 14.0 | | X | 9.5 | Sat Aug 15 04:19:02 UTC 2020 |
| Team_002 | 14.0 | X | | 9.4 | Sat Aug 15 03:48:54 UTC 2020 |
| Team_008 | 14.0 | X | | 9.9 | Sat Aug 15 22:12:03 UTC 2020 |
| Team_008 | 14.9 | | X | 10.9 | Sun Aug 16 19:34:02 UTC 2020 |
| Team_007 | 18.6 | X | | 12.7 | Tue Aug 11 10:07:43 UTC 2020 |
| Team_007 | 18.8 | | X | 12.7 | Tue Aug 11 15:44:52 UTC 2020 |
| Team_006 | 22.9 | X | | 14.4 | Thu Jul 30 08:32:03 UTC 2020 |
| Team_006 | 26.0 | | X | 17.9 | Fri Aug 14 10:54:04 UTC 2020 |
| Team_0020 | 28.8 | X | X | 19.2 | Mon Aug 10 12:03:34 UTC 2020 |
| Team_0014 | 107.5 | X | X | 107.4 | Sun Aug 16 10:48:37 UTC 2020 |
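Word Error Rate (WER) here follows the usual definition (assumed, since this page does not restate it): the number of substitutions S, deletions D, and insertions I from the minimum-edit-distance alignment against the reference, divided by the number of reference words N:

```latex
\mathrm{WER} = \frac{S + D + I}{N} \times 100
```

Because insertions count as errors, WER can exceed 100, which is why Team_0014's scores of 107.4 and 107.5 are possible. Lower is better.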
OpenSAT20_SAD_Progress_Set
| TEAM | PROGRESS SET DCF | Best | Last | ASSOC. TEST SET DCF | Date & Time |
|---|---|---|---|---|---|
| Team_002 | 0.0440 | X | X | 0.0699 | Sun Aug 16 09:49:38 UTC 2020 |
| Team_0024 | 0.0443 | | X | 0.0699 | Mon Aug 17 03:00:06 UTC 2020 |
| Team_0024 | 0.0443 | X | | 0.0689 | Sun Aug 16 21:45:30 UTC 2020 |
| Team_007 | 0.0914 | X | | 0.1122 | Sat Aug 15 19:01:27 UTC 2020 |
| Team_007 | 0.1135 | | X | 0.1729 | Sat Aug 15 19:32:27 UTC 2020 |
| Team_0014 | 0.1506 | X | X | 0.2116 | Sun Aug 16 13:42:08 UTC 2020 |
OpenSAT20_SAD_Test_Set
| TEAM | TEST SET DCF | Best | Last | ASSOC. PROGRESS SET DCF | Date & Time |
|---|---|---|---|---|---|
| Team_002 | 0.0600 | X | | 0.0536 | Sat Aug 15 04:48:06 UTC 2020 |
| Team_0024 | 0.0678 | X | | 0.0448 | Fri Aug 14 01:02:21 UTC 2020 |
| Team_002 | 0.0699 | | X | 0.0440 | Sun Aug 16 09:49:38 UTC 2020 |
| Team_0024 | 0.0699 | | X | 0.0443 | Mon Aug 17 03:00:06 UTC 2020 |
| Team_007 | 0.1095 | X | | 0.1200 | Tue Jul 21 16:31:21 UTC 2020 |
| Team_007 | 0.1729 | | X | 0.1135 | Sat Aug 15 19:32:27 UTC 2020 |
| Team_0014 | 0.2116 | X | X | 0.1506 | Sun Aug 16 13:42:08 UTC 2020 |
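The SAD Detection Cost Function (DCF) is a weighted combination of the miss and false-alarm rates; assuming the weights used in prior OpenSAT/OpenSAD evaluation plans (0.75 for misses, 0.25 for false alarms):

```latex
\mathrm{DCF}(\theta) = 0.75 \cdot P_{\mathrm{Miss}}(\theta) + 0.25 \cdot P_{\mathrm{FA}}(\theta)
```

Lower is better, with 0 indicating perfect agreement with the reference speech/non-speech segmentation.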
| System Tasks | Data Domains |
|---|---|
| Automatic Speech Recognition (ASR), Speech Activity Detection (SAD), Keyword Search (KWS) | Public safety communications (PSC) |
| Milestone | Date |
|---|---|
| Registration opens | April 13th |
| Development/Training data | Available for download until July 31st |
| Evaluation data (a combined progress data set and test data set in one file) | Available for download until July 31st |
| Last date to upload evaluation results to the NIST server | August 16th |
| NIST Virtual Workshop | September 16th |
- System output must be in the format described in the Evaluation Plan for the task that was performed (SAD, KWS, or ASR).
- Have the .tgz or .zip file ready, packaged per Appendix IV of the OpenSAT20 Evaluation Plan (also summarized below these steps).
- In "Submission Management", click the Task that represents the system output.
- Click "Create new Submission" located at the upper right of the dashboard. A “Submission Name” page will display.
- Select the Data Domain from the drop-down menu.
- Enter a system identifier in “Name”.
- Click “Submit”. A “Submissions” page will display.
- On the “Submissions” page, click “Upload”. The “New Submission” page displays.
- Click the "Choose File". Choose the .tgz or .zip file to upload.
- Click "Submit".
- A submission ID is automatically created for the submission.
- The “Scoring” button on the “Submissions” page displays “submitted” until the scoring server completes scoring and then it changes to “Done”.
- When “Done” is displayed, click the “Scoring” button to view the Scoring Run Report.
- Click “View Submission” to see the submission information.
- Each submission shall be an archive file named as "SysLabel".tgz or "SysLabel".zip.
- Submit a separate .tgz or .zip file for each system output (e.g., a separate .tgz or .zip file for Primary, Contrastive1, and Contrastive2 systems).
- "SysLabel" shall be an alphanumeric [a-zA-Z0-9] that is a performer-assigned identifier for their submission.
- There should be no parent directory when the submission file is extracted; the output files must sit at the top level of the archive. For example, create the archive from inside the output directory with `tar -czf MySystemSubmissionFile.tgz *` or `zip MySystemSubmissionFile.zip *`, respectively (a packaging sketch in Python follows this list).
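Below is a minimal sketch of that packaging step in Python, assuming the system output files live in a directory such as `system_output/`; the function and file names are illustrative, not part of the evaluation tooling.

```python
import re
import tarfile
from pathlib import Path

def package_submission(out_dir: str, sys_label: str) -> Path:
    """Create <SysLabel>.tgz with the output files at the archive root,
    so no parent directory appears when the file is extracted."""
    if not re.fullmatch(r"[A-Za-z0-9]+", sys_label):
        raise ValueError("SysLabel must be alphanumeric [a-zA-Z0-9]")
    archive = Path(f"{sys_label}.tgz")
    with tarfile.open(archive, "w:gz") as tar:
        for f in sorted(Path(out_dir).iterdir()):
            # arcname=f.name strips the parent directory from each entry
            tar.add(f, arcname=f.name)
    return archive

def has_no_parent_dir(archive: Path) -> bool:
    """Check that no archive member is nested under a directory."""
    with tarfile.open(archive, "r:gz") as tar:
        return all("/" not in m.name for m in tar.getmembers())

if __name__ == "__main__":
    a = package_submission("system_output", "MySysLabel01")
    assert has_no_parent_dir(a), "archive contains a parent directory"
```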
| Field | Information | Method |
|---|---|---|
| TeamID | [Team] | obtained from login information |
| Task | {SAD \| ASR \| KWS} | select from drop-down menu |
| SubmissionType | {primary \| contrastive} | select from drop-down menu |
| Training Condition | {unconstrained} | default - hard-coded |
| EvalPeriod | {2019} | default - hard-coded |
| DatasetName | {PSC \| VAST \| Babel} | select from drop-down menu |
| Date | {YYYYMMDD} | obtained from NIST scoring server at submission date |
| TimeStamp | {HHMMSS} | obtained from NIST scoring server at submission time |
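The last two fields are generated server-side at upload time; for reference, a minimal Python sketch of the same formatting (variable names are ours, not part of the NIST tooling):

```python
from datetime import datetime, timezone

# The scoring server records the submission moment in UTC.
now = datetime.now(timezone.utc)
date_field = now.strftime("%Y%m%d")   # Date, {YYYYMMDD}, e.g., "20200816"
time_stamp = now.strftime("%H%M%S")   # TimeStamp, {HHMMSS}, e.g., "151608"
```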