NIST 2020
Open Speech Analytic Technologies Evaluation

Leaderboards are not available until uploads can be enabled.


Updated: 2020-06-05T16:25:08+00:00
[Leaderboard scores for teams THUEE, CRIM, TAM, anis-team, Elektronika, Catskills-Research-Company, and BoUn; the task and metric labels for each score were not preserved.]
Contact Us

For any information about the OpenSAT Evaluation (data, evaluation code, etc.) please email:
Welcome to the NIST Open Speech Analytic Technologies Evaluation (OpenSAT20)
OpenSAT20 will be supporting the public safety communications domain.
OpenSAT will be hosting the Linguistic Data Consortium (LDC) DIHARD III in 2020.
Go to the DIHARD III tab for more information.
OpenSAT20 is the second in the OpenSAT Series for speech analytic systems evaluations. OpenSAT provides an opportunity for participants to compare their system performance against a pool of systems performances for each task and is intended to encourage cross-learning among developers.

Tasks and Domains

System Tasks:
  • Automatic Speech Recognition (ASR)
  • Speech Activity Detection (SAD)
  • Keyword Search (KWS)
Data Domain:
  • Public safety communications (PSC)

Simulated Public Safety Communications (PSC)

The OpenSAT20 evaluation will include the ASR, SAD, and KWS tasks for simulated first responder public safety communications. Participants may take part in one, two, or all three tasks.

The simulated public safety communications dataset was created by LDC and funded by the Department of Homeland Security (DHS) to advance first responder assistive technologies in the noisy, high-stress environments typical of first responder operations, including the effects of those environments on speech. The dataset includes simulated first responder communications with and without the Lombard effect and with moments of expressed urgency in speech. The audio also includes low-level and loud background sounds typical of first responder events.

The simulated first responder communications are intended as a precursor to advancing assistive technologies for real-world operational communications. Real-world operational data is expected to be made available in future evaluations.

2020 Tentative Schedule for SAD, ASR, KWS

Milestone Date
Registration opens April 13th
Development/Training data and Part 1 Evaluation data released April 13th
Part 2 Evaluation "test" data released June 15th
Last date to upload Part 2 Evaluation "test" data results to NIST server June 30th
Registration for NIST Workshop TBD
NIST Workshop August 12-13 (Tentative)
OpenSAT20 is open to the public. All organizations, i.e., universities, government institutions, corporations, and businesses, are invited to apply their technologies to the OpenSAT20 data and submit their system results to the OpenSAT evaluation server. The evaluation is open worldwide. Participation in the evaluation includes attendance at a workshop that follows the evaluation. Participation in both the evaluation and the workshop is free. NIST does not provide funds to participants.

Participation Logistics
Each participant must create an account on this web platform. After creating an account, each participant will either create a new Team or join an existing Team.

After registering and having an LDC data license agreement approved, participants will be able to participate in the OpenSAT20 Evaluation. Most of the data will be accessed from LDC and some of the data from this site.

Participants will submit tar.gz files of their system’s output to the NIST OpenSAT scoring server using this web site.

Go to the Register tab for registration instructions and to register for OpenSAT20.


Registration Instructions
If you already have an account, login here or at the top of the page. To create an account and register, follow the steps below.
To Create an Account:
1- Click the "Sign up" button at the top of the page.
2- Provide a valid e-mail address and create a password in the “Sign up” form. (After you click “Sign up” on the “OpenSAT20 Sign up” form, a confirmation email will be sent to that address.)
3- Click “Confirm my account” in the e-mail sent to you. (A log-in page will display with your email address and created password already entered.)
4- Click “Log in”. (A dashboard for your account will display with Registration Steps.)
5- Complete the steps in the dashboard to finish account creation.
6- Registration is complete when steps 1-5 are done.
When you are notified by email from LDC that your License Agreement is approved, you can then access the data.
Creating a Team or Joining a Site and Team
When joining OpenSAT, a participant can either create a Site, or join an existing Site and create a Team, or join an existing Team. A participant can be a member of multiple teams.
Each participant, Site, and Team will have its own password. The creators of the Site and the Team create those passwords, respectively.
The NIST Agreement
Check the “I acknowledge that I have read and accepted the OpenSAT20 Terms and Conditions” box and then click the “Update the License Agreement” button.
The Data License Agreement
The Site creator is required to agree to the LDC terms in order to access data for that Site. Read the LDC license agreement and accept the terms by uploading the signed license agreement form. Participants cannot download data until LDC approves the uploaded signed LDC license agreement.
The Dashboard
The dashboard is the personalized page for each participant. To access the dashboard at any time, click the "Dashboard" at the top right of the screen. This is where you can make submissions and view results.
System Output Submission Instructions
Each submission must be associated with a Site, Team and Task.
Multiple systems may be created for each Task with a submission for each system.
Submit system output for validation checking or scoring following these steps:
1- Prepare for Submission
  • System output must be in the format described in the Evaluation Plan for the task that was performed (SAD, KWS, or ASR).
  • Have the .tgz or .zip file ready per Appendix IV in the OpenSAT20 Evaluation Plan and also shown below these steps.
2- Go to Dashboard. (Select "Dashboard" on the top right of the page.)
  • In "Submission Management", click the Task that represents the system output.
  • Click "Create new Submission" located at the upper right of the dashboard. A “Submission Name” page will display.
    • Select the Data Domain from the drop down
    • Enter a system identifier in “Name”.
    • Click “Submit”. A “Submissions” page will display.
  • On the “Submissions” page, click “Upload”. The “New Submission” page displays.
    • Click the "Choose File". Choose the .tgz or .zip file to upload.
    • Click "Submit".
    • A submission ID is automatically created for the submission.
    • The “Scoring” button on the “Submissions” page displays “submitted” until the scoring server completes scoring and then it changes to “Done”.
    • When “Done” is displayed, click “Scoring” button for a Scoring Run Report.
    • Click “View Submission” to see Submission information.
3- View Submission Results
  • To see a Scoring Run Report, click the “Scoring” button, after “submitted” changes to “Done” on the button.
  • To see information about the submission click the “View Submission” button.
Below is Appendix IV from the OpenSAT20 Evaluation Plan: SAD, KWS, and ASR - System Output Submission Packaging
  • Each submission shall be an archive file named as "SysLabel".tgz or "SysLabel".zip.
  • Submit a separate .tgz or .zip file for each system output (e.g., a separate .tgz or .zip file for Primary, Contrastive1, and Contrastive2 systems).
  • "SysLabel" shall be an alphanumeric [a-zA-Z0-9] string that serves as a performer-assigned identifier for the submission.
  • There should be no parent directory when the submission file is untarred. Create the archive from inside the directory containing the system output, e.g., tar -czf MySystemSubmissionFile.tgz <files> or zip MySystemSubmissionFile.zip <files>, respectively.
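As a sketch of the packaging step (the directory and file names here are illustrative, not part of the evaluation plan), the archive can be built so that untarring produces no parent directory:

```shell
# Illustrative system output directory and file (names are assumptions).
mkdir -p sysout
printf 'example system output\n' > sysout/results.txt

# Archive the directory's contents, not the directory itself, so that
# untarring yields no parent directory (per Appendix IV).
tar -czf MySystemSubmissionFile.tgz -C sysout .

# The listing shows only ./results.txt, with no parent directory component.
tar -tzf MySystemSubmissionFile.tgz
```

The -C option changes into sysout before archiving and "." selects its contents; the zip equivalent is to run zip -r from inside the output directory.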
Prior to uploading the submission file to the NIST scoring server, performers will be asked for information about the submission. The scoring server will attach the following information to the submission filename to categorize and uniquely identify the submission:
Field Information Method
TeamID [Team] obtained from login information
Task {SAD | ASR | KWS} select from drop-down menu
SubmissionType {primary | contrastive} select from drop-down menu
Training Condition {unconstrained} default - hard-coded
EvalPeriod {2019} default - hard-coded
DatasetName {PSC | VAST | Babel} select from drop-down menu
Date {YYYYMMDD} obtained from NIST scoring server at submission date
TimeStamp {HHMMSS} obtained from NIST scoring server at submission time

Below is an example of a resulting filename:
NIST_ASR_primary_unconstrained_2019_PSC_20190415_163026_MySystemSubmissionFile.tgz
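The naming scheme above can be sketched in shell (the field values are the illustrative ones from the example; the variable names are assumptions, not part of the server interface):

```shell
# Fields from the table, joined with underscores by the scoring server.
TeamID=NIST
Task=ASR
SubmissionType=primary
TrainingCondition=unconstrained
EvalPeriod=2019
DatasetName=PSC
Date=20190415
TimeStamp=163026
SysLabel=MySystemSubmissionFile   # must be alphanumeric [a-zA-Z0-9]

# Reject a non-alphanumeric SysLabel before building the name.
case "$SysLabel" in
  *[!a-zA-Z0-9]*) echo "SysLabel must be alphanumeric" >&2; exit 1 ;;
esac

echo "${TeamID}_${Task}_${SubmissionType}_${TrainingCondition}_${EvalPeriod}_${DatasetName}_${Date}_${TimeStamp}_${SysLabel}.tgz"
# NIST_ASR_primary_unconstrained_2019_PSC_20190415_163026_MySystemSubmissionFile.tgz
```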

The NIST scoring server will perform a validation check on each system output submission for conforming to the submission format required for each task.
Submission of a system description conforming to the system description guidelines in Appendix V is required before receiving the system’s score and ranking results in the Evaluation phase.
For the first time OpenSAT will be partnering with Linguistic Data Consortium (LDC) in hosting the Third DIHARD Speech Diarization Challenge (DIHARD III). All DIHARD III evaluation activities (registration, results submission, scoring, and leaderboard display) will be conducted through web-interfaces hosted by OpenSAT.

For additional information about DIHARD III, including registration, schedule, data, tasks, and scoring, please consult the official DIHARD III website.


The results of the challenge will be presented at a post-evaluation workshop, to be collocated with Interspeech 2020 in Shanghai, China on October 25th, 2020. All participants are invited to submit 2 page extended abstracts describing their submissions and results on the DIHARD III development and evaluation sets. Provisions will be made for remote participation to accommodate those unable to travel to Shanghai due to COVID-19 related disruptions. For additional details, please see the workshop website.

For more information about DIHARD III, please join the mailing list or contact the organizers via email at
Scoring and Validation tools: F4DE and SCTK
Here are the GitHub links for SCTK and F4DE:
SCTK: for Automatic Speech Recognition (ASR)
F4DE: for Keyword Search (KWS)

For Speech Activity Detection (SAD), the validation and scoring tools are available in the Dashboard after registration and approval of the license agreement.

Questions? Email questions to