ICASSP 2026 URGENT Challenge - Track 1: Universal Speech Enhancement
Organized by urgent


Validation phase

Start: Sept. 20, 2025, noon UTC

Description: Please follow the instructions at https://urgent-challenge.github.io/urgent2026/track1/#submission to prepare the zip file for submission (an illustrative packaging sketch follows this timeline).

Non-blind test phase

Start: Oct. 14, 2025, noon UTC

Description: Please follow the instructions at https://urgent-challenge.github.io/urgent2026/track1/#submission to prepare the zip file for submission.

Blind test phase

Start: Nov. 4, 2025, noon UTC

Description: Please follow the instructions at https://urgent-challenge.github.io/urgent2026/#submission to prepare the zip file for submission.

Competition ends: Nov. 8, 2025, noon UTC
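The exact submission layout is defined in the linked instructions. Purely as an illustrative sketch, the snippet below assumes the submission is a flat zip archive of enhanced .wav files named by utterance ID; the helper name make_submission_zip and the enhanced_audio directory are hypothetical, and the actual required naming, directory structure, and any metadata must be taken from the official instructions.

```python
# Hypothetical packaging helper: zips enhanced .wav files into a flat archive.
# The required layout/naming is defined by the challenge instructions at
# https://urgent-challenge.github.io/urgent2026/track1/#submission -- this is
# only an illustrative sketch, not the official submission format.
import zipfile
from pathlib import Path


def make_submission_zip(enhanced_dir: str, out_zip: str = "submission.zip") -> None:
    """Pack all .wav files found under `enhanced_dir` into a flat zip."""
    wavs = sorted(Path(enhanced_dir).rglob("*.wav"))
    if not wavs:
        raise FileNotFoundError(f"No .wav files found under {enhanced_dir}")
    with zipfile.ZipFile(out_zip, "w", compression=zipfile.ZIP_STORED) as zf:
        for wav in wavs:
            # Store each file under its bare name (assumed to be the utterance ID).
            zf.write(wav, arcname=wav.name)


if __name__ == "__main__":
    make_submission_zip("enhanced_audio")
```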

Stage-1: Final ranking on objective metrics
Results
Each metric cell shows the value with the per-metric rank in parentheses; teams are ordered by the overall ranking score (lower is better). Metric groups: non-intrusive SE metrics (DNSMOS, NISQA, UTMOS, SCOREQ), intrusive SE metrics (PESQ, ESTOI, POLQA), downstream-task-independent metrics (SpeechBERTScore, LPS, SpkSim, EmoSim), and downstream-task-dependent metrics (LID, CAcc).

| # | User | DNSMOS | NISQA | UTMOS | SCOREQ | PESQ | ESTOI | POLQA | SpeechBERTScore | LPS | SpkSim | EmoSim | LID | CAcc (%) | Overall ranking score |
|---|------|--------|-------|-------|--------|------|-------|-------|-----------------|-----|--------|--------|-----|----------|-----------------------|
| 1 | WR | 2.87 (5) | 3.49 (4) | 2.48 (5) | 3.52 (2) | 2.71 (1) | 0.87 (1) | 3.46 (1) | 0.88 (1) | 0.75 (3) | 0.78 (2) | 0.99 (1) | 0.87 (2) | 69.69 (1) | 2.12 |
| 2 | GHW | 2.76 (11) | 3.40 (7) | 2.46 (6) | 3.41 (3) | 2.69 (2) | 0.86 (2) | 3.46 (1) | 0.87 (2) | 0.72 (6) | 0.77 (3) | 0.99 (1) | 0.86 (3) | 67.66 (5) | 3.85 |
| 3 | subatomicseer | 2.71 (13) | 3.26 (10) | 2.14 (9) | 3.16 (7) | 2.60 (3) | 0.86 (2) | 3.42 (3) | 0.87 (2) | 0.76 (2) | 0.78 (2) | 0.99 (1) | 0.88 (1) | 69.46 (2) | 3.98 |
| 4 | baird | 2.85 (6) | 3.42 (6) | 2.51 (4) | 3.52 (2) | 2.50 (8) | 0.86 (2) | 3.33 (5) | 0.87 (2) | 0.72 (6) | 0.77 (3) | 0.99 (1) | 0.85 (4) | 67.13 (7) | 4.31 |
| 5 | RICK2000 | 2.63 (15) | 3.01 (13) | 2.07 (12) | 2.97 (10) | 2.54 (6) | 0.86 (2) | 3.43 (2) | 0.88 (1) | 0.74 (4) | 0.78 (2) | 0.99 (1) | 0.88 (1) | 69.69 (1) | 4.90 |
| 6 | Ali-Universal-SE | 2.83 (8) | 3.17 (11) | 2.28 (7) | 3.18 (6) | 2.31 (10) | 0.82 (5) | 3.14 (9) | 0.87 (2) | 0.75 (3) | 0.78 (2) | 0.99 (1) | 0.88 (1) | 69.35 (3) | 5.06 |
| 7 | camssly | 2.89 (3) | 3.40 (7) | 2.09 (11) | 3.10 (8) | 2.52 (7) | 0.83 (4) | 3.18 (7) | 0.86 (3) | 0.71 (7) | 0.74 (6) | 0.99 (1) | 0.87 (2) | 67.22 (6) | 5.50 |
| 8 | Jyxiong | 2.87 (5) | 3.31 (9) | 2.09 (11) | 3.01 (9) | 2.55 (5) | 0.82 (5) | 3.25 (6) | 0.86 (3) | 0.73 (5) | 0.74 (6) | 0.99 (1) | 0.86 (3) | 66.23 (10) | 5.71 |
| 9 | rc | 2.68 (14) | 2.81 (16) | 2.04 (13) | 2.80 (13) | 2.57 (4) | 0.87 (1) | 3.40 (4) | 0.88 (1) | 0.77 (1) | 0.79 (1) | 0.99 (1) | 0.85 (4) | 63.58 (16) | 5.88 |
| 10 | inverseai | 2.88 (4) | 3.78 (2) | 2.85 (1) | 3.67 (1) | 2.15 (13) | 0.81 (6) | 2.94 (14) | 0.84 (5) | 0.69 (9) | 0.70 (8) | 0.99 (1) | 0.79 (7) | 64.30 (14) | 6.88 |
| 11 | ai_se | 2.84 (7) | 3.34 (8) | 2.27 (8) | 3.25 (5) | 2.40 (9) | 0.83 (4) | 3.15 (8) | 0.84 (5) | 0.68 (10) | 0.75 (5) | 0.99 (1) | 0.84 (5) | 63.96 (15) | 7.00 |
| 12 | abid | 3.08 (1) | 4.18 (1) | 2.73 (2) | 3.52 (2) | 1.91 (17) | 0.78 (8) | 2.72 (17) | 0.82 (6) | 0.70 (8) | 0.71 (7) | 0.99 (1) | 0.78 (8) | 65.87 (12) | 7.38 |
| 13 | Baseline_BSRNN | 2.63 (15) | 2.77 (17) | 2.01 (15) | 2.79 (14) | 2.31 (10) | 0.84 (3) | 3.10 (10) | 0.87 (2) | 0.73 (5) | 0.76 (4) | 0.99 (1) | 0.85 (4) | 67.04 (8) | 7.67 |
| 14 | 4Speech | 2.63 (15) | 2.73 (18) | 1.99 (17) | 2.80 (13) | 2.29 (12) | 0.84 (3) | 3.08 (12) | 0.87 (2) | 0.73 (5) | 0.76 (4) | 0.99 (1) | 0.87 (2) | 67.90 (4) | 7.75 |
| 15 | xueke | 2.54 (18) | 2.72 (19) | 1.98 (18) | 2.79 (14) | 2.31 (10) | 0.84 (3) | 3.09 (11) | 0.87 (2) | 0.73 (5) | 0.76 (4) | 0.99 (1) | 0.88 (1) | 66.21 (11) | 8.25 |
| 16 | seeee | 2.62 (16) | 2.77 (17) | 2.01 (15) | 2.78 (15) | 2.30 (11) | 0.83 (4) | 3.09 (11) | 0.87 (2) | 0.72 (6) | 0.74 (6) | 0.99 (1) | 0.85 (4) | 66.55 (9) | 8.35 |
| 17 | Yuhan_Wei | 2.62 (16) | 2.82 (15) | 2.00 (16) | 2.74 (16) | 2.30 (11) | 0.83 (4) | 2.96 (13) | 0.87 (2) | 0.72 (6) | 0.71 (7) | 0.99 (1) | 0.84 (5) | 65.70 (13) | 8.90 |
| 18 | SPEAR | 2.74 (12) | 3.46 (5) | 2.13 (10) | 2.88 (12) | 1.93 (16) | 0.80 (7) | 2.66 (18) | 0.85 (4) | 0.65 (11) | 0.67 (10) | 0.99 (1) | 0.80 (6) | 59.32 (22) | 10.17 |
| 19 | sigma | 3.03 (2) | 3.77 (3) | 2.69 (3) | 3.34 (4) | 1.43 (20) | 0.30 (12) | 2.03 (20) | 0.75 (7) | 0.59 (14) | 0.40 (13) | 0.98 (2) | 0.78 (8) | 59.78 (21) | 10.46 |
| 20 | miku8023 | 2.79 (9) | 2.94 (14) | 1.90 (20) | 2.79 (14) | 2.09 (14) | 0.80 (7) | 2.82 (16) | 0.84 (5) | 0.68 (10) | 0.69 (9) | 0.99 (1) | 0.86 (3) | 62.78 (19) | 10.52 |
| 21 | XMU-SPEECH | 2.53 (19) | 2.73 (18) | 1.93 (19) | 2.71 (17) | 2.15 (13) | 0.82 (5) | 2.90 (15) | 0.86 (3) | 0.70 (8) | 0.71 (7) | 0.99 (1) | 0.85 (4) | 63.05 (18) | 10.56 |
| 22 | Baseline_BSRNN-Flow | 2.77 (10) | 3.09 (12) | 2.02 (14) | 2.92 (11) | 1.97 (15) | 0.80 (7) | 2.82 (16) | 0.82 (6) | 0.63 (13) | 0.70 (8) | 0.99 (1) | 0.79 (7) | 58.33 (23) | 10.92 |
| 23 | Benjamin_Glidden | 2.57 (17) | 2.69 (20) | 1.86 (21) | 2.65 (18) | 1.89 (18) | 0.77 (9) | 2.53 (19) | 0.82 (6) | 0.64 (12) | 0.67 (10) | 0.99 (1) | 0.84 (5) | 61.17 (20) | 13.08 |
| 24 | Salute | 2.35 (20) | 2.44 (21) | 1.70 (22) | 2.23 (19) | 1.50 (19) | 0.63 (10) | 1.95 (21) | 0.72 (8) | 0.49 (15) | 0.52 (11) | 0.99 (1) | 0.75 (9) | 51.23 (24) | 14.98 |
| 25 | noisy | 1.43 (21) | 1.42 (22) | 1.35 (23) | 1.75 (20) | 1.17 (21) | 0.54 (11) | 1.55 (22) | 0.65 (9) | 0.40 (16) | 0.51 (12) | 0.97 (3) | 0.85 (4) | 63.50 (17) | 15.25 |
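Each metric cell above pairs a value with the team's rank on that metric, and the final column is the overall ranking score used to order the leaderboard. The official aggregation is defined by the challenge rules; purely to illustrate rank-based scoring of this kind, the sketch below ranks teams per metric (dense ranking, so exact ties share a rank, as in the table) and averages those ranks. The simple mean used here is an assumption for illustration, so it will not reproduce the table's exact overall scores; the function names are hypothetical.

```python
# Illustrative rank-based aggregation, NOT the official URGENT scoring script.
# Assumes higher metric values are better and aggregates per-metric ranks by a
# simple mean; the official score may group/weight metrics and handle ties
# differently.
from statistics import mean


def per_metric_ranks(scores: dict[str, float]) -> dict[str, int]:
    """Rank teams on one metric (1 = best); exact ties share a rank."""
    ordered = sorted(set(scores.values()), reverse=True)  # higher is better
    value_to_rank = {v: i + 1 for i, v in enumerate(ordered)}
    return {team: value_to_rank[v] for team, v in scores.items()}


def overall_scores(metric_tables: dict[str, dict[str, float]]) -> dict[str, float]:
    """Aggregate per-metric ranks into one score per team (lower is better)."""
    teams = next(iter(metric_tables.values())).keys()
    ranks = {m: per_metric_ranks(t) for m, t in metric_tables.items()}
    return {team: mean(ranks[m][team] for m in metric_tables) for team in teams}


if __name__ == "__main__":
    demo = {
        "PESQ": {"A": 2.71, "B": 2.69, "C": 2.60},
        "ESTOI": {"A": 0.87, "B": 0.86, "C": 0.86},
    }
    print(overall_scores(demo))  # {'A': 1.0, 'B': 2.0, 'C': 2.5}
```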

During the submission stage, the real-time leaderboard used an earlier version of the evaluation_metrics/calculate_phoneme_similarity.py script to compute the LPS scores. For the final results, we fixed a bug in the LPS evaluation and updated the scores accordingly. While the corrected scores differ slightly from those shown on the previous leaderboard, the overall ranking order remains unchanged.
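For context, LPS (Levenshtein phoneme similarity) compares phoneme sequences recognized from the enhanced and reference signals. The following minimal sketch shows a Levenshtein-based similarity of that kind; it assumes the phoneme sequences are already available and normalizes the edit distance by the reference length, which may differ from the tokenization and normalization used in the challenge's evaluation_metrics/calculate_phoneme_similarity.py.

```python
# Illustrative Levenshtein-based phoneme similarity, not the challenge's
# calculate_phoneme_similarity.py. Assumes both signals were already
# transcribed into phoneme sequences by some phoneme recognizer.
def levenshtein(a: list[str], b: list[str]) -> int:
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        curr = [i]
        for j, pb in enumerate(b, start=1):
            cost = 0 if pa == pb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def phoneme_similarity(hyp: list[str], ref: list[str]) -> float:
    """1 - normalized edit distance; 1.0 means identical phoneme sequences."""
    if not ref:
        return 1.0 if not hyp else 0.0
    return 1.0 - levenshtein(hyp, ref) / len(ref)


if __name__ == "__main__":
    # One substitution out of four reference phonemes -> similarity 0.75.
    print(phoneme_similarity(["HH", "AH", "L", "OW"], ["HH", "EH", "L", "OW"]))
```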