
Conversation

@Adel-Moumen (Collaborator) commented Mar 3, 2024

What does this PR do?

This PR fixes several issues with Whisper. One of them was that fine-tuning a Whisper model with SpeechBrain could lead to very poor results. The main cause was the input tokens fed to the decoder: depending on the model used, the wrong tokens could be passed as input. For instance, a .en Whisper model does not expect the language token in its decoder input, so passing it disturbed the generation process. Furthermore, in the LibriSpeech recipe, most of the text is uppercase while the model was trained on lowercase, so the decoder reached a very high WER at the end of the first epoch.

I also substantially improved our Whisper model. You can now perform language identification, speech translation, and speech recognition very easily: you just need to specify the task to the Whisper model, and optionally the language (otherwise it is detected automatically). I also added KV caching to the decoding process, which makes our Whisper model much faster, and added support for flash attention by setting the output_attentions flag to None.
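
For illustration, here is a minimal sketch of what selecting the task and language could look like with the SpeechBrain Whisper lobe. The language/task argument names and values are assumptions taken from this description (not verified against the merged API), and forward_encoder is assumed to accept raw waveforms as in earlier versions of the lobe:

import torch
from speechbrain.lobes.models.huggingface_transformers.whisper import Whisper

# Hedged sketch: the argument names below are assumptions based on the PR description.
whisper = Whisper(
    source="openai/whisper-small",
    save_path="pretrained_models/whisper-small",
    language="it",           # assumed spelling; omit it to fall back to automatic detection
    task="transcribe",       # or "translate" for speech translation
    output_attentions=None,  # per the description, None is assumed to enable flash attention
)

# Encode one second of dummy 16 kHz audio; decoding is then driven by a searcher.
audio = torch.randn(1, 16000)
encoder_states = whisper.forward_encoder(audio)
print(encoder_states.shape)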

In terms of simplification, there is no longer any need to pass key model information to the decoding function (greedy or beam search). It is retrieved automatically, so you no longer need to pass the bos/eos tokens yourself.
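
As a small sketch of the simplified searcher setup (the class name S2SWhisperGreedySearcher and the return structure are assumptions in the spirit of the recipe yamls, not a verified signature):

import torch
from speechbrain.decoders.seq2seq import S2SWhisperGreedySearcher

# Assumed class/arguments: the point is that bos/eos/language/task token ids
# are now read from the model itself instead of being passed in manually.
greedy_searcher = S2SWhisperGreedySearcher(
    model=whisper,  # the Whisper lobe from the previous snippet
    min_decode_ratio=0.0,
    max_decode_ratio=1.0,
)
hyps, _, _, _ = greedy_searcher(encoder_states, torch.ones(1))  # return structure assumed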

I also added support for prefix/prompting. A prefix lets you resume the transcription from a given point within the 30-second window, while a prompt lets you feed the decoder past transcriptions to perform long-form ASR (or to fine-tune with special instructions).
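
To make the prefix/prompt distinction concrete, here is a purely illustrative sketch of how the decoder input is laid out in Whisper's special-token scheme; the helper names (to_id, encode) are hypothetical and not the PR's API:

# Illustrative only: hypothetical tokenizer helpers, standard Whisper special tokens.
def build_decoder_input(tokenizer, prompt_text=None, prefix_text=None):
    tokens = []
    if prompt_text is not None:
        # Prompt: previous-context tokens placed before <|startoftranscript|>,
        # conditioning the decoder on earlier transcriptions (long-form ASR).
        tokens += [tokenizer.to_id("<|startofprev|>")]
        tokens += tokenizer.encode(prompt_text)
    # Standard start block: language and task tokens.
    tokens += [
        tokenizer.to_id("<|startoftranscript|>"),
        tokenizer.to_id("<|it|>"),
        tokenizer.to_id("<|transcribe|>"),
        tokenizer.to_id("<|notimestamps|>"),
    ]
    if prefix_text is not None:
        # Prefix: tokens placed after the task block, so decoding resumes from
        # this partial transcription of the current 30-second window.
        tokens += tokenizer.encode(prefix_text)
    return tokens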

In zero-shot mode there are still some small differences with openai/whisper in terms of WER, but generally speaking our beam search is much more powerful and gives us a better baseline than what is reported in the original paper.

I made some modifications to the main decoding function to support a temperature-based greedy searcher, and made some general improvements to the searchers.

I also added long-form ASR to the WhisperASR interface. This is a WIP interface, and I can remove it if you prefer, but basically you can pass a very long file (more than 10 minutes, say) and get back the transcription together with some chunk information. Note: the implementation deviates slightly from the original Whisper long-form ASR, as I only prompt the model with the past 30 seconds rather than the full history of past tokens, because I found that the model hallucinated a lot otherwise.
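
As a rough, hypothetical sketch of the long-form strategy described above (the function names are placeholders; only the sliding 30-second window and last-chunk-only prompting reflect the described behaviour):

CHUNK_SECONDS = 30
SAMPLE_RATE = 16000

def long_form_transcribe(audio, transcribe_chunk):
    """`audio` is a 1-D waveform tensor; `transcribe_chunk(chunk, prompt)` is a
    stand-in for the model call and returns the text of one 30-second chunk."""
    chunk_len = CHUNK_SECONDS * SAMPLE_RATE
    previous_text = None
    pieces = []
    for start in range(0, audio.shape[-1], chunk_len):
        chunk = audio[start:start + chunk_len]
        text = transcribe_chunk(chunk, prompt=previous_text)
        pieces.append(text)
        previous_text = text  # prompt with the last chunk only, to limit hallucination
    return " ".join(pieces)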

I reported the results in the READMEs and got a very strong baseline. I am currently uploading the models to our Dropbox and will most likely upload them to HF as well.

I ran the tests and everything works.

Additionally, the model can perform VAD as well. :)

Related issues: #2462

Inference Examples

Long-form ASR

from speechbrain.inference.ASR import WhisperASR

long_audio = "11 Minute Conversation in Slow Italian  Super Easy Italian 44.mp3"

asr_model = WhisperASR.from_hparams(
    source="speechbrain/asr-whisper-medium-commonvoice-it", 
    savedir="tmp",
    run_opts={"device":"cuda"}
) 

_, probs = asr_model.detect_language_file(long_audio)
print(f"Detected language: {max(probs[0], key=probs[0].get)}") 

transcription, _ = asr_model.transcribe_file(long_audio)
print(f"Transcription: {transcription}")

Output:

Detected language: it
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23/23 [00:28<00:00,  1.23s/it]
Transcription: Finalmente un po' di tempo per noi. Finalmente? Ma vogliamo coinvolgere anche i ragazzi? Va bene. Allora facciamo una conversazione in Slow Italia. Ma prima, caffè. Adesso possiamo iniziare. No, no, no, no, no, no, no, no, no. Allora, com'è andata a Napoli? Bene. Cosa hai fatto? Allora, ho mangiato tante pizze. Quante? Penso almeno quattro, forse cinque in dieci giorni. Complimenti! Non posso resistere. e ho visto tanti amici. Come sta tua mamma e tua sorella? Stanno bene, stanno bene. Adesso vivono tutte e due a Napoli. Prima mia sorella viveva in Calabria, una regione al sud della Campania. Ha avuto un po' di febbre, ma ora sta bene. E tu cosa hai fatto recentemente? Allora, ieri ho portato Brody dal veterinario. Ah, e perché? Perché ha avuto un'infezione all'orecchio. È stata un'esperienza... È stata un'esperienza. Posso immaginare conoscendo Brody. Perché lei è molto ansiosa, ha paura del veterinario e quindi piange, corre dappertutto. tutto, tutti gli altri veterinari ridono perché è pazzo. È un piccolo diavolo della Tasmania. Sì. Ah, e poi quando il veterinario si avvicina, lei si immobilizza per la paura, quindi è stata un'esperienza... Avete mai chiesti perché i bambini apprendono così velocemente la lingua? Sicuramente perché il loro cervello è un po' come una spugna, ma anche perché gli adulti parlano ai bambini in maniera lenta, tutte le parole, a volte anche esagerandone i suoni, come facciamo noi in queste puntate molto speciali e lente. Ma sapete anche cos'altro è utile per imparare l'italiano? Tutti i materiali che potete avere se Fate parte della comunità Easy Italian. In questo modo potrete scaricare esercizi, trascrizione, video con e senza sottotitoli per mettere alla prova il vostro ascolto e anche audio lento e veloce. come far parte della comunità Easy Italian cliccate il link in descrizione o qui. La tua famiglia come sta? Bene. Mio papà è stato in Irlanda. Mia sorella e la famiglia si sono divertiti molto a Natale. Inghilterra quest'anno, però ho visto i video delle mie nipotine e si sono divertite. Ayla, la più grande, ora ha i roller blade. Il vocabolo molto italiano, i roller blade. Se vuoi puoi dire probabilmente pattini in linea, penso. Ma usiamo roller blade. I pattini. I pattini, comunque i pattini, sì. Il caff
è è quasi finito. Che dici se ci mettiamo più comodi sul divano? Sì. Ok. Più comodi. Come va il lavoro? Il lavoro va bene. Abbiamo ricominciato a fare video per i ragazzi e con i ragazzi. Quest'anno abbiamo fatto con i nostri studenti la torta caprese. Molto buona, ma molto pericolosa. Perché per provare la torta l'abbiamo fatta tante volte e abbiamo troppa torta caprese a casa. Noi non possiamo avere i dolci in casa. Se abbiamo un dolce non riusciamo a smettere di mangiarlo. Quindi no dolci. Prima regola di casa. E a te come va il lavoro? Bene, sì. Ho alcuni nuovi colleghi, abbiamo nuovi professori. Bene, mi sto divertendo. Però basta lavoro. Avevamo detto un po' di tempo per noi. Che facciamo questo weekend? No. Che facciamo questo fine settimana? Farà freddo, quindi potrei stare sul divano sotto la coperta con una tisana. È un libro o una serie Netflix. Interessante. Possiamo pensarci. Possiamo anche vedere qualcuno, una cena con gli amici. Bene. Forse. Sì. Possiamo organizzare una cena a casa, così siamo al caldo, comodi, ma socializziamo. Fai la torta caprese? No. Basta. Basta torta caprese. Penso che non farò dolci, però potrei fare la pasta se non fa troppo freddo. La pasta a mano? A mano. Fatta a mano? Sì. Approvo. Bene. Beh, c'è un po' di sole. Perché non andiamo in balcone? Continuiamo il tour. No, in balcone. Che bel sole. Ma non ti sembra esagerato? C'è il sole, ma è gennaio, fa freddo. Tu che vieni da un paese più freddo, non preferisci questa temperatura? No, preferisco il caldo. L'Italia, perché fa freddo? in Italia. Dovrebbe far caldo in Italia. E vabbè, fa anche freddo. Abbiamo anche le montagne, la neve, la pioggia, il vento. Anche a Napoli fa freddo, ma non così. Ho sbagliato tutto. Andiamo in Sicilia. Andiamo. Beh, in effetti fa un po' freddo. Rientriamo? Sì, andiamo. È stato un piacere. Anche per me. Grazie a tutti.

Short-form ASR

from speechbrain.inference.ASR import WhisperASR

short_audio = "speechbrain/asr-whisper-medium-commonvoice-it/example-it.wav"

asr_model = WhisperASR.from_hparams(
    source="speechbrain/asr-whisper-medium-commonvoice-it", 
    savedir="tmp",
    run_opts={"device":"cuda"}
) 

_, probs = asr_model.detect_language_file(short_audio)
print(f"Detected language: {max(probs[0], key=probs[0].get)}") 

transcription, _ = asr_model.transcribe_file(short_audio)
print(f"Transcription: {transcription}")

Output:

Detected language: it
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  1.72it/s]
Transcription: Buongiorno a tutti e benvenuti a bordo!

Results

Note: evaluation and training are performed using FP16. I used a batch size of 3 and 5 beams during testing.
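
For reference, a run along these lines could be launched as follows; the script name and hparam overrides are assumptions (only hparams/train_hf_whisper.yaml is confirmed by the recipe tests below), and --precision is assumed to be the run option controlling FP16:

python train_with_whisper.py hparams/train_hf_whisper.yaml \
    --data_folder=/path/to/CommonVoice/it \
    --precision=fp16 \
    --batch_size=3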

CommonVoice

| Language | Release | Model | commit hash | hyperparams file | LM | Val. CER | Val. WER | Test CER | Test WER | HuggingFace link | Model link | GPUs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| French | 2024-03-28 | large-v3 | e4e2e13 | train_hf_whisper.yaml | No | 2.31% | 7.38% | 3.11% | 9.09% | x | x | 2xV100 32GB |
| Italian | 2024-03-28 | large-v3 | e4e2e13 | train_hf_whisper.yaml | No | 1.27% | 4.85% | 1.62% | 5.47% | x | x | 2xV100 32GB |
| French | 2024-03-28 | medium | e4e2e13 | train_hf_whisper.yaml | No | 2.92% | 8.90% | 4.02% | 11.07% | x | x | 2xV100 32GB |
| Italian | 2024-03-28 | medium | e4e2e13 | train_hf_whisper.yaml | No | 2.05% | 7.17% | 2.31% | 7.79% | x | x | 2xV100 32GB |
| French | 2024-03-28 | small | e4e2e13 | train_hf_whisper.yaml | No | 4.34% | 12.57% | 5.89% | 15.46% | x | x | 2xV100 32GB |
| Italian | 2024-03-28 | small | e4e2e13 | train_hf_whisper.yaml | No | 3.20% | 11.40% | 3.71% | 12.25% | x | x | 2xV100 32GB |

As a point of comparison, zero-shot Whisper is outperformed by every single fine-tuned model except the large Italian one, where, for an unknown reason, fine-tuning improved the CER but not the WER. Generally speaking, I saw a clear improvement, as depicted in the attached figure (IMG_3610).

LibriSpeech

| Release | Model | commit hash | hyperparams file | LM | Dev Clean WER | Test Clean WER | Test Other WER | HuggingFace link | Model link | GPUs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2024-03-28 | large-v3 | e4e2e13 | train_hf_whisper.yaml | No | 2.00% | 1.96% | 4.30% | Not Avail. | Not Avail. | 2xV100S 32GB |
| 2024-03-28 | medium.en | e4e2e13 | train_hf_whisper.yaml | No | 2.35% | 2.40% | 5.59% | Not Avail. | Not Avail. | 2xV100S 32GB |

All the fine-tuned models outperform their respective zero-shot baselines, except large-v3 on the test-other set.

TODO :

  • FT whisper on CommonVoice with large-v3 and medium (FR and IT)
  • FT whisper on LibriSpeech
  • Upload the models fr/it and librispeech
Before submitting
  • Did you read the contributor guideline?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Does your code adhere to project-specific code style and conventions?

PR review

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Confirm that the changes adhere to compatibility requirements (e.g., Python version, platform)
  • Review the self-review checklist to ensure the code is ready for review

@Adel-Moumen Adel-Moumen self-assigned this Mar 3, 2024
@Adel-Moumen Adel-Moumen added the enhancement New feature or request label Mar 3, 2024
@Adel-Moumen Adel-Moumen changed the title Whisper is all you need. That's why we need to improve it! Whisper improvements: flash attention, KV caching, lang_id, translation, training... Apr 16, 2024
@Adel-Moumen (Collaborator, Author) commented:

Hi @pplantinga and @asumagic,

Thanks a lot for your comments. I think I went through all your remarks. Please take a look especially at the ASR inference part, since I made quite a lot of changes to accommodate streaming with Whisper.

@Adel-Moumen (Collaborator, Author) commented:

Recipe tests:
Command:

CV

python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Hparam_file"], filters=[["recipes/CommonVoice/ASR/transformer/hparams/train_hf_whisper.yaml"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'

Output:

(1/1) Running test for CommonVoice_row_21...
        ... 45.17s
TEST PASSED

LS

python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Hparam_file"], filters=[["recipes/LibriSpeech/ASR/transformer/hparams/train_hf_whisper.yaml"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'

Output:

(1/1) Running test for LibriSpeech_row_24...
        ... 31.58s
TEST PASSED

@Adel-Moumen (Collaborator, Author) commented:

python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Task"], filters=[["ASR-Transformers"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'
(1/10) Running test for LibriSpeech_row_16...
        ... 54.88s
(2/10) Running test for LibriSpeech_row_17...
        ... 143.70s
(3/10) Running test for LibriSpeech_row_18...
        ... 205.96s
(4/10) Running test for LibriSpeech_row_19...
        ... 207.96s
(5/10) Running test for LibriSpeech_row_20...
        ... 74.03s
(6/10) Running test for LibriSpeech_row_21...
        ... 47.04s
(7/10) Running test for LibriSpeech_row_22...
        ... 72.98s
(8/10) Running test for LibriSpeech_row_23...
        ... 51.81s
(9/10) Running test for LibriSpeech_row_24...
        ... 13.03s
(10/10) Running test for LibriSpeech_row_25...
        ... 92.09s
TEST PASSED

@Adel-Moumen (Collaborator, Author) commented:

 python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Task"], filters=[["ASR-seq2seq"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'
(1/6) Running test for CommonVoice_row_11...
        ... 22.41s
(2/6) Running test for CommonVoice_row_12...
        ... 19.97s
(3/6) Running test for CommonVoice_row_13...
        ... 17.66s
(4/6) Running test for CommonVoice_row_14...
        ... 20.23s
(5/6) Running test for CommonVoice_row_15...
        ... 16.44s
(6/6) Running test for CommonVoice_row_16...
        ... 17.85s
TEST PASSED

@Adel-Moumen Adel-Moumen merged commit e670108 into speechbrain:develop Apr 18, 2024
fpaissan added a commit to fpaissan/speechbrain that referenced this pull request May 2, 2024
mravanelli added a commit that referenced this pull request Jul 3, 2024