
Conversation

@pplantinga (Collaborator)

Fixes #2715

Adapted models were not saving their parameters properly because PyTorch detaches all parameters when building the state dict, setting requires_grad to False. This PR fixes saving by preserving the parameters in the state dict for iteration and then detaching them manually.
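For context, here is a minimal sketch of the pattern described above; the helper name is hypothetical and this is not the actual SpeechBrain implementation:

```python
import torch

def trainable_state_dict(model: torch.nn.Module) -> dict:
    # Hypothetical helper illustrating the fix. Tensors returned by
    # model.state_dict() are always detached (requires_grad == False),
    # so filtering on requires_grad there would keep nothing. Iterating
    # over named_parameters() preserves requires_grad, and each kept
    # tensor is detached manually afterwards.
    return {
        name: param.detach()
        for name, param in model.named_parameters()
        if param.requires_grad
    }
```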

@pplantinga requested a review from TParcollet on October 10, 2024
@pplantinga self-assigned this on Oct 10, 2024
@pplantinga added this to the v1.0.2 milestone on Oct 10, 2024
@TParcollet (Collaborator) left a comment

LGTM. Being tested by the person who found the bug. If all good, I'll merge.

@Antoine-Caubriere (Collaborator)

Tested and everything seems fine, thanks for the fix!

@TParcollet merged commit 9912b25 into speechbrain:develop on Oct 11, 2024
naspert added a commit to naspert/speechbrain that referenced this pull request on Oct 29, 2024
* data prep scripts update

* iterate over utterances

* without parallel map

* parallel map -> so fast omfg

* gigaspeech data prep done

* speechcolab extra dep if one must download gigaspeech

* create ASR CTC folder

* base yaml + update data prep to better reflect potential different naming for csvs

* update recipe

* update recipe to be compliant with gigaspeech csv

* add transformers dep

* convert opus to wav

* recipe --debug mode works.

* typo GRABAGE_UTTERANCE_TAGS -> GARBAGE_UTTERANCE_TAGS

* tmp DL file

* update DL FILE

* add DL file in ASR/CTC

* update extra_requirements.txt

* add support of savedir within Pretrained subclasses

* add wbs requirements

* webdataset

* remove print

* tmp files webdataset

* verbosity + metadata.json

* letzo now label_encoder can actually train + the recipe seems to work.

* remove wbs

* DL info

* HF DL support

* remove webdataset as it sucks :p

* name

* ngram commands

* whisper baseline

* fix HF

* pre-commit + sentencepiece char

* remove csv

* Add quirks.py, move global PyTorch config and GPU workarounds there

* Add support for SB_DISABLE_QUIRKS environment variable

* Fetch rework: make savedir optional

* bunch of updates to make it run

* no download script

* fix precommit

* fix precommit

* readmes

* readmes

* readmes

* readmes

* doc update

* CI god not happy, make CI god happy

* why you here little encoder

* adding a transducer streaming recipe, because why not

* add test for transducer

* works better when me not stupid

* fix yaml

* update req

* add warning for cache dir

* add warning for cache dir

* enable multiprocessing

* Minor cleanups to fetching

* Change default behavior of inference to not create savedir if not specified

* allow data prep without ddp

* fix tests

* smoll readme update

* fix review comments

* fixed concat_start_index check (speechbrain#2717)

* Ensure adapted models save their parameters (speechbrain#2716)

Co-authored-by: Parcollet Titouan <parcollet.titouan@gmail.com>

* wtf

* update doc

* more documentation on storage

* missing arg

* a bit of logs

* new schedulers

* new schedulers

* Fixes speechbrain#2656: Remove EOS from SoundChoice

* fix my stupidity

* Update non-HF code path for new preprocessing code in GigaSpeech

* Fix CSV path for non-HF Gigaspeech

* Fix formatting

* Kmeans fix (speechbrain#2642)

* fix kmeans bug

* fix final batch

* fix chunksize

* fix

* fix

* fix precommit

* fix docstring inconsistency

* fix precommit

* fix doc string

---------

Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>

* add call on start of fit_batch fn

* Update core.py

Fix old commit

* Update core.py

* Fix preprocess_text example

* Fix guess_source docstring with up-to-date info

* Also remove default savedir from Pretrained

* Fix function name for log_applied_quirks

* wip audiomnist+gt

* Revert "fix normalization for LFB"

This reverts commit 3fd0330.

* audiomnist classification setup

* fix config

* add missing file

* update dataset load/training

* remove unnecessary params

* remove sort

* remove unnecessary code

* fix paths

* fix loss computation

* add missing flatten

* print summary

* Explain quirks in docs/experiment.md

* ok stupid linter check that hates intentional leading spaces in markdown

* add citing in README

* add code to pad all wavs to the same length

* fix pad call

* fix error computation

* fix error computation

* Make `collect_in` optional for `Pretrainer`, disable it by default

* Change more defaults to `savedir=None` and `fetch_strategy=SYMLINK`

Since the SYMLINK strategy falls back to NO_LINK whenever `savedir is None`, it makes sense to switch more things to default to `savedir=None`.

If the user explicitly sets `savedir`, past behavior is preserved (defaulting to symlinks).
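
As a rough illustration of the fallback this commit describes (the names below are illustrative, not SpeechBrain's actual fetching API):

```python
from enum import Enum, auto

class FetchStrategy(Enum):
    SYMLINK = auto()
    NO_LINK = auto()

def resolve_strategy(savedir, requested=FetchStrategy.SYMLINK):
    # With savedir=None there is no directory to place a symlink in,
    # so SYMLINK quietly degrades to NO_LINK; an explicitly provided
    # savedir keeps the past symlinking behavior.
    if savedir is None and requested is FetchStrategy.SYMLINK:
        return FetchStrategy.NO_LINK
    return requested
```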

* move flatten in audionet

* Fix GS transducer test prediction decoding?

* fix data prep logic and paths

* Actually fix GS transducer test prediction decoding

* Remove punctuation filtering that is handled elsewhere

* HuggingFace

* fix skip data prep logic

* add original audionet feature extraction

* fix pooling for audionet feature extraction

* fix audionet shape + remove input norm

* try data augmentation

* add missing refs

* rework AudioNet to have optional pooling
- use official AudioMNIST train/test/valid splits

* fix typo in url

* update audionet hparams

* update audionet custom hparams

* update audionet custom hparams

* Updated warning for load_collected

* Add results and notices for results for GigaSpeech transducer & wavlm

* english hard

* update audionet custom hparams

* fix doc + pre-commit clean

* fix code examples

* fix consistency tests

* fix pre commit

* remove config

* fix docstring for LFB

* fix docstring for GammatoneConv1D

---------

Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: asu <sdelang@sdelang.fr>
Co-authored-by: TParcollet <parcollet.titouan@gmail.com>
Co-authored-by: Peter Plantinga <plantinga.peter@proton.me>
Co-authored-by: gianfranco <62777451+gfdb@users.noreply.github.com>
Co-authored-by: Peter Plantinga <plantinga.peter@protonmail.com>
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: flexthink <flexthink@users.noreply.github.com>
Co-authored-by: Pooneh Mousavi <moosavi.pooneh@gmail.com>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
@pplantinga deleted the fix-adapter-checkpoints branch on January 16, 2025
Development

Successfully merging this pull request may close this issue: AdaptedModel does not save weights (#2715)