fix LOCAL_RANK to be RANK in if_main_process #2506
Conversation
@Adel-Moumen this change is basically reverting #2101 and will most likely break DDP multi-node training if no other fixes were made in the meantime. The assumption here is that if we now have operations that should run only on the master node, we should then have two separate functions, one to check the global master process and one to check the local master process, and refactor the DDP code accordingly.
Hi @lucadellalib,
I really doubt this. Your fix using LOCAL_RANK was breaking multi-node training, since you ended up with N experiments running at the same time, where N is the number of nodes. I don't think you should use LOCAL_RANK for initialising DDP because it creates the aforementioned issue... Now everything works smoothly.
Why would the data be missing? If you are doing the data prep on the master node, which has always been the case, then the other nodes can also get access to the actual data, right? If your issue is due to wav paths that may be different, then you just have to use the
Ping @TParcollet on this.
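For context, a minimal sketch of the distinction being debated here (hypothetical helper names, not SpeechBrain's actual API): under torchrun, RANK is the global process index across all nodes, while LOCAL_RANK restarts at 0 on every node.

```python
import os

def is_global_main_process() -> bool:
    """True on exactly one process in the whole job (global rank 0)."""
    return int(os.environ.get("RANK", "0")) == 0

def is_local_main_process() -> bool:
    """True on the first process of *each* node (local rank 0)."""
    return int(os.environ.get("LOCAL_RANK", "0")) == 0
```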
@Adel-Moumen I see what you mean; this works on Compute Canada because the filesystem is shared. In general that might not be the case (in a recent use case I had multiple nodes with no shared storage, so I had to upload the data on each node, generate the manifest files on each node, let the local master process save checkpoints on each node, etc. Without the local master process doing the necessary I/O operations, things do not work properly in this setup). We are still correctly initializing DDP using the global rank. What error do you get on Compute Canada with the current implementation of if_main_process?
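A short sketch of the no-shared-storage workflow described above (illustrative only, not SpeechBrain code): each node's local main process prepares data on that node's local disk, and the other ranks on the node wait at a barrier.

```python
import os
import torch.distributed as dist

def prepare_node_local_data(prepare_fn):
    # One process per node downloads data / generates manifests locally.
    if int(os.environ.get("LOCAL_RANK", "0")) == 0:
        prepare_fn()
    # Remaining ranks on the node wait until the data is in place.
    if dist.is_available() and dist.is_initialized():
        dist.barrier()
```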
OK, I see. I didn't have this use case in mind.
Yep, but here, for instance: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L161, we are using the definition of
It was mostly what I described previously. Having everything duplicated was very weird to me, since I wasn't expecting SpeechBrain to do that. Maybe we should try to isolate which operations we want to perform only on the global rank (e.g. creating the SB experiment, initializing WANDB, etc.) and which can run on each node (e.g. data prep), and put them under different functions. Wdyt? (basically what you suggested)
I agree, probably logging should be done only on the
I tried to run multi-node DDP training using what you did (i.e. LOCAL_RANK instead of RANK for if_main_process), and I get the following error, which seems to be linked to checkpointing. I don't know whether all of our recipes are affected or only mine (which is just a wav2vec CTC LibriSpeech training), but it's a bit concerning. When you run the code with my definition of if_main_process (i.e. RANK instead of LOCAL_RANK) there are no issues and I can continue the training (i.e. no checkpoint issues, etc.).
Looks like a race condition due to the shared storage... It's probably better, then, to revert the change, since Compute Canada is the main platform... We should keep in mind that this solution is not general and can break things on other cluster setups.
I think in many libraries there is a separate check. As for the shared filesystem vs. node-local storage question, recipes could state whether they are written with the assumption of a globally shared filesystem, or with the assumption of node-locally shared temporary storage. Or perhaps we could even help switching between the two by making a third checker function.
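One possible shape for such a third checker (hypothetical name and signature, not part of SpeechBrain): pick between the global main and the per-node local main depending on whether the recipe assumes shared storage.

```python
import os

def if_storage_main_process(shared_filesystem: bool = True) -> bool:
    """Main process for filesystem work: one per job if storage is shared,
    one per node if each node has its own local storage."""
    if shared_filesystem:
        return int(os.environ.get("RANK", "0")) == 0        # one per job
    return int(os.environ.get("LOCAL_RANK", "0")) == 0       # one per node
```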
Okay, so what I propose to do first is to merge this PR with
I agree with @Gastron. Here is my analysis of the situation: we agreed a while back that the use of functions like if_local_main or if_global_main should be avoided in user-visible code. Instead, we should have run_on_main-like functions. In that case, we need to create a run_on_local_main and a run_on_global_main, OR we could add a flag to run_on_main to toggle to the local main instead of the global one (which makes more sense imho to avoid too many changes in the lib). @Adel-Moumen we should revert and do this in a single PR.
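A rough sketch of the flag-based option (the signature is illustrative only, not the actual SpeechBrain run_on_main API): execute a function on the main process only, with a toggle between the global main and the per-node local main.

```python
import os
import torch.distributed as dist

def run_on_main(func, args=None, kwargs=None, local=False):
    args, kwargs = args or [], kwargs or {}
    rank_var = "LOCAL_RANK" if local else "RANK"
    # Only the selected main process runs the function.
    if int(os.environ.get(rank_var, "0")) == 0:
        func(*args, **kwargs)
    # Everyone else waits until the main process is done.
    if dist.is_available() and dist.is_initialized():
        dist.barrier()
```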
Hi, I made the requested feature using Titouan's proposal (i.e., using a flag). Let me know if you think this PoC is fine. I initially tried to implement multiple functions (one for local_rank and one for global_rank); however, I believe this would require a non-negligible commit, as there are many things to modify and adjust in our codebase (and it could introduce backward incompatibilities).
I had a more in-depth discussion with @TParcollet. We think that in the meantime it is better to just revert the
What does this PR do?
This PR fixes an issue that I encountered while using SpeechBrain on Compute Canada. Basically, I found that the `LOCAL_RANK` variable was 0 on two different processes, hence leading to two main processes. Why? Because our definition of the main process is `LOCAL_RANK == 0`. I went a bit further into the PyTorch and PyTorch Lightning documentation and found that we should not use `LOCAL_RANK` to determine the main process. Indeed, as explained here: pytorch/pytorch#12042 (comment), `LOCAL_RANK` is actually the ID within a worker; multiple workers have a `LOCAL_RANK` of 0.

As mentioned here: pytorch/pytorch#12042 (comment), we should use `RANK == 0` to find the master process. This is also what PyTorch Lightning does here: pytorch/pytorch#12042 (comment) with `global_rank`.

With my fix, everything now works as expected: there is only one main process and everything is synchronised.
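To make the change concrete, here is a simplified sketch of the corrected check (the real `if_main_process` lives in SpeechBrain's distributed utilities and may differ in detail); the comments show the rank values a two-node, one-GPU-per-node job would see.

```python
import os

# With 2 nodes x 1 GPU each, torchrun assigns:
#   node 0: RANK=0, LOCAL_RANK=0     node 1: RANK=1, LOCAL_RANK=0
# so a LOCAL_RANK-based check returns True on both nodes.

def if_main_process() -> bool:
    # Old (buggy on multi-node): one "main" process per node.
    # return int(os.environ.get("LOCAL_RANK", "0")) == 0
    # New: exactly one main process across the whole job.
    return int(os.environ.get("RANK", "0")) == 0
```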
Logs to help better understand the issue
I launched one sbatch job on Compute Canada with 2 nodes and 1 GPU per node, and printed some information about each node:
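A minimal per-process diagnostic of the kind described (illustrative only, not the original script) could look like this:

```python
import os
import socket

# Each DDP process reports which node it runs on and which ranks it was
# assigned by torchrun / the launcher.
print(
    f"host={socket.gethostname()} "
    f"RANK={os.environ.get('RANK')} "
    f"LOCAL_RANK={os.environ.get('LOCAL_RANK')} "
    f"WORLD_SIZE={os.environ.get('WORLD_SIZE')}"
)
```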
However, if you do print `LOCAL_RANK`, you'll see that both processes have the same `LOCAL_RANK` of 0, which causes the issue of having two different SpeechBrain experiments.

When you switch to the `RANK == 0` definition of the main process, everything works as expected: only one process is the master, and you get this for one epoch:

Before submitting
PR review
Reviewer checklist