Description
I'm encountering an issue when trying to use PyTorch with CUDA in conjunction with nfstream's multiprocessing functionality. Despite setting the multiprocessing start method to 'spawn', I'm still receiving a CUDA re-initialization error.
When running the minimal example below, I receive the following error:
"Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method"
I've already set the start method to 'spawn' as suggested, but the error persists. I believe this might be due to how nfstream handles its internal multiprocessing.
Questions:
Is there a way to configure nfstream to use the 'spawn' method for multiprocessing instead of 'fork'?
If not, are there any recommended workarounds for using CUDA-enabled PyTorch models with nfstream?
Is there a way to use threading instead of multiprocessing in nfstream for this use case?
Any guidance or suggestions would be greatly appreciated. Thank you for your time and assistance.
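For context on questions 1 and 2, the direction I've been exploring is to keep the parent process entirely CUDA-free and defer model loading into the plugin itself, so that the first CUDA initialization happens inside nfstream's forked worker rather than before the fork (which is what triggers the re-initialization error). This is only a minimal sketch, assuming nfstream's documented NFPlugin hooks (on_init / on_update / on_expire), a hypothetical model_path parameter, and placeholder per-flow features:

    import torch
    from nfstream import NFPlugin

    class LazyCudaStatsCollector(NFPlugin):
        """Loads the model lazily so CUDA is first initialized in the
        worker process, never in the (forking) parent."""

        def __init__(self, model_path, device='cuda'):
            self.model_path = model_path  # only a string crosses the fork
            self.device = device
            self._model = None            # loaded on first use, per worker

        @property
        def model(self):
            if self._model is None:
                # First CUDA touch happens here, inside the worker process.
                self._model = torch.load(self.model_path,
                                         map_location=self.device)
                self._model.eval()
            return self._model

        def on_init(self, packet, flow):
            flow.udps.packet_sizes = [packet.ip_size]

        def on_update(self, packet, flow):
            flow.udps.packet_sizes.append(packet.ip_size)

        def on_expire(self, flow):
            # Run inference once per flow, at expiration.
            # Assumes a scalar-output model for illustration.
            x = torch.tensor(flow.udps.packet_sizes, dtype=torch.float32,
                             device=self.device)
            with torch.no_grad():
                flow.udps.prediction = float(self.model(x.unsqueeze(0)))

If this approach is viable, I would probably also pass n_meters=1 to NFStreamer so that only a single worker holds a CUDA context.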
Environment details:
nfstream version: 6.5.4
PyTorch version: 2.4.0
Python version: 3.12
Operating System: Ubuntu 24.04
Here's a minimal example of my setup:
import torch
from nfstream import NFStreamer
from custom_plugin import PacketStatsCollector

# pytorch_model is a CUDA-enabled model loaded at module level in my
# real code; the loading is omitted here for brevity.

def network_monitor():
    streamer = NFStreamer(source='eth0',
                          decode_tunnels=True,
                          promiscuous_mode=True,
                          udps=PacketStatsCollector(model=pytorch_model,
                                                    device='cuda'))
    for flow in streamer:
        pass  # Process flows

if __name__ == '__main__':
    torch.multiprocessing.set_start_method('spawn', force=True)
    network_monitor()
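Regarding question 3, the other pattern I'm considering is keeping CUDA out of nfstream's workers entirely and running inference in a thread of the parent process, fed from the flow iteration loop. Again just a sketch under my own assumptions; extract_features() here is a hypothetical stand-in for whatever per-flow feature vector the model actually expects:

    import queue
    import threading
    import torch
    from nfstream import NFStreamer

    def extract_features(flow):
        # Placeholder: replace with real feature extraction.
        return [flow.bidirectional_packets, flow.bidirectional_bytes]

    def inference_worker(model, device, work_queue):
        # All CUDA calls stay in this parent-process thread, so
        # nfstream's forked meter processes never touch CUDA.
        while True:
            features = work_queue.get()
            if features is None:  # sentinel: shut down
                break
            x = torch.tensor(features, dtype=torch.float32, device=device)
            with torch.no_grad():
                print(model(x.unsqueeze(0)).cpu())

    def network_monitor(model, device='cuda'):
        work_queue = queue.Queue()
        worker = threading.Thread(target=inference_worker,
                                  args=(model, device, work_queue),
                                  daemon=True)
        worker.start()
        streamer = NFStreamer(source='eth0',
                              decode_tunnels=True,
                              promiscuous_mode=True)
        for flow in streamer:
            work_queue.put(extract_features(flow))
        work_queue.put(None)  # tell the worker to exit
        worker.join()

The trade-off is that inference then only sees expired/exported flows rather than live per-packet events, but it would sidestep the fork/CUDA interaction completely. Is either of these directions sensible, or is there a supported way to do this?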