cbica / brainmage
Brain extraction in presence of abnormalities, using single and multiple MRI modalities
License: Other
While executing:
brain_mage_run -params brainmage.cfg -test True -mode Multi-4 -dev 'cpu'
I get the following error:
Weight file used : /home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/BrainMaGe-1.0.5.dev0-py3.6.egg/BrainMaGe/weights/resunet_multi_4.pt
/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/BrainMaGe-1.0.5.dev0-py3.6.egg/EGG-INFO/scripts/brain_mage_run
Hostname :None
Start Time :Thu Sep 2 16:59:30 2021
Start Stamp:1630591170.250223
Generating Test csv
Traceback (most recent call last):
File "/home/mri/miniconda3/envs/brainmage/bin/brain_mage_run", line 4, in <module>
__import__('pkg_resources').run_script('BrainMaGe==1.0.5.dev0', 'brain_mage_run')
File "/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/pkg_resources/__init__.py", line 666, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1463, in run_script
exec(code, namespace, namespace)
File "/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/BrainMaGe-1.0.5.dev0-py3.6.egg/EGG-INFO/scripts/brain_mage_run", line 215, in <module>
test_multi_4.infer_multi_4(params_file, DEVICE, args.save_brain, weights)
File "/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/BrainMaGe-1.0.5.dev0-py3.6.egg/BrainMaGe/tester/test_multi_4.py", line 105, in infer_multi_4
checkpoint = torch.load(str(params["weights"]))
File "/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/mri/miniconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 763, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.
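The `invalid load key, 'v'` error above usually means the weight file is not an actual PyTorch checkpoint. One common cause: `.pt` files fetched from a repository that stores weights in Git LFS, without `git lfs` installed, are small text pointer files beginning with `version https://git-lfs...`, and `torch.load` then fails on the leading `v`. A minimal sketch to check for this (the helper name is illustrative; substitute the weight path from your log):

```python
import os
import tempfile

def looks_like_lfs_pointer(path):
    """Return True if the file starts like a Git LFS pointer file
    rather than a real (binary) PyTorch checkpoint."""
    prefix = b"version https://git-lfs"
    with open(path, "rb") as f:
        head = f.read(len(prefix))
    return head.startswith(prefix)

# Demo with an in-memory stand-in for a broken download:
with tempfile.NamedTemporaryFile("wb", suffix=".pt", delete=False) as f:
    f.write(b"version https://git-lfs.github.com/spec/v1\n")
    fake = f.name

print(looks_like_lfs_pointer(fake))  # True -> re-fetch weights with `git lfs pull`
os.unlink(fake)
```

If this returns True for your `resunet_multi_4.pt`, re-downloading the weights with Git LFS enabled (or from a release archive) should resolve the unpickling error.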
pytorch '1.9.0+cu102'
horovod '0.22.1'
# installed with: HOROVOD_WITH_PYTORCH=1 pip install horovod[pytorch]
When I run the following command:
python ../BrainMaGe/brain_mage_run.py -params test_params_multi_4_2020.cfg -test True -mode Multi-4 -dev gpu
I get this error.
Extension horovod.torch has not been built: /opt/conda/lib/python3.7/site-packages/horovod/torch/mpi_lib/_mpi_lib.cpython-37m-x86_64-linux-gnu.so not found
If this is not expected, reinstall Horovod with HOROVOD_WITH_PYTORCH=1 to debug the build error.
Warning! MPI libs are missing, but python applications are still avaiable.
Traceback (most recent call last):
File "../BrainMaGe/brain_mage_run.py", line 118, in <module>
version=pkg_resources.require("BrainMaGe")[0].version
File "/opt/conda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 886, in require
needed = self.resolve(parse_requirements(requirements))
File "/opt/conda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 777, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (SQLAlchemy 1.4.22 (/opt/conda/lib/python3.7/site-packages), Requirement.parse('sqlalchemy<1.4.0.dev0'), {'pybids'})
Why is it looking for a conda environment? Do I need to explicitly create a conda environment?
Thanks,
Jay
Hi,
I was running into a runtime error because I was running in CPU mode. See the traceback:
Traceback (most recent call last):
  File "/home/marlon/anaconda3/envs/brainmage/bin/brain_mage_run", line 4, in <module>
    __import__('pkg_resources').run_script('BrainMaGe==1.0.5.dev0', 'brain_mage_run')
  File "/home/marlon/.local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 667, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/home/marlon/.local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1464, in run_script
    exec(code, namespace, namespace)
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/BrainMaGe-1.0.5.dev0-py3.6.egg/EGG-INFO/scripts/brain_mage_run", line 213, in <module>
    test_ma.infer_ma(params_file, DEVICE, args.save_brain, weights)
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/BrainMaGe-1.0.5.dev0-py3.6.egg/BrainMaGe/tester/test_ma.py", line 155, in infer_ma
    checkpoint = torch.load(str(params["weights"]))
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 593, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 773, in _legacy_load
    result = unpickler.load()
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 729, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 178, in default_restore_location
    result = fn(storage, location)
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 154, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/marlon/anaconda3/envs/brainmage/lib/python3.6/site-packages/torch/serialization.py", line 138, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I was able to hotfix this issue by changing line 155, just as the error suggests:
checkpoint = torch.load(str(params["weights"]), map_location=torch.device('cpu'))
Maybe this line should also go into the if clause that checks for the device (two lines later). Just letting you know; everything works fine for me now. Great work!
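The hotfix above can be generalized so the same code path works on both CPU-only and CUDA machines. A minimal sketch of the pattern (not BrainMaGe's actual code; the helper name is illustrative):

```python
import torch

# Pick whichever device is available, then map checkpoint storages onto it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def load_checkpoint(path):
    # map_location remaps CUDA-saved tensors onto `device`, so a checkpoint
    # trained on GPU also loads cleanly on a CPU-only machine.
    return torch.load(path, map_location=device)
```

With this, the device-selection if clause no longer needs a separate `torch.load` call per branch.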
Cheers
Hi,
I would just like to ask whether there is a way to use the BraTS_pipeline application via the CLI on Linux. I would also like to know of a tool that processes only one image rather than requiring four inputs. I looked around, but I am not quite sure what the pipeline is actually doing.
For testing, I used the same image for all the modalities... It worked, but it's quite inefficient.
Cheers
All of my images are unit volume, in RAS orientation. (I've tried both RAS, per the documentation, and RAI, and see no difference.)
I'm getting reasonably good masks running
python ../BrainMaGe/brain_mage_single_run -i $input_brain_path -o 'output_mask.nii.gz' -m 'output_brain.nii.gz' -dev 0
However, when I run batch mode:
python ../BrainMaGe/brain_mage_run -params test_params_multi_4_2020.cfg -test True -mode Multi-4 -dev 0
I receive the following error:
`Weight file used : /home/BrainMaGe/weights/resunet_multi_4.pt
../BrainMaGe/brain_mage_run
Hostname :None
Start Time :Tue Sep 21 11:26:16 2021
Start Stamp:1632223576.6366262
Generating Test csv
100%|███████████████████████████████████████████| 61/61 [11:05<00:00, 10.91s/it]
Done with running the model.
You chose to save the brain. We are now saving it with the masks.
0%| | 0/61 [00:00<?, ?it/s]
Traceback (most recent call last):
File "../BrainMaGe/brain_mage_run", line 215, in <module>
test_multi_4.infer_multi_4(params_file, DEVICE, args.save_brain, weights)
File "/home/jupyter/BrainMaGe/BrainMaGe/tester/test_multi_4.py", line 157, in infer_multi_4
image_data[mask_data == 0] = 0
IndexError: boolean index did not match indexed array along dimension 0; dimension is 220 but corresponding boolean dimension is 240`
Here is line 157:
image_data[mask_data == 0] = 0
https://github.com/CBICA/BrainMaGe/blob/master/BrainMaGe/tester/test_multi_4.py#:~:text=image_data%5Bmask_data%20%3D%3D%200%5D%20%3D%200
It depends on line 126. It looks like the image dimensions are hard-coded to (240, 240, 160):
to_save = interpolate_image(output, (240, 240, 160))
https://github.com/CBICA/BrainMaGe/blob/master/BrainMaGe/tester/test_multi_4.py#:~:text=to_save%20%3D%20interpolate_image(output%2C%20(240%2C%20240%2C%20160))
Am I required to pad images to dimension (240, 240, 160) to use multi_4 mode?
Any guidance would be appreciated.
Thanks,
Jay
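One workaround for the shape mismatch described above is to pad or center-crop each input volume to (240, 240, 160) before running multi-4 mode. A minimal numpy sketch (the function name and approach are illustrative, not part of BrainMaGe; header/affine handling for NIfTI files is omitted):

```python
import numpy as np

def pad_or_crop(volume, target=(240, 240, 160)):
    """Zero-pad or center-crop each axis of `volume` to the target shape."""
    out = volume
    for axis, size in enumerate(target):
        current = out.shape[axis]
        if current < size:  # pad symmetrically with zeros
            before = (size - current) // 2
            after = size - current - before
            pad = [(0, 0)] * out.ndim
            pad[axis] = (before, after)
            out = np.pad(out, pad, mode="constant")
        elif current > size:  # center-crop
            start = (current - size) // 2
            sl = [slice(None)] * out.ndim
            sl[axis] = slice(start, start + size)
            out = out[tuple(sl)]
    return out

vol = np.ones((220, 240, 155), dtype=np.float32)  # e.g. the 220-voxel axis from the error
print(pad_or_crop(vol).shape)  # (240, 240, 160)
```

Note that padding changes the voxel grid, so any mask produced this way would need the inverse crop applied before being mapped back onto the original image.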
What changes would be required if the project were to run on Windows 10?
From Mark:
Add a check for the required arguments and the "help" message to brain_mage_run, rather than loading the entire python module before parsing the arguments.
The initial runtime of "brain_mage_run" with no arguments (just to produce the 'usage' message) is ~35 seconds! Even when the files are cached (i.e., running "brain_mage_run" multiple times in quick succession), it takes ~3 seconds to produce the usage message.
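The suggestion above amounts to parsing arguments before any heavy imports, so that `--help` and missing-argument errors cost milliseconds instead of seconds. A hypothetical script layout (argument names mirror the commands quoted in these issues; this is not the current brain_mage_run code):

```python
import argparse
import sys

def parse_args(argv):
    # Cheap: only argparse is imported so far, so this runs instantly.
    parser = argparse.ArgumentParser(prog="brain_mage_run")
    parser.add_argument("-params", required=True, help="path to the .cfg parameter file")
    parser.add_argument("-test", default="True", help="run in test/inference mode")
    parser.add_argument("-mode", required=True, help="e.g. MA or Multi-4")
    parser.add_argument("-dev", default="0", help="device: GPU id or 'cpu'")
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(sys.argv[1:] if argv is None else argv)
    # Only now pay for the slow imports (torch, BrainMaGe, ...):
    # from BrainMaGe.tester import test_multi_4  # deferred, illustrative
    return args

# Demo with an explicit argv:
args = main(["-params", "test_params_multi_4_2020.cfg", "-mode", "Multi-4"])
print(args.mode)  # Multi-4
```

Deferring the heavy imports into `main` (after `parse_args` returns) is what removes the ~35-second wait for a bad invocation.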
According to dependabot:
Link to patch: Lightning-AI/pytorch-lightning@8b7a12c
PyTorch Lightning 1.6.1 is already out [ref] and should contain this fix, but any change in dependencies would require testing.