Comments (11)
The subjects are matched based on their number, for example 001, 002, and so on. I guess it is not a very important issue; it would just be easier in the scenario where the data is structured in such a manner, which is how it typically can be in NiftyNet or in MONAI:
https://niftynet.readthedocs.io/en/dev/filename_matching.html#automatic-filename-matching
I think it is totally ok to keep it as it already is in GaNDLF also, because it is working once we have the right directory format.
from gandlf.
I wrote some code similar to this for the automatic multi-subject feature extraction pipeline on the IPP. But as I learned, any heuristic-based method is going to fail at some point.
The difficulty is this:
The default current scheme assumes that the directory name can be interpreted as a patient identifier, and identifies channels based on strings present in the filenames.
A scheme that handles the data Carl shows requires us to interpret directories as channel names, and identify subjects based on strings present in the filenames.
If we cannot actually make guarantees about how subject IDs and channels are named (or their order in the filenames themselves), we can't automatically detect each case, so for a general-use constructCSV script we should just pick one convention and stick with it. Even if we provided an option for this, something like "topLevelDirsAreChannels" (I really cannot think of a clear, succinct name for this behavior), users would have to know what that means and interpret it, which would just cause confusion.
If we can safely assume that subjectIDs only differ by number, then we actually can autodetect this case (and provide a switch just in case users actually don't get the output they expect.) Is that a reasonable assumption?
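A minimal sketch of that autodetection, assuming subject IDs are purely numeric and everything else in the filename is constant (the helper name and logic here are illustrative, not GaNDLF code):

```python
import re

def ids_differ_only_by_number(filenames):
    # Mask every digit run; if all filenames collapse to the same
    # template, the subjects differ only by number.
    templates = {re.sub(r"\d+", "#", name) for name in filenames}
    return len(templates) == 1

# Single-channel case: autodetectable
print(ids_differ_only_by_number(["001_ct.nii.gz", "002_ct.nii.gz"]))      # True

# Multi-channel case: filenames differ by more than the number
print(ids_differ_only_by_number(["001_flair.nii.gz", "001_seg.nii.gz"]))  # False
```

Note one pitfall that illustrates the "heuristics eventually fail" point: modality names that themselves differ only by a digit (t1 vs. t2) collapse to the same template, so this check would wrongly treat them as one channel.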
"It would be great if it were possible to use constructCSV regardless of the directory format."

Unfortunately, gandlf_constructCSV is designed to work only with a specific folder structure. If you can think of a way it could be made more generic (while ensuring that the current mechanism works as expected), we'd be happy to consider updating the implementation.
The way they did it in NiftyNet was to search through a given folder for all files named, for example, xxx_ct.nii.gz and xxx_gt.nii.gz, and save the names to a CSV.
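That NiftyNet-style behaviour can be sketched roughly like this (the suffix list and function names are assumptions for illustration; see the filename-matching docs linked above for the real mechanism):

```python
import csv
from collections import defaultdict

def match_by_suffix(filenames, suffixes=("_ct.nii.gz", "_gt.nii.gz")):
    """Group a flat folder's files by the prefix left after removing a
    known channel suffix, e.g. '001_ct.nii.gz' -> subject '001'."""
    subjects = defaultdict(dict)
    for name in filenames:
        for suffix in suffixes:
            if name.endswith(suffix):
                subjects[name[: -len(suffix)]][suffix] = name
    return subjects

def write_csv(subjects, out_path, suffixes=("_ct.nii.gz", "_gt.nii.gz")):
    # One row per subject; missing channels are left as empty cells.
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["SubjectID"] + list(suffixes))
        for subject_id in sorted(subjects):
            writer.writerow(
                [subject_id]
                + [subjects[subject_id].get(s, "") for s in suffixes]
            )
```

The key assumption is that the channel suffixes are known up front; the prefix before the suffix then becomes the subject ID, with no per-subject directories required.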
I started implementing something but ran into a problem right from the get-go: how should the program know how to match subjects in a single folder?
./experiment_0/data_dir/
│
├── Patient_001                         # used to construct the "SubjectID" header of the CSV
│   ├── Patient_001_brain_t1.nii.gz
│   ├── Patient_001_brain_t1ce.nii.gz
│   ├── Patient_001_brain_t2.nii.gz
│   ├── Patient_001_brain_flair.nii.gz
│   └── Patient_001_brain_seg.nii.gz
│
└── Patient_002                         # used to construct the "SubjectID" header of the CSV
    └── ...
In the above example, all files contain Patient_${ID} as an identifier. If this is the case, then it would be much cleaner practice to structure the input folder in a per-patient manner, which allows ground truth and other metadata to be kept on a per-patient basis. We are not dictating how someone should structure their data; it's just that we need to try and hit the lowest common denominator, and supporting all possible data structure formats is impossible.
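For reference, the current per-patient convention can be sketched as follows (the channel keywords and output columns are assumptions here; the real gandlf_constructCSV takes these as user-supplied strings):

```python
import csv
import os

# Assumed channel keywords; in GaNDLF these are supplied by the user.
CHANNELS = ["t1", "t1ce", "t2", "flair"]

def construct_csv(data_dir, out_csv):
    """Walk a per-subject layout (SubjectID = directory name, channels
    matched by substring within each directory), one CSV row per subject."""
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(
            ["SubjectID"] + [f"Channel_{c}" for c in CHANNELS] + ["Label"]
        )
        for subject in sorted(os.listdir(data_dir)):
            subject_dir = os.path.join(data_dir, subject)
            if not os.path.isdir(subject_dir):
                continue
            files = os.listdir(subject_dir)
            row = [subject]
            for channel in CHANNELS:
                # '_t1.' cannot match '_t1ce.', so the keywords stay unambiguous
                hits = [f for f in files if f"_{channel}." in f]
                row.append(os.path.join(subject_dir, hits[0]) if hits else "")
            seg = [f for f in files if "_seg." in f]
            row.append(os.path.join(subject_dir, seg[0]) if seg else "")
            writer.writerow(row)
```

Note how the directory name alone supplies the SubjectID, so the filenames inside each directory are free to follow any naming scheme as long as the channel keywords appear.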
Additionally, the above structure is somewhat related to the brain imaging data structure (BIDS) (a formalized mechanism to define data formats), but not entirely, since BIDS has definitions mostly for DICOM. Anyway, let me know what you think.
What do you guys think about this, @AlexanderGetka-cbica, @Geeks-Sid?
While this would be of great utility, construct_csv is starter code for folks to get going. There could be many more formats for folder structuring, and while it would be great to support all of them, it is currently not in our plans. But as always, pull requests are appreciated.
Cool, thanks for the input! What about you, @AlexanderGetka-cbica?
"If we can safely assume that subjectIDs only differ by number, then we actually can autodetect this case (and provide a switch just in case users actually don't get the output they expect.)"
I think this is a very well-put argument. I'll ask @carlpe for more clarification.
For the data sets consisting of only a single channel (_ct.nii.gz), the file names will differ only by number.
But in the case where we have multiple channels, such as several MR weightings (_T1.nii.gz, _T2.nii.gz), the filenames will differ by more than the number alone.
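To make the distinction concrete, here is a sketch that recovers the numeric subject ID only because the channel suffixes are known in advance (the modality list is an assumption for illustration):

```python
import re

# Assumed modality tokens; without such a list, multi-channel filenames
# cannot be grouped by number alone, as noted above.
MODALITIES = ("T1", "T2", "ct")

def subject_id(filename):
    """Strip the NIfTI extension and a trailing modality token,
    leaving the numeric subject ID (illustrative only)."""
    stem = re.sub(r"\.nii(\.gz)?$", "", filename)
    for modality in MODALITIES:
        if stem.endswith("_" + modality):
            return stem[: -(len(modality) + 1)]
    return stem

print(subject_id("004_T1.nii.gz"))  # 004
print(subject_id("004_T2.nii.gz"))  # 004
```

In other words, the multi-channel case is only solvable if the user tells the script what the channel strings are, which is exactly what the current GaNDLF interface already requires.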
I suppose it might be better to keep it as you already have it now, as there is good reasoning for the formatting.
I found it doesn't take long to convert into your format anyway. I actually found a Windows tool that will do this for me batchwise for the whole dataset, a free program named "Advanced Renamer"; the directory formatting took only a couple of minutes.
Closing this until we have a different solution.