cantinilab / mowgli
Single-cell multi-omics integration using Optimal Transport
Home Page: https://mowgli.rtfd.io
License: GNU General Public License v3.0
Important: the `cost_path` argument might not be used anywhere; this should be fixed.
Dear authors,
Thank you for writing the package.
Prior to running mowgli, I have ensured that CUDA+GPU is detectable via torch.
However, I got the following error message: ValueError: Expected a cuda device, but got: cpu
Here is the full error message:
ValueError Traceback (most recent call last)
Cell In[83], line 1
----> 1 model.train(mdata2[1:1000,:])
File /exports/archive/hg-funcgenom-research/mdmanurung/conda/envs/totalvi/lib/python3.9/site-packages/mowgli/models.py:255, in MowgliModel.train(self, mdata, max_iter_inner, max_iter, device, dtype, lr, optim_name, tol_inner, tol_outer, normalize_rows)
251 try:
252 for _ in range(max_iter):
253
254 # Perform the `W` optimization step.
--> 255 self.optimize(
256 loss_fn=self.loss_fn_w,
257 max_iter=max_iter_inner,
258 tol=tol_inner,
259 history=self.losses_h,
260 pbar=pbar,
261 device=device,
262 )
264 # Update the shared factor `W`.
265 htgw = 0
File /exports/archive/hg-funcgenom-research/mdmanurung/conda/envs/totalvi/lib/python3.9/site-packages/mowgli/models.py:398, in MowgliModel.optimize(self, loss_fn, max_iter, history, tol, pbar, device)
394 if i % 10 == 0:
395
396 # Add a value to the loss history.
397 history.append(loss_fn().cpu().detach())
--> 398 gpu_mem_alloc = torch.cuda.memory_allocated(device=device)
400 # Populate the progress bar.
401 pbar.set_postfix(
402 {
403 "loss": total_loss,
(...)
407 }
408 )
File /exports/archive/hg-funcgenom-research/mdmanurung/conda/envs/totalvi/lib/python3.9/site-packages/torch/cuda/memory.py:326, in memory_allocated(device)
311 def memory_allocated(device: Union[Device, int] = None) -> int:
312 r"""Returns the current GPU memory occupied by tensors in bytes for a given
313 device.
314
(...)
324 details about GPU memory management.
325 """
--> 326 return memory_stats(device=device).get("allocated_bytes.all.current", 0)
File /exports/archive/hg-funcgenom-research/mdmanurung/conda/envs/totalvi/lib/python3.9/site-packages/torch/cuda/memory.py:205, in memory_stats(device)
202 else:
203 result.append((prefix, obj))
--> 205 stats = memory_stats_as_nested_dict(device=device)
206 _recurse_add_to_result("", stats)
207 result.sort()
File /exports/archive/hg-funcgenom-research/mdmanurung/conda/envs/totalvi/lib/python3.9/site-packages/torch/cuda/memory.py:216, in memory_stats_as_nested_dict(device)
214 if not is_initialized():
215 return {}
--> 216 device = _get_device_index(device, optional=True)
217 return torch._C._cuda_memoryStats(device)
File /exports/archive/hg-funcgenom-research/mdmanurung/conda/envs/totalvi/lib/python3.9/site-packages/torch/cuda/_utils.py:30, in _get_device_index(device, optional, allow_cpu)
28 raise ValueError('Expected a cuda or cpu device, but got: {}'.format(device))
29 elif device.type != 'cuda':
---> 30 raise ValueError('Expected a cuda device, but got: {}'.format(device))
31 if not torch.jit.is_scripting():
32 if isinstance(device, torch.cuda.device):
Do you have any suggestion to solve this? Thanks in advance.
Regards,
Mikhael
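A possible workaround, based on the traceback above: `MowgliModel.train` accepts a `device` argument (visible in its signature at models.py:255), and the failure occurs when `torch.cuda.memory_allocated` is handed a CPU device. Selecting a CUDA device explicitly when one is genuinely usable may avoid the error. This is a sketch, not a confirmed fix; the `model` and `mdata2` names follow the snippet in the question and are assumptions here.

```python
import torch

# Pick a CUDA device only when one is actually usable. mowgli's internal
# torch.cuda.memory_allocated(device=device) call raises
# ValueError("Expected a cuda device, but got: cpu") for a CPU device.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# Hypothetical call, mirroring the snippet from the traceback:
# model.train(mdata2[1:1000, :], device=device)
```

If `torch.cuda.is_available()` is True inside the notebook but the error persists, it may be worth checking whether the default `device` argument of `train` is `"cpu"` and therefore must always be overridden explicitly.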
Add some more verbose logs
Backwards-compatible (keep the `train` function as an alias)
It's not very informative and might be confusing
In the sources of the datasets given in the paper, I could not find the ground-truth annotations. Could you please tell me how you obtained them?
MuData might be enough
Does Muon have one? If not, maybe submit one