Comments (7)
Thank you for pointing this out! It's surprising to me; I profiled memory usage extensively when introducing this functionality about a year ago.
Right now, I can only think of one immaturity in AnnData's design that could cause this behavior: the .uns annotations are deep-copied, while everything else uses numpy views in the background.
The other reason for views, besides saving memory (when they work properly), is that one should be able to write to the data matrix of the underlying object, especially in backed mode. That is,
adata[batch].X = function(adata[batch].X)
should modify the original data (see here).
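To illustrate the write-through semantics meant here, a minimal analogy with plain numpy views (not AnnData itself, just the mechanism its views rely on): writing through a sliced view modifies the underlying array.
import numpy as np
# hedged analogy with plain numpy: a sliced view shares memory with the
# original array, so writing through the view changes the original data
X = np.zeros((4, 3))
batch = slice(0, 2)      # stands in for a `batch` selection
view = X[batch]          # numpy view, no copy is made
view[:] = view + 1       # write through the view...
print(X[0, 0], X[3, 0])  # 1.0 0.0 -> ...and the original array changed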
Anyways, this is a severe issue and I'll definitely fix it soon if it persists.
In the example I gave, I don't think there was anything in the .uns field. Just to be sure, I've run it again with an AnnData object that definitely doesn't have anything in .uns and got similar results.
Question about the syntax for views. What should happen in the following code?
view = adata[adata.obs["total_counts"] > 500, :]
view.X[view.X != 0] = 0
print(view.X.sum())
print(adata[adata.obs["total_counts"] > 500, :].X.sum())
My assumption is the change on the view should be replicated in the base object. That is, both print statements should return 0.
Observed the same when filtering an AnnData object. For demonstration purposes, the same filter is applied over and over again here:
import numpy as np
import pandas as pd
import scanpy.api as sc
from pickle import dumps

mat = np.ones((10000, 5000))
obs = pd.DataFrame().assign(n_counts=range(10000))
adata = sc.AnnData(mat, obs)
for i in range(10):
    %time adata = adata[adata.obs['n_counts'] > 200, :]
    print("Object size: {}M".format(len(dumps(adata)) / 1e6))
CPU times: user 35.7 ms, sys: 61.5 ms, total: 97.2 ms
Wall time: 95.5 ms
Object size: 396.298382M
CPU times: user 132 ms, sys: 147 ms, total: 278 ms
Wall time: 277 ms
Object size: 592.445935M
CPU times: user 165 ms, sys: 203 ms, total: 369 ms
Wall time: 368 ms
Object size: 788.593488M
CPU times: user 201 ms, sys: 261 ms, total: 462 ms
Wall time: 462 ms
Object size: 984.741041M
CPU times: user 265 ms, sys: 276 ms, total: 542 ms
Wall time: 542 ms
Object size: 1180.888594M
CPU times: user 313 ms, sys: 342 ms, total: 655 ms
Wall time: 656 ms
Object size: 1377.036147M
CPU times: user 356 ms, sys: 378 ms, total: 734 ms
Wall time: 733 ms
Object size: 1573.1837M
CPU times: user 380 ms, sys: 463 ms, total: 843 ms
Wall time: 841 ms
Object size: 1769.331253M
CPU times: user 432 ms, sys: 485 ms, total: 917 ms
Wall time: 917 ms
Object size: 1965.478806M
CPU times: user 496 ms, sys: 683 ms, total: 1.18 s
Wall time: 1.18 s
Object size: 2161.626359M
Ok, something really shady is going on, which I was not aware of. I'll need to take a deeper look at views of AnnData. Or, @Koncopd, do you have some bandwidth to shed some light on this?
I've been doing a little bit of digging on this, and have some suspicions about what's causing it.
First, a view of a view stores its parent view in _adata_ref. This means those intermediates won't get garbage-collected, leading to the linear increase in memory usage in the example above. Demo:
import numpy as np
import scanpy as sc
a = sc.AnnData(np.ones((2, 2)))
v1 = a[0:2, 0:2]
v2 = v1[0:2, 0:2]
v1.isview and v2.isview # True
v2._adata_ref is v1 # True
Second, I think each view can end up with a copy of the expression matrix. In particular, line 752 will make a copy when the array is accessed with a boolean array.
Here's an example of the memory increasing:
import numpy as np
import scanpy as sc
a = sc.AnnData(np.ones((5000, 5000)))
# When view is taken with a slice, no additional memory is used
sc.logging.print_memory_usage() # Memory usage: current 0.28 GB, difference +0.28 GB
v1 = a[0:5000, 0:5000]
sc.logging.print_memory_usage() # Memory usage: current 0.28 GB, difference +0.00 GB
v2 = v1[0:5000, 0:5000]
sc.logging.print_memory_usage() # Memory usage: current 0.28 GB, difference +0.00 GB
# Taken with a boolean array, memory use increases
v1 = a[np.ones(5000, dtype=bool), 0:5000]
sc.logging.print_memory_usage() # Memory usage: current 0.38 GB, difference +0.09 GB
v2 = v1[np.ones(5000, dtype=bool), 0:5000]
sc.logging.print_memory_usage() # Memory usage: current 0.47 GB, difference +0.09 GB
My idea for how this could be solved is to not actually subset X until it's accessed, and just store the subsetting index until then. This index would be updated on every subsequent subset, and should always be an index into the "actual" AnnData.
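A rough sketch of that idea (all names here are hypothetical and only the row dimension is shown; this is not the actual AnnData code): a view keeps a reference to the real parent object plus an index into it, composes indices when subset again, and only materializes X when it is accessed.
import numpy as np

class Parent:
    """Stand-in for the 'actual' (non-view) object holding the full matrix."""
    def __init__(self, X):
        self.X = X

class LazyView:
    def __init__(self, parent, oidx):
        self._parent = parent          # always the real Parent, never another view
        self._oidx = np.asarray(oidx)  # row index into the parent's X

    @property
    def X(self):
        # X is subset only here, when it is actually accessed
        return self._parent.X[self._oidx]

    def __getitem__(self, idx):
        # subsetting a view composes indices; no intermediate copy of X is kept
        return LazyView(self._parent, self._oidx[idx])

p = Parent(np.ones((1000, 50)))
v1 = LazyView(p, np.arange(1000))
v2 = v1[np.arange(1000) > 200]  # still just an index, no copy of X
print(v2.X.shape)               # (799, 50) -- materialized only now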
I don't know whether we discussed this on Slack already or not, Isaac, but your idea is perfect. I think we also discussed on Slack that we don't want _adata_ref to ever be a view, right? That way we avoid this recursion (https://scanpy.slack.com/archives/CHB1M6X5H/p1557405440002600?thread_ts=1557396576.001600&cid=CHB1M6X5H). Then the memory increase should be gone.
Great. I'm pretty sure what needs to be done is to write out how to resolve all the different indexing types (i.e. a slice of a slice should be a slice, an int array of a slice should be an int array, etc.).
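Something along these lines might work for the composition (the helper name _resolve_idx is hypothetical, and only the row dimension is shown): given the index that produced a view and a new index applied to that view, return a single index into the original object, keeping a slice of a slice as a slice.
import numpy as np

def _resolve_idx(old, new, length):
    """Compose `new` (an index into the view) with `old` (an index into the original)."""
    if isinstance(old, slice) and isinstance(new, slice):
        # slice of a slice stays a slice
        start, stop, step = old.indices(length)
        n = len(range(start, stop, step))
        n_start, n_stop, n_step = new.indices(n)
        return slice(start + n_start * step, start + n_stop * step, step * n_step)
    if isinstance(old, slice):
        old = np.arange(*old.indices(length))
    else:
        old = np.asarray(old)
        if old.dtype == bool:
            old = np.flatnonzero(old)  # boolean mask -> integer positions
    return old[new]  # slice/int array/bool mask of an int array -> int array

print(_resolve_idx(slice(2, 8), slice(1, 4), 10))          # slice(3, 6, 1)
print(_resolve_idx(slice(2, 8), np.array([0, 2, 4]), 10))  # [2 4 6]
print(_resolve_idx(np.array([0, 2, 4, 6]),
                   np.array([True, False, True, False]), 10))  # [0 4]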