tdgrant1 / denss
Calculate electron density from a solution scattering profile
License: GNU General Public License v3.0
Step Chi2 Rg Support Volume
----- --------- ------- --------------
1102 1.03e+04 26.55 996442
Traceback (most recent call last):
File "/home/jerome/.venv/py39/bin/denss.py", line 71, in <module>
qdata, Idata, sigqdata, qbinsc, Imean, chis, rg, supportV, rho, side = saxs.denss(
File "/home/jerome/.venv/py39/lib/python3.9/site-packages/saxstats/saxstats.py", line 1264, in denss
if j > 101 + shrinkwrap_minstep and mystd(chi[j-100:j], DENSS_GPU=DENSS_GPU) < chi_end_fraction * mymean(chi[j-100:j], DENSS_GPU=DENSS_GPU):
TypeError: unsupported operand type(s) for *: 'float' and 'cupy._core.core.ndarray'
I'll submit a PR on this
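The failing comparison mixes a plain Python float with a device-side array. A minimal sketch of the convergence check with explicit host-side conversion (using numpy as a stand-in for cupy; `converged` and its arguments are illustrative, not DENSS's internals):

```python
import numpy as np

def converged(chi, j, shrinkwrap_minstep, chi_end_fraction=0.001):
    """Convergence test over the last 100 chi^2 values (a sketch of the
    check in saxstats.denss; argument names are illustrative).

    float(...) pulls any device-side scalar back to the host, so the
    comparison is between plain Python floats -- the reported TypeError
    came from multiplying a float by a cupy ndarray."""
    if j <= 101 + shrinkwrap_minstep:
        return False
    window = chi[j - 100:j]
    return float(np.std(window)) < chi_end_fraction * float(np.mean(window))

chi = np.full(200, 1.0)
print(converged(chi, 150, shrinkwrap_minstep=0))  # prints True: constant chi -> std 0
```

With cupy arrays the same `float(...)` conversion (or `.item()`) would bring the scalar to the host before the multiplication.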
I am trying to create an ensemble of 20 structures for averaging using denss.all.py, but when I run it, I get the following issue:
C:\denss\denss (master -> origin)
λ py build\scripts-3.9\denss.all.py -f data/APS_June/cahd.out
data/APS_June/cahd_1
Traceback (most recent call last):
File "C:\denss\denss\build\scripts-3.9\denss.all.py", line 149, in <module>
fh = logging.FileHandler(fname)
File "C:\Python39\lib\logging\__init__.py", line 1146, in __init__
StreamHandler.__init__(self, self._open())
File "C:\Python39\lib\logging\__init__.py", line 1175, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding,
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\denss\\denss\\data\\APS_June\\cahd_1\\data\\APS_June\\cahd_final.log'
I have managed to run denss.py by itself just fine and load the volumes in PyMOL. Cool stuff. I'm very interested to try ensemble averaging, but it doesn't seem to work on Windows.
Best
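The duplicated path in the FileNotFoundError suggests the log filename is built by joining the per-run output directory with the full relative input path. A minimal sketch of the kind of fix (variable names here are illustrative, not DENSS's actual code):

```python
import os

# Hypothetical sketch of the fix: build the log name from the basename of
# the input file, not its full relative path, so the per-run output
# directory is not concatenated with 'data/APS_June/...' a second time.
infile = os.path.join("data", "APS_June", "cahd.out")
outdir = "cahd_1"
basename = os.path.splitext(os.path.basename(infile))[0]
logname = os.path.join(outdir, basename + "_final.log")
print(logname)  # cahd_1/cahd_final.log (backslashes on Windows)
```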
$ pwd
/some/path/foo
$ ls
ly01.out
$ mkdir bar; cd bar
$ denss.py -f ../ly01.out
I would usually expect the files in the current working directory, here foo/bar, but they are created in foo instead.
This can be worked around with "-o ./someprefix".
After running DENSS a few times now, I have come to miss a "fit to the experimental data" with all data points.
When running denss.py (e.g. "denss.py -u nm -d 105 -m slow -f SASDA68.out"), a .fit is created. In this fit file, the "experimental data" points seem to be the desmeared and extrapolated data from the .out file used as input; this data has been assigned error estimates of unknown provenance?! While I can see that this makes sense internally, for a user it would be nice to get the fit (and chi-square) to the actual experimental data as well (what elsewhere would be called a .fir file, "fit to real data").
It might be me and my usage, but oddly, when I run denss.all.py, there is no such fit file as described for the single case, only multiple "_map.fit" files with about one-tenth of the data points?
$ denss.py -f ly01.out
Traceback (most recent call last):
File "/apps/prod/denss/latest/bin/denss.py", line 48, in <module>
args = dopts.parse_arguments(parser)
File "/apps/prod/denss/latest/lib/python2.7/site-packages/saxstats/denssopts.py", line 121, in parse_arguments
q, I, sigq, Ifit, file_dmax, isout = saxs.loadProfile(args.file, units=args.units)
File "/apps/prod/denss/latest/lib/python2.7/site-packages/saxstats/saxstats.py", line 722, in loadProfile
keys = {key.lower().strip().translate(str.maketrans('','', '_ ')): key for key in list(results.keys())}
File "/apps/prod/denss/latest/lib/python2.7/site-packages/saxstats/saxstats.py", line 722, in <dictcomp>
keys = {key.lower().strip().translate(str.maketrans('','', '_ ')): key for key in list(results.keys())}
AttributeError: type object 'str' has no attribute 'maketrans'
Stack Overflow tells me that str.maketrans is only available in Python 3.
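A version-agnostic way to get the same key normalization without `str.maketrans` (a sketch of one possible alternative, not the project's actual fix):

```python
import re

def normalize_key(key):
    """Version-agnostic equivalent of the failing expression: lowercase,
    strip, then drop underscores and spaces without str.maketrans (which
    is only a str classmethod on Python 3)."""
    return re.sub(r"[_ ]", "", key.lower().strip())

print(normalize_key("  Real_Space Rg "))  # prints 'realspacerg'
```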
From experience running DENSS a few times, I take it that the assumption is that the data comes in inverse Angstroms if not specified otherwise? For example, data collected at many beamlines and submitted to SASBDB is quite commonly stored in inverse nanometers. This can lead to some unfortunate results if it goes unnoticed.
As data in inverse Angstroms rarely exceeds q-values of 1.0, a viable check might be: use user-supplied units if available; otherwise, if q-max < 1.0, assume inverse Angstroms, else inverse nanometers.
While this will not catch all cases and issues, especially with WAXS, it should cover the vast majority of cases and lead to happy users. And not to models with a Dmax of 10 A when one would expect 100 A ^^
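The suggested heuristic could be sketched roughly like this (`guess_units` is a hypothetical helper, not part of DENSS):

```python
def guess_units(q, user_units=None):
    """Hypothetical helper implementing the heuristic above (not part of
    DENSS): trust user-supplied units; otherwise guess from q-max."""
    if user_units is not None:
        return user_units
    return "a" if max(q) < 1.0 else "nm"

print(guess_units([0.01, 0.3]))        # prints 'a'  (q-max < 1.0)
print(guess_units([0.1, 3.0]))         # prints 'nm' (q-max >= 1.0)
print(guess_units([0.1, 3.0], "a"))    # prints 'a'  (user override wins)
```

As noted, WAXS data in inverse Angstroms with q-max > 1.0 would defeat the guess, which is why the user-supplied value must always take precedence.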
$ denss.py ly01.out
Traceback (most recent call last):
File "/apps/prod/denss/latest/bin/denss.py", line 29, in <module>
import saxstats.saxstats as saxs
File "/apps/prod/denss/latest/lib/python2.7/site-packages/saxstats/saxstats.py", line 35, in <module>
from builtins import object, range, map, zip, str
ImportError: No module named builtins
"builtins" is the Python 3 name; Python 2 requires the pip package "future", which may not be installed. The following seems to work, though (falling back to Python 2's native built-ins when "future" is absent):
try:
    from builtins import object, range, map, zip, str
except ImportError:
    pass  # Python 2 without 'future': the native built-ins are already in scope
Easiest way to reproduce: run denss.calcfsc.py with the same .mrc as -ref and -f; the resulting FSC will be a constant 1.0, but the resolution estimate given will be 206 A. Similar to this (near-identical inputs):
# 1/resolution, FSC; Resolution=206.2 A
0.00000e+00 1.00000e+00
1.00000e-02 9.99979e-01
2.00000e-02 9.99736e-01
[...]
4.70000e-01 9.74823e-01
4.80000e-01 9.65391e-01
In the case of (near-)identical inputs as used here, it should rather be "less than 2.0"?
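For reference, the usual convention takes the resolution as the reciprocal of the spatial frequency where the FSC first drops below a cutoff (commonly 0.5); when the curve never crosses the cutoff, only a bound can be reported. A small sketch of that logic (illustrative, not DENSS's code):

```python
import numpy as np

def fsc_resolution(freqs, fsc, cutoff=0.5):
    """Illustrative resolution estimate: reciprocal of the first spatial
    frequency where the FSC drops below the cutoff. If the curve never
    crosses, return None -- the honest report is then a bound such as
    'better than 1/f_max', not a finite number."""
    below = np.where(fsc < cutoff)[0]
    if below.size == 0:
        return None
    return 1.0 / freqs[below[0]]

freqs = np.linspace(0.0, 0.48, 49)   # 1/resolution grid as in the file above
fsc = np.ones(49)                    # identical inputs: FSC == 1 everywhere
print(fsc_resolution(freqs, fsc))    # prints None -> better than 1/0.48 ~ 2.1 A
```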
I'd love to get a real reduced chi^2 value for the fit of the model to the data. The chi^2 reported by DENSS is for the smoothed IFT data binned to the Shannon channel points (I believe), and the errors aren't right, so the value doesn't have any real meaning (it's only useful in a relative sense for convergence). How hard would it be to calculate the fit profile at the q points of the original experimental data just once, after the final model is created, and then calculate a real reduced chi^2 that could be used to describe how well the model fits the data?
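One possible shape for such a "fit to real data" chi^2, interpolating the model intensity back onto the experimental q values (an assumed approach with illustrative names, not DENSS's implementation):

```python
import numpy as np

def reduced_chi2(q_exp, I_exp, sigma_exp, q_model, I_model, n_params=1):
    """Hypothetical 'fit to real data' statistic: interpolate the model
    profile (computed on its own q grid) onto the experimental q values,
    then form the reduced chi^2 against the measured errors."""
    I_fit = np.interp(q_exp, q_model, I_model)
    resid = (I_exp - I_fit) / sigma_exp
    return np.sum(resid**2) / (len(q_exp) - n_params)

# Toy data: a Gaussian-like profile sampled finely (experiment) and
# coarsely (model); a perfect model on the same grid gives chi^2 ~ 0.
q_exp = np.linspace(0.01, 0.3, 50)
I_exp = np.exp(-100.0 * q_exp**2)
sigma = np.full(50, 0.01)
q_model = np.linspace(0.0, 0.35, 20)
I_model = np.exp(-100.0 * q_model**2)
print(reduced_chi2(q_exp, I_exp, sigma, q_model, I_model))
```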
When trying to run denss.all.py with an .out file (GNOM output), I get the following error:
denss.all.py --file MAXS_cut_0-4.out
Running denss job: 6 / 20
Traceback (most recent call last):
File "/home/sbio/norm/softwares/denss/venv/bin/denss.all.py", line 178, in <module>
denss_outputs = pool.map(mapfunc, range(superargs.nmaps))
File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
ValueError: A value in x_new is above the interpolation range.
Is there something wrong with my .out file?
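The ValueError comes from scipy's interp1d being asked for q values outside the range covered by the profile, which can happen when the experimental q grid extends past the .out file's. One defensive sketch (illustrative, not the project's fix) clamps the requested q values first:

```python
import numpy as np

def safe_interp(q_new, q, I):
    """Defensive interpolation sketch (illustrative, not DENSS's code):
    clamp requested q values to the range covered by the profile, which
    is exactly the situation scipy's interp1d refuses by default when it
    raises 'A value in x_new is above the interpolation range'."""
    q_clipped = np.clip(q_new, q.min(), q.max())
    return np.interp(q_clipped, q, I)

q = np.linspace(0.0, 0.3, 31)
I = np.exp(-q)
print(safe_interp(np.array([0.35]), q, I))  # clamped to q = 0.3
```

With scipy's interp1d the equivalent would be passing `bounds_error=False` plus a `fill_value`, but silently extrapolating can hide a units mismatch, so clamping with a warning may be the safer design.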
Sorry to bother you: after running your program, the .mrc file I get is just a cube with no shape information. I wonder if there is something wrong with the function write_mrc.
A main assumption of the regularization procedure in DENSS is that the excess electron density is positive. This is a very reasonable assumption when considering proteins and has proved to be very effective. However, for other nano-sized particles, such as surfactant micelles for example, this is not the case.
I'm wondering if you have considered what would happen if you relaxed the positive excess electron density criterion? I'm well aware that such a relaxation may just result in being unable to arrive at a consensus model, as the degrees of freedom are simply too great.
In my experience, the presence of negative and positive excess electron densities can give rise to some quite distinct features in the scattering pattern that, to some degree, could constrain the possible solution space.
Additionally, it may also be possible to give an estimate of the fractions of the positive and negative electron densities and to compare with scattering data measured on an absolute scale, which along with a known particle concentration imposes additional constraints for the regularization.
In short, I'm curious what would happen if you allowed for negative electron densities? Given additional features in the experimental data and/or additional regularization parameters (absolute scale and particle concentration), it may make it possible to apply the DENSS procedure to scattering data originating from particles other than proteins.
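As a toy illustration of the constraint under discussion (names and structure are illustrative, not DENSS internals): the positivity step simply zeroes negative density, and relaxing it would skip that step:

```python
import numpy as np

def apply_positivity(rho, enforce=True):
    """Toy version of the positivity constraint being discussed
    (illustrative, not DENSS internals): zero out negative density when
    enforced; relaxing the constraint simply skips that step."""
    rho = rho.copy()
    if enforce:
        rho[rho < 0] = 0.0
    return rho

rho = np.array([-0.2, 0.5, -0.1, 1.0])
print(apply_positivity(rho))                 # negatives zeroed
print(apply_positivity(rho, enforce=False))  # unchanged copy
```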
Connected to a Linux host via ssh, without X forwarding:
$ denss.py
Traceback (most recent call last):
File "/apps/prod/denss/latest/bin/denss.py", line 30, in <module>
import saxstats.denssopts as dopts
File "/apps/prod/denss/latest/lib/python2.7/site-packages/saxstats/denssopts.py", line 9, in <module>
import matplotlib.pyplot as plt
File "/usr/lib64/python2.7/site-packages/matplotlib/pyplot.py", line 97, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/usr/lib64/python2.7/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_gtk.py", line 13, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.7/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.7/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
It shouldn't try to open a display this early, maybe? I actually tried to get the "--help" output and couldn't. I eventually commented out all attempts to import matplotlib (once in denss.py, once in denssopts.py), and then this disappeared.
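A common headless workaround is to select a non-interactive backend before pyplot is first imported, so no X display is needed; a minimal sketch:

```python
import os

import matplotlib
matplotlib.use("Agg")  # must run before the first pyplot import
import matplotlib.pyplot as plt

# A trivial figure renders and saves without any display.
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig("test_headless.png")
saved = os.path.exists("test_headless.png")
print(saved)  # prints True
```

Equivalently, setting the environment variable MPLBACKEND=Agg before launching the script avoids touching the code at all; deferring the pyplot import until plotting is actually requested would also let "--help" work over plain ssh.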