dww100 / sct
SCT: Suite of tools for atomistic modelling of SAS data
License: Apache License 2.0
1. Permissions/cmake problem:
sudo cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local/bin/ . && make install
This does not work and gives the error:
usage: sudo -h | -k | -v
usage: sudo -v [-Akns] [-g group] [-h host] [-p prompt] [-u user]
usage: sudo -v [-Akns] [-g group] [-h host] [-p prompt] [-u user] [command]
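One likely cause (an assumption, since the full command history isn't shown): running `cmake` under `sudo` leaves root-owned files in the build tree, which is consistent with the `Permission denied` build failures reported below. Note also that `CMAKE_INSTALL_PREFIX` conventionally points at the prefix itself (`/usr/local`), not its `bin/` subdirectory, since CMake appends `bin/` for executables. A conventional CMake sequence elevates only the install step:

```shell
# Configure and build as a normal user; elevate only the install.
cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local .
make
sudo make install
```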
2. The correct Python interpreter is not set for some scripts:
sct_reanalyse.py
sas_curve_analysis.py
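A common fix (assuming the scripts hard-code an interpreter path) is an `env`-based shebang, which picks up whichever `python` is first on the user's PATH:

```python
#!/usr/bin/env python
# "env" resolves the interpreter from PATH instead of a fixed location,
# so the script runs under whichever Python the user has configured.
import sys

print(sys.version_info[0])
```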
Documentation needs to have an example of what a sluv2 YAML file looks like with a full list of the residues that SCT can process.
A message saying the polynomial fit failed appears when the q range for the fit is not in the data file. This should probably be checked during parameter-file validation as well as at the data-file read-in step.
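A validation step along these lines could run both when the parameter file is parsed and after the data are loaded (function and variable names here are illustrative, not SCT's actual API):

```python
def check_q_range(q_values, fit_q_min, fit_q_max):
    """Raise a clear error if the requested fit range lies outside the data."""
    data_min, data_max = min(q_values), max(q_values)
    if fit_q_min < data_min or fit_q_max > data_max:
        raise ValueError(
            "Fit range [{0}, {1}] outside data q range [{2}, {3}]".format(
                fit_q_min, fit_q_max, data_min, data_max))

q_values = [0.01 * n for n in range(1, 26)]  # 0.01 .. 0.25
check_q_range(q_values, 0.02, 0.20)  # in range: passes silently
```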
A script that compares a directory of pre-calculated theoretical curves with one or more experimental curves would be really handy.
The scaling factor is present in the rfac output; it needs to be given to the user at the end of the sct_pdb_analysis run too.
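For reference, a plain least-squares scale factor between an experimental and a theoretical curve can be computed as below; this is a generic sketch, and SCT's rfac code may weight or normalise the points differently:

```python
def scale_factor(i_exp, i_theo):
    # Closed-form c minimising sum((i_exp - c * i_theo)**2).
    numerator = sum(e * t for e, t in zip(i_exp, i_theo))
    denominator = sum(t * t for t in i_theo)
    return numerator / denominator

print(scale_factor([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]))  # → 2.0
```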
If the record type is not ATOM or HETATM, an empty set is returned. This can pose a problem if PDB files end up with misspellings or quirky capitalization in the record type field.
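A tolerant check might normalise the record name before comparing; `is_coordinate_record` is an illustrative helper, not an existing SCT function:

```python
def is_coordinate_record(line):
    # PDB record names occupy columns 1-6; strip padding and upper-case
    # so quirky capitalisations like "Atom" or "HETATM " still match.
    return line[:6].strip().upper() in ("ATOM", "HETATM")

print(is_coordinate_record("Atom      1  N   MET A   1"))  # True
print(is_coordinate_record("REMARK   2 RESOLUTION"))       # False
```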
optimize.minimize_scalar now uses 'tol' and not 'xtol' to set the tolerance for the optimization.
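A minimal check of the current SciPy interface (the quadratic here is just a test function):

```python
from scipy import optimize

# Recent SciPy takes the overall tolerance as `tol`; the old `xtol`
# keyword now belongs inside the method-specific `options` dict.
result = optimize.minimize_scalar(lambda x: (x - 2.0) ** 2, tol=1e-8)
print(round(result.x, 6))  # → 2.0
```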
When inputting X-ray-only experimental curves, the headings appear misaligned, or the columns for some curve comparisons are missing.
No heading is provided for the Neutron (NA-filled) section either.
Model numbers are also not passed to the output; the first column is blank.
When a bad PDB is read in, it should be detected and skipped so processing moves on to the next file (ideally with a helpful error message).
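A sketch of such a guard, with `read_pdb` and `analyse` standing in for SCT's actual routines (assumed names):

```python
def process_all(paths, read_pdb, analyse):
    for path in sorted(paths):
        try:
            model = read_pdb(path)
        except (IOError, ValueError) as err:
            # Report the bad file and carry on with the rest of the set.
            print("Skipping {0}: {1}".format(path, err))
            continue
        analyse(model)
```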
Currently the PDB files are globbed in and processed as they come. This gives no ordering to the output, and related files may be read in different orders in different runs. If the list of PDBs to be processed were sorted in a sensible way, everyone would be a winner.
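In a sketch, the change is one `sorted` call around the glob (the pattern is illustrative):

```python
import glob

# Sorting the globbed paths makes the processing and output order
# deterministic across runs and filesystems.
pdb_files = sorted(glob.glob("*.pdb"))
```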
My environment is:
I cannot install SCT with either of the two methods given in the installation instructions. Using the system-wide option produces the following error:
Assembler messages:
Fatal error: can't create build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/sct/sjp_utilmodule.o: Permission denied
In file included from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0,
from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarrayobject.h:17,
from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/arrayobject.h:4,
from build/src.linux-x86_64-2.7/fortranobject.h:13,
from build/src.linux-x86_64-2.7/sct/sjp_utilmodule.c:18:
/usr/lib/python2.7/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it by " \
^
build/src.linux-x86_64-2.7/sct/sjp_utilmodule.c:176:12: warning: ‘f2py_size’ defined but not used [-Wunused-function]
static int f2py_size(PyArrayObject* var, ...)
^
Assembler messages:
Fatal error: can't create build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/sct/sjp_utilmodule.o: Permission denied
error: Command "x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Ibuild/src.linux-x86_64-2.7 -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c build/src.linux-x86_64-2.7/sct/sjp_utilmodule.c -o build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/sct/sjp_utilmodule.o" failed with exit status 1
make[2]: *** [python/build/timestamp] Error 1
make[1]: *** [python/CMakeFiles/target.dir/all] Error 2
make: *** [all] Error 2
I am unfamiliar with cmake and am not sure what to make of this. Any help would be appreciated!
Currently all of the Python modelling tools (other than sluv2) assume that the sequence we need is the one in the PDB.
The lab has previously used the full sequence and added volume to make the volume 'correct' in the sphere models.
Add option to also specify a FASTA or YAML frequency file in addition to the PDB to:
get_boxside.py
optimize_parameter.py
sct_pdb_analysis.py
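An illustrative command-line shape for the option (the flag name and behaviour are assumptions, not the final design):

```python
import argparse

parser = argparse.ArgumentParser(description="Sketch of the proposed option")
parser.add_argument("pdb", help="input PDB file")
parser.add_argument("--sequence", metavar="FILE",
                    help="FASTA or YAML residue frequency file that "
                         "overrides the sequence taken from the PDB")

args = parser.parse_args(["model.pdb", "--sequence", "full_seq.fasta"])
print(args.sequence)  # → full_seq.fasta
```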
This has not been run in its current form and should be checked to ensure that it works correctly.
get_boxside.py and optimize_model_parameters.py have changed their input arguments.
Using Canopy you sometimes get:
"ImportError: Matplotlib backend_wx and backend_wxagg require wxPython >=2.8"
This looks like a workaround:
https://support.enthought.com/entries/22601196
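An alternative to installing wxPython is to select a different backend before `pyplot` is first imported; `Agg` renders straight to image files and needs no GUI toolkit:

```python
import matplotlib

# Must be called before the first `import matplotlib.pyplot`.
matplotlib.use("Agg")

import matplotlib.pyplot as plt  # now uses the Agg backend
```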
Needed for documentation.