
gnina's Introduction


gnina (pronounced NEE-na) is a molecular docking program with integrated support for scoring and optimizing ligands using convolutional neural networks. It is a fork of smina, which is a fork of AutoDock Vina.

Help

Please subscribe to our Slack team. An example Colab notebook showing how to use gnina is available here. We also hosted a workshop on using gnina (video, slides).

Citation

If you find gnina useful, please cite our paper(s):

GNINA 1.0: Molecular docking with deep learning (Primary application citation)
A McNutt, P Francoeur, R Aggarwal, T Masuda, R Meli, M Ragoza, J Sunseri, DR Koes. J. Cheminformatics, 2021
link PubMed ChemRxiv

Protein–Ligand Scoring with Convolutional Neural Networks (Primary methods citation)
M Ragoza, J Hochuli, E Idrobo, J Sunseri, DR Koes. J. Chem. Inf. Model, 2017
link PubMed arXiv

Ligand pose optimization with atomic grid-based convolutional neural networks
M Ragoza, L Turner, DR Koes. Machine Learning for Molecules and Materials NIPS 2017 Workshop, 2017
arXiv

Visualizing convolutional neural network protein-ligand scoring
J Hochuli, A Helbling, T Skaist, M Ragoza, DR Koes. Journal of Molecular Graphics and Modelling, 2018
link PubMed arXiv

Convolutional neural network scoring and minimization in the D3R 2017 community challenge
J Sunseri, JE King, PG Francoeur, DR Koes. Journal of computer-aided molecular design, 2018
link PubMed

Three-Dimensional Convolutional Neural Networks and a Cross-Docked Data Set for Structure-Based Drug Design
PG Francoeur, T Masuda, J Sunseri, A Jia, RB Iovanisci, I Snyder, DR Koes. J. Chem. Inf. Model, 2020
link PubMed ChemRxiv

Virtual Screening with Gnina 1.0
J Sunseri, DR Koes. Molecules, 2021
link Preprints

Docker

A pre-built docker image is available here and Dockerfiles are here.

Installation

We recommend that you use the pre-built binary unless you have significant experience building software on Linux, in which case building from source might result in an executable more optimized for your system.

Ubuntu 22.04

apt-get install build-essential git cmake wget libboost-all-dev libeigen3-dev libgoogle-glog-dev libprotobuf-dev protobuf-compiler libhdf5-dev libatlas-base-dev python3-dev librdkit-dev python3-numpy python3-pip python3-pytest libjsoncpp-dev

Follow NVIDIA's instructions to install the latest version of CUDA (>= 11.0 is required). Make sure nvcc is in your PATH.

Optionally install cuDNN.

Install OpenBabel3. Note there are errors in bond order determination in version 3.1.1 and older.

git clone https://github.com/openbabel/openbabel.git
cd openbabel
mkdir build
cd build
cmake -DWITH_MAEPARSER=OFF -DWITH_COORDGEN=OFF -DPYTHON_BINDINGS=ON -DRUN_SWIG=ON ..
make
make install

Install gnina

git clone https://github.com/gnina/gnina.git
cd gnina
mkdir build
cd build
cmake ..
make
make install
The complete sequence of commands for a fresh Ubuntu 22.04 system, including the CUDA 12.4 and cuDNN installation, is:

sudo apt-get remove nvidia-cuda-toolkit
wget https://developer.download.nvidia.com/compute/cuda/12.4.0/local_installers/cuda_12.4.0_550.54.14_linux.run
chmod 700 cuda_12.4.0_550.54.14_linux.run
sudo sh cuda_12.4.0_550.54.14_linux.run
wget https://developer.download.nvidia.com/compute/cudnn/9.0.0/local_installers/cudnn-local-repo-ubuntu2204-9.0.0_1.0-1_amd64.deb
sudo dpkg -i cudnn-local-repo-ubuntu2204-9.0.0_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-9.0.0/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cudnn-cuda-12
apt-get install build-essential git cmake wget libboost-all-dev libeigen3-dev libgoogle-glog-dev libprotobuf-dev protobuf-compiler libhdf5-dev libatlas-base-dev python3-dev librdkit-dev python3-numpy python3-pip python3-pytest libjsoncpp-dev

git clone https://github.com/openbabel/openbabel.git
cd openbabel
mkdir build
cd build
cmake -DWITH_MAEPARSER=OFF -DWITH_COORDGEN=OFF -DPYTHON_BINDINGS=ON -DRUN_SWIG=ON ..
make -j8
sudo make install

git clone https://github.com/gnina/gnina.git
cd gnina
mkdir build
cd build
cmake ..
make -j8
sudo make install

If you are building for systems with different GPUs (e.g. in a cluster environment), configure with -DCUDA_ARCH_NAME=All.
Note that the cmake build will automatically fetch and install libmolgrid if it is not already installed.

The scripts provided in gnina/scripts have additional python dependencies that must be installed.

Usage

To dock ligand lig.sdf to a binding site on rec.pdb defined by another ligand orig.sdf:

gnina -r rec.pdb -l lig.sdf --autobox_ligand orig.sdf -o docked.sdf.gz
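
Here --autobox_ligand derives the search box from the reference ligand's extents, padded by --autobox_add (default +4 on all six sides, per the option help below). A minimal Python sketch of that bounding-box arithmetic, purely as an illustration (this is not gnina's actual code):

```python
def autobox(coords, buffer=4.0):
    """Compute a docking box (center, size) from ligand atom coordinates.

    coords: list of (x, y, z) tuples; buffer: padding added to each side,
    mirroring gnina's autobox_add default of +4 Angstroms.
    """
    xs, ys, zs = zip(*coords)
    center = tuple((max(a) + min(a)) / 2 for a in (xs, ys, zs))
    size = tuple(max(a) - min(a) + 2 * buffer for a in (xs, ys, zs))
    return center, size

# Toy ligand: two atoms 2 Angstroms apart along x
center, size = autobox([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
print(center)  # (1.0, 0.0, 0.0)
print(size)    # (10.0, 8.0, 8.0)
```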

To perform docking with flexible sidechain residues within 3.5 Angstroms of orig.sdf (generally not recommended unless prior knowledge indicates the pocket is highly flexible):

gnina -r rec.pdb -l lig.sdf --autobox_ligand orig.sdf --flexdist_ligand orig.sdf --flexdist 3.5 -o flex_docked.sdf.gz
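
The --flexdist selection rule is proximity-based: a residue is made flexible if any of its atoms lies within flexdist of the --flexdist_ligand. A rough Python sketch of that rule, using invented data structures for illustration:

```python
import math

def flexible_residues(residues, ligand_coords, flexdist=3.5):
    """Return ids of residues with any atom within flexdist of any ligand atom.

    residues: dict mapping a residue id (e.g. "A:100") to a list of (x, y, z)
    atom coordinates; ligand_coords: list of (x, y, z) ligand atom coordinates.
    """
    return [
        resid
        for resid, atoms in residues.items()
        if any(math.dist(a, l) <= flexdist for a in atoms for l in ligand_coords)
    ]

residues = {
    "A:100": [(0.0, 0.0, 0.0)],   # 1 Angstrom from the ligand atom -> flexible
    "A:200": [(10.0, 0.0, 0.0)],  # 9 Angstroms away -> stays rigid
}
print(flexible_residues(residues, [(1.0, 0.0, 0.0)]))  # ['A:100']
```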

To perform whole protein docking:

gnina -r rec.pdb -l lig.sdf --autobox_ligand rec.pdb -o whole_docked.sdf.gz --exhaustiveness 64

To utilize the default ensemble CNN in the energy minimization during the refinement step of docking (10 times slower than the default rescore option):

gnina -r rec.pdb -l lig.sdf --autobox_ligand orig.sdf --cnn_scoring refinement -o cnn_refined.sdf.gz

To utilize the default ensemble CNN for every step of docking (1000 times slower than the default rescore option):

gnina -r rec.pdb -l lig.sdf --autobox_ligand orig.sdf --cnn_scoring all -o cnn_all.sdf.gz

To utilize all empirical scoring using the Vinardo scoring function:

gnina -r rec.pdb -l lig.sdf --autobox_ligand orig.sdf --scoring vinardo --cnn_scoring none -o vinardo_docked.sdf.gz

To utilize a different CNN during docking (see help for possible options):

gnina -r rec.pdb -l lig.sdf --autobox_ligand orig.sdf --cnn dense -o dense_docked.sdf.gz

To minimize and score ligands ligs.sdf already positioned in a binding site:

gnina -r rec.pdb -l ligs.sdf --minimize -o minimized.sdf.gz
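
gnina records its scores as SD data fields on each output pose; CNNscore and CNNaffinity are the fields used for pose sorting (see --pose_sort_order below). As an illustration, these tags can be pulled from an uncompressed SDF with plain Python (a sketch, not a full SDF parser; for .sdf.gz output, wrap the file in gzip.open first):

```python
def read_sd_tags(sdf_text, wanted=("CNNscore", "CNNaffinity")):
    """Extract selected SD data fields from each molecule record in an SDF string.

    Records end with '$$$$'; tag lines look like '> <TagName>' followed by
    the value on the next line. Returns one dict per molecule.
    """
    mols, tags, current = [], {}, None
    for line in sdf_text.splitlines():
        if line.startswith("$$$$"):
            mols.append(tags)
            tags, current = {}, None
        elif line.startswith("> <") and line.rstrip().endswith(">"):
            name = line.strip()[3:-1]
            current = name if name in wanted else None
        elif current is not None:
            tags[current] = float(line.strip())
            current = None
    return mols

# Minimal two-tag record, as gnina might write after rescoring
sdf = "\n".join([
    "lig", "", "", "  0  0", "M  END",
    "> <CNNscore>", "0.91", "",
    "> <CNNaffinity>", "6.2", "",
    "$$$$",
])
print(read_sd_tags(sdf))  # [{'CNNscore': 0.91, 'CNNaffinity': 6.2}]
```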

To covalently dock a pyrazole to a specific iron atom on the receptor, with the bond formed between a nitrogen of the pyrazole and the iron:

gnina  -r rec.pdb.gz -l conformer.sdf.gz --autobox_ligand bindingsite.sdf.gz --covalent_rec_atom A:601:FE --covalent_lig_atom_pattern '[$(n1nccc1)]' -o output.sdf.gz 

The same as above, but with the covalently bonding ligand atom manually positioned (instead of using OpenBabel's bonding heuristics) and the ligand/residue complex optimized with UFF:

gnina  -r rec.pdb.gz -l conformer.sdf.gz --autobox_ligand bindingsite.sdf.gz --covalent_lig_atom_position -11.796,31.887,72.682  --covalent_optimize_lig  --covalent_rec_atom A:601:FE --covalent_lig_atom_pattern '[$(n1nccc1)]' -o output.sdf.gz 

All options:

Input:
  -r [ --receptor ] arg              rigid part of the receptor
  --flex arg                         flexible side chains, if any (PDBQT)
  -l [ --ligand ] arg                ligand(s)
  --flexres arg                      flexible side chains specified by comma 
                                     separated list of chain:resid
  --flexdist_ligand arg              Ligand to use for flexdist
  --flexdist arg                     set all side chains within specified 
                                     distance to flexdist_ligand to flexible
  --flex_limit arg                   Hard limit for the number of flexible 
                                     residues
  --flex_max arg                     Retain at most the closest flex_max 
                                     flexible residues

Search space (required):
  --center_x arg                     X coordinate of the center
  --center_y arg                     Y coordinate of the center
  --center_z arg                     Z coordinate of the center
  --size_x arg                       size in the X dimension (Angstroms)
  --size_y arg                       size in the Y dimension (Angstroms)
  --size_z arg                       size in the Z dimension (Angstroms)
  --autobox_ligand arg               Ligand to use for autobox
  --autobox_add arg                  Amount of buffer space to add to 
                                     auto-generated box (default +4 on all six 
                                     sides)
  --autobox_extend arg (=1)          Expand the autobox if needed to ensure the
                                     input conformation of the ligand being 
                                     docked can freely rotate within the box.
  --no_lig                           no ligand; for sampling/minimizing 
                                     flexible residues

Covalent docking:
  --covalent_rec_atom arg            Receptor atom ligand is covalently bound 
                                     to.  Can be specified as 
                                     chain:resnum:atom_name or as x,y,z 
                                     Cartesian coordinates.
  --covalent_lig_atom_pattern arg    SMARTS expression for ligand atom that 
                                     will covalently bind protein.
  --covalent_lig_atom_position arg   Optional.  Initial placement of covalently
                                     bonding ligand atom in x,y,z Cartesian 
                                     coordinates.  If not specified, 
                                     OpenBabel's GetNewBondVector function will
                                     be used to position ligand.
  --covalent_fix_lig_atom_position   If covalent_lig_atom_position is 
                                     specified, fix the ligand atom to this 
                                     position as opposed to using this position
                                     to define the initial structure.
  --covalent_bond_order arg (=1)     Bond order of covalent bond. Default 1.
  --covalent_optimize_lig            Optimize the covalent complex of ligand 
                                     and residue using UFF. This will change 
                                     bond angles and lengths of the ligand.

Scoring and minimization options:
  --scoring arg                      specify alternative built-in scoring 
                                     function: ad4_scoring default dkoes_fast 
                                     dkoes_scoring dkoes_scoring_old vina 
                                     vinardo
  --custom_scoring arg               custom scoring function file
  --custom_atoms arg                 custom atom type parameters file
  --score_only                       score provided ligand pose
  --local_only                       local search only using autobox (you 
                                     probably want to use --minimize)
  --minimize                         energy minimization
  --randomize_only                   generate random poses, attempting to avoid
                                     clashes
  --num_mc_steps arg                 fixed number of monte carlo steps to take 
                                     in each chain
  --max_mc_steps arg                 cap on number of monte carlo steps to take
                                     in each chain
  --num_mc_saved arg                 number of top poses saved in each monte 
                                     carlo chain
  --temperature arg                  temperature for metropolis accept 
                                     criterion
  --minimize_iters arg (=0)          number iterations of steepest descent; 
                                     default scales with rotors and usually 
                                     isn't sufficient for convergence
  --accurate_line                    use accurate line search
  --simple_ascent                    use simple gradient ascent
  --minimize_early_term              Stop minimization before convergence 
                                     conditions are fully met.
  --minimize_single_full             During docking perform a single full 
                                     minimization instead of a truncated 
                                     pre-evaluate followed by a full.
  --approximation arg                approximation (linear, spline, or exact) 
                                     to use
  --factor arg                       approximation factor: higher results in a 
                                     finer-grained approximation
  --force_cap arg                    max allowed force; lower values more 
                                     gently minimize clashing structures
  --user_grid arg                    Autodock map file for user grid data based
                                     calculations
  --user_grid_lambda arg (=-1)       Scales user_grid and functional scoring
  --print_terms                      Print all available terms with default 
                                     parameterizations
  --print_atom_types                 Print all available atom types

Convolutional neural net (CNN) scoring:
  --cnn_scoring arg (=1)             Amount of CNN scoring: none, rescore 
                                     (default), refinement, metrorescore 
                                     (metropolis+rescore), metrorefine 
                                     (metropolis+refine), all
  --cnn arg                          built-in model to use, specify 
                                     PREFIX_ensemble to evaluate an ensemble of
                                     models starting with PREFIX: 
                                     crossdock_default2018 
                                     crossdock_default2018_1 
                                     crossdock_default2018_2 
                                     crossdock_default2018_3 
                                     crossdock_default2018_4 default2017 dense 
                                     dense_1 dense_2 dense_3 dense_4 
                                     general_default2018 general_default2018_1 
                                     general_default2018_2 
                                     general_default2018_3 
                                     general_default2018_4 redock_default2018 
                                     redock_default2018_1 redock_default2018_2 
                                     redock_default2018_3 redock_default2018_4
  --cnn_model arg                    caffe cnn model file; if not specified a 
                                     default model will be used
  --cnn_weights arg                  caffe cnn weights file (*.caffemodel); if 
                                     not specified default weights (trained on 
                                     the default model) will be used
  --cnn_resolution arg (=0.5)        resolution of grids, don't change unless 
                                     you really know what you are doing
  --cnn_rotation arg (=0)            evaluate multiple rotations of pose (max 
                                     24)
  --cnn_update_min_frame arg (=1)    During minimization, recenter coordinate 
                                     frame as ligand moves
  --cnn_freeze_receptor              Don't move the receptor with respect to a 
                                     fixed coordinate system
  --cnn_mix_emp_force                Merge CNN and empirical minus forces
  --cnn_mix_emp_energy               Merge CNN and empirical energy
  --cnn_empirical_weight arg (=1)    Weight for scaling and merging empirical 
                                     force and energy 
  --cnn_outputdx                     Dump .dx files of atom grid gradient.
  --cnn_outputxyz                    Dump .xyz files of atom gradient.
  --cnn_xyzprefix arg (=gradient)    Prefix for atom gradient .xyz files
  --cnn_center_x arg                 X coordinate of the CNN center
  --cnn_center_y arg                 Y coordinate of the CNN center
  --cnn_center_z arg                 Z coordinate of the CNN center
  --cnn_verbose                      Enable verbose output for CNN debugging

Output:
  -o [ --out ] arg                   output file name, format taken from file 
                                     extension
  --out_flex arg                     output file for flexible receptor residues
  --log arg                          optionally, write log file
  --atom_terms arg                   optionally write per-atom interaction term
                                     values
  --atom_term_data                   embedded per-atom interaction terms in 
                                     output sd data
  --pose_sort_order arg (=0)         How to sort docking results: CNNscore 
                                     (default), CNNaffinity, Energy
  --full_flex_output                 Output entire structure for out_flex, not 
                                     just flexible residues.

Misc (optional):
  --cpu arg                          the number of CPUs to use (the default is 
                                     to try to detect the number of CPUs or, 
                                     failing that, use 1)
  --seed arg                         explicit random seed
  --exhaustiveness arg (=8)          exhaustiveness of the global search 
                                     (roughly proportional to time)
  --num_modes arg (=9)               maximum number of binding modes to 
                                     generate
  --min_rmsd_filter arg (=1)         rmsd value used to filter final poses to 
                                     remove redundancy
  -q [ --quiet ]                     Suppress output messages
  --addH arg                         automatically add hydrogens in ligands (on
                                     by default)
  --stripH arg                       remove hydrogens from molecule _after_ 
                                     performing atom typing for efficiency (off
                                     by default)
  --device arg (=0)                  GPU device to use
  --no_gpu                           Disable GPU acceleration, even if 
                                     available.

Configuration file (optional):
  --config arg                       the above options can be put here

Information (optional):
  --help                             display usage summary
  --help_hidden                      display usage summary with hidden options
  --version                          display program version


CNN Scoring

--cnn_scoring determines at which points of the docking procedure the CNN scoring function is used.

  • none - No CNNs used for docking. Uses the specified empirical scoring function throughout.
  • rescore (default) - CNN used for reranking of final poses. Least computationally expensive CNN option.
  • refinement - CNN used to refine poses after Monte Carlo chains and for final ranking of output poses. 10x slower than rescore when using a GPU.
  • all - CNN used as the scoring function throughout the whole procedure. Extremely computationally intensive and not recommended.

The default CNN scoring function is an ensemble of 5 models selected to balance pose prediction performance and runtime: dense, general_default2018_3, dense_3, crossdock_default2018, and redock_default2018. More information on these various models can be found in the papers listed above.
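
How gnina aggregates the ensemble members is internal to the program; conceptually, each model scores the pose and the results are combined, e.g. by a simple mean, as in this sketch (the per-model scores below are made up):

```python
def ensemble_score(scores):
    """Combine per-model CNN pose scores by a simple arithmetic mean."""
    return sum(scores) / len(scores)

# Hypothetical pose scores from the five default ensemble members
per_model = {
    "dense": 0.90,
    "general_default2018_3": 0.84,
    "dense_3": 0.88,
    "crossdock_default2018": 0.80,
    "redock_default2018": 0.78,
}
print(round(ensemble_score(list(per_model.values())), 2))  # 0.84
```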

Training

Scripts to aid in training new CNN models can be found at https://github.com/gnina/scripts and sample models at https://github.com/gnina/models.

The DUD-E docked poses used in the original paper can be found here and the CrossDocked2020 set is here.

License

gnina is dual licensed under GPL and Apache. The GPL license is necessitated by the use of OpenBabel (which is GPL licensed). In order to use gnina under the Apache license only, all references to OpenBabel must be removed from the source code.

gnina's People

Contributors

cdluminate, cypof, dgolden1, dkoes, eelstork, erictzeng, flx42, jac241, jamt9000, jeffdonahue, jsunseri, kloudkl, longjon, luc1100, lukeyeager, mattragoza, mavenlin, noiredd, philkr, qipeng, rbgirshick, rishalaggarwal, rmeli, ronghanghu, sergeyk, sguada, shelhamer, tnarihi, yangqing, yosinski


gnina's Issues

Build failed, cmake is looking for openbabel2

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- The CUDA compiler identification is NVIDIA 10.0.130
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda-10.0/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda-10.0/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda-10.0 (found suitable version "10.0", minimum required is "9.0")
-- Found libmolgrid include files at /usr/local/include
-- Found libmolgrid library at /usr/local/lib/libmolgrid.so
-- Found libmolgrid: /usr/local/include
CMake Error at /usr/local/lib/cmake/openbabel3/OpenBabel3Config.cmake:15 (include):
  include could not find load file:

    /lib/cmake/openbabel3/OpenBabel3_EXPORTS.cmake
Call Stack (most recent call first):
  CMakeLists.txt:42 (find_package)


CMake Error at CMakeLists.txt:43 (get_target_property):
  get_target_property() called with non-existent target "openbabel".


-- CUDA detected: 10.0
-- Added CUDA NVCC flags for: sm_61
CMake Warning (dev) at caffe/cmake/Misc.cmake:32 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
Call Stack (most recent call first):
  caffe/CMakeLists.txt:29 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at caffe/cmake/Utils.cmake:196 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'USE_OPENCV'.
Call Stack (most recent call first):
  caffe/CMakeLists.txt:43 (caffe_option)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at caffe/cmake/Utils.cmake:196 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'USE_LEVELDB'.
Call Stack (most recent call first):
  caffe/CMakeLists.txt:44 (caffe_option)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at caffe/cmake/Utils.cmake:196 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'USE_LMDB'.
Call Stack (most recent call first):
  caffe/CMakeLists.txt:45 (caffe_option)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found Boost: /usr/include (found suitable version "1.65.1", minimum required is "1.54") found components:  system thread filesystem iostreams timer chrono date_time atomic regex
CMake Warning at caffe/cmake/Dependencies.cmake:18 (find_package):
  By not providing "FindOpenBabel2.cmake" in CMAKE_MODULE_PATH this project
  has asked CMake to find a package configuration file provided by
  "OpenBabel2", but CMake did not find one.

  Could not find a package configuration file provided by "OpenBabel2" with
  any of the following names:

    OpenBabel2Config.cmake
    openbabel2-config.cmake

  Add the installation prefix of "OpenBabel2" to CMAKE_PREFIX_PATH or set
  "OpenBabel2_DIR" to a directory containing one of the above files.  If
  "OpenBabel2" provides a separate development package or SDK, be sure it has
  been installed.
Call Stack (most recent call first):
  caffe/CMakeLists.txt:50 (include)


CMake Error at /usr/local/lib/cmake/openbabel3/OpenBabel3Config.cmake:15 (include):
  include could not find load file:

    /lib/cmake/openbabel3/OpenBabel3_EXPORTS.cmake
Call Stack (most recent call first):
  caffe/cmake/Dependencies.cmake:23 (find_package)
  caffe/CMakeLists.txt:50 (include)


-- Found GFlags: /usr/include
-- Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Found Glog: /usr/include
-- Found glog    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found Protobuf: /usr/lib/x86_64-linux-gnu/libprotobuf.so;-lpthread (found version "3.0.0")
-- Found PROTOBUF Compiler: /usr/bin/protoc
-- HDF5: Using hdf5 compiler wrapper to determine C configuration
-- HDF5: Using hdf5 compiler wrapper to determine CXX configuration
-- Found HDF5: /usr/lib/x86_64-linux-gnu/hdf5/serial/libhdf5_cpp.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so (found version "1.10.0.1") found components:  HL
-- CUDA detected: 10.0
-- Added CUDA NVCC flags for: sm_61
-- Found Atlas: /usr/include/x86_64-linux-gnu
-- Found Atlas (include: /usr/include/x86_64-linux-gnu library: /usr/lib/x86_64-linux-gnu/libatlas.so lapack: /usr/lib/x86_64-linux-gnu/liblapack.so
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.6.8", minimum required is "3.0")
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.6m.so (found suitable version "3.6.8", minimum required is "3.0")
-- Found NumPy: /home/mhassan/.local/lib/python3.6/site-packages/numpy/core/include (found suitable version "1.17.4", minimum required is "1.7.1")
-- NumPy ver. 1.17.4 found (include: /home/mhassan/.local/lib/python3.6/site-packages/numpy/core/include)
-- Found Boost: /usr/include (found suitable version "1.65.1", minimum required is "1.58") found components:  python36
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Found Git: /usr/bin/git (found version "2.17.1")
--
-- ******************* Caffe Configuration Summary *******************
-- General:
--   Version           :   1.0.0
--   Git               :   6b817e27
--   System            :   Linux
--   C++ compiler      :   /usr/bin/c++
--   Release CXX flags :   -O3 -DNDEBUG -Wno-deprecated-declarations -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
--   Debug CXX flags   :   -g -Wno-deprecated-declarations -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
--   Build type        :   Release
--
--   BUILD_SHARED_LIBS :   ON
--   BUILD_python      :   ON
--   BUILD_matlab      :   OFF
--   BUILD_docs        :   ON
--   CPU_ONLY          :   OFF
--   USE_OPENCV        :   0
--   USE_LEVELDB       :   0
--   USE_LMDB          :   0
--   USE_NCCL          :   OFF
--   ALLOW_LMDB_NOLOCK :   OFF
--
-- Dependencies:
--   BLAS              :   Yes (Atlas)
--   Boost             :   Yes (ver. 1.65)
--   glog              :   Yes
--   gflags            :   Yes
--   OpenBabel         :   No
--   protobuf          :   Yes (ver. 3.0.0)
--   CUDA              :   Yes (ver. 10.0)
--
-- NVIDIA CUDA:
--   Target GPU(s)     :   Auto
--   GPU arch(s)       :   sm_61
--   cuDNN             :   Not found
--
-- Python:
--   Interpreter       :   /usr/bin/python3 (ver. 3.6.8)
--   Libraries         :   /usr/lib/x86_64-linux-gnu/libpython3.6m.so (ver 3.6.8)
--   NumPy             :   /home/mhassan/.local/lib/python3.6/site-packages/numpy/core/include (ver 1.17.4)
--
-- Documentaion:
--   Doxygen           :   No
--   config_file       :
--
-- Install:
--   Install path      :   /usr/local
--
-- Found Boost: /usr/include (found version "1.65.1") found components:  program_options system iostreams timer thread serialization filesystem date_time regex unit_test_framework chrono atomic
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- Found RDKit include files at /usr/include/rdkit
-- Found RDKit libraries at /usr/lib
-- Found RDKit library files at /usr/lib/libFileParsers.so;/usr/lib/libSmilesParse.so;/usr/lib/libGraphMol.so;/usr/lib/libRDGeometryLib.so;/usr/lib/libRDGeneral.so;/usr/lib/libSubgraphs.so;/usr/lib/libDataStructs.so;/usr/lib/libDepictor.so
-- Found RDKit: /usr/include/rdkit
-- Found Boost: /usr/include (found version "1.65.1") found components:  unit_test_framework system
CMake Warning at test/CMakeLists.txt:9 (find_package):
  By not providing "FindOpenBabel2.cmake" in CMAKE_MODULE_PATH this project
  has asked CMake to find a package configuration file provided by
  "OpenBabel2", but CMake did not find one.

  Could not find a package configuration file provided by "OpenBabel2" with
  any of the following names:

    OpenBabel2Config.cmake
    openbabel2-config.cmake

  Add the installation prefix of "OpenBabel2" to CMAKE_PREFIX_PATH or set
  "OpenBabel2_DIR" to a directory containing one of the above files.  If
  "OpenBabel2" provides a separate development package or SDK, be sure it has
  been installed.


CMake Error at /usr/local/lib/cmake/openbabel3/OpenBabel3Config.cmake:15 (include):
  include could not find load file:

    /lib/cmake/openbabel3/OpenBabel3_EXPORTS.cmake
Call Stack (most recent call first):
  test/CMakeLists.txt:13 (find_package)


-- Found PythonInterp: /usr/bin/python3 (found version "3.6.8")
-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- Found OpenMP: TRUE
-- Configuring incomplete, errors occurred!

Issue summary

Build failed

Steps to reproduce

cd gnina
mkdir build
cd build
cmake ..

Your system configuration

Operating system: Ubuntu 18.04
CUDA version (if applicable): 10.0

gnina --score_only error

Issue summary

2o21.pdb downloaded from https://www.rcsb.org/structure/2O21

gnina: docker pull dkoes/gnina

  • command
    gnina --score_only -r 2o21_1/2o21.pdb -l ligand.sdf --cnn_scoring

  • output info

*** IMPORTANT: gnina is not yet intended for production use. Use smina. ***

Weights      Terms
-0.035579    gauss(o=0.000000,_w=0.500000,_c=8.000000)
-0.005156    gauss(o=3.000000,_w=2.000000,_c=8.000000)
0.840245     repulsion(o=0.000000,_c=8.000000)
-0.035069    hydrophobic(g=0.500000,_b=1.500000,_c=8.000000)
-0.587439    non_dir_h_bond(g=-0.700000,_b=0.000000,_c=8.000000)
1.923        num_tors_div

Parse error on line 1661 in file "2o21_1/2o21.pdb": ATOM syntax incorrect: "6 5.40" is not a valid coordinate

  • line 1661 in file 2o21.pdb
ATOM   1426  HB  VAL A 130      -4.077  -3.560   4.915  1.00  0.38           H
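This parse error usually means an ATOM record's fields have drifted out of the fixed columns the PDB format prescribes, so the reader picks up fragments of neighboring fields as a coordinate. A minimal standalone sketch (not gnina's actual parser) that checks whether a record's coordinate columns parse cleanly:

```python
# Check the fixed-width coordinate columns of PDB ATOM/HETATM records.
# Per the PDB v3.3 format, x, y, z occupy columns 31-38, 39-46, 47-54
# (1-based), i.e. 0-based slices [30:38], [38:46], [46:54].

def check_atom_coords(line):
    """Return (x, y, z) if the coordinate columns are valid floats,
    None for non-atom records; raise ValueError on a bad field."""
    if not line.startswith(("ATOM", "HETATM")):
        return None
    coords = []
    for start, end in ((30, 38), (38, 46), (46, 54)):
        field = line[start:end]
        try:
            coords.append(float(field))
        except ValueError:
            raise ValueError(f"bad coordinate field {field!r}")
    return tuple(coords)

# The line quoted above parses fine on its own, which suggests the file's
# actual line 1661 (or an earlier record) is shifted out of its columns.
line = ("ATOM   1426  HB  VAL A 130      -4.077  -3.560   4.915"
        "  1.00  0.38           H")
print(check_atom_coords(line))  # (-4.077, -3.56, 4.915)
```

Running this over every line of the file would pinpoint the first misaligned record.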

Error while running train.py

Hello, I tried to install gnina, but I always get this error.

I already added this to my .bashrc and sourced it:
export PYTHONPATH=/usr/local/python:$PYTHONPATH

./scripts/train.py -h
Traceback (most recent call last):
  File "./scripts/train.py", line 11, in <module>
    import caffe
  File "/usr/local/python/caffe/__init__.py", line 2, in <module>
    from ._caffe import init_log, log, set_mode_cpu, set_mode_gpu, set_device, Layer, get_solver, layer_type_list, set_random_seed, solver_count, set_solver_count, solver_rank, set_solver_rank, set_multiprocess, Layer, get_solver, has_nccl, device_synchronize
ImportError: cannot import name has_nccl

Do these warnings matter?

CUDA 9 is used here.

[ 8%] Building NVCC (Device) object caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_molgrid_data_layer.cu.o
/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(13): warning: host annotation on a defaulted function("gfloat3") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(13): warning: device annotation on a defaulted function("gfloat3") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(34): warning: host annotation on a defaulted function("operator=") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(34): warning: device annotation on a defaulted function("operator=") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(60): warning: function "__shfl_down(float, unsigned int, int)"
/garlic/apps/cuda9.0/include/sm_30_intrinsics.hpp(278): here was declared deprecated ("__shfl_down() is deprecated in favor of __shfl_down_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning).")

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(61): warning: function "__shfl_down(float, unsigned int, int)"
/garlic/apps/cuda9.0/include/sm_30_intrinsics.hpp(278): here was declared deprecated ("__shfl_down() is deprecated in favor of __shfl_down_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning).")

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(62): warning: function "__shfl_down(float, unsigned int, int)"
/garlic/apps/cuda9.0/include/sm_30_intrinsics.hpp(278): here was declared deprecated ("__shfl_down() is deprecated in favor of __shfl_down_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning).")

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(13): warning: host annotation on a defaulted function("gfloat3") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(13): warning: device annotation on a defaulted function("gfloat3") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(34): warning: host annotation on a defaulted function("operator=") is ignored

/garlic/apps/Downloads/gnina/./gninasrc/lib/gpu_math.h(34): warning: device annotation on a defaulted function("operator=") is ignored

train our own model for gnina scoring

Dear Dr. Koes,

I have recently compiled gnina and done some quick testing. As you pointed out, docking is not recommended with gnina, so we can just use the --cnn_scoring option for scoring. I am happy that scoring one ligand takes only about 0.7 s with gnina, which is several times faster than smina. I have tested a few DUD-E targets, but found that the AUC values for these targets computed by gnina are almost the same as those produced by smina; I am not sure whether I am doing this correctly. I am confused by the following issues:

  1. Is there a qualitative difference between the default scoring system of gnina and the scoring system of smina? If so, what dataset was the default gnina scoring model (--cnn_scoring) trained on? And do you expect gnina to show a significant improvement in AUC on DUD-E targets compared to smina?

  2. If we want to train our own 3DCNN model and do a virtual screening, we need to prepare the .gninatype file. As I understand, the procedures will be:

    • perform a docking for the targets and ligands (using smina maybe??) and select the top ranked pose for each ligand
    • generate the .gninatype file from the selected poses from the previous step, using gninatyper function
    • list the targets and ligands in a .types file, where each line contains a label, a receptor, and a ligand (the receptor and ligand files generated in the previous step)
    • perform a training and save the model
    • use gnina to do scoring, specifying which model to use for the scoring function.
      Do I understand this correctly? Thank you very much.
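The .types-file step of the workflow sketched above can be illustrated with a short script; the labels and .gninatypes file names below are hypothetical placeholders:

```python
# Write a .types training file: one example per line, formatted as
# "<label> <receptor.gninatypes> <ligand.gninatypes>".

examples = [
    (1, "rec1.gninatypes", "lig1_active.gninatypes"),  # label 1 = active
    (0, "rec1.gninatypes", "lig2_decoy.gninatypes"),   # label 0 = decoy
]

def write_types(path, rows):
    with open(path, "w") as f:
        for label, receptor, ligand in rows:
            f.write(f"{label} {receptor} {ligand}\n")

write_types("train.types", examples)
```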

Problem with latest development version of OpenBabel

Issue summary

The latest version of OpenBabel is incompatible with PDBQTUtilities.cpp; several OBAtom member functions were renamed (see OpenBabel PR #1975). This leads to the following error:

$HOME/Documents/git/gnina/gninasrc/lib/PDBQTUtilities.cpp: In function 'bool IsRotBond_PDBQT(OpenBabel::OBBond*, unsigned int)':
/biggin/b195/lina3015/Documents/git/gnina/gninasrc/lib/PDBQTUtilities.cpp:123:36: error: 'class OpenBabel::OBAtom' has no member named 'GetHvyValence'
   if (((the_bond->GetBeginAtom())->GetHvyValence() == 1)
                                    ^
$HOME/Documents/git/gnina/gninasrc/lib/PDBQTUtilities.cpp:124:37: error: 'class OpenBabel::OBAtom' has no member named 'GetHvyValence'
       || ((the_bond->GetEndAtom())->GetHvyValence() == 1)) {
                                     ^

Steps to reproduce

Install the latest version of OpenBabel (with a CMake fix, see PR 1988):

apt purge --auto-remove libopenbabel-dev
git clone https://github.com/RMeli/openbabel.git
cd openbabel && git checkout fix/cmake
cd .. && mkdir obuild && cd obuild
cmake ../openbabel -DWITH_JSON:BOOLEAN=FALSE
make
make -j
make install

then compile gnina.

Steps to Fix and Discussion

This is quick to fix by changing GetHvyValence() to GetHvyDegree() (I already have a PR ready), but it will break builds for people using older development versions of the library. However, a development version of OpenBabel is needed (the latest release, 2.4.1, is missing elements.h, which appeared later according to Issue #1941), and chances are new users will just pull the latest version and run into this problem.

Your system configuration

Operating system: Ubuntu 16.04
Compiler: GNU 5.4.0
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 7

Compilation error

[ 39%] Building CXX object caffe/src/caffe/CMakeFiles/caffe.dir/util/upgrade_proto.cpp.o
[ 39%] Linking CXX shared library ../../lib/libcaffe.so
[ 39%] Built target caffe
Scanning dependencies of target upgrade_solver_proto_text
[ 39%] Building CXX object caffe/tools/CMakeFiles/upgrade_solver_proto_text.dir/upgrade_solver_proto_text.cpp.o
[ 40%] Linking CXX executable upgrade_solver_proto_text
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBAtomAtomIter::operator++()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_sgemv'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_dgemm'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_sscal'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBAtomAtomIter::OBAtomAtomIter(OpenBabel::OBAtom&)'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_dgemv'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBAtom::IsHbondAcceptor()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_saxpy'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_ddot'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_dasum'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBConversion::~OBConversion()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBAtom::IsAromatic() const'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBMolAtomIter::OBMolAtomIter(OpenBabel::OBMol&)'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_sgemm'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBMol::AddHydrogens(bool, bool, double)'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBConversion::ReadFile(OpenBabel::OBBase*, std::string)'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_dscal'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBElementTable::GetSymbol(int)'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_scopy'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBMol::~OBMol()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_sasum'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBMol::NumHvyAtoms()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBMolAtomIter::operator++()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_daxpy'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBConversion::OBConversion(std::istream*, std::ostream*)'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::etab'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `OpenBabel::OBMol::OBMol()'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_dcopy'
../lib/libcaffe.so.1.0.0-rc3: undefined reference to `cblas_sdot'
collect2: error: ld returned 1 exit status
caffe/tools/CMakeFiles/upgrade_solver_proto_text.dir/build.make:126: recipe for target 'caffe/tools/upgrade_solver_proto_text' failed
make[2]: *** [caffe/tools/upgrade_solver_proto_text] Error 1
CMakeFiles/Makefile2:474: recipe for target 'caffe/tools/CMakeFiles/upgrade_solver_proto_text.dir/all' failed
make[1]: *** [caffe/tools/CMakeFiles/upgrade_solver_proto_text.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Can anyone help?

where to place the gnina/scripts

I placed gnina/scripts outside of the gnina/ directory (at the same directory level) and got ModuleNotFoundError: No module named 'caffe'. Is the location where I placed the scripts wrong?
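One way to make the scripts location-independent is to put the caffe install directory on sys.path at the top of the script instead of relying on where the script lives; /usr/local/python is an assumption matching the install prefix mentioned in other reports here:

```python
# Prepend the caffe Python install directory to the module search path so
# `import caffe` works no matter where this script is located.
import sys

CAFFE_PYTHON = "/usr/local/python"  # adjust to your actual install prefix

if CAFFE_PYTHON not in sys.path:
    sys.path.insert(0, CAFFE_PYTHON)

# After this, `import caffe` searches /usr/local/python first.
```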

CUDA_curand_LIBRARY variable set to NOTFOUND

Issue summary

I was unable to upgrade or downgrade Boost on my google colab instance, so I tried to install gnina on my google cloud instance (Ubuntu 16.04 running CUDA 9.0 and Boost 1.58). However, when I attempted to build gnina with this command:

$ git clone https://github.com/djinnome/gnina.git && cd gnina && mkdir -p build && cd build && cmake .. && make && make install

I got the following error:

--   gflags            :   Yes
--   OpenBabel         :   Yes
--   protobuf          :   Yes (ver. 2.6.1)
--   CUDA              :   Yes (ver. 9.0)
-- 
-- NVIDIA CUDA:
--   Target GPU(s)     :   Auto
--   GPU arch(s)       :   sm_37
--   cuDNN             :   Yes (ver. 7.1.4)
-- 
-- Python:
--   Interpreter       :   /usr/bin/python2.7 (ver. 2.7.12)
--   Libraries         :   /usr/lib/x86_64-linux-gnu/libpython2.7.so (ver 2.7.12)
--   NumPy             :   /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.11.0)
-- 
-- Documentaion:
--   Doxygen           :   No
--   config_file       :   
-- 
-- Install:
--   Install path      :   /usr/local
-- 
-- Found CUDA: /usr/local/cuda (found version "9.0") 
-- Boost version: 1.58.0
-- Found the following Boost libraries:
--   program_options
--   system
--   iostreams
--   timer
--   thread
--   serialization
--   filesystem
--   date_time
--   regex
--   unit_test_framework
--   chrono
--   atomic
-- Found RDKit include files at /usr/include/rdkit
-- Found RDKit libraries at /usr/lib
-- Found RDKit library files at /usr/lib/libFileParsers.so;/usr/lib/libSmilesParse.so;/usr/lib/libGraphMol.so;/usr/lib/libRDGeometryLib.so;/usr/lib/libRDGeneral.so;/usr/lib/libSubgraphs.so;/usr/lib/libDataStructs.so;/usr/lib/libDepictor.so
-- Found RDKit: /usr/include/rdkit  
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_curand_LIBRARY (ADVANCED)
    linked by target "caffe" in directory /home/acronymph/gnina/caffe/src/caffe
-- Configuring incomplete, errors occurred!
See also "/home/acronymph/gnina/build/CMakeFiles/CMakeOutput.log".
See also "/home/acronymph/gnina/build/CMakeFiles/CMakeError.log".

Steps to reproduce

sudo apt-get install build-essential git wget libopenbabel-dev libboost-all-dev libeigen3-dev libgoogle-glog-dev libprotobuf-dev protobuf-compiler libhdf5-serial-dev libatlas-base-dev python-dev cmake librdkit-dev python-numpy python-pip
git clone https://github.com/djinnome/gnina.git && cd gnina && mkdir -p build && cd build && cmake .. && make && make install

Your system configuration

Operating system: Ubuntu 16.04
Compiler: gcc 5.4.0 20160609
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 7.1.4
BLAS: libatlas3-base amd64 3.10.2-9
Boost: 1.58.0
Python or MATLAB version (for pycaffe and matcaffe respectively): python 2.7

CNN refinement yields decreasing affinity and score


I am using version gnina Jul 21 2019 with the following for refinement:

gnina -r prot.pdb -l lig.pdbqt --autobox_ligand lig.pdbqt -o refine.pdbqt --cnn_refinement --gpu --log outp-refine.txt
  1. Are the CNN affinity and score supposed to decrease during refinement as shown below?

              _
             (_)
   __ _ _ __  _ _ __   __ _
  / _` | '_ \| | '_ \ / _` |
 | (_| | | | | | | | | (_| |
  \__, |_| |_|_|_| |_|\__,_|
   __/ |
  |___/

gnina is based on smina and AutoDock Vina.
Please cite appropriately.

*** IMPORTANT: Use of CNN scoring while docking is not recommended. ***
***     CNN scoring is best used with --minimize or --score_only    ***

Weights      Terms
-0.035579    gauss(o=0,_w=0.5,_c=8)
-0.005156    gauss(o=3,_w=2,_c=8)
0.840245     repulsion(o=0,_c=8)
-0.035069    hydrophobic(g=0.5,_b=1.5,_c=8)
-0.587439    non_dir_h_bond(g=-0.7,_b=0,_c=8)
1.923        num_tors_div

Using random seed: 617300398
CNNscore: 0.9114680886
CNNaffinity: 6.2475423813
CNNscore: 0.7921536565
CNNaffinity: 6.0033826828
CNNscore: 0.7712228298
CNNaffinity: 6.1471071243
CNNscore: 0.0040464709
CNNaffinity: 5.2708115578
CNNscore: 0.4218868911
CNNaffinity: 5.4078488350
CNNscore: 0.5019182563
CNNaffinity: 6.1058492661
CNNscore: 0.2703747153
CNNaffinity: 4.9184708595
CNNscore: 0.0012346902
CNNaffinity: 4.3380570412
CNNscore: 0.0020727473
CNNaffinity: 4.8777580261
CNNscore: 0.0004754360
CNNaffinity: 4.3021750450
CNNscore: 0.0015913198
CNNaffinity: 4.9182715416
CNNscore: 0.1185201555
CNNaffinity: 4.7961802483
CNNscore: 0.0006804905
CNNaffinity: 4.2967181206
CNNscore: 0.0004343929
CNNaffinity: 4.0752868652
CNNscore: 0.0010188936
CNNaffinity: 4.9657959938
CNNscore: 0.0002310632
CNNaffinity: 4.1413888931
CNNscore: 0.0003256369
CNNaffinity: 4.6696615219
CNNscore: 0.0011818660
CNNaffinity: 4.7335619926
CNNscore: 0.0002052060
CNNaffinity: 4.2882466316
CNNscore: 0.0000663921
CNNaffinity: 3.3440413475

mode |   affinity | dist from best mode
     | (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1       0.1        0.000      0.000
2       0.2        1.473      1.852
3       0.3        1.218      1.909
4       0.4        1.998      2.973
5       0.4        2.997      4.773
6       0.7        3.089      5.069
7       0.8        3.075      5.206
  1. Moreover, based on the help information, specifying --cnn_refinement will "Use a convolutional neural network for final minimization of docked poses". How does this differ from specifying --cnn_scoring --minimize instead?

  2. If refinement and minimization with CNN are different, which should be done first, or are they totally independent with no preferred order?
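For inspecting runs like the one above, the CNNscore/CNNaffinity trajectory can be pulled out of the log with a short script (a hypothetical helper, not part of gnina); the sample values below are taken from the output above:

```python
# Extract CNNscore and CNNaffinity values from gnina log output so the
# trend across refinement iterations can be inspected or plotted.
import re

log = """\
CNNscore: 0.9114680886
CNNaffinity: 6.2475423813
CNNscore: 0.0000663921
CNNaffinity: 3.3440413475
"""

scores = [float(s) for s in re.findall(r"CNNscore:\s*([\d.]+)", log)]
affinities = [float(s) for s in re.findall(r"CNNaffinity:\s*([\d.]+)", log)]
print(f"score went from {scores[0]} to {scores[-1]}")
```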

Compilation error at cnn_visualization.cpp.o

[ 48%] Building CXX object gninasrc/CMakeFiles/gninalib.dir/gninavis/cnn_visualization.cpp.o
/path/gnina/gninasrc/gninavis/cnn_visualization.cpp: In member function ‘std::string cnn_visualization::get_xyz_from_index(int, bool)’:
/path/gnina/gninasrc/gninavis/cnn_visualization.cpp:824:20: error: use of deleted function ‘std::basic_stringstream<char>& std::basic_stringstream<char>::operator=(const std::basic_stringstream<char>&)’
   mol_stream = std::stringstream(lig_string);

If you know anything about this, please let me know.

Thank you.

How to build gnina with CPU_ONLY?

I am trying to build gnina with CPU_ONLY set to 1, but I run into problems with the molgrid data layer. It still requires both cuda.h and cuda_runtime.h. Is there anything else I should be setting in CMakeLists.txt?

Thanks!

#define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported. Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."


Issue summary

When installing gnina, using the instructions for Ubuntu 16.04 (even though I am actually on 17.10):

git clone https://github.com/djinnome/gnina.git && cd gnina && mkdir -p build && cd build && cmake .. && make && make install

I get the following error:

 /usr/local/cuda/include/crt/common_functions.h:64:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
CMake Error at cuda_compile_1_generated_math_functions.cu.o.Release.cmake:222 (message):
  Error generating
  /content/gnina/build/caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/./cuda_compile_1_generated_math_functions.cu.o


caffe/src/caffe/CMakeFiles/caffe.dir/build.make:504: recipe for target 'caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o' failed
make[2]: *** [caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o] Error 1
CMakeFiles/Makefile2:307: recipe for target 'caffe/src/caffe/CMakeFiles/caffe.dir/all' failed
make[1]: *** [caffe/src/caffe/CMakeFiles/caffe.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Steps to reproduce

I am reporting a build error that seems to be due to a bug in Caffe. Attached is my CMakeCache.txt.

Output of cmake:

Cloning into 'gnina'...
remote: Enumerating objects: 305, done.
remote: Counting objects: 100% (305/305), done.
remote: Compressing objects: 100% (151/151), done.
remote: Total 38817 (delta 178), reused 225 (delta 126), pack-reused 38512
Receiving objects: 100% (38817/38817), 67.52 MiB | 23.48 MiB/s, done.
Resolving deltas: 100% (25492/25492), done.
-- The C compiler identification is GNU 7.2.0
-- The CXX compiler identification is GNU 7.2.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- CUDA detected: 9.2
-- Automatic GPU detection failed. Building for all known architectures.
-- Added CUDA NVCC flags for: sm_35 sm_50 sm_60 sm_61 sm_70
-- Boost version: 1.62.0
-- Found the following Boost libraries:
--   system
--   thread
--   filesystem
--   iostreams
--   timer
--   chrono
--   date_time
--   atomic
--   regex
-- Found Open Babel include files at /usr/include/openbabel-2.0
-- Found Open Babel library at /usr/lib/libopenbabel.so
Setting openbabel found TRUE
-- Found GFlags: /usr/include  
-- Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Found Glog: /usr/include  
-- Found glog    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found Protobuf: /usr/lib/x86_64-linux-gnu/libprotobuf.so;-lpthread (found version "3.0.0") 
-- Found PROTOBUF Compiler: /usr/bin/protoc
-- HDF5: Using hdf5 compiler wrapper to determine C configuration
-- HDF5: Using hdf5 compiler wrapper to determine CXX configuration
-- Found HDF5: /usr/lib/x86_64-linux-gnu/hdf5/serial/libhdf5_cpp.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so (found version "1.10.0.1") found components:  HL 
-- CUDA detected: 9.2
-- Automatic GPU detection failed. Building for all known architectures.
-- Added CUDA NVCC flags for: sm_35 sm_50 sm_60 sm_61 sm_70
-- Found Atlas: /usr/include/x86_64-linux-gnu  
-- Found Atlas (include: /usr/include/x86_64-linux-gnu library: /usr/lib/x86_64-linux-gnu/libatlas.so lapack: /usr/lib/x86_64-linux-gnu/liblapack.so
-- Found PythonInterp: /usr/bin/python2.7 (found suitable version "2.7.14", minimum required is "2.7") 
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable version "2.7.14", minimum required is "2.7") 
-- Found NumPy: /usr/local/lib/python2.7/dist-packages/numpy/core/include (found suitable version "1.14.5", minimum required is "1.7.1") 
-- NumPy ver. 1.14.5 found (include: /usr/local/lib/python2.7/dist-packages/numpy/core/include)
-- Boost version: 1.62.0
-- Found the following Boost libraries:
--   python
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE) 
-- Found Git: /usr/bin/git (found version "2.14.1") 
-- 
-- ******************* Caffe Configuration Summary *******************
-- General:
--   Version           :   1.0.0
--   Git               :   df652092
--   System            :   Linux
--   C++ compiler      :   /usr/bin/c++
--   Release CXX flags :   -O3 -DNDEBUG -Wno-deprecated-declarations -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
--   Debug CXX flags   :   -g -Wno-deprecated-declarations -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
--   Build type        :   Release
-- 
--   BUILD_SHARED_LIBS :   ON
--   BUILD_python      :   ON
--   BUILD_matlab      :   OFF
--   BUILD_docs        :   ON
--   CPU_ONLY          :   OFF
--   USE_OPENCV        :   0
--   USE_LEVELDB       :   0
--   USE_LMDB          :   0
--   USE_NCCL          :   OFF
--   ALLOW_LMDB_NOLOCK :   OFF
-- 
-- Dependencies:
--   BLAS              :   Yes (Atlas)
--   Boost             :   Yes (ver. 1.62)
--   glog              :   Yes
--   gflags            :   Yes
--   OpenBabel         :   Yes
--   protobuf          :   Yes (ver. 3.0.0)
--   CUDA              :   Yes (ver. 9.2)
-- 
-- NVIDIA CUDA:
--   Target GPU(s)     :   Auto
--   GPU arch(s)       :   sm_35 sm_50 sm_60 sm_61 sm_70
--   cuDNN             :   Not found
-- 
-- Python:
--   Interpreter       :   /usr/bin/python2.7 (ver. 2.7.14)
--   Libraries         :   /usr/lib/x86_64-linux-gnu/libpython2.7.so (ver 2.7.14)
--   NumPy             :   /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.14.5)
-- 
-- Documentaion:
--   Doxygen           :   No
--   config_file       :   
-- 
-- Install:
--   Install path      :   /usr/local
-- 
-- Found CUDA: /usr/local/cuda (found version "9.2") 
-- Boost version: 1.62.0
-- Found the following Boost libraries:
--   program_options
--   system
--   iostreams
--   timer
--   thread
--   serialization
--   filesystem
--   date_time
--   regex
--   unit_test_framework
--   chrono
--   atomic
-- Found RDKit include files at /usr/include/rdkit
-- Found RDKit libraries at /usr/lib
-- Found RDKit library files at /usr/lib/libFileParsers.so;/usr/lib/libSmilesParse.so;/usr/lib/libGraphMol.so;/usr/lib/libRDGeometryLib.so;/usr/lib/libRDGeneral.so;/usr/lib/libSubgraphs.so;/usr/lib/libDataStructs.so;/usr/lib/libDepictor.so
-- Found RDKit: /usr/include/rdkit  
-- Configuring done
-- Generating done
-- Build files have been written to: /content/gnina/build
[  0%] Running C++/Python protocol buffer compiler on /content/gnina/caffe/src/caffe/proto/caffe.proto
Scanning dependencies of target caffeproto
[  0%] Building CXX object caffe/src/caffe/CMakeFiles/caffeproto.dir/__/__/include/caffe/proto/caffe.pb.cc.o
[  0%] Linking CXX static library ../../lib/libcaffeproto.a
[  0%] Built target caffeproto
[  1%] Building NVCC (Device) object caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o
In file included from /usr/local/cuda/include/common_functions.h:50:0,
                 from /usr/local/cuda/include/cuda_runtime.h:115,
                 from <command-line>:0:
/usr/local/cuda/include/crt/common_functions.h:64:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
CMake Error at cuda_compile_1_generated_math_functions.cu.o.Release.cmake:222 (message):
  Error generating
  /content/gnina/build/caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/./cuda_compile_1_generated_math_functions.cu.o


caffe/src/caffe/CMakeFiles/caffe.dir/build.make:504: recipe for target 'caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o' failed
make[2]: *** [caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o] Error 1
CMakeFiles/Makefile2:307: recipe for target 'caffe/src/caffe/CMakeFiles/caffe.dir/all' failed
make[1]: *** [caffe/src/caffe/CMakeFiles/caffe.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Your system configuration

Operating system: Ubuntu 17.10
Compiler: gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0
CUDA version (if applicable): 9.2
CUDNN version (if applicable):
BLAS: libatlas-base-dev 3.10.3-5
Python or MATLAB version (for pycaffe and matcaffe respectively): python 2.7.14-2ubuntu1

gninavis generated pdbqt files

Issue summary

gninavis deforms the ligand when it generates the pdbqt file. Most likely this has to do with OpenBabel, but I am not sure how to solve it.

Your system configuration

Operating system: Redhat 7
Compiler:gcc
CUDA version (if applicable): 9.2
CUDNN version (if applicable):
BLAS:
Python or MATLAB version (for pycaffe and matcaffe respectively):2.7

Note:This installation is done on redhat directly and not via docker

"Could not open ligmap.old" error

Hi,
I want to test gnina CNN scoring, but I get a "Could not open ligmap.old" error, even though I'm sure the file exists at "../soft/models/refmodel3/ligmap.old". Should I set additional parameters to run this test? Thanks. I run the command as follows:

gnina --cpu 4 --gpu --receptor protein.mol2 --ligand mols-1.sdf --cnn_model /workspace/soft/models/refmodel3/refmodel3.model --cnn_weights /workspace/soft/models/refmodel3/weights/pdbbind_cnnmin1_100000.caffemodel --cnn_center_x 31.944 --cnn_center_y -13.308 --cnn_center_z 4.06928 --cnn_scoring --out gnina_cnn.sdf --num_modes 4 --energy_range 99 --exhaustiveness 4 --autobox_ligand bindingsite.xyz --autobox_add 2 --seed 42
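One common cause of such "Could not open" errors is that relative paths are resolved against the current working directory at the time gnina runs, not against the model file. A minimal sketch of normalizing the paths to absolute ones before building the command (the relative model path is the hypothetical one from the report above):

```python
# Resolve model-related paths to absolute ones so they do not depend on
# the directory gnina is launched from.
import os

model = "../soft/models/refmodel3/refmodel3.model"  # relative path from the report
model_abs = os.path.abspath(model)

# ligmap.old sits next to the model file; resolve it relative to the model.
ligmap = os.path.join(os.path.dirname(model_abs), "ligmap.old")
print(os.path.isabs(ligmap))  # True
```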


Parallel compilation fails when libmolgrid is not present

Issue summary

Parallel compilation with make -j fails when libmolgrid is not installed. While libmolgrid is being cloned from its repository, caffe starts to compile in parallel and the following error occurs:

make[2]: *** No rule to make target 'external/lib/libmolgrid.so', needed by 'caffe/src/caffe/CMakeFiles/caffe.dir/cmake_device_link.o'.  Stop.
make[2]: *** Waiting for unfinished jobs..

$HOME/Documents/git/gnina/caffe/src/caffe/common.cpp:10:35: fatal error: libmolgrid/libmolgrid.h: No such file or directory
compilation terminated.
caffe/src/caffe/CMakeFiles/caffe.dir/build.make:75: recipe for target 'caffe/src/caffe/CMakeFiles/caffe.dir/common.cpp.o' failed
make[2]: *** [caffe/src/caffe/CMakeFiles/caffe.dir/common.cpp.o] Error 1
In file included from $HOME/Documents/git/gnina/caffe/src/caffe/layers/molgrid_data_layer.cpp:17:0:
$HOME/Documents/git/gnina/caffe/include/caffe/layers/molgrid_data_layer.hpp:26:41: fatal error: libmolgrid/example_provider.h: No such file or directory
compilation terminated.
caffe/src/caffe/CMakeFiles/caffe.dir/build.make:829: recipe for target 'caffe/src/caffe/CMakeFiles/caffe.dir/layers/molgrid_data_layer.cpp.o' failed
make[2]: *** [caffe/src/caffe/CMakeFiles/caffe.dir/layers/molgrid_data_layer.cpp.o] Error 1
CMakeFiles/Makefile2:1282: recipe for target 'caffe/src/caffe/CMakeFiles/caffe.dir/all' failed
make[1]: *** [caffe/src/caffe/CMakeFiles/caffe.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

Steps to reproduce

Without libmolgrid installed, in gnina root directory:

mkdir build && cd build
cmake ..
make -j

Your system configuration

Operating system: Ubuntu 16.04
Compiler: GNU 5.4.0
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 7
OpenBabel: 2.4.90
CMake: 3.14.4

OpenBabel version check for GetHvyValence/GetHvyDegree

#if (OB_VERSION >= OB_VERSION_CHECK(2,4,90))
# include <openbabel/elements.h>
# define GET_SYMBOL OpenBabel::OBElements::GetSymbol
# define GET_HVY(a) a->GetHvyDegree()
#else
# define GET_SYMBOL etab.GetSymbol
# define GET_HVY(a) a->GetHvyValence()
#endif

I now don't think this check fixed the problem referenced in #63, as I am using version 2.4.90 on Bridges and I get this build error:

/home/mtragoza/gnina/gninasrc/lib/atom_constants.h:35:24: error: ‘class OpenBabel::OBAtom’ has no member named ‘GetHvyDegree’; did you mean ‘GetHvyValence’?
 # define GET_HVY(a) a->GetHvyDegree()
                        ^

I think some versions of OpenBabel labeled 2.4.90 still use GetHvyValence rather than GetHvyDegree, though I don't know in which exact version the switch was made.

identifier "__shfl_down_sync" is undefined

The error occurs on
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5)
Cuda compilation tools, release 8.0, V8.0.44

/media/cresset/scratch2/rfscore/gnina/gnina/./gninasrc/lib/gpu_math.h(58): error: identifier "__shfl_down_sync" is undefined

/media/cresset/scratch2/rfscore/gnina/gnina/./gninasrc/lib/gpu_math.h(63): error: identifier "__shfl_down_sync" is undefined

2 errors detected in the compilation of "/tmp/tmpxft_00005162_00000000-7_molgrid_data_layer.cpp1.ii".
CMake Error at cuda_compile_1_generated_molgrid_data_layer.cu.o.Release.cmake:282 (message):
Error generating file
/media/cresset/scratch2/rfscore/gnina/gnina/build/caffe/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_molgrid_data_layer.cu.o

Confusions of GPU and CNN usage

1. Does gnina use GPU for docking?

When I tried the following

gnina -r receptor.pdbqt -l lig.pdbqt --autobox_ligand lig.pdbqt --autobox_add 8 --exhaustiveness 16 -o result.pdbqt --cnn --gpu

The GPU utilization is 0% (monitored with nvidia-smi) until the following progress bar finishes:

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************

Then gnina uses the GPU, but the program ends very quickly (<1 s) after the following is printed:


mode |   affinity | dist from best mode
     | (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1       -11.2      0.000      0.000
2       -9.6       2.448      3.302
3       -9.4       1.964      2.617
4       -9.2       4.300      7.475
5       -9.2       2.123      2.584
6       -8.9       3.152      7.097
7       -8.8       3.155      7.199
8       -8.7       4.405      6.985
9       -8.7       3.046      3.804
Refine time 27.397
Loop time 30.817

2. What is happening when only --gpu is specified, i.e. the following command:

gnina -r receptor.pdbqt -l lig.pdbqt --autobox_ligand lig.pdbqt --autobox_add 8 --exhaustiveness 16 -o result.pdbqt --gpu

GPU utilization is 99%, but the program runs extremely slowly (~50 times slower than the example above) and the result is also strange:


0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box
 | pose 0 | ligand outside box

mode |   affinity | dist from best mode
     | (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1       -4.8       0.000      0.000
2       -3.4       7.630      11.349
3       -2.9       4.199      5.878
4       -2.7       7.029      9.203
5       -2.4       6.462      9.368
6       -2.1       6.103      8.684
Refine time 1440.356
Loop time 1450.621

3. What is the difference between --cnn and --cnn_model?
Both seem to be related to the default model, and specifying both without arguments, i.e. --cnn --cnn_model --gpu, results in the same slow performance as in question 2.

4. Does gnina support multiple GPUs?
I've tried --device 0,1 --gpu but it doesn't recognize the option.

'ligand outside box' error

Issue summary

Hello,
I'm evaluating gnina with --gpu to see if I can get a speed improvement in docking over smina. My attempts to run gnina in the same way I ran smina are generating errors that don't occur for smina; specifically:

MCULE-1000001311 | pose 0 | ligand outside box
MCULE-1000001311 | pose 0 | initial pose not within box

Any idea what's going on?
Thanks,
Greg

Steps to reproduce

./gnina --cpu 1 --gpu --receptor protein.mol2 --ligand mols-1.sdf --scoring vina --out gnina.sdf --num_modes 64 --energy_range 99999 --exhaustiveness 4 --autobox_ligand bindingsite.xyz --autobox_add 2 --seed 42

files.zip

Your system configuration

Operating system: Ubuntu
Compiler: g++ 5.4
CUDA version (if applicable): 9.1
CUDNN version (if applicable): 7.1.2
BLAS: Atlas
Python or MATLAB version (for pycaffe and matcaffe respectively):

What could be the cause of this error?

[ 98%] Building CXX object gninasrc/gninaserver/CMakeFiles/gninaserver.dir/MinimizationQuery.cpp.o
In file included from /garlic/apps/Downloads/gnina/gninasrc/gninaserver/MinimizationQuery.h:13:0,
from /garlic/apps/Downloads/gnina/gninasrc/gninaserver/MinimizationQuery.cpp:10:
/garlic/apps/Downloads/gnina/gninasrc/gninaserver/Reorienter.h:14:29: fatal error: eigen3/Eigen/Core: No such file or directory
#include <eigen3/Eigen/Core>
^
compilation terminated.
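The missing header usually just means Eigen's development headers are not installed (or not on the compiler's include path). On Ubuntu/Debian the package is typically libeigen3-dev, which installs the headers under /usr/include/eigen3 so that <eigen3/Eigen/Core> resolves; the package name is an assumption about your distribution.

```shell
# Install Eigen's headers; <eigen3/Eigen/Core> then resolves under /usr/include.
sudo apt-get install libeigen3-dev
```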

How to check the ligand source file from respective gninatypes?

Hello,

We are trying to compare the scoring of gnina with AutoDock Vina, working specifically with the https://github.com/gnina/models/tree/master/data/csar/alltest0.types file. We need to retrieve the ligand source files; we think we can already find the receptors in the CSAR dataset.

We tried to check the contents with od, and it shows numbers whose meaning we do not understand, like:
0000000 164163 040637 131463 040756 146144 040570 000003 000000 0000020 041776 040641 103045 040745 025553 040603 000014 000000 0000040 130620 040632 135052 040741 121556 040603 000001 000000

We read in another issue that gninatypes files should contain x, y, z and the atom type, but from this dump we are confused about which value is which.

Thanks in advance!

RDKit and Boost

/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: warning: libboost_system.so.1.63.0, needed by /usr/local/lib/libFileParsers.so, may conflict with libboost_system.so.1.53.0

This is probably because I installed RDKit 2017.09.2; is this a problem?

error train.py

Issue summary

Running train.py with refmodel3.model produces the error message Message type "caffe.LayerParameter" has no field named "molgrid_data_param".

~/work/gnina/models/refmodel3$ python ../../scripts/train.py -m refmodel3.model -p ../data/csar/all -d ../data/csar
  0           test ../data/csar/alltest0.types
  0          train ../data/csar/alltrain0.types
  1           test ../data/csar/alltest1.types
  1          train ../data/csar/alltrain1.types
  2           test ../data/csar/alltest2.types
  2          train ../data/csar/alltrain2.types
Traceback (most recent call last):
  File "../../scripts/train.py", line 848, in <module>
    results = train_and_test_model(args, train_test_files[i], outname, cont)
  File "../../scripts/train.py", line 351, in train_and_test_model
    write_model_file(test_model, template, files['train'], test_file, args.data_root, args.avg_rotations)
  File "../../scripts/train.py", line 71, in write_model_file
    prototxt.Merge(f.read(), netparam)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 536, in Merge
    descriptor_pool=descriptor_pool)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 590, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 623, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 638, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 763, in _MergeField
    merger(tokenizer, message, field)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 837, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 730, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 9:3 : Message type "caffe.LayerParameter" has no field named "molgrid_data_param".

Steps to reproduce

.. build gnina ; cmake ; make ; make install
git clone [email protected]:gnina/models.git
git clone [email protected]:gnina/scripts.git
cd models/refmodel3
python ../../scripts/train.py -m refmodel3.model -p ../data/csar/all -d ../data/csar

Your system configuration

Operating system: Ubuntu 16.04
Compiler: gcc 5.4
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 7.2.1
BLAS: 3.10.2
Python or MATLAB version (for pycaffe and matcaffe respectively): Python 2.7
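This particular ParseError usually means the caffe Python module being imported is a stock BVLC caffe rather than gnina's fork, which is the build that registers the molgrid_data_param layer parameter. One thing worth checking is that gnina's built pycaffe comes first on PYTHONPATH; the path below is a hypothetical build location, so adjust it to yours.

```shell
# Put gnina's caffe (which knows molgrid_data_param) ahead of any system caffe.
export PYTHONPATH=$HOME/gnina/build/caffe/python:$PYTHONPATH
python -c "import caffe; print(caffe.__file__)"  # should point into gnina's build
```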

molgridlayer top data structure questions

Issue summary

Two implementation details in molgrid_data_layer.cpp confuse me; could you give me some hints? Thanks!

void MolGridDataLayer::forward

Where are the values of top[0] assigned?

https://github.com/gnina/gnina/blob/master/caffe/src/caffe/layers/molgrid_data_layer.cpp
code line number: 787

  Dtype *top_data = NULL;
  if(gpu)
    top_data = top[0]->mutable_gpu_data();
  else
    top_data = top[0]->mutable_cpu_data();

MolGridDataLayer::set_grid_minfo function

As I understand it, recgrid and liggrid are the true input to the next layer; however, they are local variables. How do they get passed on to the next layer?

code line number: 720

  if (gpu)
  {
    Grid<Dtype, 4, true> recgrid(data, numReceptorTypes, dim, dim, dim);
    gmaker.forward(minfo.grid_center, rec_atoms, recgrid);
    if(!ignore_ligand) {
      // Grid, first the data, after the dimension
      Grid<Dtype, 4, true> liggrid(data+numgridpoints*numReceptorTypes, \
        numchannels-numReceptorTypes, dim, dim, dim);
      gmaker.forward(minfo.grid_center, lig_atoms, liggrid);
    }
  }

Gnina build fails

Excuse me, I was trying to install gnina within a Singularity container (nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04) on an HPC and hit the following errors:

[ 43%] Built target classification
[ 43%] Built target convert_mnist_siamese_data
[ 43%] Built target pycaffe
[ 43%] Building CXX object gninasrc/CMakeFiles/gninalib.dir/lib/cnn_scorer.cpp.o
In file included from /jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:8:0:
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.h:32:23: error: 'recursive_mutex' is not a member of 'boost'
caffe::shared_ptr<boost::recursive_mutex> mtx; //todo, enable parallel scoring
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.h:32:23: error: 'recursive_mutex' is not a member of 'boost'
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.h:32:45: error: template argument 1 is invalid
caffe::shared_ptr<boost::recursive_mutex> mtx; //todo, enable parallel scoring
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.h: In constructor 'CNNScorer::CNNScorer()':
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.h:42:39: error: 'recursive_mutex' in namespace 'boost' does not name a type
: mgrid(NULL), mtx(new boost::recursive_mutex), current_center(NAN,NAN,NAN) {
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp: In constructor 'CNNScorer::CNNScorer(const cnn_options&)':
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:27:50: error: 'recursive_mutex' in namespace 'boost' does not name a type
: mgrid(NULL), cnnopts(opts), mtx(new boost::recursive_mutex), current_center(NAN,NAN,NAN) {
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp: In member function 'void CNNScorer::lrp(const model&, const string&, bool)':
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:159:21: error: 'recursive_mutex' is not a member of 'boost'
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:159:21: error: 'recursive_mutex' is not a member of 'boost'
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:159:43: error: template argument 1 is invalid
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:159:52: error: invalid type argument of unary '*' (have 'int')
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp: In member function 'void CNNScorer::gradient_setup(const model&, const string&, const string&, const string&)':
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:184:21: error: 'recursive_mutex' is not a member of 'boost'
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:184:21: error: 'recursive_mutex' is not a member of 'boost'
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:184:43: error: template argument 1 is invalid
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:184:52: error: invalid type argument of unary '*' (have 'int')
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp: In member function 'float CNNScorer::score(model&, bool, float&, float&)':
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:299:21: error: 'recursive_mutex' is not a member of 'boost'
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:299:21: error: 'recursive_mutex' is not a member of 'boost'
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:299:43: error: template argument 1 is invalid
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
/jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/gninasrc/lib/cnn_scorer.cpp:299:52: error: invalid type argument of unary '*' (have 'int')
boost::lock_guard<boost::recursive_mutex> guard(*mtx);
^
gninasrc/CMakeFiles/gninalib.dir/build.make:13672: recipe for target 'gninasrc/CMakeFiles/gninalib.dir/lib/cnn_scorer.cpp.o' failed
make[2]: *** [gninasrc/CMakeFiles/gninalib.dir/lib/cnn_scorer.cpp.o] Error 1
CMakeFiles/Makefile2:1180: recipe for target 'gninasrc/CMakeFiles/gninalib.dir/all' failed
make[1]: *** [gninasrc/CMakeFiles/gninalib.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

The Initial output was:

-- CUDA detected: 9.0
-- Found cuDNN: ver. 7.2.1 found (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
-- Added CUDA NVCC flags for: sm_60
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- thread
-- filesystem
-- iostreams
-- timer
-- chrono
-- date_time
-- atomic
-- regex
-- Found gflags (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found PROTOBUF Compiler: /usr/bin/protoc
-- CUDA detected: 9.0
-- Found cuDNN: ver. 7.2.1 found (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
-- Added CUDA NVCC flags for: sm_60
-- Found Atlas (include: /usr/include library: /usr/lib/libatlas.so lapack: /usr/lib/liblapack.so
-- NumPy ver. 1.11.0 found (include: /usr/lib/python2.7/dist-packages/numpy/core/include)
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- python
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)

-- ******************* Caffe Configuration Summary *******************
-- General:
-- Version : 1.0.0
-- Git : df65209
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- Release CXX flags : -O3 -DNDEBUG -Wno-deprecated-declarations -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
-- Debug CXX flags : -g -Wno-deprecated-declarations -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
-- Build type : Release

-- BUILD_SHARED_LIBS : ON
-- BUILD_python : ON
-- BUILD_matlab : OFF
-- BUILD_docs : ON
-- CPU_ONLY : OFF
-- USE_OPENCV : 0
-- USE_LEVELDB : 0
-- USE_LMDB : 0
-- USE_NCCL : OFF
-- ALLOW_LMDB_NOLOCK : OFF

-- Dependencies:
-- BLAS : Yes (Atlas)
-- Boost : Yes (ver. 1.58)
-- glog : Yes
-- gflags : Yes
-- OpenBabel : Yes
-- protobuf : Yes (ver. 2.6.1)
-- CUDA : Yes (ver. 9.0)

-- NVIDIA CUDA:
-- Target GPU(s) : Auto
-- GPU arch(s) : sm_60
-- cuDNN : Yes (ver. 7.2.1)

-- Python:

-- Interpreter : /usr/bin/python2.7 (ver. 2.7.12)
-- Libraries : /usr/lib/x86_64-linux-gnu/libpython2.7.so (ver 2.7.12)
-- NumPy : /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.11.0)
-- Documentaion:
-- Doxygen : No
-- config_file :

-- Install:

-- Install path : /usr/local
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- program_options
-- system
-- iostreams
-- timer
-- thread
-- serialization
-- filesystem
-- date_time
-- regex
-- unit_test_framework
-- chrono
-- atomic
-- Found RDKit libraries at /usr/lib
-- Found RDKit library files at /usr/lib/libFileParsers.so;/usr/lib/libSmilesParse.so;/usr/lib/libGraphMol.so;/usr/lib/libRDGeometryLib.so;/usr/lib/libRDGeneral.so;/usr/lib/libSubgraphs.so;/usr/lib/libDataStructs.so;/usr/lib/libDepictor.so
-- Configuring done
-- Generating done
-- Build files have been written to: /jmain01/home/JAD001/mxs10/ccw15-mxs10/gnina/gnina/build

Any ideas on why it might not be recognising the recursive_mutex?

This issue does not seem to happen with a version we downloaded a few months ago (in particular the build: Feb 14, 2018, "Merge branch 'master'" ). All the dependencies suggested in the read me were installed as per instructions for Ubuntu 16.04.

Thanks, Conor

lrp failing with gninavis

Issue summary

lrp in gninavis fails. I suspect it has to do with hydrogen atoms being added that are not present in the original file, then not being removed before writing out.

Steps to reproduce

Example can be found in fail_ex.tar.gz. Simply untar it, then cd into the directory and run the following command:

gninavis --receptor 1zog_nowat_4.pdb --ligand 1zog_docked_4.sdf --cnn_model gnina_test_ref.model --cnn_weights it2_completeset_DEF2018_s0.0_iter_1732000.caffemodel --atoms_only --vis_method lrp --gpu 0

Your system configuration

(this was run on our cluster)

Protoc version conflict

While I managed to compile libmolgrid and the compilation continued, another issue came up:

[  2%] Completed 'libmolgrid'
[  2%] Built target libmolgrid
[  2%] Building CXX object caffe/src/caffe/CMakeFiles/caffeproto.dir/__/__/include/caffe/proto/caffe.pb.cc.o
In file included from /data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.cc:4:0:
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
 #error This file was generated by an older version of protoc which is
  ^~~~~
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
 #error incompatible with your Protocol Buffer headers. Please
  ^~~~~
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
 #error regenerate this file with a newer version of protoc.
  ^~~~~
In file included from /data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.cc:4:0:
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:1094:8: error: 'bool caffe::BlobShape::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)' marked 'final', but is not virtual
   bool MergePartialFromCodedStream(
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:1097:8: error: 'void caffe::BlobShape::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const' marked 'final', but is not virtual
   void SerializeWithCachedSizes(
        ^~~~~~~~~~~~~~~~~~~~~~~~
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:1099:35: error: 'google::protobuf::uint8* caffe::BlobShape::InternalSerializeWithCachedSizesToArray(google::protobuf::uint8*) const' marked 'final', but is not virtual
   ::PROTOBUF_NAMESPACE_ID::uint8* InternalSerializeWithCachedSizesToArray(
                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:1240:8: error: 'bool caffe::BlobProto::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)' marked 'final', but is not virtual
   bool MergePartialFromCodedStream(
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/data/AI_projects/program/gnina/build/caffe/include/caffe/proto/caffe.pb.h:1243:8: error: 'void caffe::BlobProto::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const' marked 'final', but is not virtual
   void SerializeWithCachedSizes(
        ^~~~~~~~~~~~~~~~~~~~~~~~

I checked that .h file; it seems I need to use protoc 3.9.1 or 3.9.2 to compile. When I check my version:

(base) [ai_robot@gpu build]protoc --version
libprotoc 3.9.2

It shows my version is indeed 3.9.2, so I do not know where the problem is.
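A stale caffe.pb.h/.cc pair, generated earlier by a different protoc and left in the build tree, would produce exactly this message even though protoc 3.9.2 is now on PATH. Forcing a regeneration is worth trying; the paths below are taken from the log above and the exact layout may differ on your machine.

```shell
# Delete the generated protobuf sources so the build regenerates them with
# the protoc currently on PATH, then rebuild.
rm -f build/caffe/include/caffe/proto/caffe.pb.h \
      build/caffe/include/caffe/proto/caffe.pb.cc
make
```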

'parse_error'

root@059b24cea53d:/amr/set1/133# gninagrid -r rec.pdb -l docked.sdf -o trial
terminate called after throwing an instance of 'parse_error'
Aborted (core dumped)

This happens with around 80 receptors/ligands

System--->nvidia-docker
files:
https://nofile.io/f/d7EWQvEY3FQ/133.zip

training performance

Dear Dr. Koes,

I have implemented the training on the refmodel3 using the train.py script and obtained the same results as reported in your publication. With one GPU it takes about 11s for 40 iterations.

Next, I want to test how the framework performs on the DUD-E target PUR2. I used a docked (with smina) structure for each ligand and generated a .gninatypes file for each. Then I did a similar thing: the ligands were put in a single folder and split into training and testing sets randomly (0.8:0.2). I see the following output:

I1104 11:14:08.481962 13687 solver.cpp:352] Iteration 0, Testing net (#1)
I1104 11:14:20.100967 13687 sgd_solver.cpp:99] Gradient clipping: scaling down gradients (L2 norm 10.878 > 10) by scale factor 0.919283

Iteration 40
Train time: 11.914718
Eval test time: 1.151939
Test AUC: 0.961171
Test loss: 0.934658
Eval train time: 4.759745
Train AUC: 0.962875
Train loss: 0.743320
Loop time: 17.827261 (1:13:58.988101 left)
Memory usage: 0.695gb (746708992)
Best test AUC/RMSD: 0.961171 inf   Best train loss: 0.743320

Iteration 80
Train time: 11.716022
Eval test time: 1.129195
Test AUC: 0.998048
Test loss: 0.108145
Eval train time: 4.739851
Train AUC: 0.991603
Train loss: 0.091265
Loop time: 17.586117 (1:13:11.258961 left)
Memory usage: 0.697gb (748298240)
Best test AUC/RMSD: 0.998048 inf   Best train loss: 0.091265

Iteration 120
Train time: 11.722106
Eval test time: 1.123328
Test AUC: 1.000000
Test loss: 0.046416
Eval train time: 4.852293
Train AUC: 0.997313
Train loss: 0.038289
Loop time: 17.698747 (1:12:52.898324 left)
Memory usage: 0.697gb (748834816)
Best test AUC/RMSD: 1.000000 inf   Best train loss: 0.038289

Do you think everything is in good order? I am currently testing other DUD-E targets.
I am also wondering which model is used when generating .gninatypes files for the ligands. I can see that the docked ligand file contains 10 models, but when gninatyper finishes, only one model seems to have been used. Is the top-ranked one used (e.g., model 1), or does gninatyper select exactly the right model? Attached is an example for the DUD-E target PUR2.

PUR2.zip

A make error occurred

Issue summary


../lib/libcaffe.so.1.0.0: undefined reference to 'boost::filesystem::path::operator/=(boost::filesystem::path const&)'
../lib/libcaffe.so.1.0.0: undefined reference to 'boost::python::converter::detail::arg_to_python_base::arg_to_python_base(void const volatile*, boost::python::converter::registration const&)'
../lib/libcaffe.so.1.0.0: undefined reference to 'boost::iostreams::zlib::sync_flush'
../lib/libcaffe.so.1.0.0: undefined reference to 'boost::iostreams::detail::zlib_base::xinflate(int)'
../lib/libcaffe.so.1.0.0: undefined reference to 'boost::iostreams::zlib::deflated'
collect2: error: ld returned 1 exit status
caffe/tools/CMakeFiles/extract_features.dir/build.make:121: recipe for target 'caffe/tools/extract_features' failed
make[2]: *** [caffe/tools/extract_features] Error 1
CMakeFiles/Makefile2:497: recipe for target 'caffe/tools/CMakeFiles/extract_features.dir/all' failed
make[1]: *** [caffe/tools/CMakeFiles/extract_features.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

Your system configuration

Operating system:ubuntu 16.04.1
Compiler: gcc 5.4.0
CUDA version (if applicable):10.1
CUDNN version (if applicable): 7.3.1
BLAS:
Python or MATLAB version (for pycaffe and matcaffe respectively):

make error at 100% gninacheck

This time I tried to compile gnina on Ubuntu 16.04. I followed the instructions and everything seemed to work well; libmolgrid and caffe compiled. But when the make process reached 100%, an error occurred:

Scanning dependencies of target gninacheck
[ 98%] Building CUDA object test/gnina/CMakeFiles/gninacheck.dir/test_cache.cu.o
[100%] Building CXX object test/gnina/CMakeFiles/gninacheck.dir/test_cnn.cpp.o
[100%] Building CXX object test/gnina/CMakeFiles/gninacheck.dir/test_gpucode.cpp.o
[100%] Building CXX object test/gnina/CMakeFiles/gninacheck.dir/test_runner.cpp.o
[100%] Building CUDA object test/gnina/CMakeFiles/gninacheck.dir/test_tree.cu.o
[100%] Linking CUDA device code CMakeFiles/gninacheck.dir/cmake_device_link.o
[100%] Linking CXX executable gninacheck
CMakeFiles/gninacheck.dir/test_cache.cu.o: In function `__gnu_cxx::__normal_iterator<char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > boost::re_detail::re_is_set_member<__gnu_cxx::__normal_iterator<char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, char, boost::regex_traits<char, boost::cpp_regex_traits<char> >, unsigned int>(__gnu_cxx::__normal_iterator<char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, boost::re_detail::re_set_long<unsigned int> const*, boost::re_detail::regex_data<char, boost::regex_traits<char, boost::cpp_regex_traits<char> > > const&, bool)':
tmpxft_00008ce0_00000000-5_test_cache.compute_70.cudafe1.cpp:(.text._ZN5boost9re_detail16re_is_set_memberIN9__gnu_cxx17__normal_iteratorIPKcNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEEcNS_12regex_traitsIcNS_16cpp_regex_traitsIcEEEEjEET_SH_SH_PKNS0_11re_set_longIT2_EERKNS0_10regex_dataIT0_T1_EEb[_ZN5boost9re_detail16re_is_set_memberIN9__gnu_cxx17__normal_iteratorIPKcNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEEcNS_12regex_traitsIcNS_16cpp_regex_traitsIcEEEEjEET_SH_SH_PKNS0_11re_set_longIT2_EERKNS0_10regex_dataIT0_T1_EEb]+0x161): undefined reference to `boost::re_detail::cpp_regex_traits_implementation<char>::transform_primary[abi:cxx11](char const*, char const*) const'
tmpxft_00008ce0_00000000-5_test_cache.compute_70.cudafe1.cpp:(.text._ZN5boost9re_detail16re_is_set_memberIN9__gnu_cxx17__normal_iteratorIPKcNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEEcNS_12regex_traitsIcNS_16cpp_regex_traitsIcEEEEjEET_SH_SH_PKNS0_11re_set_longIT2_EERKNS0_10regex_dataIT0_T1_EEb[_ZN5boost9re_detail16re_is_set_memberIN9__gnu_cxx17__normal_iteratorIPKcNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEEcNS_12regex_traitsIcNS_16cpp_regex_traitsIcEEEEjEET_SH_SH_PKNS0_11re_set_longIT2_EERKNS0_10regex_dataIT0_T1_EEb]+0x4f1): undefined reference to `boost::re_detail::cpp_regex_traits_implementation<char>::transform[abi:cxx11](char const*, char const*) const'
collect2: error: ld returned 1 exit status
test/gnina/CMakeFiles/gninacheck.dir/build.make:246: recipe for target 'test/gnina/gninacheck' failed
make[2]: *** [test/gnina/gninacheck] Error 1
CMakeFiles/Makefile2:2072: recipe for target 'test/gnina/CMakeFiles/gninacheck.dir/all' failed
make[1]: *** [test/gnina/CMakeFiles/gninacheck.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2

System: Ubuntu16.04
CUDA: 9.2
Python: 3.5.2
Boost: 1.58
GCC: 6.1.0

I do not know why this problem comes up. Is my Boost too old?

Dead Slack link

Hello! It looks like the current Slack link no longer works. Just wondering if it can be updated. :)

Question about data input

Issue summary

When I ran the command: gnina --minimize -r s.pdb -l ligand.sdf --cnn_scoring
I was very confused about the input "s.pdb". I came up with the following options:

  1. original protein (co-crystal structure), containing both pockets and ligands.
  2. clean protein file, without the ligand structure.
  3. site file, which refers to a certain pocket of a given protein (since one protein may have many pockets).

Also, are there any pre-processing steps required after downloading the original PDB file from http://www.rcsb.org/ to obtain the separate clean protein structure and the ligand?

Many thanks!

Error during compiling gnina

Hi,

Centos7, CUDA-9.0, etc.
I fixed the following problem during compilation by upgrading gcc to 6.0.1:

[ 14%] Building CUDA object src/CMakeFiles/libmolgrid_shared.dir/grid_maker.cu.o

When compilation continued, another error came up at 67%:
[ 63%] Built target molgrid
Scanning dependencies of target test_gridmaker_cu
[ 65%] Building CUDA object test/CMakeFiles/test_gridmaker_cu.dir/test_gridmaker.cu.o
[ 67%] Linking CXX executable ../bin/test_gridmaker_cu
/data/program/conda3/bin/../lib/gcc/x86_64-conda_cos6-linux-gnu/7.3.0/../../../../x86_64-conda_cos6-linux-gnu/bin/ld: warning: libz.so.1, needed by /lib64/libopenbabel.so, not found (try using -rpath or -rpath-link)
/data/program/conda3/bin/../lib/gcc/x86_64-conda_cos6-linux-gnu/7.3.0/../../../../x86_64-conda_cos6-linux-gnu/bin/ld: CMakeFiles/test_gridmaker_cu.dir/test_gridmaker.cu.o: relocation R_X86_64_32 against '.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
/data/program/conda3/bin/../lib/gcc/x86_64-conda_cos6-linux-gnu/7.3.0/../../../../x86_64-conda_cos6-linux-gnu/bin/ld: ../lib64/libmolgrid.a(grid_maker.cu.o): relocation R_X86_64_32S against symbol '_ZTVN6thrust6system12system_errorE' can not be used when making a PIE object; recompile with -fPIC
/data/program/conda3/bin/../lib/gcc/x86_64-conda_cos6-linux-gnu/7.3.0/../../../../x86_64-conda_cos6-linux-gnu/bin/ld: ../lib64/libmolgrid.a(coordinateset.cu.o): relocation R_X86_64_32 against symbol '_ZN10libmolgrid20sum_vector_types_gpuENS_4GridIfLm2ELb1EEENS0_IfLm1ELb1EEE' can not be used when making a PIE object; recompile with -fPIC
/data/program/conda3/bin/../lib/gcc/x86_64-conda_cos6-linux-gnu/7.3.0/../../../../x86_64-conda_cos6-linux-gnu/bin/ld: ../lib64/libmolgrid.a(transform.cu.o): relocation R_X86_64_32 against symbol '_ZN10libmolgrid26transform_translate_kernelIfEEvj6float3NS_4GridIT_Lm2ELb1EEES4_' can not be used when making a PIE object; recompile with -fPIC
/data/program/conda3/bin/../lib/gcc/x86_64-conda_cos6-linux-gnu/7.3.0/../../../../x86_64-conda_cos6-linux-gnu/bin/ld: final link failed: nonrepresentable section on output
collect2: error: ld returned 1 exit status
make[5]: *** [bin/test_gridmaker_cu] Error 1
make[4]: *** [test/CMakeFiles/test_gridmaker_cu.dir/all] Error 2
make[3]: *** [all] Error 2
make[2]: *** [libmolgrid-prefix/src/libmolgrid-stamp/libmolgrid-build] Error 2
make[1]: *** [CMakeFiles/libmolgrid.dir/all] Error 2
make: *** [all] Error 2

So, what causes this problem? Thank you.
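The linker is explicit about the remedy it wants: the static libmolgrid objects were built without -fPIC, so they cannot be linked into a position-independent executable. In a CMake build the usual knob for this is CMAKE_POSITION_INDEPENDENT_CODE; this is a general CMake remedy, not gnina-specific advice, so verify it against your build setup.

```shell
# Reconfigure with position-independent code so static libmolgrid objects
# can be linked into PIE executables, then rebuild from scratch.
cmake -DCMAKE_POSITION_INDEPENDENT_CODE=ON .
make
```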

gninagrid bad alloc

root@8681699bdf8e:/src/play/set1/14# gninagrid -r rec.pdb -l ligand.mol2 -o a --separate
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)

I installed gnina inside a docker image:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
gcc version 5.4.0 20160609
CUDA version (if applicable):V9.0.176

How can BVLC caffe handle 3D models?

Hello,
I have a question about the BVLC version of caffe: it can only process 2D images. Was caffe's conv_layer.cpp rewritten, or was some other method used, to process ligand and protein spatial poses? I can only code in Python, so I look forward to your reply.

Linking with "openbabel-NOTFOUND"


Why does gnina cnn_scoring take so long?

Hi,
I ran a gnina cnn_scoring test. A single ligand molecule took 15 hours, with high GPU utilization, when using cnn_scoring. Why does gnina need such a large amount of computation for just one ligand? How can I get docked poses quickly? I am using one GTX 1080 GPU, and the command is as follows:
gnina --cpu 4 --gpu --receptor protein.mol2 --ligand mols-1.sdf --cnn_model ../soft/models/refmodel3/refmodel3.model --cnn_weights ../soft/models/refmodel3/weights/pdbbind_cnnmin1_100000.caffemodel --cnn_center_x 31.944 --cnn_center_y -13.308 --cnn_center_z 4.06928 --cnn_scoring --out gnina_cnn.sdf --num_modes 4 --energy_range 99 --exhaustiveness 4 --autobox_ligand bindingsite.xyz --autobox_add 2 --seed 42

thanks.
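
In gnina versions of that era, `--cnn_scoring` applies the CNN throughout pose sampling, which is far more expensive than the empirical Vina scoring function. One common workaround (a sketch, not the developers' official recommendation; file names are the ones from the command above) is to dock with the default Vina scoring first and then rescore only the final poses with the CNN:

```shell
# 1) Dock with the default (fast) Vina scoring function.
gnina --cpu 4 --receptor protein.mol2 --ligand mols-1.sdf \
      --autobox_ligand bindingsite.xyz --autobox_add 2 \
      --exhaustiveness 4 --num_modes 4 --seed 42 --out vina_poses.sdf

# 2) Rescore just the output poses with the CNN; --score_only skips
#    sampling entirely, so only a handful of CNN evaluations run.
gnina --gpu --receptor protein.mol2 --ligand vina_poses.sdf \
      --score_only --cnn_scoring \
      --cnn_model ../soft/models/refmodel3/refmodel3.model \
      --cnn_weights ../soft/models/refmodel3/weights/pdbbind_cnnmin1_100000.caffemodel
```

This trades some pose quality (the CNN no longer guides the search) for a runtime of minutes instead of hours.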

Build rdkit problem

I am running on CentOS 7 with Anaconda 2. Following the instructions to compile rdkit, after I enter make, the following error occurs:

 [ 28%] Building CXX object Code/ForceField/UFF/CMakeFiles/testUFFForceField.dir/testUFFForceField.cpp.o
Linking CXX executable testUFFForceField
../../../lib/libRDKitGraphMol.so.1.2017.03.1: undefined reference to `boost::thread_detail::commit_once_region(boost::once_flag&)'
../../../lib/libRDKitGraphMol.so.1.2017.03.1: undefined reference to `boost::thread_detail::enter_once_region(boost::once_flag&)'
../../../lib/libRDKitGraphMol.so.1.2017.03.1: undefined reference to `boost::thread_detail::rollback_once_region(boost::once_flag&)'
collect2: error: ld returned 1 exit status

It seems that boost_thread is not correctly installed. Actually, I used the script provided by the authors to install Boost 1.53, so I am wondering how to fix this issue. I have also tried compiling with the CentOS system Python, and the same problem occurred.

Moreover, when I run

sudo yum install boost-devel.x86_64 ...

it automatically installs Boost 1.53, but it seems that a newer version of Boost is required when compiling gnina itself. How can I fix this as well?

Thank you.
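
Undefined `boost::thread_detail::*_once_region` references at link time usually mean the headers and the linked libboost_thread come from two different Boost versions. A sketch of one way out, assuming a CMake build: install a newer Boost into a local prefix and point both rdkit and gnina at it instead of the system boost-devel 1.53 (the prefix and library list are examples):

```shell
# Inside an unpacked recent Boost source tree: build into a local prefix.
./bootstrap.sh --prefix="$HOME/boost_local" \
    --with-libraries=thread,system,filesystem,program_options,regex,serialization
./b2 install

# When configuring rdkit or gnina, force CMake to use that Boost only.
cmake .. -DBOOST_ROOT="$HOME/boost_local" -DBoost_NO_SYSTEM_PATHS=ON
```

`Boost_NO_SYSTEM_PATHS=ON` keeps CMake's FindBoost from silently falling back to the headers of the system's Boost 1.53, which is what produces the mixed-version link errors.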

‘libmolgrid::CoordinateSet’ has no member named ‘coord’ error

Information

OpenBabel from: https://github.com/openbabel/openbabel.git

I built it from GitHub rather than from libopenbabel-dev, because the latter causes some errors when making gnina.

error info

gnina_repos/gnina/caffe/src/caffe/layers/molgrid_data_layer.cpp: In member function ‘double caffe::MolGridDataLayer::mol_info::ligandRadius() const’:
/home/lyt/workspace/molecular_gen/gnina_repos/gnina/caffe/src/caffe/layers/molgrid_data_layer.cpp:43:28: error: ‘const struct libmolgrid::CoordinateSet’ has no member named ‘coord’; did you mean ‘coords’?
vec pos(orig_lig_atoms.coord[i][0],orig_lig_atoms.coord[i][1],orig_lig_atoms.coord[i][2]);
^~~~~
coords

How can I fix this? Thank you.
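
The compiler itself suggests the correct member name (`coords`), so the immediate workaround is to rename the stale `.coord[` accesses in the file named in the error message. A minimal sketch using `sed`:

```shell
# Rename the nonexistent .coord member accesses to .coords in place,
# keeping a .bak backup (path taken from the error message above).
sed -i.bak 's/\.coord\[/.coords[/g' caffe/src/caffe/layers/molgrid_data_layer.cpp
```

This kind of mismatch typically means the gnina checkout predates a libmolgrid API rename; updating gnina to a matching revision is the cleaner long-term fix.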

ubuntu16/anaconda

Issue summary

I can't install gnina on top of Anaconda. I can build it successfully with the normal system Python, but when switching to Anaconda I encounter many errors regarding protobuf and other Google libraries. I am not sure how to install it in an Anaconda environment, or which correct and compatible libraries to install via conda.

Your system configuration

Operating system: ubuntu 16
Compiler:
CUDA version (if applicable):9.2
CUDNN version (if applicable):
BLAS:
Python or MATLAB version (for pycaffe and matcaffe respectively):python anaconda 2.7
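
A frequent culprit is that Anaconda ships its own protobuf and compiler runtime, which shadow the system versions at configure and link time. One approach, sketched here as an assumption rather than a verified recipe (the environment name is hypothetical), is to take every build dependency from a single conda channel so the versions stay mutually consistent:

```shell
# Create a dedicated environment where protobuf, Boost, and OpenBabel
# all come from one channel, avoiding system/conda version mixing.
conda create -n gnina-build -c conda-forge python=2.7 cmake boost protobuf openbabel
conda activate gnina-build
```

Alternatively, building entirely against the system libraries with Anaconda removed from `PATH` during compilation sidesteps the conflict.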

Problem compiling gnina

Issue summary

During gnina installation, I get the following error:

[ 93%] Linking CXX executable ../bin/gnina
CMakeFiles/gnina.dir/gnina_intermediate_link.o: In function __cudaRegisterLinkedBinary_66_tmpxft_00002dac_00000000_12_cuda_device_runtime_compute_70_cpp1_ii_8b1a5d37:
link.stub:(.text+0x710): undefined reference to __fatbinwrap_66_tmpxft_00002dac_00000000_12_cuda_device_runtime_compute_70_cpp1_ii_8b1a5d37
collect2: error: ld returned 1 exit status
gninasrc/CMakeFiles/gnina.dir/build.make:1311: recipe for target 'bin/gnina' failed
make[2]: *** [bin/gnina] Error 1
CMakeFiles/Makefile2:1142: recipe for target 'gninasrc/CMakeFiles/gnina.dir/all' failed
make[1]: *** [gninasrc/CMakeFiles/gnina.dir/all] Error 2

Your system configuration

Operating system: Ubuntu 16.04
Compiler: gcc 5.4.0
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 7.1.3
Python version: 2.7.12
