
Comments (46)

sef43 commented on June 14, 2024

I've now added PBC support. It seems to work for the minimal_toy_emt.yaml system, where my definition of "working" is that it runs a stable NPT simulation. It still needs to be tested on some real systems, and it should be compared to the LAMMPS version.

I have set the default behaviour for NequIP types to be a one-to-one mapping from the OpenMM atomic element/symbol, with the option of passing a list that explicitly defines the zero-indexed NequIP type of each atom. Conversion factors from the model units to OpenMM units need to be given when constructing the potential.

It should work on both GPU and CPU.

I have put up an example of running a deployed toy EMT model with PBC and NPT:
https://github.com/sef43/openmm-ml/blob/nequip/examples/run_nequip_pbc.ipynb

sef43 commented on June 14, 2024

This code

shifts = torch.round(triu_deltas/cell_size)
wrapped_triu_deltas = triu_deltas - shifts*cell_size

is only accurate for rectangular boxes. If you want to support triclinic boxes it's a little more complicated. Still not very complicated though.

Yes, that is true; it adds some compute time. I have updated it to support OpenMM triclinic boxes. For completeness, here are the benchmarks for the three cases:

  1. Wrapping a triclinic box and giving to torch_nl O(n) function
  2. Using the O(n^2) torchscript code with triclinic boxes
  3. Using NNPOps neighbour lists, and creating the full pairwise neighbour list and shift vectors from it

[Figure: benchmark plot comparing the three neighbour-list approaches]

Updated code here: https://colab.research.google.com/drive/1Vz4QN8Y_cEUkCL2YxY86QhSruYPxfwtr?usp=sharing

@peastman this nicely demonstrates the speed of the NNPOps implementation!

Once openmm/NNPOps#92 is fixed and released in the conda-forge build, we can default to using NNPOps if it is available.

asedova commented on June 14, 2024

That would be ok-- for now, we can let the model decide how the oxygen will act in different contexts, which is what I planned to start with anyway. I also think OpenMM would be ok if the topology object does not come from any of the normal topology files produced by packages like AMBER and GROMACS. OpenMM in general allows a good amount of user-defined things; it's just not how I normally use it, so I need to do some reading up. Here's an example: https://gpantel.github.io/computational-method/LJsimulation/
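
For instance, a minimal Topology can be assembled entirely in code with no topology file at all. A sketch (residue and atom names are arbitrary here, and no bonds are needed since the ML potential only sees elements and positions):

from openmm.app import Topology, Element

# build a Topology by hand: one chain, one single-atom residue per particle
topology = Topology()
chain = topology.addChain()
oxygen = Element.getBySymbol("O")
for i in range(216):  # e.g. a small box of oxygen-like particles
    residue = topology.addResidue("UNK", chain)
    topology.addAtom("O" + str(i), oxygen, residue)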

peastman commented on June 14, 2024

what is missing would be something like how do you specify a model that treats, say, the oxygen atoms in one molecule as different from the oxygen atoms in a second molecule.

In any model that allows forming and breaking bonds, I think the only option is to equate atom types with elements. Everything else can change with time. If the molecules are required to stay fixed over a simulation, that does allow incorporating more information into the atom types. I've experimented with including formal charges and/or hybridization states into the definitions of types, and it does seem to help accuracy.

Most force fields in OpenMM identify atom types by matching templates to residues. That isn't required though. For example, OpenFF uses a completely different mechanism of assigning parameters based on matching SMIRKS patterns (and not involving atom types at all). For ML models, you can build your model however you want. The PyTorch model and the Python code to build it are both completely in your control.

Linux-cpp-lisp commented on June 14, 2024

πŸ‘ I vote then to start with just taking OpenMM element -> chemical symbol as NequIP type name, possibly with an option of letting the user provide their own function that maps OpenMM topology -> NequIP type names, since every trained NequIP model will be different in this way.

Linux-cpp-lisp commented on June 14, 2024

I've just added a few more notes to the post above with information on how we can go about this, FYI.

Regarding stress/virial @asedova, my understanding is that OpenMM doesn't support or need it: openmm/openmm-torch#91 (comment)

peastman commented on June 14, 2024

That looks really nice!

If we want to get it into the main OpenMM-ML distribution, there are a couple of things we need to consider. First, it would be very helpful if all dependencies can be installed with conda. Otherwise we can't build and test the conda packages against them.

The other issue is more of a conceptual question. OpenMM-ML is intended to provide general purpose, ready to use models, not just model architectures. For example, you can write

potential = MLPotential('ani2x')

and immediately start simulating your own system with that potential. What you've implemented just provides support for an architecture. It requires the user to provide their own model. That's also useful, for example if you want to create a special purpose model for your particular system, but it requires a lot more work from the user. The goal of OpenMM-ML is to make ML simulations just as easy to run as ones with conventional force fields.

Are there any plans to create general purpose, ready to use models based on NequIP?

sef43 commented on June 14, 2024

Yes, we could just wrap; it is a cheap operation. But actually, when running on GPU, the naive calculation is significantly faster:
[Figure: benchmark plot of neighbour-list timings]

I guess the simpler operations map much better to vectorised GPU code via the JIT compiler/optimizer.

To reproduce: https://colab.research.google.com/drive/1Vz4QN8Y_cEUkCL2YxY86QhSruYPxfwtr?usp=sharing

Both start giving out-of-memory errors for more than 20,000 atoms.

Linux-cpp-lisp commented on June 14, 2024

Wow, very nice! What a plot...

asedova commented on June 14, 2024

Are there any plans to create general purpose, ready to use models based on NequIP?

@peastman the types of systems that people using NEQUIP (or DeePMD or other such tools) are studying vary dramatically. For example, our team is studying molten salts, organic solvents, ionic liquids, etc. I doubt there will be a fully general ML potential covering arbitrary systems from the periodic table any time soon. Therefore, the fact that people have to train their own models is not a deal breaker by any means. We already do all that. Having a fast, Pythonic, efficient way to run the MD inference is going to be really helpful, especially since we often need many rounds of active-learning iterative training (with potentially some enhanced sampling) before we get stuff right, like with reactive molecular systems. Here are some examples: https://www.sciencedirect.com/science/article/abs/pii/S092058612100136X
https://pubs.acs.org/doi/10.1021/acs.jctc.2c00816?ref=PDF
https://chemrxiv.org/engage/chemrxiv/article-details/638e25b2836ceb2e8b74cdf2

asedova commented on June 14, 2024

Left a question on OpenMM's GitHub about how this would work, and here is the reply: openmm/openmm-ml#48 (comment). I am about to see how hard this would be for a high-level user to do themselves. Any help would be appreciated!

Linux-cpp-lisp commented on June 14, 2024

Hi @asedova ,

Thanks very much for your interest in our methods and working on this integration!

We previously had some discussion about this (#76) and the great news is that the main thing that was missing---a PyTorch neighborlist---has now been implemented very nicely by Felix Musil, with good performance and what appear to be pretty rigorous correctness tests: https://github.com/felixmusil/torch_nl. So it's now the perfect time for a "high-level user" to get this done finally! 🎉

Reading @peastman 's comment in the linked thread, I think the best way forward would be a 1-to-1 translation of the OpenMM-ML ANI interface https://github.com/openmm/openmm-ml/blob/main/openmmml/models/anipotential.py.

It would be something like (a rough code sketch follows this list):

  • Load a deployed NequIP model using nequip.scripts.deploy.load_deployed_model (nequip.ase.NequIPCalculator is a good example of this); the result is a TorchScript model.
  • Similar to ANIForce, define a wrapper around the deployed NequIP model that:
    • Uses torch_nl to construct edge_index for NequIP from the positions and, optionally, cell + PBC. It is important to set edge_cell_shift correctly as well; see here for how we do this with the ase neighborlist. For complete PBC correctness, make sure to replicate the self-interaction filtering logic as well.
    • Builds a NequIP input dictionary (Dict[str, torch.Tensor], keys can be found here). Necessary keys are pos, edge_index, atom_types and, if PBC are used, cell and edge_cell_shift.
    • Somehow map OpenMM atom types / species into the NequIP nequip.data.AtomicDataDict.ATOM_TYPES_KEY 0-based index. See again nequip.ase.NequIPCalculator for an example of how to do this. I'm not super familiar with how the OpenMM type machinery works, but basically there should be a user-defined mapping of OpenMM types into NequIP type names, which the code then internally uses to create a mapping of OpenMM types to NequIP type numbers.
    • Call the deployed NequIP model with this dict as input, and get the total energy and forces from the result: model(in_dict)[nequip.data.AtomicDataDict.TOTAL_ENERGY_KEY / FORCE_KEY]
    • Do any unit conversions according to the user's provided conversions
    • Return this total energy and forces, see https://github.com/openmm/openmm-torch#computing-forces-in-the-model
  • torch.jit.script this whole thing, then torch.jit.save it to a tempfile and pass it along to OpenMM-Torch.
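
Putting these steps together, here is a very rough, untested sketch of what the wrapper module could look like. The class and argument names are hypothetical, and for clarity it uses a naive O(N^2) minimum-image neighbor list for a rectangular box (valid for cutoff < half the box length) instead of torch_nl; the edge_cell_shift sign convention should be double-checked against nequip:

import torch
from nequip.data import AtomicDataDict

class NequIPForceSketch(torch.nn.Module):
    def __init__(self, model, atom_types, r_max, energy_conv, length_conv):
        super().__init__()
        self.model = model  # deployed TorchScript model from load_deployed_model
        self.register_buffer("atom_types", atom_types)  # (N,) 0-based NequIP types
        self.r_max = r_max              # cutoff, in the model's length units
        self.energy_conv = energy_conv  # model energy units -> kJ/mol
        self.length_conv = length_conv  # nm (OpenMM) -> model length units

    def forward(self, positions, cell):
        pos = positions * self.length_conv
        cell = cell * self.length_conv
        box = torch.diagonal(cell)  # rectangular box assumed in this sketch
        # all-pairs displacements deltas[i, j] = pos[j] - pos[i], minimum image
        deltas = pos.unsqueeze(0) - pos.unsqueeze(1)
        shifts = torch.round(deltas / box)
        deltas = deltas - shifts * box
        mask = deltas.square().sum(-1) < self.r_max ** 2
        mask.fill_diagonal_(False)  # no self-edges
        data = {
            AtomicDataDict.POSITIONS_KEY: pos,
            AtomicDataDict.EDGE_INDEX_KEY: mask.nonzero().T,  # (2, n_edges)
            # integer lattice shifts so nequip can rebuild the wrapped edge vectors
            AtomicDataDict.EDGE_CELL_SHIFT_KEY: -shifts[mask],
            AtomicDataDict.CELL_KEY: cell.unsqueeze(0),
            AtomicDataDict.ATOM_TYPES_KEY: self.atom_types,
        }
        out = self.model(data)
        return out[AtomicDataDict.TOTAL_ENERGY_KEY] * self.energy_conv

In the real implementation, torch_nl would replace the O(N^2) block, and forces can either be read from AtomicDataDict.FORCE_KEY or derived by OpenMM-Torch from the returned energy.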

It would be very nice to see this PRed into OpenMM-ML; please let me know if you have more questions, and feel free to ask/open PRs (off develop) here if there are any additions to nequip needed to make it work.

Linux-cpp-lisp commented on June 14, 2024

OK @asedova I've edited the above to reflect more recent versions of OpenMM-Torch πŸ™‚

asedova commented on June 14, 2024

Thanks so much! I am going to try to see what I can do. I just started looking at NEQUIP after some time with DeePMDkit, so it may take me some time to work through all this. So happy there is momentum in this direction!

Linux-cpp-lisp commented on June 14, 2024

Great! Let me know if you have any questions. It looks like a lot but I think it should hopefully be straightforward, mostly "copy-paste" style coding.

peastman commented on June 14, 2024

there should be a user-defined mapping of OpenMM types into NequIP type names, which the code then internally uses to create a mapping of OpenMM types to NequIP type numbers.

OpenMM doesn't have any fixed set of types. It treats atom types as a property of a particular force field. addForces() receives a Topology object which contains chemical information: elements, bonds, residue names, etc. No set of atom types has been applied to them.

How does NequIP define atom types?

Linux-cpp-lisp commented on June 14, 2024

It treats atom types as a property of a particular force field.

That sounds quite compatible.

In nequip, atom types are just 0-based integer indexes, of which there can be arbitrarily many. Each is also associated with a string name for usability. What they mean is specific to a given trained model and how the user did the training, but typically the names just correspond to element symbols. So clearly there should be some kind of user-defined mapping from OpenMM "groups" of atoms (however that is most naturally defined) to nequip type names; the user is then responsible for making sure this mapping makes sense for the model they are using.

Similarly, nequip places no restriction on the system of units, as long as they are self-consistent; a trained model has the units of the data used to train it. So it's also necessary to allow the user to supply the unit conversion from the units of the particular model they are using into kJ/mol for OpenMM.
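
Concretely, the user-facing inputs could be as small as the following (hypothetical variable names; the numbers are the standard conversion factors for a model trained in eV and Å):

# hypothetical user-supplied inputs for a model trained in eV and Angstrom
symbol_to_type_name = {"Na": "Na", "Cl": "Cl"}  # must match the model's training-time type names
energy_to_kj_per_mol = 96.4853  # eV -> kJ/mol
nm_to_model_length = 10.0       # nm (OpenMM) -> Angstrom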

asedova commented on June 14, 2024

@peastman what troubles me is that with NEQUIP (and other similar codes like DeePMDkit) there is no topology, no bonds, no residues. And for good reason-- since it's trained on DFT, these all can actually change during a simulation, just like in AIMD. I'm not super familiar with ANI, but with these codes, the input file would just be atom type (element) + features in; the model then gives the energy + forces on each atom. The most compatible traditional input file to use as an example would be an xyz file, like CP2K uses.

Linux-cpp-lisp commented on June 14, 2024

receives a Topology object which contains chemical information: elements, bonds, residue names, etc.

I think as a first solution, you can just ignore all of that, take the elements, convert them to chemical symbols, and use those as NequIP type names? For most models this will be sufficient... what is missing would be something like how do you specify a model that treats, say, the oxygen atoms in one molecule as different from the oxygen atoms in a second molecule. That is clearly some kind of a function of the topology; I'm just not sure in OpenMM language how you let the user specify that.

Linux-cpp-lisp commented on June 14, 2024

Hi @asedova ,

Just curious if you'd had any luck with this. Thanks!

sef43 commented on June 14, 2024

Hi @asedova @Linux-cpp-lisp how is progress on this going? I am an RSE working with OpenMM, and I have time available to help write a wrapper in openmm-ml for NequIP. Let me know how far you have got with this and if you have any questions!

asedova commented on June 14, 2024

Hey all-

Progress is slow, since we have never used NEQUIP that much. We are much more familiar with DeePMD. I should be able to get back to this soon; we are finishing up some DeePMD work. @sef43 I was wondering if there would be a way to pass the virial/stress tensor to OpenMM, based on this: #291 (comment)

Linux-cpp-lisp commented on June 14, 2024

@sef43 hi, thank you for reaching out! πŸ‘‹ We'd love to see this happen, so your help and hopefully leadership on it as an RSE who knows OpenMM well would be fantastic.

I think all my relevant thoughts are collected in the post above (#288 (comment)), but please feel free to get in touch either here or by email (see my GitHub profile) if there is more we should discuss.

@asedova no worries; do you have any code we ought to work off, or would it be better for @sef43 to start a fresh repo / fork to work on?

(I'm assuming @sef43 that, as laid out above, all of this wrapper should live in the fork of openmm-ml and eventually be merged there, but if there is code that we think is more appropriate in our repos we can make that happen easily too.)

asedova commented on June 14, 2024

I would suggest just starting with a basic box-of-water example first, as done in the Nature Communications paper. Our systems are really difficult and we are still working out some of the science. Has getting a condensed-phase NNP to work with OpenMM happened before? I am confused because it seems there is a lot of gas-phase work within ANI/TorchANI.

Linux-cpp-lisp commented on June 14, 2024

Sure--- I think, however, that the science questions that you bring up are fairly decoupled from the software-level interfacing questions, which are fairly mechanical and binary (either they correctly transfer the predictions of the neural network potential or not, regardless of how useful/accurate that potential model is).

At the software level, I think the biggest difficulty moving from gas to condensed phase is PBC, but torch-nl should handle that as described above.

sef43 commented on June 14, 2024

I will start doing this then!

(I'm assuming @sef43 that, as laid out above, all of this wrapper should live in the fork of openmm-ml and eventually be merged there, but if there is code that we think is more appropriate in our repos we can make that happen easily too.)

Yes, I will make a fork with the aim that it ends up merged into openmm-ml main. It shouldn't require any changes to the nequip repo. Your steps in the comment above are very clear, thank you; they should be most of what I need. I will let you know when I have questions.

As @Linux-cpp-lisp has said, there should be no major reason why condensed-phase simulations wouldn't work, provided an appropriately trained neural network is used.
And yes, OpenMM uses a Monte Carlo barostat for NPT simulations; this means it does not do, and does not need, a virial calculation.
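
For reference, attaching the barostat is standard OpenMM usage; it proposes Monte Carlo volume moves and accepts or rejects them based on the potential energy alone:

from openmm import MonteCarloBarostat
from openmm.unit import bar, kelvin

# system is the openmm.System built earlier, e.g. via potential.createSystem(topology);
# the barostat never asks the forces for a virial
system.addForce(MonteCarloBarostat(1.0 * bar, 300.0 * kelvin))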

Linux-cpp-lisp commented on June 14, 2024

Fantastic! Thanks again @sef43 , looking forward to having this!

It shouldn't require any changes to the nequip repo. Your steps in the comment above are very clear, thank you; they should be most of what I need. I will let you know when I have questions.

Sounds good, feel free to reach out any time here or by email as is convenient for you.

sef43 commented on June 14, 2024

@Linux-cpp-lisp I have a WIP fork branch here: https://github.com/sef43/openmm-ml/tree/nequip

I have done a first implementation. It works for the toluene example; I have put an example here, which can be run in Colab.

It uses a trained, deployed model that I produced by running your example nequip-train configs/example.yaml

Things I still need to do:

  • Implement PBCs properly. Can you provide me with a test case that is a liquid/condensed phase? I will need to be able to run the training, as there are some issues with pytorch versions (more below)
  • cleanup/check the way atom types/atomic numbers are used
  • test on GPUs
  • deal with different unit conversions; currently Angstrom and kcal/mol from the toluene example are hardcoded

Some issues:

  • I think the version of torch used by nequip that comes from pip (pip install torch) is not compatible with the version openmm-torch uses (conda install -c conda-forge pytorch). I got around this by using nequip to train the toluene example with conda-forge pytorch.

  • when I use nequip.scripts.deploy.load_deployed_model I get errors, which I have put below. If I instead just use torch.jit.load directly, then it works. Do you know why this might be the case? It looks like load_deployed_model also just calls torch.jit.load

pytorch error from using nequip.scripts.deploy.load_deployed_model:

/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/nequip/utils/_global_options.py:58: UserWarning: Setting the GLOBAL value for jit fusion strategy to `[('DYNAMIC', 3)]` which is different than the previous value of `[('STATIC', 2), ('DYNAMIC', 10)]`
  warnings.warn(
Traceback (most recent call last):
  File "/Users/stephen/Documents/nequip/openmm-ml/example/run_nequip.py", line 19, in <module>
    system = potential.createSystem(pdb.topology)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/openmmml/mlpotential.py", line 178, in createSystem
    self._impl.addForces(topology, system, None, 0, **args)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/openmmml/models/nequippotential.py", line 169, in addForces
    module = torch.jit.script(nequipforce)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_script.py", line 1286, in script
    return torch.jit._recursive.create_script_module(
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_recursive.py", line 458, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_script.py", line 615, in _construct
    init_fn(script_module)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_recursive.py", line 500, in init_fn
    cpp_module.setattr(name, scripted)
RuntimeError: Could not cast attribute 'model' to type __torch__.torch.jit._script.RecursiveScriptModule (of Python compilation unit at: 0x6000032964f8): Expected a value of type '__torch__.torch.jit._script.RecursiveScriptModule (of Python compilation unit at: 0x6000032964f8)' for field 'model', but found '__torch__.nequip.nn._rescale.___torch_mangle_0.RescaleOutput (of Python compilation unit at: 0x60000326ee58)'
Exception raised from setattr at /Users/runner/miniforge3/conda-bld/pytorch-recipe_1664817728005/work/torch/csrc/jit/api/object.h:67 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 92 (0x110c0559c in libc10.dylib)
frame #1: torch::jit::Object::setattr(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, c10::IValue) + 1312 (0x1344d6794 in libtorch_python.dylib)
frame #2: void pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_3, void, torch::jit::Object&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::object, pybind11::name, pybind11::is_method, pybind11::sibling>(torch::jit::initJitScriptBindings(_object*)::$_3&&, void (*)(torch::jit::Object&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::object), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&) + 840 (0x1346775e8 in libtorch_python.dylib)
frame #3: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3464 (0x13400e71c in libtorch_python.dylib)
frame #4: cfunction_call + 80 (0x10434ca68 in python3.10)
frame #5: _PyObject_MakeTpCall + 612 (0x1042fa050 in python3.10)
frame #6: method_vectorcall + 268 (0x1042fd770 in python3.10)
frame #7: call_function + 524 (0x1043e8e38 in python3.10)
frame #8: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #9: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #10: call_function + 524 (0x1043e8e38 in python3.10)
frame #11: _PyEval_EvalFrameDefault + 26500 (0x1043e4c38 in python3.10)
frame #12: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #13: call_function + 524 (0x1043e8e38 in python3.10)
frame #14: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #15: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #16: call_function + 524 (0x1043e8e38 in python3.10)
frame #17: _PyEval_EvalFrameDefault + 26500 (0x1043e4c38 in python3.10)
frame #18: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #19: call_function + 524 (0x1043e8e38 in python3.10)
frame #20: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #21: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #22: call_function + 524 (0x1043e8e38 in python3.10)
frame #23: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #24: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #25: method_vectorcall + 516 (0x1042fd868 in python3.10)
frame #26: _PyEval_EvalFrameDefault + 27276 (0x1043e4f40 in python3.10)
frame #27: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #28: call_function + 524 (0x1043e8e38 in python3.10)
frame #29: _PyEval_EvalFrameDefault + 26348 (0x1043e4ba0 in python3.10)
frame #30: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #31: run_mod + 216 (0x104438d7c in python3.10)
frame #32: _PyRun_SimpleFileObject + 1264 (0x10443881c in python3.10)
frame #33: _PyRun_AnyFileObject + 240 (0x1044377f4 in python3.10)
frame #34: Py_RunMain + 2340 (0x10445c8ac in python3.10)
frame #35: pymain_main + 1272 (0x10445da68 in python3.10)
frame #36: main + 56 (0x1042a3dc8 in python3.10)
frame #37: start + 2544 (0x198003e50 in dyld)

Linux-cpp-lisp commented on June 14, 2024

@sef43 fantastic! That was quick πŸ˜† , thanks!

Regarding the issues, sort of in order:

  1. I've now removed, on develop, nequip's explicit dependence on torch so that the user's installation takes precedence, since pip is really bad about trying to override your existing, correct PyTorch installation. We always use PyTorch from conda instead, but always take it from PyTorch's own channels, something like: conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia... but I'm not sure how that would interact with OpenMM-Torch's dependencies.
  2. We presently support PyTorch 1.11 and 1.13 (2.0 should work once released but is untested; 1.12 is broken due to PyTorch bugs).
  3. Using nequip.scripts.deploy.load_deployed_model(..., freeze=False) should fix it.

Implement PBCs properly. Can you provide me with a test case that is a liquid/condensed phase? I will need to be able to run the training, as there are some issues with pytorch versions (more below)

Two easy options would be:

  • Simple test data on a bulk metal from the EMT classical potential (see minimal_toy_emt.yaml):
dataset: EMTTest                                                                       # type of data set, can be npz or ase
dataset_element: Cu
dataset_supercell: [4, 4, 4]
dataset_num_frames: 50
chemical_symbols:
  - Cu

Both should be in units of eV and Γ…. If it's easier, I'm happy to train a model quickly and email it to you as well if you prefer.

cleanup/check the way atom types/atomic numbers are used

Am I right in understanding that in OpenMM atom type == atomic number, and that OpenMM only reasons about atoms in terms of their atomic numbers? Is that why you have it set up to possibly allow mapping multiple nequip types to one OpenMM atomic number?

deal with different unit conversions, currently angstrom and kcal/mol from toluene example are hardcoded

πŸ‘ yep, NequIP/Allegro is entirely agnostic to units and preserves those of the training data (as you've seen), so a user-facing option in the OpenMM wrapper seems appropriate.

And finally, some random things:

https://github.com/sef43/openmm-ml/blob/nequip/openmmml/models/nequippotential.py#L128

batch is not required when there is only one example in the batch, and should be omitted.

sef43 commented on June 14, 2024

Am I right in understanding that in OpenMM atom type == atomic number, and that OpenMM only reasons about atoms in terms of their atomic numbers? Is that why you have it set up to possibly allow mapping multiple nequip types to one OpenMM atomic number?

I wouldn't read anything into what I have currently done; I need to understand a bit more about the nequip types!

Each particle in OpenMM is an Atom object which will be a specific Element, so you can go atom.element.atomic_number to get the atomic number and atom.element.symbol to get the chemical symbol.

As I understand it, nequip types are zero-indexed and user-defined, so there can be multiple types for the same chemical element. OpenMM is not going to know about this from reading in files (its own concept of atom types and atom classes is tied to which ForceField you are using). The most flexible way would be to explicitly specify an array of length N_atoms containing the nequip type of each atom and pass this to the MLP constructor, but that is not very user-friendly.
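
As a sketch, that explicit option might look like this (the constructor argument names are hypothetical, just to illustrate the shape of the interface):

from openmmml import MLPotential

# default: derive one type per element from an existing openmm.app.Topology
symbols = [atom.element.symbol for atom in topology.atoms()]

# most flexible: an explicit 0-based NequIP type per atom, e.g. distinguishing
# two chemically distinct oxygens in a 4-atom system (hypothetical argument name)
potential = MLPotential("nequip", modelPath="model.pth", atomTypes=[0, 1, 1, 2])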

We support presently PyTorch 1.11 and 1.13 (2.0 should work once released but is untested; 1.12 is broken due to PyTorch bugs).

Ah ok I will avoid PyTorch 1.12 then

Linux-cpp-lisp commented on June 14, 2024

As I understand it, nequip types are zero-indexed and user-defined, so there can be multiple types for the same chemical element.

This is correct, yes.

The most flexible way would be to explicitly specify an array of length N_atoms containing the nequip type of each atom and pass this to the MLP constructor, but that is not very user-friendly.

I see--- that sounds like a useful, optional choice to have, but I agree that it is an unpleasant interface for the common cases, which are best handled with a mapping. Probably best to restrict the mapping to one-to-one atomic numbers to types and let more complicated cases be handled explicitly, though, to avoid easy-to-hit but hard-to-debug issues.

(Incidentally, the common case for nequip is to use our chemical_symbols option, where the user gives something like:

chemical_symbols:
 - H
 - C
 - O

which then sorts them by atomic number (https://github.com/mir-group/nequip/blob/main/nequip/data/transforms.py#L30-L36) and assigns consecutive atom types starting at zero. The other common case is chemical_symbol_to_type, where the mapping of atomic number to type is explicitly given. These are both train-time options, and are intentionally not propagated to deployed models, where users are expected to be explicit about their simulation setup to avoid challenging bugs from a mismatch of expectations around "reasonable" behavior.)
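
In code, the chemical_symbols behaviour amounts to roughly this (a sketch using ase for the atomic numbers):

from ase.data import atomic_numbers

chemical_symbols = ["H", "C", "O"]
# sort by atomic number, then assign consecutive 0-based types
by_number = sorted(chemical_symbols, key=lambda s: atomic_numbers[s])
chemical_symbol_to_type = {sym: i for i, sym in enumerate(by_number)}
# -> {"H": 0, "C": 1, "O": 2}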

peastman commented on June 14, 2024

We can generate atom types in pretty much any way we want. The only requirement is that it has to be possible to determine them from the arguments to addForces(). As long as that's true, we can write whatever code is needed.

simonbatzner commented on June 14, 2024

This is really awesome, thanks so much @sef43!

Linux-cpp-lisp commented on June 14, 2024

This is great @sef43 !!! πŸŽ‰ Thanks so much for putting this together.

A small point--- is there a reason to always use the O(N^2) neighborlist? It looks from the benchmarks (https://github.com/felixmusil/torch_nl) that the O(N) becomes faster on GPU around 200 atoms, which I'd imagine would be a very straightforward heuristic to implement?

Linux-cpp-lisp commented on June 14, 2024

One other piece of information relevant to:

https://github.com/sef43/openmm-ml/blob/nequip/openmmml/models/nequippotential.py#L98-L100

nequip.scripts.deploy.load_deployed_model will call torch.set_default_dtype appropriately based on the metadata of the deployed model (in the current development and future stable versions of nequip), so I think you might be able to address the TODO by moving the self.default_dtype assignment after the load_deployed_model.
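
That is, something along these lines inside the wrapper's __init__ (sketch):

# load first; load_deployed_model sets torch's default dtype from the model's
# metadata, so it can simply be read back afterwards instead of being hardcoded
self.model, metadata = load_deployed_model(model_path, device=device, freeze=False)
self.default_dtype = torch.get_default_dtype()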

sef43 commented on June 14, 2024

A small point--- is there a reason to always use the O(N^2) neighborlist? It looks from the benchmarks (https://github.com/felixmusil/torch_nl) that the O(N) becomes faster on GPU around 200 atoms, which I'd imagine would be a very straightforward heuristic to implement?

Switched to just using the O(N) version; for small systems the speed difference is negligible, and for larger systems it is of course faster.

nequip.scripts.deploy.load_deployed_model will call torch.set_default_dtype appropriately based on the metadata of the deployed model (in the current development and future stable versions of nequip), so I think you might be able to address the TODO by moving the self.default_dtype assignment after the load_deployed_model.

It will now set the model dtype from the model metadata; note this means it must be used with nequip develop version 0.6.0. Also, I don't think nequip.scripts.deploy.load_deployed_model will work for models deployed from an older version.

Linux-cpp-lisp commented on June 14, 2024

Switched to just using the O(N) version; for small systems the speed difference is negligible, and for larger systems it is of course faster.

βœ…

Also, I don't think nequip.scripts.deploy.load_deployed_model will work for models deployed from an older version.

Ah this is a good point, let me update load_deployed_model to assume the 99%-of-the-time correct default of float32 with a warning so that this is backwards compatible and can use older models. πŸ‘

Linux-cpp-lisp commented on June 14, 2024

OK @sef43 the backward compatibility should be fixed in 91e498f, does that look right to you?

sef43 commented on June 14, 2024

OK @sef43 the backward compatibility should be fixed in 91e498f, does that look right to you?

Not quite; this line:

        if DEFAULT_DTYPE_KEY in metadata:

needs to check the value rather than the key; the key always gets set here, and the value gets set to "" if an older-version model is being loaded:


  File "/Users/stephen/miniconda3/envs/test1/lib/python3.11/site-packages/openmmml/models/nequippotential.py", line 107, in __init__
    self.model, metadata = nequip.scripts.deploy.load_deployed_model(model_path, device=self.device, freeze=False)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/stephen/miniconda3/envs/test1/lib/python3.11/site-packages/nequip/scripts/deploy.py", line 141, in load_deployed_model
    _set_global_options(
  File "/Users/stephen/miniconda3/envs/test1/lib/python3.11/site-packages/nequip/utils/_global_options.py", line 79, in _set_global_options
    new_dtype = dtype_from_name(config["default_dtype"])
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/stephen/miniconda3/envs/test1/lib/python3.11/site-packages/nequip/utils/misc.py", line 10, in dtype_from_name
    return {"float32": torch.float32, "float64": torch.float64}[name]
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
KeyError: ''

Also, metadata['model_dtype'] needs to be set to float32 if it doesn't exist. This is the dtype of the model, which is used to set the dtype in OpenMM, and it can be different from the default_dtype; e.g. minimal_toy_emt from the develop branch for me ends up with a deployed model where default_dtype is float64 and model_dtype is float32.
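
i.e. something like this in load_deployed_model (a sketch of the fix being described):

# the key always exists, so test the value; "" marks an older deployed model
if metadata.get(DEFAULT_DTYPE_KEY, "") != "":
    default_dtype = metadata[DEFAULT_DTYPE_KEY]
else:
    default_dtype = "float32"  # pre-0.6.0 deployed models were always float32
if metadata.get("model_dtype", "") == "":
    metadata["model_dtype"] = "float32"  # note: may differ from default_dtype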

Linux-cpp-lisp commented on June 14, 2024

Absolutely right on both counts, thanks for catching this error @sef43 --- I'll get those fixed.

Linux-cpp-lisp commented on June 14, 2024

OK, maybe this time @sef43 : 3ac367b πŸ˜†

sef43 commented on June 14, 2024

OK, maybe this time @sef43 : 3ac367b πŸ˜†

Yep looks good, works for me!

I think this is ready for people to try using, I have put a README here: https://github.com/sef43/openmm-ml/blob/nequip/examples/nequip/README.md

Once we know it works for people, we can think about PRing it into OpenMM-ML main; tagging @peastman for insight on how to proceed with that.

Linux-cpp-lisp commented on June 14, 2024

@sef43 saw this issue of yours (felixmusil/torch_nl#1 (comment)) on torch-nl, is this relevant for users of this OpenMM plugin?

sef43 commented on June 14, 2024

@sef43 saw this issue of yours (felixmusil/torch_nl#1 (comment)) on torch-nl, is this relevant for users of this OpenMM plugin?

Yes it will be! Edit: I have pushed a change to my branch that just uses a simple but definitely always correct NL while I think about the best way to handle this (OpenMM does not wrap the coordinates that get sent to the TorchForce).

Linux-cpp-lisp commented on June 14, 2024

πŸ‘

Is there a reason not to just wrap the coordinates in PyTorch in the wrapper and keep using torch_nl? Since wrapping is basically just matrix solve, mod 1.0, matmul? (See https://gitlab.com/ase/ase/-/blob/master/ase/geometry/geometry.py#L81)
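
A sketch of that wrap in PyTorch, mirroring the ase recipe linked above (rows of cell are the box vectors):

import torch

def wrap_positions(pos, cell):
    # fractional coordinates: pos = frac @ cell, so solve cell^T frac^T = pos^T
    frac = torch.linalg.solve(cell.T, pos.T).T
    frac = frac % 1.0   # wrap into [0, 1)
    return frac @ cell  # back to Cartesian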

peastman commented on June 14, 2024

This code

shifts = torch.round(triu_deltas/cell_size)
wrapped_triu_deltas = triu_deltas - shifts*cell_size

is only accurate for rectangular boxes. If you want to support triclinic boxes it's a little more complicated. Still not very complicated though.
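
For instance, with OpenMM's reduced box-vector convention (a along x, b in the xy plane), the rounding can be applied vector by vector, from c down to a; a sketch, assuming the cutoff is below half the periodic box size:

# cell rows a, b, c in OpenMM reduced form: a = (ax,0,0), b = (bx,by,0), c = (cx,cy,cz)
a, b, c = cell[0], cell[1], cell[2]
wrapped_triu_deltas = triu_deltas.clone()
wrapped_triu_deltas -= c * torch.round(wrapped_triu_deltas[:, 2] / c[2]).unsqueeze(1)
wrapped_triu_deltas -= b * torch.round(wrapped_triu_deltas[:, 1] / b[1]).unsqueeze(1)
wrapped_triu_deltas -= a * torch.round(wrapped_triu_deltas[:, 0] / a[0]).unsqueeze(1)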
