
atomicarchitects / equiformer_v2


[ICLR'24] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations

Home Page: https://arxiv.org/abs/2306.12059

License: MIT License

Python 99.47% Shell 0.53%
catalyst-design computational-chemistry computational-physics deep-learning drug-discovery equivariant-graph-neural-network equivariant-neural-networks force-fields interatomic-potentials machine-learning

equiformer_v2's People

Contributors

yilunliao


equiformer_v2's Issues

Small equivariant example

Hi @yilunliao,

Thanks for the nice codebase - I am adapting it for another purpose, and I was running into some issues when checking the outputs are actually equivariant. Are there any init flags that must be set in a certain way to guarantee equivariance?

I have a snippet equivalent to this:

import torch_geometric
import torch
from e3nn import o3
from torch_geometric.data import Data
from nets.equiformer_v2.equiformer_v2_oc20 import EquiformerV2_OC20

edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
pos = torch.randn(3, 3)  # 3 nodes, matching the indices in edge_index
data = Data(pos=pos, edge_index=edge_index)

R = o3.rand_matrix()  # already a torch.Tensor

model = EquiformerV2_OC20(
        num_layers=2,
        attn_hidden_channels=16,
        ffn_hidden_channels=16,
        sphere_channels=16,
        edge_channels=16,
        alpha_drop=0.0, # Turn off dropout for eq
        drop_path_rate=0.0, # Turn off drop path for eq
    )
model.eval()

energy1, forces1 = model(data)
data.pos = torch.matmul(pos, R)
energy2, forces2 = model(data)

assert torch.allclose(energy1, energy2, atol=1.0e-5)
# Rotating the inputs by R should rotate the forces by R:
assert torch.allclose(forces2, torch.matmul(forces1, R), atol=1.0e-3)

and the energies are equal, but the forces do not obey equality under rotation. I've turned off all dropout and set the model to eval - just wondering if there are any other tricks to retain the genuine eq behaviour. Thanks!
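Independent of the model, the rotation convention itself can be sanity-checked with a toy force field that is equivariant by construction (the field below is made up purely for illustration):

```python
import numpy as np

def toy_forces(pos):
    # Toy equivariant force field: harmonic pull toward the centroid.
    return -(pos - pos.mean(axis=0, keepdims=True))

rng = np.random.default_rng(0)
pos = rng.normal(size=(10, 3))

# Random proper rotation via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

f1 = toy_forces(pos)
f2 = toy_forces(pos @ Q)  # forces computed on the rotated system

# Rotating the inputs rotates the outputs the same way:
assert np.allclose(f2, f1 @ Q)
# Comparing f1 against f2 only holds with the transpose:
assert np.allclose(f1, f2 @ Q.T)
```

If a check structured like this fails for the real model, the mismatch is often just which side (or transpose) of R multiplies the forces, rather than a genuine equivariance break.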

Multi-node multi-gpu training

Could you provide instructions on how to run experiments in a multi-node multi-GPU setting without using submitit? For example, I have 2 nodes with 16 GPUs each. How should I modify the scripts you provide to reproduce the reported results?

Thanks!
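For what it's worth, a plain torchrun launch for this topology might look like the sketch below; the entry-point name and flags are assumptions modeled on OC20-style trainers, so check them against the repo's actual scripts:

```shell
# Hypothetical 2-node x 16-GPU launch. Run once per node;
# set NODE_RANK=0 on the first node and NODE_RANK=1 on the second.
export MASTER_ADDR=node0.example.com  # hostname of the rank-0 node (placeholder)
export MASTER_PORT=29500

torchrun --nnodes=2 --nproc_per_node=16 --node_rank=$NODE_RANK \
         --master_addr=$MASTER_ADDR --master_port=$MASTER_PORT \
         main_oc20.py --distributed --config-yml path/to/config.yml
```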

some questions

Hi, not issues but some questions.

  1. Is there any comparison of the performance with other SOTA equivariant nets such as MACE or NequIP?

  2. Is MD simulation available, or are there any developments going on?

Thanks for the nice work

Upload models to the Hugging Face Hub

Hi!

Very cool work! It would be nice to have the model checkpoints on the Hugging Face Hub rather than a Dropbox link.

Some of the benefits of sharing your models through the Hub would be:

  • versioning, commit history and diffs
  • repos provide useful metadata about their tasks, languages, metrics, etc. that make them discoverable
  • multiple features from TensorBoard visualizations, PapersWithCode integration, and more
  • wider reach of your work to the ecosystem

Creating the repos and adding new models should be a relatively straightforward process if you've used Git before. This is a step-by-step guide explaining the process in case you're interested. Please let us know if you would be interested and if you have any questions.
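For anyone following up, a minimal upload sketch with the huggingface_hub client could look like this; the repo id and checkpoint filename below are placeholders, not actual artifacts of this project:

```python
from huggingface_hub import HfApi

def push_checkpoint(repo_id: str, local_path: str, filename: str):
    """Upload one checkpoint file to the Hub (requires a prior `huggingface-cli login`)."""
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)  # no-op if the repo already exists
    api.upload_file(path_or_fileobj=local_path,
                    path_in_repo=filename,
                    repo_id=repo_id)

# Placeholder names -- substitute the real checkpoint:
# push_checkpoint("atomicarchitects/equiformer_v2", "checkpoints/model.pt", "model.pt")
```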

Training logs of QM9 dataset for EquiformerV2

Hello, thank you for sharing your great work!

I was wondering if it would be possible for you to share the training logs, similar to how the logs were provided for Equiformer. I'm specifically interested in seeing the higher-precision test results. For example, for mu (target 0), the paper reports only two decimal places (0.11), while the Equiformer logs show the higher-precision value 0.1172.

If you're able to share the logs, I would greatly appreciate it as it would allow for a more detailed analysis of the impressive results you achieved.

Thank you again for this excellent contribution to the field. I look forward to hearing back from you @yilunliao.

Best regards,

Can this model be used for molecule generation?

Hi,

I've heard of this strong model which can learn atomic coordinates. Now I want to adapt this model for my project, but I find the code is a bit complicated and hard to follow. (The paper also contains some concepts hard to understand)

I want to make sure, if this model can learn the following task:

**INPUTS**: protein_atom_pos, protein_atom_types, init_molecule_pos, init_molecule_types
**OUTPUTS**: molecule_pos, molecule_types

In other words, I want to learn the molecule given a protein pocket, a task known as Structure-based Drug Design.

Please help me verify whether this model is suitable for SE(3)-invariant molecule learning, and point me to the relevant code.

Thanks!

e3nn tensors compatibility issue

Hi, I am trying to integrate this with the e3nn package.

For the SO3Embedding class, how can I convert it to irreps compatible with the e3nn convention?
My implementation (not sure whether this is right):

    def to_e3nn_embeddings(self):
        from e3nn.io import SphericalTensor
        from e3nn.o3 import Irreps

        embedding = self.embedding.reshape(self.length, -1)
        # SphericalTensor(lmax, 1, -1) yields one copy of 0e + 1o + 2e + ...;
        # widen each multiplicity from 1 to num_channels
        irreps = Irreps(str(SphericalTensor(self.lmax_list[-1], 1, -1)).replace('1x', f'{self.num_channels}x'))
        return irreps, embedding

Speed compared to `TorchMD-Net`

I wanted to train this network on the SPICE dataset (a similar task, where I want to predict forces and energies from structure). I compared training speed with torchmd-net (https://github.com/torchmd/torchmd-net). For the same parameter count, torchmd-net is at least three times as fast as Equiformer and twice as fast as EquiformerV2. Is this expected, or a bug on my end?
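When comparing speed across codebases, it helps to time both models with the same crude harness so framework overheads are measured identically; this is a generic sketch, not tied to either repo's API:

```python
import time
import torch

def forward_throughput(model, batch, n_warmup=10, n_iter=50):
    """Forward passes per second (wall clock); synchronizes CUDA for fair GPU timing."""
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):  # warm up kernels / autotuning
            model(batch)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(n_iter):
            model(batch)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return n_iter / (time.perf_counter() - t0)
```

Note that training-step timing also includes the backward pass, and models that compute forces as gradients of the energy cannot run under no_grad, so make sure both models are benchmarked in the same mode.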

Large performance gap in MD17/22 dataset

Thank you for the great work on EquiformerV2. When I test its performance on the MD17/MD22 datasets, I find it lags far behind SOTA models like ViSNet. For example, on MD22 AT-AT, ViSNet's validation loss converges to 0.14 for energy and 0.17 for forces, while EquiformerV2's converges to 4.7 for energy and 5.1 for forces.
I follow the settings in oc20/configs/s2ef/all_md/equiformer_v2/equiformer_v2_N@8_L@4_M@2_31M.yml. Are there things I need to modify to adapt EquiformerV2 to MD datasets? Thanks.

How to train on a customized dataset?

I have a dataset of molecules with their atoms, positions, energies, and forces. I wonder how to train equiformer_v2 on it. Is it possible to support custom dataset training, or at least provide a tutorial or some suggestions?

Incorporating vector node features

Hi, nice work! I was wondering what it would take to accommodate systems whose nodes have additional non-scalar features. Any hints or snippets would be greatly appreciated. Thanks.
