
transformer-m's People

Contributors

lsj2408


transformer-m's Issues

How to encode proteins in the PDBbind task?

Very enlightening work. Congratulations on your great achievements in the OGB Challenge! I also noticed that you fine-tuned on the PDBbind dataset. How do you encode the protein information? Since proteins usually contain many more heavy atoms, do you encode them directly with Transformer-M?

Details about PDBBind~

Thank you for your code! It's well written.
I have a few questions about the fine-tuning task on PDBbind. I sincerely look forward to your reply!
1. Inputs. Which protein features are used as input? And is the pocket data (subsequence) or the full sequence used?
2. Model architecture. Are the protein and ligand data fed into separate encoders, or into the same encoder? If different encoders are used, what type of protein encoder is it, and how are the extracted protein and ligand features combined for the final prediction?
Thank you again for your clarifications!
By the way, may I ask when the fine-tuning code for PDBbind will be released? Thanks.

Training on QM9

Hi,

Would it be possible to provide the commands for training a model on QM9 from scratch? This is mentioned in appendix B5 when investigating the effectiveness of pre-training.

Kind regards,

Rob

unrecognized arguments error when using provided evaluation code

Hi, here is what confused me. I tried to run the evaluation command provided in README.md:

(screenshot of the evaluation command from README.md)

but I got the following error:

evaluate.py: error: unrecognized arguments: --add-3d --num-3d-bias-kernel 128 --droppath-prob 0.1 --act-dropout 0.1 --mode-prob 0.2,0.2,0.6

This seems strange; could you help me with it?
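For context, Python's argparse raises exactly this "unrecognized arguments" error when a script is passed flags it never registered, which typically happens when training-only options are forwarded to an evaluation script. A minimal sketch (the flag names below are illustrative, not Transformer-M's actual option set):

```python
import argparse

# argparse exits with "unrecognized arguments" for any flag that was
# never added to the parser.
parser = argparse.ArgumentParser()
parser.add_argument("--known-flag", type=int, default=0)

# parse_known_args() separates recognized and unrecognized arguments
# instead of exiting with an error, which helps diagnose which flags
# the script actually accepts.
args, unknown = parser.parse_known_args(["--known-flag", "1", "--add-3d"])
print(unknown)  # ['--add-3d']
```

So the error usually means those five flags are simply not defined in `evaluate.py`'s parser, either because of a version mismatch or because they belong only to the training entry point.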

How to fine-tune in the PDBbind

Hi,

It's a very interesting model. Would you mind providing the code for preprocessing the PDBbind data and for fine-tuning?

About Steps Settings

Thank you for releasing Transformer-M. Regarding the settings of warmup_steps and total_steps: do I need to divide them by the number of GPUs used? I observed during training that the number of steps per epoch decreases under multi-GPU data parallelism, but the program still counts steps as if running on a single card.
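For reference, under synchronous data-parallel training the number of optimizer updates per epoch shrinks as GPUs are added, because each update consumes one batch per GPU. A minimal sketch of the arithmetic (the sample and batch sizes are illustrative, not the repository's defaults):

```python
import math

def updates_per_epoch(num_samples: int, batch_size_per_gpu: int, num_gpus: int) -> int:
    """Optimizer updates in one epoch of synchronous data-parallel training.

    Each update consumes batch_size_per_gpu * num_gpus samples, so
    doubling the GPU count roughly halves the updates per epoch.
    """
    effective_batch = batch_size_per_gpu * num_gpus
    return math.ceil(num_samples / effective_batch)

print(updates_per_epoch(3_000_000, 256, 1))  # 11719
print(updates_per_epoch(3_000_000, 256, 4))  # 2930
```

This is why steps per epoch drop with more GPUs even though warmup_steps and total_steps are counted in optimizer updates, not in samples.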

Difficulties setting up the environment to reproduce results

Hi,

Thank you for the code and the surrounding instructions!

I was trying to reproduce the results but having some difficulty making the environment work.

I installed CUDA and the other package versions as mentioned, but torch_scatter was erroring out with "'NoneType' object has no attribute 'origin'". Looking it up online, I uninstalled your recommended version and installed another one with pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html (even though I have PyTorch 1.7.1). But then torch_sparse errored out with

div(float a, Tensor b) -> (Tensor):
  Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.

  div(int a, Tensor b) -> (Tensor):
  Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.

The original call is:
  File "/nethome/yjakhotiya3/miniconda3/envs/Transformer-M/lib/python3.7/site-packages/torch_sparse/storage.py", line 316
        idx = self.sparse_size(1) * self.row() + self.col()

        row = torch.div(idx, num_cols, rounding_mode='floor')
              ~~~~~~~~~ <--- HERE
        col = idx % num_cols
        assert row.dtype == torch.long and col.dtype == torch.long

I also tried other machines but got unknown CUDA errors from torch.distributed (possibly due to an unrelated driver version mismatch).

Did you encounter any of these issues or do you have any advice on how to navigate them?
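A common cause of both errors is that the torch-scatter and torch-sparse binary wheels were built against a different torch or CUDA version than the installed PyTorch, so only one of the two packages got replaced with a matching build. One way to reinstall both from the same wheel index (a sketch; per PyTorch Geometric's packaging convention, the torch-1.7.0 wheel index also serves PyTorch 1.7.1):

```shell
# Remove any mismatched binaries first, so pip cannot keep a stale build.
pip uninstall -y torch-scatter torch-sparse

# Reinstall both packages from the wheel index matching torch 1.7.x + CUDA 11.0.
pip install --no-index torch-scatter torch-sparse \
    -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
```

Reinstalling only torch-scatter, as described above, leaves torch-sparse compiled against the old torch, which matches the TorchScript error in the traceback.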

tasks for finetuning qm9

Hi, thank you for your QM9 fine-tuning code. I have a small question about the correspondence between task_idx and the specific target property.
Do you mean that the correspondence is as follows?
(screenshots of the task_idx-to-target mapping omitted)
Thank you for your reply.

Question about inputting only 2D Data

Hi!

Thank you for introducing such an interesting model to us and sharing the code!

I'm trying to run the model on 2D structures only. Would you mind providing a script for training the model using only 2D structures (like PCQM4M-LSC-V2)?

I tried changing dataset_name and setting add_3d to false in the sample 3D training script from the README, but that doesn't work. Looking into the code, I found that in tasks/graph_prediction.py, in GraphPredictionTask's load_dataset function, BatchedDataDataset is constructed with dataset_version set to "2D" for PCQM4M-LSC-V2, and this leads to an error at criterions/graph_predictions.py line 45: ori_pos = sample['net_input']['batched_data']['pos'], KeyError: 'pos'.
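One workaround, assuming the criterion only needs 3D positions when the dataset actually provides them, is to read the 'pos' key defensively. A sketch, not the repository's actual fix; get_positions and the toy batch dict below are illustrative stand-ins for the criterion's access pattern:

```python
# Sketch: return None instead of raising KeyError when the batch carries
# no 'pos' key, as happens for 2D-only datasets. 'batched_data' stands in
# for the dict built by the data loader; the surrounding loss code that
# would skip the 3D term when positions are absent is omitted.
def get_positions(batched_data: dict):
    # dict.get returns None for a missing key instead of raising KeyError.
    return batched_data.get("pos", None)

ori_pos = get_positions({"x": [1, 2, 3]})  # 2D batch without coordinates
print(ori_pos is None)  # True
```

The downstream loss would then need a guard so the 3D denoising term is only computed when ori_pos is not None.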

Thank you so much!

load checkpoint when fine-tune QM9

Hi,
I ran into a problem when trying to load 'L12-old.pt' in finetuneqm9.sh: the program reports that the checkpoint's structure does not match the model. How can I solve this?

Code associated with fine-tuning

Hello, authors! First, thank you for open-sourcing the code; it helps a lot with reproducing the paper, and the model's results are impressive! May I also ask roughly when the fine-tuning code will be uploaded?
