protein-sequence-embedding-iclr2019's Issues

The embedding sequence shape

Hello,

Thank you for the code.
I was able to get the embeddings using a pretrained model.
I would like to ask what the size of the embedding is for a given sequence.
For example, if I have sequences of length 17, what is the shape or length of the embedding I get?
I would also like to ask how the embedding output differs between full_features=False and full_features=True.
Which one carries more information?

Thanks
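
One way to answer both questions empirically is to print the shapes that come back from the wrapper for each setting. This is only a sketch pieced together from the other issues in this thread; the import locations of Uniprot21, encode_sequence, and TorchModel, and the structure of the returned features, are assumptions rather than documented API:

import numpy as np
import torch
from src.alphabets import Uniprot21                   # assumed module path
from eval_secstr import TorchModel, encode_sequence   # assumed location of these helpers

model = torch.load('ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav')
model.eval()

alphabet = Uniprot21()
batch = np.array([encode_sequence('M' * 17, alphabet)])   # one sequence of length 17

for full in (False, True):
    featurize = TorchModel(model, use_cuda=False, full_features=full)
    out = featurize(batch)
    print('full_features=%s:' % full, [np.asarray(o).shape for o in out])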

RuntimeError: expected type torch.FloatTensor but got torch.LongTensor (run on CPU)

I got a type error when running this project on the CPU with the command:

python .\eval_similarity.py .\model\ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav

The traceback is as follows:

Traceback (most recent call last):
  File ".\eval_similarity.py", line 261, in <module>
    main()
  File ".\eval_similarity.py", line 231, in main
    scores = score_pairs(model, x0_train, x1_train, batch_size)
  File ".\eval_similarity.py", line 159, in score_pairs
    scores.append(model(x0_mb, x1_mb))
  File ".\eval_similarity.py", line 93, in __call__
    scores[i] = torch.sum(p*levels).item()
RuntimeError: expected type torch.FloatTensor but got torch.LongTensor

I also tried eval_secstr.py and got a similar issue:

python .\eval_secstr.py .\model\ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav

split epoch loss perplexity accuracy
Traceback (most recent call last):
  File ".\eval_secstr.py", line 426, in <module>
    main()
  File ".\eval_secstr.py", line 297, in main
    fit_nn_potentials(model, x_train, y_train, num_epochs=num_epochs, use_cuda=use_cuda)
  File ".\eval_secstr.py", line 177, in fit_nn_potentials
    for x,y in iterator:
  File ".\eval_secstr.py", line 146, in __iter__
    x = self.x[order]
RuntimeError: tensors used as indices must be long or byte tensors

Runtime: Windows 10, Python 3.
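
Both errors are dtype mismatches, and casting at the failing lines is usually enough. A minimal sketch of the two casts (the exact fix locations in eval_similarity.py and eval_secstr.py are an assumption based on the tracebacks above):

import torch

# eval_similarity.py, line 93: p is a float probability vector, levels is an
# integer tensor, and older torch versions refuse to multiply the two.
p = torch.softmax(torch.randn(5), dim=0)
levels = torch.arange(5)                       # int64
score = torch.sum(p * levels.float()).item()   # cast levels to float before multiplying

# eval_secstr.py, line 146: indexing a tensor with a non-long index tensor
# raises "tensors used as indices must be long or byte tensors".
x = torch.randn(8, 3)
order = torch.randperm(8).int()                # int32 indices reproduce the error
x_shuffled = x[order.long()]                   # cast the indices to long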

Warnings in PyTorch 1.0.0

I used the code and model weights from the original repo to reproduce the performance reported in the paper, using Python 3.7 and torch 1.0.0, and I got the following warnings.

/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.rnn.LSTM' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'src.models.multitask.OrdinalRegression' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'src.models.multitask.ConvContactMap' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/*/anaconda3/lib/python3.7/site-packages/torch/serialization.py:434: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)

However, I do not know whether this has any bad consequences. Does anyone have any ideas? Much appreciated!
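
These SourceChangeWarning messages only say that the source of the listed classes differs between the torch version that saved the model and the one loading it; as long as the evaluation results match the paper, they should be harmless. A small sketch for silencing them (the model path is the one used elsewhere in these issues; whether to suppress them at all is a judgement call):

import warnings
import torch
from torch.serialization import SourceChangeWarning

warnings.filterwarnings('ignore', category=SourceChangeWarning)
model = torch.load('ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav',
                   map_location='cpu')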

Runtime Error in eval_secstr.py

python .\eval_secstr.py 1-mer
split epoch loss perplexity accuracy
Traceback (most recent call last):
  File ".\eval_secstr.py", line 427, in <module>
    main()
  File ".\eval_secstr.py", line 361, in main
    potentials = model(x).view(x.size(0), -1)
  File "D:\Software\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Software\Anaconda\lib\site-packages\torch\nn\modules\sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "D:\Software\Anaconda\lib\site-packages\torch\nn\functional.py", line 1814, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)

OS: Windows 10, Python 3.7.4, PyTorch 1.6.0
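
torch.nn.Embedding requires int64 (Long) indices, so casting the input tensor before it reaches the embedding layer should resolve this. A minimal reproduction and fix (where exactly to cast in eval_secstr.py is an assumption based on the traceback):

import torch

emb = torch.nn.Embedding(21, 8)                          # 21-letter alphabet, toy dims
x = torch.randint(0, 21, (4, 17), dtype=torch.int32)     # int32 indices reproduce the error
# emb(x) raises: Expected tensor for argument #1 'indices' to have scalar type Long
h = emb(x.long())                                        # cast to int64 before embedding
print(h.shape)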

embed_sequences.py fails: AttributeError: 'LSTM' object has no attribute '_flat_weights'

I'm trying to run embed_sequences.py, but I get an exception from torch code:

$ python embed_sequences.py seqwence-protein.fasta -m ../pretrained_models/ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav -o output.h5
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.rnn.LSTM' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
# writing: a.h5
# embedding with lm_only=False, no_lm=False, proj_only=False
# pooling: none
Traceback (most recent call last):
  File "embed_sequences.py", line 184, in <module>
    main()
  File "embed_sequences.py", line 171, in main
    z = embed_sequence(sequence, lm_embed, lstm_stack, proj
  File "embed_sequences.py", line 82, in embed_sequence
    z = embed_stack(x, lm_embed, lstm_stack, proj
  File "embed_sequences.py", line 49, in embed_stack
    h = lm_embed(x)
  File "/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/konsti/bepler-berger/src/models/embedding.py", line 25, in forward
    h_lm = self.lm.encode(x)
  File "/home/konsti/bepler-berger/src/models/sequence.py", line 168, in encode
    h_fwd_layers,h_rvs_layers = self.transform(z_fwd, z_rvs)
  File "/home/konsti/bepler-berger/src/models/sequence.py", line 92, in transform
    h,_ = rnn(h)
  File "/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 569, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
  File "/home/konsti/bepler-berger/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 593, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LSTM' object has no attribute '_flat_weights'

These are the versions I installed (from pip freeze):

Cython==0.29.21
dataclasses==0.6
future==0.18.2
h5py==3.1.0
numpy==1.19.4
pkg-resources==0.0.0
torch==1.5.1
typing-extensions==3.7.4.3

I suspect that this has something to do with the torch version, so would it be possible to make the model run on more recent pytorch versions?
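
One workaround that is sometimes reported for old full-module pickles on newer torch (a sketch and an assumption, not a confirmed fix for these models) is to rebuild the flat-weight bookkeeping that newer torch expects on every LSTM after loading:

import torch

model = torch.load(
    '../pretrained_models/ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav',
    map_location='cpu')

for m in model.modules():
    if isinstance(m, torch.nn.LSTM):
        # old pickles carry _all_weights (per-layer parameter names) but not the
        # _flat_weights / _flat_weights_names attributes newer torch looks up
        names = [w for layer in m._all_weights for w in layer]
        m._flat_weights_names = names
        m._flat_weights = [getattr(m, n) for n in names]
        m.flatten_parameters()

Failing that, pinning torch to a version from around the time of the repo (e.g. the 1.0.x series mentioned in the other issues) avoids the attribute rename entirely.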

Generating a FASTA/text file for a different dataset

Hi, thank you for open sourcing this work.

We currently want to utilize your pretrained models for finetuning on a binding affinity dataset. Any advice (or an existing script) on how one can generate fasta files (in the format your models use) for a new dataset would be appreciated.
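
The scripts in this repo read plain FASTA input (embed_sequences.py takes a .fasta file), so writing one record per protein is usually enough. A minimal sketch (the ids and sequences below are made up):

# hypothetical id -> amino acid sequence mapping for the new dataset
sequences = {
    'prot_0001': 'MKVKKLLAAAGGG',
    'prot_0002': 'MSTNPKPQRKTKR',
}

with open('binding_affinity.fasta', 'w') as out:
    for name, seq in sequences.items():
        out.write('>' + name + '\n')
        # wrap long sequences at 60 characters per line, the usual FASTA convention
        for i in range(0, len(seq), 60):
            out.write(seq[i:i + 60] + '\n')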

Pfam preprocessing?

I'd appreciate details on how you subset Pfam (e.g., discarding small families, long sequences, etc.) for training the LMs initially. I couldn't find many details in the paper or the repo.

IMP topology great but secondary structure not so much?

Cool work! And thanks for putting the code + weights + datasets out there.

I'm sort of surprised that the topology prediction is competitive with TOPCONS, yet the secondary structure prediction seems pretty far behind other methods.

Johansen et al. use a biRNN-CRF similar to the one you use, right (https://dl.acm.org/citation.cfm?doid=3107411.3107489)? Yet they see much better predictive performance. Do you have a feel for why this is the case? Any chance you've evaluated against the datasets they use (CullPDB, CB513, CASP10, CASP11)?

Problems reproducing performance

Great work, and thanks for releasing the code + dataset + pre-trained models.

But I still have some questions about the training procedure; could you kindly spare some time to review the process? I have used the code in the GitHub repo to train the model several times but failed to reproduce the result (the gap is about 5%), so I was wondering whether there are differences between how your released model was trained and how I trained mine.

The training details are as follows:

python train_similarity_and_contact.py \
    --rnn-type lstm \
    --embedding-dim 100 \
    --input-dim 512 \
    --rnn-dim 512 \
    --hidden-dim 50 \
    --width 7 \
    --num-layers 3 \
    --dropout 0 \
    --epoch-scale 5 \
    --epoch-size 100000 \
    --num-epochs 100 \
    --similarity-batch-size 64 \
    --contact-batch-size 10 \
    --weight-decay 0 \
    --lr 0.001 \
    --tau 0.5 \
    --lambda 0.1 \
    --augment 0.05 \
    --lm /embedding/data/raw/bepler/pretrained_models/pfam_lm_lstm2x1024_tied_mb64.sav \
    --output /embedding/v-dache/save_logs/train_lambda0.1_augment0.05.txt \
    --save-prefix /embedding/v-dache/save_logs/train_lambda0.1_augment0.05 \
    --device -2

Here are the questions I would like answered; could you kindly address them?
1. Is the released LM the same one you used for training?
2. Do I need to modify the code to reproduce the result?
3. I noticed that when loading the samples for the SCOP task the count is 22408, but after resampling to match the CMAP task only about 10% remain, i.e. 2241. I was wondering whether this resampling matters.
4. Was the released model obtained through a hyperparameter search? If so, how was it done?

Besides, I revised the source code a little and submitted a PR on GitHub: dc75f65
Will this lead to a performance drop?

The evaluation of the models is as follows:
[image: Results from eval_similarity.py]

[image: Results from eval_similarity.py & eval_secstr.py]

[image: Results from eval_contact_scop.py]

[image: Results from eval_transmembrane.py]

Any suggestions will be appreciated!

Build error

$ python setup.py build_ext --inplace
running build_ext
building 'src.metrics' extension
gcc -pthread -B /home/banya/miniconda3/envs/protein/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/banya/miniconda3/envs/protein/lib/python3.7/site-packages/numpy/core/include -I/home/banya/miniconda3/envs/protein/include/python3.7m -c src/metrics.c -o build/temp.linux-x86_64-3.7/src/metrics.o
In file included from /usr/lib64/gcc/x86_64-solus-linux/9/include-fixed/syslimits.h:7,
                 from /usr/lib64/gcc/x86_64-solus-linux/9/include-fixed/limits.h:34,
                 from /home/banya/miniconda3/envs/protein/include/python3.7m/Python.h:11,
                 from src/metrics.c:17:
/usr/lib64/gcc/x86_64-solus-linux/9/include-fixed/limits.h:194:15: fatal error: limits.h: No such file or directory
  194 | #include_next <limits.h>  /* recurse down to the real one */
      |               ^~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1

I think a Docker image would help a lot here.

RuntimeError: CUDNN_STATUS_EXECUTION_FAILED

This happens when running train_similarity_and_contact.py, at the 5th or 6th epoch:

File "train_similarity_and_contact.py", line 585, in
main()
File "train_similarity_and_contact.py", line 563, in main
eval_contacts(model, cmap_test_iterator, use_cuda)
File "train_similarity_and_contact.py", line 189, in eval_contacts
logits_this, y_this = predict_contacts(model, x, y_mb, use_cuda)
File "train_similarity_and_contact.py", line 161, in predict_contacts
z = model(x) # embed the sequences
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/jwang/protein-sequence-embedding-iclr2019/src/models/multitask.py", line 26, in forward
return self.embedding(x)
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/jwang/protein-sequence-embedding-iclr2019/src/models/embedding.py", line 129, in forward
h,_ = self.rnn(h)
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 192, in forward
output, hidden = func(input, self.all_weights, hx, batch_sizes)
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 323, in forward
return func(input, *fargs, **fkwargs)
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 287, in forward
dropout_ts)
File "/root/anaconda3/envs/PSE_SI/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 287, in forward [46/1250]
dropout_ts)
RuntimeError: CUDNN_STATUS_EXECUTION_FAILED
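
CUDNN_STATUS_EXECUTION_FAILED during evaluation is frequently a GPU out-of-memory condition or a cuDNN/driver mismatch rather than a bug in the training script. Two hedged things to try (assumptions, not a confirmed diagnosis):

import torch

# 1. fall back from cuDNN to the native LSTM kernels, which often gives a
#    clearer error (or simply works) when cuDNN is the problem
torch.backends.cudnn.enabled = False

# 2. evaluate without building the autograd graph, to cut GPU memory use;
#    e.g. wrap the call from the traceback above:
# with torch.no_grad():
#     eval_contacts(model, cmap_test_iterator, use_cuda)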

Using pre-trained models

Hi @tbepler,
I managed to install your code and would like to use the pre-trained models to embed hundreds of thousands of protein sequences. I tried ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_p0.05_epoch100.sav, but it appears to process only about one sequence per minute on a top-end CPU (no CUDA). If I understand correctly, sequences should just be passed through the network to get the embeddings, i.e. a fast operation.

Any ideas? Should I use other pre-trained models?

Thanks,
Martin
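
Two things that usually speed up CPU-only inference (assumptions about the cause, not a confirmed diagnosis): run the forward passes under torch.no_grad() so no autograd state is kept, and give torch more threads; batching many sequences per call also amortises the per-sequence LSTM overhead. A sketch:

import torch

torch.set_num_threads(8)          # match your core count; 8 is an example value

model = torch.load('ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_p0.05_epoch100.sav',
                   map_location='cpu')
model.eval()

with torch.no_grad():
    # ... encode batches of sequences here rather than one sequence at a time
    pass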

RuntimeError in training scripts

With the default parameters in the repo, a RuntimeError occurs in train_similarity_and_contact.py:

Traceback (most recent call last):
  File "*/.pycharm_helpers/pydev/pydevd.py", line 1415, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "*/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "*/protein-sequence-embedding-iclr2019/train_similarity_and_contact.py", line 587, in <module>
    main()
  File "*/protein-sequence-embedding-iclr2019/train_similarity_and_contact.py", line 508, in main
    for (cmap_x, cmap_y), (scop_x0, scop_x1, scop_y) in zip(cmap_train_iterator, scop_train_iterator):
  File "*/anaconda3/envs/iclr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    index = self._next_index()  # may raise StopIteration
  File "*/anaconda3/envs/iclr/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 318, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "*/anaconda3/envs/iclr/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 200, in __iter__
    for idx in self.sampler:
  File "*/anaconda3/envs/iclr/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 160, in __iter__
    return iter(torch.multinomial(self.weights, self.num_samples, self.replacement).tolist())
RuntimeError: number of categories cannot exceed 2^24

It seems that there is a bug (or limitation) in WeightedRandomSampler, which could be a PyTorch version problem. Any ideas for solving this? Or @tbepler, could you please provide the PyTorch version number you used when developing this? Thanks!
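
The 2^24 limit comes from torch.multinomial, which WeightedRandomSampler calls internally (see the last frame of the traceback), so it bites whenever the weighted dataset has more than about 16.8 million entries. One workaround sketch (an assumption, not a fix from this repo) is a drop-in sampler that draws the indices with numpy instead:

import numpy as np
from torch.utils.data import Sampler

class NumpyWeightedSampler(Sampler):
    """Weighted sampling without torch.multinomial's 2^24-category limit."""
    def __init__(self, weights, num_samples, replacement=True):
        self.p = np.asarray(weights, dtype=np.float64)
        self.p /= self.p.sum()                    # np.random.choice needs normalized weights
        self.num_samples = num_samples
        self.replacement = replacement
    def __iter__(self):
        idx = np.random.choice(len(self.p), size=self.num_samples,
                               replace=self.replacement, p=self.p)
        return iter(idx.tolist())
    def __len__(self):
        return self.num_samples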

How to use the protein encoder?

Would you be able to provide instructions for running the encoder? For example, is there some function like model.encode('MKVKK'), where MKVKK is an amino acid sequence?

Which of the pre-trained models should I use?

Thanks.
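
There is no single encode() call documented here, but embed_sequences.py (used in the embed_sequences.py issue above) takes a FASTA file plus a pretrained .sav model and writes per-sequence embeddings to an HDF5 file, which can then be read back. A sketch, with the HDF5 layout (one dataset per sequence) being an assumption:

# first, on the command line (paths are placeholders):
#   python embed_sequences.py seqs.fasta -m ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav -o output.h5
import h5py

with h5py.File('output.h5', 'r') as f:
    for name in f:                                  # assumed: one dataset per input sequence
        z = f[name][:]                              # per-residue embedding matrix
        print(name, z.shape, z.mean(axis=0).shape)  # mean-pool for a fixed-size vector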

Unable to load pretrained model?

Hi, thanks again for sharing this code.
However, I'm unable to load the pretrained model provided here.
It gives me the following error

Traceback (most recent call last):
  File "eval_contact_casp12.py", line 462, in <module>
    main()
  File "eval_contact_casp12.py", line 326, in main
    model = torch.load(args.model)
  File "/root/Anacondas/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 358, in load
    return _load(f, map_location, pickle_module)
  File "/root/Anacondas/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 532, in _load
    magic_number = pickle_module.load(f)
_pickle.UnpicklingError: invalid load key, '\x1f'.

BTW, I'm using Python 3 and PyTorch 0.4.1; training works fine for me.
Any suggestions?
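
'\x1f' is the first byte of the gzip magic number (\x1f\x8b), so the downloaded file is most likely still gzip-compressed (or is a .tar.gz archive that needs extracting) rather than a raw pickle. A quick check-and-decompress sketch (the path is a placeholder; if the file turns out to be a tar archive, extract it with tar instead):

import gzip
import shutil
import torch

path = 'ssa_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_lambda0.1_p0.05_epoch100.sav'
with open(path, 'rb') as f:
    magic = f.read(2)

if magic == b'\x1f\x8b':
    # decompress to a sibling file, then load normally
    with gzip.open(path, 'rb') as fin, open(path + '.unpacked', 'wb') as fout:
        shutil.copyfileobj(fin, fout)
    model = torch.load(path + '.unpacked')
else:
    model = torch.load(path)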

Unable to download links from the page

Hi, I am trying to run your pretrained models on my own machine. However, there is some architecture mismatch between the model and the torch version. For example, I tried to use the pfam_lm_lstm2x1024_tied_mb64.sav model to work on the pfam.fasta files but it gives the following error:

AttributeError: 'LSTM' object has no attribute '_flat_weights'. Did you mean: '_all_weights'?

The only suggestion I have received is to downgrade the torch version to 1.3.0, which does not exist. Do you have any updated pretrained models for use?

Question about sequence length

Hi,

I am a little worried about the capacity of the Bi-LSTM. As shown in Table 4, the maximum sequence length is 1,664. Does that mean your pre-trained LSTM model needs to read all 1,663 preceding amino acids to predict the last one? How well do such long sequences perform? Do you have any way of handling the long sequences that may be encountered?

Thanks in advance,

Available data

Hey!
I would like to reuse your models without training them from scratch, but it seems that the bergerlab-downloads.csail.mit.edu website is not responding anymore. Are there any other links where we could find the parameters for the pretrained models?

Language Model Accuracy on Pfam?

I couldn't find the next-token prediction accuracy of the biLM in the paper. Would you mind sharing what you got after one epoch on Pfam as described in supplement A?

Protein encoder - PackedSequence error

Hi! Thank you for open sourcing your work!

I am trying to encode my protein sequence with your pretrained model according to the procedure you described in issue #1; so, basically, for testing purposes:

  1. I convert the sequence into bytes:
alphabet = Uniprot21()
encoded_f = encode_sequence('ABC', alphabet)
encoded_f2 = np.array([encoded_f, encoded_f])  # just an imitation of a batch
  2. I load the model and encode the converted, batched sequences:
pretrained_model = torch.load('pfam_lm_lstm2x1024_tied_mb64.sav')
pretrained_model.eval()
features = TorchModel(pretrained_model, use_cuda=0, full_features=False)
features(encoded_f2)

an error occurs:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-150-ed0bea8b81e2> in <module>
      1 import numpy as np
----> 2 features(encoded_f2)

./protein-sequence-embedding-iclr2019/eval_secstr.py in __call__(self, x)
    115             z = featurize(c, self.lm_embed, self.lstm_stack, self.proj)
    116         else:
--> 117             z = self.model(c) # embed the sequences
    118         z = unpack_sequences(z, order)
    119 

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

./protein-sequence-embedding-iclr2019/src/models/sequence.py in forward(self, x)
    225         # postpend reverse logp with zero
    226 
--> 227         b = h_fwd.size(0)
    228         zero = h_fwd.data.new(b,1,logp_fwd.size(2)).zero_()
    229         logp_fwd = torch.cat([zero, logp_fwd], 1)

AttributeError: 'PackedSequence' object has no attribute 'size'

I would be grateful if you could let me know what I am doing wrong!
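
The failure is at sequence.py line 227, where the forward calls .size(0) on what is, at that point, a PackedSequence: the TorchModel wrapper appears to pack the batch (note the unpack_sequences call right after the model call in the traceback) before handing it to the language model, whose forward expects a padded tensor there. A generic illustration of the two forms and how to convert between them (a sketch of the mismatch, not a repo-specific fix):

import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.randn(2, 5, 8)        # (batch, max_len, features), lengths 5 and 3
packed = pack_padded_sequence(padded, [5, 3], batch_first=True)

# packed.size(0) fails exactly like the traceback above; a PackedSequence has
# .data and .batch_sizes, not .size(). Unpacking restores the padded view:
unpacked, lengths = pad_packed_sequence(packed, batch_first=True)
print(unpacked.size(0), lengths)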
