
pair_allegro's People

Contributors

anjohan, linux-cpp-lisp

pair_allegro's Issues

Simulated annealing calculation error using pair-allegro

OS: CentOS Linux release 7.9.2009 (Core)
Compiler: GCC 13.2.0
CPU: Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz
NUMA node(s): 2
PyTorch: 1.12.0
LAMMPS version: 29 Sep 2021 release
MPI: Intel Parallel Studio XE 2019

When I executed the simulated annealing algorithm on small clusters, I got the following error.

LAMMPS (29 Sep 2021)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
units metal
atom_style atomic
boundary p p p

newton on

read_data in.data
Reading data file ...
orthogonal box = (0.0000000 0.0000000 0.0000000) to (20.000000 20.000000 20.000000)
1 by 1 by 1 MPI processor grid
reading atoms ...
12 atoms
read_data CPU = 0.003 seconds
#read_restart file.restart.100000

pair_style allegro
pair_coeff * * fe-total.pth Fe

timestep 0.001 # ps

thermo_style custom step dt time temp ke pe etotal press vol
thermo 20
dump 1 all custom 200 dump.lammpstrj id type x y z
restart 100000 file.restart
fix s1 all nvt temp 0.01 1000 $(100.0*dt)
fix s1 all nvt temp 0.01 1000 0.10000000000000000555
run 30000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 8
ghost atom cutoff = 8
binsize = 4, bins = 5 5 5
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair allegro, perpetual
attributes: full, newton on, ghost
pair build: full/bin/ghost
stencil: full/ghost/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 4.315 | 4.315 | 4.315 Mbytes
Step Dt Time Temp KinEng PotEng TotEng Press Volume
0 0.001 0 0 0 -77.797695 -77.797695 0 8000
.......
.......
.......
470920 0.001 470.92 676.16539 0.9614136 -83.998843 -83.03743 128.36286 8000
470940 0.001 470.94 668.32156 0.95026076 -83.998562 -83.048301 126.87379 8000
470960 0.001 470.96 676.39779 0.96174404 -83.99844 -83.036696 128.40698 8000

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 18750 RUNNING AT node02
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 18750 RUNNING AT node02
= EXIT CODE: 11
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

Intel(R) MPI Library troubleshooting guide:
https://software.intel.com/node/561764

The input file content is as follows.
units metal
atom_style atomic
boundary p p p
newton on
read_data in.data
#read_restart file.restart.100000

pair_style allegro
pair_coeff * * fe-total.pth Fe

timestep 0.001 # ps
thermo_style custom step dt time temp ke pe etotal press vol
thermo 20
dump 1 all custom 200 dump.lammpstrj id type x y z
restart 100000 file.restart
fix s1 all nvt temp 0.01 1000 $(100.0*dt)
run 30000
unfix s1
fix s2 all nvt temp 1000 1000 $(100.0*dt)
run 100000
unfix s2
fix s3 all nvt temp 1000 50 $(100.0*dt)
run 6000000
unfix s3
write_data out.data

The run did not complete: I need to perform 6,130,000 timesteps in total, but the job dies around step 470,000, and then the error message above appears.
So I tried to analyze the error with GDB, but I am not very familiar with debugging.

The analysis results are as follows.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) where
#0 0x0000000000000000 in ?? ()
#1  0x00007fffe0ff25ad in torch::jit::InterpreterStateImpl::callstack() const () from /opt/software/python3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so
#2  0x00007fffe0ff3e8e in torch::jit::InterpreterStateImpl::handleError(std::exception const&, bool, c10::NotImplementedError*, c10::optional<std::string>) () from /opt/software/python3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so
#3  0x00007fffe1000fd0 in torch::jit::InterpreterStateImpl::runImpl(std::vector<c10::IValue, std::allocator<c10::IValue> >&) () from /opt/software/python3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so
#4  0x00007fffe0fee44f in torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) () from /opt/software/python3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so
#5  0x00007fffe0fe167a in torch::jit::GraphExecutorImplBase::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) () from /opt/software/python3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so
#6  0x00007fffe0c90ade in torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::string, c10::IValue, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, c10::IValue> > > const&) const () from /opt/software/python3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so
#7  0x00000000006f3496 in torch::jit::Module::forward (this=this@entry=0x2c83a38, inputs=..., kwargs=...) at /opt/software/python3/lib/python3.7/site-packages/torch/include/torch/csrc/jit/api/module.h:114
#8  0x00000000006ef443 in LAMMPS_NS::PairAllegro::compute (this=0x2c836c0, eflag=<optimized out>, vflag=<optimized out>) at /opt/source/lammps-stable_29Sep2021/src/pair_allegro.cpp:426
#9  0x00000000005379fb in LAMMPS_NS::Verlet::run (this=0x2c82c60, n=6000000) at /opt/source/lammps-stable_29Sep2021/src/verlet.cpp:312
#10 0x00000000004f291b in LAMMPS_NS::Run::command (this=<optimized out>, narg=<optimized out>, arg=<optimized out>) at /opt/source/lammps-stable_29Sep2021/src/run.cpp:180
#11 0x0000000000448614 in LAMMPS_NS::Input::execute_command (this=0x2c68cd0) at /opt/source/lammps-stable_29Sep2021/src/input.cpp:794
#12 0x0000000000448c2c in LAMMPS_NS::Input::file (this=0x2c68cd0) at /opt/source/lammps-stable_29Sep2021/src/input.cpp:273
#13 0x00000000004235a8 in main (argc=<optimized out>, argv=<optimized out>) at /opt/source/lammps-stable_29Sep2021/src/main.cpp:98

I noticed that it mentions a segmentation fault, but I'm not sure how to solve this problem. I hope you can provide me with some valuable help. Thanks!

Calculating virial stress in lammps

Hi,

I am trying to calculate virial stress in lammps using pair_allegro. I have trained the model with stress. I am using the develop branch of Nequip and the main branch of Allegro. For pair_allegro, I am using the stress branch. I am encountering the following error:

ERROR: Pair style Allegro does not support per-atom virial

I have compiled lammps with pytorch version 1.11.0. Please, let me know if you have any suggestions. Thanks.

Virial and Lammps interface

Can I just add the lines below from pair_nequip to enable the virial for NPT simulations?

  // Read the virial from the model output and accumulate it (if requested)
  if (vflag)
  {
    if (debug_mode)
      printf("reading virial\n");
    torch::Tensor v_tensor = output.at("virial").toTensor().cpu();
    auto v = v_tensor.accessor<float, 3>();
    // Convert from 3x3 symmetric tensor format, which NequIP outputs, to the flattened form LAMMPS expects
    // First [0] index on v is batch
    virial[0] += v[0][0][0];
    virial[1] += v[0][1][1];
    virial[2] += v[0][2][2];
    virial[3] += v[0][0][1];
    virial[4] += v[0][0][2];
    virial[5] += v[0][1][2];

    if (debug_mode)
    {
      for (int ii = 0; ii < 6; ii++)
      {
        printf("virial  %.10g\n", virial[ii]);
      }
    }
  }
  if (vflag_atom)
  {
    error->all(FLERR, "Pair style Allegro does not support per-atom virial");
  }

I use += instead of = so that the different blocks of the virial accumulate correctly in parallel computation.

Or just use if (vflag_fdotr) virial_fdotr_compute();?

Also, does it make sense to assign eng_vdwl = 0.0;? In typical LAMMPS NN interfaces such as DeepMD and IAP, eng_vdwl += ... is implemented. The same applies to the force array f and to eatom.
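
For what it's worth, the += convention matters beyond MPI: LAMMPS zeroes the global accumulators once per step (via ev_init/ev_setup), and with hybrid/overlay several sub-styles tally into the same eng_vdwl and virial arrays, so a plain assignment would discard another style's contribution. A minimal self-contained C++ sketch of the pattern (the names only mirror the LAMMPS Pair members; this is not pair_allegro code):

// Self-contained sketch of the accumulate-vs-assign convention.
// LAMMPS resets eng_vdwl and virial[6] once per step; every style,
// including each sub-style of hybrid/overlay, then adds with +=.
#include <cstdio>

struct PairLike {
  double eng_vdwl = 0.0;
  double virial[6] = {0, 0, 0, 0, 0, 0};

  void ev_init() {                 // stands in for LAMMPS ev_init(eflag, vflag)
    eng_vdwl = 0.0;
    for (double &v : virial) v = 0.0;
  }

  // One style's contribution for this step, already summed over the
  // rank's local atoms.
  void tally(double energy, const double v[6]) {
    eng_vdwl += energy;                          // accumulate, never assign
    for (int i = 0; i < 6; i++) virial[i] += v[i];
  }
};

int main() {
  PairLike p;
  p.ev_init();
  const double v1[6] = {1, 1, 1, 0, 0, 0};       // e.g. the ML model's virial
  const double v2[6] = {2, 2, 2, 0, 0, 0};       // e.g. an overlaid lj/cut virial
  p.tally(-10.0, v1);
  p.tally(-1.5, v2);
  std::printf("etot=%g vxx=%g\n", p.eng_vdwl, p.virial[0]);  // etot=-11.5 vxx=3
  return 0;
}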

In my case, phase transitions make NPT simulations vital, so a pair_allegro with a reliable virial is essential.

Thank you for the help in advance!

Mix Allegro and LJ type pair styles

I expect to be able to mix Allegro and other pair styles as follows in LAMMPS.

# DEFINE ALLEGRO MODEL
pair_style hybrid/overlay allegro lj/cut 2.5
pair_coeff * * MLFF.pth Pt Pd
pair_coeff 1 2 1 1.1 2.8

When I do this I get the following error

ERROR: Pair coeff for hybrid has invalid style (src/KOKKOS/pair_hybrid_overlay_kokkos.cpp:63)

I am guessing this is caused by Allegro not being supported in pair_hybrid with Kokkos? Is this something easy to resolve?

Error with the new pair_allegro-stress branch

Hi Allegro developers,

Thank you for making the update on the stress branch of pair_allegro. I am trying to install this new branch to allow stress prediction in my MD production. Previously I have used the main branch and made several successful runs on NVT MD. But with this stress branch, some errors occurred when producing the NpT MD trajectories.

I used the following versions of the packages:
torch 1.11.0
libtorch 1.13.0+cu116
(since the documentation says avoid 1.12)
CUDA 11.4.2
cuDNN 8.5.0.96

LAMMPS compiled successfully according to the documentation, and the force field was trained with stresses included. In the following MD run, I got this LAMMPS output:

LAMMPS (29 Sep 2021)
  using 1 OpenMP thread(s) per MPI task
Reading data file ...
  triclinic box = (0.0000000 0.0000000 0.0000000) to (36.000000 36.000000 36.000000) with tilt (0.0000000 0.0000000 0.0000000)
  1 by 1 by 1 MPI processor grid
  reading atoms ...
  2592 atoms
  read_data CPU = 0.023 seconds
Allegro is using input precision f and output precision d
Allegro is using device cuda
Allegro: Loading model from ./ff_m3.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | H | 1 | H
1 | Pb | 2 | Pb
2 | C | 3 | C
3 | Br | 4 | Br
4 | N | 5 | N
Neighbor list info ...
  update every 1 steps, delay 10 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 7
  ghost atom cutoff = 7
  binsize = 3.5, bins = 11 11 11
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair allegro, perpetual
      attributes: full, newton on, ghost
      pair build: full/bin/ghost
      stencil: full/ghost/bin/3d
      bin: standard
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.001

It stopped with this error:

terminate called after throwing an instance of 'c10::Error'
  what():  expected scalar type Double but found Float
Exception raised from data_ptr<double> at aten/src/ATen/core/TensorMethods.cpp:20 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x3e (0x147bab53f86e in /home/libtorch-new/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5c (0x147bab50a3a8 in /home/libtorch-new/lib/libc10.so)
frame #2: double* at::TensorBase::data_ptr<double>() const + 0x116 (0x147b8207d796 in /home/libtorch-new/lib/libtorch_cpu.so)
frame #3: at::TensorAccessor<double, 2ul, at::DefaultPtrTraits, long> at::TensorBase::accessor<double, 2ul>() const & + 0x4b (0x7198eb in /home/lammps-stable_29Sep2021/build/lmp)
frame #4: /home/lammps-stable_29Sep2021/build/lmp() [0x722dd2]
frame #5: /home/lammps-stable_29Sep2021/build/lmp() [0x55bd22]
frame #6: /home/lammps-stable_29Sep2021/build/lmp() [0x5163c9]
frame #7: /home/lammps-stable_29Sep2021/build/lmp() [0x468e4a]
frame #8: /home/lammps-stable_29Sep2021/build/lmp() [0x469553]
frame #9: /home/lammps-stable_29Sep2021/build/lmp() [0x444b28]
frame #10: __libc_start_main + 0xf3 (0x147b51b9f493 in /lib64/libc.so.6)
frame #11: /home/lammps-stable_29Sep2021/build/lmp() [0x44502e]

/var/spool/PBS/mom_priv/jobs/6986068.pbs.SC: line 11: 2239752 Aborted                 /home/lammps-stable_29Sep2021/build/lmp -in ./npt.in > logfile_$PBS_JOBID

I wonder if I used a combination of packages that the new stress branch is not compatible with?

Thank you so much.
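
For reference, the abort happens in at::TensorBase::data_ptr<double> (frame #2), i.e. an accessor<double, 2> was taken on a tensor the model produced in float. A minimal defensive pattern in libtorch, shown here only as an illustration and not as the actual stress-branch fix:

// Illustration: convert explicitly before taking a typed accessor, so the
// accessor's scalar type always matches the tensor's. Requires libtorch.
#include <torch/torch.h>
#include <cstdio>

int main() {
  // Stand-in for a model output that may arrive as float32.
  torch::Tensor v = torch::rand({3, 3}, torch::kFloat32);
  // .to() copies only when needed; it is a no-op if the tensor is
  // already a double tensor on the CPU.
  torch::Tensor vd = v.to(torch::kCPU, torch::kFloat64);
  auto a = vd.accessor<double, 2>();   // safe: dtype now matches
  std::printf("v[0][0] = %f\n", a[0][0]);
  return 0;
}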

Configuring LAMMPS with pair_allegro

Having problems building LAMMPS with cmake after patching with pair_allegro.

OS: opensuse linux SLES 15.1 (https://www.archer2.ac.uk/about/hardware.html)
LAMMPS version: 20220623
KOKKOS: no

Error messages:

/path/to/lammps/src/pair_allegro.cpp:125:33: error: 'half' is a protected member of 'LAMMPS_NS::NeighRequest'
  neighbor->requests[irequest]->half = 0;
                                ^
/path/to/lammps/src/neigh_request.h:55:7: note: declared protected here
  int half;    // half neigh list (set by default)
      ^
/path/to/lammps/src/pair_allegro.cpp:126:33: error: 'full' is a protected member of 'LAMMPS_NS::NeighRequest'
  neighbor->requests[irequest]->full = 1;
                                ^
/path/to/lammps/src/neigh_request.h:56:7: note: declared protected here
  int full;    // full neigh list
      ^
/path/to/lammps/src/pair_allegro.cpp:128:33: error: 'ghost' is a protected member of 'LAMMPS_NS::NeighRequest'
  neighbor->requests[irequest]->ghost = 1;
                                ^
/path/to/lammps/src/neigh_request.h:69:7: note: declared protected here
  int ghost;           // 1 if includes ghost atom neighbors
      ^

Removing pair_allegro.cpp and pair_allegro.h from /path/to/lammps/src lets the build proceed and LAMMPS builds. For what it's worth, the same problem arises with pair_nequip; if the patching is not done, the build goes through.

Any help will be very appreciated!
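
For context, the errors come from the old neighbor-request idiom, whose fields 2022-era LAMMPS releases made protected; newer trees expose a flag-based call instead. A sketch of the two idioms, compilable only inside the LAMMPS source tree (names follow the LAMMPS headers of the respective versions; verify against your checkout before relying on this):

// Illustration only, not a patch. Inside a pair style's init_style():
#include "neighbor.h"   // LAMMPS header providing Neighbor and NeighConst

void PairAllegro::init_style()
{
  // Old idiom (stable_29Sep2021 and earlier), the one failing above:
  //   int irequest = neighbor->request(this, instance_me);
  //   neighbor->requests[irequest]->half  = 0;
  //   neighbor->requests[irequest]->full  = 1;
  //   neighbor->requests[irequest]->ghost = 1;

  // New idiom (2022 and later): the request fields are protected, so the
  // same full, ghost-including neighbor list is requested through flags:
  neighbor->add_request(this, NeighConst::REQ_FULL | NeighConst::REQ_GHOST);
}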

Problems parallelizing across more than 1 GPU

Hello Allegro team,

I compiled pair allegro using:

cmake -C ../cmake/presets/kokkos-cuda.cmake ../cmake -DPKG_KOKKOS=ON -DKokkos_ARCH_VOLTA70=yes -D PKG_OPENMP=yes -D Kokkos_ENABLE_OPENMP=yes -D Kokkos_ENABLE_CUDA=yes -DCMAKE_PREFIX_PATH=../../pytorch-install/ -D Kokkos_ARCH_KNL=yes -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.6 -DKokkos_ARCH_MAXWELL50=no

When running LAMMPS with the following command:

CUDA_VISIBLE_DEVICES=6,7 mpiexec.hydra -np 1 lmp -sf kk -k on g 2 -pk kokkos newton on neigh full gpu/aware off -in in.rdf

All works well, but I am not parallelizing across multiple GPUs:

LAMMPS (29 Sep 2021 - Update 2)
KOKKOS mode is enabled (src/KOKKOS/kokkos.cpp:97)
  will use up to 2 GPU(s) per node
  using 2 OpenMP thread(s) per MPI task
  using 2 OpenMP thread(s) per MPI task
New timer settings: style=full  mode=nosync  timeout=off
Reading data file ...
  orthogonal box = (0.0000000 0.0000000 0.0000000) to (12.012971 12.012971 12.012971)
  1 by 1 by 1 MPI processor grid
  reading atoms ...
  200 atoms
  read_data CPU = 0.002 seconds
Allegro is using device cuda
Allegro: Loading model from deployed.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | N | 1 | N
1 | Si | 2 | Si
2 | O | 3 | O
3 | Ti | 4 | Ti
WARNING: Using 'neigh_modify every 1 delay 0 check yes' setting during minimization (src/min.cpp:188)
Neighbor list info ...
  update every 1 steps, delay 0 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 8
  ghost atom cutoff = 8
  binsize = 8, bins = 2 2 2
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair allegro/kk, perpetual
      attributes: full, newton on, ghost, kokkos_device
      pair build: full/bin/ghost/kk/device
      stencil: full/ghost/bin/3d
      bin: kk/device
Setting up cg/kk style minimization ...
  Unit style    : metal
  Current step  : 0
WARNING: Fixes cannot yet send exchange data in Kokkos communication, switching to classic exchange/border communication (src/KOKKOS/comm_kokkos.cpp:581)
Per MPI rank memory allocation (min/avg/max) = 2.899 | 2.899 | 2.899 Mbytes
Step Temp E_pair E_mol TotEng Press 
       0            0    -1615.397            0    -1615.397            0 
[W graph_fuser.cpp:105] Warning: operator() profile_node %483 : int[] = prim::profile_ivalue(%481)
 does not have profile information (function operator())
      10            0   -1629.4897            0   -1629.4897            0 
      20            0   -1629.7912            0   -1629.7912            0 
      30            0   -1629.8219            0   -1629.8219            0 
      40            0   -1629.8277            0   -1629.8277            0 
      50            0   -1629.8281            0   -1629.8281            0 
Loop time of 27.655 on 2 procs for 50 steps with 200 atoms

84.3% CPU use with 1 MPI tasks x 2 OpenMP threads

Minimization stats:
  Stopping criterion = linesearch alpha is zero
  Energy initial, next-to-last, final = 
     -1615.39700651169  -1629.82812309265  -1629.82812070847
  Force two-norm initial, final = 22.775953 0.022119489
  Force max component initial, final = 3.3201380 0.0077057327
  Final line search alpha, max atom move = 1.5258789e-05 1.1758015e-07
  Iterations, force evaluations = 50 131

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg|  %CPU | %total
-----------------------------------------------------------------------
Pair    | 27.626     | 27.626     | 27.626     |   0.0 |  84.3 | 99.90
Neigh   | 0          | 0          | 0          |   0.0 | 100.0 |  0.00
Comm    | 0.010064   | 0.010064   | 0.010064   |   0.0 | 100.0 |  0.04
Output  | 0.00099909 | 0.00099909 | 0.00099909 |   0.0 | 100.0 |  0.00
Modify  | 0          | 0          | 0          |   0.0 | 100.0 |  0.00
Other   |            | 0.01761    |            |       |       |  0.06

Nlocal:        200.000 ave         200 max         200 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost:        2302.00 ave        2302 max        2302 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs:         0.00000 ave           0 max           0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs:      49232.0 ave       49232 max       49232 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 49232
Ave neighs/atom = 246.16000
Neighbor list builds = 0
Dangerous builds = 0
Replicating atoms ...
  orthogonal box = (0.0000000 0.0000000 0.0000000) to (24.025942 24.025942 24.025942)
  1 by 1 by 1 MPI processor grid
  1600 atoms
  replicate CPU = 0.003 seconds
System init for write_restart ...
System init for write_data ...
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.00025
Per MPI rank memory allocation (min/avg/max) = 3.746 | 3.746 | 3.746 Mbytes
Step Temp Lx Ly Lz TotEng Pxx Pyy Pzz 
       0          500    24.025942    24.025942    24.025942   -12206.349    7740.1119    8286.4849    7850.5367 

When running it with the following command:
CUDA_VISIBLE_DEVICES=0,7 mpiexec.hydra -np 2 lmp -sf kk -k on g 2 -pk kokkos newton on neigh full -in in.rdf

LAMMPS (29 Sep 2021 - Update 2)
KOKKOS mode is enabled (src/KOKKOS/kokkos.cpp:97)
  will use up to 2 GPU(s) per node
WARNING: Detected MPICH. Disabling GPU-aware MPI (src/KOKKOS/kokkos.cpp:303)
  using 1 OpenMP thread(s) per MPI task
  using 1 OpenMP thread(s) per MPI task
New timer settings: style=full  mode=nosync  timeout=off
Reading restart file ...
  restart file = 29 Sep 2021, LAMMPS = 29 Sep 2021
WARNING: Restart file used different # of processors: 1 vs. 2 (src/read_restart.cpp:658)
  restoring atom style atomic/kk from restart
  orthogonal box = (0.0000000 0.0000000 0.0000000) to (24.025942 24.025942 24.025942)
  1 by 1 by 2 MPI processor grid
  pair style allegro/kk stores no restart info
  1600 atoms
  read_restart CPU = 0.008 seconds
Allegro is using device cuda:0
Allegro is using device cuda:1
Allegro: Loading model from deployed.pth
Allegro: Loading model from deployed.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | N | 1 | N
1 | Si | 2 | Si
2 | O | 3 | O
3 | Ti | 4 | Ti
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | N | 1 | N
1 | Si | 2 | Si
2 | O | 3 | O
3 | Ti | 4 | Ti
Neighbor list info ...
  update every 1 steps, delay 5 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 8
  ghost atom cutoff = 8
  binsize = 8, bins = 4 4 4
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair allegro/kk, perpetual
      attributes: full, newton on, ghost, kokkos_device
      pair build: full/bin/ghost/kk/device
      stencil: full/ghost/bin/3d
      bin: kk/device
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.00025
terminate called after throwing an instance of 'c10::ValueError'
  what():  Specified device cuda:1 does not match device of data cuda:0
Exception raised from make_tensor at /nfs/site/disks/msironml/pair_allegro/pytorch-build-cu116/aten/src/ATen/Functions.cpp:24 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x55 (0x2aaaaabc0ff5 in /pair_allegro/pytorch-install/lib/libc10.so)
frame #1: <unknown function> + 0xc8faba (0x2aaac65feaba in /pair_allegro/pytorch-install/lib/libtorch_cpu.so)
frame #2: lmp() [0xb50a62]
frame #3: lmp() [0xb66e3b]
frame #4: lmp() [0x809a92]
frame #5: lmp() [0x53f62d]
frame #6: lmp() [0x487622]
frame #7: lmp() [0x487c93]
frame #8: lmp() [0x488138]
frame #9: lmp() [0x487688]
frame #10: lmp() [0x487c93]
frame #11: lmp() [0x4390e9]
frame #12: __libc_start_main + 0xf5 (0x2aaada74a765 in /lib64/libc.so.6)
frame #13: lmp() [0x463459]

Is my command formatted in an improper way to parallelize across 2 GPUs? I have access to a computer with 8 GPUs.

Thanks for your help!

Issue of running NEB with mpirun

Hello Maintainers,

I've encountered an issue after compiling pair_allegro using the provided LAMMPS version in the repository. Specifically, I'm having trouble executing the "neb" command in LAMMPS.

The command I used is:
mpiexec -np 6 lmp -partition 6x1 -in in.neb.sivac

Here, in.neb.sivac is sourced from the example folder in LAMMPS.

The error I received is:

LAMMPS (29 Sep 2021)
Running on 6 partitions of processors
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

For building lmp, I used the following command:

cmake ../cmake \
-DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'`\
-DPKG_KOKKOS=ON -DKokkos_ENABLE_CUDA=ON\
-DCUDA_TOOLKIT_ROOT_DIR=/cm/shared/apps/cudnn7.6-cuda10.2/7.6.5.32 \
-DCUDNN_LIBRARY_PATH=/cm/shared/apps/cudnn7.6-cuda10.2/7.6.5.32/lib64/libcudnn.so \
-DCUDNN_INCLUDE_PATH=/cm/shared/apps/cudnn7.6-cuda10.2/7.6.5.32/include \
-DTorch_DIR=/home/wenjiang0716/anaconda3/envs/allegro_env/lib/python3.9/site-packages/torch/share/cmake/Torch \
-DMKL_INCLUDE_DIR="$CONDA_PREFIX/include"

Output of lmp -h:

Large-scale Atomic/Molecular Massively Parallel Simulator - 29 Sep 2021 - Update 2
Git info (HEAD / patch_29Sep2021_update2-modified)

Usage example: lmp -var t 300 -echo screen -in in.alloy

List of command line options supported by this LAMMPS executable:

-echo none/screen/log/both  : echoing of input script (-e)
-help                       : print this help message (-h)
-in none/filename           : read input from file or stdin (default) (-i)
-kokkos on/off ...          : turn KOKKOS mode on or off (-k)
-log none/filename          : where to send log output (-l)
-mdi '<mdi flags>'          : pass flags to the MolSSI Driver Interface
-mpicolor color             : which exe in a multi-exe mpirun cmd (-m)
-cite                       : select citation reminder style (-c)
-nocite                     : disable citation reminder (-nc)
-package style ...          : invoke package command (-pk)
-partition size1 size2 ...  : assign partition sizes (-p)
-plog basename              : basename for partition logs (-pl)
-pscreen basename           : basename for partition screens (-ps)
-restart2data rfile dfile ... : convert restart to data file (-r2data)
-restart2dump rfile dgroup dstyle dfile ... 
                            : convert restart to dump file (-r2dump)
-reorder topology-specs     : processor reordering (-r)
-screen none/filename       : where to send screen output (-sc)
-skiprun                    : skip loops in run and minimize (-sr)
-suffix gpu/intel/opt/omp   : style suffix to apply (-sf)
-var varname value          : set index style variable (-v)

OS: Linux "CentOS Linux 7 (Core)" 3.10.0-1160.90.1.el7.x86_64 on x86_64

Compiler: GNU C++ 8.3.0 with OpenMP 4.5
C++ standard: C++14
MPI v3.1: MPICH Version:	3.3.2
MPICH Release date:	Tue Nov 12 21:23:16 CST 2019
MPICH ABI:	13:8:1

and my mpiexec --version:

HYDRA build details:
    Version:                                 3.3.2
    Release Date:                            Tue Nov 12 21:23:16 CST 2019
    CC:                              gcc -std=gnu99  -m64 -m64 
    CXX:                             g++  -I/cm/shared/apps/gcc/current/include/c++/4.8.5/backward/backward_old -m64 -m64 
    F77:                             gfortran -m64 -m64 
    F90:                             gfortran -m64 -m64 
    Configure options:                       '--disable-option-checking' '--prefix=/cm/shared/apps/mpich/ge/gcc/64/3.3.2' '--enable-cxx' '--with-romio' '--enable-shared' '--with-comm=shared' '--disable-devdebug' 'CC=gcc -std=gnu99' 'CFLAGS=-m64 -O2' 'LDFLAGS=-m64' 'CXX=g++' 'CXXFLAGS=-I/cm/shared/apps/gcc/current/include/c++/4.8.5/backward/backward_old -m64 -O2' 'FC=gfortran' 'FCFLAGS=-m64 -O2' 'F77=gfortran' 'FFLAGS=-m64 -O2' '--cache-file=/dev/null' '--srcdir=.' 'LIBS=' 'CPPFLAGS= -I/root/rpmbuild/BUILD/mpich-3.3.2/src/mpl/include -I/root/rpmbuild/BUILD/mpich-3.3.2/src/mpl/include -I/root/rpmbuild/BUILD/mpich-3.3.2/src/openpa/src -I/root/rpmbuild/BUILD/mpich-3.3.2/src/openpa/src -D_REENTRANT -I/root/rpmbuild/BUILD/mpich-3.3.2/src/mpi/romio/include' 'MPLLIBNAME=mpl'
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Checkpointing libraries available:       
    Demux engines available:                 poll select

Would you have any insights or suggestions to resolve this issue? The torch version is 1.11 and nequip is 0.5.5.

Thank you for your assistance.

Trouble compiling lammps with pair_allegro

Hi,

I'm having trouble at the compile step for LAMMPS with CUDA+Kokkos. My environment looks as follows:
GPU: A100
OS: Debian GNU/Linux 10 (buster)
gcc: 8.3.0
CUDA: 11.3
pytorch: 1.11
cmake: 3.23
All other packages are the latest version/repo/default.

Compile steps:
git clone https://github.com/lammps/lammps.git
git clone https://github.com/mir-group/pair_allegro.git
cd pair_allegro && ./patch_lammps.sh ../lammps/

cmake ../cmake -DCMAKE_PREFIX_PATH=/home/mphuthi/software/libtorch/ -DMKL_INCLUDE_DIR=`python -c "import sysconfig;from pathlib import Path;print(Path(sysconfig.get_paths()[\"include\"]).parent)"` -DPKG_KOKKOS=ON -DKokkos_ENABLE_CUDA=ON -DKokkos_ARCH_AMPERE80=ON

make

The error I get is:

[ 73%] Building CXX object CMakeFiles/lammps.dir/home/mphuthi/software/lammps/src/pair_allegro.cpp.o
/home/mphuthi/software/lammps/src/pair_allegro.cpp(125): error: member "LAMMPS_NS::NeighRequest::half"
/home/mphuthi/software/lammps/src/neigh_request.h(54): here is inaccessible

/home/mphuthi/software/lammps/src/pair_allegro.cpp(126): error: member "LAMMPS_NS::NeighRequest::full"
/home/mphuthi/software/lammps/src/neigh_request.h(55): here is inaccessible

/home/mphuthi/software/lammps/src/pair_allegro.cpp(128): error: member "LAMMPS_NS::NeighRequest::ghost"
/home/mphuthi/software/lammps/src/neigh_request.h(68): here is inaccessible

/home/mphuthi/software/lammps/src/pair_allegro.cpp(361): error: namespace "std" has no member "exclusive_scan"

/home/mphuthi/software/lammps/src/pair_allegro.cpp(399): warning: variable "jtype" was declared but never referenced

/home/mphuthi/software/lammps/src/pair_allegro.cpp(318): warning: variable "newton_pair" was declared but never referenced

4 errors detected in the compilation of "/home/mphuthi/software/lammps/src/pair_allegro.cpp".
make[2]: *** [CMakeFiles/lammps.dir/build.make:4598: CMakeFiles/lammps.dir/home/mphuthi/software/lammps/src/pair_allegro.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:322: CMakeFiles/lammps.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
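
One note on the fourth error: std::exclusive_scan lives in <numeric> and requires C++17, so it disappears when the build falls back to an older standard. A standalone toolchain check, unrelated to the pair_allegro sources themselves:

// Standalone check for std::exclusive_scan (C++17, <numeric>).
// Build with: g++ -std=c++17 check_scan.cpp
#include <numeric>
#include <vector>
#include <cstdio>

int main() {
  std::vector<int> counts{3, 1, 4, 1, 5};
  std::vector<int> offsets(counts.size());
  // offsets[i] = sum of counts[0..i-1], starting from 0 -- the usual way
  // to turn per-atom neighbor counts into flat array offsets.
  std::exclusive_scan(counts.begin(), counts.end(), offsets.begin(), 0);
  for (int o : offsets) std::printf("%d ", o);   // prints: 0 3 4 8 9
  std::printf("\n");
  return 0;
}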

Some problems encountered when using multiple GPUs

Hi Alby @Linux-cpp-lisp ,

Thank you so much for this useful tool!

I used to run MD with a single GPU and everything worked fine.
However, recently, when I wanted to expand the size of the system and used 4 GPUs to speed up the simulation, I found that it just copied the same task four times, sent one copy to each GPU, and ran them separately, instead of combining the four GPUs on one task.

When using a multi-GPU machine, I recompiled LAMMPS according to the guidelines. When I run the command
mpirun -np Np lmp -sf kk -k on g Ng -pk kokkos newton on neigh full -in in.script
with Np = Ng = 1, 2, 3, 4 to run the same task on different numbers of GPUs, the runs all take almost the same time.

The result is the same after adding gpu/aware off to the parameters.
All the Kokkos settings are given on the command line; in the input file, pair_style allegro is the same as before.

I've tried a lot of things, but none of them work well for this problem. Is my compilation setup wrong or am I using the wrong command?

Using pair_allegro without stress on the newest version of LAMMPS

Hi,

Is there a way to use the newest version of LAMMPS with an Allegro model that does not output stresses? It seems that the version of LAMMPS compatible with the main branch doesn't support certain forms of communication within KOKKOS, such as borders communication, which I assume is used when running Allegro within LAMMPS.

Thanks

[QUESTION] Error while using potential in lammps.

I trained an Allegro model on a GeSe system and deployed it as a LAMMPS pair potential following the documented steps.
Then I tried to run an MD simulation using the LAMMPS input below.

echo both
boundary p p p
processors * * * grid numa
units metal
newton on
read_data coo
pair_style allegro
pair_coeff * * ../deployed.pth Ge Se

mass 1 72.61
mass 2 78.96

thermo 10
compute ppa all pe/atom
thermo_style custom step temp etotal pe press vol spcpu cpuremain
timestep 0.002

dump traj all custom 10 melt.dump id type x y z ix iy iz c_ppa fx fy fz
dump_modify traj sort id
velocity all create 1000 20201021
fix int all nve
run 2

LAMMPS works well in the case of run 1, but for runs longer than one step (e.g., run 2 or more) it terminates with the errors below.

LAMMPS (29 Sep 2021 - Update 3)
  using 1 OpenMP thread(s) per MPI task
boundary p p p
processors * * * grid numa
units metal
newton on
read_data coo
Reading data file ...
  triclinic box = (0.0000000 0.0000000 0.0000000) to (15.141223 15.141223 15.141223) with tilt (0.0000000 0.0000000 0.0000000)
  1 by 1 by 1 MPI processor grid
  1 by 1 by 1 core grid within node
  reading atoms ...
  120 atoms
  read_data CPU = 0.001 seconds
#replicate 3 3 3
pair_style allegro
Allegro is using device cpu
pair_coeff * * ../deployed.pth Ge Se
Allegro: Loading model from ../deployed.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | Ge | 1 | Ge
1 | Se | 2 | Se

mass 1 72.61
mass 2 78.96

thermo 10
compute ppa all pe/atom
thermo_style custom step temp etotal pe press vol spcpu cpuremain
timestep 0.002

dump traj all custom 10 melt.dump id type x y z ix iy iz c_ppa fx fy fz
dump_modify traj sort id
velocity all create 1000 20201021
fix int all nve
run 2
Neighbor list info ...
  update every 1 steps, delay 10 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 8
  ghost atom cutoff = 8
  binsize = 4, bins = 4 4 4
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair allegro, perpetual
      attributes: full, newton on, ghost
      pair build: full/bin/ghost
      stencil: full/ghost/bin/3d
      bin: standard
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.002
Per MPI rank memory allocation (min/avg/max) = 4.612 | 4.612 | 4.612 Mbytes
Step Temp TotEng PotEng Press Volume S/CPU CPULeft 
       0         1000   -463.34487   -478.72682    4733.1234    3471.2258            0            0 
terminate called after throwing an instance of 'std::runtime_error'
  what():  The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: Unsupported value kind: Tensor

Aborted

In the case of run 1, LAMMPS ended successfully with the output below:

LAMMPS (29 Sep 2021 - Update 3)
  using 1 OpenMP thread(s) per MPI task
boundary p p p
processors * * * grid numa
units metal
newton on
read_data coo
Reading data file ...
  triclinic box = (0.0000000 0.0000000 0.0000000) to (15.141223 15.141223 15.141223) with tilt (0.0000000 0.0000000 0.0000000)
  1 by 1 by 1 MPI processor grid
  1 by 1 by 1 core grid within node
  reading atoms ...
  120 atoms
  read_data CPU = 0.001 seconds
#replicate 3 3 3
pair_style allegro
Allegro is using device cpu
pair_coeff * * ../deployed.pth Ge Se
Allegro: Loading model from ../deployed.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | Ge | 1 | Ge
1 | Se | 2 | Se

mass 1 72.61
mass 2 78.96

thermo 10
compute ppa all pe/atom
thermo_style custom step temp etotal pe press vol spcpu cpuremain
timestep 0.002

dump traj all custom 10 melt.dump id type x y z ix iy iz c_ppa fx fy fz
dump_modify traj sort id
velocity all create 1000 20201021
fix int all nve
run 1
Neighbor list info ...
  update every 1 steps, delay 10 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 8
  ghost atom cutoff = 8
  binsize = 4, bins = 4 4 4
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair allegro, perpetual
      attributes: full, newton on, ghost
      pair build: full/bin/ghost
      stencil: full/ghost/bin/3d
      bin: standard
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.002
Per MPI rank memory allocation (min/avg/max) = 4.612 | 4.612 | 4.612 Mbytes
Step Temp TotEng PotEng Press Volume S/CPU CPULeft 
       0         1000   -463.34487   -478.72682    4733.1234    3471.2258            0            0 
       1    994.88372   -463.34483   -478.64809    4708.9075    3471.2258    1.7103956            0 
Loop time of 0.584683 on 1 procs for 1 steps with 120 atoms

Performance: 0.296 ns/day, 81.206 hours/ns, 1.710 timesteps/s
94.0% CPU use with 1 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.58463    | 0.58463    | 0.58463    |   0.0 | 99.99
Neigh   | 0          | 0          | 0          |   0.0 |  0.00
Comm    | 6.9141e-06 | 6.9141e-06 | 6.9141e-06 |   0.0 |  0.00
Output  | 3.5048e-05 | 3.5048e-05 | 3.5048e-05 |   0.0 |  0.01
Modify  | 2.861e-06  | 2.861e-06  | 2.861e-06  |   0.0 |  0.00
Other   |            | 5.484e-06  |            |       |  0.00

Nlocal:        120.000 ave         120 max         120 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost:        936.000 ave         936 max         936 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs:         0.00000 ave           0 max           0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs:      8850.00 ave        8850 max        8850 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 8850
Ave neighs/atom = 73.750000
Neighbor list builds = 0
Dangerous builds = 0
write_data liq.dat
System init for write_data ...


Total wall time: 0:00:01

Running pair_allegro with Kokkos on multiple GPUs

Hi, I am trying to run pair_allegro with Kokkos on a system with 4 GPUs. I am creating 4 MPI processes with 1 GPU each.

The command looks something like this: mpirun -d -x LD_LIBRARY_PATH -np 4 --mca pml ob1 --mca btl ^openib /home/ubuntu/lammps_install/bin/lmp -sf kk -k on g 1 -pk kokkos newton on neigh full < /home/ubuntu/md_allegro.in

and I get an error saying "Specified device cuda:1 does not match device of data cuda:0". This occurs in the model forward call inside pair_allegro. I tried debugging and it seems that the device the model is initially loaded onto matches the data when later calling the compute method.

Not sure if the command I am running is incorrect or if it's some other issue. Thanks!
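
For what it's worth, the usual way to avoid that mismatch in libtorch is to load the TorchScript module directly onto each rank's device, so the weights and the input tensors built later on that device cannot disagree. A hedged sketch, not the actual pair_allegro logic (the local-rank bookkeeping here is an assumption):

// Sketch: per-MPI-rank device selection for a TorchScript model.
#include <torch/script.h>
#include <string>

torch::jit::Module load_on_rank_device(const std::string &path, int local_rank) {
  // One GPU per rank, indexed by the node-local rank: cuda:0, cuda:1, ...
  torch::Device device(torch::kCUDA, local_rank);
  // torch::jit::load takes an optional device, placing the weights there
  // immediately instead of on whatever device they were saved from.
  torch::jit::Module model = torch::jit::load(path, device);
  model.eval();
  return model;
}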

RuntimeError: CUDA error: device-side assert triggered

I get the following error when running a standard NVT simulation.


LAMMPS (7 Feb 2024 - Development - patch_7Feb2024_update1-247-g9d9dbc1fa8-modified)
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [1,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [1,0,0], thread: [65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [1,0,0], thread: [66,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
[... the same index-out-of-bounds assertion repeats for many more blocks and threads ...]
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [99,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [109,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [110,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [111,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [112,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [113,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [114,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [8,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [9,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [10,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [11,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [13,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [21,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [22,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [23,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [24,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [25,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [26,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [27,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [28,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [66,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [67,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [68,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [69,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [70,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [71,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [72,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [73,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [74,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [75,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [76,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [80,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [81,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [82,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [83,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [84,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [85,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [86,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [87,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [88,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [89,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [90,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [91,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [8,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [9,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [10,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [11,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [13,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [14,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [16,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [17,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [30,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [31,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [41,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [42,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [43,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [44,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [46,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [47,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [48,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [51,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [52,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [36,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [37,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [38,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [39,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [40,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [44,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [46,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [47,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [48,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [51,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [52,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [53,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [54,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [55,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [56,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [57,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [58,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [59,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [60,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [2,0,0], thread: [61,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [122,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [123,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [124,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [125,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [126,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [8,0,0], thread: [127,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [51,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [52,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [53,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [54,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [55,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [56,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [57,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [58,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [59,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [60,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [74,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [75,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [76,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [77,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [78,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [79,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [80,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [81,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [82,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [83,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [84,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [85,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [86,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [87,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [7,0,0], thread: [88,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [33,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [34,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [8,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [97,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [98,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [99,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [100,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [101,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [102,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [110,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [111,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [112,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [113,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [114,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [115,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [86,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [87,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [88,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [89,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [90,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [91,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [61,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [62,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [63,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [13,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [14,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [16,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [17,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [18,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [19,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [21,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [96,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [97,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [98,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [99,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [118,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [119,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [4,0,0], thread: [120,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [46,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [47,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [48,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [60,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [61,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [62,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [3,0,0], thread: [63,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [13,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [14,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [16,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [17,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [18,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [19,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [30,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [5,0,0], thread: [31,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [16,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [17,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [30,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [31,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [69,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484803030/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [6,0,0], thread: [70,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
[the same assertion repeats for dozens more threads in blocks 4 and 6]
  using 128 OpenMP thread(s) per MPI task
Reading data file ...
  orthogonal box = (0 0 0) to (15.3819 15.3819 12.4454)
  reading atoms ...
  144 atoms
  read_data CPU = 0.007 seconds
Allegro is using device cuda
Allegro: Loading model from ../allegro.pth
Allegro: Freezing TorchScript model...
Type mapping:
Allegro type | Allegro name | LAMMPS type | LAMMPS name
0 | H | 1 | H
1 | C | 2 | C
2 | O | 3 | O
3 | Sc | 4 | Sc
Neighbor list info ...
  update: every = 10 steps, delay = 0 steps, check = no
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.3
  ghost atom cutoff = 5.3
  binsize = 2.65, bins = 6 6 5
  2 neighbor lists, perpetual/occasional/extra = 2 0 0
  (1) pair allegro, perpetual, half/full from (2)
      attributes: half, newton on
      pair build: halffull/newton
      stencil: none
      bin: none
  (2) pair allegro, perpetual
      attributes: full, newton on
      pair build: full/bin/atomonly
      stencil: full/bin/3d
      bin: standard
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.0005
Exception: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/nequip/nn/_grad_output.py", line 32, in forward
        _7 = torch.append(_3, data[k])
      func0 = self.func
      data0 = (func0).forward(data, )
               ~~~~~~~~~~~~~~ <--- HERE
      of = self.of
      _8 = [torch.sum(data0[of])]
  File "code/__torch__/nequip/nn/_graph_mixin.py", line 28, in layer_norm
    input1 = (radial_basis).forward(input0, )
    input2 = (spharm).forward(input1, )
    input3 = (allegro).forward(input2, )
              ~~~~~~~~~~~~~~~~ <--- HERE
    input4 = (edge_eng).forward(input3, )
    input5 = (edge_eng_sum).forward(input4, )
  File "code/__torch__/allegro/nn/_allegro.py", line 113, in AD_unsqueeze_multiple
    _18 = annotate(List[Optional[Tensor]], [active_edges])
    prev_mask = torch.gt(torch.index(cutoff_coeffs, _18), 0)
    _19 = torch.nonzero(torch.gt(cutoff_coeffs, 0))
          ~~~~~~~~~~~~~ <--- HERE
    active_edges0 = torch.squeeze(_19, -1)
    _25 = torch.cat(latent_inputs_to_cat, -1)

Traceback of TorchScript, original code (most recent call last):
  File ".../lib/python3.10/site-packages/nequip/nn/_grad_output.py", line 85, in forward
            wrt_tensors.append(data[k])
        # run func
        data = self.func(data)
               ~~~~~~~~~ <--- HERE
        # Get grads
        grads = torch.autograd.grad(
  File ".../lib/python3.10/site-packages/nequip/nn/_graph_mixin.py", line 356, in layer_norm
    def forward(self, input: AtomicDataDict.Type) -> AtomicDataDict.Type:
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File ".../lib/python3.10/site-packages/allegro/nn/_allegro.py", line 497, in AD_unsqueeze_multiple
            cutoff_coeffs = cutoff_coeffs_all[layer_index]
            prev_mask = cutoff_coeffs[active_edges] > 0
            active_edges = (cutoff_coeffs > 0).nonzero().squeeze(-1)
                            ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    
            # Compute latents
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

Here's the LAMMPS input file:

sample3

units metal
boundary p p p
atom_style atomic

neighbor 2.0 bin
neigh_modify every 10 delay 0 check no

read_data coordinates.lmp

pair_style allegro
pair_coeff * * ../allegro.pth H C O Sc

velocity all create 150 23456

fix 1 all nvt temp 150 150 0.5
timestep 0.0005
thermo_style custom step pe ke etotal temp press vol
thermo 100
dump 1 all custom 1 1.dump id element x y z
dump_modify 1 element H C O Sc

run 2000

fix 1 all nvt temp 150 150 0.5
timestep 0.0005
thermo_style custom step pe ke etotal temp press vol
thermo 1
dump 2 all custom 1 2.dump id element x y z
dump_modify 2 element H C O Sc

run 10000
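
A hedged side note on the input above: device-side asserts from IndexKernel.cu like the ones at the top of this report often appear when the neighbor list has gone stale and an edge index falls out of range. A minimal sketch of more conservative neighbor settings to try, assuming stale lists are the cause:

# rebuild the neighbor list whenever atoms may have moved past the skin,
# instead of only every 10 steps with no check
neigh_modify every 1 delay 0 check yes

This costs some rebuild time but rules out list staleness as the trigger.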

More trouble in LAMMPS compilation due to "LAMMPS_NS"

Hello,
despite reading through the past issues, I still can't manage to compile LAMMPS with pair_allegro.

My environment:
gcc: 9.4.0
CUDA: 11.2.2
cudnn: 8.1.0.77-11.2
pytorch: 1.11
cmake: 3.23.1
GPU: A100

I'm getting libtorch with: wget https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.11.0%2Bcu113.zip , and I'm using lammps-stable_29Sep2021_update2 that I got from here https://github.com/lammps/lammps/releases/tag/stable_29Sep2021_update2

I run "cmake ../cmake -DCMAKE_PREFIX_PATH=../../libtorch/ -DMKL_INCLUDE_DIR=python -c "import sysconfig;from pathlib import Path;print(Path(sysconfig.get_paths()[\"include\"]).parent)" -DPKG_KOKKOS=ON -DKokkos_ENABLE_CUDA=ON -DKokkos_ARCH_AMPERE80=ON"

And I compile with "make -j 16", getting the errors:

  • /lammps-stable_29Sep2021_update2/src/pair_allegro.cpp(129): error: class "LAMMPS_NS::Neighbor" has no member "add_request"

  • /lammps-stable_29Sep2021_update2/src/pair_allegro.cpp(129): error: namespace "LAMMPS_NS::NeighConst" has no member "REQ_FULL"

  • /lammps-stable_29Sep2021_update2/src/pair_allegro.cpp(129): error: namespace "LAMMPS_NS::NeighConst" has no member "REQ_GHOST"
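
A hedged note on these errors: add_request and the NeighConst::REQ_* flags only exist in LAMMPS releases after the 2022 neighbor-request refactor, so this pair_allegro checkout appears to target a newer LAMMPS than stable_29Sep2021_update2. A minimal sketch of one way to resolve it, assuming a post-refactor LAMMPS tree is acceptable:

# fetch a LAMMPS release that already has the new neighbor-request API,
# then re-run patch_lammps.sh and the same cmake command against it
git clone -b stable_23Jun2022 --depth 1 https://github.com/lammps/lammps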

Issue while Linking CXX executable lmp

I'm encountering some issues compiling LAMMPS with pair_allegro.
I'm using spack with the following modules:

module load numlib/mkl/2021.4.0
module load devel/cudnn/10.2
module load compiler/llvm/10.0
module load mpi/openmpi/4.1

Using the patch script and cmake ../cmake -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'`

with

>>> torch.__version__
'1.10.2+cu102'

the lammps build process fails with

[100%] Linking CXX executable lmp
liblammps.a(pair_allegro.cpp.o): In function `LAMMPS_NS::PairAllegro::coeff(int, char**)':
pair_allegro.cpp:(.text+0x11f4): undefined reference to `torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&)'
pair_allegro.cpp:(.text+0x13c2): undefined reference to `torch::jit::freeze_module(torch::jit::Module const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, bool, bool)'
liblammps.a(pair_allegro.cpp.o): In function `LAMMPS_NS::PairAllegro::compute(int, int)':
pair_allegro.cpp:(.text+0x3b71): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `torch::jit::Object::get_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const':
pair_allegro.cpp:(.text._ZNK5torch3jit6Object10get_methodERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE[_ZNK5torch3jit6Object10get_methodERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE]+0x15): undefined reference to `torch::jit::Object::find_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const'
pair_allegro.cpp:(.text._ZNK5torch3jit6Object10get_methodERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE[_ZNK5torch3jit6Object10get_methodERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE]+0x17f): undefined reference to `c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `at::TensorAccessor<float, 2ul, at::DefaultPtrTraits, long> at::TensorBase::accessor<float, 2ul>() const &':
pair_allegro.cpp:(.text._ZNKR2at10TensorBase8accessorIfLm2EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv[_ZNKR2at10TensorBase8accessorIfLm2EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv]+0xb7): undefined reference to `c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `at::TensorAccessor<long, 2ul, at::DefaultPtrTraits, long> at::TensorBase::accessor<long, 2ul>() const &':
pair_allegro.cpp:(.text._ZNKR2at10TensorBase8accessorIlLm2EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv[_ZNKR2at10TensorBase8accessorIlLm2EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv]+0xb7): undefined reference to `c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `at::TensorAccessor<long, 1ul, at::DefaultPtrTraits, long> at::TensorBase::accessor<long, 1ul>() const &':
pair_allegro.cpp:(.text._ZNKR2at10TensorBase8accessorIlLm1EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv[_ZNKR2at10TensorBase8accessorIlLm1EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv]+0xb7): undefined reference to `c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `torch::jit::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&)':
pair_allegro.cpp:(.text._ZN5torch3jit6Module7forwardESt6vectorIN3c106IValueESaIS4_EERKSt13unordered_mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES4_St4hashISD_ESt8equal_toISD_ESaISt4pairIKSD_S4_EEE[_ZN5torch3jit6Module7forwardESt6vectorIN3c106IValueESaIS4_EERKSt13unordered_mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES4_St4hashISD_ESt8equal_toISD_ESaISt4pairIKSD_S4_EEE]+0x7f): undefined reference to `torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) const'
liblammps.a(pair_allegro.cpp.o): In function `c10::IValue::IValue(char const*)':
pair_allegro.cpp:(.text._ZN3c106IValueC2EPKc[_ZN3c106IValueC2EPKc]+0x74): undefined reference to `c10::ivalue::ConstantString::create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
liblammps.a(pair_allegro.cpp.o): In function `c10::Device::validate()':
pair_allegro.cpp:(.text._ZN3c106Device8validateEv[_ZN3c106Device8validateEv]+0x59): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
pair_allegro.cpp:(.text._ZN3c106Device8validateEv[_ZN3c106Device8validateEv]+0x9c): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `c10::TensorOptions::set_dtype(c10::optional<c10::ScalarType>) &':
pair_allegro.cpp:(.text._ZNR3c1013TensorOptions9set_dtypeENS_8optionalINS_10ScalarTypeEEE[_ZNR3c1013TensorOptions9set_dtypeENS_8optionalINS_10ScalarTypeEEE]+0x75): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `std::pair<c10::IValue, c10::IValue>::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, at::Tensor, true>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, at::Tensor&&)':
pair_allegro.cpp:(.text._ZNSt4pairIN3c106IValueES1_EC2INSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN2at6TensorELb1EEEOT_OT0_[_ZNSt4pairIN3c106IValueES1_EC2INSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN2at6TensorELb1EEEOT_OT0_]+0xa7): undefined reference to `c10::ivalue::ConstantString::create(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
liblammps.a(pair_allegro.cpp.o): In function `c10::IValue::toStringView() const':
pair_allegro.cpp:(.text._ZNK3c106IValue12toStringViewEv[_ZNK3c106IValue12toStringViewEv]+0x6b): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_allegro.cpp.o): In function `c10::IValue::toComplexDouble() const':
pair_allegro.cpp:(.text._ZNK3c106IValue15toComplexDoubleEv[_ZNK3c106IValue15toComplexDoubleEv]+0xd9): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_nequip.cpp.o): In function `LAMMPS_NS::PairNEQUIP::coeff(int, char**)':
pair_nequip.cpp:(.text+0xeaf): undefined reference to `torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&)'
pair_nequip.cpp:(.text+0x108c): undefined reference to `torch::jit::freeze_module(torch::jit::Module const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, bool, bool)'
liblammps.a(pair_nequip.cpp.o): In function `LAMMPS_NS::PairNEQUIP::compute(int, int)':
pair_nequip.cpp:(.text+0x467d): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
liblammps.a(pair_nequip.cpp.o): In function `at::TensorAccessor<float, 1ul, at::DefaultPtrTraits, long> at::TensorBase::accessor<float, 1ul>() const &':
pair_nequip.cpp:(.text._ZNKR2at10TensorBase8accessorIfLm1EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv[_ZNKR2at10TensorBase8accessorIfLm1EEENS_14TensorAccessorIT_XT0_ENS_16DefaultPtrTraitsElEEv]+0xb7): undefined reference to `c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
clang-10: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [CMakeFiles/lmp.dir/build.make:128: lmp] Error 1
make[1]: *** [CMakeFiles/Makefile2:1335: CMakeFiles/lmp.dir/all] Error 2
make: *** [Makefile:149: all] Error 2

Does pair_allegro require a different pytorch version?
I found

torch-version: [1.10.0]

and assumed the requirement is similar to pair_nequip's?
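
A hedged reading of the undefined references above: every missing symbol mentions std::__cxx11, the classic signature of a C++11 ABI mismatch, and pip wheels of torch (such as 1.10.2+cu102 here) were built with the pre-cxx11 ABI. A minimal sketch of pointing the build at a cxx11-ABI libtorch instead (the URL follows the usual libtorch naming scheme and should be verified):

# use a cxx11-ABI libtorch rather than the pip wheel's cmake prefix
wget https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.10.2%2Bcu102.zip
unzip libtorch-cxx11-abi-shared-with-deps-1.10.2+cu102.zip
cmake ../cmake -DCMAKE_PREFIX_PATH=$PWD/libtorch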

Problem compiling lammps with kokkos

Dear allegro developers,

I encountered some issues when compiling LAMMPS with CUDA+Kokkos. Could you please offer some suggestions? It seems to be related to libtorch.
Best
Wei

Environment setup:

OS: CentOS Linux release 7.6.1810
GPU: V100
Compiler: GCC 9.5.0
MPI: openmpi-4.1.6 configured --with-cuda
CUDA: 11.2
libtorch: cxx11 ABI, CUDA 11.1 build (libtorch-cxx11-abi-shared-with-deps-1.9.0+cu111)
MKL: 2022.0.1
cuDNN: v8.9.6

Download and configure sources:

git clone -b stable_29Sep2021_update2 --depth 1 git@github.com:lammps/lammps

git clone git@github.com:mir-group/pair_allegro

cd pair_allegro && ./patch_lammps.sh ../lammps/

Build:

cmake ../cmake  -DBUILD_MPI=yes -DCMAKE_PREFIX_PATH=/home/xiewei/libtorch -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda   -DKokkos_CXX_STANDARD=17 -C ../cmake/presets/basic.cmake -C ../cmake/presets/kokkos-cuda.cmake

make -j 

The error messages:

/home/xiewei/lammps/src/pair_allegro.cpp(222): error: namespace "torch::jit" has no member "setFusionStrategy"

/home/xiewei/lammps/src/pair_allegro.cpp(223): error: name followed by "::" must be a class or namespace name

/home/xiewei/lammps/src/pair_allegro.cpp(237): error: namespace "torch::jit" has no member "setFusionStrategy"

/home/xiewei/lammps/src/pair_allegro.cpp(400): warning: variable "jtype" was declared but never referenced

/home/xiewei/lammps/src/pair_allegro.cpp(312): warning: variable "newton_pair" was declared but never referenced

3 errors detected in the compilation of "/home/xiewei/lammps/src/pair_allegro.cpp".
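
A hedged note: torch::jit::setFusionStrategy only appeared around PyTorch 1.11, so libtorch 1.9.0 is most likely simply too old for this pair_allegro version. A minimal sketch, assuming a CUDA 11.x build is wanted:

# swap in a libtorch >= 1.11 and point CMAKE_PREFIX_PATH at it
wget https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.11.0%2Bcu113.zip
unzip libtorch-cxx11-abi-shared-with-deps-1.11.0+cu113.zip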

[QUESTION] Error while using potential in lammps.

In the case of an MD simulation with MPI, LAMMPS didn't proceed past this stage when run as:
mpirun -np 8 lmp -sf omp -pk omp 4 -in in.lammps

run 10
No /omp style for force computation currently active

While it works well in the case of mpirun -np 4 lmp -sf omp -pk omp 8 -in in.lammps, as shown below.
I am wondering if there is a specific limit on the MPI processor grid size. Sometimes the MD simulation also ends with the error below:

  Unit style    : metal
  Current step  : 0
  Time step     : 0.0005
Per MPI rank memory allocation (min/avg/max) = 11.64 | 11.64 | 11.64 Mbytes
Step Temp TotEng PotEng Press Volume S/CPU CPULeft 
       0         1000   -502.17886   -517.56082    4733.1234    3471.2258            0            0 
      10    1037.6838   -502.17892   -518.14053    4911.4855    3471.2258   0.62056358    96670.196 
      20    1149.0517   -502.18269   -519.85735    5438.6038    3471.2258    3.3797967    57200.356 
      30    1366.1239   -502.20265   -523.21631    6466.0332    3471.2258    3.3869016    44029.363 
      40    1706.0198   -502.27363   -528.51555    8074.8025    3471.2258    3.3691646     37465.69 
      50      2092.37   -502.46846    -534.6532    9903.4456    3471.2258      0.80146    44927.751 
      60    2388.6437   -502.88855   -539.63056    11305.746    3471.2258   0.49348786    57677.206 
      70    2591.1369   -503.60771   -543.46446    12264.171    3471.2258   0.49194526    66832.571 
      80    2867.5918   -504.70262   -548.81179    13572.666    3471.2258   0.54046821    72327.097 
      90    3162.0488   -506.21135   -554.84985    14966.367    3471.2258   0.47370461    78332.381 
     100    3463.3768   -508.07882   -561.35234     16392.59    3471.2258   0.43357856    84302.633 
     110    3783.0973   -510.20537   -568.39681    17905.867    3471.2258    0.4624285    88399.774 
     120    4040.5194   -512.46371   -574.61481    19124.277    3471.2258    0.4751138    91522.343 
     130    3916.8145   -468.80556   -529.05384    18538.766    3471.2258   0.56733556    92585.622 
     140    4160.4834   -471.28922    -535.2856    19692.081    3471.2258   0.48887372    94704.054 
     150    5138.8348   -472.80773   -551.85307    24322.739    3471.2258   0.47169693    96834.505 
     160    5735.4544   -477.15941   -565.38192    27146.614    3471.2258   0.47489075    98642.676 

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 39872 RUNNING AT n020
=   EXIT CODE: 6
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
   Intel(R) MPI Library troubleshooting guide:
      https://software.intel.com/node/561764
===================================================================================
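
A hedged note on the -sf omp behavior above: pair_allegro ships no /omp variant, so "No /omp style for force computation currently active" only means the pair style runs unsuffixed; it does not by itself explain the hang. A minimal sketch of controlling threading without the suffix, assuming the intent is 8 ranks with 4 threads each:

# let the pair style's libtorch backend pick up the thread count from the environment
export OMP_NUM_THREADS=4
mpirun -np 8 lmp -in in.lammps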


Error during final linking step for LAMMPS

I'm having trouble linking the final lmp executable with libtorch and Kokkos.

My environment is
gcc/9.1.0 (same issue with gcc 10.1.0 and 10.2.0, and different issues with older gcc versions)
cuda/11.2
libtorch/1.12.0
cudnn/8.4.1
cmake/3.19
mkl/2017

The following cmake command succeeded (with a warning about generating a safe runtime search path for lmp)

cmake ../cmake -DCMAKE_PREFIX_PATH=/project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/ -DPKG_KOKKOS=ON -DKokkos_ENABLE_CUDA=ON -DKokkos_ARCH_PASCAL60=ON -DCUDNN_INCLUDE_PATH=/project2/gavoth/loose/CGnequip/conda/include/ -DCUDNN_LIBRARY_PATH=/project2/gavoth/loose/CGnequip/conda/lib/libcudnn.so

The error I receive when I actually make LAMMPS is:

[100%] Linking CXX executable lmp
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `powf@GLIBC_2.27'
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `log2f@GLIBC_2.27'
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `lgammaf@GLIBC_2.23'
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `expf@GLIBC_2.27'
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `exp2f@GLIBC_2.27'
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `lgamma@GLIBC_2.23'
/software/gcc-9.1.0-el7-x86_64/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: /project2/gavoth/loose/CGnequip/allegro_lammps/libtorch/libtorch/lib/libtorch_cpu.so: undefined reference to `logf@GLIBC_2.27'
collect2: error: ld returned 1 exit status
make[2]: *** [lmp] Error 1
make[1]: *** [CMakeFiles/lmp.dir/all] Error 2
make: *** [all] Error 2

My assumption is that I have not found a compatible combination of gcc and libtorch.
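
A hedged check worth running first: the missing powf@GLIBC_2.27 and similar symbols mean this libtorch 1.12 binary was linked against glibc 2.27 or newer, while the toolchain paths above suggest an EL7 machine with an older system glibc. A minimal sketch:

# report the system glibc; if it is older than 2.27, try an older libtorch
# (e.g. a 1.11 build) or a machine with a newer OS toolchain
ldd --version | head -1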

Issue with multiple species system

I'm struggling to run an MD using LAMMPS with either pair_nequip or pair_allegro on a system of 5 different species.
For the nequip model I was able to run an MD using ASE; for allegro I could not get the ASE calculator to work yet (is this a known issue?).

I'm using

#############################
### Configure system type ###
#############################

units           metal
boundary        p p p
atom_style      atomic

read_data system.lmp_data

mass 1 10.811
group B type 1

mass 2 12.011
group C type 2

mass 3 18.998403
group F type 3

mass 4 1.00784
group H type 4

mass 5 14.0067
group N type 5


replicate     4 4 4
velocity        all create 303.15 132465

pair_style   allegro  # nequip
pair_coeff   * * deployed.pth H B C N F

timestep        0.001

and checked that the order matches nequip-deploy info deployed.pth

I validated the nequip / allegro pair styles on a two-body KCl system, which worked nicely.
For this model I get:

       0            0       303.15    1.1985879    4009.6546 
[uc2n508.localdomain:2015923] 2 more processes have sent help message help-mpi-btl-openib.txt / default subnet prefix
[uc2n508.localdomain:2015923] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
       1        0.001    19324.723    1.1985879    255601.07 
       2        0.002            0    1.1985879            0 
       3        0.003            0    1.1985879            0 
       4        0.004            0    1.1985879            0 
       5        0.005            0    1.1985879            0 
       6        0.006            0    1.1985879            0 
       7        0.007            0    1.1985879            0 
       8        0.008            0    1.1985879            0 
       9        0.009            0    1.1985879            0

The model metrics are:

validation_loss_f validation_loss_e
0.00235 8.9e-6

They can certainly be optimized, but I would not expect the model to explode immediately.
I tried different input files and trained a different model, with no change so far.
Since I am able to run the model using ASE, do you think this could be related to the pair_style?
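
One hedged thing to double-check, since it would explain an immediate blow-up: the names on the pair_coeff line map LAMMPS types 1..N in order, yet the masses above define type 1 as B (10.811) while the pair_coeff line starts with H. A minimal sketch of type-order-consistent input, assuming the masses are correct:

# names listed in LAMMPS type order (1=B, 2=C, 3=F, 4=H, 5=N), not in deploy order
pair_style   allegro
pair_coeff   * * deployed.pth B C F H N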

Any plan of updates for newer LAMMPS?

Hi Allegro developer teams,

Thank you very much for the helpful code to integrate Allegro into LAMMPS.

We found that the current 'pair_allegro' has some parts that are not supported by newer versions of LAMMPS (releases after Jan 2022).

Here is an example from pair_allegro.cpp, line 123:

// need a full neighbor list
  int irequest = neighbor->request(this,instance_me);
  neighbor->requests[irequest]->half = 0;
  neighbor->requests[irequest]->full = 1;

  neighbor->requests[irequest]->ghost = 1;

The variables "half", "full", "ghost" from neigh_request.h are protected in the newer version of LAMMPS(after version from Jan2022).
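
For reference, the post-refactor API collapses those flags into a single call; a minimal sketch of the equivalent request in newer LAMMPS (assuming a full, ghost-including list is still what the style needs):

// one-line equivalent of the old half/full/ghost flags in post-2022 LAMMPS
neighbor->add_request(this, NeighConst::REQ_FULL | NeighConst::REQ_GHOST);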

Any plan of updates for newer LAMMPS?

Thank you for your time!

Best regards!
