
Astrophysical Multipurpose Software Environment. This is the main repository for AMUSE

Home Page: http://www.amusecode.org

License: Apache License 2.0


amuse's Introduction

AMUSE: The Astrophysical Multipurpose Software Environment


This repository contains the AMUSE software. With AMUSE you can write scripts to simulate astrophysical problems in different domains.

The project website is http://www.amusecode.org and the documentation can be found at http://amusecode.org/doc/.

Getting Started

In short, most probably

pip install amuse

should get you going on a Linux or macOS machine on which you can compile codes (the HDF5 and MPI libraries must be installed).
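
If this works, a quick sanity check along the following lines should run (a minimal sketch that only exercises the framework's units and particle sets, not any community code):

from amuse.units import units
from amuse.datamodel import Particles

p = Particles(2)
p.mass = [1.0, 2.0] | units.MSun
print(p.mass.sum())  # expect something like 3.0 MSun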

Below are some hints for a quick install; if these fail, please consult the detailed descriptions of the installation procedure in the documents in the 'doc/install' directory.

Compilers

To build AMUSE from source you need a working build environment. The AMUSE build system needs C/C++ and Fortran 90 compilers; we recommend a recent version of GCC.

On Ubuntu you can set up the environment with (as root):

apt-get install build-essential curl g++ gfortran gettext zlib1g-dev

Other distributions have similar packages or package groups available.

On macOS you can use the Homebrew or MacPorts package manager (both require the Apple Developer Tools and Xcode to be installed).

On a Windows 10 machine, AMUSE can be installed in the Windows Subsystem for Linux (WSL) by installing e.g. Ubuntu from the Microsoft Store. It is recommended to use WSL 2. For further installation instructions, see the Linux install instructions.

Python

AMUSE needs Python 3 (version >= 3.7) installed, preferably with pip and virtualenv. It may be necessary to update pip to a recent version. If you cannot use Python 3, legacy support for Python 2 is available in the AMUSE 12 release and the python2 branch.

Installing Prerequisites

The following libraries need to be installed:

  • HDF5 (version 1.6.5 - 1.12.x)
  • MPI (OpenMPI or MPICH)

The following are needed for some codes:

  • FFTW (version >= 3.0)
  • GSL
  • CMake (version >= 2.4)
  • GMP (version >= 4.2.1)
  • MPFR (version >= 2.3.1)

Installing+building AMUSE

AMUSE can be installed through pip:

pip install [--user] amuse

This will build and install AMUSE with an extensive set of codes. If necessary this will also install some required Python packages:

  • Numpy (version >= 1.3.0)
  • h5py (version >= 1.2.2)
  • mpi4py (version >= 1.1.0)
  • pytest (version >= 5.0)
  • docutils (version >= 0.6)

If you are not using pip, these must be installed by hand.

It is possible to install only the minimal framework with:

pip install [--user] amuse-framework

This does not include any community codes. These can be added separately with:

pip install [--user] amuse-<code name>
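
After installing a code package this way, it can be used like any other community code. For example, assuming the BHTree package was installed as amuse-bhtree (following the naming pattern above), a minimal sketch would be:

from amuse.units import nbody_system
from amuse.ic.plummer import new_plummer_model
from amuse.community.bhtree.interface import BHTree

# Run a small Plummer model with the BHTree community code.
gravity = BHTree()
gravity.particles.add_particles(new_plummer_model(100))
gravity.evolve_model(0.1 | nbody_system.time)
print(gravity.particles[0].position)
gravity.stop()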

AMUSE Development

An AMUSE development install can also be handled through pip by executing (in the root of a clone of the repository)

pip install -e .

After this, the codes need to be built:

python setup.py develop_build

Running the tests

AMUSE comes with a large set of tests; most can be run automatically. To run these tests, start the pytest command from the main amuse directory (the directory this README file lives in).

To run these tests do:

  1. Install the tests:
pip install [--user] amuse-tests

(this will install all tests whether or not you have installed the full amuse package)

  2. Run the automatic tests:
pytest --pyargs -v amuse.test.suite

You can also run only the tests for the specific packages you have installed, e.g.:

pytest --pyargs amuse.test.suite.codes_tests.test_huayno

You may have to prefix the pytest command with mpiexec -n 1 --oversubscribe.


amuse's Issues

Mercury crashes after a collision with the central particle

It seems that Mercury crashes after a collision with the central particle occurs. I think this is because Mercury (always) removes the particle that hits the central body (mercury6_2.for, L228) and this is not communicated to AMUSE. A conflict then appears in the evolve_model of interface.py (L1394). A minimal example follows.

from amuse.units import units
from amuse.datamodel import Particles
from amuse.ext.solarsystem import new_solar_system
from amuse.community.mercury.interface import Mercury

solar_system = new_solar_system()

stone = Particles(1)
stone.position = (3.,0.,0.) | units.AU
stone.velocity = (-10.,0.,0.) | units.kms
stone.mass = 0.|units.kg
stone.name = "stone"
stone.radius = 100.|units.m

mercury = Mercury(redirection="file", redirect_file="m_redirection.out")
mercury.particles.add_particles(solar_system)
mercury.particles.add_particles(stone)
mercury.commit_particles()
mercury.parameters.info_file = "m_info_file.out"
mercury.parameters.timestep = (1.|units.day)

mercury.evolve_model(1.|units.yr)
mercury.stop()

The output info file 'm_info_file.out' has to exist, otherwise the example stalls. I think this is related to the issue #95 (and the earlier #91 where the solution requires calling commit_parameters() to set up the output files).

error using particlesets read in from version 2 format

the following script:

from amuse.io import write_set_to_file,read_set_from_file
from amuse.ic.plummer import new_plummer_model
p=new_plummer_model(100)
write_set_to_file(p,"test","amuse",append_to_file=False, version="2.0")
p=read_set_from_file("test","amuse")
p.new_attribute=2*p.mass

generates an error:

Traceback (most recent call last):
  File "test.py", line 10, in <module>
    p.new_attribute=2*p.mass
  File "/home/inti/code/amuse/amuse/src/amuse/datamodel/particles.py", line 1106, in __setattr__
    self.set_values_in_store(None, [name_of_the_attribute], [self._convert_from_entities_or_quantities(value)])
  File "/home/inti/code/amuse/amuse/src/amuse/datamodel/particles.py", line 1358, in set_values_in_store
    self._private.attribute_storage.set_values_in_store(indices, attributes, values)
  File "/home/inti/code/amuse/amuse/src/amuse/io/store_v2.py", line 483, in set_values_in_store
    self.attributesgroup
  File "/home/inti/code/amuse/amuse/src/amuse/io/store_v2.py", line 70, in new_attribute
    dataset = group.create_dataset(name, shape=shape, dtype=input.number.dtype)
  File "/home/inti/code/amuse/prerequisites/lib/python2.7/site-packages/h5py/_hl/group.py", line 69, in create_dataset
    dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
  File "/home/inti/code/amuse/prerequisites/lib/python2.7/site-packages/h5py/_hl/dataset.py", line 49, in make_new_dset
    shape = tuple(shape)
TypeError: 'int' object is not iterable
Exit 1

It does not if version="2.0" is omitted.

Why do we have units.none and units.no_unit?

In all of my AMUSE-related programming, I've never actually had to resort to units.none and/or units.no_unit, so after reading #28 I just couldn't restrain myself from asking:

Why do we have units.none and units.no_unit? What is their purpose? 
What can you do with units.none that you couldn't with a simple numpy ndarray?

Btw, I'm not asking why we have them both, I'm asking why we have them at all.
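
For reference, a small sketch of the difference in question (hedged: exact behaviour may vary between AMUSE versions):

import numpy
from amuse.units import units

a = numpy.array([1.0, 2.0, 3.0])   # plain ndarray, no unit bookkeeping
b = [1.0, 2.0, 3.0] | units.none   # dimensionless quantity carrying a unit object
print(a * 2)
print(b * 2)
print(b.value_in(units.none))      # unwraps back to a plain array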

encounters to be updated to #98

Pull request #98 fails on test/codes_tests/test_multiples.py - it's probably encounters.py that needs to be updated to the new changes.

Just to make clear: this issue is blocking #98, and the changes to encounters.py are to be included in that pull request.

Missing: interface for Brutus

I noticed that there isn't an interface for Brutus yet.
Tjarda informed me that Adaptb is an older version of that, but basically obsolete.
Maybe this interface can be updated to one for Brutus?

Bonsai and Pikachu no longer working

On my computer (Ubuntu 16.10, Cuda 8.0, GCC 5.4.1, GeForce GTX 970), Bonsai and Pikachu tests fail to run.
Both give errors like "Received reply for call id 0 but expected XXX" (the number XXX seems to change at random).

Gadget on more than 32 cores

Gadget only runs on more than 32 cores if the following change is made in src/amuse/rfi/channel.py, line 1223:

if not self.hostname is None:
    #self.info = MPI.Info.Create()
    #self.info['host'] = self.hostname
    self.info = MPI.INFO_NULL
else:
    self.info = MPI.INFO_NULL

more getters for the MOCASSIN output

It would be useful to add getters for the line fluxes and ion abundances in the grid cells of the MOCASSIN code, similar to the existing .grid.electron_temperature.

Mercury build crashes for increased code parameters

I changed some default parameters for Mercury (maximum number of particles etc.), see mercury.inc here. After the change, the build of mercury (using make mercury.code) crashes with the following error:

... : relocation truncated to fit: R_X86_64_PC32 against `.bss'
... : additional relocation overflows omitted from the output

After some Googling, I suspect this might be due to allocating some big arrays, which could be fixed by adding the necessary flags for the Fortran compiler (see e.g. here) in the Makefile. However, I am struggling to make it work (I'm not sure where exactly to add the flags).
Thanks!

Complete Rebound API integration (Rebound 3.2.3)

Some settable parameters of Rebound are still missing from the interface, including some important ones (symplectic correctors for WHFast/WHFasthelio, accuracy for IAS15, switching radius between ias15/whfast for Helios, etc). I'd like to implement these.
However, I'm wondering how to go about doing this. Of course I can add them as parameters to the interface (doing this now, work in progress), but some parameters may not be settable unless a specific integrator is chosen. So there would need to be a specific order in setting these parameters. Or would there be a better way of doing this?
cc @arjenve

particles attributes now return reference instead of copy

ce7207c has changed p.x from returning a copy to returning a reference to the data. This introduces different behaviour for in-memory sets and in-code sets, a difference between grids and Particles, and issues with subsets:

from amuse.datamodel import Particles, Grid
from amuse.units import units

p=Particles(5)

p.x=range(5) 
tmp=p.x
print tmp is p.x

#behaviour of index expressions
tmp=p[::2].x

p.x=p.x+1

print tmp == p[::2].x

tmp=p.x[::2]
p.x=p.x+1

print tmp == p[::2].x


# behaviour of grids
p=Grid(5)
p.x=range(5) | units.m
tmp=p.x
print tmp is p.x

gives on 4168ed9:

False
[False False False]
[False False False]
False

while on ce7207c:

True
[False False False]
[ True  True  True]
False

Collision detection: all overlapping particles vs only approaching overlapping particles

There seems to be a discrepancy between a code that has Rebound's collision detection enabled (e.g. "code->collision = reb_simulation::REB_COLLISION_DIRECT;" in "initialize_code()") and one that only enables collision detection in AMUSE. Both return collisions, but the result is not the same (and seems incorrect if Rebound's collision detection is not enabled).
This is at least the case when using the WHFast integrator.
I am not sure why this is though...
Maybe we should add Rebound's collision detection mechanism as a parameter? And/or auto-set it if the AMUSE collision_detection stopping condition is set?
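
For context, the AMUSE-side setup being compared against Rebound's internal detection looks roughly like this (a sketch; the REB_COLLISION_DIRECT change is made separately in the C++ initialize_code()):

from amuse.community.rebound.interface import Rebound

# Enable only the AMUSE collision_detection stopping condition on Rebound
# with the WHFast integrator; Rebound's own collision handling stays off.
code = Rebound()
code.parameters.integrator = "whfast"
code.stopping_conditions.collision_detection.enable()
# ... add particles with radii, evolve, and compare the reported pairs
# against a manual radius-overlap check.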

Create a lean AMUSE distribution?

AMUSE in its current form is a big collection of codes. This makes sense when installing it system-wide, but it is not necessarily the best option. My workflow increasingly involves a separate installation of AMUSE for each of my projects. This allows me to "freeze" the installation for that project, which makes it much easier to reproduce results later on.

For such a use case, AMUSE is somewhat "bloated". It would be nice to have a "lean" AMUSE installation. This could consist of core-AMUSE, where all community codes (and their datafiles) would be DOWNLOAD_ONLY-style codes.

There are of course several issues that should be resolved in order to make this work nicely:

  • Python3-AMUSE currently needs to install community codes as dist-packages; this would make things difficult
  • Where would community codes be stored?
  • The error message when trying to use a missing community code could ask the user to install the respective package

(add other issues below)

collision detection with bridge

I am having trouble with the collision_detection stopping condition while using bridge. After detecting a collision, I remove one of the colliding particles and evolve the system further, but the same collision is detected again. It seems to me that .collision_detection.is_set() is not reset to False for the evolve_model after the first collision.

I might be missing something basic here, however. Any comments welcome! Please see the example code below (I tested several gravity codes; the same problem appears for all).

from amuse.community.rebound.interface import Rebound
from amuse.community.huayno.interface import Huayno
from amuse.community.ph4.interface import ph4
from amuse.units import units, nbody_system
from amuse.datamodel import Particles
from amuse.couple import bridge
from amuse.ext.galactic_potentials import MWpotentialBovy2015

collinders = Particles(3)
collinders.mass = [5.0, 0.0, 0.0] | units.MSun
collinders.position = [(500.,0.,0.), (501.,0.,0.), (502.,0.,0.)] | units.AU
collinders.radius = [0.3, 0.3, 0.4] | units.AU
collinders.velocity = [(0.,0.,0.), (0.,0.,0.), (0.,0.,0.)] | units.kms

converter = nbody_system.nbody_to_si(1|units.MSun,1|units.AU)

#gravity = Rebound(converter)
#gravity.parameters.integrator = "whfast"
#gravity.parameters.timestep = 0.05|units.day

#gravity = Huayno(converter)
#gravity.parameters.timestep_parameter = 0.0001

gravity = ph4(converter)
gravity.parameters.timestep = 0.05|units.day

gravity.stopping_conditions.collision_detection.enable()
gravity.particles.add_particles(collinders)
gravity.commit_particles()

gravity_with_bridge = bridge.Bridge()
external_potential = MWpotentialBovy2015()
gravity_with_bridge.add_system(gravity, (external_potential,), do_sync=True)
gravity_with_bridge.timestep = 0.1|units.day

t_end = 60.|units.day
dt = 5.|units.day

t = 0.|units.day
while t < t_end:
    #gravity.evolve_model(t)
    print " >> evolving >", t, "n_particles =", len(gravity.particles)
    gravity_with_bridge.evolve_model(t)
    #print gravity.stopping_conditions.collision_detection.is_set()
    if gravity.stopping_conditions.collision_detection.is_set():
        #print gravity.stopping_conditions.collision_detection.is_set()
        t_collision = gravity_with_bridge.model_time
        n_particles_in_collision = 2*len(gravity.stopping_conditions.collision_detection.particles(0))
        print " xx collision at", t_collision, "n_particles_in_collisions =", n_particles_in_collision 
        if n_particles_in_collision == 0:
            print " !! only one particle in collision > break"
            break
        else:
            print gravity.stopping_conditions.collision_detection.particles(0)
            print gravity.stopping_conditions.collision_detection.particles(1)
            gravity.particles.remove_particles(gravity.stopping_conditions.collision_detection.particles(0))
            #gravity_with_bridge.particles.remove_particles(gravity.stopping_conditions.collision_detection.particles(0))
            #print gravity.particles
        continue
    t += dt

gravity.stop()
gravity_with_bridge.stop()

Mercury crashes after commit_parameters() is called

A minimal example:

from amuse.units import units
from amuse.ext.solarsystem import new_solar_system
from amuse.community.mercury.interface import Mercury

solar_system = new_solar_system()

mercury = Mercury()
mercury.particles.add_particles(solar_system)
mercury.commit_particles()
mercury.parameters.info_file = "m_info_file.out"
mercury.parameters.timestep = (1.|units.day)
mercury.commit_parameters()

mercury.evolve_model(1.|units.yr)
mercury.stop()

with an error message Exception: While calling commit_parameters of Mercury: No transition from current state state 'CHANGE_PARAMETERS_RUN' to state 'INITIALIZED' possible.

Save Amuse githash in savefiles?

In order to reproduce a simulation with the same version of AMUSE, it would be useful to store the git hash of AMUSE in a savefile by default. This hash can be extracted as follows:
git rev-parse HEAD
What would be the best way to go about this?
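
One possible approach, pending a decision on where the hash should live, is to record it next to the savefile (a sketch; the paths and file names below are just examples):

import subprocess

# Record the AMUSE git hash alongside a savefile.
# Assumes AMUSE was installed from a git clone; the path below is hypothetical.
githash = subprocess.check_output(
    ["git", "rev-parse", "HEAD"],
    cwd="/path/to/amuse",                   # hypothetical clone location
).decode().strip()

with open("simulation.githash", "w") as f:  # hypothetical sidecar file
    f.write(githash + "\n")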

Recursion max exceeded error

A recent commit, I think 614c2e5, can trigger a condition where __del__ of a code leads to a recursion limit error, e.g.

touch src/amuse/rfi/core.py

and then in a python shell

from amuse.community.sse.interface import SSE
s=SSE() # leads to warning about code as expected

then exit with ctrl-d
gives:

Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <bound method SSE.__del__ of <amuse.community.sse.interface.SSE object at 0x6c1f90>> ignored

broken test in test_encounters.py

test test/codes_tests/test_encounters.py:TestAbstractHandleEncounter.test8

is broken; this was masked by wrong test numbering

For the moment it is disabled again.

ph4 does not build with intel compiler

Compilation output:

$make ph4.code

/home/fjansson/meteo/bin/python setup.py -v build_code --inplace --clean=yes --code-name=ph4
running build_code
building code ph4
cleaning src/amuse/community/ph4
--- 8< ----
make -C src/amuse/community/ph4 all
make[1]: Entering directory `/home/fjansson/amuse/src/amuse/community/ph4'
/home/fjansson/amuse/build.py --type=c interface.py ph4Interface -o worker_code.cc
/home/fjansson/amuse/build.py --type=h  -i amuse.support.codes.stopping_conditions.StoppingConditionInterface interface.py ph4Interface -o interface.h
make -C src all CXX='mpicxx'
make[2]: Entering directory `/home/fjansson/amuse/src/amuse/community/ph4/src'
mpicxx -g -O2 -fPIC -g -O2 -fPIC -Wall -g -O2 -I/home/fjansson/amuse/lib/stopcond  -g -O2 -fPIC -Wall -g -O2 -I/home/fjansson/amuse/lib/stopcond -g -Wall -I/home/fjansson/amuse/lib/stopcond  -c -o debug.o debug.cc 
jdata.h(81): error: name followed by "::" must be a class or namespace name
      MPI::Intracomm mpi_comm;		// communicator for the N-body system
      ^

jdata.h(182): error: name followed by "::" must be a class or namespace name
      void setup_mpi(MPI::Intracomm comm);
                     ^

compilation aborted for debug.cc (code 2)
--- 8< ---
mpicc --version
icc (ICC) 13.1.3 20130607
Copyright (C) 1985-2013 Intel Corporation.  All rights reserved.

This is on the Lisa cluster, with the following modules loaded:

module load python/2.7.12-intel
module load cmake/3.5.1
module load netcdf/intel
module load openmpi/intel/2.0.0 
module load hdf5/intel
module load fortran/intel
module load c/intel/64

Collision detection discrepancies

Related to #88:
There seems to be a discrepancy in which collisions are detected by different codes.

Expected result:
Collision between p0 and p1 is reported if p0.radius + p1.radius > (p1.position - p0.position).length(), and this may only depend on some expected roundoff/accuracy errors.
Possibly, a switch can be used to detect only approaching collisions (negative relative velocity)

Actual result:
  • Hermite does this perfectly
  • Rebound as well, but it reports p0 and p1 in reverse
  • ph4 misses most of the collisions, and not just the ones that are receding from each other
  • PhiGRAPE misses some of the collisions
  • BHTree misses some of the collisions

I didn't test other codes so far.

from amuse.lab import *
particles = Particles(5)#,keys=[0,1,2,3,4])
particles.mass = 1|nbody_system.mass
particles.radius = 0.6|nbody_system.length
particles.x = VectorQuantity([0,1,2,3.2,4],nbody_system.length)
particles.y = 0|nbody_system.length
particles.z = 0|nbody_system.length
particles.vx = VectorQuantity([0.1,0.01,0,0,-0.1],nbody_system.speed)
particles.vy = 0|nbody_system.speed
particles.vz = 0|nbody_system.speed

def check_collisions(particles):
    for i in range(len(particles)-1):
        p0 = particles[i]
        for j in range(i+1,len(particles)):
            p1 = particles[j]
            sum_radii = p0.radius+p1.radius
            distance = (p1.position - p0.position).length()
            if (sum_radii > distance):
                print p0.key, p1.key, sum_radii, ">", distance 

print "Manual check"
check_collisions(particles)

print "By code"
codes = [Hermite,Rebound,ph4,PhiGRAPE,BHTree]
codenames = ["Hermite","Rebound","ph4","PhiGRAPE","BHTree"]
for i in range(len(codes)):
    code = codes[i]
    print "code=%s"%(codenames[i])
    gravity = code()
    CD = gravity.stopping_conditions.collision_detection
    CD.enable()
    gravity.particles.add_particles(particles)
    gravity.evolve_model(0.01|nbody_system.time)
    print "time:",gravity.model_time
    print "code collisions found:"
    for pair in zip(CD.particles(0), CD.particles(1)):
        print pair[0].key, pair[1].key
    print "manual collisions found:"
    check_collisions(gravity.particles)
    gravity.stop()

loading particle sets increases rank of linked attributes

see the following example:

    from amuse.datamodel import Particles
    from amuse.io import write_set_to_file,read_set_from_file
    p=Particles(10)
    q=Particles(3)
    q[0].link=p[1]
    q[1].link=p[3]
    q[2].link=p[4]
    print q.link
    write_set_to_file(q,"test","amuse",version=2)
    r=read_set_from_file("test","amuse")
    print r.link

The first print gives [..] while the second one gives [[...]].

Amuse sockets channel - print address after resolving host names

On one machine where I installed Amuse, the MPI channel worked fine, but the sockets channel didn't. Amuse and the worker code just wouldn't connect to each other, both processes waited until a 60 s timeout was reached.

The reason turned out to be that I had an old alias in /etc/hosts, resolving my locally defined host name to an invalid IP address. The worker code then got this invalid IP address when resolving the host name, at line 1308 in worker_code.cc: server = gethostbyname(host);

This is not an Amuse bug - but additional messages would be helpful when debugging. For example, printing the resolved IP address in the worker code. Thanks to Inti for debugging this with me.

One way to print the IP address:
#include <arpa/inet.h>
...
server = gethostbyname(host); //worker_code.cc line ~1308
printf("Host name: %s\n", host);
printf("IP address: %s\n", inet_ntoa(*(struct in_addr *) server->h_addr));

and an alternative without adding another include file:
printf("IP address: %u.%u.%u.%u\n",
(unsigned char)server->h_addr[0], (unsigned char)server->h_addr[1],
(unsigned char)server->h_addr[2], (unsigned char)server->h_addr[3]);

Related - The gethostbyname manpage states:

The gethostbyname*(), gethostbyaddr*(), herror(), and hstrerror() functions are obsolete. Applications should use getaddrinfo(3), getnameinfo(3), and gai_strerror(3) instead.

Also, gethostbyname returns an array of addresses; h_addr is the first of them, and h_addr_list contains them all.

distributed channel chokes on string array messages

see https://github.com/ipelupessy/amuse/tree/distributed_channel_string_error

to be specific: test_c_distributed_implementation.py:TestCDistributedImplementationInterface.test7d
freezes on:

File "/home/inti/code/amuse/amuse/src/amuse/rfi/channel.py", line 1714, in _receive_all
data_bytes = thesocket.recv(chunk)

This test exercises the can_handle_array string function with N=100 inputs using the distributed AMUSE channel. For me it works for N=10, but not for much bigger N (N=14 already fails). echo_int works fine for N=2000000.

Seeding new_fractal_cluster_model

I'm trying to create 100 fractal clusters with new_fractal_cluster_model(),
with the following piece of code (makefractals.py).

from amuse.ic.fractalcluster import new_fractal_cluster_model

for i in range(100):
    print(i)
    model = new_fractal_cluster_model(N=100, random_seed=i)

Running makefractals.py, however, throws the following exception:

laub @ F036961: ~% amuse makefractals.py 
0
1
[...]
43
44
Traceback (most recent call last):
  File "makefractals.py", line 7, in <module>
    model = new_fractal_cluster_model(N=100, random_seed=i)
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 272, in new_fractal_cluster_model
    return uc.result
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 250, in result
    particles = self.new_model()
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 241, in new_model
    generator.generate_particles()
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 189, in generate_particles
    result = self.overridden().generate_particles()
  File "/home/laub/software/amuse-10.0/src/amuse/support/methods.py", line 107, in __call__
    result = self.method(*list_arguments, **keyword_arguments)
  File "/home/laub/software/amuse-10.0/src/amuse/support/methods.py", line 107, in __call__
    result = self.method(*list_arguments, **keyword_arguments)
  File "/home/laub/software/amuse-10.0/src/amuse/support/methods.py", line 196, in __call__
    return self.method(*list_arguments, **keyword_arguments)
  File "/home/laub/software/amuse-10.0/src/amuse/rfi/core.py", line 106, in __call__
    raise exceptions.CodeException("Exception when calling function '{0}', of code '{1}', exception was '{2}'".format(self.specification.name, type(self.interface).__name__, ex))
amuse.support.exceptions.CodeException: Exception when calling function 'generate_particles', of code 'FractalClusterInterface', exception was 'lost connection to code'
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 8138 on
node F036961 exiting improperly. There are three reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

3. this process called "MPI_Abort" or "orte_abort" and the mca parameter
orte_create_session_dirs is set to false. In this case, the run-time cannot
detect that the abort call was an abnormal termination. Hence, the only
error message you will receive is this one.

This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).

You can avoid this message by specifying -quiet on the mpiexec command line.

I've also tried randomizing the seeds like so (makefractals2.py):

from amuse.ic.fractalcluster import new_fractal_cluster_model

import random

random.seed(1)
seeds = [random.randint(1,10000) for i in range(1000)]

for i, s in enumerate(seeds):
    print(i,s)
    model = new_fractal_cluster_model(N=100, random_seed=s)

This will throw the same exception after 164 models.

laub @ F036961: ~% amuse makefractals2.py
(0, 1344)
(1, 8475)
[...]
(163, 6486)
(164, 3949)
Traceback (most recent call last):
  File "makefractals2.py", line 10, in <module>
    model = new_fractal_cluster_model(N=100, random_seed=s)
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 272, in new_fractal_cluster_model
    return uc.result
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 250, in result
    particles = self.new_model()
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 241, in new_model
    generator.generate_particles()
  File "/home/laub/software/amuse-10.0/src/amuse/community/fractalcluster/interface.py", line 189, in generate_particles
    result = self.overridden().generate_particles()
  File "/home/laub/software/amuse-10.0/src/amuse/support/methods.py", line 107, in __call__
    result = self.method(*list_arguments, **keyword_arguments)
  File "/home/laub/software/amuse-10.0/src/amuse/support/methods.py", line 107, in __call__
    result = self.method(*list_arguments, **keyword_arguments)
  File "/home/laub/software/amuse-10.0/src/amuse/support/methods.py", line 196, in __call__
    return self.method(*list_arguments, **keyword_arguments)
  File "/home/laub/software/amuse-10.0/src/amuse/rfi/core.py", line 106, in __call__
    raise exceptions.CodeException("Exception when calling function '{0}', of code '{1}', exception was '{2}'".format(self.specification.name, type(self.interface).__name__, ex))
amuse.support.exceptions.CodeException: Exception when calling function 'generate_particles', of code 'FractalClusterInterface', exception was 'lost connection to code'
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 10434 on
node F036961 exiting improperly. There are three reasons this could occur:

[cut mpiexec error messages]

Can anybody confirm/reproduce this behaviour of new_fractal_cluster_model?

mpi4py or openmpi issues on Ubuntu 12.04

I'm having some issues with OpenMPI (or mpi4py?) on a machine running Ubuntu 12.04 Precise Pangolin and I'm not sure whether I'm doing something wrong or whether this is a bug.

Here is some information about the machine:

laub @ u0007221: ~% uname -a 
Linux u0007221 3.13.0-73-generic #116~precise1-Ubuntu SMP Sun Dec 6 16:55:45 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

laub @ u0007221: ~% dpkg -l | grep openmpi | column -t
ii  libopenmpi-dev      1.4.3-2.1ubuntu3  high  performance  message  passing  library  --  header      files
ii  libopenmpi1.3       1.4.3-2.1ubuntu3  high  performance  message  passing  library  --  shared      library
ii  openmpi-bin         1.4.3-2.1ubuntu3  high  performance  message  passing  library  --  binaries
ii  openmpi-checkpoint  1.4.3-2.1ubuntu3  high  performance  message  passing  library  --  checkpoint  support
ii  openmpi-common      1.4.3-2.1ubuntu3  high  performance  message  passing  library  --  common      files

laub @ u0007221: ~% mpicxx -v                        
Using built-in specs.
COLLECT_GCC=/usr/bin/g++
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) 

Every piece of code I've tried to run so far raises the following AttributeError.

laub @ u0007221: /scratch/laub/amuse/examples/simple (master)% am sunandearth.py
Traceback (most recent call last):
  File "sunandearth.py", line 80, in <module>
    x,y = simulate_system_until(particles, 20 | units.yr)
  File "sunandearth.py", line 34, in simulate_system_until
    instance = Hermite(convert_nbody)
  File "/scratch/laub/amuse/src/amuse/community/hermite0/interface.py", line 208, in __init__
    legacy_interface = HermiteInterface(**options)
  File "/scratch/laub/amuse/src/amuse/community/hermite0/interface.py", line 25, in __init__
    **options)
  File "/scratch/laub/amuse/src/amuse/rfi/core.py", line 704, in __init__
    self._start(name_of_the_worker = name_of_the_worker, **options)
  File "/scratch/laub/amuse/src/amuse/rfi/core.py", line 714, in _start
    self.channel = self.channel_factory(name_of_the_worker, type(self), interpreter_executable = interpreter_executable, **options)
  File "/scratch/laub/amuse/src/amuse/rfi/channel.py", line 1155, in __init__
    self.ensure_mpi_initialized()
  File "/scratch/laub/amuse/src/amuse/rfi/channel.py", line 1188, in ensure_mpi_initialized
    if rc.threaded:
AttributeError: 'function' object has no attribute 'threaded'

Running the above example with mpiexec am sunandearth.py results in the same output.

Gadget2 galaxy evolution: Crashes for too many particles & moving center of mass

Hi,
we are trying to evolve a galaxy using Gadget2, but are facing two different issues. I attached a short minimal working example at the very end.

  1. Even though the galaxy is created in equilibrium by GalactICs, the galaxy starts moving significantly when evolved by Gadget2 and even picks up speed over time (which it clearly shouldn't). This can be seen by the output of our example:
initial center of mass:  [1.02325568452e-15, -2.32558110117e-16, -1.10465102306e-16] kpc
initial center of massvelocity:  [1.66310147465e-15, 2.53946632357e-15, -6.9710840255e-17] km / s
************
evolve time: 99.9999749716 Myr
center of mass:  [0.0588263048091, -0.0379048695523, -0.0280453237148] kpc
center of mass velocity:  [1.17130513957, -0.731165006738, -0.607068637535] km / s
************
evolve time: 199.999983236 Myr
center of mass:  [0.234390826298, -0.150570631089, -0.124480727945] kpc
center of mass velocity:  [2.2641629082, -1.46963160643, -1.25022558086] km / s
************
evolve time: 299.999991501 Myr
center of mass:  [0.522732046682, -0.337502883977, -0.268133498215] kpc
center of mass velocity:  [3.38565395672, -2.18582723033, -1.52220449693] km / s
************
evolve time: 399.999999765 Myr
center of mass:  [0.926854435383, -0.603334840113, -0.434237508811] kpc
center of mass velocity:  [4.49560641442, -3.00302680617, -1.71735882503] km / s
  2. The example below works with around 3e5 particles. If we try to use 10 times more particles, however, we receive the following error message (if we do not print out the particle information, the same crash happens in dynamics.evolve_model):
initial center of mass:  wrapped<wrapped<wrapped<wrapped<function: int get_mass(int index_of_the_particle)
output: double mass, int __result>>>>
Traceback (most recent call last):
  File "simulation.py", line 27, in <module>
    print 'initial center of mass: ',set1.center_of_mass().as_quantity_in(units.kpc)
[...]
amuse.support.exceptions.CodeException: Exception when calling function 'commit_particles', of code 'Gadget2Interface', exception was 'Received reply for call id 0 but expected 944'

We already tried to debug this for quite some time, but haven't made any progress.
Here is the working example:

import os
import os.path
import numpy as np
import sys

from amuse.units import nbody_system, units, generic_unit_converter, constants
from amuse.datamodel import Particles
from amuse.io import read_set_from_file, write_set_to_file
from amuse.community.gadget2.interface import Gadget2
from amuse.ext.galactics_model import new_galactics_model

if __name__ == '__main__':
    if os.path.exists('disk_galactICs.amuse'):
        galaxy1 = read_set_from_file('disk_galactICs.amuse', 'amuse')
    else:
        disk_part = 100000
        bulge_part=  50000
        halo_part = 100000
        converter = generic_unit_converter.ConvertBetweenGenericAndSiUnits(constants.G, 1.0e12 | units.MSun, 33.0 | units.kpc)
        galaxy1 = new_galactics_model(halo_part,halo_outer_slope = 3.0, disk_number_of_particles=disk_part,generate_disk_flag=True,generate_halo_flag=True,generate_bulge_flag=True,bulge_number_of_particles=bulge_part, do_scale=True, unit_system_converter=converter)
        write_set_to_file(galaxy1, 'disk_galactICs.amuse', 'amuse')

    converter = nbody_system.nbody_to_si( 1.0e12 |units.MSun, 33.0 |units.kpc)
    dynamics = Gadget2(converter,number_of_workers=23)
    set1 = dynamics.particles.add_particles(galaxy1)

    print 'initial center of mass: ',set1.center_of_mass().as_quantity_in(units.kpc)
    print 'initial center of massvelocity: ',set1.center_of_mass_velocity().as_quantity_in(units.km/units.s)

    for i in range(1, 5):
        dynamics.evolve_model(i * (100 | units.Myr))
        print '************'
        print 'evolve time:',dynamics.model_time.as_quantity_in(units.Myr)
        print 'center of mass: ',set1.center_of_mass().as_quantity_in(units.kpc)
        print 'center of mass velocity: ',set1.center_of_mass_velocity().as_quantity_in(units.km/units.s)

    dynamics.stop()

Memory leak in Rebound

Memory usage in Rebound seems to increase over time for my debris disc simulations. I'm not yet sure what causes this; I'll investigate.
Possibly it could be related to synchronising back and forth with Amuse?

Bonsai doesn't set particle radius properly

When particles have unequal radii, Bonsai doesn't properly track which particle has which radius. This problem occurs at the moment a particle is added, and seems to depend on the number of particles. For two particles there is no problem, but for three or more the result is consistently incorrect.
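
A minimal check along these lines (a sketch, assuming Bonsai builds and runs on the machine; the radii are arbitrary):

from amuse.units import nbody_system
from amuse.datamodel import Particles
from amuse.community.bonsai.interface import Bonsai

# Set unequal radii on three particles and compare what Bonsai reports back.
particles = Particles(3)
particles.mass = 1 | nbody_system.mass
particles.x = [0.0, 1.0, 2.0] | nbody_system.length
particles.y = 0 | nbody_system.length
particles.z = 0 | nbody_system.length
particles.velocity = [(0.0, 0.0, 0.0)] * 3 | nbody_system.speed
particles.radius = [0.1, 0.2, 0.3] | nbody_system.length

code = Bonsai()
code.particles.add_particles(particles)
print(particles.radius)        # what was set
print(code.particles.radius)   # what the code reports; reportedly not matching
code.stop()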

in plots.py

"from pynbody.snapshot import _new"
should be changed to:
"from pynbody.snapshot import new"

netCDF fails to build

I'm trying to build AMUSE on a Linux cluster at LRZ; following the build procedure (http://amusecode.org/doc/install/howto-install-prerequisites.html#installation-scripts) results in the following error:

configure: error: Can't find or link to the hdf5 library. Use --disable-netcdf-4, or see config.log for errors.
Traceback (most recent call last):
  File "./install.py", line 750, in <module>
    _commands[x](names, skip)
  File "./install.py", line 675, in install
    INSTALL.build_apps(names, skip)
  File "./install.py", line 544, in build_apps
    function(temp_app_dir)
  File "./install.py", line 341, in basic_build
    self.run_application(x, path)
  File "./install.py", line 228, in run_application
    raise Exception("Error when running <" + commandline + ">")
Exception: Error when running <./configure --prefix=/home/hpc/pr28fa/di73kuj2/amuse/prerequisites --enable-shared>

getting nonexistent attribute of large set slow

for example:

from amuse.datamodel import Particles
p=Particles(100000000)
print p.dummy

is rather slow in figuring out that the attribute dummy is nonexistent; this slowness derives from particles.py:1350

        subset = self[indices]

Problems with AMUSE with a VPN enabled

Amuse and Python3 don't play nice together yet. While installing it in a virtualenv seems to work for me now, I can't run anything. When I add particles to a code, the code freezes and nothing happens anymore...

from amuse.lab import *
g = ph4()
p = new_plummer_model(100)
g.particles.add_particles(p)  # the code freezes after this point, no output is generated
print(g.particles[0].x)
g.evolve_model(0.02|nbody_system.time)
print(g.particles[0].x)

Mercury stalls after setting output files

It seems that Mercury gets stuck after the parameters for output files are set. While it seems to work fine with the default values (/dev/null for all output files), when any of the parameters close_encounters_file, bigbody_file, smallbody_file, integration_parameters_file, or restart_file is set to some other path, the code stalls. The minimal example below works fine as long as the bit changing the output files stays commented out:

import os
from amuse.units import units
from amuse.ext.solarsystem import new_solar_system
from amuse.community.mercury.interface import Mercury

solsys = new_solar_system()
mercury = Mercury()
mercury.particles.add_particles(solsys)

### changing mercury output files path stalls the code
#~ names=["close_encounters_file", 
       #~ "bigbody_file",
       #~ "smallbody_file",
       #~ "integration_parameters_file",
       #~ "restart_file"]
#~ for name in names:
    #~ setattr(mercury.parameters,name,os.path.join('./',name))

print mercury.parameters

mercury.evolve_model(6.66|units.yr)
mercury.stop()

Qparameter verbosity

Can we remove the print statements in the Qparameter method or at least give the user the option to silence them?

>>> from amuse.ic.plummer import new_plummer_model
>>> model = new_plummer_model(1000)
>>> model.Qparameter()
making graph
constructing MST
Q: 1.53074998216
1.5307499821583916
>>> 

mpiexec -np doesn't always work

When starting an MPI-enabled code with the sockets channel, it executes e.g.

mpiexec -np 2 worker_name

but "-np" doesn't always work (it may need to be -n)?

Unexpected behaviour new_fractal_cluster_model

This issue may (or may not) be related to #7 .

The following piece of code generate_fractals.py crashes after 1089 models.

#generate_fractals.py
import numpy
from amuse.ic.fractalcluster import new_fractal_cluster_model
from amuse.ic.salpeter import new_salpeter_mass_distribution
from amuse.units import units
from amuse.units.nbody_system import nbody_to_si

numpy.random.seed(123)
masses = new_salpeter_mass_distribution(100)

models = []

theta_true = [2, 1.6]
guesses = 0.3 * numpy.random.randn(2000*2).reshape(2000, 2) + theta_true

for i in range(2000):
    vradius, fdim = guesses[i]
    print(i, vradius, fdim)
    converter = nbody_to_si(masses.sum(), vradius |units.parsec)
    models.append(new_fractal_cluster_model(convert_nbody=converter,
                                            N=100,
                                            masses=masses,
                                            fractal_dimension=fdim,
                                            virial_ratio=0.0,
                                            random_seed=i, 
                                            match_N=True))
bernie @ archbox: ~/githubs/fractalsproject/tests (master)% time am generate_fractals.py
(0, 2.6261340078764555, 1.6493323690689476)
(1, 2.3450616627639898, 1.219794385296932)
(2, 2.054310538879101, 1.9533585816425736)
(3, 1.8994967714200952, 1.9093343376765228)
[...]
(1087, 2.4213385288873868, 1.6016495030916851)
(1088, 1.6922299795081461, 1.6144512470492853)
(1089, 1.9724433265007539, 1.3019379430875744)
Traceback (most recent call last):
  File "generate_fractals.py", line 30, in <module>
    match_N=True))
  File "/home/bernie/githubs/amuse/src/amuse/community/fractalcluster/interface.py", line 276, in new_fractal_cluster_model
    return uc.result
  File "/home/bernie/githubs/amuse/src/amuse/community/fractalcluster/interface.py", line 254, in result
    particles = self.new_model()
  File "/home/bernie/githubs/amuse/src/amuse/community/fractalcluster/interface.py", line 242, in new_model
    generator.generate_particles()
  File "/home/bernie/githubs/amuse/src/amuse/community/fractalcluster/interface.py", line 189, in generate_particles
    result = self.overridden().generate_particles()
  File "/home/bernie/githubs/amuse/src/amuse/support/methods.py", line 107, in __call__
    result = self.method(*list_arguments, **keyword_arguments)
  File "/home/bernie/githubs/amuse/src/amuse/support/methods.py", line 107, in __call__
    result = self.method(*list_arguments, **keyword_arguments)
  File "/home/bernie/githubs/amuse/src/amuse/support/methods.py", line 196, in __call__
    return self.method(*list_arguments, **keyword_arguments)
  File "/home/bernie/githubs/amuse/src/amuse/rfi/core.py", line 113, in __call__
    raise exceptions.CodeException("Exception when calling function '{0}', of code '{1}', exception was '{2}'".format(self.specification.name, type(self.interface).__name__, ex))
amuse.support.exceptions.CodeException: Exception when calling function 'generate_particles', of code 'FractalClusterInterface', exception was 'lost connection to code'
-------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[13870,1],0]
  Exit code:    245
--------------------------------------------------------------------------
am generate_fractals.py  152.41s user 38.36s system 43% cpu 7:17.60 total
bernie @ archbox: ~/githubs/fractalsproject/tests (master)%

I've run this multiple times, and it always crashes at the same seed.

Btw, one can skip straight to the point where it crashes by wrapping the new_fractal_cluster_model call in an if i == 1089 block.

if i == 1089:
    models.append(new_fractal_cluster_model(convert_nbody=converter,
                                            N=100,
                                            masses=masses,
                                            fractal_dimension=fdim,
                                            virial_ratio=0.0,
                                            random_seed=i,
                                            match_N=True))

writing subsets of grids in code not working w/o copy

the following works:

from amuse.io import write_set_to_file
from amuse.datamodel import Grid
g=Grid(10,10,10)
g.z=1
sg=g[1:5,2:5,:]
write_set_to_file(sg,"subgrid","amuse")

but write_set_to_file seems to break when sg is a subgrid of an in-code grid; you need sg.copy() first...

OpenMPI Issues Causing "AttributeError: 'function' object has no attribute 'threaded'"

Hello Everyone,

I have been having some issues running AMUSE example codes after compiling from source. Specifically the error that returns on my desktop (AMD Phenom II 6-Core, 8 GB RAM, fresh install of Fedora 23) and our local RedHat cluster is:

[jglaser@localhost amuse]$ mpiexec -np 1 python examples/simple/cluster.py
Traceback (most recent call last):
  File "examples/simple/cluster.py", line 132, in <module>
    9.0 | nbody_system.time
  File "examples/simple/cluster.py", line 35, in simulate_small_cluster
    gravity = Hermite(number_of_workers = number_of_workers)
  File "/home/jglaser/amuse/src/amuse/community/hermite0/interface.py", line 208, in __init__
    legacy_interface = HermiteInterface(**options)
  File "/home/jglaser/amuse/src/amuse/community/hermite0/interface.py", line 25, in __init__
    **options)
  File "/home/jglaser/amuse/src/amuse/rfi/core.py", line 581, in __init__
    self._start(name_of_the_worker = name_of_the_worker, **options)
  File "/home/jglaser/amuse/src/amuse/rfi/core.py", line 591, in _start
    self.channel = self.channel_factory(name_of_the_worker, type(self), interpreter_executable = interpreter_executable, **options)
  File "/home/jglaser/amuse/src/amuse/rfi/channel.py", line 1108, in __init__
    self.ensure_mpi_initialized()
  File "/home/jglaser/amuse/src/amuse/rfi/channel.py", line 1140, in ensure_mpi_initialized
    if rc.threaded:
AttributeError: 'function' object has no attribute 'threaded'
-------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[50049,1],0]
  Exit code:    1
--------------------------------------------------------------------------

I am compiling with the correct OpenMPI directory (/usr/lib64/openmpi) and reinstalling mpi4py returns the following linking scheme:

[jglaser@localhost amuse]$ ldd /home/jglaser/anaconda2/lib/python2.7/site-packages/mpi4py/MPI.so
    linux-vdso.so.1 (0x00007ffeeff40000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007f123b0b8000)
    libpython2.7.so.1.0 => /home/jglaser/anaconda2/lib/libpython2.7.so.1.0 (0x00007f123acd0000)
    libmpi.so.1 => /usr/lib64/openmpi/lib/libmpi.so.1 (0x00007f123a9ed000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f123a7d0000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f123a40e000)
    /lib64/ld-linux-x86-64.so.2 (0x000055fe46260000)
    libutil.so.1 => /lib64/libutil.so.1 (0x00007f123a20b000)
    libm.so.6 => /lib64/libm.so.6 (0x00007f1239f09000)
    libopen-rte.so.7 => /usr/lib64/openmpi/lib/libopen-rte.so.7 (0x00007f1239c8c000)
    libopen-pal.so.6 => /usr/lib64/openmpi/lib/libopen-pal.so.6 (0x00007f1239a02000)
    librt.so.1 => /lib64/librt.so.1 (0x00007f12397fa000)
    libhwloc.so.5 => /lib64/libhwloc.so.5 (0x00007f12395c0000)
    libevent-2.0.so.5 => /lib64/libevent-2.0.so.5 (0x00007f1239377000)
    libevent_pthreads-2.0.so.5 => /lib64/libevent_pthreads-2.0.so.5 (0x00007f1239174000)
    libnuma.so.1 => /lib64/libnuma.so.1 (0x00007f1238f68000)
    libltdl.so.7 => /lib64/libltdl.so.7 (0x00007f1238d5e000)

Which seems to be just fine. So my question to you all is: have any of you seen this error before? And if so, is there a common way to work around it?

~ Joe Glaser

from pynbody.snapshot import _new

In the latest version of pynbody (0.30), pynbody.snapshot._new has been renamed to pynbody.snapshot.new (without the underscore). This raises an ImportError in /src/amuse/plot.py.
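
A tolerant import would cover both pynbody versions (a sketch; plot.py currently hard-codes the old name):

# Works with pynbody < 0.30 (_new) and >= 0.30 (new).
try:
    from pynbody.snapshot import new as _new
except ImportError:
    from pynbody.snapshot import _new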

Python 3 AMUSE: dependence on 2to3?

I'm curious why 2to3 is a requirement for using AMUSE with Python 3. Is it just print statements that need to be transformed to print functions, or is there more that necessitates this?

Boolean attributes issue

Codes with particle boolean attributes cause an issue with some versions of numpy. It relates to the fact that codes return int8 for boolean output arguments, while a particle set with a boolean attribute cannot be set with an int8 array. An example of this is the test of the secularmultiple code.
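
A hedged sketch of the mismatch being described (the attribute name is just an example; the exact failure depends on the numpy version):

import numpy
from amuse.datamodel import Particles

# Codes hand back int8 arrays for boolean outputs, while a particle set whose
# attribute was created as bool may refuse the int8 assignment.
p = Particles(3)
p.is_binary = numpy.array([True, False, True])        # bool attribute
from_code = numpy.array([1, 0, 1], dtype=numpy.int8)  # what a code returns
p.is_binary = from_code  # reportedly fails with some numpy versions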
