

MuPIF is an open-source, modular, distributed, object-oriented integration platform that allows creating complex, distributed, multiphysics simulation workflows across scales and processing chains by combining existing simulation tools. The platform comes with a data management system for building digital twin representations of physical systems.

Home Page: https://www.mupif.org

License: GNU Lesser General Public License v3.0


mupif's Introduction

MuPIF

MuPIF is a modular, object-oriented integration platform that allows creating complex, distributed, multiphysics simulation workflows across scales and processing chains by combining existing simulation tools.


Documentation

Prerequisites

MuPIF requires the Python interpreter, version 3.8 or newer. It has been tested on Linux and Windows systems. Network communication relies on the Pyro5 module.

Installation

There are two options for installing MuPIF:

The first, recommended one, relies on the Python Package Index (run as pip3 or pip):

  • For a system-wide installation (needs admin privileges): pip3 install mupif
  • For a user-space installation: pip3 install mupif --user

The second option relies on the most recent development version on GitHub:

  • pip3 install --upgrade git+https://github.com/mupif/mupif.git
  • or clone the repository: git clone https://github.com/mupif/mupif.git mupif.git

License

MuPIF has been developed at the Czech Technical University by Borek Patzak and coworkers and is available under the GNU Lesser General Public License version 3.0 (LGPLv3).

Further information

Please consult the MuPIF home page (http://www.mupif.org) for additional information.

mupif's People

Contributors

bpatzak, eudoxos, kant, nitramkaroh, ollitapa, stanislavsulc, vit-smilauer


mupif's Issues

MuPIF examples

Some other modules (mupif/jupyter) rely on models from the mupif examples. However, the models are not importable when importing mupif into a Jupyter notebook or into any other Python script.
In mupif/jupyter this is solved by cloning models.py, which is not nice.
There are two possible solutions:

  • allow importing models.py from mupif, but not by default (not sure if possible)
  • make the examples an independent module.

How to get an Output's ScalarArray with an HTTP request

I tried to initialize and schedule a new workflow execution for workflow101 within the DeeMa project.
Everything seems fine as I receive a workflowExecutionID: 622f2e9e5e61c3f54346c978.

Except that when I want to see the execution outputs (http://172.23.1.1:5555/workflowexecutions/622f2e9e5e61c3f54346c978/outputs), a link is displayed in the value column (http://172.23.1.1:5555/property_array_view/622f4c775e61c3f54346c979/1) to see the actual data (a two-dimensional array).

Is it because of the data type itself or something else?

It feels like there is an additional request that fetches the data but I don't think this is documented.

So can someone please tell me where it is documented or tell me more about it?
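For what it's worth, the extra request can be scripted. The sketch below assumes (based only on the URLs in this issue, not on a documented MuPIF REST API) that the outputs endpoint returns a JSON list in which an output's value may be a link to /property_array_view/…; a second GET on that link would then fetch the array itself.

```python
# Sketch: follow the extra link in the outputs listing to get the actual array.
# Endpoint paths and JSON layout are assumptions inferred from the issue.
import json
import urllib.request

def fetch_json(url):
    """GET a URL and decode the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode())

def array_links(outputs):
    """Collect output values that look like links to property_array_view."""
    links = []
    for out in outputs:
        value = out.get('value')
        if isinstance(value, str) and '/property_array_view/' in value:
            links.append(value)
    return links

# usage (host and IDs taken from the issue, purely illustrative):
# outputs = fetch_json('http://172.23.1.1:5555/workflowexecutions/<weid>/outputs')
# for link in array_links(outputs):
#     data = fetch_json(link)  # the second request fetches the 2D array itself
```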

data accessibility in workflow method

In the Workflow class in Workflow.py, the solve method calls self.terminate() at the end. This means that after executing the solve method of a workflow, access to fields or properties is no longer possible. It may therefore be useful to remove this call from the solve method; the terminate method could instead be called by the same code that initializes the workflow, for example.
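The proposed split of responsibilities could look like the following stand-in sketch (the Workflow class here is illustrative, not the real mupif.Workflow): solve() leaves the instance alive, and the code that created the workflow calls terminate() once it has read the outputs.

```python
# Sketch of the proposal: solve() no longer terminates; the caller does.
class Workflow:
    def __init__(self):
        self.results = None
        self.terminated = False

    def solve(self):
        # ... run all time steps ...
        self.results = {'maxTemperature': 42.0}
        # note: no self.terminate() here, so outputs stay accessible

    def get(self, name):
        assert not self.terminated, 'workflow already terminated'
        return self.results[name]

    def terminate(self):
        # release remote jobs, close connections, etc.
        self.terminated = True

wf = Workflow()
wf.solve()
print(wf.get('maxTemperature'))  # outputs still accessible after solve()
wf.terminate()                   # the caller decides when to clean up
```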

Error in distributed computing as in Example 11

On my computer, I manage to run the example successfully. When I adapt this example for running with my API, the Python control script hangs, and one mupif server gives the following error:


2018-03-07 17:22:50 INFO:JobMan2cmd.py:91 Running daemon on hosts 127.0.0.1 port 9200 nathost 127.0.0.1 natport 6200 hkey mupif-secret-key

2018-03-07 17:22:50 INFO:JobMan2cmd.py:91 Running daemon on hosts 127.0.0.1 port 9200 nathost 127.0.0.1 natport 6200 hkey mupif-secret-key

2018-03-07 17:22:50 INFO:JobMan2cmd.py:95 Initializing application with initial file input.in and workdir /home/rauchs/Modelling/testmupifdist4/WorkDir2/[email protected]@Solver2

2018-03-07 17:22:50 INFO:JobMan2cmd.py:95 Initializing application with initial file input.in and workdir /home/rauchs/Modelling/testmupifdist4/WorkDir2/[email protected]@Solver2

Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/mupif-2.0.0-py3.6.egg/mupif/tools/JobMan2cmd.py", line 124, in <module>
main()
File "/usr/local/lib/python3.6/site-packages/mupif-2.0.0-py3.6.egg/mupif/tools/JobMan2cmd.py", line 96, in main
app = conf.applicationClass(conf.applicationInitialFile, workDir)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 50, in __init__
open(workdir+os.path.sep+file, 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/home/rauchs/Modelling/testmupifdist4/WorkDir2/[email protected]@Solver2/input.in'


The error comes after this command in the Python script:
app2 = PyroUtil.allocateApplicationWithJobManager( ns, JobMan2, cfg.jobNatPorts[0], cfg.hkey)

It looks like the dummy file name input.in, set in serverConfig.py, does not point to the right input file. I noticed the following difference between the demo app and my app:

  • in the demo app, the input file is read later, in the solveStep method.
  • in my app, the input file is read by the constructor __init__.

Is it possible that this causes the error? And how can it be fixed? It does not seem reasonable to me to read the input file in the solveStep method, because this method is called once for every solution of a load increment. Unless it is restricted to the first execution of solveStep in an incremental analysis.

Pass workDirectory path to model __init__

When a model instance is allocated using the JobManager, the model instance should know its workDirectory, as it may need to transfer some files into it.
Solution: this information is passed from the JobManager allocateJob method to the _spawnedProcess method, but it is not propagated to mupif.pyroutil.runAppServer, which should pass it to the model's __init__.
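The proposed propagation might be sketched like this; allocate_job, run_app_server and Model are illustrative stand-ins for JobManager.allocateJob, mupif.pyroutil.runAppServer and the model class, and the real signatures may differ.

```python
# Sketch: the job manager creates the per-job work directory and forwards it
# all the way down to the model constructor.
import os
import tempfile

class Model:
    def __init__(self, workDir=None):
        # the model now knows where to put/read its files
        self.workDir = workDir

def run_app_server(appClass, workDir):
    # stand-in for mupif.pyroutil.runAppServer: forward workDir to the model
    return appClass(workDir=workDir)

def allocate_job(appClass, baseDir, jobID):
    # stand-in for JobManager.allocateJob -> _spawnedProcess
    workDir = os.path.join(baseDir, jobID)
    os.makedirs(workDir, exist_ok=True)
    return run_app_server(appClass, workDir)

base = tempfile.mkdtemp()
model = allocate_job(Model, base, 'job-001')
print(model.workDir)  # ends with job-001
```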

New problem with getFieldURI

Using mupif-2.0.6, I get the following error message, whether using OpenVPN or ssh, on Windows or Linux:

Traceback (most recent call last):
File "testrundist-v1.py", line 136, in <module>
uri = app1.getFieldURI(FieldID.FID_Displacement,istep.getTime().inUnitsOf('s').getValue())
File "/usr/local/lib/python3.6/site-packages/Pyro4/core.py", line 179, in __call__
return self.__send(self.__name, args, kwargs)
File "/usr/local/lib/python3.6/site-packages/Pyro4/core.py", line 451, in _pyroInvoke
raise data
mupif.APIError.APIError: Error: can not obtain field

Any idea where this comes from?

Clean abstract field class

The Field class should be abstract, allowing for different field representations.
However, its interface is focused on interpolated fields, as many methods introduced by this class refer to the discretized character of the representation (getVertexValue, getMesh, etc.). These representation-specific methods should be moved to child classes.
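A minimal sketch of the proposed split using Python's abc module (class and method names follow the issue, but this is not the actual mupif code): the base class keeps only representation-independent methods, and the discretization-specific ones move to a child.

```python
# Sketch: abstract Field base, mesh-specific methods in a child class.
from abc import ABC, abstractmethod

class Field(ABC):
    """Representation-independent interface."""
    @abstractmethod
    def evaluate(self, position, time):
        ...

class MeshField(Field):
    """Interpolated (discretized) representation; mesh methods live here."""
    def __init__(self, mesh, vertexValues):
        self.mesh = mesh
        self.vertexValues = vertexValues

    def getMesh(self):
        return self.mesh

    def getVertexValue(self, i):
        return self.vertexValues[i]

    def evaluate(self, position, time):
        # real code would locate the containing cell and interpolate;
        # this stand-in just returns the first vertex value
        return self.vertexValues[0]

f = MeshField(mesh=None, vertexValues=[1.5, 2.5])
print(f.getVertexValue(1))  # 2.5
```

Other representations (e.g. analytically defined fields) would then subclass Field without carrying mesh baggage, while `Field()` itself cannot be instantiated.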

PyroFile & permissions

@grauchs in #54 (comment):

When using the PyroFile method to create an instance of a file to be copied subsequently, I run into permission issues. When the server has been started under my username, I manage to create the instance and copy, whereas if the server was started by a different user (member of the same group), I get a permission denied. In fact, the local working directory of the job on the server is created by the jobmanager with permissions 755. This was never an issue in mupif/composelector. One notable difference is that with mupif/Deema, the PyroFile is created as a PyroFile instance in the workflow run by the user, whereas before, getPyroFile (used for creating the file) was a method of the jobmanager, so it probably had the same access rights as the jobmanager. Could this be the cause of the problem?

Can you describe in more detail what you do? JobManagers run the models under the same user as the jobmanager itself. Do you have something non-standard there?

method Field.toVTK2 not working for tensors?

I manage to do field output for scalars and vectors. For tensors, I get the following error message:
File "/usr/local/.../mupif/Field.py", line 651, in toVTK2
self.field2VTKData().tofile(fileName=fileName,format=format)
AttributeError: 'NoneType' object has no attribute 'tofile'
I noticed that in field2VTKData there are branches for scalar and vector, but not for tensor. Is it possible that tensor output to VTK format is not implemented yet?
Or am I putting the tensor in a wrong form?

problem when using evaluate method with tensors

In the setField method, I use the evaluate method. For vectors, the output works; for tensors, I get the following error:
Traceback (most recent call last):
File "testrun-v4.py", line 70, in
app2.setField(res3)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 269, in setField
vec=field.evaluate(node.getCoordinates(),0.5)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 196, in evaluate
return self._evaluate(positions, eps)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 219, in _evaluate
answer = icell.interpolate(position, [self.values[i.getNumber()] for i in icell.getVertices()])
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Cell.py", line 730, in interpolate
0.25*(1.0+ac[0])*(1.0-ac[1])*vertexValues[3][i]) for i in range(len(vertexValues[0]))])
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Cell.py", line 730, in <listcomp>
0.25*(1.0+ac[0])*(1.0-ac[1])*vertexValues[3][i]) for i in range(len(vertexValues[0]))])
TypeError: 'numpy.float64' object cannot be interpreted as an integer

For information: the tensor is given to the field constructor in getField as a tuple consisting of three tuples, each having three reals as members.

Octree localizer takes 10 minutes to fill for 155k-tetra mesh

I was testing HeavyMesh with 155k tetra elements (175k vertices). The localizer takes 599s to build in RAM (subsequent field evaluations are fast). @bpatzak, is that acceptable or do we need to look at something more efficient?

(The octree infinite subdivision bug #73 was not a bug in octree but in HeavyMesh and is fixed already)

issue with toVTK2 method of field class using vpn

Here is the error message I get when running my control script on a Windows PC addressing two Linux servers. This error doesn't happen when I am running the control script on a Linux machine. Please note that res1 is a field created by getField. The error does not happen if the field is obtained by getFieldURI.

$$$$ control file displacement field type <class 'mupif.Field.Field'>
Traceback (most recent call last):
File "testrundist-v1.py", line 133, in <module>
res1.toVTK2('testoutput-d',format='ascii')
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif-2.0.1-py3.6.egg\mupif\Field.py", line 721, in toVTK2
self.field2VTKData().tofile(filename=fileName,format=format)
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif-2.0.1-py3.6.egg\mupif\Field.py", line 415, in field2VTKData
return pyvtk.VtkData(self.mesh.getVTKRepresentation(), pyvtk.PointData(pyvtk.Vectors(self.value,**vectorsKw),lookupTable), 'Unstructured Grid Example')
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif-2.0.1-py3.6.egg\mupif\Mesh.py", line 677, in getVTKRepresentation
return pyvtk.UnstructuredGrid(vertices, hexahedron=hexahedrons, tetra=tetrahedrons, quad=quads, triangle=triangles)
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\pyvtk-0.5.18-py3.6.egg\pyvtk\UnstructuredGrid.py", line 73, in __init__
ValueError: In cell quad: must be (seq of seq|seq) integers less than 60
Exception ignored in: <bound method RemoteApplication.__del__ of <mupif.Application.RemoteApplication object at 0x07D8FCD0>>
Traceback (most recent call last):
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif-2.0.1-py3.6.egg\mupif\Application.py", line 349, in __del__
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif-2.0.1-py3.6.egg\mupif\Application.py", line 323, in terminate
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\pyro4-4.54-py3.6.egg\Pyro4\core.py", line 179, in __call__
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\pyro4-4.54-py3.6.egg\Pyro4\core.py", line 442, in _pyroInvoke
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\pyro4-4.54-py3.6.egg\Pyro4\util.py", line 162, in deserializeData
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\pyro4-4.54-py3.6.egg\Pyro4\util.py", line 433, in loads
AttributeError: Can't get attribute 'AttributeError' on None
2018-08-16 16:32:32 INFO:Application.py:328 RemoteApplication: Terminating jobManager job [email protected]@Solver1 on Mupif.JobManager@Solver1

Problem with definition of dimensionless units

Here is a description of the problem:
When properties get transferred between digimat and abaqus in the solveStep() method, everything goes fine. In workflow runs without digimat, I define the composite properties in the workflow() method. Here, for the Poisson's ratio, which is dimensionless, the solver.setProperty() method in the solveStep() method exited with an error about a division by zero! I was able to track the error down to the inUnitsOf() method.
I checked that in workflow runs with and without digimat intervention, the screen output of the defined properties was exactly the same! Now, when in the workflow() method I changed the setProperty calls from

workflow.setProperty(Property.ConstantProperty(compositeInPlanePoisson, PropertyID.PID_CompositeInPlanePoisson, ValueType.Scalar, PQ.getDimensionlessUnit(), None, 0))
workflow.setProperty(Property.ConstantProperty(compositeTransversePoisson, PropertyID.PID_CompositeTransversePoisson, ValueType.Scalar, PQ.getDimensionlessUnit(), None, 0))

to

workflow.setProperty(Property.ConstantProperty(compositeInPlanePoisson, PropertyID.PID_CompositeInPlanePoisson, ValueType.Scalar, 'none', None, 0))
workflow.setProperty(Property.ConstantProperty(compositeTransversePoisson, PropertyID.PID_CompositeTransversePoisson, ValueType.Scalar, 'none', None, 0))

everything went fine. This means replacing PQ.getDimensionlessUnit() by 'none' fixes the problem. So it looks like in some cases PQ.getDimensionlessUnit() is not treated the same way as 'none', despite the unit of the property being output as none in both cases. In addition, this problem with dimensionless properties did not occur with properties of other solvers like digimat or lammps. So where could the error be?

Convert monitor and other tools to mupif.monitor (exposed as JSON endpoint in the future)

The new mupif.monitor module should be the unified place for various mupif network data, such as job managers, schedulers, VPNs and others in the future. The interest is to have unified JSON data which could be used by all related services (web monitor, web API, console monitors). This ticket shows the current output when the module is run. The data are already sufficient for the web monitor, which could thus switch to a simple HTML page with JS fetching the data over JSON and filling the tables as necessary.

Comments on how to enhance the module, which data to add, etc. are welcome.

schedulerInfo(ns) lists all schedulers (usually just one).

[{'lastExecutions': [{'finished': '2022-06-30T06:58:07',
                      'started': '2022-06-30T06:58:04',
                      'status': 'Finished',
                      'weid': '62bd491936161784affa5ee2',
                      'wid': 'workflow_13'},
                     {'finished': '2022-06-30T06:57:06',
                      'started': '2022-06-30T06:57:04',
                      'status': 'Finished',
                      'weid': '62bd490936161784affa5edf',
                      'wid': 'workflow_13'},
                     {'finished': '2022-06-30T06:43:06',
                      'started': '2022-06-30T06:43:03',
                      'status': 'Finished',
                      'weid': '62bd45b836161784affa5eda',
                      'wid': 'workflow_13'}],
  'ns': {'metadata': {'network:{"host": "172.24.1.1", "port": "36000"}',
                      'type:scheduler'},
         'name': 'mupif.scheduler',
         'uri': 'PYRO:[email protected]:36000'},
  'numTasks': {'failed': 0,
               'finished': 6,
               'processed': 6,
               'running': 0,
               'scheduled': 0}}]

jobmanInfo(ns) lists job managers:

[{'jobs': [],
  'ns': {'metadata': {'network:{"host": "172.24.1.1", "port": "34274"}',
                      'type:jobmanager'},
         'name': 'CVUT.demo01',
         'uri': 'PYRO:[email protected]:34274'},
  'numJobs': {'curr': 0, 'max': -1, 'total': 4388},
  'signature': 'Mupif.JobManager.SimpleJobManager'}]

vpnInfo() returns data about wireguard VPNs. Remote addresses can be hidden, and geolocation can be optionally enabled.

{'test': {'bytes': {'rx': 694376856, 'tx': 1730452528},
          'ipAddr': '172.24.1.1',
          'peers': [{'bytes': {'rx': 503374568, 'tx': 1186033704},
                     'lastHandshake': datetime.datetime(2022, 7, 14, 11, 19, 6),
                     'publicKey': '2MQ91K0II2S81V8H7b/y9LLlKkvPO9UmOkrgTwcbDBo=',
                     'remote': {'host': 'xx.xx.xx.xx', 'port': 50185},
                     'vpnAddr': '172.24.1.2/32'},
                    {'bytes': {'rx': 143787932, 'tx': 472108396},
                     'lastHandshake': datetime.datetime(2022, 6, 24, 15, 31, 2),
                     'publicKey': 'AoD7e1cnm+pCybMUv618wXqqt62+xffpRFRxAjrflCY=',
                     'remote': {'host': 'xx.xx.xx.xx', 'port': 58928},
                     'vpnAddr': '172.24.1.10/32'},
                    {'bytes': {'rx': 5241212, 'tx': 4712448},
                     'lastHandshake': datetime.datetime(2022, 7, 14, 11, 18, 59),
                     'publicKey': 'shG/OCmze8kn6HIYxBbDA2EOzIahd1GIEROJJFX//x8=',
                     'remote': {'host': '185.19.37.57', 'port': 3654},
                     'vpnAddr': '172.24.1.11/32'},
                    {'bytes': {'rx': 392, 'tx': 184},
                     'lastHandshake': datetime.datetime(2022, 6, 17, 10, 37, 22),
                     'publicKey': 'WHzNMvVROFynhgZaou81R47R6Pkm1N6EAxTjGG0EjEc=',
                     'remote': {'host': 'xx.xx.xx.xx', 'port': 39578},
                     'vpnAddr': '172.24.1.13/32'},
                    {'bytes': {'rx': 41711240, 'tx': 64772396},
                     'lastHandshake': datetime.datetime(2022, 7, 13, 11, 19, 17),
                     'publicKey': 'Cxkt8L7DaGxl2SbnxEE3+IWqeuWZ40qXLhDz4D09H2c=',
                     'remote': {'host': 'xx.xx.xx.xx', 'port': 53449},
                     'vpnAddr': '172.24.1.20/32'},
                    {'bytes': {'rx': 261512, 'tx': 2825400},
                     'lastHandshake': datetime.datetime(2022, 6, 27, 9, 52, 59),
                     'publicKey': 'lfLCUVN96AEsn/GFpg9Yvzv4t0cq9F6YI8j0tCE06Bk=',
                     'remote': {'host': 'xx.xx.xx.xx', 'port': 48783},
                     'vpnAddr': '172.24.1.32/32'}]}}

Bug in property constructor?

In the Property constructor interface, there is no value parameter:
def __init__(self, propID, valueType, units, objectID=0):
Isn't this an error? Or has this class been deactivated on purpose to force the user to use ConstantProperty, which works fine?

Problem with time management

There is a potential issue in the increment loop management. The TimeStep features the current time, the time increment and the increment number, which are passed to the applications via the API for the solution of the increment. However, the time control in the mupif workflow also uses the simulation end time, called targetTime in the examples, to control the increment loop. This targetTime is not communicated to the application via the API, but may be required by the application, for example for scaling the boundary conditions (non-zero displacements, pressures, nodal loads) in the increment loop, i.e. load = max_load*time/targetTime. The way it works now, the target time is defined independently in both the workflow and the input file of the application (if there is any). This may be a source of error if those targetTimes are not set to identical values. It might be useful to include a method in the API for getting/setting targetTime, or to include it in the TimeStep class.
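The second option (carrying targetTime inside the TimeStep) can be sketched as follows; this TimeStep is a simplified stand-in for mupif's class, and scaled_load illustrates the load = max_load*time/targetTime scaling mentioned in the issue.

```python
# Sketch: TimeStep carries targetTime, so models can scale boundary conditions
# without a separately configured (and possibly inconsistent) end time.
from dataclasses import dataclass

@dataclass
class TimeStep:
    time: float        # current time
    dt: float          # time increment
    targetTime: float  # simulation end time, now visible to the model
    number: int = 1    # increment number

def scaled_load(maxLoad, tstep):
    # ramp a boundary condition linearly over the simulation
    return maxLoad * tstep.time / tstep.targetTime

step = TimeStep(time=2.5, dt=0.5, targetTime=10.0)
print(scaled_load(100.0, step))  # 25.0
```

With targetTime defined in one place only, the workflow and the application input file cannot disagree about the end time.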

Safe termination of JobManager

The jobManager has a terminate method, which unfortunately does not support truly safe termination. In the present implementation it hard-terminates all running jobs, so currently running executions will fail.
The terminate method should:

  • immediately stop accepting allocation requests (no further jobs can be allocated)
  • wait for completion of already existing jobs (default behavior)
  • optionally, perform hard-termination of running jobs
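The three bullets could be sketched roughly like this pure-Python stand-in (not the actual mupif JobManager), where a flag refuses new allocations and terminate() either waits for running jobs or abandons them in hard mode:

```python
# Sketch of the proposed termination policy using a stop flag.
import threading

class JobManager:
    def __init__(self):
        self.accepting = True
        self.jobs = []          # running threading.Thread objects

    def allocate_job(self, target):
        if not self.accepting:
            raise RuntimeError('job manager is shutting down')
        t = threading.Thread(target=target)
        self.jobs.append(t)
        t.start()
        return t

    def terminate(self, hard=False):
        self.accepting = False  # 1) immediately refuse further allocations
        if hard:
            return              # 3) hard mode: abandon jobs (real code would kill them)
        for t in self.jobs:     # 2) default: wait for existing jobs to complete
            t.join()

jm = JobManager()
done = []
jm.allocate_job(lambda: done.append('job finished'))
jm.terminate()                  # waits for the running job
print(done)
```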

problem with mupif file transfer

When trying to transfer files (which worked without problems in the composelector-mupif version) with the commands:

for inpFile in inpFiles:
    try:
        pf = self.abaqusJobMan.getPyroFile(macroSolver.getJobID(), inpFile, 'wb', buffSize=1024*1024)
    except Exception as err:
        print(' ***    getPyroFile ' + repr(err))
    try:
        pyroutil.uploadPyroFile(os.path.join(inpDir, inpFile), pf, hkey, size=1024*1024, compressFlag=True)
    except Exception as err:
        print(' ***    uploadPyroFile ' + repr(err))

I get the following error messages:

***    getPyroFile TypeError('__init__() takes 1 positional argument but 4 were given')
***    uploadPyroFile AttributeError("module 'mupif.pyroutil' has no attribute 'uploadPyroFile'")

I noticed there is a copy method in the PyroFile module, but I don't see what arguments have to be given. Does this replace the uploadPyroFile method? What about the getPyroFile error?

Problem with Field.evaluate method

When I call the evaluate function in the setField method, I get the following error message:
Traceback (most recent call last):
File "testrun-v4.py", line 71, in <module>
app2.setField(res1)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 296, in setField
vec=field.evaluate(node.getCoordinates(),0.5)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.6-py3.6.egg/mupif/Field.py", line 196, in evaluate
return self._evaluate(positions, eps)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.6-py3.6.egg/mupif/Field.py", line 219, in _evaluate
answer = icell.interpolate(position, [self.values[i.number] for i in icell.getVertices()])
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.6-py3.6.egg/mupif/Field.py", line 219, in <listcomp>
answer = icell.interpolate(position, [self.values[i.number] for i in icell.getVertices()])
AttributeError: 'numpy.int64' object has no attribute 'number'

The field should be correctly defined (I manage to view it in Paraview via VTK output), and the evaluation point is inside a cell; for debugging I can output the vertices, cell vertices, values etc. in the setField method before calling evaluate().
I am even able to output the field values using either field.values[inode.getNumber()] or field.values[inode.number], the latter appearing in the error message.

example11 simplifications

Hi Borek,

is this loop only supposed to copy data over? In that case, we might just have a method which copies the underlying HDF5 file (as a file, e.g. shutil.copy) and opens a new handle using the copy; would that be a good solution?

outGrains.resize(size=len(inGrains))
# todo: copy inGrains to outGrains (check for more elegant way)
for igNum, ig in enumerate(inGrains):
    outGrains[igNum].getMolecules().resize(size=len(ig.getMolecules()))
    for imNum, im in enumerate(ig.getMolecules()):
        om = outGrains[igNum].getMolecules()[imNum]
        om.getIdentity().setMolecularWeight(im.getIdentity().getMolecularWeight())
        om.getAtoms().resize(size=len(im.getAtoms()))
        for iaNum, ia in enumerate(im.getAtoms()):
            oa = om.getAtoms()[iaNum]
            oa.getIdentity().setElement(ia.getIdentity().getElement())
            oa.getProperties().getTopology().setPosition(ia.getProperties().getTopology().getPosition())
            oa.getProperties().getTopology().setVelocity(ia.getProperties().getTopology().getVelocity())
            oa.getProperties().getTopology().setStructure(ia.getProperties().getTopology().getStructure())
            atomCounter += 1
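The file-copy alternative suggested above could be as simple as the following sketch; a plain file stands in for the HDF5 store (an HDF5 file copies correctly as opaque bytes), and opening a new handle on the copy is left as a comment since the exact HeavyDataHandle API is not shown here.

```python
# Sketch: copy the backing file as raw bytes and open a fresh handle on the
# copy, instead of walking grains/molecules/atoms element by element.
import filecmp
import os
import shutil
import tempfile

def clone_heavy_data(srcPath, dstPath):
    """Duplicate the on-disk store; HDF5 files copy fine as opaque bytes."""
    shutil.copy(srcPath, dstPath)
    return dstPath  # real code would return e.g. a handle opened on dstPath

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'grains-in.h5')
with open(src, 'wb') as f:
    f.write(b'pretend HDF5 payload')  # stand-in content
dst = clone_heavy_data(src, os.path.join(tmp, 'grains-out.h5'))
print(filecmp.cmp(src, dst, shallow=False))  # byte-identical copy
```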

To replace the molecule, there is new functionality in the last commit, so it could be done via the inject method (perhaps replace would be a better name for it):

repMol.getIdentity().setMolecularWeight(random.randint(1, 10)*u.yg)
if (1):
    # print(repMol.getAtoms()[0])  # call _T_assertDataset()
    # print(repMol.getAtoms())
    # print("Deleting "+repMol.getAtoms().ctx.h5group.name+'/'+repMol.getAtoms()[0].datasetName)
    # #todo: make a method to solve this
    # del self.outputGrainState._h5obj[repMol.getAtoms().ctx.h5group.name+'/'+repMol.getAtoms()[0].datasetName]
    repMol.getAtoms().resize(size=random.randint(30, 60), reset=True)
    # print(repMol.getAtoms()[0])
    for a in repMol.getAtoms():
        a.getIdentity().setElement(random.choice(['H', 'N', 'Cl', 'Na', 'Fe']))
        a.getProperties().getTopology().setPosition((1, 2, 3)*u.nm)
        a.getProperties().getTopology().setVelocity((24, 5, 77)*u.m/u.s)
        struct = np.array([random.randint(1, 20) for i in range(random.randint(5, 20))], dtype='l')
        a.getProperties().getTopology().setStructure(struct)

could be replaced with something like

outGrains[rgNum].getMolecules()[rmNum].inject(inGrains[1].getMolecules()[3])

The inject method internally invokes to_dump and then from_dump, so all the data must fit into RAM; from_dump handles all resizing as necessary, and also converts units (if they happen to differ). The test uses to_dump and from_dump, but I don't think those need to be exposed in the API, as we would be exposing our storage structure to the users:

def test_06_dump_inject(self):
    handle = mp.HeavyDataHandle()
    mols = handle.getData(mode='create-memory', schemaName='molecule', schemasJson=mp.heavydata.sampleSchemas_json)
    mols.resize(2)
    mols[0].getIdentity().setMolecularWeight(1*u.g)
    mols[0].getIdentity().setMolecularWeight(1*u.g)
    m0a = mols[0].getAtoms()
    m0a.resize(2)
    m0a.getIdentity()[0].setElement('AA')
    m0a.getIdentity()[1].setElement('BB')
    # manipulate the dump by hand, to check
    dmp = mols.to_dump()
    dmp[0]['identity.molecularWeight'] = (1000., 'u')  # change mass of mol0 to 1000 u
    dmp[1]['identity.molecularWeight'] = (1, u.kg)     # change mass of mol1 to 1 kg
    # create from scratch
    handle2 = mp.HeavyDataHandle()
    mols2 = handle2.getData(mode='create-memory', schemaName='molecule', schemasJson=mp.heavydata.sampleSchemas_json)
    # use modified dump
    mols2.from_dump(dmp)
    self.assertEqual(mols2[0].getIdentity().getMolecularWeight(), 1000*u.Unit('u'))
    self.assertEqual(mols2[1].getIdentity().getMolecularWeight(), 1*u.Unit('kg'))
    self.assertEqual(mols2[0].getAtoms()[0].getIdentity().getElement(), 'AA')
    self.assertEqual(mols2[0].getAtoms()[1].getIdentity().getElement(), 'BB')
    # inject only a part of data
    handle3 = mp.HeavyDataHandle()
    mols3 = handle3.getData(mode='create-memory', schemaName='molecule', schemasJson=mp.heavydata.sampleSchemas_json)
    mols3.resize(2)
    mols3[0].getAtoms().inject(mols[0].getAtoms())
    self.assertEqual(len(mols3[0].getAtoms()), 2)
    self.assertEqual(mols3[0].getAtoms()[0].getIdentity().getElement(), 'AA')
    mols3[1].getAtoms().resize(5)
    # self.assertEqual(mols3[1]
    mols3[1].getAtoms()[4].inject(mols[0].getAtoms()[1])
    self.assertEqual(mols3[1].getAtoms()[4].getIdentity().getElement(), 'BB')

I don't want to touch example11 without coordination, so please modify it according to your consideration or we can talk so that I can do it.

Workflow checkpointing support

Consider adding checkpointing support for (long-running) workflows.
Allow storing the workflow state at specific (user-defined) points and restarting later. This would require state save/restore support from the individual models.
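A minimal sketch of what checkpoint/restore could look like, using pickle on a stand-in Workflow class (the real mupif implementation would additionally need save/restore hooks in each model, as noted above):

```python
# Sketch: dump workflow state at user-defined points, resume from the last one.
import os
import pickle
import tempfile

class Workflow:
    def __init__(self):
        self.step = 0
        self.state = {}

    def checkpoint(self, path):
        with open(path, 'wb') as f:
            pickle.dump({'step': self.step, 'state': self.state}, f)

    @classmethod
    def restore(cls, path):
        wf = cls()
        with open(path, 'rb') as f:
            data = pickle.load(f)
        wf.step, wf.state = data['step'], data['state']
        return wf

    def run(self, until, ckpt):
        while self.step < until:
            self.step += 1
            self.state['lastTime'] = self.step * 0.5  # pretend time stepping
            self.checkpoint(ckpt)  # user-defined checkpoint frequency

ckpt = os.path.join(tempfile.mkdtemp(), 'wf.ckpt')
Workflow().run(until=3, ckpt=ckpt)
wf2 = Workflow.restore(ckpt)            # "restart later"
print(wf2.step, wf2.state['lastTime'])  # 3 1.5
```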

error in makeFromVTK2

When I am using makeFromVTK2, I get an error which I don't get when using makeFromVTK3. The messages on the screen are:

2020-01-20 11:19:07 DEBUG:UnstructuredGrid.py:125       getting 17773 points

2020-01-20 11:19:07 DEBUG:UnstructuredGrid.py:135       getting 258155 cell indexes

2020-01-20 11:19:09 DEBUG:UnstructuredGrid.py:148       getting 51631 cell types

2020-01-20 11:19:09 DEBUG:UnstructuredGrid.py:158 unstructured_grid_fromfile done

Traceback (most recent call last):
  File "meshinput.py", line 130, in <module>
    vtkfield=Field.Field.makeFromVTK2(vfile,'none')
  File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\mupif\Field.py", line 782, in makeFromVTK2
    data = pyvtk.VtkData(fileName)  # this is where reading the file happens (inside pyvtk)
  File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\pyvtk\__init__.py", line 138, in __init__
    self.fromfile(args[0])
  File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\pyvtk\__init__.py", line 252, in fromfile
    lst.append(ff(f,n,sl[1:]))
  File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\pyvtk\Vectors.py", line 41, in vectors_fromfile
    vectors += map(eval,common._getline(f).split(' '))
TypeError: a bytes-like object is required, not 'str'

problem in execution using VPN

In the provided example666 running under VPN, I can reach the VPN nameserver and start the jobmanagers. Once the control script tries to register an application, it fails. The output of the control script is given below. Strangely, manual setting up of ssh tunnels is requested.

$$$ Starting distributed mupif execution
2018-05-31 11:17:30 DEBUG:PyroUtil.py:172 Can connect to a LISTENING port of nameserver on 172.30.0.1:9090

2018-05-31 11:17:30 DEBUG:PyroUtil.py:182 Connected to NameServer on 172.30.0.1:9090. Pyro4 version on your localhost is 4.54

$$$ nameserver connected
2018-05-31 11:17:30 INFO:PyroUtil.py:508 Located Job Manager Mupif.JobManager@Solver2 at: 172.30.0.161 44383 None None

L None 172.30.0.161 44383 rauchs 172.30.0.161
2018-05-31 11:17:30 INFO:PyroUtil.py:119 If ssh tunnel does not exist, do it manually using a command e.g. ssh -L None:172.30.0.161:44383 [email protected] -N , or putty.exe -L None:172.30.0.161:44383 [email protected] -N

2018-05-31 11:17:31 DEBUG:PyroUtil.py:243 Application Mupif.JobManager@Solver2, found URI PYRO:[email protected]:44383 on ('172.30.0.161', 44383, 'None', 'None') from a nameServer <Pyro4.core.Proxy at 0x7fcf906e0710; connected; for PYRO:[email protected]:9090>

2018-05-31 11:17:31 INFO:PyroUtil.py:251 Connecting to application Mupif.JobManager@Solver2 with <Pyro4.core.Proxy at 0x7fcf906e0e48; not connected; for PYRO:[email protected]:44383>

2018-05-31 11:17:31 DEBUG:PyroUtil.py:253 Connected to Mupif.JobManager.SimpleJobManager2 with the application Mupif.JobManager@Solver2

$$$ jobmanager 2 found
2018-05-31 11:17:31 INFO:PyroUtil.py:508 Located Job Manager Mupif.JobManager@Solver1 at: 172.30.0.161 44382 None None

L None 172.30.0.161 44382 rauchs 172.30.0.161
2018-05-31 11:17:31 INFO:PyroUtil.py:119 If ssh tunnel does not exist, do it manually using a command e.g. ssh -L None:172.30.0.161:44382 [email protected] -N , or putty.exe -L None:172.30.0.161:44382 [email protected] -N

2018-05-31 11:17:32 DEBUG:PyroUtil.py:243 Application Mupif.JobManager@Solver1, found URI PYRO:[email protected]:44382 on ('172.30.0.161', 44382, 'None', 'None') from a nameServer <Pyro4.core.Proxy at 0x7fcf906e0710; connected; for PYRO:[email protected]:9090>

2018-05-31 11:17:32 INFO:PyroUtil.py:251 Connecting to application Mupif.JobManager@Solver1 with <Pyro4.core.Proxy at 0x7fcf906e0f60; not connected; for PYRO:[email protected]:44382>

2018-05-31 11:17:32 DEBUG:PyroUtil.py:253 Connected to Mupif.JobManager.SimpleJobManager2 with the application Mupif.JobManager@Solver1

$$$ jobmanager 1 found
$$$ creating jobs
$$$ Creating job 2
2018-05-31 11:17:32 DEBUG:PyroUtil.py:543 Trying to connect to JobManager

2018-05-31 11:17:33 INFO:PyroUtil.py:561 Allocated job, returned record from jobManagaer:(1, '[email protected]@Solver2', 9250)

L 6250 172.30.0.161 9250 rauchs 172.30.0.161
2018-05-31 11:17:33 INFO:PyroUtil.py:119 If ssh tunnel does not exist, do it manually using a command e.g. ssh -L 6250:172.30.0.161:9250 [email protected] -N , or putty.exe -L 6250:172.30.0.161:9250 [email protected] -N

2018-05-31 11:17:34 DEBUG:PyroUtil.py:243 Application [email protected]@Solver2, found URI PYRO:[email protected]:6250 on ('172.30.0.161', 9250, '172.30.0.161', '6250') from a nameServer <Pyro4.core.Proxy at 0x7fcf906e0710; connected; for PYRO:[email protected]:9090>

2018-05-31 11:17:34 INFO:PyroUtil.py:251 Connecting to application [email protected]@Solver2 with <Pyro4.core.Proxy at 0x7fcf906fe550; not connected; for PYRO:[email protected]:6250>

2018-05-31 11:17:34 ERROR:PyroUtil.py:255 Communication error, perhaps a wrong key hkey=mupif-secret-key?

2018-05-31 11:17:34 ERROR:Example666.py:57 cannot connect: [Errno 111] Connection refused
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/Pyro4/core.py", line 487, in __pyroCreateConnection
sock = socketutil.createSocket(connect=connect_location, reuseaddr=Pyro4.config.SOCK_REUSE, timeout=self.__pyroTimeout, nodelay=Pyro4.config.SOCK_NODELAY)
File "/usr/local/lib/python3.6/site-packages/Pyro4/socketutil.py", line 295, in createSocket
sock.connect(connect)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "Example666.py", line 44, in <module>
app2 = PyroUtil.allocateApplicationWithJobManager( ns, JobMan2, cfg.jobNatPorts[0], cfg.hkey, PyroUtil.SSHContext(sshClient=cfg.sshClient, options=cfg.options, sshHost=cfg.sshHost) )
File "/usr/local/lib/python3.6/site-packages/mupif-2.0.0-py3.6.egg/mupif/PyroUtil.py", line 580, in allocateApplicationWithJobManager
app = _connectApp(ns, retRec[1], hkey)
File "/usr/local/lib/python3.6/site-packages/mupif-2.0.0-py3.6.egg/mupif/PyroUtil.py", line 252, in _connectApp
sig = app2.getApplicationSignature()
File "/usr/local/lib/python3.6/site-packages/Pyro4/core.py", line 263, in __getattr__
self._pyroGetMetadata()
File "/usr/local/lib/python3.6/site-packages/Pyro4/core.py", line 570, in _pyroGetMetadata
self.__pyroCreateConnection()
File "/usr/local/lib/python3.6/site-packages/Pyro4/core.py", line 522, in __pyroCreateConnection
raise ce
Pyro4.errors.CommunicationError: cannot connect: [Errno 111] Connection refused
Traceback (most recent call last):
File "Example666.py", line 82, in <module>
print(app2.getApplicationSignature())
AttributeError: 'NoneType' object has no attribute 'getApplicationSignature'

Enable running a workflow as a model inside another workflow

This works locally, but not yet in a distributed setting.
The nested workflow is not remotely available; it has to be made available and executed on the scheduling node.
Things to consider:

  • scheduler: query top-level and nested workflows, extended check for resource availability

Huge output when using evaluate at a point outside the field mesh

When I do setField using a mesh which does not conform to the original field mesh, some points lie outside the domain of the field mesh. The evaluate function then produces huge output. There seem to be two issues with this output:

  • there seems to be a problem with logging: "TypeError: not all arguments converted during string formatting"
  • one kind of message gives only 0 as information

Here is the output stemming from one single point outside of the mesh domain:
--- Logging error ---
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/__init__.py", line 992, in emit
msg = self.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 838, in format
return fmt.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 575, in format
record.message = record.getMessage()
File "/usr/local/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "testrun-v4.py", line 70, in <module>
app2.setField(res1)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 269, in setField
vec=field.evaluate(node.getCoordinates(),0.5)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 196, in evaluate
return self._evaluate(positions, eps)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 231, in _evaluate
log.error('Field::evaluate - no source cell found for position ', position)
Message: 'Field::evaluate - no source cell found for position '
Arguments: ((1000.0, 0.0, 0.0),)
--- Logging error ---
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/__init__.py", line 992, in emit
msg = self.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 838, in format
return fmt.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 575, in format
record.message = record.getMessage()
File "/usr/local/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "testrun-v4.py", line 70, in <module>
app2.setField(res1)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 269, in setField
vec=field.evaluate(node.getCoordinates(),0.5)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 196, in evaluate
return self._evaluate(positions, eps)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 231, in _evaluate
log.error('Field::evaluate - no source cell found for position ', position)
Message: 'Field::evaluate - no source cell found for position '
Arguments: ((1000.0, 0.0, 0.0),)
--- Logging error ---
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/__init__.py", line 992, in emit
msg = self.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 838, in format
return fmt.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 575, in format
record.message = record.getMessage()
File "/usr/local/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "testrun-v4.py", line 70, in <module>
app2.setField(res1)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 269, in setField
vec=field.evaluate(node.getCoordinates(),0.5)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 196, in evaluate
return self._evaluate(positions, eps)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 233, in _evaluate
log.debug(icell.number, icell.containsPoint(position), icell.glob2loc(position))
Message: 0
Arguments: (False, (False, (-18.00227703832303, 1.1019910889255309)))
--- Logging error ---
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/__init__.py", line 992, in emit
msg = self.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 838, in format
return fmt.format(record)
File "/usr/local/lib/python3.6/logging/__init__.py", line 575, in format
record.message = record.getMessage()
File "/usr/local/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "testrun-v4.py", line 70, in <module>
app2.setField(res1)
File "/home/rauchs/ComPoSTe/Interface/composteAPI.py", line 269, in setField
vec=field.evaluate(node.getCoordinates(),0.5)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 196, in evaluate
return self._evaluate(positions, eps)
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.7-py3.6.egg/mupif/Field.py", line 233, in _evaluate
log.debug(icell.number, icell.containsPoint(position), icell.glob2loc(position))
Message: 0
Arguments: (False, (False, (-18.00227703832303, 1.1019910889255309)))

How do I get remote jobManWorkDir

In the workflow control script, I can upload the input file for the application to the application server using getPyroFile and PyroUtil.uploadPyroFile. However, in the model.initialize method, I have to give the working directory where the file is located. How do I get the working directory of a remote application server?
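One way this could be addressed is sketched below; `getWorkDir` is a hypothetical helper, not an existing mupif call — the idea is simply to expose the working directory from the application side so the control script can query it instead of hard-coding the server-side path:

```python
# Hypothetical sketch: expose the working directory on the application,
# so the workflow control script can ask the remote side for it.
class MyApplication:                     # stands in for a mupif Application subclass
    def __init__(self, workDir):
        self.workDir = workDir

    def getWorkDir(self):
        """Return this application's server-side working directory."""
        return self.workDir

# In a real setup `app` would be a Pyro proxy and this call would travel
# over the network to the application server:
app = MyApplication(workDir='/scratch/mupif/job-01')
inputPath = app.getWorkDir() + '/input.dat'
```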

octree subdivides to tiny cells without limit, until recursion limit is reached

In a specific case (HeavyMesh with big data loaded), the octree keeps subdividing until it exhausts Python's recursion limit. This is the start of the debugging output:

Octree init: origin: (-0.00495, -200.003571429, -0.0015) size: 495.0099
Octant insert: data limit reached, subdivision
Dividing locally: self  BBox [(-0.00495, -200.003571429, -0.0015)-(495.00495, 295.00632857100004, 495.0084)]  mask: (True, True, True)
Octree init: origin: (-0.00495, -200.003571429, -0.0015) size: 247.50495
  Children:  BBox [(-0.00495, -200.003571429, -0.0015)-(247.5, 47.501378571000004, 247.50345000000002)]
Octree init: origin: (-0.00495, -200.003571429, 247.50345000000002) size: 247.50495
  Children:  BBox [(-0.00495, -200.003571429, 247.50345000000002)-(247.5, 47.501378571000004, 495.00840000000005)]
Octree init: origin: (-0.00495, 47.501378571000004, -0.0015) size: 247.50495
  Children:  BBox [(-0.00495, 47.501378571000004, -0.0015)-(247.5, 295.00632857100004, 247.50345000000002)]
Octree init: origin: (-0.00495, 47.501378571000004, 247.50345000000002) size: 247.50495
  Children:  BBox [(-0.00495, 47.501378571000004, 247.50345000000002)-(247.5, 295.00632857100004, 495.00840000000005)]
Octree init: origin: (247.5, -200.003571429, -0.0015) size: 247.50495
  Children:  BBox [(247.5, -200.003571429, -0.0015)-(495.00495, 47.501378571000004, 247.50345000000002)]
Octree init: origin: (247.5, -200.003571429, 247.50345000000002) size: 247.50495
  Children:  BBox [(247.5, -200.003571429, 247.50345000000002)-(495.00495, 47.501378571000004, 495.00840000000005)]
Octree init: origin: (247.5, 47.501378571000004, -0.0015) size: 247.50495
  Children:  BBox [(247.5, 47.501378571000004, -0.0015)-(495.00495, 295.00632857100004, 247.50345000000002)]
Octree init: origin: (247.5, 47.501378571000004, 247.50345000000002) size: 247.50495
  Children:  BBox [(247.5, 47.501378571000004, 247.50345000000002)-(495.00495, 295.00632857100004, 495.00840000000005)]
Octant insert: data limit reached, subdivision
Dividing locally: self  BBox [(-0.00495, 47.501378571000004, -0.0015)-(247.5, 295.00632857100004, 247.50345000000002)]  mask: (True, True, True)
Octree init: origin: (-0.00495, 47.501378571000004, -0.0015) size: 123.752475
  Children:  BBox [(-0.00495, 47.501378571000004, -0.0015)-(123.74752500000001, 171.253853571, 123.75097500000001)]
Octree init: origin: (-0.00495, 47.501378571000004, 123.75097500000001) size: 123.752475
  Children:  BBox [(-0.00495, 47.501378571000004, 123.75097500000001)-(123.74752500000001, 171.253853571, 247.50345000000002)]
Octree init: origin: (-0.00495, 171.253853571, -0.0015) size: 123.752475
  Children:  BBox [(-0.00495, 171.253853571, -0.0015)-(123.74752500000001, 295.00632857100004, 123.75097500000001)]
Octree init: origin: (-0.00495, 171.253853571, 123.75097500000001) size: 123.752475
  Children:  BBox [(-0.00495, 171.253853571, 123.75097500000001)-(123.74752500000001, 295.00632857100004, 247.50345000000002)]
Octree init: origin: (123.74752500000001, 47.501378571000004, -0.0015) size: 123.752475
  Children:  BBox [(123.74752500000001, 47.501378571000004, -0.0015)-(247.5, 171.253853571, 123.75097500000001)]
Octree init: origin: (123.74752500000001, 47.501378571000004, 123.75097500000001) size: 123.752475
  Children:  BBox [(123.74752500000001, 47.501378571000004, 123.75097500000001)-(247.5, 171.253853571, 247.50345000000002)]
Octree init: origin: (123.74752500000001, 171.253853571, -0.0015) size: 123.752475
  Children:  BBox [(123.74752500000001, 171.253853571, -0.0015)-(247.5, 295.00632857100004, 123.75097500000001)]
Octree init: origin: (123.74752500000001, 171.253853571, 123.75097500000001) size: 123.752475
  Children:  BBox [(123.74752500000001, 171.253853571, 123.75097500000001)-(247.5, 295.00632857100004, 247.50345000000002)]
Octant insert: data limit reached, subdivision
Dividing locally: self  BBox [(-0.00495, 47.501378571000004, -0.0015)-(123.74752500000001, 171.253853571, 123.75097500000001)]  mask: (True, True, True)
Octree init: origin: (-0.00495, 47.501378571000004, -0.0015) size: 61.8762375
  Children:  BBox [(-0.00495, 47.501378571000004, -0.0015)-(61.8712875, 109.377616071, 61.8747375)]
Octree init: origin: (-0.00495, 47.501378571000004, 61.8747375) size: 61.8762375
  Children:  BBox [(-0.00495, 47.501378571000004, 61.8747375)-(61.8712875, 109.377616071, 123.75097500000001)]
Octree init: origin: (-0.00495, 109.377616071, -0.0015) size: 61.8762375
  Children:  BBox [(-0.00495, 109.377616071, -0.0015)-(61.8712875, 171.253853571, 61.8747375)]
Octree init: origin: (-0.00495, 109.377616071, 61.8747375) size: 61.8762375
  Children:  BBox [(-0.00495, 109.377616071, 61.8747375)-(61.8712875, 171.253853571, 123.75097500000001)]
Octree init: origin: (61.8712875, 47.501378571000004, -0.0015) size: 61.8762375
  Children:  BBox [(61.8712875, 47.501378571000004, -0.0015)-(123.747525, 109.377616071, 61.8747375)]
Octree init: origin: (61.8712875, 47.501378571000004, 61.8747375) size: 61.8762375
  Children:  BBox [(61.8712875, 47.501378571000004, 61.8747375)-(123.747525, 109.377616071, 123.75097500000001)]
Octree init: origin: (61.8712875, 109.377616071, -0.0015) size: 61.8762375
  Children:  BBox [(61.8712875, 109.377616071, -0.0015)-(123.747525, 171.253853571, 61.8747375)]
Octree init: origin: (61.8712875, 109.377616071, 61.8747375) size: 61.8762375
  Children:  BBox [(61.8712875, 109.377616071, 61.8747375)-(123.747525, 171.253853571, 123.75097500000001)]
Octant insert: data limit reached, subdivision

and this is much later:

Octant insert: data limit reached, subdivision
Dividing locally: self  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]  mask: (True, True, True)
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 1.4547033526277637e-36
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octant insert: data limit reached, subdivision
Dividing locally: self  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]  mask: (True, True, True)
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octree init: origin: (7.42594178104, 149.999928571, 122.556091341) size: 7.2735167631388184e-37
  Children:  BBox [(7.42594178104, 149.999928571, 122.556091341)-(7.42594178104, 149.999928571, 122.556091341)]
Octant insert: data limit reached, subdivision
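The tail of the log shows a degenerate octant (zero-extent bounding box, size halved toward 1e-36) that can never satisfy the data limit, presumably because many coincident points land in one octant. A minimal guard sketch (not mupif's actual Octree API; the thresholds are assumptions) that would stop this runaway recursion:

```python
# Refuse to subdivide when the children would be degenerate or a depth cap
# is hit, so coincident points cannot drive the recursion past Python's limit.
MIN_SIZE = 1e-9      # smallest admissible octant edge (assumption)
MAX_DEPTH = 32       # hard cap on tree depth (assumption)

def should_subdivide(size, depth, n_items, data_limit=16):
    if n_items <= data_limit:
        return False                 # still under the per-octant data limit
    if size / 2.0 < MIN_SIZE:        # children would be (near-)zero sized,
        return False                 # as in the 1.45e-36 boxes above
    if depth >= MAX_DEPTH:
        return False
    return True
```

An octant that hits the size or depth limit would simply keep its items in an overfull leaf instead of splitting further.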

problem with mesh.setup() method

I have a problem when generating a mupif mesh. For the setField method of the ABAQUS API, which I have to perform several times, I have to generate the mesh instance several times. However, something strange happens when the mesh is generated repeatedly: the cell and vertex numbering is apparently not reset. For the first readMesh operation, vertex and cell numbers start at 1, whereas after the second readMesh they start at 6851 and 7085, although the numbers of cells and vertices are always correct. Each time I perform a readMesh, the starting number increases by the number of vertices or cells read before. Is there an issue with resetting the numbering when a mupif mesh instance is created?
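The symptom (numbering continuing across instances) is what one gets when the counter lives at class level rather than per instance. A hypothetical illustration, not mupif's actual code:

```python
# A class-level counter is shared by every mesh instance, so numbering keeps
# running; an instance-level counter restarts at 1 for each new mesh.
class SharedCounterMesh:
    _next = 1                        # class attribute: shared by all meshes
    def addVertex(self):
        n = SharedCounterMesh._next
        SharedCounterMesh._next += 1
        return n

class InstanceCounterMesh:
    def __init__(self):
        self._next = 1               # fresh counter for each mesh instance
    def addVertex(self):
        n = self._next
        self._next += 1
        return n

a, b = SharedCounterMesh(), SharedCounterMesh()
shared = [a.addVertex(), b.addVertex()]      # numbering keeps running
c, d = InstanceCounterMesh(), InstanceCounterMesh()
fresh = [c.addVertex(), d.addVertex()]       # restarts per mesh
```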

new problem in Field.toVTK2 method

I am running my test problem in the following distributed configuration:

  • name server and control script on a Windows PC, with Python 3.6
  • two application servers on one Linux machine

I get the following error message in Field.toVTK2, which I don't get under Linux (i.e. NS and control script on one Linux machine, two application servers on another Linux machine). Please note that issue #3 has also been fixed in the Windows pyvtk version. The error only happens when I retrieve the field using the getField method. If I retrieve it using the getFieldURI method, this error does not occur, but issue #10 occurs. Here is the error output from the control script window:

Traceback (most recent call last):
File "testrundist-v1.py", line 116, in <module>
res1.toVTK2('testoutput-d',format='ascii')
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif\Field.py", line 722, in toVTK2
self.field2VTKData().tofile(filename=fileName,format=format)
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif\Field.py", line 416, in field2VTKData
return pyvtk.VtkData(self.mesh.getVTKRepresentation(), pyvtk.PointData(pyvtk.Vectors(self.value,**vectorsKw),lookupTable), 'Unstructured Grid Example')
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif\Mesh.py", line 678, in getVTKRepresentation
return pyvtk.UnstructuredGrid(vertices, hexahedron=hexahedrons, tetra=tetrahedrons, quad=quads, triangle=triangles)
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\pyvtk\UnstructuredGrid.py", line 73, in __init__
raise ValueError('In cell %s: must be (seq of seq|seq) integers less than %s'%(k,sz))
ValueError: In cell quad: must be (seq of seq|seq) integers less than 60
Exception ignored in: <bound method RemoteApplication.__del__ of <mupif.Application.RemoteApplication object at 0x06D6FB90>>
Traceback (most recent call last):
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif\Application.py", line 341, in __del__
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\mupif\Application.py", line 323, in terminate
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\Pyro4\core.py", line 179, in __call__
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\Pyro4\core.py", line 442, in _pyroInvoke
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\Pyro4\util.py", line 162, in deserializeData
File "C:\Users\rauchs\AppData\Roaming\Python\Python36\site-packages\Pyro4\util.py", line 433, in loads
AttributeError: Can't get attribute 'AttributeError' on None
2018-04-06 15:17:55 INFO:Application.py:327 RemoteApplication: Terminating jobManager job [email protected]@Solver1 on <mupif.JobManager.RemoteJobManager object at 0x06D6FA50>

Error running Example10

While running Example10.py, I get the following error (which only shows up if matplotlib is installed):
Traceback (most recent call last):
File "Example10.py", line 17, in <module>
f.field2Image2D(title='Thermal', fileName='thermal.png')
File "/usr/local/lib/python3.6/site-packages/mupif-1.1.6-py3.6.egg/mupif/Field.py", line 467, in field2Image2D
vertexValue.append(value)
UnboundLocalError: local variable 'value' referenced before assignment
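The UnboundLocalError means `value` is only assigned inside a conditional branch, so vertices that skip that branch reference a name that was never bound. A reconstructed sketch of the pattern and its fix (not the actual Field.py code; names are illustrative):

```python
# Skip the vertex (or assign a default first) instead of appending a `value`
# that may never have been bound for this iteration.
def collect_vertex_values(vertices, values_by_vertex):
    vertexValue = []
    for v in vertices:
        value = values_by_vertex.get(v)   # None when the vertex has no value
        if value is None:
            continue                      # avoids UnboundLocalError on append
        vertexValue.append(value)
    return vertexValue
```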

mupif.Mesh.UnstructuredMesh is not imported

In the dev branch, after importing mupif one cannot create an instance of mupif.Mesh.UnstructuredMesh.
Example:
import mupif
instance = mupif.Mesh.UnstructuredMesh()

Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'Mesh' has no attribute 'UnstructuredMesh'

Issue with fields read by makeFromVTK3

With makeFromVTK3, I manage to read a field from a vtk file. I can also access the field information using various field methods. However, when I try to evaluate the field, I get the following error message:
Traceback (most recent call last):
File "meshinput.py", line 190, in <module>
print ('point value ', i+1,vtkfield[0].evaluate(transform(point, M),eps=.000001).getValue())
File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\mupif\Field.py", line 223, in evaluate
return PhysicalQuantity(self._evaluate(positions, eps), self.unit)
File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\mupif\Field.py", line 236, in _evaluate
cells = self.mesh.giveCellLocalizer().giveItemsInBBox(BBox.BBox([c-eps for c in position], [c+eps for c in position]))
File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\mupif\Mesh.py", line 523, in giveCellLocalizer
bb = ccc.iter().next().getBBox() # use the first bbox as base
File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\mupif\Cell.py", line 161, in getBBox
c = self.mesh.getVertex(vertex).coords
File "C:\Users\rauchs\AppData\Roaming\Python\Python37\site-packages\mupif\Mesh.py", line 469, in getVertex
return self.vertexList[i]
TypeError: list indices must be integers or slices, not Vertex
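The TypeError suggests that the connectivity built by makeFromVTK3 holds Vertex objects where Mesh.getVertex expects an integer index. A reconstructed illustration (not mupif's exact code) of the mismatch and a defensive variant that tolerates both forms:

```python
class Vertex:
    def __init__(self, number, coords):
        self.number = number
        self.coords = coords

vertexList = [Vertex(0, (0.0, 0.0)), Vertex(1, (1.0, 0.0))]

def getVertex(i):
    # Accept either an integer index or a Vertex object, instead of
    # indexing vertexList with a Vertex and raising the TypeError above.
    if isinstance(i, Vertex):
        i = i.number
    return vertexList[i]
```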

logging output disappears

In the mupif-Deema version, scripts which run fine with the mupif-composelector version fail to put any logging output on the screen. Are there some settings which have to be fine-tuned?

Allow serializing PyroFile (deserialize as Pyro Proxy to the remote object)

As a result of our discussion (unifying access to local and remote files in models), the intention was to automatically register a PyroFile when serialized (if not registered yet) and pass a Proxy instead. The automatic registration at serialization time is not possible due to irmen/Pyro5#48; however, as long as the PyroFile is registered in (any) Daemon, it will be deserialized as a Proxy on the remote side, which is what we want. The content can then be obtained via mupif.pyroutils.downloadPyroFile.

Navigation within entity state: dot and array notation

This is to document the need for navigation within an entity state, which is a hierarchical data structure made of objects (with attributes), arrays of objects, etc.
The navigation is a sort of path to a particular attribute within the state that would allow accessing or updating the attribute's value.
Examples:

  • molecule[5].HOMO
  • molecule[6].atom[3].velocity
  • part[5].layer[2].grain[6].molecule[6].atom[3].velocity

The challenge is that part of the data is in the DMS directly, while part can be in a data container referenced from the DMS.

Approach: implement a hierarchical walk along the state, driven by a "query string".
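A minimal sketch of such a query-string walk; the grammar and the `resolve` helper are illustrative, not an existing mupif/DMS API:

```python
import re
from types import SimpleNamespace as NS

# Path elements are attr or attr[index], joined by dots,
# e.g. "part[5].layer[2].grain[6].molecule[6].atom[3].velocity".
_TOKEN = re.compile(r'^([A-Za-z_]\w*)(?:\[(\d+)\])?$')

def resolve(state, path):
    """Walk a path like 'molecule[6].atom[3].velocity' through the state."""
    obj = state
    for part in path.split('.'):
        m = _TOKEN.match(part)
        if m is None:
            raise ValueError('bad path element: %r' % part)
        obj = getattr(obj, m.group(1))      # attribute step
        if m.group(2) is not None:
            obj = obj[int(m.group(2))]      # array-index step
    return obj

# Tiny in-memory stand-in for an entity state:
state = NS(molecule=[NS(HOMO=-5.1, atom=[NS(velocity=(0.0, 1.0, 0.0))])])
```

The same walk could dispatch to a data container fetch when a path element crosses from the DMS into referenced external data.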

issue with getFieldURI in distributed setting

I manage to run my mupif script on one machine, using two server applications on a different machine, transferring the fields via the getField and setField methods. If I replace the getField method by getFieldURI, the execution hangs in the subsequently called setField method. However, if I run the control script and the two server applications on the same machine, it executes without any problem until the end.
The relevant part of the control script is:


print('$$$ Displacement output')
uri = app1.getFieldURI(FieldID.FID_Displacement,istep.getTime().inUnitsOf('s').getValue())
log.info("URI of problem 1's Displacement field is " + str(uri) )
res1 = Pyro4.Proxy(uri)
#res1 = app1.getField(FieldID.FID_Displacement,istep.getTime().inUnitsOf('s').getValue())
res1.toVTK2('testoutput-d',format='ascii')

print('******** SET FIELD TO SECOND MODEL ')
print('*************************************')
print('$$$ Field 1 time',res1.getTime())
print('$$$ set displacement field to model 2')
app2.setField(res1)


I noticed that I can access the field res1 in the control script, as in print('$$$ Field 1 time',res1.getTime()), but not in the subsequent setField executed in the API. In the API's setField method, the field is treated as a Pyro4.core.Proxy, and subsequent Field methods do not execute. As mentioned above, the problem does not show up if the control script runs on the same machine as the two application servers!
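A self-contained illustration of the likely mechanism (stand-in classes, not Pyro4 itself): getField ships a local copy of the field, while getFieldURI hands back a proxy whose every method call must reach the daemon owning the field. The control script can reach that daemon, but the second application server, inside whose setField the proxy is used, may not, which would explain the hang only in the distributed case.

```python
class LocalField:
    """Stand-in for a field copied over by getField: all calls are local."""
    def __init__(self, t):
        self._t = t
    def getTime(self):
        return self._t

class ProxyField:
    """Stand-in for Pyro4.Proxy: calls succeed only if the owning daemon
    is reachable from the host making the call."""
    def __init__(self, target, daemon_reachable):
        self._target = target
        self._reachable = daemon_reachable
    def getTime(self):
        if not self._reachable:
            raise ConnectionError('field daemon not reachable from this host')
        return self._target.getTime()

field = LocalField(0.5)
ok = ProxyField(field, daemon_reachable=True).getTime()   # control script side
try:
    ProxyField(field, daemon_reachable=False).getTime()   # inside remote setField
    failed = False
except ConnectionError:
    failed = True
```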

For information, here are the relevant parts of Config.py and the two server config files:


        self.nshost = '10.1.1.231'
        self.nsport = 9090

        self.server = '10.1.1.232'
        self.serverPort = 44382
        self.serverNathost = '127.0.0.1'
        self.serverNatport = 5555

        self.server2 = '10.1.1.232'
        self.serverPort2 = 44385
        self.serverNathost2 = self.server2
        self.serverNatport2 = 5558
        self.appName2 = 'MuPIFServer2'

    self.serverPort = self.serverPort+1
    if self.serverNatport != None:
        self.serverNatport+=1
    self.socketApps = self.socketApps+1
    self.portsForJobs=( 9200, 9249 )
    self.jobNatPorts = [None] if self.jobNatPorts[0]==None else list(range(6200, 6249)) 

    self.serverPort = self.serverPort+2
    if self.serverNatport != None:
        self.serverNatport+=2
    self.socketApps = self.socketApps+2
    self.portsForJobs=( 9250, 9300 )
    self.jobNatPorts = [None] if self.jobNatPorts[0]==None else list(range(6250, 6300)) 
