
mikeio's People

Contributors

clemenscremer, cmitr, daniel-caichac-dhi, dependabot[bot], ecomodeller, georgebv, havrevoll, hendrik1987, j08lue, jsmariegaard, laurafroelich, marcridler, maximlt, miab-dhi, mmfontana, mohm-dhi, otzi5300, q-r-b, watermain


mikeio's Issues

Ideas for dfsu and mesh

Dfsu and Mesh classes have very similar appearance and functionality. Should they inherit from the same base class, or should Dfsu simply inherit from Mesh, as in https://github.com/robjameswall/dhitools/blob/master/dhitools/dfsu.py ?

I think the Mesh/Dfsu class should have a number of properties like node_coordinates, element_coordinates, number_of_layers etc. Most of these should be set on read() (those that do not require any computation); others should be set lazily, only when the user requests them for the first time (e.g. element_coordinates can take time to compute on large meshes). When the user calls find_closest_element_index(), which needs element_coordinates, it will be built on the first run (and therefore take some time); subsequent runs will be fast because element_coordinates is already there.
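The lazy-property pattern described here can be sketched as follows (class and helper names are hypothetical, not the actual mikeio API):

```python
class MeshLike:
    """Sketch of a lazily computed property (hypothetical names)."""

    def __init__(self):
        self._element_coordinates = None  # not computed on read()

    @property
    def element_coordinates(self):
        # computed on first access (slow on large meshes), then cached
        if self._element_coordinates is None:
            self._element_coordinates = self._calc_element_coordinates()
        return self._element_coordinates

    def _calc_element_coordinates(self):
        # placeholder for the expensive centroid computation
        return [(0.5, 0.5), (1.5, 0.5)]

    def find_closest_element_index(self, x, y):
        # triggers the lazy build on the first call, fast afterwards
        coords = self.element_coordinates
        dists = [(cx - x) ** 2 + (cy - y) ** 2 for cx, cy in coords]
        return dists.index(min(dists))
```

The first call to find_closest_element_index() pays the build cost; later calls reuse the cached coordinates.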

Furthermore, when creating a new dfsu file from a Dfsu object which has already read a dfsu file (a typical workflow), I would like create() to use these properties instead of reading the source file again.

What do you think?

Fast writing of dfs0

I am currently creating some relatively big dfs0 files with more than 1e6 time steps. This takes a long time with the current implementation of Dfs0.create(). In the DHI matlab toolbox you would use the .NET function MatlabDfsUtil.DfsUtil.WriteDfs0DataDouble(). Could we not do the same in mikeio?

Generic processing of dfs files

Add functionality to process dfs irrespective of geometry

Examples:

  • Scale: multiply all data in file with a scale factor
  • AddConstant: add a constant to all data in the file
  • Sum: add two files together (assuming identical structure)
  • Diff: subtract two files from each other (assuming identical structure)

The requirement of identical structure could be relaxed if one of the files is static (i.e. only has a single time step).
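A minimal sketch of these operations, assuming the data is held as a list of numpy arrays, one per item (the function names are suggestions, not an existing mikeio API):

```python
import numpy as np

def scale(data, factor):
    """Scale: multiply all data in the file by a scale factor."""
    return [d * factor for d in data]

def add_constant(data, constant):
    """AddConstant: add a constant to all data in the file."""
    return [d + constant for d in data]

def diff(data_a, data_b):
    """Diff: subtract two files' data, assuming identical structure.

    If one file is static (a single time step), its arrays could be
    broadcast against the other file's time axis instead.
    """
    return [a - b for a, b in zip(data_a, data_b)]
```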

Release mikeio on conda-forge

Is your feature request related to a problem? Please describe.
The library is available only via PyPI and GitHub.

Describe the solution you'd like
Many people in the engineering community use the Anaconda distribution of Python. This is especially important because some libraries cannot be built on Windows via pip, and conda provides pre-compiled binaries of those (a recent example is numpy, though they have since added wheels on PyPI, so it's not an issue anymore).

Mesh().plot() for global mesh not working as expected

Describe the bug
After pip install of the latest development version on 2020-09-23 I experience problems with msh.plot() for a global mesh.

To Reproduce
The following code

from mikeio import Mesh
meshfilename = r"a_global_mesh_file.mesh"
msh = Mesh(meshfilename)
msh.plot()

throws the following error

Traceback (most recent call last):

  File "<ipython-input-2-1c07926ee84f>", line 1, in <module>
    msh.plot()

  File "C:\Users\hewr\Anaconda3\lib\site-packages\mikeio\dfsu.py", line 1227, in plot
    ax.plot(*domain.exterior.xy, color=out_col, linewidth=1.2)

AttributeError: 'MultiPolygon' object has no attribute 'exterior'

and produces the plot in below screenshot.

Expected behavior
A plot like the first one in https://github.com/DHI/mikeio/blob/master/notebooks/Mesh.ipynb is the expected result.

This line, on the other hand, does not throw an error, but the plot is still the same:

msh.plot(show_outline=False)

The following, however, works as expected

msh.plot_boundary_nodes()
mp = msh.to_shapely()

Screenshots
(screenshot attached)

System information:

  • Python version 3.7.6.final.0
  • MIKE version 2020

Reorder dimension

At the moment the last dimension is time, e.g. a dfs2 has the following dimensions (y, x, nt).

How about moving them around to have the time axis first?

Dfs0: (t)
Dfs1: (t, x)
Dfs2: (t, y, x)
Dfs3: (t, z, y, x)

Name

How about calling the package (and repo) dfspy? Then the scope is more apparent - DFS file IO.

There is only one existing repo with that name, it is impossible to mix up with ours, and, most importantly, the name is not taken on PyPI.

Fix stuff in notebooks

A couple of suggestions:

  • Fix capital letter dfsx-to-Dfsx in 01-TimeSeries, Aggregator, Distance-to-land, and SST notebooks
  • Remove untitled notebook
  • Move png to mikeio/images folder
  • Agree on naming convention - numbers first or not
  • Move some of the contents of aggregator to new test-file

Temporal subsetting without pocket calculator

Is your feature request related to a problem? Please describe.

>>> dfs = Dfsu("foo.dfsu")
>>> dfs
Dfsu2D
...
Time: 9 steps with dt=9000.0s
      1985-08-06 07:00:00 -- 1985-08-07 03:00:00
>>> ds = dfs.read(items=[0, 3], time_steps=list(range(2,8)))
>>> ds
<mikeio.DataSet>
Dimensions: (6, 884)
Time: 1985-08-06 12:00:00 - 1985-08-07 00:30:00

Describe the solution you'd like

>>> ds = dfs.read(items=[0, 3], time_range="1985-08-07 12:00:1985-08-08 00:00")
>>> ds
<mikeio.DataSet>
Dimensions: (6, 884)
Time: 1985-08-06 12:00:00 - 1985-08-07 00:30:00

Things to consider:
How best to handle the case where some of the selected time steps are not available.
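One way to map a requested time range onto integer time steps could look like this (a sketch using pandas; steps_in_range is a hypothetical helper, not part of mikeio):

```python
import pandas as pd

def steps_in_range(time_index, start, end):
    """Return the integer time steps whose timestamps fall in [start, end].

    An empty result signals that no time steps are inside the range,
    which read() could turn into an informative error.
    """
    time_index = pd.DatetimeIndex(time_index)
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    return [i for i, t in enumerate(time_index) if start <= t <= end]
```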

dfsu.read() parameter does not read all parameters

Dear mikeIO Team,
I have a script reading a MIKE 3D sigma file, which does not run anymore. Below is the bug description:

Describe the bug
I'm reading a dfsu file with:
data, time, names = dfs.read(dfsu_f)
If I print names I used to get the output ['Current Speed', 'Temperature']

Now I get the output [Z coordinate (meter), Current speed (meter per sec)]
The first result I got with mikeio version 0.3.0; the second (broken) one I get since version 0.4.1.
I stepped through the versions to check from which version the read function started to read the parameters differently.

I also had a look at the dfsu.read function, but as far as I understand the code, dfsu.read should still skip the first parameter (Dynamic Z) if the file is a 3D sigma / sigma-Z file, so I couldn't really find out what's wrong here.

To Reproduce
Steps to reproduce the behavior:
pip install mikeio==0.3.0
data, time, names = dfs.read(dfsu_f) #dfsu_f -> 3Dsigma model with parameter current speed and temp
print (names)

pip install mikeio==0.4.1
data, time, names = dfs.read(dfsu_f) #dfsu_f -> same model as before
print (names)

Expected behavior
dfsu.read() reads all parameters which are in the model


System information:

  • Python version = 3.6
  • mikeio version 0.3.0 (dfsu.read() intact), 0.4.1 (dfsu.read() broken)

Wrong dfs delta time unit

The docstring in the dfs2 write function says "The time step. Therefore dt of 5.5 with timeseries_unit of TimeStep.MINUTE", i.e. that dt is in minutes. But the file turns out to use seconds. Please change the docstring to reflect the choice of seconds, and test that datetimes are correctly written to the file.

Problem after updating the mikeio module

Describe the bug
I get this error after updating mikeio today

Installing collected packages: pycparser, pythonnet, numpy, six, python-dateutil, pytz, pandas, mikeio
Running setup.py install for mikeio ... done
Successfully installed mikeio-0.5.3 numpy-1.19.2 pandas-1.1.2 pycparser-2.20 python-dateutil-2.8.1 pythonnet-2.5.1 pytz-2020.1 six-1.15.0

(tutorial-env) (base) C:\Users\hels>python
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.

import mikeio
exit()

(tutorial-env) (base) C:\Users\hels>python
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.

from mikeio.eum import TimeStep, ItemInfo, EUMType, EUMUnit
Traceback (most recent call last):
File "", line 1, in
ImportError: cannot import name 'TimeStep' from 'mikeio.eum' (C:\Users\hels\tutorial-env\lib\site-packages\mikeio\eum.py)
from mikeio import Dfs0
from mikeio.eum import TimeStep, ItemInfo, EUMType, EUMUnit
Traceback (most recent call last):
File "", line 1, in
ImportError: cannot import name 'TimeStep' from 'mikeio.eum' (C:\Users\hels\tutorial-env\lib\site-packages\mikeio\eum.py)

To Reproduce
Steps to reproduce the behavior:

  1. install mikeio from repo
  2. from mikeio.eum import TimeStep, ItemInfo, EUMType, EUMUnit

Include code snippet

Expected behavior
As the Jupyter notebooks have not been changed, I assume this is still the correct way to import mikeio for dfs0 files.


System information:

  • Python 3.7
  • MIKE version 2019/2020

Error in creating dfs3 with proper grid spacing

While creating a dfs3 file from other formats, dx and dy cannot be specified, as an error appears. Because of this, the grid spacing is set to 1 degree by default. Why is dx=dx, dy=dy not working while creating the dfs3 file?

(error screenshot attached: mikeIO_error)

Return Figure and Axes for plot methods

Is your feature request related to a problem? Please describe.
I want to be able to edit plots generated by mikeio. I would also like to save figures via fig.savefig().

Describe the solution you'd like
Please return Figure and Axes (or list/tuple of Axes) objects when calling .plot* methods.

try-except-finally in all create/write functions

When I try to create a new dfs file and something goes wrong, the file is not closed. This means that a retry with the same filename fails because the file is locked. I think calling dfs.close() in a finally block would solve this issue.
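The proposal can be illustrated with a small sketch; FakeDfs is a stand-in for the real .NET file handle, used here only to show the try/finally structure:

```python
class FakeDfs:
    """Stand-in for the SDK file handle (illustration only)."""
    def __init__(self):
        self.closed = False

    def Close(self):
        self.closed = True

def create_file(dfs, write_data):
    # The finally block guarantees the handle is released even when
    # writing fails, so the file on disk is not left locked.
    try:
        write_data(dfs)
    finally:
        dfs.Close()
```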

dfs2 issues

write() should use Dfs2FileOpenEdit and the t,x,y order should be fixed. I will try to fix that now.

install from pypi?

Would it be useful to publish this package on PyPI, so that it can be installed from anywhere using pip? I have done this for ifm_contrib (which is basically the FEFLOW pendant to this project) and it makes things really easy. Happy to assist.

xarray as a dependency?

Hi!

I've been following this repo for quite some time already and am happily surprised to see how fast it's growing, it looks very promising and helpful!

I've seen that you've implemented a Dataset class to store multidimensional arrays. I just wanted to check if you're aware of xarray, which is a powerful and complete library that provides such an object. I'm also mentioning xarray here because I had a tiny project (not open-sourced yet) that allowed reading .dfs2 files as xarray Dataset objects; it worked quite well and made it easy to do things like this:

import tinydfs2package
xarray_result = tinydfs2package.read("output.dfs2")
max_values = xarray_result.max(dim="time")  # xarray allows to express your intent very clearly

As a bonus question, do you consider the API stable already? If not, do you have any ETA?

Thanks!

Add method to dfsu class to read mesh from dfsu object and return a mesh object

Hello there,

First off, great work on this python pkg. Very useful.

I had started developing something similar with a lot of the same functionality a while back, which worked with some data structures/proprietary code at work, and I have looked at this project a number of times for help. I will upload my package to GitHub in the near future once it's ready (minus the proprietary parts); hopefully you/others find it helpful.

Question....

The mesh class uses the DHI.Generic.MikeZero.DFS.mesh.MeshFile.ReadMesh function to open the mesh file, and extract the mesh information.

When a dfsu file is read, you can extract the same type of mesh information from it using the DHI.Generic.MikeZero.DFS.DfsFile.ReadStaticItem function. This is fine, but I'm wondering if there is a way (or if you could implement a method) that takes the dfsu file object (DHI.Generic.MikeZero.DFS.DfsFile) and passes it into some SDK function that returns a mesh object (or similar), so it has all the same properties as a DHI.Generic.MikeZero.DFS.mesh.MeshFile.

Or some other way of using the SDK to extract the mesh out of the dfsu file object without using static items?

The reason I ask is that the mesh handling of the dfsu class could then re-use the existing mesh class code to store its mesh.

Thank you!

Import error (typo?)

I get the following error when importing:

Traceback (most recent call last):

File "C:\Users\Spencer\Anaconda2\envs\py36\lib\site-packages\IPython\core\interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)

File "", line 1, in
import mikeio

File "C:\Users\Spencer\Anaconda2\envs\py36\lib\site-packages\mikeio\__init__.py", line 65
raise Exception(f"{ext} is an unsupported extension")
^
SyntaxError: invalid syntax

I believe that the 'f' in the exception is probably meant to be an 'r'

When I make this change, the library imports properly until the following:

FileNotFoundException Traceback (most recent call last)
in ()
----> 1 import mikeio

~\Anaconda2\envs\py36\lib\site-packages\mikeio\__init__.py in ()
7 sys.path.append(r"C:\Program Files (x86)\DHI\2019\bin\x64")
8 sys.path.append(r"C:\Program Files (x86)\DHI\2020\bin\x64")
----> 9 clr.AddReference("DHI.Generic.MikeZero.DFS")
10 clr.AddReference("DHI.Generic.MikeZero.EUM")
11 clr.AddReference("System")

FileNotFoundException: Unable to find assembly 'DHI.Generic.MikeZero.DFS'.
at Python.Runtime.CLRModule.AddReference(String name)

I am using MIKE 2017; would that be the issue? Is this package only compatible with MIKE 2019 and later? (I tried modifying the 2019 path append to 2017, and it did not work.)

Thanks

Creating .dfsu based on mesh and timeseries data

Hi

I am currently trying to invoke a method to specify timeseries data (e.g. rain) on specific regions, and then export it to a new .dfsu file. I tried to follow this doc from section "Store result in a new Dfsu file".
https://github.com/DHI/mikeio/blob/master/notebooks/Dfsu%20-%20Distance%20to%20land.ipynb

The difference between the above and my problem is that I have multiple time steps, whereas the above only has one.
In dfsu.py, lines 1616-1617 in the write function say:

data: list[np.array] or Dataset
            list of matrices, one for each item. Matrix dimension: time, x

So my data numpy array has the dimensions (361, 39324) (time, x), which corresponds to the description above, I believe? I only have one item and my time array is 361 long, so when I run the dfs.write command I get this error/warning printed: "too many indices for array: array is 1-dimensional, but 2 were indexed". I can't see what the issue is, but here is my code:

from mikeio import Dfsu, Mesh, Dfs0
from mikeio.eum import ItemInfo, EUMType
import geopandas as gpd
import numpy as np

MESH_FILE ='MESH_Dalum_RAW.mesh'
dfs0 = Dfs0('LHA_2_T100_CDS_kobling.dfs0')
ts = dfs0.read()
start_time = ts.time[0]
dt = (ts.time[1] - start_time).seconds
tsdf = ts.to_dataframe()
nts = len(ts.time)

mesh = Mesh(MESH_FILE)
mp = mesh.to_shapely()
ne = mesh.n_elements

mp_gpd = gpd.GeoSeries(list(mp))

rf = gpd.read_file('regnfordeling_dalum.shp')
rf.set_index('ID',inplace=True)

data = np.empty([nts,ne])
vec_index = []
for t in range(nts):
    mesh_data = np.zeros(ne)
    
    for i,p in enumerate(rf.index): # Loop number of polygons
        try:
            mp_index = vec_index[i]
        except: # Only does this once for each polygon
            intsects = mp_gpd.within(rf.loc[p].geometry)
            mp_ints = mp_gpd.loc[intsects]
            mp_index = list(mp_ints.index)
            vec_index.append(mp_index)
        mesh_data[mp_index] = ts[str(p)][t]
    
    data[t] = mesh_data
data[data==0] = np.nan

dfs = Dfsu(MESH_FILE)
outfile = 'test.dfsu'
items = [ItemInfo('Regnfordeling',itemtype=EUMType.Precipitation_Rate)]
dfs.write(outfile,data=data,start_time=start_time,dt=dt,items=items,title='Regnfordeling')

Note that I also tried putting data in a list - same result.

The files in the code are in the uploaded zip-file.
Dfsu_issue.zip

Select/filter Dataset on time and/or items

If I have a Dataset from a dfsu file, I can very conveniently filter on element_ids and save to new file. That's great!

selds = ds.isel(idx=elem_ids)
dfs.write(newfile, selds, element_ids=elem_ids)

I would like to similarly filter on timesteps and/or items as a method on the Dataset class returning a new Dataset. Something like this:

 selds = ds.select_steps(range(4,19)) 

or

 selds = ds.select_steps('2018-1-1','2018-2-1')

or similar with a date range or something. And likewise on items, either by name or number. These selections could then be chained:

selds = ds.select_items('water level').select_steps('2018-1-1','2018-2-1').isel(element_ids)

Or in a single method like this:

selds = ds.select(items=['water level'], time_steps=['2018-1-1','2018-2-1'], element_ids=element_ids)
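A toy sketch of how the chained selection could behave (MiniDataset is a stand-in for illustration, not mikeio's actual Dataset class):

```python
import numpy as np

class MiniDataset:
    """Toy stand-in for mikeio's Dataset, illustrating the proposed API."""

    def __init__(self, data, time, items):
        # data: list of (nt, nx) arrays, one per item
        self.data, self.time, self.items = data, time, items

    def select_steps(self, steps):
        # subset along the time axis, returning a new dataset
        steps = list(steps)
        return MiniDataset([d[steps] for d in self.data],
                           [self.time[i] for i in steps], self.items)

    def select_items(self, names):
        # subset the item list by name, returning a new dataset
        idx = [self.items.index(n) for n in names]
        return MiniDataset([self.data[i] for i in idx], self.time,
                           [self.items[i] for i in idx])
```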

What do you think?

Read more res1d quantity types

Hi,
any chance that new DATA_TYPES_HANDLED_IN_QUERIES will be added soon?
In order to read result information of structures, I am missing the following:

  • Discharge Combined Structure
  • Gate level: (Underflow Gate)

Also, there is a typo in the examples (Read Res1D file, Return Pandas DataFrame):
ts = r1d.read('res1dfile.res1d', queries)
vs.
ts = res1d.read...

Thanks! Anna

Support for PFS files

Read a pfs file into a suitable Python data structure. A dictionary is simple, but a bit clumsy to use for deeply nested structures. Any suggestions?

from mikeio import Pfs

d = Pfs("foo.m21fm")

sources = d["FemEngineHD"]["HYDRODYNAMIC_MODULE"]["SOURCES"]

n_sources = sources["number_of_sources"]

for i in range(n_sources):
  key = f"SOURCE_{i+1}"
  source = sources[key]
  name = source["name"]

Or perhaps:

m = Pfs("foo.m21fm")
m["FemEngineHD/HYDRODYNAMIC_MODULE/SOURCES"]

Or:

p = Pfs("foo.m21fm")
p.FemEngineHD.HYDRODYNAMIC_MODULE.SOURCES

One additional idea is to strip out unused modules during read

p = Pfs("foo.m21fm", strip=True)

p['PARTICLE_TRACKING_MODULE']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'PARTICLE_TRACKING_MODULE'
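One lightweight way to support both the dictionary syntax and the attribute syntax is a dict subclass with __getattr__; this is only a sketch of the idea, not an actual mikeio class:

```python
class Section(dict):
    """Nested dict with attribute access.

    Supports both d["FemEngineHD"]["HYDRODYNAMIC_MODULE"] and
    p.FemEngineHD.HYDRODYNAMIC_MODULE style lookups.
    """

    def __getattr__(self, name):
        try:
            value = self[name]
        except KeyError as exc:
            # missing sections raise AttributeError, as attribute
            # access semantics would suggest
            raise AttributeError(name) from exc
        # wrap nested dicts so attribute access chains all the way down
        return Section(value) if isinstance(value, dict) else value
```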

Optional dependencies

It doesn't seem right to include matplotlib and its dependencies as requirements in this lightweight file read/write package. Any idea how to manage optional dependencies? The same applies to xarray in issue #36. Also, jupyter is needed to view the notebooks in the repo but should not be installed as part of the default package.
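One common approach is setuptools extras, so that plotting support is opt-in via `pip install mikeio[plotting]` while the default install stays lean (the extra names below are only suggestions):

```python
# Sketch of an extras_require mapping for setup.py.
# Core install: file I/O only. Extras pull in the optional stacks.
extras_require = {
    "plotting": ["matplotlib"],
    "xarray": ["xarray"],
    "dev": ["pytest", "jupyter"],
}
```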

Write Dfs0 with data of type float32 fails

Describe the bug
Writing a Dfs0 with data of type float32 fails with a TypeError from WriteDfs0DataDouble.

To Reproduce

❯ python
Python 3.8.3 (default, May 19 2020, 06:50:17) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> from mikeio import Dfs0
>>> float_array = np.array([0.0, 1.0],dtype=np.float32)
>>> dfs = Dfs0()
>>> dfs.write("test.dfs0", [float_array])
C:\Users\JAN\code\mikeio\mikeio\dfs0.py:325: UserWarning: No items info supplied. Using Item 1, 2, 3,...
  warnings.warn("No items info supplied. Using Item 1, 2, 3,...")
C:\Users\JAN\code\mikeio\mikeio\dfs0.py:338: UserWarning: No start time supplied. Using current time: 2020-09-24 11:26:36.187013 as start time.
  warnings.warn(
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\JAN\code\mikeio\mikeio\dfs0.py", line 358, in write
    Dfs0Util.WriteDfs0DataDouble(dfs, t_seconds, data_to_write)
TypeError: No method matches given arguments for WriteDfs0DataDouble: (<class 'DHI.Generic.MikeZero.DFS.DfsFile'>, <class 'list'>, <class 'System.Single[,]'>)

Expected behavior

The write method should handle the conversion to double precision; it should not be necessary to convert manually like this:

>>> dfs.write("test_double.dfs0", [float_array.astype(np.float64)])
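A sketch of the conversion the write method could perform internally before calling the .NET double-precision writer (to_double is a hypothetical helper, not part of mikeio):

```python
import numpy as np

def to_double(data):
    """Coerce each item array to float64.

    np.asarray is a no-op for arrays that are already float64, so
    double-precision input is passed through without copying.
    """
    return [np.asarray(d, dtype=np.float64) for d in data]
```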

System information:

  • Python 3.8.3
  • MIKE 2020

Any plans for a Linux enabled version?

The current version of __init__.py is loading DLLs. If the equivalent functionality can be replaced with shared libraries, perhaps a Linux or macOS version could be achieved.

It is possible that the Linux version could be made available sooner, as there are already binaries for the cloud-enabled Linux version of MIKE Zero.

multipolygon to geopandas dataframe to shapefile

Is your feature request related to a problem? Please describe.
During today's blue cafe I saw there is already a feature to import the mesh (maybe also dfsu?) into a multipolygon. This multipolygon can easily be saved as a shapefile (one of the questions was how). I think this could be a useful feature.

Describe the solution you'd like
I have attached some code I used with py-dhi-dfs a while ago; therefore, the code is not compatible with the current mikeio. However, I think the principle can easily be copied. This is also something I would happily do myself, but not this week.

A list of polygons can be converted into a geopandas dataframe, see the code below. After that you can add data to the dataframe and save it, see the code below. The list of polygons can be constructed from the multipolygon; see this question on Stack Overflow.

Describe alternatives you've considered
None. For me geopandas is the way to make shapefiles

Additional context
example code is attached (not 100% compatible with mikeio)
example_dfsu_to_shp.txt

dfs0 read units

I frequently use the dfs0 functions to read in dfs0 files for data analysis. Sometimes this requires comparing with data in different units. Since dfs0 files used by DHI include unit information, I think it would be helpful to either extract these with dfs0.read() or to include that option in a separate function.

It could be something around lines 36-38 in dfs0.py along the lines of:

units = []
names = []
for i in range(n_items):
    names.append(dfs.ItemInfo[i].Name)
    units.append(dfs.ItemInfo[i].Quantity.UnitAbbreviation)

Then units would be passed into and out of read() and into (but not out of?) read_to_pandas()

I am happy to develop a pull request with changes along these lines unless there is a reason that this information is not currently extracted.

dfsu.read does not read wave files

Describe the bug
I'm trying to read a MIKE SW output file, but I get the error message "ValueError: 5100 is not a valid EUMUnit"

To Reproduce
Steps to reproduce the behavior:
from mikeio import Dfsu
dfsu = Dfsu()
file_test = r'C:\Program Files (x86)\DHI\2020\MIKE Zero\Examples\MIKE_21\FlowModel_FM\ST\Torsminde\Data\Waves\Wave_Sim.dfsu'
res = dfsu.read(file_test)

Expected behavior
dfsu.read returns data in the file

System information:

  • Python version: 3.6.10
  • MIKE version: 2020
  • mikeio version: 0.4.2

dfs0 write path fail

In the dfs0 write function I get an error when checking whether the file exists, in the line:
if not path.exists(filename)

Adding "os." seems to fix the issue:
if not os.path.exists(filename)

Delete Value to NaN conversion not working

I found that the following does not work:

    data[data == -1.0000000180025095e-35] = np.nan
    data[data == -1.0000000031710769e-30] = np.nan
    data[data == dfs.FileInfo.DeleteValueFloat] = np.nan
    data[data == dfs.FileInfo.DeleteValueDouble] = np.nan

maybe we should do this instead:

    tol = 1e-3*abs(dfs.FileInfo.DeleteValueFloat)
    data[abs(data-dfs.FileInfo.DeleteValueFloat)<tol] = np.nan

And it's not very pretty to test for 4 different delete values. Hmmm. But it may still be a good idea.
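An alternative along the same lines, using numpy's isclose with the absolute tolerance disabled so that ordinary values near zero are not accidentally masked (mask_delete_values is a hypothetical helper, and the choice of rtol is an assumption):

```python
import numpy as np

def mask_delete_values(data, delete_values, rtol=1e-3):
    """Replace known delete values with NaN using a relative tolerance,
    rather than exact equality on float32-rounded constants.

    atol=0.0 is important: with the default atol, every value near
    zero would be "close" to a tiny delete value like -1e-35.
    """
    data = np.asarray(data, dtype=np.float64).copy()
    for dv in delete_values:
        data[np.isclose(data, dv, rtol=rtol, atol=0.0)] = np.nan
    return data
```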

Contributor guidelines

There is a Visual Studio project, but it has outdated file references and doesn't seem to be able to run tests.
How do you set up the code in Visual Studio to also include the tests? Guidelines for setting up the required Python environment are also missing, e.g. a dev-requirements.txt.
Please add a few lines to the README about setting up this environment for contributing, if you already know how.

Support for Python 3.8

The README for this project states that pythonnet doesn't support Python 3.8. However, the latest version of pythonnet (2.5) does support Python 3.8. Could mikeio be updated to depend on pythonnet 2.5, please?
