
asammdf's Introduction

asammdf is a fast parser and editor for ASAM (Association for Standardization of Automation and Measuring Systems) MDF (Measurement Data Format) files.

asammdf supports MDF versions 2 (.dat), 3 (.mdf) and 4 (.mf4).

asammdf works on Python >= 3.8

Status

Continuous integration · Coveralls coverage · Codacy · ReadTheDocs documentation · PyPI version · conda-forge version

Project goals

The main goals for this library are:

  • to be faster than the other Python-based MDF libraries
  • to have a clean and easy to understand code base
  • to have minimal third-party dependencies

Features

  • create new mdf files from scratch (see the sketch after this list)

  • append new channels

  • read unsorted MDF v3 and v4 files

  • read CAN and LIN bus logging files

  • extract CAN and LIN signals from anonymous bus logging measurements

  • filter a subset of channels from original mdf file

  • cut measurement to specified time interval

  • convert to different mdf version

  • export to HDF5, Matlab (v7.3), CSV and parquet

  • merge multiple files sharing the same internal structure

  • read and save mdf version 4.10 files containing zipped data blocks

  • space optimizations for saved files (no duplicated blocks)

  • split large data blocks (configurable size) for mdf version 4

  • full support (read, append, save) for the following map types (multidimensional array channels):

    • mdf version 3 channels with CDBLOCK

    • mdf version 4 structure channel composition

    • mdf version 4 channel arrays with CNTemplate storage and one of the array types:

      • 0 - array
      • 1 - scaling axis
      • 2 - look-up
  • add and extract attachments for mdf version 4

  • handle large files (for example merging two files, each with 14000 channels and 5GB size, on a Raspberry Pi)

  • extract channel data, master channel and extra channel information as Signal objects for unified operations with v3 and v4 files

  • time domain operation using the Signal class

    • Pandas data frames are good if all the channels have the same time base
    • a measurement will usually have channels from different sources at different rates
    • the Signal class facilitates operations with such channels
  • graphical interface to visualize channels and perform operations with the files
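
A minimal sketch of the create/append/save/export workflow from the list above, assuming hypothetical channel names and output paths; the Signal, append, save and export calls follow the patterns shown in the usage snippets further down this page:

import numpy as np
from asammdf import MDF, Signal

# build a channel from scratch (hypothetical name, unit and data)
timestamps = np.arange(0.0, 10.0, 0.01)
speed = Signal(
    samples=50 * np.sin(timestamps) + 100,
    timestamps=timestamps,
    name="EngineSpeed",
    unit="rpm",
)

# create a new MDF 4.10 file, append the channel and save it
with MDF(version="4.10") as mdf:
    mdf.append([speed])
    mdf.save("new_measurement.mf4", overwrite=True)

    # export the same data to HDF5 (requires the optional h5py dependency)
    mdf.export(fmt="hdf5", filename="new_measurement.hdf5")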

Major features not implemented (yet)

  • for version 3

    • functionality related to sample reduction blocks: the sample reduction blocks are simply ignored
  • for version 4

    • only experimental support for MDF v4.20 column oriented storage
    • functionality related to sample reduction blocks: the sample reduction blocks are simply ignored
    • handling of channel hierarchy: channel hierarchy is ignored
    • full handling of bus logging measurements: currently only CAN and LIN bus logging are implemented with the ability to get signals defined in the attached CAN/LIN database (.arxml or .dbc). Signals can also be extracted from an anonymous bus logging measurement by providing a CAN or LIN database (.dbc or .arxml)
    • handling of unfinished measurements (mdf 4): finalization is attempted when the file is loaded, however not all of the finalization steps are supported
    • full support for the remaining mdf 4 channel array types
    • xml schema for MDBLOCK: most metadata stored in the comment blocks will not be available
    • full handling of event blocks: events are transferred to the new files (in case of calling methods that return new MDF objects) but no new events can be created
    • channels with default X axis: the default X axis is ignored and the channel group's master channel is used
    • attachment encryption/decryption using user provided encryption/decryption functions; this is not part of the MDF v4 spec and is only supported by this library

Usage

from asammdf import MDF

mdf = MDF('sample.mdf')
speed = mdf.get('WheelSpeed')
speed.plot()

important_signals = ['WheelSpeed', 'VehicleSpeed', 'VehicleAcceleration']
# get short measurement with a subset of channels from 10s to 12s
short = mdf.filter(important_signals).cut(start=10, stop=12)

# convert to version 4.10 and save to disk
short.convert('4.10').save('important signals.mf4')

# plot some channels from a huge file
efficient = MDF('huge.mf4')
for signal in efficient.select(['Sensor1', 'Voltage3']):
    signal.plot()

Check the examples folder for extended usage demos, or the documentation: http://asammdf.readthedocs.io/en/master/examples.html

Documentation

http://asammdf.readthedocs.io/en/master

And a nicely written tutorial on the CSS Electronics site: https://canlogger.csselectronics.com/canedge-getting-started/ce3/log-file-tools/asammdf-gui/

Contributing & Support

Please have a look over the contributing guidelines

If you enjoy this library please consider making a donation to the numpy project or to danielhrisca using Liberapay.

Contributors

Thanks to all who contributed with commits to asammdf:

Installation

asammdf is available on PyPI and conda-forge:

pip install asammdf
# for the GUI 
pip install asammdf[gui]
# or for anaconda
conda install -c conda-forge asammdf

In case a wheel is not available for your OS/Python version and you lack the proper compiler setup to build the C extension code, you can simply copy the package code to your site-packages folder. In this way the Python fallback code will be used instead of the compiled C extension.

Dependencies

asammdf uses the following libraries

  • numpy : the heart that makes all tick
  • numexpr : for algebraic and rational channel conversions
  • wheel : for installation in virtual environments
  • pandas : for DataFrame export
  • canmatrix : to handle CAN/LIN bus logging measurements
  • natsort
  • lxml : for canmatrix arxml support
  • lz4 : to speed up the disk IO performance
  • python-dateutil : measurement start time handling

optional dependencies needed for exports

  • h5py : for HDF5 export
  • hdf5storage : for Matlab v7.3 .mat export
  • fastparquet : for parquet export
  • scipy: for Matlab v4 and v5 .mat export

other optional dependencies

  • PySide6 : for GUI tool
  • pyqtgraph : for GUI tool and Signal plotting
  • matplotlib : as fallback for Signal plotting
  • faust-cchardet : to detect non-standard Unicode encodings
  • chardet : to detect non-standard Unicode encodings
  • pyqtlet2 : for the GPS window
  • isal : for faster zlib compression/decompression
  • fsspec : access files stored in the cloud

Benchmarks

http://asammdf.readthedocs.io/en/master/benchmarks.html

asammdf's People

Contributors

car-efficiency, danielhrisca, david32, driftregion, eblis, eisdrache, fillbk, fkohlgrueber, freakatzz, higgx2, isuruf, jamezo97, jogo-, juliengrv, jvanthoog, kopytjuk, matinf, morbult, nos86, simc, stanifrolov, tobiasandorfer, tov101, travis2319, venden, victorjoh, vorobiovm, woodpy, yahym, zariiii9003


asammdf's Issues

Add Contributing Guidelines

From: https://github.com/blog/1184-contributing-guidelines

Oftentimes open source projects place a CONTRIBUTING file in the root directory. It explains how a participant should do things like format code, test fixes, and submit patches. Here is a fine example from puppet and another one from factory_girl_rails. From a maintainer's point of view, the document succinctly communicates how best to collaborate. And for a contributor, one quick check of this file verifies their submission follows the maintainer's guidelines.

Development seems to be hitting a stride, a lot of the 'boring tedious' work is getting automated. Having some contributing guidelines would help new developers get started.

conversion table reading as dict

Python version

('python=3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) '
'[MSC v.1900 64 bit (AMD64)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.14.0'
'asammdf=3.3.1'

Code

Code snippet

none

Traceback

none

Description

What is the best way to get the conversion tables tied to channels as dict?
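
A hedged sketch (not an official answer) of one way to look at a channel's conversion, assuming a hypothetical file and channel name; as the Signal reprs elsewhere on this page show, the conversion attached to a channel is exposed through the Signal's conversion attribute:

from asammdf import MDF

mdf = MDF("measurement.dat")            # hypothetical file
sig = mdf.get("SomeChannel", raw=True)  # raw=True keeps the unconverted samples
conv = sig.conversion                   # conversion attached to the channel (may be None)
print(conv)                             # the conversion parameters can be inspected from here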

iter_groups() reports an error

Python version

'python=3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 '
'64 bit (AMD64)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.14.0'
'asammdf=3.2.1'

Code

Code snippet

from asammdf import MDF
obj = MDF("/filename.dat")
it = obj.iter_groups()
next(it)

Traceback

Traceback (most recent call last):

File "", line 1, in
next(it)

File "C:\Users\NPERVEZ2\AppData\Local\Continuum\anaconda3\lib\site-packages\asammdf\mdf.py", line 1222, in iter_groups
master.append(self.get_master(i, data=data_bytes))

File "C:\Users\NPERVEZ2\AppData\Local\Continuum\anaconda3\lib\site-packages\asammdf\mdf_v3.py", line 3123, in get_master
data_bytes, offset = fragment

ValueError: not enough values to unpack (expected 2, got 1)

Description

group iterator reports an error.

Is there a way to extract a subset of signals of a group as a pandas DataFrame without interpolation?

UnboundLocalError: local variable 'vals' referenced before assignment (in append method of mdf4)

Hi Daniel,

vals is not assigned, what is it supposed to be here?

Traceback (most recent call last):
  File "test.py", line 44, in <module>
    main()
  File "test.py", line 32, in main
    mdf_reduced = mdf.filter(channels)
  File "D:\Workspace\asammdf\asammdf\mdf.py", line 543, in filter
    common_timebase=True,
  File "D:\Workspace\asammdf\asammdf\mdf4.py", line 2142, in append
    types.append(vals.dtype)
UnboundLocalError: local variable 'vals' referenced before assignment

mdf.get(): TypeError: 'float' object cannot be interpreted as an integer

Hey,

just tried to use your library. Unfortunately I get an error loading data from a measurement which was recorded with ETAS INCA 7.2. It is an MDF version 3 file.

from asammdf.asammdf import MDF, Signal, configure
import numpy as np
import os


DIR = r'...\Desktop'  
FILE = r'some_measurement.dat'

mdf_file = MDF(os.path.join(DIR, FILE))

mdf_file.get('some_signal')

This produces the following error.


TypeError Traceback (most recent call last)
in ()
----> 1 mdf_file.get('some_signal')

...\asammdf\asammdf\mdf3.py in get(self, name, group, index, raster, samples_only, data)
2184 if 'record' not in grp:
2185 if dtypes.itemsize:
-> 2186 record = fromstring(data, dtype=dtypes)
2187 else:
2188 record = None

...\Continuum\Anaconda3_4.4\lib\site-packages\numpy\core\records.py in fromstring(datastring, dtype, shape, offset, formats, names, titles, aligned, byteorder)
707 shape = (len(datastring) - offset) / itemsize
708
--> 709 _array = recarray(shape, descr, buf=datastring, offset=offset)
710 return _array
711

...\Continuum\Anaconda3_4.4\lib\site-packages\numpy\core\records.py in __new__(subtype, shape, dtype, buf, offset, strides, formats, names, titles, byteorder, aligned, order)
421 self = ndarray.__new__(subtype, shape, (record, descr),
422 buffer=buf, offset=offset,
--> 423 strides=strides, order=order)
424 return self
425

TypeError: 'float' object cannot be interpreted as an integer

Due to legal restrictions I cannot post more details. However, I can send you the printout of gr['parents'] and gr['types'] via mail.

Error selecting a signal from an MDF4

Python version

3.6.0

Platform information

Windows 10 Enterprise

asammdf version

2.8.1

Description

I tried to use the following code with an MDF4 file I can't share

from asammdf import MDF, Signal, configure
import numpy as np
configure(integer_compacting=True)
configure(split_data_blocks=True, split_threshold=10*1024)
with MDF(r"file.mdf") as mdf:
  s = mdf.get("CHANNEL_NAME")

This returned the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\H215374\AppData\Local\Programs\Python\Python36-32\lib\site-packages\asammdf\mdf.py", line 832, in select
    signal = self.get(group=group, index=index, data=data)
  File "C:\Users\H215374\AppData\Local\Programs\Python\Python36-32\lib\site-packages\asammdf\mdf4.py", line 3574, in get
    pos_byte, pos_offset = divmod(ch_invalidation_pos)
TypeError: divmod expected 2 arguments, got 1

Error seems pretty straightforward - couldn't find a version of Python whose docs allow one argument for divmod(). So, I wonder if this line is covered by unit tests.

Thanks, and thank you for your hard work on this library!

How can I read Signal values

I am trying to read the values of all signals. Samples, Timestamps etc. How can I do it?

from asammdf import MDF, Signal

mdf = MDF("./FILE_NAME.MF4")
# print(mdf.info())

all_channels = mdf.channels_db

for ch in all_channels:
    # print(ch)
    each_channels = mdf.get(ch)
    print(each_channels)
    # for sigs in each_channels:
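
A hedged sketch of reading samples and timestamps, assuming every channel name in the file is unique; Signal objects expose samples and timestamps arrays, as the Signal reprs elsewhere on this page show:

from asammdf import MDF

mdf = MDF("./FILE_NAME.MF4")

# channels_db maps channel names to their (group, index) occurrences
for name in list(mdf.channels_db):
    sig = mdf.get(name)            # returns a Signal object
    print(sig.name, sig.unit)
    print(sig.timestamps[:5])      # numpy array with the time stamps
    print(sig.samples[:5])         # numpy array with the values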

Add a filter option

I think it would be a nice feature!

Why?

Because sometimes the file is too big to be completely loaded, so you need to use load_measured_data=False. But then you lose all the other nice methods and have to make lots of calls to the get method.

How?

We add a filter parameter to the __init__ method, which would take a dictionary object like:

  • {'channels': ['<name_1>', <...>, '<name_N>']} -> only read these channels
  • {'channels': [(<idx_group_1>, <idx_channel_1>), <...>, (<idx_group_N>, <idx_channel_N>)]} -> only read these channels
  • {'channel_groups': ['<idx_group_1>', <...>, '<idx_group_N>']} -> only read all channels of these groups

IMO, this feature would be very useful and could even have more options in the future.

Please let me know what you think about it. If you don't have enough time to implement it, I could also do the changes myself and make a PR. :)
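
For reference, a hedged sketch of what the current API (as shown in the usage example earlier on this page) already covers: the filter method takes a list of channel names and, as an assumption about recent versions, also (name, group index, channel index) tuples:

from asammdf import MDF

mdf = MDF("big_file.mf4")  # hypothetical file

# keep only the listed channels; returns a new, smaller MDF object
small = mdf.filter(["name_1", "name_2"])

# addressing channels by position is also possible in recent versions
# (assumption): (name, group index, channel index), with None matching any name
small2 = mdf.filter([("name_3", 0, 3), (None, 2, 7)])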

Corrupt output file for many signals

Hi,
I just upgraded from version 2.8.0 to 3.2.1. I am now getting a corrupt output mf4 if there are too many signals in the file. I attach a minimal example where this happens. I am using Python 2.72.

When I have 75 signals in this example everything works fine; from 76 on, all channels including the time channel are corrupted and approximately half of the channel data is 0. I am fairly new to this library, so I could not really see where the issue is coming from...


import asammdf
import numpy as np

timestamps2 = np.arange(30.02,305.47,0.02)
signals = []
for i in range(76):
    signals.append( asammdf.Signal(samples=np.ones(len(timestamps2)), timestamps=timestamps2, name="number %d"%i) )
    
mdf4 = asammdf.MDF(version='4.10',memory="minimal")
mdf4.append(signals, 'Created by Python')
mdf4.save('weird.mf4', overwrite=True)

new = asammdf.MDF("weird.mf4")
new.get("number 1").timestamps

Greetings, Julius

Code quality improvement.

I'm starting to do some static analysis and linting at work and needed a demo project so work can't claim it's anything I worked on (and let the commit record show I've not done anything for asammdf).

This is from Jenkins on a DigitalOcean instance; I ran my fork of asammdf through flake8 (with a lot of customization).

Using the Jenkins Warnings plugin.

Flake8 Warnings using a custom parser regex and mapping script.

The former allows more flexibility in reporting, the latter requires less setup. Both are using the same flake8 settings.

Build > Execute Shell:

. /opt/asammdf_venv/bin/activate
python --version
flake8 --version
flake8  --exit-zero --format=default --output-file=flake8.log
flake8  --exit-zero --format=pylint  --output-file=flake8_pylint.log

Unexpected output on console (neither error nor warning) while reading MF4 file.

Python version

Please run the following snippet and write the output here

import platform
import sys
from pprint import pprint

pprint("python=" + sys.version)
pprint("os=" + platform.platform())

try:
    import numpy
    pprint("numpy=" + numpy.__version__)
except ImportError:
    pass

try:
    import asammdf
    pprint("asammdf=" + asammdf.__version__)
except ImportError:
    pass

('python=3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit '
'(AMD64)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.14.3'
'asammdf=3.5.0'

Code

MDF version

please write here the file version (you can run print(MDF(file).version))

print(MDF4(mdf_file).version)
Expected CG, SD, DL, DZ or CN block at 0x602b008 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b068 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b0c8 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b128 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b188 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b1e8 but found id="b'##AT'"
4.00

Code snippet

please write here the code snippet that triggers the error

asam_mdfdata = MDF4(mdf_file)

Traceback

please write here the error traceback
There is no error traceback.

Description

Please describe the issue here.
After reading the file, the following gets printed in the console:

asam_mdfdata = MDF4(mdf_file)
Expected CG, SD, DL, DZ or CN block at 0x602b008 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b068 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b0c8 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b128 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b188 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b1e8 but found id="b'##AT'"

Even if I use the asammdf.MDF module to read the MF4 file, I still get the same output on the console:

test = asammdf.MDF(name = mdf_file)
Expected CG, SD, DL, DZ or CN block at 0x602b008 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b068 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b0c8 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b128 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b188 but found id="b'##AT'"
Expected CG, SD, DL, DZ or CN block at 0x602b1e8 but found id="b'##AT'"

loading an mf4 file gives divide by zero

I'm getting a divide by zero loading this file. I can use turbo lab's mf4 tool and it loads w/o issues; however, it's incredibly complex to use. Here's the output, apparently size = 0.

C:\Python27\python.exe C:/Users/xzcjml/PycharmProjects/untitled/main.py
C:\Users\xzcjml\Documents\dSPACE\ControlDeskNG\5.3\Project_001\Experiment_001\Measurement Data\rec1_011.mf4
4.10
Traceback (most recent call last):
File "C:/Users/xzcjml/PycharmProjects/untitled/main.py", line 14, in
for g in mdf.iter_groups():
File "C:\Python27\lib\site-packages\asammdf\mdf.py", line 2183, in iter_groups
master.append(self.get_master(i, data=fragment))
File "C:\Python27\lib\site-packages\asammdf\mdf_v4.py", line 4763, in get_master
parents, dtypes = self._prepare_record(group)
File "C:\Python27\lib\site-packages\asammdf\mdf_v4.py", line 1668, in _prepare_record
if ca_block['byte_offset_base'] // size > 1 and
ZeroDivisionError: integer division or modulo by zero

Process finished with exit code 1

installation conflict in conda with py-xgboost

$ conda install -c conda-forge asammdf
Fetching package metadata .............
Solving package specifications: .

UnsatisfiableError: The following specifications were found to be in conflict:

  • asammdf
  • py-xgboost
    Use "conda info " to see the dependencies for each package.

$ conda info py-xgboost
Fetching package metadata ...........

py-xgboost 0.60 py27np112h0aae3cd_0

file name : py-xgboost-0.60-py27np112h0aae3cd_0.tar.bz2
name : py-xgboost
version : 0.60
build string: py27np112h0aae3cd_0
build number: 0
channel : defaults
size : 51 KB
arch : x86_64
date : 2017-03-27
md5 : 78cf04ef47f34508262d0eb196d7ff3c
noarch : None
platform : linux
url : https://repo.continuum.io/pkgs/free/linux-64/py-xgboost-0.60-py27np112h0aae3cd_0.tar.bz2
dependencies:
libxgboost 0.60 hdfa14d8_0
numpy 1.12.1
python >=2.7,<2.8
scikit-learn
scipy

py-xgboost 0.60 py35np112h32b46d2_0

file name : py-xgboost-0.60-py35np112h32b46d2_0.tar.bz2
name : py-xgboost
version : 0.60
build string: py35np112h32b46d2_0
build number: 0
channel : defaults
size : 52 KB
arch : x86_64
date : 2017-03-27
md5 : 0342614790aa32e1b6619c9d9e785992
noarch : None
platform : linux
url : https://repo.continuum.io/pkgs/free/linux-64/py-xgboost-0.60-py35np112h32b46d2_0.tar.bz2
dependencies:
libxgboost 0.60 hdfa14d8_0
numpy 1.12.1
python >=3.5,<3.6
scikit-learn
scipy

py-xgboost 0.60 py36np112h54d5342_0

file name : py-xgboost-0.60-py36np112h54d5342_0.tar.bz2
name : py-xgboost
version : 0.60
build string: py36np112h54d5342_0
build number: 0
channel : defaults
size : 52 KB
arch : x86_64
date : 2017-03-27
md5 : 5989503e497979acea3f51f9f18e7afb
noarch : None
platform : linux
url : https://repo.continuum.io/pkgs/free/linux-64/py-xgboost-0.60-py36np112h54d5342_0.tar.bz2
dependencies:
libxgboost 0.60 hdfa14d8_0
numpy 1.12.1
python >=3.6,<3.7
scikit-learn
scipy

xlsxwriter not found; export to Excel unavailable

Python version

Please write here the output of printing sys.version 3.6.3

Platform information

Please write here the output of printing platform.platform()

asammdf version

Please write here the output of printing asammdf.__version__: 3.1.0

Code

Code snippet

please write here the code snippet that triggers the error

Traceback

please write here the error traceback

Description

Please describe the issue here.

Cannot get Signal when load_measured_data=False

When trying to get a signal of an MDF4 file read with the option load_measured_data=False, the returned signal is a different one and its samples and timestamps are empty.

>>> import asammdf
>>> mdf = asammdf.MDF('test.MF4', load_measured_data=False)
>>> signal = mdf.get('signal_XXXX')
>>> print(signal)
Signal { name="signal_YYYY":       s=[]    t=[]
        unit="" conversion=None }

I undertook some research and found out in the get method that the if statement on line 726 in mdf4.py does not pass because the ch_data_addr is 0, which should not be 0. So the problem seems to be located near there.

     [...]
725  ch_data_addr = channel['data_block_addr']
726  if ch_data_addr:
727      file_stream.seek(ch_data_addr, SEEK_START)
         [...]

review of incoming changes

Python version

all

Platform information

all

asammdf version

3.0.0dev

Description

There are some topics regarding the upcoming 3.0.0 release on which I would like to get some feedback before releasing the code, so please share your opinion

  1. mdf versions 2 and 3 are similar enough that the classes MDF2 and MDF3 have been merged into the new MDF23 class found in the file mdf_v2_v3.py. I couldn't come up with a better name and it feels a bit odd; maybe we can find a better one

  2. MDF objects are now iterable so you can write code like this:

    mdf = MDF(filename)
    for signal in mdf:
        # do something

    There are two things to decide here:

    1. at the moment the iterator is configurable to yield Signal objects or pandas DataFrame:
      from asammdf import MDF, configure
      configure(iter_channels=True)
      mdf = MDF(filename)
      for signal in mdf:
          # do something with Signal objects
      configure(iter_channels=False)
      for dataframe in mdf:
          # do something with the whole channel group as pandas DataFrame
      Should we keep this option or stick to yielding only Signal objects?
    2. when the iterator is configured to yield Signal objects the master channels are not yielded.
      Should we also yield the master channel?

GUI

Python version

asammdf 3.3.2

Description

Starting with 3.3.2 there will be a graphical user interface to handle the measurement files

image

Any help is appreciated:

  • is this useful?
  • what are the missing features?
  • bugs?

.get fails when a channel is recorded at multiple sample rates.

Some channels in our field data are sampled at multiple rates, putting them into multiple MDF groups. (The setup predates me and is out of my control.)

mdfreader complains and renames the channel:

>>> mdf1 = mdfreader.mdf(mdf_file)
WARNING added number to duplicate channel name: BrakeAppPress_4
WARNING added number to duplicate channel name: BrakeAppPress_7
WARNING added number to duplicate channel name: BrakeAppPress_13

So when you get the channel, it grabs the 'original' (First one it found?)

>>> mdf1.get("BrakeAppPress")
{'data': array([ 0., ...,  0.]),
 'description': '',
 'master': 'time',
 'masterType': 1,
 'unit': 'kPa'}

asammdf throws a ValueError:

>>> mdf2.get("BrakeAppPress")
ValueError: field 'BrakeSwitch' occurs more than once

Here's the channel_db:

>>> mdf2.channels_db["BrakeAppPress"]
[(0, 3), (0, 4), (0, 7), (0, 13)]

It should probably throw a warning but return a default value (either the longest or shortest dataset).
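
A hedged workaround sketch for duplicated channel names, based on the channels_db mapping and the group/index arguments of get that appear elsewhere on this page (which occurrence to pick is up to the caller):

from asammdf import MDF

mdf2 = MDF(mdf_file)  # mdf_file as in the snippets above

# channels_db maps a channel name to its (group, index) occurrences,
# e.g. [(0, 3), (0, 4), (0, 7), (0, 13)] as shown above
occurrences = mdf2.channels_db["BrakeAppPress"]

# pick one occurrence explicitly instead of relying on the ambiguous name lookup
group, index = occurrences[0]
sig = mdf2.get("BrakeAppPress", group=group, index=index)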

'right_shift' not supported

Hello
I ran into an issue when loading an MDF file. It seems to come from numpy, and the issue does not exist in version 2.5.3. Below is the error information:

site-packages\asammdf\mdf4.py", line 2912, in get
vals = vals >> bit_offset

ufunc 'right_shift' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

at this point vals is an ndarray containing a list

Is it possible to append a bool signal ?

Python version

('python=3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 17:26:49) [MSC v.1900 32 bit '
'(Intel)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.13.3'
'asammdf=3.2.1'

Code

The existing signal Ter15st has a logical type in INCA:

mdffile.get('Ter15st')
Signal(name=Ter15st\CCP:1, samples=array([1, 1, 1, ..., 1, 1, 1], dtype=uint8), timestamps=array([ 3.1090906 , 3.1205086 , 3.1292956 , ..., 245.62956445,
245.6399234 , 245.64957463]), unit=-, conversion={'id': b'CC', 'block_len': 62, 'range_flag': 0, 'min_phy_value': 0.0, 'max_phy_value': 0.0, 'unit': b'-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 'conversion_type': 0, 'ref_param_nr': 2, 'b': 0.0, 'a': 1.0}, comment=Terminal 15 status
Terminal 15 status, raw=False)

Code snippet

simt = timestamps array copied from an existing signal
siml = list of values (1, 0) copied from an existing signal

sl = np.asarray(siml, dtype=bool)
sig = asammdf.Signal(sl, simt, name='NewSignal')
sig
Signal(name=NewSignal, samples=array([ True, True, True, ..., True, True, True], dtype=bool), timestamps=array([ 3.1090906 , 3.1205086 , 3.1292956 , ..., 245.62956445,
245.6399234 , 245.64957463]), unit=, conversion=None, comment=, raw=False)
len(sl)
24222
len(simt)
24222
mdffile.append([sig,])

Traceback

Traceback (most recent call last):
File "<pyshell#48>", line 1, in
mdffile.append([sig,])
File "C:\WinPython-32bit-3.6.3.0Qt5\python-3.6.3\lib\site-packages\asammdf\mdf_v3.py", line 1356, in append
signal.samples.shape,
File "C:\WinPython-32bit-3.6.3.0Qt5\python-3.6.3\lib\site-packages\asammdf\utils.py", line 359, in fmt_to_datatype_v3
raise MdfException(message.format(fmt, shape))
asammdf.utils.MdfException: Unknown type: dtype=bool, shape=(24222,)

Description

Is it possible to append a bool type signal?
The goal would be to show the signal in the MDA viewer as a logical type (graph at the bottom).

Thanks in advance
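
A possible workaround sketch (an assumption, not a confirmed fix): keep the samples as uint8 (the dtype the existing logical channel already uses) instead of bool before building the Signal; the file name here is hypothetical:

import numpy as np
import asammdf

mdffile = asammdf.MDF('recordFile.dat')        # hypothetical existing MDF3 file

simt = mdffile.get('Ter15st').timestamps       # timestamps copied from an existing signal
siml = mdffile.get('Ter15st').samples          # values of the existing logical channel

sl = np.asarray(siml, dtype=np.uint8)          # make sure the dtype is uint8, not bool
sig = asammdf.Signal(sl, simt, name='NewSignal')
mdffile.append([sig])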

Create a signal and append it to an existing file

Python version

'3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 17:26:49) [MSC v.1900 32 bit (Intel)]'

Platform information

'Windows-7-6.1.7601-SP1'

asammdf version

'3.1.0'

Code

Code snippet

mdffile = asammdf.MDF('recordFile.dat')

Take for example samples and timestamps from an existing signal:

sams = mdffile.get('TS155_bo').samples
samt = mdffile.get('TS155_bo').timestamps
sig1 = asammdf.Signal(sams, samt, name='newSignal')

sig1
Signal(name=newSignal, samples=array([1, 1, 1, ..., 1, 1, 1], dtype=uint8), timestamps=array([ 3.1090906 , 3.1205086 , 3.1292956 , ..., 245.62956445,
245.6399234 , 245.64957463]), unit=, conversion=None, comment=, raw=False)
mdffile.append(sig1)

Traceback

Traceback (most recent call last):
File "<pyshell#7>", line 1, in
mdffile.append(sig1)
File "C:\WinPython-32bit-3.6.3.0Qt5\python-3.6.3\lib\site-packages\asammdf\mdf_v3.py", line 1111, in append
timestamps = signals[0].timestamps
AttributeError: 'numpy.uint8' object has no attribute 'timestamps'

Description

I wanted to create a new signal from scratch, append it to an existing MDF3 file and then save it (replacing the original .dat file or creating a new one...).
What would be the right way to do this?

Thanks in advance
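
A hedged sketch of the intended flow, assuming (as the other append examples on this page suggest) that append expects a list of Signal objects and that the result is written out with save; the output file name is hypothetical:

import asammdf

mdffile = asammdf.MDF('recordFile.dat')

# build the new signal from an existing one, as in the snippet above
sams = mdffile.get('TS155_bo').samples
samt = mdffile.get('TS155_bo').timestamps
sig1 = asammdf.Signal(sams, samt, name='newSignal')

# append takes a list of Signal objects (passing a bare Signal triggers the
# AttributeError shown above); then write the result to a new file
mdffile.append([sig1])
mdffile.save('recordFile_extended.mdf', overwrite=True)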

Provide an Anaconda package on conda-forge

Hi,

I'm using conda for managing the dependencies of my projects and would like to ask if you'd be interested in providing a conda package of asammdf. I'm relatively new to creating conda packages, but it seems like the process is pretty straightforward if the package is on PyPI already (see here). Having asammdf available on conda-forge and being able to install it with a simple conda install asammdf -c conda-forge would be great!

What do you think? I could start working on this, but I wanted to ask for your opinions first.

Thanks in advance!

Python.exe crashes with mf4 files

Python version

Please run the following snippet and write the output here

('python=3.6.3 (v3.6.3:2c5fed8, Oct  3 2017, 17:26:49) [MSC v.1900 32 bit '
 '(Intel)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.14.3'
'asammdf=3.4.3'

Code

MDF version

4.10

Code snippet

mdf = asammdf.MDF('testdrive.mf4')
s = mdf.get('LabelName').samples

Traceback

There is no traceback; I only get a window with the crash and the message that python.exe has stopped working, and no message in the terminal.

Description

Sometimes python.exe crashes when I get a channel of MDF4 format.
I can reproduce it if I try 10 - 20 times.
Sometimes I reproduce it on the first try, sometimes after 10 or more tries.
I've never reproduced it with an MDF3 file.
I've never reproduced it in the IDLE terminal.
I've also reproduced it when getting the timestamps.
I can reproduce it with memory='low' and memory='minimal' with the above code. However, when I run a script where I do some calculations I get a 'division by zero' error.
I tried with more MDF4 files (INCA - mf4) but from the same machine (I don't know if there is a specific file output configuration), and I could reproduce it too.
The file size is 5.87 MB.
I also reproduced it with mdf.select([Name]).

What could I do to debug this case?

Can't export to CSV.

Python version

Please run the following snippet and write the output here

import platform
import sys
from pprint import pprint

pprint("python=" + sys.version)
('python=3.6.5 (default, Apr 25 2018, 14:23:58) \n'
 '[GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]')

pprint("os=" + platform.platform())
'os=Darwin-17.5.0-x86_64-i386-64bit'

try:
    import numpy
    pprint("numpy=" + numpy.__version__)
except ImportError:
    pass
'numpy=1.14.3'

try:
    import asammdf
    pprint("asammdf=" + asammdf.__version__)
except ImportError:
    pass
'asammdf=3.4.3'

Code

MDF version

please write here the file version (you can run print(MDF(file).version))
3.00

Code snippet

please write here the code snippet that triggers the error
mdf.export('csv')

Traceback

please write here the error traceback
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.6/site-packages/asammdf/mdf.py", line 1294, in export
master.samples -= master.samples[0]
IndexError: index 0 is out of bounds for axis 0 with size 0

Description

Please describe the issue here.
Can't export to csv.

error: TypeError: divmod expected 2 arguments, got 1

Hi,
I have a problem using asammdf; hope you can help me~
When I open a .mf4 file like this:

from asammdf import MDF
mdf = MDF('F110.mf4', version='4.10')
speed = mdf.get('Speed')

it shows that:
File "......\asammdf\mdf4.py", line 3574, in get
pos_byte, pos_offset = divmod(ch_invalidation_pos)

TypeError: divmod expected 2 arguments, got 1

What can I do about it, and why is there only one argument 'ch_invalidation_pos' in divmod?

Thank you very much and have a nice day~

get() returns wrong signal data for MDF4 format

Hi everyone,

Since version 2.8.1, get() returns wrong signal data for a given name (i.e. data from another signal, I guess).

The issue is still in the development branch, but like I said it has been there since version 2.8.1. It was introduced after my fix in PR #23.

After the introduction of my fix in PR #23, it somehow had an impact on the _prepare_record function, so that the parents returned from _prepare_record are now wrong.

The fix that I made in _read_channels() (see PR #23) changed the order in grp['channel_dependencies']. This one was wrong before. To make a long story short, since it was a recursive function, we had to append the composition channel before doing another call to _read_channels(). Indeed, channel_dependencies have to be appended in the same order as the channels list, since we use the channel index to find the corresponding dependency_list in grp['channel_dependencies'] later on when the get function is called.

Let's consider this small MDF tree:

  • CH0: composition channel
    • CH1: composition channel
      • CH2: channel
      • CH3: channel

Before my fix (so in version 2.8.0), the get method was working (only for signals that were luckily without a dependency_list) and the grp['channels_dependencies'] would look like this:
[None, None, [CH2, CH3], [CH1]]
After my fix, this is how it looks like:
[[CH1], [CH2, CH3], None, None]

So now, the channels_dependencies are in the right order, but the data returned using get are wrong.

I did some investigation and my initial assumption is that the problem lies in the _prepare_record() function. I might be wrong, but I think the order in which we loop over the channels in sortedchannels may be wrong, so that in the end we have the wrong parent and wrong offset calculation.

It would be nice if someone could help with this issue, since I myself lack some understanding of this _prepare_record function.

Best, Julien.

Creating a MDF instance from a file-like object

Hello,

I just began using asammdf and I'm wondering if it's possible to load data from a file-like object in the proper pythonic way. The default constructor for MDF accepts just a file name and I'm not sure if there's a way to deal with file-like objects.
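
A hedged sketch, assuming a recent asammdf version: newer releases accept file-like objects (anything with read/seek) in the MDF constructor, which was not necessarily the case for the version discussed here; the file name is hypothetical:

import io
from asammdf import MDF

# read the raw bytes from anywhere (disk, network, database, ...)
with open("measurement.mf4", "rb") as f:
    buffer = io.BytesIO(f.read())

mdf = MDF(buffer)      # recent versions accept file-like objects (assumption)
print(mdf.version)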

UnicodeDecodeError when reading the version in mdf.py

Python version

Please run the following snippet and write the output here

import platform
import sys
from pprint import pprint

pprint("python=" + sys.version)
pprint("os=" + platform.platform())

try:
    import numpy
    pprint("numpy=" + numpy.__version__)
except ImportError:
    pass

try:
    import asammdf
    pprint("asammdf=" + asammdf.__version__)
except ImportError:
    pass

Code

Code snippet

please write here the code snippet that triggers the error

Traceback

please write here the error traceback

Description

version = file_stream.read(4).decode('ascii').strip(' \0')

UnicodeDecodeError: 'ascii' codec can't decode byte 0x8c in position 3: ordinal not in range(128)

Error using Group Generator

Python version

Please run the following snippet and write the output here

import platform
import sys
from pprint import pprint

pprint("python=" + sys.version)
pprint("os=" + platform.platform())

try:
    import numpy
    pprint("numpy=" + numpy.__version__)
except ImportError:
    pass

try:
    import asammdf
    pprint("asammdf=" + asammdf.__version__)
except ImportError:
    pass

('python=3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 '
'64 bit (AMD64)]')
'os=Windows-10-10.0.14393-SP0'
'numpy=1.14.1'
'asammdf=3.2.1'

Code

Code snippet

from asammdf import MDF

class MultiPandas(object):

    def __init__(self, mdfpath):
        self.__mdf = MDF(mdfpath)
        iterGroups = self.__mdf.iter_groups()
        self.__groups = []
        finished = next(iterGroups)
        self.__groups.append(finished)

if __name__ == "__main__":
    filename = ...
    multipandas = MultiPandas(filename)

Traceback

File "D:/Projekte/PythonKI/CoreData/MultiPandas.py", line 48, in
multipandas = MultiPandas(filename)
File "D:/Projekte/PythonKI/CoreData/MultiPandas.py", line 23, in init
finished = next(iterGroups)
File "C:\ProgramData\Anaconda3\envs\osmnx\lib\site-packages\asammdf\mdf.py", line 1222, in iter_groups
master.append(self.get_master(i, data=data_bytes))
File "C:\ProgramData\Anaconda3\envs\osmnx\lib\site-packages\asammdf\mdf_v3.py", line 3123, in get_master
data_bytes, offset = fragment
ValueError: not enough values to unpack (expected 2, got 1)

Description

Error when using MDF.iter_groups()

Cannot find channel's name with '\\'

Hello,

I read an MDF3 file (.dat) from INCA.
I have the dictionary with the channels, and when I want to get a channel it is impossible to find it because of the slashes in the name.
e.g.:
{'ACC\CCP:1': [(4, 1)], 'Pwt\CCP:1': [(7, 1)], 'GearCord\CCP:1': [(8, 1)], '$EVENT_COMMENTS': [(9, 1)], '$PAUSE_COMMENTS': [(10, 1)], '$SNAPSHOT': [(11, 1)]}

mdffile.get('ACC\CCP:1') cannot be found, as it searches for 'ACC' and not for 'ACC\CCP:1'

if name not in self.channels_db:
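
A hedged workaround sketch based on the channels_db dictionary shown above: look up the full name (backslash included) there and address the channel by its group/index pair instead of by name; the file name is hypothetical:

from asammdf import MDF

mdffile = MDF('measurement.dat')   # hypothetical INCA MDF3 file

# channels_db maps the full channel name to its (group, index) pairs,
# e.g. {'ACC\\CCP:1': [(4, 1)], ...} as shown above
group, index = mdffile.channels_db['ACC\\CCP:1'][0]
sig = mdffile.get(group=group, index=index)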

Cam

Python version

Please run the following snippet and write the output here

import platform
import sys
from pprint import pprint

pprint("python=" + sys.version)
pprint("os=" + platform.platform())

try:
    import numpy
    pprint("numpy=" + numpy.__version__)
except ImportError:
    pass

try:
    import asammdf
    pprint("asammdf=" + asammdf.__version__)
except ImportError:
    pass

Code

MDF version

please write here the file version (you can run print(MDF(file).version))

Code snippet

please write here the code snippet that triggers the error

Traceback

please write here the error traceback

Description

Please describe the issue here.

Fix Mat file Export

Python version

Please run the following snippet and write the output here

python=2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)]'
'os=Windows-7-6.1.7601-SP1'

Code

for file_name in os.listdir(dir_mdf_file):
    mdf_parser = asammdf.mdf.MDF(name=file_name, memory='low', version='4.00')
    mdf_file = mdf_parser.export('mat', file_name)

Traceback

(<type 'exceptions.TypeError'>, TypeError("'generator' object has no attribute '__getitem__'",), <traceback object at 0x000000000E247A88>)

Error 2:
(<type 'exceptions.ValueError'>, ValueError('array-shape mismatch in array 7',), <traceback object at 0x0000000014D9B0C8>)

Description

While trying to convert MDF files to .mat files using the MDF module, I am getting these two errors.
I am assuming that Error 2 may be because of numpy. Can anyone help explain why this error comes up while exporting?
Can I get help on this?

How can I configure verbosity when exporting?

Python version

'python=3.6.5 (default, Apr 12 2018, 22:45:43) \n[GCC 7.3.1 20180312]'
'os=Linux-4.14.4-1-ARCH-x86_64-with-arch'
'numpy=1.14.2'
'asammdf=3.3.1'

Code

Code snippet

mdf4.export('csv', export_path, overwrite=True)

Traceback

Exporting group 1 of 1

Description

Is there a way to disable these 'Exporting group 1 of 1' messages? I'm using asammdf in another application that prints status messages on its own, and I don't want these messages in between.

Error when appending a signal to a mf4 file

Python version

Please run the following snippet and write the output here

('python=3.6.3 (v3.6.3:2c5fed8, Oct  3 2017, 17:26:49) [MSC v.1900 32 bit '
 '(Intel)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.13.3'
'asammdf=3.4.0'

Code

MDF version

4.10

Code snippet

sig = asammdf.Signal(samples, times, name='NSIGNAL')

sig
<Signal NSIGNAL:
samples=[0 0 0 ..., 0 0 0]
timestamps=[ 4.33278000e-02 1.46902900e-01 2.50719500e-01 ..., 5.55888950e+02
5.55988988e+02 5.56078259e+02]
unit=""
conversion=None
source=None
comment=""
mastermeta="None"
raw=True
display_name=>
mdf.append([sig,])

Traceback

Traceback (most recent call last):
File "<pyshell#14>", line 1, in
mdf.append([sig,])
File "C:\WinPython-32bit-3.6.3.0Qt5\python-3.6.3\lib\site-packages\asammdf\mdf_v4.py", line 2366, in append
seek(0, 2)
File "C:\WinPython-32bit-3.6.3.0Qt5\python-3.6.3\lib\tempfile.py", line 483, in func_wrapper
return func(*args, **kwargs)
ValueError: seek of closed file

Description

Crash when trying to append a signal to a mf4 file.
Is this a Python issue? https://bugs.python.org/issue29245

Streaming/Appending support?

Examples all use a fixed set of data to create new files.

I'd like to ditch the CANalyzer and dSPACE setup and replace a chunk of it with a Pi, some MCP2515-based adapter and other middleware. SocketCAN supports quite a few adapters. Being able to drop our CANcase loggers for certain cases would be great.

Not sure of the best way to implement it.

Pseudocode:

mdf4 = MDF(version='4.00')
s_engspd= Signal(name='EngineSpeed', unit='f8')

....

signals = [s_engspd, s_imap, s_fuel]
mdf4.stream(signals, 'Created by Python')

t, s = pyccp.decode(python_can.read())
s_engspd.append(timestamp = t, sample=s)

mdf4.flush() # Write to disk.

Looking to shoehorn asammdf in with these tools for live data collection:

The unit is not stored in the Signal with MDF3 format

Python version

('python=3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 17:26:49) [MSC v.1900 32 bit '
'(Intel)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.13.3'
'asammdf=3.3.0'

Code

Code snippet

import asammdf
import numpy as np
mdf = asammdf.MDF3()
sig = asammdf.Signal(np.array([0,10,15,20]), np.array([0,1,2,3]), name='S3', unit='Km/h')
>>>sig
<Signal S3:
	samples=[ 0 10 15 20]
	timestamps=[0 1 2 3]
	unit="Km/h"
	conversion=None
	source=None
	comment=""
	mastermeta="None"
	raw=False
	display_name=>

mdf.append([sig])
>>>mdf
<asammdf.mdf_v3.MDF3 object at 0x032C73F0>
mdf.get('S3')
<Signal S3:
	samples=[ 0 10 15 20]
	timestamps=[ 0.  1.  2.  3.]
	unit=""
	conversion=None
	source=SignalSource(name='Channel inserted by Python Script', path='', comment='Module number=0 @ address=0', source_type=0, bus_type=0)
	comment=""
	mastermeta="('Time', 1)"
	raw=False
	display_name=>

Traceback

No traceback

Description

With MDF3 format, the unit is not stored in the signal.
With MDF4 it works fine.

Cut mdf3 file string operation on non-string array

Hi,

I get an error when trying to do mdf.cut(start=300).
Stack trace below: string operation on non-string array.

300 320
Traceback (most recent call last):
  File "whlk.py", line 26, in
    short = mdf.cut(start=first)
  File "C:\Users\dennis.delin\AppData\Local\Programs\Python\Python36-32\lib\site-packages\asammdf\mdf.py", line 220, in cut
    data=data
  File "C:\Users\dennis.delin\AppData\Local\Programs\Python\Python36-32\lib\site-packages\asammdf\mdf3.py", line 1913, in get
    vals = encode(vals, 'latin-1')
  File "C:\Users\dennis.delin\AppData\Local\Programs\Python\Python36-32\lib\site-packages\numpy\core\defchararray.py", line 540, in encode
    _vec_string(a, object_, 'encode', _clean_args(encoding, errors)))
TypeError: string operation on non-string array

Cannot get signal in MDF4 because of error in dependency_list

Hi everyone,

When I try to get a list of signals that belong to the same ChannelArrayBlock, the last signal has a non-empty channel_dependencies (although it is a normal Channel).
Since the dependency_list is not empty, it tries to get every signal as if it were a ChannelArrayBlock. This signal being itself in the dependency_list, we enter an infinite loop and asammdf crashes when reaching Python's maximal recursion limit.

When looking a bit more in detail, the dependency_list that we get for this signal, is actually the dependency_list of the ChannelArrayBlock it belongs to.

In the ChannelArrayBlock I tested, I have 3 signals, so when I put an offset of three, the get function is working again:

dependency_list = grp['channel_dependencies'][ch_nr + 3]
instead of:
dependency_list = grp['channel_dependencies'][ch_nr]
in get function

However, I am not sure yet if this offset is constant or if it depends.

I am still doing some investigation to find out why the channel_dependencies does not match with the channels we have.

Cannot read MDF4 file

Hi Daniel,

Thank you for your work!
When trying to read an MDF4 file with no options, I got the following error:

>>> import asammdf
>>> mdf = asammdf.MDF('test.MF4')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf.py", line 48, in __init__
    self.file = MDF4(name, load_measured_data, compression=compression)
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf4.py", line 95, in __init__
    self._read(file_stream)
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf4.py", line 168, in _read
    self._read_channels(ch_addr, grp, file_stream, dg_cntr, ch_cntr)
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf4.py", line 268, in _read_channels
    sd_block = SignalDataBlock(address=data_list['data_block_addr{}'.format(i)], file_stream=file_stream)
KeyError: 'data_block_addr2'

UnboundLocalError: local variable 'record' referenced before assignment

Python version

3.6.0

Platform information

Windows 10 Enterprise

asammdf version

3.0.0.dev0

Description

Loading an MDF4 file with the following code:

from asammdf import MDF, Signal, configure
import numpy as np
with MDF(r"filename.mf4") as mdf:
  s = mdf.get("CHANNEL_NAME")

I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "C:\Users\H215374\AppData\Local\Programs\Python\Python36-32\lib\site-packages\asammdf\mdf_v4.py", line 3801, in get
    inval_bytes = record['invalidation_bytes']
UnboundLocalError: local variable 'record' referenced before assignment

TypeError: 'float' object cannot be interpreted as an integer

Trying to read an MDF3 from a CANcase logger.

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
C:\Projects\Python-64bit-3.6\python-3.6.1.amd64\lib\site-packages\asammdf\mdf3.py in get(self, name, group, index, raster, samples_only)
    981             try:
--> 982                 record = grp['record']
    983             except:

KeyError: 'record'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-17-0762963e6e7c> in <module>()
----> 1 mdf.get("EngSpeed")

C:\Projects\Python-64bit-3.6\python-3.6.1.amd64\lib\site-packages\asammdf\mdf3.py in get(self, name, group, index, raster, samples_only)
    982                 record = grp['record']
    983             except:
--> 984                 record = grp['record'] = fromstring(grp['data_block']['data'], dtype=dtypes)
    985 
    986         # check if this is a channel array

C:\Projects\Python-64bit-3.6\python-3.6.1.amd64\lib\site-packages\numpy\core\records.py in fromstring(datastring, dtype, shape, offset, formats, names, titles, aligned, byteorder)
    707         shape = (len(datastring) - offset) / itemsize
    708 
--> 709     _array = recarray(shape, descr, buf=datastring, offset=offset)
    710     return _array
    711 

C:\Projects\Python-64bit-3.6\python-3.6.1.amd64\lib\site-packages\numpy\core\records.py in __new__(subtype, shape, dtype, buf, offset, strides, formats, names, titles, byteorder, aligned, order)
    421             self = ndarray.__new__(subtype, shape, (record, descr),
    422                                       buffer=buf, offset=offset,
--> 423                                       strides=strides, order=order)
    424         return self
    425 

TypeError: 'float' object cannot be interpreted as an integer

MDF4's save() doesn't handle spaces correctly

Python version

3.6.0

Platform information

Windows 10 Enterprise

asammdf version

3.0.0dev

Description

All text after a space is stripped from the destination passed to save() in the MDF4 class.

Specifically, I have a series of MDF files from a customer that follow a naming pattern like "MYFILE [1].mf4", "MYFILE [2].mf4". I am reading them (works fine), updating them (works fine), and saving them. The actual save operation completes successfully, but they are all saved as "MYFILE.mf4", and the file gets overwritten on each subsequent save.

EDIT: This only appears to happen if the extension is excluded (e.g. mdf.save("MYFILE [1]")). If dst includes ".mf4" the spaces are handled correctly.

Module not working when load_measured_data=False

Hi Daniel!

When the option load_measured_data is equal to False, the module crashes on line 248 in mdf4.py because it tries to access a value for the key 'data_block' in the gp dictionary, but the key does not exist.

Traceback (most recent call last):
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf.py", line 48, in __init__
    self.file = MDF4(name, load_measured_data, compression=compression)
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf4.py", line 90, in __init__
    self._read(file_stream)
  File "C:\Users\xxxx\Documents\git\asammdf\asammdf\mdf4.py", line 248, in _read
    ba = bytearray(gp['data_block']['data'][:8])
KeyError: 'data_block'

Best, Julien

Repo clean-up

It is advised to avoid adding large binary files to a git repository.

I've broken this rule and the result is clear:

  • a clone of asammdf takes ~118MB for ~1900 objects
  • as a comparison, the CPython repository has ~216MB for ~680000 objects

image
image

This is why I've decided to clean-up the repository and remove the bloat.

The test files used for bench-marking will be removed, as well as the PNG results. I can provide the test files upon request to whoever is interested.

I know this will break the existing forks and I apologize for it.

Sampling rate differs when using ETAS MDA and asammdf MDF

('python=3.6.0 |Continuum Analytics, Inc.| (default, Dec 23 2016, 11:57:41) '
'[MSC v.1900 64 bit (AMD64)]')
'os=Windows-7-6.1.7601-SP1'
'numpy=1.14.2'
'asammdf=3.3.1'

Hello, a difference was found when using ETAS MDA and asammdf MDF to read a .dat file. The sampling time is different: MDA shows 0.010 s, as set when recording, but with asammdf the intervals jump between about 0.009 s and 0.001 s.
I don't know why.
image

Part of the data (columns: index, ETAS MDA time, ETAS MDA UPmpP_pUnFil, asammdf time, asammdf UPmpP_pUnFil):
0 0.072471607 5007 0 5007
1 0.082392079 5056 0.072471607 5007
2 0.092392554 5020 0.082392079 5056
3 0.102425032 4983 0.092392554 5020
4 0.112441508 4946 0.102425032 4983
5 0.122393982 4910 0.112441508 4946
6 0.132394457 5020 0.122393982 4910
7 0.142442935 5056 0.132394457 5020
8 0.152475413 5020 0.133610515 5020
9 0.162395885 4983 0.142442935 5056
10 0.17239636 4946 0.152475413 5020
11 0.18247684 4934 0.162395885 4983
12 0.192445314 5032 0.17239636 4946
13 0.202397788 5056 0.18247684 4934
14 0.212398264 5020 0.192445314 5032
15 0.222478743 4983 0.202397788 5056
16 0.232463218 4946 0.212398264 5020
17 0.24238369 4934 0.222478743 4983
18 0.252416168 5032 0.232463218 4946
19 0.262480646 5056 0.233679276 4946
20 0.272449121 5007 0.24238369 4934
21 0.282401594 4971 0.252416168 5032
22 0.29240207 4934 0.262480646 5056
23 0.30248255 4946 0.272449121 5007
24 0.312483025 5032 0.282401594 4971
25 0.322387497 5044 0.29240207 4934
26 0.332403973 5007 0.30248255 4946
27 0.342484453 4971 0.312483025 5032
28 0.352452927 4922 0.322387497 5044
29 0.362405401 4946 0.332403973 5007
30 0.372405877 5044 0.333620031 5007
31 0.382486356 5044 0.342484453 4971
32 0.392470831 4995 0.352452927 4922
33 0.402391303 4959 0.362405401 4946
34 0.41240778 4922 0.372405877 5044
35 0.42248826 4971 0.382486356 5044
36 0.432440733 5044 0.392470831 4995
37 0.442393207 5032 0.402391303 4959
38 0.452409683 5007 0.41240778 4922
39 0.462442161 4959 0.42248826 4971
40 0.472474638 4922 0.432440733 5044
41 0.48239511 4983 0.433656791 5044
42 0.492411587 5056 0.442393207 5032

How to get source information from groups, signals

Python version

'3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]'

Platform information

'Windows-7-6.1.7601-SP1'

asammdf version

3.1.0

Description

Sorry, I pressed enter too quickly...
I am a newbie on GitHub and I am not sure this is the correct way to ask a question. Sorry if it is not.

I just discovered asammdf and could read some signals from my mf4 files (4.11). I like it, great module!
I was just wondering how I could get more information from the signals (like source information, etc.)?

Background: our mf4 files contain bus protocols of the same type of ECUs from different networks. Each ECU works with the same DBC. The MF4 files therefore contain the same signal names (for example MSG_Motor::angle_MT) but from different sources (CAN1, CAN2... CAN6).

Is there any possibility to get this information from the MF4 with asammdf?

Thanks!
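
A hedged sketch of reading per-channel source metadata, assuming a hypothetical file and channel name; as the Signal reprs elsewhere on this page show, a Signal carries a source attribute (a SignalSource with name, path, comment, source_type and bus_type fields):

from asammdf import MDF

mdf = MDF("measurement.mf4")   # hypothetical MDF 4.11 file
sig = mdf.get("angle_MT")      # hypothetical channel name

src = sig.source               # SignalSource or None
if src is not None:
    print(src.name, src.path, src.comment, src.source_type, src.bus_type)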
