
dolfyn's People

Contributors

jmcvey3, lillieogden, lkilcher, mcfogarty, rfonsecadasilva


dolfyn's Issues

ADV heading range -180 to +180, ADCP heading range 0-360

RDI ADCP heading range is 0-360 degrees, but the Nortek ADV heading range is -180 to +180 degrees. It would be convenient if all headings were reported in the range 0-360.

I think the ADV heading range of -180 to +180 will only occur when _check_declination is executed and 'declination' is found. This is because nortek_orient2euler will be called on line 108 of rotate/base.py and it uses np.rad2deg where rad2deg(x) is 180 * x / pi. nortek_orient2euler is defined on line 180 of rotate/base.py.

But, the ADV heading should have the range 0-360 degrees if the 'if' statement on line 120 of rotate/base.py is executed:
if 'heading' in odata and \
        not advo.props.get('declination_in_heading', False):

    if cs == 'earth' and not rotation_done_flag:
        # Rotate to instrument coordinate-system before adjusting
        # for declination.
        advo.rotate2('inst', inplace=True)

    odata['heading'] += advo.props['declination']
    odata['heading'] %= 360

    advo.props['declination_in_heading'] = True
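
For reference, a minimal sketch (not dolfyn code) of the -180/+180 to 0-360 conversion being requested; numpy's modulo already handles negative angles:

import numpy as np

def wrap_heading_0_360(heading_deg):
    # Wrap headings reported in [-180, 180) into the [0, 360) convention.
    return np.asarray(heading_deg) % 360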

check_offset warning

There is some text in the RDIReader.check_offset method indicating a bug. This issue is to collect feedback regarding that statement. If you encounter that statement, please reopen this issue and, if possible, provide a link to the offending data file.

error in reading .vec files

Hello,

Firstly, thanks a ton for this incredibly useful dolfyn package! I am trying to apply your motion correction module to some ADV-IMU data that I have, and am running into issues. While I verified that the dolfyn package is correctly installed (it runs fine for the sample .vec files in the example_data folder), I get an error when I try to use it on the .vec files of my data. Does the package work for data acquired in both continuous and burst modes on the Vector?


I have attached a snapshot of the error here and also attached the sample acquired data in .vec format. Any comments regarding this would be extremely helpful.
https://www.dropbox.com/s/yr76kx32zfpuhgl/sample_data1.vec?dl=0

Thanks!
Aditya

Wrong Pitch/Roll/Heading out of DOLfYN

It appears that p, r, h are calculated incorrectly in dolfyn.rotate.base.nortek_orient2euler. I think we need to take the inverse of omat before calculating?! How does this relate to euler2orient?

Incorrect scaling of Accel, AngRt.

The Accel and AngRt data are not in the correct units. Version 0.3.2 changed the DVel, DAng signals to Accel and AngRt, but did not scale these signals by the sample_rate. This results in incorrect estimates of urot, uacc and therefore motion correction. All motion correction performed using v0.3.2 is incorrect (unless fs=1Hz).
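
A minimal sketch of the rescaling this implies, assuming the stored signals are per-sample increments and fs is the sample rate in Hz (the attribute names follow the issue text, not a verified dolfyn API):

def rescale_motion_signals(dat, fs):
    # Convert per-sample increments to physical rates (sketch only).
    # Assumes dat.Accel holds delta-velocity per sample and dat.AngRt holds
    # delta-angle per sample, as described above.
    dat.Accel *= fs   # -> m/s^2
    dat.AngRt *= fs   # -> rad/s
    return dat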

New Pip Install?

Hello,

First, many thanks for writing this package. It's going to help immensely with my data processing. However I'm having some trouble with the install. The most recent pip install is version 0.8.1, which encounters various errors in Python 3. And when I clone 0.9.0 from git, I can't get past the:

python setup.py install

because I encounter the error:

error: Unexpected HTML page found at https://github.com/lkilcher/pyDictH5.git#egg=pyDictH5

Anyway, I was hoping that updating the pip version would make the process go more smoothly. Or if you have any other suggestions regarding the HTML egg error, I'd be happy to try those as well!

-Galen
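
One possible workaround, offered only as a guess based on the error above: install the pyDictH5 dependency directly from its git repository before running the dolfyn install:

pip install git+https://github.com/lkilcher/pyDictH5.git
python setup.py install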

ValueError: illegal value in 4-th argument of internal None

This issue was indicated to me by a colleague who was trying to call the dolfyn.adv.turbulence.calc_turbulence function. Here is the image she sent me of the stack trace:
[stack-trace screenshot]

As I was investigating this issue on my machine, I was getting the following warning every time I called scipy.signal.detrend:

/Library/Python/2.7/site-packages/scipy/linalg/basic.py:884: RuntimeWarning: internal gelsd driver 
lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038,
fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver.

I haven't exactly fixed this issue for either of us yet, but I thought I'd note it here...
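
One workaround to experiment with (an assumption, not a confirmed fix): detrend with a closed-form linear least-squares fit so the scipy/LAPACK gelsd code path mentioned in the warning is never hit:

import numpy as np

def detrend_linear(x):
    # Remove a linear least-squares trend from a 1-D array (closed form).
    x = np.asarray(x, dtype=float)
    t = np.arange(x.size, dtype=float)
    t0, x0 = t - t.mean(), x - x.mean()
    slope = (t0 * x0).sum() / (t0 ** 2).sum()
    return x0 - slope * t0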

prcnt_gd seems backward

The values of dat.signal.prcnt_gd for RDI ADCPs seem backward (good data have a value of 0, and bad data have values >0).

issues with api.py

I encountered several issues when using this command: import dolfyn.adp.api as apm

Here are the warning messages I got:

  1. File "C:/ProgramData/Anaconda3/Lib/site-packages/test/adp_tests.py", line 1, in
    import dolfyn.adp.api as apm
    ImportError: cannot import name 'binner' from 'dolfyn.adp.base' (C:\ProgramData\Anaconda3\lib\site-packages\dolfyn\adp\base.py)

  2. File "C:\ProgramData\Anaconda3\lib\site-packages\dolfyn\adp\api.py", line 3, in
    from .api import read, load
    ImportError: cannot import name 'read' from 'dolfyn.adp.api' (C:\ProgramData\Anaconda3\lib\site-packages\dolfyn\adp\api.py)

  3. File "C:\ProgramData\Anaconda3\lib\site-packages\dolfyn\adp\api.py", line 3, in from .api import read_rdi, load
    ImportError: cannot import name 'read_rdi' from 'dolfyn.adp.api' (C:\ProgramData\Anaconda3\lib\site-packages\dolfyn\adp\api.py)

I had those issues because I used a deprecated interface of dolfyn.
With the new interface (0.10.1), import dolfyn.adp.api as apm has to be changed to import dolfyn as dlfn.

Now everything works just fine.
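
For anyone hitting the same imports, a minimal sketch of the newer interface (the filename is illustrative):

import dolfyn as dlfn

dat = dlfn.read('my_adcp_file.000')  # replaces the old dolfyn.adp.api readers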

Coordinate system of Motion Correction

It seems that there is an inconsistency between correct_motion and CorrectMotion: the former explicitly requires the input object to be in the inst coordinate system, but this is only implicit for the latter. If the latter operates on data in the 'earth' frame, does this create errors because the angrt data is in the earth frame?!
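
A minimal defensive sketch of the check this implies, assuming the 'coord_sys' props key and the rotate2 method that appear elsewhere in these issues:

def ensure_inst_frame(dat):
    # Rotate the data back to the instrument frame before motion correction.
    if dat.props.get('coord_sys') != 'inst':
        dat.rotate2('inst', inplace=True)
    return dat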

>3 beam instruments

Data from instruments with more than 3 beams are not handled consistently. Different manufacturers use different beam-to-inst rotation matrices (e.g., RDI calculates error velocity, while Nortek just calculates another vertical velocity). More work is needed to figure out how to handle the extra information consistently.

Error in io._read_bin

I ran into the following error when trying to read source data files in example_data:

In[16]: dolfyn.read('RDI_withBT.000')
Reading file RDI_withBT.000 ...
Traceback (most recent call last):
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3291, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-16-1b2a3777040f>", line 1, in <module>
    dolfyn.read('RDI_withBT.000')
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/dolfyn/io/api.py", line 35, in read
    dat = func(fname, userdata=userdata, nens=nens)
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/dolfyn/io/rdi.py", line 38, in read_rdi
    with adcp_loader(fname) as ldr:
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/dolfyn/io/rdi.py", line 831, in __init__
    self.read_hdr()
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/dolfyn/io/rdi.py", line 699, in read_hdr
    nextbyte = fd.read_ui8(1)
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/dolfyn/io/_read_bin.py", line 171, in read_ui8
    return self.read(n, 'B')
  File "/Users/wu-jung/miniconda3/envs/echopype/lib/python3.7/site-packages/dolfyn/io/_read_bin.py", line 163, in read
    raise eofException
dolfyn.io._read_bin.eofException

I tested it using python 3.7 and 3.5. The same error occurred for reading ad2cp files and a few others I tried to read.
This must be something stupid but I couldn't figure it out. Any help will be much appreciated!

Document `err_vel`

The error velocity (from RDI instruments) is not documented. It is currently located at dat.vel[3].

Improper handling of declination in motion correction.

The motion correction algorithms don't properly account for declination when calculating velacc (and AccelStable, and Accel), because these data items are computed separately, and before any others. This won't affect data where declination was not specified, but when it is specified, velacc ends up in a different coordinate system (magnetic North) than the other variables such as vel and velrot (True North).

RDI Timestamp error

I received a report of an error in reading RDI timestamps that ends with:

File "<dolfyn_path>\dolfyn\adp\_readbin.py", line 741, in load_data
 1e4 * self.ensemble.rtc[6, :]))

ValueError: microsecond must be in 0..999999

This is due to an invalid timestamp being passed to datetime.datetime. In this case, it was an invalid microsecond value (greater than 999,999).
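
A hedged sketch of one way to guard against this, clamping the microsecond field before constructing the datetime (the variable names are illustrative; the 1e4 factor follows the line quoted above):

import datetime

def safe_timestamp(year, month, day, hour, minute, second, hundredths):
    # Build a datetime, clamping microseconds into the valid 0-999999 range.
    usec = int(1e4 * hundredths)
    usec = min(max(usec, 0), 999999)
    return datetime.datetime(year, month, day, hour, minute, second, usec)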

Declination adjustment

Does the sign of the declination adjustment change if an ADP or ADV is up vs. down-facing?

What about the order of the rotations? Does it matter?

`subset` doesn't work for arbitrary data additions

If you add a data-item that doesn't have a time-dimension that matches the other time dimensions, then subset won't work.

For now, a workaround is to pop the data item from the data object, then perform subset, then add it back:

z_ = dat.pop('z_')
dat2 = dat.subset[1000:5000]
dat2.z_ = z_
dat.z_ = z_

This is linked to the problem that we do not yet track the array dimensions. It looks like xarray has the functionality we need, perhaps we should switch to using that?

User-input 'body2head_rotmat' read issue

I was having an issue where changing the user-input 'body2head_rotmat' made no difference to the motion correction, so I tracked it down to the private method _rotate_vel2body() (located in ./dolfyn/rotate/vector.py):

def _rotate_vel2body(advo):
    if (np.diag(np.eye(3)) == 1).all():
        advo.props['vel_rotated2body'] = True
    if 'vel_rotated2body' in advo.props and \
       advo.props['vel_rotated2body'] is True:
        # Don't re-rotate the data if its already been rotated.
        return
    if not rotb._check_rotmat_det(advo.props['body2head_rotmat']).all():
        raise ValueError("Invalid body-to-head rotation matrix"
                         " (determinant != 1).")
    # The transpose should do head to body.
    advo['vel'] = np.dot(advo.props['body2head_rotmat'].T, advo['vel'])
    advo.props['vel_rotated2body'] = True

The first "if" statement is never false, and subsequently the second "if" statement is always true, and the function returns without reaching the second to last line that does the head to body transpose.

I believe, for the first "if" statement, np.eye(3) was supposed to be advo.props['body2head_rotmat'], because if the latter = an identity matrix, the function appears to work fully as written.

So, replacing np.eye(3) with advo.props['body2head_rotmat'], along with using the right body2head orientation matrix appears to fix all the data that is run through dolfyn.adv.api.motion.correct_motion().
(It doesn't fix the uncorrected data - this private method to do the head-body transpose appears only to be called in dolfyn.adv.api.motion.correct_motion()?).

However this change does error out six of the nosetests because they can't find 'body2head_rotmat'; it doesn't appear to be loaded with the .h5 files? Not sure.
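
A minimal sketch of the fix described above, using the same names as the quoted snippet (np and rotb are the module-level imports in rotate/vector.py); this is not a tested patch:

def _rotate_vel2body(advo):
    rotmat = advo.props['body2head_rotmat']
    # Only skip the rotation when the user-supplied matrix really is identity.
    if (np.diag(rotmat) == 1).all():
        advo.props['vel_rotated2body'] = True
    if advo.props.get('vel_rotated2body', False):
        # Don't re-rotate the data if it's already been rotated.
        return
    if not rotb._check_rotmat_det(rotmat).all():
        raise ValueError("Invalid body-to-head rotation matrix"
                         " (determinant != 1).")
    # The transpose does head to body.
    advo['vel'] = np.dot(rotmat.T, advo['vel'])
    advo.props['vel_rotated2body'] = True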

Clean up `adp/clean.py`

Most of the functions in here are no longer compatible with the latest dolfyn data format. For example, these functions often loop over .u, .v, .w, when they should really just operate on ['vel'].

bt_vel not in dat.props[rotate_vars] for RDI with bottom tracking

dat = dolfyn.read(fname_rdi + '.000', keep_orient_raw=True)

yields
dat.props['rotate_vars'] = {'vel'}

despite rdi.py having:

def load_data(self, nens=None):
    ...
    if 'bt_vel' in dat:
        dat['props']['rotate_vars'].update({'bt_vel', })

And the dat object does have bt_vel after being read in.
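
A one-line workaround sketch until the reader is fixed, using the props structure shown above:

dat.props['rotate_vars'].update({'bt_vel'})  # register bt_vel for rotations by hand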

Velocity magnitude changes when using 'Declination'

The velocity magnitude changes when motion correction is applied even when motion signals are set to zero (dat.Accel[:] = 0; dat.AngRt[:] = 0). This SHOULD NOT be happening.

For example, using a data file from a 'Turbulence Torpedo' deployment, the following code:

import dolfyn.adv.api as avm
import matplotlib.pyplot as plt
import numpy as np

# Create a default motion correction instance
mc = avm.motion.CorrectMotion()

# Load the data file:
dat = avm.read_nortek('data_file.VEC')

# Specify a declination
# !!! This is crucial, the bug is minor or non-existent without this line.
dat.props['declination'] = 14.5

# Specify the position/orientation of the ADV head.
dat.props['body2head_vec'] = np.array([0.21, 0, 0.603])  # in meters
dat.props['body2head_rotmat'] = np.array([[0, 0, 1],
                                          [0, 1, 0],
                                          [-1, 0, 0], ])
# Grab the relevant data
dat = dat.subset(slice(22150,103900))

# Copy the data for performing motion correction.
dat2 = dat.copy()
# Zero these out so that I can see the effect of 'orientation signals' only.
dat2.Accel[:] = 0
dat2.AngRt[:] = 0
mc(dat2)
avm.rotate.earth2principal(dat2)


#######
# Compare velocity magnitudes.
fig = plt.figure(2, figsize=[5, 5])
fig.clf()
ax = plt.subplot(211)
ax.plot(-dat.u)
ax.plot(-dat2.u)
ax.axhline(0, color='k', linestyle='--')
ax.set_xlim([0, 10000])
ax.set_ylim([0, 4])
ax.set_ylabel('Streamwise velocity [m/s]')
ax = plt.subplot(212)
ax.plot(np.sqrt((dat2._u ** 2).sum(0)) - np.sqrt((dat._u ** 2).sum(0)))
ax.axhline(0, color='k', linestyle='--', linewidth=2)
ax.set_xlim([0, 10000])
ax.set_ylim((-0.4, 0.8))
ax.set_ylabel('$|u_\mathrm{mc}| - |u_\mathrm{raw}|\,\mathrm{[m/s]}$', size='large')

Generates this figure,

[figure: bad_magnitude]

So, the question is clearly, why is the magnitude of the velocity different (second panel is non-zero) when the motion signals (third panel) are zero?

Note here that this issue is not particularly large when the declination is not specified. Also note that this appears to be related to something going wrong with the orientation matrix:

In [61]: np.linalg.det(dat.orientmat[:,:,0])
Out[61]: 0.99999988

In [62]: np.linalg.det(dat2.orientmat[:,:,0])
Out[62]: -0.18341382

The determinant of the orientation matrix after motion-correction is not 1?!! This is no good. Also, the determinant is still wrong even when I comment out the dat.props['declination'] = 14.5 line. In that case, I get:

In [64]: np.linalg.det(dat2.orientmat[:,:,0])
Out[64]: -0.35748771

Why does this problem exist? Why does changing the declination mess up the velocity magnitude, while the orientation matrix gets messed up (differently) either way?

Signature Data Organization

Reading Nortek Signature files (.ad2cp) has ambiguity associated with the ensemble counter when multiple ping-types are merged together. To address this, it would be better if output data was organized by ping-type, and then the user could apply a nortek2.reduce-like function that meets their needs.

I think this starts with using ping-type specific ensemble counters. This raises issues when the user specifies the number of ensembles he/she wants, but that can be sorted out I think.

read ADCP data from a specific ensemble to another

My concern is not actually a dolfyn issue.

I'm wondering if there is any way to read ADCP data with dlfn.read from one specific ensemble to another. Is that possible? I just need some specific periods that I previously identified with WinADCP. Reading the entire raw data file takes time and it is not useful for what I want to do.

Thanks
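
A partial workaround sketch, based only on the nens argument visible in dolfyn's read signature elsewhere in these issues; it limits how many ensembles are read from the start of the file, so an arbitrary starting ensemble would still need subsetting afterwards:

import dolfyn as dlfn

dat = dlfn.read('my_adcp_file.000', nens=5000)  # read only the first 5000 ensembles (illustrative filename)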

edit suggested for assigning dat['range'] for AD2CP data

In nortek2.py:

data['range'] = (np.arange(data['vel'].shape[1]) *
                 data['config']['cell_size'] +
                 data['config']['blanking'])
if 'vel_b5' in data:
    data['range_b5'] = (np.arange(data['vel_b5'].shape[1]) *
                        data['config']['cell_size_b5'] +
                        data['config']['blanking_b5'])

But np.arange(data['vel'].shape[1]) yields an array whose first entry is 0.

So if cell_size = 1.0 m and blanking = 0.5 m, then dat.range[0] = 0 * cell_size + blanking = 0 + 0.5 = 0.5 m. Based on the N3015-025 Principles of Operation (Signature) document from Nortek, dat.range[0] should instead be the center of the first cell:

center of nth cell = blanking + n * cell_size = 0.5 m + (1) * 1.0 m = 1.5 m

Suggested edit to nortek2.py (note the parentheses so the +1 is applied before multiplying by the cell size):

data['range'] = ((np.arange(data['vel'].shape[1]) + 1) *
                 data['config']['cell_size'] +
                 data['config']['blanking'])
if 'vel_b5' in data:
    data['range_b5'] = ((np.arange(data['vel_b5'].shape[1]) + 1) *
                        data['config']['cell_size_b5'] +
                        data['config']['blanking_b5'])

Note the rdi.py reader assigns dat['range'] differently and is okay:

dat['range'] = (self.cfg['bin1_dist_m'] +
                np.arange(self.cfg['n_cells']) *
                self.cfg['cell_size_m'])
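
A quick numeric check of the suggested formula, using the example numbers above (cell_size = 1.0 m, blanking = 0.5 m):

import numpy as np

cell_size, blanking, n_cells = 1.0, 0.5, 5
rng = (np.arange(n_cells) + 1) * cell_size + blanking
print(rng)  # [1.5 2.5 3.5 4.5 5.5]; first cell center at 1.5 m, as expected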

Trouble joining two AD2CP datasets

Trying to merge two AD2CP files, one from the first part of the day (.102a.h5) and one from the latter part of the day (.102b.h5).

This merge was attempted using the following script; it works for the main groups (e.g., 'vel'), but not the subgroups (e.g., 'env', 'orient'). The method fails because of the if statement.

day_wrap0 = dolfyn.load(fname_sig + '.102a.h5')

. 16.80 hours (started: Jul 15, 2017 00:00)
. earth-frame
. (54 bins, 120993 pings @ 2Hz)

day_wrap = dolfyn.load(fname_sig + '.102b.h5')

. 7.20 hours (started: Jul 15, 2017 16:48)
. earth-frame
. (54 bins, 51806 pings @ 2Hz)

Now join the two split datasets (fill NaN between)

n_end = len(day_wrap.mpltime) # 51806
n_str = len(day_wrap0.mpltime) # 120993
outd = day_wrap.copy() # copy of latter part of the day, ...102b.h5
for ky in day_wrap.keys():  #day_wrap.keys()
                            #  dict_keys(['props', 'alt', 'altraw', 'config', 'env',
                            #  'mpltime', 'orient', 'range', 'range_b5', 'signal',
                            #  'sys', 'vel', 'vel_b5'])
    try:
        dend = day_wrap[ky] # dend is the contents of each key, length 51806, latter part of the day
        dstr = day_wrap0[ky] # dstr is the contents of each key, length 120993, beginning of day
    except AttributeError:  
        continue
    if (isinstance(dend, np.ndarray) and
            dend.shape[-1] == n_end):
            # isinstance is false if ky = env, because type(dend) is dolfyn.data.base.TimeData
            # dend.shape[-1] == n_end throws error with ky = env, TimeData' object has no attribute 'shape'
            # isinstance is true if ky = vel, because type(dend) is numpy.ndarray
            # dend.shape[-1] == n_end  is true if ky = vel

        shp_out = list(dstr.shape)
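        # NOTE: n_day is not defined in this snippet; presumably it is the
        # full-day output length (172800 samples here).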
        shp_out[-1] = n_day
        tmpd = np.empty(shp_out, dtype=dend.dtype)
        tmpd[..., :n_str] = dstr
        try:
            tmpd[..., n_str:-n_end] = np.NaN
        except ValueError:
            tmpd[..., n_str:-n_end] = -1
        tmpd[..., -n_end:] = dend
        outd[ky] = tmpd
outd.to_hdf5(fname_sig + '.102.h5')

This yields

. 24.00 hours (started: Jul 15, 2017 00:00)
. earth-frame
. (54 bins, 172800 pings @ 2Hz)
*------------
| mpltime : <array; (172800,); float64>
| range : <array; (54,); float64>
| range_b5 : <array; (54,); float64>
| vel : <array; (4, 54, 172800); float64>
| vel_b5 : <array; (1, 54, 172800); float32>

But the subgroups have not been merged (e.g. dat.env)
<class 'dolfyn.data.base.TimeData'>: Data Object with Keys:
*------------
| c_sound : <array; (51806,); float32>
| press : <array; (51806,); float32>
| temp : <array; (51806,); float32>

I attempted to use the append function from
https://lkilcher.github.io/dolfyn/api.html?highlight=append#dolfyn.Velocity.append
append(other)

Join two data objects together.
For example, two data objects d1 and d2 (which must contain the same variables, with the same array dimensions) can be joined together by:

dat = d1.append(d2)

dat102 = dat1.append(dat2)

But nothing happens. If I right-click on append in Visual Studio Code, no function is found.

Handle declination rotations in ADPs.

We should automatically apply declination rotations for ADP data.

This will take some care because RDI data files have the config['magnetic_var_deg'] and config['xducer_misalign_deg'] variables that need to be handled somehow.
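
For reference, a hedged sketch of what the core operation amounts to: rotating the horizontal (east, north) velocity components by the declination angle. The sign convention depends on how declination is defined for each instrument (e.g., RDI's magnetic_var_deg), so it would need checking before use:

import numpy as np

def rotate_horizontal_by_declination(vel_enu, declin_deg):
    # Rotate the (east, north) components of a (3, ...) velocity array.
    ang = np.deg2rad(declin_deg)
    rot = np.array([[np.cos(ang), -np.sin(ang)],
                    [np.sin(ang),  np.cos(ang)]])
    out = np.array(vel_enu, dtype=float)
    out[:2] = np.einsum('ij,j...->i...', rot, out[:2])
    return out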

Why do '7f79' bytes appear in an RDI file where '7f7f' is expected?

A file that I received from some collaborators has a different header start byte than what is indicated in the RDI documentation. I still do not understand why this is, but branch debug-7f79 resolves the issue for the file they supplied. For now I'm going to leave this fix in that branch until I understand the source of the discrepancy.

Handle external heading input

Are RDI instruments the only ones that can take external heading data (e.g., GPS NMEA strings)? If so, we will need to take care in how we handle those data. In particular, I think we need to:

  • check on whether that data is being used for converting to earth coords
  • properly handle config['xducer_misalign_deg']
  • Other?

error: unpack requires a string argument of length 2 when reading adcp file

Dear Levi,

After correcting the problem with the nanmean import, we get the following error when trying to read an ADCP binary file:

In [23]: data = apm.read_rdi('SMADCP-FEM-Brehat-1.000')

error Traceback (most recent call last)
in ()
----> 1 data = apm.read_rdi('SMADCP-FEM-Brehat-1.000')

C:\Données brutes\dolfyn\dolfyn\adp\api.py in read_rdi(fname, nens)
     14
     15 def read_rdi(fname, nens=None):
---> 16     with adcp_loader(fname) as ldr:
     17         dat = ldr.load_data(nens=nens)
     18     return dat

C:\Données brutes\dolfyn\dolfyn\adp\_readbin.py in __init__(self, fname, navg, avg_func)
    597         #self.f=io.npfile(fname,'r','l')
    598         self.f = bin_reader(fname)
--> 599         self.read_hdr()
    600         self.read_cfg()
    601         # Seek back to the beginning of the file:

C:\Données brutes\dolfyn\dolfyn\adp\_readbin.py in read_hdr(self)
    480         cfgid = list(fd.read_ui8(2))
    481         nread = 0
--> 482         while (cfgid[0] != 127 | cfgid[1] != 127) | (not self.checkheader()):
    483             nextbyte = fd.read_ui8(1)
    484             pos = fd.tell()

C:\Données brutes\dolfyn\dolfyn\adp\_readbin.py in checkheader(self)
    770         if numbytes > 0:
    771             fd.seek(numbytes - 2, 1)
--> 772             cfgid = fd.read_ui8(2)
    773             #### sloppy code:
    774             if len(cfgid) == 2:

C:\Données brutes\dolfyn\dolfyn\adp\_read_bin.py in read_ui8(self, n)
    164
    165     def read_ui8(self, n):
--> 166         return self.read(n, 'B')
    167
    168     def read_float(self, n):

C:\Données brutes\dolfyn\dolfyn\adp\_read_bin.py in read(self, n, frmt)
    161             return unpack(self.endian + frmt * n, val)[0]
    162         else:
--> 163             return np.array(unpack(self.endian + frmt * n, val))
    164
    165     def read_ui8(self, n):

error: unpack requires a string argument of length 2

Do you have any idea? We can send you a data file to test the code.

Best regards,

Rui Duarte

loading hdf5 file

I'm using Python 2.7 and I have trouble loading an HDF5 file. I get the following error:

File "C:\MT\CODES\PYTHON\dolfyn\dolfyn\io\hdf5.py", line 624, in iter_groups if grp.class is h5.highlevel.Group:

AttributeError: 'module' object has no attribute 'highlevel'

I tried with different versions of Dolfyn but the problem still remains.
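
A sketch of the likely fix, on the assumption that the error comes from newer h5py versions having removed the highlevel submodule (h5py.Group is available directly):

import h5py as h5

def is_h5_group(obj):
    # Replacement for the old `h5.highlevel.Group` class check.
    return isinstance(obj, h5.Group)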

Beam Coordinates

Currently, when data is in beam coordinates, only the variables in dat.props['rotate_vars'] that start with 'vel' are rotated to beam coordinates. This is partly because it is how most manufacturers handle data internal to their instruments, but also because rotating a 3-D vector into a 4-beam or 5-beam coordinate system poses mathematical challenges. This issue is mostly a placeholder for thinking about whether this should be changed (so that all data is in 'beam' coordinates) or left alone (I'm leaning toward the latter).

Issue with read_nortek function

Hello lkilcher,
I was planning to develop a Python script to read .vec files when I found your library: very useful! Many thanks for developing it!
I began to use your read_nortek function and it works well on the majority of my .vec files, but on some others it throws this error:
[error screenshot]
I saw a similar issue was raised in the past and partially solved : #3
Unfortunately I can't provide you with the .vec files that are raising the error, but I can tell you that the data were generated in continuous mode.
Thank you for your time,
Tkideau

Detrend vs. demean?

In the process of reviewing #67, I've realized I use demean and detrend rather inconsistently (e.g., in calc_acov, calc_xcov vs. calc_tke and calc_stress) and without a clear rationale? It seems like we should clarify this a bit, and/or make it more consistent?
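
For reference, a minimal illustration of the distinction (not dolfyn's implementations): demeaning removes only the mean, while detrending also removes a linear trend:

import numpy as np

def demean(x):
    return x - np.mean(x)

def detrend(x):
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)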
