
cassava's Introduction


cassava: A CSV parsing and encoding library

Please refer to the package description for an overview of cassava.

Usage example

Here's the two-second crash course in using the library. Given a CSV file with this content:

John Doe,50000
Jane Doe,60000

here's how you'd process it record-by-record:

{-# LANGUAGE ScopedTypeVariables #-}

import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V

main :: IO ()
main = do
    csvData <- BL.readFile "salaries.csv"
    case decode NoHeader csvData of
        Left err -> putStrLn err
        Right v -> V.forM_ v $ \ (name, salary :: Int) ->
            putStrLn $ name ++ " earns " ++ show salary ++ " dollars"

If you want to parse a file that includes a header, like this one:

name,salary
John Doe,50000
Jane Doe,60000

use decodeByName:

{-# LANGUAGE OverloadedStrings #-}

import Control.Applicative
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V

data Person = Person
    { name   :: !String
    , salary :: !Int
    }

instance FromNamedRecord Person where
    parseNamedRecord r = Person <$> r .: "name" <*> r .: "salary"

main :: IO ()
main = do
    csvData <- BL.readFile "salaries.csv"
    case decodeByName csvData of
        Left err -> putStrLn err
        Right (_, v) -> V.forM_ v $ \ p ->
            putStrLn $ name p ++ " earns " ++ show (salary p) ++ " dollars"

You can find more code examples in the examples/ folder as well as smaller usage examples in the Data.Csv module documentation.

Project Goals for cassava

There's no end to what people consider CSV data. Most programs don't follow RFC 4180, so one has to make a judgment call about which contributions to accept. Consequently, not everything gets accepted; otherwise we'd end up with a (slow) general-purpose parsing library, and there are plenty of those already. The goal is to accept roughly what the Python csv module accepts.

The Python csv module (which is implemented in C) is also considered the baseline for performance. Adding options (e.g. the parsing "flexibility" mentioned above) has to be traded off against performance. There have been complaints about performance in the past; therefore, if in doubt, performance wins over features.

Last but not least, it's important to keep the dependency footprint light, as each additional dependency incurs costs and risks in terms of maintenance overhead and loss of flexibility. A new package dependency should only be added if it is known to be reliable and there's a clear benefit that outweighs the cost.

Further reading

The primary API documentation for cassava is its Haddock documentation, which can be found at http://hackage.haskell.org/package/cassava/docs/Data-Csv.html

Additional recommended third-party blog posts and tutorials are listed below.

cassava's People

Contributors

23skidoo, andreasabel, basvandijk, bgamari, bos, fhartwig, fumieval, hvr, ivan-m, jcbelanger, jcristovao, luke-clifton, lukerandall, meiersi, mkscrg, nh2, nkpart, phadej, pjones, psibi, quasicomputational, rcook, ryanglscott, shimuuar, sjoerdvisscher, solidsnack, tibbe, timbod7, tomferon, uu1101


cassava's Issues

Encoding API is difficult to use

I've been trying to use cassava for what seem like some simple purposes, and have so far gotten nowhere.

My use case is inside criterion, where I want to write out the header of a CSV file in one part of the program, and then (somewhere not closely related) write out individual lines/records to the file.

From what I can figure out, to support this I need to write a ToNamedRecord instance, then create a Header value, then call encodeByNameWith with two custom EncodeOptions values: one that makes sure a header is written out (for my first use case), and another that does not (for my second).

It honestly took me a couple of hours of banging my head against the API docs to even figure this out, because it's not easy to see how the various types and functions interrelate. The code I think I would need to write to actually do all of this feels like a lot of work, so I haven't switched over. (A sketch of that code appears after the list below.)

What I really hoped I was signing up for was a much more lightweight API:

  • write a Generic instance for my type, then have the boilerplate ToNamedRecord instance generated for me with one line of code: instance ToNamedRecord MyType
  • an encodeHeader function
  • an encodeRow function
  • variants of encodeHeader and encodeRow that could accept a Header spec to control ordering, but also variants that don't need this repetition if you don't care
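For reference, later cassava versions expose an encIncludeHeader flag on EncodeOptions that makes the two-EncodeOptions workaround described above fairly short. A minimal sketch, assuming such a version and a hypothetical Row type:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Lazy as BL
import Data.Csv

-- Hypothetical row type used only for illustration.
data Row = Row { rowName :: String, rowSalary :: Int }

instance ToNamedRecord Row where
    toNamedRecord (Row n s) = namedRecord ["name" .= n, "salary" .= s]

hdr :: Header
hdr = header ["name", "salary"]

-- Emit just the header line (an empty record list still writes the header).
headerOnly :: BL.ByteString
headerOnly = encodeByNameWith
    defaultEncodeOptions { encIncludeHeader = True } hdr ([] :: [Row])

-- Later, emit records without repeating the header.
rowsOnly :: [Row] -> BL.ByteString
rowsOnly = encodeByNameWith
    defaultEncodeOptions { encIncludeHeader = False } hdr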

GHC-7.10.1 compatibility

Hi,

if Travis is not misleading me, then cassava has no version compatible with GHC 7.10.1 (which ships ghc-prim-0.4), due to the bound ghc-prim (>=0.2 && <0.4).

Thanks,
Joachim

Custom parser

There is no way (that I know of, at least) to pass a custom parser function in place of the one coming from the FromRecord or FromNamedRecord instance, even though this would be really useful when you need to parameterise the parser, e.g. using column names given by the user.
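One workaround that fits the existing API is to decode to raw Records and run a caller-supplied parser over each one afterwards. A minimal sketch; decodeWithParser is an illustrative name, not part of cassava:

import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V

-- Decode to raw Records, then apply a custom Record -> Parser a function.
decodeWithParser :: (Record -> Parser a) -> BL.ByteString
                 -> Either String (V.Vector a)
decodeWithParser p bs = do
    raw <- decode NoHeader bs :: Either String (V.Vector Record)
    runParser (traverse p raw)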

Fails to parse data with custom delimiter

I tried to use a custom delimiter but cassava fails to parse the data. Here is a test case:

{-# LANGUAGE OverloadedStrings #-}
import Data.Csv

import qualified Data.ByteString.Lazy as BS
import Data.Vector (Vector)
import Data.Word (Word8)

txt :: BS.ByteString
txt = "first second third"

n :: Word8
n = 32 -- ASCII code for space

notOK :: Either String (Vector (String, String, String))
notOK = decodeWith (defaultDecodeOptions { decDelimiter = n }) NoHeader txt

The result of decoding is:

Left "conversion error: cannot unpack array of length 1 into a 3-tuple. Input record: fromList [\"first second third\"]"

Make encode run in constant space

The encode functions could run in constant space (i.e. be lazy) if the input were a list instead of a vector. See e.g. #41 for an example of when this would be useful.

String types confusion

I need to parse a CSV file and use the data to interact with SPARQL. Now I find myself in a situation where I constantly have to work with Strings, ByteStrings, Texts, [Char]s, and yet more string types.

I tried turning on the OverloadedStrings GHC extension, but it generates a compile error.

Could cassava be modified to use plain Strings ([Char]) for ease of use?

Make the error message type a type alias

Right now we use String for errors. We should at least abstract this to:

type DecodeError = String
type EncodeError = String

to provide minimal implementation hiding so we can eventually move over to an ADT for representing errors.

Add support for space delimited data

It would be nice to add support for space-delimited data. It appears quite frequently in the context of data analysis and requires only minimal changes to the parser/encoder. I've started working on it.

What do you think about this idea? And how would you prefer the code to be structured? Should the new functions be placed in separate modules?

Add Either instance for FromField

We could take

instance FromField a => FromField (Either ByteString a) where

to mean: try to parse an a and, if that fails, return Left field.
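A minimal sketch of that logic, using a hypothetical OrRaw newtype so the example stands alone:

import Control.Applicative ((<|>))
import Data.ByteString (ByteString)
import Data.Csv

-- Hypothetical wrapper; the proposal is the same logic on Either directly.
newtype OrRaw a = OrRaw (Either ByteString a)

instance FromField a => FromField (OrRaw a) where
    -- Try the parser for a; on failure, keep the raw field as Left.
    parseField f = OrRaw <$> ((Right <$> parseField f) <|> pure (Left f))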

Difficult to implement ISO 8601 dates with current design

I am trying to output ISO 8601 dates surrounded by double quotes. This seems to be very difficult or impossible. What I want is this output:

followers,account_id,created
7346950,13460080,"2014-10-29T21:38:07.476015Z"

I tried this:

import Data.Csv as CSV
import Data.String (fromString)
import Data.Time (UTCTime, defaultTimeLocale, formatTime)

newtype IsoDate = IsoDate { unIsoDate :: UTCTime } deriving (Show, Eq, Ord)

instance CSV.ToField IsoDate where
  toField (IsoDate tm) =
      fromString $ "\"" ++ formatTime defaultTimeLocale "%FT%T%QZ" tm ++ "\""

But that gives me triple double-quotes around the date. Is there any way to specify that a field must always be surrounded by quotes?
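For what it's worth, later cassava versions expose a Quoting field on EncodeOptions; setting it to QuoteAll forces quotes around every field, which avoids embedding quote characters in toField. A minimal sketch, assuming such a version:

import Data.Csv

-- Quote every field instead of only those that need it.
opts :: EncodeOptions
opts = defaultEncodeOptions { encQuoting = QuoteAll }

-- Use with encodeWith or encodeByNameWith, e.g.: encodeWith opts rows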

Option to specify location of error

If I am working on a csv file with many columns, and one produces an error while parsing with decodeByName (say it should be an Int, but is empty), the error message I get is:

parse error (Failed reading: conversion error: expected Int, got "" (not enough input)) at
field1Data,field2Data,...,,,...

Is there an option to indicate the column name or index (and possibly line number) where the error occurred (i.e., 'error occurred in the 4th field' or 'error occurred in the salary field on line 445')?

I see an example in the docs of defining a custom type and giving it a FromField instance with a custom parseField definition. While this allows adding information to the error message, I don't see a way to output information about the location of the error.

Is there some option to specify the location of the error, or is it recommended to parse everything as a string and do the conversions and checks manually afterwards?

Do escaping outside toField

This came up in #14.

Currently CSV escaping is done by the ToField instances. This limits the kind of escaping we can do and puts the onus on the user to do it correctly. We should instead do it outside of toField. One way to do this without losing too much performance is to walk over the string and check whether it needs escaping (which is often not the case), and only then make a second pass over it to do the escaping.
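A minimal sketch of that two-pass idea on plain ByteStrings (illustrative only; the real encoder would work on builders):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as B8

-- First pass: check whether escaping is needed at all (often it is not).
-- Second pass: quote the field and double any embedded quotes.
escapeField :: B.ByteString -> B.ByteString
escapeField s
    | B8.any (`elem` (",\"\n\r" :: String)) s =
        B.concat ["\"", B8.concatMap escape s, "\""]
    | otherwise = s
  where
    escape '"' = "\"\""
    escape c   = B8.singleton c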

Avoid intermediate bytestring construction when encoding

Currently, lots of intermediate bytestrings are constructed during encoding. This is unnecessary and could be avoided by adding a toFieldBuilder method to the ToRecord type class. I expect at least a factor-of-two speedup in encoding from this change.

Work around increase in space usage due to attoparsec-0.10.4.0

attoparsec-0.10.4.0 fixed a backtracking issue in <|> so that it now always backtracks correctly. However, this increased space usage by about 3.5x. Since we don't really need backtracking, we could try to work around this by using a non-backtracking version of <|>.

Export `encodeRecord`

Right now the only way to encode csv files is to build a Vector Record, which means that my input has to be fully loaded into memory before I can begin encoding. However, I would like to be able to generate Records on the fly (i.e. using pipes) and stream them to a file without going through an intermediate Vector Record representation.

I was studying the source code and I see that cassava uses encodeRecord internally, which is basically the function I would like to use. I could obviously copy and paste its functionality, but I wanted to first try to ask if you could re-export it instead.
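For reference, later cassava releases added an incremental encoding API in Data.Csv.Incremental whose Builder is a Monoid, so records can be encoded one at a time without an intermediate Vector. A minimal sketch, assuming such a version:

import qualified Data.ByteString.Lazy as BL
import qualified Data.Csv.Incremental as I

rows :: [(String, Int)]
rows = [("John Doe", 50000), ("Jane Doe", 60000)]

-- Each record is encoded on its own; the results are combined monoidally.
encoded :: BL.ByteString
encoded = I.encode (foldMap I.encodeRecord rows)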

Add a newline, get Partial instead of Some

I have an encountered an infelicity in the incremental parser, where input
that could be returned is delayed unnecessarily. Please find below a GHCi
session demonstrating the problem.

Prelude> import Data.ByteString.Char8 (ByteString)
Prelude Data.ByteString.Char8> :set -XOverloadedStrings
Prelude Data.ByteString.Char8> import qualified Data.Csv.Incremental as Cassava
Prelude Data.ByteString.Char8 Cassava> let Cassava.Partial f = Cassava.decode False :: Cassava.Parser [ByteString]
f :: ByteString -> Cassava.Parser [ByteString]
Prelude Data.ByteString.Char8 Cassava> f "a"
Partial <function>
it :: Cassava.Parser [ByteString]
Prelude Data.ByteString.Char8 Cassava> f "a\nb"
Some [Right ["a"]] <function>
it :: Cassava.Parser [ByteString]
Prelude Data.ByteString.Char8 Cassava> f "a\nb\n"
Partial <function>
it :: Cassava.Parser [ByteString]
Prelude Data.ByteString.Char8 Cassava> f "a\nb\nc"
Some [Right ["a"],Right ["b"]] <function>
it :: Cassava.Parser [ByteString]

As you can see, when a\nb\n is presented to the parser, it retains the entire input, even though it could already have returned the parse of a.

In my application, I/O is interleaved with parsing on a fixed schedule, with records flushed to a database at each tick. Cassava is used to group them, so that only whole tuples are wrapped with COPY ... before being sent to Postgres. Thus there are a few large transactions per second instead of thousands and thousands (which is not really possible with Postgres on the hardware that I have).

In testing, this problem seems to affect at most 20 rows at a time.

Allow other number formats

German Excel editions export numbers with ',' instead of '.' as the decimal separator. It would be good to expose encoding and decoding options to support this.
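Until such options exist, one workaround is a newtype that rewrites the field before reusing the stock parser. A minimal sketch; GermanDouble is an illustrative name, and it assumes the file's delimiter is ';' (as in German CSV exports) so the comma survives inside the field:

import qualified Data.ByteString.Char8 as B8
import Data.Csv

-- Replace the decimal comma with a dot, then reuse the Double parser.
newtype GermanDouble = GermanDouble Double
    deriving Show

instance FromField GermanDouble where
    parseField f = GermanDouble <$> parseField (B8.map fixComma f)
      where
        fixComma ',' = '.'
        fixComma c   = c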

Release new cassava with the bumped text version bound patch.

I'm trying to install something that depends on the latest version of Criterion using Haskell Platform 2013.2.0.0. Criterion needs Cassava and that causes conflicts with Text. It's difficult to untangle the root cause because the package also depends on a lot of other things.

Commit c3871d3 bumps the version of text supported which I think would solve my problem. Can you release a new version of Cassava and upload to Hackage?

chokes on crappy CSV file

parse error (Failed reading: takeWith) at \"\\\"
...
NUL\\n\\NUL\\n\"")

I can't show the contents of the file here, so I edited it out with ...
Relevant information:

  • the given file is identified by file -i as UTF-16LE text
  • the identification comes from the BOM
  • a Unix command like cat displays the file in a terminal as text,
  • but vim and less display it as binary

I looked through the cassava source for something using takeWith but I couldn't find anything.

If I convert the file to UTF-8 with iconv, then it works fine.

Superlinear runtime performance as file size increases

I'm using the same code from this issue to load a CSV file. It has 100,000 lines, and each line has 384 real-valued entries. I let it run for several minutes, but the file wouldn't load. I used the head command to create smaller files, and I measured the following load times:

number of lines    run time (s)
5000                5.9
10000              12.4
20000              28.4
30000              49.7
40000              78.9

The run time is clearly superlinear, but I can't imagine a good reason for this. For reference, one line of my file looks like:

0.020186,0.030765,0.023419,0.019486,0.02827,0.038371,0.029893,0.025991,0.0074942,0.02288,0.019465,0.010611,0.034241,0.024845,0.035119,0.032946,0.013862,0.037618,0.018302,0.018738,0.030751,0.042402,0.04282,0.017354,0.0091786,0.046128,0.038533,0.015432,0.016483,0.02541,0.022601,0.014827,0.014151,0.06166,0.018451,0.025642,0.05031,0.058812,0.059761,0.037432,0.020897,0.10172,0.052057,0.022016,0.010253,0.023444,0.031117,0.015782,0.033845,0.13556,0.02185,0.021951,0.065047,0.11053,0.073719,0.050383,0.02365,0.13043,0.078436,0.034575,0.021022,0.08649,0.037808,0.015946,0.043717,0.10921,0.029096,0.036696,0.082003,0.144,0.092833,0.032662,0.027958,0.10479,0.067934,0.036611,0.03658,0.13744,0.021706,0.018149,0.0136,0.05164,0.037206,0.028305,0.057749,0.10325,0.080313,0.023166,0.026017,0.072317,0.049104,0.016417,0.016266,0.059149,0.028749,0.020952,0.011716,0.048961,0.018227,0.008496,0.050257,0.066238,0.0427,0.026753,0.011746,0.04943,0.052419,0.015122,0.0082256,0.024543,0.01416,0.013864,0.01473,0.033893,0.011576,0.010114,0.04645,0.044882,0.034404,0.022275,0.0086444,0.031865,0.035361,0.0082695,0.015215,0.021564,0.017555,0.016153,0.032971,0.037051,0.025589,0.025971,0.053622,0.059035,0.033765,0.017958,0.014136,0.030258,0.037187,0.018862,0.031648,0.032008,0.039469,0.033288,0.02414,0.057455,0.026307,0.019248,0.052276,0.056139,0.061597,0.025764,0.025077,0.054444,0.060394,0.020114,0.018324,0.035207,0.030006,0.015793,0.029668,0.090496,0.035148,0.038085,0.080044,0.086811,0.053725,0.043839,0.054274,0.14869,0.087212,0.034474,0.015743,0.048747,0.056401,0.03313,0.05832,0.14389,0.025803,0.018283,0.060324,0.071262,0.052371,0.053344,0.045831,0.18508,0.081422,0.064838,0.028842,0.14689,0.05218,0.02751,0.066949,0.12552,0.03373,0.019215,0.076767,0.16337,0.038165,0.035298,0.043724,0.10422,0.079077,0.063995,0.042224,0.15329,0.048346,0.020787,0.025633,0.1043,0.030611,0.017639,0.063376,0.11098,0.037439,0.045476,0.035578,0.094429,0.093345,0.02321,0.033182,0.11448,0.067127,0.031345,0.030729,0.070066,0.029163,0.012423,0.080807,0.10122,0.052669,0.049554,0.022146,0.071648,0.095515,0.032544,0.016164,0.033165,0.028797,0.01944,0.028712,0.040516,0.014249,0.011697,0.082792,0.085769,0.037796,0.026305,0.018755,0.042912,0.041147,0.013259,0.011909,0.02591,0.022033,0.013639,0.033032,0.034228,0.010746,0.016569,0.037463,0.035254,0.030109,0.010959,0.011868,0.026652,0.040978,0.017336,0.016091,0.013864,0.012727,0.013226,0.035753,0.055229,0.027363,0.011357,0.054589,0.071837,0.049681,0.03029,0.037896,0.04479,0.03738,0.029975,0.012412,0.016699,0.014122,0.011251,0.04386,0.05177,0.024893,0.019809,0.060459,0.077069,0.03018,0.012932,0.055678,0.086074,0.050414,0.025238,0.029773,0.050626,0.031153,0.014706,0.021806,0.029021,0.037457,0.026738,0.041171,0.066327,0.063226,0.038589,0.074539,0.13095,0.092657,0.049758,0.037652,0.078363,0.059148,0.032257,0.055406,0.093416,0.02604,0.02561,0.072503,0.096976,0.035778,0.034728,0.038493,0.084846,0.052988,0.022634,0.029038,0.10766,0.060172,0.013635,0.063798,0.10924,0.031248,0.018171,0.056458,0.095396,0.04405,0.013797,0.031402,0.069971,0.051015,0.014692,0.047649,0.096856,0.040146,0.018149,0.026696,0.035679,0.023215,0.01877,0.051515,0.071287,0.048293,0.035933,0.016274,0.030674,0.047847,0.034723,0.019065,0.020965,0.025284,0.023059,0.022703,0.028759,0.0080757,0.0045987,0.04691,0.042316,0.025127,0.013491,0.017832,0.015271,0.032227,0.014164,0.0056829,0.014378,0.018159,0.011127

Variant of doublequote

I work with a CSV file in which the double-quote character '"' is escaped with a backslash '\' instead of being doubled.
Would it be possible to introduce a parameter that specifies this escaping syntax,
like the Dialect.doublequote parameter in Python's csv module?

Decoding using misc character encodings

I'd like to parse a CSV file using a preconfigured character encoding. Is there a recommended way of doing this?

Since e.g. the FromField Text instance assumes UTF-8, the only way I can think of is decoding into ByteString and using the encoding package as a post-processing step.
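A minimal sketch of that approach, assuming Latin-1 input and using decodeLatin1 from the text package in place of the encoding package:

import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Text as T
import Data.Text.Encoding (decodeLatin1)
import qualified Data.Vector as V

-- Decode fields as raw bytes, then convert each field to Text afterwards.
decodeLatin1Csv :: BL.ByteString
                -> Either String (V.Vector (V.Vector T.Text))
decodeLatin1Csv bs = V.map (V.map decodeLatin1) <$> decode NoHeader bs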

Add FromField instance for Maybe

It could be useful to have a FromField instance for Maybe that returns Nothing if the string is empty. It might also be worth adding an instance for Either, and perhaps doing the same for FromRecord and FromNamedRecord.
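A minimal sketch of the proposed Maybe behaviour, wrapped in a hypothetical OptField newtype so the example stands alone:

import qualified Data.ByteString as B
import Data.Csv

-- Hypothetical wrapper illustrating the proposed semantics.
newtype OptField a = OptField (Maybe a)

instance FromField a => FromField (OptField a) where
    parseField s
        | B.null s  = pure (OptField Nothing)
        | otherwise = OptField . Just <$> parseField s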

Allow unescaped characters or strings (e.g. double quotes) in ToField

I would like to make a CSV containing a field with hexadecimal digits. This field should always be treated as text by Microsoft Excel, but Excel converts number-like text into numbers by default (e.g. 55e2 is converted to 5500). There is a workaround using double quotes ("), but double quotes are escaped by Data.Csv.encode.

Is there any way to prevent or control escaping during encoding?

I think this feature request is much like the one for #68.

Provide a type class-less encoding and decoding based API

Noah Daniels writes:

I have the following, which assumes that each record will be of length 25:

instance FromRecord Barcode where
    parseRecord v
        | V.length v == 25 = Barcode <$>
                             v .! 0  <*>
                             T.traverse parseField (V.tail v)
        | otherwise        = mzero

I'd like to parameterize this length, so that it can be run on datasets that might have barcodes of different lengths. But, I want to make sure that for a given data set, all barcodes are the same length. One way to do this is to infer that length from the first record in the input file; the other way is to require it as a parameter. I'm ok with either option. In some ways, I prefer requiring it as a parameter, as it keeps the biologists honest :)

I realize this gets into dependent types, which Haskell doesn't really support. Other than the obvious and awful solution of post-processing all the records and yelling at the user if he or she gives bad input, is there any nice way to make parseRecord take the record length as a parameter? Either compile-time or run-time would be acceptable.

I think we could support this by adding a different decode function

decodeUsing :: (Record -> Parser a) -> HasHeader -> ByteString -> Either String (Vector a)

You could then pass in an arbitrary parsing function, including a parameterized one. Here's one possible such function, assuming that you already have a FromRecord instance for a and just want to do some sanity checking:

decodeBarcode :: Int -> Record -> Parser a
decodeBarcode n v
    | V.length v - 1 /= n = mzero
    | otherwise           = parseRecord v

You'd use it like so:

test = decodeUsing (decodeBarcode someLength) ...

Field parsers do not check for end of input

In numeric fields this could lead to silent truncation:

>>> import Data.Csv
>>> import Data.Vector (Vector)
>>> decode NoHeader  "12.7" :: Either String (Vector (Only Int))
Right (fromList [Only {fromOnly = 12}])

Or worse:

>>> decode NoHeader  "1e6" :: Either String (Vector (Only Int))
Right (fromList [Only {fromOnly = 1}])

Also, it would be a good idea to allow trailing (and probably leading) whitespace, since it is frequently added for readability in hand-written CSV.
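Until the built-in parsers are stricter, a newtype can reject trailing input by requiring the whole field to parse. A minimal sketch; StrictInt is an illustrative name:

import qualified Data.ByteString.Char8 as B8
import Data.Csv
import Text.Read (readMaybe)

-- readMaybe fails unless the entire string is a valid Int,
-- so "12.7" and "1e6" are rejected instead of silently truncated.
newtype StrictInt = StrictInt Int
    deriving Show

instance FromField StrictInt where
    parseField f = case readMaybe (B8.unpack f) of
        Just n  -> pure (StrictInt n)
        Nothing -> fail ("not a valid Int: " ++ show f)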

String confusion

I can't get cassava to work at all due to the complexity of different string types. Please consider using the traditional String type instead.

No instance for (FromRecord ByteString.ByteString)
  arising from a use of `decode'
Possible fix:
  add an instance declaration for (FromRecord ByteString.ByteString)
In the expression: decode csvData

allow index-based access while retaining information about the header

I've hit this problem a few times now while writing generic CSV manipulation code: I sometimes want to parse something where I don't know the headers up-front, but they are relevant to the result. For instance, I'm currently writing a CSV merge tool that looks at a list of CSV files with differing headers and creates a single CSV file consisting of the superset of the headers, with a default value for missing values.

decode :: FromRecord a
       => HasHeader
       -> ByteString
       -> Either String (Vector a)

would work fine, except I don't get access to the header without reparsing with decodeByName, which seems wasteful.

On the other hand, if I use decodeByName directly, I end up with this abomination:

newtype SemrushRow = SemrushRow {
  unSemRushRow :: Csv.Header -> [(String,Text)]
}

instance FromNamedRecord (SemrushRow) where
   parseNamedRecord m = return . SemrushRow $ \header ->
     zip (map BS.unpack $ V.toList header)
         (V.toList $ V.map (decodeUtf8 . fromJust . (`HM.lookup` m)) header)

which is dreadful in more ways than I can count.
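A lighter alternative, assuming cassava's FromNamedRecord instance for HashMap: decode each row to a map of raw fields and keep the Header around for column order. A sketch:

import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.HashMap.Strict as HM
import qualified Data.Vector as V

-- decodeByName already returns the Header; rows come back as maps,
-- so both the column order and name-based access are available.
decodeKeepHeader
    :: BL.ByteString
    -> Either String (Header, V.Vector (HM.HashMap B.ByteString B.ByteString))
decodeKeepHeader = decodeByName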

Generalize `ToNamedRecord` instances?

Would it be possible to generalize the current ToNamedRecord instances from this:

ToField a => ToNamedRecord (Map ByteString a)
ToField a => ToNamedRecord (HashMap ByteString a)

to this?

(ToField a, ToField b) => ToNamedRecord (Map a b)
(ToField a, ToField b) => ToNamedRecord (HashMap a b)

This came up earlier today when I was trying to convert some dictionaries retrieved from JSON to CSV. The dictionaries had Text keys but the CSV required ByteString keys, so I had to do an explicit conversion using mapKeys.

Is this a potential correctness issue? My understanding is that mapKeys runs the risk of key collision and I'm not sure if conversion to UTF8 can result in key collision or not. Is it possible for two distinct Text values to UTF8-encode to the same ByteString?
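For reference, the explicit conversion is a one-liner; a minimal sketch:

import qualified Data.ByteString as B
import qualified Data.Map as M
import Data.Text (Text)
import Data.Text.Encoding (encodeUtf8)

-- Re-key a Text-keyed map to ByteString keys. UTF-8 encoding is injective
-- on Unicode text, so distinct Text keys cannot collide after conversion.
toByteStringKeys :: M.Map Text v -> M.Map B.ByteString v
toByteStringKeys = M.mapKeys encodeUtf8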

Switch to `IntX` encodings provided by the new bytestring builder

The decimal function in Data.Csv.Conversion.Internal seems to just format a number in decimal; I was not sure if this really is the case. However, if it is, then one could replace the calls to this function with the appropriate intXDec functions from the new bytestring builder. Note that the integerDec function is not the newest version; it will be fast once I port my newest patches to the darcs repo. Somehow, the state of my and Duncan's bytestring repositories diverged :-/

Make the Records type head strict

As we might create a long list of records before returning it to the user, we should force each record to WHNF to avoid extra memory usage.

Interleave parsing and type conversion

In Data.Csv.decode we first parse the whole CSV file into a Vector (Vector ByteString). This results in quite high memory usage even for medium-sized inputs, as the memory overhead for small Vectors and ByteStrings is pretty high.

Even though the semantics of the decode functions are to parse all the data before handing it to the user, we could still interleave parsing and type conversion in the implementation and thus keep only a small constant number of Vectors and ByteStrings in memory at any given time.

Space leak

I'm using the following code:

import Control.DeepSeq (($!!))
import qualified Data.ByteString.Lazy as BS
import Data.Csv (HasHeader (NoHeader), decode)
import qualified Data.Vector as V
import qualified Data.Vector.Unboxed as VU

type DP = VU.Vector Float

loaddata :: String -> IO (V.Vector DP)
loaddata filename = do
    bs <- BS.readFile filename
    rs <- case decode NoHeader bs of
        Right rs -> return rs
        Left str -> error $ "failed to parse CSV file " ++ filename ++ ": " ++ take 1000 str
    return $!! rs  -- force to normal form (rnf alone would discard the vector)

to open a very large CSV file. It has about 500,000 rows and 50 columns, with each entry being a real number. It takes up about 250 MB on disk, but this procedure uses over 3 GB of heap (more than my computer has) when loading the file and crashes. When I test on smaller files, I see that this same code uses extra heap while running but immediately frees it afterward, so that only the pinned vector remains.

Is this something that I'm doing wrong, or is this a bug in the library?
