gabbi's Introduction

Gabbi

Release Notes

Gabbi is a tool for running HTTP tests where requests and responses are represented in a declarative YAML-based form. The simplest test looks like this:

tests:
- name: A test
  GET: /api/resources/id

See the docs for more details on the many features and formats for setting request headers and bodies and evaluating responses.

Gabbi is tested with Python 3.7, 3.8, 3.9, 3.10, 3.11, 3.12 and pypy3.

Tests can be run using unittest-style test runners, pytest, or from the command line with the gabbi-run script.
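For instance, a unittest-style loader module along the lines of the one in gabbi's documentation might look like this (a sketch: the gabbits directory name and the stand-in WSGI app are placeholders for the application under test):

# test_gabbi.py -- a unittest-style loader (sketch; the 'gabbits' directory
# name and the stand-in WSGI app are placeholders, not part of this README)
import os

from gabbi import driver

TESTS_DIR = 'gabbits'


def make_app():
    """App factory handed to gabbi; returns the WSGI callable under test."""
    def app(environ, start_response):
        # Stand-in application; in real use, return your own WSGI app here.
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [b'{}']
    return app


def load_tests(loader, tests, pattern):
    """Provide a TestSuite to unittest discovery (pytest picks this up too)."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None, intercept=make_app)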

There is a gabbi-demo repository which provides a tutorial via its commit history. The demo builds a simple API using gabbi to facilitate test driven development.

Purpose

Gabbi works to bridge the gap between human readable YAML files that represent HTTP requests and expected responses and the obscured realm of Python-based, object-oriented unit tests in the style of the unittest module and its derivatives.

Each YAML file represents an ordered list of HTTP requests along with the expected responses. This allows a single file to represent a process in the API being tested. For example:

  • Create a resource.
  • Retrieve a resource.
  • Delete a resource.
  • Retrieve a resource again to confirm it is gone.

At the same time it is still possible to ask gabbi to run just one request. If it is in a sequence of tests, those tests prior to it in the YAML file will be run (in order). In any single process any test will only be run once. Concurrency is handled such that one file runs in one process.

These features mean that it is possible to create tests that are useful for both humans (as tools for improving and developing APIs) and automated CI systems.

Testing and Developing Gabbi

To get started, after cloning the repository, you should install the development dependencies:

$ pip install -r requirements-dev.txt

If you prefer to keep things isolated you can create a virtual environment:

$ virtualenv gabbi-venv
$ . gabbi-venv/bin/activate
$ pip install -r requirements-dev.txt

Gabbi is set up to be developed and tested using tox (installed via requirements-dev.txt). To run the built-in tests (the YAML files are in the directories gabbi/tests/gabbits_* and loaded by the file gabbi/test_*.py), you call tox:

tox -epep8,py37

If you have the dependencies installed (or a warmed up virtualenv) you can run the tests by hand and exit on the first failure:

python -m subunit.run discover -f gabbi | subunit2pyunit

Testing can be limited to individual modules by specifying them after the tox invocation:

tox -epep8,py37 -- test_driver test_handlers

If you wish to avoid running tests that connect to internet hosts, set GABBI_SKIP_NETWORK to True.

gabbi's People

Contributors

a-detiste, cdent, dhduvall, edwardbetts, elmiko, fnd, hayderimran7, jasonamyers, jd, joshleeb, justanotherdot, msabramo, pshchelo, pyup-bot, scottwallacesh, sileht, thematrix97, tomviner, trevormccasland, zaneb


gabbi's Issues

In 'live' testing scenarios argument passing to build_tests is convoluted and SSL may not work

If you want to use build_tests to create real TestCases against a live server, it's likely you know the URL, and that would be the most convenient thing to pass instead of having to parse out the host, port and prefix (script_name) and then pass those.

In addition, if you have a URL you know whether your server is using SSL, but the tests may not have been written to do SSL (with an ssl: true entry). Because of the test building process this is a bit awkward at the moment. It would be better to be able to say "yeah, this is SSL" for the whole run.
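For illustration, this is roughly the parsing a caller has to do by hand today when all they have is a URL (a sketch, not gabbi API; the names are invented):

from urllib.parse import urlsplit


def split_url(url):
    """Break a live-server URL into the pieces build_tests currently wants."""
    parts = urlsplit(url)
    use_ssl = parts.scheme == 'https'        # but the YAML may lack ssl: true
    host = parts.hostname
    port = parts.port or (443 if use_ssl else 80)
    prefix = parts.path.rstrip('/') or None  # the script_name / prefix
    return host, port, prefix, use_ssl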

consistent capitalization

it appears there's no canonical spelling of "gabbi" (or "Gabbi"?)

I realize that gabbi is the technical (package) name, but the documentation should refer to it as either gabbi or Gabbi, consistently

Array of JSON objects in response body causing ValueError in YAML $RESPONSE variable

ValueError: JSONPath '$.id' failed to match on data: '[{u'can_change_state': True, u'id': u'14998924', u'errors': []}]'

I've got a REST call that returns an array of objects which I can limit to a single item, but it still returns in its array form, as the snippet above shows. The corresponding YAML entry causing this is $RESPONSE['$.id'].

I also tried $RESPONSE[0]['$.id'] but I think the unicodification of the JSON is wreaking havoc.

Any clever thoughts for working with/around this?
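One way to investigate is to try candidate paths directly against the array-shaped body with jsonpath_rw (the library gabbi builds on); the indexed form below is an assumption about what might match, not a confirmed workaround:

from jsonpath_rw import parse

# the array-shaped body from the error above
data = [{'can_change_state': True, 'id': '14998924', 'errors': []}]

print([match.value for match in parse('$.id').find(data)])     # [] -- no match
print([match.value for match in parse('$[0].id').find(data)])  # ['14998924']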

Running granular test cases in a broader test suite of YAML files

Is there a way to override the build_test functionality and load tests on a more granular basis? Something similar to nosetests -m to modify the test regex pattern.

Better: the ability to request that only a single YAML file be run instead of all the YAML files in the various gabbits/ directories.
Best: the ability to regex on the tests: - name: string.

It's one of two remaining sticking points in my gabbi Proof of Concept.

for test_file in glob.iglob(yaml_file_glob):
    if intercept:
        host = str(uuid.uuid4())
    test_yaml = load_yaml(test_file)
    test_name = '%s_%s' % (test_loader_name,
                           os.path.splitext(
                               os.path.basename(test_file))[0])
    file_suite = test_suite_from_yaml(loader, test_name, test_yaml,
                                      path, host, port, fixture_module,
                                      intercept)
    top_suite.addTest(file_suite)
return top_suite
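A possible direction for file-level granularity, sketched under the assumption that the glob above could be narrowed before suites are built (the helper below is not gabbi API):

import fnmatch
import os


def matching_yaml_files(gabbits_dir, pattern='*.yaml'):
    """Yield only the YAML files whose names match the given pattern."""
    for name in sorted(os.listdir(gabbits_dir)):
        if fnmatch.fnmatch(name, pattern):
            yield os.path.join(gabbits_dir, name)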

need a way to inject a random name into things

One way to do this would be to have another replacement magic string, such as $UUID which gets a uuid.

However, would this be the same uuid throughout all replacements on the current request, or a different one for each replacement? If the latter, how do we refer to the name again (for example in the uri and in the data)? Presumably in those cases the test writer should just make something up?
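A minimal sketch of the replacer idea, covering both behaviours from the question (one uuid per message vs. one per occurrence); the function name and call site are assumptions:

import re
import uuid


def uuid_replace(message, per_occurrence=False):
    """Replace $UUID in a message with a generated uuid."""
    if per_occurrence:
        # a fresh uuid for every occurrence of $UUID
        return re.sub(r'\$UUID', lambda match: str(uuid.uuid4()), message)
    # the same uuid throughout this message
    return message.replace('$UUID', str(uuid.uuid4()))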

url requirement not being enforced

The docs say that the url key is required. However, if it is not there then '' is used, which becomes '/' through the tortured path.

We should bail early if url is not set to a non-'' value.

The check can go in driver.py near where 'name' is tested.
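The early bail-out could look roughly like this (a sketch; the exception type gabbi would actually raise is not decided here):

def check_url(test_dict):
    """Bail early when a test chunk has no usable url."""
    if not test_dict.get('url'):
        raise ValueError('test "%s" has no url set'
                         % test_dict.get('name', '<unnamed>'))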

evaluate if response_json_paths needs to do _replace_response_values

It might be useful to be able to do things like this:

- name: showAlarm
  desc: Shows information for a specified alarm.
  url: /v2/alarms/$RESPONSE['$.alarm_id']
  method: GET
  response_json_paths:
      $.severity: low
      $.alarm_id: $RESPONSE['$.alarm_id']
      $.threshold_rule.threshold: 300.0
      $.threshold_rule.comparison_operator: eq

Of course the only reason this came up is because this particular API makes poor use of the location header.

YAML defaults cannot be overridden by test specifics

Feature request? It would be lovely if a test-specific value could override the defaults' value.

Example:

defaults:
    request_headers:
        accept: application/json
        content-type: application/json
    response_headers:
        content-type: application/json; charset=utf-8

<-- snip -->

tests:
    - name: delete note test case
      verbose: True
      method: DELETE
      url: /v2/notes/$RESPONSE['$.id']
      request_headers:
          accept: text/javascript
          content-type: text/javascript; charset=utf-8
      response_headers:
          content-type: text/javascript; charset=utf-8
      status: 302 || 200

This causes "delete note test case" to return application/json.

Workaround: If I nix the defaults and specify content-type and accept header details for each test, all is well.
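The override semantics being asked for amount to a merge where the test's values win, going one level deep for the header dicts; a sketch (the helper name is invented):

def merge_with_defaults(defaults, test):
    """Merge suite defaults with one test; test-specific values win."""
    merged = dict(defaults)
    for key, value in test.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged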

$RESPONSE and response_json_paths fail if content-type isn't JSON

Probably a feature request ...

Right now, our API supports XML and JSON and we'd ideally love to just toggle between the two.

Because of the way gabbi does the response_strings, and by using $ENVIRON in the defaults: for content-type/accept, we can almost pull this off.

Two places where the fancy idea fails:

  1. Any use of $RESPONSE with an XML content-type response will result in:
----------------------------------------------------------------------
_StringException: Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 78, in wrapper
    func(self)
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 118, in test_request
    self._run_test()
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 291, in _run_test
    base_url = self.replace_template(test['url'])
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 132, in replace_template
    message = getattr(self, method)(message)
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 253, in _response_replace
    self._json_replacer, message)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/re.py", line 151, in sub
    return _compile(pattern, flags).sub(repl, string, count)
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 202, in _json_replacer
    return str(self.extract_json_path_value(self.prior.json_data, path))
AttributeError: 'test_env_loader_env_notes_add_environment_note' object has no attribute 'json_data'
  2. response_json_paths: can't be used and then ignored when XML or other non-JSON is detected. It would be lovely if this were possible: it'd be nice to get some "bonus" JSON tests when JSON is detected, and have them ignored with a warning otherwise.

malformed test chunk results in unfriendly error

if you create (note the missing - before name):

tests:
    name: foo
    url: /
    stauts: 404

You'll get a traceback:

  File "/Users/cdent/src/gabbi/gabbi/driver.py", line 120, in test_update
    for key, val in six.iteritems(new_dict):
  File "/Library/Python/2.7/site-packages/six.py", line 576, in iteritems
    return iter(d.iteritems(**kw))
AttributeError: 'str' object has no attribute 'iteritems'

We can do better than that.
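The friendlier check could be as small as verifying each chunk is a mapping before merging it (a sketch; the message and exception type are illustrative):

def check_test_chunk(test):
    """Reject test entries that are not mappings, with a useful hint."""
    if not isinstance(test, dict):
        raise ValueError(
            'test chunk is not a mapping (did you forget the leading "-"?): '
            '%r' % (test,))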

Make test generation pluggable

It ought to be possible to generate tests from something other than yaml files.

From IRC conversations with @elmiko one idea was:

[10:24pm] cdent: what about passing a generator class to build_tests that defaults to the yaml reader?
[10:24pm] elmiko: yea, exactly
[10:24pm] elmiko: that would make nice room for a possible plugin at some point

As discussed there this would likely require some adjustments to the flow of info through driver.py but the result would allow a lot of flexibility.

I reckon this should be post 1.0 because we don't want to be adding large features now.

Basically there would be a TestSuiteGenerator class with a subclass that generates from a directory of YAML files, behaving as the current code does. While it would be nice to preserve the build_tests signature, we don't have to if the new signature were sufficiently powerful.

Should a class be passed or an instantiated object or even a list of well-formed objects? As things stand now there is tight coupling between the gathering of the test data and the assembly of that data into tests cases. This could easily be two separate steps: Make the data, traverse it to make it into cases. The way it is now is simply the result of making the initial use case work.

(readers feel free to add comments with additional ideas)
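As a rough illustration of the shape being discussed (all names here are speculative, not existing gabbi API):

import abc
import glob
import os

import yaml


class TestSuiteGenerator(abc.ABC):
    """Turn some source of test descriptions into (name, data) pairs."""

    @abc.abstractmethod
    def load(self):
        """Yield (suite_name, test_data) pairs."""


class YamlFileGenerator(TestSuiteGenerator):
    """The current behaviour: read YAML files from a gabbits directory."""

    def __init__(self, test_dir):
        self.test_dir = test_dir

    def load(self):
        for test_file in glob.iglob(os.path.join(self.test_dir, '*.yaml')):
            name = os.path.splitext(os.path.basename(test_file))[0]
            with open(test_file) as handle:
                yield name, yaml.safe_load(handle)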

magical variables enforce single quotes

One of my tests contained the following line:

url: $RESPONSE["$._links.next"]

That refused to work. It took me quite a while to figure out that's because gabbi seems to only accept single quotes there, as this worked just fine:

url: $RESPONSE['$._links.next']

I would argue that users' expectation is that single and double quotes are equivalent.
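Accepting either quote style is mostly a matter of the pattern used to find the argument; a quote-agnostic sketch (not the regex gabbi actually uses):

import re

# match $RESPONSE['...'] or $RESPONSE["..."], requiring the quotes to pair up
RESPONSE_RE = re.compile(r"""\$RESPONSE\[(['"])(.+?)\1\]""")

print(RESPONSE_RE.search("url: $RESPONSE['$._links.next']").group(2))
print(RESPONSE_RE.search('url: $RESPONSE["$._links.next"]').group(2))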

output colorization (gabbi-run)

In order to make it easier/quicker to interpret results at a glance, it would be useful if gabbi-run optionally highlighted success vs. failure:

Ran 4 tests in 0.012s

FAILED (failures=3)

would appear in red while

Ran 5 tests in 0.482s

OK

would appear in green
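A minimal sketch of the idea using raw ANSI escapes (whether gabbi-run would do this itself or lean on a library, and whether it should first check for a TTY, is left open):

GREEN, RED, RESET = '\033[32m', '\033[31m', '\033[0m'


def colourize(summary_line):
    """Wrap a test-run summary line in green or red as appropriate."""
    if summary_line.startswith('OK'):
        return GREEN + summary_line + RESET
    if summary_line.startswith('FAILED'):
        return RED + summary_line + RESET
    return summary_line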

Using $ENVIRON to populate YAML booleans always evaluates True due to os.environ's string return

I had a "clever" idea (since I sometimes don't know what I can't do until hindsight hits). Thoughts on whether this is worth amending? It'd sure be a nice feature :) Would also enable env-specified replacers for the ssl, xfail and redirects booleans.

Verbose as "False" is evaluating to True because os.environ returns a string. RATS!

defaults:
    verbose: $ENVIRON['GABBI_VERBOSE']

and the fixture that tries to set it:

    def start_fixture(self):
        try:
            g_verbose = os.environ.get('GABBI_VERBOSE')
            print "GABBI_VERBOSE={0}".format(g_verbose)
        except KeyError:
            os.environ['GABBI_VERBOSE'] = True

which prints:
GABBI_VERBOSE=False

###########################
GET https://url.com:443/v2/configurations
content-type: application/json
accept: application/json
authorization: Basic Z2FiYmlfQTo4ODc5YmNmZGJkYjQyMTM2M2Y3OWE3MDFhYzMwYmFiN2JiZGJiNmMz

If I were to do a pull request to enable myself, I've pondered:

Option 1:
Modify the conditional in case.py's _run_test from

if test['verbose']:

to

if test['verbose'] and (isinstance(test['verbose'], bool) or test['verbose'].lower() == "true"):

Option 2:
Modify case.py's _environ_replacer to check for bools (and maybe integers? I don't have a good use case for that.) and re-cast the types before returning os.environ[environ_name].

Possibly both of these are horrifically naïve for reasons I haven't considered.
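Option 2 could look roughly like this: re-cast the strings os.environ hands back before substitution (the helper name and the accepted spellings are assumptions):

import os


def environ_bool(name, default=False):
    """Interpret an environment variable as a boolean."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ('1', 'true', 'yes', 'on')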

When a fixture fails, stop_fixture should be informed

That is, if there is an exception prior to stop_fixture being called, we should pass a value into stop_fixture so that the fixture can decide how much it wants to clean up: It might make sense in some cases to not clean up.

Implement pre-reqs via yaml instructions

Primarily this would be to ensure that required data is available prior to the test run.

The idea would be that these reqs are fixtures for just this testsuite (not yet sure how that would be done).

Special case? ValueError: "No JSON object could be decoded" for JSON response with an empty body

======================================================================
ERROR: test_request (gabbi.driver.test_env_loader_env_notes_delete_environment_note)
gabbi.driver.test_env_loader_env_notes_delete_environment_note.test_request
----------------------------------------------------------------------
_StringException: Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 78, in wrapper
    func(self)
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 118, in test_request
    self._run_test()
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 312, in _run_test
    self._run_request(full_url, method, headers, body)
  File "/Library/Python/2.7/site-packages/gabbi/case.py", line 279, in _run_request
    self.json_data = json.loads(decoded_output)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 365, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 383, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
defaults:
    request_headers:
        accept: application/json
        content-type: application/json
    response_headers:
        content-type: application/json

tests:
    - name: delete environment note
      verbose: True
      method: DELETE
      # Use previous $RESPONSE to get note ID
      url: /v2/notes/$RESPONSE['$.id']
      status: 302 || 200

Response body: empty
Response headers:

Status Code: 200
X-Runtime: 0.302572
Date: Wed, 10 Jun 2015 17:37:25 GMT
Content-Encoding: gzip
X-Rack-Cache: invalidate, pass
Server: nginx
ETag: W/"7215ee9c7d9dc229d2921a40e899ec5f"
Vary: Accept-Encoding
Content-Type: application/json; charset=utf-8
Status: 200 OK
Cache-Control: max-age=0, private, must-revalidate
Transfer-Encoding: chunked
Connection: keep-alive
X-Request-Id: 1d3efa9239c321a00155be9e1d44324f
X-UA-Compatible: IE=Edge,chrome=1
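Given a response like the one above (a 200 with an empty body), the guard being suggested is roughly: only attempt JSON decoding when there is actually a body to decode (the names follow the traceback above but are otherwise assumptions):

import json


def maybe_decode_json(decoded_output, content_type):
    """Decode a response body as JSON only when it is non-empty JSON."""
    if decoded_output and 'application/json' in (content_type or ''):
        return json.loads(decoded_output)
    return None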

Consider regex matches in response_headers

response_headers are the most likely place where regular expressions could make it much easier to evaluate the correctness of headers.

While I personally would recommend against using regex matching during TDD of a new API, for existing APIs it might be kind of necessary due to ambiguous implementations.
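One possible convention, sketched below, would be to treat a value wrapped in slashes as a pattern and everything else as a literal (this is a guess at a syntax, not current gabbi behaviour):

import re


def header_matches(expected, actual):
    """Compare a header value literally, or as a regex if /wrapped/."""
    if len(expected) > 1 and expected.startswith('/') and expected.endswith('/'):
        return re.search(expected[1:-1], actual) is not None
    return expected == actual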

case-insensitive headers

As far as I can tell, gabbi enforces lowercase headers:

response_headers:
    content-type: text/html; charset=utf-8
... ✓ front page returns HTML

vs.

response_headers:
    Content-Type: text/html; charset=utf-8
... E front page returns HTML

ERROR: front page returns HTML
     "'Content-Type' header not available in response keys: dict_keys(['server', 'content-type', 'access-control-allow-origin', 'date', 'access-control-allow-credentials', 'connection', 'content-location', 'content-length', 'status'])"

From my perspective, the second version is more readable - so it would be nice if header name comparisons were case-insensitive.

$ENVIRON can't access environment variables created in a fixture

Can:
env GABBI_VERBOSE="False" python -m unittest test_loader

Can't:
fixtures.py

class GabbiVarFixture(fixtures.GabbiFixture):
    <-- snip -->

    def start_fixture(self):
        try:
            os.environ.get('GABBI_VERBOSE')
        except KeyError:
            os.environ['GABBI_VERBOSE'] = "False"

test_thing.yaml

fixtures:
    - GabbiVarFixture

defaults:
    verbose: $ENVIRON['GABBI_VERBOSE']
<-- snip -->

I'm sure I'm doing something wrong, and I know environment context can be infuriatingly tricky in Unix from dealing with it in crontabs, but I finish my work day with this conundrum. I'll do some research over the weekend and see if I can figure my way out of this problem.

Also, interesting:
If I use an $ENVIRON['var'] in my YAML settings for the booleans, like $ENVIRON['GABBI_VERBOSE'] and it loses the value due to the above problem, it will always evaluate as True. (default: False, so I think it's the $ENVIRON replacer throwing in a wrench)

Don't automatically follow redirects

By default httplib2 will follow redirects. This is handy in that it reflects good web behavior, but if we are actually trying to track those redirects across the tests, then we'll miss them.

Options:

  • make it never follow redirects with no options
  • don't follow redirects by default, but allow a setting in the YAML (sketched below)
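For reference, httplib2 exposes the switch directly, so the second option is mostly a matter of plumbing a per-test value through to it (the YAML key name here is an assumption):

import httplib2

http = httplib2.Http()
# never follow automatically ...
http.follow_redirects = False
# ... or, per test, honour a YAML setting such as "redirects: true":
# http.follow_redirects = bool(test.get('redirects', False))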

group keys in format documentation

The Test Format documentation page currently lists all keys as a lengthy list in seemingly arbitrary order.

This might be more comprehensible (and less daunting) if keys were grouped: metadata (name, description), request parameters, response expectations, settings (verbose, polling) - there might be more.

Also, table layout might improve readability (or scannability) by providing some visual consistency.

extra verbosity to include request/response bodies

Currently it can be somewhat tricky to debug unexpected outcomes, as verbose: true only prints headers.

In my case, I wanted to verify that a CSRF token was included in a form submission. The simplest way to check the request body was to start netcat and change my test's URL to http://localhost:9999.

It would be useful if gabbi provided a way to inspect the entire data being sent over the wire.

ConnectionRefusedError causes premature termination

I'm using gabbi to test a non-Python application, so I'm accessing the server via HTTP. Thus I have to wait for the application to boot up before running the tests - which can be tricky.

So I thought I could use the poll feature, blocking tests until the server is ready to accept connections:

tests:
- name: wait until server has booted up
  method: GET
  url: /
  status: 200
  poll:
      count: 10
      delay: 0.5
- name: actual test
  ...

However, since the connection cannot be established (see below; reproduce by making gabbi target an unused port), gabbi just fails immediately.

Arguably there shouldn't be a material difference between an unexpected server response and no response at all?


$ gabbi-run localhost:9999 < http.yaml
test_request (gabbi.driver.input_wait_until_server_has_booted_up)
gabbi.driver.input_wait_until_server_has_booted_up.test_request ... ERROR

======================================================================
ERROR: test_request (gabbi.driver.input_wait_until_server_has_booted_up)
gabbi.driver.input_wait_until_server_has_booted_up.test_request
----------------------------------------------------------------------
testtools.testresult.real._StringException: Traceback (most recent call last):
  File ".../venv/lib/python3.4/site-packages/gabbi/case.py", line 78, in wrapper
    func(self)
  File ".../venv/lib/python3.4/site-packages/gabbi/case.py", line 118, in test_request
    self._run_test()
  File ".../venv/lib/python3.4/site-packages/gabbi/case.py", line 329, in _run_test
    self._run_request(full_url, method, headers, body)
  File ".../venv/lib/python3.4/site-packages/gabbi/case.py", line 274, in _run_request
    body=body
  File ".../venv/lib/python3.4/site-packages/httplib2/__init__.py", line 1313, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File ".../venv/lib/python3.4/site-packages/httplib2/__init__.py", line 1063, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File ".../venv/lib/python3.4/site-packages/httplib2/__init__.py", line 987, in _conn_request
    conn.connect()
  File ".../venv/lib/python3.4/site-packages/wsgi_intercept/__init__.py", line 513, in connect
    HTTPConnection.connect(self)
  File ".../python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py", line 834, in connect
    self.timeout, self.source_address)
  File ".../python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socket.py", line 512, in create_connection
    raise err
  File ".../python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socket.py", line 503, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused


======================================================================
ERROR: test_request (gabbi.driver.input_actual_test)
gabbi.driver.input_actual_test.test_request
----------------------------------------------------------------------
[...]
ConnectionRefusedError: [Errno 61] Connection refused


----------------------------------------------------------------------
Ran 2 tests in 0.005s

FAILED (errors=2)
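Treating a refused connection like any other unmet expectation inside the poll loop might look roughly like this (a sketch; the names are illustrative, not gabbi internals):

import time


def poll_request(run_once, count, delay):
    """Retry a request, treating a refused connection like any other failure."""
    for attempt in range(count):
        try:
            return run_once()
        except ConnectionRefusedError:
            if attempt == count - 1:
                raise
            time.sleep(delay)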

gabbi-run fails silently when providing incorrect server target

I just ran gabbi-run localhost 4000, wondering why I kept getting "connection refused" errors - turns out the correct invocation is "localhost:4000".

Clearly this was my mistake, but it would be nice if gabbi was more accommodating, e.g. by

  • detecting and reporting this (common?) error
  • accommodating both "localhost:4000" and "localhost 4000" parameters (there's no general consistency in similar tools AFAICT)
  • reporting host and port to STDERR up front, allowing the user to see their mistake

Ability to use any response in a scenario

My need here is to use not only the previous response via $RESPONSE, but any response in the scenario. In Gnocchi we need to create two metrics with two different requests, each of which returns an ID, and then use both of those IDs in a subsequent request. That could be achieved by having $RESPONSES be a dict-like and array-like object that can be accessed via the test name or the test number (relative or absolute), e.g.:

tests:
    - name: create archive policy
      desc: for later use
      url: /v1/archive_policy
      method: POST
      request_headers:
        content-type: application/json
        x-roles: admin
      data:
        name: high
        definition:
          - granularity: 1 second
      status: 201

    - name: create metric
      url: /v1/metric
      request_headers:
        content-type: application/json
      method: post
      data:
        archive_policy_name: "high"
      status: 201

    - name: create metric 2
      url: /v1/metric
      request_headers:
        content-type: application/json
      method: post
      data:
        archive_policy_name: "high"
      status: 201

    - name: search measure
      url: /v1/search/metric?metric_id=$RESPONSES[-1]['$.id']&metric_id=$RESPONSES[-2]['$.id']
      method: post
      request_headers:
        content-type: application/json
      data:
        ∧:
          - ≥: 1000
      status: 400
      response_strings:
        - Invalid value for start

missing response header raises error rather than failure

The following reports an error when it should report a failure instead:

tests:

  - name: failure
    url: http://google.com

    status: 302
    response_headers:
        x-foo: bar

AFAICT that's because internally a KeyError is raised ("'x-foo' header not available in response keys: dict_keys(...)"): testtools.TestCase's default exception_handlers doesn't have a mapping for that particular exception, so it defaults to _report_error rather than using _report_failure.
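The fix direction follows from unittest semantics: raising the test's failureException (AssertionError by default) is reported as a failure, while any other exception is reported as an error. A sketch:

def assert_header_present(test_case, name, response_headers):
    """Report a missing header as a test failure, not an error."""
    if name not in response_headers:
        raise test_case.failureException(
            "'%s' header not available in response keys: %s"
            % (name, list(response_headers)))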

prefix functionality needs to be available in $NETLOC

Or at least that seems about right. Say you are testing a location header:

      response_headers:
          location: $SCHEME://$NETLOC/v1/archive_policy/%E2%9C%94%C3%A9%C3%B1%E2%98%83

If you use a prefix with that test that will not match. The easiest way to get the right result (while maintaining backwards compatibility with existing tests) would be for $NETLOC to include prefix. This is semantically awkward but useful.

Not sure.

allow a prefix setting for a leading path on all urls in tests

build_tests ought to take a prefix keyword arg to state the prefix for all the URLs in a test suite.

For example if all the URLs are at /foobar/resources then prefix=/foobar might be useful.

The driving force behind this is being able to use the same tests to drive live wsgi apps mounted in different places.

It would be nice if gabbi.reporter had some test coverage

Not only because it probably just should, but also because it is blowing the stats. But low priority in any case.

Note that I'm just saying reporter here, not the runner, although the same techniques (mentioned below) could be used to check it.

It seems like the most straightforward way to make it happen is to

  • create some yaml with one test for each outcome mode and feed it to driver.test_suite_from_yaml and get a suite
  • make sys.stdout a six.StringIO
  • run ConciseTestRunner with the suite
  • inspect the StringIO

It could also be done with mocks but I'd rather not see them in gabbi anywhere.
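Sketching the steps above (module and class names follow the issue text; the exact test_suite_from_yaml arguments mirror the excerpt quoted earlier and may not match the current signature):

import io
import unittest

import yaml

from gabbi import driver, reporter


def run_reporter_check(yaml_text, host='example.com'):
    """Build a suite from YAML, run it with ConciseTestRunner, capture output."""
    loader = unittest.defaultTestLoader
    suite = driver.test_suite_from_yaml(loader, 'reporter_check',
                                        yaml.safe_load(yaml_text),
                                        '.', host, 80, None, False)
    stream = io.StringIO()
    result = reporter.ConciseTestRunner(stream=stream).run(suite)
    return stream.getvalue(), result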
