
fabric8-analytics-server's Introduction


Fabric8-Analytics Core API Documentation

The Fabric8-Analytics API is a microservice that is responsible for:

  • Serving generated analysis results to clients
  • Scheduling new analyses based on client requests

API information

See our API details for more info.

Contributing

See our contributing guidelines for more info.

Docker based API testing

From the top-level git directory, run the tests in a container using the helper script:

$ .qa/runtest.sh

(The above command assumes you have passwordless docker invocation configured; if you don't, sudo will be necessary to invoke docker.)

If you're changing dependencies rather than just editing source code locally, the images need to be rebuilt when invoking runtest.sh. Set the environment variable REBUILD=1 to request image rebuilding.
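
For example, to force an image rebuild before running the tests:

$ REBUILD=1 .qa/runtest.sh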

If the offline virtualenv-based tests have been run, then the container tests may complain about mismatched locations in compiled files. Those can be deleted using:

$ find . -name '*.pyc' -delete

NOTE: Running the container-based tests is likely to cause any already running local core API instance launched via Docker Compose to fall over, due to changes in the SELinux labels on mounted volumes, and may also cause spurious test failures.

Virtualenv-based offline testing

Test cases marked with pytest.mark.offline may be executed without having a Docker daemon running locally.
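
For illustration, an offline-marked test looks roughly like this (a hypothetical test, not one taken from the repository):

import pytest

@pytest.mark.offline
def test_component_key_format():
    """Runs without a Docker daemon or any external service."""
    assert "npm/serve-static/1.7.1".count("/") == 2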

For server testing, the virtualenv should be created using Python 3.4 or later.
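
One way to create and activate such a virtualenv (a sketch, assuming the python3 binary is Python 3.4 or newer):

$ python3 -m venv bayesian
$ source bayesian/bin/activate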

To configure a virtualenv (called bayesian in the example) to run these tests:

(bayesian) $ python -m pip install -e ../lib
(bayesian) $ python -m pip install -r requirements.txt
(bayesian) $ python -m pip install -r tests/requirements.txt

The marked offline tests can then be run as:

(bayesian) $ py.test -m offline tests/

If the Docker container-based tests have been run, then the offline tests might complain about mismatched locations in compiled files. Those can be deleted using:

(bayesian) $ sudo find . -name '*.pyc' -delete

Footnotes

Check for all possible issues

The script check-all.sh checks the sources for all detectable errors and issues. This script can be run without any arguments:

./check-all.sh

Expected script output:

Running all tests and checkers
  Check all BASH scripts
    OK
  Check documentation strings in all Python source file
    OK
  Detect common errors in all Python source file
    OK
  Detect dead code in all Python source file
    OK
  Run Python linter for Python source file
    OK
  Unit tests for this project
    OK
Done

Overal result
  OK

An example of script output when one error is detected:

Running all tests and checkers
  Check all BASH scripts
    Error: please look into files check-bashscripts.log and check-bashscripts.err for possible causes
  Check documentation strings in all Python source file
    OK
  Detect common errors in all Python source file
    OK
  Detect dead code in all Python source file
    OK
  Run Python linter for Python source file
    OK
  Unit tests for this project
    OK
Done

Overal result
  One error detected!

Please note that the script creates a bunch of *.log and *.err files that are temporary and won't be committed into the project repository.

Coding standards

  • You can use the scripts run-linter.sh and check-docstyle.sh to check if the code follows the PEP 8 and PEP 257 coding standards. These scripts can be run without any arguments:
./run-linter.sh
./check-docstyle.sh

The first script checks indentation, line lengths, variable names, whitespace around operators, etc. The second script checks all documentation strings, i.e. their presence and format. Please fix any warnings and errors reported by these scripts.

The list of directories containing source code that needs to be checked is stored in the file directories.txt.
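
For illustration only, the file lists one directory per line; hypothetical content might look like:

bayesian
tests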

Code complexity measurement

The scripts measure-cyclomatic-complexity.sh and measure-maintainability-index.sh are used to measure code complexity. These scripts can be run without any arguments:

./measure-cyclomatic-complexity.sh
./measure-maintainability-index.sh

The first script measures the cyclomatic complexity of all Python sources found in the repository. Please see this table for further explanation of how to interpret the results.

The second script measures the maintainability index of all Python sources found in the repository. Please see the following link for an explanation of this measurement.

You can specify the command line option --fail-on-error if you need to check and use the exit code in your workflow. In this case the script returns 0 when no failures have been found and a non-zero value otherwise.
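
For example, a workflow step can rely on the exit code like this:

./measure-cyclomatic-complexity.sh --fail-on-error || { echo "complexity check failed"; exit 1; }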

Dead code detection

The script detect-dead-code.sh can be used to detect dead code in the repository. This script can be run without any arguments:

./detect-dead-code.sh

Please note that due to Python's dynamic nature, static code analyzers are likely to miss some dead code. Also, code that is only called implicitly may be reported as unused.

Because of these potential problems, only code detected with more than 90% confidence is reported.

The list of directories containing source code that needs to be checked is stored in the file directories.txt.

Common issues detection

The script detect-common-errors.sh can be used to detect common errors in the repository. This script can be run without any arguments:

./detect-common-errors.sh

Please note that only semantic problems are reported.

The list of directories containing source code that needs to be checked is stored in the file directories.txt.

Check for scripts written in BASH

The script named check-bashscripts.sh can be used to check all BASH scripts (in fact: all files with the .sh extension) for various possible issues, incompatibilities, and caveats. This script can be run without any arguments:

./check-bashscripts.sh

Please see the following link for further explanation of how ShellCheck works and which issues can be detected.

Commands to generate the dependency files for the stack analysis call

Maven
mvn org.apache.maven.plugins:maven-dependency-plugin:3.0.2:tree -DoutputFile=/someloc/dependencies.txt -DoutputType=dot -DappendOutput=true;
NPM
npm install; npm list --prod --json > npmlist.json
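
The resulting npmlist.json has roughly the following shape (trimmed, with illustrative values):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": {
      "version": "4.17.4"
    }
  }
}
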
Pypi
python -m pip install -r requirements.txt
python - requirements.txt pylist.json <<'EOF'
import json
import sys

import pkg_resources as pr

res = []
for line in open(sys.argv[1]):
    try:
        dist = pr.get_distribution(line)
        entry = {"package": dist.key, "version": dist.version, "deps": set()}
        # pr.require() resolves the requirement together with all of its dependencies
        for needed in pr.require(line):
            for dep in needed.requires():
                dep_dist = pr.get_distribution(dep)
                entry["deps"].add((dep_dist.key, dep_dist.version))
        entry["deps"] = [{"package": p, "version": v} for p, v in entry["deps"]]
        res.append(entry)
    except Exception:
        pass

out = open(sys.argv[2], "w") if len(sys.argv) > 2 else sys.stdout
json.dump(res, out)
EOF

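The generated pylist.json is a list of objects holding each package name, its resolved version, and its direct dependencies, e.g. (illustrative values only):

[
  {
    "package": "click",
    "version": "6.7",
    "deps": []
  }
]
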
fabric8-analytics-server's People

Contributors

abs51295, akshaybhansali18, arajkumar, bkabrda, cnulenka, dgpatelgit, fridex, humaton, invinciblejai, jmelis, jparsai, jpopelka, jyasveer, krishnapaparaju, mathur07, miteshvp, msrb, preeticp, rafiu007, rootavish, sara-02, sawood14012, shaded-enmity, sivaavkd, sunilk747, tisnik, tuxdna, vinagarw272001, yzainee, yzainee-zz


fabric8-analytics-server's Issues

Wrong number in analyzed_dependencies_count?

An example of what the stack analyses v2 output looks like: there's an array with one analyzed dependency, but the attribute analyzed_dependencies_count says it is zero:

                "analyzed_dependencies": [
                    {
                        "version": "6.7",
                        "package": "click"
                    }
                ],
                "analyzed_dependencies_count": 0,

Error during docker-compose build

The following error message is displayed during ./docker-compose build:

Step 1 : FROM registry.centos.org/sclo/postgresql-94-centos7:latest
 ---> b75cffc48625
Step 2 : MAINTAINER Fridolin Pokorny <[email protected]>
 ---> Using cache
 ---> d135636a86e1
Step 3 : ADD ./common.sh /usr/share/container-scripts/postgresql/common.sh
 ---> Using cache
 ---> ba8001e265a2
Step 4 : EXPOSE 5432
 ---> Using cache
 ---> 502ab02bf416
Step 5 : CMD run-postgresql
 ---> Using cache
 ---> 2cc1ed8bcfa3
Successfully built 2cc1ed8bcfa3
Step 1 : FROM registry.centos.org/centos/centos:7
 ---> 77ae887f4989
Step 2 : MAINTAINER Slavek Kabrda <[email protected]>
 ---> Using cache
 ---> 263d83a0152b
Step 3 : RUN yum install -y https://download.postgresql.org/pub/repos/yum/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-3.noarch.rpm &&		yum -y install pgbouncer postgresql &&		yum clean all
 ---> Using cache
 ---> b14c7a4731d4
Step 4 : COPY run-pgbouncer.sh health-check-probe.sh /usr/bin/
 ---> Using cache
 ---> 2a95f4405580
Step 5 : EXPOSE 5432
 ---> Using cache
 ---> adbe9b253ed6
Step 6 : ENTRYPOINT /usr/bin/run-pgbouncer.sh
 ---> Using cache
 ---> 9ba18a34843b
Successfully built 9ba18a34843b
Step 1 : FROM registry.centos.org/centos/centos:7
 ---> 77ae887f4989
Step 2 : MAINTAINER Pavel Odvody <[email protected]>
 ---> Using cache
 ---> 7453569dcf81
Step 3 : ENV LANG en_US.UTF-8
 ---> Using cache
 ---> f78f0e61e67d
Step 4 : RUN useradd coreapi
 ---> Using cache
 ---> b9ee5dd439ee
Step 5 : RUN yum install -y epel-release &&    yum install -y gcc patch git python34-pip python34-requests httpd httpd-devel python34-devel postgresql-devel redhat-rpm-config libxml2-devel libxslt-devel python34-pycurl &&    yum clean all
 ---> Using cache
 ---> 1384d591d254
Step 6 : RUN mkdir -p /coreapi
 ---> Using cache
 ---> 178761057f2b
Step 7 : COPY ./requirements.txt /coreapi
 ---> Using cache
 ---> f940f63766b9
Step 8 : RUN pushd /coreapi &&     pip3 install -r requirements.txt &&     rm requirements.txt &&     popd
 ---> Using cache
 ---> c13816a9e552
Step 9 : RUN mkdir -p /tmp/install_deps/patches/
 ---> Using cache
 ---> ecc791e043c1
Step 10 : COPY hack/patches/* /tmp/install_deps/patches/
 ---> Using cache
 ---> 31de91c85f6d
Step 11 : COPY hack/apply_patches.sh /tmp/install_deps/
 ---> Using cache
 ---> 58da51e5e320
Step 12 : COPY ./coreapi-httpd.conf /etc/httpd/conf.d/
 ---> Using cache
 ---> 387d23f3264a
Step 13 : ENTRYPOINT /usr/bin/coreapi-server.sh
 ---> Using cache
 ---> 27cde7407fe6
Step 14 : COPY ./ /coreapi
 ---> Using cache
 ---> 5f87c00b4637
Step 15 : RUN pushd /coreapi &&     pip3 install . &&     popd &&     find coreapi/ -mindepth 1 -maxdepth 1 \( ! -name 'alembic*' -a ! -name hack \) -exec rm -rf {} +
 ---> Using cache
 ---> 68c2ee614177
Step 16 : ENV F8A_WORKER_VERSION aa3a742
 ---> Using cache
 ---> 5debb249d2cb
Step 17 : RUN pip3 install git+https://github.com/fabric8-analytics/fabric8-analytics-worker.git@${F8A_WORKER_VERSION}
 ---> Using cache
 ---> 2a31bdfd06cf
Step 18 : COPY .git/ /tmp/.git
 ---> Using cache
 ---> e26e6f5deeea
Step 19 : RUN cd /tmp/.git &&    git show -s --format="COMMITTED_AT=%ai%nCOMMIT_HASH=%h%n" HEAD | tee /etc/coreapi-release &&    rm -rf /tmp/.git/
 ---> Using cache
 ---> 8bcf6b63bd60
Step 20 : COPY hack/update_selinon.sh /tmp/
 ---> Using cache
 ---> b53c9c9b8e8c
Step 21 : RUN sh /tmp/update_selinon.sh
 ---> Using cache
 ---> a51fb7c667fa
Step 22 : RUN cd /tmp/install_deps/ && /tmp/install_deps/apply_patches.sh
 ---> Running in f022962a6fdb
patching file boto/endpoints.json
Hunk #1 FAILED at 569.
1 out of 1 hunk FAILED -- saving rejects to file boto/endpoints.json.rej

Component search response contains a string with JSON inside

If I access the following endpoint:

http://localhost:32000/api/v1/component-search/foobar

I get:

"{\"result\": []}"

I'd assume the following response:

{"result": []}

That is, it seems the search result is explicitly converted to JSON twice.
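
A minimal sketch of how such double encoding typically happens in a Flask handler (illustrative only, not the project's actual code):

import json
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/component-search/<query>")
def component_search(query):
    result = {"result": []}
    # Bug: json.dumps() already produces a JSON string, so jsonify()
    # serializes it again and the client receives "{\"result\": []}".
    return jsonify(json.dumps(result))
    # Fix: return jsonify(result)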

Misleading API response

When I try to create an analysis for one component for the first time, I get this response:

$ curl  http://localhost:32000/api/v1/component-analyses/maven/io.vertx:vertx-core/3.4.1
{
  "error": "No data found for maven Package io.vertx:vertx-core/3.4.1"
}

I believe it's not an error that there is no data yet; when an analysis is in progress for this package, the response could say so instead of returning this "error" message.

Typo in stack analysis v2 result JSON

outlier_prbability should be outlier_probability:

           "recommendations": {
                "companion": [],
                "usage_outliers": [
                    {
                        "package_name": "wheel",
                        "outlier_prbability": 0.71875
                    },
                    {
                        "package_name": "setuptools",
                        "outlier_prbability": 0.640625
                    }
                ],
                "alternate": []

Component analyses failure

API call:

curl -v -k -H "Authorization: Bearer {proper_token}" localhost:32000/api/v1/component-analyses/npm/lodash/4.17.4

Error messages seen in the log file (I hope they are related to this issue):

worker-ingestion_1      | 21 14:47:02,175 [ERROR] celery.app.trace: Task selinon.SelinonTaskEnvelope[be0d5c1c-c7b2-46f5-b651-b6a59d806624] raised unexpected: HTTPError('500 Server Error: Internal Server Error for url: http://data-model-importer:9192/api/v1/ingest_to_graph',)
worker-ingestion_1      | Traceback (most recent call last):
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/celery/app/trace.py", line 367, in trace_task
worker-ingestion_1      |     R = retval = fun(*args, **kwargs)
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/celery/app/trace.py", line 622, in __protected_call__
worker-ingestion_1      |     return self.run(*args, **kwargs)
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/selinon/selinonTaskEnvelope.py", line 170, in run
worker-ingestion_1      |     raise self.retry(max_retries=0, exc=exc)
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/celery/app/task.py", line 668, in retry
worker-ingestion_1      |     raise_with_context(exc)
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/selinon/selinonTaskEnvelope.py", line 115, in run
worker-ingestion_1      |     result = task.run(node_args)
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/f8a_worker/base.py", line 38, in run
worker-ingestion_1      |     result = self.execute(node_args)
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/f8a_worker/workers/graph_importer.py", line 26, in execute
worker-ingestion_1      |     response.raise_for_status()
worker-ingestion_1      |   File "/usr/lib/python3.4/site-packages/requests/models.py", line 937, in raise_for_status
worker-ingestion_1      |     raise HTTPError(http_error_msg, response=self)
worker-ingestion_1      | requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://data-model-importer:9192/api/v1/ingest_to_graph
data-model-importer_1   | --------------------------------------------------------------------------------
data-model-importer_1   | INFO in rest_api [/src/rest_api.py:48]:
data-model-importer_1   | Ingesting the given list of EPVs
data-model-importer_1   | --------------------------------------------------------------------------------
data-model-importer_1   | ERROR:data_importer:import_epv() failed with error: Parameter validation failed:
data-model-importer_1   | Invalid bucket name "": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
data-model-importer_1   | ERROR:data_importer:Traceback for latest failure in import call: Traceback (most recent call last):
data-model-importer_1   |   File "/src/data_importer.py", line 149, in import_epv_http
data-model-importer_1   |     ver_list_keys.extend(data_source.list_files(bucket_name=config.AWS_EPV_BUCKET, prefix=ver_key_prefix))
data-model-importer_1   |   File "/src/data_source/s3_data_source.py", line 55, in list_files
data-model-importer_1   |     for obj in bucket.objects.filter(Prefix=prefix):
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/boto3/resources/collection.py", line 83, in __iter__
data-model-importer_1   |     for page in self.pages():
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/boto3/resources/collection.py", line 166, in pages
data-model-importer_1   |     for page in pages:
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/paginate.py", line 249, in __iter__
data-model-importer_1   |     response = self._make_request(current_kwargs)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/paginate.py", line 326, in _make_request
data-model-importer_1   |     return self._method(**current_kwargs)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/client.py", line 310, in _api_call
data-model-importer_1   |     return self._make_api_call(operation_name, kwargs)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/client.py", line 573, in _make_api_call
data-model-importer_1   |     api_params, operation_model, context=request_context)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/client.py", line 625, in _convert_to_request_dict
data-model-importer_1   |     params=api_params, model=operation_model, context=context)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/hooks.py", line 227, in emit
data-model-importer_1   |     return self._emit(event_name, kwargs)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/hooks.py", line 210, in _emit
data-model-importer_1   |     response = handler(**kwargs)
data-model-importer_1   |   File "/usr/lib/python2.7/site-packages/botocore/handlers.py", line 212, in validate_bucket_name
data-model-importer_1   |     raise ParamValidationError(report=error_msg)
data-model-importer_1   | ParamValidationError: Parameter validation failed:
data-model-importer_1   | Invalid bucket name "": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255

OTOH the old analyses are reachable.

Query for stack recommendations results in internal server error

At times, it has been observed that querying for stack recommendations results in no output on the openshift.io space dashboard as well as on the stack report.

The UI console log suggests that the call to the /stack-analyses API endpoint causes HTTP error 500, i.e. internal server error.


Stack analyses failure

Use case

1st step:

curl -v -k -F "manifest[][email protected]" -H "Authorization: Bearer token" http://localhost:32000/api/v1/stack-analyses-v2

I get a proper response with a job ID (say 287d59868ce94343ba79e3c5c38d4afd), etc.

2nd step:

curl -v -k -H "Authorization: Bearer token" http://localhost:32000/api/v1/stack-analyses-v2/287d59868ce94343ba79e3c5c38d4afd

I receive 500 Internal Server Error

Error log:

coreapi-server          | [Fri Jul 21 15:21:37.907384 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/mod_wsgi/server/__init__.py", line 1484, in handle_request
coreapi-server          | [Fri Jul 21 15:21:37.907393 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return self.application(environ, start_response)
coreapi-server          | [Fri Jul 21 15:21:37.907428 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 2000, in __call__
coreapi-server          | [Fri Jul 21 15:21:37.907433 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return self.wsgi_app(environ, start_response)
coreapi-server          | [Fri Jul 21 15:21:37.907472 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1991, in wsgi_app
coreapi-server          | [Fri Jul 21 15:21:37.907479 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     response = self.make_response(self.handle_exception(e))
coreapi-server          | [Fri Jul 21 15:21:37.907507 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask_restful/__init__.py", line 271, in error_router
coreapi-server          | [Fri Jul 21 15:21:37.907511 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return original_handler(e)
coreapi-server          | [Fri Jul 21 15:21:37.907544 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1567, in handle_exception
coreapi-server          | [Fri Jul 21 15:21:37.907551 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     reraise(exc_type, exc_value, tb)
coreapi-server          | [Fri Jul 21 15:21:37.907578 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/_compat.py", line 32, in reraise
coreapi-server          | [Fri Jul 21 15:21:37.907582 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     raise value.with_traceback(tb)
coreapi-server          | [Fri Jul 21 15:21:37.907610 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1988, in wsgi_app
coreapi-server          | [Fri Jul 21 15:21:37.907621 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     response = self.full_dispatch_request()
coreapi-server          | [Fri Jul 21 15:21:37.907664 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1641, in full_dispatch_request
coreapi-server          | [Fri Jul 21 15:21:37.907673 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     rv = self.handle_user_exception(e)
coreapi-server          | [Fri Jul 21 15:21:37.907712 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask_restful/__init__.py", line 271, in error_router
coreapi-server          | [Fri Jul 21 15:21:37.907720 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return original_handler(e)
coreapi-server          | [Fri Jul 21 15:21:37.907790 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1544, in handle_user_exception
coreapi-server          | [Fri Jul 21 15:21:37.907801 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     reraise(exc_type, exc_value, tb)
coreapi-server          | [Fri Jul 21 15:21:37.907848 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/_compat.py", line 32, in reraise
coreapi-server          | [Fri Jul 21 15:21:37.907858 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     raise value.with_traceback(tb)
coreapi-server          | [Fri Jul 21 15:21:37.907903 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1639, in full_dispatch_request
coreapi-server          | [Fri Jul 21 15:21:37.907911 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     rv = self.dispatch_request()
coreapi-server          | [Fri Jul 21 15:21:37.907956 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1625, in dispatch_request
coreapi-server          | [Fri Jul 21 15:21:37.907965 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return self.view_functions[rule.endpoint](**req.view_args)
coreapi-server          | [Fri Jul 21 15:21:37.908012 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask_restful/__init__.py", line 477, in wrapper
coreapi-server          | [Fri Jul 21 15:21:37.908020 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     resp = resource(*args, **kwargs)
coreapi-server          | [Fri Jul 21 15:21:37.908067 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask/views.py", line 84, in view
coreapi-server          | [Fri Jul 21 15:21:37.908076 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return self.dispatch_request(*args, **kwargs)
coreapi-server          | [Fri Jul 21 15:21:37.908124 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib/python3.4/site-packages/bayesian/api_v1.py", line 206, in dispatch_request
coreapi-server          | [Fri Jul 21 15:21:37.908132 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     response = super().dispatch_request(*args, **kwargs)
coreapi-server          | [Fri Jul 21 15:21:37.908171 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib64/python3.4/site-packages/flask_restful/__init__.py", line 587, in dispatch_request
coreapi-server          | [Fri Jul 21 15:21:37.908180 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     resp = meth(*args, **kwargs)
coreapi-server          | [Fri Jul 21 15:21:37.908222 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib/python3.4/site-packages/bayesian/auth.py", line 47, in wrapper
coreapi-server          | [Fri Jul 21 15:21:37.908231 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     return view(*args, **kwargs)
coreapi-server          | [Fri Jul 21 15:21:37.908278 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]   File "/usr/lib/python3.4/site-packages/bayesian/api_v1.py", line 360, in get
coreapi-server          | [Fri Jul 21 15:21:37.908287 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160]     alternate = reco_result['task_result']['recommendations']['alternate']
coreapi-server          | [Fri Jul 21 15:21:37.908326 2017] [wsgi:error] [pid 13:tid 140356544870144] [remote 172.17.0.1:45160] TypeError: 'NoneType' object is not subscriptable
coreapi-server          | 172.17.0.1 - - [21/Jul/2017:15:21:37 +0000] "GET /api/v1/stack-analyses-v2/287d59868ce94343ba79e3c5c38d4afd HTTP/1.1" 500 531 - "curl/7.51.0"

RFE: Document the current API

As the API has changed a lot, there's a need to document the current API so that integration tests can be created and test coverage can be computed as well.

Do not pass manifest files via SQS queue

Currently, all manifest files are sent on the queue as part of the message. Note that there can be multiple manifest files and the limit for a single message is 256 KB [1]. Since the message already carries other task-flow metadata (such as flow-related metadata needed by the dispatcher), flow arguments, and Celery-related metadata, this limit can easily be reached.

Instead, we should pass only a reference to the object stored on S3.

[1] http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-messages.html
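
A minimal sketch of the suggested approach, assuming boto3 and hypothetical bucket and queue names (not the project's actual code):

import json

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def submit_stack_analysis(request_id, filename, manifest_content):
    # Store the (potentially large) manifest content on S3 ...
    key = "manifests/{}/{}".format(request_id, filename)
    s3.put_object(Bucket="manifest-bucket", Key=key, Body=manifest_content.encode("utf-8"))
    # ... and keep the SQS message small by passing only the reference.
    message = {
        "external_request_id": request_id,
        "manifest_refs": [{"bucket": "manifest-bucket", "key": key}],
    }
    sqs.send_message(QueueUrl="https://sqs.example.invalid/queue", MessageBody=json.dumps(message))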

Stack analysis wrong API response

If you request an unknown ID, you get the misleading error that the analysis is in progress:

$ curl -H "Authorization: Bearer $OSIO_TOKEN" https://recommender.api.openshift.io/api/v1/stack-analyses/someunkownid
{
"error": "Analysis for request ID 'someunkownid' is in progress"
}
This is especially misleading because it is the same message returned when a legitimate analysis is in progress, and that case should not be reported as an "error".

Main reference: openshiftio/openshift.io#276

Expose component analyses on task level

Currently the /api/v1/component-analyses endpoint shows the results of all tasks that were run during the analysis. It would be nice to have finer granularity there and be able to get the results of specific tasks.

This would allow better optimization when querying for particular data; e.g. if only GitHub-related statistics are requested, there is no need to return the results of other tasks (note that the output of some tasks can be very large).

This could be done in API v2.

Get rid of list of values in each entry

Currently the API server returns task results from the graph database in a form where every entry holds a list of values. In some cases this does not make sense, and having it on the API makes the API unintuitive and harder to consume.

An example could be (/api/v1/component-analyses/npm/serve-static/1.7.1):

{
    "result": {
        "data": [
            {
                "package": {
                    "ecosystem": [
                        "npm"
                    ],
                    "gh_forks": [
                        -1
                    ],
                    "gh_issues_last_month_closed": [
                        -1
                    ],
                    "gh_issues_last_month_opened": [
                        -1
                    ],
                    "gh_issues_last_year_closed": [
                        -1
                    ],
                    "gh_issues_last_year_opened": [
                        -1
                    ],
                    "gh_prs_last_month_closed": [
                        -1
                    ],
                    "gh_prs_last_month_opened": [
                        -1
                    ],
                    "gh_prs_last_year_closed": [
                        -1
                    ],
                    "gh_prs_last_year_opened": [
                        -1
                    ],
                    "gh_stargazers": [
                        -1
                    ],
                    "last_updated": [
                        1496738887.85
                    ],
                    "latest_version": [
                        "1.12.1"
                    ],
                    "name": [
                        "serve-static"
                    ],
                    "package_dependents_count": [
                        -1
                    ],
                    "package_relative_used": [
                        "not used"
                    ],
                    "tokens": [
                        "serve",
                        "static"
                    ],
                    "vertex_label": [
                        "Package"
                    ]
                },
                "version": {
                    "cm_avg_cyclomatic_complexity": [
                        -1.0
                    ],
                    "cm_loc": [
                        511
                    ],
                    "cm_num_files": [
                        4
                    ],
                    "cve_ids": [
                        "CVE-2015-1164:0",
                        "CWE-211:0"
                    ],
                    "dependents_count": [
                        -1
                    ],
                    "description": [
                        "serve static files"
                    ],
                    "last_updated": [
                        1493117926.84
                    ],
                    "licenses": [
                        "MITNFA"
                    ],
                    "pecosystem": [
                        "npm"
                    ],
                    "pname": [
                        "serve-static"
                    ],
                    "shipped_as_downstream": [
                        false
                    ],
                    "version": [
                        "1.7.1"
                    ],
                    "vertex_label": [
                        "Version"
                    ]
                }
            }
        ],
        "recommendation": {
            "change_to": "1.12.1",
            "component-analyses": {
                "cve": [
                    {
                        "cvss": 0.0,
                        "id": "CVE-2015-1164"
                    },
                    {
                        "cvss": 0.0,
                        "id": "CWE-211"
                    }
                ]
            },
            "message": "CVE/s found for Package - serve-static, Version - 1.7.1\nCVE-2015-1164, CWE-211 with a max cvss score of - 0.0\n It is recommended to use Version - 1.12.1\n It is recommended to use Version - 1.12.1\n It is recommended to use Version - 1.12.1"
        }
    },
    "schema": {
        "name": "analyses_graphdb",
        "url": "http://recommender.api.openshift.io/api/v1/schemas/api/analyses_graphdb/1-2-0/",
        "version": "1-2-0"
    }
}

As other parts already integrate with this, the change could be introduced in the v2 API.
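
A minimal sketch of the post-processing this currently forces on consumers (illustrative only), unwrapping every single-element list in the response:

def unwrap_single_element_lists(node):
    """Recursively replace one-element lists with their single value."""
    if isinstance(node, list):
        if len(node) == 1:
            return unwrap_single_element_lists(node[0])
        return [unwrap_single_element_lists(item) for item in node]
    if isinstance(node, dict):
        return {key: unwrap_single_element_lists(value) for key, value in node.items()}
    return node

# e.g. {"ecosystem": ["npm"]} becomes {"ecosystem": "npm"},
# while {"tokens": ["serve", "static"]} keeps both values.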

Stack analyses v2 failure for pom.xml, probably because ecosystem is not set properly?

Stack analyses v2 fails for the following pom.xml file:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.redhat.bayessian.test</groupId>
  <artifactId>test-app-junit-dependency</artifactId>
  <version>1.0</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
    </dependency>
  </dependencies>
</project>

After the pom.xml is sent for analysis, the following log message can be seen:

worker-api_1            | cher_id": "56e775c3-7e8f-4df9-92da-ea96820f7c8b", "task_name": "GraphAggregatorTask", "flow_name": "stackApiGraphV2Flow", "parent": {}, "task_id": "12cf991c-26f5-482f-b5cc-739a171ee124", "node_args": {"manifest": [{"content": "<project>\n  <modelVersion>4.0.0</modelVersion>\n  <groupId>com.redhat.bayessian.test</groupId>\n  <artifactId>test-app-junit-dependency</artifactId>\n  <version>1.0</version>\n  <dependencies>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>3.8.1</version>\n    </dependency>\n  </dependencies>\n</project>\n", "ecosystem": "maven", "filename": "pom.xml"}], "data": {"user_email": "[email protected]", "api_name": "stack_analyses", "user_profile": {"iat": 1500567890, "exp": 1508343890, "sub": "testuser"}, "request": [{"content": "<project>\n  <modelVersion>4.0.0</modelVersion>\n  <groupId>com.redhat.bayessian.test</groupId>\n  <artifactId>test-app-junit-dependency</artifactId>\n  <version>1.0</version>\n  <dependencies>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>3.8.1</version>\n    </dependency>\n  </dependencies>\n</project>\n", "ecosystem": "maven", "filename": "pom.xml"}]}, "ecosystem": null, "external_request_id": "0ba28a835040495ab5d092ed13cca7ea"}, "event": "TASK_START", "queue": "ptisnovs_api_GraphAggregatorTask_v0"}

After a while:

worker-api_1            | {"dispatcher_id": "56e775c3-7e8f-4df9-92da-ea96820f7c8b", "selective": false, "flow_name": "stackApiGraphV2Flow", "node_name": "recommendation_v2", "node_id": "8207b164-afce-472a-b69c-6771b8d76854", "what": "Traceback (most recent call last):\n  File \"/usr/lib/python3.4/site-packages/celery/app/trace.py\", line 367, in trace_task\n    R = retval = fun(*args, **kwargs)\n  File \"/usr/lib/python3.4/site-packages/celery/app/trace.py\", line 622, in __protected_call__\n    return self.run(*args, **kwargs)\n  File \"/usr/lib/python3.4/site-packages/selinon/selinonTaskEnvelope.py\", line 170, in run\n    raise self.retry(max_retries=0, exc=exc)\n  File \"/usr/lib/python3.4/site-packages/celery/app/task.py\", line 668, in retry\n    raise_with_context(exc)\n  File \"/usr/lib/python3.4/site-packages/selinon/selinonTaskEnvelope.py\", line 115, in run\n    result = task.run(node_args)\n  File \"/usr/lib/python3.4/site-packages/f8a_worker/base.py\", line 38, in run\n    result = self.execute(node_args)\n  File \"/usr/lib/python3.4/site-packages/f8a_worker/workers/recommender.py\", line 571, in execute\n    for pgm_result in pgm_response:\nTypeError: 'NoneType' object is not iterable\n", "event": "NODE_FAILURE"}
worker-api_1            | ive": false, "retry": 4, "node_args": {"manifest": [{"content": "<project>\n  <modelVersion>4.0.0</modelVersion>\n  <groupId>com.redhat.bayessian.test</groupId>\n  <artifactId>test-app-junit-dependency</artifactId>\n  <version>1.0</version>\n  <dependencies>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>3.8.1</version>\n    </dependency>\n  </dependencies>\n</project>\n", "ecosystem": "maven", "filename": "pom.xml"}], "data": {"api_name": "stack_analyses", "user_email": "[email protected]", "user_profile": {"iat": 1500567890, "exp": 1508343890, "sub": "testuser"}, "request": [{"content": "<project>\n  <modelVersion>4.0.0</modelVersion>\n  <groupId>com.redhat.bayessian.test</groupId>\n  <artifactId>test-app-junit-dependency</artifactId>\n  <version>1.0</version>\n  <dependencies>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>3.8.1</version>\n    </dependency>\n  </dependencies>\n</project>\n", "ecosystem": "maven", "filename": "pom.xml"}]}, "ecosystem": null, "external_request_id": "0ba28a835040495ab5d092ed13cca7ea"}, "event": "FLOW_FAILURE", "queue": "ptisnovs_api_stackApiGraphFlow_v0", "dispatcher_id": "56e775c3-7e8f-4df9-92da-ea96820f7c8b", "flow_name": "stackApiGraphV2Flow", "parent": null, "will_retry": false, "state": {"finished_nodes": {"GraphAggregatorTask": ["12cf991c-26f5-482f-b5cc-739a171ee124"], "ManifestKeeperTask": ["b8897e79-8bcc-4d90-b031-a685173c46fd"], "BookkeeperTask": ["1a71f258-e88d-4e5a-96c0-3b3f1050fef0"], "stack_aggregator_v2": ["757206f6-2f09-4260-b929-3e717a38e896"]}, "failed_nodes": {"recommendation_v2": ["8207b164-afce-472a-b69c-6771b8d76854"]}}}
coreapi-pgbouncer       | 2017-07-28 13:58:12.978 14 LOG C-0x147a6f0: coreapi/[email protected]:35104 login attempt: db=coreapi user=coreapi tls=no
coreapi-pgbouncer       | 2017-07-28 13:58:12.987 14 LOG C-0x147a6f0: coreapi/[email protected]:35104 closing because: client close request (age=0)
worker-api_1            | 28 13:58:12,987 [ERROR] celery.app.trace: Task selinon.Dispatcher[56e775c3-7e8f-4df9-92da-ea96820f7c8b] raised unexpected: FlowError('{"finished_nodes": {"GraphAggregatorTask": ["12cf991c-26f5-482f-b5cc-739a171ee124"], "ManifestKeeperTask": ["b8897e79-8bcc-4d90-b031-a685173c46fd"], "BookkeeperTask": ["1a71f258-e88d-4e5a-96c0-3b3f1050fef0"], "stack_aggregator_v2": ["757206f6-2f09-4260-b929-3e717a38e896"]}, "failed_nodes": {"recommendation_v2": ["8207b164-afce-472a-b69c-6771b8d76854"]}}',)
worker-api_1            | Traceback (most recent call last):
worker-api_1            |   File "/usr/lib/python3.4/site-packages/celery/app/trace.py", line 367, in trace_task
worker-api_1            |     R = retval = fun(*args, **kwargs)
worker-api_1            |   File "/usr/lib/python3.4/site-packages/celery/app/trace.py", line 622, in __protected_call__
worker-api_1            |     return self.run(*args, **kwargs)
worker-api_1            |   File "/usr/lib/python3.4/site-packages/selinon/dispatcher.py", line 103, in run
worker-api_1            |     raise self.retry(max_retries=0, exc=flow_error)
worker-api_1            |   File "/usr/lib/python3.4/site-packages/celery/app/task.py", line 668, in retry
worker-api_1            |     raise_with_context(exc)
worker-api_1            |   File "/usr/lib/python3.4/site-packages/selinon/dispatcher.py", line 83, in run
worker-api_1            |     retry = system_state.update()
worker-api_1            |   File "/usr/lib/python3.4/site-packages/selinon/systemState.py", line 760, in update
worker-api_1            |     started, reused, fallback_started = self._continue_and_update_retry([])
worker-api_1            |   File "/usr/lib/python3.4/site-packages/selinon/systemState.py", line 745, in _continue_and_update_retry
worker-api_1            |     raise FlowError(json.dumps(state_info))
worker-api_1            | selinon.errors.FlowError: {"finished_nodes": {"GraphAggregatorTask": ["12cf991c-26f5-482f-b5cc-739a171ee124"], "ManifestKeeperTask": ["b8897e79-8bcc-4d90-b031-a685173c46fd"], "BookkeeperTask": ["1a71f258-e88d-4e5a-96c0-3b3f1050fef0"], "stack_aggregator_v2": ["757206f6-2f09-4260-b929-3e717a38e896"]}, "failed_nodes": {"recommendation_v2": ["8207b164-afce-472a-b69c-6771b8d76854"]}}

Error message for expired token is not quite useful

At the moment, if I try to authenticate with an expired token, I get the following error message:

{
  "error": "Authentication failed - could not decode JWT token"
}

What happened is not obvious at first sight. The message could be more descriptive.
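
A minimal sketch of how the expired-token case could be distinguished, assuming the tokens are verified with PyJWT (a hypothetical helper, not the project's actual auth code):

import jwt

def decode_user_token(token, public_key):
    try:
        return jwt.decode(token, public_key, algorithms=["RS256"])
    except jwt.ExpiredSignatureError:
        # Report the specific reason instead of a generic decoding failure.
        raise ValueError("Authentication failed - token has expired, please obtain a new one")
    except jwt.InvalidTokenError:
        raise ValueError("Authentication failed - could not decode JWT token")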

Naming inconsistency

Here is a list of all possible paths that are currently supported by v1 API:

{
    "paths": [
        "/api/v1",
        "/api/v1/analyses",
        "/api/v1/analyses/<ecosystem>/<package>/<version>",
        "/api/v1/analyses/by-artifact-hash/<algorithm>/<artifact_hash>",
        "/api/v1/analyses/by-id/<int:analysis_id>",
        "/api/v1/component-analyses/<ecosystem>/<package>/<version>",
        "/api/v1/ecosystems",
        "/api/v1/packages/<ecosystem>",
        "/api/v1/schemas",
        "/api/v1/schemas/<collection>",
        "/api/v1/schemas/<collection>/<name>",
        "/api/v1/schemas/<collection>/<name>/<version>",
        "/api/v1/stack-analyses",
        "/api/v1/stack-analyses/<external_request_id>",
        "/api/v1/stack-analyses/by-origin/<origin>",
        "/api/v1/system/version",
        "/api/v1/user",
        "/api/v1/user-feedback",
        "/api/v1/versions/<ecosystem>/<package>",
        "/api/v1/versions/in-range/<ecosystem>"
    ]
}

Note that the /api/v1/analyses endpoints actually provide information about /api/v1/component-analyses. We should unify analyses with component-analyses or vice versa. This was introduced when the old analyses endpoint, which communicated with the relational database, was dropped.

As many components already depend on this, we could consider creating API v2.

Create a Sentiment Analysis API for Che integration

It is planned to integrate 'sentiment-analysis' with 'Che', similar to 'component-analysis'. The sentiment analysis will require Google BigQuery to read data from the Stack Overflow tables and the Google Cloud Natural Language API to compute sentiment_details. The attached document [1] contains the workflow diagram, block diagram, and a few other details.
[1] https://docs.google.com/a/redhat.com/document/d/15Pq9WNz2WuyP2PsIUvkUMAxCpCX13yROuM233QAag70/edit?usp=sharing

cc: @miteshvp

Incorrect HTTP status when the job to be deleted does not exist

curl -v -X DELETE localhost:34000/api/v1/jobs/UNKNOWN_JOB

returns HTTP code:

HTTP/1.1 401 UNAUTHORIZED

Should be either:

DELETE /api/documents/2 - 404 Users permission irrelevant, resource does not exist
DELETE /api/documents/1 - 410 User has permission, resource already deleted

Stack analyses v2 test failure on stage - usage outliers probability

Content of the requirements.txt:

click==6.7
httpie==0.9.9
parsimonious==0.7.0
pygments==2.2.0
six==1.10.0
wheel==0.30.0a0
setuptools==36.0.1

Stack analyses v2 response (part of it):

                "usage_outliers": [
...
... skipped
...
                    {
                        "outlier_prbability": 0.7802197802199999,
                        "package_name": "click"
                    }
                ]

According to the newest version of https://github.com/fabric8-analytics/fabric8-analytics-stack-analysis/blame/master/analytics_platform/kronos/pgm/src/pgm_constants.py#L22 the constant should be greater than 0.9

See fabric8-analytics/fabric8-analytics-stack-analysis@774f362

Incorrect HTTP status code for job delete API endpoint

curl -v -X DELETE localhost:34000/api/v1/jobs/TEST_TO_DELETE

returns:

HTTP/1.1 201 CREATED

According to the HTTP method definitions it would be better:

9.7 DELETE

A successful response SHOULD be 200 (OK) if the response includes an entity describing the status, 202 (Accepted) if the action has not yet been enacted, or 204 (No Content) if the action has been enacted but the response does not include an entity.

The following graph might be useful as well:
https://i.stack.imgur.com/whhD1.png

RFE: component-analysis API HTTP codes

Currently when you access the component-analysis API endpoint, the API responds with either:

200 OK if analysis is done

404 NOT FOUND if: analysis just started OR is in progress

Because this API endpoint is used for many things at once, I'd suggest using:

200 OK if analysis is done and available (as currently implemented)
201 CREATED or 202 ACCEPTED when a new analysis is to be started (i.e. ecosystem+component+version are OK); the former option follows RFC 7231
202 ACCEPTED if analysis is in progress
404 NOT FOUND if the ecosystem|component|version are not OK or in case of another error

WDYT?
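
A minimal Flask-RESTful sketch of the proposed scheme (illustrative only; is_valid, get_finished_analysis, analysis_in_progress, and schedule_analysis are hypothetical helpers, not the project's actual code):

from flask_restful import Resource, abort

class ComponentAnalysis(Resource):
    def get(self, ecosystem, package, version):
        if not is_valid(ecosystem, package, version):
            abort(404)  # unknown ecosystem/component/version, or another error
        result = get_finished_analysis(ecosystem, package, version)
        if result is not None:
            return result, 200  # analysis is done and available
        if analysis_in_progress(ecosystem, package, version):
            return {"status": "analysis in progress"}, 202
        schedule_analysis(ecosystem, package, version)
        return {"status": "analysis scheduled"}, 202  # or 201 CREATED, per RFC 7231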

Local deployment failure: ERROR: An HTTP request took too long to complete

The local deployment started to fail with the following error message:

worker-ingestion_1      | 2017-07-21 09:44:52,417 [DEBUG] CodeMetricsTask: Executing command, timeout=300: ['cr', '--format=json', '/var/lib/f8a_worker/worker_data/npm/wisp/0.10.0/extracted_package']
worker-ingestion_1      | 2017-07-21 09:44:52,417 [DEBUG] f8a_worker.utils: running command '['cr', '--format=json', '/var/lib/f8a_worker/worker_data/npm/wisp/0.10.0/extracted_package']'; timeout '300'
cvedb-s3-dump_1         | + upload /tmp/cvedb/vulndb.json ptisnovs-bayesian-core-snyk
cvedb-s3-dump_1         | + cvedb=/tmp/cvedb/vulndb.json
cvedb-s3-dump_1         | + bucket=ptisnovs-bayesian-core-snyk
cvedb-s3-dump_1         | + endpoint_arg='--endpoint-url http://coreapi-s3:33000'
cvedb-s3-dump_1         | + aws --endpoint-url http://coreapi-s3:33000 s3 ls
cvedb-s3-dump_1         | + grep ptisnovs-bayesian-core-snyk
cvedb-s3-dump_1         | 2017-07-21 10:09:28 ptisnovs-bayesian-core-snyk
cvedb-s3-dump_1         | + aws --endpoint-url http://coreapi-s3:33000 s3 cp /tmp/cvedb/vulndb.json s3://ptisnovs-bayesian-core-snyk
upload: ../tmp/cvedb/vulndb.json to s3://ptisnovs-bayesian-core-snyk/vulndb.json(s) remaining
fabric8analyticscommon_cvedb-s3-dump_1 exited with code 0
coreapi-pgbouncer       | 2017-07-21 10:15:26.944 13 LOG Stats: 0 req/s, in 12 b/s, out 18 b/s,query 7301 us
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
