allegroai / clearml

ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution

Home Page: https://clear.ml/docs

License: Apache License 2.0

Python 100.00%
version-control experiment-manager version control experiment deeplearning deep-learning machine-learning machinelearning ai

clearml's People

Contributors

ah-merii, ainoam, alex-burlacu-clear-ml, allegroai-git, chengzegang, cksac, danmalowany-allegro, daugihao, eliorc, erezalg, eugen-ajechiloae-clearml, eyalto, h4dr1en, hyamsg, jalexand3r, jday1, jkhenning, john-zielke-snkeos, make42, materight, mmiller-max, natephysics, pktiuk, pollfly, rizwan-hasan, shaked, thepycoder, tonyd, yiftachbeer, zhouzaida


clearml's Issues

Comparing scalar charts

Thanks for this awesome tool!

I'm trying to compare scalar values recorded across experiments, but since they all have the same name, it's hard to tell which experiment is which.

[Screenshot 2019-10-30 at 10:32:46]

It might be the way I'm logging; however, I'm unsure how to solve this. Any help would be appreciated!

trains with local server

trains was working for me with the demo server. I then installed the trains server using the Docker image on a remote machine, and used the 'trains-init' command to configure the connection to my server. On my first run I get the following exception:

    task=Task.init(project_name="Erlichsefi", task_name=current_case['name'])
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\task.py", line 227, in init
    reuse_last_task_id,
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\task.py", line 413, in _create_dev_task
    log_to_backend=True,
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\task.py", line 102, in __init__
    super(Task, self).__init__(**kwargs)
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_interface\task\task.py", line 79, in __init__
    super(Task, self).__init__(id=task_id, session=session, log=log)
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_interface\base.py", line 120, in __init__
    super(IdObjectBase, self).__init__(session, log, **kwargs)
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_interface\base.py", line 33, in __init__
    self._session = session or self._get_default_session()
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_interface\base.py", line 94, in _get_default_session
    secret_key=ENV_SECRET_KEY.get(),
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_api\session\session.py", line 143, in __init__
    self.refresh_token()
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_api\session\token_manager.py", line 95, in refresh_token
    self._set_token(self._do_refresh_token(self.__token, exp=self.req_token_expiration_sec))
  File "C:\Users\sefi.erlich\.virtualenvs\df-z3Y28GXl\lib\site-packages\trains\backend_api\session\session.py", line 522, in _do_refresh_token
    raise LoginError(str(ex))
trains.backend_api.session.session.LoginError: 'data'

any ideas?
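For reference, 'trains-init' writes the server settings into ~/trains.conf. A minimal sketch of the relevant section (the host name, ports and keys below are placeholders, not taken from this report; the 8008/8080/8081 ports are the self-hosted server's defaults) looks roughly like:

```
api {
    # Adjust the host and ports to your deployment.
    api_server: http://my-server:8008
    web_server: http://my-server:8080
    files_server: http://my-server:8081
    credentials {
        # Generated in the server's web UI (Profile page).
        access_key: "YOUR_ACCESS_KEY"
        secret_key: "YOUR_SECRET_KEY"
    }
}
```

A LoginError raised from refresh_token is commonly a sign that the credentials or server addresses in this file do not match the server actually deployed.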

Add support for multiple TensorBoard writers (e.g. train, val)

It is common practice, I believe, to have more than one TensorBoard writer in your training code.
For example, one for the training results and one for the validation results.
Both writers have the exact same scalars, graphs, plots, etc.
Currently the trains API doesn't support multiple writers with the same graph names.
TensorBoard splits the outputs in two, so you get two event streams for a single model (e.g. one with the prefix "train" and one with the prefix "val").
In TensorBoard it is then easy to compare train and val (in a single graph), or to filter out either of them.

A similar solution would be appreciated.

model_from_json behaves differently after trains is loaded

This piece of code behaves differently before/after connecting to trains:

        from keras import models
        from keras.applications.resnet50 import ResNet50
        model = ResNet50(weights=None)
        json_string = model.to_json()
        model = models.model_from_json(json_string)
        print(type(model))
        return

It returns a model of type keras.engine.network.Network, but it should return a keras.engine.training.Model, as it does when trains is not loaded.
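The symptom suggests import-time patching: trains wraps Keras internals to intercept model loading, and the wrapped name resolves to a different class. A minimal stdlib sketch (purely illustrative, not trains' actual internals; all names here are stand-ins) of how patching a module-level name changes the type a factory returns:

```python
# Illustrative only: patching the name a factory resolves through
# changes the type callers get back, similar to the reported issue.

class Model:
    """Stands in for keras.engine.training.Model."""

class Network(Model):
    """Stands in for keras.engine.network.Network."""

def model_from_json(_json_string):
    # The factory looks up the class through a module-level name,
    # so rebinding that name changes the returned type.
    return _MODEL_CLS()

_MODEL_CLS = Model
before = type(model_from_json("{}")).__name__   # 'Model'

_MODEL_CLS = Network   # a patch, e.g. applied when a library is imported
after = type(model_from_json("{}")).__name__    # 'Network'

print(before, after)
```

Callers that check the type with an exact class comparison (rather than isinstance) then break, which matches the behavior described above.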

conda support

Any chance I can use Anaconda to resolve trains' dependencies?

Bug with automatic upload of matplotlib plots

plt.imshow(image)
plt.title("title")

This works and uploads the image automatically to the debug_images

plt.title("title")
plt.imshow(image)

This doesn't upload the image automatically.

Tracking specific environment variables

Is there a way to provide a default list of environment variables (not part of the args) that are tracked along with the experiment?
If not, can you add this as a feature request?
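Until such a feature exists, a small stdlib helper can snapshot a whitelist of environment variables into a dict, which could then be logged as experiment parameters by whatever mechanism you already use (the variable names below are placeholders):

```python
import os

def snapshot_env(names):
    """Return {name: value} for the given environment variables,
    silently skipping any that are unset."""
    return {n: os.environ[n] for n in names if n in os.environ}

# Example: capture a couple of (hypothetical) variables of interest.
os.environ["TRACKED_EXAMPLE_VAR"] = "/data"
tracked = snapshot_env(["TRACKED_EXAMPLE_VAR", "CUDA_VISIBLE_DEVICES"])
print(tracked)
```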

Using multiple task names in a single run

Hello,
First, thanks for this project, it looks very helpful.

I'd like to run a cross-validation loop and save the results under the same project name but with a different task name for each fold.

Is there a way to tell trains that I have finished one task and am starting another?
When I try to call Task.init() again with the same project name and a different task name, I get an error:

Traceback (most recent call last):
  File "/media/ophir/DATA1/***/***/***/***/main.py", line 84, in <module>
    descrip = ''
  File "/media/ophir/DATA1/software/anaconda3/envs/py36/lib/python3.6/site-packages/trains/task.py", line 167, in init
    verify_defaults_match()
  File "/media/ophir/DATA1/software/anaconda3/envs/py36/lib/python3.6/site-packages/trains/task.py", line 161, in verify_defaults_match
    current=current,
trains.errors.UsageError: Current task already created and requested task name 'validIter1' does not match current task name 'validIter0'   

Thanks
Ophir

Using trains on a batch-job system

Hi, I appreciate trains; it has been helping my research projects.

Now I want to use trains on a batch-job system consisting of a login node and several GPU nodes. I created trains.conf on the login node and submitted a job using trains to a GPU node. Then I got the following error:

Retrying (Retry(total=239, connect=239, read=240, redirect=240, status=240)) after connection broken by 'ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x2aaf57a2cc50>, 'Connection to XXX.XXX.XXX.XXX timed out. (connect timeout=3.0)')': /auth.login

I checked that XXX.XXX.XXX.XXX is reachable from the GPU node where I ran the job.

I also followed the answer in #7 and modified a line of trains.conf to

api { verify_certificate = False } 

But I got the same message.

It would be helpful if you have any advice. Thank you in advance.

Model snapshots

Hi,

According to the readme, trains supports:

Model snapshots (With optional automatic upload to central storage: Shared folder, S3, GS, Azure, Http)

Indeed, I do see references to Google, Azure and AWS in the configuration example, but no examples for HTTP or a shared folder.

I'd like to store my models on a Nexus server, which supports file upload through HTTP POST.

Is this possible?

Better way to handle exception?

Hey guys, first of all, thanks for this amazing open-source tool!

I was wondering if there might be a better way to handle the failed-credentials exception in the StorageHelper class. I just spent a couple of hours trying to figure out what was wrong with my S3 credentials. I added a print statement to the exception handler, and it turned out I didn't have boto3 installed in my new environment:
AWS S3 storage driver (boto3) not found. Please install driver using "pip install 'boto3>=1.9'". I'm pretty sure there are other cases where this exception is caused by something other than S3 credential problems.
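One conventional remedy (a generic Python pattern, not trains' actual code; the class and function names below are hypothetical) is to re-raise with exception chaining, so the root cause survives instead of being masked by a misleading credentials message:

```python
class StorageError(Exception):
    """Stand-in for a storage-layer error type."""

def get_driver():
    # Simulate the real failure mode: the driver package is missing.
    raise ImportError("AWS S3 storage driver (boto3) not found")

def upload(path):
    try:
        get_driver()
    except ImportError as ex:
        # 'raise ... from ex' preserves the original error in __cause__,
        # so a missing package is no longer misreported as bad credentials.
        raise StorageError("upload of %r failed: %s" % (path, ex)) from ex

try:
    upload("model.pkl")
except StorageError as err:
    print(err)
    print("caused by:", err.__cause__)
```

The default traceback for a chained exception also prints both errors, which would have made the missing boto3 obvious immediately.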

Progress bar for training

Hi,
It would be convenient to have an ongoing progress bar for each task in the UI, based on the number of epochs, the number of batches per epoch and the batch size. Is there a possibility of adding this in the future?

Add support for SKlearn

Hey guys.

I'd like to use this mostly for scikit-learn, e.g. grid-search support, allowing parameter recording for search iterations. Please advise.

pip install trains fails on Amazon Linux AMI

I got the following error when trying to install trains on an Amazon Linux AMI:
$ pip install trains --user
Collecting trains
Using cached https://files.pythonhosted.org/packages/9c/f0/2c3e1e8a765fb1a8f2714530bce27c11d35a6d640928dd7608466ef3427a/trains-0.9.2-py2.py3-none-any.whl
Collecting enum34>=0.9 (from trains)
Downloading https://files.pythonhosted.org/packages/af/42/cb9355df32c69b553e72a2e28daee25d1611d2c0d9c272aa1d34204205b2/enum34-1.1.6-py3-none-any.whl
Collecting apache-libcloud>=2.2.1 (from trains)
Downloading https://files.pythonhosted.org/packages/07/c3/ddbd6b48395d0825f9910e5724a098e9de1e720d4e7124625a8aff1eedec/apache_libcloud-2.5.0-py2.py3-none-any.whl (3.1MB)
100% |████████████████████████████████| 3.1MB 13.2MB/s
Collecting plotly>=3.9.0 (from trains)
Downloading https://files.pythonhosted.org/packages/ff/75/3982bac5076d0ce6d23103c03840fcaec90c533409f9d82c19f54512a38a/plotly-3.10.0-py2.py3-none-any.whl (41.5MB)
100% |████████████████████████████████| 41.5MB 1.2MB/s
Collecting funcsigs>=1.0 (from trains)
Downloading https://files.pythonhosted.org/packages/69/cb/f5be453359271714c01b9bd06126eaf2e368f1fddfff30818754b5ac2328/funcsigs-1.0.2-py2.py3-none-any.whl
Requirement already satisfied: boto3>=1.9 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (1.9.123)
Requirement already satisfied: botocore>=1.12 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (1.12.123)
Collecting pyhocon>=0.3.38 (from trains)
Downloading https://files.pythonhosted.org/packages/3f/35/34e16968df0b8b65d3696d80b8add0aaffd4f0461c1ef3c0f066fdc747e8/pyhocon-0.3.51.tar.gz (70kB)
100% |████████████████████████████████| 71kB 27.2MB/s
Requirement already satisfied: python-dateutil>=2.6.1 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (2.8.0)
Collecting attrs>=18.0 (from trains)
Downloading https://files.pythonhosted.org/packages/23/96/d828354fa2dbdf216eaa7b7de0db692f12c234f7ef888cc14980ef40d1d2/attrs-19.1.0-py2.py3-none-any.whl
Collecting opencv-python>=3.2.0.8 (from trains)
Downloading https://files.pythonhosted.org/packages/7b/d2/a2dbf83d4553ca6b3701d91d75e42fe50aea97acdc00652dca515749fb5d/opencv_python-4.1.0.25-cp36-cp36m-manylinux1_x86_64.whl (26.6MB)
100% |████████████████████████████████| 26.6MB 1.9MB/s
Collecting pyjwt>=1.6.4 (from trains)
Downloading https://files.pythonhosted.org/packages/87/8b/6a9f14b5f781697e51259d81657e6048fd31a113229cf346880bb7545565/PyJWT-1.7.1-py2.py3-none-any.whl
Collecting future>=0.16.0 (from trains)
Downloading https://files.pythonhosted.org/packages/90/52/e20466b85000a181e1e144fd8305caf2cf475e2f9674e797b222f8105f5f/future-0.17.1.tar.gz (829kB)
100% |████████████████████████████████| 829kB 29.0MB/s
Collecting google-cloud-storage>=1.13.2 (from trains)
Using cached https://files.pythonhosted.org/packages/e2/4e/aee59b19321eb1063317c2e6fa4c2f3cfe21740586de78578eedbd2bed3d/google_cloud_storage-1.16.1-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.10 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (1.16.2)
Collecting humanfriendly>=2.1 (from trains)
Downloading https://files.pythonhosted.org/packages/90/df/88bff450f333114680698dc4aac7506ff7cab164b794461906de31998665/humanfriendly-4.18-py2.py3-none-any.whl (73kB)
100% |████████████████████████████████| 81kB 29.0MB/s
Collecting typing>=3.6.4 (from trains)
Downloading https://files.pythonhosted.org/packages/4a/bd/eee1157fc2d8514970b345d69cb9975dcd1e42cd7e61146ed841f6e68309/typing-3.6.6-py3-none-any.whl
Collecting coloredlogs>=10.0 (from trains)
Downloading https://files.pythonhosted.org/packages/08/0f/7877fc42fff0b9d70b6442df62d53b3868d3a6ad1b876bdb54335b30ff23/coloredlogs-10.0-py2.py3-none-any.whl (47kB)
100% |████████████████████████████████| 51kB 23.9MB/s
Collecting jsonschema>=2.6.0 (from trains)
Downloading https://files.pythonhosted.org/packages/aa/69/df679dfbdd051568b53c38ec8152a3ab6bc533434fc7ed11ab034bf5e82f/jsonschema-3.0.1-py2.py3-none-any.whl (54kB)
100% |████████████████████████████████| 61kB 26.1MB/s
Requirement already satisfied: urllib3>=1.22 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (1.24.1)
Collecting psutil>=3.4.2 (from trains)
Downloading https://files.pythonhosted.org/packages/1c/ca/5b8c1fe032a458c2c4bcbe509d1401dca9dda35c7fc46b36bb81c2834740/psutil-5.6.3.tar.gz (435kB)
100% |████████████████████████████████| 440kB 28.5MB/s
Collecting requests-file>=1.4.2 (from trains)
Downloading https://files.pythonhosted.org/packages/23/9c/6e63c23c39e53d3df41c77a3d05a49a42c4e1383a6d2a5e3233161b89dbf/requests_file-1.4.3-py2.py3-none-any.whl
Collecting furl>=2.0.0 (from trains)
Downloading https://files.pythonhosted.org/packages/bd/b6/302ecc007de38274509d6397300afd2e274aba7f1c3c0a165b5f1e1a836a/furl-2.0.0-py2.py3-none-any.whl
Collecting pathlib2>=2.3.0 (from trains)
Downloading https://files.pythonhosted.org/packages/2a/46/c696dcf1c7aad917b39b875acdc5451975e3a9b4890dca8329983201c97a/pathlib2-2.3.3-py2.py3-none-any.whl
Requirement already satisfied: six>=1.11.0 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (1.12.0)
Collecting PyYAML>=3.12 (from trains)
Using cached https://files.pythonhosted.org/packages/a3/65/837fefac7475963d1eccf4aa684c23b95aa6c1d033a2c5965ccb11e22623/PyYAML-5.1.1.tar.gz
Collecting colorama>=0.4.1 (from trains)
Downloading https://files.pythonhosted.org/packages/4f/a6/728666f39bfff1719fc94c481890b2106837da9318031f71a8424b662e12/colorama-0.4.1-py2.py3-none-any.whl
Collecting tqdm>=4.19.5 (from trains)
Using cached https://files.pythonhosted.org/packages/45/af/685bf3ce889ea191f3b916557f5677cc95a5e87b2fa120d74b5dd6d049d0/tqdm-4.32.1-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.18.4 in /home/ec2-user/.local/lib/python3.6/site-packages (from trains) (2.21.0)
Collecting jsonmodels>=2.2 (from trains)
Downloading https://files.pythonhosted.org/packages/e9/c4/93ce38601474210eeb97b50c7f65d48827ee19f5e7b6e51b63b3684059df/jsonmodels-2.4-py2.py3-none-any.whl
Collecting watchdog>=0.8.0 (from trains)
Downloading https://files.pythonhosted.org/packages/bb/e3/5a55d48a29300160779f0a0d2776d17c1b762a2039b36de528b093b87d5b/watchdog-0.9.0.tar.gz (85kB)
100% |████████████████████████████████| 92kB 29.3MB/s
Requirement already satisfied: pytz in /home/ec2-user/.local/lib/python3.6/site-packages (from plotly>=3.9.0->trains) (2018.9)
Collecting decorator>=4.0.6 (from plotly>=3.9.0->trains)
Downloading https://files.pythonhosted.org/packages/5f/88/0075e461560a1e750a0dcbf77f1d9de775028c37a19a346a6c565a257399/decorator-4.4.0-py2.py3-none-any.whl
Collecting retrying>=1.3.3 (from plotly>=3.9.0->trains)
Downloading https://files.pythonhosted.org/packages/44/ef/beae4b4ef80902f22e3af073397f079c96969c69b2c7d52a57ea9ae61c9d/retrying-1.3.3.tar.gz
Collecting nbformat>=4.2 (from plotly>=3.9.0->trains)
Downloading https://files.pythonhosted.org/packages/da/27/9a654d2b6cc1eaa517d1c5a4405166c7f6d72f04f6e7eea41855fe808a46/nbformat-4.4.0-py2.py3-none-any.whl (155kB)
100% |████████████████████████████████| 163kB 34.4MB/s
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/.local/lib/python3.6/site-packages (from boto3>=1.9->trains) (0.9.4)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /home/ec2-user/.local/lib/python3.6/site-packages (from boto3>=1.9->trains) (0.2.0)
Requirement already satisfied: docutils>=0.10 in /home/ec2-user/.local/lib/python3.6/site-packages (from botocore>=1.12->trains) (0.14)
Requirement already satisfied: pyparsing>=2.0.3 in /home/ec2-user/.local/lib/python3.6/site-packages (from pyhocon>=0.3.38->trains) (2.2.0)
Collecting google-auth>=1.2.0 (from google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/c5/9b/ed0516cc1f7609fb0217e3057ff4f0f9f3e3ce79a369c6af4a6c5ca25664/google_auth-1.6.3-py2.py3-none-any.whl (73kB)
100% |████████████████████████████████| 81kB 31.3MB/s
Collecting google-cloud-core<2.0dev,>=1.0.0 (from google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/98/7f/ff56aec313787577e262d5a2e306c04aef61c5c274699ff9fb40095e6691/google_cloud_core-1.0.2-py2.py3-none-any.whl
Collecting google-resumable-media>=0.3.1 (from google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/e2/5d/4bc5c28c252a62efe69ed1a1561da92bd5af8eca0cdcdf8e60354fae9b29/google_resumable_media-0.3.2-py2.py3-none-any.whl
Collecting pyrsistent>=0.14.0 (from jsonschema>=2.6.0->trains)
Downloading https://files.pythonhosted.org/packages/68/0b/f514e76b4e074386b60cfc6c8c2d75ca615b81e415417ccf3fac80ae0bf6/pyrsistent-0.15.2.tar.gz (106kB)
100% |████████████████████████████████| 112kB 34.8MB/s
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/site-packages (from jsonschema>=2.6.0->trains) (28.8.0)
Collecting orderedmultidict>=1.0 (from furl>=2.0.0->trains)
Downloading https://files.pythonhosted.org/packages/05/70/9f0a8867d4d98becf60dc5707e10b39930747ee914dae46414b69e33a266/orderedmultidict-1.0-py2.py3-none-any.whl
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /home/ec2-user/.local/lib/python3.6/site-packages (from requests>=2.18.4->trains) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/.local/lib/python3.6/site-packages (from requests>=2.18.4->trains) (2019.3.9)
Requirement already satisfied: idna<2.9,>=2.5 in /home/ec2-user/.local/lib/python3.6/site-packages (from requests>=2.18.4->trains) (2.8)
Collecting argh>=0.24.1 (from watchdog>=0.8.0->trains)
Downloading https://files.pythonhosted.org/packages/06/1c/e667a7126f0b84aaa1c56844337bf0ac12445d1beb9c8a6199a7314944bf/argh-0.26.2-py2.py3-none-any.whl
Collecting pathtools>=0.1.1 (from watchdog>=0.8.0->trains)
Downloading https://files.pythonhosted.org/packages/e7/7f/470d6fcdf23f9f3518f6b0b76be9df16dcc8630ad409947f8be2eb0ed13a/pathtools-0.1.2.tar.gz
Collecting traitlets>=4.1 (from nbformat>=4.2->plotly>=3.9.0->trains)
Downloading https://files.pythonhosted.org/packages/93/d6/abcb22de61d78e2fc3959c964628a5771e47e7cc60d53e9342e21ed6cc9a/traitlets-4.3.2-py2.py3-none-any.whl (74kB)
100% |████████████████████████████████| 81kB 29.6MB/s
Collecting ipython-genutils (from nbformat>=4.2->plotly>=3.9.0->trains)
Downloading https://files.pythonhosted.org/packages/fa/bc/9bd3b5c2b4774d5f33b2d544f1460be9df7df2fe42f352135381c347c69a/ipython_genutils-0.2.0-py2.py3-none-any.whl
Collecting jupyter-core (from nbformat>=4.2->plotly>=3.9.0->trains)
Downloading https://files.pythonhosted.org/packages/1d/44/065d2d7bae7bebc06f1dd70d23c36da8c50c0f08b4236716743d706762a8/jupyter_core-4.4.0-py2.py3-none-any.whl (126kB)
100% |████████████████████████████████| 133kB 36.4MB/s
Collecting pyasn1-modules>=0.2.1 (from google-auth>=1.2.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/91/f0/b03e00ce9fddf4827c42df1c3ce10c74eadebfb706231e8d6d1c356a4062/pyasn1_modules-0.2.5-py2.py3-none-any.whl (74kB)
100% |████████████████████████████████| 81kB 29.4MB/s
Collecting rsa>=3.1.4 (from google-auth>=1.2.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl
Collecting cachetools>=2.0.0 (from google-auth>=1.2.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/2f/a6/30b0a0bef12283e83e58c1d6e7b5aabc7acfc4110df81a4471655d33e704/cachetools-3.1.1-py2.py3-none-any.whl
Collecting google-api-core<2.0.0dev,>=1.11.0 (from google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/10/d6/8b1e8d79a8a56649af3a094e3d90dd213278da942f36d831b57c0ca4a503/google_api_core-1.11.1-py2.py3-none-any.whl (66kB)
100% |████████████████████████████████| 71kB 30.0MB/s
Collecting pyasn1<0.5.0,>=0.4.1 (from pyasn1-modules>=0.2.1->google-auth>=1.2.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/7b/7c/c9386b82a25115cccf1903441bba3cbadcfae7b678a20167347fa8ded34c/pyasn1-0.4.5-py2.py3-none-any.whl (73kB)
100% |████████████████████████████████| 81kB 31.9MB/s
Collecting googleapis-common-protos!=1.5.4,<2.0dev,>=1.5.3 (from google-api-core<2.0.0dev,>=1.11.0->google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/eb/ee/e59e74ecac678a14d6abefb9054f0bbcb318a6452a30df3776f133886d7d/googleapis-common-protos-1.6.0.tar.gz
Collecting protobuf>=3.4.0 (from google-api-core<2.0.0dev,>=1.11.0->google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage>=1.13.2->trains)
Downloading https://files.pythonhosted.org/packages/d2/fb/29de8d08967f0cce1bb10b39846d836b0f3bf6776ddc36aed7c73498ca7e/protobuf-3.8.0-cp36-cp36m-manylinux1_x86_64.whl (1.2MB)
100% |████████████████████████████████| 1.2MB 23.3MB/s
google-api-core 1.11.1 has requirement setuptools>=34.0.0, but you'll have setuptools 28.8.0 which is incompatible.
Installing collected packages: enum34, apache-libcloud, decorator, retrying, pyrsistent, attrs, jsonschema, ipython-genutils, traitlets, jupyter-core, nbformat, plotly, funcsigs, pyhocon, opencv-python, pyjwt, future, pyasn1, pyasn1-modules, rsa, cachetools, google-auth, protobuf, googleapis-common-protos, google-api-core, google-cloud-core, google-resumable-media, google-cloud-storage, humanfriendly, typing, coloredlogs, psutil, requests-file, orderedmultidict, furl, pathlib2, PyYAML, colorama, tqdm, jsonmodels, argh, pathtools, watchdog, trains
Running setup.py install for retrying ... done
Running setup.py install for pyrsistent ... done
Running setup.py install for pyhocon ... done
Running setup.py install for future ... done
Running setup.py install for googleapis-common-protos ... error
Complete output from command /usr/local/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-uvetk06x/googleapis-common-protos/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-ji5tpf0g/install-record.txt --single-version-externally-managed --compile --user --prefix=:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/setuptools/__init__.py", line 10, in <module>
from setuptools.extern.six.moves import filter, filterfalse, map
File "/usr/local/lib/python3.6/site-packages/setuptools/extern/__init__.py", line 1, in <module>
from pkg_resources.extern import VendorImporter
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3017, in <module>
@_call_aside
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3003, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3045, in _initialize_master_working_set
dist.activate(replace=False)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2577, in activate
declare_namespace(pkg)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2151, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2091, in _handle_ns
_rebuild_mod_path(path, packageName, module)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2120, in _rebuild_mod_path
orig_path.sort(key=position_in_sys_path)
AttributeError: '_NamespacePath' object has no attribute 'sort'

----------------------------------------

Command "/usr/local/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-uvetk06x/googleapis-common-protos/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-ji5tpf0g/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-install-uvetk06x/googleapis-common-protos/

run manual_reporting.py breaks

manual_reporting.py breaks now; it seems to be a Python 2/3 compatibility issue.
I ran the code using Python 3.7.

$ python manual_reporting.py 
TRAINS Task: created new task id=a359fbe1bf4d43c4b99a0ab282be3a3e
TRAINS results page: None/projects/df19a47fd49248f2b6c4a1cd0163166d/experiments/a359fbe1bf4d43c4b99a0ab282be3a3e/output/log
DEBUG:root:This is a debug message
INFO:root:This is an info message
WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message
hello
Traceback (most recent call last):
  File "manual_reporting.py", line 46, in <module>
    logger.report_image_and_upload("test case", "image uint", iteration=1, matrix=m)
  File "/home/user/anaconda3/envs/trains_py37/lib/python3.7/site-packages/trains/logger.py", line 45, in fixed_names
    func(self, title, series, *args, **kwargs)
  File "/home/user/anaconda3/envs/trains_py37/lib/python3.7/site-packages/trains/logger.py", line 517, in report_image_and_upload
    upload_uri = self._default_upload_destination or self._task._get_default_report_storage_uri()
  File "/home/user/anaconda3/envs/trains_py37/lib/python3.7/site-packages/trains/backend_interface/task/task.py", line 675, in _get_default_report_storage_uri
    elif parsed.netloc.startswith('demoapp.'):
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
======> WARNING! UNCOMMITTED CHANGES IN REPOSITORY https://github.com/allegroai/trains.git <======

Color channel order in debug images with TensorBoardX

I'm using TRAINS without any changes to code written with PyTorch + tensorboardX.
It almost works well; however, the debug images have a different color channel order from tensorboardX's, i.e. BGR.
Considering compatibility between torchvision and PIL, I think it should be RGB.

Tags for tasks

Hello,
Is it possible to add "tags" to different tasks? It would be convenient to search among tasks by their tags (for example, show all tasks with the tag "adam") and then compare them more easily, without needing an over-descriptive task_name.

'Compare Experiments' feature doesn't display scalar charts

I'm having issues with the 'Compare Experiments' feature (TRAINS v0.10.7).

After selecting multiple experiments, clicking 'Compare' and navigating to the 'Scalar Charts' tab, I get a blank content page:

[screenshot]

This bug is not consistent: sometimes the charts render correctly and sometimes they don't.
I tried refreshing the page and logging out of and back into the panel multiple times while comparing the same experiments, but the charts wouldn't load (it also occurs with other finished experiments).

Queue models to train

I can't find this information in the docs.

Can I queue, e.g., 10 models with different hyperparameters and have trains run them one by one?

trains doesn't support keras model save

Hello

When I use Keras' model.save() method, trains doesn't detect it automatically and I have to log the model manually.
Any chance you can support this?
Attached is the code I tried it with and the data it needs (it's just sample code, not my own).

Please help :)

keras_model.zip

Parallel Coordinates Plot

Feature request: the ability to create a parallel coordinates plot in TRAINS in order to compare several hyperparameters with respect to a specific metric. A similar feature is available in TensorBoard and MLflow.

SVN support

This is a feature request.

Currently the trains server supports git as a version control platform; in other words, when the script of interest is connected to the trains server in a git folder, trains-agent creates an environment by cloning that git repository.

Do you have any plans to add Subversion (SVN) support in addition to git?

Distributed training across multiple nodes

Hi,
I use Horovod for distributed training across multiple nodes.
Is there any way to log resource monitoring metrics and environment information from ranks other than rank 0? I also think it would be useful to check that the source code of the training script is the same on the different nodes.

python2.7 ImportError: cannot import name timezone on 0.11.1

When using trains 0.11.1, we encountered an import error in a Python 2.7 environment:

Python 2.7.15 |Anaconda, Inc.| (default, Nov 13 2018, 23:04:45) 
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import trains
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/__init__.py", line 4, in <module>
    from .task import Task
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/task.py", line 14, in <module>
    from .binding.joblib_bind import PatchedJoblib
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/binding/joblib_bind.py", line 8, in <module>
    from ..binding.frameworks import _patched_call, _Empty, WeightsFileHandler
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/binding/frameworks/__init__.py", line 9, in <module>
    from ...model import InputModel, OutputModel
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/model.py", line 15, in <module>
    from .backend_interface.util import validate_dict, get_single_result, mutually_exclusive
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/backend_interface/__init__.py", line 2, in <module>
    from .task import Task
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/backend_interface/task/__init__.py", line 1, in <module>
    from .task import Task
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/backend_interface/task/task.py", line 21, in <module>
    from ..model import Model
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/backend_interface/model.py", line 12, in <module>
    from .util import make_message
  File "/home/wei/anaconda3/envs/training/lib/python2.7/site-packages/trains/backend_interface/util.py", line 4, in <module>
    from datetime import datetime, timezone
ImportError: cannot import name timezone

However, it works fine in 0.11.0.
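For reference, a minimal sketch of a Python 2/3 compatible import pattern that would sidestep the error (the `_UTC` class name is just illustrative, not trains code):

```python
# datetime.timezone only exists on Python 3, so fall back to an
# equivalent hand-built UTC tzinfo on Python 2.
from datetime import timedelta, tzinfo

try:
    from datetime import timezone
    utc = timezone.utc
except ImportError:  # Python 2
    class _UTC(tzinfo):
        def utcoffset(self, dt):
            return timedelta(0)

        def tzname(self, dt):
            return "UTC"

        def dst(self, dt):
            return timedelta(0)

    utc = _UTC()
```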

Default installation of opencv-python conflicts with existing opencv version

Description: Trains automatically installs the opencv-python package, which conflicts with existing OpenCV installations. Moreover, this is not an official release (as quoted on PyPI: "Unofficial pre-built OpenCV packages for Python.")
The same might also happen for the numpy installation.
Problem: the new installation prevents us from using specific official OpenCV versions.
In our case it is even worse, as we build our own OpenCV from source code.
Possible solutions:

  1. Check if opencv is already installed before installing it
  2. Remove opencv from requirements.txt and ask user to install manually
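A minimal sketch of solution 1 (the `needs_install` helper is hypothetical, not a trains API): check importability before deciding to install anything.

```python
import importlib.util

def needs_install(module_name):
    """Return True when module_name cannot be imported, i.e. installing
    the corresponding pip package would actually be required."""
    return importlib.util.find_spec(module_name) is None
```

trains could call `needs_install('cv2')` before adding opencv-python to its requirements, leaving a locally built OpenCV untouched.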

Environment:
Linux, Ubuntu 16.04 (using standard pip install)
python 3.6
Opencv 4.0.1 (modified and locally built)
Trains v0.10.1

How can I remove an experiment from the project?

How can I remove an experiment from the project?
Sorry to bother you, but I can't find a way to remove an experiment after several tries.
I have only found a way to delete a project.
Thanks a lot!

Comparing experiments with time axis instead of iterations

I would like to compare experiments with time on the X axis instead of iterations.
This can take two forms, each with its own use:

  1. With t=0 for all experiments: makes it possible to see which of several similar experiments is faster.
  2. With t as the actual time and date: shows that experiments had issues at a certain time of day, for example due to network congestion or a hardware failure.

If it is already possible and I missed it, I apologize :-)

Thanks a lot!

If I “stop” the experiment in the web-app, process continues to run

Issue description: Code running on a Windows 10 64-bit machine; if I "stop" the experiment in the web app (right click -> "stop"), the process continues to run (as if the "stop" command never happened).

Expected behaviour: I expected the experiment to terminate.

Environment: Windows 10, python==3.6.7, tensorflow==1.10.0, trains==0.9.0

Continuing a previous task while preserving its charts

Hi,
Suppose I want to continue a "completed" experiment. When I run it, I give it the checkpoint's path and the same task_name as the completed experiment. Is there a way to continue this specific experiment (for example, to train for more epochs) so that the results/charts of the previous run are "concatenated" before the new run's results? Either with the task_name, or with some token/id that I see is generated whenever a new task is run.

Deprecation warnings whilst using trains

Hey guys! During training, when trains is enabled, I keep getting spammed with these messages in stdout. Any idea what could be causing this? I'll investigate further to see what the issue is, but any help would be appreciated.

/local/sean.narenthiran/anaconda3/lib/python3.7/site-packages/jsonschema/validators.py:896: DeprecationWarning:

The types argument is deprecated. Provide a type_checker to jsonschema.validators.extend instead.
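Until the underlying jsonschema usage is updated, one stopgap (a workaround sketch, not a fix) is to silence that specific warning:

```python
import warnings

# Suppress only the specific jsonschema deprecation message,
# leaving all other warnings visible.
warnings.filterwarnings(
    "ignore",
    message="The types argument is deprecated",
    category=DeprecationWarning,
)
```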

Code throws exception but task status is not set to “failed”

Issue description: Code throws an exception (not caught) and exits with an error, but the task status changes to "completed" instead of "failed".

Expected behavior: I expect the status to be changed to “failed”.

Environment: Windows 10, python==3.6.7, tensorflow==1.10.0

Requirements.txt and git+ssh python package URLs

Hi,

In my python requirements.txt, I have some dependencies hosted on private git repositories.

# content of requirements.txt
git+ssh://MyGitServer/MyPackage.git@12365

I would like to clone and run experiments on multiple workers thanks to trains-agent.
It seems trains detects only the package name and version and has no clue about the package source. Therefore the workers fail to install my script's dependencies.

Any hint?

Thanks

make Trains intranet and k8s friendly

Hi there,
Great project and thank you so much for sharing it!
I'm trying to set up the Trains services on our intranet-based (internet-blocked) K8S cluster and face the following issues:

  1. the Trains docker container (for web) has hard-coded backend server ports (8008 etc.), which makes it hard for K8S to dynamically link the Trains web server and backend service; could you make the ports configurable externally?
  2. There are some internet resource links in the webapp which cannot be loaded in our intranet environment. Would it be possible to pack all resources inside the docker images? Or to provide an "offline" mode to inject pre-downloaded resources?

Thanks and Best Regards.

zhikai

Potential dependency conflicts between trains and urllib3

Hi, trains directly and transitively introduces multiple versions of urllib3.

As shown in the following full dependency graph of trains, trains requires urllib3 (the latest version), while the installed version of requests (2.22.0) requires urllib3>=1.21.1,<1.26.

According to pip's "first found wins" installation strategy, urllib3 1.25.3 is the version actually installed.

Although the first found package version, urllib3 1.25.3, happens to satisfy the later dependency constraint (urllib3>=1.21.1,<1.26), it will lead to a build failure once developers release a newer version of urllib3.
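The failure mode above can be checked mechanically. A simplified stdlib sketch (a stand-in for pip's real specifier logic, which also handles pre-releases and epochs):

```python
def parse(version):
    """Turn '1.25.3' into a comparable tuple (1, 25, 3)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower, upper):
    """Check a version against a '>=lower,<upper' style constraint."""
    return parse(lower) <= parse(version) < parse(upper)

print(satisfies("1.25.3", "1.21.1", "1.26"))  # True: the current pin happens to fit
print(satisfies("1.26.0", "1.21.1", "1.26"))  # False: a future release breaks it
```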

Dependency tree:

trains(version range:)
| +-apache-libcloud(version range:>=2.2.1)
| +-attrs(version range:>=18.0)
| +-backports.functools-lru-cache(version range:>=1.0.2)
| +-boto3(version range:>=1.9)
| +-botocore(version range:>=1.12)
| +-enum34(version range:>=0.9)
| +-funcsigs(version range:>=1.0)
| +-furl(version range:>=2.0.0)
| | +-six(version range:>=1.8.0)
| | +-orderedmultidict(version range:>=1.0)
| +-future(version range:>=0.16.0)
| +-futures(version range:>=3.0.5)
| +-google-cloud-storage(version range:>=1.13.2)
| +-humanfriendly(version range:>=2.1)
| +-jsonmodels(version range:>=2.2)
| +-jsonschema(version range:>=2.6.0)
| +-numpy(version range:>=1.10)
| +-opencv-python(version range:>=3.2.0.8)
| +-pathlib2(version range:>=2.3.0)
| +-pigar(version range:>=0.9.2)
| | +-colorama(version range:>=0.3.9)
| | +-requests(version range:>=2.20.0)
| | | +-chardet(version range:>=3.0.2,<3.1.0)
| | | +-idna(version range:>=2.5,<2.9)
| | | +-urllib3(version range:>=1.21.1,<1.26)
| | | +-certifi(version range:>=2017.4.17)
| +-plotly(version range:>=3.9.0)
| +-psutil(version range:>=3.4.2)
| +-pyhocon(version range:>=0.3.38)
| | +-pyparsing(version range:>=2.0.3)
| +-pyjwt(version range:>=1.6.4)
| +-python-dateutil(version range:>=2.6.1)
| +-pyyaml(version range:>=3.12)
| +-requests(version range:>=2.20.0)
| | +-chardet(version range:>=3.0.2,<3.1.0)
| | +-idna(version range:>=2.5,<2.9)
| | +-urllib3(version range:>=1.21.1,<1.26)
| | +-certifi(version range:>=2017.4.17)
| +-requests-file(version range:>=1.4.2)
| +-six(version range:>=1.11.0)
| +-tqdm(version range:>=4.19.5)
| +-typing(version range:>=3.6.4)
| +-urllib3(version range:>=1.22)

Thanks for your attention.
Best,
Neolith

X vs Y plots

I currently use mlflow, and the functionality I use the most is the ability to make quick X vs Y scatter plots. Let's say I select 10 runs and want to compare them. The X vs Y plots allow me to compare any value. For example, I usually register the total number of parameters, so I can quickly plot performance vs number of parameters and decide whether I should try a bigger/smaller model. Or number of iterations vs performance, or choice of optimizer vs whatever. It allows me to easily explore and assess future options.

I mostly use it to compare various statistics of the model (number of parameters, layers etc.) vs various metrics I am interested in.

Having this functionality in trains would be really helpful!

(Also, how should I save information about my model, like number of parameters, optimizer, parameters for the optimizer, number of layers, etc.?)

Unstable connection causes training to freeze

Hi,
I tried to use TRAINS (v0.10.7) in an environment with an unstable connection to the internet.
While getting a lot of 'retrying to connect' messages, my training steps are delayed (or stall).

I tried to set the following configuration in /trains/backend_api/config/default/api.conf to minimize the number of those messages:

http {
        max_req_size = 15728640  # request size limit (smaller than that configured in api server)

        retries {
            # retry values (int, 0 means fail on first retry)
            total: 0     # Total number of retries to allow. Takes precedence over other counts.
            connect: 0   # How many times to retry on connection-related errors (never reached server)
            read: 0      # How many times to retry on read errors (waiting for server)
            redirect: 0  # How many redirects to perform (HTTP response with a status code 301, 302, 303, 307 or 308)
            status: 0   # How many times to retry on bad status codes

            # backoff parameters
            # timeout between retries is min({backoff_max}, {backoff factor} * (2 ^ ({number of total retries} - 1))
            backoff_factor: 1000.0
            backoff_max: 1.0
        }

        wait_on_maintenance_forever: false

        pool_maxsize: 512
        pool_connections: 512
    }

After cutting off the internet connection in the middle of a training, it froze completely with an infinitely repeating message: Failed logging task to backend (1 lines).

Is there a workaround for working offline (in which TRAINS will completely stop reporting), or at least one that won't interfere with the training steps (without freeze or delay)?

Highest score across all iterations support

If I store a value and list it in my columns, I can search/order by it, e.g. Overall_AUC. This is good. However, it does not use the highest score (just the last one reported, I think), which is not good. I can work around this in code (by updating a 'best_AUC' value whenever it improves within a session), so this is low priority. But I think it will be a common UX use case.
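The workaround mentioned above can be sketched like this (assuming a logger object with a report_scalar(title, series, value, iteration) method, as in the trains Logger; the helper name is illustrative):

```python
best_auc = float("-inf")

def report_auc(logger, iteration, auc):
    """Report the raw AUC plus a monotone 'best so far' series, so that
    sorting experiments by the last reported value uses the peak score."""
    global best_auc
    best_auc = max(best_auc, auc)
    logger.report_scalar("metrics", "Overall_AUC", auc, iteration)
    logger.report_scalar("metrics", "best_AUC", best_auc, iteration)
```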

Plan for view git diff in Web UI

Is there any planned release date for the git diff view in the Web UI? Also, is there any workaround I can use at the current stage? (Where is it stored?)

_FileStorageDriver.download_object does not contain callback argument

Steps to reproduce:

  1. Initialize a task with task = Task.init(project_name='example', task_name='example', output_uri='local_folder')
  2. Set up an InputModel with
input_model = InputModel.import_model('file://local_folder/...') # uri from model page MODEL URI field
input_model.connect(task)
  3. Try to load the state dict into your pytorch model:
model.load_state_dict(torch.load(input_model.get_weights()))

logs will contain:

2019-10-14 16:11:07,399 - trains.model - INFO - Selected model id: 6b56ee9709314f35b0a804914a7063d5
2019-10-14 16:11:07,595 - trains.model - INFO - Selected model id: 58791155e4e444ae97d5721fcc22b2bc
2019-10-14 16:11:07,643 - trains.Task - WARNING - Task connect, second input model is not supported, adding into comment section
2019-10-14 16:11:09,728 - trains.model - INFO - Selected model id: 622297cf18a74496b406479907bc1908
2019-10-14 16:11:09,854 - trains.storage - INFO - Start downloading from file:///ds2/trained_modeles/examples/pytorch with tensorboard.6190111e06b943c597febc9287e95b12/models/model15
2019-10-14 16:11:09,854 - trains.storage - ERROR - Could not download file:///ds2/trained_modeles/examples/pytorch with tensorboard.6190111e06b943c597febc9287e95b12/models/model15 , err: download_object() got an unexpected keyword argument 'callback'

Environment:
trains.__version__ = '0.11.2'
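Until the missing callback parameter is added to _FileStorageDriver.download_object, one generic workaround pattern (purely illustrative, not a trains API) is to wrap a call so unsupported keyword arguments are dropped instead of raising:

```python
def tolerate_extra_kwargs(fn, allowed):
    """Wrap fn so that only keyword arguments in `allowed` are passed
    through; anything else (e.g. an unexpected 'callback') is dropped."""
    def wrapper(*args, **kwargs):
        return fn(*args, **{k: v for k, v in kwargs.items() if k in allowed})
    return wrapper
```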

SSL Connection error [CERTIFICATE_VERIFY_FAILED]

I have installed trains, and when I run my code with the required 2 lines of integration, I get the following error message:
Retrying (Retry(total=239, connect=240, read=240, redirect=240, status=240)) after connection broken by 'SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)'),)': /auth.login
It seems the firewall at my company, which monitors HTTPS connections, is blocking the connection to your demo server.
Is there a way to fix it?
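If the firewall performs HTTPS inspection with its own certificate, one possible workaround (assuming your trains version supports the verify_certificate setting in ~/trains.conf; the safer fix is adding the corporate CA certificate to your trust store) is:

```hocon
# in ~/trains.conf -- disables TLS certificate verification (use with care)
api {
    verify_certificate: false
}
```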

loguru support

Hi,
I'm using loguru as a logging package, and the TRAINS log can't catch the logged messages unless the task is initialized before loguru is imported.

  1. Is it possible to fix this?
  2. loguru prints the messages formatted with pretty colors; if you could support parsing them in the UI, it would be fantastic! :)

Override config location

Hi,
Is there a way to specify the location of the configuration file instead of its default (~/trains.conf)?
The use case is running on AWS Sagemaker (or similar).
Thanks.
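One workaround sketch, assuming your trains version honors the TRAINS_CONFIG_FILE environment variable (check your version; the path below is just an example):

```shell
# Point the SDK at a config file shipped with the job instead of ~/trains.conf
export TRAINS_CONFIG_FILE=/opt/ml/code/trains.conf
# then launch training as usual:
# python train.py
```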

expanding artifact uniqueness comparison criteria

Hello,

In your current implementation of the artifact feature, you check whether the entire dataframe row is the same, and if even one column is different you mark these 2 rows as unique.

In my testing code, I go over a CSV file that gives me annotations of videos. I sometimes make a mistake and have 2 rows with the same video but different annotations; this mistake is very hard to avoid.

I would like a feature where comparing 2 rows can be done based on a single column or multiple columns, i.e., if the first column, called "name", is the same but the rest are not (which is what I use for annotations), you would indicate that these 2 rows are the same.
I want to be able to choose which columns to compare, or to just keep the configuration as it is now (compare all columns).
It would be good to have this flexibility.
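The requested comparison can be sketched in plain Python (the function is illustrative, not a trains API): rows count as duplicates when they agree on a chosen subset of columns.

```python
def dedupe(rows, key_columns):
    """Keep only the first row for each combination of values in
    key_columns; rows are dicts mapping column name -> value."""
    seen = set()
    unique_rows = []
    for row in rows:
        key = tuple(row[col] for col in key_columns)
        if key not in seen:
            seen.add(key)
            unique_rows.append(row)
    return unique_rows
```

With key_columns=["name"], two annotation rows for the same video collapse into one; passing all columns reproduces the current compare-everything behavior.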

Thank you

FileNotFoundError occurs when I load pretrained model

Hi,
I'm using PyTorch+TRAINS with a single call at the top of my code.

I load a pretrained model manually with the following code:

state_dict = torch.load(
    path, # pathlib.Path 
    map_location=lambda storage, loc: storage
)

Then, TRAINS produces unexpected behavior like this:

  File "/home/ubuntu/work/xxxx/xxxx/xxxx.py", line 50, in __init__
    map_location=lambda storage, loc: storage
  File "/home/ubuntu/anaconda3/envs/torch/lib/python3.6/site-packages/trains/binding/frameworks/__init__.py", line 25, in _inner_patch
    raise ex
  File "/home/ubuntu/anaconda3/envs/torch/lib/python3.6/site-packages/trains/binding/frameworks/__init__.py", line 23, in _inner_patch
    ret = patched_fn(original_fn, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/torch/lib/python3.6/site-packages/trains/binding/frameworks/pytorch_bind.py", line 92, in _load
    model = original_fn(filename or f, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/torch/lib/python3.6/site-packages/torch/serialization.py", line 382, in load
    f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'xxxx.pth'

It seems TRAINS automatically detects something and tries to load the pth file with a wrong path.
Could you explain this behavior?
As I don't currently want any more functionality, I'd like to turn it off if I can.
