
gsoc-ideas's Introduction

CloudCV GSoC Ideas

Development

This site is built with Jekyll and Bootstrap.

Adding new project

  1. Copy a project file from the _posts folder and rename it to {date}-project-{project-name} (see the example after these steps)
  2. Fill in the fields in the copied file accordingly
  3. Run jekyll serve and check in the browser at 0.0.0.0:4000 that the site looks as expected
  4. Send a Pull Request
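For illustration, here is a minimal sketch of what a new project file in _posts might look like. The front-matter field names here are assumptions; copy them from an existing project file rather than from this example:

```
---
layout: post
title: "My New Project"          # hypothetical fields -- mirror an existing post
mentors: ["your-github-handle"]
skill-level: Medium
---

Project description goes here.
```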

Acknowledgements

Based on openSUSE's excellent mentoring project.

gsoc-ideas's People

Contributors

deshraj, gautamjajoo, gchhablani, ram81, rishabhjain2018, sanji515, spyshiv, virajprabhu


gsoc-ideas's Issues

Port EvalAI from Angular 1 to Angular 5

Project Title: Port EvalAI from Angular 1 to Angular 5

Description: The current frontend code of EvalAI uses Angular 1. We want to migrate it to Angular 5 since it brings many new features, such as easier maintainability and better SEO. Some of the initial tasks include:

  • Getting started with Angular 5

  • Setting up EvalAI-ngx (https://github.com/Cloud-CV/EvalAI-ngx)

  • Set up Docker for EvalAI-ngx

  • Set up a CI/CD pipeline using Travis CI

  • Implement the features in Angular 5 that are already present in the current version

  • Implement a responsive web application

  • Write a robust test suite

  • Properly document the codebase

  • Set up Sphinx documentation

  • Extended goals:

    1. Create a unique link for each entry on the leaderboard so that participants can share their position on the leaderboard
      • Make sure that the meta tags for each entry link are unique and show the participant team name, rank, score, etc. on the thumbnail when shared on social media such as Twitter or Facebook.
    2. Create a badge system for each entry on the leaderboard
  • Integrate server-side rendering for Angular 5

Besides this, students should also provide ideas of their own.

Deliverables: The issues mentioned above should be implemented by the end of the GSoC period. Moreover, we expect students to also have implemented improvements that they have proposed.

Mentors: Akash Jain @aka-jain, Shiv Baran Singh @spyshiv, Rishabh Jain @RishabhJain2018

Co-Mentor(s): Deshraj Yadav @deshraj

Skills: AngularJS, Python, Django, Django rest framework

Skill Level: Medium

Get started: Take a look at our issues on GitHub.

Recommended: Try to fix some issues (note that there are some issues labeled with GSOC on the project repository). For example, look at this issue: Cloud-CV/EvalAI-ngx#46

Important Links:

Share Deep Learning Models online

Project Title: Share Deep Learning Models online

Description: One of the problems facing young researchers who want to learn more about deep learning models is the amount of effort it takes to learn these new frameworks. Here is a small list of 64 deep learning frameworks that are available. Each framework is good at different purposes. The goal of this project is to provide an online platform for trying deep learning algorithms / models that will reduce the barrier of entry to the world of deep learning and applications in computer vision.

In GSoC '16, @gauravgupta22 built the first version of Fabrik, a platform to build deep-learning models via a simple drag-and-drop interface. As of today, we support export to model definition files of widely popular deep learning frameworks like TensorFlow, Caffe and Keras, as well as the ability to import and visualize models developed in these frameworks.

Our next goal is to become a collaborative platform where students and researchers can discuss models and collaboratively build and edit them in real time:

  • Model Sharing (see #17).
  • Layer boxes UI enhancements (see #16)
  • Switch from scroll-to-zoom to vertical scrolling in the canvas (see #14)
  • Write unit tests (see #7)

Deliverable: The issues mentioned above should be implemented by the end of the GSoC period. Moreover, we expect students to also have implemented other improvements that they have proposed.

As an additional goal, we want to add support for PyTorch which is another widely popular deep learning framework.

Mentor: Viraj Prabhu @virajprabhu, Deshraj Yadav @deshraj, Harsh Agrawal @dexter1691

Co-Mentor: Shiv Baran Singh @spyshiv

Skills: ReactJS, Python, Django, familiarity with deep learning a plus (but not essential)

Skill Level: Medium

Get started: Take a look at our issues on GitHub; the ones marked as starter-project are good places to start. Feel free to reach out to us on our Gitter channel if you have questions.

Easy challenge management on EvalAI

Project Title: Easy challenge management on EvalAI

Description:

This project will focus on streamlining the newly adopted GitHub challenge creation pipeline, building APIs for fully automating challenge creation on EvalAI, adding new capabilities in EvalAI's latest frontend for a seamless user experience, and making our backend robust and less error-prone by adding test cases for different frontend and backend components. As of now, the EvalAI admin has to be in the loop of the challenge creation process: scaling worker resources for prediction-based AI challenges, setting up remote evaluation for AI challenges, and, most importantly, setting up code-upload AI challenges on EvalAI. The goal of this project is to take the EvalAI admin out of the loop by fully automating the process.

Deliverables:

  • Add a feature for challenge hosts to approve participants for a challenge, guarding against unauthorized signups.

  • Give challenge hosts/participants the control to remove participants from a challenge/team.

  • Minor: Fix the issue of email IDs entered in caps while signing up on EvalAI.

  • Add git bi-directional sync support on EvalAI. See this PR for reference.

  • Add support to create new challenge phases, dataset splits, etc. from the GitHub-based challenge creation pipeline after the challenge has been created.

  • Add a feature in GitHub-based challenges to manage multiple challenge configs over the years for the same challenge.

  • Add a feature to create GitHub repositories for existing challenges on EvalAI, allowing users to migrate from config-based to GitHub-based challenge creation.

  • Add APIs and celery tasks support to re-run bulk submissions. Also, add complete UI changes to allow hosts to re-run all existing submissions to the challenge.

  • Increase the submission message time limit in the SQS queue to prevent submission messages from expiring when there is a large number of pending submissions (see the sketch after this list).

  • Add APIs and UI changes to allow challenge hosts to rename the metrics on the leaderboard without needing to re-run submissions.

  • Add challenge configuration examples with documentation for code-upload challenges.

  • Add examples and documentation for remote challenge setup on EvalAI.
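For the SQS time-limit item above, a minimal sketch, assuming boto3 and a placeholder queue URL, of raising the queue's message retention period so pending submissions are not dropped:

```python
import boto3

# Placeholder queue URL -- EvalAI resolves the real one per challenge queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/evalai-submission-queue"

sqs = boto3.client("sqs", region_name="us-east-1")

# Keep unprocessed submission messages for the SQS maximum of 14 days
# (1209600 seconds) instead of the 4-day default, so a long backlog of
# pending submissions does not silently expire.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```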

Mentors: Ram Ramrakhya, Rishabh Jain

Skills Required: Python, Django, AngularJS, AWS

Project size - 175 hours

Difficulty - Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2022)

Important Links:

Analytics dashboards for challenge hosts and participants

Project Title: Analytics dashboards for challenge hosts and participants

Description:

This project will involve writing REST APIs, plotting relevant graphs, and building analytics dashboards for challenge hosts and participants. The analytics will help challenge hosts view the progress of participants in their challenge -- for instance, comparing trends in accuracy across participant submissions over time. Participants will be able to visualize the performance of all of their submissions over time and their corresponding rank on the leaderboard. The final goal is to provide users with analytics to track their progress on the platform.

Deliverable:

  1. For challenge host -

    • Add APIs to fetch and aggregate metrics for the performance of a participant in a challenge over a period of time.
    • Add frontend changes to plot graphs for the metrics fetched from the APIs.
    • Add backend APIs and plot for showing the number of submissions in a challenge by a participant.
    • Add backend APIs and plots showing the number of submissions in a challenge per day, month, and year (see the sketch after this list).
    • Add backend APIs and plot to show the ranks of various participants on the leaderboard over time.
    • Add backend APIs and plot for the evaluation time of submissions over time.
    • Write tests for the APIs and frontend
  2. For participants -

    • Add backend APIs and graphs for checking the trend of submissions in a challenge and the corresponding rank.
    • Add backend APIs and plot for the number of submissions in a challenge over time which can be filtered by submission status.
    • Write tests for the APIs and frontend
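A minimal sketch of one such aggregation endpoint, daily submission counts for a challenge, using the Django ORM; the Submission model and its field names are assumptions for illustration, not EvalAI's actual schema:

```python
from django.db.models import Count
from django.db.models.functions import TruncDay
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .models import Submission  # hypothetical model and import path


@api_view(["GET"])
def submissions_per_day(request, challenge_pk):
    counts = (
        Submission.objects.filter(challenge_id=challenge_pk)
        .annotate(day=TruncDay("submitted_at"))  # hypothetical timestamp field
        .values("day")
        .annotate(count=Count("id"))
        .order_by("day")
    )
    # Returns [{"day": ..., "count": ...}, ...], ready for a D3.js plot.
    return Response(list(counts))
```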

Mentor: Rishabh Jain (@RishabhJain2018), Ram Ramrakhya (@Ram81)

Skills: Angular 7, Django, Django Rest Framework, D3.js

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2020)

Important Links:

Implement robust evaluation pipeline in EvalAI

Project Title: Implement robust evaluation pipeline in EvalAI

Description:

Currently, the submission worker that evaluates submissions for a challenge requires manual scaling. Moreover, real-time logging and metrics monitoring of the submission worker aren't available to the challenge hosts. Also, an often-requested feature from challenge organizers has been the ability to test their competition package (evaluation scripts, etc.) locally before uploading it to EvalAI. This capability will also reduce the assistance required from the platform maintainers. The goal of this project is to write a robust test suite for the submission worker and port it to AWS Fargate to set up auto-scaling and logging. The tasks will also include giving challenge hosts control over the submission worker from the UI in terms of starting, stopping, and restarting it.

Deliverable:

  • Add filtering in the Django backend using django-filter (https://django-filter.readthedocs.io/en/latest/guide/rest_framework.html); see the sketch after this list
  • Add unit tests for the submission worker
  • Add integration tests for the submission worker
  • Move the worker containers to AWS Fargate for auto-scaling
  • Name the worker containers running for different challenges
  • Add a Slack notification to inform the challenge host/EvalAI admin when a worker is killed
  • Add a feature to fetch the updated auth token before launching workers on the server for different challenges
  • Add a feature to automatically launch/restart worker containers:
    • Create a container when the challenge is approved by the EvalAI admin
    • Restart containers when a challenge host updates the evaluation script or annotations
  • Add a feature to start/stop/restart worker containers from the UI, for:
    • The challenge host
    • The EvalAI admin
  • Add a feature to request the EvalAI admin to increase the number of workers from 1 to x
  • If remote evaluation is enabled for a challenge, add the capability to store the challenge host's AWS keys and launch AWS instances from their account to set up the worker Docker container that will evaluate the submissions
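For the django-filter item at the top of this list, a minimal sketch of what the integration could look like; the Submission model, its fields, and the serializer are assumptions for illustration:

```python
from django_filters import rest_framework as filters
from rest_framework import generics

from .models import Submission  # hypothetical
from .serializers import SubmissionSerializer  # hypothetical


class SubmissionFilter(filters.FilterSet):
    # Allow ?submitted_after=... on top of plain field filters.
    submitted_after = filters.DateTimeFilter(field_name="submitted_at", lookup_expr="gte")

    class Meta:
        model = Submission
        fields = ["status", "challenge_phase"]  # assumed field names


class SubmissionList(generics.ListAPIView):
    queryset = Submission.objects.all()
    serializer_class = SubmissionSerializer
    filter_backends = [filters.DjangoFilterBackend]
    filterset_class = SubmissionFilter
```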

Extended Goals:

  • Add feature to display logs of the worker container on UI

Mentor: Ram Ramrakhya @Ram81, Rishabh Jain @RishabhJain2018, Deshraj Yadav @deshraj

Skills: Python, Django, Django Rest Framework, AWS, Docker

Skill Level: Hard

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2019)

Tutorials:

a) Docker
b) AWS-Fargate

Important Links:

Challenge Synchronization with GitHub Repositories

Project Title: Challenge Synchronization with GitHub Repositories

Description: This project aims to facilitate the migration of legacy challenges from challenge zip files to GitHub repositories, providing hosts with access to their old challenges through a familiar interface. Additionally, it will establish a bidirectional sync between EvalAI and GitHub repositories, ensuring that changes made on either platform are reflected seamlessly. With enhanced compatibility and synchronization capabilities, this project will contribute to a smoother experience for hosts managing challenges on EvalAI.

Deliverable:

  • Legacy Challenge Backward Migration:
    • Implement an admin option to create GitHub repositories for hosts sharing their GitHub token.
    • Develop functionality to copy over evaluation zip files, challenge configurations, and other necessary files to GitHub repositories, updating the config.json as required.
    • Ensure compatibility for prediction-only challenges and explore the possibility of moving code-upload challenges.
  • Bidirectional Sync:
    • Enable backward sync of challenges from EvalAI to GitHub repositories.
    • Allow hosts to edit start/end dates, titles, and other details directly on EvalAI, with changes reflecting back on GitHub repositories.
    • Implement mechanisms to sync backend changes made on EvalAI with corresponding GitHub repositories.

Mentor: @gchhablani, @RishabhJain2018

Skills: Django, Markdown, Python

Skill Level: Medium-Hard

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSoC-2024).

Important Links:

Enhancements in code upload pipeline

Project Title: Enhancements in code upload pipeline

Description:

As EvalAI hosts more code-upload challenges and researchers utilize our modular Kubernetes-based infrastructure for hosting these challenges, we would like to automate this pipeline as much as possible to enhance the user experience. During GSoC 2019, we built this pipeline for evaluating an AI model's code by running it against unseen test environments in real time. This year the plan is to add features like start, stop, restart, and delete cluster, so as to give challenge hosts more control over their challenge evaluation cluster. This will involve not only control over the nodes running evaluation but also viewing logs that are updated in real time. Finally, the plan is to give challenge hosts the capability to run the evaluation cluster in their own cloud by simply plugging in their keys; the rest will be taken care of by EvalAI.

Deliverable:

  • Dockerize the code-upload challenge submission worker.
  • Create a dashboard for challenge hosts to manage challenge workers.
  • Add APIs to send CloudWatch logs from the workers to challenge hosts so they can debug and monitor submissions.
  • Add APIs for challenge hosts to manage (start/stop/restart/delete) their challenge Kubernetes cluster after the challenge is approved by the admin.
  • Add CRUD operations for challenge hosts on the Kubernetes config for the challenge cluster.
  • Set up a pipeline such that a Kubernetes node spawns only when a submission comes in and shuts down after the evaluation is completed.
  • Set up a pipeline so that a challenge host can add their AWS credentials to the challenge and the evaluation Kubernetes cluster is set up in the challenge host's cloud while the challenge is still hosted on EvalAI.
  • Add a feature to use GCP instead of AWS for setting up the cluster.
  • As a part of this project, we will also focus on improving the CI/CD pipeline so that we can push changes from GitHub to the production servers seamlessly with minimal supervision.
  • Write robust test cases for the added APIs and scripts.
  • Add documentation for the newly added features.

Mentor: Kartik Verma @vkartik97, Rishabh Jain @RishabhJain2018

Skills: Docker, Kubernetes, AWS, Django, DRF

Skill Level: Difficult

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2020)

Tutorials:

Important Links:

Streamlining challenge creation on EvalAI

Project Title: Streamlining challenge creation on EvalAI

Description:

EvalAI is a platform to host and participate in AI challenges around the globe. Challenge creation is one of the core features, used by challenge hosts to create AI challenges. The idea is to use private GitHub repositories to host the challenge files instead of a zip file. The next step is to build and integrate a continuous deployment pipeline with GitHub so that for every new commit in the challenge repository, the changes are automatically reflected on the UI. We will also build support for tests so that new commits are fully tested before they are pushed to the live challenge hosted on EvalAI. The goal is to enhance the challenge creation experience so that hosts can set up a challenge on EvalAI with minimal human effort from the EvalAI team.

Deliverable:

  • Set up a pipeline to integrate a continuous deployment system for creating a challenge on EvalAI.
  • Write tests to validate the challenge configuration.
  • Integrate the tests with the continuous deployment system to run them whenever a challenge host commits a change to GitHub.
  • Set up a mechanism to authenticate evalai-user with a private GitHub repository.
  • Add APIs for updating details in the backend database.
  • Once the tests pass, configure automation bots on GitHub to push the changes to the EvalAI database using its APIs (see the sketch after this list).
  • Add a feature to display submission worker logs on the UI so that debugging is simple for challenge hosts.
  • Write robust test cases for the code.
  • Add documentation so that it is easy for challenge hosts to set up.
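A minimal sketch of one piece of this pipeline, assuming a GitHub push webhook triggers the update: the Django view verifies GitHub's HMAC signature before applying any changes. The view name and settings key are hypothetical:

```python
import hashlib
import hmac

from django.conf import settings
from django.http import HttpResponse, HttpResponseForbidden
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def github_challenge_webhook(request):
    # GitHub signs each delivery with the shared webhook secret.
    signature = request.headers.get("X-Hub-Signature-256", "")
    expected = "sha256=" + hmac.new(
        settings.GITHUB_WEBHOOK_SECRET.encode(),  # hypothetical setting
        request.body,
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return HttpResponseForbidden("Invalid signature")
    # Signature verified: parse the push payload and update the
    # challenge details in the backend database here.
    return HttpResponse("ok")
```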

Mentor: Ram Ramrakhya @Ram81, Rishabh Jain @RishabhJain2018

Skills: Python, Django, Django Rest Framework, Knowledge of any CI/CD tool.

Skill Level: Ambitious

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2020)

Tutorials:

Important Links:

Seamless User Experience & Leaderboard Porting

Project Title: Seamless User Experience & Leaderboard Porting

Description: This project aims to enhance the overall user experience on EvalAI by introducing user-centric features, improving existing functionalities, and optimizing integrations with external platforms.

Two key features that we aim to implement in this project will be:

  • Allowing custom requests from within the evaluation workers running on the EKS platform for code-upload and static code-upload challenges.
  • Making enhancements to EvalAI to efficiently migrate existing leaderboards for external challenges from various sources (e.g. a tabular format) to EvalAI as hosts migrate their challenges to EvalAI.

These features, coupled with additional deliverables below, will improve the overall experience of EvalAI users and add several utilities for improving challenge participation experience.

Deliverable:

  • Code-Upload Challenges Improvements:
    • Allow custom requests for EKS challenges to external sites like HuggingFace and OpenAI for model downloads.
    • Address sporadic pod log population on AWS and fix the existing issue of environment logs not getting populated on our backend from CloudWatch.
  • Leaderboard Migration:
    • Provide a solution for transferring existing challenge leaderboards from external sources to EvalAI.
    • This involves adding the ability to take a CSV (or another format) of participant teams, leaderboard metrics, submission timestamps, and challenge name, and convert it to an “archived” entry visible on the public leaderboard.
  • User-centric Features:
    • Implement a customizable participant details download option on the analytics dashboard.
    • Improve error handling and display for email verification issues.
    • Embed the forum on the challenge page for easy communication between hosts and participants.
  • Additional User Experience Improvements:
    • Fix the search and filter feature PR and perform robust testing on the same.
    • Automate challenge category/forum creation on EvalAI forum during challenge approval.

Mentor: Rahul Singh, @gautamjajoo, @gchhablani, @RishabhJain2018

Skills: SQL, Django, AngularJS, AWS

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSoC-2024).

Important Links:

Enhanced Exception Handling, Testing & Documentation

Project Title: Enhanced Exception Handling, Testing & Documentation

Description: This project is dedicated to elevating the overall user experience on EvalAI by implementing robust exception handling mechanisms, strengthening the suite of test cases, and enhancing the comprehensiveness of documentation.

This project aims to significantly contribute to EvalAI's reliability, user-friendliness, and developer-friendliness. The deliverables below collectively contribute to a more stable and user-centric EvalAI experience.

Deliverable:

  • Improved Test Coverage and Cases:
    • Identify and address areas with low code coverage to ensure a comprehensive testing suite.
    • Focus on key components that are critical to the functioning of EvalAI, enhancing their test coverage.
    • Develop tests for frequently used APIs, especially those accessed through EvalAI CLI, to ensure their reliability.
    • Add tests for code-upload workers and enhance testing for other components like submissions and remote workers.
    • (Optional) Integrate Postman API testing during the EvalAI build process to fortify the backend and ensure resilience.
  • Improved Documentation and Tutorials:
    • Enhance API documentation on EvalAI, providing detailed summaries of API functionalities and troubleshooting tips.
    • Create a comprehensive tutorial for organizers unfamiliar with AWS, guiding them on setting up infrastructure for their challenges.
    • Develop a detailed FAQ page based on common questions received by EvalAI, providing quick and easy-to-understand answers/solutions.
    • Create detailed documentation on how auto-scaling functions for ECS, EKS, and EC2, offering insights into host-end cost-saving options.
    • Improve documentation on various challenge phases and submission options. For example, explaining the visibility of leaderboards, submissions, and challenge phases, along with preference order details.
  • Improved Error Handling and Messages:
    • Conduct a thorough analysis of the codebase to identify potential breakage points.
    • Enhance error messages throughout EvalAI, providing clearer insights into issues for faster resolution.
    • Improve exception handling in AWS utilities, eliminating silent errors and providing verbose messages for quick error correction.
    • Ensure all errors are returned in API responses with proper reasoning, facilitating better understanding and resolution.

Mentor: @gautamjajoo, @RishabhJain2018

Skills: Django, Markdown, Python

Skill Level: Easy

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSoC-2024).

Important Links:

New frontend for EvalAI based on Angular 5

Project Title: New frontend for EvalAI based on Angular 5

Description:

EvalAI's current frontend is set up using Angular 1, which is no longer actively maintained by the community. Later versions of Angular support really nice features like better SEO, server-side rendering, etc. We want to migrate the current codebase to Angular 5 with a new design and achieve feature parity. The first half of the summer will focus on porting the existing features from the older version with a new UI, while the latter half will focus on building an exhaustive analytics platform for challenge hosts and participants. The tasks will also include adding the UI for hosts and participants for reinforcement learning based challenges.

Deliverable:

  • Fix the bugs in Home page / Landing page
    • Change the challenge creation video on the home page
    • Add a carousel for adding more organizations
    • Change the laptop image on the homepage
  • Add feature to disable submissions for a challenge phase when the phase is inactive but the challenge is active
  • Add feature to edit every challenge detail
  • Add Discussions tab for each challenge
  • Change notification messages and show the responses from the APIs.
  • Add feature to view all the submissions by a challenge host
  • Add feature to display the private leaderboard and private challenge phase
  • Add feature to make the challenge public/private
  • Add feature to delete a challenge
  • Show tags indicating whether the challenge is approved or published under My Challenges
  • Add the challenge analytics dashboard
  • Add Error 500 page
  • Add challenge creation using UI
  • Make the website load faster
  • Add new theme templates for email verification, success page, password reset page, password reset success
  • Add feature to evaluate submissions by a challenge host
  • Add feature to export all submissions in different formats and by selecting different fields

Mentor: Mayank Lunayach @lunayach, Shekhar Prasad Rajak @Shekharrajak, Shivani Prakash @shivaniprakash95 (Design Mentor), Rishabh Jain @RishabhJain2018

Skills: Angular 5, HTML, CSS, Typescript

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2019)

Tutorials:

a) Angular Tutorial
b) Angular Basic Application

Important Links:

Implementing RESTful web services for EvalAI

Project Title: Implementing RESTful web services for EvalAI

Description: EvalAI is an evaluation server that will host AI challenges like Visual Question Answering and Image Captioning. The next goal of this project is to implement REST APIs. Some of the initial tasks include:

  • Add setInterval to API calls to get real-time data (see Cloud-CV/EvalAI#653) (Already fixed)
  • Setup Continuous Deployment for EvalAI.
  • Add integration tests using Selenium
  • Enforce unique random file name for files being uploaded (Already fixed)
  • Create notification system for EvalAI (Already fixed)
  • Add number of concurrent request metric in metrics middleware
  • Build a newsfeed for EvalAI (see Cloud-CV/EvalAI#664)
  • Build a dashboard for participants and hosts where they can analyze the submissions for a competition and get relevant insights
  • Export the leaderboard table to LaTeX / PNG (see the sketch after this list)
  • Webhooks for integrating it with other services / sites / chat-bots etc.
  • Scale the worker node system to run multiple competitions optimally.
  • Create an EvalAI badging system for the challenge evaluation script, challenge submissions, and top 10 leaderboard entries. For reference, see https://github.com/badges/shields
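For the LaTeX export item above, a minimal sketch using pandas; the leaderboard rows are placeholder data, not a real EvalAI response:

```python
import pandas as pd

# Placeholder leaderboard rows, as they might come back from a leaderboard API.
leaderboard = [
    {"rank": 1, "team": "team-a", "accuracy": 0.91},
    {"rank": 2, "team": "team-b", "accuracy": 0.89},
]

df = pd.DataFrame(leaderboard).set_index("rank")
latex_table = df.to_latex()  # a tabular environment, ready to paste into a paper
print(latex_table)
```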

Besides this, students should also provide ideas of their own.

Deliverable: The issues mentioned above should be implemented by the end of the GSoC period. Moreover, we expect students to also have implemented improvements that they have proposed.

Mentor: Taranjeet Singh @trojan, Akash Jain @aka-jain, Shiv Baran Singh @spyshiv

Co-Mentor: Deshraj Yadav @deshraj

Skills: AngularJS, Python, Django

Skill Level: Medium

Get started: Take a look at our issues on GitHub. Try to fix some (note that there are some issues labeled with GSOC).

Add support for different deep learning frameworks in Fabrik

Description: Fabrik is an online collaborative web application for building and visualizing neural networks in the browser, aimed at lowering the barrier to entry for getting started with deep learning. We wish to create a model-agnostic platform which works with the most popular deep learning frameworks and provides a seamless experience when visualizing existing models, creating new ones, or exporting them to the framework of your choice.

Over the last couple of years, we have built a version of this platform which provides a drag-and-drop interface for creating neural networks, extensively supports two popular deep learning frameworks (Caffe and Keras), and has experimental support for TensorFlow, another widely popular framework.

Our goal moving forward is to complete support for our existing frameworks and extend support to new and upcoming frameworks like PyTorch. Initially, we would like to accomplish the following tasks:

  • Complete support for TensorFlow (see #293)
  • Add support for PyTorch (see #294)
  • Add exhaustive documentation for new and existing backends
  • Write robust unit tests

Deliverables: The issues mentioned above should be implemented by the end of the GSoC period. Moreover, we expect students to also have implemented improvements that they have proposed.

Extended Goals:

  • Set up Docker containers for easy setup on local machines (see #130)
  • Extend login functionality to allow users to privately share their networks with others
  • Investigate the addition of ONNX support
  • Improve the visualization algorithm to make it more generalized.

Mentors: Utsav Garg

Co-Mentor(s): Deshraj Yadav

Skills:

  • Essential: React JS, Python, Django, Unix.
  • Preferred: Some experience with TensorFlow, PyTorch, Keras

Skill Level: Medium

Getting started: Take a look at our issues on GitHub; the ones marked as starter-project are good places to start. Feel free to reach out to us on our Gitter channel if you have questions.

Important links:

Static code upload challenge evaluation and enhancements in GitHub based challenge creation

Project Title: Static code upload challenge evaluation and enhancements in GitHub based challenge creation

Description:

EvalAI is a platform to host and participate in AI challenges around the globe. For a challenge host, reproducibility of submission results and privacy of the test data are the main concerns. Towards this, the idea is to allow users to submit a Docker image for their models and evaluate it on static datasets. In order to achieve this, we want to build a pipeline that runs the dockerized models on Kubernetes-based infrastructure with stored test annotations and reports the results on the EvalAI leaderboard. Another part of the project is to streamline our challenge creation pipeline. Last year we added support for GitHub-based challenge creation, which allows challenge hosts to use a private GitHub repository to create and manage updates in a challenge. The goal for this year is to support bi-directional updates for challenges created using GitHub. This feature will allow hosts to sync changes from the EvalAI UI to their challenge GitHub repository. The goal is to enhance the challenge creation experience for challenge hosts while requiring minimal support from the EvalAI team.

Deliverable:

  • Add an is_static_dataset_docker_based_challenge field in the Challenge model and accept the parameter as part of the challenge config in the challenge creation API
  • Add a submission_input_file field in the submission model for static code upload submissions.
  • Add an is_static_code_upload_submission flag in the SQS queue message for submissions made to a static code upload challenge. This flag will allow us to use the same queue for two-step evaluation
  • Add a static code upload challenge test dataset download function in the code upload worker.
  • Add methods in the code upload worker to spawn a single Kubernetes pod for static code upload evaluation (see the sketch after this list).
  • Create a single Kubernetes container using the submission Docker image
  • Mount the test dataset volume on the container at a fixed path
  • Add submission_pk as an environment variable
  • Add a Kubernetes pre-stop hook to submit the generated output file (submission.json or submission.csv) to EvalAI as a submission, using the submission_pk environment variable of the Kubernetes pod.
  • Add support for a job (it can be a cron job) to monitor evaluation time and kill the job if it exceeds the submission time limit.
  • Add support to report error logs from job pods to the submission stderr file
  • Write tests for the static code upload challenge worker
  • Add a GitHub token field in the challenge model to sync challenge changes to the GitHub repo
  • Add post-save hooks to generate challenge detail updates and submit them as a pull request to the host's GitHub repo using the GitHub APIs.
  • Write tests for the challenge details sync post-save hook methods
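A minimal sketch of the pod-spawning step described above, using the official Kubernetes Python client; the namespace, volume claim name, and mount path are assumptions for illustration:

```python
from kubernetes import client, config


def spawn_static_eval_pod(submission_pk: int, submission_image: str):
    config.load_incluster_config()  # the worker runs inside the cluster

    container = client.V1Container(
        name="static-eval",
        image=submission_image,  # the participant's submitted Docker image
        env=[client.V1EnvVar(name="SUBMISSION_PK", value=str(submission_pk))],
        volume_mounts=[
            # Test dataset mounted read-only at a fixed, documented path.
            client.V1VolumeMount(name="test-data", mount_path="/data", read_only=True)
        ],
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=f"submission-{submission_pk}"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[container],
            volumes=[
                client.V1Volume(
                    name="test-data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="test-dataset-pvc"  # hypothetical claim
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```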

Mentor: Khalid Riyaz (@KhalidRmb), Ram Ramrakhya (@Ram81), Rishabh Jain (@RishabhJain2018)

Skills: Python, Django, Kubernetes, Docker

Skill Level: Hard

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2020)

Important Links:

Adversarial Data using Gradio and EvalAI

Project Title: Adversarial Data using Gradio and EvalAI

Description: The aim of this project is to develop an infrastructure that enables the collection of adversarial data for models submitted to EvalAI. This will be achieved by integrating Gradio with EvalAI's code upload challenge pipeline and deploying the models as web services. The web services will record all user interactions, providing a dataset for each submission that can be used to evaluate the robustness of the model.

Deliverable:

  • Build the system design for the EvalAI and Gradio integration
  • Set up the Gradio integration for a single VQA demo as a proof of concept (see the sketch after this list)
    • Create a Gradio interface wrapper that is modular with respect to model inputs
    • Integrate the Gradio interface with the dockerized model for inference
  • Set up auto-launching of a worker service for deploying a Gradio app using a Celery task from the EvalAI Django backend.
  • Add frontend controls to deploy a Gradio web service for a single submission made by a participant to a code upload challenge.
  • Add support to log interactions made by a user on the Gradio web service and push the interactions to a database table specific to a demo.
  • Add API support to export the logged interactions for a demo.
  • Implement frontend changes to allow users to download the logged interactions in a standard dataset format.
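A minimal sketch of the proof-of-concept wrapper, assuming Gradio's Interface API; run_model_inference and log_interaction are hypothetical stand-ins for the dockerized model call and the database write:

```python
import gradio as gr


def run_model_inference(image, question):
    # Placeholder: forward the inputs to the dockerized model's endpoint.
    return "model answer"


def log_interaction(image, question, answer):
    # Placeholder: push the interaction to a demo-specific database table.
    pass


def predict(image, question):
    answer = run_model_inference(image, question)
    log_interaction(image, question, answer)  # record every user interaction
    return answer


demo = gr.Interface(
    fn=predict,
    inputs=[gr.Image(type="pil"), gr.Textbox(label="Question")],
    outputs=gr.Textbox(label="Answer"),
)
demo.launch(server_name="0.0.0.0")
```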

Mentor: Rishabh Jain, Ram Ramrakhya

Skills: Python, Django, AngularJS, AWS

Skill Level: Hard

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2023)

Important Links:

Analytics Dashboard for EvalAI Admin, Challenge hosts & Participants

Project Title: Analytics Dashboard for EvalAI Admin, Challenge hosts & Participants

Description:

As the number of compute-intensive challenges on EvalAI increases, we want to focus on improving the performance of our services. As a first step, we will focus on monitoring and measuring all key metrics of our services. Insights from these will allow us to efficiently utilize our infrastructure, improve uptime, and reduce costs. The project will concentrate on writing REST APIs, plotting graphs, and building analytics dashboards that cater to all three types of users on EvalAI, i.e. admins, challenge hosts, and participants. The analytics will help challenge hosts view the progress of participants in their challenge, for instance by comparing trends in accuracy across participant submissions over time. The final goal is to provide users with analytics that display their progress on the platform and to utilize resources efficiently to reduce costs.

Deliverable:

  1. EvalAI Admin -

    • Add backend APIs & UI graphs for the number of daily, monthly, & yearly user signups, the number of challenges on EvalAI, & the number of submissions on EvalAI.
    • Integrate the backend with open-source software like Grafana to report metrics for the load on APIs
    • Integrate the backend with AWS CloudWatch to get metrics for server load, CPU utilization, storage, etc. (see the sketch after this list)
    • Write tests for the APIs and frontend
  2. Challenge Host -

    • Add backend APIs and UI graphs for the performance of a participant in the challenge over a period of time.
    • Add a plot showing the number of submissions in a challenge by a participant.
    • Add plots showing the number of submissions in a challenge per day, month, and year.
    • Add a plot showing the ranks of various participants on the leaderboard over time.
    • Write tests for the APIs and frontend
  3. Participants -

    • Add backend APIs and graphs for submissions in a challenge and the corresponding rank.
    • Add a plot for the number of submissions in a challenge over time.
    • Write tests for the APIs and frontend
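For the CloudWatch item above, a minimal sketch of pulling one metric with boto3; the instance ID is a placeholder:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hourly average CPU utilization for one EC2 instance over the last day.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,  # one datapoint per hour
    Statistics=["Average"],
)
# Each datapoint carries a Timestamp and an Average, ready for a UI graph.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```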

Mentor: Shekhar Prasad Rajak @Shekharrajak, Rishabh Jain @RishabhJain2018

Skills: Angular 7, Django, Django Rest Framework, D3.js

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2020)

Tutorials:

Important Links:

Improve demo creation in Origami

Project Title: Improve demo creation in Origami

Description:

Origami (previously called CloudCV-fy your code) is an AI-as-a-service solution that allows researchers to easily convert their deep learning models into an online service that is widely accessible to everyone, without the need to set up the infrastructure, resolve the dependencies, or build a web service around the deep learning model.

Deliverable:

  1. Drag and Drop demo creation (#78)
  2. Demo creation using markdown
  3. Add support for dialogue like interface (#79)
  4. Provide a REST API to access models.
  5. Identify and fix all the UI bugs.
  6. Configure auto deployment and deploy Origami.
  7. Write robust unit tests for Origami-lib and the Origami repository.
  8. Analytics page for your demos (Origami Insights)

Mentor: Avais Pagarkar @AvaisP, Utkarsh Gupta @uttu357

Skills:

  • Essential: React JS, Python, Django, Flask, Unix.
  • Preferred: Docker, AWS

Skill Level: Medium

Get started: Take a look at our issues on GitHub; the ones marked as starter-project are good places to start. Feel free to reach out to us on our Gitter channel if you have questions.

Important links:

Implementing Popular Deep Learning Architectures in CloudCV

Project Title: Writing tutorials and implementing popular deep learning architectures in CloudCV

Description: This project involves implementing popular deep learning algorithms and integrating them with the CloudCV code base. In addition, this project aims at developing a web application where users can share their code, discuss their research work, and write tutorials.

Deliverable: By the end of this project one can expect a web application which would function as a learning platform for all members of the computer vision community, with the following features:

  • It will lay the foundations of a complete repository for prominent deep learning algorithms and enable the computer vision community to invest their time in learning something new rather than re-implementing existing algorithms.

  • The web application would host source code for popular deep learning algorithms and provide an interactive platform for users to view the results of each algorithm along with in-depth tutorials.

To sum up, the end product of this project, i.e. a web application, will essentially be a one-stop resource where users can find relevant papers, open-source code based on those papers, an online demo of each algorithm, and a descriptive tutorial on the same.

Mentor: Deshraj Yadav (@deshraj) is a suitable mentor, as he has been involved in the project since it was proposed last year. Other mentors can be added too.

Skills:

  • Familiarity with containers, JavaScript, HTML, CSS and PHP would be a big plus.

  • Expertise in using python based web-servers like Flask and Django.

  • Familiarity with deep learning frameworks like Caffe / Theano / Keras / TensorFlow etc. Students are expected to have played around with these tools and should be familiar with the input / output pipelines.

  • Familiarity with building multi-threading, multi-processing architectures, and asynchronous operations.

  • Expertise in Lua (especially the Torch framework) to build a similar tool for Torch. This may require building an interface for Python-Torch communication, and the student will have to experiment with various approaches since Lua is not as mature as Python in terms of open-source web frameworks.

Skill Level: Medium

Implementing python package and REST APIs for EvalAI

Project Title: Implementing python package and new features for EvalAI

Description:
a) Currently, if someone wants to make a submission to a challenge on EvalAI, they have to log in to EvalAI and upload a file. We want to make this process even easier for researchers who really like their terminal screen. The first main goal of this project is to create a Python package for EvalAI which lets participants import evalai as a Python package and make submissions through a Python script instead of logging in to the website.
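To make the idea concrete, here is a hypothetical sketch of what such a package could feel like to use; none of these functions exist yet, and designing this API is part of the project:

```python
# Hypothetical API -- every function below is a design proposal, not existing code.
import evalai

evalai.login(token="YOUR_AUTH_TOKEN")

# Browse challenges and the leaderboard from a script.
for challenge in evalai.challenges():
    print(challenge.title)
evalai.leaderboard(challenge_id=42, pretty=True)

# Make a submission without touching the web UI.
evalai.submit(challenge_id=42, phase_id=7, file_path="predictions.json")
```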

b) EvalAI uses Django REST Framework (DRF) on the backend. We want to add a lot of new features this summer, for which we want the student to implement RESTful web services using DRF. We expect the student to write simple and clean REST APIs, unit tests, and exhaustive documentation for each feature that they build. Please see the deliverables section for more details about the project.

Deliverables:

  1. EvalAI Python Package:

This package should include the following functionalities (but students are strongly encouraged to come up with their own ideas/features and should definitely mention them in their proposal):

  • For Participants:

    • Show all the challenge details (except test annotations file and other private details)
    • Show the number of challenges in which the participant has participated
    • Pretty print leaderboard for each challenge
    • Ability to make submissions using the Python package
    • Show the number of overall submissions made between two dates for each challenge phase of a challenge
  • For Hosts:

    • Show the number of overall submissions made between two dates for each challenge phase of a challenge
    • Check evalai.cloudcv.org for other host-related analytics features that can be added here
    • There are a lot of analytics-related features that can be added here. We strongly encourage prospective students to brainstorm and add them in their proposal.
  2. Implementing RESTful web services
  • Analytics (implement REST APIs and UI for following features):
    • Show how the rank has dropped/increased for a participant in a challenge
    • Pie chart for the status of the submissions for a participant in a particular challenge
    • Graph for the number of submissions happening per day
  • Add the functionality to show the challenge details when the user is not logged in
  • Create an admin dashboard where EvalAI admin can do following things:
    • Admin can send an email to all/some users informing them about a new challenge. Also, set up the email formatting in such a way that we can add images (or HTML) in the email content.
  • API versioning
  3. Implement webhooks for retrieving challenge submissions and results:
    • The aim of this feature is to retrieve the results of a challenge from some forked version of EvalAI to the main EvalAI server. This will help challenge hosts who are running a forked EvalAI on their private servers to show their results on the main evaluation server (i.e. evalai.cloudcv.org).
    • Note that there are a lot of things to be taken care of, like:
      • How will you make sure that the data is authentic?
      • How will you make sure that the results are retrieved in real time?
    • Also add a webhook for bots (Slack / Messenger / GitHub etc.), triggered when there is a new submission, the leaderboard changes, a rank increases or decreases, etc.

The above mentioned features should be implemented by the end of the GSoC period. Moreover, we expect students to also have implemented improvements that they have proposed.

Mentors: Rishabh Jain @RishabhJain2018, Shiv Baran Singh @spyshiv

Co-Mentor(s): Deshraj Yadav @deshraj

Skills: AngularJS, Python, Django, Django REST Framework, D3.JS, AWS

Skill Level: Difficult/Ambitious

Getting started:

We expect the students to solve at least one (the more the better) from the following list of issues which are super relevant to the project. Here is the list of getting-started issues to solve (some of these may already be open issues on GitHub):

  • Create a unique link for inviting members to a team -- only the person who is inviting can see the link. If they want, they can share the link with future team members.
  • Create an API for adding an affiliation to participants
  • On the UI, show a dropdown with the existing affiliations in the database
    • Show 'verified' for each of the admin-approved affiliations
  • Implement the feature of setting allowed email domains and blocked email domains for a particular challenge.
  • Modify the challenge creation using zip file API -- unzip the challenge details after it gets approved by the EvalAI admin (go through the codebase for more details and ask questions on Gitter).

Recommended: Try to fix some issues (note that there are some issues labeled with GSOC).

Important Links:

Issues for GSOC 2019

Project Title: Issues for GSOC 2019

Description: We need new issues for GSoC 2019, so potential students (like me) can know which projects to focus on and which issues to tackle for the GSoC application.

Deliverable: List of issues

Improvement in EvalAI Frontend

Project Title: Improvement in EvalAI frontend

Description:

As a part of last year's GSoC, we took the first step towards modernizing our UI, which involved shifting the codebase from Angular 1 to Angular 7. As we've reached feature parity with the existing UI, this project will involve fixing the last remaining kinks in the UI and incorporating the latest UI feedback we have received from the challenge hosts and participants of the AI challenges organized this year. The goal of this project is to fully replace the existing UI with the new UI after GSoC.

Deliverable:

  • Add features to participate in a challenge based on an invitation from the challenge host.
  • Add testimonials and statistics on the home page
  • Enhance the UI for displaying submissions under the My Submissions tab & the View All Submissions tab
  • Add feature for prettier URLs for the challenge.
  • Add support for editing challenge details.
  • Add CRUD feature for challenge hosts on submissions
  • Add pages for maintenance, 400, 500, and 404 errors on the website.
  • Add feature for filtering the challenges on the UI
  • Add features to export submissions in different formats by the challenge hosts for analysis.
  • Add service workers to make the website load faster.

Mentor: Sanjeev Singh @Sanji515, Mayank Lunayach @lunayach, Shekhar Prasad Rajak @Shekharrajak, Rishabh Jain @RishabhJain2018

Skills: Angular 7, HTML, CSS, Typescript

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2020)

Tutorials:

Important Links:

Analytics Dashboards for EvalAI Users

Project Title: Analytics Dashboards for EvalAI Users

Description: The goal of this project is to provide challenge hosts and participants with insightful analytics to track their progress on the platform. This project will involve writing REST APIs, plotting relevant graphs, and building analytics dashboards for both challenge hosts and participants. The analytics will help challenge hosts view the progress of participants in their challenge (changes in performance and ranking over time), and participants will be able to visualize the performance of all their submissions over time and their corresponding rank on the leaderboard.

Deliverable: Expectations from the student at the end of the project

  1. For Challenge Hosts:
    • Implement APIs to fetch and aggregate metrics for participant performance in a challenge over a period of time.
    • Develop frontend changes to plot graphs for the metrics fetched from the APIs.
    • Implement backend APIs and plots for showing the number of submissions in a challenge by a participant, and the evaluation time of submissions over time.
    • Create backend APIs and plots to show the ranks of various participants on the leaderboard over time and the number of submissions in a challenge by day, month, and year.
    • Write tests for the APIs and frontend.
  2. For Participants:
    • Implement backend APIs and graphs for checking the trend of submissions in a challenge and the corresponding rank.
    • Develop backend APIs and plots for the number of submissions in a challenge over time, filtered by submission status.
    • Write tests for the APIs and frontend.

Mentor: Gunjan Chhablani, Ram Ramrakhya

Skills: Angular 7, Django, Django Rest Framework, D3.js

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2023)

Important Links:

Show progress bar when a participant is uploading a submission from UI - Improvements in EvalAI frontend

Project Title: Showing a progress bar while uploading a submission

Description: This is one of the expected tasks in the process of improving the existing UI before GSoC 2021. The expected tasks come from the feedback received from challenge hosts and participants for the AI challenges organized that year.

Deliverable: Show a progress bar, matching the style and theme of the application, when a participant is uploading a submission from the UI.

Mentor: @RishabhJain2018

Skill Level: Medium

Add search option on all challenges and hosted challenges page

Project Title: Add search option on all challenges page

Description: General information about the project; avoid one-liners, the description should be as detailed as possible.

Deliverable: Expectations from the student at the end of the project

Mentor: Who is the mentor? Who is the Co-Mentor? Also please assign the issue to the mentor!

Skills: Which skills are needed? Programming languages, frameworks, concepts etc.

Skill Level: Easy, Medium, Hard

Get started: Tasks that mentors may want to suggest students so that they can start contributing to the code base (e.g. junior jobs, low hanging fruits, discussion on the mailing list)

Improvements in EvalAI frontend

Project Title: Improvements in EvalAI frontend

Description:

After last year's GSoC, we've reached feature parity on EvalAI-ngx with the existing UI. This project will involve fixing the last remaining kinks in the UI; the goal is to improve the new UI as we replace the existing UI before GSoC, incorporating the feedback we receive from the challenge hosts and participants for the AI challenges organized this year.

Deliverable:

  • Add testimonials, statistics on the home page
  • Add a feature to allow collapsing metrics on the leaderboard to make it easier to view details when the leaderboard has a lot of metrics.
  • Add meta tags for each challenge page. If a user shares a challenge page link on Twitter, it should show the details of the challenge and the challenge cover picture instead of the EvalAI image.
  • Add challenge evaluation script and test annotations edit features in the settings tab
  • Add frontend changes to upload the HTML field for the overview page or allow editing HTML in the edit dialog of the overview page.
  • Add frontend changes to upload the HTML field for the terms and conditions page or allow editing HTML in the edit dialog of the terms and conditions page.
  • Add changes to redirect the user to the challenge page once they upload a challenge
  • Show CLI commands to update test annotations on the phases tab to challenge hosts.
  • Show a progress bar when a participant is uploading a submission from the UI.
  • Add a search option on the all challenges and hosted challenges pages
  • Show worker status (active, inactive, error) on the challenge manage tab. It will make status checking easier for hosts instead of going through logs.
  • Fix inconsistent padding and font weight across the UI.

Mentor: Kajol Kumari (@Kajol-Kumari), Mayank Lunayach (@lunayach), Rishabh Jain (@RishabhJain2018), Ram Ramrakhya (@Ram81)

Skills: Angular 7, HTML, CSS, Typescript

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2021)

Important Links:

Query Regarding GSoC 2022

Hey @RishabhJain2018,
I see #37 was a GSoC 2021 idea but hasn't been implemented yet. Are there any plans for Cloud-CV to participate in GSoC 2022? If yes, which projects will take part?
EvalAI?

Evaluation Infrastructure Optimization

Project Title: Evaluation Infrastructure Optimization

Description: This project aims to enhance EvalAI's functionalities through automating large worker deployments in AWS, adding relevant features for efficient challenge management and also writing a robust and efficient test suite. The focus of the project is two-fold:

  • To automate large worker deployment processes on EvalAI using AWS EC2 instances or spot instances, make challenge management seamless and less reliant on the admins. We want to reduce the dependency of challenge hosts on EvalAI admins.
  • To make EvalAI more reliable and error-free by incorporating tests for different frontend and backend components. Having robust tests prevents making code-breaking changes to the codebase. This task will include adding unit tests for the API suite, prediction upload evaluation workers, code upload evaluation workers (on EKS), and integration tests for end-to-end testing of all components.

Deliverable:

  • Infrastructure Optimization
    • Implement auto-scaling for the challenges being hosted on independent EC2 instances.
    • Add feature to allow hosts to use custom docker images based on EvalAI worker images for submission evaluation.
    • Add features to create a forum for the challenge during challenge creation.
    • Add feature in GitHub-based challenge to manage multiple challenge configs over the years for the same challenge.
  • Building a Test Suite
    • Add tests for GitHub-based challenge creation on EvalAI.
    • Add tests for code upload evaluation workers, including unit tests for individual components and integration tests for the worker.
    • Add tests for code-upload challenge evaluation and static code upload challenge evaluation pipelines.
    • Add unit tests of Kubernetes components using mock.
    • Add tests for frontend components:
      • Challenge Page
      • Make submission page
      • My submissions and All submissions page
      • Settings tab
      • Dashboard with tabs for all challenges, hosted challenges, and participated challenges.

Mentor: Gunjan Chhablani, Ram Ramrakhya

Skills: Python, Django, AngularJS, AWS

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2023)

Important Links:

Improvements in EvalAI User Interface

Project Title: Improvements in EvalAI User Interface

Description: The goal of this project is to improve the overall user experience for challenge hosts and participants on EvalAI by allowing them to tag and filter challenges and by creating an intuitive and informative leaderboard. This project will involve creating a comprehensive search feature to find challenges and a tagging system for different challenge types (e.g. Computer Vision, NLP) for categorization. This will help participants and challenge hosts find challenges and search for related ones based on tags. An improved leaderboard, along with the search feature, will help streamline the process of organizing challenges, participating in challenges, and ranking participants. In addition, we will also work on adding support for relevant metadata for each challenge, such as prize money, sponsors, etc.

Deliverable:

  • Search, Filter and Tagging

    • Add the following attributes to challenges via challenge configuration:
      • Tags such as CV, NLP, RL, etc.
      • Prize Money
      • Sponsors
    • Implement search functionality in challenges by specific meta information (e.g. title, sponsors, ID, etc.)
    • Implement frontend changes to display tags on challenge page/gallery.
    • Implement relevant Django APIs for filtering challenges by tags, date, etc. (see the filtering sketch after the deliverables list).
    • Add filter options in search functionality such as by tags, date, etc.
    • Implement breadcrumbs on frontend and the ability to view challenges based on the tags selected.
    • Add tests for the search, filter and tagging functionalities.
  • Leaderboard

    • Add a feature on the “My Submissions” page that highlights the participant's submission that currently appears on the leaderboard.
      • This helps the participants know which of their submissions are currently showing on the leaderboard, i.e. which approach is working the best according to the metric of choice.
    • Change CSS to crop longer column headings.
    • Normalize headings in the leaderboard to follow consistent format.
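
To make the filtering deliverable concrete, here is a minimal sketch of a tag/date filtering API using Django REST Framework; the Challenge model, its fields, and the import paths are assumptions for illustration, not EvalAI's actual code.

```python
# A minimal filtering sketch, assuming a Challenge model with a
# many-to-many "tags" relation and a "start_date" field (hypothetical).
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .models import Challenge                  # hypothetical import path
from .serializers import ChallengeSerializer   # hypothetical import path

@api_view(["GET"])
def filter_challenges(request):
    """Example request: GET /challenges/?tags=CV,NLP&start_date=2023-01-01"""
    queryset = Challenge.objects.all()
    tags = request.query_params.get("tags")
    if tags:
        # Keep challenges carrying at least one of the requested tags.
        queryset = queryset.filter(tags__name__in=tags.split(",")).distinct()
    start_date = request.query_params.get("start_date")
    if start_date:
        queryset = queryset.filter(start_date__gte=start_date)
    return Response(ChallengeSerializer(queryset, many=True).data)
```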

Mentors: Gunjan Chhablani, Ram Ramrakhya

Skills: Python, Django, AngularJS, AWS

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2023)

Important Links:

CloudCV Web App Redesign

Project Title: Redesigning CloudCV using React + Django.

https://github.com/Cloud-CV/CloudCV

Description: The main CloudCV website contains a lot of legacy code and needs a complete revamp. The goal of this project is to rebuild it following best practices for deployment and operations, and to support the infrastructural needs of the other projects like EvalAI, IDE, CV-fy, etc.

Deliverable:

  • Use React + Material design to build standard components that are shared amongst all the demos.
  • Setup CI/CD for CloudCV
  • Add integration tests using Selenium
  • Add more logging / analytics around each demo so that we can use it later for training new models.
  • Add metric logging / error logging in Datadog / Sentry.
  • Dockerize the entire CloudCV infrastructure.
  • Add support for running some of the demos in the browser using JavaScript-based deep learning frameworks.

Mentors: Deshraj Yadav @deshraj, Harsh Agrawal @dexter1691

Skills: React, Python, Django

Skill Level: Medium

Get started: Take a look at our issues on the other projects and try to fix some (note that there are some issues labeled with GSOC).

Evaluating submission code in Docker containers

Project Title: Evaluating submission code in Docker containers on EvalAI

Description:

The rise of reinforcement-learning problems, or of any problem in which an agent must interact with an environment, introduces additional challenges for benchmarking. In contrast to the supervised learning setting, where performance is measured by evaluating on a static test set, it is less straightforward to measure the generalization performance of agents that interact with an environment. Evaluating these agents involves running the associated code on a collection of unseen environments that constitutes a hidden test set for this scenario. The goal of this project is to set up a robust pipeline for uploading prediction code in the form of Docker containers (as opposed to a prediction file) that will be evaluated on remote machines, with the results displayed on the leaderboard.

Deliverable:

  • Set up the pipeline for uploading the image to the ECR registry
  • Add a feature to display the image uploaded by a user on the UI
  • Set up the pipeline to spawn a GPU-based AWS machine automatically and start the evaluation of the uploaded docker image
  • Build the pipeline to send the evaluation results back to EvalAI and kill the spawned instance
  • Display the stderr, stdout, and results corresponding to the submission on EvalAI
  • Set up a dummy environment-based challenge on EvalAI
  • Optimize submission evaluation by evaluating on a cluster of machines
  • Write robust test cases for the code
  • Write documentation for setting it up on a server

The aforementioned features should be implemented by the end of the GSoC period. Moreover, we expect students to also have implemented improvements that they have proposed.
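
For a sense of what the evaluation step could look like, below is a minimal sketch that pulls a participant's image and runs it to completion with the Docker SDK for Python; the image URI handling and result format are illustrative, not the final pipeline.

```python
# A minimal evaluation-worker sketch using the Docker SDK for Python
# (docker-py); image URI and result handling are illustrative.
import docker

def evaluate_submission(image_uri):
    """Pull a participant's image (e.g. from ECR) and run it to completion."""
    client = docker.from_env()
    client.images.pull(image_uri)                  # fetch the submitted image
    container = client.containers.run(image_uri, detach=True)
    result = container.wait()                      # block until the agent exits
    logs = container.logs(stdout=True, stderr=True).decode("utf-8", "replace")
    container.remove()                             # clean up the finished container
    return result["StatusCode"], logs
```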

Mentors: Deepesh Pathak @fristonio, Rishabh Jain @RishabhJain2018, Deshraj @deshraj

Skills Required: Docker, AWS-CLI, Django, DRF [Knowledge of Reinforcement Learning is not necessary.]

Skill Level: Difficult/Ambitious

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2019)

Tutorials:

a) Docker
b) AWS-CLI
c) Buildpack
d) jupyter-repo2docker

Important Links:

Robust test suite and infra optimization setup

Project Title: Robust test suite and infra optimization setup

Description:

This project will focus on building a robust test suite for EvalAI's functionality. As part of the project, we will make EvalAI more robust and less error-prone by adding test cases for the different frontend and backend components. It will involve adding unit tests for the API suite, the prediction upload evaluation workers, and the code upload evaluation workers (on EKS), as well as integration tests for end-to-end testing of all the components.

Deliverables:

  • Add tests for GitHub-based challenge creation on EvalAI.
  • Add tests for submission and remote submission workers:
    • Add unit tests for individual components in the submission and remote submission workers.
    • Add integration tests for the workers.
  • Add tests for code upload evaluation workers:
    • Add unit tests for individual components in the code upload worker.
    • Add integration tests for the worker.
  • Add tests for the code-upload challenge evaluation and static code upload challenge evaluation pipelines.
  • Add unit tests for Kubernetes components using mocks (see the mocking sketch after this list).
  • Add tests for frontend components:
    • Challenge page
    • Make submission page
    • My submissions and All submissions page
    • Settings tab
    • Dashboard - all challenges, hosted challenges, participated challenges tab
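
As an example of the Kubernetes mocking item above, here is a minimal sketch that unit-tests a small helper with unittest.mock and the official Kubernetes Python client; the helper function itself is hypothetical.

```python
# A minimal sketch of unit-testing a Kubernetes helper with unittest.mock;
# create_evaluation_job is a hypothetical helper, not EvalAI's actual code.
from unittest import mock
from kubernetes import client

def create_evaluation_job(api, job_manifest):
    """Helper under test: submits a Job to the 'default' namespace."""
    return api.create_namespaced_job(namespace="default", body=job_manifest)

def test_create_evaluation_job_submits_to_default_namespace():
    # Mock the BatchV1Api so no cluster is needed for the test.
    fake_api = mock.Mock(spec=client.BatchV1Api)
    manifest = {"metadata": {"name": "eval-job"}}
    create_evaluation_job(fake_api, manifest)
    fake_api.create_namespaced_job.assert_called_once_with(
        namespace="default", body=manifest
    )
```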

Mentors: Ram Ramrakhya, Rishabh Jain

Skills Required: Python, Django, AngularJS, AWS

Project size - 175 hours

Difficulty - Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2022)

Important Links:

Proved interest in this

Project Title: Project title, short enough to catch attention

Description: General information about the project; avoid one-liners, the description should be as detailed as possible.

Deliverable: Expectations from the student at the end of the project

Mentor: Who is the mentor? Who is the Co-Mentor? Also please assign the issue to the mentor!

Skills: Which skills are needed? Programming languages, frameworks, concepts etc.

Skill Level: Easy, Medium, Hard

Get started: Tasks that mentors may want to suggest students so that they can start contributing to the code base (e.g. junior jobs, low hanging fruits, discussion on the mailing list)

Adversarial data collection with Gradio

Project Title: Adversarial data collection with Gradio

Description:

This project will focus on building infrastructure that exposes models submitted to EvalAI as demos in order to collect adversarial data for each model. As part of the project, we will integrate Gradio with our code upload challenge pipeline to deploy the models as web services. Additionally, the web service will record all interactions to curate an "in-the-wild" dataset for each submission.

Deliverables:

  • Design the system architecture for the EvalAI and Gradio integration
  • Set up a Gradio integration for a single VQA demo as a proof of concept:
    • Write a Gradio interface wrapper that is modular with respect to model inputs (a minimal wrapper sketch follows this list)
    • Integrate the Gradio interface with the dockerized model for inference
  • Set up automatic launching of a worker service that deploys a Gradio app via a Celery task from the EvalAI Django backend
  • Add frontend controls to deploy a Gradio web service for a single submission made by a participant to a code upload challenge.
  • Add support to log the interactions users make on the Gradio web service and push them to a database table specific to each demo.
  • Add API support to export the logged interactions for a demo
  • Add frontend changes to allow users to download the logged interactions in a standard dataset format
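
For reference, a minimal sketch of the kind of Gradio wrapper described above is shown below, assuming a recent Gradio version; the predict() stub and the input/output components are illustrative stand-ins for the dockerized model's inference call.

```python
# A minimal Gradio wrapper sketch for a VQA-style demo; predict() stands
# in for a call to the submission container's inference endpoint.
import gradio as gr

def predict(image, question):
    # In the real pipeline this would forward (image, question) to the
    # dockerized model; a fixed answer keeps the sketch self-contained.
    return "yes"

demo = gr.Interface(
    fn=predict,
    inputs=[gr.Image(type="pil"), gr.Textbox(label="Question")],
    outputs=gr.Textbox(label="Answer"),
)
demo.launch(server_name="0.0.0.0", server_port=7860)
```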

Mentors: Ram Ramrakhya, Rishabh Jain

Skills Required: Python, Django, AngularJS, AWS

Project size - 175 hours

Difficulty - Hard

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2022)

Important Links:

Admin Tools Enhancement and Cost Optimization

Project Title: Admin Tools Enhancement and Cost Optimization

Description: The goal of this project is to focus on improving admin experience on EvalAI, as well as target efficient cost-reduction for maintaining EvalAI.

One primary focus will be enhancing the existing automation for cancelling submissions whose messages have expired on the SQS queues. The second focus will be identifying under- or over-utilized ECS instances using AWS health metrics to automatically determine the required compute. Other improvements include admin actions in the Django administration for starting/stopping/restarting the EC2 instance workers, and automated deletion of code-upload infrastructure when a challenge is un-approved.

These features, along with others mentioned in the deliverables, will make EvalAI administrative experience seamless and will also save costs in the longer run.

Deliverable:

  • Admin Enhancements:
    • Create admin actions for EC2 worker start/stop/create/restart (a minimal admin-action sketch follows the deliverables list).
    • Implement an approval button directly on Slack request notifications for challenge approval.
    • Add a feature to automatically delete code-upload infrastructure on unapproving challenges via Django administration.
  • Cost Optimization Measures:
    • Use the custom SQS queue retention time to automatically cancel submissions and save costs.
    • Enhance the auto-cancel script so that its actions are reflected in the Prometheus metrics, keeping the metrics accurate.
    • Add improvements for retention of Prometheus metrics on container restart.
    • Identify and stop excessive instances and EC2 clones running on AWS for cost-saving.
    • Identify and remove old ECR repositories (and other avenues) for cost reduction on AWS.
  • Infrastructure Monitoring and Automation:
    • Automate ECS monitoring to detect and adjust CPU and memory consumption based on challenge requirements.
    • Address challenges that require frequent restarts on both EC2 and ECS instances; migrate problematic workers from ECS to EC2 and improve instance efficiency.
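
To make the admin-action item concrete, here is a minimal sketch of a Django admin action (Django >= 3.2) that reboots the EC2 workers of the selected challenges via boto3; the Challenge model and its worker_instance_id field are assumptions for illustration.

```python
# A minimal admin-action sketch, assuming each Challenge row stores the
# EC2 instance ID of its worker; model and field names are illustrative.
import boto3
from django.contrib import admin

from .models import Challenge  # hypothetical import path

@admin.register(Challenge)
class ChallengeAdmin(admin.ModelAdmin):
    actions = ["restart_workers"]

    @admin.action(description="Restart EC2 workers for selected challenges")
    def restart_workers(self, request, queryset):
        ec2 = boto3.client("ec2")
        instance_ids = [c.worker_instance_id for c in queryset]
        ec2.reboot_instances(InstanceIds=instance_ids)
        self.message_user(request, f"Restarted {len(instance_ids)} worker(s).")
```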

Mentors: @gchhablani, Rahul Singh, @gautamjajoo, @RishabhJain2018

Skills: Python, Django, AngularJS, AWS

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSoC-2024).

Important Links:

Monitoring setup for EvalAI admins

Project Title: Monitoring setup for EvalAI admins

Description:

As the number of challenges on EvalAI increases, we want to focus on improving the performance of our services. As a first step, we will monitor and measure all the key metrics of our services. Insights from these metrics will allow us to utilize our infrastructure efficiently, improve uptime, and reduce costs. The project will concentrate on setting up metric reporting and alerting infrastructure, writing REST APIs, plotting relevant graphs, and building analytics dashboards to help EvalAI admins maintain and monitor the services.

Deliverable:

  • Set up dockerized Grafana and Prometheus for reporting metrics
  • Add the django-prometheus app package to the EvalAI backend setup
  • Add Prometheus counters to report metrics from EvalAI APIs, e.g. the number of HTTP 4xx, 5xx, and 2xx responses (see the counter sketch after this list)
  • Add a model mixin to report the creation/deletion/update query rate for all the models
  • #45
  • Create dashboards and plot graphs for all reported metrics on Grafana; each EvalAI app will have a separate dashboard
  • Set up CloudWatch metrics for system health such as CPU, memory, and storage utilization
  • Add Prometheus counters to report metrics from evaluation workers, e.g. the number of submissions in queue, being processed, failed, etc. These metrics will tell us how long workers have not been processing submissions.
  • Configure alerts on the metrics reported by the EvalAI APIs, evaluation workers, and system health to notify admins about critical issues such as:
    • A worker not processing submissions
    • A worker processing submissions but marking all of them as failed
    • No QPS on the EvalAI backend, i.e. the server is down or not processing requests
    • The number of exceptions thrown by APIs exceeding a threshold
  • Write tests for the monitoring APIs
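
As an example of the counter items above, here is a minimal metric-reporting sketch with the prometheus_client package; the metric name, label, and hook are illustrative, not EvalAI's actual instrumentation.

```python
# A minimal worker-metrics sketch using prometheus_client; names and the
# on_submission_done hook are illustrative.
from prometheus_client import Counter, start_http_server

SUBMISSIONS_PROCESSED = Counter(
    "evalai_worker_submissions_processed_total",
    "Submissions processed by the evaluation worker",
    ["status"],  # e.g. "finished" or "failed"
)

start_http_server(8000)  # expose /metrics for Prometheus to scrape

def on_submission_done(status):
    # Called once per processed submission with its final status.
    SUBMISSIONS_PROCESSED.labels(status=status).inc()
```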

Mentors: Deshraj Yadav (@deshraj), Rishabh Jain (@RishabhJain2018), Ram Ramrakhya (@Ram81)

Skills: Python, Django, Django rest framework, Docker

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2021)

Important Links:

Origami: Artificial Intelligence as a service

Project Title: Origami
https://github.com/Cloud-CV/cvfy-lib
https://github.com/Cloud-CV/cvfy-frontend

Description: Deep learning and its application in AI subfields (computer vision, natural language processing) has seen tremendous growth in recent years. Driven in part by code, data, and manuscript sharing on GitHub and arXiv, public access to state-of-the-art deep learning models for object detection, classification, image captioning, and visual question answering keeps increasing.

However, running someone's released implementation and making sense of the results often involves painstaking preparation and execution: setting up the environment and dependencies (installing torch / caffe / tensorflow / keras / theano), setting up the I/O pipeline, keeping track of inter-package consistency, etc.

Origami (previously CloudCV-fy your code) is a platform that can automatically create an online demo and a corresponding API that other researchers / developers can use without understanding the fine-grained details of how the algorithm works. Testing or experimenting with a model should be as simple as going to a web page and uploading images to look at the results.
Examples of such manually curated demos can be found at:
http://cloudcv.org/vqa/
http://cloudcv.org/classify/
http://cloudcv.org/vip/

Mentors: Deshraj Yadav @deshraj, Harsh Agrawal @dexter1691

Pre-requisites:

  • Familiarity with containers, javascript and bash scripts.
  • Expertise in using python based web-servers like Flask and Django.
  • Familiarity with deep learning frameworks like Caffe / Theano / Keras / TensorFlow etc. Students are expected to have played around with these tools and should be familiar with the input / output pipelines.
  • Familiarity with building multi-threading, multi-processing architectures, and asynchronous operations.
  • Expertise in Lua (especially Torch framework) to build a similar tool for Torch. This may require building an interface for python-torch communication and the student will have to experiment with various ways since Lua is not as mature as Python in terms of open source web-frameworks.

Deliverables:

  • CMS system that gives users enough flexibility to modify the page according to their needs. Different tasks need different input/output setups. For example, object classification will take an image as input and output a class label, while visual question answering will take an image + question as input and return an answer. The CMS should therefore be flexible enough to support such use-cases.

  • Pre-defined templates for the most popular Artificial Intelligence tasks.

  • Support for third-party integrations like upload images from Dropbox, or save results to Dropbox.

  • Support default sample files that the user can upload.

  • Discover page to search different demos.

  • Improve cvfy-lib and release it as a PyPI package

  • Setup continuous integrations

  • REST APIs for easy third-party integrations like FB / Slack bot systems (a minimal API wrapper sketch appears after the deliverables)

  • Deep integration with EvalAI: allow users to compare two algorithms on real, user-provided data.

  • Anonymous links to demos. This complements papers under review, where anonymity is important. It would be great to have a central, reliable place where users can anonymously make their demos available, so that reviewers can view a demo without being able to see who its authors are.

By the end of GSoC, students are expected to finish the above-mentioned deliverables.
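
To illustrate the kind of wrapper Origami automates, here is a minimal Flask sketch that exposes a model's predict() as an HTTP endpoint; predict(), the route, and the field names are stand-ins for illustration, not cvfy-lib's actual API.

```python
# A minimal sketch of a demo API wrapper; predict() stands in for a real
# model (Caffe / TensorFlow / Torch, etc.) and its output is fabricated
# purely to keep the sketch self-contained.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(image_bytes, question):
    # Stand-in for the wrapped model's inference call.
    return {"answer": "yes", "confidence": 0.42}

@app.route("/api/vqa", methods=["POST"])
def vqa():
    image = request.files["image"].read()     # uploaded image bytes
    question = request.form["question"]       # free-form question text
    return jsonify(predict(image, question))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```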

Skill Level: Medium

Get started: Take a look at our issues on Github, the ones marked as GSOC are good places to start. Feel free to reach out to us on our Gitter channel if you have questions.

Analytics dashboards for challenge hosts and participants

Project Title: Analytics dashboards for challenge hosts and participants

Description:

This project will involve writing REST APIs, plotting relevant graphs, and building analytics dashboards for challenge hosts and participants. The analytics will help challenge hosts view the progress of participants in their challenge, for instance by comparing trends in the accuracy of participant submissions over time. Participants will be able to visualize the performance of all of their submissions over time, along with their corresponding rank on the leaderboard. The final goal is to provide users with analytics to track their progress on the platform.

Deliverable:

  1. For challenge host -

    • Add APIs to fetch and aggregate metrics for the performance of a participant in a challenge over a period of time.
    • Add frontend changes to plot graphs for the metrics fetched from the APIs.
    • Add backend APIs and plots for showing the number of submissions made by a participant in a challenge.
    • Add backend APIs and plots for showing the number of submissions in a challenge per day, month, and year (a minimal aggregation sketch follows the deliverables list).
    • Add backend APIs and plots to show the ranks of various participants on the leaderboard over time.
    • Add backend APIs and plots for the evaluation time of submissions over time.
    • Write tests for the APIs and frontend
  2. For participants -

    • Add backend APIs and graphs for checking the trend of submissions in a challenge and the corresponding rank.
    • Add backend APIs and plot for the number of submissions in a challenge over time which can be filtered by submission status.
    • Write tests for the APIs and frontend
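
As an example of the aggregation APIs above, here is a minimal sketch of a submissions-per-day endpoint using the Django ORM; the Submission model, its fields, and the relation to a challenge are assumptions for illustration.

```python
# A minimal aggregation sketch for a submissions-per-day API; the
# Submission model and its field names are hypothetical.
from django.db.models import Count
from django.db.models.functions import TruncDate
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .models import Submission  # hypothetical import path

@api_view(["GET"])
def submissions_per_day(request, challenge_id):
    counts = (
        Submission.objects.filter(challenge_phase__challenge=challenge_id)
        .annotate(day=TruncDate("created_at"))  # bucket by calendar day
        .values("day")
        .annotate(count=Count("id"))
        .order_by("day")
    )
    # Returns e.g. [{"day": "2022-01-01", "count": 12}, ...] for plotting.
    return Response(list(counts))
```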

Mentors: Gautam Jajoo (@gautamjajoo), Rishabh Jain (@RishabhJain2018), Ram Ramrakhya (@Ram81)

Skills: Angular 7, Django, Django Rest Framework, D3.js

Skill Level: Medium

Project size - 175 hours

Difficulty - Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2022)

Important Links:

Enhance UI/UX of EvalAI

Project Title: Enhance UI/UX of EvalAI

Description:

This project will focus on improving the existing UI of EvalAI to improve the experience of both challenge organizers and participants. We also want to improve the discoverability of all the features supported on EvalAI. With the increasing number of users on EvalAI, it is critical to have a frictionless and intuitive user experience. The goals of this project are to ease the pipeline for challenge creation, enhance the user experience of the platform, and add plots displaying the progress of state-of-the-art algorithms and of participant teams in a challenge over the years, among several other features.

Deliverable:

  • Migrate from bower to yarn
  • Add challenge creation using templates on the UI
  • Enhance the feature to edit the details of a challenge from the UI
  • Create a tutorial for challenge creation using templates as well as zip
  • Add a video for challenge creation on the homepage
  • Add an FAQ section for creating a challenge on EvalAI
  • Create a news page for EvalAI
  • Modify the home page to add the supported features:
    • List the features of having multiple phases, multiple dataset splits and multiple leaderboards
    • Add EvalAI-CLI and its main features
  • Add a feature for showing the leaderboards of previous challenges
  • Display the accuracies and submission metadata on the leaderboard, My Submissions, and View All Submissions pages as collapsible sections
  • Create a unique link for each entry on the leaderboard so that a participant can share their position on the leaderboard
  • Highlight the participant team's entry on the leaderboard for the logged-in user, along with displaying the team name
  • Add filtering in View All Submissions based on participant team name
  • Modify pagination for EvalAI
  • Basic challenge analytics:
    • Display a line chart of the progress in a challenge over a period of time (2015-2018), shown on the leaderboard page for each phase and dataset split
    • Display a line chart of the increase/decrease in the rank of a participant
    • Display a plot showing the progress over the duration of the contest with maximum, mean, baseline and lowest values
  • Add a feature to display the accuracy with a variable number of decimal places
  • Create a website maintenance page
  • Write robust tests for the frontend

Extended Goals:

  • Add Documentation for frontend

Mentors: Gali Prem Sagar @galipremsagar, Shivani Prakash @shivaniprakash95 (Design Mentor), Rishabh Jain @RishabhJain2018

Skills Required: AngularJS, HTML, CSS, Javascript

Skill Level: Medium

Get started: Try to fix some issues in EvalAI (note that there are some issues labeled with GSOC-2019)

Tutorials:

a) AngularJS
b) Javascript

Important Links:
