
gpu-scheduler-for-deep-learning's Introduction

README

Introduction

This repository contains a re-implementation of our deep learning training infrastructure, described in the paper "AntMan: Dynamic Scaling on GPU Clusters for Deep Learning" (OSDI'20).

Note

  1. The original implementation of our paper is based on FUXI, which is tightly coupled with Alibaba's internal infrastructure. The goal of this project is to provide a cloud-native solution that demonstrates the features of the paper and benefits the community.
  2. This is a WIP project. Please allow us several days to finish the missing components, clean up the code, and show an end-to-end demo with some benchmarks. We are working hard to achieve that. More detailed documents are on the way.
  3. The implementation of our Kubernetes scheduler has only been tested in an ACK cluster on Alibaba Cloud, based on Kubernetes v1.18. The deployment scripts we provide may not apply directly to other Kubernetes infrastructures.

Modules

The development of this repository builds on several open-source repositories.

k8s-related

  1. KubeDL: an all-in-one operator, responsible for reconciling TFJobs
  2. Scheduler Plugins: a Kubernetes cluster scheduler, responsible for scheduling DL GPU pods for both resource-guarantee and opportunistic jobs
  3. k8s-device-plugin: reports GPU resources to Kubernetes
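As a quick sanity check of the device plugin, you can verify that nodes advertise GPUs as extended resources. This is a sketch assuming NVIDIA's stock k8s-device-plugin resource name `nvidia.com/gpu`; the node name is hypothetical, and the resource name this fork registers may differ:

```shell
# List each node's allocatable GPU count; the device plugin exposes
# GPUs as the extended resource "nvidia.com/gpu".
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'

# Inspect a single node in detail (hypothetical node name).
kubectl describe node my-gpu-node | grep -i 'nvidia.com/gpu'
```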

TensorFlow

The dynamic scaling mechanism was initially implemented in PAI-TF, a highly optimized TensorFlow version used in Alibaba. We have ported the core implementation to the open-source TensorFlow v1.15.

  1. TensorFlow

gpu-scheduler-for-deep-learning's People

Contributors

alibaba-oss, ShiruRen, WencongXiao


gpu-scheduler-for-deep-learning's Issues

bazel build error

After finishing the configuration in TensorFlow-with-dynamic-scaling, I ran "bazel build --config=cuda [--config=option] //tensorflow/tools/pip_package:build_pip_package".
However, I immediately received:
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=120
ERROR: Config value cuda is not defined in any .rc file
It would be of great help if someone could share a working example using TensorFlow-with-dynamic-scaling.
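For reference, the `cuda` build config is defined by TensorFlow's `./configure` script, which writes a `.tf_configure.bazelrc` that the build reads; the error above usually means that step was skipped. A sketch of the usual TF 1.15 build flow (paths and configure answers are environment-specific):

```shell
cd TensorFlow-with-dynamic-scaling

# Generates .tf_configure.bazelrc; answer "y" when asked about CUDA
# support so that the "cuda" build config gets defined.
./configure

# --config=cuda now resolves against the generated .tf_configure.bazelrc.
bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package
```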

"This cycle occurred because of a configuration option" when using bazel to build TensorFlow in this project

Hi @WencongXiao @ShiruRen, I get an error when I execute this command:
bazel build tensorflow/tools/pip_package:build_pip_package

The output is:
ERROR: /usr/src/TensorFlow-with-dynamic-scaling/tensorflow/core/BUILD:3290:1: in cc_library rule //tensorflow/core:core_cpu_lib: cycle in dependency graph:
//tensorflow/tools/pip_package:build_pip_package
//tensorflow/python/distribute:combinations
//tensorflow/python/eager:context
//tensorflow/python/eager:executor
//tensorflow/python:pywrap_tensorflow
//tensorflow/python:pywrap_tensorflow_internal
//tensorflow/python:pywrap_tensorflow_internal.py
//tensorflow/python:pywrap_tensorflow_internal_py_wrap
//tensorflow/python:py_func_lib
//tensorflow/python:safe_ptr
//tensorflow/c/eager:c_api
//tensorflow/core/profiler/lib:profiler_lib
//tensorflow/core/profiler/internal/cpu:host_tracer
.-> //tensorflow/core:core_cpu_lib
| //tensorflow/core:core_cpu_impl
| //tensorflow/contrib/resource_management:gpu_resource_management
| //tensorflow/contrib/resource_management:gpu_usage_adjustment
| //tensorflow/core:gpu_runtime
| //tensorflow/core:gpu_runtime_impl
-- //tensorflow/core:core_cpu_lib

This cycle occurred because of a configuration option
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted
INFO: Elapsed time: 0.612s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)

It seems there is a dependency cycle when building this code. Does anyone have any idea about it?
I hope someone can help me solve this problem.
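To help narrow down such a cycle, `bazel query` can print one dependency path between two of the targets in the reported cycle. A sketch, with target names taken from the error output above:

```shell
# Show one dependency path from core_cpu_impl to the contrib
# resource-management target that closes the cycle.
bazel query "somepath(//tensorflow/core:core_cpu_impl, //tensorflow/contrib/resource_management:gpu_resource_management)"
```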

Any Progress?

This is a WIP project. Please grant us several days to fix the missing components with code cleaning and show the end-to-end demo with some benchmarks. We are working hard to achieve that. More detailed documents are on the way.

I am currently reading the paper accepted at OSDI '20, and I wonder if there is any new progress on the code.

Working example of TensorFlow

It would be of great help if someone could share a working example using TensorFlow-with-dynamic-scaling in which more than one DNN training job shares the same set of resources. Specifically, I have the following questions:

  1. How do I make sure AntMan is being used as the resource scheduler for a given DNN training job that I want to execute? Are there any extra arguments that need to be passed to tf.Session while training a given DNN model?
  2. Can I run multiple training jobs separately using the same TensorFlow-with-dynamic-scaling source (with different Python and bash scripts) on the same set of resources, and will all of them automatically be visible to a global AntMan scheduler and share resources as expected?
  3. Or do I have to register all jobs that I plan to execute on the same set of resources at once?
  4. How do I specify that a given DNN job is a background/low-priority job?

Any plan for documentation?

Thanks for the great work!

I'm really interested in it and want to try it out myself. However, there are no instructions, guidelines, or documentation on how to build and run it. Is there any plan for documentation in the near future?
