Full Stack Deep Learning

  1. Course Content
  2. Infrastructure and Tooling

Resource Management

How do you effectively manage compute resources?



Summary

  • Running complex deep learning models poses a very practical resource management problem: how do you give every team the tools they need to train their models without requiring them to operate their own infrastructure?
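
The core problem can be illustrated with a toy FIFO scheduler that hands out GPUs to queued jobs. This is a deliberately simplified sketch of what tools like Slurm or Kubernetes do for real clusters; all class and job names here are hypothetical.

```python
from collections import deque

class GpuPool:
    """Toy resource manager: hands out GPUs to queued jobs in FIFO order.

    Illustrative only; real schedulers also handle priorities,
    preemption, multi-node placement, and fault tolerance.
    """

    def __init__(self, num_gpus):
        self.free = list(range(num_gpus))  # GPU ids currently available
        self.queue = deque()               # (job_name, gpus_needed) waiting
        self.running = {}                  # job_name -> list of GPU ids

    def submit(self, job, gpus_needed):
        self.queue.append((job, gpus_needed))
        self._schedule()

    def finish(self, job):
        # Return the job's GPUs to the pool and try to start waiting jobs.
        self.free.extend(self.running.pop(job))
        self._schedule()

    def _schedule(self):
        # Start jobs in submission order while enough GPUs are free.
        while self.queue and len(self.free) >= self.queue[0][1]:
            job, n = self.queue.popleft()
            self.running[job] = [self.free.pop() for _ in range(n)]

pool = GpuPool(num_gpus=4)
pool.submit("team-a-train", 2)   # starts immediately on 2 of 4 GPUs
pool.submit("team-b-train", 3)   # waits: only 2 GPUs are free
pool.finish("team-a-train")      # frees 2 GPUs, so team-b starts
print(sorted(pool.running))      # ['team-b-train']
```

Every approach below, from spreadsheets to Kubernetes, is essentially a more robust and more automated version of this reserve/run/release loop.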

  • The most primitive approach is a shared spreadsheet in which people reserve the resources they need.

  • The next approach is to use the Slurm Workload Manager, a free and open-source job scheduler for Linux and Unix-like operating systems.
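
With Slurm, a user declares the resources a job needs in a batch script, and the scheduler decides when and on which node it runs. A minimal sketch of such a script; the partition name, resource numbers, and training command are all hypothetical:

```bash
#!/bin/bash
#SBATCH --job-name=train-resnet   # name shown in the queue
#SBATCH --partition=gpu           # hypothetical partition name
#SBATCH --gres=gpu:2              # request 2 GPUs on one node
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=12:00:00           # wall-clock limit; job is killed after

srun python train.py --epochs 50
```

The script is submitted with `sbatch`, the queue is inspected with `squeue`, and `scancel` releases the resources early, so the spreadsheet's reserve/release discipline is enforced by the scheduler instead of by convention.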

  • A very standard approach these days is to use Docker alongside Kubernetes.

    • Docker is a way to package up an entire dependency stack in a unit that is lighter-weight than a virtual machine.

    • Kubernetes is a way to run many Docker containers on top of a cluster.
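
In this setup, a team packages its training code into a Docker image and asks Kubernetes to schedule it as a Job. A sketch of such a Job manifest; the image name, command, and resource numbers are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: train-resnet
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: registry.example.com/team/trainer:latest  # hypothetical image
          command: ["python", "train.py", "--epochs", "50"]
          resources:
            limits:
              nvidia.com/gpu: 2   # scheduler places the pod on a node with 2 free GPUs
```

The image is built once with `docker build` and pushed to a registry; `kubectl apply -f job.yaml` submits the Job, and Kubernetes finds a node with the requested GPUs, runs the container, and frees the resources when it exits.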

  • The last option is to use open-source projects.

    • Kubeflow lets you run model training jobs at scale in containers, with the same scalability that Kubernetes brings to container orchestration.

    • Polyaxon is a self-service, multi-user system that schedules and manages jobs to make the best use of available cluster resources.
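
As an example of what these projects add on top of raw Kubernetes, Kubeflow's training operator introduces custom resources such as `TFJob` for distributed training. A sketch of a multi-worker TensorFlow run; the image and command are hypothetical:

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: distributed-train
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 4                # four worker containers placed by Kubernetes
      template:
        spec:
          containers:
            - name: tensorflow   # TFJob expects this container name
              image: registry.example.com/team/trainer:latest  # hypothetical
              command: ["python", "train.py"]
```

The operator creates the worker pods, wires up the cluster configuration for distributed TensorFlow, and cleans everything up when training finishes, so users describe *what* to train rather than *how* to orchestrate it.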
