The repository provides code to train an image classifier using distributed training within a containerized environment. We use AI Platform and Docker to achieve this.
Sayak Paul
Accompanying blog post: Distributed Training in TensorFlow with AI Platform & Docker
This repository provides code to train an image classification model in a distributed manner with the `tf.distribute.MirroredStrategy` strategy (single host, multiple GPUs) in TensorFlow 2.4.1. We make use of the following MLOps stack to do this:

- Docker to containerize the training code.
- AI Platform training jobs (by GCP) to manage running the custom Docker image using multiple GPUs. AI Platform also handles automatic provisioning and de-provisioning of resources.

Training in this manner, as opposed to doing it in a Jupyter Notebook environment, has a number of advantages.
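To make the distributed part concrete, here is a minimal, self-contained sketch of single-host multi-GPU training with `tf.distribute.MirroredStrategy`. The toy model and the random data below are placeholders for illustration only, not the repository's actual trainer code:

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every GPU visible on this host
# and averages gradients across the replicas after each training step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Model and optimizer variables must be created inside the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# Dummy tensors stand in for the Cats vs. Dogs TFRecords pipeline.
images = tf.random.uniform((32, 224, 224, 3))
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(16)

# The global batch size (16 here) is split evenly across the replicas.
model.fit(dataset, epochs=1)
```

With no GPUs attached, `MirroredStrategy` simply falls back to a single replica, so the same code runs unchanged on the staging machine and on a multi-GPU training job.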
Other recipes are included in the repository as well.
Note: One needs to have a billing-enabled GCP project to fully follow these steps.
We will use a cheap AI Platform Notebook instance as our staging machine, which we will use to build our custom Docker image, push it to Google Container Registry (GCR), and submit a training job to AI Platform. Additionally, we will use this instance to create TensorFlow Records (TFRecords) from the original dataset (Cats vs. Dogs in this case) and upload them to a GCS Bucket. AI Platform Notebooks come pre-configured with many useful Python libraries, Linux packages like `docker`, and command-line GCP tools like `gcloud`.

(I used an `n1-standard-4` instance with TensorFlow 2.4 as the base image, which costs $0.141 per hour.)
Set the following environment variables and make the shell scripts executable:
$ export PROJECT_ID=your-gcp-project-id
$ export BUCKET_NAME=unique-gcs-bucket-name
$ chmod +x scripts/*.sh
Create a GCS Bucket:
$ gsutil mb gs://${BUCKET_NAME}
You can additionally pass the location (region) where you want to create the bucket, like so: `$ gsutil mb -l asia-east1 gs://${BUCKET_NAME}`. If all of your resources are provisioned in that same region, you will likely get a slight performance boost.
Create TFRecords and upload them to the GCS Bucket:
$ cd scripts
$ source upload_tfr.sh
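Under the hood, writing TFRecords comes down to serializing each (image, label) pair into a `tf.train.Example` protocol buffer. The sketch below shows the general pattern; the feature names, helper functions, and file paths are illustrative and not necessarily the repo's actual `tfr_utils.py` API:

```python
import tensorflow as tf

def _bytes_feature(value: bytes) -> tf.train.Feature:
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value: int) -> tf.train.Feature:
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def image_example(image_path: str, label: int) -> tf.train.Example:
    # Store the raw encoded image bytes together with the integer class label.
    image_bytes = tf.io.read_file(image_path).numpy()
    features = {
        "image": _bytes_feature(image_bytes),
        "label": _int64_feature(label),
    }
    return tf.train.Example(features=tf.train.Features(feature=features))

# Write a couple of examples into a single shard (paths are placeholders).
with tf.io.TFRecordWriter("cats_dogs-00000-of-00001.tfrecord") as writer:
    for path, label in [("cat.0.jpg", 0), ("dog.0.jpg", 1)]:
        writer.write(image_example(path, label).SerializeToString())
```

Sharding the records across multiple files keeps reads parallelizable once the data lives in a GCS Bucket.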
Build the custom Docker image and run it locally:
$ cd ~/Distributed-Training-in-TensorFlow-2-with-AI-Platform
$ source scripts/train_local.sh
If everything looks good, you can interrupt the training run with `Ctrl-C` and proceed to run on Cloud:
$ source scripts/train_cloud.sh
... and done!
Find my TensorBoard logs online here. The training artifacts (`SavedModel`s, TensorBoard logs, and TFRecords) can be found here.
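If you want to reuse the exported artifacts, a `SavedModel` can be loaded back for inference. A minimal sketch, assuming a local copy (or a `gs://` path) of the export directory; the path and the 224x224 input resolution are assumptions for illustration:

```python
import tensorflow as tf

# The path is a placeholder; point it at the exported SavedModel directory.
model = tf.keras.models.load_model("saved_model_dir")

# Run a dummy prediction to sanity-check the loaded model
# (the input resolution is an assumption for this sketch).
dummy_batch = tf.random.uniform((1, 224, 224, 3))
print(model.predict(dummy_batch).shape)
```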
├── config.yaml: Specifies the type of machine to use to run training on Cloud.
├── scripts
│   ├── train_cloud.sh: Trains on Cloud with the given specifications.
│   ├── train_local.sh: Trains locally.
│   └── upload_tfr.sh: Creates and uploads TFRecords to a GCS Bucket.
└── trainer
    ├── config.py: Specifies hyperparameters and other constants.
    ├── create_tfrecords.py: Driver code for creating TFRecords. It is called by `upload_tfr.sh`.
    ├── data_loader.py: Contains utilities for the data loader (a pipeline sketch follows the tree).
    ├── model_training.py: Contains the actual data loading and model training code.
    ├── model_utils.py: Contains model building utilities.
    ├── task.py: Parses the given command-line arguments and starts an experiment.
    └── tfr_utils.py: Utilities for creating TFRecords.
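As a companion to the TFRecord-writing sketch above, here is a hedged sketch of the kind of parsing and `tf.data` pipeline code a data loader such as `trainer/data_loader.py` typically contains. The feature names, image size, and file pattern are assumptions for illustration, not necessarily the repo's actual schema:

```python
import tensorflow as tf

# Feature spec matching the illustrative serialization above:
# raw encoded image bytes plus an integer label.
FEATURE_SPEC = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized: tf.Tensor):
    parsed = tf.io.parse_single_example(serialized, FEATURE_SPEC)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, (224, 224)) / 255.0
    return image, parsed["label"]

def make_dataset(pattern: str, batch_size: int) -> tf.data.Dataset:
    # e.g. pattern = "gs://your-bucket/train-*.tfrecord" (placeholder path).
    files = tf.data.Dataset.list_files(pattern)
    return (
        tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
        .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(1024)
        .batch(batch_size)
        .prefetch(tf.data.AUTOTUNE)
    )
```

Feeding the returned dataset to `model.fit` inside a `MirroredStrategy` scope lets `tf.distribute` split each global batch across the available GPUs.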
https://github.com/GoogleCloudPlatform/ai-platform-samples
I am thankful to the ML-GDE program for providing generous GCP support.