Train deep networks on multiple GPUs, clusters, and clouds using Parallel Computing Toolbox™. Scale up deep learning with multiple GPUs locally or on clusters, and train multiple networks interactively or in batch jobs. To learn about options, see Scale Up Deep Learning in Parallel and in the Cloud.
Deep Learning with Big Data on GPUs and in Parallel
Train deep networks on CPUs, GPUs, clusters, and clouds, and tune options to suit your hardware.
Scale Up Deep Learning in Parallel and in the Cloud
Options for deep learning with MATLAB using multiple GPUs, locally or in the cloud.
Deep Learning with MATLAB on Multiple GPUs
Specify multiple GPUs to use locally or in the cloud for training.
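For example, you can request multi-GPU or parallel-pool training through the ExecutionEnvironment training option. A minimal sketch:

    gpuDeviceCount('available')               % number of supported GPUs MATLAB can use

    % Use every supported GPU on the local machine.
    optionsLocal = trainingOptions('sgdm', ...
        'ExecutionEnvironment','multi-gpu');

    % Use the workers of the current parallel pool, for example a pool
    % running on a cluster or in the cloud.
    optionsPool = trainingOptions('sgdm', ...
        'ExecutionEnvironment','parallel');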
Train Network Using Automatic Multi-GPU Support
This example shows how to use multiple GPUs on your local machine for deep learning training using automatic parallel support.
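A minimal sketch of automatic multi-GPU training, assuming the digit sample data set that ships with Deep Learning Toolbox and a machine with more than one supported GPU:

    [XTrain,YTrain] = digitTrain4DArrayData;

    layers = [
        imageInputLayer([28 28 1])
        convolution2dLayer(3,16,'Padding','same')
        batchNormalizationLayer
        reluLayer
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    options = trainingOptions('sgdm', ...
        'ExecutionEnvironment','multi-gpu', ...   % split each mini-batch across the local GPUs
        'MiniBatchSize',256, ...                  % consider scaling the mini-batch size with the GPU count
        'MaxEpochs',5, ...
        'Verbose',false);

    net = trainNetwork(XTrain,YTrain,layers,options);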
Train Deep Learning Networks in Parallel
This example shows how to run multiple deep learning experiments on your local machine.
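The parallel examples that follow run their work on a parallel pool of workers. A minimal sketch, assuming the default local cluster profile, of opening and closing a pool explicitly:

    pool = parpool(4);          % start four local workers

    % ... run parfor, parfeval, or other parallel code here ...

    delete(gcp('nocreate'))     % shut down the current pool when finished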
Use parfor to Train Multiple Deep Learning Networks
This example shows how to use a parfor loop to perform a parameter sweep on a training option.
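A minimal sketch of such a sweep, assuming the digit sample data set that ships with Deep Learning Toolbox and sweeping the initial learning rate:

    [XTrain,YTrain] = digitTrain4DArrayData;
    [XTest,YTest]   = digitTest4DArrayData;

    layers = [
        imageInputLayer([28 28 1])
        convolution2dLayer(3,16,'Padding','same')
        batchNormalizationLayer
        reluLayer
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    learnRates = [0.001 0.005 0.01 0.05];
    accuracy = zeros(size(learnRates));

    % Each iteration trains an independent network on a different worker.
    parfor i = 1:numel(learnRates)
        options = trainingOptions('sgdm', ...
            'InitialLearnRate',learnRates(i), ...
            'MaxEpochs',5, ...
            'Verbose',false);
        net = trainNetwork(XTrain,YTrain,layers,options);
        accuracy(i) = mean(classify(net,XTest) == YTest);
    end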
Use parfeval to Train Multiple Deep Learning Networks
This example shows how to use parfeval to perform a parameter sweep on the depth of a deep learning network architecture and retrieve data during training.
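A minimal sketch of the same idea. The helper function trainDepth and its network architecture are illustrative, not taken from the documented example, and this sketch only collects each final result as it becomes available; the documented example additionally streams intermediate results during training (for example, with a parallel.pool.DataQueue).

    [XTrain,YTrain] = digitTrain4DArrayData;
    [XTest,YTest]   = digitTest4DArrayData;

    depths = 1:4;
    for i = 1:numel(depths)
        % Each call returns a future immediately; training runs on a pool worker.
        futures(i) = parfeval(@trainDepth,1,depths(i),XTrain,YTrain,XTest,YTest); %#ok<SAGROW>
    end

    % Fetch each result as soon as its future finishes; MATLAB stays responsive meanwhile.
    accuracy = zeros(size(depths));
    for i = 1:numel(depths)
        [idx,acc] = fetchNext(futures);
        accuracy(idx) = acc;
        fprintf('Network of depth %d finished with accuracy %.3f\n',depths(idx),acc);
    end

    function accuracy = trainDepth(depth,XTrain,YTrain,XTest,YTest)
        % Stack depth copies of a convolutional block between input and output layers.
        middle = [];
        for k = 1:depth
            middle = [middle
                convolution2dLayer(3,16,'Padding','same')
                batchNormalizationLayer
                reluLayer]; %#ok<AGROW>
        end
        layers = [
            imageInputLayer([28 28 1])
            middle
            fullyConnectedLayer(10)
            softmaxLayer
            classificationLayer];
        options = trainingOptions('sgdm','MaxEpochs',5,'Verbose',false);
        net = trainNetwork(XTrain,YTrain,layers,options);
        accuracy = mean(classify(net,XTest) == YTest);
    end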
Upload Deep Learning Data to the Cloud
This example shows how to upload data to an Amazon S3 bucket.
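One possible approach, assuming you already have an S3 bucket and AWS credentials for it (the bucket name, region, and local folder below are hypothetical): set the credentials as environment variables, then write a local datastore to the bucket.

    setenv('AWS_ACCESS_KEY_ID','YOUR_AWS_ACCESS_KEY_ID')
    setenv('AWS_SECRET_ACCESS_KEY','YOUR_AWS_SECRET_ACCESS_KEY')
    setenv('AWS_DEFAULT_REGION','us-east-1')

    % Local copy of the training images, labeled by folder name.
    imds = imageDatastore('localTrainingData', ...
        'IncludeSubfolders',true,'LabelSource','foldernames');

    % Copy the images to the bucket (hypothetical bucket name and prefix).
    writeall(imds,'s3://mybucket/trainingData')

    % Cloud workers can then read the data directly from S3.
    imdsCloud = imageDatastore('s3://mybucket/trainingData', ...
        'IncludeSubfolders',true,'LabelSource','foldernames');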
Send Deep Learning Batch Job to Cluster
This example shows how to send deep learning training batch jobs to a cluster so that you can continue working or close MATLAB during training.
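A minimal sketch, assuming a configured cluster profile and a training script (both names below are hypothetical):

    c = parcluster('MyClusterProfile');

    % Submit the script as a batch job with a pool of three additional workers,
    % so the script itself can use parallel language features.
    job = batch(c,'trainNetworksScript','Pool',3);

    % batch returns immediately; you can keep working or close MATLAB.
    % Later, retrieve the results from the job.
    wait(job)
    diary(job)       % display the command-line output of the job
    load(job)        % load the job's workspace variables, such as the trained network
    delete(job)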
Train Network in Parallel with Custom Training Loop
This example shows how to set up a custom training loop to train a network in parallel.
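A simplified skeleton of the idea, not the documented example itself: each worker in an spmd block trains on its own (here synthetic) data shard, and the gradients are averaged across workers with gplus so that every copy of the network stays in sync. The variable names and the tiny network are illustrative.

    numWorkers = 2;
    pool = parpool(numWorkers);

    % A small dlnetwork for illustration.
    layers = [
        featureInputLayer(10)
        fullyConnectedLayer(16)
        reluLayer
        fullyConnectedLayer(3)
        softmaxLayer];
    net = dlnetwork(layers);

    numEpochs = 5;
    learnRate = 0.01;
    momentum  = 0.9;

    spmd
        % Each worker creates its own synthetic shard of 64 observations.
        X = dlarray(single(rand(10,64)),'CB');
        T = onehotencode(categorical(randi(3,1,64),1:3),1);

        velocity = [];
        for epoch = 1:numEpochs
            % Loss and gradients on this worker's shard.
            [loss,gradients] = dlfeval(@modelLoss,net,X,T);

            % Average the loss and gradients across all workers.
            loss = gplus(extractdata(loss))/spmdSize;
            gradients.Value = dlupdate(@(g) gplus(extractdata(g))/spmdSize,gradients.Value);

            % Every worker applies the same update, so the replicas stay identical.
            [net,velocity] = sgdmupdate(net,gradients,velocity,learnRate,momentum);

            if spmdIndex == 1
                fprintf('Epoch %d: loss %.4f\n',epoch,loss);
            end
        end
    end

    trainedNet = net{1};      % retrieve the trained network from worker 1
    delete(pool)

    function [loss,gradients] = modelLoss(net,X,T)
        Y = forward(net,X);
        loss = crossentropy(Y,T);
        gradients = dlgradient(loss,net.Learnables);
    end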