Notebooks are full instances of JupyterLab running on up to 4 dedicated GPUs. Our pre-built conda environments are designed specifically for machine learning model training on GPUs, with the latest TensorFlow, PyTorch, MXNet, and other frameworks pre-installed.
Training Jobs allow you to effortlessly run parallel model training experiments across dozens of GPUs. Just provide the model's git repository and the location of the data, and we handle the rest: no instance provisioning, no environment setup, and no worrying about turning it off when you're done.
Inference Jobs allow you to run new data through trained models and deliver the results back without any concern for managing, scaling up, or scaling down server clusters.
Endpoints deploy your models as a REST API. They are fully managed, giving you the real-time predictions you need for production applications without having to worry about servers, certificates, networking, or web development.
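As a rough illustration of what calling a deployed endpoint looks like, here is a minimal sketch using only the Python standard library. The URL and the JSON input schema (`"instances"`) are hypothetical placeholders, not the platform's actual API; substitute the URL and payload shape of your own deployed model.

```python
import json
import urllib.request

# Hypothetical endpoint URL for illustration only -- the real URL
# comes from your deployed model's endpoint details.
ENDPOINT_URL = "https://example-endpoint.trainml.invalid/predict"

def build_prediction_request(features):
    """Build a JSON POST request for a deployed model endpoint.

    The {"instances": ...} payload shape is an assumption for this
    sketch; use whatever input format your model expects.
    """
    body = json.dumps({"instances": features}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_prediction_request([[5.1, 3.5, 1.4, 0.2]])
# To actually call a live endpoint:
# with urllib.request.urlopen(req) as resp:
#     predictions = json.load(resp)
```

Because the endpoint is fully managed, client code stays this simple: one authenticated-or-public HTTPS call, no load balancers or TLS certificates to maintain yourself.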
Persistent Datasets allow you to reuse training data across multiple notebooks or training jobs. You can populate them directly from your local computer or another cloud provider.
Models enable you to store an immutable version of model code and its artifacts for reuse in other jobs. Models can be populated by saving notebooks, running training jobs, or downloading from external sources.
| GPU | FP32 Performance | GPU RAM |
| --- | --- | --- |
| GTX 1060 | 4.5 TFLOPS | 6 GB |
| RTX 2060 Super | 7.1 TFLOPS | 8 GB |
| RTX 2070 Super | 9 TFLOPS | 8 GB |
| RTX 2080 Ti | 13.5 TFLOPS | 11 GB |
| RTX 3090 | 35.5 TFLOPS | 24 GB |
| Instance Type | Training Duration (hrs) | $/hr | Total Training Cost | Savings |
| --- | --- | --- | --- | --- |
| AWS ml.p3.2xlarge | 0.64 | $3.83 | $2.45 | n/a |
| trainML RTX 3090 | 0.35 | $0.98 | $0.34 | 87% |
| trainML RTX 2080 Ti | 0.68 | $0.35 | $0.24 | 90% |
| trainML RTX 2070 Super | 0.89 | $0.28 | $0.25 | 90% |
| trainML RTX 2060 Super | 1.01 | $0.25 | $0.25 | 90% |
| trainML GTX 1060 | 1.66 | $0.10 | $0.17 | 93% |
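The table's figures follow directly from duration times hourly rate, with savings measured against the AWS ml.p3.2xlarge baseline. A quick sketch of the arithmetic:

```python
# Reproduce the cost and savings figures from the comparison table.
# Total cost = training duration (hrs) * hourly rate; savings are
# computed relative to the AWS ml.p3.2xlarge baseline row.

def total_cost(duration_hrs, rate_per_hr):
    return duration_hrs * rate_per_hr

def savings_vs_baseline(cost, baseline_cost):
    return 1 - cost / baseline_cost

baseline = total_cost(0.64, 3.83)       # AWS ml.p3.2xlarge -> ~$2.45
cost_2080ti = total_cost(0.68, 0.35)    # trainML RTX 2080 Ti -> ~$0.24
pct = savings_vs_baseline(cost_2080ti, baseline)  # -> ~0.90, i.e. 90%
```

Note that a slower, cheaper GPU can still win on total cost: the GTX 1060 takes 2.6x longer than the baseline yet costs 93% less overall, because its hourly rate is so much lower.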