The best choice for deep learning in the cloud
Card | Price per hour² | FP32 peak, TFLOPS | ML benchmark score¹ (FP32) | Memory | Tensor cores | CUDA cores |
2080 Ti by Puzl | 0.29€ | 14.2 | 1.00 | 11 GB GDDR6 | 544 | 4352 |
Tesla T4 | 0.31€ (×1.07) | 8.1 (×0.57) | 0.53 | 16 GB GDDR6 | 320 (×0.59) | 2560 (×0.59) |
Tesla V100 | 1.26€ (×4.34) | 15.7 (×1.11) | 1.57 | 16 GB HBM2 | 640 (×1.18) | 5120 (×1.18) |
1. Based on the average normalized GPU score across the ResNet, Inception, and AlexNet benchmarks. Scores are normalized to the 2080 Ti (the 2080 Ti score is 1).
2. The minimum on-demand market price per GPU, taken from the public price lists of popular cloud and hosting providers. Information is current as of June 2020.
* Benchmarks were run on instances with 1 GPU, 16 GB RAM, 4 vCPUs, and fast data storage with similar IOPS and bandwidth.
** All multipliers are calculated relative to the 2080 Ti.
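As a worked example of that arithmetic, the sketch below recomputes the price multipliers and a simple score-per-euro figure from the published numbers. Only the prices and benchmark scores come from the table above; the variable names and the score-per-euro metric are ours.

```python
# Minimal sketch of the arithmetic behind the table's multipliers.
# Prices (€/hour) and ML benchmark scores (2080 Ti = 1) are taken from the table;
# everything else here is illustrative.
CARDS = {
    "2080 Ti by Puzl": (0.29, 1.00),
    "Tesla T4":        (0.31, 0.53),
    "Tesla V100":      (1.26, 1.57),
}

BASE_PRICE, _ = CARDS["2080 Ti by Puzl"]

for name, (price, score) in CARDS.items():
    price_mult = price / BASE_PRICE   # e.g. 1.26 / 0.29 ≈ 4.34 for the V100
    score_per_euro = score / price    # benchmark score per € per hour
    print(f"{name:16} price ×{price_mult:.2f}   score per €/h {score_per_euro:.2f}")
```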
* AMD, the AMD Arrow logo, AMD EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.
With 2nd-generation AMD EPYC processors, you can allocate up to 64 vCPUs and 10 GPUs in a single Docker container.
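If you manage your Pods through a standard Kubernetes API, a request for that much CPU and GPU in one container looks roughly like the sketch below. This is a generic example using the official `kubernetes` Python client; the namespace, image, memory limit, and the `nvidia.com/gpu` resource name are assumptions rather than Puzl's documented API.

```python
# Generic Kubernetes sketch (not Puzl-specific): one container requesting
# 64 vCPUs and 10 GPUs. Namespace, image and resource names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig supplied by your provider

container = client.V1Container(
    name="training",
    image="nvcr.io/nvidia/pytorch:23.10-py3",   # hypothetical image
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(
        limits={"cpu": "64", "memory": "384Gi", "nvidia.com/gpu": "10"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="multi-gpu-training"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="my-namespace", body=pod)
```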
Fast, reliable data storage for your datasets and trained models, extensible at runtime up to 4 TB.
DDR4 ECC 2.9 GHz memory with flexible allocation of up to 384 GB.
Disk space is provided to store the Docker image files for the containers of your Pod.
Fast, flexible data storage for your datasets. To reduce costs, you can upload your data via SSH first and start a Pod with GPUs afterwards.
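As an illustration of that "upload first, attach GPUs later" workflow, the sketch below copies a dataset over SFTP before any GPU Pod is started. The host, user, and paths are placeholders, not Puzl's actual endpoints, and it assumes the `paramiko` package is installed.

```python
# Minimal sketch: upload data over SSH/SFTP before starting a GPU Pod.
# Host, user and paths are hypothetical placeholders.
import os
import paramiko

HOST = "ssh.example-provider.com"   # hypothetical SSH endpoint of your storage volume
USER = "my-pod-user"                # hypothetical user

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username=USER, key_filename=os.path.expanduser("~/.ssh/id_rsa"))

# Copy the dataset to the persistent volume while no GPU time is being billed.
sftp = ssh.open_sftp()
sftp.put("dataset.tar", "/data/dataset.tar")
sftp.close()
ssh.close()
```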
The container file system provides temporary, fast data storage for each container in your Pod. Its capacity cannot be changed. You pay only if your application uses it.
* 1 GB = 1024 MB