Advanced Micro Devices
Advanced Micro Devices (AMD) is a global semiconductor company that develops high-performance computing, graphics, and visualization technologies for data centers, gaming, and embedded markets. AMD provides the ROCm open software platform for GPU computing, the HIP programming interface, and the AMD Developer Cloud for AI workloads on AMD Instinct GPUs.
APIs
AMD Developer Cloud API
The AMD Developer Cloud API provides access to AMD Instinct GPU instances for AI inference, training, and HPC workloads. It supports managing compute instances and deploying AI models.
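A minimal client sketch for the kind of instance-management calls described above. The endpoint path, header names, and response fields here are illustrative assumptions, not the documented API surface:

```python
"""Sketch of an AMD Developer Cloud API client.

The base URL, endpoint path, and JSON field names below are
hypothetical placeholders for illustration only.
"""
import json
import urllib.request

BASE_URL = "https://api.example-amd-cloud.com/v1"  # hypothetical base URL


def build_list_instances_request(api_token: str) -> urllib.request.Request:
    """Build an authenticated GET request for an (assumed) instance-list endpoint."""
    return urllib.request.Request(
        f"{BASE_URL}/instances",
        headers={"Authorization": f"Bearer {api_token}"},
    )


def parse_instances(payload: str) -> list:
    """Extract instance id and GPU type from an assumed JSON response shape."""
    data = json.loads(payload)
    return [
        {"id": item["id"], "gpu": item["gpu_type"]}
        for item in data.get("instances", [])
    ]
```

The request would be sent with `urllib.request.urlopen` (or any HTTP client) and the response body passed to `parse_instances`.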
AMD ROCm API
The AMD ROCm (Radeon Open Compute) platform provides the runtime and library APIs for GPU-accelerated computing on AMD hardware. It includes HIP (Heterogeneous-compute Interface for Portability).
Capabilities
AMD AI GPU Computing
Unified workflow capability for AI and HPC workloads on AMD Instinct GPUs: provision instances, deploy LLMs, monitor performance, and manage cloud credits. Designed for AI researchers.
Features
On-demand access to MI300X, MI250, and MI210 GPU instances for AI training, inference, and HPC workloads.
Open-source GPU compute platform with HIP programming model, math libraries, and deep learning framework support.
CUDA-compatible GPU programming interface enabling portable code across AMD and NVIDIA hardware.
Deploy and serve large language models using vLLM, TGI, and other inference engines on AMD Instinct GPUs.
Optimized libraries including rocBLAS, rocFFT, rocRAND, and rocSPARSE for scientific computing and deep learning.
RCCL (ROCm Communication Collectives Library) for efficient multi-GPU and multi-node collective operations.
Free GPU cloud credits for qualifying researchers, startups, and developers through the AMD AI Developer Program.
Full compatibility with PyTorch, TensorFlow, JAX, and other ML frameworks via ROCm backend support.
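On ROCm builds of PyTorch, `torch.version.hip` is set to a version string and the familiar `torch.cuda` APIs dispatch to AMD GPUs. A small helper illustrating the backend check, written as a pure function of the version attributes so it runs without PyTorch installed:

```python
def detect_backend(hip_version, cuda_version):
    """Classify a PyTorch build from its torch.version attributes.

    ROCm wheels set torch.version.hip to a version string; CUDA wheels
    set torch.version.cuda instead. Pass torch.version.hip and
    torch.version.cuda when torch is available.
    """
    if hip_version is not None:
        return "rocm"
    if cuda_version is not None:
        return "cuda"
    return "cpu"
```

With torch installed, this would be called as `detect_backend(torch.version.hip, torch.version.cuda)`; code that already uses `torch.cuda` generally needs no changes on a ROCm build.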
Use Cases
Train and fine-tune large language models on AMD Instinct GPU clusters with ROCm-optimized PyTorch.
Deploy LLM inference endpoints using vLLM on AMD Instinct GPUs for high-throughput token generation.
Run HPC simulations, molecular dynamics, and fluid dynamics workloads on AMD GPU clusters with ROCm.
Train and deploy image classification, object detection, and segmentation models using AMD GPU acceleration.
Accelerate data processing and analytics workloads using GPU-accelerated computing with ROCm.
Develop and iterate on generative AI applications using AMD Developer Cloud free GPU credits.
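The LLM-serving use case above typically starts a server with `vllm serve <model>` and then talks to its OpenAI-compatible HTTP API. A sketch that builds such a request body (model name and prompt are placeholders):

```python
import json


def chat_completion_payload(model: str, prompt: str,
                            max_tokens: int = 128,
                            temperature: float = 0.7) -> str:
    """Serialize an OpenAI-compatible /v1/chat/completions request body,
    the format vLLM's built-in server accepts."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
```

The resulting JSON would be POSTed to `http://<host>:8000/v1/chat/completions` (8000 is vLLM's default port; adjust host and port for your deployment).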
Integrations
Full ROCm support for PyTorch including autograd, distributed training, and all major model architectures.
TensorFlow-ROCm integration enabling GPU-accelerated training and inference on AMD hardware.
AMD Instinct Day-0 support in vLLM for high-performance LLM inference serving.
Transformers and Diffusers library compatibility with ROCm for loading and running models from Hugging Face Hub.
AMD GPU operator for Kubernetes enabling GPU-accelerated containerized workloads on AMD hardware.
Official ROCm Docker images for containerized GPU computing environments.
ONNX Runtime ROCm execution provider for cross-framework model deployment on AMD GPUs.
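With ONNX Runtime, the ROCm execution provider is selected by listing `"ROCMExecutionProvider"` ahead of the CPU fallback in the `providers` argument to `InferenceSession`. A small helper that builds that preference list from whatever providers are available (the names mirror `onnxruntime.get_available_providers()`; the function is pure, so no GPU is needed to run it):

```python
def provider_preference(available):
    """Order execution providers: prefer ROCm, always keep CPU fallback.

    `available` is a list of provider names, as returned by
    onnxruntime.get_available_providers().
    """
    preferred = []
    if "ROCMExecutionProvider" in available:
        preferred.append("ROCMExecutionProvider")
    preferred.append("CPUExecutionProvider")
    return preferred
```

Usage would look like `ort.InferenceSession("model.onnx", providers=provider_preference(ort.get_available_providers()))`; keeping the CPU provider last means the same code degrades gracefully on machines without an AMD GPU.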