Managed Soperator: your quick access to Slurm training

Join us for a webinar introducing Managed Soperator, Nebius AI Cloud's fully managed Slurm-on-Kubernetes solution that transforms AI training infrastructure deployment.

Learn how to provision a Slurm training cluster with NVIDIA GPUs and pre-installed libraries and drivers in just minutes, eliminating the complexity of manual configuration and lengthy setup.

What you will learn and who should attend

This webinar is ideal for ML researchers, data scientists, ML developers and technical teams who want to accelerate their training workflows without infrastructure complexity.

One-click AI training clusters

How to deploy powerful Slurm-based training environments instantly without DevOps expertise or manual configuration headaches.

Cloud-native Slurm architecture

Understanding Soperator's Kubernetes operator technology, shared root filesystem capabilities and proven scalability for multi-GPU training up to thousands of GPUs.

Managed service advantages

Leveraging integrated monitoring, automated security updates, enterprise-grade cloud platform features and advanced IAM without operational overhead.

Getting started & scaling options

Step-by-step guidance on setting up your first cluster, scaling from 32 GPUs to enterprise solutions, and accessing professional support when needed.
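Once a cluster is up, running a training job follows the standard Slurm workflow. As a minimal sketch (the node counts, script name `train.py` and job script below are hypothetical, not part of the webinar material):

```shell
#!/bin/bash
#SBATCH --job-name=train-demo      # job name shown in squeue
#SBATCH --nodes=4                  # number of GPU nodes to allocate (hypothetical)
#SBATCH --gpus-per-node=8          # GPUs requested per node (hypothetical)
#SBATCH --time=04:00:00            # wall-clock limit
#SBATCH --output=train-%j.log      # log file, %j expands to the job ID

# srun launches the command across the allocated nodes;
# train.py is a placeholder for your own training script
srun python train.py --epochs 10
```

You would submit this with `sbatch train.sh` and check its status with `squeue -u $USER`.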

Our hosts

Evgeny Arhipov

Head of scheduler services

René Schönfelder

Solutions Architect

Try Nebius AI Cloud console today

Get immediate access to NVIDIA® GPUs, along with CPU resources, storage and additional services through our user-friendly self-service console.