Self-Hosted GPU Runner

📘

Please contact us to gain access to the Self-Hosted GPU Runner feature.

What is a Self-Hosted GPU Runner?

Self-hosted GPU runners are dedicated machines or servers that use their own GPUs (Graphics Processing Units) to execute computational tasks such as machine learning training, data processing, or rendering. Instead of relying on GPU resources provided by external cloud services, a self-hosted GPU runner leverages the user's own hardware infrastructure. This setup can be integrated into continuous integration and deployment (CI/CD) pipelines, enabling users to manage and execute their workflows locally or within their private networks.
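Before registering a machine as a runner, it is useful to confirm that its GPUs are actually visible to the system. The sketch below is a minimal, illustrative check using `nvidia-smi` (which ships with the NVIDIA driver); it is not part of the runner feature itself, and the function name is our own.

```python
import shutil
import subprocess


def local_gpu_names():
    """Return the names of NVIDIA GPUs visible on this machine, or [] if none.

    This is an illustrative sketch: it shells out to nvidia-smi, which is
    only present when an NVIDIA driver is installed.
    """
    # If the driver tooling isn't installed, there is nothing to query.
    if shutil.which("nvidia-smi") is None:
        return []
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True,
            text=True,
            check=True,
        )
    except subprocess.CalledProcessError:
        # nvidia-smi exists but failed (e.g. no devices attached).
        return []
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]


if __name__ == "__main__":
    gpus = local_gpu_names()
    if gpus:
        print(f"Found {len(gpus)} GPU(s): {', '.join(gpus)}")
    else:
        print("No NVIDIA GPUs detected on this machine.")
```

An empty list simply means this machine has no usable NVIDIA GPU and should not be registered as a GPU runner.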

Why Use Your Own Hardware?

By using a self-hosted GPU runner, users can make the most of their existing hardware investments. Fully utilizing their own GPU resources can lead to cost savings and better resource management, especially for organizations with idle or underused GPUs.

Self-hosted GPU runners also provide an added layer of security. When running custom functions and sensitive workflows, it's crucial to maintain control over the execution environment. By using their own infrastructure, users can ensure that their code, data, and proprietary algorithms are processed in a secure and isolated manner, reducing the risk of exposure to external threats.

With a self-hosted GPU runner, users have complete control over their environment. This enables them to customize the setup according to their specific needs, including software configurations, library versions, and system optimizations.

Try It Out

Check out how to set up your environment to install runners and train models on your own hardware.