Installation

This guide will help you install GPUX and verify your setup.


📋 Requirements

System Requirements

  • Operating System: Windows, macOS, or Linux
  • Python: 3.11 or higher
  • Memory: 4GB RAM minimum (8GB+ recommended)
  • Storage: 500MB for GPUX + space for your models

Optional Requirements

  • GPU: NVIDIA, AMD, Apple Silicon, Intel, or Windows GPU (for accelerated inference)
  • Docker: For containerized deployments (optional)

CPU-Only Support

GPUX works perfectly on CPU-only machines. GPU acceleration is optional but recommended for better performance.
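You can sanity-check the requirements above with a few lines of standard-library Python (a quick sketch, not part of GPUX itself):

```python
import platform
import sys

# Report the basics that GPUX cares about (stdlib only)
print("OS:      ", platform.system(), platform.release())
print("Python:  ", platform.python_version())
print("3.11+ ok:", sys.version_info >= (3, 11))
```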


🚀 Installation Methods

Choose your preferred installation method:

uv

uv is a fast, reliable Python package manager.

Install uv

# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Install GPUX

# Add GPUX to your project
uv add gpux

# Or install globally
uv pip install gpux

Why uv?

  • ⚡ 10-100x faster than pip
  • 🔒 Deterministic dependency resolution
  • 🎯 Modern Python package management
  • 🚀 Used by GPUX internally

pip

pip is the standard Python package manager.

Install GPUX

# Install with pip
pip install gpux

# Or with specific version
pip install gpux==0.2.0

# Upgrade to latest
pip install --upgrade gpux

Using a Virtual Environment

# Create virtual environment
python -m venv venv

# Activate (macOS/Linux)
source venv/bin/activate

# Activate (Windows)
venv\Scripts\activate

# Install GPUX
pip install gpux

From Source

For development or the latest features, install from the source repository.

Clone Repository

git clone https://github.com/gpux/gpux-runtime.git
cd gpux-runtime

Install with uv

# Install dependencies
uv sync

# Install in development mode
uv pip install -e .

Install with pip

# Install dependencies
pip install -e .

# Or with dev dependencies
pip install -e ".[dev]"

✅ Verify Installation

After installation, verify that GPUX is working correctly:

Check Version

gpux --version

Expected output:

GPUX version 0.2.0
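If the gpux entry point is not on your PATH, you can also query the installed version from Python's standard library (this only assumes the package is installed under the name gpux, as above):

```python
from importlib import metadata

try:
    # Reads the version from the installed package metadata
    print("GPUX version", metadata.version("gpux"))
except metadata.PackageNotFoundError:
    print("gpux is not installed in this environment")
```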

Check Available Commands

gpux --help

Expected output:

╭─ Commands ─────────────────────────────────────────────────╮
│ build     Build and optimize models for GPU inference.     │
│ run       Run inference on a model.                        │
│ serve     Start HTTP server for model serving.             │
│ inspect   Inspect models and runtime information.          │
╰────────────────────────────────────────────────────────────╯

Verify GPU Providers

Check which GPU providers are available on your system:

python -c "import onnxruntime as ort; print(ort.get_available_providers())"

Example outputs:

# NVIDIA (CUDA + TensorRT)
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

# Apple Silicon (CoreML)
['CoreMLExecutionProvider', 'CPUExecutionProvider']

# AMD (ROCm)
['ROCmExecutionProvider', 'CPUExecutionProvider']

# CPU only
['CPUExecutionProvider']

GPU Not Detected?

If your GPU isn't listed, you may need to install GPU-specific drivers or ONNX Runtime packages. See GPU Setup below.


๐Ÿ–ฅ๏ธ GPU Setup

NVIDIA GPUs (CUDA)

For NVIDIA GPU acceleration:

# Install CUDA-enabled ONNX Runtime
pip install onnxruntime-gpu

# Verify CUDA is available
nvidia-smi

Requirements:

  • CUDA 11.8 or 12.x
  • cuDNN 8.x
  • NVIDIA drivers 520+

TensorRT Support

For best performance, install TensorRT:

pip install onnxruntime-gpu tensorrt

AMD GPUs (ROCm)

For AMD GPU acceleration:

# Install ROCm-enabled ONNX Runtime
pip install onnxruntime-rocm

# Verify ROCm
rocm-smi

Requirements:

  • ROCm 5.4+
  • AMD drivers

Apple Silicon (M1/M2/M3)

Apple Silicon support is built-in:

# Standard ONNX Runtime includes CoreML
pip install onnxruntime

Requirements:

  • macOS 12.0+
  • Apple Silicon Mac (M1, M2, M3, etc.)

Intel GPUs (OpenVINO)

For Intel GPU acceleration:

# Install OpenVINO-enabled ONNX Runtime
pip install onnxruntime-openvino

Requirements:

  • Intel GPU drivers
  • OpenVINO toolkit

Windows GPUs (DirectML)

DirectML support on Windows is provided by the DirectML build of ONNX Runtime:

# DirectML-enabled ONNX Runtime
pip install onnxruntime-directml

Requirements:

  • Windows 10/11
  • DirectX 12 compatible GPU


📦 Optional Dependencies

Install optional features based on your needs:

ML Framework Support

For model conversion from PyTorch, TensorFlow, etc.:

# PyTorch support
uv add --group ml torch torchvision

# TensorFlow support
uv add --group ml tensorflow

# Transformers support (BERT, GPT, etc.)
uv add --group ml transformers

HTTP Server

For serving models via HTTP:

# FastAPI + Uvicorn
uv add --group serve fastapi uvicorn

Development Tools

For contributing or development:

# Install dev dependencies
uv sync --group dev

# Includes: pytest, ruff, mypy, pre-commit

🧪 Test Your Installation

Let's run a quick test to ensure everything works:

Create Test Script

Create a file named test_gpux.py:

"""Test GPUX installation."""
from gpux.utils.helpers import check_dependencies, get_gpu_info

# Check dependencies
print("Checking dependencies...")
deps = check_dependencies()
for name, available in deps.items():
    status = "✅" if available else "❌"
    print(f"{status} {name}")

# Check GPU info
print("\nChecking GPU...")
gpu_info = get_gpu_info()
if gpu_info["available"]:
    print(f"✅ GPU Available: {gpu_info.get('provider', 'Unknown')}")
else:
    print("⚠️  No GPU detected (CPU only)")

print("\n✅ GPUX is ready to use!")

Run Test

python test_gpux.py

Expected output:

Checking dependencies...
✅ onnxruntime
✅ onnx
✅ numpy
✅ yaml
✅ click
✅ typer
✅ rich
✅ pydantic

Checking GPU...
✅ GPU Available: CoreMLExecutionProvider

✅ GPUX is ready to use!


๐Ÿ› Troubleshooting

Command Not Found

If the gpux command is not found:

# Check if GPUX is installed
pip list | grep gpux

# Reinstall
pip install --force-reinstall gpux
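A gpux command that is missing even though pip list shows the package usually means the console script was installed outside your PATH. You can confirm the package itself is importable, and see where console scripts land, with the standard library (output depends on your environment):

```python
import importlib.util
import sysconfig

# Is the gpux package importable by this interpreter?
spec = importlib.util.find_spec("gpux")
print("gpux importable:", spec is not None)

# Console scripts are installed here; make sure it is on your PATH
print("scripts dir:", sysconfig.get_path("scripts"))
```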

Import Errors

If you see ModuleNotFoundError:

# Verify Python version
python --version  # Should be 3.11+

# Reinstall dependencies
pip install --upgrade gpux
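A ModuleNotFoundError often means pip installed into a different interpreter than the one you are running. Printing the active interpreter's path makes such a mismatch obvious (stdlib only):

```python
import sys

# If this path differs from the interpreter `pip` uses,
# install with: python -m pip install gpux
print("Running under:", sys.executable)
print("Version:", sys.version.split()[0])
```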

GPU Not Detected

If your GPU isn't detected:

  1. Verify drivers are installed

    # NVIDIA
    nvidia-smi
    
    # AMD
    rocm-smi
    

  2. Install GPU-specific ONNX Runtime

    # NVIDIA
    pip install onnxruntime-gpu
    
    # AMD
    pip install onnxruntime-rocm
    

  3. Check provider availability

    python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
    

Permission Errors

If you encounter permission errors:

# Use user install (no sudo required)
pip install --user gpux

# Or use virtual environment
python -m venv venv
source venv/bin/activate
pip install gpux

📚 Next Steps

Now that GPUX is installed, let's create your first model!

Continue to: First Steps โ†’


🆘 Still Having Issues?