Installation¶
This guide will help you install GPUX and verify your setup.
Requirements¶
System Requirements¶
- Operating System: Windows, macOS, or Linux
- Python: 3.11 or higher
- Memory: 4GB RAM minimum (8GB+ recommended)
- Storage: 500MB for GPUX + space for your models
Optional Requirements¶
- GPU: NVIDIA, AMD, Apple Silicon, Intel, or a DirectML-capable GPU on Windows (for accelerated inference)
- Docker: For containerized deployments (optional)
CPU-Only Support
GPUX runs fully on CPU-only machines; GPU acceleration is optional but recommended for better performance.
Installation Methods¶
Choose your preferred installation method:
uv is a fast, reliable Python package manager and the recommended way to install GPUX.
Install uv¶
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
Install GPUX¶
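A minimal sketch of the uv commands, assuming GPUX is published on PyPI under the name gpux (the same package name used in the pip instructions below):
# Add GPUX to a uv-managed project
uv add gpux
# Or install it into the active environment via uv's pip interface
uv pip install gpux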
Why uv?¶
- 10-100x faster than pip
- Deterministic dependency resolution
- Modern Python package management
- Used by GPUX internally
Alternatively, you can install GPUX with pip, the standard Python package manager.
Install GPUX¶
# Install with pip
pip install gpux
# Or with specific version
pip install gpux==0.2.0
# Upgrade to latest
pip install --upgrade gpux
Create Virtual Environment (Recommended)¶
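The same pattern as in the Permission Errors section below: create a virtual environment, activate it, and install GPUX inside it.
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
# Install GPUX inside the environment
pip install gpux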
Verify Installation¶
After installation, verify that GPUX is working correctly:
Check Version¶
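The exact flag is an assumption (a --version option is conventional for CLIs like this):
gpux --version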
Expected output: the installed version number (for example, 0.2.0).
Check Available Commands¶
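List the available subcommands with the standard help flag:
gpux --help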
Expected output:
╭─ Commands ─────────────────────────────────────────────────╮
│ build     Build and optimize models for GPU inference.     │
│ run       Run inference on a model.                        │
│ serve     Start HTTP server for model serving.             │
│ inspect   Inspect models and runtime information.          │
╰─────────────────────────────────────────────────────────────╯
Verify GPU Providers¶
Check which GPU providers are available on your system:
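The exact invocation is an assumption; the inspect command listed above reports runtime information, which should include the detected execution providers:
# Show runtime information, including available execution providers
gpux inspect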
The output lists the execution providers detected on your system, such as CUDAExecutionProvider on NVIDIA hardware, CoreMLExecutionProvider on Apple Silicon, or the CPUExecutionProvider fallback.
GPU Not Detected?
If your GPU isn't listed, you may need to install GPU-specific drivers or ONNX Runtime packages. See GPU Setup below.
GPU Setup¶
NVIDIA GPUs (CUDA)¶
For NVIDIA GPU acceleration:
# Install CUDA-enabled ONNX Runtime
pip install onnxruntime-gpu
# Verify CUDA is available
nvidia-smi
Requirements:
- CUDA 11.8 or 12.x
- cuDNN 8.x
- NVIDIA drivers 520+
AMD GPUs (ROCm)¶
For AMD GPU acceleration:
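A hedged sketch; the ROCm build of ONNX Runtime ships separately from the default wheel, and the package name below is an assumption that may vary by ROCm release:
# ROCm-enabled ONNX Runtime (package name/source may vary by ROCm release)
pip install onnxruntime-rocm
# Verify the GPU is visible to ROCm
rocm-smi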
Requirements:
- ROCm 5.4+
- AMD drivers
Apple Silicon (M1/M2/M3)¶
Apple Silicon support is built-in:
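No separate package is usually needed; on macOS the standard ONNX Runtime wheel includes the CoreML execution provider:
# The default ONNX Runtime wheel ships with the CoreML execution provider on macOS
pip install onnxruntime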
Requirements:
- macOS 12.0+
- Apple Silicon Mac (M1, M2, M3, etc.)
Intel GPUs (OpenVINO)¶
For Intel GPU acceleration:
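A hedged sketch using the OpenVINO build of ONNX Runtime:
# OpenVINO-enabled ONNX Runtime
pip install onnxruntime-openvino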
Requirements:
- Intel GPU drivers
- OpenVINO toolkit
Windows GPUs (DirectML)¶
DirectML support is built-in on Windows:
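DirectML has its own ONNX Runtime build; a hedged sketch:
# DirectML-enabled ONNX Runtime for Windows
pip install onnxruntime-directml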
Requirements:
- Windows 10/11
- DirectX 12 compatible GPU
Optional Dependencies¶
Install optional features based on your needs:
ML Framework Support¶
For model conversion from PyTorch, TensorFlow, etc.:
# PyTorch support
uv add --group ml torch torchvision
# TensorFlow support
uv add --group ml tensorflow
# Transformers support (BERT, GPT, etc.)
uv add --group ml transformers
HTTP Server¶
For serving models via HTTP:
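A hedged sketch, assuming the HTTP server is built on FastAPI and Uvicorn and that the dependency group is named serve (both names are assumptions):
# HTTP serving dependencies (group and package names are assumptions)
uv add --group serve fastapi uvicorn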
Development Tools¶
For contributing or development:
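For development you would typically work from a clone of the repository; a hedged sketch assuming a dev dependency group is defined there:
# Inside a clone of the GPUX repository (group name is an assumption)
uv sync --group dev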
Test Your Installation¶
Let's run a quick test to ensure everything works:
Create Test Script¶
Create a file named test_gpux.py:
"""Test GPUX installation."""
from gpux.utils.helpers import check_dependencies, get_gpu_info
# Check dependencies
print("Checking dependencies...")
deps = check_dependencies()
for name, available in deps.items():
status = "โ
" if available else "โ"
print(f"{status} {name}")
# Check GPU info
print("\nChecking GPU...")
gpu_info = get_gpu_info()
if gpu_info["available"]:
print(f"โ
GPU Available: {gpu_info.get('provider', 'Unknown')}")
else:
print("โ ๏ธ No GPU detected (CPU only)")
print("\nโ
GPUX is ready to use!")
Run Test¶
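Run the script with your Python interpreter:
python test_gpux.py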
Expected output:
Checking dependencies...
✅ onnxruntime
✅ onnx
✅ numpy
✅ yaml
✅ click
✅ typer
✅ rich
✅ pydantic

Checking GPU...
✅ GPU Available: CoreMLExecutionProvider

✅ GPUX is ready to use!
Troubleshooting¶
Command Not Found¶
If gpux command is not found:
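A few hedged checks, assuming a standard pip or uv install:
# Confirm GPUX is installed in the active environment
pip show gpux
# With a --user install, the user scripts directory must be on PATH
python -m site --user-base
# With a virtual environment, make sure it is activated
source venv/bin/activate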
Import Errors¶
If you see ModuleNotFoundError:
# Verify Python version
python --version # Should be 3.11+
# Reinstall dependencies
pip install --upgrade gpux
GPU Not Detected¶
If your GPU isn't detected:
- Verify drivers are installed
- Install the GPU-specific ONNX Runtime package
- Check provider availability (see the sketch below)
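A hedged sketch of those checks on an NVIDIA system (substitute the driver tool and ONNX Runtime package for your vendor):
# 1. Confirm the driver can see the GPU
nvidia-smi
# 2. Install the matching ONNX Runtime build (see GPU Setup above)
pip install onnxruntime-gpu
# 3. List the execution providers ONNX Runtime reports
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"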
Permission Errors¶
If you encounter permission errors:
# Use user install (no sudo required)
pip install --user gpux
# Or use virtual environment
python -m venv venv
source venv/bin/activate
pip install gpux
Next Steps¶
Now that GPUX is installed, let's create your first model!
Continue to: First Steps →
Still Having Issues?¶
- Check the FAQ
- Report installation issues
- Ask on Discord
- Email support