# Glossary
Key terms and concepts in GPUX.
## A

**ONNX** – Open Neural Network Exchange, a standard format for representing ML models.
## E

**Execution Provider** – Backend that runs inference (e.g., CUDA, CoreML, ROCm).
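Runtimes typically accept a priority-ordered list of execution providers and fall back to the first one available on the machine. The sketch below illustrates that pattern in plain Python; the provider names follow ONNX Runtime's convention, and the availability set is a made-up stand-in, not GPUX's real detection logic.

```python
# Provider fallback sketch: try each backend in priority order and use
# the first one available. AVAILABLE is hypothetical for illustration.
AVAILABLE = {"CPUExecutionProvider"}  # pretend only the CPU backend exists

def pick_provider(preferred):
    """Return the first preferred provider that is actually available."""
    for provider in preferred:
        if provider in AVAILABLE:
            return provider
    raise RuntimeError("no execution provider available")

choice = pick_provider(["CUDAExecutionProvider", "CPUExecutionProvider"])
print(choice)  # CPUExecutionProvider
```

Listing a CPU provider last is the usual safety net: inference still runs, just without GPU acceleration.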
## G

**GPU** – Graphics Processing Unit, hardware accelerator for ML workloads.

**GPUX** – Docker-like GPU runtime for ML inference.
## I

**Inference** – Running predictions on a trained ML model.
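In other words, inference applies a model's already-trained, fixed parameters to new input; no learning happens. A minimal sketch, with made-up weights standing in for a trained model:

```python
# "Inference" = applying fixed, trained parameters to new input.
# The weights and bias below are hypothetical, for illustration only.
WEIGHTS = [0.4, -0.2, 0.1]
BIAS = 0.5

def predict(features):
    """One prediction for a linear model: dot(features, WEIGHTS) + BIAS."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

print(predict([1.0, 2.0, 3.0]))  # approximately 0.8
```

Real inference engines do the same thing at scale: the ONNX file supplies the parameters and the graph of operations, and the execution provider runs them on the chosen hardware.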
## M

**Model** – Trained machine learning model, in ONNX format.
## P

**Provider** – See *Execution Provider*.
## R

**Runtime** – The GPUX execution environment.
!!! tip
    Confused by a term? Ask on Discord!