ProviderManager¶
Execution provider management and selection.
Overview¶
ProviderManager handles automatic selection of the best GPU execution provider based on platform and availability.
from gpux.core.providers import ProviderManager, ExecutionProvider
manager = ProviderManager()
provider = manager.get_best_provider()
Enum: ExecutionProvider¶
Available execution providers.
from enum import Enum

class ExecutionProvider(Enum):
    TENSORRT = "TensorrtExecutionProvider"
    CUDA = "CUDAExecutionProvider"
    ROCM = "ROCmExecutionProvider"
    COREML = "CoreMLExecutionProvider"
    DIRECTML = "DirectMLExecutionProvider"
    OPENVINO = "OpenVINOExecutionProvider"
    CPU = "CPUExecutionProvider"
Class: ProviderManager¶
Constructor¶
Automatically detects available providers and determines priority.
Example:
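from gpux.core.providers import ProviderManager

# Provider detection happens at construction time
manager = ProviderManager()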
Methods¶
get_best_provider()¶
Get the best available execution provider.
Parameters:
preferred (str, optional): Preferred provider name (cuda, coreml, etc.)
Returns:
ExecutionProvider: Best available provider
Example:
# Auto-select best provider
provider = manager.get_best_provider()
# Prefer CUDA if available
provider = manager.get_best_provider("cuda")
get_available_providers()¶
Get list of available provider names.
Returns:
list[str]: Available provider names
Example:
providers = manager.get_available_providers()
print(f"Available: {providers}")
# Output: ['CUDAExecutionProvider', 'CPUExecutionProvider']
get_provider_config()¶
Get provider configuration.
Parameters:
provider (ExecutionProvider): Provider to configure
Returns:
list[tuple[str, dict]]: Provider configuration
Example:
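A minimal sketch that iterates the documented (name, options) pairs; the exact options returned for each provider are runtime-specific:
config = manager.get_provider_config(ExecutionProvider.CUDA)
for name, options in config:
    print(f"{name}: {options}")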
get_provider_info()¶
Get provider information.
Parameters:
provider (ExecutionProvider): Provider to query
Returns:
dict: Provider information with keys:
- name (str): Provider name
- available (bool): Availability
- platform (str): Platform/hardware type
- description (str): Description
Example:
info = manager.get_provider_info(ExecutionProvider.CUDA)
print(f"Provider: {info['name']}")
print(f"Available: {info['available']}")
print(f"Platform: {info['platform']}")
Provider Priority¶
Providers are prioritized based on platform and performance:
Default Priority¶
1. TensorRT (NVIDIA, best performance)
2. CUDA (NVIDIA)
3. ROCm (AMD)
4. CoreML (Apple Silicon)
5. DirectML (Windows)
6. OpenVINO (Intel)
7. CPU (universal fallback)
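In effect, selection is a first-match scan down this list. A minimal sketch of that rule (a hypothetical helper, not the actual implementation):
from gpux.core.providers import ExecutionProvider

PRIORITY = [
    ExecutionProvider.TENSORRT,
    ExecutionProvider.CUDA,
    ExecutionProvider.ROCM,
    ExecutionProvider.COREML,
    ExecutionProvider.DIRECTML,
    ExecutionProvider.OPENVINO,
    ExecutionProvider.CPU,
]

def pick_first_available(available: list[str]) -> ExecutionProvider:
    # Return the highest-priority provider whose name is available
    for provider in PRIORITY:
        if provider.value in available:
            return provider
    return ExecutionProvider.CPU  # universal fallback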
Platform-Specific¶
Apple Silicon:
1. CoreML
2. CPU

Windows:
1. TensorRT
2. CUDA
3. DirectML
4. OpenVINO
5. CPU
Complete Example¶
from gpux.core.providers import ProviderManager, ExecutionProvider
# Create manager
manager = ProviderManager()
# Check available providers
available = manager.get_available_providers()
print(f"Available providers: {available}")
# Get best provider
provider = manager.get_best_provider()
print(f"Selected provider: {provider.value}")
# Get provider info
info = manager.get_provider_info(provider)
print(f"Platform: {info['platform']}")
print(f"Description: {info['description']}")
# Prefer specific provider
cuda_provider = manager.get_best_provider("cuda")
if cuda_provider == ExecutionProvider.CUDA:
    print("Using CUDA acceleration")
else:
    print("CUDA not available, falling back to:", cuda_provider.value)