GPUs, Frameworks & Use Cases
Supported GPU models (RTX 4090 to H100), AI/ML frameworks, and real-world use cases from training to rendering.
FluxEdge supports a wide range of GPU hardware, from consumer-grade NVIDIA GeForce cards provided by community members to enterprise-grade H100 and Blackwell GPUs from NVIDIA partner Hyperstack. Because FluxEdge is infrastructure-level (Kubernetes + Docker), any framework that runs in a container works out of the box.
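As a minimal illustration of the container-first model, a workload only needs to be packaged as a standard Docker image to run on the platform. This is a sketch, not an official template; the base image is taken from the frameworks table below, and `train.py` is a placeholder for your own code:

```dockerfile
# Hypothetical example: any GPU framework packaged as a container runs on FluxEdge.
# pytorch/pytorch is one of the commonly used images; train.py is a placeholder.
FROM pytorch/pytorch:latest
WORKDIR /app
COPY train.py .
CMD ["python", "train.py"]
```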
Supported GPU Models
Dedicated Machines (Community Providers)
| GPU Family | Notable Models | Typical VRAM | Best For |
|---|---|---|---|
| NVIDIA GeForce RTX 40-series | RTX 4090, RTX 4080, RTX 4070 Ti | 12-24 GB | AI inference, image generation, rendering |
| NVIDIA GeForce RTX 30-series | RTX 3090, RTX 3080, RTX 3070 | 8-24 GB | General GPU compute, inference, gaming |
| NVIDIA Professional | RTX A6000, A5000, Quadro | 16-48 GB | Professional visualization, large model inference |
| AMD GPUs | Various models | Varies | Compute tasks with ROCm support |
Premium Machines (Hyperstack / NVIDIA)
| GPU | VRAM | Architecture | Best For |
|---|---|---|---|
| NVIDIA H100 | 80 GB HBM3 | Hopper | Large-scale AI training, enterprise inference |
| NVIDIA H200 | 141 GB HBM3e | Hopper | Ultra-large model training, HPC workloads |
| NVIDIA A100 | 40/80 GB HBM2e | Ampere | Production ML training and inference |
| NVIDIA RTX A6000 | 48 GB GDDR6 | Ampere | Professional visualization, model fine-tuning |
| NVIDIA Blackwell | TBA | Blackwell | Next-gen AI training and inference (planned) |
Premium machines require KYC1 verification. These are sourced from NexGen Cloud's Hyperstack platform, an NVIDIA partner, ensuring enterprise-grade reliability and performance.
Supported Frameworks & Tools
FluxEdge is infrastructure-level: any application that runs in a Docker container works. The following frameworks and tools are commonly used and confirmed on the platform:
| Framework/Tool | Category | Docker Image Example |
|---|---|---|
| PyTorch | Deep Learning | pytorch/pytorch:latest |
| TensorFlow | Deep Learning | tensorflow/tensorflow:latest-gpu |
| ONNX Runtime | Inference | mcr.microsoft.com/onnxruntime/server |
| Jupyter Notebook | Interactive Development | Quick Launch template available |
| Ollama | LLM Inference | Quick Launch template available |
| Stable Diffusion | Image Generation | Quick Launch template available |
| NVIDIA NIM | Optimized Inference | NVIDIA NGC catalog |
| NVIDIA NeMo | AI Agents / RAG | NVIDIA NGC catalog |
| Hugging Face | Model Hub / Transformers | Custom images from HF Docker |
| vLLM | High-throughput LLM Serving | vllm/vllm-openai:latest |
| Blender | 3D Rendering | Used in benchmarks; custom images |
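Because scheduling is Kubernetes-based, GPU containers follow the standard Kubernetes resource-request pattern. The sketch below is illustrative only: FluxEdge's own deployment YAML may use a different schema, and the pod name, model argument, and GPU count are assumptions. The image is the vLLM image from the table above.

```yaml
# Illustrative Kubernetes-style spec; FluxEdge's deployment YAML may differ.
apiVersion: v1
kind: Pod
metadata:
  name: vllm-server                        # illustrative name
spec:
  containers:
    - name: vllm
      image: vllm/vllm-openai:latest       # image from the table above
      args: ["--model", "mistralai/Mistral-7B-Instruct-v0.2"]  # example model, assumed
      ports:
        - containerPort: 8000              # vLLM's default OpenAI-compatible port
      resources:
        limits:
          nvidia.com/gpu: 1                # request one GPU via the NVIDIA device plugin
```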
Use Cases
1. AI/ML Model Training: Train deep learning models with GPU acceleration. Use Dedicated GPUs (RTX 4090) for smaller models or Premium machines (H100, A100) for large-scale training. FluxEdge is ideal for teams that need burst GPU capacity without long-term cloud commitments.
2. AI Inference & LLM Hosting: Run inference servers for production or development. Deploy Ollama, vLLM, NVIDIA NIM, or custom model servers. Zero egress fees make inference serving particularly cost-effective.
3. Image & Video Generation: Run Stable Diffusion, open-source DALL-E alternatives, or video generation pipelines. Quick Launch templates make this a one-click deployment.
4. 3D Rendering: Offload Blender, Unreal Engine, or other rendering workloads to GPU machines. Especially useful for animation studios that need burst render capacity.
5. Scientific Computing & HPC: Run computational fluid dynamics, molecular dynamics, bioinformatics, or other GPU-accelerated scientific workloads.
6. Data Processing: Use GPU-accelerated data processing tools (RAPIDS, cuDF) for large-scale data analytics. Dedicated CPU machines are also available for non-GPU data workloads.
7. Development & Prototyping: Spin up Jupyter notebooks with GPU access for quick prototyping and experimentation without setting up a local GPU environment.
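As a rough rule of thumb for matching a workload to the hardware above, required VRAM is the first constraint to check. The helper below is a sketch, not a FluxEdge API: the catalog dictionary simply restates VRAM figures from the GPU tables in this article, and the selection logic is illustrative.

```python
# Illustrative helper, not a FluxEdge API: pick a listed GPU by required VRAM,
# using the VRAM figures from the GPU tables above.
CATALOG = {
    # gpu_name: (vram_gb, machine_tier)
    "RTX 4090": (24, "Dedicated"),
    "RTX A6000": (48, "Premium"),
    "A100 80GB": (80, "Premium"),
    "H100": (80, "Premium"),
    "H200": (141, "Premium"),
}

def pick_gpu(required_vram_gb: float) -> str:
    """Return the smallest listed GPU whose VRAM covers the requirement."""
    fits = [(vram, name) for name, (vram, _tier) in CATALOG.items()
            if vram >= required_vram_gb]
    if not fits:
        raise ValueError("No single listed GPU has enough VRAM; consider multi-GPU.")
    return min(fits)[1]

print(pick_gpu(20))   # a ~20 GB model fits on a Dedicated RTX 4090
print(pick_gpu(100))  # a ~100 GB model needs an H200
```

Real sizing also depends on batch size, precision (FP16/FP8 roughly halve or quarter memory), and optimizer state during training, so treat VRAM-only matching as a starting point.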
Crypto mining is also permitted on FluxCore provider machines that opt in. The Auto-Switch feature transitions seamlessly between mining and rental workloads to maximize utilization.