| id | title | status | source_sections | related_topics | key_equations | key_terms | images | examples | open_questions |
|---|---|---|---|---|---|---|---|---|---|
| ai-frameworks | AI Frameworks and Development Tools | established | Web research: NVIDIA newsroom, Arm learning paths, NVIDIA DGX Spark User Guide | [dgx-os-software gb10-superchip ai-workloads] | [] | [pytorch nemo rapids cuda ngc jupyter tensorrt llama-cpp docker nvidia-container-runtime fex] | [] | [] | [TensorFlow support status on ARM GB10 (official vs. community); full NGC catalog availability (which containers work on GB10?); vLLM or other inference server support on ARM Blackwell; JAX support status] |
AI Frameworks and Development Tools
The Dell Pro Max GB10 supports a broad AI software ecosystem, much of it pre-configured through DGX OS.
1. Core Frameworks
PyTorch
- Primary deep learning framework
- ARM64-native builds available
- Full CUDA support on Blackwell GPU
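A minimal sketch of verifying the points above: report the host architecture and, if PyTorch happens to be installed, whether CUDA sees the Blackwell GPU. The snippet does not assume PyTorch is present:

```python
import platform

def is_arm64() -> bool:
    """True on an ARM64/aarch64 host such as the GB10."""
    return platform.machine().lower() in ("arm64", "aarch64")

# PyTorch is optional here so the snippet still runs on machines without it.
try:
    import torch
    print(f"arch={platform.machine()} cuda={torch.cuda.is_available()}")
except ImportError:
    print(f"arch={platform.machine()} (PyTorch not installed)")
```

On a correctly configured GB10 this should report an `aarch64` arch with CUDA available.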
NVIDIA NeMo
- Framework for fine-tuning and customizing large language models
- Supports supervised fine-tuning (SFT), RLHF, and other alignment techniques
- Optimized for NVIDIA hardware
NVIDIA RAPIDS
- GPU-accelerated data science libraries
- Includes cuDF (DataFrames), cuML (machine learning), cuGraph (graph analytics)
- Drop-in replacements for pandas, scikit-learn, and NetworkX
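Because cuDF mirrors the core pandas API, the drop-in claim can be sketched with a fallback import; the snippet assumes only that one of the two libraries is installed:

```python
# cuDF (RAPIDS) follows the pandas API closely, so the same code can run
# GPU-accelerated on the GB10 or fall back to pandas on a CPU-only machine.
try:
    import cudf as xdf  # GPU DataFrames; needs a CUDA-capable GPU
except ImportError:
    import pandas as xdf  # CPU fallback with the same core API

df = xdf.DataFrame({"tokens": [512, 2048, 8192], "batch": [8, 4, 1]})
mean_tokens = float(df["tokens"].mean())
print(mean_tokens)  # 3584.0
```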
2. Inference Tools
CUDA Toolkit
- Low-level GPU compute API
- Compiler (nvcc) for custom CUDA kernels
- Profiling and debugging tools
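A small sketch that probes for the `nvcc` compiler rather than assuming a fixed install path:

```python
import shutil
import subprocess

# Locate nvcc on PATH and report its version line if present.
nvcc = shutil.which("nvcc")
if nvcc is not None:
    out = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    print(out.stdout.strip().splitlines()[-1])
else:
    print("nvcc not on PATH; install the CUDA Toolkit to build custom kernels")
```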
llama.cpp
- Quantized LLM inference engine
- ARM-optimized builds available for GB10
- Supports GGUF model format
- Documented in Arm Learning Path
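A hedged way to sanity-check a downloaded model before handing it to llama.cpp: GGUF files begin with the 4-byte ASCII magic `GGUF`. The demo file below is scratch data, not a real model:

```python
import tempfile

def looks_like_gguf(path: str) -> bool:
    """GGUF model files begin with the 4-byte ASCII magic b'GGUF'."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Self-contained demo against a scratch file; a real model path would point
# at a quantized .gguf download instead.
with tempfile.NamedTemporaryFile(suffix=".gguf", delete=False) as f:
    f.write(b"GGUF\x03\x00\x00\x00")
    demo = f.name
print(looks_like_gguf(demo))  # True
```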
TensorRT (expected)
- NVIDIA's inference optimizer
- Blackwell architecture support expected
3. Development Environment
- DGX Dashboard — web-based system monitor with integrated JupyterLab (T0 Spec)
- Python — system Python with AI/ML package ecosystem
- NVIDIA NGC Catalog — library of pre-trained models, containers, and SDKs
- Docker + NVIDIA Container Runtime — pre-installed for containerized workflows (T0 Spec)
- NVIDIA AI Enterprise — enterprise-grade AI software and services
- Tutorials: https://build.nvidia.com/spark
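A hedged sketch of the container workflow: constructing (without executing) a `docker run` invocation with GPU passthrough. The image tag is an example, not a verified GB10 tag; check the NGC catalog for current aarch64 builds:

```python
# Build a docker command with GPU passthrough via the NVIDIA Container
# Runtime. The tag below is an assumption; consult the NGC catalog.
image = "nvcr.io/nvidia/pytorch:25.01-py3"
cmd = [
    "docker", "run", "--gpus", "all", "--rm", image,
    "python", "-c", "import torch; print(torch.cuda.is_available())",
]
print(" ".join(cmd))
```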
4. Software Compatibility Notes
Since the GB10 is an ARM system:
- All Python packages must have ARM64 wheels or be compilable from source
- Most popular ML libraries (PyTorch, NumPy, etc.) have ARM64 support
- Some niche packages may require building from source
- x86-only binary packages will not run natively
- The FEX emulator can translate x86-64 binaries to ARM64 at a performance cost (used for Steam/Proton gaming; see ai-workloads)
- Container images must be ARM64/aarch64 builds
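The wheel-compatibility points above reduce to inspecting the interpreter's platform tag, e.g.:

```python
import platform
import sysconfig

machine = platform.machine().lower()
native_arm = machine in ("aarch64", "arm64")
print("wheel platform tag:", sysconfig.get_platform())
print("ARM64 native:", native_arm)
# pip only installs wheels matching this tag; x86_64-only wheels are skipped,
# and pip falls back to building the package from its sdist where possible.
```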
Key Relationships
- Runs on: dgx-os-software
- Accelerated by: gb10-superchip
- Powers: ai-workloads