
NVIDIA Corporation

Pinned repositories

  1. cuopt (Public)

     GPU-accelerated decision optimization

     CUDA · 808 stars · 156 forks

  2. cuopt-examples (Public)

     NVIDIA cuOpt examples for decision optimization

     Jupyter Notebook · 432 stars · 73 forks

  3. open-gpu-kernel-modules (Public)

     NVIDIA Linux open GPU kernel module source

     C · 16.9k stars · 1.7k forks

  4. aistore (Public)

     AIStore: scalable storage for AI applications

     Go · 1.8k stars · 244 forks

  5. nvidia-container-toolkit (Public)

     Build and run containers leveraging NVIDIA GPUs

     Go · 4.2k stars · 506 forks

  6. GenerativeAIExamples (Public)

     Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture

     Jupyter Notebook · 3.9k stars · 1k forks
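The cuOpt entries above describe GPU-accelerated decision optimization, a problem class that includes vehicle routing and traveling-salesperson-style tours. As a toy, CPU-only sketch of that problem class (plain Python for illustration; this is not cuOpt's API, and the function names are ours), a nearest-neighbor construction heuristic builds a feasible tour greedily:

```python
def nearest_neighbor_route(dist, start=0):
    """Greedily visit the closest unvisited city; return the visit order."""
    n = len(dist)
    route = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

def route_cost(dist, route):
    """Total length of the closed tour (returns to the start city)."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

# Four cities on a line at positions 0, 1, 2, 10.
pos = [0, 1, 2, 10]
dist = [[abs(a - b) for b in pos] for a in pos]
route = nearest_neighbor_route(dist)   # [0, 1, 2, 3]
cost = route_cost(dist, route)         # 1 + 1 + 8 + 10 = 20
```

A greedy heuristic like this only produces a starting tour; production solvers in this space improve such tours with further search.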

Repositories

Showing 10 of 710 repositories
  • NemoClaw (Public)

    Run OpenClaw more securely inside NVIDIA OpenShell with managed inference

    JavaScript · 18,798 stars · Apache-2.0 · 2,265 forks · 256 issues · 248 PRs · Updated Apr 9, 2026
  • aicr (Public)

    Tooling for an optimized, validated, and reproducible GPU-accelerated AI runtime in Kubernetes

    Go · 259 stars · Apache-2.0 · 27 forks · 20 issues · 8 PRs · Updated Apr 9, 2026
  • Model-Optimizer (Public)

    A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM, TensorRT, and vLLM to optimize inference speed.

    Python · 2,412 stars · Apache-2.0 · 339 forks · 60 issues · 127 PRs · Updated Apr 9, 2026
  • bionemo-framework (Public)

    BioNeMo Framework: for building and adapting AI models in drug discovery at scale

    Jupyter Notebook · 720 stars · 138 forks · 35 issues · 133 PRs · Updated Apr 9, 2026
  • TensorRT-LLM (Public)

    TensorRT LLM provides an easy-to-use Python API for defining large language models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for building Python and C++ runtimes that orchestrate inference execution performantly.

    Python · 13,320 stars · 2,264 forks · 572 issues · 678 PRs · Updated Apr 9, 2026
  • OpenShell (Public)

    OpenShell is the safe, private runtime for autonomous AI agents.

    Rust · 4,637 stars · Apache-2.0 · 486 forks · 47 issues · 10 PRs · Updated Apr 9, 2026
  • Megatron-LM (Public)

    Ongoing research on training transformer models at scale

    Python · 15,974 stars · 3,796 forks · 344 issues (1 needs help) · 360 PRs · Updated Apr 9, 2026
  • NVSentinel (Public)

    NVSentinel is a cross-platform fault remediation service that rapidly resolves runtime node-level issues in GPU-accelerated computing environments.

    Go · 247 stars · Apache-2.0 · 67 forks · 36 issues · 19 PRs · Updated Apr 9, 2026
  • cuda-quantum (Public)

    C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows

    C++ · 979 stars · Apache-2.0 · 356 forks · 438 issues (16 need help) · 114 PRs · Updated Apr 9, 2026
  • ncx-infra-controller-core (Public)

    NCX Infra Controller: hardware lifecycle management and multitenant networking

    Rust · 114 stars · Apache-2.0 · 75 forks · 133 issues (5 need help) · 54 PRs · Updated Apr 9, 2026
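Model-Optimizer's entry above lists quantization among its compression techniques. As a generic illustration of the idea (plain Python; this is not Model-Optimizer's actual API, and all names here are ours), symmetric per-tensor int8 quantization maps each float weight to an 8-bit integer plus one shared scale factor:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    # Each weight is stored as round(w / scale), clamped to the int8 range.
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03]
q, scale = quantize_int8(weights)   # q == [50, -127, 3], scale ~= 0.01
restored = dequantize(q, scale)     # close to the original weights
```

The payoff is storage and bandwidth: each weight shrinks from 32 bits to 8, at the cost of a bounded rounding error of at most half the scale per weight.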
