Bump tensor bridge version cap for PyTorch 2.12 #2089

@leofang

Description

Summary

The tensor bridge's _torch_version_check() in _memoryview.pyx caps the supported PyTorch version range at (2, 3) <= (major, minor) <= (2, 11). PyTorch 2.12 was released on ~May 14, 2026, causing the version check to fail. When this happens, _is_torch_tensor() returns False and torch tensors fall through to the generic CAI/DLPack paths instead of the fast AOTI tensor bridge.

For from_cuda_array_interface, this causes a hard failure because PyTorch reports __cuda_array_interface__ version 2, while cuda-python requires version 3+:

BufferError: only CUDA Array Interface v3 or above is supported
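The failure mode can be reproduced without CUDA. Below is a minimal sketch (function and class names are illustrative, not cuda-python's actual API) of a consumer that, like cuda-python, rejects CAI versions below 3, fed an object that exports version 2 the way PyTorch does:

```python
def check_cai_version(obj):
    """Reject exporters whose __cuda_array_interface__ is older than v3
    (mirrors the error cuda-python raises on the fallback path)."""
    cai = obj.__cuda_array_interface__
    if cai.get("version", 0) < 3:
        raise BufferError("only CUDA Array Interface v3 or above is supported")
    return cai

class FakeTorchTensor:
    # PyTorch exports CAI version 2, which trips the check above.
    __cuda_array_interface__ = {
        "shape": (4,),
        "typestr": "<f4",
        "data": (0, False),
        "version": 2,
    }
```

With the tensor bridge active, torch tensors never reach this path; once the version check fails, they do, and the BufferError surfaces.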

Root cause

```cython
# _memoryview.pyx
cdef inline bint _torch_version_check():
    ...
    _torch_version_ok = (2, 3) <= (major, minor) <= (2, 11)  # ← needs bumping
```

The upper bound exists because the tensor bridge relies on undocumented PyTorch internals (THPVariable struct layout, AtenTensorHandle == at::Tensor*). The docstring says: "Bump the upper bound after verifying a new PyTorch release."
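As a pure-Python sketch of the current gate (the real check is Cython in _memoryview.pyx; the helper name here is illustrative), PyTorch 2.12 now falls outside the cap:

```python
def torch_version_ok(version_string):
    # Parse "major.minor" from a torch version string such as
    # "2.12.0+cu126" and apply the current cap from _memoryview.pyx.
    major, minor = (int(part) for part in version_string.split(".")[:2])
    return (2, 3) <= (major, minor) <= (2, 11)
```

Any version for which this returns False makes _is_torch_tensor() report False, which is what routes 2.12 tensors onto the slow/broken fallback paths.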

Action needed

  1. Verify PyTorch 2.12's THPVariable struct layout is unchanged (check torch/csrc/autograd/python_variable.h)
  2. Bump the upper bound to (2, 12)
  3. Re-add latest to the nightly CI pytorch test matrix (currently pinned to 2.11.0 as a workaround in Add nightly CI for optional-dependency testing (PyTorch, numba-cuda) #1987)
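Once step 1 checks out, step 2 is a one-tuple change. A hypothetical shape of the bumped gate (names illustrative; tolerant of local-version and dev suffixes):

```python
import re

def torch_version_ok(version_string, upper=(2, 12)):
    """Sketch of the gate after the bump, with a regex parse that
    tolerates suffixes like '2.12.0+cu126' or '2.13.0.dev20260601'."""
    match = re.match(r"(\d+)\.(\d+)", version_string)
    if match is None:
        return False
    major, minor = int(match.group(1)), int(match.group(2))
    return (2, 3) <= (major, minor) <= upper
```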

Discovered by

Nightly optional-dependency CI (#1987) — the latest torch entry started failing when PyTorch 2.12 was released.

-- Leo's bot

Metadata


Labels

    P1 Medium priority - Should do
    cuda.core Everything related to the cuda.core module
