docker.io/gpustack/runner:cuda12.8-sglang0.5.5 linux/amd64

docker.io/gpustack/runner:cuda12.8-sglang0.5.5 - China mirror for download (views: 8)
Source image    docker.io/gpustack/runner:cuda12.8-sglang0.5.5
China mirror    swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5
Image ID        sha256:0ffc830b317c45266b61ea22f38dcf3fb84c464d07b2b6e942cf5bbc836c1055
Image tag       cuda12.8-sglang0.5.5
Size            32.96GB
Source registry docker.io
CMD             (none)
Entrypoint      tini --
Working dir     /
OS/Platform     linux/amd64
Image created   2025-11-12T03:33:05.478258465Z
Synced at       2025-12-05 02:43
Updated at      2025-12-05 09:13
Environment variables
PATH=/usr/local/mpi/bin:/usr/local/ucx/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/efa/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.8 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 
brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 NV_CUDA_CUDART_VERSION=12.8.90-1 CUDA_VERSION=12.8.1 LD_LIBRARY_PATH=/usr/local/cuda/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_CUDA_LIB_VERSION=12.8.1-1 NV_NVTX_VERSION=12.8.90-1 NV_LIBNPP_VERSION=12.3.3.100-1 NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1 NV_LIBCUSPARSE_VERSION=12.5.8.93-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8 NV_LIBCUBLAS_VERSION=12.8.4.1-1 NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1 NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1 NCCL_VERSION=2.25.1-1 NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8 NVIDIA_PRODUCT_NAME=CUDA NV_CUDA_CUDART_DEV_VERSION=12.8.90-1 NV_NVML_DEV_VERSION=12.8.90-1 NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1 NV_LIBNPP_DEV_VERSION=12.3.3.100-1 NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1 NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1 NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8 NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1 NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1 NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1 NV_NVPROF_VERSION=12.8.90-1 NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8 LIBRARY_PATH=/usr/local/cuda/lib64/stubs NV_CUDNN_VERSION=9.8.0.87-1 NV_CUDNN_PACKAGE_NAME=libcudnn9-cuda-12 NV_CUDNN_PACKAGE=libcudnn9-cuda-12=9.8.0.87-1 NV_CUDNN_PACKAGE_DEV=libcudnn9-dev-cuda-12=9.8.0.87-1 DEBIAN_FRONTEND=noninteractive LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8 PYTHON_VERSION=3.12 PIP_NO_CACHE_DIR=1 PIP_DISABLE_PIP_VERSION_CHECK=1 
PIP_ROOT_USER_ACTION=ignore PIPX_HOME=/root/.local/share/pipx PIPX_LOCAL_VENVS=/root/.local/share/pipx/venvs UV_NO_CACHE=1 UV_HTTP_TIMEOUT=500 UV_INDEX_STRATEGY=unsafe-best-match CUDA_HOME=/usr/local/cuda CUDA_ARCHS= UV_SYSTEM_PYTHON=1 UV_PRERELEASE=allow VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 OPAL_PREFIX=/opt/hpcx/ompi OMPI_MCA_coll_hcoll_enable=0 VLLM_AWS_EFA_VERSION=1.43.3 VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_NVIDIA_NVSHMEM_DIR=/usr/local/nvshmem VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 VLLM_LMCACHE_VERSION=0.3.9post1 RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES=1 SGLANG_VERSION=0.5.5 SGLANG_KERNEL_VERSION=0.3.16.post5
Image labels
com.nvidia.cudnn.version=9.8.0.87-1
maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04
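To read these labels (and the environment variables listed above) directly from a pulled copy of the image, `docker inspect` with a Go template is one option. A minimal sketch, assuming Docker is installed and the image is already present locally:

```shell
# Sketch: print the labels and environment of the image (assumes it has been pulled).
IMG="docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
if command -v docker >/dev/null 2>&1; then
    docker inspect --format '{{json .Config.Labels}}' "${IMG}"
    docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' "${IMG}"
fi
```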

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5  docker.io/gpustack/runner:cuda12.8-sglang0.5.5
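The two steps above can be wrapped in a small script that derives the mirror-prefixed name from shell variables and drops the intermediate tag once the retag succeeds. A sketch, not part of the mirror's official instructions:

```shell
# Pull via the mirror, retag to the upstream name, then remove the mirror-prefixed tag.
MIRROR="swr.cn-north-4.myhuaweicloud.com/ddn-k8s"
IMAGE="docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
SRC="${MIRROR}/${IMAGE}"
if command -v docker >/dev/null 2>&1; then
    docker pull "${SRC}" \
        && docker tag "${SRC}" "${IMAGE}" \
        && docker rmi "${SRC}"   # optional: keep only the upstream-named tag
fi
```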

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5  docker.io/gpustack/runner:cuda12.8-sglang0.5.5
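On Kubernetes nodes, containerd keeps kubelet's images in the `k8s.io` namespace, so a plain `ctr images pull` lands in the `default` namespace where the kubelet will not find the retagged image. A sketch of the namespaced variant:

```shell
# Pull into containerd's k8s.io namespace so kubelet can use the retagged image.
SRC="swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
DST="docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
if command -v ctr >/dev/null 2>&1; then
    ctr -n k8s.io images pull "${SRC}"
    ctr -n k8s.io images tag "${SRC}" "${DST}"
fi
```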

Shell quick-replace command

sed -i 's#gpustack/runner:cuda12.8-sglang0.5.5#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5#' deployment.yaml
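Note that the pattern above matches the bare `gpustack/runner:...` form; if the manifest already uses the fully-qualified `docker.io/gpustack/runner:...` name, the substitution leaves a doubled `docker.io/` prefix. A sketch that matches the fully-qualified form and verifies the result on a throwaway copy (not a real deployment.yaml):

```shell
# Sketch: rewrite a fully-qualified image reference and check the result.
cat > /tmp/deployment.yaml <<'EOF'
        image: docker.io/gpustack/runner:cuda12.8-sglang0.5.5
EOF
sed -i 's#docker.io/gpustack/runner:cuda12.8-sglang0.5.5#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5#' /tmp/deployment.yaml
grep -q 'swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner' /tmp/deployment.yaml
```

If the manifest uses the bare form instead, the original command on this page applies unchanged.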

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5  docker.io/gpustack/runner:cuda12.8-sglang0.5.5'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5  docker.io/gpustack/runner:cuda12.8-sglang0.5.5'
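After distribution, a quick follow-up check that every node now holds the retagged image can look like the following. `k8s` is the hypothetical inventory group used in the commands above, and this sketch assumes the Docker variant:

```shell
# Sketch: verify the retagged image is present on all nodes in the "k8s" group.
IMAGE="docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
if command -v ansible >/dev/null 2>&1; then
    ansible k8s -m shell -a "docker image inspect ${IMAGE} >/dev/null && echo present"
fi
```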

Image build history


# 2025-11-12 11:33:05  0.00B Configure the container entrypoint
ENTRYPOINT ["tini", "--"]

# 2025-11-12 11:33:05  0.00B Set the working directory to /
WORKDIR /

# 2025-11-12 11:33:05  0.00B Run a command and create a new image layer
RUN |6 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 CMAKE_MAX_JOBS= SGLANG_VERSION=0.5.5 SGLANG_KERNEL_VERSION=0.3.16.post5 /bin/bash -eo pipefail -c     # Postprocess

    # Review
    uv pip tree \
        --package sglang \
        --package sglang-router \
        --package sgl-kernel \
        --package flashinfer-python \
        --package flash-attn \
        --package triton \
        --package vllm \
        --package torch \
        --package deep-ep \
        --package diffusers \
        --package opencv-python
 # buildkit
                        
# 2025-11-12 11:33:05  338.97MB Run a command and create a new image layer
RUN |6 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 CMAKE_MAX_JOBS= SGLANG_VERSION=0.5.5 SGLANG_KERNEL_VERSION=0.3.16.post5 /bin/bash -eo pipefail -c     # Dependencies

    # Install Dependencies,
    # see https://github.com/sgl-project/sglang/blob/41c10e67fcae6ac50dfe283655bdf545d224cba9/docker/Dockerfile#L181-L209.
    cat <<EOT >/tmp/requirements.txt
nvidia-cutlass-dsl==4.3.0.dev0
datamodel_code_generator
mooncake-transfer-engine==0.3.7.post2
nixl
EOT
    uv pip install \
        -r /tmp/requirements.txt

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-12 11:33:00  71.94MB Run a command and create a new image layer
RUN |6 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 CMAKE_MAX_JOBS= SGLANG_VERSION=0.5.5 SGLANG_KERNEL_VERSION=0.3.16.post5 /bin/bash -eo pipefail -c     # SGlang Router

    # Install Rust
    curl --retry 3 --retry-connrefused --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
    export PATH="/root/.cargo/bin:${PATH}" \
        && rustc --version \
        && cargo --version

    # Install build tools
    uv pip install \
        setuptools-rust maturin

    # Install SGLang Router
    git -C /tmp clone --recursive --shallow-submodules \
        --depth 1 --branch v${SGLANG_VERSION} --single-branch \
        https://github.com/sgl-project/sglang.git sglang
    pushd /tmp/sglang/sgl-router \
        && ulimit -n 65536 && maturin build --release --features vendored-openssl --out dist \
        && tree -hs /tmp/sglang/sgl-router/dist \
        && uv pip install --force-reinstall /tmp/sglang/sgl-router/dist/*.whl

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /root/.cargo \
        && rm -rf /root/.rustup \
        && sed -i '$d' /root/.profile \
        && sed -i '$d' /root/.bashrc \
        && ccache --clear --clean
 # buildkit
                        
# 2025-11-12 11:24:57  3.07GB Run a command and create a new image layer
RUN |6 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 CMAKE_MAX_JOBS= SGLANG_VERSION=0.5.5 SGLANG_KERNEL_VERSION=0.3.16.post5 /bin/bash -eo pipefail -c     # SGLang

    IFS="." read -r CUDA_MAJOR CUDA_MINOR CUDA_PATCH <<< "${VLLM_TORCH_CUDA_VERSION}"

    CMAKE_MAX_JOBS="${CMAKE_MAX_JOBS}"
    if [[ -z "${CMAKE_MAX_JOBS}" ]]; then
        CMAKE_MAX_JOBS="$(( $(nproc) / 2 ))"
    fi
    if (( $(echo "${CMAKE_MAX_JOBS} > 8" | bc -l) )); then
        CMAKE_MAX_JOBS="8"
    fi
    SG_CUDA_ARCHS="${CUDA_ARCHS}"
    if [[ -z "${SG_CUDA_ARCHS}" ]]; then
        if (( $(echo "${CUDA_MAJOR} < 12" | bc -l) )); then
            SG_CUDA_ARCHS="7.5 8.0+PTX 8.9"
        elif (( $(echo "${CUDA_MAJOR}.${CUDA_MINOR} < 12.8" | bc -l) )); then
            SG_CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0"
        else
            SG_CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0 10.0+PTX 12.0+PTX"
        fi
    fi
    export MAX_JOBS="${CMAKE_MAX_JOBS}"
    export TORCH_CUDA_ARCH_LIST="${SG_CUDA_ARCHS}"
    export COMPILE_CUSTOM_KERNELS=1
    export NVCC_THREADS=1

    # Install SGLang
    git -C /tmp clone --recursive --shallow-submodules \
        --depth 1 --branch v${SGLANG_VERSION} --single-branch \
        https://github.com/sgl-project/sglang.git sglang-${SGLANG_VERSION}
    pushd /tmp/sglang-${SGLANG_VERSION}/python \
        && uv pip install --verbose .[all]

    # Download FlashInfer pre-compiled cubins
    export FLASHINFER_CUBIN_DOWNLOAD_THREADS="${CMAKE_MAX_JOBS}"
    export FLASHINFER_LOGGING_LEVEL=warning
    python -m flashinfer --download-cubin

    # Install SGLang Diffusion Extension
    if [[ "${TARGETARCH}" == "amd64" ]]; then
        # Diffusion Extension
        pushd /tmp/sglang-${SGLANG_VERSION}/python \
            && uv pip install --verbose .[diffusion]
    fi

    # Install pre-complied SGLang Kernel
    if (( $(echo "${CUDA_MAJOR}.${CUDA_MINOR} < 12.7" | bc -l) )); then
        IFS="." read -r KERNEL_MAJOR KERNEL_MINOR KERNEL_PATCH KERNEL_POST <<< "${SGLANG_KERNEL_VERSION}"
        if [[ "${TARGETARCH}" == "arm64" ]] && [[ "${KERNEL_MAJOR}.${KERNEL_MINOR}" == "0.3" ]] && (( $(echo "${KERNEL_PATCH} < 15" | bc -l) )); then
            uv pip install --force-reinstall --no-deps \
                https://github.com/sgl-project/whl/releases/download/v${SGLANG_KERNEL_VERSION}/sgl_kernel-${SGLANG_KERNEL_VERSION}-cp310-abi3-manylinux2014_aarch64.whl
        else
            uv pip install --force-reinstall --no-deps \
                https://github.com/sgl-project/whl/releases/download/v${SGLANG_KERNEL_VERSION}/sgl_kernel-${SGLANG_KERNEL_VERSION}+cu124-cp310-abi3-manylinux2014_$(uname -m).whl
        fi
    elif (( $(echo "${CUDA_MAJOR}.${CUDA_MINOR} < 12.9" | bc -l) )); then
        uv pip install \
            sgl-kernel==${SGLANG_KERNEL_VERSION}
    else
        uv pip install --force-reinstall --no-deps \
            https://github.com/sgl-project/whl/releases/download/v${SGLANG_KERNEL_VERSION}/sgl_kernel-${SGLANG_KERNEL_VERSION}+cu130-cp310-abi3-manylinux2014_$(uname -m).whl
    fi

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && ccache --clear --clean
 # buildkit
                        
# 2025-11-12 11:24:57  0.00B Set environment variables SGLANG_VERSION SGLANG_KERNEL_VERSION
ENV SGLANG_VERSION=0.5.5 SGLANG_KERNEL_VERSION=0.3.16.post5

# 2025-11-12 11:24:57  0.00B Define a build argument
ARG SGLANG_KERNEL_VERSION=0.3.16.post5

# 2025-11-12 11:24:57  0.00B Define a build argument
ARG SGLANG_VERSION=0.5.5

# 2025-11-12 11:24:57  0.00B Define a build argument
ARG CMAKE_MAX_JOBS

# 2025-11-12 11:24:57  0.00B Set environment variables UV_SYSTEM_PYTHON UV_PRERELEASE
ENV UV_SYSTEM_PYTHON=1 UV_PRERELEASE=allow

# 2025-11-12 11:24:57  0.00B Define a build argument
ARG TARGETARCH=amd64

# 2025-11-12 11:24:57  0.00B Define a build argument
ARG TARGETOS=linux

# 2025-11-12 11:24:57  0.00B Define a build argument
ARG TARGETPLATFORM=linux/amd64

# 2025-11-12 11:24:57  0.00B Set the build shell
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]
                        
# 2025-11-12 01:36:05  0.00B Configure the container entrypoint
ENTRYPOINT ["tini", "--"]

# 2025-11-12 01:36:05  0.00B Set the working directory to /
WORKDIR /

# 2025-11-12 01:36:05  0.00B Set environment variable RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES
ENV RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES=1

# 2025-11-12 01:36:05  0.00B Run a command and create a new image layer
RUN |11 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 VLLM_LMCACHE_VERSION=0.3.9post1 /bin/bash -eo pipefail -c     # Postprocess

    # Review
    uv pip tree \
        --package vllm \
        --package flashinfer-python \
        --package flash-attn \
        --package torch \
        --package triton \
        --package pplx-kernels \
        --package deep-gemm \
        --package deep-ep \
        --package lmcache
 # buildkit
                        
# 2025-11-12 01:36:04  274.38MB Run a command and create a new image layer
RUN |11 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 VLLM_LMCACHE_VERSION=0.3.9post1 /bin/bash -eo pipefail -c     # Dependencies

    # Install
    BITSANDBYTES_VERSION="0.46.1"
    if [[ "${TARGETARCH}" == "arm64" ]]; then
        BITSANDBYTES_VERSION="0.42.0"
    fi
    cat <<EOT >/tmp/requirements.txt
accelerate
hf_transfer
modelscope
bitsandbytes>=${BITSANDBYTES_VERSION}
timm>=1.0.17
boto3
EOT
    uv pip install \
        -r /tmp/requirements.txt

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-12 01:36:03  37.37MB Run a command and create a new image layer
RUN |11 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 VLLM_LMCACHE_VERSION=0.3.9post1 /bin/bash -eo pipefail -c     # Ray

    # Install Ray Client and Default
    RAY_VERSION=$(pip show ray | grep Version: | cut -d' ' -f 2)
    cat <<EOT >/tmp/requirements.txt
ray[client]==${RAY_VERSION}
ray[default]==${RAY_VERSION}
EOT
    uv pip install \
        -r /tmp/requirements.txt

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-12 01:36:01  178.71MB Run a command and create a new image layer
RUN |11 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 VLLM_LMCACHE_VERSION=0.3.9post1 /bin/bash -eo pipefail -c     # LMCache

    # Ref https://github.com/LMCache/LMCache/blob/5afe9688b3519074b9915e7b3acf871328250150/docs/source/getting_started/installation.rst?plain=1#L67-L129.

    IFS="." read -r CUDA_MAJOR CUDA_MINOR CUDA_PATCH <<< "${VLLM_TORCH_CUDA_VERSION}"

    if [[ "${TARGETARCH}" != "amd64" ]]; then
        echo "Skipping LMCache building for ${TARGETARCH}..."
        exit 0
    fi

    # Install LMCache
    CMAKE_MAX_JOBS="${CMAKE_MAX_JOBS}"
    if [[ -z "${CMAKE_MAX_JOBS}" ]]; then
        CMAKE_MAX_JOBS="$(( $(nproc) / 2 ))"
    fi
    if (( $(echo "${CMAKE_MAX_JOBS} > 8" | bc -l) )); then
        CMAKE_MAX_JOBS="8"
    fi
    LC_CUDA_ARCHS="${CUDA_ARCHS}"
    if [[ -z "${LC_CUDA_ARCHS}" ]]; then
        if (( $(echo "${CUDA_MAJOR} < 12" | bc -l) )); then
            LC_CUDA_ARCHS="7.5 8.0+PTX 8.9"
        elif (( $(echo "${CUDA_MAJOR}.${CUDA_MINOR} < 12.8" | bc -l) )); then
            LC_CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0"
        else
            LC_CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0 10.0+PTX 12.0+PTX"
        fi
    fi
    export MAX_JOBS="${CMAKE_MAX_JOBS}"
    export TORCH_CUDA_ARCH_LIST="${LC_CUDA_ARCHS}"
    export NVCC_THREADS=1
    git -C /tmp clone --recursive --shallow-submodules \
        --depth 1 --branch v${VLLM_LMCACHE_VERSION} --single-branch \
        https://github.com/LMCache/LMCache.git lmcache
    sed -i "s/^infinistore$/infinistore; platform_machine == 'x86_64'/" /tmp/lmcache/requirements/common.txt
    uv pip install --no-build-isolation --verbose \
        /tmp/lmcache

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && ccache --clear --clean
 # buildkit
                        
# 2025-11-12 01:32:00  0.00B Set environment variable VLLM_LMCACHE_VERSION
ENV VLLM_LMCACHE_VERSION=0.3.9post1

# 2025-11-12 01:32:00  0.00B Define a build argument
ARG VLLM_LMCACHE_VERSION=0.3.9post1

# 2025-11-12 01:32:00  998.76MB Run a command and create a new image layer
RUN |10 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 /bin/bash -eo pipefail -c     # FlashAttention

    if [[ ! -d /flashattention/workspace ]]; then
        echo "Skipping FlashAttention installation for ${TARGETARCH}..."
        exit 0
    fi

    # Install
    uv pip install --no-build-isolation \
        /flashattention/workspace/*.whl

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 13:44:41  53.90MB Run a command and create a new image layer
RUN |10 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 /bin/bash -eo pipefail -c     # DeepEP

    if [[ ! -d /deepep/workspace ]]; then
        echo "Skipping DeepEP installation for ${TARGETARCH}..."
        exit 0
    fi

    # Install
    uv pip install --no-build-isolation \
        /deepep/workspace/*.whl

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 13:44:41  119.84MB Run a command and create a new image layer
RUN |10 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 /bin/bash -eo pipefail -c     # PPLX Kernels

    if [[ ! -d /pplx-kernels/workspace ]]; then
        echo "Skipping PPLX Kernels installation for ${TARGETARCH}..."
        exit 0
    fi

    # Install
    uv pip install --no-build-isolation \
        /pplx-kernels/workspace/*.whl

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 13:44:40  43.15MB Run a command and create a new image layer
RUN |10 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 /bin/bash -eo pipefail -c     # DeepGEMM

    if [[ ! -d /deepgemm/workspace ]]; then
        echo "Skipping DeepGEMM installation for ${TARGETARCH}..."
        exit 0
    fi

    # Install
    uv pip install --no-build-isolation \
        /deepgemm/workspace/*.whl

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 13:44:40  4.15GB Run a command and create a new image layer
RUN |10 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 /bin/bash -eo pipefail -c     # FlashInfer

    if [[ ! -d /flashinfer/workspace ]]; then
        echo "Skipping FlashInfer installation for ${TARGETARCH}..."
        exit 0
    fi

    # Install
    uv pip install --no-build-isolation \
        /flashinfer/workspace/*.whl

    CMAKE_MAX_JOBS="${CMAKE_MAX_JOBS}"
    if [[ -z "${CMAKE_MAX_JOBS}" ]]; then
        CMAKE_MAX_JOBS="$(( $(nproc) / 2 ))"
    fi
    if (( $(echo "${CMAKE_MAX_JOBS} > 8" | bc -l) )); then
        CMAKE_MAX_JOBS="8"
    fi

    # Download pre-compiled cubins
    export FLASHINFER_CUBIN_DOWNLOAD_THREADS="${CMAKE_MAX_JOBS}"
    export FLASHINFER_LOGGING_LEVEL=warning
    python -m flashinfer --download-cubin

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 09:23:48  2.82GB Run a command and create a new image layer
RUN |10 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 VLLM_VERSION=0.11.0 /bin/bash -eo pipefail -c     # vLLM

    CMAKE_MAX_JOBS="${CMAKE_MAX_JOBS}"
    if [[ -z "${CMAKE_MAX_JOBS}" ]]; then
        CMAKE_MAX_JOBS="$(( $(nproc) / 2 ))"
    fi
    if (( $(echo "${CMAKE_MAX_JOBS} > 8" | bc -l) )); then
        CMAKE_MAX_JOBS="8"
    fi
    VL_CUDA_ARCHS="${CUDA_ARCHS}"
    if [[ -z "${VL_CUDA_ARCHS}" ]]; then
        if (( $(echo "${CUDA_MAJOR} < 12" | bc -l) )); then
            VL_CUDA_ARCHS="7.5 8.0+PTX 8.9"
        elif (( $(echo "${CUDA_MAJOR}.${CUDA_MINOR} < 12.8" | bc -l) )); then
            VL_CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0"
        else
            VL_CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0 10.0+PTX 12.0+PTX"
        fi
    fi
    export MAX_JOBS="${CMAKE_MAX_JOBS}"
    export TORCH_CUDA_ARCH_LIST="${VL_CUDA_ARCHS}"
    export COMPILE_CUSTOM_KERNELS=1
    export NVCC_THREADS=1

    # Install
    IFS="." read -r CUDA_MAJOR CUDA_MINOR CUDA_PATCH <<< "${VLLM_TORCH_CUDA_VERSION}"
    if [[ "${TARGETARCH}" == "amd64" ]]; then
        uv pip install --verbose --extra-index-url https://download.pytorch.org/whl/cu${CUDA_MAJOR}${CUDA_MINOR} \
            vllm==${VLLM_VERSION}
    else
        uv pip install --verbose --extra-index-url https://download.pytorch.org/whl/cpu/ \
            vllm==${VLLM_VERSION}
    fi

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && ccache --clear --clean
 # buildkit
                        
# 2025-11-11 09:23:29  0.00B Set environment variable VLLM_VERSION
ENV VLLM_VERSION=0.11.0

# 2025-11-11 09:23:29  0.00B Define a build argument
ARG VLLM_VERSION=0.11.0

# 2025-11-11 09:23:29  0.00B Define a build argument
ARG CMAKE_MAX_JOBS

# 2025-11-11 09:23:29  0.00B Define a build argument
ARG TARGETARCH=amd64

# 2025-11-11 09:23:29  0.00B Define a build argument
ARG TARGETOS=linux

# 2025-11-11 09:23:29  0.00B Define a build argument
ARG TARGETPLATFORM=linux/amd64

# 2025-11-11 09:23:29  0.00B Set the build shell
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]

# 2025-11-11 09:23:29  6.92GB Run a command and create a new image layer
RUN |9 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1 /bin/bash -eo pipefail -c     # Torch

    # Install
    cat <<EOT >/tmp/requirements.txt
torch==${VLLM_TORCH_VERSION}
torchvision
torchaudio
EOT
    IFS="." read -r CUDA_MAJOR CUDA_MINOR CUDA_PATCH <<< "${VLLM_TORCH_CUDA_VERSION}"
    if [[ "${TARGETARCH}" == "amd64" ]]; then
        uv pip install --index-url https://download.pytorch.org/whl/cu${CUDA_MAJOR}${CUDA_MINOR} \
            -r /tmp/requirements.txt
    else
        uv pip install --extra-index-url https://download.pytorch.org/whl/cpu/ \
            -r /tmp/requirements.txt
    fi
    uv pip install \
        numpy scipy

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 09:22:49  0.00B Set environment variables VLLM_TORCH_VERSION VLLM_TORCH_CUDA_VERSION
ENV VLLM_TORCH_VERSION=2.8.0 VLLM_TORCH_CUDA_VERSION=12.8.1

# 2025-11-11 09:22:49  0.00B Define a build argument
ARG VLLM_TORCH_CUDA_VERSION=12.8.1

# 2025-11-11 09:22:49  0.00B Define a build argument
ARG VLLM_TORCH_VERSION=2.8.0

# 2025-11-11 09:22:49  409.96MB Run a command and create a new image layer
RUN |7 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 CMAKE_MAX_JOBS= VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 /bin/bash -eo pipefail -c     # NVIDIA NVSHMEM

    IFS="." read -r CUDA_MAJOR CUDA_MINOR CUDA_PATCH <<< "${CUDA_VERSION}"

    # Download
    mkdir -p /tmp/nvshmem
    if (( $(echo "${CUDA_MAJOR} > 12" | bc -l) )); then
        curl --retry 3 --retry-connrefused -fL "https://github.com/NVIDIA/nvshmem/releases/download/v${VLLM_NVIDIA_NVSHMEM_VERSION}-0/nvshmem_src_cuda-all-all-${VLLM_NVIDIA_NVSHMEM_VERSION}.tar.gz" | tar -zxv -C /tmp
    else
        curl --retry 3 --retry-connrefused -fL "https://developer.download.nvidia.com/compute/redist/nvshmem/${VLLM_NVIDIA_NVSHMEM_VERSION}/source/nvshmem_src_cuda12-all-all-${VLLM_NVIDIA_NVSHMEM_VERSION}.tar.gz" | tar -zxv -C /tmp
    fi

    # Build
    CMAKE_MAX_JOBS="${CMAKE_MAX_JOBS}"
    if [[ -z "${CMAKE_MAX_JOBS}" ]]; then
        CMAKE_MAX_JOBS="$(( $(nproc) / 2 ))"
    fi
    if (( $(echo "${CMAKE_MAX_JOBS} > 8" | bc -l) )); then
        CMAKE_MAX_JOBS="8"
    fi
    NS_CUDA_ARCHS="${CUDA_ARCHS}"
    if [[ -z "${NS_CUDA_ARCHS}" ]]; then
        if (( $(echo "${CUDA_MAJOR} < 12" | bc -l) )); then
            NS_CUDA_ARCHS="7.5 8.0 8.9"
        elif (( $(echo "${CUDA_MAJOR}.${CUDA_MINOR} < 12.8" | bc -l) )); then
            NS_CUDA_ARCHS="7.5 8.0 8.9 9.0"
        else
            NS_CUDA_ARCHS="7.5 8.0 8.9 9.0 10.0 10.3 12.0"
        fi
    fi
    export MAX_JOBS="${CMAKE_MAX_JOBS}"
    export CUDA_ARCH="${NS_CUDA_ARCHS}"
    export NVSHMEM_IBGDA_SUPPORT=1
    export NVSHMEM_USE_GDRCOPY=1
    export NVSHMEM_SHMEM_SUPPORT=0
    export NVSHMEM_UCX_SUPPORT=0
    export NVSHMEM_USE_NCCL=0
    export NVSHMEM_PMIX_SUPPORT=0
    export NVSHMEM_TIMEOUT_DEVICE_POLLING=0
    export NVSHMEM_IBRC_SUPPORT=0
    export NVSHMEM_BUILD_TESTS=0
    export NVSHMEM_BUILD_EXAMPLES=0
    export NVSHMEM_MPI_SUPPORT=0
    export NVSHMEM_BUILD_HYDRA_LAUNCHER=0
    export NVSHMEM_BUILD_TXZ_PACKAGE=0
    export NVSHMEM_TIMEOUT_DEVICE_POLLING=0
    export NVCC_THREADS=1
    echo "Building NVSHMEM with the following environment variables:"
    env
    # FIX: Hide Python3.10 to avoid issues with Python version mismatch.
    PYTHON3_10_BIN=$(which python3.10 || true)
    if [[ -n "${PYTHON3_10_BIN}" ]]; then
        mv "${PYTHON3_10_BIN}" /tmp/python3.10
    fi
    pushd /tmp/nvshmem_src \
        && cmake -G Ninja -S . -B build -DCMAKE_INSTALL_PREFIX=${VLLM_NVIDIA_NVSHMEM_DIR} \
        && cmake --build build --target install -j${MAX_JOBS}
    if [[ -n "${PYTHON3_10_BIN}" ]]; then
        mv /tmp/python3.10 "${PYTHON3_10_BIN}"
    fi

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-11 08:53:39  0.00B Set environment variables VLLM_NVIDIA_NVSHMEM_VERSION VLLM_NVIDIA_NVSHMEM_DIR
ENV VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5 VLLM_NVIDIA_NVSHMEM_DIR=/usr/local/nvshmem

# 2025-11-11 08:53:39  0.00B Define a build argument
ARG VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5

# 2025-11-11 08:53:39  0.00B Define a build argument
ARG CMAKE_MAX_JOBS

# 2025-11-11 08:53:39  0.00B Set environment variable PATH
ENV PATH=/usr/local/mpi/bin:/usr/local/ucx/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/efa/bin

# 2025-11-11 08:53:39  46.36MB Run a command and create a new image layer
RUN |5 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 VLLM_AWS_EFA_VERSION=1.43.3 /bin/bash -eo pipefail -c     # AWS EFA

    # Download
    curl --retry 3 --retry-connrefused -fL "https://efa-installer.amazonaws.com/aws-efa-installer-${VLLM_AWS_EFA_VERSION}.tar.gz" | tar -zxv -C /tmp

    # Install
    pushd /tmp/aws-efa-installer && \
        ./efa_installer.sh -y --skip-kmod

    # Prepare
    rm /opt/amazon/efa/lib/libfabric.a || true

    # Review
    ldconfig -v

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
                        
# 2025-11-11 08:51:48  0.00B Set environment variable VLLM_AWS_EFA_VERSION
ENV VLLM_AWS_EFA_VERSION=1.43.3

# 2025-11-11 08:51:48  0.00B Define a build argument
ARG VLLM_AWS_EFA_VERSION=1.43.3

# 2025-11-11 08:51:48  0.00B Set environment variables PATH OPAL_PREFIX OMPI_MCA_coll_hcoll_enable
ENV PATH=/usr/local/mpi/bin:/usr/local/ucx/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OPAL_PREFIX=/opt/hpcx/ompi OMPI_MCA_coll_hcoll_enable=0

# 2025-11-11 08:51:48  1.44GB Run a command and create a new image layer
RUN |4 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4 /bin/bash -eo pipefail -c     # NVIDIA HPC-X

    # Prepare
    rm -f $(dpkg-query -L libibverbs-dev librdmacm-dev libibumad-dev | grep "\(\.so\|\.a\)$") || true
    IFS="." read -r CUDA_MAJOR CUDA_MINOR CUDA_PATCH <<< "${CUDA_VERSION}"
    source /etc/os-release

    # Get Download Version
    # If VLLM_NVIDIA_HPCX_VERSION=2.24.1_cuda13, VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD=2.24.1
    # If VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4, VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD=2.22.1
    # If VLLM_NVIDIA_HPCX_VERSION=2.21.3, VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD=2.21.3
    if [[ "${VLLM_NVIDIA_HPCX_VERSION}" == *"_cuda"* ]]; then
        VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD=$(echo "${VLLM_NVIDIA_HPCX_VERSION}" | sed 's/_cuda.*//')
    elif [[ "${VLLM_NVIDIA_HPCX_VERSION}" == *"rc"* ]]; then
        VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD=$(echo "${VLLM_NVIDIA_HPCX_VERSION}" | sed 's/rc.*//')
    else
        VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD=${VLLM_NVIDIA_HPCX_VERSION}
    fi

    # Download
    mkdir -p /opt/hpcx
    curl --retry 3 --retry-connrefused -fL "https://content.mellanox.com/hpc/hpc-x/v${VLLM_NVIDIA_HPCX_VERSION}/hpcx-v${VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD}-gcc-inbox-${ID}${VERSION_ID}-cuda${CUDA_MAJOR}-$(uname -m).tbz" | tar -jxv -C /opt/hpcx --strip-components 1

    # Install
    ln -sf /opt/hpcx/ompi /usr/local/mpi
    ln -sf /opt/hpcx/ucx /usr/local/ucx
    sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf
    sed -i 's/^\(btl = self\)$/#\1/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf
    cat <<EOT > /etc/ld.so.conf.d/hpcx.conf
/opt/hpcx/clusterkit/lib
/opt/hpcx/hcoll/lib
/opt/hpcx/nccl_rdma_sharp_plugin/lib
/opt/hpcx/ncclnet_plugin/lib
/opt/hpcx/ompi/lib
/opt/hpcx/sharp/lib
/opt/hpcx/ucc/lib
/opt/hpcx/ucx/lib
EOT

    # Fix DeepEP IBGDA symlink
    ln -sf /usr/lib/$(uname -m)-linux-gnu/libmlx5.so.1 /usr/lib/$(uname -m)-linux-gnu/libmlx5.so || true

    # Review
    ldconfig -v

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
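The version-string normalization in the HPC-X layer above can be exercised on its own. A minimal bash sketch follows; the `hpcx_download_version` helper name is ours (not part of the image), and prefix-stripping parameter expansion stands in for the `sed` calls in the layer:

```shell
#!/usr/bin/env bash
# Strip a "_cudaNN" or "rcN" suffix from an HPC-X version string,
# mirroring the branch logic of the layer above.
hpcx_download_version() {
    local v="$1"
    if [[ "$v" == *"_cuda"* ]]; then
        echo "${v%%_cuda*}"     # drop everything from "_cuda" onward
    elif [[ "$v" == *"rc"* ]]; then
        echo "${v%%rc*}"        # drop everything from "rc" onward
    else
        echo "$v"               # already a plain version
    fi
}

hpcx_download_version "2.24.1_cuda13"   # -> 2.24.1
hpcx_download_version "2.22.1rc4"       # -> 2.22.1
hpcx_download_version "2.21.3"          # -> 2.21.3
```

The resulting value is the `VLLM_NVIDIA_HPCX_VERSION_DOWNLOAD` component of the download URL, while the unmodified version selects the `v${VLLM_NVIDIA_HPCX_VERSION}` directory.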
                        
# 2025-11-11 08:50:58  0.00B Set environment variable VLLM_NVIDIA_HPCX_VERSION
ENV VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4

# 2025-11-11 08:50:58  0.00B Define build argument
ARG VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4

# 2025-11-11 08:50:58  1.89MB Run command and create a new image layer
RUN |3 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 /bin/bash -eo pipefail -c     # GDRCopy

    if [[ ! -d /gdrcopy/workspace ]]; then
        echo "Skipping GDRCopy installation for ${TARGETARCH}..."
        exit 0
    fi

    # Install
    dpkg -i /gdrcopy/workspace/libgdrapi_*.deb && \
        ldconfig -v

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
                        
# 2025-11-11 08:50:58  0.00B Set environment variables UV_SYSTEM_PYTHON UV_PRERELEASE
ENV UV_SYSTEM_PYTHON=1 UV_PRERELEASE=allow

# 2025-11-11 08:50:58  0.00B Define build argument
ARG TARGETARCH=amd64

# 2025-11-11 08:50:58  0.00B Define build argument
ARG TARGETOS=linux

# 2025-11-11 08:50:58  0.00B Define build argument
ARG TARGETPLATFORM=linux/amd64

# 2025-11-11 08:50:58  0.00B Set the shell
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]

# 2025-11-10 12:18:57  0.00B Set environment variables CUDA_HOME CUDA_VERSION CUDA_ARCHS
ENV CUDA_HOME=/usr/local/cuda CUDA_VERSION=12.8.1 CUDA_ARCHS=

# 2025-11-10 12:18:57  0.00B Define build argument
ARG CUDA_ARCHS

# 2025-11-10 12:18:57  0.00B Define build argument
ARG CUDA_VERSION=12.8.1

# 2025-11-10 12:18:57  151.88MB Run command and create a new image layer
RUN |4 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 PYTHON_VERSION=3.12 /bin/bash -eo pipefail -c     # Buildkit

    cat <<EOT >/tmp/requirements.txt
build
cmake<4
ninja<1.11
setuptools<80
setuptools-scm
packaging<25
wheel==0.45.1
pybind11<3
Cython
psutil
pipx
uv
EOT
    pip install -r /tmp/requirements.txt

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/*
 # buildkit
                        
# 2025-11-10 12:18:52  0.00B Set environment variables PIP_NO_CACHE_DIR PIP_DISABLE_PIP_VERSION_CHECK PIP_ROOT_USER_ACTION PIPX_HOME PIPX_LOCAL_VENVS UV_NO_CACHE UV_HTTP_TIMEOUT UV_INDEX_STRATEGY
ENV PIP_NO_CACHE_DIR=1 PIP_DISABLE_PIP_VERSION_CHECK=1 PIP_ROOT_USER_ACTION=ignore PIPX_HOME=/root/.local/share/pipx PIPX_LOCAL_VENVS=/root/.local/share/pipx/venvs UV_NO_CACHE=1 UV_HTTP_TIMEOUT=500 UV_INDEX_STRATEGY=unsafe-best-match
                        
# 2025-11-10 12:18:52  90.36MB Run command and create a new image layer
RUN |4 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 PYTHON_VERSION=3.12 /bin/bash -eo pipefail -c     # Python

    if (( $(echo "$(python3 --version | cut -d' ' -f2 | cut -d'.' -f1,2) == ${PYTHON_VERSION}" | bc -l) )); then
        echo "Skipping Python upgrade for ${PYTHON_VERSION}..."
        if [[ -z "$(ldconfig -v 2>/dev/null | grep libpython${PYTHON_VERSION})" ]]; then
            PYTHON_LIB_PREFIX=$(python3 -c "import sys; print(sys.base_prefix);")
            echo "${PYTHON_LIB_PREFIX}/lib" >> /etc/ld.so.conf.d/python3.conf
            echo "${PYTHON_LIB_PREFIX}/lib64" >> /etc/ld.so.conf.d/python3.conf
            ldconfig -v
        fi
        exit 0
    fi

    # Add deadsnakes PPA for Python versions
    for i in 1 2 3; do
        add-apt-repository -y ppa:deadsnakes/ppa && break || { echo "Attempt $i failed, retrying in 5s..."; sleep 5; }
    done
    apt-get update -y

    # Install
    apt-get install -y --no-install-recommends \
        python${PYTHON_VERSION} \
        python${PYTHON_VERSION}-dev \
        python${PYTHON_VERSION}-venv \
        python${PYTHON_VERSION}-lib2to3 \
        python${PYTHON_VERSION}-gdbm \
        python${PYTHON_VERSION}-tk
    if (( $(echo "${PYTHON_VERSION} <= 3.11" | bc -l) )); then
        apt-get install -y --no-install-recommends \
            python${PYTHON_VERSION}-distutils
    fi

    # Update alternatives
    if [[ -f /etc/alternatives/python3 ]]; then update-alternatives --remove-all python3; fi; update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1
    if [[ -f /etc/alternatives/python ]]; then update-alternatives --remove-all python; fi; update-alternatives --install /usr/bin/python python /usr/bin/python${PYTHON_VERSION} 1
    curl -sS "https://bootstrap.pypa.io/get-pip.py" | python${PYTHON_VERSION}
    if [[ -f /etc/alternatives/2to3 ]]; then update-alternatives --remove-all 2to3; fi; update-alternatives --install /usr/bin/2to3 2to3 /usr/bin/2to3${PYTHON_VERSION} 1 || true
    if [[ -f /etc/alternatives/pydoc3 ]]; then update-alternatives --remove-all pydoc3; fi; update-alternatives --install /usr/bin/pydoc3 pydoc3 /usr/bin/pydoc${PYTHON_VERSION} 1 || true
    if [[ -f /etc/alternatives/idle3 ]]; then update-alternatives --remove-all idle3; fi; update-alternatives --install /usr/bin/idle3 idle3 /usr/bin/idle${PYTHON_VERSION} 1 || true
    if [[ -f /etc/alternatives/python3-config ]]; then update-alternatives --remove-all python3-config; fi; update-alternatives --install /usr/bin/python3-config python3-config /usr/bin/python${PYTHON_VERSION}-config 1 || true

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
                        
# 2025-11-10 12:17:13  0.00B Set environment variable PYTHON_VERSION
ENV PYTHON_VERSION=3.12

# 2025-11-10 12:17:13  0.00B Define build argument
ARG PYTHON_VERSION=3.12

# 2025-11-10 12:17:13  1.01GB Run command and create a new image layer
RUN |3 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 /bin/bash -eo pipefail -c     # C buildkit

    # Install
    apt-get install -y --no-install-recommends \
        make ninja-build pkg-config ccache
    curl --retry 3 --retry-connrefused -fL "https://github.com/Kitware/CMake/releases/download/v3.31.7/cmake-3.31.7-linux-$(uname -m).tar.gz" | tar -zx -C /usr --strip-components 1

    # Install dependencies
    apt-get install -y --no-install-recommends \
        perl-openssl-defaults perl yasm \
        zlib1g zlib1g-dev libbz2-dev libffi-dev libgdbm-dev libgdbm-compat-dev \
        openssl libssl-dev libsqlite3-dev lcov libomp-dev \
        libblas-dev liblapack-dev libopenblas-dev libblas3 liblapack3 libhdf5-dev \
        libxml2 libxslt1-dev libgl1-mesa-glx libgmpxx4ldbl \
        libncurses5-dev libreadline6-dev libsqlite3-dev \
        liblzma-dev lzma lzma-dev tk-dev uuid-dev libmpdec-dev \
        ffmpeg libjpeg-dev libpng-dev libtiff-dev libwebp-dev \
        libnuma1 libnuma-dev libjemalloc-dev \
        libgrpc-dev libgrpc++-dev libprotobuf-dev protobuf-compiler protobuf-compiler-grpc \
        libnl-route-3-200 libnl-3-200 libnl-3-dev  libnl-route-3-dev \
        libibverbs1 libibverbs-dev \
        librdmacm1 librdmacm-dev \
        libibumad3 libibumad-dev \
        libtool \
        ibverbs-utils ibverbs-providers libibverbs-dev

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
                        
# 2025-11-10 12:16:36  0.00B Run command and create a new image layer
RUN |3 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 /bin/bash -eo pipefail -c     # GCC

    # Upgrade GCC if the Ubuntu version is lower than 21.04.
    source /etc/os-release
    if (( $(echo "${VERSION_ID} >= 21.04" | bc -l) )); then
        echo "Skipping GCC upgrade for ${VERSION_ID}..."
        exit 0
    fi

    # Install
    apt-get install -y --no-install-recommends \
        gcc-11 g++-11 gfortran-11 gfortran

    # Update alternatives
    if [[ -f /etc/alternatives/gcov-dump ]]; then update-alternatives --remove-all gcov-dump; fi; update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-11 10
    if [[ -f /etc/alternatives/lto-dump ]]; then update-alternatives --remove-all lto-dump; fi; update-alternatives --install /usr/bin/lto-dump lto-dump /usr/bin/lto-dump-11 10
    if [[ -f /etc/alternatives/gcov ]]; then update-alternatives --remove-all gcov; fi; update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-11 10
    if [[ -f /etc/alternatives/gcc ]]; then update-alternatives --remove-all gcc; fi; update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 10
    if [[ -f /etc/alternatives/gcc-nm ]]; then update-alternatives --remove-all gcc-nm; fi; update-alternatives --install /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-11 10
    if [[ -f /etc/alternatives/cpp ]]; then update-alternatives --remove-all cpp; fi; update-alternatives --install /usr/bin/cpp cpp /usr/bin/cpp-11 10
    if [[ -f /etc/alternatives/g++ ]]; then update-alternatives --remove-all g++; fi; update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 10
    if [[ -f /etc/alternatives/gcc-ar ]]; then update-alternatives --remove-all gcc-ar; fi; update-alternatives --install /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-11 10
    if [[ -f /etc/alternatives/gcov-tool ]]; then update-alternatives --remove-all gcov-tool; fi; update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-11 10
    if [[ -f /etc/alternatives/gcc-ranlib ]]; then update-alternatives --remove-all gcc-ranlib; fi; update-alternatives --install /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-11 10
    if [[ -f /etc/alternatives/gfortran ]]; then update-alternatives --remove-all gfortran; fi; update-alternatives --install /usr/bin/gfortran gfortran /usr/bin/gfortran-11 10

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
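The repeated `update-alternatives` calls in the GCC layer above all instantiate one template. As a dry-run sketch (printing the commands rather than executing them, since the real invocations require root and the gcc-11 packages), the unrolled list is equivalent to a loop:

```shell
#!/usr/bin/env bash
# Print the update-alternatives command for each GCC-11 tool,
# matching the per-tool pattern in the layer above.
for tool in gcc g++ cpp gcov gcc-ar gcc-nm gcc-ranlib gcov-dump gcov-tool lto-dump gfortran; do
    printf 'update-alternatives --install /usr/bin/%s %s /usr/bin/%s-11 10\n' \
        "${tool}" "${tool}" "${tool}"
done
```

Dropping the leading `printf ... \n` wrapper and executing the command directly would reproduce the layer's behavior, minus the per-tool `update-alternatives --remove-all` guard.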
                        
# 2025-11-10 12:16:36  353.40MB Run command and create a new image layer
RUN |3 TARGETPLATFORM=linux/amd64 TARGETOS=linux TARGETARCH=amd64 /bin/bash -eo pipefail -c     # Tools

    # Refresh
    apt-get update -y && apt-get install -y --no-install-recommends \
        software-properties-common apt-transport-https \
        ca-certificates gnupg2 lsb-release gnupg-agent \
      && apt-get update -y \
      && add-apt-repository -y ppa:ubuntu-toolchain-r/test \
      && apt-get update -y

    # Install
    apt-get install -y --no-install-recommends \
        ca-certificates build-essential binutils bash openssl \
        curl wget aria2 \
        git git-lfs \
        unzip xz-utils \
        tzdata locales \
        iproute2 iputils-ping ifstat net-tools dnsutils pciutils ipmitool \
        rdma-core rdmacm-utils infiniband-diags \
        procps sysstat htop \
        tini vim jq bc tree

    # Update locale
    localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8

    # Update timezone
    rm -f /etc/localtime \
        && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
        && echo "Asia/Shanghai" > /etc/timezone \
        && dpkg-reconfigure --frontend noninteractive tzdata

    # Cleanup
    rm -rf /var/tmp/* \
        && rm -rf /tmp/* \
        && rm -rf /var/cache/apt
 # buildkit
                        
# 2025-11-10 12:16:36  0.00B Set environment variables DEBIAN_FRONTEND LANG LANGUAGE LC_ALL
ENV DEBIAN_FRONTEND=noninteractive LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8

# 2025-11-10 12:16:36  0.00B Define build argument
ARG TARGETARCH=amd64

# 2025-11-10 12:16:36  0.00B Define build argument
ARG TARGETOS=linux

# 2025-11-10 12:16:36  0.00B Define build argument
ARG TARGETPLATFORM=linux/amd64

# 2025-11-10 12:16:36  0.00B Set the shell
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]

# 2025-03-11 07:14:56  1.05GB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     ${NV_CUDNN_PACKAGE}     ${NV_CUDNN_PACKAGE_DEV}     && apt-mark hold ${NV_CUDNN_PACKAGE_NAME}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 07:14:56  0.00B Add metadata label
LABEL com.nvidia.cudnn.version=9.8.0.87-1

# 2025-03-11 07:14:56  0.00B Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 07:14:56  0.00B Define build argument
ARG TARGETARCH
                        
# 2025-03-11 07:14:56  0.00B Set environment variable NV_CUDNN_PACKAGE_DEV
ENV NV_CUDNN_PACKAGE_DEV=libcudnn9-dev-cuda-12=9.8.0.87-1

# 2025-03-11 07:14:56  0.00B Set environment variable NV_CUDNN_PACKAGE
ENV NV_CUDNN_PACKAGE=libcudnn9-cuda-12=9.8.0.87-1

# 2025-03-11 07:14:56  0.00B Set environment variable NV_CUDNN_PACKAGE_NAME
ENV NV_CUDNN_PACKAGE_NAME=libcudnn9-cuda-12

# 2025-03-11 07:14:56  0.00B Set environment variable NV_CUDNN_VERSION
ENV NV_CUDNN_VERSION=9.8.0.87-1

# 2025-03-11 06:36:52  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs
                        
# 2025-03-11 06:36:52  389.48KB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_DEV_PACKAGE_NAME} ${NV_LIBNCCL_DEV_PACKAGE_NAME} # buildkit
                        
# 2025-03-11 06:36:52  5.94GB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-dev-12-8=${NV_CUDA_CUDART_DEV_VERSION}     cuda-command-line-tools-12-8=${NV_CUDA_LIB_VERSION}     cuda-minimal-build-12-8=${NV_CUDA_LIB_VERSION}     cuda-libraries-dev-12-8=${NV_CUDA_LIB_VERSION}     cuda-nvml-dev-12-8=${NV_NVML_DEV_VERSION}     ${NV_NVPROF_DEV_PACKAGE}     ${NV_LIBNPP_DEV_PACKAGE}     libcusparse-dev-12-8=${NV_LIBCUSPARSE_DEV_VERSION}     ${NV_LIBCUBLAS_DEV_PACKAGE}     ${NV_LIBNCCL_DEV_PACKAGE}     ${NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:36:52  0.00B Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 06:36:52  0.00B Define build argument
ARG TARGETARCH
                        
# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNCCL_DEV_PACKAGE
ENV NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8

# 2025-03-11 06:36:52  0.00B Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.25.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNCCL_DEV_PACKAGE_VERSION
ENV NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNCCL_DEV_PACKAGE_NAME
ENV NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev

# 2025-03-11 06:36:52  0.00B Set environment variable NV_NVPROF_DEV_PACKAGE
ENV NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_NVPROF_VERSION
ENV NV_NVPROF_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE
ENV NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_NSIGHT_COMPUTE_VERSION
ENV NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUBLAS_DEV_PACKAGE
ENV NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUBLAS_DEV_PACKAGE_NAME
ENV NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUBLAS_DEV_VERSION
ENV NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNPP_DEV_PACKAGE
ENV NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNPP_DEV_VERSION
ENV NV_LIBNPP_DEV_VERSION=12.3.3.100-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUSPARSE_DEV_VERSION
ENV NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_NVML_DEV_VERSION
ENV NV_NVML_DEV_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_CUDART_DEV_VERSION
ENV NV_CUDA_CUDART_DEV_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.8.1-1
                        
# 2025-03-11 06:24:31  0.00B Configure the command to run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]

# 2025-03-11 06:24:31  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA

# 2025-03-11 06:24:31  2.53KB Copy new files or directories into the container
COPY nvidia_entrypoint.sh /opt/nvidia/ # buildkit

# 2025-03-11 06:24:31  3.06KB Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit

# 2025-03-11 06:24:31  263.00KB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME} # buildkit
                        
# 2025-03-11 06:24:31  3.11GB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-libraries-12-8=${NV_CUDA_LIB_VERSION}     ${NV_LIBNPP_PACKAGE}     cuda-nvtx-12-8=${NV_NVTX_VERSION}     libcusparse-12-8=${NV_LIBCUSPARSE_VERSION}     ${NV_LIBCUBLAS_PACKAGE}     ${NV_LIBNCCL_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:24:31  0.00B Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 06:24:31  0.00B Define build argument
ARG TARGETARCH
                        
# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNCCL_PACKAGE
ENV NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8

# 2025-03-11 06:24:31  0.00B Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.25.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNCCL_PACKAGE_VERSION
ENV NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNCCL_PACKAGE_NAME
ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUBLAS_PACKAGE
ENV NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUBLAS_VERSION
ENV NV_LIBCUBLAS_VERSION=12.8.4.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUBLAS_PACKAGE_NAME
ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUSPARSE_VERSION
ENV NV_LIBCUSPARSE_VERSION=12.5.8.93-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNPP_PACKAGE
ENV NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNPP_VERSION
ENV NV_LIBNPP_VERSION=12.3.3.100-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_NVTX_VERSION
ENV NV_NVTX_VERSION=12.8.90-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.8.1-1
                        
# 2025-03-11 06:19:20  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2025-03-11 06:19:20  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2025-03-11 06:19:20  17.29KB Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit

# 2025-03-11 06:19:20  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64

# 2025-03-11 06:19:20  0.00B Set environment variable PATH
ENV PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# 2025-03-11 06:19:20  22.00B Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2025-03-11 06:19:20  203.35MB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-8=${NV_CUDA_CUDART_VERSION}     cuda-compat-12-8     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:19:05  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.8.1

# 2025-03-11 06:19:05  10.60MB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:19:05  0.00B Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 06:19:05  0.00B Define build argument
ARG TARGETARCH

# 2025-03-11 06:19:05  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.8.90-1
                        
# 2025-03-11 06:19:05  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.8 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 
brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566
                        
# 2025-03-11 06:19:05  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2025-01-26 13:31:11  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-01-26 13:31:10  77.86MB 
/bin/sh -c #(nop) ADD file:1b6c8c9518be42fa2afe5e241ca31677fce58d27cdfa88baa91a65a259be3637 in / 
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:0ffc830b317c45266b61ea22f38dcf3fb84c464d07b2b6e942cf5bbc836c1055",
    "RepoTags": [
        "gpustack/runner:cuda12.8-sglang0.5.5",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
    ],
    "RepoDigests": [
        "gpustack/runner@sha256:af38001a1223a49241d0e5df20088b38078970726a4f6e2bf43e79717d806f93",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/gpustack/runner@sha256:79214f12560e0add66401f7c4ce262ac84cce5aa211dac2235f8b164e2faa690"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-11-12T03:33:05.478258465Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/mpi/bin:/usr/local/ucx/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/efa/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.8 brand=unknown,driver\u003e=470,driver\u003c471 brand=grid,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=vapps,driver\u003e=470,driver\u003c471 brand=vpc,driver\u003e=470,driver\u003c471 brand=vcs,driver\u003e=470,driver\u003c471 brand=vws,driver\u003e=470,driver\u003c471 brand=cloudgaming,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551 brand=unknown,driver\u003e=560,driver\u003c561 brand=grid,driver\u003e=560,driver\u003c561 brand=tesla,driver\u003e=560,driver\u003c561 brand=nvidia,driver\u003e=560,driver\u003c561 brand=quadro,driver\u003e=560,driver\u003c561 brand=quadrortx,driver\u003e=560,driver\u003c561 
brand=nvidiartx,driver\u003e=560,driver\u003c561 brand=vapps,driver\u003e=560,driver\u003c561 brand=vpc,driver\u003e=560,driver\u003c561 brand=vcs,driver\u003e=560,driver\u003c561 brand=vws,driver\u003e=560,driver\u003c561 brand=cloudgaming,driver\u003e=560,driver\u003c561 brand=unknown,driver\u003e=565,driver\u003c566 brand=grid,driver\u003e=565,driver\u003c566 brand=tesla,driver\u003e=565,driver\u003c566 brand=nvidia,driver\u003e=565,driver\u003c566 brand=quadro,driver\u003e=565,driver\u003c566 brand=quadrortx,driver\u003e=565,driver\u003c566 brand=nvidiartx,driver\u003e=565,driver\u003c566 brand=vapps,driver\u003e=565,driver\u003c566 brand=vpc,driver\u003e=565,driver\u003c566 brand=vcs,driver\u003e=565,driver\u003c566 brand=vws,driver\u003e=565,driver\u003c566 brand=cloudgaming,driver\u003e=565,driver\u003c566",
            "NV_CUDA_CUDART_VERSION=12.8.90-1",
            "CUDA_VERSION=12.8.1",
            "LD_LIBRARY_PATH=/usr/local/cuda/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NV_CUDA_LIB_VERSION=12.8.1-1",
            "NV_NVTX_VERSION=12.8.90-1",
            "NV_LIBNPP_VERSION=12.3.3.100-1",
            "NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1",
            "NV_LIBCUSPARSE_VERSION=12.5.8.93-1",
            "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8",
            "NV_LIBCUBLAS_VERSION=12.8.4.1-1",
            "NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1",
            "NV_LIBNCCL_PACKAGE_NAME=libnccl2",
            "NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1",
            "NCCL_VERSION=2.25.1-1",
            "NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8",
            "NVIDIA_PRODUCT_NAME=CUDA",
            "NV_CUDA_CUDART_DEV_VERSION=12.8.90-1",
            "NV_NVML_DEV_VERSION=12.8.90-1",
            "NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1",
            "NV_LIBNPP_DEV_VERSION=12.3.3.100-1",
            "NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1",
            "NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1",
            "NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8",
            "NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1",
            "NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1",
            "NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1",
            "NV_NVPROF_VERSION=12.8.90-1",
            "NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1",
            "NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev",
            "NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1",
            "NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs",
            "NV_CUDNN_VERSION=9.8.0.87-1",
            "NV_CUDNN_PACKAGE_NAME=libcudnn9-cuda-12",
            "NV_CUDNN_PACKAGE=libcudnn9-cuda-12=9.8.0.87-1",
            "NV_CUDNN_PACKAGE_DEV=libcudnn9-dev-cuda-12=9.8.0.87-1",
            "DEBIAN_FRONTEND=noninteractive",
            "LANG=en_US.UTF-8",
            "LANGUAGE=en_US:en",
            "LC_ALL=en_US.UTF-8",
            "PYTHON_VERSION=3.12",
            "PIP_NO_CACHE_DIR=1",
            "PIP_DISABLE_PIP_VERSION_CHECK=1",
            "PIP_ROOT_USER_ACTION=ignore",
            "PIPX_HOME=/root/.local/share/pipx",
            "PIPX_LOCAL_VENVS=/root/.local/share/pipx/venvs",
            "UV_NO_CACHE=1",
            "UV_HTTP_TIMEOUT=500",
            "UV_INDEX_STRATEGY=unsafe-best-match",
            "CUDA_HOME=/usr/local/cuda",
            "CUDA_ARCHS=",
            "UV_SYSTEM_PYTHON=1",
            "UV_PRERELEASE=allow",
            "VLLM_NVIDIA_HPCX_VERSION=2.22.1rc4",
            "OPAL_PREFIX=/opt/hpcx/ompi",
            "OMPI_MCA_coll_hcoll_enable=0",
            "VLLM_AWS_EFA_VERSION=1.43.3",
            "VLLM_NVIDIA_NVSHMEM_VERSION=3.4.5",
            "VLLM_NVIDIA_NVSHMEM_DIR=/usr/local/nvshmem",
            "VLLM_TORCH_VERSION=2.8.0",
            "VLLM_TORCH_CUDA_VERSION=12.8.1",
            "VLLM_VERSION=0.11.0",
            "VLLM_LMCACHE_VERSION=0.3.9post1",
            "RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES=1",
            "SGLANG_VERSION=0.5.5",
            "SGLANG_KERNEL_VERSION=0.3.16.post5"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/",
        "Entrypoint": [
            "tini",
            "--"
        ],
        "OnBuild": null,
        "Labels": {
            "com.nvidia.cudnn.version": "9.8.0.87-1",
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        },
        "Shell": [
            "/bin/bash",
            "-eo",
            "pipefail",
            "-c"
        ]
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 32963626653,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/fc13a7427af06c751b4aec0373d511ddda8f1a1de7099604aa4b3a824bf89f8d/diff:/var/lib/docker/overlay2/38dc41ad7efaba9bea8a33897b30734b9781c580366736569089a0c90922eae2/diff:/var/lib/docker/overlay2/5053ed87c49279c103ceb0425cba9c2792d76feea5bc2cc118df199830d3d57d/diff:/var/lib/docker/overlay2/eeb7f3d8ca6b98e7a3c1f0bdd796cdc10b279b20fc9bcb34eefb1fc0a56c0eae/diff:/var/lib/docker/overlay2/7b8d3abcba8e399cc8a15efad5fb1bdae027a174653c6b49623b01c9eb8d1365/diff:/var/lib/docker/overlay2/4b3a175067120c7dbfbecff2cca89f99431efb96981839b5f7b82d54d86c99ec/diff:/var/lib/docker/overlay2/25ce2fc7a8db31065b4ab62939acf9b0f0fc98cd4bda096652f6b90fd4d5c0b2/diff:/var/lib/docker/overlay2/9e6a2ac15aae7fafb0395e4cee68975fd93bbdb92ec3ae7777c4ac99d6c4f3af/diff:/var/lib/docker/overlay2/6d0a43e3898c25eb9908d83b6cd87ba4af6d2b403d875c4743ea87f6670d89a2/diff:/var/lib/docker/overlay2/cc9f11b372477e0661bdaf917ff9e3cee33f1c60384b9a001d1b93227d8fb623/diff:/var/lib/docker/overlay2/ae0b39997e9f076b80296c479194cbe15cacb9e274f1e6c62c25908a291b5cb2/diff:/var/lib/docker/overlay2/d96dfbed780d56e7e5b1c480341d28c79c15321e415b3790b88e13cbbb125b46/diff:/var/lib/docker/overlay2/9599f449127710b2ec5258fe95fb3d0f48ab4f7dfc33307149758f3750d0234d/diff:/var/lib/docker/overlay2/e440dabf6a0895f817bfab3f3017fe293292d0bd5cb79a37dd3c73ad591b1757/diff:/var/lib/docker/overlay2/7b0d656043884177dd44266a29266e29027d835b880d094c3a1c713dc0e11ff2/diff:/var/lib/docker/overlay2/2a6d99ae5161268027cb1c0d0066a7174add5aefbb8d8bfdff5c6250c63d1953/diff:/var/lib/docker/overlay2/c237f042709d904a6233d91011d4602b2ad25d1b47f7810052915200da899227/diff:/var/lib/docker/overlay2/aa8d8dad5ce7fa33070cbf9901637d0699c0a747ae2982d202e9c14300867e66/diff:/var/lib/docker/overlay2/c10b33c21274215dbece7e600328049aed64cb94d2dec7d6282c451b0931aa83/diff:/var/lib/docker/overlay2/09632b3d84b5c8bddc3e9be472466904f101e6582ef73cc2b5a2bc2ba362427e/diff:/var/lib/docker/overlay2/deee7fcd8f0589ddff1c139e7ff1d36c4f986283542abb10e261ed033c45c37c/diff:/var/lib/docker/overlay2/abcdc3df641a4d5ea896ac8887499fbcfa2f673fae7403163f1fcb32cd884843/diff:/var/lib/docker/overlay2/217f8d683025636df91623f7c6f5a8460703b9e357b28d003a6e9c46342ade9b/diff:/var/lib/docker/overlay2/e747895ae6a95bfae280c63d16eb21cb409f363594004ab8066f40701e402e7a/diff:/var/lib/docker/overlay2/5521e992ce245ed6b6763359f94e19e6c791e1125056d9ee5ccc29d09b2bf5e4/diff:/var/lib/docker/overlay2/2bab6d9073775e27e3a13694bf5f14afca38a5045e4744c7ed15df364c6bf0cc/diff:/var/lib/docker/overlay2/ec6491050c87cee4ae28dee1aeef789abb5df44cf749e3dceb33919b457e9472/diff:/var/lib/docker/overlay2/1350018e2c62badd5a1ab4ce360e32bfd069300b75a09aad02c65476f9f4112f/diff:/var/lib/docker/overlay2/18c5673adfce3725219df08cd006e07b689fd116f270896695afe616aac82eb5/diff:/var/lib/docker/overlay2/48fdb0bf251e8d0190f0a3e8148f66557fba88303a705d7804054871835ea4b1/diff:/var/lib/docker/overlay2/7e63a90e328a23806213e5938c6cac5629a02cce46e30c4f723f603f4d590bc7/diff:/var/lib/docker/overlay2/2568a668ea5588fa6d71623b508420c8e7cb90d9e922fb09212830d19df3e075/diff:/var/lib/docker/overlay2/2e8e299ae8fe4125e6e12a9b6afa45541d65ec0ac3ed363839e7c9de8d065232/diff:/var/lib/docker/overlay2/f88eb142a4b18dd3df190078aa4c8390a2c5ebb1b0392742639c2537c5d4f7e9/diff:/var/lib/docker/overlay2/ace3f972cf88bd330727fa9a25fd0df2c3fec1df161ac9102bf9f5739b40b82c/diff",
            "MergedDir": "/var/lib/docker/overlay2/c925f41f6b072d1b8f4c9d644da9e8020bd30a7899b168675441acfdf1fb6f9f/merged",
            "UpperDir": "/var/lib/docker/overlay2/c925f41f6b072d1b8f4c9d644da9e8020bd30a7899b168675441acfdf1fb6f9f/diff",
            "WorkDir": "/var/lib/docker/overlay2/c925f41f6b072d1b8f4c9d644da9e8020bd30a7899b168675441acfdf1fb6f9f/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:270a1170e7e398434ff1b31e17e233f7d7b71aa99a40473615860068e86720af",
            "sha256:fb456a9e7760e7e4481d3d9eddee3ef62358753326d669c3efe6b0cc254ab9c3",
            "sha256:981a144d9fa2efffdbdfbdcfd3881dccb4fbf07867d4fa326a386126ca11b50f",
            "sha256:583f1d6040fc90a1380cb4d11d46b78bbbd4ad7d6274b9656ab9832ef7ba6342",
            "sha256:6020d84069cf575521faca653d2817e2195b8233b8c4e2d7bc65f3296296c2bb",
            "sha256:eb0143fcce68bd06a29bf314125d30f2a1f44d606fc419e7a54ab602bc26b2d0",
            "sha256:cb395f276e984728ca05fdee4e1dc203e1310a9e943a2d16d7aa87e5b2cf754b",
            "sha256:70a158f70e3777a129c15ffe51399bb6c226218c08ff6819948b9b1f26a46277",
            "sha256:9bb0510c7b4b74b056648f24e0a55c26f7521a1a1b766250a0776e14ce9d93f6",
            "sha256:7215ae9b9700d9530f70dea6f546b7ebdaac0098e4436b4c99e3e8063170c274",
            "sha256:acbc0d2ed199232b8f482ce41318ccd603e3c9a7731d39f0b08b88b253a6ca67",
            "sha256:f73947d6312e5c0bb71714a40dd2578d2ddd0b8bd2f6fb4075ab7252ef546a72",
            "sha256:c8969d6ca107810781c69a7e4739454ff8c29a9cde975d2fd7213b750d7b15ed",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:eda6edfc53047eb4f0a29af3ca64ef2e0c310346ef9a9a7298e043b766888836",
            "sha256:6a008e9166f6a135cfdff14ca1627f0889e77cf739b1b39ecfdf780ded6ac483",
            "sha256:3018455d5297a8ddbfcdd5ebfa8b6d6b3e9122d88c7ec08ac3545df2aadabc94",
            "sha256:2a06c37334e3bc9dd7a9fe6b5e86424a6f843a928d288dbd156a721a98240f40",
            "sha256:aac2126a3821e0b54dfe65c7f7b5c1de29cb99a75ec80e77ff96dec6c9d4a91a",
            "sha256:bbe0d805a961711b096a924d0f08b55215ee491b4538eeb97d88dc0e45f2a1d6",
            "sha256:266672c040c47b2c01afeb27285c7956348bcd8687cb62a33e0dfb320e978525",
            "sha256:10c329d520a4d17c2b6f81a048ef407fab85d0b94c3e0db37c349325cf455799",
            "sha256:13f964313260f42ef306f1eede3267b011797b34b72905b81646d6d21e1763bf",
            "sha256:08c3adb6f827d8a53875981f37b8fe5e601038537570ee9f1863e651a7687ee5",
            "sha256:de6a4e50230e715fd1567be6da9c4e3d0c9c8de298c7434b0ebe417977d637b1",
            "sha256:767ce811066aa2c8e42cdaf4e60eacfa4353d4f7d98e7acacaaac89c882cacd2",
            "sha256:8ba3d2695768c272c9d1028d07cf2d042b68e96cc119a73defdfc65bb673aa18",
            "sha256:f82f19a7e52c00e61e78d143e4e59948d208bc8a4ae5fdd218ba7ee44aa93e5f",
            "sha256:2af508a0abdbdcaf79dfe86d8e83df13dec721d1649ce6762868e8864e32d89f",
            "sha256:73fd7ef5971c409dff564e5a21013dd90ba50df2fb5397a59740155685883847",
            "sha256:5f761928e1fd61dc96ff2c210be267c4b59cc91fb0c27d33a4e7b2ac19cc2552",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:06b4cec91124a4f8f4ab0dd1216f4481f07e9eefcddc9f4b69689328816ad8b4",
            "sha256:0196b942120bc08f1cf543617aa2ae8457c3a5e151bb960ce4f6c69e54fe61e4",
            "sha256:76b758bad7f8b71f2ea52c72cd7d652a713ca53ea759d99e9794cbba34681a3e",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-12-05T02:24:14.504357724+08:00"
    }
}
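
To use this image from within mainland China, the usual flow is to pull through the mirror and then retag it back to its original docker.io name. A minimal sketch, assuming the SWR mirror prefix shown in this page's header (verify it before use); `DRY_RUN=1` (the default here) only prints the commands instead of running docker:

```shell
#!/bin/sh
# Sketch: pull this image via the domestic SWR mirror, then restore the
# original docker.io tag. The mirror prefix is an assumption taken from
# this page's header -- confirm it matches before running for real.
set -eu

MIRROR_PREFIX="swr.cn-north-4.myhuaweicloud.com/ddn-k8s"
IMAGE="docker.io/gpustack/runner:cuda12.8-sglang0.5.5"
MIRROR_IMAGE="${MIRROR_PREFIX}/${IMAGE}"

# With DRY_RUN=1 (default) just print each command; otherwise execute it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run docker pull "$MIRROR_IMAGE"              # download through the mirror
run docker tag "$MIRROR_IMAGE" "$IMAGE"      # restore the original name
run docker rmi "$MIRROR_IMAGE"               # drop the mirror-prefixed alias
```

Set `DRY_RUN=0` once the printed commands look right for your registry setup.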

More versions

Image                                                 Platform     Source     Size     Synced            Views
docker.io/gpustack/runner:cann8.2-910b-vllm0.11.0     linux/arm64  docker.io  15.96GB  2025-11-27 00:53  68
docker.io/gpustack/runner:cuda12.9-vllm0.11.2         linux/amd64  docker.io  33.62GB  2025-12-03 00:42  34
docker.io/gpustack/runner:cann8.2-910b-sglang0.5.2    linux/amd64  docker.io  17.03GB  2025-12-05 01:02  8
docker.io/gpustack/runner:cann8.2-910b-mindie2.1.rc2  linux/arm64  docker.io  16.02GB  2025-12-05 01:13  9
docker.io/gpustack/runner:cann8.2-910b-sglang0.5.2    linux/arm64  docker.io  18.27GB  2025-12-05 01:28  9
docker.io/gpustack/runner:cuda12.4-vllm0.11.0         linux/amd64  docker.io  24.31GB  2025-12-05 02:03  9
docker.io/gpustack/runner:cuda12.8-sglang0.5.5        linux/amd64  docker.io  32.96GB  2025-12-05 02:43  7
docker.io/gpustack/runner:cuda12.4-voxbox0.0.20       linux/amd64  docker.io  17.15GB  2025-12-05 04:14  10