docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3 linux/amd64

docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3 - China mirror download source (views: 11)

This is a Docker image of the NVIDIA Triton Inference Server. Triton Inference Server is a high-performance inference server for deploying deep learning models: it supports multiple framework backends (for example TensorFlow, PyTorch, and TensorRT) and provides model version management, model deployment, and efficient inference serving. The 25.04-vllm-python-py3 tag additionally bundles the vLLM backend for serving large language models.
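
A rough usage sketch (the model repository path and port mappings below are illustrative assumptions, not taken from this page; 8000/8001/8002 are Triton's default HTTP/gRPC/metrics ports):

docker run --rm --gpus all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v $(pwd)/model_repository:/models \
  docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3 \
  tritonserver --model-repository=/models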

Source image: docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
China mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
Image ID: sha256:85e166e897d33255d2ae1ace9068e1d15b2f05ee76416debbcae94f9ea792eb8
Image tag: 25.04-vllm-python-py3
Size: 23.34GB
Registry: docker.io
CMD: (not set)
Entrypoint: /opt/nvidia/nvidia_entrypoint.sh
Working directory: /opt/tritonserver
OS/Platform: linux/amd64
Views: 11
Image created: 2025-05-02T03:37:23.49230555Z
Synced: 2025-06-15 02:52
Updated: 2025-06-15 18:28
Environment variables
PATH=/opt/tritonserver/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 RDMACORE_VERSION=50.0 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 OPAL_PREFIX=/opt/hpcx/ompi OMPI_MCA_coll_hcoll_enable=0 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0 NCCL_VERSION=2.26.3 CUBLAS_VERSION=12.9.0.2 CUFFT_VERSION=11.4.0.6 CURAND_VERSION=10.3.10.19 CUSPARSE_VERSION=12.5.9.5 CUSPARSELT_VERSION=0.7.1.0 CUSOLVER_VERSION=11.7.4.40 NPP_VERSION=12.4.0.27 NVJPEG_VERSION=12.4.0.16 CUFILE_VERSION=1.14.0.30 NVJITLINK_VERSION=12.9.41 CUBLASMP_VERSION=0.4.0.789 CAL_VERSION=0.4.4.50 NVSHMEM_VERSION=3.2.5 CUDNN_VERSION=9.9.0.52 CUDNN_FRONTEND_VERSION=1.11.0 TRT_VERSION=10.9.0.34+cuda12.8 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.2.1.130 NSIGHT_COMPUTE_VERSION=2025.2.0.11 DALI_VERSION=1.48.0 DALI_BUILD= DALI_URL_SUFFIX=120 POLYGRAPHY_VERSION=0.49.20 TRANSFORMER_ENGINE_VERSION=2.2 MODEL_OPT_VERSION=0.25.0 LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video NVIDIA_PRODUCT_NAME=Triton Server LIBRARY_PATH=/usr/local/cuda/lib64/stubs: PIP_BREAK_SYSTEM_PACKAGES=1 TRITON_SERVER_VERSION=2.57.0 NVIDIA_TRITON_SERVER_VERSION=25.04 UCX_MEM_EVENTS=no TF_ADJUST_HUE_FUSED=1 TF_ADJUST_SATURATION_FUSED=1 TF_ENABLE_WINOGRAD_NONFUSED=1 TF_AUTOTUNE_THRESHOLD=2 TRITON_SERVER_GPU_ENABLED=1 TRITON_SERVER_USER=triton-server DEBIAN_FRONTEND=noninteractive TCMALLOC_RELEASE_RATE=200 DCGM_VERSION=3.3.6 NVIDIA_BUILD_ID=164182487
Image labels
com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true com.amazonaws.sagemaker.capabilities.multi-models=true com.nvidia.build.id=164182487 com.nvidia.build.ref=0beb0717a0395341699ac92454c1858e91d2fe3f com.nvidia.cal.version=0.4.4.50 com.nvidia.cublas.version=12.9.0.2 com.nvidia.cublasmp.version=0.4.0.789 com.nvidia.cuda.version=9.0 com.nvidia.cudnn.version=9.9.0.52 com.nvidia.cufft.version=11.4.0.6 com.nvidia.curand.version=10.3.10.19 com.nvidia.cusolver.version=11.7.4.40 com.nvidia.cusparse.version=12.5.9.5 com.nvidia.cusparselt.version=0.7.1.0 com.nvidia.nccl.version=2.26.3 com.nvidia.npp.version=12.4.0.27 com.nvidia.nsightcompute.version=2025.2.0.11 com.nvidia.nsightsystems.version=2025.2.1.130 com.nvidia.nvjpeg.version=12.4.0.16 com.nvidia.tensorrt.version=10.9.0.34+cuda12.8 com.nvidia.tensorrtoss.version= com.nvidia.tritonserver.version=2.57.0 com.nvidia.volumes.needed=nvidia_driver org.opencontainers.image.ref.name=ubuntu org.opencontainers.image.version=24.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3  docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
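
After retagging, you can optionally confirm that the local copy matches the Image ID listed above (a minimal check; the grep filter is only illustrative):

docker images --digests | grep tritonserver
docker inspect --format '{{.Id}}' docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3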

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3  docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
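
If the nodes run Kubernetes with containerd as the runtime, the kubelet looks for images in containerd's k8s.io namespace, so the same pull/tag typically needs the -n k8s.io flag (a sketch; adjust to your environment):

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3 docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3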

Quick shell replacement command

sed -i 's#nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3#' deployment.yaml
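
The sed command rewrites the image reference in place; a hypothetical deployment.yaml fragment before and after (the surrounding fields are illustrative only):

    # before
    image: nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
    # after
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3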

Ansible bulk distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3  docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3'

Ansible bulk distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3  docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3'

Image build history


# 2025-05-02 11:37:23  8.85KB Copy new files or directories into the container
COPY --chown=1000:1000 docker/sagemaker/serve /usr/bin/. # buildkit

# 2025-05-02 11:37:23  0.00B Add metadata label
LABEL com.amazonaws.sagemaker.capabilities.multi-models=true

# 2025-05-02 11:37:23  0.00B Add metadata label
LABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true
                        
# 2025-05-02 11:37:23  6.68MB Run a command and create a new image layer
RUN |7 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 BUILD_PUBLIC_VLLM=false VLLM_INDEX_URL=https://gitlab-ci-token:glcbt-64_vc3eH5PJvym-VsTDgMfh@gitlab-master.nvidia.com/api/v4/projects/100660/packages/pypi/simple PYTORCH_TRITON_URL=https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/pytorch_triton/wheel/pytorch_triton-3.1.0+cf34004b8.internal-cp312-cp312-linux_x86_64.whl NVPL_SLIM_URL=None PYVER=3.12 /bin/sh -c pip3 install -r python/openai/requirements.txt # buildkit
                        
# 2025-05-02 11:37:21  365.85MB Run a command and create a new image layer
RUN |7 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 BUILD_PUBLIC_VLLM=false VLLM_INDEX_URL=https://gitlab-ci-token:glcbt-64_vc3eH5PJvym-VsTDgMfh@gitlab-master.nvidia.com/api/v4/projects/100660/packages/pypi/simple PYTORCH_TRITON_URL=https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/pytorch_triton/wheel/pytorch_triton-3.1.0+cf34004b8.internal-cp312-cp312-linux_x86_64.whl NVPL_SLIM_URL=None PYVER=3.12 /bin/sh -c find /opt/tritonserver/python -maxdepth 1 -type f -name     "tritonserver-*.whl" | xargs -I {} pip install --upgrade {}[all] &&     find /opt/tritonserver/python -maxdepth 1 -type f -name     "tritonfrontend-*.whl" | xargs -I {} pip install --upgrade {}[all] # buildkit
                        
# 2025-05-02 11:37:13  3.01MB Copy new files or directories into the container
COPY --chown=1000:1000 NVIDIA_Deep_Learning_Container_License.pdf . # buildkit

# 2025-05-02 11:37:13  0.00B Set working directory to /opt/tritonserver
WORKDIR /opt/tritonserver

# 2025-05-02 11:37:13  541.61MB Copy new files or directories into the container
COPY --chown=1000:1000 build/install tritonserver # buildkit

# 2025-05-02 11:37:13  0.00B Set working directory to /opt
WORKDIR /opt

# 2025-05-02 11:37:13  0.00B Add metadata label
LABEL com.nvidia.build.ref=0beb0717a0395341699ac92454c1858e91d2fe3f

# 2025-05-02 11:37:13  0.00B Add metadata label
LABEL com.nvidia.build.id=164182487

# 2025-05-02 11:37:13  0.00B Set environment variable NVIDIA_BUILD_ID
ENV NVIDIA_BUILD_ID=164182487

# 2025-05-02 11:37:13  733.00B Copy new files or directories into the container
COPY docker/entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit

# 2025-05-02 11:37:13  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=Triton Server
                        
# 2025-05-02 11:37:13  0.00B Run a command and create a new image layer
RUN |7 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 BUILD_PUBLIC_VLLM=false VLLM_INDEX_URL=https://gitlab-ci-token:glcbt-64_vc3eH5PJvym-VsTDgMfh@gitlab-master.nvidia.com/api/v4/projects/100660/packages/pypi/simple PYTORCH_TRITON_URL=https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/pytorch_triton/wheel/pytorch_triton-3.1.0+cf34004b8.internal-cp312-cp312-linux_x86_64.whl NVPL_SLIM_URL=None PYVER=3.12 /bin/sh -c rm -fr /opt/tritonserver/* # buildkit
                        
# 2025-05-02 11:37:13  0.00B Set working directory to /opt/tritonserver
WORKDIR /opt/tritonserver

# 2025-05-02 11:37:13  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64

# 2025-05-02 11:37:13  0.00B Define build argument
ARG PYVER=3.12
                        
# 2025-05-02 11:37:13  9.89GB Run a command and create a new image layer
RUN |6 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 BUILD_PUBLIC_VLLM=false VLLM_INDEX_URL=https://gitlab-ci-token:glcbt-64_vc3eH5PJvym-VsTDgMfh@gitlab-master.nvidia.com/api/v4/projects/100660/packages/pypi/simple PYTORCH_TRITON_URL=https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/pytorch_triton/wheel/pytorch_triton-3.1.0+cf34004b8.internal-cp312-cp312-linux_x86_64.whl NVPL_SLIM_URL=None /bin/sh -c if [ "$BUILD_PUBLIC_VLLM" = "false" ]; then         if [ "$(uname -m)" = "x86_64" ]; then             pip3 install --no-cache-dir                 mkl==2021.1.1                 mkl-include==2021.1.1                 mkl-devel==2021.1.1;         elif [ "$(uname -m)" = "aarch64" ]; then             echo "Downloading NVPL from: $NVPL_SLIM_URL" &&             cd /tmp &&             wget -O nvpl_slim_24.04.tar $NVPL_SLIM_URL &&             tar -xf nvpl_slim_24.04.tar &&             cp -r nvpl_slim_24.04/lib/* /usr/local/lib &&             cp -r nvpl_slim_24.04/include/* /usr/local/include &&             rm -rf nvpl_slim_24.04.tar nvpl_slim_24.04;         fi         && pip3 install --no-cache-dir --progress-bar on --index-url $VLLM_INDEX_URL -r /run/secrets/requirements         && cd /tmp         && wget $PYTORCH_TRITON_URL         && pip install --no-cache-dir /tmp/pytorch_triton-*.whl         && rm /tmp/pytorch_triton-*.whl;     else         pip3 install vllm==0.8.1;     fi # buildkit
                        
# 2025-05-02 11:35:11  0.00B Define build argument
ARG NVPL_SLIM_URL=None
                        
# 2025-05-02 11:35:11  0.00B Define build argument
ARG PYTORCH_TRITON_URL=https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/pytorch_triton/wheel/pytorch_triton-3.1.0+cf34004b8.internal-cp312-cp312-linux_x86_64.whl
                        
# 2025-05-02 11:35:11  0.00B Define build argument
ARG VLLM_INDEX_URL=https://gitlab-ci-token:glcbt-64_vc3eH5PJvym-VsTDgMfh@gitlab-master.nvidia.com/api/v4/projects/100660/packages/pypi/simple
                        
# 2025-05-02 11:35:11  0.00B Define build argument
ARG BUILD_PUBLIC_VLLM=false
                        
# 2025-05-02 11:35:11  199.57MB Run a command and create a new image layer
RUN |2 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 /bin/sh -c apt-get update       && apt-get install -y --no-install-recommends             python3             libarchive-dev             python3-pip             python3-wheel             python3-setuptools             libpython3-dev       && pip3 install --upgrade             "numpy<2"             virtualenv       && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-05-02 11:35:02  50.34KB Run a command and create a new image layer
RUN |2 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 /bin/sh -c ln -sf ${_CUDA_COMPAT_PATH}/lib.real ${_CUDA_COMPAT_PATH}/lib     && echo ${_CUDA_COMPAT_PATH}/lib > /etc/ld.so.conf.d/00-cuda-compat.conf     && ldconfig     && rm -f ${_CUDA_COMPAT_PATH}/lib # buildkit
                        
# 2025-05-02 11:35:02  1.75GB Run a command and create a new image layer
RUN |2 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 /bin/sh -c curl -o /tmp/cuda-keyring.deb           https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb       && apt install /tmp/cuda-keyring.deb       && rm /tmp/cuda-keyring.deb       && apt-get update       && apt-get install -y datacenter-gpu-manager=1:3.3.6 # buildkit
                        
# 2025-05-02 11:34:44  0.00B Set environment variable DCGM_VERSION
ENV DCGM_VERSION=3.3.6

# 2025-05-02 11:34:44  0.00B Set environment variable TCMALLOC_RELEASE_RATE
ENV TCMALLOC_RELEASE_RATE=200
                        
# 2025-05-02 11:34:44  384.41MB Run a command and create a new image layer
RUN |2 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 /bin/sh -c apt-get update       && apt-get install -y --no-install-recommends               clang               curl               dirmngr               git               gperf               libb64-0d               libcurl4-openssl-dev               libgoogle-perftools-dev               libjemalloc-dev               libnuma-dev               software-properties-common               wget                              python3-pip       && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-05-02 11:34:23  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive
                        
# 2025-05-02 11:34:23  4.49KB Run a command and create a new image layer
RUN |2 TRITON_VERSION=2.57.0 TRITON_CONTAINER_VERSION=25.04 /bin/sh -c userdel tensorrt-server > /dev/null 2>&1 || true       && userdel ubuntu > /dev/null 2>&1 || true       && if ! id -u $TRITON_SERVER_USER > /dev/null 2>&1 ; then           useradd $TRITON_SERVER_USER;         fi       && [ `id -u $TRITON_SERVER_USER` -eq 1000 ]       && [ `id -g $TRITON_SERVER_USER` -eq 1000 ] # buildkit
                        
# 2025-05-02 11:34:23  0.00B Set environment variable TRITON_SERVER_USER
ENV TRITON_SERVER_USER=triton-server

# 2025-05-02 11:34:23  0.00B Set environment variable TRITON_SERVER_GPU_ENABLED
ENV TRITON_SERVER_GPU_ENABLED=1

# 2025-05-02 11:34:23  0.00B Set environment variable TF_AUTOTUNE_THRESHOLD
ENV TF_AUTOTUNE_THRESHOLD=2

# 2025-05-02 11:34:23  0.00B Set environment variable TF_ENABLE_WINOGRAD_NONFUSED
ENV TF_ENABLE_WINOGRAD_NONFUSED=1

# 2025-05-02 11:34:23  0.00B Set environment variable TF_ADJUST_SATURATION_FUSED
ENV TF_ADJUST_SATURATION_FUSED=1

# 2025-05-02 11:34:23  0.00B Set environment variable TF_ADJUST_HUE_FUSED
ENV TF_ADJUST_HUE_FUSED=1

# 2025-05-02 11:34:23  0.00B Set environment variable UCX_MEM_EVENTS
ENV UCX_MEM_EVENTS=no
                        
# 2025-05-02 11:34:23  0.00B Set environment variable PATH
ENV PATH=/opt/tritonserver/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin
                        
# 2025-05-02 11:34:23  0.00B Add metadata label
LABEL com.nvidia.tritonserver.version=2.57.0

# 2025-05-02 11:34:23  0.00B Set environment variable NVIDIA_TRITON_SERVER_VERSION
ENV NVIDIA_TRITON_SERVER_VERSION=25.04

# 2025-05-02 11:34:23  0.00B Set environment variable TRITON_SERVER_VERSION
ENV TRITON_SERVER_VERSION=2.57.0

# 2025-05-02 11:34:23  0.00B Define build argument
ARG TRITON_CONTAINER_VERSION=25.04

# 2025-05-02 11:34:23  0.00B Define build argument
ARG TRITON_VERSION=2.57.0

# 2025-05-02 11:34:23  0.00B Set environment variable PIP_BREAK_SYSTEM_PACKAGES
ENV PIP_BREAK_SYSTEM_PACKAGES=1
                        
# 2025-04-16 00:59:31  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
                        
# 2025-04-16 00:59:31  1.01GB Run a command and create a new image layer
RUN /bin/sh -c export DEVEL=1 BASE=0  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUDA.sh  && /nvidia/build-scripts/installLIBS.sh  && if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then /nvidia/build-scripts/installNCCL.sh; fi  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installCUSPARSELT.sh  && if [ -f "/tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch" ]; then patch -p0 < /tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch; fi  && rm -f /tmp/cuda-*.patch # buildkit
                        
# 2025-04-16 00:56:13  1.49KB Copy new files or directories into the container
COPY cuda-*.patch /tmp # buildkit
                        
# 2025-04-16 00:56:13  99.01MB Run a command and create a new image layer
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         build-essential         git         libglib2.0-0         less         libhwloc15         libnl-route-3-200         libnl-3-dev         libnl-route-3-dev         libnuma-dev         libnuma1         libpmi2-0-dev         nano         numactl         openssh-client         vim         wget  && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-04-16 00:49:21  467.00B Run a command and create a new image layer
RUN |39 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 NCCL_VERSION=2.26.3 CUBLAS_VERSION=12.9.0.2 CUFFT_VERSION=11.4.0.6 CURAND_VERSION=10.3.10.19 CUSPARSE_VERSION=12.5.9.5 CUSOLVER_VERSION=11.7.4.40 NPP_VERSION=12.4.0.27 NVJPEG_VERSION=12.4.0.16 CUFILE_VERSION=1.14.0.30 NVJITLINK_VERSION=12.9.41 CUBLASMP_VERSION=0.4.0.789 CAL_VERSION=0.4.4.50 NVSHMEM_VERSION=3.2.5 CUDNN_VERSION=9.9.0.52 CUDNN_FRONTEND_VERSION=1.11.0 TRT_VERSION=10.9.0.34+cuda12.8 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.2.1.130 NSIGHT_COMPUTE_VERSION=2025.2.0.11 CUSPARSELT_VERSION=0.7.1.0 DALI_VERSION=1.48.0 DALI_BUILD= DALI_URL_SUFFIX=120 POLYGRAPHY_VERSION=0.49.20 TRANSFORMER_ENGINE_VERSION=2.2 MODEL_OPT_VERSION=0.25.0 _LIBPATH_SUFFIX= /bin/sh -c mkdir -p /workspace && cp -f -p /opt/nvidia/entrypoint.d/30-container-license.txt /workspace/license.txt # buildkit
                        
# 2025-04-16 00:49:21  0.00B Configure the command run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]

# 2025-04-16 00:49:21  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA

# 2025-04-16 00:49:21  16.04KB Copy new files or directories into the container
COPY entrypoint/ /opt/nvidia/ # buildkit
                        
# 2025-04-16 00:49:21  33.35KB Run a command and create a new image layer
RUN |39 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 NCCL_VERSION=2.26.3 CUBLAS_VERSION=12.9.0.2 CUFFT_VERSION=11.4.0.6 CURAND_VERSION=10.3.10.19 CUSPARSE_VERSION=12.5.9.5 CUSOLVER_VERSION=11.7.4.40 NPP_VERSION=12.4.0.27 NVJPEG_VERSION=12.4.0.16 CUFILE_VERSION=1.14.0.30 NVJITLINK_VERSION=12.9.41 CUBLASMP_VERSION=0.4.0.789 CAL_VERSION=0.4.4.50 NVSHMEM_VERSION=3.2.5 CUDNN_VERSION=9.9.0.52 CUDNN_FRONTEND_VERSION=1.11.0 TRT_VERSION=10.9.0.34+cuda12.8 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.2.1.130 NSIGHT_COMPUTE_VERSION=2025.2.0.11 CUSPARSELT_VERSION=0.7.1.0 DALI_VERSION=1.48.0 DALI_BUILD= DALI_URL_SUFFIX=120 POLYGRAPHY_VERSION=0.49.20 TRANSFORMER_ENGINE_VERSION=2.2 MODEL_OPT_VERSION=0.25.0 _LIBPATH_SUFFIX= /bin/sh -c if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then            echo "/opt/amazon/aws-ofi-nccl/lib" > /etc/ld.so.conf.d/aws-ofi-nccl.conf       && ldconfig;                                                 fi # buildkit
                        
# 2025-04-16 00:49:21  5.11MB Copy new files or directories into the container
COPY /opt/amazon/aws-ofi-nccl /opt/amazon/aws-ofi-nccl # buildkit
                        
# 2025-04-16 00:48:01  0.00B Set environment variables PATH LD_LIBRARY_PATH NVIDIA_VISIBLE_DEVICES NVIDIA_DRIVER_CAPABILITIES
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
                        
# 2025-04-16 00:48:01  0.00B Define build argument
ARG _LIBPATH_SUFFIX=
                        
# 2025-04-16 00:48:01  46.00B Run a command and create a new image layer
RUN |38 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 NCCL_VERSION=2.26.3 CUBLAS_VERSION=12.9.0.2 CUFFT_VERSION=11.4.0.6 CURAND_VERSION=10.3.10.19 CUSPARSE_VERSION=12.5.9.5 CUSOLVER_VERSION=11.7.4.40 NPP_VERSION=12.4.0.27 NVJPEG_VERSION=12.4.0.16 CUFILE_VERSION=1.14.0.30 NVJITLINK_VERSION=12.9.41 CUBLASMP_VERSION=0.4.0.789 CAL_VERSION=0.4.4.50 NVSHMEM_VERSION=3.2.5 CUDNN_VERSION=9.9.0.52 CUDNN_FRONTEND_VERSION=1.11.0 TRT_VERSION=10.9.0.34+cuda12.8 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.2.1.130 NSIGHT_COMPUTE_VERSION=2025.2.0.11 CUSPARSELT_VERSION=0.7.1.0 DALI_VERSION=1.48.0 DALI_BUILD= DALI_URL_SUFFIX=120 POLYGRAPHY_VERSION=0.49.20 TRANSFORMER_ENGINE_VERSION=2.2 MODEL_OPT_VERSION=0.25.0 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf  && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2025-04-16 00:48:01  13.39KB Copy files or directories into the container
ADD docs.tgz / # buildkit
                        
# 2025-04-16 00:48:01  0.00B Set environment variables DALI_VERSION DALI_BUILD DALI_URL_SUFFIX POLYGRAPHY_VERSION TRANSFORMER_ENGINE_VERSION MODEL_OPT_VERSION
ENV DALI_VERSION=1.48.0 DALI_BUILD= DALI_URL_SUFFIX=120 POLYGRAPHY_VERSION=0.49.20 TRANSFORMER_ENGINE_VERSION=2.2 MODEL_OPT_VERSION=0.25.0
                        
# 2025-04-16 00:48:01  0.00B Define build argument
ARG MODEL_OPT_VERSION=0.25.0

# 2025-04-16 00:48:01  0.00B Define build argument
ARG TRANSFORMER_ENGINE_VERSION=2.2

# 2025-04-16 00:48:01  0.00B Define build argument
ARG POLYGRAPHY_VERSION=0.49.20

# 2025-04-16 00:48:01  0.00B Define build argument
ARG DALI_URL_SUFFIX=120

# 2025-04-16 00:48:01  0.00B Define build argument
ARG DALI_BUILD=

# 2025-04-16 00:48:01  0.00B Define build argument
ARG DALI_VERSION=1.48.0
                        
# 2025-04-16 00:48:01  0.00B Add metadata label
LABEL com.nvidia.nccl.version=2.26.3 com.nvidia.cublas.version=12.9.0.2 com.nvidia.cufft.version=11.4.0.6 com.nvidia.curand.version=10.3.10.19 com.nvidia.cusparse.version=12.5.9.5 com.nvidia.cusparselt.version=0.7.1.0 com.nvidia.cusolver.version=11.7.4.40 com.nvidia.npp.version=12.4.0.27 com.nvidia.nvjpeg.version=12.4.0.16 com.nvidia.cublasmp.version=0.4.0.789 com.nvidia.cal.version=0.4.4.50 com.nvidia.cudnn.version=9.9.0.52 com.nvidia.tensorrt.version=10.9.0.34+cuda12.8 com.nvidia.tensorrtoss.version= com.nvidia.nsightsystems.version=2025.2.1.130 com.nvidia.nsightcompute.version=2025.2.0.11
                        
# 2025-04-16 00:48:01  7.59GB Run a command and create a new image layer
RUN |32 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 NCCL_VERSION=2.26.3 CUBLAS_VERSION=12.9.0.2 CUFFT_VERSION=11.4.0.6 CURAND_VERSION=10.3.10.19 CUSPARSE_VERSION=12.5.9.5 CUSOLVER_VERSION=11.7.4.40 NPP_VERSION=12.4.0.27 NVJPEG_VERSION=12.4.0.16 CUFILE_VERSION=1.14.0.30 NVJITLINK_VERSION=12.9.41 CUBLASMP_VERSION=0.4.0.789 CAL_VERSION=0.4.4.50 NVSHMEM_VERSION=3.2.5 CUDNN_VERSION=9.9.0.52 CUDNN_FRONTEND_VERSION=1.11.0 TRT_VERSION=10.9.0.34+cuda12.8 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.2.1.130 NSIGHT_COMPUTE_VERSION=2025.2.0.11 CUSPARSELT_VERSION=0.7.1.0 /bin/sh -c /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUSPARSELT.sh  && if [ -z "${JETPACK_HOST_MOUNTS}" ]; then       /nvidia/build-scripts/installNCCL.sh;     fi; # buildkit
                        
# 2025-04-16 00:46:55  0.00B Set environment variables NCCL_VERSION CUBLAS_VERSION CUFFT_VERSION CURAND_VERSION CUSPARSE_VERSION CUSPARSELT_VERSION CUSOLVER_VERSION NPP_VERSION NVJPEG_VERSION CUFILE_VERSION NVJITLINK_VERSION CUBLASMP_VERSION CAL_VERSION NVSHMEM_VERSION CUDNN_VERSION CUDNN_FRONTEND_VERSION TRT_VERSION TRTOSS_VERSION NSIGHT_SYSTEMS_VERSION NSIGHT_COMPUTE_VERSION
ENV NCCL_VERSION=2.26.3 CUBLAS_VERSION=12.9.0.2 CUFFT_VERSION=11.4.0.6 CURAND_VERSION=10.3.10.19 CUSPARSE_VERSION=12.5.9.5 CUSPARSELT_VERSION=0.7.1.0 CUSOLVER_VERSION=11.7.4.40 NPP_VERSION=12.4.0.27 NVJPEG_VERSION=12.4.0.16 CUFILE_VERSION=1.14.0.30 NVJITLINK_VERSION=12.9.41 CUBLASMP_VERSION=0.4.0.789 CAL_VERSION=0.4.4.50 NVSHMEM_VERSION=3.2.5 CUDNN_VERSION=9.9.0.52 CUDNN_FRONTEND_VERSION=1.11.0 TRT_VERSION=10.9.0.34+cuda12.8 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.2.1.130 NSIGHT_COMPUTE_VERSION=2025.2.0.11
                        
# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUSPARSELT_VERSION=0.7.1.0

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NSIGHT_COMPUTE_VERSION=2025.2.0.11

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NSIGHT_SYSTEMS_VERSION=2025.2.1.130

# 2025-04-16 00:46:55  0.00B Define build argument
ARG TRTOSS_VERSION=

# 2025-04-16 00:46:55  0.00B Define build argument
ARG TRT_VERSION=10.9.0.34+cuda12.8

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUDNN_FRONTEND_VERSION=1.11.0

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUDNN_VERSION=9.9.0.52

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NVSHMEM_VERSION=3.2.5

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CAL_VERSION=0.4.4.50

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUBLASMP_VERSION=0.4.0.789

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NVJITLINK_VERSION=12.9.41

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUFILE_VERSION=1.14.0.30

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NVJPEG_VERSION=12.4.0.16

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NPP_VERSION=12.4.0.27

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUSOLVER_VERSION=11.7.4.40

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUSPARSE_VERSION=12.5.9.5

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CURAND_VERSION=10.3.10.19

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUFFT_VERSION=11.4.0.6

# 2025-04-16 00:46:55  0.00B Define build argument
ARG CUBLAS_VERSION=12.9.0.2

# 2025-04-16 00:46:55  0.00B Define build argument
ARG NCCL_VERSION=2.26.3
                        
# 2025-04-16 00:46:55  0.00B Add metadata label
LABEL com.nvidia.volumes.needed=nvidia_driver com.nvidia.cuda.version=9.0
                        
# 2025-04-16 00:46:55  0.00B Set environment variables _CUDA_COMPAT_PATH ENV BASH_ENV SHELL NVIDIA_REQUIRE_CUDA
ENV _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0
                        
# 2025-04-16 00:46:55  59.18KB Run a command and create a new image layer
RUN |12 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 /bin/sh -c cp -vprd /nvidia/. /  &&  patch -p0 < /etc/startup_scripts.patch  &&  rm -f /etc/startup_scripts.patch # buildkit
                        
# 2025-04-16 00:46:55  874.65MB Run a command and create a new image layer
RUN |12 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02 /bin/sh -c /nvidia/build-scripts/installCUDA.sh # buildkit
                        
# 2025-04-11 09:18:54  0.00B Set environment variables CUDA_VERSION CUDA_DRIVER_VERSION
ENV CUDA_VERSION=12.9.0.036 CUDA_DRIVER_VERSION=575.51.02

# 2025-04-11 09:18:54  0.00B Define build argument
ARG CUDA_DRIVER_VERSION=575.51.02

# 2025-04-11 09:18:54  0.00B Define build argument
ARG CUDA_VERSION=12.9.0.036
                        
# 2025-04-11 09:18:54  0.00B Set environment variable OMPI_MCA_coll_hcoll_enable
ENV OMPI_MCA_coll_hcoll_enable=0
                        
# 2025-04-11 09:18:54  0.00B Set environment variables OPAL_PREFIX PATH
ENV OPAL_PREFIX=/opt/hpcx/ompi PATH=/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin
                        
# 2025-04-11 09:18:54  230.02MB Run a command and create a new image layer
RUN |10 JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 RDMACORE_VERSION=50.0 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0 TARGETARCH=amd64 /bin/sh -c cd /nvidia  && ( export DEBIAN_FRONTEND=noninteractive        && apt-get update                            && apt-get install -y --no-install-recommends              libibverbs1                                  libibverbs-dev                               librdmacm1                                   librdmacm-dev                                libibumad3                                   libibumad-dev                                ibverbs-utils                                ibverbs-providers                     && rm -rf /var/lib/apt/lists/*               && rm $(dpkg-query -L                                    libibverbs-dev                               librdmacm-dev                                libibumad-dev                            | grep "\(\.so\|\.a\)$")          )                                            && ( cd opt/gdrcopy/                              && dpkg -i libgdrapi_*.deb                   )                                         && ( cp -r opt/hpcx /opt/                                         && cp etc/ld.so.conf.d/hpcx.conf /etc/ld.so.conf.d/          && ln -sf /opt/hpcx/ompi /usr/local/mpi                      && ln -sf /opt/hpcx/ucx  /usr/local/ucx                      && sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf         && sed -i 's/^\(btl = self\)$/#\1/'                             /opt/hpcx/ompi/etc/openmpi-mca-params.conf       )                                                         && ( if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then           cd opt/amazon/efa/                                           && dpkg -i libfabric*.deb                                    && rm /opt/amazon/efa/lib/libfabric.a                        && echo "/opt/amazon/efa/lib" > /etc/ld.so.conf.d/efa.conf;         fi                                                         )                                                         && ldconfig # buildkit
                        
# 2025-04-11 09:18:47  0.00B Define build argument
ARG TARGETARCH=amd64
                        
# 2025-04-11 09:18:47  0.00B Set environment variables GDRCOPY_VERSION HPCX_VERSION MOFED_VERSION OPENUCX_VERSION OPENMPI_VERSION RDMACORE_VERSION EFA_VERSION AWS_OFI_NCCL_VERSION
ENV GDRCOPY_VERSION=2.4.1 HPCX_VERSION=2.22.1 MOFED_VERSION=5.4-rdmacore50.0 OPENUCX_VERSION=1.18.0 OPENMPI_VERSION=4.1.7 RDMACORE_VERSION=50.0 EFA_VERSION=1.38.1 AWS_OFI_NCCL_VERSION=1.14.0
                        
# 2025-04-11 09:18:47  0.00B Define build argument
ARG AWS_OFI_NCCL_VERSION=1.14.0

# 2025-04-11 09:18:47  0.00B Define build argument
ARG EFA_VERSION=1.38.1

# 2025-04-11 09:18:47  0.00B Define build argument
ARG OPENMPI_VERSION=4.1.7

# 2025-04-11 09:18:47  0.00B Define build argument
ARG OPENUCX_VERSION=1.18.0

# 2025-04-11 09:18:47  0.00B Define build argument
ARG MOFED_VERSION=5.4-rdmacore50.0

# 2025-04-11 09:18:47  0.00B Define build argument
ARG RDMACORE_VERSION=50.0

# 2025-04-11 09:18:47  0.00B Define build argument
ARG HPCX_VERSION=2.22.1

# 2025-04-11 09:18:47  0.00B Define build argument
ARG GDRCOPY_VERSION=2.4.1
                        
# 2025-04-11 09:18:47  311.13MB Run a command and create a new image layer
RUN |1 JETPACK_HOST_MOUNTS= /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         apt-utils         build-essential         ca-certificates         curl         libncurses6         libncursesw6         patch         wget         unzip         jq         gnupg         libtcmalloc-minimal4  && rm -rf /var/lib/apt/lists/*  && echo "hsts=0" > /root/.wgetrc # buildkit
                        
# 2025-04-11 09:18:30  0.00B Run a command and create a new image layer
RUN |1 JETPACK_HOST_MOUNTS= /bin/sh -c if [ -n "${JETPACK_HOST_MOUNTS}" ]; then        echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf     && echo "/usr/lib/aarch64-linux-gnu/tegra-egl" >> /etc/ld.so.conf.d/nvidia-tegra.conf;     fi # buildkit
                        
# 2025-04-11 09:18:30  0.00B Set environment variable NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS
ENV NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=

# 2025-04-11 09:18:30  0.00B Define build argument
ARG JETPACK_HOST_MOUNTS=
                        
# 2025-04-08 18:43:15  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-04-08 18:43:14  78.10MB 
/bin/sh -c #(nop) ADD file:1d7c45546e94b90e941c5bf5c7a5d415d7b868581ad96171d4beb76caa8ab683 in / 
                        
# 2025-04-08 18:43:12  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=24.04
                        
# 2025-04-08 18:43:12  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-04-08 18:43:12  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-04-08 18:43:12  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:85e166e897d33255d2ae1ace9068e1d15b2f05ee76416debbcae94f9ea792eb8",
    "RepoTags": [
        "nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3"
    ],
    "RepoDigests": [
        "nvcr.io/nvidia/tritonserver@sha256:1350145d83bb8b4d4f71859f2260f44747eae8a2055cfd554378bc19e80b8ace",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver@sha256:aa6f4d83ffac43eaa72533c3b5d9eb2494e9b618478454bc5c9ae3534791d298"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-05-02T03:37:23.49230555Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/opt/tritonserver/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin",
            "NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=",
            "GDRCOPY_VERSION=2.4.1",
            "HPCX_VERSION=2.22.1",
            "MOFED_VERSION=5.4-rdmacore50.0",
            "OPENUCX_VERSION=1.18.0",
            "OPENMPI_VERSION=4.1.7",
            "RDMACORE_VERSION=50.0",
            "EFA_VERSION=1.38.1",
            "AWS_OFI_NCCL_VERSION=1.14.0",
            "OPAL_PREFIX=/opt/hpcx/ompi",
            "OMPI_MCA_coll_hcoll_enable=0",
            "CUDA_VERSION=12.9.0.036",
            "CUDA_DRIVER_VERSION=575.51.02",
            "_CUDA_COMPAT_PATH=/usr/local/cuda/compat",
            "ENV=/etc/shinit_v2",
            "BASH_ENV=/etc/bash.bashrc",
            "SHELL=/bin/bash",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=9.0",
            "NCCL_VERSION=2.26.3",
            "CUBLAS_VERSION=12.9.0.2",
            "CUFFT_VERSION=11.4.0.6",
            "CURAND_VERSION=10.3.10.19",
            "CUSPARSE_VERSION=12.5.9.5",
            "CUSPARSELT_VERSION=0.7.1.0",
            "CUSOLVER_VERSION=11.7.4.40",
            "NPP_VERSION=12.4.0.27",
            "NVJPEG_VERSION=12.4.0.16",
            "CUFILE_VERSION=1.14.0.30",
            "NVJITLINK_VERSION=12.9.41",
            "CUBLASMP_VERSION=0.4.0.789",
            "CAL_VERSION=0.4.4.50",
            "NVSHMEM_VERSION=3.2.5",
            "CUDNN_VERSION=9.9.0.52",
            "CUDNN_FRONTEND_VERSION=1.11.0",
            "TRT_VERSION=10.9.0.34+cuda12.8",
            "TRTOSS_VERSION=",
            "NSIGHT_SYSTEMS_VERSION=2025.2.1.130",
            "NSIGHT_COMPUTE_VERSION=2025.2.0.11",
            "DALI_VERSION=1.48.0",
            "DALI_BUILD=",
            "DALI_URL_SUFFIX=120",
            "POLYGRAPHY_VERSION=0.49.20",
            "TRANSFORMER_ENGINE_VERSION=2.2",
            "MODEL_OPT_VERSION=0.25.0",
            "LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility,video",
            "NVIDIA_PRODUCT_NAME=Triton Server",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs:",
            "PIP_BREAK_SYSTEM_PACKAGES=1",
            "TRITON_SERVER_VERSION=2.57.0",
            "NVIDIA_TRITON_SERVER_VERSION=25.04",
            "UCX_MEM_EVENTS=no",
            "TF_ADJUST_HUE_FUSED=1",
            "TF_ADJUST_SATURATION_FUSED=1",
            "TF_ENABLE_WINOGRAD_NONFUSED=1",
            "TF_AUTOTUNE_THRESHOLD=2",
            "TRITON_SERVER_GPU_ENABLED=1",
            "TRITON_SERVER_USER=triton-server",
            "DEBIAN_FRONTEND=noninteractive",
            "TCMALLOC_RELEASE_RATE=200",
            "DCGM_VERSION=3.3.6",
            "NVIDIA_BUILD_ID=164182487"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/opt/tritonserver",
        "Entrypoint": [
            "/opt/nvidia/nvidia_entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "com.amazonaws.sagemaker.capabilities.accept-bind-to-port": "true",
            "com.amazonaws.sagemaker.capabilities.multi-models": "true",
            "com.nvidia.build.id": "164182487",
            "com.nvidia.build.ref": "0beb0717a0395341699ac92454c1858e91d2fe3f",
            "com.nvidia.cal.version": "0.4.4.50",
            "com.nvidia.cublas.version": "12.9.0.2",
            "com.nvidia.cublasmp.version": "0.4.0.789",
            "com.nvidia.cuda.version": "9.0",
            "com.nvidia.cudnn.version": "9.9.0.52",
            "com.nvidia.cufft.version": "11.4.0.6",
            "com.nvidia.curand.version": "10.3.10.19",
            "com.nvidia.cusolver.version": "11.7.4.40",
            "com.nvidia.cusparse.version": "12.5.9.5",
            "com.nvidia.cusparselt.version": "0.7.1.0",
            "com.nvidia.nccl.version": "2.26.3",
            "com.nvidia.npp.version": "12.4.0.27",
            "com.nvidia.nsightcompute.version": "2025.2.0.11",
            "com.nvidia.nsightsystems.version": "2025.2.1.130",
            "com.nvidia.nvjpeg.version": "12.4.0.16",
            "com.nvidia.tensorrt.version": "10.9.0.34+cuda12.8",
            "com.nvidia.tensorrtoss.version": "",
            "com.nvidia.tritonserver.version": "2.57.0",
            "com.nvidia.volumes.needed": "nvidia_driver",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "24.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 23337421185,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/026cfb23ee326b8300302dd7afd3cfbb1760ca4628702353a33c03015762da2b/diff:/var/lib/docker/overlay2/91c3c5b838248c8ca86a22c3ed2afc38346906d49d3d50e7a0a9ae8418352822/diff:/var/lib/docker/overlay2/bfb7ffe74211bbd092db35b606f1803e3a7b05f39228a9454a4e0d1903d4015b/diff:/var/lib/docker/overlay2/a784118b371f9635fab3e0f1994b28f12f286f120d232debdee7a990222ab16f/diff:/var/lib/docker/overlay2/b12a7eec1f323ad45e595223fe379514155895323ebcdd91f07065baf9de44d1/diff:/var/lib/docker/overlay2/9f36bf857765b609ae835b2348eb7b16c776a6257d490c25ebef1c6c40722ff5/diff:/var/lib/docker/overlay2/b6e0913e5f65a7670a6cb81984cbf303116b8a801e2f0361b7b0ce76d2e9e778/diff:/var/lib/docker/overlay2/7aa67620731bb85e3af660e717d3ab67afd57bdc5ae17fdfd3dec0ecf11d61e7/diff:/var/lib/docker/overlay2/93e455bae3a48a8460034fe92f000011a4530a464043a9a1bde95fe4861098f9/diff:/var/lib/docker/overlay2/b636ff4dca070c8e7a55cbd9438c85a6ec7be12f4c48495c621a1bcad6c6b133/diff:/var/lib/docker/overlay2/cc8ac5e4c485e4cb984681edfbc5f6aaeb852f6c6052f50bcdc4acddff0bf150/diff:/var/lib/docker/overlay2/bad99c87c6efaf5a46744360dc0b16a906222cef171f5d16329b456d6ffc9453/diff:/var/lib/docker/overlay2/c50870ddcd4fc357652e0a1625ea3288fa5d010d31b816b208769ea3b4512844/diff:/var/lib/docker/overlay2/b3ba4c12eaf00c53ed88072f989f5adf3b5d4d1b4b9286bd40bb9607964ce0f8/diff:/var/lib/docker/overlay2/9993734903d4834776dc4e3990dee18cae0693accea88d52379bd8732cd03701/diff:/var/lib/docker/overlay2/2a0310c6e7bdf732ed47f653084cdac203574b05eaff7749c2c19591e847f973/diff:/var/lib/docker/overlay2/c511b82b24408041f73c94683dad867e0b3b08a6f9e0dd3e8e20563dc074ac0a/diff:/var/lib/docker/overlay2/5d16b79c470fa49bf3f7c3b55dd5be91408e81e74b7111046da29b3c9d24916e/diff:/var/lib/docker/overlay2/cf0d9f76c76449fd21dcfe98741a5f297688e065ffef76407740365568ef818c/diff:/var/lib/docker/overlay2/dc75be54138b8b43f5b0b16d6a88eb4d347a1609b0858b9c07927d56e166eebc/diff:/var/lib/docker/overlay2/d146d6286ec8edcbdaf9d5e58c33e7c6c00633932eee56832e9ade6507093d80/diff:/var/lib/docker/overlay2/f665a5d5f7566ed154bd431c430ad2f8668db580566b2415fea092ad993591f6/diff:/var/lib/docker/overlay2/649ae5958815e5e7589b61e559c8d5b16701427ea478696093c6da34a28cf760/diff:/var/lib/docker/overlay2/48d33312c6d7d0de82237a4c0038465e19f551622d7872d8fcf926cee4f251ed/diff:/var/lib/docker/overlay2/39b9d38d475c07733db1f44850daaad5fcfa659a67a05931f1b8d083cfeea760/diff:/var/lib/docker/overlay2/ffa3bd351dcb07232a43304c3f3b0837fe92bacb7e2dea3f75daa27ec08f0cba/diff:/var/lib/docker/overlay2/ef39eebcb36db1a0af0d44772ac58c06f39eb64b25161fdbeeafa97264016b3f/diff:/var/lib/docker/overlay2/b9d2655ab85c2fca0a5d83d4183d30395aee9422171d7feabd1379f53d357ff7/diff:/var/lib/docker/overlay2/eebb4bef86ae7474041ac8640db0ac89789c4e78a757f9d0d9a051e290d0721d/diff:/var/lib/docker/overlay2/bd307a2c11d0f0261371086826004a62c8d26073196a8db4992ab139f6c044f4/diff:/var/lib/docker/overlay2/697096b4c822b101843182ff505291800d91c80f1ac5c99f2b9cab5e17ecccf2/diff",
            "MergedDir": "/var/lib/docker/overlay2/329d210752ee3ef10298be139ca7ea50f088a23c475a2f3dc14b8d9220ea3cf3/merged",
            "UpperDir": "/var/lib/docker/overlay2/329d210752ee3ef10298be139ca7ea50f088a23c475a2f3dc14b8d9220ea3cf3/diff",
            "WorkDir": "/var/lib/docker/overlay2/329d210752ee3ef10298be139ca7ea50f088a23c475a2f3dc14b8d9220ea3cf3/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:3abdd8a5e7a8909e1509f1d36dcc8b85a0f95c68a69e6d86c6e9e3c1059d44b3",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:56b48a64c1a89635c37cc8eee09f44dfdd1b0bffb178c7c6e684b5602230dc86",
            "sha256:51095d499b316211c41771ac0086d35be15ddb88e250ffc16c472317f7186592",
            "sha256:382f0aacfa3721bb8e87165e2adb98f5986881bebfd195c8a39ab90b56dd923a",
            "sha256:0c5897de4ba5c763c7e50e3a44f16f87a36e6672290cfab474eb27cd313b39c6",
            "sha256:383dc5aac9d7a3d6dd11b9da9733258cc739f0b2e7fccc1fab120eaf5e04f205",
            "sha256:4ffee5b339a82e809699254d6393839780f9c85d54f2e67bbdacaf40ca3a879e",
            "sha256:9f2c7f673b63c3a31155f340149224e92b466afeb5fee5f0be73548bf803f01e",
            "sha256:6b5e4b206c8b3d4b99f76bf751d9bf52108a496529ebf673d5083023a8dea26d",
            "sha256:38566b056d73e56f9339fd3dae54d29b26713ab07773d022d807f35b5fe3a398",
            "sha256:f4bcfdcad5ed7fdcabb6fa7f3839922cda7a295b9f03fe7c0df00535631bf0ca",
            "sha256:0879c5d84b48fcfd1e9de1636ce3b171b9bd86a4a0772f5305cfe02e2781b00c",
            "sha256:bf3c7891de9351c7abd4db9910444bc818063088aff357a3033efc3c8b2a2786",
            "sha256:dd87c33ba4b2c61e7164cce127f51144776ceec66be37e2ee05926dbb20aee57",
            "sha256:ee5507ec4a0e87d3f11fc56363c27db42d182871b5355f8b7fa12e61ee679c8e",
            "sha256:bda3311f0d7948fc53d34dcc2d8d8409324f07ab0b7770b2288de41925923d4e",
            "sha256:4c7c1b14643d584e254b5c2f1ead3036592db4db4497db040aa46c75cd3e0d5c",
            "sha256:567d2fc77ce95b9b90fe910de1ebca5b5d25455d2c4431e4b70e590107c4a566",
            "sha256:354a151e42d2285fce34a3b7ca39646aa662b4469dd03d67481f17de00aa1d28",
            "sha256:ff7deba7d509546525e3067c6e00f2241f39d0370f170daa8399e654c9ea3d6b",
            "sha256:b867e3bf7abea2290d968682906adb81b9ec83b7f11a24ddc3ff0ba176990e8e",
            "sha256:63accbe8efeeea9f40f422a9490940b5cb429bb0e414a69aba700c5d2d509f02",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5e752c96675b602a655d435d36dd3abec7913d3e82746e0a1fce0d065329ca89",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:53a71f06bb8860ba443d0ce21ab439432b0ec46b1f0ce138ec4b6ca4c94cbab4",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:796acbbbe86d10ee36d9b1c59d664bf5d03b7f209e77b9b04e50c5e77261c734",
            "sha256:dfe675bf3db9d0134fc271721c96ac3065ae1c09f306950c86eb1cc86612b7aa",
            "sha256:250f57beec7a3c6b62222962f97826c372bc4fd1e7cb356bafac8cbf087f0629",
            "sha256:63ad4be4a636f0b2b892dc1d0bf9c8982e30c986b8fac9583418b5bd79a9b7f3"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-06-15T02:40:28.171917539+08:00"
    }
}
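
Because the RepoDigests are recorded above, the mirrored copy can also be pulled by digest to pin the exact content (a sketch using the digest from the JSON):

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver@sha256:aa6f4d83ffac43eaa72533c3b5d9eb2494e9b618478454bc5c9ae3534791d298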

More versions

docker.io/nvcr.io/nvidia/tritonserver:24.11-trtllm-python-py3
linux/amd64 | docker.io | 24.86GB | 2025-02-26 02:25 | 370 views

docker.io/nvcr.io/nvidia/tritonserver:25.04-py3
linux/amd64 | docker.io | 19.59GB | 2025-05-22 00:46 | 189 views

docker.io/nvcr.io/nvidia/tritonserver:25.02-trtllm-python-py3
linux/amd64 | docker.io | 30.15GB | 2025-05-25 02:51 | 72 views

docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
linux/amd64 | docker.io | 14.00GB | 2025-06-04 06:32 | 29 views

docker.io/nvcr.io/nvidia/tritonserver:25.05-trtllm-python-py3
linux/amd64 | docker.io | 32.90GB | 2025-06-09 03:41 | 49 views

docker.io/nvcr.io/nvidia/tritonserver:25.04-vllm-python-py3
linux/amd64 | docker.io | 23.34GB | 2025-06-15 02:52 | 10 views