docker.io/nvcr.io/nvidia/tritonserver:22.12-py3 linux/amd64

docker.io/nvcr.io/nvidia/tritonserver:22.12-py3 - China-mirror download source

This is a Docker image of NVIDIA Triton Inference Server. Triton Inference Server is a high-performance inference server for deploying deep learning models. It supports multiple frameworks (e.g., TensorFlow, PyTorch, TensorRT), and provides model version management, model deployment, and efficient inference serving.

Source image: docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
China mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
Image ID: sha256:20d3e6634cd3bce981e653ef55cd27d3ea5fba26de1613eeb16c9915a09ae058
Image tag: 22.12-py3
Size: 14.00GB
Registry: docker.io
CMD: (none)
Entrypoint: /opt/nvidia/nvidia_entrypoint.sh
Working directory: /opt/tritonserver
OS/Arch: linux/amd64
Image created: 2022-12-16T20:54:33.340516329Z
Synced: 2025-06-04 06:32
Updated: 2025-06-06 02:08
Environment variables
PATH=/opt/tritonserver/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin
CUDA_VERSION=11.8.0.065
CUDA_DRIVER_VERSION=520.61.05
CUDA_CACHE_DISABLE=1
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
_CUDA_COMPAT_PATH=/usr/local/cuda/compat
ENV=/etc/shinit_v2
BASH_ENV=/etc/bash.bashrc
SHELL=/bin/bash
NVIDIA_REQUIRE_CUDA=cuda>=9.0
NCCL_VERSION=2.15.5
CUBLAS_VERSION=11.11.3.6
CUFFT_VERSION=10.9.0.58
CURAND_VERSION=10.3.0.86
CUSPARSE_VERSION=11.7.5.86
CUSOLVER_VERSION=11.4.1.48
CUTENSOR_VERSION=1.6.1.5
NPP_VERSION=11.8.0.86
NVJPEG_VERSION=11.9.0.86
CUDNN_VERSION=8.7.0.84
TRT_VERSION=8.5.1.7
TRTOSS_VERSION=22.12
NSIGHT_SYSTEMS_VERSION=2022.4.2.1
NSIGHT_COMPUTE_VERSION=2022.3.0.22
DALI_VERSION=1.20.0
DALI_BUILD=6562491
POLYGRAPHY_VERSION=0.43.1
TRANSFORMER_ENGINE_VERSION=0.3
LD_LIBRARY_PATH=/opt/tritonserver/backends/onnxruntime:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
NVIDIA_PRODUCT_NAME=Triton Server
GDRCOPY_VERSION=2.3
HPCX_VERSION=2.13
MOFED_VERSION=5.4-rdmacore36.0
OPENUCX_VERSION=1.14.0
OPENMPI_VERSION=4.1.4
RDMACORE_VERSION=36.0
OPAL_PREFIX=/opt/hpcx/ompi
OMPI_MCA_coll_hcoll_enable=0
LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
TRITON_SERVER_VERSION=2.29.0
NVIDIA_TRITON_SERVER_VERSION=22.12
TF_ADJUST_HUE_FUSED=1
TF_ADJUST_SATURATION_FUSED=1
TF_ENABLE_WINOGRAD_NONFUSED=1
TF_AUTOTUNE_THRESHOLD=2
TRITON_SERVER_GPU_ENABLED=1
TRITON_SERVER_USER=triton-server
DEBIAN_FRONTEND=noninteractive
TCMALLOC_RELEASE_RATE=200
DCGM_VERSION=2.2.9
NVIDIA_BUILD_ID=50109463
Image labels
com.amazonaws.sagemaker.capabilities.accept-bind-to-port: true
com.amazonaws.sagemaker.capabilities.multi-models: true
com.nvidia.build.id: 50109463
com.nvidia.build.ref: 1a651ccb23c8f4416b5540653b207154a531194d
com.nvidia.cublas.version: 11.11.3.6
com.nvidia.cuda.version: 9.0
com.nvidia.cudnn.version: 8.7.0.84
com.nvidia.cufft.version: 10.9.0.58
com.nvidia.curand.version: 10.3.0.86
com.nvidia.cusolver.version: 11.4.1.48
com.nvidia.cusparse.version: 11.7.5.86
com.nvidia.cutensor.version: 1.6.1.5
com.nvidia.nccl.version: 2.15.5
com.nvidia.npp.version: 11.8.0.86
com.nvidia.nsightcompute.version: 2022.3.0.22
com.nvidia.nsightsystems.version: 2022.4.2.1
com.nvidia.nvjpeg.version: 11.9.0.86
com.nvidia.tensorrt.version: 8.5.1.7
com.nvidia.tensorrtoss.version: 22.12
com.nvidia.tritonserver.version: 2.29.0
com.nvidia.volumes.needed: nvidia_driver

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3  docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
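The pull-and-retag pattern above follows one simple rule: the mirror reference is the upstream reference with the mirror prefix prepended. A minimal sketch of that naming rule as a shell function (`mirror_ref` is an illustrative helper name, not part of the mirror service):

```shell
# Prefix this mirror prepends; the upstream reference is kept verbatim after it.
MIRROR_PREFIX="swr.cn-north-4.myhuaweicloud.com/ddn-k8s"

# mirror_ref UPSTREAM_REF -> prints the mirror-side reference
mirror_ref() {
  printf '%s/%s\n' "$MIRROR_PREFIX" "$1"
}

mirror_ref "docker.io/nvcr.io/nvidia/tritonserver:22.12-py3"
# -> swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
```

The final `docker tag` back to the upstream name exists so that manifests referencing `nvcr.io/nvidia/tritonserver:22.12-py3` resolve locally without modification.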

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3  docker.io/nvcr.io/nvidia/tritonserver:22.12-py3

Shell quick-replace command

sed -i 's#nvcr.io/nvidia/tritonserver:22.12-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3#' deployment.yaml
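Before running the substitution against real manifests, it can be rehearsed on a throwaway file. A small sketch (the demo file path is illustrative; `sed -i` edits in place on GNU sed, while BSD/macOS sed needs `-i ''`):

```shell
# Write a throwaway manifest that references the upstream image.
cat > /tmp/deployment-demo.yaml <<'EOF'
spec:
  containers:
  - name: triton
    image: nvcr.io/nvidia/tritonserver:22.12-py3
EOF

# Same substitution as above, pointed at the demo file.
sed -i 's#nvcr.io/nvidia/tritonserver:22.12-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3#' /tmp/deployment-demo.yaml

# The image: line should now point at the mirror registry.
grep 'image:' /tmp/deployment-demo.yaml
```

Using `#` as the `s` delimiter avoids escaping the many `/` characters in the image references.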

Ansible bulk distribution - Docker

ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3 && docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3 docker.io/nvcr.io/nvidia/tritonserver:22.12-py3'

Ansible bulk distribution - Containerd

ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3 && ctr images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3 docker.io/nvcr.io/nvidia/tritonserver:22.12-py3'

Image build history


# 2022-12-17 04:54:33  5.89KB 
/bin/sh -c #(nop) COPY --chown=1000:1000 file:f8db1667a716992ff7b4e742f2894499ba7899797cee31102a29fb929995fa20 in /usr/bin/.
                        
# 2022-12-17 04:54:33  0.00B 
/bin/sh -c #(nop)  LABEL com.amazonaws.sagemaker.capabilities.multi-models=true
                        
# 2022-12-17 04:54:33  0.00B 
/bin/sh -c #(nop)  LABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true
                        
# 2022-12-17 04:54:32  3.01MB 
/bin/sh -c #(nop) COPY --chown=1000:1000 file:6d5d6be54a7e1bc76ff422ff2a96234d0a7877b5edacfdd48581e009ee550303 in .
                        
# 2022-12-17 04:54:32  0.00B 
/bin/sh -c #(nop) WORKDIR /opt/tritonserver
                        
# 2022-12-17 04:54:23  6.56GB 
/bin/sh -c #(nop) COPY --chown=1000:1000 dir:dad6dcf4935264b18cd7ab8382ec4f8b4e199bd7dff51301c43a5a92e2280862 in tritonserver
                        
# 2022-12-17 04:53:01  0.00B 
/bin/sh -c #(nop) WORKDIR /opt
                        
# 2022-12-17 04:53:01  0.00B 
/bin/sh -c #(nop)  LABEL com.nvidia.build.ref=1a651ccb23c8f4416b5540653b207154a531194d
                        
# 2022-12-17 04:53:01  0.00B 
/bin/sh -c #(nop)  LABEL com.nvidia.build.id=50109463
                        
# 2022-12-17 04:53:01  0.00B 
/bin/sh -c #(nop)  ENV NVIDIA_BUILD_ID=50109463
                        
# 2022-12-17 04:53:01  733.00B 
/bin/sh -c #(nop) COPY dir:f4289e7e4927571235f936b2a375c0e461a3f47ffdd68d1476481f574ab9d0f7 in /opt/nvidia/entrypoint.d/ 
                        
# 2022-12-17 04:53:01  0.00B 
/bin/sh -c #(nop)  ENV NVIDIA_PRODUCT_NAME=Triton Server
                        
# 2022-12-17 04:53:01  0.00B 
|2 TRITON_CONTAINER_VERSION=22.12 TRITON_VERSION=2.29.0 /bin/sh -c rm -fr /opt/tritonserver/*
                        
# 2022-12-17 04:53:00  0.00B 
/bin/sh -c #(nop) WORKDIR /opt/tritonserver
                        
# 2022-12-17 04:52:59  139.10MB 
|2 TRITON_CONTAINER_VERSION=22.12 TRITON_VERSION=2.29.0 /bin/sh -c apt-get update &&     apt-get install -y --no-install-recommends             python3 libarchive-dev             python3-pip             libpython3-dev &&     pip3 install --upgrade pip &&     pip3 install --upgrade wheel setuptools &&     pip3 install --upgrade numpy &&     rm -rf /var/lib/apt/lists/*
                        
# 2022-12-17 04:52:33  56.95KB 
|2 TRITON_CONTAINER_VERSION=22.12 TRITON_VERSION=2.29.0 /bin/sh -c ln -sf ${_CUDA_COMPAT_PATH}/lib.real ${_CUDA_COMPAT_PATH}/lib  && echo ${_CUDA_COMPAT_PATH}/lib > /etc/ld.so.conf.d/00-cuda-compat.conf  && ldconfig  && rm -f ${_CUDA_COMPAT_PATH}/lib
                        
# 2022-12-17 04:52:32  380.94MB 
|2 TRITON_CONTAINER_VERSION=22.12 TRITON_VERSION=2.29.0 /bin/sh -c curl -o /tmp/cuda-keyring.deb     https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb     && apt install /tmp/cuda-keyring.deb && rm /tmp/cuda-keyring.deb &&     apt-get update && apt-get install -y datacenter-gpu-manager=1:2.2.9
                        
# 2022-12-17 04:52:12  0.00B 
/bin/sh -c #(nop)  ENV DCGM_VERSION=2.2.9
                        
# 2022-12-17 04:52:12  0.00B 
/bin/sh -c #(nop)  ENV TCMALLOC_RELEASE_RATE=200
                        
# 2022-12-17 04:52:11  98.43MB 
|2 TRITON_CONTAINER_VERSION=22.12 TRITON_VERSION=2.29.0 /bin/sh -c apt-get update &&     apt-get install -y --no-install-recommends             software-properties-common             libb64-0d             libcurl4-openssl-dev             libre2-5             git             gperf             dirmngr             libgoogle-perftools-dev             libnuma-dev             curl             libgomp1 &&     rm -rf /var/lib/apt/lists/*
                        
# 2022-12-17 04:51:40  0.00B 
/bin/sh -c #(nop)  ENV DEBIAN_FRONTEND=noninteractive
                        
# 2022-12-17 04:51:40  329.08KB 
|2 TRITON_CONTAINER_VERSION=22.12 TRITON_VERSION=2.29.0 /bin/sh -c userdel tensorrt-server > /dev/null 2>&1 || true &&     if ! id -u $TRITON_SERVER_USER > /dev/null 2>&1 ; then         useradd $TRITON_SERVER_USER;     fi &&     [ `id -u $TRITON_SERVER_USER` -eq 1000 ] &&     [ `id -g $TRITON_SERVER_USER` -eq 1000 ]
                        
# 2022-12-17 04:51:40  0.00B 
/bin/sh -c #(nop)  ENV TRITON_SERVER_USER=triton-server
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV TRITON_SERVER_GPU_ENABLED=1
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV TF_AUTOTUNE_THRESHOLD=2
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV TF_ENABLE_WINOGRAD_NONFUSED=1
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV TF_ADJUST_SATURATION_FUSED=1
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV TF_ADJUST_HUE_FUSED=1
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV LD_LIBRARY_PATH=/opt/tritonserver/backends/onnxruntime:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV PATH=/opt/tritonserver/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  LABEL com.nvidia.tritonserver.version=2.29.0
                        
# 2022-12-17 04:51:39  0.00B 
/bin/sh -c #(nop)  ENV NVIDIA_TRITON_SERVER_VERSION=22.12
                        
# 2022-12-17 04:51:38  0.00B 
/bin/sh -c #(nop)  ENV TRITON_SERVER_VERSION=2.29.0
                        
# 2022-12-17 04:51:38  0.00B 
/bin/sh -c #(nop)  ARG TRITON_CONTAINER_VERSION
                        
# 2022-12-17 04:51:38  0.00B 
/bin/sh -c #(nop)  ARG TRITON_VERSION
                        
# 2022-12-15 06:49:06  0.00B Run a command and create a new image layer
RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.13 RDMACORE_VERSION=36.0 MOFED_VERSION=5.4-rdmacore36.0 OPENUCX_VERSION=1.14.0 OPENMPI_VERSION=4.1.4 TARGETARCH=amd64 /bin/sh -c if [[ "$CUDA_VERSION" == "11.2.1.007" && $(dpkg --print-architecture) == "amd64" ]]; then wget http://sqrl.nvidia.com/dldata/sgodithi/bug3254800/cicc >/dev/null 2>&1 && cp cicc /usr/local/cuda/nvvm/bin/. ; fi # buildkit
                        
# 2022-12-15 06:49:05  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
                        
# 2022-12-15 06:49:05  1.02GB Run a command and create a new image layer
RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.13 RDMACORE_VERSION=36.0 MOFED_VERSION=5.4-rdmacore36.0 OPENUCX_VERSION=1.14.0 OPENMPI_VERSION=4.1.4 TARGETARCH=amd64 /bin/sh -c export DEVEL=1 BASE=0  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUDA.sh  && /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installNCCL.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installCUTENSOR.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && if [ -f "/tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch" ]; then patch -p0 < /tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch; fi  && rm -f /tmp/cuda-*.patch # buildkit
                        
# 2022-12-15 06:43:30  1.49KB Copy new files or directories into the container
COPY cuda-*.patch /tmp # buildkit
                        
# 2022-12-15 06:43:30  0.00B Set environment variable OMPI_MCA_coll_hcoll_enable
ENV OMPI_MCA_coll_hcoll_enable=0
                        
# 2022-12-15 06:43:30  0.00B Set environment variables OPAL_PREFIX PATH
ENV OPAL_PREFIX=/opt/hpcx/ompi PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin
                        
# 2022-12-15 06:43:30  241.70MB Run a command and create a new image layer
RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.13 RDMACORE_VERSION=36.0 MOFED_VERSION=5.4-rdmacore36.0 OPENUCX_VERSION=1.14.0 OPENMPI_VERSION=4.1.4 TARGETARCH=amd64 /bin/sh -c cd /nvidia  && ( cd opt/rdma-core/                             && dpkg -i libibverbs1_*.deb                            libibverbs-dev_*.deb                         librdmacm1_*.deb                             librdmacm-dev_*.deb                          libibumad3_*.deb                             libibumad-dev_*.deb                          ibverbs-utils_*.deb                          ibverbs-providers_*.deb           && rm $(dpkg-query -L                                    libibverbs-dev                               librdmacm-dev                                libibumad-dev                            | grep "\(\.so\|\.a\)$")          )                                            && ( cd opt/gdrcopy/                              && dpkg -i libgdrapi_*.deb                   )                                         && ( cp -r opt/hpcx /opt/                                         && cp etc/ld.so.conf.d/hpcx.conf /etc/ld.so.conf.d/          && ln -sf /opt/hpcx/ompi /usr/local/mpi                      && ln -sf /opt/hpcx/ucx  /usr/local/ucx                      && sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf         && sed -i 's/^\(btl = self\)$/#\1/'                             /opt/hpcx/ompi/etc/openmpi-mca-params.conf       )                                                         && ldconfig # buildkit
                        
# 2022-12-15 06:43:30  0.00B Define build argument
ARG TARGETARCH

# 2022-12-15 06:43:30  0.00B Set environment variables GDRCOPY_VERSION HPCX_VERSION MOFED_VERSION OPENUCX_VERSION OPENMPI_VERSION RDMACORE_VERSION
ENV GDRCOPY_VERSION=2.3 HPCX_VERSION=2.13 MOFED_VERSION=5.4-rdmacore36.0 OPENUCX_VERSION=1.14.0 OPENMPI_VERSION=4.1.4 RDMACORE_VERSION=36.0

# 2022-12-15 06:43:30  0.00B Define build argument
ARG OPENMPI_VERSION

# 2022-12-15 06:43:30  0.00B Define build argument
ARG OPENUCX_VERSION

# 2022-12-15 06:43:30  0.00B Define build argument
ARG MOFED_VERSION=5.4-rdmacore36.0

# 2022-12-15 06:43:30  0.00B Define build argument
ARG RDMACORE_VERSION

# 2022-12-15 06:43:30  0.00B Define build argument
ARG HPCX_VERSION

# 2022-12-15 06:43:30  0.00B Define build argument
ARG GDRCOPY_VERSION
                        
# 2022-12-15 06:43:27  102.34MB Run a command and create a new image layer
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         build-essential         git         libglib2.0-0         less         libnl-route-3-200         libnl-3-dev         libnl-route-3-dev         libnuma-dev         libnuma1         libpmi2-0-dev         nano         numactl         openssh-client         vim         wget  && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2022-12-15 06:30:51  148.72KB Copy new files or directories into the container
COPY NVIDIA_Deep_Learning_Container_License.pdf /workspace/ # buildkit
                        
# 2022-12-15 06:30:51  0.00B Configure the command run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
                        
# 2022-12-15 06:30:51  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA
                        
# 2022-12-15 06:30:51  12.46KB Copy new files or directories into the container
COPY entrypoint/ /opt/nvidia/ # buildkit
                        
# 2022-12-15 06:30:50  0.00B Set environment variables PATH LD_LIBRARY_PATH NVIDIA_VISIBLE_DEVICES NVIDIA_DRIVER_CAPABILITIES
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
                        
# 2022-12-15 06:30:50  46.00B Run a command and create a new image layer
RUN |21 CUDA_VERSION=11.8.0.065 CUDA_DRIVER_VERSION=520.61.05 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.15.5 CUBLAS_VERSION=11.11.3.6 CUFFT_VERSION=10.9.0.58 CURAND_VERSION=10.3.0.86 CUSPARSE_VERSION=11.7.5.86 CUSOLVER_VERSION=11.4.1.48 CUTENSOR_VERSION=1.6.1.5 NPP_VERSION=11.8.0.86 NVJPEG_VERSION=11.9.0.86 CUDNN_VERSION=8.7.0.84 TRT_VERSION=8.5.1.7 TRTOSS_VERSION=22.12 NSIGHT_SYSTEMS_VERSION=2022.4.2.1 NSIGHT_COMPUTE_VERSION=2022.3.0.22 DALI_VERSION=1.20.0 DALI_BUILD=6562491 POLYGRAPHY_VERSION=0.43.1 TRANSFORMER_ENGINE_VERSION=0.3 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf  && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2022-12-15 06:30:50  13.39KB Copy files or directories into the container
ADD docs.tgz / # buildkit
                        
# 2022-12-15 06:30:50  0.00B Set environment variables DALI_VERSION DALI_BUILD POLYGRAPHY_VERSION TRANSFORMER_ENGINE_VERSION
ENV DALI_VERSION=1.20.0 DALI_BUILD=6562491 POLYGRAPHY_VERSION=0.43.1 TRANSFORMER_ENGINE_VERSION=0.3
                        
# 2022-12-15 06:30:50  0.00B Define build argument
ARG TRANSFORMER_ENGINE_VERSION

# 2022-12-15 06:30:50  0.00B Define build argument
ARG POLYGRAPHY_VERSION

# 2022-12-15 06:30:50  0.00B Define build argument
ARG DALI_BUILD

# 2022-12-15 06:30:50  0.00B Define build argument
ARG DALI_VERSION
                        
# 2022-12-15 06:30:50  0.00B Add metadata labels
LABEL com.nvidia.nccl.version=2.15.5 com.nvidia.cublas.version=11.11.3.6 com.nvidia.cufft.version=10.9.0.58 com.nvidia.curand.version=10.3.0.86 com.nvidia.cusparse.version=11.7.5.86 com.nvidia.cusolver.version=11.4.1.48 com.nvidia.cutensor.version=1.6.1.5 com.nvidia.npp.version=11.8.0.86 com.nvidia.nvjpeg.version=11.9.0.86 com.nvidia.cudnn.version=8.7.0.84 com.nvidia.tensorrt.version=8.5.1.7 com.nvidia.tensorrtoss.version=22.12 com.nvidia.nsightsystems.version=2022.4.2.1 com.nvidia.nsightcompute.version=2022.3.0.22
                        
# 2022-12-15 06:30:50  4.68GB Run a command and create a new image layer
RUN |17 CUDA_VERSION=11.8.0.065 CUDA_DRIVER_VERSION=520.61.05 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.15.5 CUBLAS_VERSION=11.11.3.6 CUFFT_VERSION=10.9.0.58 CURAND_VERSION=10.3.0.86 CUSPARSE_VERSION=11.7.5.86 CUSOLVER_VERSION=11.4.1.48 CUTENSOR_VERSION=1.6.1.5 NPP_VERSION=11.8.0.86 NVJPEG_VERSION=11.9.0.86 CUDNN_VERSION=8.7.0.84 TRT_VERSION=8.5.1.7 TRTOSS_VERSION=22.12 NSIGHT_SYSTEMS_VERSION=2022.4.2.1 NSIGHT_COMPUTE_VERSION=2022.3.0.22 /bin/sh -c /nvidia/build-scripts/installNCCL.sh  && /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUTENSOR.sh # buildkit
                        
# 2022-12-15 06:27:34  0.00B Set environment variables NCCL_VERSION CUBLAS_VERSION CUFFT_VERSION CURAND_VERSION CUSPARSE_VERSION CUSOLVER_VERSION CUTENSOR_VERSION NPP_VERSION NVJPEG_VERSION CUDNN_VERSION TRT_VERSION TRTOSS_VERSION NSIGHT_SYSTEMS_VERSION NSIGHT_COMPUTE_VERSION
ENV NCCL_VERSION=2.15.5 CUBLAS_VERSION=11.11.3.6 CUFFT_VERSION=10.9.0.58 CURAND_VERSION=10.3.0.86 CUSPARSE_VERSION=11.7.5.86 CUSOLVER_VERSION=11.4.1.48 CUTENSOR_VERSION=1.6.1.5 NPP_VERSION=11.8.0.86 NVJPEG_VERSION=11.9.0.86 CUDNN_VERSION=8.7.0.84 TRT_VERSION=8.5.1.7 TRTOSS_VERSION=22.12 NSIGHT_SYSTEMS_VERSION=2022.4.2.1 NSIGHT_COMPUTE_VERSION=2022.3.0.22
                        
# 2022-12-15 06:27:34  0.00B Define build argument
ARG NSIGHT_COMPUTE_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG NSIGHT_SYSTEMS_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG TRTOSS_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG TRT_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUDNN_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG NVJPEG_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG NPP_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUTENSOR_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUSOLVER_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUSPARSE_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CURAND_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUFFT_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUBLAS_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG NCCL_VERSION
                        
# 2022-12-15 06:27:34  0.00B Add metadata labels
LABEL com.nvidia.volumes.needed=nvidia_driver com.nvidia.cuda.version=9.0
                        
# 2022-12-15 06:27:34  0.00B Set environment variables _CUDA_COMPAT_PATH ENV BASH_ENV SHELL NVIDIA_REQUIRE_CUDA
ENV _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0
                        
# 2022-12-15 06:27:34  656.37KB Run a command and create a new image layer
RUN |3 CUDA_VERSION=11.8.0.065 CUDA_DRIVER_VERSION=520.61.05 JETPACK_HOST_MOUNTS= /bin/sh -c cp -vprd /nvidia/. /  &&  patch -p0 < /etc/startup_scripts.patch  &&  rm -f /etc/startup_scripts.patch # buildkit
                        
# 2022-12-15 06:27:34  396.52MB Run a command and create a new image layer
RUN |3 CUDA_VERSION=11.8.0.065 CUDA_DRIVER_VERSION=520.61.05 JETPACK_HOST_MOUNTS= /bin/sh -c /nvidia/build-scripts/installCUDA.sh # buildkit
                        
# 2022-12-15 06:27:34  0.00B Set environment variables CUDA_VERSION CUDA_DRIVER_VERSION CUDA_CACHE_DISABLE NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS
ENV CUDA_VERSION=11.8.0.065 CUDA_DRIVER_VERSION=520.61.05 CUDA_CACHE_DISABLE=1 NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
                        
# 2022-12-15 06:27:34  0.00B Define build argument
ARG JETPACK_HOST_MOUNTS

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUDA_DRIVER_VERSION

# 2022-12-15 06:27:34  0.00B Define build argument
ARG CUDA_VERSION
                        
# 2022-12-13 05:08:57  301.56MB Run a command and create a new image layer
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         apt-utils         build-essential         ca-certificates         curl         libncurses5         libncursesw5         patch         wget         rsync         unzip         jq         gnupg         libtcmalloc-minimal4 # buildkit
                        
# 2022-12-09 09:20:21  0.00B 
/bin/sh -c #(nop)  CMD ["bash"]
                        
# 2022-12-09 09:20:21  72.79MB 
/bin/sh -c #(nop) ADD file:9d282119af0c42bc823c95b4192a3350cf2cad670622017356dd2e637762e425 in / 
                        
                    

Image information

{
    "Id": "sha256:20d3e6634cd3bce981e653ef55cd27d3ea5fba26de1613eeb16c9915a09ae058",
    "RepoTags": [
        "nvcr.io/nvidia/tritonserver:22.12-py3",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver:22.12-py3"
    ],
    "RepoDigests": [
        "nvcr.io/nvidia/tritonserver@sha256:306b5b7fbf244a708c4bb2380ec127561912704a2bcc463e11348fa1300afa8e",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/tritonserver@sha256:4cb4cfd031d5613d87543f0802026cd8708d031a1f49055550fe8acfc98a429f"
    ],
    "Parent": "",
    "Comment": "",
    "Created": "2022-12-16T20:54:33.340516329Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/opt/tritonserver/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin",
            "CUDA_VERSION=11.8.0.065",
            "CUDA_DRIVER_VERSION=520.61.05",
            "CUDA_CACHE_DISABLE=1",
            "NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=",
            "_CUDA_COMPAT_PATH=/usr/local/cuda/compat",
            "ENV=/etc/shinit_v2",
            "BASH_ENV=/etc/bash.bashrc",
            "SHELL=/bin/bash",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=9.0",
            "NCCL_VERSION=2.15.5",
            "CUBLAS_VERSION=11.11.3.6",
            "CUFFT_VERSION=10.9.0.58",
            "CURAND_VERSION=10.3.0.86",
            "CUSPARSE_VERSION=11.7.5.86",
            "CUSOLVER_VERSION=11.4.1.48",
            "CUTENSOR_VERSION=1.6.1.5",
            "NPP_VERSION=11.8.0.86",
            "NVJPEG_VERSION=11.9.0.86",
            "CUDNN_VERSION=8.7.0.84",
            "TRT_VERSION=8.5.1.7",
            "TRTOSS_VERSION=22.12",
            "NSIGHT_SYSTEMS_VERSION=2022.4.2.1",
            "NSIGHT_COMPUTE_VERSION=2022.3.0.22",
            "DALI_VERSION=1.20.0",
            "DALI_BUILD=6562491",
            "POLYGRAPHY_VERSION=0.43.1",
            "TRANSFORMER_ENGINE_VERSION=0.3",
            "LD_LIBRARY_PATH=/opt/tritonserver/backends/onnxruntime:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility,video",
            "NVIDIA_PRODUCT_NAME=Triton Server",
            "GDRCOPY_VERSION=2.3",
            "HPCX_VERSION=2.13",
            "MOFED_VERSION=5.4-rdmacore36.0",
            "OPENUCX_VERSION=1.14.0",
            "OPENMPI_VERSION=4.1.4",
            "RDMACORE_VERSION=36.0",
            "OPAL_PREFIX=/opt/hpcx/ompi",
            "OMPI_MCA_coll_hcoll_enable=0",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs:",
            "TRITON_SERVER_VERSION=2.29.0",
            "NVIDIA_TRITON_SERVER_VERSION=22.12",
            "TF_ADJUST_HUE_FUSED=1",
            "TF_ADJUST_SATURATION_FUSED=1",
            "TF_ENABLE_WINOGRAD_NONFUSED=1",
            "TF_AUTOTUNE_THRESHOLD=2",
            "TRITON_SERVER_GPU_ENABLED=1",
            "TRITON_SERVER_USER=triton-server",
            "DEBIAN_FRONTEND=noninteractive",
            "TCMALLOC_RELEASE_RATE=200",
            "DCGM_VERSION=2.2.9",
            "NVIDIA_BUILD_ID=50109463"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/opt/tritonserver",
        "Entrypoint": [
            "/opt/nvidia/nvidia_entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "com.amazonaws.sagemaker.capabilities.accept-bind-to-port": "true",
            "com.amazonaws.sagemaker.capabilities.multi-models": "true",
            "com.nvidia.build.id": "50109463",
            "com.nvidia.build.ref": "1a651ccb23c8f4416b5540653b207154a531194d",
            "com.nvidia.cublas.version": "11.11.3.6",
            "com.nvidia.cuda.version": "9.0",
            "com.nvidia.cudnn.version": "8.7.0.84",
            "com.nvidia.cufft.version": "10.9.0.58",
            "com.nvidia.curand.version": "10.3.0.86",
            "com.nvidia.cusolver.version": "11.4.1.48",
            "com.nvidia.cusparse.version": "11.7.5.86",
            "com.nvidia.cutensor.version": "1.6.1.5",
            "com.nvidia.nccl.version": "2.15.5",
            "com.nvidia.npp.version": "11.8.0.86",
            "com.nvidia.nsightcompute.version": "2022.3.0.22",
            "com.nvidia.nsightsystems.version": "2022.4.2.1",
            "com.nvidia.nvjpeg.version": "11.9.0.86",
            "com.nvidia.tensorrt.version": "8.5.1.7",
            "com.nvidia.tensorrtoss.version": "22.12",
            "com.nvidia.tritonserver.version": "2.29.0",
            "com.nvidia.volumes.needed": "nvidia_driver"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 14001134438,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/dff91170a4c4daf4e307d05797d4fcb055068e562e572128abb2e1580c5dc525/diff:/var/lib/docker/overlay2/a86528272bbc7a2e676adfe771348438dffa3876b03f1859acbbd17b4a887702/diff:/var/lib/docker/overlay2/1d447bafc9860e96abecbccf5fc8319a8d295ab74d6a821b164befa2607de31c/diff:/var/lib/docker/overlay2/de293563c4da6e22fa43abee135b0a6bd13064e64267de2cb5ef8a5d6e42d87e/diff:/var/lib/docker/overlay2/bf10a0cb8408c957c5fe709f5b7f6171722169a0e919262c261a0da7a3fde0a1/diff:/var/lib/docker/overlay2/6e90594a6d951097c2dfd5e78de70fdc5aa60e3552aac920db1a627fb32b6dc6/diff:/var/lib/docker/overlay2/ca78e47b805013f42bab2785f51685bb5115d167dc73d9569582a75f47b99013/diff:/var/lib/docker/overlay2/9c1a84ea523a13a5d18aa331498c021b149b61b792ef3552c0a3260fd12103f7/diff:/var/lib/docker/overlay2/cbf72a4f9809717880d616fa251ef197b752b48925a645ba360b709dc966739c/diff:/var/lib/docker/overlay2/f46f8540de9e3f6471452cc368eafed0236489bb8f8965fc0a87d094b9ec3d1d/diff:/var/lib/docker/overlay2/80784190ca909cae81473ce24d50b710063deba8133e638ad7185169729ea1ba/diff:/var/lib/docker/overlay2/e7beb5a848f89b234b866cc1105a5f5bb928689b02571ae8e91e0d71e12381cd/diff:/var/lib/docker/overlay2/ec9c389ce04f7773ff5c1c29ac82e33a6b69b1f56443896538e954831c8634cf/diff:/var/lib/docker/overlay2/51d1cf6424f396d219eb9a72e6596c877d91caea96243b417d3a213b6a6ddc00/diff:/var/lib/docker/overlay2/74bc2e8d56c2fcb16b58d7a581f34ac24bd06dcb001880b3cfb55c877e8c2ecf/diff:/var/lib/docker/overlay2/9493a8b811115ad2a7da2613395ada61ef80a591762be2228f9930aac5dc6764/diff:/var/lib/docker/overlay2/374eb1374a540d4e123cca2aa11939890b346541869907c3aeff7dcbbf226080/diff:/var/lib/docker/overlay2/9679016ca55db53687bb8ef5f48026b2c55e2f1078ad8ded6aff4a00b4383363/diff:/var/lib/docker/overlay2/80ed0cd786e069e60e943ef3f20ffdfcd14e86555fc4844ca568e12dd210a193/diff:/var/lib/docker/overlay2/d96135cd7e109224886b3c8fab7f7803a327f431203214a98c905c2087e52dbf/diff:/var/lib/docker/overlay2/6bef7ba56e91efc12ac5aa7c32f07c7fb59887931c5501bfd3
e7aa2b82474875/diff:/var/lib/docker/overlay2/e735691e90f2ee94805461cad8642711978d5167903546d435bebb4be05d2f90/diff:/var/lib/docker/overlay2/4403a598da262722aaf72507f2b2f036e42f3b389a04ef6efe27dba9afc91128/diff",
            "MergedDir": "/var/lib/docker/overlay2/1c661dc21e568301da5cc896219980189b902f388b465d18120a256297e4c6ea/merged",
            "UpperDir": "/var/lib/docker/overlay2/1c661dc21e568301da5cc896219980189b902f388b465d18120a256297e4c6ea/diff",
            "WorkDir": "/var/lib/docker/overlay2/1c661dc21e568301da5cc896219980189b902f388b465d18120a256297e4c6ea/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:0002c93bdb3704dd9e36ce5153ef637f84de253015f3ee330468dccdeacad60b",
            "sha256:f1133182f9176f808836a1da986d49a743ffef123c3e043665c969641d476a0e",
            "sha256:5315510b97a53b3798577a5d3ab016711a172e6592dcb62eb9a379334ff8137e",
            "sha256:1aa2a552b9cc22f74b5bd4f845d814cf8f98179353c589c9524f5fd14066893f",
            "sha256:60e5365b20887bea5ff0d4943c1e7d3cabc5fc88895460d747f9715bb89374c3",
            "sha256:99d243a0c0a688b2d656423b7ee3534278d8b24b13e9ae07b6c69dc3d0d061ae",
            "sha256:c0df662a799d1c11c3db86f201628479dbd362e85286be0a17af3e0d4318a62d",
            "sha256:0aded1f539825c359320b67eeed512e7b63f43284f4272969918ef4cbf335e64",
            "sha256:eb8680c17196ae075b20a67d17900bceff407d2363ff0fec238a63040868fbed",
            "sha256:20c3eb845057dde8d4428f0d7a5d8b07d5896b5b881f43442d20c76cc3d962e1",
            "sha256:c9bece9394f22680edac326b1716e23c2ef1c2b57b68bf294643b6b6f6ab9f86",
            "sha256:e09071e1a97b0c1a459e8a831c4ec0e7aba4b220ecd3e39af0e378e27c86538a",
            "sha256:5214ca09c8e0e4c898f0a9ac9156f3e080d5e81bf165bd89c7ea35c8bd1ffb97",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:d4a5665c1e9ac751139dce36416b0be2f8845da1a2dc44825c298b8359880969",
            "sha256:684ac84ee1cf6767b2b7cf2faed4107305f24524f7601deea59848def2850cf1",
            "sha256:16dd7ede9ae17377d233e4c20530e9cf8ff00552f54a1cc1c47f620e606dee03",
            "sha256:db382f914e7ae085880c475b1b8096ec632625176b10160a86e736344330804a",
            "sha256:8396974084f88922a2b97e510867ec1d0f6bef879e46c7110d73accd58cd1335",
            "sha256:6707b1f5efb6a4211dc37189433dafe72a6d072f0c5cecc55a8be3991b9f93da",
            "sha256:c92500931d4fa67998ffbe45258132020563b101b5e5abd9ed142a6e9b530a43",
            "sha256:6a648379ea314f7fde518a1255f1e20e6cb7ab6842d62c42723fc393a8608f5c",
            "sha256:c87e24ccb80cf8c38402c2854e5f79c69da62989be260ea6e16ed4059e623831",
            "sha256:ec19dd273f9384df98219b520ea7e2dfdd9d477aa21074fd0dce9f6e8f8def28"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-06-04T06:25:56.894766173+08:00"
    }
}

More versions

docker.io/nvcr.io/nvidia/tritonserver:24.11-trtllm-python-py3
linux/amd64 | docker.io | 24.86GB | 2025-02-26 02:25

docker.io/nvcr.io/nvidia/tritonserver:25.04-py3
linux/amd64 | docker.io | 19.59GB | 2025-05-22 00:46

docker.io/nvcr.io/nvidia/tritonserver:25.02-trtllm-python-py3
linux/amd64 | docker.io | 30.15GB | 2025-05-25 02:51

docker.io/nvcr.io/nvidia/tritonserver:22.12-py3
linux/amd64 | docker.io | 14.00GB | 2025-06-04 06:32