docker.io/nvcr.io/nvidia/pytorch:25.12-py3 linux/amd64

docker.io/nvcr.io/nvidia/pytorch:25.12-py3 - China mirror download source (views: 21)
Image description: NVIDIA PyTorch Docker Image

This is a Docker container image built around the PyTorch framework that provides a complete deep-learning environment. The image ships PyTorch 2.10.0a0 (NVIDIA build 25.12) together with the required dependencies such as CUDA and cuDNN.

With this image you can easily set up a deep-learning workstation locally and run experiments and development for machine-learning and computer-vision tasks.

The image also supports GPU acceleration: through NVIDIA's CUDA and cuDNN stack it can significantly improve PyTorch performance and efficiency.
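For a quick sanity check that GPU acceleration works (this assumes the host has an NVIDIA driver plus the NVIDIA Container Toolkit installed, and that the image has been pulled and retagged as described below), you can run a one-liner inside the container:

docker run --rm --gpus all docker.io/nvcr.io/nvidia/pytorch:25.12-py3 python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If everything is wired up correctly, this prints the PyTorch version (2.10.0a0+b4e4ee8 in this build) followed by True.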

Source image: docker.io/nvcr.io/nvidia/pytorch:25.12-py3
China mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3
Image ID: sha256:dd94fce2f83a180e2271f02f69713de5360aaf916b01e1f7f54173519fd54efd
Image tag: 25.12-py3
Size: 20.39GB
Source registry: docker.io
Project info: Docker Hub page / project tags
CMD:
Entrypoint: /opt/nvidia/nvidia_entrypoint.sh
Working directory: /workspace
OS/platform: linux/amd64
Views: 21
Contributor: kc**e@baai.ac.cn
Image created: 2025-12-17T09:04:58.04449515Z
Synced at: 2026-01-28 01:35
Updated at: 2026-01-28 17:22
Exposed ports
6006/tcp 8888/tcp
Environment variables
PATH=/usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin:/opt/tensorrt/bin NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS= GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 RDMACORE_VERSION=56.0 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 OPAL_PREFIX=/opt/hpcx/ompi OMPI_MCA_coll_hcoll_enable=0 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0 NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSPARSELT_VERSION=0.8.1.1 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0 LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video NVIDIA_PRODUCT_NAME=PyTorch CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc LIBRARY_PATH=/usr/local/cuda/lib64/stubs:/usr/local/cuda/lib64/stubs: PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 PYTORCH_VERSION=2.10.0a0+b4e4ee8 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=25.12 NVFUSER_BUILD_VERSION=073e91b NVFUSER_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 PIP_BREAK_SYSTEM_PACKAGES=1 PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python PIP_CONSTRAINT=/etc/pip/constraint.txt NVPL_LAPACK_MATH_MODE=PEDANTIC PYTHONIOENCODING=utf-8 LC_ALL=C.UTF-8 PIP_DEFAULT_TIMEOUT=100 JUPYTER_PORT=8888 TENSORBOARD_PORT=6006 UCC_CL_BASIC_TLS=^sharp UCC_EC_CUDA_EXEC_NUM_THREADS=256 TORCH_CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0+PTX PYTORCH_HOME=/opt/pytorch/pytorch CUDA_HOME=/usr/local/cuda TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1 TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas TRITON_CUOBJDUMP_PATH=/usr/local/cuda/bin/cuobjdump TRITON_NVDISASM_PATH=/usr/local/cuda/bin/nvdisasm TRITON_CUDACRT_PATH=/usr/local/cuda/include TRITON_CUDART_PATH=/usr/local/cuda/include TRITON_CUPTI_LIB_PATH=/usr/local/cuda/lib64 TRITON_CUPTI_INCLUDE_PATH=/usr/local/cuda/include COCOAPI_VERSION=2.0+nv0.8.1 CUDA_BINARY_LOADER_THREAD_COUNT=8 CUDA_MODULE_LOADING=LAZY TORCH_NCCL_USE_COMM_NONBLOCKING=0 TORCHINDUCTOR_LOOP_ORDERING_AFTER_FUSION=0 NVIDIA_BUILD_ID=245654590
Image labels
com.nvidia.build.id=245654590
com.nvidia.build.ref=0214a1e1471678f0c2928e94807c940c6961ca2c
com.nvidia.cublas.version=13.2.0.9
com.nvidia.cublasmp.version=0.7.0.125
com.nvidia.cuda.version=9.0
com.nvidia.cudla.version=13.1.0.036
com.nvidia.cudnn.version=9.17.0.29
com.nvidia.cufft.version=12.1.0.31
com.nvidia.curand.version=10.4.1.34
com.nvidia.cusolver.version=12.0.7.41
com.nvidia.cusparse.version=12.7.2.19
com.nvidia.cusparselt.version=0.8.1.1
com.nvidia.nccl.version=2.28.9+cuda13.0
com.nvidia.npp.version=13.0.2.21
com.nvidia.nsightcompute.version=2025.4.0.12
com.nvidia.nsightsystems.version=2025.5.2.266
com.nvidia.nvjpeg.version=13.0.2.28
com.nvidia.nvvm.version=13.1.80
com.nvidia.pytorch.version=2.10.0a0+b4e4ee8
com.nvidia.tensorrt.version=10.14.1.48+cuda13.0
com.nvidia.tensorrtoss.version=
com.nvidia.volumes.needed=nvidia_driver
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=24.04
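These values can be double-checked locally once the mirror copy has been pulled; a sketch using docker image inspect (the Go-template field names below are standard Docker image-config fields):

docker image inspect swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 --format '{{.Config.Entrypoint}} {{.Config.WorkingDir}} {{.Config.ExposedPorts}}'
docker image inspect swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 --format '{{json .Config.Labels}}'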

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3  docker.io/nvcr.io/nvidia/pytorch:25.12-py3
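After the tag command the image is addressable by its original NGC name, so a container can be started the usual way. A minimal sketch (the -p mappings correspond to the Jupyter/TensorBoard ports exposed above; --gpus all requires the NVIDIA Container Toolkit on the host, and mounting the current directory into /workspace is optional):

docker run --gpus all -it --rm -p 8888:8888 -p 6006:6006 -v $(pwd):/workspace docker.io/nvcr.io/nvidia/pytorch:25.12-py3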

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3  docker.io/nvcr.io/nvidia/pytorch:25.12-py3
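Note that ctr operates on containerd's default namespace unless told otherwise, while Kubernetes keeps node images in the k8s.io namespace; if the image is meant for kubelet, add -n k8s.io to both commands:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 docker.io/nvcr.io/nvidia/pytorch:25.12-py3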

Quick shell replacement command

sed -i 's#nvcr.io/nvidia/pytorch:25.12-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3#' deployment.yaml
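The command rewrites the image reference in place: in a hypothetical deployment.yaml it turns "image: nvcr.io/nvidia/pytorch:25.12-py3" into "image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3". To patch every manifest under a directory in one pass (assuming GNU sed), a variant is:

find . -name '*.yaml' -exec sed -i 's#nvcr.io/nvidia/pytorch:25.12-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3#' {} +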

Quick Ansible distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3  docker.io/nvcr.io/nvidia/pytorch:25.12-py3'
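The same ad-hoc command can be kept in a small playbook for repeatable rollouts; a sketch assuming the same k8s inventory group and passwordless sudo on the nodes:

- hosts: k8s
  become: true
  tasks:
    - name: Pull PyTorch 25.12-py3 from the China mirror and retag it
      ansible.builtin.shell: |
        docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3
        docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 docker.io/nvcr.io/nvidia/pytorch:25.12-py3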

Quick Ansible distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3  docker.io/nvcr.io/nvidia/pytorch:25.12-py3'
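On Kubernetes nodes the k8s.io namespace caveat from the Containerd section applies here as well; an assumed variant of the ad-hoc command would be:

#ansible k8s -m shell -a 'ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 && ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 docker.io/nvcr.io/nvidia/pytorch:25.12-py3'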

Image build history
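The layers below are listed newest first, with the creation time, layer size, and the Dockerfile instruction that produced each layer. A comparable listing can be reproduced locally, once the image has been pulled, with:

docker history --no-trunc docker.io/nvcr.io/nvidia/pytorch:25.12-py3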


# 2025-12-17 17:04:58  0.00B Add metadata label
LABEL com.nvidia.build.ref=0214a1e1471678f0c2928e94807c940c6961ca2c
                        
# 2025-12-17 17:04:58  0.00B Define build argument
ARG NVIDIA_BUILD_REF=0214a1e1471678f0c2928e94807c940c6961ca2c
                        
# 2025-12-17 17:04:58  0.00B Add metadata label
LABEL com.nvidia.build.id=245654590
                        
# 2025-12-17 17:04:58  0.00B Set environment variable NVIDIA_BUILD_ID
ENV NVIDIA_BUILD_ID=245654590
                        
# 2025-12-17 17:04:58  0.00B Define build argument
ARG NVIDIA_BUILD_ID=245654590
                        
# 2025-12-17 17:04:58  0.00B Run command and create a new image layer
RUN |10 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali ENABLE_FIPS=0 /bin/sh -c if [ "${ENABLE_FIPS}" = "1" ] && [ "${TARGETARCH}" = "amd64" ]; then         echo "Running fips-fix.sh";         /tmp/fips-fix.sh;     fi # buildkit
                        
# 2025-12-17 17:04:57  0.00B Define build argument
ARG TARGETARCH=amd64
                        
# 2025-12-17 17:04:57  0.00B Define build argument
ARG ENABLE_FIPS=0
                        
# 2025-12-17 17:04:57  719.00B Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit
                        
# 2025-12-17 17:04:57  273.00B Copy new files or directories into the container
COPY constraint.txt /etc/pip/original_constraint.txt # buildkit
                        
# 2025-12-17 17:04:57  0.00B Copy new files or directories into the container
COPY restricted_constraint.txt /etc/pip/constraint.txt # buildkit
                        
# 2025-12-17 17:04:57  71.55KB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c ln -sf ${_CUDA_COMPAT_PATH}/lib.real ${_CUDA_COMPAT_PATH}/lib  && echo ${_CUDA_COMPAT_PATH}/lib > /etc/ld.so.conf.d/00-cuda-compat.conf  && ldconfig  && rm -f ${_CUDA_COMPAT_PATH}/lib # buildkit
                        
# 2025-12-17 17:04:57  0.00B Set environment variable TORCHINDUCTOR_LOOP_ORDERING_AFTER_FUSION
ENV TORCHINDUCTOR_LOOP_ORDERING_AFTER_FUSION=0
                        
# 2025-12-17 17:04:57  0.00B Set environment variable TORCH_NCCL_USE_COMM_NONBLOCKING
ENV TORCH_NCCL_USE_COMM_NONBLOCKING=0
                        
# 2025-12-17 17:04:57  0.00B Set environment variable CUDA_MODULE_LOADING
ENV CUDA_MODULE_LOADING=LAZY
                        
# 2025-12-17 17:04:57  0.00B Set environment variable CUDA_BINARY_LOADER_THREAD_COUNT
ENV CUDA_BINARY_LOADER_THREAD_COUNT=8
                        
# 2025-12-17 17:04:57  214.24KB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c patch -d /usr/local/lib/python3.12/dist-packages/numba/cuda/cudadrv/ -p1 < /tmp/numba-cuda-patch.txt # buildkit
                        
# 2025-12-17 17:04:57  0.00B Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c /tmp/manage_cert.sh uninstall # buildkit
                        
# 2025-12-17 17:04:57  695.14MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing Transformer Engine in iGPU container until Version variable is set"; else     git clone -b release_v${TRANSFORMER_ENGINE_VERSION} --single-branch --recursive https://github.com/NVIDIA/TransformerEngine.git &&     _NVTE_CUDA_ARCHS=$(echo "${CUDA_ARCH_LIST}" | tr -d '.') &&     NVTE_CUDA_ARCHS=$(echo "${_NVTE_CUDA_ARCHS}" | tr ' ' ';') &&     env NVTE_CUDA_ARCHS="${NVTE_CUDA_ARCHS};89;100a;103a" NVTE_BUILD_THREADS_PER_JOB=8 pip install --no-cache-dir --no-build-isolation ./TransformerEngine &&     rm -rf TransformerEngine; fi # buildkit
                        
# 2025-12-17 17:00:14  1.76GB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing Flash Attention wheel in iGPU as it is a requirement for Transformer Engine"; else     pip install --no-cache-dir /opt/transfer/flash_attn*.whl; fi # buildkit
                        
# 2025-12-17 17:00:08  47.58MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c pip install --no-cache-dir /opt/pytorch/torch_tensorrt/dist/*.whl # buildkit
                        
# 2025-12-17 17:00:07  723.78MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c pip install --no-cache-dir /opt/pytorch/apex/dist/*.whl # buildkit
                        
# 2025-12-17 16:42:31  0.00B Set environment variable PATH
ENV PATH=/usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin:/opt/tensorrt/bin
                        
# 2025-12-17 16:42:31  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
                        
# 2025-12-17 16:42:31  0.00B Define build argument
ARG PYVER=3.12
                        
# 2025-12-17 16:42:31  163.56MB Copy new files or directories into the container
COPY torch_tensorrt/ /opt/pytorch/torch_tensorrt/ # buildkit
                        
# 2025-12-17 16:42:29  62.78MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c pip --version && python -c 'import sys; print(sys.platform)'     && pip install --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/sw-tensorrt-pypi/simple --no-cache-dir "polygraphy==${POLYGRAPHY_VERSION}"     && pip install  --index-url https://gitlab-master.nvidia.com/api/v4/projects/omniml%2Fmodelopt/packages/pypi/simple --extra-index-url https://pypi.nvidia.com "nvidia-modelopt[torch]==${MODEL_OPT_VERSION}"     && pip install nvidia-resiliency-ext==${NVRX_VERSION} --index-url https://gitlab-master.nvidia.com/api/v4/projects/dl%2Fosiris%2Fnvidia-resiliency-ext-ci/packages/pypi/simple     && pip uninstall -y pynvml # buildkit
                        
# 2025-12-17 16:42:21  0.00B Set environment variable PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin:/opt/tensorrt/bin
                        
# 2025-12-17 16:42:21  6.33MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c set -x     && WHEELS=1 /nvidia/build-scripts/installTRT.sh # buildkit
                        
# 2025-12-17 16:41:29  34.90MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c chmod -R a+w+t . # buildkit
                        
# 2025-12-17 16:41:29  34.89MB Copy new files or directories into the container
COPY tutorials tutorials # buildkit
                        
# 2025-12-17 16:41:29  2.07KB Copy new files or directories into the container
COPY docker-examples docker-examples # buildkit
                        
# 2025-12-17 16:41:29  2.05KB Copy new files or directories into the container
COPY NVREADME.md README.md # buildkit
                        
# 2025-12-17 16:41:29  0.00B Set the working directory to /workspace
WORKDIR /workspace
                        
# 2025-12-17 16:41:29  224.04KB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c pip install tabulate # buildkit
                        
# 2025-12-17 16:41:28  621.20MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c ( if [ "${L4T}" = "1" ]; then echo "Not installing nvshmem in iGPU container";       else /nvidia/build-scripts/installCUBLASMP.sh; fi )  && ( cd fuser && pip install -r requirements.txt && pip install -v ./python --no-build-isolation && rm -rf ./python/build && rm -rf ./bin)  && ( cd lightning-thunder && pip install --no-build-isolation . && rm -rf build *.egg-info)  && ( cd lightning-thunder && mkdir tmp && cd tmp && git clone -b v${CUDNN_FRONTEND_VERSION} --recursive --single-branch https://github.com/NVIDIA/cudnn-frontend.git cudnn_frontend && cd cudnn_frontend && pip install --no-build-isolation . && cd ../../ && rm -rf tmp )  && ( cd pytorch/third_party/onnx && pip uninstall typing -y && CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" pip install --no-build-isolation . )  && ( if [ "${L4T}" = "1" ]; then echo "Not installing torchao in iGPU container"; else cd ao && TORCH_CUDA_ARCH_LIST="9.0a 10.0a ${TORCH_CUDA_ARCH_LIST}" VERSION_SUFFIX="${TORCHAO_BUILD_VERSION}" pip install --no-build-isolation . && python setup.py clean --all ; fi )  && ( if [ "${L4T}" = "1" ]; then echo "Not installing torchtitan in iGPU container"; else cd torchtitan && pip install --no-build-isolation . && rm -rf build/ ; fi ) # buildkit
                        
# 2025-12-17 16:28:39  31.22MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c ( if [ "${L4T}" = "1" ]; then echo "Not installing nvshmem in iGPU container";       else cp /tmp/transfer/libnvshmem_device.bc /usr/local/cuda/lib64/libnvshmem_device.bc; fi ) # buildkit
                        
# 2025-12-17 16:28:38  0.00B Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-17 16:28:38  2.21KB Copy new files or directories into the container
COPY singularity/ /.singularity.d/ # buildkit
                        
# 2025-12-17 16:28:38  71.72MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c export COCOAPI_TAG=$(echo ${COCOAPI_VERSION} | sed 's/^.*+n//')  && pip install --no-build-isolation git+https://github.com/nvidia/cocoapi.git@${COCOAPI_TAG}#subdirectory=PythonAPI # buildkit
                        
# 2025-12-17 16:28:14  0.00B Set environment variable COCOAPI_VERSION
ENV COCOAPI_VERSION=2.0+nv0.8.1
                        
# 2025-12-17 16:28:14  721.08MB Run command and create a new image layer
RUN |9 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali /bin/sh -c if [ -z "${DALI_VERSION}" ] ; then   echo "Not Installing DALI for L4T Build." ; exit 0; fi   && export CUDA_VERSION_MAJOR=$(ls /usr/local/cuda/lib64/libcudart.so.*.*.* | cut -d . -f 3)   && export DALI_PKG_SUFFIX="cuda${CUDA_VERSION_MAJOR}0"   && if [ -z "${DALI_URL_SUFFIX}" ] ; then export DALI_EXTRA_INDEX_URL="${DALI_EXTRA_INDEX_URL}-qa/nvidia-dali-cudagpgpu"; fi   && pip install                 --extra-index-url https://developer.download.nvidia.com/compute/redist                 --extra-index-url "${DALI_EXTRA_INDEX_URL}"                 --extra-index-url "http://sqrl/nvdl/datasets/dali/misc"                 --trusted-host sqrl         nvidia-dali-${DALI_PKG_SUFFIX}==${DALI_VERSION} # buildkit
                        
# 2025-12-17 16:28:04  0.00B Define build argument
ARG DALI_EXTRA_INDEX_URL=http://sqrl/nvdl/datasets/dali/pip-dali
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_CUPTI_INCLUDE_PATH
ENV TRITON_CUPTI_INCLUDE_PATH=/usr/local/cuda/include
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_CUPTI_LIB_PATH
ENV TRITON_CUPTI_LIB_PATH=/usr/local/cuda/lib64
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_CUDART_PATH
ENV TRITON_CUDART_PATH=/usr/local/cuda/include
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_CUDACRT_PATH
ENV TRITON_CUDACRT_PATH=/usr/local/cuda/include
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_NVDISASM_PATH
ENV TRITON_NVDISASM_PATH=/usr/local/cuda/bin/nvdisasm
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_CUOBJDUMP_PATH
ENV TRITON_CUOBJDUMP_PATH=/usr/local/cuda/bin/cuobjdump
                        
# 2025-12-17 16:28:04  0.00B Set environment variable TRITON_PTXAS_PATH
ENV TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas
                        
# 2025-12-17 16:28:04  771.04MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c pip install --no-cache-dir --no-deps /tmp/dist/*.whl # buildkit
                        
# 2025-12-17 16:27:58  63.84MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c cd pytorch && pip install -v -r /opt/pytorch/pytorch/requirements.txt # buildkit
                        
# 2025-12-17 16:27:57  440.99KB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c patchelf --set-soname libjpeg.so.62 --output /usr/local/lib/libjpeg.so.62 $(readlink -f $(ldd /usr/local/lib/python3.12/dist-packages/torchvision/image.so | grep libjpeg | awk '{print $3}')) # buildkit
                        
# 2025-12-17 16:27:56  0.00B Copy new files or directories into the container
COPY /usr/local/lib64/libjpeg* /usr/local/lib/ # buildkit
                        
# 2025-12-17 16:27:56  13.47MB Copy new files or directories into the container
COPY /usr/local/lib64/libtorchvision.so.1.0 /usr/local/lib/libtorchvision.so.1.0 # buildkit
                        
# 2025-12-17 16:27:56  398.98KB Copy new files or directories into the container
COPY /usr/local/include/torchvision/ /usr/local/include/torchvision/ # buildkit
                        
# 2025-12-17 16:27:56  9.02KB Copy new files or directories into the container
COPY /usr/local/share/cmake/TorchVision/ /usr/local/share/cmake/TorchVision/ # buildkit
                        
# 2025-12-17 16:27:56  1.42GB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c echo "TORCH_CUDA_ARCH_LIST=${TORCH_CUDA_ARCH_LIST}"     && pip install /opt/transfer/torch*.whl     && patchelf --set-rpath '/usr/local/lib' /usr/local/lib/python${PYVER}/dist-packages/torch/lib/libtorch_global_deps.so # buildkit
                        
# 2025-12-17 16:27:41  0.00B Set environment variable TORCH_ALLOW_TF32_CUBLAS_OVERRIDE
ENV TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
                        
# 2025-12-17 16:27:41  0.00B Set environment variable CUDA_HOME
ENV CUDA_HOME=/usr/local/cuda
                        
# 2025-12-17 16:27:41  0.00B Set environment variable PYTORCH_HOME
ENV PYTORCH_HOME=/opt/pytorch/pytorch
                        
# 2025-12-17 16:27:41  0.00B Set environment variable TORCH_CUDA_ARCH_LIST
ENV TORCH_CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0+PTX
                        
# 2025-12-17 16:27:41  0.00B Set environment variable UCC_EC_CUDA_EXEC_NUM_THREADS
ENV UCC_EC_CUDA_EXEC_NUM_THREADS=256
                        
# 2025-12-17 16:27:41  0.00B Set environment variable UCC_CL_BASIC_TLS
ENV UCC_CL_BASIC_TLS=^sharp
                        
# 2025-12-17 16:27:41  0.00B Declare a port the container listens on at runtime
EXPOSE [6006/tcp]
                        
# 2025-12-17 16:27:41  0.00B Declare a port the container listens on at runtime
EXPOSE [8888/tcp]
                        
# 2025-12-17 16:27:41  0.00B Set environment variable TENSORBOARD_PORT
ENV TENSORBOARD_PORT=6006
                        
# 2025-12-17 16:27:41  0.00B Set environment variable JUPYTER_PORT
ENV JUPYTER_PORT=8888
                        
# 2025-12-17 16:27:41  248.00B Copy new files or directories into the container
COPY jupyter_config/settings.jupyterlab-settings /root/.jupyter/lab/user-settings/@jupyterlab/completer-extension/ # buildkit
                        
# 2025-12-17 16:27:41  236.00B Copy new files or directories into the container
COPY jupyter_config/manager.jupyterlab-settings /root/.jupyter/lab/user-settings/@jupyterlab/completer-extension/ # buildkit
                        
# 2025-12-17 16:27:41  519.00B Copy new files or directories into the container
COPY jupyter_config/jupyter_notebook_config.py /usr/local/etc/jupyter/ # buildkit
                        
# 2025-12-17 16:27:41  15.93MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c pip install --no-cache-dir /builder/*.whl jupytext black isort  && mkdir -p /root/.jupyter/lab/user-settings/@jupyterlab/completer-extension/  && jupyter lab clean # buildkit
                        
# 2025-12-17 16:27:02  0.00B Set the working directory to /opt/pytorch
WORKDIR /opt/pytorch
                        
# 2025-12-17 16:27:02  27.81KB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c PATCHED_FILE=$(python -c "from tensorboard.plugins.core import core_plugin as _; print(_.__file__)") &&     sed -i 's/^\( *"--bind_all",\)$/\1 default=True,/' "$PATCHED_FILE" &&     test $(grep '^ *"--bind_all", default=True,$' "$PATCHED_FILE" | wc -l) -eq 1 # buildkit
                        
# 2025-12-17 16:27:02  287.06MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c git config --global url."https://github".insteadOf git://github &&     pip install jupyterlab notebook tensorboard     jupyterlab_code_formatter python-hostlist # buildkit
                        
# 2025-12-17 16:26:49  2.11GB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c pip install         numpy         scipy         PyYAML         astunparse         typing_extensions         cffi         mock         tqdm         librosa         expecttest         hypothesis         xdoctest         pytest         pytest-xdist         pytest-rerunfailures         pytest-shard         pytest-flakefinder         pybind11         Cython         regex         protobuf         six &&     if [[ $TARGETARCH = "amd64" ]] ; then pip install --no-cache-dir mkl mkl-include mkl-devel ;     find /usr/local/lib -maxdepth 1 -type f -regex '.*\/lib\(tbb\|mkl\).*\.so\($\|\.[0-9]*\.[0-9]*\)' -exec rm -v {} + ; fi # buildkit
                        
# 2025-12-17 16:26:22  0.00B Set environment variable PIP_DEFAULT_TIMEOUT
ENV PIP_DEFAULT_TIMEOUT=100
                        
# 2025-12-17 16:26:22  0.00B Set environment variable LC_ALL
ENV LC_ALL=C.UTF-8
                        
# 2025-12-17 16:26:22  0.00B Set environment variable PYTHONIOENCODING
ENV PYTHONIOENCODING=utf-8
                        
# 2025-12-17 16:26:22  2.09GB Copy new files or directories into the container
COPY . . # buildkit
                        
# 2025-12-12 04:51:04  0.00B Set the working directory to /opt/pytorch
WORKDIR /opt/pytorch
                        
# 2025-12-16 09:08:42  0.00B Set environment variable NVPL_LAPACK_MATH_MODE
ENV NVPL_LAPACK_MATH_MODE=PEDANTIC
                        
# 2025-12-16 09:08:42  0.00B Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c if [ $TARGETARCH = "arm64" ]; then cd /opt &&     curl "https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/nvpl_slim_24.04/sbsa/nvpl_slim_24.04.tar" --output nvpl_slim_24.04.tar &&     tar -xf nvpl_slim_24.04.tar &&     cp -r nvpl_slim_24.04/lib/* /usr/local/lib &&     cp -r nvpl_slim_24.04/include/* /usr/local/include &&     rm -rf nvpl_slim_24.04.tar nvpl_slim_24.04 ; fi # buildkit
                        
# 2025-12-12 04:51:04  76.34MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c pip install pip setuptools &&     pip install cmake # buildkit
                        
# 2025-12-12 04:51:00  273.00B Copy new files or directories into the container
COPY constraint.txt /etc/pip/constraint.txt # buildkit
                        
# 2025-12-05 04:06:07  0.00B Set environment variable PIP_CONSTRAINT
ENV PIP_CONSTRAINT=/etc/pip/constraint.txt
                        
# 2025-12-05 04:06:07  12.04MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c curl -O https://bootstrap.pypa.io/get-pip.py &&     python get-pip.py &&     rm get-pip.py # buildkit
                        
# 2025-12-05 04:06:05  0.00B Set environment variable PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION
ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
                        
# 2025-12-05 04:06:05  226.63MB Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c export PYSFX=`echo "${PYVER}" | cut -c1-1` &&     export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -y --no-install-recommends         python$PYVER-dev         python$PYSFX         python$PYSFX-dev         python$PYSFX-venv         python-is-python$PYSFX         autoconf         automake         libatlas-base-dev         libgoogle-glog-dev         libbz2-dev         libc-ares2         libre2-dev         libleveldb-dev         liblmdb-dev         libprotobuf-dev         libsnappy-dev         libtool         nasm         protobuf-compiler         pkg-config         unzip         sox         libsndfile1         libpng-dev         libhdf5-dev         gfortran         rapidjson-dev         ninja-build         libedit-dev         build-essential         patchelf     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-12-16 09:08:42  0.00B Run command and create a new image layer
RUN |8 NVIDIA_PYTORCH_VERSION=25.12 PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 NVFUSER_BUILD_VERSION=073e91b TORCHAO_BUILD_VERSION=+git01374eb5 TARGETARCH=amd64 PYVER=3.12 L4T=0 ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG ENABLE_MITMPROXY=0
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG L4T=0
                        
# 2025-12-16 09:08:42  0.00B Set environment variable PIP_BREAK_SYSTEM_PACKAGES
ENV PIP_BREAK_SYSTEM_PACKAGES=1
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG PYVER=3.12
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG TARGETARCH=amd64
                        
# 2025-12-16 09:08:42  0.00B Add metadata label
LABEL com.nvidia.pytorch.version=2.10.0a0+b4e4ee8
                        
# 2025-12-16 09:08:42  0.00B Set environment variable TORCHAO_BUILD_VERSION
ENV TORCHAO_BUILD_VERSION=+git01374eb5
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG TORCHAO_BUILD_VERSION=+git01374eb5
                        
# 2025-12-16 09:08:42  0.00B Set environment variables NVFUSER_BUILD_VERSION NVFUSER_VERSION
ENV NVFUSER_BUILD_VERSION=073e91b NVFUSER_VERSION=073e91b
                        
# 2025-12-16 09:08:42  0.00B Set environment variables PYTORCH_BUILD_VERSION PYTORCH_VERSION PYTORCH_BUILD_NUMBER NVIDIA_PYTORCH_VERSION
ENV PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8 PYTORCH_VERSION=2.10.0a0+b4e4ee8 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=25.12
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG NVFUSER_BUILD_VERSION=073e91b
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8
                        
# 2025-12-16 09:08:42  0.00B Define build argument
ARG NVIDIA_PYTORCH_VERSION=25.12
                        
# 2025-12-16 09:08:42  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=PyTorch
                        
# 2025-12-04 14:50:42  0.00B Run command and create a new image layer
RUN |3 TARGETARCH=amd64 ENABLE_FIPS=0 ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh uninstall # buildkit
                        
# 2025-12-04 14:50:42  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:/usr/local/cuda/lib64/stubs:
                        
# 2025-12-04 14:50:42  349.50MB Run command and create a new image layer
RUN |3 TARGETARCH=amd64 ENABLE_FIPS=0 ENABLE_MITMPROXY=0 /bin/sh -c export DEVEL=1 BASE=0  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUDA.sh  && /nvidia/build-scripts/installLIBS.sh  && if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then /nvidia/build-scripts/installNCCL.sh; fi  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installCUSPARSELT.sh  && if [ -f "/tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch" ]; then patch -p0 < /tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch; fi  && rm -f /tmp/cuda-*.patch # buildkit
                        
# 2025-12-04 14:47:25  1.49KB Copy new files or directories into the container
COPY cuda-*.patch /tmp # buildkit
                        
# 2025-12-04 14:47:25  68.97MB Run command and create a new image layer
RUN |3 TARGETARCH=amd64 ENABLE_FIPS=0 ENABLE_MITMPROXY=0 /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         build-essential         git         libglib2.0-0         less         libhwloc15         libnuma-dev         libnuma1         libpmi2-0-dev         nano         numactl         openssh-client         vim         wget  && rm -rf /var/lib/apt/lists/*  && if [ "${ENABLE_FIPS}" = "1" -a "${TARGETARCH}" = "amd64" ]; then       apt-get remove -y openssh-client       && cp -f -p /tmp/ubuntu.sources.fips /etc/apt/sources.list.d/ubuntu.sources       && apt-get update       && apt-get install -y --no-install-recommends openssh-client       && rm -rf /var/lib/apt/lists/*;     fi  && [ -x /bin/ssh-agent ] && chgrp root /bin/ssh-agent || true  && [ -x /usr/bin/ssh-agent ] && chgrp root /usr/bin/ssh-agent || true # buildkit
                        
# 2025-12-04 14:47:19  0.00B Run command and create a new image layer
RUN |3 TARGETARCH=amd64 ENABLE_FIPS=0 ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-04 14:47:19  0.00B Define build argument
ARG ENABLE_MITMPROXY=0
                        
# 2025-12-04 14:47:19  0.00B Define build argument
ARG ENABLE_FIPS=0
                        
# 2025-12-04 14:47:19  0.00B Define build argument
ARG TARGETARCH=amd64
                        
# 2025-12-04 14:43:13  0.00B Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh uninstall # buildkit
                        
# 2025-12-04 14:43:13  0.00B Add metadata label
LABEL com.nvidia.nccl.version=2.28.9+cuda13.0 com.nvidia.cublas.version=13.2.0.9 com.nvidia.cufft.version=12.1.0.31 com.nvidia.curand.version=10.4.1.34 com.nvidia.cusparse.version=12.7.2.19 com.nvidia.cusparselt.version=0.8.1.1 com.nvidia.cusolver.version=12.0.7.41 com.nvidia.npp.version=13.0.2.21 com.nvidia.nvjpeg.version=13.0.2.28 com.nvidia.cublasmp.version=0.7.0.125 com.nvidia.nvvm.version=13.1.80 com.nvidia.cudla.version=13.1.0.036 com.nvidia.cudnn.version=9.17.0.29 com.nvidia.tensorrt.version=10.14.1.48+cuda13.0 com.nvidia.tensorrtoss.version= com.nvidia.nsightsystems.version=2025.5.2.266 com.nvidia.nsightcompute.version=2025.4.0.12
                        
# 2025-12-04 14:43:13  3.47GB Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c PURGESTUBS="1" /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUSPARSELT.sh  && if [ -z "${JETPACK_HOST_MOUNTS}" ]; then       /nvidia/build-scripts/installNCCL.sh;     fi; # buildkit
                        
# 2025-12-04 14:42:13  184.11MB Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c /nvidia/build-scripts/installCUDA.sh # buildkit
                        
# 2025-12-04 14:41:26  0.00B Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         build-essential         libncurses6         libncursesw6         unzip         jq         gnupg         libtcmalloc-minimal4  && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-12-04 14:41:22  0.00B Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-04 14:41:22  0.00B Define build argument
ARG ENABLE_MITMPROXY=0
                        
# 2025-12-04 13:26:19  0.00B Add metadata label
LABEL com.nvidia.nccl.version=2.28.9+cuda13.0 com.nvidia.cublas.version=13.2.0.9 com.nvidia.cufft.version=12.1.0.31 com.nvidia.curand.version=10.4.1.34 com.nvidia.cusparse.version=12.7.2.19 com.nvidia.cusparselt.version=0.8.1.1 com.nvidia.cusolver.version=12.0.7.41 com.nvidia.npp.version=13.0.2.21 com.nvidia.nvjpeg.version=13.0.2.28 com.nvidia.cudnn.version=9.17.0.29
                        
# 2025-12-04 13:26:19  0.00B Run command and create a new image layer
RUN |3 ENABLE_MITMPROXY=0 CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc VERSION_LIST=CURAND_VERSION CUBLAS_VERSION CUFILE_VERSION /bin/sh -c /tmp/manage_cert.sh uninstall # buildkit
                        
# 2025-12-04 13:26:18  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
                        
# 2025-12-04 13:26:18  2.47GB Run command and create a new image layer
RUN |3 ENABLE_MITMPROXY=0 CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc VERSION_LIST=CURAND_VERSION CUBLAS_VERSION CUFILE_VERSION /bin/sh -c set -exo pipefail
VERSION_LIST="${VERSION_LIST}" /nvidia/build-scripts/installLIBS.sh

# Hack to grab stubs from the libs stage and put them under /usr/local/cuda/lib64 without actually installing the libs
HACK_LIST="cusparse cusolver cufft nvJitLink"
for lib in ${HACK_LIST}; do
  _LIB_VERSION_ENV_VAR="${lib^^}_VERSION"
  _LIB_MAJOR_VERSION=$(echo "${!_LIB_VERSION_ENV_VAR}" | cut -d. -f1)
  
  if [[ -z "${_LIB_MAJOR_VERSION}" ]]; then
    echo "[ERROR] ${_LIB_VERSION_ENV_VAR} is not set"
    exit 1
  fi
  
  for _stub in $(ls /tmp/stubs/*${lib}*); do
    cp ${_stub} /usr/local/cuda/lib64/$(basename ${_stub}).${_LIB_MAJOR_VERSION}
    # Create a symlink to the file with .0 extension
    ln -sf $(basename ${_stub}).${_LIB_MAJOR_VERSION} /usr/local/cuda/lib64/$(basename ${_stub}).0
  done
done
( set +x; echo "[INFO] /usr/local/cuda/lib64 after hack:"; ls -larth /usr/local/cuda/lib64/ )

/nvidia/build-scripts/installCUDNN.sh

/nvidia/build-scripts/installCUSPARSELT.sh

NOBUILDER=1 /nvidia/build-scripts/installTRT.sh

if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then
  /nvidia/build-scripts/installNCCL.sh
  /nvidia/build-scripts/installNVSHMEM.sh
  # Manually create symlinks nvshmem so we don't need to use dpkg-divert
  # nvshmem default installation is under /usr/lib/x86_64-linux-gnu/nvshmem/${CUDA_VERSION_MAJOR}
  # We need to libs to be available under /usr/local/cuda/lib64
  export CUDA_VERSION_MAJOR=$(echo "${CUDA_VERSION}" | cut -d. -f1)
  find /usr/lib/*-linux-gnu/nvshmem/${CUDA_VERSION_MAJOR}/ -maxdepth 1 -type f -exec ln -sf {} /usr/local/cuda/lib64/ \;
  # For symlinks under /usr/lib/*-linux-gnu/nvshmem/${CUDA_VERSION_MAJOR}/, we want to create it under /usr/local/cuda/lib64/ and link it to the same target
  find /usr/lib/*-linux-gnu/nvshmem/${CUDA_VERSION_MAJOR}/ -maxdepth 1 -type l -exec ln -sf {} /usr/local/cuda/lib64/ \;
  # Ensure libnvshmem_host.so* files exist under /usr/local/cuda/lib64
  ls /usr/local/cuda/lib64/libnvshmem_host.so* || { echo "[ERROR] libnvshmem_host.so* not found under /usr/local/cuda/lib64"; exit 1; }
fi
 # buildkit
                        
# 2025-12-04 12:50:42  0.00B Define build argument
ARG VERSION_LIST=CURAND_VERSION CUBLAS_VERSION CUFILE_VERSION
                        
# 2025-12-04 12:50:42  551.37MB Run command and create a new image layer
RUN |2 ENABLE_MITMPROXY=0 CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc /bin/sh -c set -exo pipefail
if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then
    COMPONENT_LIST="${CUDA_COMPONENT_LIST}" /nvidia/build-scripts/installCUDA.sh
else
    # Don't use COMPONENT_LIST in iGPU environment
    /nvidia/build-scripts/installCUDA.sh
fi
 # buildkit
                        
# 2025-12-04 12:49:43  0.00B Set environment variable CUDA_COMPONENT_LIST
ENV CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc
                        
# 2025-12-04 12:49:43  0.00B Define build argument
ARG CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc
                        
# 2025-12-04 12:49:43  313.73MB Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         apt-utils         build-essential         libncurses6         libncursesw6         unzip         jq         gnupg         libtcmalloc-minimal4         git         libglib2.0-0         libhwloc15         libnuma-dev         libnuma1         libpmi2-0-dev         numactl         openssh-client         vim-tiny         wget  && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-12-04 13:26:19  0.00B Run command and create a new image layer
RUN |1 ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-04 13:26:19  0.00B Define build argument
ARG ENABLE_MITMPROXY=0
                        
# 2025-12-04 12:46:44  0.00B Run command and create a new image layer
RUN |47 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 CUSPARSELT_VERSION=0.8.1.1 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0 _LIBPATH_SUFFIX= /bin/sh -c /tmp/manage_cert.sh uninstall # buildkit
                        
# 2025-12-04 12:46:44  467.00B Run command and create a new image layer
RUN |47 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 CUSPARSELT_VERSION=0.8.1.1 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0 _LIBPATH_SUFFIX= /bin/sh -c mkdir -p /workspace && cp -f -p /opt/nvidia/entrypoint.d/30-container-license.txt /workspace/license.txt # buildkit
                        
# 2025-12-04 12:46:44  0.00B Configure the command to run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
                        
# 2025-12-04 12:46:44  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA
                        
# 2025-12-04 12:46:44  16.17KB Copy new files or directories into the container
COPY entrypoint/ /opt/nvidia/ # buildkit
                        
# 2025-12-04 12:46:44  35.81KB Run command and create a new image layer
RUN |47 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 CUSPARSELT_VERSION=0.8.1.1 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0 _LIBPATH_SUFFIX= /bin/sh -c if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then            echo "/opt/amazon/aws-ofi-nccl/lib" > /etc/ld.so.conf.d/aws-ofi-nccl.conf       && ldconfig;                                                 fi # buildkit
                        
# 2025-12-04 12:46:44  9.66MB Copy new files or directories into the container
COPY /opt/amazon/aws-ofi-nccl /opt/amazon/aws-ofi-nccl # buildkit
                        
# 2025-12-04 12:42:23  105.68MB Run command and create a new image layer
RUN |47 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 CUSPARSELT_VERSION=0.8.1.1 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0 _LIBPATH_SUFFIX= /bin/sh -c if [ -n "${DOCA_VERSION}" ] && dpkg --compare-versions "${HPCX_VERSION}" "ge" "2.24"; then         /nvidia/build-scripts/installDOCA.sh;     else         echo "Not running installDOCA.sh";     fi # buildkit
                        
# 2025-12-04 12:41:45  0.00B Set environment variables PATH LD_LIBRARY_PATH NVIDIA_VISIBLE_DEVICES NVIDIA_DRIVER_CAPABILITIES
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
                        
# 2025-12-04 12:41:45  0.00B Define build argument
ARG _LIBPATH_SUFFIX=
                        
# 2025-12-04 12:41:45  46.00B Run command and create a new image layer
RUN |46 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 CUSPARSELT_VERSION=0.8.1.1 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf  && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2025-12-04 12:41:45  13.39KB Copy files or directories into the container
ADD docs.tgz / # buildkit
                        
# 2025-12-04 12:41:45  0.00B Set environment variables NCCL_VERSION CUBLAS_VERSION CUFFT_VERSION CURAND_VERSION CUSPARSE_VERSION CUSPARSELT_VERSION CUSOLVER_VERSION NPP_VERSION NVJPEG_VERSION CUFILE_VERSION NVJITLINK_VERSION NVFATBIN_VERSION CUBLASMP_VERSION NVSHMEM_VERSION CUDLA_VERSION NVPTXCOMPILER_VERSION CUDNN_VERSION CUDNN_FRONTEND_VERSION TRT_VERSION TRTOSS_VERSION NSIGHT_SYSTEMS_VERSION NSIGHT_COMPUTE_VERSION DALI_VERSION DALI_BUILD DALI_URL_SUFFIX POLYGRAPHY_VERSION TRANSFORMER_ENGINE_VERSION MODEL_OPT_VERSION CUDA_ARCH_LIST MAXSMVER NVRX_VERSION
ENV NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSPARSELT_VERSION=0.8.1.1 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0
                        
# 2025-12-04 12:41:45  0.00B Define build argument
ARG NCCL_VERSION=2.28.9+cuda13.0 CUBLAS_VERSION=13.2.0.9 CUFFT_VERSION=12.1.0.31 CURAND_VERSION=10.4.1.34 CUSPARSE_VERSION=12.7.2.19 CUSOLVER_VERSION=12.0.7.41 NPP_VERSION=13.0.2.21 NVJPEG_VERSION=13.0.2.28 CUFILE_VERSION=1.16.0.49 NVJITLINK_VERSION=13.1.80 NVFATBIN_VERSION=13.1.80 CUBLASMP_VERSION=0.7.0.125 NVSHMEM_VERSION=3.4.5 CUDLA_VERSION=13.1.0.036 NVPTXCOMPILER_VERSION=13.1.80 CUDNN_VERSION=9.17.0.29 CUDNN_FRONTEND_VERSION=1.16.0 TRT_VERSION=10.14.1.48+cuda13.0 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2025.5.2.266 NSIGHT_COMPUTE_VERSION=2025.4.0.12 CUSPARSELT_VERSION=0.8.1.1 DALI_VERSION=1.52.0 DALI_BUILD= DALI_URL_SUFFIX=130 POLYGRAPHY_VERSION=0.49.26 TRANSFORMER_ENGINE_VERSION=2.10 MODEL_OPT_VERSION=0.39.0 CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0 MAXSMVER= NVRX_VERSION=0.5.0
                        
# 2025-12-04 12:41:45  0.00B Add metadata label
LABEL com.nvidia.volumes.needed=nvidia_driver com.nvidia.cuda.version=9.0
                        
# 2025-12-04 12:41:45  0.00B Set environment variables _CUDA_COMPAT_PATH ENV BASH_ENV SHELL NVIDIA_REQUIRE_CUDA
ENV _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0
                        
# 2025-12-04 12:41:45  57.01KB Run command and create a new image layer
RUN |15 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 /bin/sh -c cp -vprd /nvidia/. /  &&  patch -p0 < /etc/startup_scripts.patch  &&  rm -f /etc/startup_scripts.patch # buildkit
                        
# 2025-12-04 12:41:45  476.72MB Run command and create a new image layer
RUN |15 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0 /bin/sh -c BASE=min /nvidia/build-scripts/installCUDA.sh # buildkit
                        
# 2025-12-04 14:23:53  0.00B Set environment variables CUDA_VERSION CUDA_DRIVER_VERSION NVVM_VERSION DOCA_VERSION
ENV CUDA_VERSION=13.1.0.036 CUDA_DRIVER_VERSION=590.44.01 NVVM_VERSION=13.1.80 DOCA_VERSION=3.1.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG DOCA_VERSION=3.1.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG NVVM_VERSION=13.1.80
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG CUDA_DRIVER_VERSION=590.44.01
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG CUDA_VERSION=13.1.0.036
                        
# 2025-12-04 14:23:53  0.00B Set environment variable OMPI_MCA_coll_hcoll_enable
ENV OMPI_MCA_coll_hcoll_enable=0
                        
# 2025-12-04 14:23:53  0.00B Set environment variables OPAL_PREFIX PATH
ENV OPAL_PREFIX=/opt/hpcx/ompi PATH=/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin
                        
# 2025-12-04 14:23:53  245.35MB Run command and create a new image layer
RUN |11 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 RDMACORE_VERSION=56.0 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0 TARGETARCH=amd64 /bin/sh -c cd /nvidia  && ( cd opt/rdma-core/                            && dpkg -i libibverbs1_*.deb                            libibverbs-dev_*.deb                         librdmacm1_*.deb                             librdmacm-dev_*.deb                          libibumad3_*.deb                             libibumad-dev_*.deb                          ibverbs-utils_*.deb                          ibverbs-providers_*.deb           && rm $(dpkg-query -L                                    libibverbs-dev                               librdmacm-dev                                libibumad-dev                            | grep "\(\.so\|\.a\)$")          )                                            && ( cd opt/gdrcopy/                              && dpkg -i libgdrapi_*.deb                   )                                         && ( cp -r opt/hpcx /opt/                                         && cp etc/ld.so.conf.d/hpcx.conf /etc/ld.so.conf.d/          && ln -sf /opt/hpcx/ompi /usr/local/mpi                      && ln -sf /opt/hpcx/ucx  /usr/local/ucx                      && sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf         && sed -i 's/^\(btl = self\)$/#\1/'                             /opt/hpcx/ompi/etc/openmpi-mca-params.conf       )                                                         && ( if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then           cd opt/amazon/efa/                                           && dpkg -i libfabric*.deb                                    && rm /opt/amazon/efa/lib/libfabric.a                        && echo "/opt/amazon/efa/lib" > /etc/ld.so.conf.d/efa.conf;         fi                                                         )                                                         && ldconfig # buildkit
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG TARGETARCH=amd64
                        
# 2025-12-04 14:23:53  0.00B Set environment variables GDRCOPY_VERSION HPCX_VERSION MOFED_VERSION OPENUCX_VERSION OPENMPI_VERSION RDMACORE_VERSION EFA_VERSION AWS_OFI_NCCL_VERSION
ENV GDRCOPY_VERSION=2.5.1 HPCX_VERSION=2.25.1-RC2 MOFED_VERSION=5.4-rdmacore56.0 OPENUCX_VERSION=1.20.0 OPENMPI_VERSION=4.1.7 RDMACORE_VERSION=56.0 EFA_VERSION=1.43.1 AWS_OFI_NCCL_VERSION=1.17.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG AWS_OFI_NCCL_VERSION=1.17.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG EFA_VERSION=1.43.1
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG OPENMPI_VERSION=4.1.7
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG OPENUCX_VERSION=1.20.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG MOFED_VERSION=5.4-rdmacore56.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG RDMACORE_VERSION=56.0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG HPCX_VERSION=2.25.1-RC2
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG GDRCOPY_VERSION=2.5.1
                        
# 2025-12-04 14:23:53  9.32MB Run command and create a new image layer
RUN |2 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         adduser         curl         libnl-route-3-200         libnl-3-200         libnl-3-dev         libnl-route-3-dev         patch         wget  && rm -rf /var/lib/apt/lists/*  && echo "hsts=0" > /root/.wgetrc # buildkit
                        
# 2025-12-04 14:23:53  0.00B Run command and create a new image layer
RUN |2 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-04 14:23:53  3.01MB Run command and create a new image layer
RUN |2 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 /bin/sh -c if [ -d /usr/share/ca-certificates ]; then exit; fi  && export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends ca-certificates  && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-12-04 14:23:53  0.00B Run command and create a new image layer
RUN |2 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 /bin/sh -c /tmp/manage_cert.sh install # buildkit
                        
# 2025-12-04 14:23:53  0.00B Run command and create a new image layer
RUN |2 JETPACK_HOST_MOUNTS= ENABLE_MITMPROXY=0 /bin/sh -c if [ -n "${JETPACK_HOST_MOUNTS}" ]; then        echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf     && echo "/usr/lib/aarch64-linux-gnu/tegra-egl" >> /etc/ld.so.conf.d/nvidia-tegra.conf;     fi # buildkit
                        
# 2025-12-04 14:23:53  0.00B Set environment variable NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS
ENV NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG ENABLE_MITMPROXY=0
                        
# 2025-12-04 14:23:53  0.00B Define build argument
ARG JETPACK_HOST_MOUNTS=
                        
# 2025-10-17 03:23:03  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-10-17 03:23:03  78.12MB 
/bin/sh -c #(nop) ADD file:ddf1aa62235de6657123492b19d27d937c25668011b5ebf923a3f019200f8540 in / 
                        
# 2025-10-17 03:23:01  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=24.04
                        
# 2025-10-17 03:23:01  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-10-17 03:23:01  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-10-17 03:23:01  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image Information
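The JSON below is the recorded docker inspect output for this image. After pulling it, an equivalent dump (and the layer listing shown above) can be regenerated locally; the commands below are a sketch assuming the image has been retagged to its upstream name:

docker image inspect nvcr.io/nvidia/pytorch:25.12-py3
docker history --no-trunc nvcr.io/nvidia/pytorch:25.12-py3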

{
    "Id": "sha256:dd94fce2f83a180e2271f02f69713de5360aaf916b01e1f7f54173519fd54efd",
    "RepoTags": [
        "nvcr.io/nvidia/pytorch:25.12-py3",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3"
    ],
    "RepoDigests": [
        "nvcr.io/nvidia/pytorch@sha256:1dc787f5c6264fcc1c99809f99b84823e73ed4588d5a581b94290fc2a8fecff8",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch@sha256:1dec7522c2a20b19a506ee72dc57e75e4b12fd21954741a233b3da66baacb3e7"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-12-17T09:04:58.04449515Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "6006/tcp": {},
            "8888/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/mpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/amazon/efa/bin:/opt/tensorrt/bin",
            "NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=",
            "GDRCOPY_VERSION=2.5.1",
            "HPCX_VERSION=2.25.1-RC2",
            "MOFED_VERSION=5.4-rdmacore56.0",
            "OPENUCX_VERSION=1.20.0",
            "OPENMPI_VERSION=4.1.7",
            "RDMACORE_VERSION=56.0",
            "EFA_VERSION=1.43.1",
            "AWS_OFI_NCCL_VERSION=1.17.0",
            "OPAL_PREFIX=/opt/hpcx/ompi",
            "OMPI_MCA_coll_hcoll_enable=0",
            "CUDA_VERSION=13.1.0.036",
            "CUDA_DRIVER_VERSION=590.44.01",
            "NVVM_VERSION=13.1.80",
            "DOCA_VERSION=3.1.0",
            "_CUDA_COMPAT_PATH=/usr/local/cuda/compat",
            "ENV=/etc/shinit_v2",
            "BASH_ENV=/etc/bash.bashrc",
            "SHELL=/bin/bash",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=9.0",
            "NCCL_VERSION=2.28.9+cuda13.0",
            "CUBLAS_VERSION=13.2.0.9",
            "CUFFT_VERSION=12.1.0.31",
            "CURAND_VERSION=10.4.1.34",
            "CUSPARSE_VERSION=12.7.2.19",
            "CUSPARSELT_VERSION=0.8.1.1",
            "CUSOLVER_VERSION=12.0.7.41",
            "NPP_VERSION=13.0.2.21",
            "NVJPEG_VERSION=13.0.2.28",
            "CUFILE_VERSION=1.16.0.49",
            "NVJITLINK_VERSION=13.1.80",
            "NVFATBIN_VERSION=13.1.80",
            "CUBLASMP_VERSION=0.7.0.125",
            "NVSHMEM_VERSION=3.4.5",
            "CUDLA_VERSION=13.1.0.036",
            "NVPTXCOMPILER_VERSION=13.1.80",
            "CUDNN_VERSION=9.17.0.29",
            "CUDNN_FRONTEND_VERSION=1.16.0",
            "TRT_VERSION=10.14.1.48+cuda13.0",
            "TRTOSS_VERSION=",
            "NSIGHT_SYSTEMS_VERSION=2025.5.2.266",
            "NSIGHT_COMPUTE_VERSION=2025.4.0.12",
            "DALI_VERSION=1.52.0",
            "DALI_BUILD=",
            "DALI_URL_SUFFIX=130",
            "POLYGRAPHY_VERSION=0.49.26",
            "TRANSFORMER_ENGINE_VERSION=2.10",
            "MODEL_OPT_VERSION=0.39.0",
            "CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0",
            "MAXSMVER=",
            "NVRX_VERSION=0.5.0",
            "LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility,video",
            "NVIDIA_PRODUCT_NAME=PyTorch",
            "CUDA_COMPONENT_LIST=cccl crt nvrtc driver-dev culibos-dev cudart cudart-dev nvcc",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs:/usr/local/cuda/lib64/stubs:",
            "PYTORCH_BUILD_VERSION=2.10.0a0+b4e4ee8",
            "PYTORCH_VERSION=2.10.0a0+b4e4ee8",
            "PYTORCH_BUILD_NUMBER=0",
            "NVIDIA_PYTORCH_VERSION=25.12",
            "NVFUSER_BUILD_VERSION=073e91b",
            "NVFUSER_VERSION=073e91b",
            "TORCHAO_BUILD_VERSION=+git01374eb5",
            "PIP_BREAK_SYSTEM_PACKAGES=1",
            "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python",
            "PIP_CONSTRAINT=/etc/pip/constraint.txt",
            "NVPL_LAPACK_MATH_MODE=PEDANTIC",
            "PYTHONIOENCODING=utf-8",
            "LC_ALL=C.UTF-8",
            "PIP_DEFAULT_TIMEOUT=100",
            "JUPYTER_PORT=8888",
            "TENSORBOARD_PORT=6006",
            "UCC_CL_BASIC_TLS=^sharp",
            "UCC_EC_CUDA_EXEC_NUM_THREADS=256",
            "TORCH_CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0+PTX",
            "PYTORCH_HOME=/opt/pytorch/pytorch",
            "CUDA_HOME=/usr/local/cuda",
            "TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1",
            "TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas",
            "TRITON_CUOBJDUMP_PATH=/usr/local/cuda/bin/cuobjdump",
            "TRITON_NVDISASM_PATH=/usr/local/cuda/bin/nvdisasm",
            "TRITON_CUDACRT_PATH=/usr/local/cuda/include",
            "TRITON_CUDART_PATH=/usr/local/cuda/include",
            "TRITON_CUPTI_LIB_PATH=/usr/local/cuda/lib64",
            "TRITON_CUPTI_INCLUDE_PATH=/usr/local/cuda/include",
            "COCOAPI_VERSION=2.0+nv0.8.1",
            "CUDA_BINARY_LOADER_THREAD_COUNT=8",
            "CUDA_MODULE_LOADING=LAZY",
            "TORCH_NCCL_USE_COMM_NONBLOCKING=0",
            "TORCHINDUCTOR_LOOP_ORDERING_AFTER_FUSION=0",
            "NVIDIA_BUILD_ID=245654590"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/workspace",
        "Entrypoint": [
            "/opt/nvidia/nvidia_entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "com.nvidia.build.id": "245654590",
            "com.nvidia.build.ref": "0214a1e1471678f0c2928e94807c940c6961ca2c",
            "com.nvidia.cublas.version": "13.2.0.9",
            "com.nvidia.cublasmp.version": "0.7.0.125",
            "com.nvidia.cuda.version": "9.0",
            "com.nvidia.cudla.version": "13.1.0.036",
            "com.nvidia.cudnn.version": "9.17.0.29",
            "com.nvidia.cufft.version": "12.1.0.31",
            "com.nvidia.curand.version": "10.4.1.34",
            "com.nvidia.cusolver.version": "12.0.7.41",
            "com.nvidia.cusparse.version": "12.7.2.19",
            "com.nvidia.cusparselt.version": "0.8.1.1",
            "com.nvidia.nccl.version": "2.28.9+cuda13.0",
            "com.nvidia.npp.version": "13.0.2.21",
            "com.nvidia.nsightcompute.version": "2025.4.0.12",
            "com.nvidia.nsightsystems.version": "2025.5.2.266",
            "com.nvidia.nvjpeg.version": "13.0.2.28",
            "com.nvidia.nvvm.version": "13.1.80",
            "com.nvidia.pytorch.version": "2.10.0a0+b4e4ee8",
            "com.nvidia.tensorrt.version": "10.14.1.48+cuda13.0",
            "com.nvidia.tensorrtoss.version": "",
            "com.nvidia.volumes.needed": "nvidia_driver",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "24.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 20389731832,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/61e8c4efea9a07e291b586f72a968e12830098b88e454c0746353b0670ebe035/diff:/var/lib/docker/overlay2/fe93c5aea7cb97e5988a09e4a1623f09c917c63bcfa5a88e8159616e06ec211d/diff:/var/lib/docker/overlay2/24a540c17a0b0b87c98ed57291bf40b75966035f3cec8de10af42154aa1c8c0c/diff:/var/lib/docker/overlay2/f1fcc68033d21bf4074813773da30e00ff6a57e22b988055949e475a591feede/diff:/var/lib/docker/overlay2/b5b2f3945cf89a865a24de010c38afc48b6d147bf419b25eaef5f8de870125e9/diff:/var/lib/docker/overlay2/746579f0ed19fc4aecb821547b8224b52b03e89d6e72f78eede93b4ac4b961db/diff:/var/lib/docker/overlay2/4ec0d0e070d7b896614a64ec5d693c5896342045eff0690ede24a83ec590f8a8/diff:/var/lib/docker/overlay2/491b606b41aa87052ac78c98a88277ba41ac0dcce9078d0bbccd2ab5fca60140/diff:/var/lib/docker/overlay2/ab798a26402453ce778122dd0d74df854385a7eeab53d982857d33d3b3c0adca/diff:/var/lib/docker/overlay2/077b5c269327d7ebc7cea45b922579a3f0997b5f195910581bc7e0d8b1313a9a/diff:/var/lib/docker/overlay2/b9d30652b1359d56059e875c4edd8208d6780f61c2b259d5c63d53a755ffc02e/diff:/var/lib/docker/overlay2/9597d0089747f4739788813fe39cf3d9177cccea53736b38679df6be1e48b8cd/diff:/var/lib/docker/overlay2/e4700679e382bd4c761560c5884b805dba468c466d48f1938dcd33d3a20810a2/diff:/var/lib/docker/overlay2/4fe601745644a33de6fee9f2542b4bc85c662f3ddc4c52772fd0c71894422aa5/diff:/var/lib/docker/overlay2/8ae09255aeda4681dd25f607868a745f6ce08ec51074b3f01beb60fa696e79df/diff:/var/lib/docker/overlay2/bb4f31d9fffa1609c56bcaa04d5fafd95217b29f9087f9fb90f63c91a2bd9cfd/diff:/var/lib/docker/overlay2/9dcbd95c2f9c862cfcba3ffcad872d7dcebc24758461b376eeadbf719e87547f/diff:/var/lib/docker/overlay2/26096e469061a662b8a863d8f83bf8d0d330dd90db07b6e20eab07c77fb2eaa6/diff:/var/lib/docker/overlay2/21146ac8ee969dad25c06a7af60fa53eb09182f6002bf87b51b3c5ed3f8bfa98/diff:/var/lib/docker/overlay2/e256eee8abaa40b2c11c1e136c88aaa45e8abc638838f0b73fd91a263b7c2ab9/diff:/var/lib/docker/overlay2/a21ba90954f3bde8a6146657c85449604aff1a3805f73fcafec8a0b3cce9aead/diff:/var/lib/docker/overlay2/1e01a8f0562bf1e4700d7d84f7a8c5c897b9af58664756041d44899ae7ced2a0/diff:/var/lib/docker/overlay2/58b9116f739bc88a9d0e46165c536bf132167003385fd304d75e2adb9b0740d0/diff:/var/lib/docker/overlay2/14da37bdc6a29b4ef76530f1f04c3d109f2503e4616f259c3ddf271a2c418b53/diff:/var/lib/docker/overlay2/c7c524702b4f447b5520c1f7de4af7c1aaa017c17e15722d3df0ba9f640c262a/diff:/var/lib/docker/overlay2/07480f0b7900bb07fa060a997994eb066c1e850f83ab27dcad81454fe17d3f8c/diff:/var/lib/docker/overlay2/f92d3ede37cc8d73028a731699171cbc2bd11ac41b6f2ef5fa06641afd45ff88/diff:/var/lib/docker/overlay2/03bd484240c1e8a849bae0247118d4a2f6b7328e1ec0019b4f9546021c395c5b/diff:/var/lib/docker/overlay2/68023b18e32d57c937481c34a2a6c83ee0ceed255538cb5b02e32b9f3f708c1e/diff:/var/lib/docker/overlay2/d8cdf59a65d97fe4aeb9e70f252a3587ad7e5220e82fa867c770cbe2925fb2e8/diff:/var/lib/docker/overlay2/8577fb680318c1f4826a0d9bb81217f01c810e121a8df052c206a5b334757ba9/diff:/var/lib/docker/overlay2/5bf4f75506562b53034353556dc98123321f7266a165f24ce7dd3566aa05a44a/diff:/var/lib/docker/overlay2/142360ef3813342b7e94f22f2dbbedfe740a115c46acaeefd31b37b09cf3afdf/diff:/var/lib/docker/overlay2/dae8549768ca0610fc4225c4e07f69fb009225ac8fe68cc166f1d2571a18ea93/diff:/var/lib/docker/overlay2/5130be2e33301a4d63c60278aaa107f077c0335073dbd367594495bb01ff62b4/diff:/var/lib/docker/overlay2/884355bf2811372702725719bcd7e11ae21d8099c9ea898f36d94f161648e8e9/diff:/var/lib/docker/overlay2/d147433f15af4871c6bd8338342316b7c770fb1a7ea8a50dad12ff211b8f5fca/diff:/var/lib/docker
/overlay2/e1217c49619fdcfc8e8d6b86850fefd95eaa413947b9e2b554c59eee00a1adcc/diff:/var/lib/docker/overlay2/82d3b9a4e56a73949d4e9fe6861921c3ec9c643a0f78c4a9fcf07429836c1146/diff:/var/lib/docker/overlay2/2019bb1992adb9dd7cf9e92198f60a6c66b162fb4d3d34b4cae1326da46ad814/diff:/var/lib/docker/overlay2/0236d4bd30b39be21cf0e153731f13ea71176e460435ebe89dbbbd357d3c9d07/diff:/var/lib/docker/overlay2/4346ec07028635a6ea4fd7ffc14a3af924041c059576fed3056df283c090e710/diff:/var/lib/docker/overlay2/22c451a0c259fff20bcaaab3e0feb1a08022d52a0f6cc42e083d7b4a5351ed96/diff:/var/lib/docker/overlay2/19a73f98aaf1d000295143f5a4ea3852491e3abe67b3c907bbbe12bce0c8da57/diff:/var/lib/docker/overlay2/7bf17d18f72407103fe11f1ba298f4c9ad7f020064121fa569577f7c8117717d/diff:/var/lib/docker/overlay2/f951ba30c0f2045aa0f8eb4c47421cc36ee9918092b6a733c9e8910e45b0621a/diff:/var/lib/docker/overlay2/bc1702dd745adc3b03b25f1569c797b4d18556bf2d8bc8a7c5dca46e55a6d271/diff:/var/lib/docker/overlay2/68781a27088e978d66f370b5212eb6f5df6ae663ace4f2b959a894197efde0ad/diff:/var/lib/docker/overlay2/1eee780a6f21c0aad879deedaa31b53e8c9f5df21f999a56312f452c624930ff/diff:/var/lib/docker/overlay2/b1d7a5643e0954c98ab222f4251c2a5f4940b73e0f4b2bd50a9152d102800d84/diff:/var/lib/docker/overlay2/ffeb0b5543a747ea915a807f409060d2899a0dcc0b5166998889a22a8bc055fa/diff:/var/lib/docker/overlay2/f949d41f30161c104f1a627bc1b1d5ab8ff3bda842f3d38939a91958d587028c/diff:/var/lib/docker/overlay2/18382d1e1258cbac57be89ece2a52e7682a1a7a63189e1d6ba50d0ce8de0f9aa/diff:/var/lib/docker/overlay2/2249c099ee907e4fd4001c185af405d044d7432cc3a83089736b26333bd07587/diff:/var/lib/docker/overlay2/8773e41d4fcabeb429459542ff58d03b0724d9d5e4401d11c434bcaa2b068882/diff:/var/lib/docker/overlay2/cb144279bbaadc88b8ea77f769783f8cb89289b6d5727ed2de14c0b35a73a28a/diff:/var/lib/docker/overlay2/d117a09ad7dd0b732fcc9ec36ec62490337d1517e1cc6266939b8659dae0f77b/diff:/var/lib/docker/overlay2/e5e733604e042004d95488efa6175f2cfed3e716328b13a14a52ca4c763d7979/diff:/var/lib/docker/overlay2/2b501044bd1cef6466d7d5915d86e52dba8675ec109e12198836ff75ef4be431/diff:/var/lib/docker/overlay2/c5778269c926df4508dddf7236cdbd68dd15c159d6738011912af33a1672dd13/diff:/var/lib/docker/overlay2/b8322d122ed126a2380dd64e8315139b8478f7ea46089bcf7c7fd87d5bc67f7a/diff:/var/lib/docker/overlay2/b11e291607ecb69a8d219668b1ac785ed49358397a64ab83328cf1b34dc05d82/diff:/var/lib/docker/overlay2/7ea65271f80aa3490de0c80d3ffd13d138927283f24a89cd1e15352f9a0e4ce8/diff:/var/lib/docker/overlay2/08d9ab5bbc7c39e4e8652ea9b3af30acd0edc70cc10a69479293c00119c6c4ba/diff:/var/lib/docker/overlay2/23823f6ee197ff950b15bf347dc7fec7e331e3005a5d7fa478854323e5828e44/diff:/var/lib/docker/overlay2/0cfd6bcb7895194ca7e3797619cfa4613cc6e5fbaec5321616d22a85e6f0190f/diff:/var/lib/docker/overlay2/dbc2286b7e123c8e5ae0291be4a1109b73235f7f9601f9cd28662213edc4f974/diff:/var/lib/docker/overlay2/2d073e71db9b7d4739b52e9de3be8de32e005c97228942b4f4e04d6e3f777852/diff:/var/lib/docker/overlay2/2b51afdbd14cfb7f874ad04a81edeb67fb024888b08fd179c041eb4d6dd21cd1/diff:/var/lib/docker/overlay2/dfc6565c4bc9e4bae4955474af0634bbe2b910cbea60e229c54bc2009196fba2/diff:/var/lib/docker/overlay2/4da1670ddb779f5fdd8b83e4b3d442d4341dbc5c13165aed6312d24ade4aafd4/diff:/var/lib/docker/overlay2/d98c8322a9189efe0b790a81ee27c36942470f5b85a5afca28300d433215130d/diff:/var/lib/docker/overlay2/218fd7f3dcc1c2e4d33a82ad9097756e42322bda1d3918d48d3860d4f0d9266e/diff:/var/lib/docker/overlay2/8e9ac2c90462a3520bab6ed31cf71eb1466fac34c28a997686ee1aa95581897a/diff:/var/lib/docker/overlay2/0f7d856af0d072ab41b4c32a80df73
d3956042a37d68d497313a71e859be7d40/diff:/var/lib/docker/overlay2/5862e5b9e0010a3dfd6f370ae133369f5fad60fe925501d528e35c94cbc63256/diff:/var/lib/docker/overlay2/f6578cd0fd3962acadd709df3465d92c40586f0e6dd5b8be0cf9254f74747a5f/diff:/var/lib/docker/overlay2/df53c92990b1d6760416116274a07bf49b67cd246975a775fec52b727049748a/diff:/var/lib/docker/overlay2/92be57a580fec7d90b4fbb6e6e7436437f7560fcaf57331232b798a601179c31/diff:/var/lib/docker/overlay2/ce237b9abf189c58051fb525919936c6048e2e090e26e4a6cde87a22b4933e0b/diff:/var/lib/docker/overlay2/eb70712178461cb65c5a956187c966c31d60ac41eacf62af98ead2c9415d1d0a/diff",
            "MergedDir": "/var/lib/docker/overlay2/2dbd4068a499d49ccfc496c9030a939b9eacc19010fb49884e4708cfaab988e8/merged",
            "UpperDir": "/var/lib/docker/overlay2/2dbd4068a499d49ccfc496c9030a939b9eacc19010fb49884e4708cfaab988e8/diff",
            "WorkDir": "/var/lib/docker/overlay2/2dbd4068a499d49ccfc496c9030a939b9eacc19010fb49884e4708cfaab988e8/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:e8bce0aabd687e9ee90e0bada33884f40b277196f72aac9934357472863a80ae",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:ea22878e3933111a0822596c79611300d4d7ca3164b2bfa1c6b6b3bc5d216976",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:83c28e1d9012a9a83ad7e6e1a9f6926c35245205a59318c97441e622d8f3fa5f",
            "sha256:34a26442038bc2a3925772ed10d9ecaf0e65e232cab746383078395a8a8d6f66",
            "sha256:9fd5c06969613caa2446b4286e4cbb9e5a7ecbcbc12165fec42def40e69d1856",
            "sha256:b7a83899cba7a8aa8d916e10b926a2892ebd966449aa19a14478fae4ad04335a",
            "sha256:8227d8db32f329a323a058737b3d4149092a0c7575f276221e04f047a1a8a075",
            "sha256:46e80b10bdc92b872eb1a8b6e8251a904f090b6340a2f39e54f578eebdab16a6",
            "sha256:5161682b19333fd62f5798b69ef425cea3cfed6072f04888a63e32fc6afdf151",
            "sha256:662edf069e4554a3e00aeedc29b11ede7a376d489103b703497e141def239964",
            "sha256:a17e6a17c70b2351d4553206f210aae5c3ef795af56c4cb9a2166c199d74aaee",
            "sha256:dc8aed900216d8752dd1a8224267fd4055915923dd2d5fed7109aeb6319a2a9b",
            "sha256:9bfa445b70efd2d5a00d9a479f311ed59669b9b5a970da12820077e20322c1c2",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:e41d9f65d02db68880aeb6d96f5832dc50342899520bc8eedccab4f675c38f6e",
            "sha256:fabb1f5c2079ebc137b9cc30c2726c2c55b155bf3c657be2af28684ed871d0fc",
            "sha256:cac5c774c9b9fc395b655bfd4fe7450b28e690aa4f51f11791cacfe02dbda60f",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:48d231fdeb649ce4106c2845ab1ae4e36b3112626482e45d42b7d07d8c17ff13",
            "sha256:a273ef8e73062bcb5feef44081bd044b710dbb301500c31179de00e91663bff2",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:cf938f6ddcc61953ce25c46bf65ef4cbddfff9f9e6d85dfaff125697263396dd",
            "sha256:01c8036ff6a66fd19801dc98206bc67440fb44d5cdb7f5cec5909e8685a201b1",
            "sha256:62b4df84d99cc63fb84265957cad039af62146977602b9b2d79c66551043af8f",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:46a70a4e6a3f1f4c40446d7d0c0cb00db19993dc02a4ad21e8869a74aeecd796",
            "sha256:01b9d1e83d4282f27f9395b060f7814cdd202ca0999510934f8269cdd533b217",
            "sha256:d2c970eb98122ee7656e1ef8c89e3890b41dbc955a945b9f211727088cd6baee",
            "sha256:af73093750d9a4511a9d07cc7a77c69d30a708544e2e41a962190e4295fd914d",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:578b4717238910bb1918499499330f4bb6a22223de36618f0bbbe98a8bb773f0",
            "sha256:e3559277697354358e053e6b7ec2a70e36c3b35005dae03697020b84c42edbba",
            "sha256:0f8b6a1ea52a7830f0d2f362ce0be54c378e955687dec7c953c1d6c751cdda27",
            "sha256:dc8fc088cd4a871436942708c53f3d7d98654989080f313d5bf5d278b6207ff0",
            "sha256:0eaa7f27ce9e2ee349069a4a4ba76b4a404815f83ce061f9f89b7f661cf77ca4",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:b81f0840a3a1032ec3dc9a0407db6f274ee6b35daee63fa8167cb9f54f8ca767",
            "sha256:7e72c2b0f0a57749c1f9dae640c238f13d5d9b682941c160510fc70a68701d08",
            "sha256:547a6c80a5310d8b191bebc342a81fc71b47afbb67dfe51d66258c2ddaee1ba1",
            "sha256:667c2c37be3ce009ea3e6571fdc02cf53ece48a93e806f908849e5518e3adcd3",
            "sha256:0ebda99c7a0d9dde62c6d50e3d413146bb8ad29672be0fa8ec6a9b5cd632a974",
            "sha256:643aae16026972d465b5aea7a073b843c456f9617eb960904e221db7e8a037c7",
            "sha256:ed03caf2e2792cdbbfe41770cb62e83ed6320b23fe35be26e2af2b9a06f55e55",
            "sha256:da60d21bcbe8f388f030ebb11ec72fe7efd6c954721da49d18bcfc12ab34f82d",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:e714eb3e583541923a9691b353b53c7605482fe2276c393d197154b7835f2c8e",
            "sha256:ba88392f59cf1063e057661592c85da0f237cf832d9e716895dca04ca07bce0a",
            "sha256:7952831b3e4c0a9cae083cfe3989667f7c25aaf49962a1da6a378268d9912213",
            "sha256:dad05a96ea1a66e989e96c22553d429c03c6dedf69999d2890726beb4655dd7d",
            "sha256:14c34f2ec5a8a396a0ce4e116733a961b8555892502e00d20976b36bad264100",
            "sha256:c552db33fab3e9bd2942dd399e8df2a7bfc68a7fa08a6c838f83123aa8f4b349",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:dc6056122a4572c46f16a9adf29a080f0f7d096c738873351cccdb70581a0a5b",
            "sha256:49328bd8251dbeac505f7c81f8ecb03d2fd8da8016d39816b2755966a81342ec",
            "sha256:805541d6ec1b3b2f056245915d935d0d955e8ac3caf6c806a69baa7ac81fa424",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:a20b476ef00ca463bbe0821aea1b0664e85bd290999617910b0b6a621b6aaff5",
            "sha256:22c27035450d4141da9ed5e293cfcb00815facd473e2bd5260040375cc9d5690",
            "sha256:76c82e3d53e219ec2f5404ad86ae8ffa2f83303378b58f121e2f08afd2804a26",
            "sha256:b3c6ebb6998f742f952b0e5e84acc98e32ebc9ba970f8f9e449c26f9f9f24e9f",
            "sha256:2c5103ddc1bee7274698ef444ef57908a9c143fd979360f7d5f0a0a1d3066881",
            "sha256:5528d991e935a7dea404fd06c06f8d880b421ae3a5286123b866c25442819adb",
            "sha256:bdc5b11f77a58fc73f7bb339d33c3f0e04137b14c0bda9303aceb1b5ae806f90",
            "sha256:5a9c78fbcde0d9e60363b61981aca7906e760bc7f3c880219af5fe736f7537c3",
            "sha256:06f30efe3ca0a927fbfb7d1be204330efaf7cc8ab6cf54e8dc2433c47d866f62",
            "sha256:de852dc2cfca70ca03f22eda5538396556fcc4cc8e36185bf8e426a9505a8e8e",
            "sha256:38f1bbdcfb353a5ff159afc6db2a3074c033f09a987145b4a63b2e99a479e4e7",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:4ff57dbf91a7d28e44a9ab8b2d4ac5e79313221d6232375886e8d346d9fc2c88",
            "sha256:cb6adc877b41f9f3a01cbe6b88b73a8b79aebe068e415bb59502e8524d2f4e87",
            "sha256:91a7bf77d5585dc2101746ac2031bf47b256e3bd7f75738565a7e9cb9ae1a183",
            "sha256:dc2de9b326f8200a475f57b520188f245644db1a9c0725fc098de428280a6507",
            "sha256:77d9610f0360fea2fa30c3224dd1223be03fdef384c3d54edc7ca4867df962c6",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2026-01-28T01:11:17.57943728+08:00"
    }
}
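Based on the RepoTags, ExposedPorts, WorkingDir, and Entrypoint above, a typical pull-and-run flow looks like the sketch below; the port and volume mappings are illustrative, and --gpus all assumes the NVIDIA Container Toolkit is installed on the host:

# Pull from the domestic mirror and restore the upstream tag
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:25.12-py3 nvcr.io/nvidia/pytorch:25.12-py3

# Start an interactive GPU session; 8888/6006 match JUPYTER_PORT/TENSORBOARD_PORT, /workspace is the
# image's WorkingDir, and /opt/nvidia/nvidia_entrypoint.sh runs automatically as the entrypoint.
docker run --gpus all -it --rm -p 8888:8888 -p 6006:6006 -v "$PWD":/workspace nvcr.io/nvidia/pytorch:25.12-py3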

More Versions

Image                                        Platform     Source     Size     Synced            Views
docker.io/nvcr.io/nvidia/pytorch:24.01-py3   linux/amd64  docker.io  22.02GB  2024-09-20 00:38  2235
docker.io/nvcr.io/nvidia/pytorch:22.12-py3   linux/amd64  docker.io  18.27GB  2024-10-17 00:56  1476
docker.io/nvcr.io/nvidia/pytorch:23.04-py3   linux/amd64  docker.io  20.38GB  2024-10-18 01:26  1119
docker.io/nvcr.io/nvidia/pytorch:24.06-py3   linux/amd64  docker.io  19.15GB  2024-10-22 00:29  1932
docker.io/nvcr.io/nvidia/pytorch:21.11-py3   linux/amd64  docker.io  14.47GB  2024-10-22 10:38  1456
docker.io/nvcr.io/nvidia/pytorch:24.07-py3   linux/amd64  docker.io  20.19GB  2025-01-09 00:29  1861
docker.io/nvcr.io/nvidia/pytorch:24.02-py3   linux/amd64  docker.io  22.21GB  2025-02-23 20:44  2563
docker.io/nvcr.io/nvidia/pytorch:24.05-py3   linux/amd64  docker.io  18.78GB  2025-03-18 01:37  1049
docker.io/nvcr.io/nvidia/pytorch:24.11-py3   linux/amd64  docker.io  21.77GB  2025-04-04 03:43  1249
docker.io/nvcr.io/nvidia/pytorch:24.10-py3   linux/amd64  docker.io  21.03GB  2025-04-04 03:50  1601
docker.io/nvcr.io/nvidia/pytorch:24.12-py3   linux/amd64  docker.io  21.66GB  2025-04-11 02:13  1489
docker.io/nvcr.io/nvidia/pytorch:25.04-py3   linux/amd64  docker.io  24.66GB  2025-05-23 01:14  1646
docker.io/nvcr.io/nvidia/pytorch:23.09-py3   linux/amd64  docker.io  22.01GB  2025-05-29 03:30  535
docker.io/nvcr.io/nvidia/pytorch:25.04-py3   linux/arm64  docker.io  22.42GB  2025-06-04 07:57  1236
docker.io/nvcr.io/nvidia/pytorch:25.05-py3   linux/amd64  docker.io  25.72GB  2025-06-04 09:16  2105
docker.io/nvcr.io/nvidia/pytorch:24.03-py3   linux/arm64  docker.io  17.47GB  2025-06-12 06:11  879
docker.io/nvcr.io/nvidia/pytorch:25.12-py3   linux/amd64  docker.io  20.39GB  2026-01-28 01:35  20