docker.io/nvcr.io/nvidia/pytorch:24.01-py3 linux/amd64

docker.io/nvcr.io/nvidia/pytorch:24.01-py3 - China mirror download source
Image description: NVIDIA PyTorch Docker Image

This is a Docker container image built on the PyTorch framework that provides a complete deep learning environment. The image ships PyTorch 2.2.0a0 (NVIDIA release 24.01) together with the required dependencies such as CUDA and cuDNN.

With this image you can easily set up a deep learning workstation locally and run experiments and development for machine learning and computer vision tasks.

In addition, the image supports GPU acceleration: with NVIDIA's CUDA and cuDNN stack, PyTorch performance and efficiency improve significantly.
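A minimal sketch of verifying GPU access from the container, assuming the image has already been pulled (for example via the mirror commands below) and that the NVIDIA Container Toolkit is installed on the host:

docker run --rm --gpus all nvcr.io/nvidia/pytorch:24.01-py3 \
  python -c 'import torch; print(torch.__version__, torch.cuda.is_available())'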

Source image docker.io/nvcr.io/nvidia/pytorch:24.01-py3
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3
Image ID sha256:8470a68886ff2f480d76858d930fb72d0e680e69276fc1d068fe8b6625b4cb9f
Image tag 24.01-py3
Size 22.02GB
Source registry docker.io
CMD
Entrypoint /opt/nvidia/nvidia_entrypoint.sh
Working directory /workspace
OS/Platform linux/amd64
Image created 2024-01-25T05:13:50.47453137Z
Synced at 2024-09-20 00:38
Updated at 2024-09-20 08:50
Exposed ports
6006/tcp 8888/tcp
Environment variables
PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 CUDA_CACHE_DISABLE=1 NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS= _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0 NCCL_VERSION=2.19.4 CUBLAS_VERSION=12.3.4.1 CUFFT_VERSION=11.0.12.1 CURAND_VERSION=10.3.4.107 CUSPARSE_VERSION=12.2.0.103 CUSOLVER_VERSION=11.5.4.101 CUTENSOR_VERSION=2.0.0.7 NPP_VERSION=12.2.3.2 NVJPEG_VERSION=12.3.0.81 CUDNN_VERSION=8.9.7.29+cuda12.2 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.11 NSIGHT_SYSTEMS_VERSION=2023.4.1.97 NSIGHT_COMPUTE_VERSION=2023.3.1.1 DALI_VERSION=1.33.0 DALI_BUILD=11414174 POLYGRAPHY_VERSION=0.49.1 TRANSFORMER_ENGINE_VERSION=1.2 LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video NVIDIA_PRODUCT_NAME=PyTorch GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 RDMACORE_VERSION=39.0 OPAL_PREFIX=/opt/hpcx/ompi OMPI_MCA_coll_hcoll_enable=0 LIBRARY_PATH=/usr/local/cuda/lib64/stubs: PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 PYTORCH_VERSION=2.2.0a0+81ea7a4 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=24.01 PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python PYTHONIOENCODING=utf-8 LC_ALL=C.UTF-8 PIP_DEFAULT_TIMEOUT=100 NVM_DIR=/usr/local/nvm JUPYTER_PORT=8888 TENSORBOARD_PORT=6006 UCC_CL_BASIC_TLS=^sharp TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX PYTORCH_HOME=/opt/pytorch/pytorch CUDA_HOME=/usr/local/cuda TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1 USE_EXPERIMENTAL_CUDNN_V8_API=1 COCOAPI_VERSION=2.0+nv0.8.0 TORCH_CUDNN_V8_API_ENABLED=1 CUDA_MODULE_LOADING=LAZY NVIDIA_BUILD_ID=80741402
Image labels (a docker inspect sketch for reading these follows the table)
com.nvidia.build.id=80741402 com.nvidia.build.ref=3a8f39e58d71996b362a9358b971d42d695351fd com.nvidia.cublas.version=12.3.4.1 com.nvidia.cuda.version=9.0 com.nvidia.cudnn.version=8.9.7.29+cuda12.2 com.nvidia.cufft.version=11.0.12.1 com.nvidia.curand.version=10.3.4.107 com.nvidia.cusolver.version=11.5.4.101 com.nvidia.cusparse.version=12.2.0.103 com.nvidia.cutensor.version=2.0.0.7 com.nvidia.nccl.version=2.19.4 com.nvidia.npp.version=12.2.3.2 com.nvidia.nsightcompute.version=2023.3.1.1 com.nvidia.nsightsystems.version=2023.4.1.97 com.nvidia.nvjpeg.version=12.3.0.81 com.nvidia.pytorch.version=2.2.0a0+81ea7a4 com.nvidia.tensorrt.version=8.6.1.6+cuda12.0.1.011 com.nvidia.tensorrtoss.version=23.11 com.nvidia.volumes.needed=nvidia_driver org.opencontainers.image.ref.name=ubuntu org.opencontainers.image.version=22.04
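The entrypoint, exposed ports, environment variables, and labels listed above can be read back from a locally pulled copy of the image. A minimal sketch using docker inspect, assuming the image has been pulled and retagged as shown in the next section:

docker inspect --format '{{.Config.Entrypoint}} {{.Config.ExposedPorts}}' nvcr.io/nvidia/pytorch:24.01-py3
docker inspect --format '{{json .Config.Labels}}' nvcr.io/nvidia/pytorch:24.01-py3 | python -m json.tool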

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3  docker.io/nvcr.io/nvidia/pytorch:24.01-py3
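Once retagged, the image runs under its original name. A sketch that maps the exposed Jupyter (8888) and TensorBoard (6006) ports and mounts the current directory into /workspace; the host port choices and the jupyter lab invocation are illustrative:

docker run --rm -it --gpus all -p 8888:8888 -p 6006:6006 -v "$(pwd)":/workspace \
  nvcr.io/nvidia/pytorch:24.01-py3 jupyter lab --ip=0.0.0.0 --allow-root --no-browser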

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3  docker.io/nvcr.io/nvidia/pytorch:24.01-py3
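On a Kubernetes node that uses containerd as its runtime, the image normally has to live in the k8s.io namespace to be visible to the kubelet; a sketch assuming the default namespace layout:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3 docker.io/nvcr.io/nvidia/pytorch:24.01-py3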

Shell quick-replace command

sed -i 's#nvcr.io/nvidia/pytorch:24.01-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3#' deployment.yaml
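To apply the same substitution to every manifest under a directory, the sed call can be driven by grep; the ./manifests path is illustrative, and the command should be run only once, since the mirror reference still contains the original string and would be rewritten again:

grep -rl 'nvcr.io/nvidia/pytorch:24.01-py3' ./manifests | xargs sed -i 's#nvcr.io/nvidia/pytorch:24.01-py3#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3#'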

Image history

Size Created Layer
0.00B 2024-01-25 13:13:50 LABEL com.nvidia.build.ref=3a8f39e58d71996b362a9358b971d42d695351fd
0.00B 2024-01-25 13:13:50 ARG NVIDIA_BUILD_REF
0.00B 2024-01-25 13:13:50 LABEL com.nvidia.build.id=80741402
0.00B 2024-01-25 13:13:50 ENV NVIDIA_BUILD_ID=80741402
0.00B 2024-01-25 13:13:50 ARG NVIDIA_BUILD_ID
720.00B 2024-01-25 13:13:50 COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit
60.83KB 2024-01-25 13:13:50 RUN |1 PYVER=3.10 /bin/sh -c ln -sf ${_CUDA_COMPAT_PATH}/lib.real ${_CUDA_COMPAT_PATH}/lib && echo ${_CUDA_COMPAT_PATH}/lib > /etc/ld.so.conf.d/00-cuda-compat.conf && ldconfig && rm -f ${_CUDA_COMPAT_PATH}/lib # buildkit
0.00B 2024-01-25 13:13:50 ENV CUDA_MODULE_LOADING=LAZY
0.00B 2024-01-25 13:13:50 ENV TORCH_CUDNN_V8_API_ENABLED=1
260.21MB 2024-01-25 13:13:50 RUN |1 PYVER=3.10 /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing Transformer Engine in iGPU container until Version variable is set"; else pip install --no-cache-dir --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@release_v${TRANSFORMER_ENGINE_VERSION}; fi # buildkit
369.51MB 2024-01-25 13:09:03 RUN |1 PYVER=3.10 /bin/sh -c env MAX_JOBS=4 pip install flash-attn==2.0.4 # buildkit
0.00B 2024-01-25 12:52:00 ENV PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
0.00B 2024-01-25 12:52:00 ENV LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
43.02MB 2024-01-25 12:52:00 RUN |1 PYVER=3.10 /bin/sh -c pip install --no-cache-dir /opt/pytorch/torch_tensorrt/dist/*.whl # buildkit
0.00B 2024-01-25 12:48:58 ARG PYVER
148.31MB 2024-01-25 12:48:58 COPY torch_tensorrt/ /opt/pytorch/torch_tensorrt/ # buildkit
13.72MB 2024-01-25 12:48:57 RUN /bin/sh -c pip --version && python -c 'import sys; print(sys.platform)' && pip install --no-cache-dir nvidia-pyindex && if [ "${L4T}" = "1" ]; then pip install polygraphy; else pip install --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/sw-tensorrt-pypi/simple --no-cache-dir polygraphy==${POLYGRAPHY_VERSION}; fi && pip install --extra-index-url http://sqrl/dldata/pip-simple --trusted-host sqrl --no-cache-dir pytorch-quantization==2.1.2 # buildkit
0.00B 2024-01-25 12:48:42 ENV PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
6.10MB 2024-01-25 12:48:42 RUN /bin/sh -c set -x && URL=$(VERIFY=1 /nvidia/build-scripts/installTRT.sh | sed -n "s/^.*\(http.*\)tar.*$/\1/p")tar && FILE=$(wget -O - $URL | sed -n 's/^.*href="\(TensorRT[^"]*\)".*$/\1/p' | egrep -v "internal|safety") && wget $URL/$FILE -O - | tar -xz && PY=$(python -c 'import sys; print(str(sys.version_info[0])+str(sys.version_info[1]))') && pip install TensorRT-*/python/tensorrt-*-cp$PY*.whl && pip install TensorRT-*/graphsurgeon/graphsurgeon-*.whl && pip install TensorRT-*/uff/uff-*.whl && mv /usr/src/tensorrt /opt && ln -s /opt/tensorrt /usr/src/tensorrt && rm -r TensorRT-* && UFF_PATH=$(pip show uff | sed -n 's/Location: \(.*\)$/\1/p')/uff && sed -i 's/from tensorflow import GraphDef/from tensorflow.python import GraphDef/' $UFF_PATH/converters/tensorflow/conversion_helpers.py && chmod +x ${UFF_PATH}/bin/convert_to_uff.py && ln -sf ${UFF_PATH}/bin/convert_to_uff.py /usr/local/bin/convert-to-uff # buildkit
51.00MB 2024-01-25 12:47:59 RUN /bin/sh -c chmod -R a+w . # buildkit
34.89MB 2024-01-25 12:47:59 COPY tutorials tutorials # buildkit
15.96MB 2024-01-25 12:47:59 COPY examples examples # buildkit
2.07KB 2024-01-25 12:47:59 COPY docker-examples docker-examples # buildkit
2.05KB 2024-01-25 12:47:59 COPY NVREADME.md README.md # buildkit
0.00B 2024-01-25 12:47:59 WORKDIR /workspace
3.31GB 2024-01-25 12:47:58 RUN /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing rapids for L4T build." ; else find /rapids -name "*-Linux.tar.gz" -exec tar -C /usr --exclude="*.a" --exclude="bin/xgboost" --strip-components=1 -xvf {} \; && find /rapids -name "*.whl" ! -name "Pillow-*" ! -name "certifi-*" ! -name "protobuf-*" -exec pip install --no-cache-dir {} + && pip install --no-cache-dir networkx==2.6.3 && rm $(pip show xgboost | grep Location | awk '{print $2}')/xgboost/lib/libxgboost.so; fi # buildkit
201.84KB 2024-01-25 12:47:03 RUN /bin/sh -c pip install --no-cache-dir --disable-pip-version-check tabulate # buildkit
3.66MB 2024-01-25 12:47:01 RUN /bin/sh -c pip uninstall -y pillow && cd /tmp && git clone https://github.com/uploadcare/pillow-simd && cd pillow-simd && git fetch --all --tags --prune && git checkout tags/9.5.0 && sed -i 's/DEBUG = False/DEBUG = True/' setup.py && patch -p1 < /opt/pytorch/pil_10.0.0_CVE-2023-44271_for_pillow_simd_9.5.0.patch && if [[ $TARGETARCH = "amd64" ]] ; then CC="cc -mavx" pip install --no-cache-dir --disable-pip-version-check . ; fi && if [[ $TARGETARCH = "arm64" ]] ; then pip install --no-cache-dir --disable-pip-version-check . ; fi && rm -rf ../pillow-simd # buildkit
1.87GB 2024-01-25 12:46:40 RUN /bin/sh -c ( cd vision && CFLAGS="-g0" FORCE_CUDA=1 NVCC_APPEND_FLAGS="--threads 8" pip install --no-cache-dir --no-build-isolation --disable-pip-version-check . ) && ( cd vision && cmake -Bbuild -H. -GNinja -DWITH_CUDA=1 -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` && cmake --build build --target install && rm -rf build ) && ( cd fuser && pip install -r requirements.txt && python setup.py install && python setup.py clean) && ( cd apex && CFLAGS="-g0" NVCC_APPEND_FLAGS="--threads 8" pip install -v --no-build-isolation --no-cache-dir --disable-pip-version-check --config-settings "--build-option=--cpp_ext --cuda_ext --bnp --xentropy --deprecated_fused_adam --deprecated_fused_lamb --fast_multihead_attn --distributed_lamb --fast_layer_norm --transducer --distributed_adam --fmha --fast_bottleneck --nccl_p2p --peer_memory --permutation_search --focal_loss --fused_conv_bias_relu --index_mul_2d --cudnn_gbn --group_norm" . ) && ( cd data && pip install --no-build-isolation --no-cache-dir --disable-pip-version-check --no-deps -v . && rm -rf build ) && ( cd text && export TORCHDATA_VERSION="$(python -c 'import torchdata; print(torchdata.__version__)')" && pip install --no-build-isolation --no-cache-dir --disable-pip-version-check --no-deps -v . && unset TORCHDATA_VERSION ) && ( cd pytorch/third_party/onnx && pip uninstall typing -y && CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" pip install --no-build-isolation --no-cache-dir --disable-pip-version-check . ) # buildkit
2.21KB 2024-01-25 12:13:53 COPY singularity/ /.singularity.d/ # buildkit
90.85MB 2024-01-25 12:13:53 RUN /bin/sh -c export COCOAPI_TAG=$(echo ${COCOAPI_VERSION} | sed 's/^.*+n//') && pip install --disable-pip-version-check --no-cache-dir git+https://github.com/nvidia/cocoapi.git@${COCOAPI_TAG}#subdirectory=PythonAPI # buildkit
0.00B 2024-01-25 12:13:31 ENV COCOAPI_VERSION=2.0+nv0.8.0
609.53MB 2024-01-25 12:13:31 RUN /bin/sh -c if [ -z "${DALI_VERSION}" ] ; then echo "Not Installing DALI for L4T Build." ; else export DALI_PKG_SUFFIX="cuda${CUDA_VERSION%%.*}0" && pip install --disable-pip-version-check --no-cache-dir --extra-index-url https://developer.download.nvidia.com/compute/redist --extra-index-url http://sqrl/dldata/pip-dali${DALI_URL_SUFFIX:-} --trusted-host sqrl nvidia-dali-${DALI_PKG_SUFFIX}==${DALI_VERSION}; fi # buildkit
318.63MB 2024-01-25 12:13:21 RUN /bin/sh -c pip install --no-cache-dir /tmp/dist/*.whl # buildkit
946.13KB 2024-01-25 12:08:42 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir -v -r /opt/pytorch/pytorch/requirements.txt # buildkit
3.22GB 2024-01-25 12:08:39 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c mkdir -p /tmp/pip/ && cp /opt/transfer/torch*.whl /tmp/pip/. && pip install /tmp/pip/torch*.whl && patchelf --set-rpath '/usr/local/lib' /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_global_deps.so # buildkit
0.00B 2024-01-20 07:25:30 ENV USE_EXPERIMENTAL_CUDNN_V8_API=1
0.00B 2024-01-20 07:25:30 ENV TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
0.00B 2024-01-20 07:25:30 ENV CUDA_HOME=/usr/local/cuda
0.00B 2024-01-20 07:25:30 ENV PYTORCH_HOME=/opt/pytorch/pytorch
0.00B 2024-01-20 07:25:30 ENV TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
0.00B 2024-01-20 07:25:30 ENV UCC_CL_BASIC_TLS=^sharp
53.68MB 2024-01-20 07:25:30 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c OPENCV_VERSION=4.7.0 && cd / && wget -q -O - https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.tar.gz | tar -xzf - && cd /opencv-${OPENCV_VERSION} && cmake -GNinja -Bbuild -H. -DWITH_CUDA=OFF -DWITH_1394=OFF -DPYTHON3_PACKAGES_PATH="/usr/local/lib/python${PYVER}/dist-packages" -DBUILD_opencv_cudalegacy=OFF -DBUILD_opencv_stitching=OFF -DWITH_IPP=OFF -DWITH_PROTOBUF=OFF && cmake --build build --target install && cd modules/python/package && pip install --no-cache-dir --disable-pip-version-check -v . && rm -rf /opencv-${OPENCV_VERSION} # buildkit
0.00B 2024-01-20 07:22:44 EXPOSE map[6006/tcp:{}]
0.00B 2024-01-20 07:22:44 EXPOSE map[8888/tcp:{}]
0.00B 2024-01-20 07:22:44 ENV TENSORBOARD_PORT=6006
0.00B 2024-01-20 07:22:44 ENV JUPYTER_PORT=8888
427.00B 2024-01-20 07:22:44 COPY jupyter_notebook_config.py /usr/local/etc/jupyter/ # buildkit
161.44MB 2024-01-20 07:22:44 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --disable-pip-version-check --no-cache-dir git+https://github.com/cliffwoolley/jupyter_tensorboard.git@0.2.0+nv21.03 && mkdir -p $NVM_DIR && curl -Lo- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.2/install.sh | bash && source "$NVM_DIR/nvm.sh" && nvm install 16.20.2 node && jupyter labextension install jupyterlab_tensorboard && jupyter serverextension enable jupyterlab && pip install --no-cache-dir jupytext && jupyter labextension install jupyterlab-jupytext@1.2.2 && ( cd /usr/local/share/jupyter/lab/staging && npm prune --production ) && npm cache clean --force && rm -rf /usr/local/share/.cache && echo "source $NVM_DIR/nvm.sh" >> /etc/bash.bashrc && mv /root/.jupyter/jupyter_notebook_config.json /usr/local/etc/jupyter/ && jupyter lab clean # buildkit
0.00B 2024-01-20 07:20:50 ENV NVM_DIR=/usr/local/nvm
27.51KB 2024-01-20 07:20:50 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c PATCHED_FILE=$(python -c "from tensorboard.plugins.core import core_plugin as _; print(_.__file__)") && sed -i 's/^\( *"--bind_all",\)$/\1 default=True,/' "$PATCHED_FILE" && test $(grep '^ *"--bind_all", default=True,$' "$PATCHED_FILE" | wc -l) -eq 1 # buildkit
178.21MB 2024-01-20 07:20:49 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c git config --global url."https://github".insteadOf git://github && pip install --no-cache-dir notebook==6.4.10 jupyterlab==2.3.2 python-hostlist traitlets==5.9.0 && pip install --no-cache-dir tensorboard==2.9.0 # buildkit
2.13GB 2024-01-20 07:20:32 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir numpy==1.24.4 scipy==1.11.3 "PyYAML>=5.4.1" astunparse typing_extensions cffi spacy mock tqdm librosa==0.10.1 expecttest==0.1.3 hypothesis==5.35.1 xdoctest==1.0.2 pytest pytest-xdist pytest-rerunfailures pytest-shard pytest-flakefinder pybind11 Cython "regex>=2020.1.8" protobuf==4.24.4 && if [[ $TARGETARCH = "amd64" ]] ; then pip install --no-cache-dir mkl==2021.1.1 mkl-include==2021.1.1 mkl-devel==2021.1.1 ; find /usr/local/lib -maxdepth 1 -type f -regex '.*\/lib\(tbb\|mkl\).*\.so\($\|\.[0-9]*\.[0-9]*\)' -exec rm -v {} + ; fi # buildkit
0.00B 2024-01-20 07:19:11 ENV PIP_DEFAULT_TIMEOUT=100
0.00B 2024-01-20 07:19:11 ENV LC_ALL=C.UTF-8
0.00B 2024-01-20 07:19:11 ENV PYTHONIOENCODING=utf-8
2.16GB 2024-01-20 07:19:11 COPY . . # buildkit
0.00B 2024-01-05 05:43:56 WORKDIR /opt/pytorch
46.71MB 2024-01-05 05:43:56 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c curl "https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/OpenBLAS/0.3.24-$(uname -m)/OpenBLAS-0.3.24-$(uname -m).tar.gz" --output OpenBLAS.tar.gz && tar -xf OpenBLAS.tar.gz -C /usr/local/ && rm OpenBLAS.tar.gz # buildkit
69.55MB 2024-01-05 05:43:56 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir pip setuptools==68.2.2 && pip install --no-cache-dir cmake # buildkit
20.71MB 2024-01-05 05:43:56 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py && rm get-pip.py # buildkit
0.00B 2024-01-05 05:43:56 ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
198.20MB 2024-01-05 05:43:56 RUN |5 NVIDIA_PYTORCH_VERSION=24.01 PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c export PYSFX=`echo "$PYVER" | cut -c1-1` && export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y --no-install-recommends python$PYVER-dev python$PYSFX python$PYSFX-dev python$PYSFX-distutils python-is-python$PYSFX autoconf automake libatlas-base-dev libgoogle-glog-dev libbz2-dev libleveldb-dev liblmdb-dev libprotobuf-dev libsnappy-dev libtool nasm protobuf-compiler pkg-config unzip sox libsndfile1 libpng-dev libhdf5-103 libhdf5-dev gfortran rapidjson-dev ninja-build libedit-dev build-essential patchelf && rm -rf /var/lib/apt/lists/* # buildkit
0.00B 2024-01-05 05:43:56 ARG L4T=0
0.00B 2024-01-05 05:43:56 ARG PYVER=3.10
0.00B 2024-01-05 05:43:56 ARG TARGETARCH
0.00B 2024-01-05 05:43:56 LABEL com.nvidia.pytorch.version=2.2.0a0+81ea7a4
0.00B 2024-01-05 05:43:56 ENV PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4 PYTORCH_VERSION=2.2.0a0+81ea7a4 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=24.01
0.00B 2024-01-05 05:43:56 ARG PYTORCH_BUILD_VERSION
0.00B 2024-01-05 05:43:56 ARG NVIDIA_PYTORCH_VERSION
0.00B 2024-01-05 05:43:56 ENV NVIDIA_PRODUCT_NAME=PyTorch
0.00B 2024-01-04 10:39:17 ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
933.23MB 2024-01-04 10:39:17 RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 RDMACORE_VERSION=39.0 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 TARGETARCH=amd64 /bin/sh -c export DEVEL=1 BASE=0 && /nvidia/build-scripts/installNCU.sh && /nvidia/build-scripts/installCUDA.sh && /nvidia/build-scripts/installLIBS.sh && /nvidia/build-scripts/installNCCL.sh && /nvidia/build-scripts/installCUDNN.sh && /nvidia/build-scripts/installCUTENSOR.sh && /nvidia/build-scripts/installTRT.sh && /nvidia/build-scripts/installNSYS.sh && if [ -f "/tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch" ]; then patch -p0 < /tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch; fi && rm -f /tmp/cuda-*.patch # buildkit
1.49KB 2024-01-04 10:34:26 COPY cuda-*.patch /tmp # buildkit
0.00B 2024-01-04 10:34:26 ENV OMPI_MCA_coll_hcoll_enable=0
0.00B 2024-01-04 10:34:26 ENV OPAL_PREFIX=/opt/hpcx/ompi PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin
224.34MB 2024-01-04 10:34:26 RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 RDMACORE_VERSION=39.0 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 TARGETARCH=amd64 /bin/sh -c cd /nvidia && ( export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y --no-install-recommends libibverbs1 libibverbs-dev librdmacm1 librdmacm-dev libibumad3 libibumad-dev ibverbs-utils ibverbs-providers && rm -rf /var/lib/apt/lists/* && rm $(dpkg-query -L libibverbs-dev librdmacm-dev libibumad-dev | grep "\(\.so\|\.a\)$") ) && ( cd opt/gdrcopy/ && dpkg -i libgdrapi_*.deb ) && ( cp -r opt/hpcx /opt/ && cp etc/ld.so.conf.d/hpcx.conf /etc/ld.so.conf.d/ && ln -sf /opt/hpcx/ompi /usr/local/mpi && ln -sf /opt/hpcx/ucx /usr/local/ucx && sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf && sed -i 's/^\(btl = self\)$/#\1/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf ) && ldconfig # buildkit
0.00B 2024-01-04 10:34:26 ARG TARGETARCH
0.00B 2024-01-04 10:34:26 ENV GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 RDMACORE_VERSION=39.0
0.00B 2024-01-04 10:34:26 ARG OPENMPI_VERSION
0.00B 2024-01-04 10:34:26 ARG OPENUCX_VERSION
0.00B 2024-01-04 10:34:26 ARG MOFED_VERSION=5.4-rdmacore39.0
0.00B 2024-01-04 10:34:26 ARG RDMACORE_VERSION
0.00B 2024-01-04 10:34:26 ARG HPCX_VERSION
0.00B 2024-01-04 10:34:26 ARG GDRCOPY_VERSION
84.87MB 2024-01-04 10:34:19 RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y --no-install-recommends build-essential git libglib2.0-0 less libnl-route-3-200 libnl-3-dev libnl-route-3-dev libnuma-dev libnuma1 libpmi2-0-dev nano numactl openssh-client vim wget && rm -rf /var/lib/apt/lists/* # buildkit
148.72KB 2024-01-04 10:20:04 COPY NVIDIA_Deep_Learning_Container_License.pdf /workspace/ # buildkit
0.00B 2024-01-04 10:20:04 ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
0.00B 2024-01-04 10:20:04 ENV NVIDIA_PRODUCT_NAME=CUDA
14.53KB 2024-01-04 10:20:04 COPY entrypoint/ /opt/nvidia/ # buildkit
0.00B 2024-01-04 10:20:04 ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
0.00B 2024-01-04 10:20:04 ARG _LIBPATH_SUFFIX
46.00B 2024-01-04 10:20:04 RUN |21 CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.19.4 CUBLAS_VERSION=12.3.4.1 CUFFT_VERSION=11.0.12.1 CURAND_VERSION=10.3.4.107 CUSPARSE_VERSION=12.2.0.103 CUSOLVER_VERSION=11.5.4.101 CUTENSOR_VERSION=2.0.0.7 NPP_VERSION=12.2.3.2 NVJPEG_VERSION=12.3.0.81 CUDNN_VERSION=8.9.7.29+cuda12.2 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.11 NSIGHT_SYSTEMS_VERSION=2023.4.1.97 NSIGHT_COMPUTE_VERSION=2023.3.1.1 DALI_VERSION=1.33.0 DALI_BUILD=11414174 POLYGRAPHY_VERSION=0.49.1 TRANSFORMER_ENGINE_VERSION=1.2 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
13.39KB 2024-01-04 10:20:04 ADD docs.tgz / # buildkit
0.00B 2024-01-04 10:20:04 ENV DALI_VERSION=1.33.0 DALI_BUILD=11414174 POLYGRAPHY_VERSION=0.49.1 TRANSFORMER_ENGINE_VERSION=1.2
0.00B 2024-01-04 10:20:04 ARG TRANSFORMER_ENGINE_VERSION
0.00B 2024-01-04 10:20:04 ARG POLYGRAPHY_VERSION
0.00B 2024-01-04 10:20:04 ARG DALI_BUILD
0.00B 2024-01-04 10:20:04 ARG DALI_VERSION
0.00B 2024-01-04 10:20:04 LABEL com.nvidia.nccl.version=2.19.4 com.nvidia.cublas.version=12.3.4.1 com.nvidia.cufft.version=11.0.12.1 com.nvidia.curand.version=10.3.4.107 com.nvidia.cusparse.version=12.2.0.103 com.nvidia.cusolver.version=11.5.4.101 com.nvidia.cutensor.version=2.0.0.7 com.nvidia.npp.version=12.2.3.2 com.nvidia.nvjpeg.version=12.3.0.81 com.nvidia.cudnn.version=8.9.7.29+cuda12.2 com.nvidia.tensorrt.version=8.6.1.6+cuda12.0.1.011 com.nvidia.tensorrtoss.version=23.11 com.nvidia.nsightsystems.version=2023.4.1.97 com.nvidia.nsightcompute.version=2023.3.1.1
4.55GB 2024-01-04 10:20:04 RUN |17 CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.19.4 CUBLAS_VERSION=12.3.4.1 CUFFT_VERSION=11.0.12.1 CURAND_VERSION=10.3.4.107 CUSPARSE_VERSION=12.2.0.103 CUSOLVER_VERSION=11.5.4.101 CUTENSOR_VERSION=2.0.0.7 NPP_VERSION=12.2.3.2 NVJPEG_VERSION=12.3.0.81 CUDNN_VERSION=8.9.7.29+cuda12.2 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.11 NSIGHT_SYSTEMS_VERSION=2023.4.1.97 NSIGHT_COMPUTE_VERSION=2023.3.1.1 /bin/sh -c /nvidia/build-scripts/installNCCL.sh && /nvidia/build-scripts/installLIBS.sh && /nvidia/build-scripts/installCUDNN.sh && /nvidia/build-scripts/installTRT.sh && /nvidia/build-scripts/installNSYS.sh && /nvidia/build-scripts/installNCU.sh && /nvidia/build-scripts/installCUTENSOR.sh # buildkit
0.00B 2024-01-04 10:17:25 ENV NCCL_VERSION=2.19.4 CUBLAS_VERSION=12.3.4.1 CUFFT_VERSION=11.0.12.1 CURAND_VERSION=10.3.4.107 CUSPARSE_VERSION=12.2.0.103 CUSOLVER_VERSION=11.5.4.101 CUTENSOR_VERSION=2.0.0.7 NPP_VERSION=12.2.3.2 NVJPEG_VERSION=12.3.0.81 CUDNN_VERSION=8.9.7.29+cuda12.2 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.11 NSIGHT_SYSTEMS_VERSION=2023.4.1.97 NSIGHT_COMPUTE_VERSION=2023.3.1.1
0.00B 2024-01-04 10:17:25 ARG NSIGHT_COMPUTE_VERSION
0.00B 2024-01-04 10:17:25 ARG NSIGHT_SYSTEMS_VERSION
0.00B 2024-01-04 10:17:25 ARG TRTOSS_VERSION
0.00B 2024-01-04 10:17:25 ARG TRT_VERSION
0.00B 2024-01-04 10:17:25 ARG CUDNN_VERSION
0.00B 2024-01-04 10:17:25 ARG NVJPEG_VERSION
0.00B 2024-01-04 10:17:25 ARG NPP_VERSION
0.00B 2024-01-04 10:17:25 ARG CUTENSOR_VERSION
0.00B 2024-01-04 10:17:25 ARG CUSOLVER_VERSION
0.00B 2024-01-04 10:17:25 ARG CUSPARSE_VERSION
0.00B 2024-01-04 10:17:25 ARG CURAND_VERSION
0.00B 2024-01-04 10:17:25 ARG CUFFT_VERSION
0.00B 2024-01-04 10:17:25 ARG CUBLAS_VERSION
0.00B 2024-01-04 10:17:25 ARG NCCL_VERSION
0.00B 2024-01-04 10:17:25 LABEL com.nvidia.volumes.needed=nvidia_driver com.nvidia.cuda.version=9.0
0.00B 2024-01-04 10:17:25 ENV _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0
58.45KB 2024-01-04 10:17:25 RUN |3 CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 JETPACK_HOST_MOUNTS= /bin/sh -c cp -vprd /nvidia/. / && patch -p0 < /etc/startup_scripts.patch && rm -f /etc/startup_scripts.patch # buildkit
449.28MB 2024-01-04 10:17:25 RUN |3 CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 JETPACK_HOST_MOUNTS= /bin/sh -c /nvidia/build-scripts/installCUDA.sh # buildkit
0.00B 2024-01-02 12:48:15 RUN |3 CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 JETPACK_HOST_MOUNTS= /bin/sh -c if [ -n "${JETPACK_HOST_MOUNTS}" ]; then echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && echo "/usr/lib/aarch64-linux-gnu/tegra-egl" >> /etc/ld.so.conf.d/nvidia-tegra.conf; fi # buildkit
0.00B 2024-01-02 12:48:15 ENV CUDA_VERSION=12.3.2.001 CUDA_DRIVER_VERSION=545.23.08 CUDA_CACHE_DISABLE=1 NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
0.00B 2024-01-02 12:48:15 ARG JETPACK_HOST_MOUNTS
0.00B 2024-01-02 12:48:15 ARG CUDA_DRIVER_VERSION
0.00B 2024-01-02 12:48:15 ARG CUDA_VERSION
316.56MB 2024-01-02 12:37:01 RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y --no-install-recommends apt-utils build-essential ca-certificates curl libncurses5 libncursesw5 patch wget rsync unzip jq gnupg libtcmalloc-minimal4 # buildkit
0.00B 2023-12-12 19:38:59 /bin/sh -c #(nop) CMD ["/bin/bash"]
77.85MB 2023-12-12 19:38:59 /bin/sh -c #(nop) ADD file:2b3b5254f38a790d40e31cb26155609f7fc99ef7bc99eae1e0d67fa9ae605f77 in /
0.00B 2023-12-12 19:38:57 /bin/sh -c #(nop) LABEL org.opencontainers.image.version=22.04
0.00B 2023-12-12 19:38:57 /bin/sh -c #(nop) LABEL org.opencontainers.image.ref.name=ubuntu
0.00B 2023-12-12 19:38:57 /bin/sh -c #(nop) ARG LAUNCHPAD_BUILD_ARCH
0.00B 2023-12-12 19:38:57 /bin/sh -c #(nop) ARG RELEASE
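The layer listing above corresponds to the output of docker history; a minimal sketch for reproducing it against a locally pulled copy of the image:

docker history --no-trunc --format '{{.Size}}\t{{.CreatedAt}}\t{{.CreatedBy}}' nvcr.io/nvidia/pytorch:24.01-py3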

Image information

{
    "Id": "sha256:8470a68886ff2f480d76858d930fb72d0e680e69276fc1d068fe8b6625b4cb9f",
    "RepoTags": [
        "nvcr.io/nvidia/pytorch:24.01-py3",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch:24.01-py3"
    ],
    "RepoDigests": [
        "nvcr.io/nvidia/pytorch@sha256:afd682405d620a620f61f38cb9d9bbc6a5230817699a48e9ed193546e81fb2ee",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nvcr.io/nvidia/pytorch@sha256:8ddc35d22fd7a42063a9f6b4026ff25c9a772b1e3af44f672b7df1ff58478498"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2024-01-25T05:13:50.47453137Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "6006/tcp": {},
            "8888/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin",
            "CUDA_VERSION=12.3.2.001",
            "CUDA_DRIVER_VERSION=545.23.08",
            "CUDA_CACHE_DISABLE=1",
            "NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=",
            "_CUDA_COMPAT_PATH=/usr/local/cuda/compat",
            "ENV=/etc/shinit_v2",
            "BASH_ENV=/etc/bash.bashrc",
            "SHELL=/bin/bash",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=9.0",
            "NCCL_VERSION=2.19.4",
            "CUBLAS_VERSION=12.3.4.1",
            "CUFFT_VERSION=11.0.12.1",
            "CURAND_VERSION=10.3.4.107",
            "CUSPARSE_VERSION=12.2.0.103",
            "CUSOLVER_VERSION=11.5.4.101",
            "CUTENSOR_VERSION=2.0.0.7",
            "NPP_VERSION=12.2.3.2",
            "NVJPEG_VERSION=12.3.0.81",
            "CUDNN_VERSION=8.9.7.29+cuda12.2",
            "TRT_VERSION=8.6.1.6+cuda12.0.1.011",
            "TRTOSS_VERSION=23.11",
            "NSIGHT_SYSTEMS_VERSION=2023.4.1.97",
            "NSIGHT_COMPUTE_VERSION=2023.3.1.1",
            "DALI_VERSION=1.33.0",
            "DALI_BUILD=11414174",
            "POLYGRAPHY_VERSION=0.49.1",
            "TRANSFORMER_ENGINE_VERSION=1.2",
            "LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility,video",
            "NVIDIA_PRODUCT_NAME=PyTorch",
            "GDRCOPY_VERSION=2.3",
            "HPCX_VERSION=2.16rc4",
            "MOFED_VERSION=5.4-rdmacore39.0",
            "OPENUCX_VERSION=1.15.0",
            "OPENMPI_VERSION=4.1.5rc2",
            "RDMACORE_VERSION=39.0",
            "OPAL_PREFIX=/opt/hpcx/ompi",
            "OMPI_MCA_coll_hcoll_enable=0",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs:",
            "PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4",
            "PYTORCH_VERSION=2.2.0a0+81ea7a4",
            "PYTORCH_BUILD_NUMBER=0",
            "NVIDIA_PYTORCH_VERSION=24.01",
            "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python",
            "PYTHONIOENCODING=utf-8",
            "LC_ALL=C.UTF-8",
            "PIP_DEFAULT_TIMEOUT=100",
            "NVM_DIR=/usr/local/nvm",
            "JUPYTER_PORT=8888",
            "TENSORBOARD_PORT=6006",
            "UCC_CL_BASIC_TLS=^sharp",
            "TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX",
            "PYTORCH_HOME=/opt/pytorch/pytorch",
            "CUDA_HOME=/usr/local/cuda",
            "TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1",
            "USE_EXPERIMENTAL_CUDNN_V8_API=1",
            "COCOAPI_VERSION=2.0+nv0.8.0",
            "TORCH_CUDNN_V8_API_ENABLED=1",
            "CUDA_MODULE_LOADING=LAZY",
            "NVIDIA_BUILD_ID=80741402"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/workspace",
        "Entrypoint": [
            "/opt/nvidia/nvidia_entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "com.nvidia.build.id": "80741402",
            "com.nvidia.build.ref": "3a8f39e58d71996b362a9358b971d42d695351fd",
            "com.nvidia.cublas.version": "12.3.4.1",
            "com.nvidia.cuda.version": "9.0",
            "com.nvidia.cudnn.version": "8.9.7.29+cuda12.2",
            "com.nvidia.cufft.version": "11.0.12.1",
            "com.nvidia.curand.version": "10.3.4.107",
            "com.nvidia.cusolver.version": "11.5.4.101",
            "com.nvidia.cusparse.version": "12.2.0.103",
            "com.nvidia.cutensor.version": "2.0.0.7",
            "com.nvidia.nccl.version": "2.19.4",
            "com.nvidia.npp.version": "12.2.3.2",
            "com.nvidia.nsightcompute.version": "2023.3.1.1",
            "com.nvidia.nsightsystems.version": "2023.4.1.97",
            "com.nvidia.nvjpeg.version": "12.3.0.81",
            "com.nvidia.pytorch.version": "2.2.0a0+81ea7a4",
            "com.nvidia.tensorrt.version": "8.6.1.6+cuda12.0.1.011",
            "com.nvidia.tensorrtoss.version": "23.11",
            "com.nvidia.volumes.needed": "nvidia_driver",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 22018632570,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/718098ef4bb7cc0426c0ce6547628ad088b960ac641dc1aafe387ce37f42702c/diff:/var/lib/docker/overlay2/7df0bf5f09e7a7fa01ba49633a644e3d86ec8f585b1b722273647e3a4627115d/diff:/var/lib/docker/overlay2/f95d820e6dbbd007814012d7b334ccf48fb4d5a6c3f2fc86fd428190ff279ed3/diff:/var/lib/docker/overlay2/d2feee48512c585ff092038f0ca76b82adce9646e5464171d0b5dbd288252e99/diff:/var/lib/docker/overlay2/1af72a718507c383f318281d0b5b38180ae536268729dad264fe7944c411a8c8/diff:/var/lib/docker/overlay2/6b651b85ca91b87d72e0b8e2c54e011e46c0e7ee1bbb9bf1ab95cc727e7a4474/diff:/var/lib/docker/overlay2/e43fc66d0f8d89002ce3e4bde03e5e467cfb1629e3831fe2fee20c6d7143c7ba/diff:/var/lib/docker/overlay2/fc9bbffcf1278a469130a9e610026545c92917e2ccdb1d2df2579f0aa9f4373f/diff:/var/lib/docker/overlay2/def4754c3a7f09321116fa826012540a5d2fe553161761ad2d384552fc0ea028/diff:/var/lib/docker/overlay2/c4b28232fc94b04309cd6cadc0f2e2c4f8d137c1981e66d85d22bc6cd6c8ac86/diff:/var/lib/docker/overlay2/9117c717a82c8e5255e629e7173af0cc62df4447ae38e7533744c6bd2fa45dc8/diff:/var/lib/docker/overlay2/0cb3c53fd3d52486cd024eb1bb4be1eb537939867b52e323386d089b7986a2b6/diff:/var/lib/docker/overlay2/6cd4195e8cd4a4cc6218a026aea58c0ea614c337d469f61e79049d5a671c7d9d/diff:/var/lib/docker/overlay2/fa6d88500cb0a4112aedbcfcc6c4a711ffa5e169ce137500508fc494efb3cdee/diff:/var/lib/docker/overlay2/89ac8889b346a42e4830879d93bee995dd828e6bf10275c8e43ce0b48dd43f54/diff:/var/lib/docker/overlay2/313bb8c01583462341c32ec17d2984c5799b0f491b333710181b9f2aae321810/diff:/var/lib/docker/overlay2/456e3ff00cca22e8a70d4daf602ae4bf64161abecffa061d411579b89ade103d/diff:/var/lib/docker/overlay2/a1e555a8c601e40f5eb306d11a94b2d783b018a0b1f86ff282bc0b7d0645a704/diff:/var/lib/docker/overlay2/f4f38727cdc33ff75f8ccd8397426e284edb1189f9e096d75d609b73266121e8/diff:/var/lib/docker/overlay2/bc5f939eb64e4da1aabc1ee9a0974523cd7bcbfadd322473f51c654135d857ca/diff:/var/lib/docker/overlay2/dbc85be046d02146ca9b1971fb7f46702259754ae3f3db4b6abfaac47bb45481/diff:/var/lib/docker/overlay2/01e5a398d742cf8c740817f69d1142a9116673b2fda4dd2fa98888ff70a4a7bf/diff:/var/lib/docker/overlay2/432d9c3ffc694ed29c38988168eab5313a77132e59cf3522fbc1e362e636f801/diff:/var/lib/docker/overlay2/bde648309d8bf17b2e5a631e09e48cec027d18a07a8dd381456dfefc71045817/diff:/var/lib/docker/overlay2/36cf57b1cec51e2d44a4ad824b71fb3e07b77c38ed7997bf4596ac11bd590d61/diff:/var/lib/docker/overlay2/9600749ad3026009021d55396bdb218c848e3b6e1f27d4be6ef4f381a6e5da83/diff:/var/lib/docker/overlay2/b4da622d738d0f531d549b48e558601f06a1e6d866d43f09476ddf1efb05b634/diff:/var/lib/docker/overlay2/b171fccd41c011d449a5d94e49cec7bd5a9cb1449d9412795d2e7211e7f852e2/diff:/var/lib/docker/overlay2/d9cc216258608ada8ee7fb4bac188f69953e0538afacb3f27313a77d468581d1/diff:/var/lib/docker/overlay2/57403ac4690d0dbd722268f8b48751f0a46950e03c481811eebedd3b9bfb9222/diff:/var/lib/docker/overlay2/182c085552c40bed5fab5b03a7a939e8871ffbef1ab8caebc0d2f8cc34e1b512/diff:/var/lib/docker/overlay2/09d04e165d054a6f81f62037a49511b70c2fe99cf27308b4dfc2ef077a22e3f7/diff:/var/lib/docker/overlay2/c3c61541d7d130e1ab61e5642735ae2905e5b95ab8e2f5a0938c7b8964d19029/diff:/var/lib/docker/overlay2/a20375f37266dcd7afe61854d443bd24fe56638556253ca85261ccbc52478d4c/diff:/var/lib/docker/overlay2/01bfa187e28fb7e43e97bdea5a036ed6defb5c21eb9af91a9977b705122d03b2/diff:/var/lib/docker/overlay2/dfade3ea181558dc331ddf45672aaa3245c749d61ab8d142c6b1ad0836fe19d3/diff:/var/lib/docker/overlay2/47ec20d5bdf57e43557282ce4f0aa8c2efba495d10fc59fd498ab6461b696245/diff:/var/lib/docker
/overlay2/4ce51ad7dabb34554af46190fb16766112bd9768ba57d2f0a2832d0df9594abc/diff:/var/lib/docker/overlay2/114fe78a93c36d12ee7d15604654915c01d2bcf5d22c98354737439a99ede236/diff:/var/lib/docker/overlay2/9b0ca97a1cb7e2147a432dc01f3a6f8789f697e371346953d8d72130d1c8bf46/diff:/var/lib/docker/overlay2/362c3222df8d1774bcd7c2582affae48e42bf42840131d0b50fb8ead885da553/diff:/var/lib/docker/overlay2/aba6146a566334189bcf5c556541b3884e005235052868f57f40a2fc3bedc668/diff:/var/lib/docker/overlay2/ee3e42818d9c28528214c21eb45366b408543e70209a315877ed637346dc43fa/diff:/var/lib/docker/overlay2/adc6879bd314012cc8a62fd2469bf91d3ea3cb03d73e0cddb8b414368cd5d0c8/diff:/var/lib/docker/overlay2/f2ba60f9022d07583fa44a8d2413a286fabd979d7d15782087ba796478c13bbe/diff:/var/lib/docker/overlay2/73891be647fca2fcc90387eab08cd622f7fb221b33659ab1bdb3265e0751dab2/diff:/var/lib/docker/overlay2/b98e5deae9b571a6225a45171fd4ee234e0ed094a5523248d69072bd6e85c4e8/diff:/var/lib/docker/overlay2/71cdcb7a026103a7e50a629f026e00a9da827dd7ff6b254e9f89058ba30f18b9/diff:/var/lib/docker/overlay2/24d47026511a09a8e1f14a78b0847810f35b905a90e2d2686eaab7ec60b6f13c/diff",
            "MergedDir": "/var/lib/docker/overlay2/090f26877f769958463645793abb2bb32fabf31b11f9ca3e07ad06cf4f11edb9/merged",
            "UpperDir": "/var/lib/docker/overlay2/090f26877f769958463645793abb2bb32fabf31b11f9ca3e07ad06cf4f11edb9/diff",
            "WorkDir": "/var/lib/docker/overlay2/090f26877f769958463645793abb2bb32fabf31b11f9ca3e07ad06cf4f11edb9/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:a1360aae5271bbbf575b4057cb4158dbdfbcae76698189b55fb1039bc0207400",
            "sha256:aad8fbafa1e9713a4cfad1deccfb8c8a0d06110b8f45f887368d380f6d80726e",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:a1d5f3691cf238734c1ef4728a8692c760efd720aef59d7ad5c294a3c4375ef4",
            "sha256:408dbdcce3b1f995d34f8fa23358475ac0152facee78e0e3b79eb4028b3819f9",
            "sha256:d922555f193d68b8ec4c4e75f69b13ecc539c70249d940f3d68aa3aa79eef568",
            "sha256:7fa243a59b176545b05707ff123cf4e7ed7e5944a9e32472c3ff4cd4b309f0cd",
            "sha256:967fc1edf1f6cdedc28558157571ed750a43312aec3db8e20e1e27faf956307f",
            "sha256:61f4c0fdd1780d8d35d18b6c831ed7cb5d35e1fafdd4e4105fbb29aa30ac7ee2",
            "sha256:ea7a603182231c7d9330cb24360e0a5dd181829c974e8f42c4cad2cc957a5718",
            "sha256:c272ef64f0f53f44a389b02d0af93bd1319a06305906871dbf65a789c8473d43",
            "sha256:5ca5b67aba1f197581b8b58dba67084f099c4825bb3d3e3aef188af732da450e",
            "sha256:94fb17ce577d6a4c3cb2a1b139be8292d791136ebd226b8864a785aa9e318652",
            "sha256:7b64623cadb715d200d512836ca83e46ca2842093846d806008aa4a69e9b4c6c",
            "sha256:3fc4574df7409e8e94aad0d4e249f67282e923512f293cd1be23db9c2ea51ab1",
            "sha256:bd7fb5f6570e20188a0dd8f4b1f3a45839cb257ccef2b8d5b3d62f18321ef0bd",
            "sha256:0277b7e58fa6ce4685bbf11f19ec6d62c64ab3e02dd4eac596ae55fb29a4f0eb",
            "sha256:c3f04d2806955db3368722919bb71b8aff5b40ab134e505926462da05d2f1321",
            "sha256:c630ef1b7b61d7a5073749de0d1c2e3c2c92574a2ac0be8e8de8c9b54060f934",
            "sha256:565ddb635a797172c7a4de83e499ac0ec036e3110c684dda7f22062124d644fc",
            "sha256:7d6bea944b144eda00d8b5ff942aa7af38f120a1e0a0f889e8183e67f5b4ca7a",
            "sha256:10880933e71506233a2b28e8c1679a0a96d5690da20076efb8a7549d9063f253",
            "sha256:2646186d85246d0f06c5a926db00ac8a245d4179c0e6af17c6cc3d9025e66d7e",
            "sha256:acd428170ab7c52df320cf1c999ec140fa54349bdc517cb4ad613f2b9c9f67f2",
            "sha256:37dc326733c197b2c5abf0c4767784206838dcc5b0a15d55323fee035b28d1a9",
            "sha256:213a49c786000b40836669d8c37a686665fed24031d0a590e63f940ab1d76aeb",
            "sha256:818822d7503f6e57fa0abdf07f2a9018ea00c93996f196b96db0dd9b434e708b",
            "sha256:7b43594788a26eeecb14d091a6fd8dbd3fb0feca9305195cbd870cab8908cabb",
            "sha256:bc1d39032f971c831defd95368d44f6039e4aae902e269124819064c7756cc06",
            "sha256:8ae576ba2e9421772581de1267897257be9d0ad15b30a2a64ee1f3f778acffa5",
            "sha256:1c09125232fc9f5dbdb506663a16cee7be1c848e732a49bfab64aa277d801fa7",
            "sha256:5c8f5b34bde3273a198e6dfc94a122e4fa179837ca55a813dd2c1aafe1100401",
            "sha256:09e62598d36f9afe1754e03d66627fb4dba69c525294ed4708b152c771d9c9bc",
            "sha256:1f5a1978101485d7fd25f65a7933bf1f61c2c8dba014e79f5cba3819ad452000",
            "sha256:0ea8f48013cae752ac80409b74e1a19a407584eccdee1fc5fa2103d49065336e",
            "sha256:2cc93dd0720d9f393de109b905fdebc55dcce49c0f5e6cd7430a98acc1a09494",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:ccb64ab424a87d498e3a244cbdfce873b16ab39a7fa94827feb01c9f8b3f8791",
            "sha256:cee9721d2e5dd9c919965cb6041f8bf4eab75d8574fd2a5e0ca9e33d3f9d3bda",
            "sha256:85f4236315a0941c1da862b4b09191b11b7507ef68c419bf53010e15241da264",
            "sha256:7e02e3f124abb962c20b8e4e710516a725883cadff579175639345435742669a",
            "sha256:a867e76178d115a64c7dd9124d0859e13b8eed61c9c64533c8793cfb396b3e4c",
            "sha256:fa2a013a2a9acf0095dde4812d7b09dd9f53edc1b899aa7f658f2def42b74c20",
            "sha256:47585a11ae4fb3e4123a32b2bdf4166dbef3da31b9ac0a6768423bc82953f406",
            "sha256:98a833959cd6619d8e3934eed44d394a461020c7e4ddcfe30bec36af556294ca",
            "sha256:ee8be49aec6b80da0380eb8c824941b47d3b6272380bb7a5ee7153de5a47020f",
            "sha256:6ab1e25b0f53f6800629908275257e6d112537c0a431509c44919323ef35b368",
            "sha256:023d19c1b22ba5f3b9bdfc2296d15c23876d5186af97e5f71092964b58245b32",
            "sha256:c9daaf1b703deac55ad973f4e7ecd3fb7ea7e38bb17dd59524c46b093bba3fb0",
            "sha256:6cbef91399c5b8c180ae552ab4d61eab8097fdaca1f17bb3bc04cb979adb1cd9"
        ]
    },
    "Metadata": {
        "LastTagTime": "2024-09-20T00:13:12.572127223+08:00"
    }
}
