docker.io/kenneth850511/llamafactory:latest linux/amd64

docker.io/kenneth850511/llamafactory:latest - China mirror download source

Note: this image is the latest tag; this site cannot guarantee that it is the most recent version.

Source image docker.io/kenneth850511/llamafactory:latest
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest
Image ID sha256:473f149c93198da556dac3418f8236064f75e6a77b17651c26b203da5cb500cc
Image tag latest
Size 29.21GB
Source registry docker.io
Project info Docker Hub page / project tags
CMD (none)
Entrypoint /opt/nvidia/nvidia_entrypoint.sh
Working directory /app
OS/Arch linux/amd64
Image created 2025-03-04T15:40:35.081376334+08:00
Synced 2025-03-05 02:04
Updated 2025-04-19 02:46
Exposed ports
6006/tcp 7860/tcp 8000/tcp 8888/tcp
Volume mounts
/app/data /app/output /root/.cache/huggingface /root/.cache/modelscope
Environment variables
PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 CUDA_CACHE_DISABLE=1 NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS= _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0 NCCL_VERSION=2.22.3 CUBLAS_VERSION=12.6.3.3 CUFFT_VERSION=11.3.0.4 CURAND_VERSION=10.3.7.77 CUSPARSE_VERSION=12.5.4.2 CUSPARSELT_VERSION=0.6.2.3 CUSOLVER_VERSION=11.7.1.2 CUTENSOR_VERSION=2.0.2.5 NPP_VERSION=12.3.1.54 NVJPEG_VERSION=12.3.3.54 CUDNN_VERSION=9.5.0.50 CUDNN_FRONTEND_VERSION=1.7.0 TRT_VERSION=10.5.0.18 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2024.6.1.90 NSIGHT_COMPUTE_VERSION=2024.3.2.3 DALI_VERSION=1.42.0 DALI_BUILD=18507157 POLYGRAPHY_VERSION=0.49.13 TRANSFORMER_ENGINE_VERSION=1.11 MODEL_OPT_VERSION=0.17.0 LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video NVIDIA_PRODUCT_NAME=PyTorch GDRCOPY_VERSION=2.3.1-1 HPCX_VERSION=2.20 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.17.0 OPENMPI_VERSION=4.1.7 RDMACORE_VERSION=39.0 OPAL_PREFIX=/opt/hpcx/ompi OMPI_MCA_coll_hcoll_enable=0 LIBRARY_PATH=/usr/local/cuda/lib64/stubs: NVIDIA_BUILD_ID=114410972 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 PYTORCH_VERSION=2.5.0a0+e000cf0 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=24.10 NVFUSER_BUILD_VERSION=f669fcf NVFUSER_VERSION=f669fcf PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python NVPL_LAPACK_MATH_MODE=PEDANTIC PYTHONIOENCODING=utf-8 LC_ALL=C.UTF-8 PIP_DEFAULT_TIMEOUT=100 JUPYTER_PORT=8888 TENSORBOARD_PORT=6006 UCC_CL_BASIC_TLS=^sharp TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX PYTORCH_HOME=/opt/pytorch/pytorch CUDA_HOME=/usr/local/cuda TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1 USE_EXPERIMENTAL_CUDNN_V8_API=1 COCOAPI_VERSION=2.0+nv0.8.0 TORCH_CUDNN_V8_API_ENABLED=1 CUDA_MODULE_LOADING=LAZY MAX_JOBS=4 FLASH_ATTENTION_FORCE_BUILD=TRUE VLLM_WORKER_MULTIPROC_METHOD=spawn GRADIO_SERVER_PORT=7860 API_PORT=8000
Image labels
com.docker.compose.project=docker-cuda com.docker.compose.service=llamafactory com.docker.compose.version=2.33.0 com.nvidia.build.id=114410972 com.nvidia.build.ref=3e3c067dd015e6d16d2cf59ac18e9f2e2466b68a com.nvidia.cublas.version=12.6.3.3 com.nvidia.cuda.version=9.0 com.nvidia.cudnn.version=9.5.0.50 com.nvidia.cufft.version=11.3.0.4 com.nvidia.curand.version=10.3.7.77 com.nvidia.cusolver.version=11.7.1.2 com.nvidia.cusparse.version=12.5.4.2 com.nvidia.cusparselt.version=0.6.2.3 com.nvidia.cutensor.version=2.0.2.5 com.nvidia.nccl.version=2.22.3 com.nvidia.npp.version=12.3.1.54 com.nvidia.nsightcompute.version=2024.3.2.3 com.nvidia.nsightsystems.version=2024.6.1.90 com.nvidia.nvjpeg.version=12.3.3.54 com.nvidia.pytorch.version=2.5.0a0+e000cf0 com.nvidia.tensorrt.version=10.5.0.18 com.nvidia.tensorrtoss.version= com.nvidia.volumes.needed=nvidia_driver org.opencontainers.image.ref.name=ubuntu org.opencontainers.image.version=22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest  docker.io/kenneth850511/llamafactory:latest
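Once the image has been re-tagged, it can be started directly. The lines below are a minimal run sketch based on the metadata above (entrypoint, exposed ports 7860/8000, declared volumes); the host-side paths and the final llamafactory-cli command are assumptions to adapt to your environment.

# Minimal sketch: GPU access, the Gradio (7860) and API (8000) ports, and the declared volume mounts
# (host paths are placeholders; llamafactory-cli webui is the usual LLaMA-Factory web UI command)
docker run -d --name llamafactory --gpus all \
  -p 7860:7860 -p 8000:8000 \
  -v "$PWD/data":/app/data \
  -v "$PWD/output":/app/output \
  -v "$HOME/.cache/huggingface":/root/.cache/huggingface \
  -v "$HOME/.cache/modelscope":/root/.cache/modelscope \
  docker.io/kenneth850511/llamafactory:latest \
  llamafactory-cli webui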

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest  docker.io/kenneth850511/llamafactory:latest
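On Kubernetes nodes the kubelet reads images from containerd's k8s.io namespace, so the pull and tag usually need the -n flag as well (a sketch; adjust to your node configuration):

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest docker.io/kenneth850511/llamafactory:latest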

Shell quick-replace command

sed -i 's#kenneth850511/llamafactory:latest#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest#' deployment.yaml
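If the image reference appears in several manifests, the same substitution can be applied in bulk and then verified; the ./manifests directory below is hypothetical. Note that the pattern also matches the already-replaced string, so run it only once per file.

# Replace the Docker Hub reference in every matching manifest, then check the resulting image fields
grep -rl 'kenneth850511/llamafactory:latest' ./manifests | xargs sed -i 's#kenneth850511/llamafactory:latest#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest#'
grep -rn 'image:' ./manifests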

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest  docker.io/kenneth850511/llamafactory:latest'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest  docker.io/kenneth850511/llamafactory:latest'
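After distribution, a quick ad-hoc check against the same inventory group confirms that every node has the re-tagged image (a sketch following the commands above):

# Verify the local tag exists on each node in the k8s group
ansible k8s -m shell -a 'ctr images ls -q | grep kenneth850511/llamafactory'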

Image build history


# 2025-03-04 15:40:35  0.00B Declare a port the container listens on at runtime
EXPOSE map[8000/tcp:{}]

# 2025-03-04 15:40:35  0.00B Set environment variable API_PORT
ENV API_PORT=8000

# 2025-03-04 15:40:35  0.00B Declare a port the container listens on at runtime
EXPOSE map[7860/tcp:{}]

# 2025-03-04 15:40:35  0.00B Set environment variable GRADIO_SERVER_PORT
ENV GRADIO_SERVER_PORT=7860

# 2025-03-04 15:40:35  0.00B Create mount points for persisting or sharing data
VOLUME [/root/.cache/huggingface /root/.cache/modelscope /app/data /app/output]
                        
# 2025-03-04 15:40:35  0.00B Run a command and create a new image layer
RUN |9 INSTALL_BNB=true INSTALL_VLLM=true INSTALL_DEEPSPEED=true INSTALL_FLASHATTN=true INSTALL_LIGER_KERNEL=true INSTALL_HQQ=true INSTALL_EETQ=true PIP_INDEX=https://pypi.org/simple HTTP_PROXY= /bin/sh -c if [ -n "$HTTP_PROXY" ]; then         unset http_proxy;         unset https_proxy;     fi # buildkit
                        
# 2025-03-04 15:40:33  593.11MB Run a command and create a new image layer
RUN |9 INSTALL_BNB=true INSTALL_VLLM=true INSTALL_DEEPSPEED=true INSTALL_FLASHATTN=true INSTALL_LIGER_KERNEL=true INSTALL_HQQ=true INSTALL_EETQ=true PIP_INDEX=https://pypi.org/simple HTTP_PROXY= /bin/sh -c pip uninstall -y transformer-engine flash-attn &&     if [ "$INSTALL_FLASHATTN" == "true" ]; then         pip uninstall -y ninja &&         if [ -n "$HTTP_PROXY" ]; then             pip install --proxy=$HTTP_PROXY ninja &&             pip install --proxy=$HTTP_PROXY --no-cache-dir flash-attn --no-build-isolation;         else             pip install ninja &&             pip install --no-cache-dir flash-attn --no-build-isolation;         fi;     fi # buildkit
                        
# 2025-03-04 13:36:41  40.85MB Run a command and create a new image layer
RUN |9 INSTALL_BNB=true INSTALL_VLLM=true INSTALL_DEEPSPEED=true INSTALL_FLASHATTN=true INSTALL_LIGER_KERNEL=true INSTALL_HQQ=true INSTALL_EETQ=true PIP_INDEX=https://pypi.org/simple HTTP_PROXY= /bin/sh -c pip uninstall -y transformer-engine flash-attn &&     if [ "$INSTALL_EETQ" == "true" ]; then         if [ -n "$HTTP_PROXY" ]; then             pip install --proxy=$HTTP_PROXY --no-cache-dir https://github.com/NetEase-FuXi/EETQ/releases/download/v1.0.1/EETQ-1.0.1-cp310-cp310-linux_x86_64.whl;         else             pip install --no-cache-dir https://github.com/NetEase-FuXi/EETQ/releases/download/v1.0.1/EETQ-1.0.1-cp310-cp310-linux_x86_64.whl;         fi;     fi # buildkit
                        
# 2025-03-04 13:36:16  7.03GB Run a command and create a new image layer
RUN |9 INSTALL_BNB=true INSTALL_VLLM=true INSTALL_DEEPSPEED=true INSTALL_FLASHATTN=true INSTALL_LIGER_KERNEL=true INSTALL_HQQ=true INSTALL_EETQ=true PIP_INDEX=https://pypi.org/simple HTTP_PROXY= /bin/sh -c EXTRA_PACKAGES="metrics";     if [ "$INSTALL_BNB" == "true" ]; then         EXTRA_PACKAGES="${EXTRA_PACKAGES},bitsandbytes";     fi;     if [ "$INSTALL_VLLM" == "true" ]; then         EXTRA_PACKAGES="${EXTRA_PACKAGES},vllm";     fi;     if [ "$INSTALL_DEEPSPEED" == "true" ]; then         EXTRA_PACKAGES="${EXTRA_PACKAGES},deepspeed";     fi;     if [ "$INSTALL_LIGER_KERNEL" == "true" ]; then         EXTRA_PACKAGES="${EXTRA_PACKAGES},liger-kernel";     fi;     if [ "$INSTALL_HQQ" == "true" ]; then         EXTRA_PACKAGES="${EXTRA_PACKAGES},hqq";     fi;     if [ -n "$HTTP_PROXY" ]; then         pip install --proxy=$HTTP_PROXY -e ".[$EXTRA_PACKAGES]" &&         pip install --no-cache-dir --proxy=$HTTP_PROXY unsloth;     else         pip install -e ".[$EXTRA_PACKAGES]" &&         pip install --no-cache-dir unsloth;     fi # buildkit
                        
# 2025-03-04 13:19:09  6.68MB Copy new files or directories into the container
COPY . /app # buildkit
                        
# 2025-02-26 00:24:25  504.96MB Run a command and create a new image layer
RUN |9 INSTALL_BNB=true INSTALL_VLLM=true INSTALL_DEEPSPEED=true INSTALL_FLASHATTN=true INSTALL_LIGER_KERNEL=true INSTALL_HQQ=true INSTALL_EETQ=true PIP_INDEX=https://pypi.org/simple HTTP_PROXY= /bin/sh -c pip config set global.index-url "$PIP_INDEX" &&     pip config set global.extra-index-url "$PIP_INDEX" &&     python -m pip install --upgrade pip &&     if [ -n "$HTTP_PROXY" ]; then         python -m pip install --proxy=$HTTP_PROXY -r requirements.txt;     else         python -m pip install -r requirements.txt;     fi # buildkit
                        
# 2025-02-26 00:21:15  494.00B Copy new files or directories into the container
COPY requirements.txt /app # buildkit
                        
# 2025-02-22 13:03:13  0.00B Run a command and create a new image layer
RUN |9 INSTALL_BNB=true INSTALL_VLLM=true INSTALL_DEEPSPEED=true INSTALL_FLASHATTN=true INSTALL_LIGER_KERNEL=true INSTALL_HQQ=true INSTALL_EETQ=true PIP_INDEX=https://pypi.org/simple HTTP_PROXY= /bin/sh -c if [ -n "$HTTP_PROXY" ]; then         echo "Configuring proxy...";         export http_proxy=$HTTP_PROXY;         export https_proxy=$HTTP_PROXY;     fi # buildkit
                        
# 2025-02-22 13:03:13  0.00B Set the working directory to /app
WORKDIR /app

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG HTTP_PROXY=

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG PIP_INDEX=https://pypi.org/simple

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_EETQ=true

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_HQQ=true

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_LIGER_KERNEL=true

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_FLASHATTN=true

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_DEEPSPEED=true

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_VLLM=true

# 2025-02-22 13:03:13  0.00B Define a build argument
ARG INSTALL_BNB=true

# 2025-02-22 13:03:13  0.00B Set environment variable VLLM_WORKER_MULTIPROC_METHOD
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn

# 2025-02-22 13:03:13  0.00B Set environment variable FLASH_ATTENTION_FORCE_BUILD
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE

# 2025-02-22 13:03:13  0.00B Set environment variable MAX_JOBS
ENV MAX_JOBS=4
                        
# 2024-10-04 05:46:33  0.00B Add metadata label
LABEL com.nvidia.build.ref=3e3c067dd015e6d16d2cf59ac18e9f2e2466b68a

# 2024-10-04 05:46:33  0.00B Define a build argument
ARG NVIDIA_BUILD_REF=3e3c067dd015e6d16d2cf59ac18e9f2e2466b68a

# 2024-10-04 05:46:33  0.00B Add metadata label
LABEL com.nvidia.build.id=114410972

# 2024-10-04 05:46:33  0.00B Set environment variable NVIDIA_BUILD_ID
ENV NVIDIA_BUILD_ID=114410972

# 2024-10-04 05:46:33  0.00B Define a build argument
ARG NVIDIA_BUILD_ID=114410972

# 2024-10-04 05:46:33  719.00B Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit
                        
# 2024-10-04 05:46:33  84.54KB Run a command and create a new image layer
RUN |7 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 PYVER=3.10 /bin/sh -c ln -sf ${_CUDA_COMPAT_PATH}/lib.real ${_CUDA_COMPAT_PATH}/lib  && echo ${_CUDA_COMPAT_PATH}/lib > /etc/ld.so.conf.d/00-cuda-compat.conf  && ldconfig  && rm -f ${_CUDA_COMPAT_PATH}/lib # buildkit
                        
# 2024-10-04 05:46:33  0.00B Set environment variable CUDA_MODULE_LOADING
ENV CUDA_MODULE_LOADING=LAZY

# 2024-10-04 05:46:33  0.00B Set environment variable TORCH_CUDNN_V8_API_ENABLED
ENV TORCH_CUDNN_V8_API_ENABLED=1
                        
# 2024-10-04 05:46:33  352.54MB Run a command and create a new image layer
RUN |7 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 PYVER=3.10 /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing Transformer Engine in iGPU container until Version variable is set"; else      NVTE_BUILD_THREADS_PER_JOB=8 pip install --no-cache-dir --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@release_v${TRANSFORMER_ENGINE_VERSION}; fi # buildkit
                        
# 2024-10-04 05:41:43  401.25MB Run a command and create a new image layer
RUN |7 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 PYVER=3.10 /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing Flash Attention in iGPU as it is a requirement for Transformer Engine"; else     total_mem_gb=$(grep MemTotal /proc/meminfo | awk '{print int($2 / 1024 / 1024)}');     max_jobs=$(( (total_mem_gb - 40) / 6 ));     max_jobs=$(( max_jobs < 4 ? 4 : max_jobs ));     max_jobs=$(( max_jobs > $(nproc) ? $(nproc) : max_jobs ));     echo "Using MAX_JOBS=${max_jobs} to build flash-attn";     env MAX_JOBS=$max_jobs pip install flash-attn==2.4.2; fi # buildkit
                        
# 2024-10-04 05:32:24  0.00B Set environment variable PATH
ENV PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
                        
# 2024-10-04 05:32:24  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
                        
# 2024-10-04 05:32:24  45.75MB Run a command and create a new image layer
RUN |7 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 PYVER=3.10 /bin/sh -c pip install --no-cache-dir /opt/pytorch/torch_tensorrt/dist/*.whl # buildkit
                        
# 2024-10-04 05:29:56  0.00B Define a build argument
ARG PYVER=3.10

# 2024-10-04 05:29:56  153.04MB Copy new files or directories into the container
COPY torch_tensorrt/ /opt/pytorch/torch_tensorrt/ # buildkit
                        
# 2024-10-04 05:29:55  60.61MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip --version && python -c 'import sys; print(sys.platform)'     && pip install --no-cache-dir nvidia-pyindex     && pip install --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/sw-tensorrt-pypi/simple --no-cache-dir polygraphy==0.49.12     && pip install --extra-index-url https://pypi.nvidia.com "nvidia-modelopt[torch]==${MODEL_OPT_VERSION}" # buildkit
                        
# 2024-10-04 05:29:43  0.00B Set environment variable PATH
ENV PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
                        
# 2024-10-04 05:29:43  6.50MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c set -x  && URL=$(VERIFY=1 /nvidia/build-scripts/installTRT.sh | sed -n "s/^.*\(http.*\)tar.*$/\1/p")tar  && FILE=$(wget -O - $URL | sed -n 's/^.*href="\(TensorRT[^"]*\)".*$/\1/p' | egrep -v "internal|safety")  && wget -q $URL/$FILE -O - | tar -xz  && PY=$(python -c 'import sys; print(str(sys.version_info[0])+str(sys.version_info[1]))')  && pip install TensorRT-*/python/tensorrt-*-cp$PY*.whl  && mv /usr/src/tensorrt /opt  && ln -s /opt/tensorrt /usr/src/tensorrt  && rm -r TensorRT-* # buildkit
                        
# 2024-10-04 05:28:35  51.00MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c chmod -R a+w . # buildkit
                        
# 2024-10-04 05:28:35  34.89MB Copy new files or directories into the container
COPY tutorials tutorials # buildkit

# 2024-10-04 05:28:34  15.96MB Copy new files or directories into the container
COPY examples examples # buildkit

# 2024-10-04 05:28:34  2.07KB Copy new files or directories into the container
COPY docker-examples docker-examples # buildkit

# 2024-10-04 05:28:34  2.05KB Copy new files or directories into the container
COPY NVREADME.md README.md # buildkit

# 2024-10-04 05:28:34  0.00B Set the working directory to /workspace
WORKDIR /workspace
                        
# 2024-10-04 05:28:34  3.07GB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c if [ "${L4T}" = "1" ]; then     echo "Not installing rapids for L4T build." ; else     find /rapids  -name "*-Linux.tar.gz" -exec     tar -C /usr --exclude="*.a" --exclude="bin/xgboost" --strip-components=1 -xvf {} \;  && find /rapids -name "*.whl"     ! -name "tornado-*"     ! -name "Pillow-*"     ! -name "certifi-*"     ! -name "protobuf-*" -exec     pip install --no-cache-dir {} + ; pip install numpy==1.24.4; fi # buildkit
                        
# 2024-10-04 05:27:56  201.84KB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir --disable-pip-version-check tabulate # buildkit
                        
# 2024-10-04 05:27:55  766.25MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c ( cd vision && export PYTORCH_VERSION=$(python -c "import torch; print(torch.__version__)") && CFLAGS="-g0" FORCE_CUDA=1 NVCC_APPEND_FLAGS="--threads 8" pip install --no-cache-dir --no-build-isolation --disable-pip-version-check . )  && ( cd vision && cmake -Bbuild -H. -GNinja -DWITH_CUDA=1 -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` && cmake --build build --target install && rm -rf build )  && ( cd fuser && pip install -r requirements.txt &&  python setup.py -version-tag=a0+${NVFUSER_VERSION} install && python setup.py clean && cp $(find /usr/local/lib/python3.10/dist-packages/ -name libnvfuser_codegen.so)  /usr/local/lib/python3.10/dist-packages/torch/lib/ )  && ( cd lightning-thunder && python setup.py install && rm -rf build)  && BUILD_OPTIONS="--cpp_ext --cuda_ext --bnp --xentropy --deprecated_fused_adam --deprecated_fused_lamb --fast_multihead_attn --distributed_lamb --fast_layer_norm --transducer --distributed_adam --fmha --permutation_search --focal_loss --fused_conv_bias_relu --index_mul_2d --cudnn_gbn --group_norm --gpu_direct_storage"  && if [ "${L4T}" != "1" ]; then BUILD_OPTIONS="--fast_bottleneck --nccl_p2p --peer_memory --nccl_allocator ${BUILD_OPTIONS}"; fi && ( cd apex && CFLAGS="-g0" NVCC_APPEND_FLAGS="--threads 8" pip install -v --no-build-isolation --no-cache-dir --disable-pip-version-check --config-settings "--build-option=${BUILD_OPTIONS}" . && rm -rf build )  && ( cd lightning-thunder && mkdir tmp && cd tmp && git clone -b v${CUDNN_FRONTEND_VERSION} --recursive --single-branch https://github.com/NVIDIA/cudnn-frontend.git cudnn_frontend && cd cudnn_frontend && pip install --no-build-isolation --no-cache-dir --disable-pip-version-check . && cd ../../ && rm -rf tmp )  && ( cd pytorch/third_party/onnx && pip uninstall typing -y && CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" pip install --no-build-isolation --no-cache-dir --disable-pip-version-check . ) # buildkit
                        
# 2024-10-04 04:52:50  2.21KB Copy new files or directories into the container
COPY singularity/ /.singularity.d/ # buildkit
                        
# 2024-10-04 04:52:50  79.29MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c export COCOAPI_TAG=$(echo ${COCOAPI_VERSION} | sed 's/^.*+n//')  && pip install --disable-pip-version-check --no-cache-dir git+https://github.com/nvidia/cocoapi.git@${COCOAPI_TAG}#subdirectory=PythonAPI # buildkit
                        
# 2024-10-04 04:52:30  0.00B Set environment variable COCOAPI_VERSION
ENV COCOAPI_VERSION=2.0+nv0.8.0
                        
# 2024-10-04 04:52:30  674.19MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c if [ -z "${DALI_VERSION}" ] ; then   echo "Not Installing DALI for L4T Build." ; else   export DALI_PKG_SUFFIX="cuda${CUDA_VERSION%%.*}0"   && pip install --disable-pip-version-check --no-cache-dir                 --extra-index-url https://developer.download.nvidia.com/compute/redist                 --extra-index-url http://sqrl/dldata/pip-dali${DALI_URL_SUFFIX:-} --trusted-host sqrl         nvidia-dali-${DALI_PKG_SUFFIX}==${DALI_VERSION}; fi # buildkit
                        
# 2024-10-04 04:52:12  570.57MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir /tmp/dist/*.whl # buildkit
                        
# 2024-10-04 04:52:04  11.50MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c cd pytorch && pip install --no-cache-dir -v -r /opt/pytorch/pytorch/requirements.txt # buildkit
                        
# 2024-10-04 04:52:02  1.95GB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install /opt/transfer/torch*.whl      && patchelf --set-rpath '/usr/local/lib' /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_global_deps.so # buildkit
                        
# 2024-10-04 04:51:40  0.00B Set environment variable USE_EXPERIMENTAL_CUDNN_V8_API
ENV USE_EXPERIMENTAL_CUDNN_V8_API=1

# 2024-10-04 04:51:40  0.00B Set environment variable TORCH_ALLOW_TF32_CUBLAS_OVERRIDE
ENV TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1

# 2024-10-04 04:51:40  0.00B Set environment variable CUDA_HOME
ENV CUDA_HOME=/usr/local/cuda

# 2024-10-04 04:51:40  0.00B Set environment variable PYTORCH_HOME
ENV PYTORCH_HOME=/opt/pytorch/pytorch

# 2024-10-04 04:51:40  0.00B Set environment variable TORCH_CUDA_ARCH_LIST
ENV TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX

# 2024-10-04 04:51:40  0.00B Set environment variable UCC_CL_BASIC_TLS
ENV UCC_CL_BASIC_TLS=^sharp
                        
# 2024-10-04 04:51:40  53.68MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c OPENCV_VERSION=4.7.0 &&     cd / &&     wget -q -O - https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.tar.gz | tar -xzf - &&     cd /opencv-${OPENCV_VERSION} &&     cmake -GNinja -Bbuild -H.           -DWITH_CUDA=OFF -DWITH_1394=OFF           -DPYTHON3_PACKAGES_PATH="/usr/local/lib/python${PYVER}/dist-packages"           -DBUILD_opencv_cudalegacy=OFF -DBUILD_opencv_stitching=OFF -DWITH_IPP=OFF -DWITH_PROTOBUF=OFF &&     cmake --build build --target install &&     cd modules/python/package &&     pip install --no-cache-dir --disable-pip-version-check -v . &&     rm -rf /opencv-${OPENCV_VERSION} # buildkit
                        
# 2024-10-04 04:49:10  0.00B Declare a port the container listens on at runtime
EXPOSE map[6006/tcp:{}]

# 2024-10-04 04:49:10  0.00B Declare a port the container listens on at runtime
EXPOSE map[8888/tcp:{}]

# 2024-10-04 04:49:10  0.00B Set environment variable TENSORBOARD_PORT
ENV TENSORBOARD_PORT=6006

# 2024-10-04 04:49:10  0.00B Set environment variable JUPYTER_PORT
ENV JUPYTER_PORT=8888

# 2024-10-04 04:49:10  248.00B Copy new files or directories into the container
COPY jupyter_config/settings.jupyterlab-settings /root/.jupyter/lab/user-settings/@jupyterlab/completer-extension/ # buildkit

# 2024-10-04 04:49:10  236.00B Copy new files or directories into the container
COPY jupyter_config/manager.jupyterlab-settings /root/.jupyter/lab/user-settings/@jupyterlab/completer-extension/ # buildkit

# 2024-10-04 04:49:10  554.00B Copy new files or directories into the container
COPY jupyter_config/jupyter_notebook_config.py /usr/local/etc/jupyter/ # buildkit
                        
# 2024-10-04 04:49:10  17.26MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir jupyterlab-tensorboard-pro jupytext     black isort  && mkdir -p /root/.jupyter/lab/user-settings/@jupyterlab/completer-extension/  && jupyter lab clean # buildkit
                        
# 2024-10-04 04:49:08  27.81KB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c PATCHED_FILE=$(python -c "from tensorboard.plugins.core import core_plugin as _; print(_.__file__)") &&     sed -i 's/^\( *"--bind_all",\)$/\1 default=True,/' "$PATCHED_FILE" &&     test $(grep '^ *"--bind_all", default=True,$' "$PATCHED_FILE" | wc -l) -eq 1 # buildkit
                        
# 2024-10-04 04:49:07  188.39MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c git config --global url."https://github".insteadOf git://github &&     pip install --no-cache-dir 'jupyterlab>=4.1.0,<5.0.0a0' notebook tensorboard==2.16.2     jupyterlab_code_formatter python-hostlist # buildkit
                        
# 2024-10-04 04:48:55  2.16GB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir         numpy==1.24.4         scipy==1.11.3         "PyYAML>=5.4.1"         astunparse         typing_extensions         cffi         spacy==3.7.5         mock         tqdm         librosa==0.10.1         expecttest==0.1.3         hypothesis==5.35.1         xdoctest==1.0.2         pytest==8.1.1         pytest-xdist         pytest-rerunfailures         pytest-shard         pytest-flakefinder         pybind11         Cython         "regex>=2020.1.8"         protobuf==4.24.4 &&     if [[ $TARGETARCH = "amd64" ]] ; then pip install --no-cache-dir mkl==2021.1.1 mkl-include==2021.1.1 mkl-devel==2021.1.1 ;     find /usr/local/lib -maxdepth 1 -type f -regex '.*\/lib\(tbb\|mkl\).*\.so\($\|\.[0-9]*\.[0-9]*\)' -exec rm -v {} + ; fi # buildkit
                        
# 2024-10-04 04:48:13  0.00B Set environment variable PIP_DEFAULT_TIMEOUT
ENV PIP_DEFAULT_TIMEOUT=100

# 2024-10-04 04:48:13  0.00B Set environment variable LC_ALL
ENV LC_ALL=C.UTF-8

# 2024-10-04 04:48:13  0.00B Set environment variable PYTHONIOENCODING
ENV PYTHONIOENCODING=utf-8
                        
# 2024-10-04 04:48:13  1.35GB Copy new files or directories into the container
COPY . . # buildkit

# 2024-10-04 04:48:04  0.00B Set the working directory to /opt/pytorch
WORKDIR /opt/pytorch

# 2024-10-04 04:48:04  0.00B Set environment variable NVPL_LAPACK_MATH_MODE
ENV NVPL_LAPACK_MATH_MODE=PEDANTIC
                        
# 2024-10-04 04:48:04  0.00B Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c if [ $TARGETARCH = "arm64" ]; then cd /opt &&     curl "https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/nvpl_slim_24.04/sbsa/nvpl_slim_24.04.tar" --output nvpl_slim_24.04.tar &&     tar -xf nvpl_slim_24.04.tar &&     cp -r nvpl_slim_24.04/lib/* /usr/local/lib &&     cp -r nvpl_slim_24.04/include/* /usr/local/include &&     rm -rf nvpl_slim_24.04.tar nvpl_slim_24.04 ; fi # buildkit
                        
# 2024-10-04 04:48:04  46.71MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c curl "https://gitlab-master.nvidia.com/api/v4/projects/105799/packages/generic/OpenBLAS/0.3.24-$(uname -m)/OpenBLAS-0.3.24-$(uname -m).tar.gz" --output OpenBLAS.tar.gz &&     tar -xf OpenBLAS.tar.gz -C /usr/local/ &&     rm OpenBLAS.tar.gz # buildkit
                        
# 2024-10-04 04:48:04  71.47MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir pip 'setuptools<71' &&     pip install --no-cache-dir cmake # buildkit
                        
# 2024-10-04 04:48:00  20.80MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c curl -O https://bootstrap.pypa.io/get-pip.py &&     python get-pip.py &&     rm get-pip.py # buildkit
                        
# 2024-10-04 04:47:56  0.00B Set environment variable PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION
ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
                        
# 2024-10-04 04:47:56  198.74MB Run a command and create a new image layer
RUN |6 NVIDIA_PYTORCH_VERSION=24.10 PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 NVFUSER_BUILD_VERSION=f669fcf TARGETARCH=amd64 PYVER=3.10 L4T=0 /bin/sh -c export PYSFX=`echo "$PYVER" | cut -c1-1` &&     export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -y --no-install-recommends         python$PYVER-dev         python$PYSFX         python$PYSFX-dev         python$PYSFX-distutils         python-is-python$PYSFX         autoconf         automake         libatlas-base-dev         libgoogle-glog-dev         libbz2-dev         libc-ares2         libre2-9         libleveldb-dev         liblmdb-dev         libprotobuf-dev         libsnappy-dev         libtool         nasm         protobuf-compiler         pkg-config         unzip         sox         libsndfile1         libpng-dev         libhdf5-103         libhdf5-dev         gfortran         rapidjson-dev         ninja-build         libedit-dev         build-essential         patchelf      && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-10-04 04:47:56  0.00B Define a build argument
ARG L4T=0

# 2024-10-04 04:47:56  0.00B Define a build argument
ARG PYVER=3.10

# 2024-10-04 04:47:56  0.00B Define a build argument
ARG TARGETARCH=amd64

# 2024-10-04 04:47:56  0.00B Add metadata label
LABEL com.nvidia.pytorch.version=2.5.0a0+e000cf0

# 2024-10-04 04:47:56  0.00B Set environment variables NVFUSER_BUILD_VERSION NVFUSER_VERSION
ENV NVFUSER_BUILD_VERSION=f669fcf NVFUSER_VERSION=f669fcf

# 2024-10-04 04:47:56  0.00B Set environment variables PYTORCH_BUILD_VERSION PYTORCH_VERSION PYTORCH_BUILD_NUMBER NVIDIA_PYTORCH_VERSION
ENV PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0 PYTORCH_VERSION=2.5.0a0+e000cf0 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=24.10
                        
# 2024-10-04 04:47:56  0.00B Define a build argument
ARG NVFUSER_BUILD_VERSION=f669fcf

# 2024-10-04 04:47:56  0.00B Define a build argument
ARG PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0

# 2024-10-04 04:47:56  0.00B Define a build argument
ARG NVIDIA_PYTORCH_VERSION=24.10

# 2024-10-04 04:47:56  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=PyTorch
                        
# 2024-10-04 02:32:38  0.00B Add metadata label
LABEL com.nvidia.build.ref=bb0f0608792391d35d9686e6d86a7cb319bddadc

# 2024-10-04 02:32:38  0.00B Add metadata label
LABEL com.nvidia.build.id=114391310

# 2024-10-04 02:32:38  0.00B Set environment variable NVIDIA_BUILD_ID
ENV NVIDIA_BUILD_ID=114391310

# 2024-10-04 02:32:38  0.00B Define a build argument
ARG NVIDIA_BUILD_ID=114391310

# 2024-10-04 02:32:38  0.00B Define a build argument
ARG NVIDIA_BUILD_REF=bb0f0608792391d35d9686e6d86a7cb319bddadc

# 2024-10-04 02:32:38  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
                        
# 2024-10-04 02:32:38  1.01GB Run a command and create a new image layer
RUN |7 GDRCOPY_VERSION=2.3.1-1 HPCX_VERSION=2.20 RDMACORE_VERSION=39.0 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.17.0 OPENMPI_VERSION=4.1.7 TARGETARCH=amd64 /bin/sh -c export DEVEL=1 BASE=0  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUDA.sh  && /nvidia/build-scripts/installLIBS.sh  && if [ ! -f /etc/ld.so.conf.d/nvidia-tegra.conf ]; then /nvidia/build-scripts/installNCCL.sh; fi  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installCUTENSOR.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installCUSPARSELT.sh  && if [ -f "/tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch" ]; then patch -p0 < /tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch; fi  && rm -f /tmp/cuda-*.patch # buildkit
                        
# 2024-10-04 02:26:58  1.49KB Copy new files or directories into the container
COPY cuda-*.patch /tmp # buildkit

# 2024-10-04 02:26:58  0.00B Set environment variable OMPI_MCA_coll_hcoll_enable
ENV OMPI_MCA_coll_hcoll_enable=0

# 2024-10-04 02:26:58  0.00B Set environment variables OPAL_PREFIX PATH
ENV OPAL_PREFIX=/opt/hpcx/ompi PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin
                        
# 2024-10-04 02:26:58  229.68MB Run a command and create a new image layer
RUN |7 GDRCOPY_VERSION=2.3.1-1 HPCX_VERSION=2.20 RDMACORE_VERSION=39.0 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.17.0 OPENMPI_VERSION=4.1.7 TARGETARCH=amd64 /bin/sh -c cd /nvidia  && ( export DEBIAN_FRONTEND=noninteractive        && apt-get update                            && apt-get install -y --no-install-recommends              libibverbs1                                  libibverbs-dev                               librdmacm1                                   librdmacm-dev                                libibumad3                                   libibumad-dev                                ibverbs-utils                                ibverbs-providers                     && rm -rf /var/lib/apt/lists/*               && rm $(dpkg-query -L                                    libibverbs-dev                               librdmacm-dev                                libibumad-dev                            | grep "\(\.so\|\.a\)$")          )                                            && ( cd opt/gdrcopy/                              && dpkg -i libgdrapi_*.deb                   )                                         && ( cp -r opt/hpcx /opt/                                         && cp etc/ld.so.conf.d/hpcx.conf /etc/ld.so.conf.d/          && ln -sf /opt/hpcx/ompi /usr/local/mpi                      && ln -sf /opt/hpcx/ucx  /usr/local/ucx                      && sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf         && sed -i 's/^\(btl = self\)$/#\1/'                             /opt/hpcx/ompi/etc/openmpi-mca-params.conf       )                                                         && ldconfig # buildkit
                        
# 2024-10-04 02:26:58  0.00B Define a build argument
ARG TARGETARCH=amd64

# 2024-10-04 02:26:58  0.00B Set environment variables GDRCOPY_VERSION HPCX_VERSION MOFED_VERSION OPENUCX_VERSION OPENMPI_VERSION RDMACORE_VERSION
ENV GDRCOPY_VERSION=2.3.1-1 HPCX_VERSION=2.20 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.17.0 OPENMPI_VERSION=4.1.7 RDMACORE_VERSION=39.0

# 2024-10-04 02:26:58  0.00B Define a build argument
ARG OPENMPI_VERSION=4.1.7

# 2024-10-04 02:26:58  0.00B Define a build argument
ARG OPENUCX_VERSION=1.17.0

# 2024-10-04 02:26:58  0.00B Define a build argument
ARG MOFED_VERSION=5.4-rdmacore39.0

# 2024-10-04 02:26:58  0.00B Define a build argument
ARG RDMACORE_VERSION=39.0

# 2024-10-04 02:26:58  0.00B Define a build argument
ARG HPCX_VERSION=2.20

# 2024-10-04 02:26:58  0.00B Define a build argument
ARG GDRCOPY_VERSION=2.3.1-1
                        
# 2024-10-04 02:26:51  84.91MB Run a command and create a new image layer
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         build-essential         git         libglib2.0-0         less         libnl-route-3-200         libnl-3-dev         libnl-route-3-dev         libnuma-dev         libnuma1         libpmi2-0-dev         nano         numactl         openssh-client         vim         wget  && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-10-04 02:11:59  148.72KB Copy new files or directories into the container
COPY NVIDIA_Deep_Learning_Container_License.pdf /workspace/ # buildkit

# 2024-10-04 02:11:59  0.00B Configure the command to run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]

# 2024-10-04 02:11:59  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA

# 2024-10-04 02:11:59  14.85KB Copy new files or directories into the container
COPY entrypoint/ /opt/nvidia/ # buildkit
                        
# 2024-10-04 02:11:59  0.00B Set environment variables PATH LD_LIBRARY_PATH NVIDIA_VISIBLE_DEVICES NVIDIA_DRIVER_CAPABILITIES
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
                        
# 2024-10-04 02:11:59  0.00B Define a build argument
ARG _LIBPATH_SUFFIX=
                        
# 2024-10-04 02:11:59  46.00B Run a command and create a new image layer
RUN |24 CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.22.3 CUBLAS_VERSION=12.6.3.3 CUFFT_VERSION=11.3.0.4 CURAND_VERSION=10.3.7.77 CUSPARSE_VERSION=12.5.4.2 CUSOLVER_VERSION=11.7.1.2 CUTENSOR_VERSION=2.0.2.5 NPP_VERSION=12.3.1.54 NVJPEG_VERSION=12.3.3.54 CUDNN_VERSION=9.5.0.50 CUDNN_FRONTEND_VERSION=1.7.0 TRT_VERSION=10.5.0.18 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2024.6.1.90 NSIGHT_COMPUTE_VERSION=2024.3.2.3 CUSPARSELT_VERSION=0.6.2.3 DALI_VERSION=1.42.0 DALI_BUILD=18507157 POLYGRAPHY_VERSION=0.49.13 TRANSFORMER_ENGINE_VERSION=1.11 MODEL_OPT_VERSION=0.17.0 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf  && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2024-10-04 02:11:59  13.39KB Add files or directories to the container
ADD docs.tgz / # buildkit

# 2024-10-04 02:11:59  0.00B Set environment variables DALI_VERSION DALI_BUILD POLYGRAPHY_VERSION TRANSFORMER_ENGINE_VERSION MODEL_OPT_VERSION
ENV DALI_VERSION=1.42.0 DALI_BUILD=18507157 POLYGRAPHY_VERSION=0.49.13 TRANSFORMER_ENGINE_VERSION=1.11 MODEL_OPT_VERSION=0.17.0

# 2024-10-04 02:11:59  0.00B Define a build argument
ARG MODEL_OPT_VERSION=0.17.0

# 2024-10-04 02:11:59  0.00B Define a build argument
ARG TRANSFORMER_ENGINE_VERSION=1.11

# 2024-10-04 02:11:59  0.00B Define a build argument
ARG POLYGRAPHY_VERSION=0.49.13

# 2024-10-04 02:11:59  0.00B Define a build argument
ARG DALI_BUILD=18507157

# 2024-10-04 02:11:59  0.00B Define a build argument
ARG DALI_VERSION=1.42.0
                        
# 2024-10-04 02:11:59  0.00B Add metadata labels
LABEL com.nvidia.nccl.version=2.22.3 com.nvidia.cublas.version=12.6.3.3 com.nvidia.cufft.version=11.3.0.4 com.nvidia.curand.version=10.3.7.77 com.nvidia.cusparse.version=12.5.4.2 com.nvidia.cusparselt.version=0.6.2.3 com.nvidia.cusolver.version=11.7.1.2 com.nvidia.cutensor.version=2.0.2.5 com.nvidia.npp.version=12.3.1.54 com.nvidia.nvjpeg.version=12.3.3.54 com.nvidia.cudnn.version=9.5.0.50 com.nvidia.tensorrt.version=10.5.0.18 com.nvidia.tensorrtoss.version= com.nvidia.nsightsystems.version=2024.6.1.90 com.nvidia.nsightcompute.version=2024.3.2.3
                        
# 2024-10-04 02:11:59  6.56GB Run a command and create a new image layer
RUN |19 CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.22.3 CUBLAS_VERSION=12.6.3.3 CUFFT_VERSION=11.3.0.4 CURAND_VERSION=10.3.7.77 CUSPARSE_VERSION=12.5.4.2 CUSOLVER_VERSION=11.7.1.2 CUTENSOR_VERSION=2.0.2.5 NPP_VERSION=12.3.1.54 NVJPEG_VERSION=12.3.3.54 CUDNN_VERSION=9.5.0.50 CUDNN_FRONTEND_VERSION=1.7.0 TRT_VERSION=10.5.0.18 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2024.6.1.90 NSIGHT_COMPUTE_VERSION=2024.3.2.3 CUSPARSELT_VERSION=0.6.2.3 /bin/sh -c /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUTENSOR.sh  && /nvidia/build-scripts/installCUSPARSELT.sh  && if [ -z "${JETPACK_HOST_MOUNTS}" ]; then       /nvidia/build-scripts/installNCCL.sh;     fi; # buildkit
                        
# 2024-09-26 06:18:30  0.00B Set environment variables NCCL_VERSION CUBLAS_VERSION CUFFT_VERSION CURAND_VERSION CUSPARSE_VERSION CUSPARSELT_VERSION CUSOLVER_VERSION CUTENSOR_VERSION NPP_VERSION NVJPEG_VERSION CUDNN_VERSION CUDNN_FRONTEND_VERSION TRT_VERSION TRTOSS_VERSION NSIGHT_SYSTEMS_VERSION NSIGHT_COMPUTE_VERSION
ENV NCCL_VERSION=2.22.3 CUBLAS_VERSION=12.6.3.3 CUFFT_VERSION=11.3.0.4 CURAND_VERSION=10.3.7.77 CUSPARSE_VERSION=12.5.4.2 CUSPARSELT_VERSION=0.6.2.3 CUSOLVER_VERSION=11.7.1.2 CUTENSOR_VERSION=2.0.2.5 NPP_VERSION=12.3.1.54 NVJPEG_VERSION=12.3.3.54 CUDNN_VERSION=9.5.0.50 CUDNN_FRONTEND_VERSION=1.7.0 TRT_VERSION=10.5.0.18 TRTOSS_VERSION= NSIGHT_SYSTEMS_VERSION=2024.6.1.90 NSIGHT_COMPUTE_VERSION=2024.3.2.3
                        
# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUSPARSELT_VERSION=0.6.2.3

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG NSIGHT_COMPUTE_VERSION=2024.3.2.3

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG NSIGHT_SYSTEMS_VERSION=2024.6.1.90

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG TRTOSS_VERSION=

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG TRT_VERSION=10.5.0.18

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUDNN_FRONTEND_VERSION=1.7.0

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUDNN_VERSION=9.5.0.50

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG NVJPEG_VERSION=12.3.3.54

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG NPP_VERSION=12.3.1.54

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUTENSOR_VERSION=2.0.2.5

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUSOLVER_VERSION=11.7.1.2

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUSPARSE_VERSION=12.5.4.2

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CURAND_VERSION=10.3.7.77

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUFFT_VERSION=11.3.0.4

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG CUBLAS_VERSION=12.6.3.3

# 2024-09-26 06:18:30  0.00B Define a build argument
ARG NCCL_VERSION=2.22.3

# 2024-09-26 06:18:30  0.00B Add metadata labels
LABEL com.nvidia.volumes.needed=nvidia_driver com.nvidia.cuda.version=9.0
                        
# 2024-09-26 06:18:30  0.00B Set environment variables _CUDA_COMPAT_PATH ENV BASH_ENV SHELL NVIDIA_REQUIRE_CUDA
ENV _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0
                        
# 2024-09-26 06:18:30  58.91KB Run a command and create a new image layer
RUN |3 CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 JETPACK_HOST_MOUNTS= /bin/sh -c cp -vprd /nvidia/. /  &&  patch -p0 < /etc/startup_scripts.patch  &&  rm -f /etc/startup_scripts.patch # buildkit
                        
# 2024-09-26 06:18:30  459.94MB Run a command and create a new image layer
RUN |3 CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 JETPACK_HOST_MOUNTS= /bin/sh -c /nvidia/build-scripts/installCUDA.sh # buildkit
                        
# 2024-09-25 05:11:43  0.00B Run a command and create a new image layer
RUN |3 CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 JETPACK_HOST_MOUNTS= /bin/sh -c if [ -n "${JETPACK_HOST_MOUNTS}" ]; then        echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf     && echo "/usr/lib/aarch64-linux-gnu/tegra-egl" >> /etc/ld.so.conf.d/nvidia-tegra.conf;     fi # buildkit
                        
# 2024-09-25 05:11:43  0.00B Set environment variables CUDA_VERSION CUDA_DRIVER_VERSION CUDA_CACHE_DISABLE NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS
ENV CUDA_VERSION=12.6.2.004 CUDA_DRIVER_VERSION=560.35.03 CUDA_CACHE_DISABLE=1 NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=

# 2024-09-25 05:11:43  0.00B Define a build argument
ARG JETPACK_HOST_MOUNTS=

# 2024-09-25 05:11:43  0.00B Define a build argument
ARG CUDA_DRIVER_VERSION=560.35.03

# 2024-09-25 05:11:43  0.00B Define a build argument
ARG CUDA_VERSION=12.6.2.004
                        
# 2024-09-25 00:13:10  269.69MB Run a command and create a new image layer
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         apt-utils         build-essential         ca-certificates         curl         libncurses5         libncursesw5         patch         wget         rsync         unzip         jq         gnupg         libtcmalloc-minimal4  && rm -rf /var/lib/apt/lists/*  && echo "hsts=0" > /root/.wgetrc # buildkit
                        
# 2024-09-12 00:25:18  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2024-09-12 00:25:17  77.86MB 
/bin/sh -c #(nop) ADD file:ebe009f86035c175ba244badd298a2582914415cf62783d510eab3a311a5d4e1 in / 
                        
# 2024-09-12 00:25:16  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2024-09-12 00:25:16  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2024-09-12 00:25:16  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2024-09-12 00:25:16  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:473f149c93198da556dac3418f8236064f75e6a77b17651c26b203da5cb500cc",
    "RepoTags": [
        "kenneth850511/llamafactory:latest",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory:latest"
    ],
    "RepoDigests": [
        "kenneth850511/llamafactory@sha256:346ede80a454f14f5c636f834ff8ab2445ab6a164c41fb82d3dd6246e3a8319e",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kenneth850511/llamafactory@sha256:346ede80a454f14f5c636f834ff8ab2445ab6a164c41fb82d3dd6246e3a8319e"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-03-04T15:40:35.081376334+08:00",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "6006/tcp": {},
            "7860/tcp": {},
            "8000/tcp": {},
            "8888/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin",
            "CUDA_VERSION=12.6.2.004",
            "CUDA_DRIVER_VERSION=560.35.03",
            "CUDA_CACHE_DISABLE=1",
            "NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=",
            "_CUDA_COMPAT_PATH=/usr/local/cuda/compat",
            "ENV=/etc/shinit_v2",
            "BASH_ENV=/etc/bash.bashrc",
            "SHELL=/bin/bash",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=9.0",
            "NCCL_VERSION=2.22.3",
            "CUBLAS_VERSION=12.6.3.3",
            "CUFFT_VERSION=11.3.0.4",
            "CURAND_VERSION=10.3.7.77",
            "CUSPARSE_VERSION=12.5.4.2",
            "CUSPARSELT_VERSION=0.6.2.3",
            "CUSOLVER_VERSION=11.7.1.2",
            "CUTENSOR_VERSION=2.0.2.5",
            "NPP_VERSION=12.3.1.54",
            "NVJPEG_VERSION=12.3.3.54",
            "CUDNN_VERSION=9.5.0.50",
            "CUDNN_FRONTEND_VERSION=1.7.0",
            "TRT_VERSION=10.5.0.18",
            "TRTOSS_VERSION=",
            "NSIGHT_SYSTEMS_VERSION=2024.6.1.90",
            "NSIGHT_COMPUTE_VERSION=2024.3.2.3",
            "DALI_VERSION=1.42.0",
            "DALI_BUILD=18507157",
            "POLYGRAPHY_VERSION=0.49.13",
            "TRANSFORMER_ENGINE_VERSION=1.11",
            "MODEL_OPT_VERSION=0.17.0",
            "LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility,video",
            "NVIDIA_PRODUCT_NAME=PyTorch",
            "GDRCOPY_VERSION=2.3.1-1",
            "HPCX_VERSION=2.20",
            "MOFED_VERSION=5.4-rdmacore39.0",
            "OPENUCX_VERSION=1.17.0",
            "OPENMPI_VERSION=4.1.7",
            "RDMACORE_VERSION=39.0",
            "OPAL_PREFIX=/opt/hpcx/ompi",
            "OMPI_MCA_coll_hcoll_enable=0",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs:",
            "NVIDIA_BUILD_ID=114410972",
            "PYTORCH_BUILD_VERSION=2.5.0a0+e000cf0",
            "PYTORCH_VERSION=2.5.0a0+e000cf0",
            "PYTORCH_BUILD_NUMBER=0",
            "NVIDIA_PYTORCH_VERSION=24.10",
            "NVFUSER_BUILD_VERSION=f669fcf",
            "NVFUSER_VERSION=f669fcf",
            "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python",
            "NVPL_LAPACK_MATH_MODE=PEDANTIC",
            "PYTHONIOENCODING=utf-8",
            "LC_ALL=C.UTF-8",
            "PIP_DEFAULT_TIMEOUT=100",
            "JUPYTER_PORT=8888",
            "TENSORBOARD_PORT=6006",
            "UCC_CL_BASIC_TLS=^sharp",
            "TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX",
            "PYTORCH_HOME=/opt/pytorch/pytorch",
            "CUDA_HOME=/usr/local/cuda",
            "TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1",
            "USE_EXPERIMENTAL_CUDNN_V8_API=1",
            "COCOAPI_VERSION=2.0+nv0.8.0",
            "TORCH_CUDNN_V8_API_ENABLED=1",
            "CUDA_MODULE_LOADING=LAZY",
            "MAX_JOBS=4",
            "FLASH_ATTENTION_FORCE_BUILD=TRUE",
            "VLLM_WORKER_MULTIPROC_METHOD=spawn",
            "GRADIO_SERVER_PORT=7860",
            "API_PORT=8000"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": {
            "/app/data": {},
            "/app/output": {},
            "/root/.cache/huggingface": {},
            "/root/.cache/modelscope": {}
        },
        "WorkingDir": "/app",
        "Entrypoint": [
            "/opt/nvidia/nvidia_entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "com.docker.compose.project": "docker-cuda",
            "com.docker.compose.service": "llamafactory",
            "com.docker.compose.version": "2.33.0",
            "com.nvidia.build.id": "114410972",
            "com.nvidia.build.ref": "3e3c067dd015e6d16d2cf59ac18e9f2e2466b68a",
            "com.nvidia.cublas.version": "12.6.3.3",
            "com.nvidia.cuda.version": "9.0",
            "com.nvidia.cudnn.version": "9.5.0.50",
            "com.nvidia.cufft.version": "11.3.0.4",
            "com.nvidia.curand.version": "10.3.7.77",
            "com.nvidia.cusolver.version": "11.7.1.2",
            "com.nvidia.cusparse.version": "12.5.4.2",
            "com.nvidia.cusparselt.version": "0.6.2.3",
            "com.nvidia.cutensor.version": "2.0.2.5",
            "com.nvidia.nccl.version": "2.22.3",
            "com.nvidia.npp.version": "12.3.1.54",
            "com.nvidia.nsightcompute.version": "2024.3.2.3",
            "com.nvidia.nsightsystems.version": "2024.6.1.90",
            "com.nvidia.nvjpeg.version": "12.3.3.54",
            "com.nvidia.pytorch.version": "2.5.0a0+e000cf0",
            "com.nvidia.tensorrt.version": "10.5.0.18",
            "com.nvidia.tensorrtoss.version": "",
            "com.nvidia.volumes.needed": "nvidia_driver",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 29205110855,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/cdcf6d3e9c1f0d680239d6fb96fe2d266bb7e039e9934b45147756adbc47036f/diff:/var/lib/docker/overlay2/9b54865d78a7efef4ad6b1669d1395017ef8113a8e9247beb4e905d7978ee50c/diff:/var/lib/docker/overlay2/800949b515965273bc7c7db22691b7cdc6852e53617d07fb29a2aa6b540007c1/diff:/var/lib/docker/overlay2/cd3d007cce97cc9502ff40502679230cd152c2f2ca2f08f255c55b6abeaf8d24/diff:/var/lib/docker/overlay2/d3e206ae5687219c2fa55f29d7935b5686f28caf01a57f7b557ae7e04bb5f1ef/diff:/var/lib/docker/overlay2/52c4845b14dfe19d3cf499c3bd09a98b02452d6d4400daf30f73be04031d297b/diff:/var/lib/docker/overlay2/8223b708704fa6e7e089a6bf343a9568640e75f2d2de452a6d5ef12afe359e64/diff:/var/lib/docker/overlay2/b85ed98ef2da750f6effd758bd0fea7abf462b42b241c378b1675d27b28f1baf/diff:/var/lib/docker/overlay2/086c3e50be8b5b528f813047ed9842413e7e4aabd09d0f87a42f2154083b6814/diff:/var/lib/docker/overlay2/394f0c054b2e1fa84688f9de780a63c3efb08af044a8c2a591b1c4021701f5a0/diff:/var/lib/docker/overlay2/3d08d4104a26c5ec75686c05ab32276d0ca9699ab535b3f698461a9a8288a69c/diff:/var/lib/docker/overlay2/c4d76152aeb0655a098678437356523ce5c790cf59c7ada2321e4f65cd67ae8f/diff:/var/lib/docker/overlay2/cd9d08a0515c51e921b612968f7015b3ff0aef1b22821cf18eae341157b99138/diff:/var/lib/docker/overlay2/e3fc9f1aa1661bab204e10f823c861d23e5a44feae4d2f185b3f31849e9e8c93/diff:/var/lib/docker/overlay2/5bc4669e5271ec743a07a4481de956dafc37d967012904106276338a00dc1f88/diff:/var/lib/docker/overlay2/4c0beecb2b56d5ca4e9ca1cc2b5908260ad0d35c5cde58f6298f4846170b4887/diff:/var/lib/docker/overlay2/e9f311a866bef57fc4e7b2f32e51fbe1a19ad97530fe3e7c96633bb8597c0c17/diff:/var/lib/docker/overlay2/e10b4f90637dc8c939a45d6c1b9f2d668b06730ec3d9c3e7074eef256f86ee0d/diff:/var/lib/docker/overlay2/6aebf62075df2c28f6ff35f50c8c15b32d7de0a5ce07209ad22aefa79cfda9de/diff:/var/lib/docker/overlay2/c4b65ec26c773ac465d7e886b3ecfb99d287de905f2267e2c2a8ee0bb7eb51d0/diff:/var/lib/docker/overlay2/3cf8eda3fcfe46480e557c678557a2bbdb570bce00d840e9b66e67a978d00698/diff:/var/lib/docker/overlay2/6664f00331d6d04e1b3078233bb94cc4845bfb2bb6b0259a7943c46d70038f87/diff:/var/lib/docker/overlay2/ba7ce877c95cb67ff1aa6477f5a4fc622f3e87819c8bd83d29e2afdcee433422/diff:/var/lib/docker/overlay2/6e3933664e8502163c565a6f8cd9f1173d573e2554b4fbb73842ffc2d1113768/diff:/var/lib/docker/overlay2/345f89a46ec15cf98ee861322b6b8668cbb20e8d71062bde51c172986ce018e3/diff:/var/lib/docker/overlay2/4fa6a76d0ca14edf3d6bcd32592ad1efbbd33f11e593a15fdaceb334815f0d2b/diff:/var/lib/docker/overlay2/1ad20b6996bb22a7d81330fed20838545950909e1cf0d40ef07396a31da9162f/diff:/var/lib/docker/overlay2/8ca4da66e0cc1bd6d5b432396e192cd52561d977d36c431e821dfe60bdd4777b/diff:/var/lib/docker/overlay2/2539354cfec2b3c0b0eb0cf66b82f73d515ebe9ae38198779aea5ae43351487e/diff:/var/lib/docker/overlay2/55e2c5ad2563a443f48dc892a05e16102a3a1216967823752851ad50721f423d/diff:/var/lib/docker/overlay2/e2d0958e20ab132fb8ce83af99e2db588d721b5e52623215cf6ffa10568a7621/diff:/var/lib/docker/overlay2/da2f4e3d80cd711f2ad0294b7667beeb7f23fb4d002a1e6b4e320d3a06647dc8/diff:/var/lib/docker/overlay2/7a44961c4ec2f556acbf3a73e2b7e6559b08c072bb23684358cd762f46382136/diff:/var/lib/docker/overlay2/c36dafab2ac4588bd9d72db14705bf30d585cf5a32d86c1aaafdb38f6d679168/diff:/var/lib/docker/overlay2/dbd38f0d9e0c983e18bf219ddd80923015b938dcb8d5c4948d2011ac3dcbbd92/diff:/var/lib/docker/overlay2/c85a08616df47d7fe5dcad792dbfe0ba10f9f296973da003979072b720918182/diff:/var/lib/docker/overlay2/f95cdb1a57299fde9ae39c51101b52280b19b935cb7e9c515b876ec53c31eeee/diff:/var/lib/docker
/overlay2/30e2bcfc6fa52dd64f7016d0f4dfb2f06139a77cf1fe49e1bd98f74446b11f0e/diff:/var/lib/docker/overlay2/926cfb22168d8dcb0a7c8ddbfe3130b61b43755e916493be8acb260f52eec60d/diff:/var/lib/docker/overlay2/dcbad910778aa645e09d4f56ec5ba9c09581f30f16c72739174df87b7c3b157b/diff:/var/lib/docker/overlay2/63f4e218b941bdde686d0a1979c9e53291ef3bac8e876bb385c0d3ac25830514/diff:/var/lib/docker/overlay2/1fe1c66cb4e6d8d48e458ce19522dfce40b79d82b3a15dca6d40bd7b278fbe7f/diff:/var/lib/docker/overlay2/f6f40fc5a5ee1e279ce048aa1f4904f5e0aaae480ab96ecfdc09ed3e3e1ac48e/diff:/var/lib/docker/overlay2/1b233b3c64792b7fa3452729db34e90e70e04f3f4ec2b7749d5460b23ef49843/diff:/var/lib/docker/overlay2/740b2c0d7e067d51728c5740bbc5c18cba09761aae4bc81165c4cc5858aec06a/diff:/var/lib/docker/overlay2/5a13b47f20da762c07bc3a7788e9da0caca51e5534c1caa94f7d26605b614c64/diff:/var/lib/docker/overlay2/bd6b99bee46eff3e457ec5b391d3c6eb9cb79ce2f950ca43492233ca8a88fbd4/diff:/var/lib/docker/overlay2/e2218e776727d2feb5de0632026821d90d0f4bb9d6e67d7f11074a3dceef27f4/diff:/var/lib/docker/overlay2/cb050aa0c66750f727494d5478883e321f725879fcc25e1f97e004b1f04015c7/diff:/var/lib/docker/overlay2/c54bc937648acb4b10b9264e97345f273d8c250272a496a4b1ddd08be2e37b1d/diff:/var/lib/docker/overlay2/c1082305f633ea28fefacf25f954fca7c301a09cc66255787979bc86ef673694/diff:/var/lib/docker/overlay2/7dc5ccee0429e485debe0f3235a089b82f4b4880cbc39f0d71c084676961ae39/diff:/var/lib/docker/overlay2/4d1dd7103abb2f7baac3f8952425a4ebfda4672d57f74258a57fa217f99b8e56/diff:/var/lib/docker/overlay2/373e354ccfdd831d22e71790305500f073c12a490362fc9be0d9eea8cb027973/diff:/var/lib/docker/overlay2/e6bbc07f555c5433dabb869e73dc970cfe981ff2df301d7f4a77b8cefe0b7f12/diff:/var/lib/docker/overlay2/7b8739888706e717dca7b980e8825d202d0a3918b18a3c442adc73dd52cb65b0/diff:/var/lib/docker/overlay2/f32b9256e317d9d955c7539fb1be71441200a5d3e5c25a19355158bef5d708a8/diff:/var/lib/docker/overlay2/ce01b5a9b06d3f6862cd60c66422cbb318ad94d6154ae3c6b608f32f6196d4e1/diff:/var/lib/docker/overlay2/6534f959d87b749e2e44263f1f7544f918598eb066dd66f156bd45b5888144da/diff:/var/lib/docker/overlay2/4cfb2ff6eb670d08d805fcc326973c76acabc424b2f6ce5f1903149f34750452/diff",
            "MergedDir": "/var/lib/docker/overlay2/d09a7349121cce1421b8dcad452ee2194c1e898ec0afd24c6a1e20fd5faa2123/merged",
            "UpperDir": "/var/lib/docker/overlay2/d09a7349121cce1421b8dcad452ee2194c1e898ec0afd24c6a1e20fd5faa2123/diff",
            "WorkDir": "/var/lib/docker/overlay2/d09a7349121cce1421b8dcad452ee2194c1e898ec0afd24c6a1e20fd5faa2123/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:2573e0d8158209ed54ab25c87bcdcb00bd3d2539246960a3d592a1c599d70465",
            "sha256:0db37749123ed077f490b8481792ba660fc7b82b2035d2f37f540691c102e5b4",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:f805df76568cf3834df5ae477cf7fd99d22b345ab31103d96a3f138af2b6c8ec",
            "sha256:176c746bdb5ad24a387e0d855c44bd57391d7c33a2bad8e19d4aced54bea5a00",
            "sha256:8bac727098a300370c2116a8e848abea54cb4c1d5d9f3851c9467961e90748b5",
            "sha256:f5f79ac10bb874bdbe60f05aefdf89d24c8f07b24910dbd787b9ee4cfd390565",
            "sha256:caf07e7743c0eb80a8a7ac78b631cd93b73f96e2d1a1dabe4d9ae7a9b922d24b",
            "sha256:6c75d6484379aa51f50d3e6a3c1f0b7acc2364aed0b9fe643224ce3134c970f3",
            "sha256:0b6a520db613be9ef2d808547aefba361788a92f82ccaa532fa3b2895f94debc",
            "sha256:c567e58addb329a64228dad6849f96a0a7e353f59a3edb8cc11d3db0afd1fd62",
            "sha256:0c928a220a5504afdc5d09a152d728398d7e981acbc592a7a9e5f8f16f8439b5",
            "sha256:b05947e518f595236486c35a836848c01ccd7c06539a481d2084667e8288ddf4",
            "sha256:0d5982fa150d5361485513c3ff3fc8f61acd8bdb05a6b97105aa4c2dc949d7ab",
            "sha256:5f05a327d6f0789c362b017217dff0f222286345e60fc7f5a39aa64921e70f00",
            "sha256:cc3c5388749c30c515e144fe3d0388eda6571044451318741873c1b480da25fd",
            "sha256:842ae06d692e91ecd589aeeebb9f7bded9792ea5760e1bd6d5bf170882b54087",
            "sha256:14a5884be64602ca61592f4f8669cb814c3410e0fe6311e84d0b5ce3f63030e8",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:82bfdc19647b6fd5f6e62b51c553f918ac65e9f83599d34cfc55f39fa39e748f",
            "sha256:40891febcfe1593da207aa7276bd9038429c01cdc782e9de6e085c167179a533",
            "sha256:e7fe069215fde05fc74239b502b22b8428187d7dd9bdd7d2ee9d703f02a6ddf3",
            "sha256:38b268bb5eaf917dce874fd554a20b4372dcfb200ca9ea374cb87022cf854a9f",
            "sha256:d92cd4a73f27f6cb0d3b5bd5f088f69fc8fb56903a10fca67ae6fe08ee464d08",
            "sha256:31c8933840b6654a89d8dd6bcbe612d15aec949a793aed42bba299aca9373fb3",
            "sha256:8299a22da1688a1301e8ab581ba60633ec99fcd9e6aaea95f131c26b8937e3ad",
            "sha256:0dc80674aa427c049d5f9fe07eee34ddbb39fbd434fff14f7b12983936397a79",
            "sha256:363bc3575fa10e5ed65d62cd7d68a4bf1ba0fb69af6e367c24d15cf0fe2fac70",
            "sha256:70592ceb17a08452c8dceb9efe1db812782a7edef328552cb18d45e1413e2af0",
            "sha256:d601ab0ef09ae25a9c235423d3fa3568048c9b772597d8432bff8f8bb997f8f5",
            "sha256:7cb6c1ba725a9c919553b3171c5383fb70c8f6ee4cf4cb62b62e0d52d22994bc",
            "sha256:ca5eeb55eed9b4091681733b328616ead889da9ca9c39c304981750e067c3ce0",
            "sha256:4578b2a29ed618501f5db6d3c624b8253d75fa16486a368c11d63ac0f99f564c",
            "sha256:2cfaf81d33de11437cad46532f1b908e4fd387af8ca9b0a92c2c93728562e0ab",
            "sha256:cb061c900bc14ce86573951afa10a6c37bd77a9fe77f145d88ea5a43a01924a2",
            "sha256:4eaaafc81dc19538b054802c1b46201cb93b5cb2da13921cdea808408be0b89d",
            "sha256:50b68377ac8a30f1972bc4c23b4c4b955ba27e205684a8326d67659e5384c84e",
            "sha256:ba3365687021b8e7729ad53887699da3406025399ed4e0a5f3334bd6531c8367",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:c90b71a1c82fde66ff4de3cdf9565f6c86ce407becfd03fb0b4184fada2ce38e",
            "sha256:f13c04ef0dac22ab4ec68ccde5316f868f35e0186da9353b7967a287701b4b2e",
            "sha256:3693de7b447c5db64b0dd3fba4f77229effcc3afd78204fb21d1a19ef4b8c17e",
            "sha256:6d7c7c6821337ac8d65283ecfa4854a3af68232a5d0cb22970d2e96951852b58",
            "sha256:f892bdb37827cd91a896b40ef3721de310be105634cdc767d4913f7b9bae2fcc",
            "sha256:11c6c668e3af6fffa6171004796f3367dcf39d198270c92e63858ee19068b322",
            "sha256:3e1ad43fbc031cb41fa3d2d6f6dc887ac343700d25b8f60c7cbaffcc0674c83a",
            "sha256:7d3c4fb6788b76bd356260614ea4b28ea829f92277c9979c6c5a012beea41132",
            "sha256:84bfcb623e6df59bc0903a15e6ac7c4c5b7297f1980702961eeba16dd52e9e49",
            "sha256:b4de8880d2ba8f9a0369f9724ad1941c1648627d5605ca2a424c4b552f160850",
            "sha256:2c945b3d2c6d458d5b0bfceb3f8090aa7b6e4dfd99fbae862f3ca6f6dc798ecc",
            "sha256:e46df17937ff4435723507bb7a6ed8bb846a8093a9ae4127e4c60a9739398861",
            "sha256:2c775d712f3758139745fd0fb18ab3aa15eaa09562da6964f6341e21552b450e",
            "sha256:ad24517f141d54c6fb110ae02fbf8d80fd256997a1865fcaeede99dbcc8b8d35",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:7ab7a6804b51128b52790815291b7a765f4c5e2bd176612bd5144b26e455760f",
            "sha256:8f81cbfd163666ae7887ecb30b2191952f36b332a6ea038993339248718e62f8",
            "sha256:4b880e75c002758b9b97b71906b2b4a1b3e272bb050131785253369f3ba71eb1",
            "sha256:5bfbde38ba2ac7c78f4d194acdf1e521b403c9e3c8731709f1eac10ae27f5c32",
            "sha256:715e420b477aad884a549bc159b057dd449ba3a5ece7a69775771c712a421bea",
            "sha256:30b962e3659dfca231434953eab24f77d6f6884601c3f8c75f4a0c2b82f5a656",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-03-05T01:55:19.239191865+08:00"
    }
}
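The JSON above is standard `docker inspect` output for this image: the GraphDriver paths under /var/lib/docker/overlay2 are specific to the host that ran the inspect, while the RootFS layer digests and the LastTagTime belong to the image itself. As a quick local check after pulling, the same fields can be queried with Go templates. A minimal sketch, assuming the image is already present locally under its original docker.io name and the host uses the classic overlay2 storage backend:

    # Storage driver recorded for the image (overlay2 on a host like the one inspected above)
    docker image inspect docker.io/kenneth850511/llamafactory:latest --format '{{.GraphDriver.Name}}'

    # RootFS layer digests; these should match the "Layers" list above
    docker image inspect docker.io/kenneth850511/llamafactory:latest --format '{{json .RootFS.Layers}}'

    # Time the tag was last applied on the local host ("LastTagTime")
    docker image inspect docker.io/kenneth850511/llamafactory:latest --format '{{.Metadata.LastTagTime}}'

If the image was pulled through the mirror and not retagged, substitute the mirror-prefixed name for the docker.io name in these commands.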

More versions

Image                                            OS/Platform   Source      Size      Sync time          Views
docker.io/kenneth850511/llamafactory:v0.9.0      linux/amd64   docker.io   22.68GB   2025-01-18 00:42   97
docker.io/kenneth850511/llamafactory:v0.9.1      linux/amd64   docker.io   22.79GB   2025-01-18 01:19   364
docker.io/kenneth850511/llamafactory:2025.02.22  linux/amd64   docker.io   29.15GB   2025-03-05 01:48   73
docker.io/kenneth850511/llamafactory:latest      linux/amd64   docker.io   29.21GB   2025-03-05 02:04   125
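
Any of the tags listed above can be fetched the same way as latest: pull through the domestic mirror, then retag back to the original docker.io name so Compose files and scripts keep working unchanged. A minimal sketch, where <mirror-prefix> is a placeholder for the domestic mirror address shown at the top of this page (substitute the real address before running):

    # Pull a specific tag through the mirror
    docker pull <mirror-prefix>/docker.io/kenneth850511/llamafactory:v0.9.1

    # Retag to the original docker.io name
    docker tag <mirror-prefix>/docker.io/kenneth850511/llamafactory:v0.9.1 docker.io/kenneth850511/llamafactory:v0.9.1

    # Optionally remove the mirror-prefixed tag
    docker rmi <mirror-prefix>/docker.io/kenneth850511/llamafactory:v0.9.1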