docker.io/vllm/vllm-openai:v0.9.2 linux/amd64

docker.io/vllm/vllm-openai:v0.9.2 - China mirror download source (viewed 86 times)
Image description:

vllm/vllm-openai

An OpenAI-compatible API server for serving large language models with the vLLM inference engine.

Source image     docker.io/vllm/vllm-openai:v0.9.2
China mirror     swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2
Image ID         sha256:28b059682165f5dc585337f49d4441f75299f0aa1f90149c1694b4775e27553b
Image tag        v0.9.2
Size             20.76GB
Registry         docker.io
Project info     Docker Hub page / project tags
CMD
Entrypoint       python3 -m vllm.entrypoints.openai.api_server
Working dir      /vllm-workspace
OS/Arch          linux/amd64
Views            86
Contributor
Image created    2025-07-07T18:38:32.912234073Z
Synced           2025-07-09 03:00
Updated          2025-07-10 02:45
Environment variables
PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.8 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 NV_CUDA_CUDART_VERSION=12.8.90-1 CUDA_VERSION=12.8.1 LD_LIBRARY_PATH=/usr/local/cuda/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_CUDA_LIB_VERSION=12.8.1-1 NV_NVTX_VERSION=12.8.90-1 NV_LIBNPP_VERSION=12.3.3.100-1 NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1 NV_LIBCUSPARSE_VERSION=12.5.8.93-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8 NV_LIBCUBLAS_VERSION=12.8.4.1-1 NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1 NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1 NCCL_VERSION=2.25.1-1 NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8 NVIDIA_PRODUCT_NAME=CUDA NV_CUDA_CUDART_DEV_VERSION=12.8.90-1 NV_NVML_DEV_VERSION=12.8.90-1 NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1 NV_LIBNPP_DEV_VERSION=12.3.3.100-1 NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1 NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1 NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8 NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1 NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1 NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1 NV_NVPROF_VERSION=12.8.90-1 NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8 
LIBRARY_PATH=/usr/local/cuda/lib64/stubs DEBIAN_FRONTEND=noninteractive UV_HTTP_TIMEOUT=500 UV_INDEX_STRATEGY=unsafe-best-match VLLM_USAGE_SOURCE=production-docker-image
Image labels
maintainer: NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2  docker.io/vllm/vllm-openai:v0.9.2
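
After pulling and re-tagging, the server can be started straight from this image. A minimal sketch, assuming a GPU host with the NVIDIA Container Toolkit installed; the model ID, port mapping, and cache mount below are illustrative choices, not values taken from this page:

# Start the OpenAI-compatible API server. The image's entrypoint is
# python3 -m vllm.entrypoints.openai.api_server, so anything after the image name is passed to it as arguments.
# --gpus all requires the NVIDIA Container Toolkit; 8000 is vLLM's default listen port.
docker run --rm --gpus all -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  docker.io/vllm/vllm-openai:v0.9.2 \
  --model Qwen/Qwen2.5-7B-Instruct

# Quick smoke test against the OpenAI-compatible endpoint.
curl http://localhost:8000/v1/models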

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2  docker.io/vllm/vllm-openai:v0.9.2
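
Note that ctr pulls into containerd's default namespace, while the kubelet looks up images in the k8s.io namespace. A hedged variant for Kubernetes nodes:

# Pull and re-tag inside the k8s.io namespace so the image is visible to the kubelet/CRI.
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2  docker.io/vllm/vllm-openai:v0.9.2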

Quick shell replacement command

sed -i 's#vllm/vllm-openai:v0.9.2#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2#' deployment.yaml
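
The pattern above also matches inside references that are already written with the docker.io/ prefix, producing a broken double-prefixed path. A slightly more defensive sketch (deployment.yaml is just the example filename from above):

# Match both vllm/vllm-openai:v0.9.2 and docker.io/vllm/vllm-openai:v0.9.2, anchored so an
# already-rewritten mirror reference is not prefixed a second time; -i.bak keeps a backup.
sed -i.bak -E 's#(^|[[:space:]"])(docker\.io/)?vllm/vllm-openai:v0\.9\.2#\1swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2#g' deployment.yaml

# Verify the result.
grep -n 'image:' deployment.yaml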

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2  docker.io/vllm/vllm-openai:v0.9.2'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2  docker.io/vllm/vllm-openai:v0.9.2'
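
A hedged follow-up check, assuming the same "k8s" inventory group used above, to confirm the image landed on every node after distribution:

# Docker hosts
#ansible k8s -m shell -a 'docker images | grep vllm-openai'
# Containerd hosts
#ansible k8s -m shell -a 'ctr images ls | grep vllm-openai'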

Image build history


# 2025-07-08 02:38:32  0.00B  Set the command to run when the container starts
ENTRYPOINT ["python3" "-m" "vllm.entrypoints.openai.api_server"]

# 2025-07-08 02:38:32  0.00B  Set environment variable VLLM_USAGE_SOURCE
ENV VLLM_USAGE_SOURCE=production-docker-image

# 2025-07-08 02:38:32  372.81MB  Run a command and create a new image layer
RUN |6 TARGETPLATFORM=linux/amd64 INSTALL_KV_CONNECTORS=true PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= /bin/bash -c if [ "$INSTALL_KV_CONNECTORS" = "true" ]; then         uv pip install --system -r requirements/kv_connectors.txt;     fi;     if [ "$TARGETPLATFORM" = "linux/arm64" ]; then         uv pip install --system accelerate hf_transfer 'modelscope!=1.15.0' 'bitsandbytes>=0.42.0' 'timm==0.9.10' boto3 runai-model-streamer runai-model-streamer[s3];     else         uv pip install --system accelerate hf_transfer 'modelscope!=1.15.0' 'bitsandbytes>=0.46.1' 'timm==0.9.10' boto3 runai-model-streamer runai-model-streamer[s3];     fi # buildkit
                        
# 2025-07-08 02:38:24  7.00B  Copy new files or directories into the container
COPY requirements/kv_connectors.txt requirements/kv_connectors.txt # buildkit

# 2025-07-08 02:38:24  0.00B  Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500

# 2025-07-08 02:38:24  0.00B  Define build arguments
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2025-07-08 02:38:24  0.00B  Define build arguments
ARG PIP_INDEX_URL UV_INDEX_URL

# 2025-07-08 02:38:24  0.00B  Define build arguments
ARG INSTALL_KV_CONNECTORS=false

# 2025-07-08 02:38:24  0.00B  Define build arguments
ARG TARGETPLATFORM

# 2025-07-08 02:38:24  68.41MB  Run a command and create a new image layer
RUN |19 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled FLASHINFER_CUDA128_INDEX_URL=https://download.pytorch.org/whl/cu128/flashinfer FLASHINFER_CUDA128_WHEEL=flashinfer_python-0.2.6.post1%2Bcu128torch2.7-cp39-abi3-linux_x86_64.whl FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git FLASHINFER_GIT_REF=v0.2.6.post1 /bin/bash -c uv pip install --system -r requirements/build.txt         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') # buildkit
                        
# 2025-07-08 02:38:22  159.00B  Copy new files or directories into the container
COPY requirements/build.txt requirements/build.txt # buildkit

# 2025-07-08 02:38:22  0.00B  Run a command and create a new image layer
RUN |19 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled FLASHINFER_CUDA128_INDEX_URL=https://download.pytorch.org/whl/cu128/flashinfer FLASHINFER_CUDA128_WHEEL=flashinfer_python-0.2.6.post1%2Bcu128torch2.7-cp39-abi3-linux_x86_64.whl FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git FLASHINFER_GIT_REF=v0.2.6.post1 /bin/bash -c . /etc/environment && uv pip list # buildkit
                        
# 2025-07-08 02:38:22  28.29KB  Copy new files or directories into the container
COPY ./vllm/collect_env.py . # buildkit

# 2025-07-08 02:38:22  562.44KB  Copy new files or directories into the container
COPY benchmarks benchmarks # buildkit

# 2025-07-08 02:38:22  658.72KB  Copy new files or directories into the container
COPY examples examples # buildkit

# 2025-07-08 02:38:22  910.65MB  Run a command and create a new image layer
RUN |19 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled FLASHINFER_CUDA128_INDEX_URL=https://download.pytorch.org/whl/cu128/flashinfer FLASHINFER_CUDA128_WHEEL=flashinfer_python-0.2.6.post1%2Bcu128torch2.7-cp39-abi3-linux_x86_64.whl FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git FLASHINFER_GIT_REF=v0.2.6.post1 /bin/bash -c bash - <<'BASH'
  . /etc/environment
  if [ "$TARGETPLATFORM" != "linux/arm64" ]; then
      # FlashInfer already has a wheel for PyTorch 2.7.0 and CUDA 12.8. This is enough for CI use
      if [[ "$CUDA_VERSION" == 12.8* ]]; then
          uv pip install --system ${FLASHINFER_CUDA128_INDEX_URL}/${FLASHINFER_CUDA128_WHEEL}
      else
          export TORCH_CUDA_ARCH_LIST='7.5 8.0 8.9 9.0a 10.0a 12.0'
          git clone ${FLASHINFER_GIT_REPO} --single-branch --branch ${FLASHINFER_GIT_REF} --recursive
          # Needed to build AOT kernels
          (cd flashinfer && \
              python3 -m flashinfer.aot && \
              uv pip install --system --no-build-isolation . \
          )
          rm -rf flashinfer

          # Default arches (skipping 10.0a and 12.0 since these need 12.8)
          # TODO: Update this to allow setting TORCH_CUDA_ARCH_LIST as a build arg.
          TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a"
          if [[ "${CUDA_VERSION}" == 11.* ]]; then
              TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9"
          fi
          echo "🏗️  Building FlashInfer for arches: ${TORCH_CUDA_ARCH_LIST}"

          git clone --depth 1 --recursive --shallow-submodules \
            --branch v0.2.6.post1 \
            https://github.com/flashinfer-ai/flashinfer.git flashinfer

          pushd flashinfer
            python3 -m flashinfer.aot
            TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" \
              uv pip install --system --no-build-isolation .
          popd

          rm -rf flashinfer
      fi \
  fi
BASH # buildkit
                        
# 2025-07-08 02:38:05  0.00B  Define build arguments
ARG FLASHINFER_GIT_REF=v0.2.6.post1

# 2025-07-08 02:38:05  0.00B  Define build arguments
ARG FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git

# 2025-07-08 02:38:05  0.00B  Define build arguments
ARG FLASHINFER_CUDA128_WHEEL=flashinfer_python-0.2.6.post1%2Bcu128torch2.7-cp39-abi3-linux_x86_64.whl

# 2025-07-08 02:38:05  0.00B  Define build arguments
ARG FLASHINFER_CUDA128_INDEX_URL=https://download.pytorch.org/whl/cu128/flashinfer

# 2025-07-08 02:38:05  9.17GB  Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c uv pip install --system dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') # buildkit
                        
# 2025-07-08 02:17:07  0.00B  Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c if [ "$TARGETPLATFORM" = "linux/arm64" ]; then         uv pip install --system             --index-url ${PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')             "torch==2.8.0.dev20250318+cu128" "torchvision==0.22.0.dev20250319" ;         uv pip install --system             --index-url ${PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')             --pre pytorch_triton==3.3.0+gitab727c40 ;     fi # buildkit
                        
# 2025-07-08 02:17:06  57.59KB  Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c ldconfig /usr/local/cuda-$(echo $CUDA_VERSION | cut -d. -f1,2)/compat/ # buildkit
                        
# 2025-07-08 02:17:05  0.00B  Set environment variable UV_INDEX_STRATEGY
ENV UV_INDEX_STRATEGY=unsafe-best-match

# 2025-07-08 02:17:05  0.00B  Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500

# 2025-07-08 02:17:05  64.30MB  Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c python3 -m pip install uv # buildkit
                        
# 2025-07-08 02:15:39  0.00B  Define build arguments
ARG PIP_KEYRING_PROVIDER UV_KEYRING_PROVIDER

# 2025-07-08 02:15:39  0.00B  Define build arguments
ARG PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL

# 2025-07-08 02:15:39  0.00B  Define build arguments
ARG PYTORCH_CUDA_INDEX_BASE_URL

# 2025-07-08 02:15:39  0.00B  Define build arguments
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2025-07-08 02:15:39  0.00B  Define build arguments
ARG PIP_INDEX_URL UV_INDEX_URL

# 2025-07-08 02:15:39  828.75MB  Run a command and create a new image layer
RUN |7 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/bash -c echo 'tzdata tzdata/Areas select America' | debconf-set-selections     && echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections     && apt-get update -y     && apt-get install -y ccache software-properties-common git curl wget sudo vim python3-pip     && apt-get install -y ffmpeg libsm6 libxext6 libgl1     && if [ ! -z ${DEADSNAKES_MIRROR_URL} ] ; then         if [ ! -z "${DEADSNAKES_GPGKEY_URL}" ] ; then             mkdir -p -m 0755 /etc/apt/keyrings ;             curl -L ${DEADSNAKES_GPGKEY_URL} | gpg --dearmor > /etc/apt/keyrings/deadsnakes.gpg ;             sudo chmod 644 /etc/apt/keyrings/deadsnakes.gpg ;             echo "deb [signed-by=/etc/apt/keyrings/deadsnakes.gpg] ${DEADSNAKES_MIRROR_URL} $(lsb_release -cs) main" > /etc/apt/sources.list.d/deadsnakes.list ;         fi ;     else         for i in 1 2 3; do             add-apt-repository -y ppa:deadsnakes/ppa && break ||             { echo "Attempt $i failed, retrying in 5s..."; sleep 5; };         done ;     fi     && apt-get update -y     && apt-get install -y python${PYTHON_VERSION} python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv libibverbs-dev     && update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1     && update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION}     && ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config     && curl -sS ${GET_PIP_URL} | python${PYTHON_VERSION}     && python3 --version && python3 -m pip --version # buildkit
                        
# 2025-07-08 02:13:31  136.00B  Run a command and create a new image layer
RUN |7 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/bash -c PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') &&     echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment # buildkit
                        
# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG GET_PIP_URL

# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG DEADSNAKES_GPGKEY_URL

# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG DEADSNAKES_MIRROR_URL

# 2025-07-08 02:13:28  0.00B  Set the default shell
SHELL [/bin/bash -c]

# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG TARGETPLATFORM

# 2025-07-08 02:13:28  0.00B  Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive

# 2025-07-08 02:13:28  0.00B  Set working directory to /vllm-workspace
WORKDIR /vllm-workspace

# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG INSTALL_KV_CONNECTORS=false

# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG PYTHON_VERSION

# 2025-07-08 02:13:28  0.00B  Define build arguments
ARG CUDA_VERSION

# 2025-03-11 06:36:52  0.00B  Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs

# 2025-03-11 06:36:52  389.48KB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_DEV_PACKAGE_NAME} ${NV_LIBNCCL_DEV_PACKAGE_NAME} # buildkit

# 2025-03-11 06:36:52  5.94GB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-dev-12-8=${NV_CUDA_CUDART_DEV_VERSION}     cuda-command-line-tools-12-8=${NV_CUDA_LIB_VERSION}     cuda-minimal-build-12-8=${NV_CUDA_LIB_VERSION}     cuda-libraries-dev-12-8=${NV_CUDA_LIB_VERSION}     cuda-nvml-dev-12-8=${NV_NVML_DEV_VERSION}     ${NV_NVPROF_DEV_PACKAGE}     ${NV_LIBNPP_DEV_PACKAGE}     libcusparse-dev-12-8=${NV_LIBCUSPARSE_DEV_VERSION}     ${NV_LIBCUBLAS_DEV_PACKAGE}     ${NV_LIBNCCL_DEV_PACKAGE}     ${NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:36:52  0.00B  Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 06:36:52  0.00B  Define build arguments
ARG TARGETARCH

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBNCCL_DEV_PACKAGE
ENV NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8

# 2025-03-11 06:36:52  0.00B  Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.25.1-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBNCCL_DEV_PACKAGE_VERSION
ENV NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBNCCL_DEV_PACKAGE_NAME
ENV NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_NVPROF_DEV_PACKAGE
ENV NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_NVPROF_VERSION
ENV NV_NVPROF_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE
ENV NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_CUDA_NSIGHT_COMPUTE_VERSION
ENV NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBCUBLAS_DEV_PACKAGE
ENV NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBCUBLAS_DEV_PACKAGE_NAME
ENV NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBCUBLAS_DEV_VERSION
ENV NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBNPP_DEV_PACKAGE
ENV NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBNPP_DEV_VERSION
ENV NV_LIBNPP_DEV_VERSION=12.3.3.100-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_LIBCUSPARSE_DEV_VERSION
ENV NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_NVML_DEV_VERSION
ENV NV_NVML_DEV_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_CUDA_CUDART_DEV_VERSION
ENV NV_CUDA_CUDART_DEV_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B  Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.8.1-1

# 2025-03-11 06:24:31  0.00B  Set the command to run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]

# 2025-03-11 06:24:31  0.00B  Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA

# 2025-03-11 06:24:31  2.53KB  Copy new files or directories into the container
COPY nvidia_entrypoint.sh /opt/nvidia/ # buildkit

# 2025-03-11 06:24:31  3.06KB  Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit

# 2025-03-11 06:24:31  263.00KB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME} # buildkit

# 2025-03-11 06:24:31  3.11GB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-libraries-12-8=${NV_CUDA_LIB_VERSION}     ${NV_LIBNPP_PACKAGE}     cuda-nvtx-12-8=${NV_NVTX_VERSION}     libcusparse-12-8=${NV_LIBCUSPARSE_VERSION}     ${NV_LIBCUBLAS_PACKAGE}     ${NV_LIBNCCL_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:24:31  0.00B  Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 06:24:31  0.00B  Define build arguments
ARG TARGETARCH

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBNCCL_PACKAGE
ENV NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8

# 2025-03-11 06:24:31  0.00B  Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.25.1-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBNCCL_PACKAGE_VERSION
ENV NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBNCCL_PACKAGE_NAME
ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBCUBLAS_PACKAGE
ENV NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBCUBLAS_VERSION
ENV NV_LIBCUBLAS_VERSION=12.8.4.1-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBCUBLAS_PACKAGE_NAME
ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBCUSPARSE_VERSION
ENV NV_LIBCUSPARSE_VERSION=12.5.8.93-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBNPP_PACKAGE
ENV NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_LIBNPP_VERSION
ENV NV_LIBNPP_VERSION=12.3.3.100-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_NVTX_VERSION
ENV NV_NVTX_VERSION=12.8.90-1

# 2025-03-11 06:24:31  0.00B  Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.8.1-1

# 2025-03-11 06:19:20  0.00B  Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2025-03-11 06:19:20  0.00B  Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2025-03-11 06:19:20  17.29KB  Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit

# 2025-03-11 06:19:20  0.00B  Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64

# 2025-03-11 06:19:20  0.00B  Set environment variable PATH
ENV PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# 2025-03-11 06:19:20  22.00B  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit

# 2025-03-11 06:19:20  203.35MB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-8=${NV_CUDA_CUDART_VERSION}     cuda-compat-12-8     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:19:05  0.00B  Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.8.1

# 2025-03-11 06:19:05  10.60MB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:19:05  0.00B  Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-03-11 06:19:05  0.00B  Define build arguments
ARG TARGETARCH

# 2025-03-11 06:19:05  0.00B  Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.8.90-1

# 2025-03-11 06:19:05  0.00B  Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.8 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566
                        
# 2025-03-11 06:19:05  0.00B  Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2025-01-26 13:31:11  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-01-26 13:31:10  77.86MB 
/bin/sh -c #(nop) ADD file:1b6c8c9518be42fa2afe5e241ca31677fce58d27cdfa88baa91a65a259be3637 in / 
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:28b059682165f5dc585337f49d4441f75299f0aa1f90149c1694b4775e27553b",
    "RepoTags": [
        "vllm/vllm-openai:v0.9.2",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.9.2"
    ],
    "RepoDigests": [
        "vllm/vllm-openai@sha256:37cd5bd18d220a0f4c70401ce1d4a0cc588fbfe03cc210579428f2c47e6eac33",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai@sha256:37cd5bd18d220a0f4c70401ce1d4a0cc588fbfe03cc210579428f2c47e6eac33"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-07-07T18:38:32.912234073Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.8 brand=unknown,driver\u003e=470,driver\u003c471 brand=grid,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=vapps,driver\u003e=470,driver\u003c471 brand=vpc,driver\u003e=470,driver\u003c471 brand=vcs,driver\u003e=470,driver\u003c471 brand=vws,driver\u003e=470,driver\u003c471 brand=cloudgaming,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551 brand=unknown,driver\u003e=560,driver\u003c561 brand=grid,driver\u003e=560,driver\u003c561 brand=tesla,driver\u003e=560,driver\u003c561 brand=nvidia,driver\u003e=560,driver\u003c561 brand=quadro,driver\u003e=560,driver\u003c561 brand=quadrortx,driver\u003e=560,driver\u003c561 brand=nvidiartx,driver\u003e=560,driver\u003c561 brand=vapps,driver\u003e=560,driver\u003c561 brand=vpc,driver\u003e=560,driver\u003c561 brand=vcs,driver\u003e=560,driver\u003c561 brand=vws,driver\u003e=560,driver\u003c561 brand=cloudgaming,driver\u003e=560,driver\u003c561 brand=unknown,driver\u003e=565,driver\u003c566 brand=grid,driver\u003e=565,driver\u003c566 brand=tesla,driver\u003e=565,driver\u003c566 brand=nvidia,driver\u003e=565,driver\u003c566 brand=quadro,driver\u003e=565,driver\u003c566 brand=quadrortx,driver\u003e=565,driver\u003c566 brand=nvidiartx,driver\u003e=565,driver\u003c566 brand=vapps,driver\u003e=565,driver\u003c566 brand=vpc,driver\u003e=565,driver\u003c566 brand=vcs,driver\u003e=565,driver\u003c566 brand=vws,driver\u003e=565,driver\u003c566 brand=cloudgaming,driver\u003e=565,driver\u003c566",
            "NV_CUDA_CUDART_VERSION=12.8.90-1",
            "CUDA_VERSION=12.8.1",
            "LD_LIBRARY_PATH=/usr/local/cuda/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NV_CUDA_LIB_VERSION=12.8.1-1",
            "NV_NVTX_VERSION=12.8.90-1",
            "NV_LIBNPP_VERSION=12.3.3.100-1",
            "NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1",
            "NV_LIBCUSPARSE_VERSION=12.5.8.93-1",
            "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8",
            "NV_LIBCUBLAS_VERSION=12.8.4.1-1",
            "NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1",
            "NV_LIBNCCL_PACKAGE_NAME=libnccl2",
            "NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1",
            "NCCL_VERSION=2.25.1-1",
            "NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8",
            "NVIDIA_PRODUCT_NAME=CUDA",
            "NV_CUDA_CUDART_DEV_VERSION=12.8.90-1",
            "NV_NVML_DEV_VERSION=12.8.90-1",
            "NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1",
            "NV_LIBNPP_DEV_VERSION=12.3.3.100-1",
            "NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1",
            "NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1",
            "NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8",
            "NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1",
            "NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1",
            "NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1",
            "NV_NVPROF_VERSION=12.8.90-1",
            "NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1",
            "NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev",
            "NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1",
            "NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs",
            "DEBIAN_FRONTEND=noninteractive",
            "UV_HTTP_TIMEOUT=500",
            "UV_INDEX_STRATEGY=unsafe-best-match",
            "VLLM_USAGE_SOURCE=production-docker-image"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/vllm-workspace",
        "Entrypoint": [
            "python3",
            "-m",
            "vllm.entrypoints.openai.api_server"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        },
        "Shell": [
            "/bin/bash",
            "-c"
        ]
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 20761835846,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/5d9f19088ef1ac6da88f4ee7de1d780ef6f99d0174b5fc9164679a3da2cf76b9/diff:/var/lib/docker/overlay2/775c13a24972cfa5ab14fa6f4a890a6910d0be398677ab525176a09e338ee5ac/diff:/var/lib/docker/overlay2/008358b930d3d5e3d1303426290e99f25df2b3746330d4144113d4b18325eafc/diff:/var/lib/docker/overlay2/329fbe86027606e3537833b332f69ea09e7f7dd913ce6513eb9e3fba3a22f3b2/diff:/var/lib/docker/overlay2/0369ff125c5d1d027ea2f6461e5571ff5f6c53ba1830b58cdd93140debf0773a/diff:/var/lib/docker/overlay2/faf99b1e8f07316cd2bd1b2d04da49be143891072746936c88e1689d06e07bf4/diff:/var/lib/docker/overlay2/993c8ed927074b572c77d8bcafdf3a689c9af8414897164db773a2d3df787107/diff:/var/lib/docker/overlay2/b879293969365a856c580f1dd88c7d33b12f40a52f6403b825cca54b690b1dc8/diff:/var/lib/docker/overlay2/3f1cc24cad355bdfb3af9be1be5029ffbae86408d8bd2fb678bb840626b7c248/diff:/var/lib/docker/overlay2/6d3e2b0bd5dd7389eba2cd88869f17333e4b8db7b442e49e6fcd507cc862a8e7/diff:/var/lib/docker/overlay2/703a0a664aea845956366301781deafe6751442eeba08b5bcd4665ecd06d9c06/diff:/var/lib/docker/overlay2/4bfcea7f737d127f9ca6b5ab48a4cd24cd2a70b80c78c19f831d5f5242ccd0eb/diff:/var/lib/docker/overlay2/a6f85062084c58a0f60a8fc62983d2b654467139201ff33294fc9730bc1ceaf9/diff:/var/lib/docker/overlay2/69bdee5ba5861b08d71e1790b20cfb2e259e27db9e5d308ef4f34348485104c2/diff:/var/lib/docker/overlay2/a647ee06c926e20668fc6a4354c56595a367b952786e018c0466472c36b37cc8/diff:/var/lib/docker/overlay2/0e407d407e974db69cefbceead0b675d3710bae905b6c4c1d5f5a6f1a2208f13/diff:/var/lib/docker/overlay2/e3e63358ea16be7fe43db23639b73ac0f21c835b05ffbf3d16b86a140065dfe4/diff:/var/lib/docker/overlay2/01d0ad7c15728a68872995477a712dc0a2f447b0ce99ab2ab3eda501023811a8/diff:/var/lib/docker/overlay2/eb19588e502845d8ce46d99d20c99b072ec802075b31a9cf8d1ba20f6e03a7ce/diff:/var/lib/docker/overlay2/d4e972221bbabb352331d80c6fe11d34088cc2d0166c77533e452bc4478d5165/diff:/var/lib/docker/overlay2/791e250eb5be883c8f6db7f152c289a36df81bb6ed60e72012debef0ee1f68a1/diff:/var/lib/docker/overlay2/799d975fc91d69c3b00cdfb40966336b02953c1fc811dbc0aed2888d86459039/diff:/var/lib/docker/overlay2/ccfe551d1424e5793cf9d703d4d9fcd8b465f6ab706eda1b424226b6341f33ce/diff:/var/lib/docker/overlay2/1cd93478138a0996c586db5d0b48cf85c9ade729210168ede4dd7311a3b299f1/diff:/var/lib/docker/overlay2/c5ada06daaa28838f584281789f51bccdc7bf607ea9a56c36b35a82fff68375f/diff:/var/lib/docker/overlay2/ace3f972cf88bd330727fa9a25fd0df2c3fec1df161ac9102bf9f5739b40b82c/diff",
            "MergedDir": "/var/lib/docker/overlay2/6ad72fd1eb74ac334a5b41379aeed11af8327e41a6feaa89f4869b7f2b9e04ff/merged",
            "UpperDir": "/var/lib/docker/overlay2/6ad72fd1eb74ac334a5b41379aeed11af8327e41a6feaa89f4869b7f2b9e04ff/diff",
            "WorkDir": "/var/lib/docker/overlay2/6ad72fd1eb74ac334a5b41379aeed11af8327e41a6feaa89f4869b7f2b9e04ff/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:270a1170e7e398434ff1b31e17e233f7d7b71aa99a40473615860068e86720af",
            "sha256:fb456a9e7760e7e4481d3d9eddee3ef62358753326d669c3efe6b0cc254ab9c3",
            "sha256:981a144d9fa2efffdbdfbdcfd3881dccb4fbf07867d4fa326a386126ca11b50f",
            "sha256:583f1d6040fc90a1380cb4d11d46b78bbbd4ad7d6274b9656ab9832ef7ba6342",
            "sha256:6020d84069cf575521faca653d2817e2195b8233b8c4e2d7bc65f3296296c2bb",
            "sha256:eb0143fcce68bd06a29bf314125d30f2a1f44d606fc419e7a54ab602bc26b2d0",
            "sha256:cb395f276e984728ca05fdee4e1dc203e1310a9e943a2d16d7aa87e5b2cf754b",
            "sha256:70a158f70e3777a129c15ffe51399bb6c226218c08ff6819948b9b1f26a46277",
            "sha256:9bb0510c7b4b74b056648f24e0a55c26f7521a1a1b766250a0776e14ce9d93f6",
            "sha256:7215ae9b9700d9530f70dea6f546b7ebdaac0098e4436b4c99e3e8063170c274",
            "sha256:acbc0d2ed199232b8f482ce41318ccd603e3c9a7731d39f0b08b88b253a6ca67",
            "sha256:01a6d1c2dfdac89761c3d387212d939340a9d8137d184cd2abd9c1ce5735ac92",
            "sha256:89906786ff47118123801e327e98661bba41596b39a997c58eec5d6460007e0c",
            "sha256:13dc673a674f8d965c35c71320b4d0e35e7af455d8696f464eb054d926be8323",
            "sha256:f8d87e81cd4851bcfde0e54d5648ebbe832fabab063018763a1add386cf21ca2",
            "sha256:4abfc765176c181e7253e092735eb534e81f209186302d8e38f93a9dc989cf8f",
            "sha256:5947633b85a2bc8acf6f131f5f047882dd8af2bc632dfbbd51353287cab469bd",
            "sha256:d24cc143d6ca89f22c3356721c77d2a831dc6c49a3ed90bf209e336baddab9f9",
            "sha256:f43a6620e96bb429c6295ec4a2ecb879d2f5bd5a77e655df0eb1e63756b3d132",
            "sha256:8c845dc079610ff243065284ab5989e6505b1f4838171ea17b895880f523c9b7",
            "sha256:c2942ba6d656c7351937d69721afa29961c586ab09e04177f8fed69441497a93",
            "sha256:b3d819c9deb1ff47b885b0103615b3c364ed412b4c0cae51af29f6d44e012276",
            "sha256:c9845bcc0748ded0c661bd69db45d93824c9906ebbd0698056e11a27da4c4757",
            "sha256:17542b0c92aa57a4d5224c2daf3dc2f08dc2fdb35346a240a8398565b405efa4",
            "sha256:c1ae024fb2efc37e2be058c9a2fd310f9f125e883b18298b7d35a966f7761006",
            "sha256:a67555d65c7e2bdde4bf04459ee5e70616f18fe5e92c5dd70522ffb9c43a8249",
            "sha256:bdea0678ab522c5f1685860ae4a0cc75527790d4d1feb332128a8c67dca86ecc"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-07-09T02:48:04.254906861+08:00"
    }
}
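
For reference, output like the JSON above can be regenerated locally once the image has been pulled and re-tagged; a minimal sketch:

# Full inspect output for the re-tagged image.
docker image inspect docker.io/vllm/vllm-openai:v0.9.2

# Extract just the entrypoint and working directory fields shown above.
docker image inspect --format '{{json .Config.Entrypoint}} {{.Config.WorkingDir}}' docker.io/vllm/vllm-openai:v0.9.2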

More versions

Image                                      Platform     Registry   Size     Synced            Views
docker.io/vllm/vllm-openai:v0.5.4          linux/amd64  docker.io   9.90GB  2024-09-07 06:20   926
docker.io/vllm/vllm-openai:v0.6.0          linux/amd64  docker.io   9.72GB  2024-09-11 01:51   866
docker.io/vllm/vllm-openai:v0.6.1.post2    linux/amd64  docker.io   9.81GB  2024-09-24 01:43   514
docker.io/vllm/vllm-openai:latest          linux/amd64  docker.io  10.24GB  2024-10-11 00:43  1483
docker.io/vllm/vllm-openai:v0.6.4.post1    linux/amd64  docker.io  10.64GB  2024-11-19 00:42   567
docker.io/vllm/vllm-openai:v0.6.4          linux/amd64  docker.io  10.64GB  2024-12-11 02:08   412
docker.io/vllm/vllm-openai:v0.6.3          linux/amd64  docker.io  10.43GB  2024-12-12 02:41   332
docker.io/vllm/vllm-openai:v0.6.6          linux/amd64  docker.io  10.23GB  2025-01-04 00:37   603
docker.io/vllm/vllm-openai:v0.6.6.post1    linux/amd64  docker.io  10.23GB  2025-01-24 00:21   359
docker.io/vllm/vllm-openai:v0.7.1          linux/amd64  docker.io  16.53GB  2025-02-08 02:05   457
docker.io/vllm/vllm-openai:v0.7.2          linux/amd64  docker.io  16.53GB  2025-02-09 00:28  1250
docker.io/vllm/vllm-openai:v0.7.3          linux/amd64  docker.io  16.43GB  2025-02-24 00:50  1705
docker.io/vllm/vllm-openai:v0.8.0          linux/amd64  docker.io  16.62GB  2025-03-20 00:23   633
docker.io/vllm/vllm-openai:v0.8.1          linux/amd64  docker.io  16.62GB  2025-03-21 00:28   528
docker.io/vllm/vllm-openai:v0.8.2          linux/amd64  docker.io  16.92GB  2025-03-27 01:12   628
docker.io/vllm/vllm-openai:v0.8.3          linux/amd64  docker.io  17.13GB  2025-04-08 00:58   687
docker.io/vllm/vllm-openai:v0.8.4          linux/amd64  docker.io  17.16GB  2025-04-17 01:16   836
docker.io/vllm/vllm-openai:v0.8.5          linux/amd64  docker.io  17.30GB  2025-04-30 02:45  1187
docker.io/vllm/vllm-openai:v0.8.5.post1    linux/amd64  docker.io  17.30GB  2025-05-07 02:06  1140
docker.io/vllm/vllm-openai:v0.9.0.1        linux/amd64  docker.io  20.81GB  2025-06-05 01:12   536
docker.io/vllm/vllm-openai:v0.9.1          linux/amd64  docker.io  20.85GB  2025-06-12 01:29   874
docker.io/vllm/vllm-openai:v0.9.2          linux/amd64  docker.io  20.76GB  2025-07-09 03:00    85