docker.io/vllm/vllm-openai:v0.15.0 linux/amd64

docker.io/vllm/vllm-openai:v0.15.0 - China-accessible download mirror
Image description:

vllm/vllm-openai

The vLLM OpenAI-compatible API server: a high-throughput inference and serving engine for large language models, exposed behind OpenAI-style endpoints.

Source image docker.io/vllm/vllm-openai:v0.15.0
CN mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0
Image ID sha256:59c018a60d965607819387096449351ab9d7ca3134b0a6f7e3f239f33dad9f4b
Image tag v0.15.0
Size 20.13GB
Registry docker.io
Project info Docker Hub page / project tags
CMD
Entrypoint vllm serve
Working directory /vllm-workspace
OS/Platform linux/amd64
Image created 2026-01-29T07:13:03.912321732Z
Synced 2026-01-31 00:51
Updated 2026-01-31 07:21
Environment variables
PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.9 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 brand=unknown,driver>=570,driver<571 brand=grid,driver>=570,driver<571 brand=tesla,driver>=570,driver<571 brand=nvidia,driver>=570,driver<571 brand=quadro,driver>=570,driver<571 brand=quadrortx,driver>=570,driver<571 brand=nvidiartx,driver>=570,driver<571 brand=vapps,driver>=570,driver<571 brand=vpc,driver>=570,driver<571 brand=vcs,driver>=570,driver<571 brand=vws,driver>=570,driver<571 brand=cloudgaming,driver>=570,driver<571 NV_CUDA_CUDART_VERSION=12.9.79-1 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility DEBIAN_FRONTEND=noninteractive UV_HTTP_TIMEOUT=500 UV_INDEX_STRATEGY=unsafe-best-match UV_LINK_MODE=copy TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.9 9.0 10.0 12.0 VLLM_USAGE_SOURCE=production-docker-image
Image labels
maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0  docker.io/vllm/vllm-openai:v0.15.0
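
After pulling and re-tagging, the container can be started directly, since its entrypoint is already vllm serve and any trailing arguments are passed to it. A minimal sketch, assuming the host has NVIDIA drivers plus the NVIDIA Container Toolkit; the model name is only a placeholder:

docker run --rm --gpus all --ipc=host -p 8000:8000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    docker.io/vllm/vllm-openai:v0.15.0 \
    Qwen/Qwen2.5-0.5B-Instruct --port 8000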

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0  docker.io/vllm/vllm-openai:v0.15.0
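
On a Kubernetes node where the kubelet talks to containerd, images are normally looked up in the k8s.io namespace, so the same pull and tag can be done there; a variant of the commands above, assuming that setup:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0 docker.io/vllm/vllm-openai:v0.15.0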

Shell quick-replace command

sed -i 's#vllm/vllm-openai:v0.15.0#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0#' deployment.yaml
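
To rewrite every manifest under a directory instead of a single file, the same substitution can be fed by grep; a sketch, assuming the manifests live under ./manifests (a placeholder path), and meant to be run only once, since the pattern also matches the already-rewritten mirror address:

grep -rl 'vllm/vllm-openai:v0.15.0' ./manifests | xargs sed -i 's#vllm/vllm-openai:v0.15.0#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0#'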

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0  docker.io/vllm/vllm-openai:v0.15.0'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0  docker.io/vllm/vllm-openai:v0.15.0'
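
An optional follow-up check, assuming the same "k8s" inventory group, to confirm the re-tagged image is now present on every node:

#ansible k8s -m shell -a 'docker images vllm/vllm-openai:v0.15.0'
#ansible k8s -m shell -a 'ctr images ls "name==docker.io/vllm/vllm-openai:v0.15.0"'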

Image build history


# 2026-01-29 15:13:03  0.00B Configure the command to run when the container starts
ENTRYPOINT ["vllm" "serve"]

# 2026-01-29 15:13:03  0.00B Set environment variable VLLM_USAGE_SOURCE
ENV VLLM_USAGE_SOURCE=production-docker-image
                        
# 2026-01-29 15:13:03  231.22MB Run command and create a new image layer
RUN |8 TARGETPLATFORM=linux/amd64 INSTALL_KV_CONNECTORS=true CUDA_VERSION=12.9.1 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= torch_cuda_arch_list=7.0 7.5 8.0 8.9 9.0 10.0 12.0 /bin/sh -c CUDA_MAJOR="${CUDA_VERSION%%.*}";     CUDA_VERSION_DASH=$(echo $CUDA_VERSION | cut -d. -f1,2 | tr '.' '-');     CUDA_HOME=/usr/local/cuda;     BUILD_PKGS="libcusparse-dev-${CUDA_VERSION_DASH}                 libcublas-dev-${CUDA_VERSION_DASH}                 libcusolver-dev-${CUDA_VERSION_DASH}";     if [ "$INSTALL_KV_CONNECTORS" = "true" ]; then         if [ "$CUDA_MAJOR" -ge 13 ]; then             uv pip install --system nixl-cu13;         fi;         uv pip install --system -r /tmp/kv_connectors.txt --no-build || (             apt-get update -y &&             apt-get install -y --no-install-recommends ${BUILD_PKGS} &&             uv pip install --system -r /tmp/kv_connectors.txt --no-build-isolation &&             apt-get purge -y ${BUILD_PKGS} &&             rm -rf /var/lib/apt/lists/*         );     fi # buildkit
                        
# 2026-01-29 15:12:59  0.00B Set environment variable TORCH_CUDA_ARCH_LIST
ENV TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.9 9.0 10.0 12.0

# 2026-01-29 15:12:59  0.00B Define build argument
ARG torch_cuda_arch_list=7.0 7.5 8.0 8.9 9.0 10.0 12.0

# 2026-01-29 15:12:59  0.00B Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500

# 2026-01-29 15:12:59  0.00B Define build argument
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2026-01-29 15:12:59  0.00B Define build argument
ARG PIP_INDEX_URL UV_INDEX_URL

# 2026-01-29 15:12:59  0.00B Define build argument
ARG CUDA_VERSION

# 2026-01-29 15:12:59  0.00B Define build argument
ARG INSTALL_KV_CONNECTORS=false

# 2026-01-29 15:12:59  0.00B Define build argument
ARG TARGETPLATFORM

# 2026-01-29 15:12:59  28.08KB Copy new files or directories into the container
COPY ./vllm/collect_env.py . # buildkit

# 2026-01-29 15:12:59  799.22KB Copy new files or directories into the container
COPY benchmarks benchmarks # buildkit

# 2026-01-29 15:12:59  1.04MB Copy new files or directories into the container
COPY examples examples # buildkit

# 2026-01-29 15:12:59  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64
                        
# 2026-01-29 15:12:59  162.87MB Run command and create a new image layer
RUN |22 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c uv pip install --system ep_kernels/dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') # buildkit
                        
# 2026-01-29 15:12:56  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib64

# 2026-01-29 15:12:56  47.57MB Run command and create a new image layer
RUN |22 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c sh -c 'if ls /tmp/deepgemm/dist/*.whl >/dev/null 2>&1; then               uv pip install --system /tmp/deepgemm/dist/*.whl;            else               echo "No DeepGEMM wheels to install; skipping.";            fi' # buildkit
                        
# 2026-01-29 15:12:55  0.00B Run command and create a new image layer
RUN |22 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c . /etc/environment && uv pip list # buildkit
                        
# 2026-01-29 15:12:55  1.31GB Run command and create a new image layer
RUN |22 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c if [ "${PYTORCH_NIGHTLY}" = "1" ]; then         echo "Installing torch nightly..."         && uv pip install --system $(cat torch_lib_versions.txt | xargs) --pre         --index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/nightly/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')         && echo "Installing vLLM..."         && uv pip install --system dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/nightly/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.');     else         echo "Installing vLLM..."         && uv pip install --system dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.');     fi # buildkit
                        
# 2026-01-29 15:03:12  69.00B Copy new files or directories into the container
COPY /workspace/torch_lib_versions.txt torch_lib_versions.txt # buildkit

# 2026-01-29 15:03:12  0.00B Define build argument
ARG PYTORCH_NIGHTLY

# 2026-01-29 15:03:12  0.00B Define build argument
ARG PIP_KEYRING_PROVIDER UV_KEYRING_PROVIDER

# 2026-01-29 15:03:12  0.00B Define build argument
ARG PYTORCH_CUDA_INDEX_BASE_URL

# 2026-01-29 15:03:12  0.00B Define build argument
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2026-01-29 15:03:12  0.00B Define build argument
ARG PIP_INDEX_URL UV_INDEX_URL
                        
# 2026-01-29 15:03:12  288.53MB Run command and create a new image layer
RUN |14 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 /bin/sh -c if [ "$TARGETPLATFORM" = "linux/arm64" ]; then         BITSANDBYTES_VERSION="${BITSANDBYTES_VERSION_ARM64}";     else         BITSANDBYTES_VERSION="${BITSANDBYTES_VERSION_X86}";     fi;     uv pip install --system accelerate hf_transfer modelscope         "bitsandbytes>=${BITSANDBYTES_VERSION}" "timm${TIMM_VERSION}" "runai-model-streamer[s3,gcs]${RUNAI_MODEL_STREAMER_VERSION}" # buildkit
                        
# 2026-01-29 15:03:05  0.00B Define build argument
ARG RUNAI_MODEL_STREAMER_VERSION=>=0.15.3

# 2026-01-29 15:03:05  0.00B Define build argument
ARG TIMM_VERSION=>=1.0.17

# 2026-01-29 15:03:05  0.00B Define build argument
ARG BITSANDBYTES_VERSION_ARM64=0.42.0

# 2026-01-29 15:03:05  0.00B Define build argument
ARG BITSANDBYTES_VERSION_X86=0.46.1
                        
# 2026-01-29 15:03:05  2.43MB Run command and create a new image layer
RUN |10 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 /bin/sh -c set -eux;     case "${TARGETPLATFORM}" in       linux/arm64) UUARCH="aarch64" ;;       linux/amd64) UUARCH="x64" ;;       *) echo "Unsupported TARGETPLATFORM: ${TARGETPLATFORM}" >&2; exit 1 ;;     esac;     /tmp/install_gdrcopy.sh "${GDRCOPY_OS_VERSION}" "${GDRCOPY_CUDA_VERSION}" "${UUARCH}" &&     rm /tmp/install_gdrcopy.sh # buildkit
                        
# 2026-01-29 15:02:57  1.44KB Copy new files or directories into the container
COPY tools/install_gdrcopy.sh /tmp/install_gdrcopy.sh # buildkit

# 2026-01-29 15:02:56  0.00B Define build argument
ARG TARGETPLATFORM

# 2026-01-29 15:02:56  0.00B Define build argument
ARG GDRCOPY_OS_VERSION=Ubuntu22_04

# 2026-01-29 15:02:56  0.00B Define build argument
ARG GDRCOPY_CUDA_VERSION=12.8
                        
# 2026-01-29 15:02:56  5.21GB Run command and create a new image layer
RUN |7 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 /bin/sh -c uv pip install --system flashinfer-cubin==${FLASHINFER_VERSION}     && uv pip install --system flashinfer-jit-cache==${FLASHINFER_VERSION}         --extra-index-url https://flashinfer.ai/whl/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')     && flashinfer show-config # buildkit
                        
# 2026-01-29 15:00:54  0.00B Define build argument
ARG FLASHINFER_VERSION=0.6.1

# 2026-01-29 15:00:54  9.21GB Run command and create a new image layer
RUN |6 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl /bin/sh -c uv pip install --system -r /tmp/requirements-cuda.txt         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') &&     rm /tmp/requirements-cuda.txt /tmp/common.txt # buildkit
                        
# 2026-01-29 14:59:24  515.00B Copy new files or directories into the container
COPY requirements/cuda.txt /tmp/requirements-cuda.txt # buildkit

# 2026-01-29 14:59:24  2.66KB Copy new files or directories into the container
COPY requirements/common.txt /tmp/common.txt # buildkit

# 2026-01-29 14:59:23  0.00B Define build argument
ARG PYTORCH_CUDA_INDEX_BASE_URL

# 2026-01-29 14:59:23  49.58KB Run command and create a new image layer
RUN |5 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c echo "/usr/local/cuda-$(echo "$CUDA_VERSION" | cut -d. -f1,2)/compat/" > /etc/ld.so.conf.d/00-cuda-compat.conf && ldconfig # buildkit
                        
# 2026-01-29 14:59:22  0.00B Set environment variable UV_LINK_MODE
ENV UV_LINK_MODE=copy

# 2026-01-29 14:59:22  0.00B Set environment variable UV_INDEX_STRATEGY
ENV UV_INDEX_STRATEGY=unsafe-best-match

# 2026-01-29 14:59:22  0.00B Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500

# 2026-01-29 14:59:22  78.31MB Run command and create a new image layer
RUN |5 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c python3 -m pip install uv # buildkit
                        
# 2026-01-29 14:59:18  2.52GB Run command and create a new image layer
RUN |5 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c CUDA_VERSION_DASH=$(echo $CUDA_VERSION | cut -d. -f1,2 | tr '.' '-') &&     apt-get update -y &&     apt-get install -y --no-install-recommends         cuda-nvcc-${CUDA_VERSION_DASH}         cuda-cudart-${CUDA_VERSION_DASH}         cuda-nvrtc-${CUDA_VERSION_DASH}         cuda-cuobjdump-${CUDA_VERSION_DASH}         libcurand-dev-${CUDA_VERSION_DASH}         libcublas-${CUDA_VERSION_DASH}         libnccl-dev &&     rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2026-01-29 14:58:07  664.22MB Run command and create a new image layer
RUN |5 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c echo 'tzdata tzdata/Areas select America' | debconf-set-selections     && echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections     && apt-get update -y     && apt-get install -y --no-install-recommends         software-properties-common         curl         sudo         python3-pip         ffmpeg         libsm6         libxext6         libgl1     && if [ ! -z ${DEADSNAKES_MIRROR_URL} ] ; then         if [ ! -z "${DEADSNAKES_GPGKEY_URL}" ] ; then             mkdir -p -m 0755 /etc/apt/keyrings ;             curl -L ${DEADSNAKES_GPGKEY_URL} | gpg --dearmor > /etc/apt/keyrings/deadsnakes.gpg ;             sudo chmod 644 /etc/apt/keyrings/deadsnakes.gpg ;             echo "deb [signed-by=/etc/apt/keyrings/deadsnakes.gpg] ${DEADSNAKES_MIRROR_URL} $(lsb_release -cs) main" > /etc/apt/sources.list.d/deadsnakes.list ;         fi ;     else         for i in 1 2 3; do             add-apt-repository -y ppa:deadsnakes/ppa && break ||             { echo "Attempt $i failed, retrying in 5s..."; sleep 5; };         done ;     fi     && apt-get update -y     && apt-get install -y --no-install-recommends         python${PYTHON_VERSION}         python${PYTHON_VERSION}-dev         python${PYTHON_VERSION}-venv         libibverbs-dev     && rm -rf /var/lib/apt/lists/*     && update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1     && update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION}     && ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config     && curl -sS ${GET_PIP_URL} | python${PYTHON_VERSION}     && python3 --version && python3 -m pip --version # buildkit
                        
# 2026-01-29 14:54:15  136.00B Run command and create a new image layer
RUN |5 CUDA_VERSION=12.9.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') &&     echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment # buildkit
                        
# 2026-01-29 14:53:35  0.00B Set working directory to /vllm-workspace
WORKDIR /vllm-workspace

# 2026-01-29 14:53:35  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive

# 2026-01-29 14:53:35  0.00B Define build argument
ARG GET_PIP_URL

# 2026-01-29 14:53:35  0.00B Define build argument
ARG DEADSNAKES_GPGKEY_URL

# 2026-01-29 14:53:35  0.00B Define build argument
ARG DEADSNAKES_MIRROR_URL

# 2026-01-29 14:53:35  0.00B Define build argument
ARG PYTHON_VERSION

# 2026-01-29 14:53:35  0.00B Define build argument
ARG CUDA_VERSION

# 2025-07-19 04:11:19  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2025-07-19 04:11:19  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2025-07-19 04:11:19  17.29KB Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit

# 2025-07-19 04:11:19  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64

# 2025-07-19 04:11:19  0.00B Set environment variable PATH
ENV PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# 2025-07-19 04:11:19  22.00B Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2025-07-19 04:11:19  315.65MB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-9=${NV_CUDA_CUDART_VERSION}     cuda-compat-12-9     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-07-19 04:11:02  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.9.1

# 2025-07-19 04:11:02  10.60MB Run command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-07-19 04:11:02  0.00B Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-07-19 04:11:02  0.00B Define build argument
ARG TARGETARCH

# 2025-07-19 04:11:02  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.9.79-1

# 2025-07-19 04:11:02  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.9 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 brand=unknown,driver>=570,driver<571 brand=grid,driver>=570,driver<571 brand=tesla,driver>=570,driver<571 brand=nvidia,driver>=570,driver<571 brand=quadro,driver>=570,driver<571 brand=quadrortx,driver>=570,driver<571 brand=nvidiartx,driver>=570,driver<571 brand=vapps,driver>=570,driver<571 brand=vpc,driver>=570,driver<571 brand=vcs,driver>=570,driver<571 brand=vws,driver>=570,driver<571 brand=cloudgaming,driver>=570,driver<571
                        
# 2025-07-19 04:11:02  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2025-07-15 00:33:32  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-07-15 00:33:31  77.87MB 
/bin/sh -c #(nop) ADD file:415bbc01dfb447d002e2d8173e113ef025d2bbfa20f1205823fa699dc87a2019 in / 
                        
# 2025-07-15 00:33:29  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-07-15 00:33:29  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-07-15 00:33:29  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-07-15 00:33:29  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:59c018a60d965607819387096449351ab9d7ca3134b0a6f7e3f239f33dad9f4b",
    "RepoTags": [
        "vllm/vllm-openai:v0.15.0",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.0"
    ],
    "RepoDigests": [
        "vllm/vllm-openai@sha256:7764931211e5b408a10d6e289ab9eca8d6ecd105e9239cb2d528c2c4d0ad67b0",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai@sha256:97187c9535fd6d6040444d68bb073f17344fd454e9241cc7a4e998141f244543"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2026-01-29T07:13:03.912321732Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.9 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551 brand=unknown,driver\u003e=560,driver\u003c561 brand=grid,driver\u003e=560,driver\u003c561 brand=tesla,driver\u003e=560,driver\u003c561 brand=nvidia,driver\u003e=560,driver\u003c561 brand=quadro,driver\u003e=560,driver\u003c561 brand=quadrortx,driver\u003e=560,driver\u003c561 brand=nvidiartx,driver\u003e=560,driver\u003c561 brand=vapps,driver\u003e=560,driver\u003c561 brand=vpc,driver\u003e=560,driver\u003c561 brand=vcs,driver\u003e=560,driver\u003c561 brand=vws,driver\u003e=560,driver\u003c561 brand=cloudgaming,driver\u003e=560,driver\u003c561 brand=unknown,driver\u003e=565,driver\u003c566 brand=grid,driver\u003e=565,driver\u003c566 brand=tesla,driver\u003e=565,driver\u003c566 brand=nvidia,driver\u003e=565,driver\u003c566 brand=quadro,driver\u003e=565,driver\u003c566 brand=quadrortx,driver\u003e=565,driver\u003c566 brand=nvidiartx,driver\u003e=565,driver\u003c566 brand=vapps,driver\u003e=565,driver\u003c566 brand=vpc,driver\u003e=565,driver\u003c566 brand=vcs,driver\u003e=565,driver\u003c566 brand=vws,driver\u003e=565,driver\u003c566 brand=cloudgaming,driver\u003e=565,driver\u003c566 brand=unknown,driver\u003e=570,driver\u003c571 brand=grid,driver\u003e=570,driver\u003c571 brand=tesla,driver\u003e=570,driver\u003c571 brand=nvidia,driver\u003e=570,driver\u003c571 brand=quadro,driver\u003e=570,driver\u003c571 brand=quadrortx,driver\u003e=570,driver\u003c571 brand=nvidiartx,driver\u003e=570,driver\u003c571 brand=vapps,driver\u003e=570,driver\u003c571 brand=vpc,driver\u003e=570,driver\u003c571 brand=vcs,driver\u003e=570,driver\u003c571 brand=vws,driver\u003e=570,driver\u003c571 brand=cloudgaming,driver\u003e=570,driver\u003c571",
            "NV_CUDA_CUDART_VERSION=12.9.79-1",
            "CUDA_VERSION=12.9.1",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "DEBIAN_FRONTEND=noninteractive",
            "UV_HTTP_TIMEOUT=500",
            "UV_INDEX_STRATEGY=unsafe-best-match",
            "UV_LINK_MODE=copy",
            "TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.9 9.0 10.0 12.0",
            "VLLM_USAGE_SOURCE=production-docker-image"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/vllm-workspace",
        "Entrypoint": [
            "vllm",
            "serve"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 20133144023,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/a11f7669dd5fc9fab8f666a8f2d4498c6b584417801248b19f2911a3fdb096b6/diff:/var/lib/docker/overlay2/982f9f9dc989a28727decbddadca1ecfbc40dcbef6567d61988d872120f0d7fe/diff:/var/lib/docker/overlay2/95510c56b2a5ecf74cd77855d15bc9317c6f86aeee1190273ed4eaf9a41e250e/diff:/var/lib/docker/overlay2/331408b389395d6697b74d372cd7f72effc012913c61e9203877c6914bab1d9c/diff:/var/lib/docker/overlay2/14ba8bb0cc90abcf556f34d23861c8a6f2d1e4404f8f13d1213ce69f5bffac19/diff:/var/lib/docker/overlay2/7e002999ad779ca6597a7e6a1d03e9ff8459f4b03f90d625adb745c311bd8e97/diff:/var/lib/docker/overlay2/4aa09a622d25dd9eec82c756ace2d1dff3aef880dc3681f1ec504bba27853838/diff:/var/lib/docker/overlay2/5f135716ff6b008a7fd3b5de57c9cb13ab366b5d71cd40d170e5d688c6c6e24c/diff:/var/lib/docker/overlay2/62eb78bb5471c374ade9f3dfa0c348e98510e2607ec8b9650209b959969f2fe2/diff:/var/lib/docker/overlay2/76bd9cf33071e5d0e3f60b8d49c1745415ea4cc2570f6b02592571b14e4b187d/diff:/var/lib/docker/overlay2/58efe055c37f1a09e71cccba37d69ef848fd2a6826b2433978167726e59a673e/diff:/var/lib/docker/overlay2/51f74aee6823260f1e52302e73265e637e8bcbcf44968a3811f130e2325bdc52/diff:/var/lib/docker/overlay2/6b7b0b3185ff2c47f71b7bc73161e6e5076539932db400fb96b150b3605f01e3/diff:/var/lib/docker/overlay2/49d4987fe78d21768d5f98ae8503e9cac7b751e20e8ac0dc40fdef611c1f89e8/diff:/var/lib/docker/overlay2/5525da942ee40cf4f71df16a87bfac2dcedb1f2cb5d455d2e692a3acbdfb4bb3/diff:/var/lib/docker/overlay2/3cbe09ea2b3b0e882288cddbdad30f001ac4987f2efe956298d1c24cc1a3fa16/diff:/var/lib/docker/overlay2/19ab6418ddc75e6300397ee12c93720c6acf8c36b2e382d1922e3142d6888126/diff:/var/lib/docker/overlay2/86058470a668bd34ee589b81abb417845f0204ba0fd2220679beb14c32158b85/diff:/var/lib/docker/overlay2/08b30dc4fcabf43154b27a2ebf53c1f3533fee41ee1af5a49a29c3821a872c7b/diff:/var/lib/docker/overlay2/ce85028ca0fde4801eea208322329e82cfc6941d24ba013351fe8677cc73ae17/diff:/var/lib/docker/overlay2/8bfeb9741cbb7ff99eb17adec62bebcd9bf27721c818f4fb8c28a510ff32601e/diff:/var/lib/docker/overlay2/73817e6f1d559da16e91d63e3d45a45e629c405ca4b9b4da6a8285be5166d872/diff:/var/lib/docker/overlay2/b5165af533a5dbf41f26807aba3e018f240c04d1eb03d137a07bc2e0a8246eca/diff:/var/lib/docker/overlay2/51feab2ff01a0b9ab3470eb4c28dedd3eaaa6a603667789edc5db1c9f62efaa7/diff:/var/lib/docker/overlay2/bea31f088ef880f0e0fa30c252b4784a914c6ace17b604af5d5aba6ba86812e5/diff:/var/lib/docker/overlay2/e7497b8b4fd586a4d779cf6e4f6365c897308879e6cb44107c5ebbc839adbfd0/diff",
            "MergedDir": "/var/lib/docker/overlay2/02cc4481cbd5ba09a18943579efbe105f1ca6c484a204d7b4fffba94a902409b/merged",
            "UpperDir": "/var/lib/docker/overlay2/02cc4481cbd5ba09a18943579efbe105f1ca6c484a204d7b4fffba94a902409b/diff",
            "WorkDir": "/var/lib/docker/overlay2/02cc4481cbd5ba09a18943579efbe105f1ca6c484a204d7b4fffba94a902409b/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:3cc982388b71ef357e0157e0b7d3059dcefa4dc9fd2e3815bde6c6ce040302f3",
            "sha256:b5e294e75ffe843434721e036fd14b2ac9323ec7e3fd6d5daa4ab18009e8f2ef",
            "sha256:2f58442919fa6fc058366388cf4cf5cac69571b9e4100715a0659dcb3f2b464f",
            "sha256:68b381704cbd81ffb973ff5841fde44df7a1e544836551e969bdde6a7d4937a0",
            "sha256:455bca42f6ec40ca42fea4bea15c6c17b97101af90413cdd25647de2b9d98960",
            "sha256:ed79395659409fcda4b5523453dda784dc3f6f83aa8a411c68a96409151ec913",
            "sha256:adfaef702a3ca1064cffb012616f6163cb2b4b03f6c5eef7e96b826e4669c16b",
            "sha256:22462faf8ff16f941ca6d46cc7aa9989b15048af25952a4114fb327ac80ec126",
            "sha256:e7f8ec31fb4799cd861fd4c667ef5b5cffec3912d10a851efaa19f140754f249",
            "sha256:7c3eda601c5374af6a1d577ea05a7a09bbc9817fa3a922bdd02b45e24ae0521d",
            "sha256:0b9e1ab83309b8312a3cd9e057c37ede9ffd0618357a76339f896464a1161207",
            "sha256:a4ad89a12dcfcfbb5737a4d34af53318a8ced4cd7fd156f1d6b68e3df8d8a972",
            "sha256:484dbc6be6f61804e40ecbd51bc5fdbd3071c0e624558b8aa9338bb3bee46e84",
            "sha256:8a54478b2c5cbdebf2dba2e72ad988b59c633370ad588050e3b8fba0ed765520",
            "sha256:11bf38cd05ec89050f12ce4280502942e1339838e6b6b90ccff276b167d4cabf",
            "sha256:5c765e2b5cd398bb3573035ccc6810c28b1218f81ed53b6b939564456b64aa94",
            "sha256:72c422947a2938c4c2a8f08d193a3ce314664ac558238b3c858c2eb3bf2ff9a0",
            "sha256:fbb11e934d289b9e7d40ebd592921618d7a04acb1d2838c34bda5e9657db6ac8",
            "sha256:0432aed0869ff041cbffb605d59e6b21fd29f5b330627da170cb9b75f78a8a3e",
            "sha256:57bcf27ef1b8bddb45c0b5d5c6d78c15c641c35982a291dce545ce975467906f",
            "sha256:23c95db2b06b4adaa96284497c359154b5ff79646b950cec727425e6ef100145",
            "sha256:5e939fc2d4e11c9d847444df29775ada12e3c72348285c75bcc716d4c1167f4f",
            "sha256:fb8eb373877e23fbc48a8d7543f6fd7b052f992c336ff0c0039ef009088bdf96",
            "sha256:c27e0e2762ae594a26682d7a3472bdd16ab9160b513b3dbf1767900b7f15212e",
            "sha256:5cafc6ae31253116e473d0d7d427af47da3117583cf63cab09074108934da76d",
            "sha256:f36ead5a47604e1717722b7925310eeb79b3aab96b8423801d245b65151d117c",
            "sha256:2c798f7151b1403f80d9de12a469bb703e96e785a1f8b4ca3880abd5aa952a85"
        ]
    },
    "Metadata": {
        "LastTagTime": "2026-01-31T00:27:14.634780727+08:00"
    }
}

More versions

Image                                           Platform     Registry   Size     Synced             Views
docker.io/vllm/vllm-openai:v0.5.4               linux/amd64  docker.io  9.90GB   2024-09-07 06:20   2082
docker.io/vllm/vllm-openai:v0.6.0               linux/amd64  docker.io  9.72GB   2024-09-11 01:51   1364
docker.io/vllm/vllm-openai:v0.6.1.post2         linux/amd64  docker.io  9.81GB   2024-09-24 01:43   993
docker.io/vllm/vllm-openai:latest               linux/amd64  docker.io  10.24GB  2024-10-11 00:43   5490
docker.io/vllm/vllm-openai:v0.6.4.post1         linux/amd64  docker.io  10.64GB  2024-11-19 00:42   1006
docker.io/vllm/vllm-openai:v0.6.4               linux/amd64  docker.io  10.64GB  2024-12-11 02:08   838
docker.io/vllm/vllm-openai:v0.6.3               linux/amd64  docker.io  10.43GB  2024-12-12 02:41   957
docker.io/vllm/vllm-openai:v0.6.6               linux/amd64  docker.io  10.23GB  2025-01-04 00:37   1377
docker.io/vllm/vllm-openai:v0.6.6.post1         linux/amd64  docker.io  10.23GB  2025-01-24 00:21   921
docker.io/vllm/vllm-openai:v0.7.1               linux/amd64  docker.io  16.53GB  2025-02-08 02:05   1052
docker.io/vllm/vllm-openai:v0.7.2               linux/amd64  docker.io  16.53GB  2025-02-09 00:28   2578
docker.io/vllm/vllm-openai:v0.7.3               linux/amd64  docker.io  16.43GB  2025-02-24 00:50   3307
docker.io/vllm/vllm-openai:v0.8.0               linux/amd64  docker.io  16.62GB  2025-03-20 00:23   1286
docker.io/vllm/vllm-openai:v0.8.1               linux/amd64  docker.io  16.62GB  2025-03-21 00:28   1068
docker.io/vllm/vllm-openai:v0.8.2               linux/amd64  docker.io  16.92GB  2025-03-27 01:12   1299
docker.io/vllm/vllm-openai:v0.8.3               linux/amd64  docker.io  17.13GB  2025-04-08 00:58   1346
docker.io/vllm/vllm-openai:v0.8.4               linux/amd64  docker.io  17.16GB  2025-04-17 01:16   1714
docker.io/vllm/vllm-openai:v0.8.5               linux/amd64  docker.io  17.30GB  2025-04-30 02:45   3049
docker.io/vllm/vllm-openai:v0.8.5.post1         linux/amd64  docker.io  17.30GB  2025-05-07 02:06   2938
docker.io/vllm/vllm-openai:v0.9.0.1             linux/amd64  docker.io  20.81GB  2025-06-05 01:12   1784
docker.io/vllm/vllm-openai:v0.9.1               linux/amd64  docker.io  20.85GB  2025-06-12 01:29   2665
docker.io/vllm/vllm-openai:v0.9.2               linux/amd64  docker.io  20.76GB  2025-07-09 03:00   5837
docker.io/vllm/vllm-openai:v0.10.0              linux/amd64  docker.io  26.13GB  2025-07-26 03:15   1569
docker.io/vllm/vllm-openai:gptoss               linux/amd64  docker.io  33.86GB  2025-08-07 01:52   1149
docker.io/vllm/vllm-openai:v0.10.1              linux/amd64  docker.io  20.25GB  2025-08-20 03:05   1162
docker.io/vllm/vllm-openai:v0.10.1.1            linux/amd64  docker.io  20.26GB  2025-08-23 01:43   1728
docker.io/vllm/vllm-openai:v0.10.2              linux/amd64  docker.io  22.49GB  2025-09-16 03:40   1328
docker.io/vllm/vllm-openai:v0.2.7               linux/amd64  docker.io  6.34GB   2025-10-01 01:07   341
docker.io/vllm/vllm-openai:v0.11.0-x86_64       linux/amd64  docker.io  25.86GB  2025-10-09 02:14   1520
docker.io/vllm/vllm-openai:v0.10.2-x86_64       linux/amd64  docker.io  22.49GB  2025-10-09 02:22   445
docker.io/vllm/vllm-openai:v0.11.0              linux/amd64  docker.io  25.86GB  2025-10-09 11:24   1678
docker.io/vllm/vllm-openai:v0.11.0              linux/arm64  docker.io  24.17GB  2025-10-30 00:47   739
docker.io/vllm/vllm-openai:v0.3.3               linux/amd64  docker.io  9.13GB   2025-11-18 01:01   231
docker.io/vllm/vllm-openai:v0.11.1              linux/amd64  docker.io  28.72GB  2025-11-21 01:03   567
docker.io/vllm/vllm-openai:v0.11.2              linux/amd64  docker.io  28.82GB  2025-11-22 00:46   955
docker.io/vllm/vllm-openai:v0.11.1              linux/arm64  docker.io  26.54GB  2025-11-22 01:23   301
docker.io/vllm/vllm-openai:v0.4.0               linux/amd64  docker.io  9.88GB   2025-11-22 01:58   319
docker.io/vllm/vllm-openai:v0.11.2              linux/arm64  docker.io  26.54GB  2025-11-22 04:06   464
docker.io/vllm/vllm-openai:nightly              linux/amd64  docker.io  18.74GB  2025-12-03 02:43   427
docker.io/vllm/vllm-openai:v0.12.0-aarch64      linux/arm64  docker.io  17.89GB  2025-12-05 03:12   376
docker.io/vllm/vllm-openai:v0.12.0              linux/amd64  docker.io  19.47GB  2025-12-05 03:59   1331
docker.io/vllm/vllm-openai:v0.13.0              linux/amd64  docker.io  19.51GB  2026-01-22 01:41   158
docker.io/vllm/vllm-openai:v0.14.0              linux/amd64  docker.io  19.66GB  2026-01-22 03:16   271
docker.io/vllm/vllm-openai:v0.14.1              linux/amd64  docker.io  19.69GB  2026-01-27 01:52   176
docker.io/vllm/vllm-openai:v0.15.0              linux/amd64  docker.io  20.13GB  2026-01-31 00:51   10