docker.io/vllm/vllm-openai:v0.15.1-cu130 linux/amd64

docker.io/vllm/vllm-openai:v0.15.1-cu130 - China mirror download source  (views: 10)
Image description:

vllm/vllm-openai

vLLM inference server that exposes an OpenAI-compatible HTTP API for serving large language models.
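
Once a container from this image is running, the server speaks the OpenAI-compatible HTTP API; a minimal request sketch (assuming the default port 8000 and a placeholder model name, neither of which comes from this page):

# the model name below is only an example; use whatever model the server was started with
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct", "messages": [{"role": "user", "content": "Hello"}]}'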

Source image    docker.io/vllm/vllm-openai:v0.15.1-cu130
China mirror    swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130
Image ID        sha256:50c07525d9711ea838079e13c7745cacbf2acccc5940de9ccf93ba10e2324996
Image tag       v0.15.1-cu130
Size            18.77GB
Image source    docker.io
Project info    Docker Hub homepage / project tags
CMD
Entrypoint      vllm serve
Working dir     /vllm-workspace
OS/Platform     linux/amd64
Views           10
Contributor
Image created   2026-02-04T19:01:47.758885511Z
Synced at       2026-02-07 00:39
Environment variables
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=13.0 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 brand=unknown,driver>=570,driver<571 brand=grid,driver>=570,driver<571 brand=tesla,driver>=570,driver<571 brand=nvidia,driver>=570,driver<571 brand=quadro,driver>=570,driver<571 brand=quadrortx,driver>=570,driver<571 brand=nvidiartx,driver>=570,driver<571 brand=vapps,driver>=570,driver<571 brand=vpc,driver>=570,driver<571 brand=vcs,driver>=570,driver<571 brand=vws,driver>=570,driver<571 brand=cloudgaming,driver>=570,driver<571 brand=unknown,driver>=575,driver<576 brand=grid,driver>=575,driver<576 brand=tesla,driver>=575,driver<576 brand=nvidia,driver>=575,driver<576 brand=quadro,driver>=575,driver<576 brand=quadrortx,driver>=575,driver<576 brand=nvidiartx,driver>=575,driver<576 brand=vapps,driver>=575,driver<576 brand=vpc,driver>=575,driver<576 brand=vcs,driver>=575,driver<576 brand=vws,driver>=575,driver<576 brand=cloudgaming,driver>=575,driver<576 NV_CUDA_CUDART_VERSION=13.0.88-1 CUDA_VERSION=13.0.1 LD_LIBRARY_PATH=/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility DEBIAN_FRONTEND=noninteractive UV_HTTP_TIMEOUT=500 UV_INDEX_STRATEGY=unsafe-best-match UV_LINK_MODE=copy TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.9 9.0 10.0 12.0 VLLM_USAGE_SOURCE=production-docker-image
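
The NVIDIA_REQUIRE_CUDA entry above means the host driver must be new enough for CUDA 13.0; a quick host-side sanity check before scheduling this image (a sketch, nothing image-specific):

# print the installed NVIDIA driver version for each GPU on the host
nvidia-smi --query-gpu=driver_version --format=csv,noheader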
Image labels
maintainer: NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130  docker.io/vllm/vllm-openai:v0.15.1-cu130
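
With the image pulled and re-tagged, a typical way to start the server looks like the sketch below; the GPU flags, port, cache mount, model name and extra arguments are illustrative, not taken from this page. Anything after the image name is appended to the "vllm serve" entrypoint.

# example only: serve a Hugging Face model on port 8000
docker run --rm --gpus all --ipc=host \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  docker.io/vllm/vllm-openai:v0.15.1-cu130 \
  Qwen/Qwen2.5-7B-Instruct --max-model-len 8192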

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130  docker.io/vllm/vllm-openai:v0.15.1-cu130
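
On Kubernetes nodes, kubelet normally resolves images from containerd's k8s.io namespace, so the pull and tag are usually run with an explicit namespace (sketch):

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130  docker.io/vllm/vllm-openai:v0.15.1-cu130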

Shell quick-replace command

sed -i 's#vllm/vllm-openai:v0.15.1-cu130#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130#' deployment.yaml
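
To rewrite the reference in every manifest under a directory instead of a single deployment.yaml, one possible variant (sketch; the ./manifests path is an example):

# update all YAML files that mention the original image reference
grep -rl 'vllm/vllm-openai:v0.15.1-cu130' ./manifests --include='*.yaml' | \
  xargs sed -i 's#vllm/vllm-openai:v0.15.1-cu130#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130#'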

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130  docker.io/vllm/vllm-openai:v0.15.1-cu130'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130  docker.io/vllm/vllm-openai:v0.15.1-cu130'
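
After distributing with either command, a quick check that every node in the k8s inventory group ended up with the tag (sketch; adjust to whichever runtime is actually in use):

# Docker hosts
ansible k8s -m shell -a 'docker images | grep vllm-openai'
# containerd hosts
ansible k8s -m shell -a 'ctr images ls | grep vllm-openai'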

Image build history


# 2026-02-05 03:01:47  0.00B Configure the container startup command
ENTRYPOINT ["vllm" "serve"]
                        
# 2026-02-05 03:01:47  0.00B Set environment variable VLLM_USAGE_SOURCE
ENV VLLM_USAGE_SOURCE=production-docker-image
                        
# 2026-02-05 03:01:47  433.32MB Run a command and create a new image layer
RUN |8 TARGETPLATFORM=linux/amd64 INSTALL_KV_CONNECTORS=true CUDA_VERSION=13.0.1 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= torch_cuda_arch_list=7.0 7.5 8.0 8.9 9.0 10.0 12.0 /bin/sh -c CUDA_MAJOR="${CUDA_VERSION%%.*}";     CUDA_VERSION_DASH=$(echo $CUDA_VERSION | cut -d. -f1,2 | tr '.' '-');     CUDA_HOME=/usr/local/cuda;     BUILD_PKGS="libcusparse-dev-${CUDA_VERSION_DASH}                 libcublas-dev-${CUDA_VERSION_DASH}                 libcusolver-dev-${CUDA_VERSION_DASH}";     if [ "$INSTALL_KV_CONNECTORS" = "true" ]; then         if [ "$CUDA_MAJOR" -ge 13 ]; then             uv pip install --system nixl-cu13;         fi;         uv pip install --system -r /tmp/kv_connectors.txt --no-build || (             apt-get update -y &&             apt-get install -y --no-install-recommends ${BUILD_PKGS} &&             uv pip install --system -r /tmp/kv_connectors.txt --no-build-isolation &&             apt-get purge -y ${BUILD_PKGS} &&             rm -rf /var/lib/apt/lists/*         );     fi # buildkit
                        
# 2026-02-05 03:01:39  0.00B Set environment variable TORCH_CUDA_ARCH_LIST
ENV TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.9 9.0 10.0 12.0

# 2026-02-05 03:01:39  0.00B Define a build argument
ARG torch_cuda_arch_list=7.0 7.5 8.0 8.9 9.0 10.0 12.0

# 2026-02-05 03:01:39  0.00B Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500

# 2026-02-05 03:01:39  0.00B Define build arguments
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2026-02-05 03:01:39  0.00B Define build arguments
ARG PIP_INDEX_URL UV_INDEX_URL

# 2026-02-05 03:01:39  0.00B Define a build argument
ARG CUDA_VERSION

# 2026-02-05 03:01:39  0.00B Define a build argument
ARG INSTALL_KV_CONNECTORS=false

# 2026-02-05 03:01:39  0.00B Define a build argument
ARG TARGETPLATFORM

# 2026-02-05 03:01:39  28.08KB Copy new files or directories into the container
COPY ./vllm/collect_env.py . # buildkit

# 2026-02-05 03:01:39  799.22KB Copy new files or directories into the container
COPY benchmarks benchmarks # buildkit

# 2026-02-05 03:01:39  1.04MB Copy new files or directories into the container
COPY examples examples # buildkit

# 2026-02-05 03:01:39  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64
                        
# 2026-02-05 03:01:39  64.74MB Run a command and create a new image layer
RUN |22 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c uv pip install --system ep_kernels/dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') # buildkit
                        
# 2026-02-05 03:01:38  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64
                        
# 2026-02-05 03:01:38  47.55MB Run a command and create a new image layer
RUN |22 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c sh -c 'if ls /tmp/deepgemm/dist/*.whl >/dev/null 2>&1; then               uv pip install --system /tmp/deepgemm/dist/*.whl;            else               echo "No DeepGEMM wheels to install; skipping.";            fi' # buildkit
                        
# 2026-02-05 03:01:37  0.00B Run a command and create a new image layer
RUN |22 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c . /etc/environment && uv pip list # buildkit
                        
# 2026-02-05 03:01:37  606.20MB Run a command and create a new image layer
RUN |22 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled PYTORCH_NIGHTLY= /bin/sh -c if [ "${PYTORCH_NIGHTLY}" = "1" ]; then         echo "Installing torch nightly..."         && uv pip install --system $(cat torch_lib_versions.txt | xargs) --pre         --index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/nightly/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')         && echo "Installing vLLM..."         && uv pip install --system dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/nightly/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.');     else         echo "Installing vLLM..."         && uv pip install --system dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.');     fi # buildkit
                        
# 2026-02-05 02:54:03  69.00B Copy new files or directories into the container
COPY /workspace/torch_lib_versions.txt torch_lib_versions.txt # buildkit

# 2026-02-05 02:54:03  0.00B Define a build argument
ARG PYTORCH_NIGHTLY

# 2026-02-05 02:54:03  0.00B Define build arguments
ARG PIP_KEYRING_PROVIDER UV_KEYRING_PROVIDER

# 2026-02-05 02:54:03  0.00B Define a build argument
ARG PYTORCH_CUDA_INDEX_BASE_URL

# 2026-02-05 02:54:03  0.00B Define build arguments
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2026-02-05 02:54:03  0.00B Define build arguments
ARG PIP_INDEX_URL UV_INDEX_URL
                        
# 2026-02-05 02:54:03  288.56MB Run a command and create a new image layer
RUN |14 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 BITSANDBYTES_VERSION_X86=0.46.1 BITSANDBYTES_VERSION_ARM64=0.42.0 TIMM_VERSION=>=1.0.17 RUNAI_MODEL_STREAMER_VERSION=>=0.15.3 /bin/sh -c if [ "$TARGETPLATFORM" = "linux/arm64" ]; then         BITSANDBYTES_VERSION="${BITSANDBYTES_VERSION_ARM64}";     else         BITSANDBYTES_VERSION="${BITSANDBYTES_VERSION_X86}";     fi;     uv pip install --system accelerate hf_transfer modelscope         "bitsandbytes>=${BITSANDBYTES_VERSION}" "timm${TIMM_VERSION}" "runai-model-streamer[s3,gcs]${RUNAI_MODEL_STREAMER_VERSION}" # buildkit
                        
# 2026-02-05 02:53:56  0.00B Define a build argument
ARG RUNAI_MODEL_STREAMER_VERSION=>=0.15.3

# 2026-02-05 02:53:56  0.00B Define a build argument
ARG TIMM_VERSION=>=1.0.17

# 2026-02-05 02:53:56  0.00B Define a build argument
ARG BITSANDBYTES_VERSION_ARM64=0.42.0

# 2026-02-05 02:53:56  0.00B Define a build argument
ARG BITSANDBYTES_VERSION_X86=0.46.1
                        
# 2026-02-05 02:53:56  2.43MB Run a command and create a new image layer
RUN |10 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 GDRCOPY_CUDA_VERSION=12.8 GDRCOPY_OS_VERSION=Ubuntu22_04 TARGETPLATFORM=linux/amd64 /bin/sh -c set -eux;     case "${TARGETPLATFORM}" in       linux/arm64) UUARCH="aarch64" ;;       linux/amd64) UUARCH="x64" ;;       *) echo "Unsupported TARGETPLATFORM: ${TARGETPLATFORM}" >&2; exit 1 ;;     esac;     /tmp/install_gdrcopy.sh "${GDRCOPY_OS_VERSION}" "${GDRCOPY_CUDA_VERSION}" "${UUARCH}" &&     rm /tmp/install_gdrcopy.sh # buildkit
                        
# 2026-02-05 02:53:39  1.44KB Copy new files or directories into the container
COPY tools/install_gdrcopy.sh /tmp/install_gdrcopy.sh # buildkit

# 2026-02-05 02:53:39  0.00B Define a build argument
ARG TARGETPLATFORM

# 2026-02-05 02:53:39  0.00B Define a build argument
ARG GDRCOPY_OS_VERSION=Ubuntu22_04

# 2026-02-05 02:53:39  0.00B Define a build argument
ARG GDRCOPY_CUDA_VERSION=12.8
                        
# 2026-02-05 02:53:39  7.80GB Run a command and create a new image layer
RUN |7 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl FLASHINFER_VERSION=0.6.1 /bin/sh -c uv pip install --system flashinfer-cubin==${FLASHINFER_VERSION}     && uv pip install --system flashinfer-jit-cache==${FLASHINFER_VERSION}         --extra-index-url https://flashinfer.ai/whl/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')     && flashinfer show-config # buildkit
                        
# 2026-02-05 02:50:55  0.00B Define a build argument
ARG FLASHINFER_VERSION=0.6.1
                        
# 2026-02-05 02:50:55  6.18GB Run a command and create a new image layer
RUN |6 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl /bin/sh -c uv pip install --system -r /tmp/requirements-cuda.txt         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') &&     rm /tmp/requirements-cuda.txt /tmp/common.txt # buildkit
                        
# 2026-02-05 02:50:00  515.00B Copy new files or directories into the container
COPY requirements/cuda.txt /tmp/requirements-cuda.txt # buildkit

# 2026-02-05 02:50:00  2.70KB Copy new files or directories into the container
COPY requirements/common.txt /tmp/common.txt # buildkit

# 2026-02-05 02:50:00  0.00B Define a build argument
ARG PYTORCH_CUDA_INDEX_BASE_URL
                        
# 2026-02-05 02:50:00  49.58KB Run a command and create a new image layer
RUN |5 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c echo "/usr/local/cuda-$(echo "$CUDA_VERSION" | cut -d. -f1,2)/compat/" > /etc/ld.so.conf.d/00-cuda-compat.conf && ldconfig # buildkit
                        
# 2026-02-05 02:49:58  0.00B Set environment variable UV_LINK_MODE
ENV UV_LINK_MODE=copy

# 2026-02-05 02:49:58  0.00B Set environment variable UV_INDEX_STRATEGY
ENV UV_INDEX_STRATEGY=unsafe-best-match

# 2026-02-05 02:49:58  0.00B Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500
                        
# 2026-02-05 02:49:58  78.36MB Run a command and create a new image layer
RUN |5 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c python3 -m pip install uv # buildkit
                        
# 2026-02-05 02:49:48  2.20GB Run a command and create a new image layer
RUN |5 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c CUDA_VERSION_DASH=$(echo $CUDA_VERSION | cut -d. -f1,2 | tr '.' '-') &&     apt-get update -y &&     apt-get install -y --no-install-recommends         cuda-nvcc-${CUDA_VERSION_DASH}         cuda-cudart-${CUDA_VERSION_DASH}         cuda-nvrtc-${CUDA_VERSION_DASH}         cuda-cuobjdump-${CUDA_VERSION_DASH}         libcurand-dev-${CUDA_VERSION_DASH}         libcublas-${CUDA_VERSION_DASH}         libnccl-dev &&     rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2026-02-05 02:48:35  653.84MB Run a command and create a new image layer
RUN |5 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c echo 'tzdata tzdata/Areas select America' | debconf-set-selections     && echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections     && apt-get update -y     && apt-get install -y --no-install-recommends         software-properties-common         curl         sudo         python3-pip         ffmpeg         libsm6         libxext6         libgl1     && if [ ! -z ${DEADSNAKES_MIRROR_URL} ] ; then         if [ ! -z "${DEADSNAKES_GPGKEY_URL}" ] ; then             mkdir -p -m 0755 /etc/apt/keyrings ;             curl -L ${DEADSNAKES_GPGKEY_URL} | gpg --dearmor > /etc/apt/keyrings/deadsnakes.gpg ;             sudo chmod 644 /etc/apt/keyrings/deadsnakes.gpg ;             echo "deb [signed-by=/etc/apt/keyrings/deadsnakes.gpg] ${DEADSNAKES_MIRROR_URL} $(lsb_release -cs) main" > /etc/apt/sources.list.d/deadsnakes.list ;         fi ;     else         for i in 1 2 3; do             add-apt-repository -y ppa:deadsnakes/ppa && break ||             { echo "Attempt $i failed, retrying in 5s..."; sleep 5; };         done ;     fi     && apt-get update -y     && apt-get install -y --no-install-recommends         python${PYTHON_VERSION}         python${PYTHON_VERSION}-dev         python${PYTHON_VERSION}-venv         libibverbs-dev     && rm -rf /var/lib/apt/lists/*     && update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1     && update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION}     && ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config     && curl -sS ${GET_PIP_URL} | python${PYTHON_VERSION}     && python3 --version && python3 -m pip --version # buildkit
                        
# 2026-02-05 02:45:31  136.00B Run a command and create a new image layer
RUN |5 CUDA_VERSION=13.0.1 PYTHON_VERSION=3.12 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/sh -c PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') &&     echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment # buildkit
                        
# 2026-02-05 02:45:05  0.00B Set working directory to /vllm-workspace
WORKDIR /vllm-workspace

# 2026-02-05 02:45:05  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive

# 2026-02-05 02:45:05  0.00B Define a build argument
ARG GET_PIP_URL

# 2026-02-05 02:45:05  0.00B Define a build argument
ARG DEADSNAKES_GPGKEY_URL

# 2026-02-05 02:45:05  0.00B Define a build argument
ARG DEADSNAKES_MIRROR_URL

# 2026-02-05 02:45:05  0.00B Define a build argument
ARG PYTHON_VERSION

# 2026-02-05 02:45:05  0.00B Define a build argument
ARG CUDA_VERSION
                        
# 2025-09-09 01:23:07  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2025-09-09 01:23:07  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2025-09-09 01:23:07  17.29KB Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit

# 2025-09-09 01:23:07  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64

# 2025-09-09 01:23:07  0.00B Set environment variable PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2025-09-09 01:23:07  22.00B Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2025-09-09 01:23:07  322.88MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-13-0=${NV_CUDA_CUDART_VERSION}     cuda-compat-13-0     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-09-09 01:23:07  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=13.0.1
                        
# 2025-09-09 01:23:07  10.60MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-09-09 01:23:07  0.00B Add a metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2025-09-09 01:23:07  0.00B Define a build argument
ARG TARGETARCH

# 2025-09-09 01:23:07  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=13.0.88-1
                        
# 2025-09-09 01:23:07  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=13.0 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 brand=unknown,driver>=570,driver<571 brand=grid,driver>=570,driver<571 brand=tesla,driver>=570,driver<571 brand=nvidia,driver>=570,driver<571 brand=quadro,driver>=570,driver<571 brand=quadrortx,driver>=570,driver<571 brand=nvidiartx,driver>=570,driver<571 brand=vapps,driver>=570,driver<571 brand=vpc,driver>=570,driver<571 brand=vcs,driver>=570,driver<571 brand=vws,driver>=570,driver<571 brand=cloudgaming,driver>=570,driver<571 brand=unknown,driver>=575,driver<576 brand=grid,driver>=575,driver<576 brand=tesla,driver>=575,driver<576 brand=nvidia,driver>=575,driver<576 brand=quadro,driver>=575,driver<576 brand=quadrortx,driver>=575,driver<576 brand=nvidiartx,driver>=575,driver<576 brand=vapps,driver>=575,driver<576 brand=vpc,driver>=575,driver<576 brand=vcs,driver>=575,driver<576 brand=vws,driver>=575,driver<576 brand=cloudgaming,driver>=575,driver<576
                        
# 2025-09-09 01:23:07  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2025-08-20 01:17:10  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-08-20 01:17:10  77.87MB 
/bin/sh -c #(nop) ADD file:9303cc1f788d2a9a8f909b154339f7c637b2a53c75c0e7f3da62eb1fefe371b1 in / 
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image info
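
The JSON below appears to be docker image inspect output; to reproduce something equivalent locally after pulling the image (sketch):

docker image inspect docker.io/vllm/vllm-openai:v0.15.1-cu130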

{
    "Id": "sha256:50c07525d9711ea838079e13c7745cacbf2acccc5940de9ccf93ba10e2324996",
    "RepoTags": [
        "vllm/vllm-openai:v0.15.1-cu130",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.15.1-cu130"
    ],
    "RepoDigests": [
        "vllm/vllm-openai@sha256:6daf4c3bab1bfb0180069bd8f2a035c1c981a424584f5b0480caa9efd9933f72",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai@sha256:2189e50f5b917ffd3138a6b0aefbc8894c460c8f03a074b59574fa2a0c7d9422"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2026-02-04T19:01:47.758885511Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=13.0 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551 brand=unknown,driver\u003e=565,driver\u003c566 brand=grid,driver\u003e=565,driver\u003c566 brand=tesla,driver\u003e=565,driver\u003c566 brand=nvidia,driver\u003e=565,driver\u003c566 brand=quadro,driver\u003e=565,driver\u003c566 brand=quadrortx,driver\u003e=565,driver\u003c566 brand=nvidiartx,driver\u003e=565,driver\u003c566 brand=vapps,driver\u003e=565,driver\u003c566 brand=vpc,driver\u003e=565,driver\u003c566 brand=vcs,driver\u003e=565,driver\u003c566 brand=vws,driver\u003e=565,driver\u003c566 brand=cloudgaming,driver\u003e=565,driver\u003c566 brand=unknown,driver\u003e=570,driver\u003c571 brand=grid,driver\u003e=570,driver\u003c571 brand=tesla,driver\u003e=570,driver\u003c571 brand=nvidia,driver\u003e=570,driver\u003c571 brand=quadro,driver\u003e=570,driver\u003c571 brand=quadrortx,driver\u003e=570,driver\u003c571 brand=nvidiartx,driver\u003e=570,driver\u003c571 brand=vapps,driver\u003e=570,driver\u003c571 brand=vpc,driver\u003e=570,driver\u003c571 brand=vcs,driver\u003e=570,driver\u003c571 brand=vws,driver\u003e=570,driver\u003c571 brand=cloudgaming,driver\u003e=570,driver\u003c571 brand=unknown,driver\u003e=575,driver\u003c576 brand=grid,driver\u003e=575,driver\u003c576 brand=tesla,driver\u003e=575,driver\u003c576 brand=nvidia,driver\u003e=575,driver\u003c576 brand=quadro,driver\u003e=575,driver\u003c576 brand=quadrortx,driver\u003e=575,driver\u003c576 brand=nvidiartx,driver\u003e=575,driver\u003c576 brand=vapps,driver\u003e=575,driver\u003c576 brand=vpc,driver\u003e=575,driver\u003c576 brand=vcs,driver\u003e=575,driver\u003c576 brand=vws,driver\u003e=575,driver\u003c576 brand=cloudgaming,driver\u003e=575,driver\u003c576",
            "NV_CUDA_CUDART_VERSION=13.0.88-1",
            "CUDA_VERSION=13.0.1",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "DEBIAN_FRONTEND=noninteractive",
            "UV_HTTP_TIMEOUT=500",
            "UV_INDEX_STRATEGY=unsafe-best-match",
            "UV_LINK_MODE=copy",
            "TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.9 9.0 10.0 12.0",
            "VLLM_USAGE_SOURCE=production-docker-image"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/vllm-workspace",
        "Entrypoint": [
            "vllm",
            "serve"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 18772708901,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/708b5c50ac1722abd0d16dff9e47c6595c874b070817d30cdf98de3107693af3/diff:/var/lib/docker/overlay2/3d8d1b207ac3d3465fb21d3d6cfa85a425bf355eb87e0c2f0e41de3cfc4b97a5/diff:/var/lib/docker/overlay2/98207af586aa2b287368b41d8a422f4c4d8d1854647036109d9d4be53baf72da/diff:/var/lib/docker/overlay2/fc535f1b5d8ccaec27cddf5d0c9130bd058fc67bc690292b8f29fe20e8f1808f/diff:/var/lib/docker/overlay2/d5ca4fe706f0f9a4aee2d2c44106348554b6c7a97834171c34592a8e1a7618c3/diff:/var/lib/docker/overlay2/cde0be5289f508f4e68a6e88ad7b55c4ae1cd5bb4ad78ac697059830191bac45/diff:/var/lib/docker/overlay2/c0f2a2b18b3f51a888a5f9d1b52c06058e6016b5077291fd1d0dcbace2551a6d/diff:/var/lib/docker/overlay2/80a922ecf20f5bbd3662701ae5d169b4400b133a7945f5bc73138ca51e26a2da/diff:/var/lib/docker/overlay2/fcaa80324251b55943ac094f15e0944b614cb66d165fcc2e857b90d3cd147bc4/diff:/var/lib/docker/overlay2/297c663060c63b552f29067488ffe619ef8081d0830d707d6adf989419dbc07f/diff:/var/lib/docker/overlay2/8398c44da5857ee760f6eab596ca97816e6182b34236177bdcde5c3cb5ce1680/diff:/var/lib/docker/overlay2/d81c73fc0b2fd6fd7d7ab2f6ac865860be1ee4a79f7b65464ded097a7604a38e/diff:/var/lib/docker/overlay2/1fb28396d61664de3c3736004dc2debe9686aee3460727675aabba6d3b75a3ce/diff:/var/lib/docker/overlay2/2d80a3df7ec7a8344ef108d3367d5b46719e8c7fce5e58633bf47be0aeb5053a/diff:/var/lib/docker/overlay2/398d66d1cc9274148eaf3a9d5f515ed5263ad69f526fc4e256816948df67e30d/diff:/var/lib/docker/overlay2/c99a9626a91d552020febebecdbdc4d34783b8babc480f883e3307cb6c65e8bd/diff:/var/lib/docker/overlay2/98f5e0eabacdcca24f4d7b929358ba32070f603ba29f45b229059c3fb792d572/diff:/var/lib/docker/overlay2/9f36bce0cc91048b9795db65211b6ee64e7b544dc31b692490d8c22855d6cdc3/diff:/var/lib/docker/overlay2/99ee2fe5c55accd34412ee0bdebb5b26f5e0e82f34b11e892c79208916aa0380/diff:/var/lib/docker/overlay2/0f7034de8fb02bc46db4e9a3510f466e80ac35bf5839cbdb26ae225bf819861e/diff:/var/lib/docker/overlay2/85ef3dc97267e73dc6445ef1e5529f6269d6a0336532dcc2d8c0fb67ac4d8150/diff:/var/lib/docker/overlay2/4224ff2aefb7d19a6313e077f906e3089608aa481d6f84e4e9d4446fad1dcb0a/diff:/var/lib/docker/overlay2/ea8fdb8158b2e7815c43630a7d200b9a2d2faaa85fe039c119f3608938822c8a/diff:/var/lib/docker/overlay2/0f0908da131b163dd4f3865f81f12f939b0822d01f2893f742bdeaa1d05498aa/diff:/var/lib/docker/overlay2/0f90c9b721ff4ead60b31f1229a121abec6670e74f7fd51555480823771bc574/diff:/var/lib/docker/overlay2/ff0e7e77be1e985a85b03d79fb493150fb6a244a5d19436b72328fc5c72d59d7/diff",
            "MergedDir": "/var/lib/docker/overlay2/61343fe1c3f371a783d24dbbc8f7f14a732a8a139042e227b86ce55b09793854/merged",
            "UpperDir": "/var/lib/docker/overlay2/61343fe1c3f371a783d24dbbc8f7f14a732a8a139042e227b86ce55b09793854/diff",
            "WorkDir": "/var/lib/docker/overlay2/61343fe1c3f371a783d24dbbc8f7f14a732a8a139042e227b86ce55b09793854/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:dc6eb6dad5f9e332f00af553440e857b1467db1be43dd910cdb6830ba0898d50",
            "sha256:e2a50251472b1dae8bd3e8f3c7610b60293462494a6089ce12ec200d0532390e",
            "sha256:3e1159f70fa8cd9cf23103a830dd3fa91fd5dc5da0c209c43bf4846576cec32e",
            "sha256:7482e185693edb89761eb19d8f1a24e01c178ead3e5cac251253943f6c1d113e",
            "sha256:495bbaf4cc5ca53d8286fe305d2fc2fd3ecb2c8791cfdd4473e984d2eb0f3ea1",
            "sha256:2a6b55ab6440eb44262c57bd518f61ce9eb3d59c58cb0dacb22b63a0e34637ff",
            "sha256:b2eeeb8f4921979d69241b84356f28f9c910915643989f358eb9efadee1c429f",
            "sha256:322d6be48ca8866f72752d15654e77b61acc87f7ba6223776756cfaede55fe05",
            "sha256:7f00cc103fe28ed41b2182cc2e3947c8c25fa677fd94b5a829c4af1fc1dd5bff",
            "sha256:0ea979732ecbd22bbbca5c86546c35e093c2033f22c9da0bb57d0dd7e74778e3",
            "sha256:bfd7878b5c9ac7b5991ec9f9d1221575cdd42ee4685053023ffbb168b25e8ffa",
            "sha256:9e4619126716ed14bb8add266d110e30eb3a07a6c1d20de1e61f18eca534dc59",
            "sha256:c544e56872ee8b050a9aed03131f5fb9dd5cc3a3c8dc6e0dd236663dc4e80d40",
            "sha256:aa8e44231d396714c8ae64ed6ab7643096222488f1d694c1d8c2417fce459544",
            "sha256:0179e348a203845b0ba7be4de4017ba090a9f803b0bb899bbf21e50788f6b83a",
            "sha256:f6df134b48d6e9e85045fd3d192efa42864398c5c3eac10b065509be91d725d9",
            "sha256:4ce7a3d5ff2901fde797156e27242c3a57bc64a7db2d24e431b3e185abf1fd53",
            "sha256:3b9d6731caffc29c97dc52d75128df1fdf29e46042987ab0b06d94252e9247f4",
            "sha256:0255311776c3012fbdd830b254e92168e5e02e85e74cb80de510075e168ef489",
            "sha256:2af673f1000713561782029aebe26dd91d8c6ed3482d5cbd7b8fb850f4df53ab",
            "sha256:684b7850299ca0efe3dadb9d2857b5be0f8d5a8ec16e48de26aebdbbff8e8b38",
            "sha256:331235d73c0f662d5c661dc2b426c172324318de41d6c12f3cdbe40b3b5633f1",
            "sha256:e476fff60987a1d67f63b82a4d21e5ec9f344ebb985f16099d9617131c62c7d2",
            "sha256:c7b5f17e64a2920dce64387bc1fb168faf1103c3c36a523ca04a1eb150fef1e0",
            "sha256:7dbdccd801e03ec60ef3ae18e4ab1747ba76e431c7b45879cf6eb826bba0eb9d",
            "sha256:ca6095cb985ed6e0200ee8babfbb9b19719cc7189e4338fcd06ed7d3694bc9a7",
            "sha256:b6ffe0b7ee80ad98afc0c3b7c0c753c53c3862853977614cb687ca66f9a9f51d"
        ]
    },
    "Metadata": {
        "LastTagTime": "2026-02-07T00:19:41.21363666+08:00"
    }
}

More versions

Image  /  platform  /  source  /  size  /  synced at  /  views

docker.io/vllm/vllm-openai:v0.5.4  linux/amd64  docker.io  9.90GB  2024-09-07 06:20  2138 views
docker.io/vllm/vllm-openai:v0.6.0  linux/amd64  docker.io  9.72GB  2024-09-11 01:51  1384 views
docker.io/vllm/vllm-openai:v0.6.1.post2  linux/amd64  docker.io  9.81GB  2024-09-24 01:43  1009 views
docker.io/vllm/vllm-openai:latest  linux/amd64  docker.io  10.24GB  2024-10-11 00:43  5724 views
docker.io/vllm/vllm-openai:v0.6.4.post1  linux/amd64  docker.io  10.64GB  2024-11-19 00:42  1027 views
docker.io/vllm/vllm-openai:v0.6.4  linux/amd64  docker.io  10.64GB  2024-12-11 02:08  851 views
docker.io/vllm/vllm-openai:v0.6.3  linux/amd64  docker.io  10.43GB  2024-12-12 02:41  982 views
docker.io/vllm/vllm-openai:v0.6.6  linux/amd64  docker.io  10.23GB  2025-01-04 00:37  1401 views
docker.io/vllm/vllm-openai:v0.6.6.post1  linux/amd64  docker.io  10.23GB  2025-01-24 00:21  971 views
docker.io/vllm/vllm-openai:v0.7.1  linux/amd64  docker.io  16.53GB  2025-02-08 02:05  1071 views
docker.io/vllm/vllm-openai:v0.7.2  linux/amd64  docker.io  16.53GB  2025-02-09 00:28  2641 views
docker.io/vllm/vllm-openai:v0.7.3  linux/amd64  docker.io  16.43GB  2025-02-24 00:50  3374 views
docker.io/vllm/vllm-openai:v0.8.0  linux/amd64  docker.io  16.62GB  2025-03-20 00:23  1311 views
docker.io/vllm/vllm-openai:v0.8.1  linux/amd64  docker.io  16.62GB  2025-03-21 00:28  1100 views
docker.io/vllm/vllm-openai:v0.8.2  linux/amd64  docker.io  16.92GB  2025-03-27 01:12  1317 views
docker.io/vllm/vllm-openai:v0.8.3  linux/amd64  docker.io  17.13GB  2025-04-08 00:58  1361 views
docker.io/vllm/vllm-openai:v0.8.4  linux/amd64  docker.io  17.16GB  2025-04-17 01:16  1736 views
docker.io/vllm/vllm-openai:v0.8.5  linux/amd64  docker.io  17.30GB  2025-04-30 02:45  3111 views
docker.io/vllm/vllm-openai:v0.8.5.post1  linux/amd64  docker.io  17.30GB  2025-05-07 02:06  3030 views
docker.io/vllm/vllm-openai:v0.9.0.1  linux/amd64  docker.io  20.81GB  2025-06-05 01:12  1908 views
docker.io/vllm/vllm-openai:v0.9.1  linux/amd64  docker.io  20.85GB  2025-06-12 01:29  2695 views
docker.io/vllm/vllm-openai:v0.9.2  linux/amd64  docker.io  20.76GB  2025-07-09 03:00  5970 views
docker.io/vllm/vllm-openai:v0.10.0  linux/amd64  docker.io  26.13GB  2025-07-26 03:15  1623 views
docker.io/vllm/vllm-openai:gptoss  linux/amd64  docker.io  33.86GB  2025-08-07 01:52  1187 views
docker.io/vllm/vllm-openai:v0.10.1  linux/amd64  docker.io  20.25GB  2025-08-20 03:05  1183 views
docker.io/vllm/vllm-openai:v0.10.1.1  linux/amd64  docker.io  20.26GB  2025-08-23 01:43  1815 views
docker.io/vllm/vllm-openai:v0.10.2  linux/amd64  docker.io  22.49GB  2025-09-16 03:40  1382 views
docker.io/vllm/vllm-openai:v0.2.7  linux/amd64  docker.io  6.34GB  2025-10-01 01:07  357 views
docker.io/vllm/vllm-openai:v0.11.0-x86_64  linux/amd64  docker.io  25.86GB  2025-10-09 02:14  1675 views
docker.io/vllm/vllm-openai:v0.10.2-x86_64  linux/amd64  docker.io  22.49GB  2025-10-09 02:22  466 views
docker.io/vllm/vllm-openai:v0.11.0  linux/amd64  docker.io  25.86GB  2025-10-09 11:24  1773 views
docker.io/vllm/vllm-openai:v0.11.0  linux/arm64  docker.io  24.17GB  2025-10-30 00:47  781 views
docker.io/vllm/vllm-openai:v0.3.3  linux/amd64  docker.io  9.13GB  2025-11-18 01:01  260 views
docker.io/vllm/vllm-openai:v0.11.1  linux/amd64  docker.io  28.72GB  2025-11-21 01:03  606 views
docker.io/vllm/vllm-openai:v0.11.2  linux/amd64  docker.io  28.82GB  2025-11-22 00:46  1027 views
docker.io/vllm/vllm-openai:v0.11.1  linux/arm64  docker.io  26.54GB  2025-11-22 01:23  324 views
docker.io/vllm/vllm-openai:v0.4.0  linux/amd64  docker.io  9.88GB  2025-11-22 01:58  340 views
docker.io/vllm/vllm-openai:v0.11.2  linux/arm64  docker.io  26.54GB  2025-11-22 04:06  494 views
docker.io/vllm/vllm-openai:nightly  linux/amd64  docker.io  18.74GB  2025-12-03 02:43  518 views
docker.io/vllm/vllm-openai:v0.12.0-aarch64  linux/arm64  docker.io  17.89GB  2025-12-05 03:12  415 views
docker.io/vllm/vllm-openai:v0.12.0  linux/amd64  docker.io  19.47GB  2025-12-05 03:59  1479 views
docker.io/vllm/vllm-openai:v0.13.0  linux/amd64  docker.io  19.51GB  2026-01-22 01:41  207 views
docker.io/vllm/vllm-openai:v0.14.0  linux/amd64  docker.io  19.66GB  2026-01-22 03:16  368 views
docker.io/vllm/vllm-openai:v0.14.1  linux/amd64  docker.io  19.69GB  2026-01-27 01:52  327 views
docker.io/vllm/vllm-openai:v0.15.0  linux/amd64  docker.io  20.13GB  2026-01-31 00:51  350 views
docker.io/vllm/vllm-openai:v0.15.1  linux/amd64  docker.io  20.14GB  2026-02-06 01:14  47 views
docker.io/vllm/vllm-openai:v0.15.1-cu130  linux/amd64  docker.io  18.77GB  2026-02-07 00:39  9 views