docker.io/vllm/vllm-openai:v0.10.0 linux/amd64

docker.io/vllm/vllm-openai:v0.10.0 - China mirror download source
Image description:

vllm/vllm-openai

vLLM's OpenAI-compatible API server image: it serves large language models behind an OpenAI-style REST API (completions, chat completions, embeddings and related endpoints).

Source image       docker.io/vllm/vllm-openai:v0.10.0
China mirror       swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0
Image ID           sha256:1a8b81250b916b03fa4b31460f2963634c6551c73b5aacb21f334f12ae9ec703
Image tag          v0.10.0
Size               26.13GB
Registry           docker.io
CMD                (none)
Entrypoint         python3 -m vllm.entrypoints.openai.api_server
Working directory  /vllm-workspace
OS/Platform        linux/amd64
Image created      2025-07-24T23:35:53.673317058Z
Synced             2025-07-26 03:15
Updated            2025-07-27 04:35
Environment variables
PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.8 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 NV_CUDA_CUDART_VERSION=12.8.90-1 CUDA_VERSION=12.8.1 LD_LIBRARY_PATH=/usr/local/cuda/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_CUDA_LIB_VERSION=12.8.1-1 NV_NVTX_VERSION=12.8.90-1 NV_LIBNPP_VERSION=12.3.3.100-1 NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1 NV_LIBCUSPARSE_VERSION=12.5.8.93-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8 NV_LIBCUBLAS_VERSION=12.8.4.1-1 NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1 NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1 NCCL_VERSION=2.25.1-1 NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8 NVIDIA_PRODUCT_NAME=CUDA NV_CUDA_CUDART_DEV_VERSION=12.8.90-1 NV_NVML_DEV_VERSION=12.8.90-1 NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1 NV_LIBNPP_DEV_VERSION=12.3.3.100-1 NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1 NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1 NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8 NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1 NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1 NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1 NV_NVPROF_VERSION=12.8.90-1 NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8 
LIBRARY_PATH=/usr/local/cuda/lib64/stubs DEBIAN_FRONTEND=noninteractive UV_HTTP_TIMEOUT=500 UV_INDEX_STRATEGY=unsafe-best-match VLLM_USAGE_SOURCE=production-docker-image
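
These baked-in variables can be read back from a local copy of the image with docker image inspect; a minimal sketch, assuming the mirror tag has already been pulled:

# Print the image's built-in environment variables, one per line.
docker image inspect \
  --format '{{range .Config.Env}}{{println .}}{{end}}' \
  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0
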
Image labels
maintainer: NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0  docker.io/vllm/vllm-openai:v0.10.0
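
After the pull and retag, the server can be started directly from the image; a minimal sketch, assuming an NVIDIA GPU host with the NVIDIA Container Toolkit installed (the model name is only an illustrative example, and any arguments after the image name are appended to the python3 -m vllm.entrypoints.openai.api_server entrypoint):

# Start the OpenAI-compatible API server (it listens on port 8000 by default).
docker run -d --name vllm --gpus all \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  docker.io/vllm/vllm-openai:v0.10.0 \
  --model Qwen/Qwen2.5-1.5B-Instruct

# Quick smoke test once the model has loaded.
curl -s http://localhost:8000/v1/models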

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0  docker.io/vllm/vllm-openai:v0.10.0
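
On Kubernetes nodes the kubelet resolves images from containerd's k8s.io namespace, so pulls intended for Pods are usually done there; a minimal sketch:

# Pull and retag inside the k8s.io namespace so the kubelet/CRI can see the image.
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0 docker.io/vllm/vllm-openai:v0.10.0
ctr -n k8s.io images ls | grep vllm-openai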

Shell quick replacement command

sed -i 's#vllm/vllm-openai:v0.10.0#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0#' deployment.yaml
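
Note that the pattern above matches anywhere in the line, so a reference already written as docker.io/vllm/vllm-openai:v0.10.0 would end up with a stray docker.io/ prefix in front of the mirror path. A slightly more defensive sketch that handles both spellings across a directory of manifests and then lists what remains (the manifests/ path is only an example):

# Rewrite the docker.io-prefixed form first so the bare-form rule cannot double-prefix it.
find manifests/ -name '*.yaml' -print0 | xargs -0 sed -i \
  -e 's#docker\.io/vllm/vllm-openai:v0\.10\.0#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0#g' \
  -e 's#\([^/]\)vllm/vllm-openai:v0\.10\.0#\1swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0#g'
grep -rn 'image:.*vllm-openai' manifests/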

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0  docker.io/vllm/vllm-openai:v0.10.0'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0  docker.io/vllm/vllm-openai:v0.10.0'
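
After distribution it is worth confirming that every node actually holds the retagged reference; a minimal sketch against the same k8s inventory group (use whichever line matches the node's runtime):

# Docker nodes: the retag should show up as a local image.
ansible k8s -m shell -a 'docker images vllm/vllm-openai:v0.10.0'
# Containerd nodes: the retag should be listed under the docker.io reference.
ansible k8s -m shell -a 'ctr images ls name==docker.io/vllm/vllm-openai:v0.10.0'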

Image build history


# 2025-07-25 07:35:53  0.00B Configure the command run when the container starts
ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]

# 2025-07-25 07:35:53  0.00B Set environment variable VLLM_USAGE_SOURCE
ENV VLLM_USAGE_SOURCE=production-docker-image
                        
# 2025-07-25 07:35:53  5.21GB Run a command and create a new image layer
RUN |6 TARGETPLATFORM=linux/amd64 INSTALL_KV_CONNECTORS=true PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= /bin/bash -c if [ "$INSTALL_KV_CONNECTORS" = "true" ]; then         uv pip install --system -r requirements/kv_connectors.txt;     fi;     if [ "$TARGETPLATFORM" = "linux/arm64" ]; then         BITSANDBYTES_VERSION="0.42.0";     else         BITSANDBYTES_VERSION="0.46.1";     fi;     uv pip install --system accelerate hf_transfer modelscope "bitsandbytes>=${BITSANDBYTES_VERSION}" 'timm==0.9.10' boto3 runai-model-streamer runai-model-streamer[s3] # buildkit
                        
# 2025-07-25 07:34:23  7.00B Copy new files or directories into the container
COPY requirements/kv_connectors.txt requirements/kv_connectors.txt # buildkit

# 2025-07-25 07:34:23  0.00B Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500

# 2025-07-25 07:34:23  0.00B Define build arguments
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2025-07-25 07:34:23  0.00B Define build arguments
ARG PIP_INDEX_URL UV_INDEX_URL

# 2025-07-25 07:34:23  0.00B Define build arguments
ARG INSTALL_KV_CONNECTORS=false

# 2025-07-25 07:34:23  0.00B Define build arguments
ARG TARGETPLATFORM
                        
# 2025-07-25 07:34:23  68.41MB Run a command and create a new image layer
RUN |17 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git FLASHINFER_GIT_REF=v0.2.8rc1 /bin/bash -c uv pip install --system -r requirements/build.txt         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') # buildkit
                        
# 2025-07-25 07:34:21  159.00B Copy new files or directories into the container
COPY requirements/build.txt requirements/build.txt # buildkit
                        
# 2025-07-25 07:34:21  0.00B Run a command and create a new image layer
RUN |17 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git FLASHINFER_GIT_REF=v0.2.8rc1 /bin/bash -c . /etc/environment && uv pip list # buildkit
                        
# 2025-07-25 07:34:21  28.53KB Copy new files or directories into the container
COPY ./vllm/collect_env.py . # buildkit

# 2025-07-25 07:34:21  583.14KB Copy new files or directories into the container
COPY benchmarks benchmarks # buildkit

# 2025-07-25 07:34:21  690.41KB Copy new files or directories into the container
COPY examples examples # buildkit
                        
# 2025-07-25 07:34:21  1.23GB Run a command and create a new image layer
RUN |17 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git FLASHINFER_GIT_REF=v0.2.8rc1 /bin/bash -c bash - <<'BASH'
  . /etc/environment
    git clone --depth 1 --recursive --shallow-submodules \
        --branch ${FLASHINFER_GIT_REF} \
        ${FLASHINFER_GIT_REPO} flashinfer
    # Exclude CUDA arches for older versions (11.x and 12.0-12.7)
    # TODO: Update this to allow setting TORCH_CUDA_ARCH_LIST as a build arg.
    if [[ "${CUDA_VERSION}" == 11.* ]]; then
        FI_TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9"
    elif [[ "${CUDA_VERSION}" == 12.[0-7]* ]]; then
        FI_TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a"
    else
        # CUDA 12.8+ supports 10.0a and 12.0
        FI_TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a 10.0a 12.0"
    fi
    echo "🏗️  Building FlashInfer for arches: ${FI_TORCH_CUDA_ARCH_LIST}"
    # Needed to build AOT kernels
    pushd flashinfer
        TORCH_CUDA_ARCH_LIST="${FI_TORCH_CUDA_ARCH_LIST}" \
            python3 -m flashinfer.aot
        TORCH_CUDA_ARCH_LIST="${FI_TORCH_CUDA_ARCH_LIST}" \
            uv pip install --system --no-build-isolation .
    popd
    rm -rf flashinfer
BASH # buildkit
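
The layer above clones FlashInfer at v0.2.8rc1, builds its AOT kernels for the selected CUDA arch list, installs the wheel and removes the source tree. A quick way to confirm the packages landed in the final image is a throwaway run with the entrypoint overridden (a minimal sketch; no GPU is needed for this check):

# List the FlashInfer and PyTorch packages baked into the image.
docker run --rm --entrypoint python3 docker.io/vllm/vllm-openai:v0.10.0 -m pip list | grep -Ei 'flashinfer|torch'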
                        
# 2025-07-25 07:16:57  0.00B Define build arguments
ARG FLASHINFER_GIT_REF=v0.2.8rc1

# 2025-07-25 07:16:57  0.00B Define build arguments
ARG FLASHINFER_GIT_REPO=https://github.com/flashinfer-ai/flashinfer.git
                        
# 2025-07-25 07:16:57  9.39GB Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c uv pip install --system dist/*.whl --verbose         --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') # buildkit
                        
# 2025-07-25 06:56:10  0.00B Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c if [ "$TARGETPLATFORM" = "linux/arm64" ]; then         uv pip install --system             --index-url ${PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')             "torch==2.8.0.dev20250318+cu128" "torchvision==0.22.0.dev20250319" ;         uv pip install --system             --index-url ${PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')             --pre pytorch_triton==3.3.0+gitab727c40 ;     fi # buildkit
                        
# 2025-07-25 06:56:09  57.59KB Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c ldconfig /usr/local/cuda-$(echo $CUDA_VERSION | cut -d. -f1,2)/compat/ # buildkit
                        
# 2025-07-25 06:56:07  0.00B Set environment variable UV_INDEX_STRATEGY
ENV UV_INDEX_STRATEGY=unsafe-best-match

# 2025-07-25 06:56:07  0.00B Set environment variable UV_HTTP_TIMEOUT
ENV UV_HTTP_TIMEOUT=500
                        
# 2025-07-25 06:56:07  65.21MB Run a command and create a new image layer
RUN |15 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py PIP_INDEX_URL= UV_INDEX_URL= PIP_EXTRA_INDEX_URL= UV_EXTRA_INDEX_URL= PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly PIP_KEYRING_PROVIDER=disabled UV_KEYRING_PROVIDER=disabled /bin/bash -c python3 -m pip install uv # buildkit
                        
# 2025-07-25 06:56:04  0.00B Define build arguments
ARG PIP_KEYRING_PROVIDER UV_KEYRING_PROVIDER

# 2025-07-25 06:56:04  0.00B Define build arguments
ARG PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL

# 2025-07-25 06:56:04  0.00B Define build arguments
ARG PYTORCH_CUDA_INDEX_BASE_URL

# 2025-07-25 06:56:04  0.00B Define build arguments
ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL

# 2025-07-25 06:56:04  0.00B Define build arguments
ARG PIP_INDEX_URL UV_INDEX_URL
                        
# 2025-07-25 06:56:04  829.88MB Run a command and create a new image layer
RUN |7 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/bash -c echo 'tzdata tzdata/Areas select America' | debconf-set-selections     && echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections     && apt-get update -y     && apt-get install -y ccache software-properties-common git curl wget sudo vim python3-pip     && apt-get install -y ffmpeg libsm6 libxext6 libgl1     && if [ ! -z ${DEADSNAKES_MIRROR_URL} ] ; then         if [ ! -z "${DEADSNAKES_GPGKEY_URL}" ] ; then             mkdir -p -m 0755 /etc/apt/keyrings ;             curl -L ${DEADSNAKES_GPGKEY_URL} | gpg --dearmor > /etc/apt/keyrings/deadsnakes.gpg ;             sudo chmod 644 /etc/apt/keyrings/deadsnakes.gpg ;             echo "deb [signed-by=/etc/apt/keyrings/deadsnakes.gpg] ${DEADSNAKES_MIRROR_URL} $(lsb_release -cs) main" > /etc/apt/sources.list.d/deadsnakes.list ;         fi ;     else         for i in 1 2 3; do             add-apt-repository -y ppa:deadsnakes/ppa && break ||             { echo "Attempt $i failed, retrying in 5s..."; sleep 5; };         done ;     fi     && apt-get update -y     && apt-get install -y python${PYTHON_VERSION} python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv libibverbs-dev     && update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1     && update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION}     && ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config     && curl -sS ${GET_PIP_URL} | python${PYTHON_VERSION}     && python3 --version && python3 -m pip --version # buildkit
                        
# 2025-07-25 06:53:01  136.00B Run a command and create a new image layer
RUN |7 CUDA_VERSION=12.8.1 PYTHON_VERSION=3.12 INSTALL_KV_CONNECTORS=true TARGETPLATFORM=linux/amd64 DEADSNAKES_MIRROR_URL= DEADSNAKES_GPGKEY_URL= GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py /bin/bash -c PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') &&     echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment # buildkit
                        
# 2025-07-25 06:53:01  0.00B Define build arguments
ARG GET_PIP_URL

# 2025-07-25 06:53:01  0.00B Define build arguments
ARG DEADSNAKES_GPGKEY_URL

# 2025-07-25 06:53:01  0.00B Define build arguments
ARG DEADSNAKES_MIRROR_URL

# 2025-07-25 06:53:01  0.00B Set the default shell
SHELL ["/bin/bash", "-c"]

# 2025-07-25 06:53:01  0.00B Define build arguments
ARG TARGETPLATFORM

# 2025-07-25 06:53:01  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive

# 2025-07-25 06:53:01  0.00B Set the working directory to /vllm-workspace
WORKDIR /vllm-workspace

# 2025-07-25 06:53:01  0.00B Define build arguments
ARG INSTALL_KV_CONNECTORS=false

# 2025-07-25 06:53:01  0.00B Define build arguments
ARG PYTHON_VERSION

# 2025-07-25 06:53:01  0.00B Define build arguments
ARG CUDA_VERSION

# 2025-03-11 06:36:52  0.00B Set environment variable LIBRARY_PATH
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs
                        
# 2025-03-11 06:36:52  389.48KB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_DEV_PACKAGE_NAME} ${NV_LIBNCCL_DEV_PACKAGE_NAME} # buildkit
                        
# 2025-03-11 06:36:52  5.94GB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-dev-12-8=${NV_CUDA_CUDART_DEV_VERSION}     cuda-command-line-tools-12-8=${NV_CUDA_LIB_VERSION}     cuda-minimal-build-12-8=${NV_CUDA_LIB_VERSION}     cuda-libraries-dev-12-8=${NV_CUDA_LIB_VERSION}     cuda-nvml-dev-12-8=${NV_NVML_DEV_VERSION}     ${NV_NVPROF_DEV_PACKAGE}     ${NV_LIBNPP_DEV_PACKAGE}     libcusparse-dev-12-8=${NV_LIBCUSPARSE_DEV_VERSION}     ${NV_LIBCUBLAS_DEV_PACKAGE}     ${NV_LIBNCCL_DEV_PACKAGE}     ${NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
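
This layer installs the CUDA 12.8 development packages (cuda-minimal-build, cuBLAS/NCCL dev headers, Nsight Compute), which is why the image also ships the nvcc compiler; a minimal sketch to read the toolkit version straight from the image (no GPU required):

# nvcc lives under /usr/local/cuda/bin, which is already on the image's PATH.
docker run --rm --entrypoint nvcc docker.io/vllm/vllm-openai:v0.10.0 --version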
                        
# 2025-03-11 06:36:52  0.00B Add a metadata label
LABEL maintainer="NVIDIA CORPORATION <cudatools@nvidia.com>"

# 2025-03-11 06:36:52  0.00B Define build arguments
ARG TARGETARCH

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNCCL_DEV_PACKAGE
ENV NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8

# 2025-03-11 06:36:52  0.00B Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.25.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNCCL_DEV_PACKAGE_VERSION
ENV NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNCCL_DEV_PACKAGE_NAME
ENV NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev

# 2025-03-11 06:36:52  0.00B Set environment variable NV_NVPROF_DEV_PACKAGE
ENV NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_NVPROF_VERSION
ENV NV_NVPROF_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE
ENV NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_NSIGHT_COMPUTE_VERSION
ENV NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUBLAS_DEV_PACKAGE
ENV NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUBLAS_DEV_PACKAGE_NAME
ENV NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUBLAS_DEV_VERSION
ENV NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNPP_DEV_PACKAGE
ENV NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBNPP_DEV_VERSION
ENV NV_LIBNPP_DEV_VERSION=12.3.3.100-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_LIBCUSPARSE_DEV_VERSION
ENV NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_NVML_DEV_VERSION
ENV NV_NVML_DEV_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_CUDART_DEV_VERSION
ENV NV_CUDA_CUDART_DEV_VERSION=12.8.90-1

# 2025-03-11 06:36:52  0.00B Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.8.1-1
                        
# 2025-03-11 06:24:31  0.00B Configure the command run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]

# 2025-03-11 06:24:31  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA

# 2025-03-11 06:24:31  2.53KB Copy new files or directories into the container
COPY nvidia_entrypoint.sh /opt/nvidia/ # buildkit

# 2025-03-11 06:24:31  3.06KB Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit

# 2025-03-11 06:24:31  263.00KB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME} # buildkit
                        
# 2025-03-11 06:24:31  3.11GB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-libraries-12-8=${NV_CUDA_LIB_VERSION}     ${NV_LIBNPP_PACKAGE}     cuda-nvtx-12-8=${NV_NVTX_VERSION}     libcusparse-12-8=${NV_LIBCUSPARSE_VERSION}     ${NV_LIBCUBLAS_PACKAGE}     ${NV_LIBNCCL_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:24:31  0.00B Add a metadata label
LABEL maintainer="NVIDIA CORPORATION <cudatools@nvidia.com>"

# 2025-03-11 06:24:31  0.00B Define build arguments
ARG TARGETARCH

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNCCL_PACKAGE
ENV NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8

# 2025-03-11 06:24:31  0.00B Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.25.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNCCL_PACKAGE_VERSION
ENV NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNCCL_PACKAGE_NAME
ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUBLAS_PACKAGE
ENV NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUBLAS_VERSION
ENV NV_LIBCUBLAS_VERSION=12.8.4.1-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUBLAS_PACKAGE_NAME
ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBCUSPARSE_VERSION
ENV NV_LIBCUSPARSE_VERSION=12.5.8.93-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNPP_PACKAGE
ENV NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_LIBNPP_VERSION
ENV NV_LIBNPP_VERSION=12.3.3.100-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_NVTX_VERSION
ENV NV_NVTX_VERSION=12.8.90-1

# 2025-03-11 06:24:31  0.00B Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.8.1-1
                        
# 2025-03-11 06:19:20  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2025-03-11 06:19:20  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2025-03-11 06:19:20  17.29KB Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit

# 2025-03-11 06:19:20  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64

# 2025-03-11 06:19:20  0.00B Set environment variable PATH
ENV PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2025-03-11 06:19:20  22.00B Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2025-03-11 06:19:20  203.35MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-8=${NV_CUDA_CUDART_VERSION}     cuda-compat-12-8     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:19:05  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.8.1
                        
# 2025-03-11 06:19:05  10.60MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-03-11 06:19:05  0.00B Add a metadata label
LABEL maintainer="NVIDIA CORPORATION <cudatools@nvidia.com>"

# 2025-03-11 06:19:05  0.00B Define build arguments
ARG TARGETARCH

# 2025-03-11 06:19:05  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.8.90-1

# 2025-03-11 06:19:05  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.8 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566
                        
# 2025-03-11 06:19:05  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2025-01-26 13:31:11  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-01-26 13:31:10  77.86MB 
/bin/sh -c #(nop) ADD file:1b6c8c9518be42fa2afe5e241ca31677fce58d27cdfa88baa91a65a259be3637 in / 
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image info

{
    "Id": "sha256:1a8b81250b916b03fa4b31460f2963634c6551c73b5aacb21f334f12ae9ec703",
    "RepoTags": [
        "vllm/vllm-openai:v0.10.0",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:v0.10.0"
    ],
    "RepoDigests": [
        "vllm/vllm-openai@sha256:af9dc182ee24be77a81ade64a15aa73250440a81224b9c4b7df897d025410b30",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai@sha256:af9dc182ee24be77a81ade64a15aa73250440a81224b9c4b7df897d025410b30"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-07-24T23:35:53.673317058Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.8 brand=unknown,driver\u003e=470,driver\u003c471 brand=grid,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=vapps,driver\u003e=470,driver\u003c471 brand=vpc,driver\u003e=470,driver\u003c471 brand=vcs,driver\u003e=470,driver\u003c471 brand=vws,driver\u003e=470,driver\u003c471 brand=cloudgaming,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551 brand=unknown,driver\u003e=560,driver\u003c561 brand=grid,driver\u003e=560,driver\u003c561 brand=tesla,driver\u003e=560,driver\u003c561 brand=nvidia,driver\u003e=560,driver\u003c561 brand=quadro,driver\u003e=560,driver\u003c561 brand=quadrortx,driver\u003e=560,driver\u003c561 brand=nvidiartx,driver\u003e=560,driver\u003c561 brand=vapps,driver\u003e=560,driver\u003c561 brand=vpc,driver\u003e=560,driver\u003c561 brand=vcs,driver\u003e=560,driver\u003c561 brand=vws,driver\u003e=560,driver\u003c561 brand=cloudgaming,driver\u003e=560,driver\u003c561 brand=unknown,driver\u003e=565,driver\u003c566 brand=grid,driver\u003e=565,driver\u003c566 brand=tesla,driver\u003e=565,driver\u003c566 brand=nvidia,driver\u003e=565,driver\u003c566 brand=quadro,driver\u003e=565,driver\u003c566 brand=quadrortx,driver\u003e=565,driver\u003c566 brand=nvidiartx,driver\u003e=565,driver\u003c566 brand=vapps,driver\u003e=565,driver\u003c566 brand=vpc,driver\u003e=565,driver\u003c566 brand=vcs,driver\u003e=565,driver\u003c566 brand=vws,driver\u003e=565,driver\u003c566 brand=cloudgaming,driver\u003e=565,driver\u003c566",
            "NV_CUDA_CUDART_VERSION=12.8.90-1",
            "CUDA_VERSION=12.8.1",
            "LD_LIBRARY_PATH=/usr/local/cuda/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NV_CUDA_LIB_VERSION=12.8.1-1",
            "NV_NVTX_VERSION=12.8.90-1",
            "NV_LIBNPP_VERSION=12.3.3.100-1",
            "NV_LIBNPP_PACKAGE=libnpp-12-8=12.3.3.100-1",
            "NV_LIBCUSPARSE_VERSION=12.5.8.93-1",
            "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-8",
            "NV_LIBCUBLAS_VERSION=12.8.4.1-1",
            "NV_LIBCUBLAS_PACKAGE=libcublas-12-8=12.8.4.1-1",
            "NV_LIBNCCL_PACKAGE_NAME=libnccl2",
            "NV_LIBNCCL_PACKAGE_VERSION=2.25.1-1",
            "NCCL_VERSION=2.25.1-1",
            "NV_LIBNCCL_PACKAGE=libnccl2=2.25.1-1+cuda12.8",
            "NVIDIA_PRODUCT_NAME=CUDA",
            "NV_CUDA_CUDART_DEV_VERSION=12.8.90-1",
            "NV_NVML_DEV_VERSION=12.8.90-1",
            "NV_LIBCUSPARSE_DEV_VERSION=12.5.8.93-1",
            "NV_LIBNPP_DEV_VERSION=12.3.3.100-1",
            "NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-8=12.3.3.100-1",
            "NV_LIBCUBLAS_DEV_VERSION=12.8.4.1-1",
            "NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-8",
            "NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-8=12.8.4.1-1",
            "NV_CUDA_NSIGHT_COMPUTE_VERSION=12.8.1-1",
            "NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-8=12.8.1-1",
            "NV_NVPROF_VERSION=12.8.90-1",
            "NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-8=12.8.90-1",
            "NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev",
            "NV_LIBNCCL_DEV_PACKAGE_VERSION=2.25.1-1",
            "NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.25.1-1+cuda12.8",
            "LIBRARY_PATH=/usr/local/cuda/lib64/stubs",
            "DEBIAN_FRONTEND=noninteractive",
            "UV_HTTP_TIMEOUT=500",
            "UV_INDEX_STRATEGY=unsafe-best-match",
            "VLLM_USAGE_SOURCE=production-docker-image"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/vllm-workspace",
        "Entrypoint": [
            "python3",
            "-m",
            "vllm.entrypoints.openai.api_server"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        },
        "Shell": [
            "/bin/bash",
            "-c"
        ]
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 26131711273,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/3ed772e8922abc08419bd62c75e409504ec4a85a4fc26c455455569582d41a5a/diff:/var/lib/docker/overlay2/0e1303bebb47f161bf61741915675100e954800c492cb64c9ac6296563f3d32a/diff:/var/lib/docker/overlay2/05c2ed296746f18543adc050416e9b5210e6ee99c0f84b98d592a3a55d737609/diff:/var/lib/docker/overlay2/e4a010d445657b03c164749b1ec3ce2bf34df7dbd0febab528d3ef9e67093f9a/diff:/var/lib/docker/overlay2/7526565d7f2ba64458032fe4078b1e337843dbd570d4b1e31d98139d69cce343/diff:/var/lib/docker/overlay2/3351396b26f48bbe1248db4da01c8d9d0e2fbe966d905b142e4584630b53ec7f/diff:/var/lib/docker/overlay2/2917be36d3db5767c78bd018260041ab9599ca98db43ff0310a2ed7dc4a29183/diff:/var/lib/docker/overlay2/f81e27552ae00a7a3e6bd3324aad586b07f426694e3a96adf8e316ee65d158ab/diff:/var/lib/docker/overlay2/0712a80d9f3371dc15c3995a1f0c560ccc140e044b5e524e42c2e22176f79408/diff:/var/lib/docker/overlay2/ff0d8129afc1b8ce60849abe21010c83dadd2fec8e76ac4ce5f47b288d4215f7/diff:/var/lib/docker/overlay2/2c4ab597f74407e69ff0ce22b8c2d0301d4861674a0e5fe2805253f3de012c9e/diff:/var/lib/docker/overlay2/ffaad95ed3953df4f2f39c6a8b8473b43db849e350682792953efa6031e6b4e7/diff:/var/lib/docker/overlay2/e2d334df7477dc5b57eff85131765a86c969efae521a6dfcc18edaa3ba75b1b3/diff:/var/lib/docker/overlay2/df544d3a0510e2de82f1aaa501770610310165cbe0f7cf7a0819f08c5492f09d/diff:/var/lib/docker/overlay2/dec25cd9770df3e229ed6100129686faacaf8dc4e378c922cef3b3899e4d12b6/diff:/var/lib/docker/overlay2/7e8d1727ba6f8d5a3ca3a9b4cf4a0db4767d5fa045101bcc447e6ff7e15c1018/diff:/var/lib/docker/overlay2/dd529b8c7abf07e4bdad8872df4e86ff5ba2a7f07a717cbc3f6f10afac1beb67/diff:/var/lib/docker/overlay2/a8fecfd45ffa2d317cd53f303acbe078fb5840d1351c9c5a0cd5165144c03f37/diff:/var/lib/docker/overlay2/dcfb2677f8c3c60e8d3d4116d7a64f9f79bf3ede86b307908f1bdf0c0920ba2f/diff:/var/lib/docker/overlay2/7864d310b225f01a31a99e18302601a0bc6f51fb3c4b0914b63b76b6252da21d/diff:/var/lib/docker/overlay2/4fddecfccd19f5fdf75ba15e7c4e6c8df7dcfaa198e9f8705f88c16b98a7812c/diff:/var/lib/docker/overlay2/5b97962622e01b450bce6252497eb1e0a972edf39075683db153b0db41189605/diff:/var/lib/docker/overlay2/eb03b2be6a63ac4902985c612376c782b122f9eef3af69073f2e0c1306ea181f/diff:/var/lib/docker/overlay2/151168d6b5dc33e684376ad8ace0efabf8bc3a418742fcf99a432ed77de99c57/diff:/var/lib/docker/overlay2/8793691dc2f4982c739e7d9c16e15c32e0a493d0d87e97763ca89ea9ddd33b8d/diff:/var/lib/docker/overlay2/ace3f972cf88bd330727fa9a25fd0df2c3fec1df161ac9102bf9f5739b40b82c/diff",
            "MergedDir": "/var/lib/docker/overlay2/a364b04983bbccadab5de57ff13615e104def9a41d768a396f2a594b8f258073/merged",
            "UpperDir": "/var/lib/docker/overlay2/a364b04983bbccadab5de57ff13615e104def9a41d768a396f2a594b8f258073/diff",
            "WorkDir": "/var/lib/docker/overlay2/a364b04983bbccadab5de57ff13615e104def9a41d768a396f2a594b8f258073/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:270a1170e7e398434ff1b31e17e233f7d7b71aa99a40473615860068e86720af",
            "sha256:fb456a9e7760e7e4481d3d9eddee3ef62358753326d669c3efe6b0cc254ab9c3",
            "sha256:981a144d9fa2efffdbdfbdcfd3881dccb4fbf07867d4fa326a386126ca11b50f",
            "sha256:583f1d6040fc90a1380cb4d11d46b78bbbd4ad7d6274b9656ab9832ef7ba6342",
            "sha256:6020d84069cf575521faca653d2817e2195b8233b8c4e2d7bc65f3296296c2bb",
            "sha256:eb0143fcce68bd06a29bf314125d30f2a1f44d606fc419e7a54ab602bc26b2d0",
            "sha256:cb395f276e984728ca05fdee4e1dc203e1310a9e943a2d16d7aa87e5b2cf754b",
            "sha256:70a158f70e3777a129c15ffe51399bb6c226218c08ff6819948b9b1f26a46277",
            "sha256:9bb0510c7b4b74b056648f24e0a55c26f7521a1a1b766250a0776e14ce9d93f6",
            "sha256:7215ae9b9700d9530f70dea6f546b7ebdaac0098e4436b4c99e3e8063170c274",
            "sha256:acbc0d2ed199232b8f482ce41318ccd603e3c9a7731d39f0b08b88b253a6ca67",
            "sha256:90407a289cc4c4802288301e5b3bf02a9dc94c4be7d5282b94ba14179945c6d9",
            "sha256:2ab354b8f3e0fe914b04784aa6600244dd3dedd7ab11956a227abe92b7423b13",
            "sha256:324e1b0941a9b86788c8e579134cfc8efdef654dc414e3a60e82c38b54d9b39f",
            "sha256:9a92c26696c4ea14e44cede67f3008211044c1dd12f37972a289b6cea101f532",
            "sha256:452792cf55014699cc70068490471c894b21dc5211c6c6eede691faa3dd41a5c",
            "sha256:8896c3f2d7632df63907b1f80c2ec7071131a08e50ce27bb0204bba2d9100e26",
            "sha256:abe6f096ad6a40fe8d91a1f16e6905cc5690b78f194efa193182952ea655f63d",
            "sha256:d92f234c4d10113c06361b9f509738da6fee3e86ee65c31daf53429f00c3155b",
            "sha256:f08be12e9e2d92b9ed22f05062852db0974784dfc99ec7b6b4903e62853ad8a2",
            "sha256:f456f42b53564a1af7e1584ff56d18108e682d109f37387fd99d6b5f22c669bd",
            "sha256:346e866837bd217d2b0218898546e13a467d1240b7491ead2b12b01dbcb45707",
            "sha256:76f08044cbd8cf92eb414b1ca3ef2b7d7b181c4f4c437fd077d4fac8b5bf6a07",
            "sha256:2b5bc4ca69c96d9f23d1a5a7e8bc85fc339e1b24a7b2fc78fc9ff8fc418dc814",
            "sha256:9e98475051792b18b1f543bf9d24bea1bf21efbc4204f8b12696ee3495eea3b8",
            "sha256:b4b86d93ebd05b020c07550a2e98992a1dac26f805ae7d2e72f4621109eafe77",
            "sha256:1718d930ae19e1658a3b47e7243b803f84eb9d2cdced6709f5677dd988475a3e"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-07-26T02:56:40.127895733+08:00"
    }
}

More versions

Image                                      Platform     Registry   Size     Synced            Views
docker.io/vllm/vllm-openai:v0.5.4          linux/amd64  docker.io  9.90GB   2024-09-07 06:20  993
docker.io/vllm/vllm-openai:v0.6.0          linux/amd64  docker.io  9.72GB   2024-09-11 01:51  920
docker.io/vllm/vllm-openai:v0.6.1.post2    linux/amd64  docker.io  9.81GB   2024-09-24 01:43  541
docker.io/vllm/vllm-openai:latest          linux/amd64  docker.io  10.24GB  2024-10-11 00:43  1637
docker.io/vllm/vllm-openai:v0.6.4.post1    linux/amd64  docker.io  10.64GB  2024-11-19 00:42  587
docker.io/vllm/vllm-openai:v0.6.4          linux/amd64  docker.io  10.64GB  2024-12-11 02:08  465
docker.io/vllm/vllm-openai:v0.6.3          linux/amd64  docker.io  10.43GB  2024-12-12 02:41  366
docker.io/vllm/vllm-openai:v0.6.6          linux/amd64  docker.io  10.23GB  2025-01-04 00:37  645
docker.io/vllm/vllm-openai:v0.6.6.post1    linux/amd64  docker.io  10.23GB  2025-01-24 00:21  389
docker.io/vllm/vllm-openai:v0.7.1          linux/amd64  docker.io  16.53GB  2025-02-08 02:05  513
docker.io/vllm/vllm-openai:v0.7.2          linux/amd64  docker.io  16.53GB  2025-02-09 00:28  1360
docker.io/vllm/vllm-openai:v0.7.3          linux/amd64  docker.io  16.43GB  2025-02-24 00:50  1839
docker.io/vllm/vllm-openai:v0.8.0          linux/amd64  docker.io  16.62GB  2025-03-20 00:23  681
docker.io/vllm/vllm-openai:v0.8.1          linux/amd64  docker.io  16.62GB  2025-03-21 00:28  564
docker.io/vllm/vllm-openai:v0.8.2          linux/amd64  docker.io  16.92GB  2025-03-27 01:12  678
docker.io/vllm/vllm-openai:v0.8.3          linux/amd64  docker.io  17.13GB  2025-04-08 00:58  773
docker.io/vllm/vllm-openai:v0.8.4          linux/amd64  docker.io  17.16GB  2025-04-17 01:16  900
docker.io/vllm/vllm-openai:v0.8.5          linux/amd64  docker.io  17.30GB  2025-04-30 02:45  1382
docker.io/vllm/vllm-openai:v0.8.5.post1    linux/amd64  docker.io  17.30GB  2025-05-07 02:06  1311
docker.io/vllm/vllm-openai:v0.9.0.1        linux/amd64  docker.io  20.81GB  2025-06-05 01:12  623
docker.io/vllm/vllm-openai:v0.9.1          linux/amd64  docker.io  20.85GB  2025-06-12 01:29  1094
docker.io/vllm/vllm-openai:v0.9.2          linux/amd64  docker.io  20.76GB  2025-07-09 03:00  879
docker.io/vllm/vllm-openai:v0.10.0         linux/amd64  docker.io  26.13GB  2025-07-26 03:15  20