docker.io/vllm/vllm-openai:latest linux/amd64

docker.io/vllm/vllm-openai:latest - China-accessible mirror source (views: 5811)

Note: this image uses the latest tag; this site cannot guarantee that it matches the most recent upstream build.

Image description:

vllm/vllm-openai

An OpenAI-compatible API server powered by vLLM for high-throughput large language model inference and serving.

Source image  docker.io/vllm/vllm-openai:latest
Mirror (China)  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest
Image ID  sha256:ad509f9e7a0a58f39a25e4fd1dd41b799d37b8f0d511028ca425710338ec674f
Image tag  latest
Size  10.24GB
Registry  docker.io
Project info  Docker Hub page · project tags
CMD  (none)
Entrypoint  python3 -m vllm.entrypoints.openai.api_server
Working directory  /vllm-workspace
OS/Platform  linux/amd64
Views  5811
Contributor
Image created  2024-09-25T15:08:09.928625091-07:00
Synced  2024-10-11 00:43
Environment variables
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536 NV_CUDA_CUDART_VERSION=12.4.127-1 NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-4 CUDA_VERSION=12.4.1 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility DEBIAN_FRONTEND=noninteractive VLLM_USAGE_SOURCE=production-docker-image
Image labels
maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=20.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest  docker.io/vllm/vllm-openai:latest
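
Once pulled and re-tagged, the container can be started just like the upstream image; arguments placed after the image name are forwarded to the python3 -m vllm.entrypoints.openai.api_server entrypoint. A minimal sketch, assuming one NVIDIA GPU with the NVIDIA Container Toolkit installed, a host Hugging Face cache at ~/.cache/huggingface, and Qwen/Qwen2.5-1.5B-Instruct as a placeholder model (substitute whichever model you actually serve):

docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  docker.io/vllm/vllm-openai:latest \
  --model Qwen/Qwen2.5-1.5B-Instruct --port 8000

# Smoke test against the OpenAI-compatible endpoint once the server reports it is ready
curl http://localhost:8000/v1/models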

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest  docker.io/vllm/vllm-openai:latest
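
On Kubernetes nodes, note that kubelet reads images from containerd's k8s.io namespace, while plain ctr defaults to the default namespace. A sketch of the same pull and tag targeted at that namespace (assumes a stock containerd setup; adjust the namespace if your cluster uses a different one):

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest docker.io/vllm/vllm-openai:latest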

Shell quick-replace command

sed -i 's#vllm/vllm-openai:latest#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest#' deployment.yaml
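
The unanchored pattern also matches inside a reference that is already fully qualified or already rewritten (for example docker.io/vllm/vllm-openai:latest), so running the command twice, or on a manifest that already points at the mirror, duplicates the prefix. A cautious sketch, assuming the manifest is named deployment.yaml: preview the rewrite without -i, and only apply it in place once the output looks right.

# Preview the substitution and inspect the resulting image fields before editing the file
sed 's#vllm/vllm-openai:latest#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest#' deployment.yaml | grep 'image:'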

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest  docker.io/vllm/vllm-openai:latest'
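
After the distribution run, a quick check that every host in the same k8s group ended up with the re-tagged image (a sketch, kept commented out like the command above; assumes the docker CLI is present on each node):

#ansible k8s -m shell -a 'docker images | grep vllm-openai'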

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest  docker.io/vllm/vllm-openai:latest'
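
The containerd equivalent lists images in the same namespace the pull used (the default namespace here; add -n k8s.io if you pulled into the Kubernetes namespace as sketched earlier):

#ansible k8s -m shell -a 'ctr images ls | grep vllm-openai'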

Image build history


# 2024-09-26 06:08:09  0.00B Configure the command run at container startup
ENTRYPOINT ["python3" "-m" "vllm.entrypoints.openai.api_server"]
                        
# 2024-09-26 06:08:09  0.00B Set environment variable VLLM_USAGE_SOURCE
ENV VLLM_USAGE_SOURCE=production-docker-image
                        
# 2024-09-26 06:08:09  467.50MB Run a command and create a new image layer
RUN /bin/sh -c pip install accelerate hf_transfer 'modelscope!=1.15.0' bitsandbytes>=0.44.0 timm==0.9.10 # buildkit
                        
# 2024-09-26 06:07:55  1.84GB Run a command and create a new image layer
RUN |2 CUDA_VERSION=12.4.1 PYTHON_VERSION=3.12 /bin/sh -c . /etc/environment &&     python3 -m pip install https://github.com/flashinfer-ai/flashinfer/releases/download/v0.1.6/flashinfer-0.1.6+cu121torch2.4-cp${PYTHON_VERSION_STR}-cp${PYTHON_VERSION_STR}-linux_x86_64.whl # buildkit
                        
# 2024-09-26 06:07:04  6.70GB Run a command and create a new image layer
RUN |2 CUDA_VERSION=12.4.1 PYTHON_VERSION=3.12 /bin/sh -c python3 -m pip install dist/*.whl --verbose # buildkit
                        
# 2024-09-12 05:52:09  56.73KB Run a command and create a new image layer
RUN |2 CUDA_VERSION=12.4.1 PYTHON_VERSION=3.12 /bin/sh -c ldconfig /usr/local/cuda-$(echo $CUDA_VERSION | cut -d. -f1,2)/compat/ # buildkit
                        
# 2024-09-12 05:52:09  982.17MB Run a command and create a new image layer
RUN |2 CUDA_VERSION=12.4.1 PYTHON_VERSION=3.12 /bin/sh -c echo 'tzdata tzdata/Areas select America' | debconf-set-selections     && echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections     && apt-get update -y     && apt-get install -y ccache software-properties-common git curl sudo vim python3-pip     && apt-get install -y ffmpeg libsm6 libxext6 libgl1     && add-apt-repository ppa:deadsnakes/ppa     && apt-get update -y     && apt-get install -y python${PYTHON_VERSION} python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv libibverbs-dev     && update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1     && update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION}     && ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config     && curl -sS https://bootstrap.pypa.io/get-pip.py | python${PYTHON_VERSION}     && python3 --version && python3 -m pip --version # buildkit
                        
# 2024-09-12 05:48:46  136.00B Run a command and create a new image layer
RUN |2 CUDA_VERSION=12.4.1 PYTHON_VERSION=3.12 /bin/sh -c PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') &&     echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment # buildkit
                        
# 2024-07-23 15:03:19  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive
                        
# 2024-07-23 15:03:19  0.00B Set the working directory to /vllm-workspace
WORKDIR /vllm-workspace
                        
# 2024-07-23 15:03:19  0.00B Define a build argument
ARG PYTHON_VERSION=3.12
                        
# 2024-07-23 15:03:19  0.00B Define a build argument
ARG CUDA_VERSION=12.4.1
                        
# 2024-04-23 07:42:36  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
                        
# 2024-04-23 07:42:36  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all
                        
# 2024-04-23 07:42:36  17.29KB Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit
                        
# 2024-04-23 07:42:36  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
                        
# 2024-04-23 07:42:36  0.00B Set environment variable PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2024-04-23 07:42:36  46.00B Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf     && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2024-04-23 07:42:36  155.93MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-4=${NV_CUDA_CUDART_VERSION}     ${NV_CUDA_COMPAT_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-04-23 07:42:24  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.4.1
                        
# 2024-04-23 07:42:24  18.32MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/${NVARCH}/3bf863cc.pub | apt-key add - &&     echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/${NVARCH} /" > /etc/apt/sources.list.d/cuda.list &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-04-23 07:42:24  0.00B Add a metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
                        
# 2024-04-23 07:42:24  0.00B Define a build argument
ARG TARGETARCH
                        
# 2024-04-23 07:42:24  0.00B Set environment variable NV_CUDA_COMPAT_PACKAGE
ENV NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-4
                        
# 2024-04-23 07:42:24  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.4.127-1
                        
# 2024-04-23 07:42:24  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
                        
# 2024-04-23 07:42:24  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2024-04-11 02:50:37  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2024-04-11 02:50:37  72.81MB 
/bin/sh -c #(nop) ADD file:ea2128e23dce0162557abadd80656bd5ae047d573095d1d4323eb4154490dfdc in / 
                        
# 2024-04-11 02:50:35  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=20.04
                        
# 2024-04-11 02:50:35  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2024-04-11 02:50:35  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2024-04-11 02:50:35  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:ad509f9e7a0a58f39a25e4fd1dd41b799d37b8f0d511028ca425710338ec674f",
    "RepoTags": [
        "vllm/vllm-openai:latest",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest"
    ],
    "RepoDigests": [
        "vllm/vllm-openai@sha256:730ef3d3c17a217b34cfdbfd99be80b3f459e37ef2fc0c5c43ba70752dad08ae",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai@sha256:c9df95505714304d912b11eacfe49cd154ad9ceea59a3b91518bd6a9579efe51"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2024-09-25T15:08:09.928625091-07:00",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.4 brand=tesla,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=geforce,driver\u003e=470,driver\u003c471 brand=geforcertx,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=titan,driver\u003e=470,driver\u003c471 brand=titanrtx,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=525,driver\u003c526 brand=unknown,driver\u003e=525,driver\u003c526 brand=nvidia,driver\u003e=525,driver\u003c526 brand=nvidiartx,driver\u003e=525,driver\u003c526 brand=geforce,driver\u003e=525,driver\u003c526 brand=geforcertx,driver\u003e=525,driver\u003c526 brand=quadro,driver\u003e=525,driver\u003c526 brand=quadrortx,driver\u003e=525,driver\u003c526 brand=titan,driver\u003e=525,driver\u003c526 brand=titanrtx,driver\u003e=525,driver\u003c526 brand=tesla,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=geforce,driver\u003e=535,driver\u003c536 brand=geforcertx,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=titan,driver\u003e=535,driver\u003c536 brand=titanrtx,driver\u003e=535,driver\u003c536",
            "NV_CUDA_CUDART_VERSION=12.4.127-1",
            "NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-4",
            "CUDA_VERSION=12.4.1",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "DEBIAN_FRONTEND=noninteractive",
            "VLLM_USAGE_SOURCE=production-docker-image"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/vllm-workspace",
        "Entrypoint": [
            "python3",
            "-m",
            "vllm.entrypoints.openai.api_server"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "20.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 10241005778,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/1a29a668c3e1a46550b52bdffc85c058792c8c24a18f4678dd79406e09a8ba3d/diff:/var/lib/docker/overlay2/6db4ab1db552167d246568f192e43f776c3abf414bb70ae56f84270cd38051d2/diff:/var/lib/docker/overlay2/00d6cbc34f2f02de48ec1fdec85872ab238a0e6e77e506f4e0a5d5a9de3620d9/diff:/var/lib/docker/overlay2/d53beed5fc39abf5118ab526dd2722a479f147bab1637f37de9d577d5ed13a37/diff:/var/lib/docker/overlay2/3401dd819105c2b875728cef7ae69424b3d72fe1f108ef7b8d063125bf1eaaae/diff:/var/lib/docker/overlay2/962475ebce9523d6374e2e7552caa0864b56853cb7f04428e868aa1687049c21/diff:/var/lib/docker/overlay2/a09a1d8516011449f8a6b483919051762f2ff2f2f885a446a3cf587585190bce/diff:/var/lib/docker/overlay2/b57e68ecab6ec33049eda8635b7cde3b8c4954bc31a457ae1211eb40363ec75b/diff:/var/lib/docker/overlay2/fcd973f3f16887f8ba42c88b1dc35c2e9d591951b5743752f3a7b0d267ddc0f2/diff:/var/lib/docker/overlay2/49f6feb76f594d5b8d376e605c050f07db2c84c54c1743232afed02b3f2b679a/diff:/var/lib/docker/overlay2/4ecf8b5e165a32fed5a3986800218d154eb9052742580fd3a7e942b41c2d52ba/diff",
            "MergedDir": "/var/lib/docker/overlay2/3551b726ce8be771050d9798370e5e78c3280a1bff41cc022a5a53740d6f8075/merged",
            "UpperDir": "/var/lib/docker/overlay2/3551b726ce8be771050d9798370e5e78c3280a1bff41cc022a5a53740d6f8075/diff",
            "WorkDir": "/var/lib/docker/overlay2/3551b726ce8be771050d9798370e5e78c3280a1bff41cc022a5a53740d6f8075/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:106e8431b412f51ccd75ea46a2d5cb4343b23273cbcf50188377cb93aa9a6d82",
            "sha256:cd76869b72ab2a56badf9a068c3f6231203c316e6d1f7a9206e1a9a1a8009fd4",
            "sha256:2ecbf7829cb760264b926dfd4b3cd036c1bdbb50d311bb1c31a6c217ca208bb4",
            "sha256:5136ffc45974618abb8b5ac96d2be9a16fecf79d6ab6759d896f5355c52f10b9",
            "sha256:809d3bb9c80fb3d31d4c061ba0b38ba4e83b6329e33c2cb2bbf27251a8e527c6",
            "sha256:f64d335e5a99796db8621cc422978eba0bb9fbde78c7af53ca1867d545fde504",
            "sha256:ed46219a7228d2c02d581a58ce83fca6de396a5d24860b87d9b20494c64da001",
            "sha256:5ab9f0b492e634bd2dad569fc1ec6d7adb1d276ae4223baf22211c6391ec817b",
            "sha256:e799597e9d91c75f33d72cd8a3b1a5914701dcb55fd24c687d5adc6dd474192e",
            "sha256:3bbb6c21ed5c3b16089b60bdb320c2da60be23d5d8acc95fd2c9dbfda7accd72",
            "sha256:e141ac4318eaa56b27e1247f40ff98c326cb89e30df98df0326916c7e268f5b6",
            "sha256:fd2b79643cfd2f51852c3392ff9a9950e2392270a62348b0a364388d97b8d951"
        ]
    },
    "Metadata": {
        "LastTagTime": "2024-10-11T00:30:55.937023096+08:00"
    }
}
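
The JSON above is docker image inspect output for this image. After pulling and re-tagging, the same data can be reproduced locally (layer paths and tag times will reflect your own host):

docker image inspect docker.io/vllm/vllm-openai:latest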

More versions

Image | Platform | Registry | Size | Synced | Views
docker.io/vllm/vllm-openai:v0.5.4 | linux/amd64 | docker.io | 9.90GB | 2024-09-07 06:20 | 2154
docker.io/vllm/vllm-openai:v0.6.0 | linux/amd64 | docker.io | 9.72GB | 2024-09-11 01:51 | 1392
docker.io/vllm/vllm-openai:v0.6.1.post2 | linux/amd64 | docker.io | 9.81GB | 2024-09-24 01:43 | 1021
docker.io/vllm/vllm-openai:latest | linux/amd64 | docker.io | 10.24GB | 2024-10-11 00:43 | 5810
docker.io/vllm/vllm-openai:v0.6.4.post1 | linux/amd64 | docker.io | 10.64GB | 2024-11-19 00:42 | 1031
docker.io/vllm/vllm-openai:v0.6.4 | linux/amd64 | docker.io | 10.64GB | 2024-12-11 02:08 | 856
docker.io/vllm/vllm-openai:v0.6.3 | linux/amd64 | docker.io | 10.43GB | 2024-12-12 02:41 | 992
docker.io/vllm/vllm-openai:v0.6.6 | linux/amd64 | docker.io | 10.23GB | 2025-01-04 00:37 | 1416
docker.io/vllm/vllm-openai:v0.6.6.post1 | linux/amd64 | docker.io | 10.23GB | 2025-01-24 00:21 | 985
docker.io/vllm/vllm-openai:v0.7.1 | linux/amd64 | docker.io | 16.53GB | 2025-02-08 02:05 | 1077
docker.io/vllm/vllm-openai:v0.7.2 | linux/amd64 | docker.io | 16.53GB | 2025-02-09 00:28 | 2658
docker.io/vllm/vllm-openai:v0.7.3 | linux/amd64 | docker.io | 16.43GB | 2025-02-24 00:50 | 3402
docker.io/vllm/vllm-openai:v0.8.0 | linux/amd64 | docker.io | 16.62GB | 2025-03-20 00:23 | 1321
docker.io/vllm/vllm-openai:v0.8.1 | linux/amd64 | docker.io | 16.62GB | 2025-03-21 00:28 | 1106
docker.io/vllm/vllm-openai:v0.8.2 | linux/amd64 | docker.io | 16.92GB | 2025-03-27 01:12 | 1325
docker.io/vllm/vllm-openai:v0.8.3 | linux/amd64 | docker.io | 17.13GB | 2025-04-08 00:58 | 1372
docker.io/vllm/vllm-openai:v0.8.4 | linux/amd64 | docker.io | 17.16GB | 2025-04-17 01:16 | 1749
docker.io/vllm/vllm-openai:v0.8.5 | linux/amd64 | docker.io | 17.30GB | 2025-04-30 02:45 | 3134
docker.io/vllm/vllm-openai:v0.8.5.post1 | linux/amd64 | docker.io | 17.30GB | 2025-05-07 02:06 | 3050
docker.io/vllm/vllm-openai:v0.9.0.1 | linux/amd64 | docker.io | 20.81GB | 2025-06-05 01:12 | 1945
docker.io/vllm/vllm-openai:v0.9.1 | linux/amd64 | docker.io | 20.85GB | 2025-06-12 01:29 | 2704
docker.io/vllm/vllm-openai:v0.9.2 | linux/amd64 | docker.io | 20.76GB | 2025-07-09 03:00 | 6002
docker.io/vllm/vllm-openai:v0.10.0 | linux/amd64 | docker.io | 26.13GB | 2025-07-26 03:15 | 1638
docker.io/vllm/vllm-openai:gptoss | linux/amd64 | docker.io | 33.86GB | 2025-08-07 01:52 | 1209
docker.io/vllm/vllm-openai:v0.10.1 | linux/amd64 | docker.io | 20.25GB | 2025-08-20 03:05 | 1189
docker.io/vllm/vllm-openai:v0.10.1.1 | linux/amd64 | docker.io | 20.26GB | 2025-08-23 01:43 | 1853
docker.io/vllm/vllm-openai:v0.10.2 | linux/amd64 | docker.io | 22.49GB | 2025-09-16 03:40 | 1400
docker.io/vllm/vllm-openai:v0.2.7 | linux/amd64 | docker.io | 6.34GB | 2025-10-01 01:07 | 363
docker.io/vllm/vllm-openai:v0.11.0-x86_64 | linux/amd64 | docker.io | 25.86GB | 2025-10-09 02:14 | 1739
docker.io/vllm/vllm-openai:v0.10.2-x86_64 | linux/amd64 | docker.io | 22.49GB | 2025-10-09 02:22 | 475
docker.io/vllm/vllm-openai:v0.11.0 | linux/amd64 | docker.io | 25.86GB | 2025-10-09 11:24 | 1785
docker.io/vllm/vllm-openai:v0.11.0 | linux/arm64 | docker.io | 24.17GB | 2025-10-30 00:47 | 789
docker.io/vllm/vllm-openai:v0.3.3 | linux/amd64 | docker.io | 9.13GB | 2025-11-18 01:01 | 267
docker.io/vllm/vllm-openai:v0.11.1 | linux/amd64 | docker.io | 28.72GB | 2025-11-21 01:03 | 621
docker.io/vllm/vllm-openai:v0.11.2 | linux/amd64 | docker.io | 28.82GB | 2025-11-22 00:46 | 1049
docker.io/vllm/vllm-openai:v0.11.1 | linux/arm64 | docker.io | 26.54GB | 2025-11-22 01:23 | 333
docker.io/vllm/vllm-openai:v0.4.0 | linux/amd64 | docker.io | 9.88GB | 2025-11-22 01:58 | 346
docker.io/vllm/vllm-openai:v0.11.2 | linux/arm64 | docker.io | 26.54GB | 2025-11-22 04:06 | 507
docker.io/vllm/vllm-openai:nightly | linux/amd64 | docker.io | 18.74GB | 2025-12-03 02:43 | 534
docker.io/vllm/vllm-openai:v0.12.0-aarch64 | linux/arm64 | docker.io | 17.89GB | 2025-12-05 03:12 | 423
docker.io/vllm/vllm-openai:v0.12.0 | linux/amd64 | docker.io | 19.47GB | 2025-12-05 03:59 | 1515
docker.io/vllm/vllm-openai:v0.13.0 | linux/amd64 | docker.io | 19.51GB | 2026-01-22 01:41 | 220
docker.io/vllm/vllm-openai:v0.14.0 | linux/amd64 | docker.io | 19.66GB | 2026-01-22 03:16 | 380
docker.io/vllm/vllm-openai:v0.14.1 | linux/amd64 | docker.io | 19.69GB | 2026-01-27 01:52 | 361
docker.io/vllm/vllm-openai:v0.15.0 | linux/amd64 | docker.io | 20.13GB | 2026-01-31 00:51 | 390
docker.io/vllm/vllm-openai:v0.15.1 | linux/amd64 | docker.io | 20.14GB | 2026-02-06 01:14 | 142
docker.io/vllm/vllm-openai:v0.15.1-cu130 | linux/amd64 | docker.io | 18.77GB | 2026-02-07 00:39 | 88
docker.io/vllm/vllm-openai:latest | linux/arm64 | docker.io | 20.65GB | 2026-02-08 00:59 | 39
docker.io/vllm/vllm-openai:v0.15.1-aarch64-cu130 | linux/arm64 | docker.io | 19.60GB | 2026-02-10 00:44 | 11