docker.io/pytorch/torchserve:0.12.0-gpu linux/amd64

docker.io/pytorch/torchserve:0.12.0-gpu - China-based mirror download source (verified publisher: PyTorch)
TorchServe is a framework from the PyTorch project for model deployment and serving. It provides an easy-to-use API that lets developers deploy models into a variety of environments with little effort.
  1. Fast deployment: models can be deployed quickly and iterated on and updated in short cycles.
  2. Ease of use: a simple API covers packaging, registering, and serving models.
  3. PyTorch model support: TorchServe serves PyTorch models, including eager-mode models and exported TorchScript models.

TorchServe is a capable and approachable serving framework that lets developers get models into production quickly across many environments. It is an important part of the PyTorch ecosystem, designed to make model deployment and serving more efficient and straightforward.
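
A minimal end-to-end sketch of the usual TorchServe workflow (run where the TorchServe Python tooling is installed, or inside this container; the model name densenet161, the weights file model.pt, and the test image kitten.jpg are placeholders for illustration):

# Package a trained model into a .mar archive using the built-in image_classifier handler
torch-model-archiver --model-name densenet161 --version 1.0 \
    --serialized-file model.pt --handler image_classifier --export-path model-store

# Start TorchServe against the archive, then send a test prediction to the inference API
torchserve --start --ncs --model-store model-store --models densenet161=densenet161.mar
curl http://localhost:8080/predictions/densenet161 -T kitten.jpg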

Source image docker.io/pytorch/torchserve:0.12.0-gpu
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu
Image ID sha256:10b7b4c915f11e597d930c6f418e3b077e474c89cda8d17ecf31c917f13216e5
Image tag 0.12.0-gpu
Size 6.61GB
Source registry docker.io
Project info Docker Hub page / project tags
CMD serve
Entrypoint /usr/local/bin/dockerd-entrypoint.sh
Working directory /home/model-server
OS/Platform linux/amd64
Image created 2024-09-30T21:49:44.460287506Z
Synced at 2026-01-22 00:42
Updated at 2026-01-22 07:03
Exposed ports
7070/tcp 7071/tcp 8080/tcp 8081/tcp 8082/tcp
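
These ports follow TorchServe's defaults: 8080 serves the inference API, 8081 the management API, 8082 the metrics API, and 7070/7071 the gRPC inference and management endpoints. Once a container built from this image is running with the ports published (see the docker run sketch below), a quick health check looks like this (the host and port mapping are assumptions):

curl http://localhost:8080/ping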
Environment variables
PATH=/home/venv/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 NV_CUDA_CUDART_VERSION=12.1.55-1 NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-1 CUDA_VERSION=12.1.0 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility PYTHONUNBUFFERED=TRUE TEMP=/home/model-server/tmp
Image labels
maintainer: NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 20.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu  docker.io/pytorch/torchserve:0.12.0-gpu
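
A hedged sketch of running the retagged image on a GPU host (requires the NVIDIA Container Toolkit; the local model-store directory is a placeholder):

docker run --rm -it --gpus all \
  -p 8080:8080 -p 8081:8081 -p 8082:8082 \
  -v $(pwd)/model-store:/home/model-server/model-store \
  docker.io/pytorch/torchserve:0.12.0-gpu

The image's entrypoint /usr/local/bin/dockerd-entrypoint.sh together with the default CMD "serve" starts TorchServe automatically.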

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu  docker.io/pytorch/torchserve:0.12.0-gpu
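
On Kubernetes nodes, containerd keeps the kubelet's images in the k8s.io namespace, so pulls intended for pods usually need the -n flag; a variant of the commands above under that assumption:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu  docker.io/pytorch/torchserve:0.12.0-gpu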

Quick shell replacement command

sed -i 's#pytorch/torchserve:0.12.0-gpu#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu#' deployment.yaml
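
To apply the same substitution to every manifest in a directory and then verify the references, a sketch assuming GNU sed and a *.yaml layout (deployment.yaml above is just one example file):

find . -name '*.yaml' -exec sed -i 's#pytorch/torchserve:0.12.0-gpu#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu#' {} +
grep -rn --include='*.yaml' 'torchserve:0.12.0-gpu' .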

Quick Ansible distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu  docker.io/pytorch/torchserve:0.12.0-gpu'

Quick Ansible distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu  docker.io/pytorch/torchserve:0.12.0-gpu'
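
After distribution, a quick spot check that every node now carries the upstream tag (the k8s inventory group from the ad-hoc commands above is assumed; use the docker or ctr variant matching the node's runtime):

ansible k8s -m shell -a 'docker images docker.io/pytorch/torchserve:0.12.0-gpu'
ansible k8s -m shell -a 'ctr images ls | grep torchserve:0.12.0-gpu'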

Image build history


# 2024-10-01 05:49:44  0.00B Set the default command to run
CMD ["serve"]
                        
# 2024-10-01 05:49:44  0.00B Configure the command run when the container starts
ENTRYPOINT ["/usr/local/bin/dockerd-entrypoint.sh"]
                        
# 2024-10-01 05:49:44  0.00B Set environment variable TEMP
ENV TEMP=/home/model-server/tmp
                        
# 2024-10-01 05:49:44  0.00B Set the working directory to /home/model-server
WORKDIR /home/model-server
                        
# 2024-10-01 05:49:44  0.00B Set the user the container runs as
USER model-server
                        
# 2024-10-01 05:49:44  0.00B Declare the ports the container listens on at runtime
EXPOSE 7070/tcp 7071/tcp 8080/tcp 8081/tcp 8082/tcp
                        
# 2024-10-01 05:49:44  0.00B Run a command and create a new image layer
RUN |1 PYTHON_VERSION=3.9 /bin/sh -c mkdir /home/model-server/model-store && chown -R model-server /home/model-server/model-store # buildkit
                        
# 2024-10-01 05:49:44  309.00B Copy files or directories into the image
COPY docker/config.properties /home/model-server/config.properties # buildkit
                        
# 2024-10-01 05:49:44  0.00B Run a command and create a new image layer
RUN |1 PYTHON_VERSION=3.9 /bin/sh -c chmod +x /usr/local/bin/dockerd-entrypoint.sh     && chown -R model-server /home/model-server # buildkit
                        
# 2024-10-01 05:49:44  0.00B Set environment variable PATH
ENV PATH=/home/venv/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2024-10-01 05:49:44  219.00B Copy files or directories into the image
COPY /usr/local/bin/dockerd-entrypoint.sh /usr/local/bin/dockerd-entrypoint.sh # buildkit
                        
# 2024-10-01 05:49:43  5.43GB Copy files or directories into the image
COPY /home/venv /home/venv # buildkit
                        
# 2024-09-15 14:31:08  333.23KB Run a command and create a new image layer
RUN |1 PYTHON_VERSION=3.9 /bin/sh -c useradd -m model-server     && mkdir -p /home/model-server/tmp # buildkit
                        
# 2024-09-15 14:31:08  935.73MB Run a command and create a new image layer
RUN |1 PYTHON_VERSION=3.9 /bin/sh -c apt-get update &&     apt-get upgrade -y &&     apt-get install software-properties-common -y &&     add-apt-repository ppa:deadsnakes/ppa -y &&     apt remove python-pip  python3-pip &&     DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y     python$PYTHON_VERSION     python3-distutils     python$PYTHON_VERSION-dev     python$PYTHON_VERSION-venv     openjdk-17-jdk     build-essential     && rm -rf /var/lib/apt/lists/*     && cd /tmp # buildkit
                        
# 2024-09-15 14:31:08  0.00B Set environment variable PYTHONUNBUFFERED
ENV PYTHONUNBUFFERED=TRUE
                        
# 2024-09-15 14:31:08  0.00B Define a build argument
ARG PYTHON_VERSION
                        
# 2023-11-10 13:42:53  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
                        
# 2023-11-10 13:42:53  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all
                        
# 2023-11-10 13:42:53  17.29KB Copy files or directories into the image
COPY NGC-DL-CONTAINER-LICENSE / # buildkit
                        
# 2023-11-10 13:42:52  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
                        
# 2023-11-10 13:42:52  0.00B Set environment variable PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2023-11-10 13:42:52  46.00B Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf     && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2023-11-10 13:42:52  149.60MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-1=${NV_CUDA_CUDART_VERSION}     ${NV_CUDA_COMPAT_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2023-11-10 13:42:36  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.1.0
                        
# 2023-11-10 13:42:36  18.32MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/${NVARCH}/3bf863cc.pub | apt-key add - &&     echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/${NVARCH} /" > /etc/apt/sources.list.d/cuda.list &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2023-11-10 13:42:36  0.00B Add a metadata label
LABEL maintainer="NVIDIA CORPORATION <cudatools@nvidia.com>"
                        
# 2023-11-10 13:42:36  0.00B Define a build argument
ARG TARGETARCH
                        
# 2023-11-10 13:42:36  0.00B Set environment variable NV_CUDA_COMPAT_PACKAGE
ENV NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-1
                        
# 2023-11-10 13:42:36  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.1.55-1
                        
# 2023-11-10 13:42:36  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526
                        
# 2023-11-10 13:42:36  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2023-10-03 18:45:52  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2023-10-03 18:45:51  72.79MB 
/bin/sh -c #(nop) ADD file:4809da414c2d478b4d991cbdaa2df457f2b3d07d0ff6cf673f09a66f90833e81 in / 
                        
# 2023-10-03 18:45:50  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=20.04
                        
# 2023-10-03 18:45:50  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2023-10-03 18:45:50  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2023-10-03 18:45:50  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
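
The layer history above matches what the docker CLI reports locally; after pulling the mirror image it can be reproduced with, for example:

docker history --no-trunc swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu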
                        
                    

Image information

{
    "Id": "sha256:10b7b4c915f11e597d930c6f418e3b077e474c89cda8d17ecf31c917f13216e5",
    "RepoTags": [
        "pytorch/torchserve:0.12.0-gpu",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu"
    ],
    "RepoDigests": [
        "pytorch/torchserve@sha256:94d33a9310b9e5fd6adc4f0ddffdc420dd0b8099ee348494c937dae6f4c12008",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve@sha256:76660eb71797c07754ded7c9be23dfcf9c1325c1a6ca400ea9f5fbf4e07980b2"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2024-09-30T21:49:44.460287506Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "model-server",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "7070/tcp": {},
            "7071/tcp": {},
            "8080/tcp": {},
            "8081/tcp": {},
            "8082/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/home/venv/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.1 brand=tesla,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=geforce,driver\u003e=470,driver\u003c471 brand=geforcertx,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=titan,driver\u003e=470,driver\u003c471 brand=titanrtx,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=525,driver\u003c526 brand=unknown,driver\u003e=525,driver\u003c526 brand=nvidia,driver\u003e=525,driver\u003c526 brand=nvidiartx,driver\u003e=525,driver\u003c526 brand=geforce,driver\u003e=525,driver\u003c526 brand=geforcertx,driver\u003e=525,driver\u003c526 brand=quadro,driver\u003e=525,driver\u003c526 brand=quadrortx,driver\u003e=525,driver\u003c526 brand=titan,driver\u003e=525,driver\u003c526 brand=titanrtx,driver\u003e=525,driver\u003c526",
            "NV_CUDA_CUDART_VERSION=12.1.55-1",
            "NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-1",
            "CUDA_VERSION=12.1.0",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "PYTHONUNBUFFERED=TRUE",
            "TEMP=/home/model-server/tmp"
        ],
        "Cmd": [
            "serve"
        ],
        "ArgsEscaped": true,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/home/model-server",
        "Entrypoint": [
            "/usr/local/bin/dockerd-entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "20.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 6608894732,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/2b8f74af231917591bf77e63d5dec333573fb0f63ecb23bd5aab90b57dcc9fb0/diff:/var/lib/docker/overlay2/fee46b7421c4d1e5cb18115c34e740b54331947359c16ed358134d5088ae5094/diff:/var/lib/docker/overlay2/0a010315ff2e8d1d170635bf331ce1ef31216943b13aaaa0dbd6b21b00488029/diff:/var/lib/docker/overlay2/7bb9539525fa2b11fc3c9726b8d78004adecb444f9dcf8a9028fda3610b71c00/diff:/var/lib/docker/overlay2/5082ebeff89d9beeb3250614855e8abbcbe87b9c557586c99eca614a5027cfca/diff:/var/lib/docker/overlay2/8302a6b621d6d8638e861a23085931ecadf03dd1c163e8ce381619524ffd144e/diff:/var/lib/docker/overlay2/5b69368bd7e9c549183b4ec78a27c3280405dc0b32227dfbe26b47ee376b4db8/diff:/var/lib/docker/overlay2/69fb0e5c0344b674c108aefe19500df857b24df183cfd6a09bb322cba3c8c7c9/diff:/var/lib/docker/overlay2/760686e0bd39f67734a4681ea944ac738e602975f430e5708bf519047e82cd7e/diff:/var/lib/docker/overlay2/0a5ac69c32f49a34f1325e9c0b7bbf1bb8e6d1c40379670ddb28222e17d9d08f/diff:/var/lib/docker/overlay2/14378f7ff88d309ffc7f861684d94ea3ae6f4532233974566fb4ce365c12e34a/diff:/var/lib/docker/overlay2/5408abd3ec726f4d055e10fddd8f488a8839c8a355fe1fb064cdc9fee660e07b/diff",
            "MergedDir": "/var/lib/docker/overlay2/e258dc9877701986a53658de59560ff3e382aecf9aaa0d35e255ab00540c2b17/merged",
            "UpperDir": "/var/lib/docker/overlay2/e258dc9877701986a53658de59560ff3e382aecf9aaa0d35e255ab00540c2b17/diff",
            "WorkDir": "/var/lib/docker/overlay2/e258dc9877701986a53658de59560ff3e382aecf9aaa0d35e255ab00540c2b17/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:6c3e7df31590f02f10cb71fc4eb27653e9b428df2e6e5421a455b062bd2e39f9",
            "sha256:2651516ff8de098646c66d3a2869845f49f5910f9073a654f4cdd1bd69163c02",
            "sha256:35d40f4df845e4ec2e4d9606f2b00285daa57616c98a27ce0de1913d3f01a445",
            "sha256:1eeecbd4dbae984e15018caeab879b50811fbcf75b59345d4547cfa33151be98",
            "sha256:1ff8f721b9dbee7ce469524967a4138db73097d81672dad87e13860c5f9f1b6c",
            "sha256:3cd5803a5d7aa467f3027261e28ec3e1db69c5437853227303e8aa0564104a47",
            "sha256:03599f3074a3bd75855e7e0df5d7243f52ddaf6a17a670a08aeed6598fc42e23",
            "sha256:e3f8c59a7ef6416d65517aaa12a6e95991b1d3cc857275f069bd7b56c3836caf",
            "sha256:d79f2cbecae26205f9bc0cf5901ff54c34505fb122ccf1f463c754097cdadaaf",
            "sha256:0000ab07f7e8768d305ef6617b97c2e2de599036dc02f739868193108a84f97e",
            "sha256:982cac852e4d00366a9c3c8f4e73ba1c84f1a7243420173ee29a2232949e19f7",
            "sha256:2b99057b3dba100dbb8d4a2bdd5795f350e498aaf0acd5649e53d1192c1c0730",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2026-01-22T00:32:51.061086453+08:00"
    }
}
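
The inspect output above can be reproduced locally once the image has been pulled (the mirror tag is shown here as an example):

docker image inspect swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/pytorch/torchserve:0.12.0-gpu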

More versions

docker.io/pytorch/torchserve:0.11.0-gpu    linux/amd64  docker.io  6.59GB  2024-08-30 15:55
docker.io/pytorch/torchserve-kfs:0.9.0     linux/amd64  docker.io  2.92GB  2025-04-28 16:07
docker.io/pytorch/torchserve:0.11.0-cpu    linux/amd64  docker.io  2.00GB  2025-06-26 16:49
docker.io/pytorch/torchserve:0.12.0-cpu    linux/amd64  docker.io  2.03GB  2026-01-21 09:27
docker.io/pytorch/torchserve:0.12.0-gpu    linux/amd64  docker.io  6.61GB  2026-01-22 00:42