docker.io/localai/localai:latest-gpu-nvidia-cuda-11 linux/amd64

docker.io/localai/localai:latest-gpu-nvidia-cuda-11 - China-accessible mirror download source

Note: this image uses the latest tag; this site cannot guarantee that the mirrored copy is the most recent version.

LocalAI Docker Image

This Docker image packages LocalAI, an open-source, self-hosted, local-first drop-in replacement for the OpenAI API that runs language and other models on consumer-grade hardware.

Intended Uses

* Quickly deploy and run LocalAI in a Docker container
* Conveniently develop and experiment with machine-learning projects
* Easily share and deploy machine-learning models

Image Contents

* The LocalAI software and all of its dependencies
* The necessary configuration files and tools
* Sample datasets and models

Usage

Refer to the official LocalAI documentation for detailed usage and installation instructions.
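As a minimal sketch only (assuming the mirror address, the 8080/tcp port, and the /models and /backends volumes listed below, plus an installed NVIDIA Container Toolkit — none of this is taken from the LocalAI docs), a docker-compose service for this image might look like:

```yaml
# Illustrative compose fragment, not an official LocalAI configuration.
services:
  local-ai:
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11
    ports:
      - "8080:8080"            # API port exposed by the image
    volumes:
      - ./models:/models       # declared VOLUME for model storage
      - ./backends:/backends   # declared VOLUME for backend storage
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

With the service up, the image's own readiness endpoint (http://localhost:8080/readyz, per HEALTHCHECK_ENDPOINT below) can be used to confirm the container is serving.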
Source image      docker.io/localai/localai:latest-gpu-nvidia-cuda-11
China mirror      swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11
Image ID          sha256:b9a80d69534a7e1411598f074be458aa445f95ce138e3ab7797cb61238acb6da
Image tag         latest-gpu-nvidia-cuda-11
Size              4.11GB
Registry          docker.io
CMD               (none)
Entrypoint        /entrypoint.sh
Working directory /
OS/platform       linux/amd64
Image created     2025-09-03T20:33:17.441074064Z
Synced at         2025-09-05 09:51
Updated at        2025-09-05 19:56
Exposed ports
8080/tcp
Volumes
/backends /models
Environment variables
PATH=/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBIAN_FRONTEND=noninteractive
BUILD_TYPE=cublas
HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_REQUIRE_CUDA=cuda>=11.0
NVIDIA_VISIBLE_DEVICES=all
Image labels
org.opencontainers.image.created: 2025-09-03T20:28:57.475Z
org.opencontainers.image.description: :robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
org.opencontainers.image.licenses: MIT
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.revision: 034b9b691b242d74dba7d9b77e23146fd1e6d05c
org.opencontainers.image.source: https://github.com/mudler/LocalAI
org.opencontainers.image.title: LocalAI
org.opencontainers.image.url: https://github.com/mudler/LocalAI
org.opencontainers.image.version: v3.5.0-gpu-nvidia-cuda-11

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11  docker.io/localai/localai:latest-gpu-nvidia-cuda-11

containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11  docker.io/localai/localai:latest-gpu-nvidia-cuda-11

Quick shell replacement command

sed -i 's#localai/localai:latest-gpu-nvidia-cuda-11#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11#' deployment.yaml
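The one-liner above can be sanity-checked on a throwaway manifest before touching a real deployment (the file name and manifest contents below are illustrative, not from any real cluster):

```shell
# Create a minimal sample manifest that references the upstream image.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: localai
        image: localai/localai:latest-gpu-nvidia-cuda-11
EOF

# Swap the upstream reference for the China mirror (GNU sed, in-place).
sed -i 's#localai/localai:latest-gpu-nvidia-cuda-11#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11#' deployment.yaml

# Show the rewritten image line.
grep 'image:' deployment.yaml
```

Note that `sed -i` without a suffix argument is GNU sed syntax; BSD/macOS sed requires `sed -i ''`.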

Quick Ansible distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11  docker.io/localai/localai:latest-gpu-nvidia-cuda-11'

Quick Ansible distribution - containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11  docker.io/localai/localai:latest-gpu-nvidia-cuda-11'

Image build history


# 2025-09-04 04:33:17  0.00B Configure the command run when the container starts
ENTRYPOINT ["/entrypoint.sh"]
                        
# 2025-09-04 04:33:17  0.00B Declare the port the container listens on at runtime
EXPOSE 8080/tcp
                        
# 2025-09-04 04:33:17  0.00B Create mount points for persisting or sharing data
VOLUME [/models /backends]
                        
# 2025-09-04 04:33:17  0.00B Define the command used to check container health
HEALTHCHECK --interval=1m --timeout=10m --retries=10 CMD curl -f ${HEALTHCHECK_ENDPOINT} || exit 1
                        
# 2025-09-04 04:33:17  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c mkdir -p /models /backends # buildkit
                        
# 2025-09-04 04:33:17  71.26MB Copy new files or directories into the container
COPY /build/local-ai ./ # buildkit
                        
# 2025-09-04 04:31:31  777.00B Copy new files or directories into the container
COPY ./entrypoint.sh . # buildkit
                        
# 2025-09-04 04:31:31  0.00B Set the working directory to /
WORKDIR /
                        
# 2025-09-04 04:31:31  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all
                        
# 2025-09-04 04:31:31  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=11.0
                        
# 2025-09-04 04:31:31  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
                        
# 2025-09-04 04:31:31  0.00B Define a build argument
ARG CUDA_MAJOR_VERSION=11
                        
# 2025-09-04 04:31:31  0.00B Set environment variable HEALTHCHECK_ENDPOINT
ENV HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz
                        
# 2025-09-04 04:31:31  0.00B Set environment variable PATH
ENV PATH=/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2025-09-04 04:31:31  0.00B Set environment variable PATH
ENV PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2025-09-04 04:31:31  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c expr "${BUILD_TYPE}" = intel && echo "intel" > /run/localai/capability || echo "not intel" # buildkit
                        
# 2025-09-04 04:31:31  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c if [ "${BUILD_TYPE}" = "hipblas" ]; then     ln -s /opt/rocm-**/lib/llvm/lib/libomp.so /usr/lib/libomp.so     ; fi # buildkit
                        
# 2025-09-04 04:31:31  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c if [ "${BUILD_TYPE}" = "hipblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then         apt-get update &&         apt-get install -y --no-install-recommends             hipblas-dev             rocblas-dev &&         apt-get clean &&         rm -rf /var/lib/apt/lists/* &&         echo "amd" > /run/localai/capability &&         ldconfig     ; fi # buildkit
                        
# 2025-09-04 04:31:31  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then         apt-get update &&         apt-get install -y --no-install-recommends             libclblast-dev &&         apt-get clean &&         rm -rf /var/lib/apt/lists/*     ; fi # buildkit
                        
# 2025-09-04 04:31:30  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c <<EOT bash
    if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
        echo "nvidia-l4t" > /run/localai/capability
    fi
EOT # buildkit
                        
# 2025-09-04 04:31:30  3.44GB Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c <<EOT bash
    if [ "${BUILD_TYPE}" = "cublas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
        apt-get update && \
        apt-get install -y  --no-install-recommends \
            software-properties-common pciutils
        if [ "amd64" = "$TARGETARCH" ]; then
            curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
        fi
        if [ "arm64" = "$TARGETARCH" ]; then
            curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
        fi
        dpkg -i cuda-keyring_1.1-1_all.deb && \
        rm -f cuda-keyring_1.1-1_all.deb && \
        apt-get update && \
        apt-get install -y --no-install-recommends \
            cuda-nvcc-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcufft-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/* && \
        echo "nvidia" > /run/localai/capability
    fi
EOT # buildkit
                        
# 2025-09-04 04:29:31  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c <<EOT bash
    if [ "${BUILD_TYPE}" = "vulkan" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
        apt-get update && \
        apt-get install -y  --no-install-recommends \
            software-properties-common pciutils wget gpg-agent && \
        wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | apt-key add - && \
        wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list https://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list && \
        apt-get update && \
        apt-get install -y \
            vulkan-sdk && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/* && \
        echo "vulkan" > /run/localai/capability
    fi
EOT # buildkit
                        
# 2025-09-04 04:29:31  8.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c echo "default" > /run/localai/capability # buildkit
                        
# 2025-09-04 04:29:31  0.00B Run a command and create a new image layer
RUN |6 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=11 CUDA_MINOR_VERSION=7 SKIP_DRIVERS=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c mkdir -p /run/localai # buildkit
                        
# 2025-09-04 04:29:31  0.00B Set environment variable BUILD_TYPE
ENV BUILD_TYPE=cublas
                        
# 2025-09-04 04:29:31  0.00B Define a build argument
ARG TARGETVARIANT=
                        
# 2025-09-04 04:29:31  0.00B Define a build argument
ARG TARGETARCH=amd64
                        
# 2025-09-04 04:29:31  0.00B Define a build argument
ARG SKIP_DRIVERS=false
                        
# 2025-09-04 04:29:31  0.00B Define a build argument
ARG CUDA_MINOR_VERSION=7
                        
# 2025-09-04 04:29:31  0.00B Define a build argument
ARG CUDA_MAJOR_VERSION=11
                        
# 2025-09-04 04:29:31  0.00B Define a build argument
ARG BUILD_TYPE=cublas
                        
# 2025-09-04 04:29:31  516.78MB Run a command and create a new image layer
RUN /bin/sh -c apt-get update &&     apt-get install -y --no-install-recommends         ca-certificates curl wget espeak-ng libgomp1         ffmpeg libopenblas-base libopenblas-dev &&     apt-get clean &&     rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-09-04 04:29:31  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive
                        
# 2025-08-20 01:17:10  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-08-20 01:17:10  77.87MB 
/bin/sh -c #(nop) ADD file:9303cc1f788d2a9a8f909b154339f7c637b2a53c75c0e7f3da62eb1fefe371b1 in / 
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-08-20 01:17:08  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image details (docker inspect output)

{
    "Id": "sha256:b9a80d69534a7e1411598f074be458aa445f95ce138e3ab7797cb61238acb6da",
    "RepoTags": [
        "localai/localai:latest-gpu-nvidia-cuda-11",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-11"
    ],
    "RepoDigests": [
        "localai/localai@sha256:fe8f7261a35cc3739918d9ac24dc4aa12ae9dbd9cd42f8ff94cd9f638c33c994",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai@sha256:097f6ff1481148510dc5ebe3c0a01cf018d2ea23f5594d9a6cc7a8e533d55d08"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-09-03T20:33:17.441074064Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "8080/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "DEBIAN_FRONTEND=noninteractive",
            "BUILD_TYPE=cublas",
            "HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=11.0",
            "NVIDIA_VISIBLE_DEVICES=all"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD-SHELL",
                "curl -f ${HEALTHCHECK_ENDPOINT} || exit 1"
            ],
            "Interval": 60000000000,
            "Timeout": 600000000000,
            "Retries": 10
        },
        "Image": "",
        "Volumes": {
            "/backends": {},
            "/models": {}
        },
        "WorkingDir": "/",
        "Entrypoint": [
            "/entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.created": "2025-09-03T20:28:57.475Z",
            "org.opencontainers.image.description": ":robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI,  running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference",
            "org.opencontainers.image.licenses": "MIT",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.revision": "034b9b691b242d74dba7d9b77e23146fd1e6d05c",
            "org.opencontainers.image.source": "https://github.com/mudler/LocalAI",
            "org.opencontainers.image.title": "LocalAI",
            "org.opencontainers.image.url": "https://github.com/mudler/LocalAI",
            "org.opencontainers.image.version": "v3.5.0-gpu-nvidia-cuda-11"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 4110160731,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/454fa779dbc083d5e55fae5880d1beb4290a5861bc2c153f1021310f4d593db0/diff:/var/lib/docker/overlay2/b6ce07137752b26c978106b2837648597bde0d00f40dcade82b190e1a09835df/diff:/var/lib/docker/overlay2/c68f6e3869f03d3868aac7adccfff82e68528ab7040184fe1c009f4b34a41eba/diff:/var/lib/docker/overlay2/a808960c469c1dd2aa660db2214b6682b1acdb7ebe590ff4e9bc5e3e358d67e7/diff:/var/lib/docker/overlay2/b96a1bc397c4eca1be38818e0e53330a1bc9783707687aa81d85fc377c207793/diff:/var/lib/docker/overlay2/e834d0f82cc5119b10897f0ce1b826d1afeafbefaf4e3a8905b76a44efac21e5/diff:/var/lib/docker/overlay2/a217ccd8f5c58c927a01c7dadc7ea3b05544608c10cd41e325cc3b2bb7ecba97/diff:/var/lib/docker/overlay2/c8d07d7de5b80d4d2249cbb8f14d73fa5c5da25ea1631c2e95e2e104d65ddc1a/diff:/var/lib/docker/overlay2/8c9e2ef051d799f2026560b455cb463ff69fd97336ba3e5e1b71ae8de4d67aca/diff:/var/lib/docker/overlay2/a565cf76db25dd367c7bfb0decd8a61c01376e3f2c30671a1bd18e6a7bc5753f/diff:/var/lib/docker/overlay2/efa5f63bf6143a950ed81991c98afff65907c5dcdee7e6b2642b8678baaf2e81/diff:/var/lib/docker/overlay2/ef7e49d101951e966c252f8e6d0ccd1ea5f17ddc4888fac3757809002524170c/diff:/var/lib/docker/overlay2/686020265ba704bac267cf5b6fc8dfaf73aa31d484836dd3225bbe9a3899f285/diff",
            "MergedDir": "/var/lib/docker/overlay2/2b20b736ed14b943879c134585bfa3555be62265d8b8408d976352c4fce32fdc/merged",
            "UpperDir": "/var/lib/docker/overlay2/2b20b736ed14b943879c134585bfa3555be62265d8b8408d976352c4fce32fdc/diff",
            "WorkDir": "/var/lib/docker/overlay2/2b20b736ed14b943879c134585bfa3555be62265d8b8408d976352c4fce32fdc/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:dc6eb6dad5f9e332f00af553440e857b1467db1be43dd910cdb6830ba0898d50",
            "sha256:aa39741442d49eb0437bc8a96926056ac9f7b9e418ce0d57dbb6d5c098dd9eaf",
            "sha256:49d4f93120401dc1def6cc29754e2ce1faea4f7177c1e3b343f17a0ce6f86e6e",
            "sha256:08c2b1419657c3c461377c9312bf3ee769c7d3fb2171f300f451c1dd2e60f6c5",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:0f4e75c410e4da3d76f3f2e39e26097528bcdcebc7c2bb9e44d009a33950e180",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:9ad56a9f3c3817f0064b0a6c07d33791c3bfa1a9fc3962cffdb7c116d6f6fd70",
            "sha256:2fa3d8ad90a82f87dcd30b1c259e47fe315757425d8c60e0dab92aafac890242",
            "sha256:b9aa02f9c5090b918fd9a23340be3357c076443f0f741e8ce1d2415d7ffef16d"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-09-05T09:46:55.018327959+08:00"
    }
}
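Fields in the inspect output above can be extracted from the command line. A minimal sketch without jq, run here against a hand-copied fragment of the Labels block rather than a live `docker inspect` call (with jq installed, `docker inspect IMAGE | jq -r '.[0].Config.Labels["org.opencontainers.image.version"]'` would be the cleaner route):

```shell
# Hand-copied fragment of the Labels object shown above, used as sample input.
cat > labels.json <<'EOF'
{
    "org.opencontainers.image.title": "LocalAI",
    "org.opencontainers.image.version": "v3.5.0-gpu-nvidia-cuda-11"
}
EOF

# Crude grep/sed extraction of the version label's value.
version=$(grep '"org.opencontainers.image.version"' labels.json \
  | sed 's/.*: *"\(.*\)".*/\1/')
echo "$version"
```

This shows the tag's underlying release: although the tag is `latest-gpu-nvidia-cuda-11`, the mirrored copy corresponds to LocalAI v3.5.0.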

More versions

docker.io/localai/localai:latest-aio-cpu

linux/amd64  docker.io  6.42GB  2024-11-08 14:23

docker.io/localai/localai:latest-aio-gpu-nvidia-cuda-12

linux/amd64  docker.io  45.94GB  2024-11-21 01:51

docker.io/localai/localai:master-aio-gpu-nvidia-cuda-12

linux/amd64  docker.io  42.47GB  2025-02-28 01:41

docker.io/localai/localai:master-vulkan-ffmpeg-core

linux/amd64  docker.io  5.91GB  2025-03-03 18:48

docker.io/localai/localai:latest-aio-gpu-hipblas

linux/amd64  docker.io  88.18GB  2025-03-10 02:52

docker.io/localai/localai:latest-gpu-nvidia-cuda-12

linux/amd64  docker.io  41.80GB  2025-03-11 02:52

docker.io/localai/localai:v2.29.0-cublas-cuda12

linux/amd64  docker.io  13.57GB  2025-05-22 16:07

docker.io/localai/localai:v2.29.0-aio-gpu-nvidia-cuda-12

linux/amd64  docker.io  48.62GB  2025-05-29 02:49

docker.io/localai/localai:latest-gpu-nvidia-cuda-11

linux/amd64  docker.io  4.11GB  2025-09-05 09:51