docker.io/localai/localai:latest-gpu-nvidia-cuda-12 linux/amd64

docker.io/localai/localai:latest-gpu-nvidia-cuda-12 - China mirror download source

Note: this is a latest-tag image; this site cannot guarantee that it is the most recent version.

LocalAI Docker Image

This Docker image contains the LocalAI software. LocalAI is an open-source, self-hosted platform for running machine learning models locally, designed as a drop-in replacement for the OpenAI API.

Image purpose

* Quickly deploy and run LocalAI in a Docker container
* Conveniently develop and experiment with machine learning projects
* Easily share and deploy machine learning models

Image contents

* The LocalAI software and all of its dependencies
* Necessary configuration files and tools
* Sample datasets and models

Usage

See the official LocalAI documentation for detailed usage and installation instructions.
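LocalAI exposes an OpenAI-compatible HTTP API on the container's published port (8080 in this image). As a minimal, hypothetical client sketch — the model name `gpt-4` is a placeholder for whichever model is installed, and the request is only built, not sent:

```python
# Hypothetical client sketch for LocalAI's OpenAI-compatible API.
# BASE_URL and the model name are placeholders; the request is built
# but not sent, since sending requires a running container.
import json
import urllib.request

BASE_URL = "http://localhost:8080"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request against the OpenAI-compatible endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("gpt-4", "Hello")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

To actually send the request, pass `req` to `urllib.request.urlopen` once the container is up, for example after `docker run --gpus all -p 8080:8080 -v $PWD/models:/build/models <image>`.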
Source image docker.io/localai/localai:latest-gpu-nvidia-cuda-12
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12
Image ID sha256:cd7bf69ed24b0b16d6ec6333f36fee5aa28d55b756f05d76c3bb9f0ac58bc914
Image tag latest-gpu-nvidia-cuda-12
Size 41.80GB
Source registry docker.io
CMD
Entrypoint /build/entrypoint.sh
Working directory /build
OS/Arch linux/amd64
Image created 2025-02-15T21:05:20.465744828Z
Synced 2025-03-11 02:52
Updated 2025-03-31 10:13
Exposed ports
8080/tcp
Environment variables
PATH=/root/.cargo/bin:/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin
DEBIAN_FRONTEND=noninteractive
EXTERNAL_GRPC_BACKENDS=coqui:/build/backend/python/coqui/run.sh,transformers:/build/backend/python/transformers/run.sh,rerankers:/build/backend/python/rerankers/run.sh,autogptq:/build/backend/python/autogptq/run.sh,bark:/build/backend/python/bark/run.sh,diffusers:/build/backend/python/diffusers/run.sh,faster-whisper:/build/backend/python/faster-whisper/run.sh,kokoro:/build/backend/python/kokoro/run.sh,vllm:/build/backend/python/vllm/run.sh,exllama2:/build/backend/python/exllama2/run.sh
BUILD_TYPE=cublas
REBUILD=false
HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz
MAKEFLAGS=--jobs=3 --output-sync=target
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_REQUIRE_CUDA=cuda>=12.0
NVIDIA_VISIBLE_DEVICES=all
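The EXTERNAL_GRPC_BACKENDS variable above is a comma-separated list of name:path pairs mapping each external backend to its launcher script. A small illustrative parser for that format (a sketch of the convention only, not LocalAI's own code):

```python
# Illustrative parser for the EXTERNAL_GRPC_BACKENDS format: a
# comma-separated list of "name:path" pairs. Sketch only, not
# LocalAI's actual implementation.
def parse_external_backends(value: str) -> dict:
    backends = {}
    for entry in value.split(","):
        # split on the first ":" only, since paths contain no colons here
        name, _, path = entry.partition(":")
        backends[name] = path
    return backends

value = ("coqui:/build/backend/python/coqui/run.sh,"
         "vllm:/build/backend/python/vllm/run.sh")
backends = parse_external_backends(value)
print(backends["vllm"])  # /build/backend/python/vllm/run.sh
```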
Image labels
org.opencontainers.image.created: 2025-02-15T18:14:19.154Z
org.opencontainers.image.description: :robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
org.opencontainers.image.licenses: MIT
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.revision: 09941c0bfb9119bb01a04b2a0a16897ecf2cd087
org.opencontainers.image.source: https://github.com/mudler/LocalAI
org.opencontainers.image.title: LocalAI
org.opencontainers.image.url: https://github.com/mudler/LocalAI
org.opencontainers.image.version: v2.26.0-cublas-cuda12-ffmpeg

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12  docker.io/localai/localai:latest-gpu-nvidia-cuda-12

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12  docker.io/localai/localai:latest-gpu-nvidia-cuda-12

Shell quick-replace command

sed -i 's#localai/localai:latest-gpu-nvidia-cuda-12#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12#' deployment.yaml
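Note that the pattern above assumes the manifest references the image without a `docker.io/` prefix; if the prefix is present, include it in the pattern too, or the rewrite will duplicate it. The same substitution can be done in Python — the Deployment snippet below is illustrative, not taken from a real cluster:

```python
# Python equivalent of the sed substitution above, applied to an
# illustrative Deployment snippet (placeholder content).
OLD = "localai/localai:latest-gpu-nvidia-cuda-12"
NEW = ("swr.cn-north-4.myhuaweicloud.com/ddn-k8s/"
       "docker.io/localai/localai:latest-gpu-nvidia-cuda-12")

manifest = """\
containers:
  - name: localai
    image: localai/localai:latest-gpu-nvidia-cuda-12
"""

# plain substring replacement, matching the sed 's#old#new#' behavior
rewritten = manifest.replace(OLD, NEW)
print(rewritten.rstrip().endswith(NEW))  # True
```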

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12  docker.io/localai/localai:latest-gpu-nvidia-cuda-12'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12  docker.io/localai/localai:latest-gpu-nvidia-cuda-12'

Image build history


# 2025-02-16 05:05:20  0.00B Configure the command run when the container starts
ENTRYPOINT ["/build/entrypoint.sh"]

# 2025-02-16 05:05:20  0.00B Declare the port the container listens on at runtime
EXPOSE 8080/tcp

# 2025-02-16 05:05:20  0.00B Create a mount point for persisting or sharing data
VOLUME ["/build/models"]

# 2025-02-16 05:05:20  0.00B Specify the command for checking container health
HEALTHCHECK --interval=1m --timeout=10m --retries=10 CMD curl -f ${HEALTHCHECK_ENDPOINT} || exit 1
                        
# 2025-02-16 05:05:20  0.00B Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/bash -c mkdir -p /build/models # buildkit
                        
# 2025-02-16 05:05:18  14.94GB Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/bash -c if [[ ( "${EXTRA_BACKENDS}" =~ "vllm" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/vllm     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "autogptq" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/autogptq     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "bark" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/bark     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "rerankers" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/rerankers     ; fi # buildkit
                        
# 2025-02-16 05:03:20  7.21GB Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/bash -c if [[ ( "${EXTRA_BACKENDS}" =~ "kokoro" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/kokoro     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "exllama2" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/exllama2     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "transformers" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/transformers     ; fi # buildkit
                        
# 2025-02-16 05:01:58  7.59GB Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/bash -c if [[ ( "${EXTRA_BACKENDS}" =~ "coqui" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/coqui     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "faster-whisper" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/faster-whisper     ; fi &&     if [[ ( "${EXTRA_BACKENDS}" =~ "diffusers" || -z "${EXTRA_BACKENDS}" ) && "$IMAGE_TYPE" == "extras" ]]; then         make -C backend/python/diffusers     ; fi # buildkit
                        
# 2025-02-16 05:00:53  0.00B Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/bash -c if [[ ( "${IMAGE_TYPE}" == "extras ")]]; then         apt-get -qq -y install espeak-ng     ; fi # buildkit
                        
# 2025-02-16 05:00:53  0.00B Set the shell for subsequent build steps
SHELL ["/bin/bash", "-c"]
                        
# 2025-02-16 05:00:53  34.64MB Copy new files or directories into the container
COPY /build/sources/go-piper/piper-phonemize/pi/lib/* /usr/lib/ # buildkit

# 2025-02-16 05:00:53  1.67GB Copy new files or directories into the container
COPY /build/local-ai ./ # buildkit

# 2025-02-16 05:00:43  1.51GB Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/sh -c make prepare-sources # buildkit
                        
# 2025-02-16 05:00:10  1.21GB Copy new files or directories into the container
COPY /opt/grpc /usr/local # buildkit

# 2025-02-16 04:59:59  907.93MB Copy new files or directories into the container
COPY /build/sources ./sources/ # buildkit
                        
# 2025-02-16 02:19:47  16.05MB Copy new files or directories into the container
COPY . . # buildkit

# 2025-02-16 02:19:47  0.00B Set the working directory to /build
WORKDIR /build

# 2025-02-16 02:19:47  329.45MB Run a command and create a new image layer
RUN |13 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false FFMPEG=true IMAGE_TYPE=extras EXTRA_BACKENDS= MAKEFLAGS=--jobs=3 --output-sync=target /bin/sh -c if [ "${FFMPEG}" = "true" ]; then         apt-get update &&         apt-get install -y --no-install-recommends             ffmpeg &&         apt-get clean &&         rm -rf /var/lib/apt/lists/*     ; fi # buildkit
                        
# 2025-02-16 02:19:08  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2025-02-16 02:19:08  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.0

# 2025-02-16 02:19:08  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG CUDA_MAJOR_VERSION=12

# 2025-02-16 02:19:08  0.00B Set environment variable MAKEFLAGS
ENV MAKEFLAGS=--jobs=3 --output-sync=target

# 2025-02-16 02:19:08  0.00B Set environment variable HEALTHCHECK_ENDPOINT
ENV HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz

# 2025-02-16 02:19:08  0.00B Set environment variable REBUILD
ENV REBUILD=false

# 2025-02-16 02:19:08  0.00B Set environment variable BUILD_TYPE
ENV BUILD_TYPE=cublas

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG MAKEFLAGS=--jobs=3 --output-sync=target

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG EXTRA_BACKENDS

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG IMAGE_TYPE=extras

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG TARGETARCH=amd64

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG BUILD_TYPE=cublas

# 2025-02-16 02:19:08  0.00B Define a build argument
ARG FFMPEG=true
                        
# 2025-02-16 02:19:08  0.00B Run a command and create a new image layer
RUN |9 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false /bin/sh -c if [ "${BUILD_TYPE}" = "hipblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then         apt-get update &&         apt-get install -y --no-install-recommends             hipblas-dev             rocblas-dev &&         apt-get clean &&         rm -rf /var/lib/apt/lists/* &&         ldconfig     ; fi # buildkit
                        
# 2025-02-16 02:19:07  0.00B Run a command and create a new image layer
RUN |9 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false /bin/sh -c if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then         apt-get update &&         apt-get install -y --no-install-recommends             libclblast-dev &&         apt-get clean &&         rm -rf /var/lib/apt/lists/*     ; fi # buildkit
                        
# 2025-02-16 02:19:07  3.94GB Run a command and create a new image layer
RUN |9 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false /bin/sh -c <<EOT bash
    if [ "${BUILD_TYPE}" = "cublas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
        apt-get update && \
        apt-get install -y  --no-install-recommends \
            software-properties-common pciutils
        if [ "amd64" = "$TARGETARCH" ]; then
            curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
        fi
        if [ "arm64" = "$TARGETARCH" ]; then
            curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
        fi
        dpkg -i cuda-keyring_1.1-1_all.deb && \
        rm -f cuda-keyring_1.1-1_all.deb && \
        apt-get update && \
        apt-get install -y --no-install-recommends \
            cuda-nvcc-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcufft-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
            libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
    fi
EOT # buildkit
                        
# 2025-02-16 02:16:24  0.00B Run a command and create a new image layer
RUN |9 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= BUILD_TYPE=cublas CUDA_MAJOR_VERSION=12 CUDA_MINOR_VERSION=0 SKIP_DRIVERS=false /bin/sh -c <<EOT bash
    if [ "${BUILD_TYPE}" = "vulkan" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
        apt-get update && \
        apt-get install -y  --no-install-recommends \
            software-properties-common pciutils wget gpg-agent && \
        wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | apt-key add - && \
        wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list https://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list && \
        apt-get update && \
        apt-get install -y \
            vulkan-sdk && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
    fi
EOT # buildkit
                        
# 2025-02-16 02:16:24  0.00B Set environment variable BUILD_TYPE
ENV BUILD_TYPE=cublas

# 2025-02-16 02:16:24  0.00B Define a build argument
ARG SKIP_DRIVERS=false

# 2025-02-16 02:16:24  0.00B Define a build argument
ARG CUDA_MINOR_VERSION=0

# 2025-02-16 02:16:24  0.00B Define a build argument
ARG CUDA_MAJOR_VERSION=12

# 2025-02-16 02:16:24  0.00B Define a build argument
ARG BUILD_TYPE=cublas
                        
# 2025-02-16 02:16:24  35.30MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c pip install --user grpcio-tools # buildkit
                        
# 2025-02-16 02:16:21  281.45MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c apt-get update &&     apt-get install -y --no-install-recommends         espeak-ng         espeak         python3-pip         python-is-python3         python3-dev llvm         python3-venv &&     apt-get clean &&     rm -rf /var/lib/apt/lists/* &&     pip install --upgrade pip # buildkit
                        
# 2025-02-16 02:16:00  1.16GB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y # buildkit
                        
# 2025-02-16 02:15:42  0.00B Set environment variable PATH
ENV PATH=/root/.cargo/bin:/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin
                        
# 2025-02-16 02:15:42  39.83MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c curl -LsSf https://astral.sh/uv/install.sh | UV_INSTALL_DIR=/usr/bin sh # buildkit
                        
# 2025-02-16 02:15:40  0.00B Set the working directory to /build
WORKDIR /build

# 2025-02-16 02:15:40  110.51MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c apt-get update &&     apt-get install -y --no-install-recommends         libopenblas-dev &&     apt-get clean &&     rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-02-16 02:15:32  0.00B Set environment variable PATH
ENV PATH=/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin

# 2025-02-16 02:15:32  0.00B Set environment variable PATH
ENV PATH=/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin

# 2025-02-16 02:15:32  0.00B Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c echo "Target Variant: $TARGETVARIANT" # buildkit
                        
# 2025-02-16 02:15:32  0.00B Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c echo "Target Architecture: $TARGETARCH" # buildkit
                        
# 2025-02-16 02:15:32  0.00B Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c test -n "$TARGETARCH"     || (echo 'warn: missing $TARGETARCH, either set this `ARG` manually, or run using `docker buildkit`') # buildkit
                        
# 2025-02-16 02:15:32  219.34KB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c update-ca-certificates # buildkit

# 2025-02-16 02:15:32  0.00B Copy new files or directories into the container
COPY --chmod=644 custom-ca-certs/* /usr/local/share/ca-certificates/ # buildkit
                        
# 2025-02-16 02:15:32  136.50MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2 &&     go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@1958fcbe2ca8bd93af633f11e97d44e567e945af # buildkit
                        
# 2025-02-16 02:15:16  0.00B Set environment variable PATH
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin
                        
# 2025-02-16 02:15:16  222.31MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c curl -L -s https://go.dev/dl/go${GO_VERSION}.linux-${TARGETARCH}.tar.gz | tar -C /usr/local -xz # buildkit
                        
# 2025-02-16 02:15:12  68.61MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c <<EOT bash
    if [ "${CMAKE_FROM_SOURCE}}" = "true" ]; then
        curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
    else
        apt-get update && \
        apt-get install -y \
            cmake && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
    fi
EOT # buildkit
                        
# 2025-02-16 02:15:01  303.20MB Run a command and create a new image layer
RUN |5 GO_VERSION=1.22.6 CMAKE_VERSION=3.26.4 CMAKE_FROM_SOURCE=false TARGETARCH=amd64 TARGETVARIANT= /bin/sh -c apt-get update &&     apt-get install -y --no-install-recommends         build-essential         ccache         ca-certificates         curl libssl-dev         git         unzip upx-ucl &&     apt-get clean &&     rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2025-02-16 02:15:01  0.00B Set environment variable EXTERNAL_GRPC_BACKENDS
ENV EXTERNAL_GRPC_BACKENDS=coqui:/build/backend/python/coqui/run.sh,transformers:/build/backend/python/transformers/run.sh,rerankers:/build/backend/python/rerankers/run.sh,autogptq:/build/backend/python/autogptq/run.sh,bark:/build/backend/python/bark/run.sh,diffusers:/build/backend/python/diffusers/run.sh,faster-whisper:/build/backend/python/faster-whisper/run.sh,kokoro:/build/backend/python/kokoro/run.sh,vllm:/build/backend/python/vllm/run.sh,exllama2:/build/backend/python/exllama2/run.sh
                        
# 2025-02-16 02:15:01  0.00B Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive
                        
# 2025-02-16 02:15:01  0.00B Define a build argument
ARG TARGETVARIANT=

# 2025-02-16 02:15:01  0.00B Define a build argument
ARG TARGETARCH=amd64

# 2025-02-16 02:15:01  0.00B Define a build argument
ARG CMAKE_FROM_SOURCE=false

# 2025-02-16 02:15:01  0.00B Define a build argument
ARG CMAKE_VERSION=3.26.4

# 2025-02-16 02:15:01  0.00B Define a build argument
ARG GO_VERSION=1.22.6
                        
# 2025-02-16 02:15:01  0.00B Specify the user for running the container
USER root
                        
# 2025-01-26 13:31:11  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-01-26 13:31:10  77.86MB 
/bin/sh -c #(nop) ADD file:1b6c8c9518be42fa2afe5e241ca31677fce58d27cdfa88baa91a65a259be3637 in / 
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-01-26 13:31:07  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:cd7bf69ed24b0b16d6ec6333f36fee5aa28d55b756f05d76c3bb9f0ac58bc914",
    "RepoTags": [
        "localai/localai:latest-gpu-nvidia-cuda-12",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai:latest-gpu-nvidia-cuda-12"
    ],
    "RepoDigests": [
        "localai/localai@sha256:6b4cf2bb3a6638d0dcca21b93dd66bc508ffe8b29573f6442cd276b387899613",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/localai/localai@sha256:8ef2f62c4fbef104b4a22dba443cd110169552831f4ceadb9c7da294a0fb6042"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-02-15T21:05:20.465744828Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "root",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "8080/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/root/.cargo/bin:/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin",
            "DEBIAN_FRONTEND=noninteractive",
            "EXTERNAL_GRPC_BACKENDS=coqui:/build/backend/python/coqui/run.sh,transformers:/build/backend/python/transformers/run.sh,rerankers:/build/backend/python/rerankers/run.sh,autogptq:/build/backend/python/autogptq/run.sh,bark:/build/backend/python/bark/run.sh,diffusers:/build/backend/python/diffusers/run.sh,faster-whisper:/build/backend/python/faster-whisper/run.sh,kokoro:/build/backend/python/kokoro/run.sh,vllm:/build/backend/python/vllm/run.sh,exllama2:/build/backend/python/exllama2/run.sh",
            "BUILD_TYPE=cublas",
            "REBUILD=false",
            "HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz",
            "MAKEFLAGS=--jobs=3 --output-sync=target",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.0",
            "NVIDIA_VISIBLE_DEVICES=all"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD-SHELL",
                "curl -f ${HEALTHCHECK_ENDPOINT} || exit 1"
            ],
            "Interval": 60000000000,
            "Timeout": 600000000000,
            "Retries": 10
        },
        "Image": "",
        "Volumes": {
            "/build/models": {}
        },
        "WorkingDir": "/build",
        "Entrypoint": [
            "/build/entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.created": "2025-02-15T18:14:19.154Z",
            "org.opencontainers.image.description": ":robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI,  running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference",
            "org.opencontainers.image.licenses": "MIT",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.revision": "09941c0bfb9119bb01a04b2a0a16897ecf2cd087",
            "org.opencontainers.image.source": "https://github.com/mudler/LocalAI",
            "org.opencontainers.image.title": "LocalAI",
            "org.opencontainers.image.url": "https://github.com/mudler/LocalAI",
            "org.opencontainers.image.version": "v2.26.0-cublas-cuda12-ffmpeg"
        },
        "Shell": [
            "/bin/bash",
            "-c"
        ]
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 41804421068,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/bd006ff4f77bd5d4fb3141aa4cd054811f7d741814aee42c384c7dec16220fa8/diff:/var/lib/docker/overlay2/865e90d86af7a5dea0355d84c989f70154112b282ac44eb1daf9833a8558f8b6/diff:/var/lib/docker/overlay2/f69b007d54d6f371e5c8318349d41a1fe7bc06520ba92d7736d484faace415a5/diff:/var/lib/docker/overlay2/b4f62d5ae4412f915da3446cb83778374dd29651643ab67f4a22f4829fee6778/diff:/var/lib/docker/overlay2/1c3d95f0572dc9f14c5d498c818956d5fa4e855b76282906e0e649e24d42e70d/diff:/var/lib/docker/overlay2/03eda149b17c5c9e250907e60a149b9f3c0d690ea2d8641d1e07783466fdc32a/diff:/var/lib/docker/overlay2/01f75224ad94f92b8dde880ed29caad215b6c5212240c580138ecebe517e1ab2/diff:/var/lib/docker/overlay2/1798c7364cb8e7c7d38b48091a64141a3beb0877f4ba4ab96c5da315e724b88f/diff:/var/lib/docker/overlay2/e63fa47bae45aedc81d7f7335ab8a1be4411aeb468ece654c6016b2bb926f9d9/diff:/var/lib/docker/overlay2/cdf5c0e7a2abf745b6dd38827e39c73259f61ee900df29f1780dab79a8e780b3/diff:/var/lib/docker/overlay2/d481bfe32d6b1bd728a886441ca916c68cf2983e019376be97cdca5d6c1382a6/diff:/var/lib/docker/overlay2/ee9cdfcbf02b22c8162518f0436a1869df38898b1f37f1c16dd92f1060f6e4c3/diff:/var/lib/docker/overlay2/396ea1c60332e5fee0d5678fa04933867fbfb4957d742e1a534c7d62bf6dfc7c/diff:/var/lib/docker/overlay2/85a0870e28a8e0d774e15ec73756d2eb455b6dcb24fb4e2a63be68325565bca0/diff:/var/lib/docker/overlay2/12876b89e45fcda632a7ba92e3080264cbe52bae545c9b4d04024f240795c7f2/diff:/var/lib/docker/overlay2/2ae2a808a9da47e817810e39b4ee62702210479811babce4f433c6be79dd6601/diff:/var/lib/docker/overlay2/15095861ce932abce997bf1045449297b63576964dca74a4da3db4abac7692e0/diff:/var/lib/docker/overlay2/bb90514cf521f11b148f7d7c55d7b58ada2aeb694d7363c22992bc28347c7be1/diff:/var/lib/docker/overlay2/08e1b2c7852875b4783fc0227647328cb5a7c6d43809b2c47c5179111cd0394a/diff:/var/lib/docker/overlay2/14606ba9e2ac2187aeb84326dcf8288e129bf94f7553c13c4e4375f2ca4554ff/diff:/var/lib/docker/overlay2/ed000f2e1e019e3d22ad843c17e5dec553acbd303c9098645a
9f687757c0c2d5/diff:/var/lib/docker/overlay2/1cf99a9d6c7e7b5caec71ce51df1101ac4f037c24caea17d85bf923f982ccc60/diff:/var/lib/docker/overlay2/e436aa8a4179146a33c4f702db433b45179d8bf4ccaf5c7d330f5f5a03d304d2/diff:/var/lib/docker/overlay2/dee14f0fcf2cc84bf534be026900c425b296748c8b5c9df39d5867e4978b4ce9/diff:/var/lib/docker/overlay2/c4eb2fb8d3fcb191cf1c6c6d3f121c3be0bbead6d9113d1a5c27df8c02c76872/diff:/var/lib/docker/overlay2/fb32d0c397c9c8152be9b9f4cc71726f042829278188dabb3f792cdbaeb2343d/diff:/var/lib/docker/overlay2/3c4b7c1b6ac537f5cef5fa35cd0c26234abdaba41f8cc851c3efd53b6b0c66ce/diff:/var/lib/docker/overlay2/43075a472d42fedc25e334b87aaab7b7391a7df8effb9f1893f67476509775d4/diff:/var/lib/docker/overlay2/04cd98e340d50dd4e4088581f98d20251971058be1ea61a87a176cc8a9171edf/diff:/var/lib/docker/overlay2/6a4d3c12e138a2ad0bafadc6533fcb9a8cf51a75f3d314fe0cb03feccfdfeb47/diff:/var/lib/docker/overlay2/fea47567818ab0d014b12b9f47ef8c85d572f2b63bc365e87c7c701712ead649/diff:/var/lib/docker/overlay2/ace3f972cf88bd330727fa9a25fd0df2c3fec1df161ac9102bf9f5739b40b82c/diff",
            "MergedDir": "/var/lib/docker/overlay2/54d04226aafd112802e0693755811427ae3570bcb9a7ff4a5b11adfbedc14b78/merged",
            "UpperDir": "/var/lib/docker/overlay2/54d04226aafd112802e0693755811427ae3570bcb9a7ff4a5b11adfbedc14b78/diff",
            "WorkDir": "/var/lib/docker/overlay2/54d04226aafd112802e0693755811427ae3570bcb9a7ff4a5b11adfbedc14b78/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:270a1170e7e398434ff1b31e17e233f7d7b71aa99a40473615860068e86720af",
            "sha256:a00b5723702847c90a6c5a182ef42f8f932e72111a7f689f3e9409568de4e324",
            "sha256:26501cd8a41a92b2e9d3c51bb5d15fb23c8eae299d09c99302029249ddf619d1",
            "sha256:f3e8f032a637a9c578532d3ab1fd261a463fac76818bdeb97d2138e8f07f5d5a",
            "sha256:eaca4cd3f14b72dd9d50dc7b46e16fea22db82d92f87784cd10252a0d42cc6a3",
            "sha256:06e430b20a2dc1e11a87441007647b9ca46016917b1685e76827cb04f7cf9163",
            "sha256:9d7d33d7196f303fd62cf20d19f28a94e4e0eacf1ab865d91fa2fcae90d042fe",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:c237970378921007023450f87cbc9360802ba0816875519bb966e1c46fa1c94f",
            "sha256:08fb8c3f28eac7149ac28b417f1d525039d28b91a90af8de8601a24ab5fcd591",
            "sha256:7f5af79dc23cf4bed9e5d31485b866874129a6db071cf77b7676a804e07731cb",
            "sha256:3cb4718784bb755bc11f7a1e15f1a0fa9d1618a05e1502bf7fed8d817e341eee",
            "sha256:00b096e5bb3dd8dd4eb53e3a2ffeee4231ed5fb68d9d2db676c9e7b0e482ede2",
            "sha256:6f0bf7ce3a54d190c85a992ecafbe064413999a23eae757700753bd0cdaa4181",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:623dc8adc6326f836a69f7008d11e5948711be8bf415c2f94eccbfb609c90f76",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:3806950542ca6d8e8c048d0d77cb95b058490e121cb91470d574d8f2a7027d96",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:5568c42b228ae80d699ca1bf1247a2900c9c397141afa59867e00f80a58e4c2c",
            "sha256:1dc12a7aab386ac2134c17c6d95bbce6161535a98b6e3db96929aa2c8be209b9",
            "sha256:07cb08656714804b7e047977e313280462ea36d19d72b64eb650ac41dadf1452",
            "sha256:baa49a0ad8080d59ca9d6374dc65da284d515d573cd8f6f7a81977a261b0db1e",
            "sha256:5b17cb88fe9e60d215e53964f5d0fedfeeeeff87aace7795aa1fd1719eb4feb4",
            "sha256:f3371929daaf63d4aa17c569df909f04253fb98974eb94b7ede2dcfe0aacdc6f",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:94df29578f0909ae1847e4abc94fd04e8e5228ffb0bf4247f283be8826ed26fe",
            "sha256:c5f9194c28676f9bb30d4d9050aa7e1e6dcd54a3912ad33a0b6c89dc9b52dfad",
            "sha256:8f979af2d009fef418155b8f3bab37eb77bdc3636b5d2f4911d797c7a296f567",
            "sha256:c1a4262f3c7296fa62f72f894f33557bb2ec0c0e01bc4361c7850a6c71425000"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-03-11T02:09:12.915276791+08:00"
    }
}

More versions

docker.io/localai/localai:latest-aio-cpu
linux/amd64 · docker.io · 6.42GB · 2024-11-08 14:23

docker.io/localai/localai:latest-aio-gpu-nvidia-cuda-12
linux/amd64 · docker.io · 45.94GB · 2024-11-21 01:51

docker.io/localai/localai:master-aio-gpu-nvidia-cuda-12
linux/amd64 · docker.io · 42.47GB · 2025-02-28 01:41

docker.io/localai/localai:master-vulkan-ffmpeg-core
linux/amd64 · docker.io · 5.91GB · 2025-03-03 18:48

docker.io/localai/localai:latest-aio-gpu-hipblas
linux/amd64 · docker.io · 88.18GB · 2025-03-10 02:52

docker.io/localai/localai:latest-gpu-nvidia-cuda-12
linux/amd64 · docker.io · 41.80GB · 2025-03-11 02:52