ghcr.io/ggml-org/llama.cpp:full-cuda-b6823 linux/amd64

ghcr.io/ggml-org/llama.cpp:full-cuda-b6823 - China mainland download mirror

This is a Docker container image packaging the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs), such as LLaMA, on CPUs and GPUs.

Source image ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
Image ID sha256:208bc8698e673f7a2e1971bb54996df28b2d57577b8462224fd0ba2de6c20c6b
Image tag full-cuda-b6823
Size 5.05GB
Registry ghcr.io
CMD (none)
Entrypoint /app/tools.sh
Working directory /app
OS/Arch linux/amd64
Image created 2025-10-23T04:59:30.275574307Z
Synced 2025-10-23 14:36
Updated 2025-10-23 21:27
Environment variables
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536 NV_CUDA_CUDART_VERSION=12.4.99-1 NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-4 CUDA_VERSION=12.4.0 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_CUDA_LIB_VERSION=12.4.0-1 NV_NVTX_VERSION=12.4.99-1 NV_LIBNPP_VERSION=12.2.5.2-1 NV_LIBNPP_PACKAGE=libnpp-12-4=12.2.5.2-1 NV_LIBCUSPARSE_VERSION=12.3.0.142-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-4 NV_LIBCUBLAS_VERSION=12.4.2.65-1 NV_LIBCUBLAS_PACKAGE=libcublas-12-4=12.4.2.65-1 NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.20.5-1 NCCL_VERSION=2.20.5-1 NV_LIBNCCL_PACKAGE=libnccl2=2.20.5-1+cuda12.4 NVIDIA_PRODUCT_NAME=CUDA
Image labels
maintainer: NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823  ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
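
After re-tagging, the image is used exactly like the upstream one. A minimal GPU run sketch, assuming the host has the NVIDIA Container Toolkit installed and a GGUF model at the hypothetical path /models/model.gguf; the full image's /app/tools.sh entrypoint dispatches to the bundled llama.cpp tools, and flags such as --run may differ between builds:

docker run --gpus all --rm -v /models:/models ghcr.io/ggml-org/llama.cpp:full-cuda-b6823 --run -m /models/model.gguf -p "Hello" -n 64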

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823  ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
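
On Kubernetes nodes, containerd keeps the images used by the kubelet in the k8s.io namespace rather than the default one, so the pull and tag usually need the namespace flag. A minimal sketch:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823 ghcr.io/ggml-org/llama.cpp:full-cuda-b6823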

Quick shell replacement command

sed -i 's#ghcr.io/ggml-org/llama.cpp:full-cuda-b6823#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823#' deployment.yaml
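
The sed command rewrites the image reference in place; its effect on a (hypothetical) deployment.yaml is simply:

# before
    image: ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
# after
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823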

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823  ghcr.io/ggml-org/llama.cpp:full-cuda-b6823'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823  ghcr.io/ggml-org/llama.cpp:full-cuda-b6823'
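
The ad-hoc shell commands above can also be written as a small playbook, which is easier to rerun and audit. A minimal sketch for the Docker hosts, assuming the same k8s inventory group and that the community.docker collection is installed (containerd-only nodes would keep the ctr form above):

---
- hosts: k8s
  gather_facts: false
  tasks:
    - name: Pull the mirrored llama.cpp image
      community.docker.docker_image:
        name: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
        source: pull
    - name: Re-tag it back to the upstream reference
      community.docker.docker_image:
        name: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
        repository: ghcr.io/ggml-org/llama.cpp:full-cuda-b6823
        source: local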

Image build history
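
The per-layer history below comes from the image metadata; roughly the same listing can be reproduced against a pulled copy, for example:

docker history --no-trunc ghcr.io/ggml-org/llama.cpp:full-cuda-b6823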


# 2025-10-23 12:59:30  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/app/tools.sh"]

# 2025-10-23 12:59:30  1.90GB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y     git     python3     python3-pip     && pip install --upgrade pip setuptools wheel     && pip install --break-system-packages -r requirements.txt     && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2025-10-23 12:57:52  0.00B  Set the working directory to /app
WORKDIR /app

# 2025-10-23 12:57:52  466.30MB  Copy new files or directories into the container
COPY /app/full /app # buildkit

# 2025-10-23 12:57:51  401.76MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2025-10-09 12:53:36  6.84MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2024-04-05 07:40:07  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]

# 2024-04-05 07:40:07  0.00B  Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA

# 2024-04-05 07:40:07  2.53KB  Copy new files or directories into the container
COPY nvidia_entrypoint.sh /opt/nvidia/ # buildkit

# 2024-04-05 07:40:07  3.06KB  Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit

# 2024-04-05 07:40:07  262.98KB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME} # buildkit

# 2024-04-05 07:40:07  2.03GB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-libraries-12-4=${NV_CUDA_LIB_VERSION}     ${NV_LIBNPP_PACKAGE}     cuda-nvtx-12-4=${NV_NVTX_VERSION}     libcusparse-12-4=${NV_LIBCUSPARSE_VERSION}     ${NV_LIBCUBLAS_PACKAGE}     ${NV_LIBNCCL_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit

# 2024-04-05 07:40:07  0.00B  Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2024-04-05 07:40:07  0.00B  Define build argument
ARG TARGETARCH

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBNCCL_PACKAGE
ENV NV_LIBNCCL_PACKAGE=libnccl2=2.20.5-1+cuda12.4

# 2024-04-05 07:40:07  0.00B  Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.20.5-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBNCCL_PACKAGE_VERSION
ENV NV_LIBNCCL_PACKAGE_VERSION=2.20.5-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBNCCL_PACKAGE_NAME
ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBCUBLAS_PACKAGE
ENV NV_LIBCUBLAS_PACKAGE=libcublas-12-4=12.4.2.65-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBCUBLAS_VERSION
ENV NV_LIBCUBLAS_VERSION=12.4.2.65-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBCUBLAS_PACKAGE_NAME
ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-4

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBCUSPARSE_VERSION
ENV NV_LIBCUSPARSE_VERSION=12.3.0.142-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBNPP_PACKAGE
ENV NV_LIBNPP_PACKAGE=libnpp-12-4=12.2.5.2-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_LIBNPP_VERSION
ENV NV_LIBNPP_VERSION=12.2.5.2-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_NVTX_VERSION
ENV NV_NVTX_VERSION=12.4.99-1

# 2024-04-05 07:40:07  0.00B  Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.4.0-1

# 2024-04-05 07:36:23  0.00B  Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# 2024-04-05 07:36:23  0.00B  Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all

# 2024-04-05 07:36:23  17.29KB  Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit

# 2024-04-05 07:36:23  0.00B  Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64

# 2024-04-05 07:36:23  0.00B  Set environment variable PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# 2024-04-05 07:36:23  46.00B  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf     && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit

# 2024-04-05 07:36:23  155.92MB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-4=${NV_CUDA_CUDART_VERSION}     ${NV_CUDA_COMPAT_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit

# 2024-04-05 07:36:11  0.00B  Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.4.0

# 2024-04-05 07:36:11  10.56MB  Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit

# 2024-04-05 07:36:11  0.00B  Add metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>

# 2024-04-05 07:36:11  0.00B  Define build argument
ARG TARGETARCH

# 2024-04-05 07:36:11  0.00B  Set environment variable NV_CUDA_COMPAT_PACKAGE
ENV NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-4

# 2024-04-05 07:36:11  0.00B  Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.4.99-1

# 2024-04-05 07:36:11  0.00B  Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536

# 2024-04-05 07:36:11  0.00B  Set environment variable NVARCH
ENV NVARCH=x86_64

# 2024-02-28 02:52:59  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2024-02-28 02:52:58  77.86MB
/bin/sh -c #(nop) ADD file:21c2e8d95909bec6f4acdaf4aed55b44ee13603681f93b152e423e3e6a4a207b in /

# 2024-02-28 02:52:57  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04

# 2024-02-28 02:52:57  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2024-02-28 02:52:57  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2024-02-28 02:52:57  0.00B
/bin/sh -c #(nop)  ARG RELEASE

Image information

{
    "Id": "sha256:208bc8698e673f7a2e1971bb54996df28b2d57577b8462224fd0ba2de6c20c6b",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:full-cuda-b6823",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-cuda-b6823"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:9b6ee7d9484971b71c7528c4d39e6f0916d3494ca9bb8a4dee3dad2ffca0dfd5",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:818a403b05c7d69e728115048ace57d5b91f9204d649a75ea082a930a76e0c7e"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-10-23T04:59:30.275574307Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.4 brand=tesla,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=geforce,driver\u003e=470,driver\u003c471 brand=geforcertx,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=titan,driver\u003e=470,driver\u003c471 brand=titanrtx,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=525,driver\u003c526 brand=unknown,driver\u003e=525,driver\u003c526 brand=nvidia,driver\u003e=525,driver\u003c526 brand=nvidiartx,driver\u003e=525,driver\u003c526 brand=geforce,driver\u003e=525,driver\u003c526 brand=geforcertx,driver\u003e=525,driver\u003c526 brand=quadro,driver\u003e=525,driver\u003c526 brand=quadrortx,driver\u003e=525,driver\u003c526 brand=titan,driver\u003e=525,driver\u003c526 brand=titanrtx,driver\u003e=525,driver\u003c526 brand=tesla,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=geforce,driver\u003e=535,driver\u003c536 brand=geforcertx,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=titan,driver\u003e=535,driver\u003c536 brand=titanrtx,driver\u003e=535,driver\u003c536",
            "NV_CUDA_CUDART_VERSION=12.4.99-1",
            "NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-4",
            "CUDA_VERSION=12.4.0",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NV_CUDA_LIB_VERSION=12.4.0-1",
            "NV_NVTX_VERSION=12.4.99-1",
            "NV_LIBNPP_VERSION=12.2.5.2-1",
            "NV_LIBNPP_PACKAGE=libnpp-12-4=12.2.5.2-1",
            "NV_LIBCUSPARSE_VERSION=12.3.0.142-1",
            "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-4",
            "NV_LIBCUBLAS_VERSION=12.4.2.65-1",
            "NV_LIBCUBLAS_PACKAGE=libcublas-12-4=12.4.2.65-1",
            "NV_LIBNCCL_PACKAGE_NAME=libnccl2",
            "NV_LIBNCCL_PACKAGE_VERSION=2.20.5-1",
            "NCCL_VERSION=2.20.5-1",
            "NV_LIBNCCL_PACKAGE=libnccl2=2.20.5-1+cuda12.4",
            "NVIDIA_PRODUCT_NAME=CUDA"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/tools.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 5052036975,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/7fb73aceb770bbc97a11e47088ee9f4329cc4b9fab8012bc4411f26ec64f4ebe/diff:/var/lib/docker/overlay2/7e6ec8b59013389b8fc97ca65ef866b370af4b306beb8b33d64b25e48039d0d7/diff:/var/lib/docker/overlay2/2ba67def159740e8afc373ab15ff618a74dd40f4bf1ebd895758c9e0088c29a3/diff:/var/lib/docker/overlay2/098409ad74af5c4d9555497b837f30ffadf2586c562610dc928812ff3aa144b5/diff:/var/lib/docker/overlay2/7f7a77f91efd89c5c046f51f2dc6dc4e6e94b5348923ff3ef71ebe6c474e3684/diff:/var/lib/docker/overlay2/959059feba7e00f07930dc3bb3a617ec6b068a0e20313fb891d3e24c3af8e6e7/diff:/var/lib/docker/overlay2/32e5bba4855a86f2572ed3c50bb9c76b6c7c04a0258355f770de30e4574fa9fa/diff:/var/lib/docker/overlay2/5c57424f6feedfc26b2fbba9c74ba1e0ecf1f817048e23c7ddb9df74fa9e0467/diff:/var/lib/docker/overlay2/461ca7fd6e39d861b2311f1e1433d554d7027501f7e27f291d664f440cb033f9/diff:/var/lib/docker/overlay2/627a195d1725748e224ba0c8d299e8d5a862ad6133982c6458052c6b839f5e4a/diff:/var/lib/docker/overlay2/297b258cb3a6b3147e80550dbc6c3bd5998554552abd02cb3248496dde840bb9/diff:/var/lib/docker/overlay2/53590fc1976ffdc026d2b1570b36addf9b59e2c2a5e053cb70ec6daa7177f59c/diff:/var/lib/docker/overlay2/36afb5f3b33b3915eca7fc7803ea8c9687feff5bf1dafe376b931e39494e6100/diff",
            "MergedDir": "/var/lib/docker/overlay2/35b2f545b374c0ddf8de662000f3285d1d1145c759bf84dc560bf236bf3db079/merged",
            "UpperDir": "/var/lib/docker/overlay2/35b2f545b374c0ddf8de662000f3285d1d1145c759bf84dc560bf236bf3db079/diff",
            "WorkDir": "/var/lib/docker/overlay2/35b2f545b374c0ddf8de662000f3285d1d1145c759bf84dc560bf236bf3db079/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:5498e8c22f6996f25ef193ee58617d5b37e2a96decf22e72de13c3b34e147591",
            "sha256:4cd4079525948900a02a5734090afd1f3e046fc940dc882c55efcaee0a252dd0",
            "sha256:022bf74291b27404b223ba9ee16a7f3fb067253df9c65e23dfb3339800b28dfa",
            "sha256:eeb5315df33c9e700b3b8b8a3cdd1cf11e13c9dd44bfd946e340573478303349",
            "sha256:e942261d196e5e686398e2326c033119112f910191143b0497f13f78c377fa03",
            "sha256:421c5b38d6e056b3eee631bc65e1d2b24cee88b8f858457ec2ed1604b68cdbbb",
            "sha256:520e0f301880ab5ca0650a44703c86d75428b6ece6c3190b8ecd850e55372f60",
            "sha256:700fe921ad1f9a93e69a6a4faec3406f3f51e0f4ab4e9b732a9141261d941a4f",
            "sha256:b0dfaf1ca5c560107272f1e55220f7cef07d30d44ebac56c4cf8c45308538b91",
            "sha256:7679982bf4df27d4a1a1da6ae54f07c50cac56eaa40c0720de5b7025037c41d2",
            "sha256:2d35d674d0e1e3faae2bd48861fb03379c14aa5a593d53690f941dc8af254cc8",
            "sha256:100e7b6a241f04668cab122cd7d8a86a265731a44bab6027ba0b542b09d4d732",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:bf7184f1cd4f6ae9d21891e8ce9e0207950f1022615a6b5473d18b7c23534195"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-10-23T14:33:41.875350162+08:00"
    }
}
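
The JSON above corresponds to docker image inspect output. After pulling and re-tagging, the local copy can be checked against the image ID listed here, for example:

docker image inspect --format '{{.Id}}' ghcr.io/ggml-org/llama.cpp:full-cuda-b6823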

More versions

ghcr.io/ggml-org/llama.cpp:full  linux/amd64  ghcr.io  1.96GB  2025-03-17 14:48
ghcr.io/ggml-org/llama.cpp:full-cuda  linux/amd64  ghcr.io  5.05GB  2025-03-18 10:58
ghcr.io/ggml-org/llama.cpp:server  linux/amd64  ghcr.io  96.62MB  2025-05-02 00:26
ghcr.io/ggml-org/llama.cpp:server-cuda  linux/amd64  ghcr.io  2.57GB  2025-06-14 16:26
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006  linux/amd64  ghcr.io  2.58GB  2025-07-28 15:06
ghcr.io/ggml-org/llama.cpp:server-musa-b6189  linux/amd64  ghcr.io  4.44GB  2025-08-18 19:58
ghcr.io/ggml-org/llama.cpp:server-musa-b6375  linux/amd64  ghcr.io  4.45GB  2025-09-04 16:53
ghcr.io/ggml-org/llama.cpp:server-vulkan  linux/amd64  ghcr.io  480.55MB  2025-09-04 17:34
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485  linux/amd64  ghcr.io  2.63GB  2025-09-16 16:27
ghcr.io/ggml-org/llama.cpp:server-musa-b6571  linux/amd64  ghcr.io  4.45GB  2025-09-28 14:58
ghcr.io/ggml-org/llama.cpp:server-cuda-b6725  linux/amd64  ghcr.io  2.64GB  2025-10-10 16:46
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda  linux/amd64  docker.io  5.01GB  2025-10-13 17:40
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda-b6746  linux/amd64  docker.io  5.01GB  2025-10-13 17:42
ghcr.io/ggml-org/llama.cpp:full-cuda-b6746  linux/amd64  ghcr.io  5.01GB  2025-10-13 18:03
ghcr.io/ggml-org/llama.cpp:full-b6746  linux/amd64  ghcr.io  2.06GB  2025-10-14 17:12
ghcr.io/ggml-org/llama.cpp:full-cuda-b6823  linux/amd64  ghcr.io  5.05GB  2025-10-23 14:36