ghcr.io/ggerganov/llama.cpp:server-cuda-b4641 linux/amd64

ghcr.io/ggerganov/llama.cpp:server-cuda-b4641 - China mirror download source
Description of the image ghcr.io/ggerganov/llama.cpp:

llama.cpp is an open-source C/C++ inference engine, created by Georgi Gerganov, for running LLaMA-family models (originally developed by Meta) and other GGUF-format large language models efficiently on commodity hardware. This server-cuda image packages the llama-server binary built with CUDA support: it loads a GGUF model and serves chat and completion requests over HTTP, listening on 0.0.0.0:8080 inside the container (see the LLAMA_ARG_HOST variable and the health check below). Typical uses include local assistants, customer-service bots, content generation, and other natural-language applications.
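As a quick usage sketch (assuming the container is already running with a model loaded, as shown in the pull and run commands further down), the built-in health endpoint and the OpenAI-compatible chat endpoint that llama-server exposes can be exercised with curl; the prompt below is just a placeholder:

curl -f http://localhost:8080/health
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Hello"}]}'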

Source image ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
Image ID sha256:05b95f3b152acfb5235684462260bef987cc051cd23d6b9354a4e31bf4c2842b
Image tag server-cuda-b4641
Size 2.67GB
Source registry ghcr.io
CMD (not set)
Entrypoint /app/llama-server
Working directory /app
OS/Arch linux/amd64
Image created 2025-02-05T04:52:24.800566667Z
Synced 2025-02-05 14:38
Updated 2025-02-20 01:24
Environment variables
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.6 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 NV_CUDA_CUDART_VERSION=12.6.37-1 CUDA_VERSION=12.6.0 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_CUDA_LIB_VERSION=12.6.0-1 NV_NVTX_VERSION=12.6.37-1 NV_LIBNPP_VERSION=12.3.1.23-1 NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.23-1 NV_LIBCUSPARSE_VERSION=12.5.2.23-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6 NV_LIBCUBLAS_VERSION=12.6.0.22-1 NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.0.22-1 NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.22.3-1 NCCL_VERSION=2.22.3-1 NV_LIBNCCL_PACKAGE=libnccl2=2.22.3-1+cuda12.6 NVIDIA_PRODUCT_NAME=CUDA LLAMA_ARG_HOST=0.0.0.0
Image labels
maintainer=NVIDIA CORPORATION <cudatools@nvidia.com> org.opencontainers.image.ref.name=ubuntu org.opencontainers.image.version=22.04
Image security scan: see the Trivy scan report

OS: ubuntu 22.04    Scan engine: Trivy    Scan time: 2025-02-05 14:38

Low vulnerabilities: 51    Medium: 24    High: 0    Critical: 0
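To reproduce or refresh this scan locally, a Trivy run along these lines should work (the severity filter is an optional addition, not part of the report above):

trivy image --severity LOW,MEDIUM,HIGH,CRITICAL swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641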

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641  ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
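
Once the image is tagged back to its original name, the server can be started with GPU access roughly as follows; this is only a sketch, and the volume path, model file, and -ngl layer count are placeholders to adjust for your setup (arguments after the image name are passed to the /app/llama-server entrypoint):

docker run --rm --gpus all -p 8080:8080 -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:server-cuda-b4641 -m /models/your-model.gguf -ngl 99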

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641  ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
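
On Kubernetes nodes, containerd keeps the images used by the kubelet in the k8s.io namespace, so the pull and tag may need to be namespaced; a possible variant:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641 ghcr.io/ggerganov/llama.cpp:server-cuda-b4641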

Shell quick-replace command

sed -i 's#ghcr.io/ggerganov/llama.cpp:server-cuda-b4641#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641#' deployment.yaml
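
If the workload is already deployed, the same image swap can also be applied in place with kubectl; the deployment and container names below are hypothetical placeholders:

kubectl set image deployment/llama-server llama-server=swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641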

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641  ghcr.io/ggerganov/llama.cpp:server-cuda-b4641'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641  ghcr.io/ggerganov/llama.cpp:server-cuda-b4641'

Image build history


# 2025-02-05 12:52:24  0.00B Set the command to run when the container starts
ENTRYPOINT ["/app/llama-server"]
                        
# 2025-02-05 12:52:24  0.00B Define the command used to check container health
HEALTHCHECK &{["CMD" "curl" "-f" "http://localhost:8080/health"] "0s" "0s" "0s" "0s" '\x00'}
                        
# 2025-02-05 12:52:24  0.00B Set the working directory to /app
WORKDIR /app
                        
# 2025-02-05 12:52:24  4.00MB Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit
                        
# 2025-02-05 12:52:24  0.00B Set environment variable LLAMA_ARG_HOST
ENV LLAMA_ARG_HOST=0.0.0.0
                        
# 2025-02-05 12:44:48  361.07MB Copy new files or directories into the container
COPY /app/lib/ /app # buildkit
                        
# 2025-02-05 12:18:35  3.83MB Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit
                        
# 2024-08-10 01:43:33  0.00B Set the command to run when the container starts
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
                        
# 2024-08-10 01:43:33  0.00B Set environment variable NVIDIA_PRODUCT_NAME
ENV NVIDIA_PRODUCT_NAME=CUDA
                        
# 2024-08-10 01:43:33  2.53KB Copy new files or directories into the container
COPY nvidia_entrypoint.sh /opt/nvidia/ # buildkit
                        
# 2024-08-10 01:43:33  3.06KB Copy new files or directories into the container
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit
                        
# 2024-08-10 01:43:33  262.99KB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME} # buildkit
                        
# 2024-08-10 01:43:32  2.05GB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-libraries-12-6=${NV_CUDA_LIB_VERSION}     ${NV_LIBNPP_PACKAGE}     cuda-nvtx-12-6=${NV_NVTX_VERSION}     libcusparse-12-6=${NV_LIBCUSPARSE_VERSION}     ${NV_LIBCUBLAS_PACKAGE}     ${NV_LIBNCCL_PACKAGE}     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-08-10 01:43:32  0.00B Add a metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
                        
# 2024-08-10 01:43:32  0.00B Define a build argument
ARG TARGETARCH
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBNCCL_PACKAGE
ENV NV_LIBNCCL_PACKAGE=libnccl2=2.22.3-1+cuda12.6
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NCCL_VERSION
ENV NCCL_VERSION=2.22.3-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBNCCL_PACKAGE_VERSION
ENV NV_LIBNCCL_PACKAGE_VERSION=2.22.3-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBNCCL_PACKAGE_NAME
ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBCUBLAS_PACKAGE
ENV NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.0.22-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBCUBLAS_VERSION
ENV NV_LIBCUBLAS_VERSION=12.6.0.22-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBCUBLAS_PACKAGE_NAME
ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBCUSPARSE_VERSION
ENV NV_LIBCUSPARSE_VERSION=12.5.2.23-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBNPP_PACKAGE
ENV NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.23-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_LIBNPP_VERSION
ENV NV_LIBNPP_VERSION=12.3.1.23-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_NVTX_VERSION
ENV NV_NVTX_VERSION=12.6.37-1
                        
# 2024-08-10 01:43:32  0.00B Set environment variable NV_CUDA_LIB_VERSION
ENV NV_CUDA_LIB_VERSION=12.6.0-1
                        
# 2024-08-10 01:38:07  0.00B Set environment variable NVIDIA_DRIVER_CAPABILITIES
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
                        
# 2024-08-10 01:38:07  0.00B Set environment variable NVIDIA_VISIBLE_DEVICES
ENV NVIDIA_VISIBLE_DEVICES=all
                        
# 2024-08-10 01:38:07  17.29KB Copy new files or directories into the container
COPY NGC-DL-CONTAINER-LICENSE / # buildkit
                        
# 2024-08-10 01:38:07  0.00B Set environment variable LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
                        
# 2024-08-10 01:38:07  0.00B Set environment variable PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                        
# 2024-08-10 01:38:07  46.00B Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf     && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
                        
# 2024-08-10 01:38:07  161.58MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     cuda-cudart-12-6=${NV_CUDA_CUDART_VERSION}     cuda-compat-12-6     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-08-10 01:37:56  0.00B Set environment variable CUDA_VERSION
ENV CUDA_VERSION=12.6.0
                        
# 2024-08-10 01:37:56  10.57MB Run a command and create a new image layer
RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends     gnupg2 curl ca-certificates &&     curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb &&     dpkg -i cuda-keyring_1.1-1_all.deb &&     apt-get purge --autoremove -y curl     && rm -rf /var/lib/apt/lists/* # buildkit
                        
# 2024-08-10 01:37:56  0.00B Add a metadata label
LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
                        
# 2024-08-10 01:37:56  0.00B Define a build argument
ARG TARGETARCH
                        
# 2024-08-10 01:37:56  0.00B Set environment variable NV_CUDA_CUDART_VERSION
ENV NV_CUDA_CUDART_VERSION=12.6.37-1
                        
# 2024-08-10 01:37:56  0.00B Set environment variable NVIDIA_REQUIRE_CUDA
ENV NVIDIA_REQUIRE_CUDA=cuda>=12.6 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551
                        
# 2024-08-10 01:37:56  0.00B Set environment variable NVARCH
ENV NVARCH=x86_64
                        
# 2024-06-28 04:10:12  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2024-06-28 04:10:12  77.86MB 
/bin/sh -c #(nop) ADD file:d5da92199726e42da09a6f75a778befb607fe3f79e4afaf7ef5188329b26b386 in / 
                        
# 2024-06-28 04:10:10  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2024-06-28 04:10:10  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2024-06-28 04:10:10  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2024-06-28 04:10:10  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:05b95f3b152acfb5235684462260bef987cc051cd23d6b9354a4e31bf4c2842b",
    "RepoTags": [
        "ghcr.io/ggerganov/llama.cpp:server-cuda-b4641",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641"
    ],
    "RepoDigests": [
        "ghcr.io/ggerganov/llama.cpp@sha256:15655152c324bf793287213e488c31b3baa543afeb0652fe19fc38f6bc416e68",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp@sha256:833745726732e899995f73d2cd5160d658f17493019eb26a9fde7e56ddeca388"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-02-05T04:52:24.800566667Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.6 brand=unknown,driver\u003e=470,driver\u003c471 brand=grid,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=vapps,driver\u003e=470,driver\u003c471 brand=vpc,driver\u003e=470,driver\u003c471 brand=vcs,driver\u003e=470,driver\u003c471 brand=vws,driver\u003e=470,driver\u003c471 brand=cloudgaming,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551",
            "NV_CUDA_CUDART_VERSION=12.6.37-1",
            "CUDA_VERSION=12.6.0",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NV_CUDA_LIB_VERSION=12.6.0-1",
            "NV_NVTX_VERSION=12.6.37-1",
            "NV_LIBNPP_VERSION=12.3.1.23-1",
            "NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.23-1",
            "NV_LIBCUSPARSE_VERSION=12.5.2.23-1",
            "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6",
            "NV_LIBCUBLAS_VERSION=12.6.0.22-1",
            "NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.0.22-1",
            "NV_LIBNCCL_PACKAGE_NAME=libnccl2",
            "NV_LIBNCCL_PACKAGE_VERSION=2.22.3-1",
            "NCCL_VERSION=2.22.3-1",
            "NV_LIBNCCL_PACKAGE=libnccl2=2.22.3-1+cuda12.6",
            "NVIDIA_PRODUCT_NAME=CUDA",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 2672620111,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/60dc6b8ac50b3d56f820c4648660df0499c77f3c4d042a752a873a9ef48a1bbc/diff:/var/lib/docker/overlay2/23e43b21e10a1ab454979acaf2d217a860e5ac3f7a1d49dac2b0a7283f1ceea4/diff:/var/lib/docker/overlay2/8c1323b0752cc72b7834f881c7699e2a2e85d3d051fb2768abb8804d3dff4d27/diff:/var/lib/docker/overlay2/dd14db9baa4454d3c866a3b9b99ff55363840179cee3ad674f11f4bbd0d08a29/diff:/var/lib/docker/overlay2/7a8ea2a416550438a2603700c6b22f923e60b0d2b1b55997247d9dda94544e68/diff:/var/lib/docker/overlay2/434b3cc2d28f11b0e2634d6a603600114de9b69bee11e2419a749adfc36ff879/diff:/var/lib/docker/overlay2/5480e65cb04a474c98650817e6537845e1cc8158ceb3a25713559d98737ac0fa/diff:/var/lib/docker/overlay2/79e712b81d9a3a5eee80b53bf1eb87a70ac5b029366d6d99bb9c8b010d5f95a0/diff:/var/lib/docker/overlay2/68bacb86e692b9cfda7ac4b93030d8439240d7dde97b98c027b0b01b8e0062d4/diff:/var/lib/docker/overlay2/df8868a3cd8463820de78df9d98553b2ac8496c28de2f3441ab863b2f29e322c/diff:/var/lib/docker/overlay2/6d3741fe9297930dc9e55b2b25e03aa765d3cea1c82ef1f2054ede617d5de06f/diff:/var/lib/docker/overlay2/a87f4e197f50eba5d3c155f8f71198b5efc5a79a132e68b6fdd236bf36dd0a45/diff",
            "MergedDir": "/var/lib/docker/overlay2/cc287fa64c20d6dcb030139e4466d47a0921bd7d0dc4df518415b4eb786298a8/merged",
            "UpperDir": "/var/lib/docker/overlay2/cc287fa64c20d6dcb030139e4466d47a0921bd7d0dc4df518415b4eb786298a8/diff",
            "WorkDir": "/var/lib/docker/overlay2/cc287fa64c20d6dcb030139e4466d47a0921bd7d0dc4df518415b4eb786298a8/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:931b7ff0cb6f494b27d31a4cbec3efe62ac54676add9c7469560302f1541ecaf",
            "sha256:1e9c40c384ef892c8b044b8deea344d77213ef86e5e2e10556b176e3b66cd7e3",
            "sha256:2591292aa496711009497eef309bd42371956b22e56756c512c18b326162f9c3",
            "sha256:cc51bf61b66d15057d4c73ba89e3c3ba387cdeaac86d594c98a042b55455ee17",
            "sha256:3b6bc3c2c74bad1746ed46ebdcf13c564d4efd61b1198ca172bfaebea05f8843",
            "sha256:eca6787b9f1f10205282688c75d49c132662217890128118be425b229f8815f0",
            "sha256:92caa6c8a151862616e064ad09df5a7388c6e6e50f78bb48bb10f6804b412d69",
            "sha256:12799c4e382b7cf61cda8157b8f3cb7306e6443eb0b34a5f60615a31039d260d",
            "sha256:8b3e2824dc15bac5d49e6ae466686963e32446313cefcd29ca000e7346169d64",
            "sha256:a770336fecd94b06ba9ff21429c5f4ac3ff195989084eb41eb3958ae11c65274",
            "sha256:e683019b581abc6c65e810c033840680a8c8658f2fa75882aad76461d4ae4c17",
            "sha256:c2b73229bf88e07931fd5e9c2283be8a0adab06505ffef61a938bc5432a121c9",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-02-05T14:38:19.274734026+08:00"
    }
}
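
As a quick sanity check after pulling, the digest recorded above under RepoDigests can be compared with what the local daemon reports, for example:

docker inspect --format '{{index .RepoDigests 0}}' swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda-b4641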

More versions

ghcr.io/ggerganov/llama.cpp:server-cuda              linux/amd64  ghcr.io  2.73GB    2024-09-12 11:55
ghcr.io/ggerganov/llama.cpp:server-cuda--b1-7d1a378  linux/amd64  ghcr.io  2.32GB    2024-11-03 15:07
ghcr.io/ggerganov/llama.cpp:server-cuda--b1-a59f8fd  linux/amd64  ghcr.io  2.55GB    2024-11-03 15:35
ghcr.io/ggerganov/llama.cpp:light                    linux/amd64  ghcr.io  175.71MB  2024-11-05 16:15
ghcr.io/ggerganov/llama.cpp:full                     linux/amd64  ghcr.io  3.52GB    2024-11-08 14:49
ghcr.io/ggerganov/llama.cpp:server-cuda-b4641        linux/amd64  ghcr.io  2.67GB    2025-02-05 14:38
ghcr.io/ggerganov/llama.cpp:server-cuda-b4646        linux/amd64  ghcr.io  2.67GB    2025-02-06 19:31
ghcr.io/ggerganov/llama.cpp:full-cuda                linux/amd64  ghcr.io  4.68GB    2025-02-07 15:47
ghcr.io/ggerganov/llama.cpp:server-cuda-b4563        linux/amd64  ghcr.io  2.68GB    2025-02-10 16:54