ghcr.io/ggerganov/llama.cpp:server-cuda linux/amd64

ghcr.io/ggerganov/llama.cpp:server-cuda - China mirror download source
Description of the image ghcr.io/ggerganov/llama.cpp:

llama.cpp is an open-source C/C++ inference engine created by Georgi Gerganov for running LLaMA (a family of large language models released by Meta AI) and many other models efficiently on local hardware. This server-cuda image packages the llama-server HTTP server built with CUDA support, so inference can be offloaded to NVIDIA GPUs; it exposes an OpenAI-compatible API and is commonly used for local chat assistants, content generation, and other natural-language-processing workloads.
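As a quick orientation, the sketch below shows one way to start the server from this image once a GGUF model file is available on the host. The model path, published port, and GPU-layer count are illustrative assumptions rather than values shipped with the image, and --gpus all requires the NVIDIA Container Toolkit on the host.

# Assumed example: a GGUF model at /models/model.gguf and host port 8080; -ngl 99 offloads all layers to the GPU.
docker run --gpus all -p 8080:8080 -v /models:/models \
  ghcr.io/ggerganov/llama.cpp:server-cuda \
  -m /models/model.gguf -ngl 99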

Source image: ghcr.io/ggerganov/llama.cpp:server-cuda
China mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda
Image ID: sha256:fa3bbdfa71ecae29cb7e4e43cdf4ac4fe218fad0c73bc59bef8d3bdb0332a617
Image tag: server-cuda
Size: 2.73GB
Registry: ghcr.io
CMD: (not set)
Entrypoint: /llama-server
Working directory: (not set)
OS/Platform: linux/amd64
Views: 130
Image created: 2024-09-12T01:50:49.238429114Z
Synced: 2024-09-12 11:55
Updated: 2024-11-22 07:22
Environment variables
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NVARCH=x86_64 NVIDIA_REQUIRE_CUDA=cuda>=12.6 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 NV_CUDA_CUDART_VERSION=12.6.37-1 CUDA_VERSION=12.6.0 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_CUDA_LIB_VERSION=12.6.0-1 NV_NVTX_VERSION=12.6.37-1 NV_LIBNPP_VERSION=12.3.1.23-1 NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.23-1 NV_LIBCUSPARSE_VERSION=12.5.2.23-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6 NV_LIBCUBLAS_VERSION=12.6.0.22-1 NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.0.22-1 NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.22.3-1 NCCL_VERSION=2.22.3-1 NV_LIBNCCL_PACKAGE=libnccl2=2.22.3-1+cuda12.6 NVIDIA_PRODUCT_NAME=CUDA LLAMA_ARG_HOST=0.0.0.0
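Note that LLAMA_ARG_HOST=0.0.0.0 makes llama-server bind to all interfaces inside the container, which is what lets Docker port publishing reach it. Any of these variables can be overridden with -e at run time; a minimal sketch, assuming host networking is used and the API should stay reachable only from the machine itself:

# Assumed scenario: --network host, so overriding LLAMA_ARG_HOST to 127.0.0.1 keeps the server local to the host.
docker run --gpus all --network host -e LLAMA_ARG_HOST=127.0.0.1 \
  ghcr.io/ggerganov/llama.cpp:server-cuda -m /models/model.gguf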
Image labels
maintainer: NVIDIA CORPORATION <cudatools@nvidia.com>
org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda  ghcr.io/ggerganov/llama.cpp:server-cuda
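After pulling and retagging, it is easy to confirm that both names point at the image described on this page; a small check using standard Docker commands (the IMAGE ID column should start with fa3bbdfa71ec, matching the Image ID listed above):

# Both repository names should report the same image ID and digest.
docker images --digests ghcr.io/ggerganov/llama.cpp
docker images --digests swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp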

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda  ghcr.io/ggerganov/llama.cpp:server-cuda

Shell quick-replace command

sed -i 's#ghcr.io/ggerganov/llama.cpp:server-cuda#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda#' deployment.yaml
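The sed call above rewrites a single deployment.yaml in place. If the image reference appears across several manifests, the same substitution can be applied to every matching file; a sketch, assuming the manifests live under a hypothetical ./k8s/ directory:

# Hypothetical directory ./k8s/ holding the YAML manifests to rewrite.
grep -rl 'ghcr.io/ggerganov/llama.cpp:server-cuda' ./k8s/ | \
  xargs sed -i 's#ghcr.io/ggerganov/llama.cpp:server-cuda#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda#'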

Ansible bulk distribution - Docker

ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda && docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda ghcr.io/ggerganov/llama.cpp:server-cuda'

Ansible bulk distribution - Containerd

ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda && ctr images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda ghcr.io/ggerganov/llama.cpp:server-cuda'
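Both ad-hoc commands assume an Ansible inventory group named k8s. A minimal sketch of such an inventory, plus a connectivity check before distributing the image; the host names are placeholders:

# Hypothetical inventory defining the k8s group referenced by the commands above.
cat > inventory.ini <<'EOF'
[k8s]
node1.example.com
node2.example.com
EOF

# Verify the hosts are reachable before running the pull-and-tag command against the group.
ansible k8s -i inventory.ini -m ping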

Image history

Size      Created              Layer instruction
0.00B 2024-09-12 09:50:49 ENTRYPOINT ["/llama-server"]
0.00B 2024-09-12 09:50:49 HEALTHCHECK &{["CMD" "curl" "-f" "http://localhost:8080/health"] "0s" "0s" "0s" "0s" '\x00'}
0.00B 2024-09-12 09:50:49 ENV LLAMA_ARG_HOST=0.0.0.0
2.10MB 2024-09-12 09:50:49 COPY /app/build/bin/llama-server /llama-server # buildkit
1.98MB 2024-09-12 09:50:49 COPY /app/build/src/libllama.so /libllama.so # buildkit
360.41MB 2024-09-12 09:50:49 COPY /app/build/ggml/src/libggml.so /libggml.so # buildkit
62.18MB 2024-09-12 09:25:41 RUN /bin/sh -c apt-get update && apt-get install -y libcurl4-openssl-dev libgomp1 curl # buildkit
0.00B 2024-08-10 01:43:33 ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
0.00B 2024-08-10 01:43:33 ENV NVIDIA_PRODUCT_NAME=CUDA
2.53KB 2024-08-10 01:43:33 COPY nvidia_entrypoint.sh /opt/nvidia/ # buildkit
3.06KB 2024-08-10 01:43:33 COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit
262.99KB 2024-08-10 01:43:33 RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME} # buildkit
2.05GB 2024-08-10 01:43:32 RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends cuda-libraries-12-6=${NV_CUDA_LIB_VERSION} ${NV_LIBNPP_PACKAGE} cuda-nvtx-12-6=${NV_NVTX_VERSION} libcusparse-12-6=${NV_LIBCUSPARSE_VERSION} ${NV_LIBCUBLAS_PACKAGE} ${NV_LIBNCCL_PACKAGE} && rm -rf /var/lib/apt/lists/* # buildkit
0.00B 2024-08-10 01:43:32 LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
0.00B 2024-08-10 01:43:32 ARG TARGETARCH
0.00B 2024-08-10 01:43:32 ENV NV_LIBNCCL_PACKAGE=libnccl2=2.22.3-1+cuda12.6
0.00B 2024-08-10 01:43:32 ENV NCCL_VERSION=2.22.3-1
0.00B 2024-08-10 01:43:32 ENV NV_LIBNCCL_PACKAGE_VERSION=2.22.3-1
0.00B 2024-08-10 01:43:32 ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2
0.00B 2024-08-10 01:43:32 ENV NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.0.22-1
0.00B 2024-08-10 01:43:32 ENV NV_LIBCUBLAS_VERSION=12.6.0.22-1
0.00B 2024-08-10 01:43:32 ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6
0.00B 2024-08-10 01:43:32 ENV NV_LIBCUSPARSE_VERSION=12.5.2.23-1
0.00B 2024-08-10 01:43:32 ENV NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.23-1
0.00B 2024-08-10 01:43:32 ENV NV_LIBNPP_VERSION=12.3.1.23-1
0.00B 2024-08-10 01:43:32 ENV NV_NVTX_VERSION=12.6.37-1
0.00B 2024-08-10 01:43:32 ENV NV_CUDA_LIB_VERSION=12.6.0-1
0.00B 2024-08-10 01:38:07 ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
0.00B 2024-08-10 01:38:07 ENV NVIDIA_VISIBLE_DEVICES=all
17.29KB 2024-08-10 01:38:07 COPY NGC-DL-CONTAINER-LICENSE / # buildkit
0.00B 2024-08-10 01:38:07 ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
0.00B 2024-08-10 01:38:07 ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
46.00B 2024-08-10 01:38:07 RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
161.58MB 2024-08-10 01:38:07 RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends cuda-cudart-12-6=${NV_CUDA_CUDART_VERSION} cuda-compat-12-6 && rm -rf /var/lib/apt/lists/* # buildkit
0.00B 2024-08-10 01:37:56 ENV CUDA_VERSION=12.6.0
10.57MB 2024-08-10 01:37:56 RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get update && apt-get install -y --no-install-recommends gnupg2 curl ca-certificates && curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/cuda-keyring_1.1-1_all.deb && dpkg -i cuda-keyring_1.1-1_all.deb && apt-get purge --autoremove -y curl && rm -rf /var/lib/apt/lists/* # buildkit
0.00B 2024-08-10 01:37:56 LABEL maintainer=NVIDIA CORPORATION <cudatools@nvidia.com>
0.00B 2024-08-10 01:37:56 ARG TARGETARCH
0.00B 2024-08-10 01:37:56 ENV NV_CUDA_CUDART_VERSION=12.6.37-1
0.00B 2024-08-10 01:37:56 ENV NVIDIA_REQUIRE_CUDA=cuda>=12.6 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551
0.00B 2024-08-10 01:37:56 ENV NVARCH=x86_64
0.00B 2024-06-28 04:10:12 /bin/sh -c #(nop) CMD ["/bin/bash"]
77.86MB 2024-06-28 04:10:12 /bin/sh -c #(nop) ADD file:d5da92199726e42da09a6f75a778befb607fe3f79e4afaf7ef5188329b26b386 in /
0.00B 2024-06-28 04:10:10 /bin/sh -c #(nop) LABEL org.opencontainers.image.version=22.04
0.00B 2024-06-28 04:10:10 /bin/sh -c #(nop) LABEL org.opencontainers.image.ref.name=ubuntu
0.00B 2024-06-28 04:10:10 /bin/sh -c #(nop) ARG LAUNCHPAD_BUILD_ARCH
0.00B 2024-06-28 04:10:10 /bin/sh -c #(nop) ARG RELEASE
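The HEALTHCHECK layer above probes http://localhost:8080/health from inside the container. The same endpoint can be checked from the host; a sketch, assuming the container was started with the port published as in the earlier run example:

# Returns HTTP 200 once the server is up and the model has finished loading (assumes -p 8080:8080).
curl -f http://localhost:8080/health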

Image info

{
    "Id": "sha256:fa3bbdfa71ecae29cb7e4e43cdf4ac4fe218fad0c73bc59bef8d3bdb0332a617",
    "RepoTags": [
        "ghcr.io/ggerganov/llama.cpp:server-cuda",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp:server-cuda"
    ],
    "RepoDigests": [
        "ghcr.io/ggerganov/llama.cpp@sha256:6031408e9111d00ab3d6a00b49d8b9ccde9e1e5f038ed423396773ce898bdd1f",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggerganov/llama.cpp@sha256:c460a3017c5ea7bd134d44a5ff8bfaa3653cb758b3096fa465c8b3b850be41f8"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2024-09-12T01:50:49.238429114Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NVARCH=x86_64",
            "NVIDIA_REQUIRE_CUDA=cuda\u003e=12.6 brand=unknown,driver\u003e=470,driver\u003c471 brand=grid,driver\u003e=470,driver\u003c471 brand=tesla,driver\u003e=470,driver\u003c471 brand=nvidia,driver\u003e=470,driver\u003c471 brand=quadro,driver\u003e=470,driver\u003c471 brand=quadrortx,driver\u003e=470,driver\u003c471 brand=nvidiartx,driver\u003e=470,driver\u003c471 brand=vapps,driver\u003e=470,driver\u003c471 brand=vpc,driver\u003e=470,driver\u003c471 brand=vcs,driver\u003e=470,driver\u003c471 brand=vws,driver\u003e=470,driver\u003c471 brand=cloudgaming,driver\u003e=470,driver\u003c471 brand=unknown,driver\u003e=535,driver\u003c536 brand=grid,driver\u003e=535,driver\u003c536 brand=tesla,driver\u003e=535,driver\u003c536 brand=nvidia,driver\u003e=535,driver\u003c536 brand=quadro,driver\u003e=535,driver\u003c536 brand=quadrortx,driver\u003e=535,driver\u003c536 brand=nvidiartx,driver\u003e=535,driver\u003c536 brand=vapps,driver\u003e=535,driver\u003c536 brand=vpc,driver\u003e=535,driver\u003c536 brand=vcs,driver\u003e=535,driver\u003c536 brand=vws,driver\u003e=535,driver\u003c536 brand=cloudgaming,driver\u003e=535,driver\u003c536 brand=unknown,driver\u003e=550,driver\u003c551 brand=grid,driver\u003e=550,driver\u003c551 brand=tesla,driver\u003e=550,driver\u003c551 brand=nvidia,driver\u003e=550,driver\u003c551 brand=quadro,driver\u003e=550,driver\u003c551 brand=quadrortx,driver\u003e=550,driver\u003c551 brand=nvidiartx,driver\u003e=550,driver\u003c551 brand=vapps,driver\u003e=550,driver\u003c551 brand=vpc,driver\u003e=550,driver\u003c551 brand=vcs,driver\u003e=550,driver\u003c551 brand=vws,driver\u003e=550,driver\u003c551 brand=cloudgaming,driver\u003e=550,driver\u003c551",
            "NV_CUDA_CUDART_VERSION=12.6.37-1",
            "CUDA_VERSION=12.6.0",
            "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64",
            "NVIDIA_VISIBLE_DEVICES=all",
            "NVIDIA_DRIVER_CAPABILITIES=compute,utility",
            "NV_CUDA_LIB_VERSION=12.6.0-1",
            "NV_NVTX_VERSION=12.6.37-1",
            "NV_LIBNPP_VERSION=12.3.1.23-1",
            "NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.23-1",
            "NV_LIBCUSPARSE_VERSION=12.5.2.23-1",
            "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6",
            "NV_LIBCUBLAS_VERSION=12.6.0.22-1",
            "NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.0.22-1",
            "NV_LIBNCCL_PACKAGE_NAME=libnccl2",
            "NV_LIBNCCL_PACKAGE_VERSION=2.22.3-1",
            "NCCL_VERSION=2.22.3-1",
            "NV_LIBNCCL_PACKAGE=libnccl2=2.22.3-1+cuda12.6",
            "NVIDIA_PRODUCT_NAME=CUDA",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "",
        "Entrypoint": [
            "/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "NVIDIA CORPORATION \u003ccudatools@nvidia.com\u003e",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 2730398875,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/3bf15f1a0e3518a67de209cb22bb2178e6da3242c0e5899f88896557df6a750a/diff:/var/lib/docker/overlay2/5e1996da7eee3c869c2e5a34bd4987aa97773f4296937be13e52839b31024107/diff:/var/lib/docker/overlay2/b91347ee4edf38cf2d77068a8045ac20ea522f0f2da2c865124a769132df4036/diff:/var/lib/docker/overlay2/ea58e9d6869b53b5c505241362bc342a2e7ae8fca74dea4392f927dad3990b4f/diff:/var/lib/docker/overlay2/a6c6b2dc5d00d1880faf253b16bf9aa3f9cb860fcb07b7377f4ec245fad5022e/diff:/var/lib/docker/overlay2/50c85156e60eeea3e04bc8e658d0fd513f3c5c8662ed29a3190eae25ca5d0b27/diff:/var/lib/docker/overlay2/acaa56197f336ffdfb912743adc3467a9cbf2e90a1e9cc26e3d447ff89e3782e/diff:/var/lib/docker/overlay2/e6ea45ee4321a090d27ef69da575c4e3e64929ff97a01fd3a05a0925919175cc/diff:/var/lib/docker/overlay2/3988b1e274417a7f40d527378c5b759477a5c509b6852cc1782bcaf9fb4479b3/diff:/var/lib/docker/overlay2/2b8bcf81903793bf2f236bffb3ae9c8e957bb71e81ccd24ed3d2066add0fcf9d/diff:/var/lib/docker/overlay2/9a3fa782ad83bb1af016e7680652116acd946f4e57b1538d349092dbb8793625/diff:/var/lib/docker/overlay2/a1b2bc5ac57e0f6deb49c26a21cf2bc419896d00be14b86405d87fd51bed5df3/diff",
            "MergedDir": "/var/lib/docker/overlay2/e5fae158d138acf02a080bf62703440c38829b335870fbd35ea28f1c9628b16b/merged",
            "UpperDir": "/var/lib/docker/overlay2/e5fae158d138acf02a080bf62703440c38829b335870fbd35ea28f1c9628b16b/diff",
            "WorkDir": "/var/lib/docker/overlay2/e5fae158d138acf02a080bf62703440c38829b335870fbd35ea28f1c9628b16b/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:931b7ff0cb6f494b27d31a4cbec3efe62ac54676add9c7469560302f1541ecaf",
            "sha256:1e9c40c384ef892c8b044b8deea344d77213ef86e5e2e10556b176e3b66cd7e3",
            "sha256:2591292aa496711009497eef309bd42371956b22e56756c512c18b326162f9c3",
            "sha256:cc51bf61b66d15057d4c73ba89e3c3ba387cdeaac86d594c98a042b55455ee17",
            "sha256:3b6bc3c2c74bad1746ed46ebdcf13c564d4efd61b1198ca172bfaebea05f8843",
            "sha256:eca6787b9f1f10205282688c75d49c132662217890128118be425b229f8815f0",
            "sha256:92caa6c8a151862616e064ad09df5a7388c6e6e50f78bb48bb10f6804b412d69",
            "sha256:12799c4e382b7cf61cda8157b8f3cb7306e6443eb0b34a5f60615a31039d260d",
            "sha256:8b3e2824dc15bac5d49e6ae466686963e32446313cefcd29ca000e7346169d64",
            "sha256:8fbd4fb1ed1f735d838c8c0c32ca5eb1eabe468b5cd9a91657550963c31cc6c5",
            "sha256:ead6f29cf024c07607a67f31f05a8eff42ae771458472e8521aa17461294f390",
            "sha256:95c3e4feb11441a6fd802bbd05f196cbd2a90232d542194adadb8d630b4d2ee6",
            "sha256:abb759eedef5202f2d339061205cfd0c79c1988ed1ea89503903159deb95bf7d"
        ]
    },
    "Metadata": {
        "LastTagTime": "2024-09-12T11:51:22.245056976+08:00"
    }
}
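The block above matches the output format of docker image inspect on the synced image; a sketch for reproducing it, or pulling out individual fields, locally:

# Full JSON, equivalent to the listing above.
docker image inspect ghcr.io/ggerganov/llama.cpp:server-cuda
# Selected fields: entrypoint, platform, and size in bytes.
docker image inspect --format '{{.Config.Entrypoint}} {{.Os}}/{{.Architecture}} {{.Size}}' ghcr.io/ggerganov/llama.cpp:server-cuda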

More versions

Tag                                                   Platform     Registry  Size      Synced            Views
ghcr.io/ggerganov/llama.cpp:server-cuda               linux/amd64  ghcr.io   2.73GB    2024-09-12 11:55  129
ghcr.io/ggerganov/llama.cpp:server-cuda--b1-7d1a378   linux/amd64  ghcr.io   2.32GB    2024-11-03 15:07  40
ghcr.io/ggerganov/llama.cpp:server-cuda--b1-a59f8fd   linux/amd64  ghcr.io   2.55GB    2024-11-03 15:35  53
ghcr.io/ggerganov/llama.cpp:light                     linux/amd64  ghcr.io   175.71MB  2024-11-05 16:15  28
ghcr.io/ggerganov/llama.cpp:full                      linux/amd64  ghcr.io   3.52GB    2024-11-08 14:49  25