ghcr.io/ggml-org/llama.cpp:server-b7850 linux/amd64

China-local download mirror for ghcr.io/ggml-org/llama.cpp:server-b7850

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs), such as LLaMA, on CPUs and GPUs.
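To try the image, a minimal run might look like the following sketch (/path/to/models and model.gguf are placeholders for a local GGUF model directory and file; the image's entrypoint is /app/llama-server, so arguments after the image name are passed straight to it):

docker run --rm -p 8080:8080 -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server-b7850 \
  -m /models/model.gguf

Once up, the server answers HTTP requests on port 8080, e.g. curl http://localhost:8080/v1/models.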

Source image    ghcr.io/ggml-org/llama.cpp:server-b7850
China mirror    swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850
Image ID        sha256:affdc39af9f3ab85b1800ae2fc0d007e0356f139e34ef62a61f34aaac1ef6147
Image tag       server-b7850
Size            111.12MB
Registry        ghcr.io
CMD             (not set)
Entrypoint      /app/llama-server
Working dir     /app
OS/Arch         linux/amd64
Image created   2026-01-28T04:40:25.766288218Z
Synced          2026-01-29 09:41
Updated         2026-01-29 15:34
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LLAMA_ARG_HOST=0.0.0.0
Image labels
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04
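llama-server reads LLAMA_ARG_*-prefixed environment variables as defaults for its command-line flags, which is why the baked-in LLAMA_ARG_HOST=0.0.0.0 makes the server listen on all interfaces. Other options can be supplied the same way; a sketch (the model path is a placeholder, and LLAMA_ARG_CTX_SIZE is assumed to map to the --ctx-size flag):

docker run --rm -p 8080:8080 -v /path/to/models:/models \
  -e LLAMA_ARG_MODEL=/models/model.gguf \
  -e LLAMA_ARG_CTX_SIZE=4096 \
  ghcr.io/ggml-org/llama.cpp:server-b7850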

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 ghcr.io/ggml-org/llama.cpp:server-b7850
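After retagging, you can confirm the local tag points at the expected build; the output should match the Image ID listed above:

docker image inspect --format '{{.Id}}' ghcr.io/ggml-org/llama.cpp:server-b7850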

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850
ctr images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 ghcr.io/ggml-org/llama.cpp:server-b7850
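Note that ctr operates on containerd's "default" namespace unless told otherwise, while Kubernetes (via CRI) looks in the k8s.io namespace. On a Kubernetes node you likely want:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 ghcr.io/ggml-org/llama.cpp:server-b7850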

Shell quick-replace command

sed -i 's#ghcr.io/ggml-org/llama.cpp:server-b7850#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850#' deployment.yaml
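The sed command rewrites every occurrence of the image reference in the manifest in place (GNU sed syntax; BSD/macOS sed needs -i '' instead). A quick way to sanity-check the substitution before rolling it out, assuming a standard Kubernetes Deployment manifest:

grep -n 'image:' deployment.yaml
kubectl apply -f deployment.yaml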

Ansible bulk distribution (Docker)

ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 && docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 ghcr.io/ggml-org/llama.cpp:server-b7850'

Ansible bulk distribution (Containerd)

ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 && ctr images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850 ghcr.io/ggml-org/llama.cpp:server-b7850'
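To confirm the image landed on every node afterwards (assuming the same k8s inventory group; pick the line matching your container runtime):

ansible k8s -m shell -a 'docker images | grep llama.cpp'
ansible k8s -m shell -a 'ctr images ls | grep llama.cpp'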

Image build history


# 2026-01-28 12:40:25  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/app/llama-server"]

# 2026-01-28 12:40:25  0.00B  Define the container health-check command
HEALTHCHECK CMD curl -f http://localhost:8080/health

# 2026-01-28 12:40:25  0.00B  Set the working directory to /app
WORKDIR /app

# 2026-01-28 12:40:25  7.24MB  Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit

# 2026-01-28 12:40:25  0.00B  Set the environment variable LLAMA_ARG_HOST
ENV LLAMA_ARG_HOST=0.0.0.0

# 2026-01-28 12:37:34  19.68MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2026-01-16 12:27:16  6.33MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2026-01-09 15:01:44  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2026-01-09 15:01:44  77.87MB
/bin/sh -c #(nop) ADD file:b499000226bd9a7c562ffa8eeb86e2d170f2a563310db6c2d79562ab53e5cb6e in /

# 2026-01-09 15:01:41  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04

# 2026-01-09 15:01:41  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2026-01-09 15:01:41  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2026-01-09 15:01:41  0.00B
/bin/sh -c #(nop)  ARG RELEASE
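The HEALTHCHECK baked into the image polls the /health endpoint, so the runtime can report container health with no extra configuration. A sketch of reading it back once a container is running ("llama" is a placeholder container name, and the model path is an assumption):

docker run -d --name llama -p 8080:8080 -v /path/to/models:/models ghcr.io/ggml-org/llama.cpp:server-b7850 -m /models/model.gguf
docker inspect --format '{{.State.Health.Status}}' llama
curl -f http://localhost:8080/health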
                        
                    

Image information

{
    "Id": "sha256:affdc39af9f3ab85b1800ae2fc0d007e0356f139e34ef62a61f34aaac1ef6147",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:server-b7850",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-b7850"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:f01f7affc7119b857b72e8f2ab96bcfe33932e297e4663ddb2c3f2a6c8eeba90",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:f797a6b19ff652d7818a3f359751e8ac28e6a65e3449a9ca7748d22633ec7cb9"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2026-01-28T04:40:25.766288218Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 111118421,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/86db8ad63a0f971697a3094d2a62d6294883dbec7b5bfd37b6217168d2495aff/diff:/var/lib/docker/overlay2/d8c9cabef1c74fa3512c03c90d876dc5047c586bb457b4f76a0d4d60ad593383/diff:/var/lib/docker/overlay2/9f155704ef70ff02489d1d5dd673ee40e270571f4c372315fa9b127c25944432/diff:/var/lib/docker/overlay2/c3c72e001e8519d702a441512f3c9646797a9152b20abc5bab066e590bd570cd/diff",
            "MergedDir": "/var/lib/docker/overlay2/f66558a1ba76bc3622549223c2ef5102cda2c56d775a2e9ae86872fa1d65c4eb/merged",
            "UpperDir": "/var/lib/docker/overlay2/f66558a1ba76bc3622549223c2ef5102cda2c56d775a2e9ae86872fa1d65c4eb/diff",
            "WorkDir": "/var/lib/docker/overlay2/f66558a1ba76bc3622549223c2ef5102cda2c56d775a2e9ae86872fa1d65c4eb/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:fbb9bbbaf4d2b027acd15252897d5043386eea7121e0e0433e697714bb14beac",
            "sha256:312ed034134a089785907d109ca995da5936892d08ccd8f927a998a930607b9d",
            "sha256:f34aa782b072ee8f1db2d42857447f63ca0c193df593a745e2e9fa63bdf74550",
            "sha256:116ce145c90730c9270972ae8e97c6b18ab44a38153177b314269522ce865b47",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2026-01-29T09:41:09.998142512+08:00"
    }
}
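The RepoDigests above allow pinning this exact build instead of the mutable server-b7850 tag, e.g.:

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:f797a6b19ff652d7818a3f359751e8ac28e6a65e3449a9ca7748d22633ec7cb9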

More versions

ghcr.io/ggml-org/llama.cpp:full                        linux/amd64  ghcr.io    1.96GB    2025-03-17 14:48
ghcr.io/ggml-org/llama.cpp:full-cuda                   linux/amd64  ghcr.io    5.05GB    2025-03-18 10:58
ghcr.io/ggml-org/llama.cpp:server                      linux/amd64  ghcr.io    96.62MB   2025-05-02 00:26
ghcr.io/ggml-org/llama.cpp:server-cuda                 linux/amd64  ghcr.io    2.57GB    2025-06-14 16:26
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006           linux/amd64  ghcr.io    2.58GB    2025-07-28 15:06
ghcr.io/ggml-org/llama.cpp:server-musa-b6189           linux/amd64  ghcr.io    4.44GB    2025-08-18 19:58
ghcr.io/ggml-org/llama.cpp:server-musa-b6375           linux/amd64  ghcr.io    4.45GB    2025-09-04 16:53
ghcr.io/ggml-org/llama.cpp:server-vulkan               linux/amd64  ghcr.io    480.55MB  2025-09-04 17:34
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485           linux/amd64  ghcr.io    2.63GB    2025-09-16 16:27
ghcr.io/ggml-org/llama.cpp:server-musa-b6571           linux/amd64  ghcr.io    4.45GB    2025-09-28 14:58
ghcr.io/ggml-org/llama.cpp:server-cuda-b6725           linux/amd64  ghcr.io    2.64GB    2025-10-10 16:46
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda         linux/amd64  docker.io  5.01GB    2025-10-13 17:40
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda-b6746   linux/amd64  docker.io  5.01GB    2025-10-13 17:42
ghcr.io/ggml-org/llama.cpp:full-cuda-b6746             linux/amd64  ghcr.io    5.01GB    2025-10-13 18:03
ghcr.io/ggml-org/llama.cpp:full-b6746                  linux/amd64  ghcr.io    2.06GB    2025-10-14 17:12
ghcr.io/ggml-org/llama.cpp:full-cuda-b6823             linux/amd64  ghcr.io    5.05GB    2025-10-23 14:36
ghcr.io/ggml-org/llama.cpp:server-cuda-b6795           linux/amd64  ghcr.io    2.69GB    2025-10-30 17:31
ghcr.io/ggml-org/llama.cpp:server-musa-b6970           linux/amd64  ghcr.io    4.47GB    2025-11-07 14:50
ghcr.io/ggml-org/llama.cpp:full-cuda-b7083             linux/amd64  ghcr.io    5.02GB    2025-11-18 14:14
ghcr.io/ggml-org/llama.cpp:full-b7139                  linux/amd64  ghcr.io    2.01GB    2025-11-24 14:53
ghcr.io/ggml-org/llama.cpp:server-b7139                linux/amd64  ghcr.io    101.25MB  2025-11-24 15:22
ghcr.io/ggml-org/llama.cpp:full-cuda12-b7681           linux/amd64  ghcr.io    5.16GB    2026-01-10 03:32
ghcr.io/ggml-org/llama.cpp:server-cuda13-b7728         linux/amd64  ghcr.io    2.53GB    2026-01-16 13:50
ghcr.io/ggml-org/llama.cpp:full-cuda13-b7850           linux/amd64  ghcr.io    4.77GB    2026-01-29 09:40
ghcr.io/ggml-org/llama.cpp:server-b7850                linux/amd64  ghcr.io    111.12MB  2026-01-29 09:41
ghcr.io/ggml-org/llama.cpp:server-cuda-b7850           linux/amd64  ghcr.io    2.76GB    2026-01-29 12:27
ghcr.io/ggml-org/llama.cpp:server-cuda12-b7850         linux/amd64  ghcr.io    2.76GB    2026-01-29 13:20
ghcr.io/ggml-org/llama.cpp:full-cuda12-b7850           linux/amd64  ghcr.io    5.21GB    2026-01-29 13:35
ghcr.io/ggml-org/llama.cpp:server-cuda-b7869           linux/amd64  ghcr.io    2.76GB    2026-01-29 16:38
ghcr.io/ggml-org/llama.cpp:full-cuda12-b7869           linux/amd64  ghcr.io    5.21GB    2026-01-29 17:04