ghcr.io/ggml-org/llama.cpp:full-b7139 linux/amd64

ghcr.io/ggml-org/llama.cpp:full-b7139 - China-region download mirror

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs), such as LLaMA, on CPUs and GPUs.

Source image      ghcr.io/ggml-org/llama.cpp:full-b7139
Mirror (China)    swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139
Image ID          sha256:ca6d812d3b420003f8c4452c770620dc21a3f0aeb2c120f951770a8bb902dc38
Image tag         full-b7139
Size              2.01GB
Registry          ghcr.io
CMD               (none)
Entrypoint        /app/tools.sh
Working directory /app
OS/Platform       linux/amd64
Views             64
Image created     2025-11-24T04:29:22.397498558Z
Synced            2025-11-24 14:53
Updated           2025-12-15 10:02
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Image labels
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139  ghcr.io/ggml-org/llama.cpp:full-b7139

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139  ghcr.io/ggml-org/llama.cpp:full-b7139

Quick shell replace command

sed -i 's#ghcr.io/ggml-org/llama.cpp:full-b7139#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139#' deployment.yaml
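The replacement can be sanity-checked end to end. A minimal sketch, using a throwaway manifest at /tmp/deployment.yaml (hypothetical path; substitute your real deployment.yaml) that references the upstream image:

```shell
# Hypothetical throwaway manifest referencing the upstream image.
cat > /tmp/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: llama
        image: ghcr.io/ggml-org/llama.cpp:full-b7139
EOF

# Rewrite the image reference in place to point at the mirror.
sed -i 's#ghcr.io/ggml-org/llama.cpp:full-b7139#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139#' /tmp/deployment.yaml

# Confirm the rewrite took effect.
grep 'image:' /tmp/deployment.yaml
```

Note that GNU sed is assumed; BSD/macOS sed requires an explicit backup suffix with `-i` (e.g. `sed -i ''`).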

Ansible bulk rollout - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139  ghcr.io/ggml-org/llama.cpp:full-b7139'

Ansible bulk rollout - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139  ghcr.io/ggml-org/llama.cpp:full-b7139'
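All of the commands above follow one pattern: the mirror reference is simply the upstream reference prefixed with the mirror registry path. A minimal sketch of that mapping as a shell helper (`to_mirror` is a hypothetical name, not part of any tool):

```shell
# Mirror registry prefix used throughout this page.
MIRROR_PREFIX="swr.cn-north-4.myhuaweicloud.com/ddn-k8s"

# Hypothetical helper: turn an upstream image reference into its
# mirrored counterpart by prepending the mirror prefix.
to_mirror() {
    printf '%s/%s\n' "$MIRROR_PREFIX" "$1"
}

to_mirror "ghcr.io/ggml-org/llama.cpp:full-b7139"
# prints swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139
```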

Image build history


# 2025-11-24 12:29:22  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/app/tools.sh"]
                        
# 2025-11-24 12:29:22  1.82GB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y     git     python3     python3-pip     && pip install --upgrade pip setuptools wheel     && pip install -r requirements.txt     && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit
                        
# 2025-11-24 12:27:55  0.00B  Set the working directory to /app
WORKDIR /app
                        
# 2025-11-24 12:27:55  91.56MB  Copy new files or directories into the container
COPY /app/full /app # buildkit
                        
# 2025-11-24 12:27:55  11.78MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit
                        
# 2025-11-14 12:18:54  6.33MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit
                        
# 2025-10-14 01:23:20  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-10-14 01:23:20  77.87MB 
/bin/sh -c #(nop) ADD file:d025507456f1d7d19195885b1c02a346454d60c9348cbd3be92431f2d7e2666e in / 
                        
# 2025-10-14 01:23:18  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-10-14 01:23:18  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-10-14 01:23:18  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-10-14 01:23:18  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information (docker inspect)

{
    "Id": "sha256:ca6d812d3b420003f8c4452c770620dc21a3f0aeb2c120f951770a8bb902dc38",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:full-b7139",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b7139"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:d98d40c08694b5ba4cf6ce9f796cabe9c79e2d90ccd9fc7be35403f460d24610",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:81587589e8957ea4dbe079f2b3104763e806e4906f866326539db83d75a306d5"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-11-24T04:29:22.397498558Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/tools.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 2009896563,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/2ec3af4ea8cb1bf855b813d86986759eb789c94b37e76a4a08be5c2a6cbd415c/diff:/var/lib/docker/overlay2/ae364c74abf00b6f46c15f23d095edc75f5a17b4a998495749090b7748e7da76/diff:/var/lib/docker/overlay2/ae37f0180539168557ca74fbcaf915fed742f0f623dec9f5c140deb26c12d364/diff:/var/lib/docker/overlay2/5acef074701f9945675803ee9833f590a746be2947324113db1dbc8eed52a24e/diff:/var/lib/docker/overlay2/6f6ec8e5321ca8688879ac4e8387377602db46a15370198137f7e7fb60a45a73/diff",
            "MergedDir": "/var/lib/docker/overlay2/23a7595de33d415c649ad723873650745788f01b988181118a839dd4859bebad/merged",
            "UpperDir": "/var/lib/docker/overlay2/23a7595de33d415c649ad723873650745788f01b988181118a839dd4859bebad/diff",
            "WorkDir": "/var/lib/docker/overlay2/23a7595de33d415c649ad723873650745788f01b988181118a839dd4859bebad/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:73974f74b436f39a2fdb6461b1e3f7c3e41c73325776fa71d16b942a5b4a365b",
            "sha256:e548f12429ef0a87cb5f4e3110708915e1be66bdf32cc326fe9a9139545bbaa6",
            "sha256:035ad9246aa14528a7716eb6169a8b0996489f94826888417d0beecb8bb7e898",
            "sha256:bc442786e1488c912d4201f54a96fdc557c42f9fbf1c883d58142fca4a614424",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:80dfcd7b794bc55a62a82b3a5838a9e16d5885e7798157188a8429fd6c5584c8"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-11-24T14:51:51.586982795+08:00"
    }
}

More versions

Image                                                    Platform      Registry    Size       Synced             Views
ghcr.io/ggml-org/llama.cpp:full                          linux/amd64   ghcr.io     1.96GB     2025-03-17 14:48   978
ghcr.io/ggml-org/llama.cpp:full-cuda                     linux/amd64   ghcr.io     5.05GB     2025-03-18 10:58   981
ghcr.io/ggml-org/llama.cpp:server                        linux/amd64   ghcr.io     96.62MB    2025-05-02 00:26   1158
ghcr.io/ggml-org/llama.cpp:server-cuda                   linux/amd64   ghcr.io     2.57GB     2025-06-14 16:26   1240
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006             linux/amd64   ghcr.io     2.58GB     2025-07-28 15:06   384
ghcr.io/ggml-org/llama.cpp:server-musa-b6189             linux/amd64   ghcr.io     4.44GB     2025-08-18 19:58   177
ghcr.io/ggml-org/llama.cpp:server-musa-b6375             linux/amd64   ghcr.io     4.45GB     2025-09-04 16:53   195
ghcr.io/ggml-org/llama.cpp:server-vulkan                 linux/amd64   ghcr.io     480.55MB   2025-09-04 17:34   239
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485             linux/amd64   ghcr.io     2.63GB     2025-09-16 16:27   245
ghcr.io/ggml-org/llama.cpp:server-musa-b6571             linux/amd64   ghcr.io     4.45GB     2025-09-28 14:58   109
ghcr.io/ggml-org/llama.cpp:server-cuda-b6725             linux/amd64   ghcr.io     2.64GB     2025-10-10 16:46   166
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda           linux/amd64   docker.io   5.01GB     2025-10-13 17:40   96
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda-b6746     linux/amd64   docker.io   5.01GB     2025-10-13 17:42   165
ghcr.io/ggml-org/llama.cpp:full-cuda-b6746               linux/amd64   ghcr.io     5.01GB     2025-10-13 18:03   197
ghcr.io/ggml-org/llama.cpp:full-b6746                    linux/amd64   ghcr.io     2.06GB     2025-10-14 17:12   130
ghcr.io/ggml-org/llama.cpp:full-cuda-b6823               linux/amd64   ghcr.io     5.05GB     2025-10-23 14:36   128
ghcr.io/ggml-org/llama.cpp:server-cuda-b6795             linux/amd64   ghcr.io     2.69GB     2025-10-30 17:31   171
ghcr.io/ggml-org/llama.cpp:server-musa-b6970             linux/amd64   ghcr.io     4.47GB     2025-11-07 14:50   84
ghcr.io/ggml-org/llama.cpp:full-cuda-b7083               linux/amd64   ghcr.io     5.02GB     2025-11-18 14:14   122
ghcr.io/ggml-org/llama.cpp:full-b7139                    linux/amd64   ghcr.io     2.01GB     2025-11-24 14:53   63
ghcr.io/ggml-org/llama.cpp:server-b7139                  linux/amd64   ghcr.io     101.25MB   2025-11-24 15:22   59