ghcr.io/ggml-org/llama.cpp:server-musa-b6970 linux/amd64

ghcr.io/ggml-org/llama.cpp:server-musa-b6970 - China download mirror

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs) such as LLaMA on CPUs and GPUs. This server-musa variant bundles the Moore Threads MUSA runtime libraries for serving models on Moore Threads GPUs.

Source image: ghcr.io/ggml-org/llama.cpp:server-musa-b6970
China mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970
Image ID: sha256:491978261f4474a52ed35e2ed501db79a5bf3c2c832117a1f81f6308ab4f670d
Image tag: server-musa-b6970
Size: 4.47GB
Registry: ghcr.io
CMD: (none)
Entrypoint: /app/llama-server
Working directory: /app
OS/Platform: linux/amd64
Views: 12
Contributor: 33******k@163.com
Image created: 2025-11-07T04:53:24.913100757Z
Synced: 2025-11-07 14:50
Updated: 2025-11-07 22:48
Environment variables

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBIAN_FRONTEND=noninteractive
MTHREADS_VISIBLE_DEVICES=all
MTHREADS_DRIVER_CAPABILITIES=compute,utility
LLAMA_ARG_HOST=0.0.0.0

Image labels

org.opencontainers.image.ref.name: ubuntu
org.opencontainers.image.version: 22.04
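
The LLAMA_ARG_HOST=0.0.0.0 entry makes llama-server bind all interfaces by default; llama.cpp maps LLAMA_ARG_* environment variables to the matching CLI flags, so the server can be reconfigured without overriding the entrypoint. A minimal sketch, assuming LLAMA_ARG_PORT and LLAMA_ARG_MODEL are supported by this build (the /data/models mount and model filename are placeholders):

# Reconfigure llama-server via LLAMA_ARG_* env vars; note the built-in
# HEALTHCHECK still probes port 8080, so a changed port will report unhealthy
# unless the health check is also overridden
docker run -d -p 9000:9000 \
  -v /data/models:/models \
  -e LLAMA_ARG_PORT=9000 \
  -e LLAMA_ARG_MODEL=/models/your-model.gguf \
  ghcr.io/ggml-org/llama.cpp:server-musa-b6970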

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970  ghcr.io/ggml-org/llama.cpp:server-musa-b6970
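
Once retagged, the image runs under its original name; the entrypoint is /app/llama-server, and arguments after the image name are passed straight to it. A quick-start sketch (model mount and filename are assumptions; GPU access on Moore Threads hardware additionally depends on the host's MUSA container runtime):

# Serve a local GGUF model on the health-checked port 8080
docker run -d --name llama-musa \
  -p 8080:8080 \
  -v /data/models:/models \
  ghcr.io/ggml-org/llama.cpp:server-musa-b6970 \
  -m /models/your-model.gguf
# Probe the same endpoint the image's HEALTHCHECK uses
curl -f http://localhost:8080/health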

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970  ghcr.io/ggml-org/llama.cpp:server-musa-b6970
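
Note for Kubernetes nodes: kubelet resolves images in containerd's k8s.io namespace, while bare ctr commands use the default namespace. Pull and tag with -n k8s.io so Pods can find the retagged image:

# Pull and retag inside the namespace kubelet actually reads from
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970 ghcr.io/ggml-org/llama.cpp:server-musa-b6970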

Quick shell replacement command

sed -i 's#ghcr.io/ggml-org/llama.cpp:server-musa-b6970#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970#' deployment.yaml
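
The same substitution can be applied across a whole directory of manifests in one pass; a sketch assuming the manifests live under ./manifests (matching on the "image: " prefix keeps already-rewritten references from matching a second time):

# Rewrite every manifest that still references the upstream image
grep -rl 'image: ghcr.io/ggml-org/llama.cpp:server-musa-b6970' ./manifests | \
  xargs sed -i 's#image: ghcr.io/ggml-org/llama.cpp:server-musa-b6970#image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970#'
# Verify the mirror reference landed
grep -rn 'swr.cn-north-4' ./manifests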

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970  ghcr.io/ggml-org/llama.cpp:server-musa-b6970'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970  ghcr.io/ggml-org/llama.cpp:server-musa-b6970'
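
Either rollout can be followed by an ad-hoc check that every node in the k8s group now resolves the upstream image name locally (first line for Docker nodes, second for containerd nodes):

#ansible k8s -m shell -a 'docker image inspect ghcr.io/ggml-org/llama.cpp:server-musa-b6970 >/dev/null && echo present'
#ansible k8s -m shell -a 'ctr images ls -q | grep server-musa-b6970'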

Image build history


# 2025-11-07 12:53:24  0.00B  Configure the command run at container start
ENTRYPOINT ["/app/llama-server"]

# 2025-11-07 12:53:24  0.00B  Define the container health check command
HEALTHCHECK CMD curl -f http://localhost:8080/health

# 2025-11-07 12:53:24  0.00B  Set the working directory to /app
WORKDIR /app

# 2025-11-07 12:53:24  4.66MB  Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit

# 2025-11-07 12:53:24  0.00B  Set environment variable LLAMA_ARG_HOST
ENV LLAMA_ARG_HOST=0.0.0.0

# 2025-11-07 12:49:48  212.98MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2025-10-02 12:32:19  1.43MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update && apt-get install -y libgomp1 curl && apt autoremove -y && apt clean -y && rm -rf /tmp/* /var/tmp/* && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete && find /var/cache -type f -delete # buildkit

# 2025-09-25 11:09:06  0.00B  Set environment variable MTHREADS_DRIVER_CAPABILITIES
ENV MTHREADS_DRIVER_CAPABILITIES=compute,utility

# 2025-09-25 11:09:06  0.00B  Set environment variable MTHREADS_VISIBLE_DEVICES
ENV MTHREADS_VISIBLE_DEVICES=all

# 2025-09-25 11:09:06  16.37KB  Run a command and create a new image layer
RUN /bin/sh -c printf "/usr/local/musa/lib" > /etc/ld.so.conf.d/000-musa.conf && ldconfig # buildkit

# 2025-09-25 11:01:50  13.00B  Run a command and create a new image layer
RUN /bin/sh -c ln -sf /usr/bin/bash /usr/bin/sh # buildkit

# 2025-09-25 11:01:50  4.16GB  Copy new files or directories into the container
COPY /tmp/musa_lib /usr/local/musa/lib/ # buildkit

# 2025-09-25 10:56:05  10.44MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update -y && apt-get install -y libelf1 libgomp1 libnuma1 libomp5 curl wget && apt-get clean && rm -rf /var/lib/apt/lists/* # buildkit

# 2025-09-25 10:56:05  0.00B  Set environment variable DEBIAN_FRONTEND
ENV DEBIAN_FRONTEND=noninteractive

# 2025-08-20 01:17:10  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2025-08-20 01:17:10  77.87MB
/bin/sh -c #(nop) ADD file:9303cc1f788d2a9a8f909b154339f7c637b2a53c75c0e7f3da62eb1fefe371b1 in /

# 2025-08-20 01:17:08  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04

# 2025-08-20 01:17:08  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2025-08-20 01:17:08  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2025-08-20 01:17:08  0.00B
/bin/sh -c #(nop)  ARG RELEASE
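The 4.16GB layer above copies the Moore Threads MUSA runtime libraries into /usr/local/musa/lib, and the printf/ldconfig layer registers that directory with the dynamic linker so /app/llama-server can load them. A quick sanity check of the linker cache inside the image (the grep pattern is an assumption about the library names):

# List MUSA entries in the image's ld cache
docker run --rm --entrypoint /bin/bash \
  ghcr.io/ggml-org/llama.cpp:server-musa-b6970 \
  -c 'ldconfig -p | grep -i musa | head'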

Image information

{
    "Id": "sha256:491978261f4474a52ed35e2ed501db79a5bf3c2c832117a1f81f6308ab4f670d",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:server-musa-b6970",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6970"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:803a035cffe3a92308f66cadec53cae4c9644b7d5df77c911dea9ffd2aebb418",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:0009cd6eebda032a2bf8198ae28aeb205076e695bcb46ad84681dfb07788eea0"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-11-07T04:53:24.913100757Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "DEBIAN_FRONTEND=noninteractive",
            "MTHREADS_VISIBLE_DEVICES=all",
            "MTHREADS_DRIVER_CAPABILITIES=compute,utility",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 4467389429,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/9a69bce17a3647303aee95b6f32bd76dd4f4b4f74134c6f8714585ad0661169c/diff:/var/lib/docker/overlay2/11be76c6b670e85320d4b1aaf8553d870517b4ae5b11c4dc4302227500f830b3/diff:/var/lib/docker/overlay2/82f2d38c94539be15b45e2f45ae816686a4635c7df423357eb4233b150c687bd/diff:/var/lib/docker/overlay2/fe98d52c711f708ab9d420297da87278ea347fdada2873eabb09888040fe8ae4/diff:/var/lib/docker/overlay2/daaa33e5ae1648453fce428dd8b77ffcc3d63eb20fb51da1a34bccd09e6ca5ad/diff:/var/lib/docker/overlay2/f0d0fa5e0d47f6bf3841238bf2496cf2675e3e2fc4b4d5eb1bbcbe6839284b4e/diff:/var/lib/docker/overlay2/c032cbde02d7667fa423511d256cfce5f162e26a32bf601445cf11aa5a4e21bb/diff:/var/lib/docker/overlay2/97073286d5eb7217a14fff08602e32cd878b3428b86cd09aa5efd9e66e76d780/diff",
            "MergedDir": "/var/lib/docker/overlay2/9038598e46c1094a901a4a6d3bd758c9a3df00ece90395cbe7e858e4a885bb1d/merged",
            "UpperDir": "/var/lib/docker/overlay2/9038598e46c1094a901a4a6d3bd758c9a3df00ece90395cbe7e858e4a885bb1d/diff",
            "WorkDir": "/var/lib/docker/overlay2/9038598e46c1094a901a4a6d3bd758c9a3df00ece90395cbe7e858e4a885bb1d/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:dc6eb6dad5f9e332f00af553440e857b1467db1be43dd910cdb6830ba0898d50",
            "sha256:9f693c9930f10b128d04109954a3c1fbe9fc22401a1321ea77014209f401c6d6",
            "sha256:667bcd9de484e3233af29c66f9958115a31a501098e7399dabebc26fb2e5bbae",
            "sha256:9e33c2d1b28e01f523f6f7dba7ad96ef6a342b3ade17c2d7ef0c886f4dcdf98c",
            "sha256:2c607ba05083860b659778588a019891e138c12d83c84c91650e4c6c7eacaf12",
            "sha256:2bd1d83dbfd532496fb8f3f57d468b68d8232d66be700df95c8d9a678ff12839",
            "sha256:bfcc7b3123405a150fa50c8fa9a3016aaece666219e4b7f04d8616771f2331fe",
            "sha256:91680f9138dda845c61052a618fe5d601ac9b1dec3179cf4a63d93593275d625",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-11-07T14:49:57.064876203+08:00"
    }
}
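
The RepoDigests field above makes digest pinning possible: pulling by digest is immutable even if the server-musa-b6970 tag is re-pushed later. The digest below is taken verbatim from the inspect output:

# Pin the mirror image by content digest instead of the movable tag
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:0009cd6eebda032a2bf8198ae28aeb205076e695bcb46ad84681dfb07788eea0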

More versions

Image                                                   Platform      Registry   Size      Synced             Views
ghcr.io/ggml-org/llama.cpp:full                         linux/amd64   ghcr.io    1.96GB    2025-03-17 14:48   821
ghcr.io/ggml-org/llama.cpp:full-cuda                    linux/amd64   ghcr.io    5.05GB    2025-03-18 10:58   809
ghcr.io/ggml-org/llama.cpp:server                       linux/amd64   ghcr.io    96.62MB   2025-05-02 00:26   915
ghcr.io/ggml-org/llama.cpp:server-cuda                  linux/amd64   ghcr.io    2.57GB    2025-06-14 16:26   978
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006            linux/amd64   ghcr.io    2.58GB    2025-07-28 15:06   310
ghcr.io/ggml-org/llama.cpp:server-musa-b6189            linux/amd64   ghcr.io    4.44GB    2025-08-18 19:58   135
ghcr.io/ggml-org/llama.cpp:server-musa-b6375            linux/amd64   ghcr.io    4.45GB    2025-09-04 16:53   151
ghcr.io/ggml-org/llama.cpp:server-vulkan                linux/amd64   ghcr.io    480.55MB  2025-09-04 17:34   156
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485            linux/amd64   ghcr.io    2.63GB    2025-09-16 16:27   178
ghcr.io/ggml-org/llama.cpp:server-musa-b6571            linux/amd64   ghcr.io    4.45GB    2025-09-28 14:58   79
ghcr.io/ggml-org/llama.cpp:server-cuda-b6725            linux/amd64   ghcr.io    2.64GB    2025-10-10 16:46   111
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda          linux/amd64   docker.io  5.01GB    2025-10-13 17:40   58
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda-b6746    linux/amd64   docker.io  5.01GB    2025-10-13 17:42   107
ghcr.io/ggml-org/llama.cpp:full-cuda-b6746              linux/amd64   ghcr.io    5.01GB    2025-10-13 18:03   123
ghcr.io/ggml-org/llama.cpp:full-b6746                   linux/amd64   ghcr.io    2.06GB    2025-10-14 17:12   85
ghcr.io/ggml-org/llama.cpp:full-cuda-b6823              linux/amd64   ghcr.io    5.05GB    2025-10-23 14:36   48
ghcr.io/ggml-org/llama.cpp:server-cuda-b6795            linux/amd64   ghcr.io    2.69GB    2025-10-30 17:31   62
ghcr.io/ggml-org/llama.cpp:server-musa-b6970            linux/amd64   ghcr.io    4.47GB    2025-11-07 14:50   11