ghcr.io/ggml-org/llama.cpp:server-musa-b6571 linux/amd64

ghcr.io/ggml-org/llama.cpp:server-musa-b6571 - China mirror download source

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs), such as LLaMA, on CPUs and GPUs.

Source image: ghcr.io/ggml-org/llama.cpp:server-musa-b6571
China mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571
Image ID: sha256:e0cd5137804d48dd3b7aa490cfe9b7d5029b07c0b2bd6e355235c3c369c24ebd
Image tag: server-musa-b6571
Size: 4.45GB
Source registry: ghcr.io
CMD: (none)
Entrypoint: /app/llama-server
Working directory: /app
OS/Platform: linux/amd64
Contributor: 33******k@163.com
Image created: 2025-09-25T04:53:21.949555109Z
Synced: 2025-09-28 14:58
Updated: 2025-09-28 19:08
Environment variables:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBIAN_FRONTEND=noninteractive
MTHREADS_VISIBLE_DEVICES=all
MTHREADS_DRIVER_CAPABILITIES=compute,utility
LLAMA_ARG_HOST=0.0.0.0
Image labels:
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571  ghcr.io/ggml-org/llama.cpp:server-musa-b6571
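
To actually serve a model with this image, a typical invocation mounts a local GGUF file and publishes the server port. Below is a minimal sketch: the model directory and file name are placeholders, and on Moore Threads (MUSA) hardware GPU passthrough generally also requires the vendor's container toolkit to be configured on the host. Note that the image already sets LLAMA_ARG_HOST=0.0.0.0, so llama-server listens on all interfaces, and the built-in healthcheck expects the default port 8080.

docker run -d --name llama-server -p 8080:8080 \
  -v /data/models:/models \
  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571 \
  -m /models/your-model.gguf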

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571  ghcr.io/ggml-org/llama.cpp:server-musa-b6571
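
Note: ctr operates on containerd's "default" namespace unless told otherwise, while kubelet looks up images in the "k8s.io" namespace. When pulling for Kubernetes pods rather than plain containerd workloads, the commands usually need the namespace flag:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571 ghcr.io/ggml-org/llama.cpp:server-musa-b6571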

Shell quick-replace command

sed -i 's#ghcr.io/ggml-org/llama.cpp:server-musa-b6571#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571#' deployment.yaml
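
If the image reference appears in more than one manifest, the same substitution can be applied across a directory tree; a sketch assuming the manifests are *.yaml files under the current directory:

grep -rl 'ghcr.io/ggml-org/llama.cpp:server-musa-b6571' --include='*.yaml' . | xargs sed -i 's#ghcr.io/ggml-org/llama.cpp:server-musa-b6571#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571#'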

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571  ghcr.io/ggml-org/llama.cpp:server-musa-b6571'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571  ghcr.io/ggml-org/llama.cpp:server-musa-b6571'
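
The leading # keeps these one-liners from running by accident; remove it to execute. After distribution, presence of the image on every node can be confirmed with a similar ad-hoc check (assuming the inventory group is named k8s, as above):

ansible k8s -m shell -a 'ctr images ls | grep server-musa-b6571'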

Image build history


# 2025-09-25 12:53:21  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/app/llama-server"]

# 2025-09-25 12:53:21  0.00B  Define the command used to check container health
HEALTHCHECK &{["CMD" "curl" "-f" "http://localhost:8080/health"] "0s" "0s" "0s" "0s" '\x00'}

# 2025-09-25 12:53:21  0.00B  Set the working directory to /app
WORKDIR /app

# 2025-09-25 12:53:21  4.23MB  Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit

# 2025-09-25 12:53:21  0.00B  Set environment variable LLAMA_ARG_HOST
ENV LLAMA_ARG_HOST=0.0.0.0

# 2025-09-25 12:46:02  202.01MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2025-09-25 12:28:03  3.79MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2025-08-04 09:36:55  12.00B  Run a command and create a new image layer
RUN /bin/sh -c cd /usr/local/musa/lib &&     for f in libmusa.so.*; do         if [ -f "$f" ] && [[ "$f" =~ ^libmusa\.so\.[0-9]+$ ]]; then             ln -sf "$f" libmusa.so;             break;         fi;     done # buildkit

# 2025-08-04 09:36:55  0.00B  Set environment variable MTHREADS_DRIVER_CAPABILITIES
ENV MTHREADS_DRIVER_CAPABILITIES=compute,utility

# 2025-08-04 09:36:55  0.00B  Set environment variable MTHREADS_VISIBLE_DEVICES
ENV MTHREADS_VISIBLE_DEVICES=all

# 2025-08-04 09:36:55  16.63KB  Run a command and create a new image layer
RUN /bin/sh -c printf "/usr/local/musa/lib" > /etc/ld.so.conf.d/000-musa.conf && ldconfig # buildkit

# 2025-08-04 09:36:55  13.00B  Run a command and create a new image layer
RUN /bin/sh -c ln -sf /usr/bin/bash /usr/bin/sh # buildkit

# 2025-08-04 09:08:45  4.15GB  Copy new files or directories into the container
COPY /tmp/musa_lib /usr/local/musa/lib/ # buildkit

# 2025-08-04 09:07:35  10.44MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update -y && apt-get install -y libelf1 libgomp1 libnuma1 libomp5 curl wget &&     apt-get clean && rm -rf /var/lib/apt/lists/* # buildkit

# 2025-07-15 00:33:32  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2025-07-15 00:33:31  77.87MB
/bin/sh -c #(nop) ADD file:415bbc01dfb447d002e2d8173e113ef025d2bbfa20f1205823fa699dc87a2019 in /

# 2025-07-15 00:33:29  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04

# 2025-07-15 00:33:29  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2025-07-15 00:33:29  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2025-07-15 00:33:29  0.00B
/bin/sh -c #(nop)  ARG RELEASE

Image information

{
    "Id": "sha256:e0cd5137804d48dd3b7aa490cfe9b7d5029b07c0b2bd6e355235c3c369c24ebd",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:server-musa-b6571",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-musa-b6571"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:75239bbc8027a12cfbf8e5e6fe9e2734abe4d10dbc9c56dc71205b2a24b18c1c",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:2e75de1060dddc3ebe639eb89134e54c7e8aff4cb6627de3e72ac2573bd062b1"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-09-25T04:53:21.949555109Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "DEBIAN_FRONTEND=noninteractive",
            "MTHREADS_VISIBLE_DEVICES=all",
            "MTHREADS_DRIVER_CAPABILITIES=compute,utility",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 4449827015,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/72ff9b0646a1e95d7cbf851d9ed94f9b010c9300ac87f74eb01e9b289a1deedf/diff:/var/lib/docker/overlay2/e42c5129582a1f0df26871da8a35ad3e1292695922404cc99992d663a16a6ed2/diff:/var/lib/docker/overlay2/e65a09febced30cb74b9edcb787ef13d9f171252ea80fe15f7f025ed9bf10641/diff:/var/lib/docker/overlay2/06743199d028a09975d93086646c66ec4b5a15537394c0357c1dbe0a39a37edd/diff:/var/lib/docker/overlay2/53e27643cd04cdcd7cfcb42d1a22a63c05d8b946479eba8a4f4b21ef7c9a94f2/diff:/var/lib/docker/overlay2/f55d6cb7af6e9b36442659925ffcc23f9470f3809f4aa38656fc979578431ad7/diff:/var/lib/docker/overlay2/bf93bc88fa7bd8931fdcf1a6864c6a2571ad767350f886c123002f16ff43e5fc/diff:/var/lib/docker/overlay2/5d2a35bbcf027a5e77bdbf65d4d6dc26f17aa0a94fb3cdfaa0ffb809f56c5054/diff:/var/lib/docker/overlay2/af3da1151f0c0fec1e795790f7279d0611ed3856a5d80b65998328015b86aecf/diff",
            "MergedDir": "/var/lib/docker/overlay2/767a3195347664278097c911f2c6f9ea091ef4af8ad585bafc4ae1f014359d6f/merged",
            "UpperDir": "/var/lib/docker/overlay2/767a3195347664278097c911f2c6f9ea091ef4af8ad585bafc4ae1f014359d6f/diff",
            "WorkDir": "/var/lib/docker/overlay2/767a3195347664278097c911f2c6f9ea091ef4af8ad585bafc4ae1f014359d6f/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:3cc982388b71ef357e0157e0b7d3059dcefa4dc9fd2e3815bde6c6ce040302f3",
            "sha256:25181503e766498ecd49a4760d4b37d80b06b7395634fcfcaab0a956e9b9a8b4",
            "sha256:d645ee616a04c003fb2e4a7ed18960241019e8edf188967024050f8a2004fb91",
            "sha256:cd5ebc5ab05b01bbcbe5b0862d1eeb2d600e7702c2261c51048194a13ba90dfc",
            "sha256:d65e52a017f647e84f0198e08ad954c86ffb613ab4073a084ee4218cfbd7a98e",
            "sha256:bafac1de5d832487d5243540c10ad9fbb3b5f2516f5dbcc3d25e88a598e30f5e",
            "sha256:41042e323e3a0d157ec2932ec509f9bfbad68e33011b02574fdd1c8126f4bb59",
            "sha256:10bc4a2ae1ed28eb6cc90ba45e1ef45d763396bbbe2c5ee2fc781438221b5436",
            "sha256:c08d79c383ab1362a508c310fd0275d6605ba35a79740b91fa0c29656761bfe8",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-09-28T14:58:47.928814002+08:00"
    }
}
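
Since the image declares a HEALTHCHECK of curl -f http://localhost:8080/health, a running container's state can be verified from the host as well. A quick check, assuming a container named llama-server started with -p 8080:8080 as in the run example above:

docker inspect --format '{{.State.Health.Status}}' llama-server
curl -f http://localhost:8080/health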

More versions

Image                                           Platform     Source   Size      Synced            Views
ghcr.io/ggml-org/llama.cpp:full                 linux/amd64  ghcr.io  1.96GB    2025-03-17 14:48  614
ghcr.io/ggml-org/llama.cpp:full-cuda            linux/amd64  ghcr.io  5.05GB    2025-03-18 10:58  689
ghcr.io/ggml-org/llama.cpp:server               linux/amd64  ghcr.io  96.62MB   2025-05-02 00:26  697
ghcr.io/ggml-org/llama.cpp:server-cuda          linux/amd64  ghcr.io  2.57GB    2025-06-14 16:26  787
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006    linux/amd64  ghcr.io  2.58GB    2025-07-28 15:06  226
ghcr.io/ggml-org/llama.cpp:server-musa-b6189    linux/amd64  ghcr.io  4.44GB    2025-08-18 19:58  93
ghcr.io/ggml-org/llama.cpp:server-musa-b6375    linux/amd64  ghcr.io  4.45GB    2025-09-04 16:53  69
ghcr.io/ggml-org/llama.cpp:server-vulkan        linux/amd64  ghcr.io  480.55MB  2025-09-04 17:34  90
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485    linux/amd64  ghcr.io  2.63GB    2025-09-16 16:27  56
ghcr.io/ggml-org/llama.cpp:server-musa-b6571    linux/amd64  ghcr.io  4.45GB    2025-09-28 14:58  10