ghcr.io/ggml-org/llama.cpp:server-rocm-b7968 linux/amd64

ghcr.io/ggml-org/llama.cpp:server-rocm-b7968 - China mirror download source

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project that makes it possible to run large language models (LLMs) such as LLaMA on CPUs and GPUs. This tag packages the llama-server HTTP server built against AMD's ROCm stack.

Source image   ghcr.io/ggml-org/llama.cpp:server-rocm-b7968
China mirror   swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968
Image ID       sha256:246d1f7308d5f49b2b3bcc941da8ea5abf15b51c75345a4df78d4e6f817310a6
Image tag      server-rocm-b7968
Size           18.09GB
Registry       ghcr.io
CMD            (none)
Entrypoint     /app/llama-server
Working dir    /app
OS/Arch        linux/amd64
Image created  2026-02-08T06:21:47.957621985Z
Synced         2026-02-09 01:39
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LLAMA_ARG_HOST=0.0.0.0
Image labels
maintainer=dl.mlsedevops@amd.com
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=24.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968  ghcr.io/ggml-org/llama.cpp:server-rocm-b7968
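
After pulling and re-tagging, the image can be started directly: the entrypoint is /app/llama-server and LLAMA_ARG_HOST=0.0.0.0 makes the server listen on all interfaces, so anything after the image name is passed straight to llama-server. The sketch below is illustrative only: /models, model.gguf, and the container name llama-rocm are placeholders, and ROCm workloads generally need the host's /dev/kfd and /dev/dri devices passed through.

# Sketch only: model path and container name are placeholders.
docker run -d --name llama-rocm \
  --device /dev/kfd --device /dev/dri \
  --group-add video \
  -v /models:/models:ro \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-rocm-b7968 \
  -m /models/model.gguf --port 8080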

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968  ghcr.io/ggml-org/llama.cpp:server-rocm-b7968
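
Running directly under containerd is similar; a minimal sketch, assuming the same placeholder model path (ctr's --device and --mount flag behavior may vary across containerd versions):

# Sketch only: /models and model.gguf are placeholders.
ctr run --rm -t --net-host \
  --device /dev/kfd --device /dev/dri \
  --mount type=bind,src=/models,dst=/models,options=rbind:ro \
  ghcr.io/ggml-org/llama.cpp:server-rocm-b7968 llama-rocm \
  /app/llama-server -m /models/model.gguf --port 8080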

Shell quick-replace command

sed -i 's#ghcr.io/ggml-org/llama.cpp:server-rocm-b7968#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968#' deployment.yaml
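
For workloads already running in a cluster, the same swap can be applied without editing manifest files, via kubectl set image; the deployment name and container name below (both llama-server) are hypothetical placeholders:

kubectl set image deployment/llama-server \
  llama-server=swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968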

Ansible quick distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968  ghcr.io/ggml-org/llama.cpp:server-rocm-b7968'

Ansible quick distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968  ghcr.io/ggml-org/llama.cpp:server-rocm-b7968'
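
After either distribution run, a quick spot check across the same k8s inventory group confirms the tag landed on every node:

# Docker nodes:
ansible k8s -m shell -a 'docker images | grep server-rocm-b7968'
# containerd nodes:
ansible k8s -m shell -a 'ctr images ls | grep server-rocm-b7968'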

Image build history


# 2026-02-08 14:21:47  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/app/llama-server"]

# 2026-02-08 14:21:47  0.00B  Define the command used to check container health
HEALTHCHECK &{["CMD" "curl" "-f" "http://localhost:8080/health"] "0s" "0s" "0s" "0s" '\x00'}

# 2026-02-08 14:21:47  0.00B  Set the working directory to /app
WORKDIR /app

# 2026-02-08 14:21:47  7.50MB  Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit

# 2026-02-08 14:21:47  0.00B  Set the environment variable LLAMA_ARG_HOST
ENV LLAMA_ARG_HOST=0.0.0.0

# 2026-02-08 14:18:42  827.43MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2025-12-29 12:40:32  15.37KB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2025-09-16 17:58:22  1.89KB  Run a command and create a new image layer
RUN |3 ROCM_VERSION=7.0 AMDGPU_VERSION=7.0 APT_PREF=Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600 /bin/sh -c groupadd -g 109 render # buildkit

# 2025-09-16 17:58:22  17.18GB  Run a command and create a new image layer
RUN |3 ROCM_VERSION=7.0 AMDGPU_VERSION=7.0 APT_PREF=Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600 /bin/sh -c apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends ca-certificates curl libnuma-dev gnupg   && curl -sL https://repo.radeon.com/rocm/rocm.gpg.key | apt-key add -   && printf "deb [arch=amd64] https://repo.radeon.com/rocm/apt/$ROCM_VERSION/ noble main" | tee --append /etc/apt/sources.list.d/rocm.list   && printf "deb [arch=amd64] https://repo.radeon.com/amdgpu/$AMDGPU_VERSION/ubuntu noble main" | tee /etc/apt/sources.list.d/amdgpu.list   && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends   sudo   libelf1   kmod   file   python3-dev   python3-pip   rocm-dev   rocm-libs   build-essential &&   apt-get clean &&   rm -rf /var/lib/apt/lists/* # buildkit

# 2025-09-16 17:52:49  60.00B  Run a command and create a new image layer
RUN |3 ROCM_VERSION=7.0 AMDGPU_VERSION=7.0 APT_PREF=Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600 /bin/sh -c echo "$APT_PREF" > /etc/apt/preferences.d/rocm-pin-600 # buildkit

# 2025-09-16 17:52:49  0.00B  Define a build argument
ARG APT_PREF

# 2025-09-16 17:52:49  0.00B  Define a build argument
ARG AMDGPU_VERSION=5.3

# 2025-09-16 17:52:49  0.00B  Define a build argument
ARG ROCM_VERSION=5.3

# 2025-09-16 17:52:49  0.00B  Add a metadata label
LABEL maintainer=dl.mlsedevops@amd.com

# 2025-09-10 13:42:34  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2025-09-10 13:42:34  78.12MB
/bin/sh -c #(nop) ADD file:dafefa97de6dc66a6734ec6f05e58125ce01225cccce3f50662330c252aad518 in /

# 2025-09-10 13:42:32  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=24.04

# 2025-09-10 13:42:32  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2025-09-10 13:42:32  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2025-09-10 13:42:32  0.00B
/bin/sh -c #(nop)  ARG RELEASE
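
The HEALTHCHECK step above probes http://localhost:8080/health with curl. For a running container the result can be read back from the engine; llama-rocm here is the placeholder container name used in the run example earlier:

docker inspect --format '{{.State.Health.Status}}' llama-rocm
# or hit the endpoint directly from the host while port 8080 is published:
curl -f http://localhost:8080/health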

Image information

{
    "Id": "sha256:246d1f7308d5f49b2b3bcc941da8ea5abf15b51c75345a4df78d4e6f817310a6",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:server-rocm-b7968",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-rocm-b7968"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:b1a3ea6918d341f989371742b4e0439119b11472d11bb23c7806375d0efad5ba",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:e2818dfa40829bcf3b6db9318ab9229567669afcf3f472cf6ce8ccebedde2c18"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2026-02-08T06:21:47.957621985Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "maintainer": "dl.mlsedevops@amd.com",
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "24.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 18090228614,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/b73bf3e685bc60bd1d4574c6a38f5165281a079e35010063950cd1c20c60c5b3/diff:/var/lib/docker/overlay2/3f2ee46724cc7de1acfea3d09af792140274a4d1514bd159a8b4ef15a804a7c8/diff:/var/lib/docker/overlay2/b784fd55e4fa1ae5cb5f8176339fb0504cb7b812ac98381e2b9dbd20522e2682/diff:/var/lib/docker/overlay2/0a0d803020c499dd3384864787e9c8c9d9077c7e0fd926d1d3452fa8218078a4/diff:/var/lib/docker/overlay2/2dc60fcd35cb00f4a92a0d09be9f96c7f4f8184cc400a761cac591762785fb3c/diff:/var/lib/docker/overlay2/efa3961baed7078c3acd0de07a32adcabfbfe637fc946fcbbbd0a0c34cf367b3/diff:/var/lib/docker/overlay2/54ef700adf7b77b8c25799c44e8638f5c95d72c8491ca089b721097ef2b1d873/diff",
            "MergedDir": "/var/lib/docker/overlay2/58fc2291b02fa19bfdfe3f49818c0fc44094eb00ba9b55abc64ebb34b2f978b4/merged",
            "UpperDir": "/var/lib/docker/overlay2/58fc2291b02fa19bfdfe3f49818c0fc44094eb00ba9b55abc64ebb34b2f978b4/diff",
            "WorkDir": "/var/lib/docker/overlay2/58fc2291b02fa19bfdfe3f49818c0fc44094eb00ba9b55abc64ebb34b2f978b4/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:f9f52dc133e2af9188960e5a5165cafaa51657ef740ff20219e45a561d78c591",
            "sha256:4f3c403d71d8303fb1af39e22cf512b256af63b9b514e26d43e2086a162eb5d9",
            "sha256:c40d8d348069bc6978c84ddd916addd334a5b6e019e808fc20872669e4a0c0f7",
            "sha256:6846289044e9a60f60f3148c8cbb6132c7c292abc114197f59c30f76b247611b",
            "sha256:3f44d299eb532403345e8be06b369cecf20a666a1b4e6dc94ea2f15e38403d1b",
            "sha256:94b412d6103f1a8138ef5417488a9e110772dc6007c25ba6e7b43e4af12ef07e",
            "sha256:40d5e1500e45b4511915f471a65ce8dfd9d3ffb2716a56c4059379949257f3ad",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2026-02-09T01:38:38.36323472+08:00"
    }
}
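
The RepoDigests field above gives the content-addressed digest for each repository; pulling by digest pins the exact image even if the tag is later rebuilt:

docker pull ghcr.io/ggml-org/llama.cpp@sha256:b1a3ea6918d341f989371742b4e0439119b11472d11bb23c7806375d0efad5ba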

More versions

Image                                                  Platform     Registry   Size      Synced            Views
ghcr.io/ggml-org/llama.cpp:full                        linux/amd64  ghcr.io    1.96GB    2025-03-17 14:48  1179
ghcr.io/ggml-org/llama.cpp:full-cuda                   linux/amd64  ghcr.io    5.05GB    2025-03-18 10:58  1420
ghcr.io/ggml-org/llama.cpp:server                      linux/amd64  ghcr.io    96.62MB   2025-05-02 00:26  1517
ghcr.io/ggml-org/llama.cpp:server-cuda                 linux/amd64  ghcr.io    2.57GB    2025-06-14 16:26  1809
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006           linux/amd64  ghcr.io    2.58GB    2025-07-28 15:06  513
ghcr.io/ggml-org/llama.cpp:server-musa-b6189           linux/amd64  ghcr.io    4.44GB    2025-08-18 19:58  270
ghcr.io/ggml-org/llama.cpp:server-musa-b6375           linux/amd64  ghcr.io    4.45GB    2025-09-04 16:53  266
ghcr.io/ggml-org/llama.cpp:server-vulkan               linux/amd64  ghcr.io    480.55MB  2025-09-04 17:34  367
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485           linux/amd64  ghcr.io    2.63GB    2025-09-16 16:27  373
ghcr.io/ggml-org/llama.cpp:server-musa-b6571           linux/amd64  ghcr.io    4.45GB    2025-09-28 14:58  167
ghcr.io/ggml-org/llama.cpp:server-cuda-b6725           linux/amd64  ghcr.io    2.64GB    2025-10-10 16:46  262
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda         linux/amd64  docker.io  5.01GB    2025-10-13 17:40  216
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda-b6746   linux/amd64  docker.io  5.01GB    2025-10-13 17:42  224
ghcr.io/ggml-org/llama.cpp:full-cuda-b6746             linux/amd64  ghcr.io    5.01GB    2025-10-13 18:03  298
ghcr.io/ggml-org/llama.cpp:full-b6746                  linux/amd64  ghcr.io    2.06GB    2025-10-14 17:12  226
ghcr.io/ggml-org/llama.cpp:full-cuda-b6823             linux/amd64  ghcr.io    5.05GB    2025-10-23 14:36  211
ghcr.io/ggml-org/llama.cpp:server-cuda-b6795           linux/amd64  ghcr.io    2.69GB    2025-10-30 17:31  290
ghcr.io/ggml-org/llama.cpp:server-musa-b6970           linux/amd64  ghcr.io    4.47GB    2025-11-07 14:50  168
ghcr.io/ggml-org/llama.cpp:full-cuda-b7083             linux/amd64  ghcr.io    5.02GB    2025-11-18 14:14  277
ghcr.io/ggml-org/llama.cpp:full-b7139                  linux/amd64  ghcr.io    2.01GB    2025-11-24 14:53  358
ghcr.io/ggml-org/llama.cpp:server-b7139                linux/amd64  ghcr.io    101.25MB  2025-11-24 15:22  241
ghcr.io/ggml-org/llama.cpp:full-cuda12-b7681           linux/amd64  ghcr.io    5.16GB    2026-01-10 03:32  83
ghcr.io/ggml-org/llama.cpp:server-cuda13-b7728         linux/amd64  ghcr.io    2.53GB    2026-01-16 13:50  192
ghcr.io/ggml-org/llama.cpp:full-cuda13-b7850           linux/amd64  ghcr.io    4.77GB    2026-01-29 09:40  65
ghcr.io/ggml-org/llama.cpp:server-b7850                linux/amd64  ghcr.io    111.12MB  2026-01-29 09:41  63
ghcr.io/ggml-org/llama.cpp:server-cuda-b7850           linux/amd64  ghcr.io    2.76GB    2026-01-29 12:27  52
ghcr.io/ggml-org/llama.cpp:server-cuda12-b7850         linux/amd64  ghcr.io    2.76GB    2026-01-29 13:20  62
ghcr.io/ggml-org/llama.cpp:full-cuda12-b7850           linux/amd64  ghcr.io    5.21GB    2026-01-29 13:35  42
ghcr.io/ggml-org/llama.cpp:server-cuda-b7869           linux/amd64  ghcr.io    2.76GB    2026-01-29 16:38  88
ghcr.io/ggml-org/llama.cpp:full-cuda12-b7869           linux/amd64  ghcr.io    5.21GB    2026-01-29 17:04  86
ghcr.io/ggml-org/llama.cpp:server-cuda13-b7899         linux/amd64  ghcr.io    2.53GB    2026-02-02 11:24  81
ghcr.io/ggml-org/llama.cpp:server-rocm-b7964           linux/amd64  ghcr.io    18.09GB   2026-02-09 01:37  4
ghcr.io/ggml-org/llama.cpp:server-rocm-b7968           linux/amd64  ghcr.io    18.09GB   2026-02-09 01:39  4