ghcr.io/ggml-org/llama.cpp:server-vulkan linux/amd64

ghcr.io/ggml-org/llama.cpp:server-vulkan - China mirror download source

This is a Docker container image packaging the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs) such as LLaMA on CPUs and GPUs; this server-vulkan tag ships the llama-server binary built with the Vulkan GPU backend.

Source image      ghcr.io/ggml-org/llama.cpp:server-vulkan
China mirror      swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan
Image ID          sha256:5eaf334d4f4c531ca93f38ae034a1b4d8683157b51f8e09755458dd393a43f85
Image tag         server-vulkan
Size              480.55MB
Registry          ghcr.io
CMD               (none)
Entrypoint        /app/llama-server
Working directory /app
OS/Platform       linux/amd64
Image created     2025-09-04T04:34:41.668696127Z
Synced            2025-09-04 17:34
Updated           2025-09-05 08:10
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LLAMA_ARG_HOST=0.0.0.0
Image labels
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=24.04
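
Since the entrypoint is /app/llama-server and LLAMA_ARG_HOST is already set to 0.0.0.0, the image can be run directly once a model is mounted. A minimal sketch, assuming a GGUF model in ./models on the host and a GPU exposed through /dev/dri (the model path and published port are illustrative; Vulkan device access may differ by driver):

# run the Vulkan server build from the China mirror (model path is illustrative)
docker run -d --name llama-server \
  --device /dev/dri \
  -v "$PWD/models:/models" \
  -p 8080:8080 \
  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan \
  -m /models/model.gguf

# the image's built-in HEALTHCHECK probes this same endpoint
curl -f http://localhost:8080/health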

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan  ghcr.io/ggml-org/llama.cpp:server-vulkan
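
The tag step lets manifests and compose files that still reference the upstream name resolve to the locally pulled copy. To confirm both names point at the same image ID:

docker images | grep llama.cpp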

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan  ghcr.io/ggml-org/llama.cpp:server-vulkan
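
Note that a plain ctr pull lands in containerd's default namespace; on Kubernetes nodes the kubelet reads images from the k8s.io namespace, so pull and tag there explicitly:

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan ghcr.io/ggml-org/llama.cpp:server-vulkan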

Shell quick-replace command

sed -i 's#ghcr.io/ggml-org/llama.cpp:server-vulkan#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan#' deployment.yaml
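
The sed call rewrites a single deployment.yaml. To apply the same replacement across every manifest that references the upstream image (the ./manifests directory is illustrative):

grep -rl 'ghcr.io/ggml-org/llama.cpp:server-vulkan' ./manifests | xargs sed -i 's#ghcr.io/ggml-org/llama.cpp:server-vulkan#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan#'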

Ansible quick distribution - Docker

ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan && docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan ghcr.io/ggml-org/llama.cpp:server-vulkan'

Ansible quick distribution - Containerd

ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan && ctr images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan ghcr.io/ggml-org/llama.cpp:server-vulkan'
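
Both ad-hoc calls assume an inventory group named k8s. A variant with explicit parallelism, pulling into the k8s.io namespace for Kubernetes nodes as noted above (the fork count is illustrative):

ansible k8s -f 10 -m shell -a 'ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan && ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan ghcr.io/ggml-org/llama.cpp:server-vulkan'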

Image build history

# 2025-09-04 12:34:41  0.00B  Configure the command run when the container starts
ENTRYPOINT ["/app/llama-server"]

# 2025-09-04 12:34:41  0.00B  Specify the command that checks container health
HEALTHCHECK &{["CMD" "curl" "-f" "http://localhost:8080/health"] "0s" "0s" "0s" "0s" '\x00'}

# 2025-09-04 12:34:41  0.00B  Set the working directory to /app
WORKDIR /app

# 2025-09-04 12:34:41  5.24MB  Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit

# 2025-09-04 12:34:41  0.00B  Set the environment variable LLAMA_ARG_HOST
ENV LLAMA_ARG_HOST=0.0.0.0

# 2025-09-04 12:26:32  56.23MB  Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2025-09-04 12:18:43  340.95MB  Run a command and create a new image layer
RUN /bin/sh -c apt-get update && apt-get install -y libgomp1 curl libvulkan-dev && apt autoremove -y && apt clean -y && rm -rf /tmp/* /var/tmp/* && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete && find /var/cache -type f -delete # buildkit

# 2025-08-19 22:37:01  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2025-08-19 22:37:00  78.12MB
/bin/sh -c #(nop) ADD file:e67907c77897d27192314f6c4fa0112b6f7dce3e127500516535cc50fe736c92 in /

# 2025-08-19 22:36:58  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=24.04

# 2025-08-19 22:36:58  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2025-08-19 22:36:58  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2025-08-19 22:36:58  0.00B
/bin/sh -c #(nop)  ARG RELEASE

Image information

{
    "Id": "sha256:5eaf334d4f4c531ca93f38ae034a1b4d8683157b51f8e09755458dd393a43f85",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:server-vulkan",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server-vulkan"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:042f13f0370e2c4e270ccfb5b254d66e876670e6f999a575e0e90d69a2a2137f",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:381885a7c98337501e2fb2f83a475d75b74389119ff46187a56512e972c077f5"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-09-04T04:34:41.668696127Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "24.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 480549091,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/0ede1e8b90d1934c08a88315a9d53ec07c05ab20f3a77920e07972ed7a2149c4/diff:/var/lib/docker/overlay2/35cbe856a94b00eb836c0646b938e9d401efe52bdf92cd576d61716e4985f026/diff:/var/lib/docker/overlay2/4305a1b4a525855b759e2280acef5798d5fc4f830e34d6cdf673f64e17bdc199/diff:/var/lib/docker/overlay2/17b23f694393b604e8083185b15a2b8e2dc12512d5ebed86c8ec85e456552f19/diff",
            "MergedDir": "/var/lib/docker/overlay2/210c6a125e3176c2d94ac161c6097944fdf2087114d266f53b44dcdb4a5e0108/merged",
            "UpperDir": "/var/lib/docker/overlay2/210c6a125e3176c2d94ac161c6097944fdf2087114d266f53b44dcdb4a5e0108/diff",
            "WorkDir": "/var/lib/docker/overlay2/210c6a125e3176c2d94ac161c6097944fdf2087114d266f53b44dcdb4a5e0108/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:9d592720ced4a7a4ddf16adef8a126e4c8c49f22114de769343320b37674321e",
            "sha256:62b5d53884a947aab6e808b872da9e3968bd11483571c8bae5d56bd6c8b6554d",
            "sha256:0acd046180efe8e7036a2a7c20cee4058e186fa0f8a908f996a38a621349b8b3",
            "sha256:a9af74f1dc4b436e2a763837b99bc3007a062dcf4f10287faaa7efbf1ad26fe1",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-09-04T17:34:13.129062902+08:00"
    }
}
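
The RepoDigests above make it possible to pin deployments to an immutable digest rather than the mutable server-vulkan tag, for example against the mirror:

# pull by digest (digest taken from RepoDigests above)
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:381885a7c98337501e2fb2f83a475d75b74389119ff46187a56512e972c077f5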

More versions

Image                                          Platform     Registry  Size      Synced
ghcr.io/ggml-org/llama.cpp:full                linux/amd64  ghcr.io   1.96GB    2025-03-17 14:48
ghcr.io/ggml-org/llama.cpp:full-cuda           linux/amd64  ghcr.io   5.05GB    2025-03-18 10:58
ghcr.io/ggml-org/llama.cpp:server              linux/amd64  ghcr.io   96.62MB   2025-05-02 00:26
ghcr.io/ggml-org/llama.cpp:server-cuda         linux/amd64  ghcr.io   2.57GB    2025-06-14 16:26
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006   linux/amd64  ghcr.io   2.58GB    2025-07-28 15:06
ghcr.io/ggml-org/llama.cpp:server-musa-b6189   linux/amd64  ghcr.io   4.44GB    2025-08-18 19:58
ghcr.io/ggml-org/llama.cpp:server-musa-b6375   linux/amd64  ghcr.io   4.45GB    2025-09-04 16:53
ghcr.io/ggml-org/llama.cpp:server-vulkan       linux/amd64  ghcr.io   480.55MB  2025-09-04 17:34