ghcr.io/ggml-org/llama.cpp:server linux/amd64

ghcr.io/ggml-org/llama.cpp:server - China mirror download source

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs), such as LLaMA, on both CPUs and GPUs.

Source image ghcr.io/ggml-org/llama.cpp:server
China mirror swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server
Image ID sha256:a6b3b3b6da755e93c4a59a19a83a663aa7ed8d24eeae5372cf8864026ecac72d
Image tag server
Size 96.62MB
Registry ghcr.io
CMD (none)
Entrypoint /app/llama-server
Working directory /app
OS/Arch linux/amd64
Views 61
Contributor 89******9@qq.com
Image created 2025-04-30T10:07:20.107578327Z
Synced 2025-05-02 00:26
Updated 2025-05-09 01:56
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LLAMA_ARG_HOST=0.0.0.0
Image labels
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server ghcr.io/ggml-org/llama.cpp:server
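Once the image is pulled, anything placed after the image name is forwarded to the ENTRYPOINT /app/llama-server. A minimal launch sketch follows; the model directory, model file name, and port are assumptions, not taken from this page, and the script only prints the command so it can be reviewed before running:

```shell
# Hypothetical sketch: arguments after the image name are passed to the
# ENTRYPOINT /app/llama-server. MODEL_DIR and the .gguf file name are
# assumptions; adjust to your environment.
IMAGE=swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server
MODEL_DIR=$HOME/models   # hypothetical host directory containing a GGUF model
CMD="docker run --rm -p 8080:8080 -v $MODEL_DIR:/models $IMAGE -m /models/model.gguf --port 8080"
echo "$CMD"   # review, then run manually; port 8080 matches the image HEALTHCHECK
```

Port 8080 is used here because the image's HEALTHCHECK probes http://localhost:8080/health.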

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server
ctr images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server ghcr.io/ggml-org/llama.cpp:server

Quick shell replacement command

sed -i 's#ghcr.io/ggml-org/llama.cpp:server#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server#' deployment.yaml
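The sed command above can be checked end to end against a throwaway manifest. A hypothetical minimal Deployment is written to the current directory, rewritten in place, and inspected; this assumes GNU sed (`-i` with no suffix argument):

```shell
# Round-trip check of the sed rewrite against a hypothetical minimal manifest
# (written to the current directory; adjust paths for real use).
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: llama-server
        image: ghcr.io/ggml-org/llama.cpp:server
EOF
# '#' is used as the s/// delimiter so the slashes in the image path
# do not need escaping.
sed -i 's#ghcr.io/ggml-org/llama.cpp:server#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server#' deployment.yaml
grep 'image:' deployment.yaml   # should now show the mirror address
```

On BSD/macOS sed, `-i` requires an explicit (possibly empty) suffix, e.g. `sed -i ''`.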

Ansible quick rollout - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server  ghcr.io/ggml-org/llama.cpp:server'

Ansible quick rollout - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server  ghcr.io/ggml-org/llama.cpp:server'

Image build history


# 2025-04-30 18:07:20  0.00B Set the command run when the container starts
ENTRYPOINT ["/app/llama-server"]
                        
# 2025-04-30 18:07:20  0.00B Define the command used to check container health
HEALTHCHECK &{["CMD" "curl" "-f" "http://localhost:8080/health"] "0s" "0s" "0s" "0s" '\x00'}
                        
# 2025-04-30 18:07:20  0.00B Set the working directory to /app
WORKDIR /app
                        
# 2025-04-30 18:07:20  4.14MB Copy new files or directories into the container
COPY /app/full/llama-server /app # buildkit
                        
# 2025-04-30 18:07:20  0.00B Set the LLAMA_ARG_HOST environment variable
ENV LLAMA_ARG_HOST=0.0.0.0
                        
# 2025-04-30 17:33:32  8.30MB Copy new files or directories into the container
COPY /app/lib/ /app # buildkit
                        
# 2025-04-30 17:29:28  6.32MB Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit
                        
# 2025-04-07 15:24:18  0.00B 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
                        
# 2025-04-07 15:24:17  77.86MB 
/bin/sh -c #(nop) ADD file:433cf0b8353e08be3a6582ad5947c57a66bdbb842ed3095246a1ff6876d157f1 in / 
                        
# 2025-04-07 15:24:14  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
                        
# 2025-04-07 15:24:14  0.00B 
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
                        
# 2025-04-07 15:24:14  0.00B 
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
                        
# 2025-04-07 15:24:14  0.00B 
/bin/sh -c #(nop)  ARG RELEASE
                        
                    

Image information

{
    "Id": "sha256:a6b3b3b6da755e93c4a59a19a83a663aa7ed8d24eeae5372cf8864026ecac72d",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:server",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:server"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:1800ffc4fa53fbb75ffcf75ef386327799e86147b593c4a15e8cdaa5915eda90",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:726fbbbb7f9d50963f7bffb42a6cd6c7217455f4ceb3c3b0da2c1de965e073ba"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-04-30T10:07:20.107578327Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "LLAMA_ARG_HOST=0.0.0.0"
        ],
        "Cmd": null,
        "Healthcheck": {
            "Test": [
                "CMD",
                "curl",
                "-f",
                "http://localhost:8080/health"
            ]
        },
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/llama-server"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 96617255,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/cdf0a2dc4376a45ebb8cadacf91af84acf0862ab664a9a334f705c4a1f4a62d4/diff:/var/lib/docker/overlay2/fee49b6e678cee9f373793e1ed937853f9720fa8c36dd15fc853c168aea5dc65/diff:/var/lib/docker/overlay2/640ac2230d79e82ff65cc13ee1d59ee6b31e86687472ef81d470aa5203a9f438/diff:/var/lib/docker/overlay2/e7eee3f084d991e18a21a7e4166378a36f815193ac56858dccf54b2bd2152399/diff",
            "MergedDir": "/var/lib/docker/overlay2/2591ac985dbf3109002737ab2ed97c9e17c2ec6cfed636c66ab9239edede33e8/merged",
            "UpperDir": "/var/lib/docker/overlay2/2591ac985dbf3109002737ab2ed97c9e17c2ec6cfed636c66ab9239edede33e8/diff",
            "WorkDir": "/var/lib/docker/overlay2/2591ac985dbf3109002737ab2ed97c9e17c2ec6cfed636c66ab9239edede33e8/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:65c636ce09f299ba8ea7157c8d126dfd5b115fa7bbc5d634a91b34786958546e",
            "sha256:c28af6b21d095ddb4d82e0c68117a4872eb20c277ab839875abc17d9874b5b5f",
            "sha256:bb497adac8d38034ef23ce29fbae2bb5bc4314db9af5a807258f8a5a6054363d",
            "sha256:648f9e1847cab47ab829dd4990f5337f47986fad552c0c05d79e0eb586e31e8a",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-05-02T00:26:52.002688643+08:00"
    }
}

More versions

ghcr.io/ggml-org/llama.cpp:full
linux/amd64 · ghcr.io · 1.96GB · 2025-03-17 14:48

ghcr.io/ggml-org/llama.cpp:full-cuda
linux/amd64 · ghcr.io · 5.05GB · 2025-03-18 10:58

ghcr.io/ggml-org/llama.cpp:server
linux/amd64 · ghcr.io · 96.62MB · 2025-05-02 00:26