ghcr.io/ggml-org/llama.cpp:full-b6746 linux/amd64

ghcr.io/ggml-org/llama.cpp:full-b6746 - China mirror download source

This is a Docker container image for the llama.cpp project. llama.cpp is an open-source project for running large language models (LLMs), such as LLaMA, on CPUs and GPUs.

Source image:      ghcr.io/ggml-org/llama.cpp:full-b6746
CN mirror:         swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746
Image ID:          sha256:3d0641d893c999fa7e2bf9722965d216979909af4905699d4eb56b37ec746704
Image tag:         full-b6746
Size:              2.06GB
Registry:          ghcr.io
CMD:               (none)
Entrypoint:        /app/tools.sh
Working directory: /app
OS/Platform:       linux/amd64
Image created:     2025-10-13T04:23:23.432279451Z
Synced:            2025-10-14 17:12
Updated:           2025-11-05 14:36
Environment variables:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Image labels:
org.opencontainers.image.ref.name=ubuntu
org.opencontainers.image.version=22.04

Docker pull commands

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746  ghcr.io/ggml-org/llama.cpp:full-b6746
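After pulling and retagging, manifests can keep referencing the upstream image name. A minimal Deployment sketch (the resource name, labels, and replica count are hypothetical; `imagePullPolicy: IfNotPresent` makes the node use the locally retagged copy):

```yaml
# deployment.yaml - reference the upstream name after retagging on each node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llama-cpp            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llama-cpp
  template:
    metadata:
      labels:
        app: llama-cpp
    spec:
      containers:
        - name: llama-cpp
          image: ghcr.io/ggml-org/llama.cpp:full-b6746
          imagePullPolicy: IfNotPresent   # use the locally retagged image
```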

Containerd pull commands

ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746
ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746  ghcr.io/ggml-org/llama.cpp:full-b6746

Quick shell replacement command

sed -i 's#ghcr.io/ggml-org/llama.cpp:full-b6746#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746#' deployment.yaml
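The effect of the `sed` replacement above can be checked on a throwaway manifest (the file path here is illustrative):

```shell
# Create a minimal manifest referencing the upstream image
cat > /tmp/deployment-demo.yaml <<'EOF'
    image: ghcr.io/ggml-org/llama.cpp:full-b6746
EOF

# Rewrite the image reference in place to point at the CN mirror
# ('#' is used as the sed delimiter because the image names contain '/')
sed -i 's#ghcr.io/ggml-org/llama.cpp:full-b6746#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746#' /tmp/deployment-demo.yaml

# The manifest now pulls from the mirror
grep 'image:' /tmp/deployment-demo.yaml
```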

Ansible bulk distribution - Docker

#ansible k8s -m shell -a 'docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746 && docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746  ghcr.io/ggml-org/llama.cpp:full-b6746'

Ansible bulk distribution - Containerd

#ansible k8s -m shell -a 'ctr images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746 && ctr images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746  ghcr.io/ggml-org/llama.cpp:full-b6746'
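The ad-hoc commands above can also be kept as a small playbook. A sketch assuming an inventory group named `k8s` and Docker on the nodes (swap the shell lines for the `ctr images` variants on containerd hosts):

```yaml
# distribute-image.yml - pull from the CN mirror and restore the upstream tag
- hosts: k8s
  become: true
  vars:
    mirror: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746
    upstream: ghcr.io/ggml-org/llama.cpp:full-b6746
  tasks:
    - name: Pull image from mirror and retag to the upstream name
      ansible.builtin.shell: |
        docker pull {{ mirror }}
        docker tag {{ mirror }} {{ upstream }}
```

Run with `ansible-playbook distribute-image.yml` against the same inventory the ad-hoc commands target.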

Image build history


# 2025-10-13 12:23:23  0.00B Configure the command run when the container starts
ENTRYPOINT ["/app/tools.sh"]

# 2025-10-13 12:23:23  1.89GB Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y     git     python3     python3-pip     && pip install --upgrade pip setuptools wheel     && pip install -r requirements.txt     && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2025-10-13 12:21:53  0.00B Set the working directory to /app
WORKDIR /app

# 2025-10-13 12:21:53  75.88MB Copy new files or directories into the container
COPY /app/full /app # buildkit

# 2025-10-13 12:21:53  11.38MB Copy new files or directories into the container
COPY /app/lib/ /app # buildkit

# 2025-10-03 12:25:40  6.33MB Run a command and create a new image layer
RUN /bin/sh -c apt-get update     && apt-get install -y libgomp1 curl    && apt autoremove -y     && apt clean -y     && rm -rf /tmp/* /var/tmp/*     && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete     && find /var/cache -type f -delete # buildkit

# 2025-10-01 15:05:10  0.00B
/bin/sh -c #(nop)  CMD ["/bin/bash"]

# 2025-10-01 15:05:09  77.87MB
/bin/sh -c #(nop) ADD file:32d41b6329e8f89fa4ac92ef97c04b7cfd5e90fb74e1509c3e27d7c91195b7c7 in /

# 2025-10-01 15:05:07  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04

# 2025-10-01 15:05:07  0.00B
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu

# 2025-10-01 15:05:07  0.00B
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH

# 2025-10-01 15:05:07  0.00B
/bin/sh -c #(nop)  ARG RELEASE

Image information (docker inspect)

{
    "Id": "sha256:3d0641d893c999fa7e2bf9722965d216979909af4905699d4eb56b37ec746704",
    "RepoTags": [
        "ghcr.io/ggml-org/llama.cpp:full-b6746",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp:full-b6746"
    ],
    "RepoDigests": [
        "ghcr.io/ggml-org/llama.cpp@sha256:77bd2838eda3f97ee75d702a45e08967360169f565dea0f9053c77f15ab27cc4",
        "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/ggml-org/llama.cpp@sha256:022eab9dba62b3fa371d4afc8ce191e7fb77c55e3737bd63ecfbcd3dc28bf425"
    ],
    "Parent": "",
    "Comment": "buildkit.dockerfile.v0",
    "Created": "2025-10-13T04:23:23.432279451Z",
    "Container": "",
    "ContainerConfig": null,
    "DockerVersion": "",
    "Author": "",
    "Config": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "/app",
        "Entrypoint": [
            "/app/tools.sh"
        ],
        "OnBuild": null,
        "Labels": {
            "org.opencontainers.image.ref.name": "ubuntu",
            "org.opencontainers.image.version": "22.04"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 2056831093,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/d2a56b4fe56c7ca5dfec3d72382ed2e2f352bec5c9f5b600cf6d569d0bb72d68/diff:/var/lib/docker/overlay2/36c5266d58171375fc5763d962a917b72c442f4509470bb5d4cb9b9f691443de/diff:/var/lib/docker/overlay2/31ce48700627431b0baa625c282b129eba14569831544469307a6259d82f6f23/diff:/var/lib/docker/overlay2/159b64eb38b3799e710a884e242467825ae414d9c82acc8ccbda301afc0dec96/diff:/var/lib/docker/overlay2/99a8a7af45ffa1dc430375fde8c3084ee85be5839f687d98d2857fb82cd37c67/diff",
            "MergedDir": "/var/lib/docker/overlay2/b6c1797c88877681f1979026f6ccba6b439e778ac183dc32c3d1aa267e4567a4/merged",
            "UpperDir": "/var/lib/docker/overlay2/b6c1797c88877681f1979026f6ccba6b439e778ac183dc32c3d1aa267e4567a4/diff",
            "WorkDir": "/var/lib/docker/overlay2/b6c1797c88877681f1979026f6ccba6b439e778ac183dc32c3d1aa267e4567a4/work"
        },
        "Name": "overlay2"
    },
    "RootFS": {
        "Type": "layers",
        "Layers": [
            "sha256:767e56ba346ae714b6e6b816baa839051145ed78cfa0e4524a86cc287b0c4b00",
            "sha256:e25ef44164b26bac3614d599fad13a24d8f8f99b8a7b26aa4d6b3941b3dada5a",
            "sha256:6ef171daa4ac8dfb537b563f652fdd8e9c7366a916061b1a5ec11a57bfccb7fc",
            "sha256:e14ef1929e508387559876de434c84385e8f2cf2abbcfc07adebcb59dc2ff9d7",
            "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
            "sha256:25fa658cb624aa6d60a3c8ab18310f4d6814f2e267edf00b4438a26de86c95cb"
        ]
    },
    "Metadata": {
        "LastTagTime": "2025-10-14T17:10:33.153764295+08:00"
    }
}

More versions

Image                                                 Platform     Registry   Size      Synced
ghcr.io/ggml-org/llama.cpp:full                       linux/amd64  ghcr.io    1.96GB    2025-03-17 14:48
ghcr.io/ggml-org/llama.cpp:full-cuda                  linux/amd64  ghcr.io    5.05GB    2025-03-18 10:58
ghcr.io/ggml-org/llama.cpp:server                     linux/amd64  ghcr.io    96.62MB   2025-05-02 00:26
ghcr.io/ggml-org/llama.cpp:server-cuda                linux/amd64  ghcr.io    2.57GB    2025-06-14 16:26
ghcr.io/ggml-org/llama.cpp:server-cuda-b6006          linux/amd64  ghcr.io    2.58GB    2025-07-28 15:06
ghcr.io/ggml-org/llama.cpp:server-musa-b6189          linux/amd64  ghcr.io    4.44GB    2025-08-18 19:58
ghcr.io/ggml-org/llama.cpp:server-musa-b6375          linux/amd64  ghcr.io    4.45GB    2025-09-04 16:53
ghcr.io/ggml-org/llama.cpp:server-vulkan              linux/amd64  ghcr.io    480.55MB  2025-09-04 17:34
ghcr.io/ggml-org/llama.cpp:server-cuda-b6485          linux/amd64  ghcr.io    2.63GB    2025-09-16 16:27
ghcr.io/ggml-org/llama.cpp:server-musa-b6571          linux/amd64  ghcr.io    4.45GB    2025-09-28 14:58
ghcr.io/ggml-org/llama.cpp:server-cuda-b6725          linux/amd64  ghcr.io    2.64GB    2025-10-10 16:46
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda        linux/amd64  docker.io  5.01GB    2025-10-13 17:40
docker.io/ghcr.io/ggml-org/llama.cpp:full-cuda-b6746  linux/amd64  docker.io  5.01GB    2025-10-13 17:42
ghcr.io/ggml-org/llama.cpp:full-cuda-b6746            linux/amd64  ghcr.io    5.01GB    2025-10-13 18:03
ghcr.io/ggml-org/llama.cpp:full-b6746                 linux/amd64  ghcr.io    2.06GB    2025-10-14 17:12
ghcr.io/ggml-org/llama.cpp:full-cuda-b6823            linux/amd64  ghcr.io    5.05GB    2025-10-23 14:36
ghcr.io/ggml-org/llama.cpp:server-cuda-b6795          linux/amd64  ghcr.io    2.69GB    2025-10-30 17:31