Subsections of Proxy
DaoCloud Binary
Usage
Prepend files.m.daocloud.io to the original URL. For example:
# Original Helm download URL
wget https://get.helm.sh/helm-v3.9.1-linux-amd64.tar.gz
# Accelerated URL
wget https://files.m.daocloud.io/get.helm.sh/helm-v3.9.1-linux-amd64.tar.gz
That is all it takes. If the requested file has not been cached yet, the download will block until caching completes; subsequent downloads have no bandwidth limit.
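The rewrite rule can be captured in a small shell helper (a hypothetical function for illustration only, not provided by the service):

```shell
# Hypothetical helper: rewrite a download URL to its accelerated form
# by stripping the scheme and prepending the files.m.daocloud.io mirror host.
accel() {
  local url="$1"
  printf 'https://files.m.daocloud.io/%s\n' "${url#*://}"
}

accel "https://get.helm.sh/helm-v3.9.1-linux-amd64.tar.gz"
# prints: https://files.m.daocloud.io/get.helm.sh/helm-v3.9.1-linux-amd64.tar.gz
```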
Best Practices
Scenario 1 - Installing Helm
cd /tmp
export HELM_VERSION="v3.9.3"
wget "https://files.m.daocloud.io/get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz"
tar -zxvf helm-${HELM_VERSION}-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version
Scenario 2 - Installing Kubespray
Add the following configuration:
files_repo: "https://files.m.daocloud.io"
## Kubernetes components
kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
kubelet_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
## CNI Plugins
cni_download_url: "{{ files_repo }}/github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
## cri-tools
crictl_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
## [Optional] etcd: only if you **DON'T** use etcd_deployment=host
etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] Calico: If using Calico network plugin
calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
calicoctl_alternate_download_url: "{{ files_repo }}/github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
# [Optional] Flannel: If using Flannel network plugin
flannel_cni_download_url: "{{ files_repo }}/kubernetes/flannel/{{ flannel_cni_version }}/flannel-{{ image_arch }}"
# [Optional] helm: only if you set helm_enabled: true
helm_download_url: "{{ files_repo }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] crun: only if you set crun_enabled: true
crun_download_url: "{{ files_repo }}/github.com/containers/crun/releases/download/{{ crun_version }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
# [Optional] kata: only if you set kata_containers_enabled: true
kata_containers_download_url: "{{ files_repo }}/github.com/kata-containers/kata-containers/releases/download/{{ kata_containers_version }}/kata-static-{{ kata_containers_version }}-{{ ansible_architecture }}.tar.xz"
# [Optional] cri-dockerd: only if you set container_manager: docker
cri_dockerd_download_url: "{{ files_repo }}/github.com/Mirantis/cri-dockerd/releases/download/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
# [Optional] runc,containerd: only if you set container_runtime: containerd
runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
containerd_download_url: "{{ files_repo }}/github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
nerdctl_download_url: "{{ files_repo }}/github.com/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
In a real test, the download speed reached Downloaded: 19 files, 603M in 23s (25.9 MB/s), so all files can be fetched within 23 seconds!
For the complete procedure, see https://gist.github.com/yankay/a863cf2e300bff6f9040ab1c6c58fbae
Scenario 3 - Installing KIND
cd /tmp
export KIND_VERSION="v0.22.0"
curl -Lo ./kind https://files.m.daocloud.io/github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/bin/kind
kind version
Scenario 4 - Installing K9S
cd /tmp
export K9S_VERSION="v0.32.4"
wget https://files.m.daocloud.io/github.com/derailed/k9s/releases/download/${K9S_VERSION}/k9s_Linux_x86_64.tar.gz
tar -zxvf k9s_Linux_x86_64.tar.gz
chmod +x k9s
mv k9s /usr/bin/k9s
k9s version
Scenario 5 - Installing Istio
cd /tmp
export ISTIO_VERSION="1.14.3"
wget "https://files.m.daocloud.io/github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-linux-amd64.tar.gz"
tar -zxvf istio-${ISTIO_VERSION}-linux-amd64.tar.gz
# Follow the Istio docs to install Istio
Scenario 6 - Installing nerdctl (a replacement for the docker CLI)
This installs as root; for other installation methods, see the upstream project: https://github.com/containerd/nerdctl
export NERDCTL_VERSION="1.7.6"
mkdir -p nerdctl; cd nerdctl
wget https://files.m.daocloud.io/github.com/containerd/nerdctl/releases/download/v${NERDCTL_VERSION}/nerdctl-full-${NERDCTL_VERSION}-linux-amd64.tar.gz
tar -zvxf nerdctl-full-${NERDCTL_VERSION}-linux-amd64.tar.gz
mkdir -p /opt/cni/bin; cp -f libexec/cni/* /opt/cni/bin/; cp bin/* /usr/local/bin/; cp lib/systemd/system/*.service /usr/lib/systemd/system/
systemctl enable --now containerd
systemctl enable --now buildkit
Contributions of more scenarios are welcome!
Suffixes blocked from acceleration
Requests for files with the following suffixes receive a 403 response directly:
- .bmp
- .jpg
- .jpeg
- .png
- .gif
- .webp
- .tiff
- .mp4
- .webm
- .ogg
- .avi
- .mov
- .flv
- .mkv
- .mp3
- .wav
- .rar
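The suffix list above can be checked locally before sending a request; a minimal sketch (the helper function is hypothetical, not part of the service):

```shell
# Hypothetical helper: return success (0) if the filename ends in one of
# the suffixes the mirror rejects with 403, failure (1) otherwise.
is_blocked() {
  case "$1" in
    *.bmp|*.jpg|*.jpeg|*.png|*.gif|*.webp|*.tiff) return 0 ;;  # images
    *.mp4|*.webm|*.ogg|*.avi|*.mov|*.flv|*.mkv|*.mp3|*.wav) return 0 ;;  # audio/video
    *.rar) return 0 ;;  # archives
    *) return 1 ;;
  esac
}

is_blocked "logo.png" && echo "would get 403"
is_blocked "helm-v3.9.1-linux-amd64.tar.gz" || echo "ok to fetch"
```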
DaoCloud Image
Quick Start
docker run -d -P m.daocloud.io/docker.io/library/nginx
Usage
Add a prefix (recommended). For example:
docker.io/library/busybox
|
V
m.daocloud.io/docker.io/library/busybox
Alternatively, for the supported registries you can use prefix replacement. For example:
docker.io/library/busybox
|
V
docker.m.daocloud.io/library/busybox
Cache misses
If DaoCloud has no cache of the image at pull time, a caching task is added to the sync queue.
Registries supporting prefix replacement (not recommended)
Adding the prefix is the recommended approach.
The prefix-replacement registry rules are configured manually; open an issue if you need another registry.
| Source | Replacement | Notes |
|---|---|---|
| docker.elastic.co | elastic.m.daocloud.io | |
| docker.io | docker.m.daocloud.io | |
| gcr.io | gcr.m.daocloud.io | |
| ghcr.io | ghcr.m.daocloud.io | |
| k8s.gcr.io | k8s-gcr.m.daocloud.io | k8s.gcr.io has been migrated to registry.k8s.io |
| registry.k8s.io | k8s.m.daocloud.io | |
| mcr.microsoft.com | mcr.m.daocloud.io | |
| nvcr.io | nvcr.m.daocloud.io | |
| quay.io | quay.m.daocloud.io | |
| registry.ollama.ai | ollama.m.daocloud.io | |
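The table above can be expressed as a small lookup function (a sketch; the mappings are copied from the table, while the function name and the fallback to the generic prefix form are assumptions for illustration):

```shell
# Sketch: map a source registry to its prefix-replacement mirror host,
# falling back to the generic add-a-prefix form for anything else.
mirror_of() {
  case "$1" in
    docker.elastic.co)  echo "elastic.m.daocloud.io" ;;
    docker.io)          echo "docker.m.daocloud.io" ;;
    gcr.io)             echo "gcr.m.daocloud.io" ;;
    ghcr.io)            echo "ghcr.m.daocloud.io" ;;
    k8s.gcr.io)         echo "k8s-gcr.m.daocloud.io" ;;
    registry.k8s.io)    echo "k8s.m.daocloud.io" ;;
    mcr.microsoft.com)  echo "mcr.m.daocloud.io" ;;
    nvcr.io)            echo "nvcr.m.daocloud.io" ;;
    quay.io)            echo "quay.m.daocloud.io" ;;
    registry.ollama.ai) echo "ollama.m.daocloud.io" ;;
    *)                  echo "m.daocloud.io/$1" ;;  # generic prefix form
  esac
}

mirror_of "docker.io"    # prints: docker.m.daocloud.io
mirror_of "example.org"  # prints: m.daocloud.io/example.org
```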
Best Practices
Accelerating Kubernetes
Accelerating kubeadm installation
kubeadm config images pull --image-repository k8s-gcr.m.daocloud.io
Accelerating kind installation
kind create cluster --name kind --image m.daocloud.io/docker.io/kindest/node:v1.22.1
Accelerating Containerd
- See the official containerd documentation: hosts.md
- If you install containerd with Kubespray, you can configure containerd_registries_mirrors
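For plain containerd (without Kubespray), the hosts.md scheme boils down to one hosts.toml file per upstream registry. A minimal sketch, assuming containerd is configured with the default config_path of /etc/containerd/certs.d (run as root):

```shell
# Sketch: route docker.io pulls through the mirror using containerd's
# hosts.md layout (assumes config_path = /etc/containerd/certs.d).
sudo mkdir -p /etc/containerd/certs.d/docker.io
sudo tee /etc/containerd/certs.d/docker.io/hosts.toml >/dev/null <<'EOF'
server = "https://docker.io"

[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF
```

No containerd restart is needed for changes under certs.d; the files are read per request.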
Accelerating Docker
Add to /etc/docker/daemon.json:
{
"registry-mirrors": [
"https://docker.m.daocloud.io"
]
}
Then restart Docker (e.g. systemctl restart docker) for the mirror to take effect.
Accelerating Ollama & DeepSeek
Accelerating Ollama installation
CPU:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.m.daocloud.io/ollama/ollama
GPU version:
- First install the NVIDIA Container Toolkit
- Run the following command to start the Ollama container:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.m.daocloud.io/ollama/ollama
Accelerating the DeepSeek-R1 model
With the Ollama container running as described above, you can also use the accelerated source to launch DeepSeek model services.
Note: the official Ollama source is already quite fast, so you can also use it directly.
# Use the accelerated source
docker exec -it ollama ollama run ollama.m.daocloud.io/library/deepseek-r1:1.5b
# Or download the model directly from the official source
# docker exec -it ollama ollama run deepseek-r1:1.5b
KubeVPN
1. Install krew
- Download and install krew
- Add the $HOME/.krew/bin directory to your PATH environment variable:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
- Run kubectl krew to check the installation:
kubectl krew list
2. Download kubevpn from its GitHub source
kubectl krew index add kubevpn https://gitclone.com/github.com/kubenetworks/kubevpn.git
kubectl krew install kubevpn/kubevpn
kubectl kubevpn
3. Deploy the VPN in a cluster
Use a different kubeconfig to access a different cluster and deploy the VPN in that Kubernetes cluster.
kubectl kubevpn connect
Your terminal should look like this:
➜ ~ kubectl kubevpn connect
Password:
Starting connect
Getting network CIDR from cluster info...
Getting network CIDR from CNI...
Getting network CIDR from services...
Labeling Namespace default
Creating ServiceAccount kubevpn-traffic-manager
Creating Roles kubevpn-traffic-manager
Creating RoleBinding kubevpn-traffic-manager
Creating Service kubevpn-traffic-manager
Creating MutatingWebhookConfiguration kubevpn-traffic-manager
Creating Deployment kubevpn-traffic-manager
Pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
Container Reason Message
control-plane ContainerCreating
vpn ContainerCreating
webhook ContainerCreating
Pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
Container Reason Message
control-plane ContainerRunning
vpn ContainerRunning
webhook ContainerRunning
Forwarding port...
Connected tunnel
Adding route...
Configured DNS service
+----------------------------------------------------------+
| Now you can access resources in the kubernetes cluster ! |
+----------------------------------------------------------+
Now connected to the cluster network; use kubectl kubevpn status to check the status:
➜ ~ kubectl kubevpn status
ID Mode Cluster Kubeconfig Namespace Status Netif
0 full ops-dev /root/.kube/zverse_config data-and-computing Connected utun0
Use the IP 172.29.2.134 of pod productpage-788df7ff7f-jpkcs:
➜ ~ kubectl get pods -o wide
NAME AGE IP NODE NOMINATED NODE READINESS GATES
authors-dbb57d856-mbgqk 7d23h 172.29.2.132 192.168.0.5 <none>
details-7d8b5f6bcf-hcl4t 61d 172.29.0.77 192.168.104.255 <none>
kubevpn-traffic-manager-66d969fd45-9zlbp 74s 172.29.2.136 192.168.0.5 <none>
productpage-788df7ff7f-jpkcs 61d 172.29.2.134 192.168.0.5 <none>
ratings-77b6cd4499-zvl6c 61d 172.29.0.86 192.168.104.255 <none>
reviews-85c88894d9-vgkxd 24d 172.29.2.249 192.168.0.5 <none>
Use ping to test the connection; it works:
➜ ~ ping 172.29.2.134
PING 172.29.2.134 (172.29.2.134): 56 data bytes
64 bytes from 172.29.2.134: icmp_seq=0 ttl=63 time=55.727 ms
64 bytes from 172.29.2.134: icmp_seq=1 ttl=63 time=56.270 ms
64 bytes from 172.29.2.134: icmp_seq=2 ttl=63 time=55.228 ms
64 bytes from 172.29.2.134: icmp_seq=3 ttl=63 time=54.293 ms
^C
--- 172.29.2.134 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 54.293/55.380/56.270/0.728 ms
Use the productpage service IP 172.21.10.49:
➜ ~ kubectl get services -o wide
NAME TYPE CLUSTER-IP PORT(S) SELECTOR
authors ClusterIP 172.21.5.160 9080/TCP app=authors
details ClusterIP 172.21.6.183 9080/TCP app=details
kubernetes ClusterIP 172.21.0.1 443/TCP <none>
kubevpn-traffic-manager ClusterIP 172.21.2.86 84xxxxxx0/TCP app=kubevpn-traffic-manager
productpage ClusterIP 172.21.10.49 9080/TCP app=productpage
ratings ClusterIP 172.21.3.247 9080/TCP app=ratings
reviews ClusterIP 172.21.8.24 9080/TCP app=reviews
Use curl to test the service connection:
➜ ~ curl 172.21.10.49:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
It works too!