OpenClaw

🚀Installation

Install by ArgoCD

Preliminary

1. Kubernetes has been installed; if not, check 🔗link
2. Helm has been installed; if not, check 🔗link
3. ArgoCD has been installed; if not, check 🔗link

1. Prepare `openclaw-env-secret.yaml`

Details
kubectl get namespaces claw > /dev/null 2>&1 || kubectl create namespace claw
kubectl create secret generic openclaw-env-secret -n claw \
  --from-literal=ANTHROPIC_API_KEY=<your-anthropic-api-key> \
  --from-literal=OPENCLAW_GATEWAY_TOKEN=<your-gateway-token>
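The gateway token is an opaque shared secret. A sketch for generating a high-entropy value (assuming OpenClaw accepts any sufficiently random string; check its documentation for exact requirements):

```shell
# Generate a 32-byte token, hex-encoded to 64 characters, suitable for
# OPENCLAW_GATEWAY_TOKEN (assumption: OpenClaw accepts any opaque string)
openssl rand -hex 32
```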

2. Prepare `deploy-openclaw.yaml`

Details
# Quote the heredoc delimiter so the ${...} placeholders in openclaw.json reach the cluster unexpanded
kubectl -n argocd apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openclaw
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://serhanekicii.github.io/openclaw-helm
    chart: openclaw
    targetRevision: 1.4.4
    helm:
      releaseName: openclaw
      values: |
        app-template:
          openclawVersion: "2026.2.23"
          chromiumVersion: "124"
          configMode: merge
          controllers:
            main:
              containers:
                main:
                  image:
                    repository: ghcr.io/openclaw/openclaw
                    tag: "2026.2.23"
                    pullPolicy: IfNotPresent
                  envFrom:
                    - secretRef:
                        name: openclaw-env-secret
                  args:
                    - "gateway"
                    - "--bind"
                    - "lan"
                    - "--port"
                    - "18789"
                    - "--allow-unconfigured"
                  resources:
                    requests:
                      cpu: 200m
                      memory: 512Mi
                    limits:
                      cpu: 2000m
                      memory: 2Gi
                chromium:
                  image:
                    repository: zenika/alpine-chrome
                    tag: "124"
                    pullPolicy: IfNotPresent
                  args:
                    - "--no-sandbox"
                    - "--disable-dev-shm-usage"
                    - "--remote-debugging-address=0.0.0.0"
                    - "--remote-debugging-port=9222"
                  resources:
                    requests:
                      cpu: 200m
                      memory: 512Mi
                    limits:
                      cpu: 2000m
                      memory: 2Gi
          persistence:
            data:
              enabled: true
              type: persistentVolumeClaim
              accessMode: ReadWriteOnce
              size: 10Gi
              globalMounts:
                - path: /root/.openclaw
          ingress:
            main:
              enabled: true
              className: nginx
              annotations:
                kubernetes.io/ingress.class: nginx
                cert-manager.io/cluster-issuer: letsencrypt
                nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
                nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
                nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
                nginx.ingress.kubernetes.io/proxy-body-size: "50m"
                nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
                nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
              hosts:
                - host: openclaw.72602.online
                  paths:
                    - path: /
                      pathType: Prefix
                      service:
                        identifier: main
                        port: http
              tls:
                - secretName: openclaw-tls
                  hosts:
                    - openclaw.72602.online
          configMaps:
            config:
              data:
                openclaw.json: |
                  {
                    gateway: {
                      controlUi: {
                        allowedOrigins: ["https://openclaw.72602.online"],
                        dangerouslyAllowHostHeaderOriginFallback: true,
                      },
                    },
                    browser: {
                      gatewayToken: "${OPENCLAW_GATEWAY_TOKEN}",
                    },
                    agents: {
                      main: {
                        brain: {
                          provider: "anthropic",
                          model: "claude-sonnet-4-20250514",
                          apiKey: "${ANTHROPIC_API_KEY}",
                        },
                      },
                    },
                  }
  destination:
    server: https://kubernetes.default.svc
    namespace: claw
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=false
EOF
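Note that the `values` block embeds `${ANTHROPIC_API_KEY}` and `${OPENCLAW_GATEWAY_TOKEN}` placeholders. With an unquoted heredoc delimiter, the local shell would expand them (usually to empty strings) before `kubectl` ever sees the manifest; quoting the delimiter as `<<'EOF'` preserves them verbatim. A minimal demonstration of the difference:

```shell
TOKEN_NAME=example

# Unquoted delimiter: the local shell expands ${...} immediately
cat <<EOF
expanded: ${TOKEN_NAME}
EOF

# Quoted delimiter: the placeholder survives verbatim, left for the
# application to expand from its environment at runtime
cat <<'EOF'
literal: ${TOKEN_NAME}
EOF
```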

3. Sync via ArgoCD

Details
argocd app sync argocd/openclaw
Using AY Helm Mirror

For more information, check 🔗https://github.com/AaronYang0628/helm-chart-mirror

helm repo add ay-helm-mirror https://aaronyang0628.github.io/helm-chart-mirror/charts
helm repo update
helm install ay-helm-mirror/<chart-name> --generate-name --version <x.y.z>
Using AY ACR Image Mirror
Using DaoCloud Mirror

Preliminary

1. Kubernetes has been installed; if not, check 🔗link
2. Helm has been installed; if not, check 🔗link
3. ArgoCD has been installed; if not, check 🔗link
4. PostgreSQL has been installed; if not, check 🔗link

1. Prepare the `n8n-middleware-credential` secret

Details
kubectl get namespaces n8n > /dev/null 2>&1 || kubectl create namespace n8n
N8N_PASSWORD=$(kubectl -n database get secret postgresql-credentials -o jsonpath='{.data.password}' | base64 -d)
kubectl -n n8n create secret generic n8n-middleware-credential \
--from-literal=postgres-password="${N8N_PASSWORD}"
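The extraction pipeline above relies on Secret values being stored base64-encoded: `jsonpath` returns the raw encoded string, which must be decoded before reuse. The decode step can be sketched locally, without a cluster:

```shell
# Kubernetes stores Secret data base64-encoded; decoding recovers the
# original value exactly (demo password "hunter2" is purely illustrative)
ENCODED=$(printf 'hunter2' | base64)
printf '%s' "$ENCODED" | base64 -d   # prints the original value: hunter2
```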

2. Prepare `deploy-n8n.yaml`

Details
kubectl -n argocd apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: n8n
spec:
  project: default
  source:
    repoURL: https://community-charts.github.io/helm-charts
    targetRevision: 1.16.22
    helm:
      releaseName: n8n
      values: |
        global:
          security:
            allowInsecureImages: true
        image:
          repository: n8nio/n8n
        log:
          level: info
        encryptionKey: "ay-dev-n8n"
        timezone: Asia/Shanghai
        db:
          type: postgresdb
        externalPostgresql:
          host: postgresql-hl.database.svc.cluster.local
          port: 5432
          username: "n8n"
          database: "n8n"
          existingSecret: "n8n-middleware-credential"
        main:
          count: 1
          extraEnvVars:
            "N8N_BLOCK_ENV_ACCESS_IN_NODE": "false"
            "N8N_FILE_SYSTEM_ALLOWED_PATHS": "/home/node/.n8n-files"
            "EXECUTIONS_TIMEOUT": "300"
            "EXECUTIONS_TIMEOUT_MAX": "600"
            "DB_POSTGRESDB_POOL_SIZE": "10"
            "CACHE_ENABLED": "true"
            "N8N_CONCURRENCY_PRODUCTION_LIMIT": "5"
            "NODE_TLS_REJECT_UNAUTHORIZED": "0"
            "N8N_SECURE_COOKIE": "false"
            "WEBHOOK_URL": "https://webhook.n8n.ay.dev"
            "QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD": "60000"
            "N8N_COMMUNITY_PACKAGES_ENABLED": "true"
            "N8N_GIT_NODE_DISABLE_BARE_REPOS": "true"
            "N8N_LICENSE_AUTO_RENEW_ENABLED": "true"
            "N8N_LICENSE_RENEW_ON_INIT": "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /home/aaron/Downloads
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /home/node/.n8n-files
          resources:
            requests:
              cpu: 1000m
              memory: 1024Mi
            limits:
              cpu: 2000m
              memory: 2048Mi
        worker:
          mode: queue
          count: 2
          waitMainNodeReady:
            enabled: false
          extraEnvVars:
            "N8N_FILE_SYSTEM_ALLOWED_PATHS": "/home/node/.n8n-files"
            "EXECUTIONS_TIMEOUT": "300"
            "EXECUTIONS_TIMEOUT_MAX": "600"
            "DB_POSTGRESDB_POOL_SIZE": "5"
            "QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD": "60000"
            "N8N_COMMUNITY_PACKAGES_ENABLED": "true"
            "N8N_GIT_NODE_DISABLE_BARE_REPOS": "true"
            "N8N_LICENSE_AUTO_RENEW_ENABLED": "true"
            "N8N_LICENSE_RENEW_ON_INIT": "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /home/aaron/Downloads
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /home/node/.n8n-files
          resources:
            requests:
              cpu: 500m
              memory: 1024Mi
            limits:
              cpu: 1000m
              memory: 2048Mi
        nodes:
          builtin:
            enabled: true
            modules:
              - crypto
              - fs
          external:
            allowAll: true
            packages:
              - n8n-nodes-globals
        npmRegistry:
          enabled: true
          url: http://mirrors.cloud.tencent.com/npm/
        redis:
          enabled: true
          image:
            registry: m.daocloud.io/docker.io
            repository: bitnamilegacy/redis
          master:
            resourcesPreset: "small"
            persistence:
              enabled: true
              accessMode: ReadWriteOnce
              storageClass: "local-path"
              size: 10Gi
        ingress:
          enabled: true
          className: nginx
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: self-signed-ca-issuer
            nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-body-size: "50m"
            nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
            nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
          hosts:
            - host: n8n.ay.dev
              paths:
                - path: /
                  pathType: Prefix
          tls:
          - hosts:
            - n8n.ay.dev
            - webhook.n8n.ay.dev
            secretName: n8n.ay.dev-tls
        webhook:
          mode: queue
          url: "https://webhook.n8n.ay.dev"
          autoscaling:
            enabled: false
          waitMainNodeReady:
            enabled: true
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 512m
              memory: 512Mi
    chart: n8n
  destination:
    server: https://kubernetes.default.svc
    namespace: n8n
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=false
EOF

3. Sync via ArgoCD

Details
argocd app sync argocd/n8n
