Subsections of Application

Datahub

Preliminary

  • Kubernetes has been installed; if not, check 🔗link
  • ArgoCD has been installed; if not, check 🔗link
  • Elasticsearch has been installed; if not, check 🔗link
  • MariaDB has been installed; if not, check 🔗link
  • Kafka has been installed; if not, check 🔗link

Steps

1. prepare datahub credentials secret

Create the secret with one of the two commands below. Without Kafka SASL authentication:

kubectl -n application \
    create secret generic datahub-credentials \
    --from-literal=mysql-root-password="$(kubectl get secret mariadb-credentials --namespace database -o jsonpath='{.data.mariadb-root-password}' | base64 -d)"

With Kafka SASL (SCRAM-SHA-256) authentication, also embed the Kafka client properties:

kubectl -n application \
    create secret generic datahub-credentials \
    --from-literal=mysql-root-password="$(kubectl get secret mariadb-credentials --namespace database -o jsonpath='{.data.mariadb-root-password}' | base64 -d)" \
    --from-literal=security.protocol="SASL_PLAINTEXT" \
    --from-literal=sasl.mechanism="SCRAM-SHA-256" \
    --from-literal=sasl.jaas.config="org.apache.kafka.common.security.scram.ScramLoginModule required username=\"user1\" password=\"$(kubectl get secret kafka-user-passwords --namespace database -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)\";"
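The JAAS line above extracts the first client password from the kafka-user-passwords secret. A minimal sketch of that base64/cut pipeline, using made-up passwords in place of the Secret data, shows what each stage yields:

```shell
# kafka-user-passwords stores a comma-separated password list, base64-encoded
# inside the Secret; base64 -d decodes it and cut -d , -f 1 keeps the first entry.
encoded=$(printf 'alpha-pass,beta-pass' | base64)   # stand-in for the Secret data
first=$(printf '%s' "$encoded" | base64 -d | cut -d , -f 1)
echo "$first"   # alpha-pass
```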

2. prepare deploy-datahub.yaml

Without Kafka SASL authentication and Neo4j:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: datahub
spec:
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
  project: default
  source:
    repoURL: https://helm.datahubproject.io
    chart: datahub
    targetRevision: 0.4.8
    helm:
      releaseName: datahub
      values: |
        global:
          elasticsearch:
            host: elastic-search-elasticsearch.application.svc.cluster.local
            port: 9200
            skipcheck: "false"
            insecure: "false"
            useSSL: "false"
          kafka:
            bootstrap:
              server: kafka.database.svc.cluster.local:9092
            zookeeper:
              server: kafka-zookeeper.database.svc.cluster.local:2181
          sql:
            datasource:
              host: mariadb.database.svc.cluster.local:3306
              hostForMysqlClient: mariadb.database.svc.cluster.local
              port: 3306
              url: jdbc:mysql://mariadb.database.svc.cluster.local:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2
              driver: com.mysql.cj.jdbc.Driver
              username: root
              password:
                secretRef: datahub-credentials
                secretKey: mysql-root-password
        datahub-gms:
          enabled: true
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-gms
          service:
            type: ClusterIP
          ingress:
            enabled: false
        datahub-frontend:
          enabled: true
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-frontend-react
          defaultUserCredentials:
            randomAdminPassword: true
          service:
            type: ClusterIP
          ingress:
            enabled: true
            className: nginx
            annotations:
              cert-manager.io/cluster-issuer: self-signed-ca-issuer
            hosts:
            - host: datahub.dev.geekcity.tech
              paths:
              - /
            tls:
            - secretName: "datahub.dev.geekcity.tech-tls"
              hosts:
              - datahub.dev.geekcity.tech
        acryl-datahub-actions:
          enabled: true
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-actions
        datahub-mae-consumer:
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-mae-consumer
          ingress:
            enabled: false
        datahub-mce-consumer:
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-mce-consumer
          ingress:
            enabled: false
        datahub-ingestion-cron:
          enabled: false
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-ingestion
        elasticsearchSetupJob:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-elasticsearch-setup
        kafkaSetupJob:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-kafka-setup
        mysqlSetupJob:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-mysql-setup
        postgresqlSetupJob:
          enabled: false
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-postgres-setup
        datahubUpgrade:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-upgrade
        datahubSystemUpdate:
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-upgrade
  destination:
    server: https://kubernetes.default.svc
    namespace: application

With Kafka SASL (SCRAM-SHA-256) authentication and Neo4j as the graph backend:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: datahub
spec:
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
  project: default
  source:
    repoURL: https://helm.datahubproject.io
    chart: datahub
    targetRevision: 0.4.8
    helm:
      releaseName: datahub
      values: |
        global:
          springKafkaConfigurationOverrides:
            security.protocol: SASL_PLAINTEXT
            sasl.mechanism: SCRAM-SHA-256
          credentialsAndCertsSecrets:
            name: datahub-credentials
            secureEnv:
              sasl.jaas.config: sasl.jaas.config
          elasticsearch:
            host: elastic-search-elasticsearch.application.svc.cluster.local
            port: 9200
            skipcheck: "false"
            insecure: "false"
            useSSL: "false"
          kafka:
            bootstrap:
              server: kafka.database.svc.cluster.local:9092
            zookeeper:
              server: kafka-zookeeper.database.svc.cluster.local:2181
          neo4j:
            host: neo4j.database.svc.cluster.local:7474
            uri: bolt://neo4j.database.svc.cluster.local
            username: neo4j
            password:
              secretRef: datahub-credentials
              secretKey: neo4j-password
          sql:
            datasource:
              host: mariadb.database.svc.cluster.local:3306
              hostForMysqlClient: mariadb.database.svc.cluster.local
              port: 3306
              url: jdbc:mysql://mariadb.database.svc.cluster.local:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2
              driver: com.mysql.cj.jdbc.Driver
              username: root
              password:
                secretRef: datahub-credentials
                secretKey: mysql-root-password
        datahub-gms:
          enabled: true
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-gms
          service:
            type: ClusterIP
          ingress:
            enabled: false
        datahub-frontend:
          enabled: true
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-frontend-react
          defaultUserCredentials:
            randomAdminPassword: true
          service:
            type: ClusterIP
          ingress:
            enabled: true
            className: nginx
            annotations:
              cert-manager.io/cluster-issuer: self-signed-ca-issuer
            hosts:
            - host: datahub.dev.geekcity.tech
              paths:
              - /
            tls:
            - secretName: "datahub.dev.geekcity.tech-tls"
              hosts:
              - datahub.dev.geekcity.tech
        acryl-datahub-actions:
          enabled: true
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-actions
        datahub-mae-consumer:
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-mae-consumer
          ingress:
            enabled: false
        datahub-mce-consumer:
          replicaCount: 1
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-mce-consumer
          ingress:
            enabled: false
        datahub-ingestion-cron:
          enabled: false
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-ingestion
        elasticsearchSetupJob:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-elasticsearch-setup
        kafkaSetupJob:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-kafka-setup
        mysqlSetupJob:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-mysql-setup
        postgresqlSetupJob:
          enabled: false
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-postgres-setup
        datahubUpgrade:
          enabled: true
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-upgrade
        datahubSystemUpdate:
          image:
            repository: m.daocloud.io/docker.io/acryldata/datahub-upgrade
  destination:
    server: https://kubernetes.default.svc
    namespace: application
To run more than one GMS instance, enable the standalone consumers by adding this under global:

  datahub_standalone_consumers_enabled: true

3. apply to k8s

kubectl -n argocd apply -f deploy-datahub.yaml

4. sync by argocd

argocd app sync argocd/datahub

5. extract credentials

kubectl -n application get secret datahub-user-secret -o jsonpath='{.data.user\.props}' | base64 -d

The dot in the user.props key is escaped as user\.props so that jsonpath does not treat it as a path separator.

[Optional] Visit through browser

add `$K8S_MASTER_IP datahub.dev.geekcity.tech` to `/etc/hosts`
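A sketch of building that hosts entry; the IP below is a placeholder for your actual master/ingress node address:

```shell
# Placeholder IP; substitute the address of your cluster's ingress node.
K8S_MASTER_IP=192.168.1.10
entry="$K8S_MASTER_IP datahub.dev.geekcity.tech"
echo "$entry"   # 192.168.1.10 datahub.dev.geekcity.tech
```

Append the printed line to /etc/hosts, e.g. with `echo "$entry" | sudo tee -a /etc/hosts`.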

[Optional] Visit through DatahubCLI

We recommend Python virtual environments (venv-s) to namespace pip modules. Here’s an example setup:

python3 -m venv venv             # create the environment
source venv/bin/activate         # activate the environment

NOTE: If you install datahub in a virtual environment, that same virtual environment must be re-activated each time a shell window or session is created.

Once inside the virtual environment, install datahub using the following commands:

# Requires Python 3.8+
python3 -m pip install --upgrade pip wheel setuptools
python3 -m pip install --upgrade acryl-datahub
# validate that the install was successful
datahub version
# If you see "command not found", try running this instead: python3 -m datahub version
# authenticate your datahub CLI with your datahub instance
datahub init
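The "command not found" fallback above can be scripted; this sketch picks whichever invocation is available:

```shell
# Prefer the console script if the venv put it on PATH; otherwise fall back
# to invoking the module directly with python3 -m.
if command -v datahub >/dev/null 2>&1; then
  datahub_cmd="datahub"
else
  datahub_cmd="python3 -m datahub"
fi
echo "version check: $datahub_cmd version"
```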
Mar 7, 2024

N8N

🚀Installation

Install By

Preliminary

1. Kubernetes has been installed; if not, check 🔗link

2. Helm has been installed; if not, check 🔗link

3. ArgoCD has been installed; if not, check 🔗link

4. The PostgreSQL database has been installed; if not, check 🔗link


1. prepare `n8n-middleware-credential` secret

kubectl get namespaces n8n > /dev/null 2>&1 || kubectl create namespace n8n
N8N_PASSWORD=$(kubectl -n database get secret postgresql-credentials -o jsonpath='{.data.password}' | base64 -d)
kubectl -n n8n create secret generic n8n-middleware-credential \
--from-literal=postgres-password="${N8N_PASSWORD}"
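The namespace guard above is idempotent: the create only runs when the get fails. The same "check || create" shape, shown with a plain directory so it can run anywhere:

```shell
# "check || create": the right-hand command runs only when the check fails,
# so re-running the line never errors on an already-existing resource.
ns_dir="$(mktemp -d)/n8n"
[ -d "$ns_dir" ] || mkdir "$ns_dir"   # first run: creates the directory
[ -d "$ns_dir" ] || mkdir "$ns_dir"   # second run: check succeeds, mkdir skipped
echo "exists: $ns_dir"
```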

2. prepare `deploy-n8n.yaml`

kubectl -n argocd apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: n8n
spec:
  project: default
  source:
    repoURL: https://community-charts.github.io/helm-charts
    targetRevision: 1.16.30
    helm:
      releaseName: n8n
      values: |
        global:
          security:
            allowInsecureImages: true
        image:
          repository: n8nio/n8n
        log:
          level: info
        encryptionKey: "ay-dev-n8n"
        timezone: Asia/Shanghai
        db:
          type: postgresdb
        externalPostgresql:
          host: postgresql-hl.database.svc.cluster.local
          port: 5432
          username: "n8n"
          database: "n8n"
          existingSecret: "n8n-middleware-credential"
        main:
          count: 1
          extraEnvVars:
            "N8N_BLOCK_ENV_ACCESS_IN_NODE": "false"
            "N8N_FILE_SYSTEM_ALLOWED_PATHS": "/data"
            "EXECUTIONS_TIMEOUT": "300"
            "EXECUTIONS_TIMEOUT_MAX": "600"
            "DB_POSTGRESDB_POOL_SIZE": "10"
            "CACHE_ENABLED": "true"
            "N8N_CONCURRENCY_PRODUCTION_LIMIT": "5"
            "NODE_TLS_REJECT_UNAUTHORIZED": "0"
            "N8N_SECURE_COOKIE": "false"
            "WEBHOOK_URL": "https://webhook.n8n.ay.dev"
            "QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD": "60000"
            "N8N_COMMUNITY_PACKAGES_ENABLED": "true"
            "N8N_GIT_NODE_DISABLE_BARE_REPOS": "true"
            "N8N_LICENSE_AUTO_RENEW_ENABLED": "true"
            "N8N_LICENSE_RENEW_ON_INIT": "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /home/aaron/Downloads
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /data
          resources:
            requests:
              cpu: 1000m
              memory: 1024Mi
            limits:
              cpu: 2000m
              memory: 2048Mi
        worker:
          mode: queue
          count: 2
          waitMainNodeReady:
            enabled: false
          extraEnvVars:
            "N8N_FILE_SYSTEM_ALLOWED_PATHS": "/data"
            "EXECUTIONS_TIMEOUT": "300"
            "EXECUTIONS_TIMEOUT_MAX": "600"
            "DB_POSTGRESDB_POOL_SIZE": "5"
            "QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD": "60000"
            "N8N_COMMUNITY_PACKAGES_ENABLED": "true"
            "N8N_GIT_NODE_DISABLE_BARE_REPOS": "true"
            "N8N_LICENSE_AUTO_RENEW_ENABLED": "true"
            "N8N_LICENSE_RENEW_ON_INIT": "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /home/aaron/Downloads
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /data
          resources:
            requests:
              cpu: 500m
              memory: 1024Mi
            limits:
              cpu: 1000m
              memory: 2048Mi
        nodes:
          builtin:
            enabled: true
            modules:
              - crypto
              - fs
          external:
            allowAll: true
            packages:
              - n8n-nodes-globals
        npmRegistry:
          enabled: true
          url: http://mirrors.cloud.tencent.com/npm/
        redis:
          enabled: true
          image:
            registry: m.daocloud.io/docker.io
            repository: bitnamilegacy/redis
          master:
            resourcesPreset: "small"
            persistence:
              enabled: true
              accessMode: ReadWriteOnce
              storageClass: "local-path"
              size: 10Gi
        ingress:
          enabled: true
          className: nginx
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: self-signed-ca-issuer
            nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-body-size: "50m"
            nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
            nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
          hosts:
            - host: n8n.ay.dev
              paths:
                - path: /
                  pathType: Prefix
          tls:
          - hosts:
            - n8n.ay.dev
            - webhook.n8n.ay.dev
            secretName: n8n.ay.dev-tls
        webhook:
          mode: queue
          url: "https://webhook.n8n.ay.dev"
          autoscaling:
            enabled: false
          waitMainNodeReady:
            enabled: true
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 512m
              memory: 512Mi
    chart: n8n
  destination:
    server: https://kubernetes.default.svc
    namespace: n8n
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=false
EOF

3. sync by argocd

argocd app sync argocd/n8n
Using AY Helm Mirror

For more information, check 🔗https://github.com/AaronYang0628/helm-chart-mirror

helm repo add ay-helm-mirror https://aaronyang0628.github.io/helm-chart-mirror/charts
helm repo update
helm install ay-helm-mirror/chart-name --generate-name --version a.b.c
Using AY ACR Image Mirror
Using DaoCloud Mirror

Preliminary

1. Kubernetes has been installed; if not, check 🔗link

2. Helm has been installed; if not, check 🔗link

3. ArgoCD has been installed; if not, check 🔗link

4. The PostgreSQL database has been installed; if not, check 🔗link


1. prepare `n8n-middleware-credential` secret

kubectl get namespaces n8n > /dev/null 2>&1 || kubectl create namespace n8n
N8N_PASSWORD=$(kubectl -n database get secret postgresql-credentials -o jsonpath='{.data.password}' | base64 -d)
kubectl -n n8n create secret generic n8n-middleware-credential \
--from-literal=postgres-password="${N8N_PASSWORD}"

2. prepare `deploy-n8n.yaml`

kubectl -n argocd apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: n8n
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://community-charts.github.io/helm-charts
    targetRevision: 1.16.30
    chart: n8n
    helm:
      releaseName: n8n
      values: |
        global:
          security:
            allowInsecureImages: true
        image:
          repository: m.daocloud.io/docker.io/n8nio/n8n
        log:
          level: info
        encryptionKey: "72602-n8n"
        timezone: Asia/Shanghai
        db:
          type: postgresdb
        externalPostgresql:
          host: postgresql-hl.database.svc.cluster.local
          port: 5432
          username: "n8n"
          database: "n8n"
          existingSecret: "n8n-middleware-credential"
        main:
          count: 1
          extraEnvVars:
            HTTP_PROXY: "http://47.110.67.161:30890"
            HTTPS_PROXY: "http://47.110.67.161:30890"
            NO_PROXY: "registry.npmjs.org,npmjs.org,npmmirror.com,registry.npmmirror.com"
            no_proxy: "registry.npmjs.org,npmjs.org,npmmirror.com,registry.npmmirror.com"
            NPM_CONFIG_REGISTRY: "https://registry.npmmirror.com"
            N8N_BLOCK_ENV_ACCESS_IN_NODE: "false"
            N8N_FILE_SYSTEM_ALLOWED_PATHS: "/data"
            EXECUTIONS_TIMEOUT: "300"
            EXECUTIONS_TIMEOUT_MAX: "600"
            DB_POSTGRESDB_POOL_SIZE: "10"
            CACHE_ENABLED: "true"
            N8N_CONCURRENCY_PRODUCTION_LIMIT: "5"
            NODE_TLS_REJECT_UNAUTHORIZED: "0"
            N8N_SECURE_COOKIE: "false"
            WEBHOOK_URL: "https://webhook.72602.online"
            QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD: "60000"
            N8N_COMMUNITY_PACKAGES_ENABLED: "true"
            N8N_GIT_NODE_DISABLE_BARE_REPOS: "true"
            N8N_LICENSE_AUTO_RENEW_ENABLED: "true"
            N8N_LICENSE_RENEW_ON_INIT: "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 5Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /mnt/e/N8N_DATA
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /data
          resources:
            requests:
              cpu: 1000m
              memory: 1024Mi
            limits:
              cpu: 2000m
              memory: 2048Mi
        worker:
          mode: queue
          count: 2
          waitMainNodeReady:
            enabled: false
          extraEnvVars:
            HTTP_PROXY: "http://47.110.67.161:30890"
            HTTPS_PROXY: "http://47.110.67.161:30890"
            NO_PROXY: "registry.npmjs.org,npmjs.org,npmmirror.com,registry.npmmirror.com"
            no_proxy: "registry.npmjs.org,npmjs.org,npmmirror.com,registry.npmmirror.com"
            NPM_CONFIG_REGISTRY: "https://registry.npmmirror.com"
            N8N_FILE_SYSTEM_ALLOWED_PATHS: "/data"
            EXECUTIONS_TIMEOUT: "300"
            EXECUTIONS_TIMEOUT_MAX: "600"
            DB_POSTGRESDB_POOL_SIZE: "5"
            QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD: "60000"
            N8N_COMMUNITY_PACKAGES_ENABLED: "true"
            N8N_GIT_NODE_DISABLE_BARE_REPOS: "true"
            N8N_LICENSE_AUTO_RENEW_ENABLED: "true"
            N8N_LICENSE_RENEW_ON_INIT: "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /mnt/e/N8N_DATA
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /data
          resources:
            requests:
              cpu: 500m
              memory: 1024Mi
            limits:
              cpu: 1000m
              memory: 2048Mi
        nodes:
          builtin:
            enabled: true
            modules:
              - crypto
              - fs
          external:
            allowAll: true
            packages:
              - n8n-nodes-globals
              - n8n-nodes-wechat-formatter
        npmRegistry:
          enabled: true
          url: https://registry.npmmirror.com
        redis:
          enabled: true
          image:
            registry: m.daocloud.io/docker.io
            repository: bitnamilegacy/redis
          master:
            resourcesPreset: "small"
            persistence:
              enabled: true
              accessMode: ReadWriteOnce
              storageClass: "local-path"
              size: 50Gi
        ingress:
          enabled: true
          className: nginx
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: letsencrypt
            nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-body-size: "50m"
            nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
            nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
          hosts:
            - host: n8n.72602.online
              paths:
                - path: /
                  pathType: Prefix
          tls:
            - hosts:
                - n8n.72602.online
              secretName: n8n.72602.online-tls
        webhook:
          mode: queue
          url: "https://webhook.72602.online"
          autoscaling:
            enabled: false
          waitMainNodeReady:
            enabled: true
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 512m
              memory: 512Mi
  destination:
    server: https://kubernetes.default.svc
    namespace: n8n
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=false
EOF

3. sync by argocd

argocd app sync argocd/n8n

🛎️FAQ

Q1: Show me almost endless possibilities

You can add standard markdown syntax:

  • multiple paragraphs
  • bullet point lists
  • emphasized, bold and even bold emphasized text
  • links
  • etc.
...and even source code

the possibilities are endless (almost - including other shortcodes may or may not work)


Mar 7, 2024

OpenClaw

🚀Installation

Install By

Preliminary

1. Kubernetes has been installed; if not, check 🔗link

2. Helm has been installed; if not, check 🔗link

3. ArgoCD has been installed; if not, check 🔗link


1. prepare `openclaw-env-secret` secret

kubectl get namespaces claw > /dev/null 2>&1 || kubectl create namespace claw
kubectl create secret generic openclaw-env-secret -n claw \
--from-literal=ANTHROPIC_API_KEY=sk-uMA2rRCqxr5kSnnyD1JGPnzoCnhlnCN73UAcCF1SjYfwV4JC \
--from-literal=OPENCLAW_GATEWAY_TOKEN=8951116b5cb15bc47f496a93168a036cadecec8dfcd4f0ad056f0b65183d732d

2. prepare `deploy-openclaw.yaml`

kubectl -n argocd apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openclaw
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://serhanekicii.github.io/openclaw-helm
    chart: openclaw
    targetRevision: 1.4.4
    helm:
      releaseName: openclaw
      values: |
        app-template:
          openclawVersion: "2026.2.23"
          chromiumVersion: "124"
          configMode: merge
          controllers:
            main:
              containers:
                main:
                  image:
                    repository: ghcr.io/openclaw/openclaw
                    tag: "2026.2.23"
                    pullPolicy: IfNotPresent
                  envFrom:
                    - secretRef:
                        name: openclaw-env-secret
                  args:
                    - "gateway"
                    - "--bind"
                    - "lan"
                    - "--port"
                    - "18789"
                    - "--allow-unconfigured"
                  resources:
                    requests:
                      cpu: 200m
                      memory: 512Mi
                    limits:
                      cpu: 2000m
                      memory: 2Gi
                chromium:
                  image:
                    repository: zenika/alpine-chrome
                    tag: "124"
                    pullPolicy: IfNotPresent
                  args:
                    - "--no-sandbox"
                    - "--disable-dev-shm-usage"
                    - "--remote-debugging-address=0.0.0.0"
                    - "--remote-debugging-port=9222"
                  resources:
                    requests:
                      cpu: 200m
                      memory: 512Mi
                    limits:
                      cpu: 2000m
                      memory: 2Gi
          persistence:
            data:
              enabled: true
              type: persistentVolumeClaim
              accessMode: ReadWriteOnce
              size: 10Gi
              globalMounts:
                - path: /root/.openclaw
          ingress:
            main:
              enabled: true
              className: nginx
              annotations:
                kubernetes.io/ingress.class: nginx
                cert-manager.io/cluster-issuer: letsencrypt
                nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
                nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
                nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
                nginx.ingress.kubernetes.io/proxy-body-size: "50m"
                nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
                nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
              hosts:
                - host: openclaw.72602.online
                  paths:
                    - path: /
                      pathType: Prefix
                      service:
                        identifier: main
                        port: http
              tls:
                - secretName: openclaw-tls
                  hosts:
                    - openclaw.72602.online
          configMaps:
            config:
              data:
                openclaw.json: |
                  {
                    gateway: {
                      controlUi: {
                        allowedOrigins: ["https://openclaw.72602.online"],
                        dangerouslyAllowHostHeaderOriginFallback: true,
                      },
                    },
                    browser: {
                      gatewayToken: "${OPENCLAW_GATEWAY_TOKEN}",
                    },
                    agents: {
                      main: {
                        brain: {
                          provider: "anthropic",
                          model: "claude-sonnet-4-20250514",
                          apiKey: "${ANTHROPIC_API_KEY}",
                        },
                      },
                    },
                  }
  destination:
    server: https://kubernetes.default.svc
    namespace: claw
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=false
EOF
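The ${OPENCLAW_GATEWAY_TOKEN} and ${ANTHROPIC_API_KEY} placeholders in openclaw.json are filled from the environment injected via openclaw-env-secret; exactly where the substitution happens (the chart or OpenClaw itself) is an assumption here, but the mechanics look like this sed sketch with a made-up token:

```shell
# Hypothetical template line and token; sed replaces the literal ${...} marker
# with the value taken from the environment.
template='gatewayToken: "${OPENCLAW_GATEWAY_TOKEN}"'
OPENCLAW_GATEWAY_TOKEN='demo-token'
echo "$template" | sed "s/\${OPENCLAW_GATEWAY_TOKEN}/$OPENCLAW_GATEWAY_TOKEN/"
# gatewayToken: "demo-token"
```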

3. sync by argocd

argocd app sync argocd/openclaw
Using AY Helm Mirror

For more information, check 🔗https://github.com/AaronYang0628/helm-chart-mirror

helm repo add ay-helm-mirror https://aaronyang0628.github.io/helm-chart-mirror/charts
helm repo update
helm install ay-helm-mirror/chart-name --generate-name --version a.b.c
Using AY ACR Image Mirror
Using DaoCloud Mirror

Preliminary

1. Kubernetes has been installed; if not, check 🔗link

2. Helm has been installed; if not, check 🔗link

3. ArgoCD has been installed; if not, check 🔗link

4. The PostgreSQL database has been installed; if not, check 🔗link


1. prepare `n8n-middleware-credential` secret

kubectl get namespaces n8n > /dev/null 2>&1 || kubectl create namespace n8n
N8N_PASSWORD=$(kubectl -n database get secret postgresql-credentials -o jsonpath='{.data.password}' | base64 -d)
kubectl -n n8n create secret generic n8n-middleware-credential \
--from-literal=postgres-password="${N8N_PASSWORD}"

2. prepare `deploy-n8n.yaml`

kubectl -n argocd apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: n8n
spec:
  project: default
  source:
    repoURL: https://community-charts.github.io/helm-charts
    targetRevision: 1.16.22
    helm:
      releaseName: n8n
      values: |
        global:
          security:
            allowInsecureImages: true
        image:
          repository: n8nio/n8n
        log:
          level: info
        encryptionKey: "ay-dev-n8n"
        timezone: Asia/Shanghai
        db:
          type: postgresdb
        externalPostgresql:
          host: postgresql-hl.database.svc.cluster.local
          port: 5432
          username: "n8n"
          database: "n8n"
          existingSecret: "n8n-middleware-credential"
        main:
          count: 1
          extraEnvVars:
            "N8N_BLOCK_ENV_ACCESS_IN_NODE": "false"
            "N8N_FILE_SYSTEM_ALLOWED_PATHS": "/home/node/.n8n-files"
            "EXECUTIONS_TIMEOUT": "300"
            "EXECUTIONS_TIMEOUT_MAX": "600"
            "DB_POSTGRESDB_POOL_SIZE": "10"
            "CACHE_ENABLED": "true"
            "N8N_CONCURRENCY_PRODUCTION_LIMIT": "5"
            "NODE_TLS_REJECT_UNAUTHORIZED": "0"
            "N8N_SECURE_COOKIE": "false"
            "WEBHOOK_URL": "https://webhook.n8n.ay.dev"
            "QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD": "60000"
            "N8N_COMMUNITY_PACKAGES_ENABLED": "true"
            "N8N_GIT_NODE_DISABLE_BARE_REPOS": "true"
            "N8N_LICENSE_AUTO_RENEW_ENABLED": "true"
            "N8N_LICENSE_RENEW_ON_INIT": "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /home/aaron/Downloads
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /home/node/.n8n-files
          resources:
            requests:
              cpu: 1000m
              memory: 1024Mi
            limits:
              cpu: 2000m
              memory: 2048Mi
        worker:
          mode: queue
          count: 2
          waitMainNodeReady:
            enabled: false
          extraEnvVars:
            "N8N_FILE_SYSTEM_ALLOWED_PATHS": "/home/node/.n8n-files"
            "EXECUTIONS_TIMEOUT": "300"
            "EXECUTIONS_TIMEOUT_MAX": "600"
            "DB_POSTGRESDB_POOL_SIZE": "5"
            "QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD": "60000"
            "N8N_COMMUNITY_PACKAGES_ENABLED": "true"
            "N8N_GIT_NODE_DISABLE_BARE_REPOS": "true"
            "N8N_LICENSE_AUTO_RENEW_ENABLED": "true"
            "N8N_LICENSE_RENEW_ON_INIT": "true"
          persistence:
            enabled: true
            accessMode: ReadWriteOnce
            storageClass: "local-path"
            size: 50Gi
          volumes:
            - name: downloads-volume
              hostPath:
                path: /home/aaron/Downloads
                type: DirectoryOrCreate
          volumeMounts:
            - name: downloads-volume
              mountPath: /home/node/.n8n-files
          resources:
            requests:
              cpu: 500m
              memory: 1024Mi
            limits:
              cpu: 1000m
              memory: 2048Mi
        nodes:
          builtin:
            enabled: true
            modules:
              - crypto
              - fs
          external:
            allowAll: true
            packages:
              - n8n-nodes-globals
        npmRegistry:
          enabled: true
          url: http://mirrors.cloud.tencent.com/npm/
        redis:
          enabled: true
          image:
            registry: m.daocloud.io/docker.io
            repository: bitnamilegacy/redis
          master:
            resourcesPreset: "small"
            persistence:
              enabled: true
              accessMode: ReadWriteOnce
              storageClass: "local-path"
              size: 10Gi
        ingress:
          enabled: true
          className: nginx
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: self-signed-ca-issuer
            nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-body-size: "50m"
            nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
            nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
          hosts:
            - host: n8n.ay.dev
              paths:
                - path: /
                  pathType: Prefix
          tls:
          - hosts:
            - n8n.ay.dev
            - webhook.n8n.ay.dev
            secretName: n8n.ay.dev-tls
        webhook:
          mode: queue
          url: "https://webhook.n8n.ay.dev"
          autoscaling:
            enabled: false
          waitMainNodeReady:
            enabled: true
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 512m
              memory: 512Mi
    chart: n8n
  destination:
    server: https://kubernetes.default.svc
    namespace: n8n
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=false
EOF

3.sync by argocd

Details
argocd app sync argocd/n8n


Mar 7, 2024

OpenCode

🚀Installation

Install By

Preliminary

1. Kubernetes has been installed, if not check 🔗link


2. Helm has been installed, if not check 🔗link


3. ArgoCD has been installed, if not check 🔗link


1.prepare `opencode-configuration.yaml`

Details
kubectl get namespaces opencode > /dev/null 2>&1 || kubectl create namespace opencode

kubectl -n opencode create secret generic opencode-server-secret \
  --from-literal=OPENCODE_SERVER_PASSWORD=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 16)
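The inline `tr -dc A-Za-z0-9 </dev/urandom | head -c 16` generator emits exactly 16 alphanumeric characters. A sketch verifying that locally (the `|| true` guards against the harmless SIGPIPE `tr` receives when `head` closes the pipe):

```shell
# Sketch: the password generator draws from /dev/urandom, keeps only
# alphanumeric bytes, and truncates to 16 characters.
pw=$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 16 || true)
echo "${#pw}"   # -> 16
case "$pw" in (*[!A-Za-z0-9]*) echo "unexpected char";; (*) echo "alnum only";; esac
```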

1.1.choose one LLM configuration — each ConfigMap below is named `opencode-config`, so apply only the variant you want; applying another one overwrites it

kubectl -n opencode apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencode-config
data:
  opencode.json: |
    {
      "provider": {
        "openai": {
          "options": {
            "baseURL": "https://v2.qixuw.com/v1",
            "apiKey": "sk-ss"
          },
          "models": {
            "gpt-5.3-codex-spark": {
              "name": "GPT-5.3 Codex Spark",
              "limit": {
                "context": 128000,
                "output": 32000
              },
              "options": {
                "store": false
              },
              "variants": {
                "low": {},
                "medium": {},
                "high": {},
                "xhigh": {}
              }
            }
          }
        }
      },
      "mcp": {
        "euclid-catalog": {
          "type": "remote",
          "url": "https://catalog.euclid.mcp.ay.dev:32443/sse",
          "enabled": true
        },
        "astro_k3s_mcp": {
          "type": "remote",
          "url": "http://eva24002-entrance.lab.zverse.space:30082/mcp",
          "enabled": true,
          "oauth": false,
          "timeout": 15000
        }
      },
      "agent": {
        "build": {
          "options": {
            "store": false
          }
        },
        "plan": {
          "options": {
            "store": false
          }
        }
      },
      "$schema": "https://opencode.ai/config.json"
    }
EOF
kubectl -n opencode apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencode-config
data:
  opencode.json: |
    {
      "$schema": "https://opencode.ai/config.json",
      "provider": {
        "minimax": {
          "npm": "@ai-sdk/openai-compatible",
          "name": "MiniMax M2.5",
          "options": {
            "baseURL": "http://10.200.92.41:31551/v1",
            "apiKey": "sk-sss"
          },
          "models": {
            "minimax-m2.5": {
              "name": "MiniMax M2.5",
              "id": "MiniMaxAI/MiniMax-M2.5",
              "limit": {
                "context": 196608,
                "output": 8192
              }
            }
          }
        }
      },
      "model": "minimax/minimax-m2.5"
    }
EOF
kubectl -n opencode apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencode-config
data:
  opencode.json: |
    {
      "$schema": "https://opencode.ai/config.json",
      "model": "opencode/minimax-m2.5-free",
      "small_model": "opencode/minimax-m2.5-free"
    }
EOF
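Since a malformed `opencode.json` only surfaces once the pod starts, it can help to validate the payload locally before applying a ConfigMap. A sketch (inline JSON for illustration; `python3` assumed available):

```shell
# Sketch: validate the opencode.json payload locally before applying it.
config='{"$schema": "https://opencode.ai/config.json", "model": "opencode/minimax-m2.5-free"}'
valid=$(printf '%s' "$config" | python3 -m json.tool >/dev/null 2>&1 && echo valid || echo invalid)
echo "$valid"   # -> valid
```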

2.prepare `deploy-opencode.yaml`; change the default model to match the ConfigMap you applied

Details
kubectl -n argocd apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: opencode
  namespace: argocd
spec:
  project: default
  source:
    repoURL: oci://ghcr.io/aaronyang0628/opencode
    targetRevision: 0.20.0
    chart: opencode
    helm:
      values: |
        image:
          repository: ghcr.io/nimbleflux/opencode-docker
          tag: 1.2.26
          pullPolicy: Always
        replicaCount: 1
        command:
          - opencode
        args:
          - serve
          - --port
          - "4000"
          - --hostname
          - "0.0.0.0"
        service:
          type: ClusterIP
          port: 4000
        env:
          OPENCODE_PORT: "4000"
        extraVolumes:
          - name: opencode-config
            configMap:
              name: opencode-config
        extraVolumeMounts:
          - name: opencode-config
            mountPath: /home/opencode/.config/opencode/opencode.json
            subPath: opencode.json
            readOnly: true
        persistence:
          enabled: true
          storageClass: local-path
          config:
            enabled: false 
          data:
            enabled: true
            size: 1Gi
          workspace:
            enabled: true
            size: 5Gi
          playbook:
            enabled: true
            mountPath: /home/opencode/workspace/playbook
            example:
              enabled: true
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 2
            memory: 4Gi
        probes:
          liveness:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 15
            periodSeconds: 30
          readiness:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 10
            periodSeconds: 10
          startup:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 30
        ingress:
          enabled: true
          className: nginx
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: self-signed-ca-issuer
            nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
            nginx.ingress.kubernetes.io/proxy-body-size: "50m"
            nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
            nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
          hosts:
            - host: opencode.ay.dev
              paths:
                - path: /
                  pathType: Prefix
          tls:
          - hosts:
            - opencode.ay.dev
            secretName: opencode.ay.dev-tls
        globalLabels:
          app.kubernetes.io/part-of: opencode
          environment: production
        bridge:
          enabled: true
          image:
            repository: crpi-wixjy6gci86ms14e.cn-hongkong.personal.cr.aliyuncs.com/ay-dev/opencode-bridge
            tag: "v20260326r4"
          env:
            defaultModel: "opencode/minimax-m2.5-free"
            openaiStreamChunkSize: "4"
            openaiStreamChunkDelayMs: "10"
            enableLeadingEchoFilter: "false"
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          ingress:
            enabled: true
            annotations:
              kubernetes.io/ingress.class: nginx
              cert-manager.io/cluster-issuer: self-signed-ca-issuer
              nginx.ingress.kubernetes.io/proxy-buffering: "off"
              nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
              nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
              nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
              nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
              nginx.ingress.kubernetes.io/proxy-body-size: "50m"
              nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
              nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
            hosts:
              - host: opencode-bridge.ay.dev
                paths:
                  - path: /
                    pathType: Prefix
            tls:
              - secretName: opencode-bridge-tls
                hosts:
                  - opencode-bridge.ay.dev

  destination:
    server: https://kubernetes.default.svc
    namespace: opencode
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
EOF

3.sync by argocd

Details
argocd app sync argocd/opencode

4.then you can talk to the LLM via the REST API

Details
curl -k -X POST https://opencode.ay.dev:32443/session \
  -H "Content-Type: application/json" \
  -d '{"model": "opencode/minimax-m2.5-free"}'

## {"id":"ses_30a879abeffe6KRC0Rmg4aPrmK","slug":"brave-eagle","version":"1.2.26","projectID":"global","directory":"/home/opencode/workspace","title":"New session - 2026-03-16T07:07:14.113Z","time":{"created":1773644834113,"updated":1773644834113}}
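The `id` field of that response is what later calls need. A sketch of capturing it for reuse (response JSON abbreviated and illustrative; `python3` assumed available):

```shell
# Sketch: pull the session id out of the /session response so it can be
# reused in follow-up requests.
response='{"id":"ses_30a879abeffe6KRC0Rmg4aPrmK","slug":"brave-eagle"}'
session_id=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
echo "$session_id"   # -> ses_30a879abeffe6KRC0Rmg4aPrmK
```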

5.reuse the same session

Details
curl -k -X POST https://opencode.ay.dev:32443/session/ses_30a879abeffe6KRC0Rmg4aPrmK/message \
  -H "Content-Type: application/json" \
  -d '{"parts": [{"type": "text", "text": "你好"}]}'

## {"info":{"role":"assistant","time":{"created":1773644844131,"completed":1773644848700},"parentID":"msg_cf5788c12001RS4wX3hwMRe0If","modelID":"minimax-m2.5","providerID":"minimax","mode":"build","agent":"build","path":{"cwd":"/home/opencode/workspace","root":"/"},"cost":0,"tokens":{"total":10628,"input":10567,"output":61,"reasoning":0,"cache":{"read":0,"write":0}},"finish":"stop","id":"msg_cf5788c63001hBrorYejcHc1tO","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK"},"parts":[{"type":"step-start","id":"prt_cf57899120016WXp4jT4AHeTiG","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK","messageID":"msg_cf5788c63001hBrorYejcHc1tO"},{"type":"text","text":"<think>The user said \"你好\" which means \"Hello\" in Chinese. According to the instructions, I should be concise and direct. I should respond briefly without unnecessary preamble. Since this is a simple greeting, I can just respond with a greeting back.\n</think>\n\n你好!有什么可以帮你的吗?","time":{"start":1773644848689,"end":1773644848689},"id":"prt_cf5789913001dUZnoW9w63ThkC","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK","messageID":"msg_cf5788c63001hBrorYejcHc1tO"},{"type":"step-finish","reason":"stop","cost":0,"tokens":{"total":10628,"input":10567,"output":61,"reasoning":0,"cache":{"read":0,"write":0}},"id":"prt_cf5789e35001dLGIXJb5WHizMX","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK","messageID":"msg_cf5788c63001hBrorYejcHc1tO"}]}
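The assistant's reply is nested inside the `parts` array alongside step markers. A sketch of extracting just the text parts from a captured response (illustrative JSON modeled on the shape above; `python3` assumed available):

```shell
# Sketch: keep only the "text" parts of a message response, skipping
# step-start/step-finish markers.
response='{"parts":[{"type":"step-start"},{"type":"text","text":"hello from the model"}]}'
text=$(printf '%s' "$response" | python3 -c 'import sys, json; print(" ".join(p["text"] for p in json.load(sys.stdin)["parts"] if p.get("type") == "text"))')
echo "$text"   # -> hello from the model
```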

Preliminary

1. Kubernetes has been installed, if not check 🔗link


2. Helm has been installed, if not check 🔗link


3. ArgoCD has been installed, if not check 🔗link


1.prepare `opencode-configuration.yaml`

Details
kubectl get namespaces opencode > /dev/null 2>&1 || kubectl create namespace opencode

kubectl -n opencode create secret generic opencode-server-secret \
  --from-literal=OPENCODE_SERVER_PASSWORD=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 16)

2.prepare `deploy-opencode.yaml`

Details
kubectl -n argocd apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: opencode
  namespace: argocd
spec:
  project: default
  source:
    repoURL: oci://ghcr.io/aaronyang0628/opencode
    targetRevision: 0.20.0
    chart: opencode
    helm:
      values: |
        image:
          repository: ghcr.io/nimbleflux/opencode-docker
          tag: 1.2.26
          pullPolicy: Always
        replicaCount: 1
        command:
          - opencode
        args:
          - serve
          - --port
          - "4000"
          - --hostname
          - "0.0.0.0"
        service:
          type: ClusterIP
          port: 4000
        env:
          OPENCODE_PORT: "4000"
        extraVolumes:
          - name: opencode-config
            configMap:
              name: opencode-config
        extraVolumeMounts:
          - name: opencode-config
            mountPath: /home/opencode/.config/opencode/opencode.json
            subPath: opencode.json
            readOnly: true
        persistence:
          enabled: true
          storageClass: local-path
          config:
            enabled: false 
          data:
            enabled: true
            size: 1Gi
          workspace:
            enabled: true
            size: 5Gi
          playbook:
            enabled: true
            mountPath: /home/opencode/workspace/playbook
            example:
              enabled: true
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 2
            memory: 4Gi
        probes:
          liveness:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 15
            periodSeconds: 30
          readiness:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 10
            periodSeconds: 10
          startup:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 30
        ingress:
          enabled: true
          className: nginx
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: letsencrypt
            nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
            nginx.ingress.kubernetes.io/proxy-body-size: "50m"
            nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
            nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
          hosts:
            - host: opencode.72602.online
              paths:
                - path: /
                  pathType: Prefix
          tls:
          - hosts:
            - opencode.72602.online
            secretName: opencode.72602.online-tls
        globalLabels:
          app.kubernetes.io/part-of: opencode
          environment: production
        bridge:
          enabled: true
          image:
            repository: crpi-wixjy6gci86ms14e.cn-hongkong.personal.cr.aliyuncs.com/ay-dev/opencode-bridge
            tag: "v20260326r4"
          env:
            defaultModel: "openai/gpt-5.3-codex-spark"
            openaiStreamChunkSize: "4"
            openaiStreamChunkDelayMs: "10"
            enableLeadingEchoFilter: "false"
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          ingress:
            enabled: true
            annotations:
              kubernetes.io/ingress.class: nginx
              cert-manager.io/cluster-issuer: letsencrypt
              nginx.ingress.kubernetes.io/proxy-buffering: "off"
              nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
              nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
              nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
              nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
              nginx.ingress.kubernetes.io/proxy-body-size: "50m"
              nginx.ingress.kubernetes.io/upstream-keepalive-connections: "50"
              nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "60"
            hosts:
              - host: opencode-bridge.72602.online
                paths:
                  - path: /
                    pathType: Prefix
            tls:
              - secretName: opencode-bridge-tls
                hosts:
                  - opencode-bridge.72602.online

  destination:
    server: https://kubernetes.default.svc
    namespace: opencode
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
EOF

3.sync by argocd

Details
argocd app sync argocd/opencode

4.then you can list the available models through the bridge REST API

Details
curl -s "https://opencode-bridge.72602.online/v1/models"

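Assuming the bridge serves an OpenAI-compatible `/v1/models` payload, the model ids can be pulled out of the response. A sketch against a captured, illustrative response (`python3` assumed available):

```shell
# Sketch: list model ids from an OpenAI-compatible /v1/models response
# (inline JSON is illustrative, not a real capture).
models='{"data":[{"id":"openai/gpt-5.3-codex-spark"},{"id":"minimax/minimax-m2.5"}]}'
ids=$(printf '%s' "$models" | python3 -c 'import sys, json; print(" ".join(m["id"] for m in json.load(sys.stdin)["data"]))')
echo "$ids"   # -> openai/gpt-5.3-codex-spark minimax/minimax-m2.5
```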

5.then you can create a session with the REST API

Details
curl -k -X POST https://opencode.72602.online/session \
  -H "Content-Type: application/json" \
  -d '{"model": "MiniMaxAI/MiniMax-M2.5"}'

## {"id":"ses_30a879abeffe6KRC0Rmg4aPrmK","slug":"brave-eagle","version":"1.2.26","projectID":"global","directory":"/home/opencode/workspace","title":"New session - 2026-03-16T07:07:14.113Z","time":{"created":1773644834113,"updated":1773644834113}}

6.reuse the same session

Details
curl -k -X POST https://opencode.72602.online/session/ses_30a879abeffe6KRC0Rmg4aPrmK/message \
  -H "Content-Type: application/json" \
  -d '{"parts": [{"type": "text", "text": "你好"}]}'

## {"info":{"role":"assistant","time":{"created":1773644844131,"completed":1773644848700},"parentID":"msg_cf5788c12001RS4wX3hwMRe0If","modelID":"minimax-m2.5","providerID":"minimax","mode":"build","agent":"build","path":{"cwd":"/home/opencode/workspace","root":"/"},"cost":0,"tokens":{"total":10628,"input":10567,"output":61,"reasoning":0,"cache":{"read":0,"write":0}},"finish":"stop","id":"msg_cf5788c63001hBrorYejcHc1tO","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK"},"parts":[{"type":"step-start","id":"prt_cf57899120016WXp4jT4AHeTiG","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK","messageID":"msg_cf5788c63001hBrorYejcHc1tO"},{"type":"text","text":"<think>The user said \"你好\" which means \"Hello\" in Chinese. According to the instructions, I should be concise and direct. I should respond briefly without unnecessary preamble. Since this is a simple greeting, I can just respond with a greeting back.\n</think>\n\n你好!有什么可以帮你的吗?","time":{"start":1773644848689,"end":1773644848689},"id":"prt_cf5789913001dUZnoW9w63ThkC","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK","messageID":"msg_cf5788c63001hBrorYejcHc1tO"},{"type":"step-finish","reason":"stop","cost":0,"tokens":{"total":10628,"input":10567,"output":61,"reasoning":0,"cache":{"read":0,"write":0}},"id":"prt_cf5789e35001dLGIXJb5WHizMX","sessionID":"ses_30a879abeffe6KRC0Rmg4aPrmK","messageID":"msg_cf5788c63001hBrorYejcHc1tO"}]}

1. follow the installation steps at 🔗https://opencode.ai

Details
curl -fsSL https://opencode.ai/install | bash

7.use the bridge to manage sessions


Mar 7, 2024

Wechat Markdown Editor

🚀Installation

Install By

1.get helm repo

Details
helm repo add xxxxx https://xxxx
helm repo update

2.install chart

Details
helm install xxxxx/chart-name --generate-name --version a.b.c

1.prepare `xxxxx-credentials.yaml`

Details

2.prepare `deploy-xxxxx.yaml`

Details
kubectl -n argocd apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: xxxx
spec:
  project: default
  source:
    repoURL: https://xxxxx
    chart: xxxx
    targetRevision: a.b.c
EOF

3.sync by argocd

Details
argocd app sync argocd/xxxx

1.init server

Details
Using AY ACR Image Mirror
Using DaoCloud Mirror

