Container
- Docker
To install Docker, you can refer to Docker Installation.
- Podman
To install Podman, you can refer to Podman Installation.
```shell
# remove an image by id
podman rmi <$image_id>
# remove all <none> images
podman rmi `podman images | grep '<none>' | awk '{print $3}'`
# remove all stopped containers
podman container prune
# remove all dangling images
podman image prune
# remove all unused volumes
sudo podman volume prune
```
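The `<none>`-image one-liner above works by filtering the tabular output of `podman images` and printing its third column, the image ID. A minimal sketch of that filter on canned output (the image IDs are made up):

```shell
# simulated `podman images` output with hypothetical image IDs
cat <<'EOF' > /tmp/images.txt
REPOSITORY   TAG      IMAGE ID       CREATED     SIZE
<none>       <none>   1a2b3c4d5e6f   2 days ago  120MB
nginx        latest   0f1e2d3c4b5a   3 days ago  140MB
EOF
# keep only the <none> rows and print column 3, as the cleanup one-liner does
grep '<none>' /tmp/images.txt | awk '{print $3}'
# → 1a2b3c4d5e6f
```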
```shell
# print a container's IP address
podman inspect --format='{{.NetworkSettings.IPAddress}}' minio-server
# open a shell in a running container
podman exec -it <$container_id> /bin/bash
# run a ClickHouse server
podman run -d --replace \
    -p 18123:8123 -p 19000:9000 \
    --name clickhouse-server \
    -e ALLOW_EMPTY_PASSWORD=yes \
    --ulimit nofile=262144:262144 \
    quay.m.daocloud.io/kryptonite/clickhouse-docker-rootless:20.9.3.45
```
`--ulimit nofile=262144:262144` sets the container's soft and hard limits on open file descriptors to 262144, which ClickHouse needs because it keeps many files open at once. `ulimit` is a shell built-in used to view or restrict the resources available to the current user's processes, such as the number of open file descriptors; raising a hard limit requires admin privileges.
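To see the limits `ulimit` reports for the current shell (output values vary by system):

```shell
# soft limit on open file descriptors for this shell
ulimit -n
# soft limit on the number of user processes
ulimit -u
# all current limits at once
ulimit -a
```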
```shell
# log in to the ZJLAB registry (token redacted)
export ZJLAB_CR_PAT=ghp_xxxxxxxxxxxx
echo $ZJLAB_CR_PAT | podman login --tls-verify=false cr.registry.res.cloud.zhejianglab.com -u ascm-org-1710208820455 --password-stdin

# log in to GitHub Container Registry
export GITHUB_CR_PAT=ghp_xxxxxxxxxxxx
echo $GITHUB_CR_PAT | podman login ghcr.io -u aaronyang0628 --password-stdin

# log in to Docker Hub
export DOCKER_CR_PAT=dckr_pat_bBN_Xkgz-xxxx
echo $DOCKER_CR_PAT | podman login docker.io -u aaron666 --password-stdin

# log in to Harbor
export HARBOR_CR_PAT=Aaron
echo $HARBOR_CR_PAT | podman login --tls-verify=false harbor.zhejianglab.com -u byang628@zhejianglab.org --password-stdin

# tag and push an image
podman tag 76fdac66291c cr.registry.res.cloud.zhejianglab.com/ay-dev/datahub-s3-fits:1.0.0
podman push cr.registry.res.cloud.zhejianglab.com/ay-dev/datahub-s3-fits:1.0.0
```

The equivalent Docker commands:

```shell
# remove an image by id
docker rmi <$image_id>
# remove all <none> images
docker rmi `docker images | grep '<none>' | awk '{print $3}'`
# remove all stopped containers
docker container prune
# remove all dangling images
docker image prune
# print a container's IP address
docker inspect --format='{{.NetworkSettings.IPAddress}}' minio-server
# open a shell in a running container
docker exec -it <$container_id> /bin/bash
# run a ClickHouse server (--replace is Podman-only, so it is omitted here)
docker run -d -p 18123:8123 -p 19000:9000 --name clickhouse-server -e ALLOW_EMPTY_PASSWORD=yes --ulimit nofile=262144:262144 quay.m.daocloud.io/kryptonite/clickhouse-docker-rootless:20.9.3.45
```
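The tag/push commands above use the full image reference form `registry/namespace/name:tag`. A pure-shell sketch of how such a reference decomposes, using the reference from this page:

```shell
IMAGE="cr.registry.res.cloud.zhejianglab.com/ay-dev/datahub-s3-fits:1.0.0"
REGISTRY=${IMAGE%%/*}   # text before the first slash
REST=${IMAGE#*/}        # everything after the registry host
REPO=${REST%:*}         # repository path without the tag
TAG=${IMAGE##*:}        # text after the last colon
echo "registry=$REGISTRY repo=$REPO tag=$TAG"
# → registry=cr.registry.res.cloud.zhejianglab.com repo=ay-dev/datahub-s3-fits tag=1.0.0
```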
copy file
Copy a local file into a container:
```shell
docker cp ./some_file CONTAINER:/work
```
or copy files from a container to a local path:
```shell
docker cp CONTAINER:/var/logs/ /tmp/app_logs
```
load a volume
```shell
docker run --rm \
    --entrypoint bash \
    -v $PWD/data:/app:ro \
    -it docker.io/minio/mc:latest \
    -c "mc --insecure alias set minio https://oss-cn-hangzhou-zjy-d01-a.ops.cloud.zhejianglab.com/ g83B2sji1CbAfjQO 2h8NisFRELiwOn41iXc6sgufED1n1A \
    && mc --insecure ls minio/csst-prod/ \
    && mc --insecure mb --ignore-existing minio/csst-prod/crp-test \
    && mc --insecure cp /app/modify.pdf minio/csst-prod/crp-test/ \
    && mc --insecure ls --recursive minio/csst-prod/"
```

.devcontainer.json
{
"name": "Go & Java DevContainer",
"build": {
"dockerfile": "Dockerfile"
},
"mounts": [
"source=/root/.kube/config,target=/root/.kube/config,type=bind",
"source=/root/.minikube/profiles/minikube/client.crt,target=/root/.minikube/profiles/minikube/client.crt,type=bind",
"source=/root/.minikube/profiles/minikube/client.key,target=/root/.minikube/profiles/minikube/client.key,type=bind",
"source=/root/.minikube/ca.crt,target=/root/.minikube/ca.crt,type=bind"
],
"customizations": {
"vscode": {
"extensions": [
"golang.go",
"vscjava.vscode-java-pack",
"redhat.java",
"vscjava.vscode-maven",
"Alibaba-Cloud.tongyi-lingma",
"vscjava.vscode-java-debug",
"vscjava.vscode-java-dependency",
"vscjava.vscode-java-test"
]
}
},
"remoteUser": "root",
"postCreateCommand": "go version && java -version && mvn -v"
}

Dockerfile
FROM m.daocloud.io/docker.io/ubuntu:24.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
curl \
git \
wget \
gnupg \
vim \
lsb-release \
apt-transport-https \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# install OpenJDK 21
RUN mkdir -p /etc/apt/keyrings && \
wget -qO - https://packages.adoptium.net/artifactory/api/gpg/key/public | gpg --dearmor -o /etc/apt/keyrings/adoptium.gpg && \
echo "deb [signed-by=/etc/apt/keyrings/adoptium.gpg arch=amd64] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | tee /etc/apt/sources.list.d/adoptium.list > /dev/null && \
apt-get update && \
apt-get install -y temurin-21-jdk && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# set java env
ENV JAVA_HOME=/usr/lib/jvm/temurin-21-jdk-amd64
# install maven
ARG MAVEN_VERSION=3.9.10
RUN wget https://dlcdn.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz -O /tmp/maven.tar.gz && \
mkdir -p /opt/maven && \
tar -C /opt/maven -xzf /tmp/maven.tar.gz --strip-components=1 && \
rm /tmp/maven.tar.gz
ENV MAVEN_HOME=/opt/maven
ENV PATH="${MAVEN_HOME}/bin:${PATH}"
# install go 1.24.4
ARG GO_VERSION=1.24.4
RUN wget https://dl.google.com/go/go${GO_VERSION}.linux-amd64.tar.gz -O /tmp/go.tar.gz && \
tar -C /usr/local -xzf /tmp/go.tar.gz && \
rm /tmp/go.tar.gz
# set go env
ENV GOROOT=/usr/local/go
ENV GOPATH=/go
ENV PATH="${GOROOT}/bin:${GOPATH}/bin:${PATH}"
# install other binaries
ARG KUBECTL_VERSION=v1.33.0
RUN wget https://files.m.daocloud.io/dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl -O /tmp/kubectl && \
chmod u+x /tmp/kubectl && \
mv -f /tmp/kubectl /usr/local/bin/kubectl
ARG HELM_VERSION=v3.13.3
RUN wget https://files.m.daocloud.io/get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz -O /tmp/helm-${HELM_VERSION}-linux-amd64.tar.gz && \
mkdir -p /opt/helm && \
tar -C /opt/helm -xzf /tmp/helm-${HELM_VERSION}-linux-amd64.tar.gz && \
rm /tmp/helm-${HELM_VERSION}-linux-amd64.tar.gz
ENV HELM_HOME=/opt/helm/linux-amd64
ENV PATH="${HELM_HOME}:${PATH}"
USER root
WORKDIR /workspace

.devcontainer.json
{
"name": "DALI Learning Environment",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
"args": {
"VARIANT": "3.11",
"HTTP_PROXY": "",
"HTTPS_PROXY": "",
"http_proxy": "",
"https_proxy": ""
}
},
"forwardPorts": [8000],
"portsAttributes": {
"8000": {
"label": "HTTP Server",
"protocol": "http",
"onAutoForward": "notify"
}
},
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
"ms-python.debugpy"
],
"settings": {
"python.defaultInterpreterPath": "/usr/bin/python",
"files.exclude": {
"**/__pycache__": true,
"**/*.pyc": true
}
}
}
},
"postCreateCommand": "bash .devcontainer/post-create.sh",
"remoteUser": "vscode",
"runArgs": [
"-p", "0.0.0.0:8000:8000",
"--device=/dev/nvidiactl",
"--device=/dev/nvidia0",
"--device=/dev/nvidia-uvm",
"--device=/dev/nvidia-uvm-tools",
"--ipc=host",
"--ulimit", "memlock=-1",
"--ulimit", "stack=67108864",
"--env", "LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu:/host/usr/lib/x86_64-linux-gnu"
],
"mounts": [
"type=bind,src=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,readonly",
"type=bind,src=/usr/lib/x86_64-linux-gnu,target=/host/usr/lib/x86_64-linux-gnu,readonly",
"type=bind,src=/usr/bin/nvidia-smi,target=/usr/bin/nvidia-smi,readonly",
"type=bind,src=/usr/bin/nvidia-debugdump,target=/usr/bin/nvidia-debugdump,readonly"
],
"containerEnv": {
"CUDA_VISIBLE_DEVICES": "0",
"NVIDIA_VISIBLE_DEVICES": "all",
"NVIDIA_DRIVER_CAPABILITIES": "compute,utility",
"HTTP_PROXY": "",
"HTTPS_PROXY": "",
"http_proxy": "",
"https_proxy": "",
"NO_PROXY": "",
"no_proxy": ""
},
"description": "NVIDIA DALI MCP development environment - lightweight image with GPU support"
}

Dockerfile
# Use the runtime image instead of devel to reduce size (4GB vs 10GB)
FROM m.daocloud.io/docker.io/nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Shanghai
# Add deadsnakes PPA for Python 3.11 and install base dependencies
# Clear proxy settings to avoid connection issues during build
RUN unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy && \
apt-get update && apt-get install -y --no-install-recommends \
software-properties-common \
curl \
&& add-apt-repository ppa:deadsnakes/ppa -y && \
apt-get update && apt-get install -y --no-install-recommends \
python3.11 \
python3.11-dev \
python3.11-distutils \
python3-pip \
git \
wget \
&& rm -rf /var/lib/apt/lists/*
# Install Node.js (LTS) for Claude CLI
RUN unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy && \
curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - && \
apt-get install -y --no-install-recommends nodejs && \
rm -rf /var/lib/apt/lists/*
# Set NVIDIA library paths
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
ENV CUDA_HOME=/usr/local/cuda
ENV PATH=${CUDA_HOME}/bin:${PATH}
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Set Python 3.11 as default version
RUN unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy && \
update-alternatives --install /usr/bin/python python /usr/bin/python3.11 1 && \
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
# Upgrade pip
RUN unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy && \
python -m pip install --no-cache-dir --upgrade pip setuptools wheel
# Install DALI and minimal packages for MCP development
RUN unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy && \
pip install --no-cache-dir \
--extra-index-url https://pypi.nvidia.com \
nvidia-dali-cuda120 \
numpy \
ipython
# Create non-root user
RUN useradd -m -s /bin/bash vscode && \
mkdir -p /workspace && \
chown -R vscode:vscode /workspace
WORKDIR /workspace
USER vscode
CMD ["/bin/bash"]

post-create.sh
#!/bin/bash
# Clear proxy settings (should already be cleared by containerEnv, but double-check)
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
# Install MCP SDK and minimal dependencies
pip install --no-cache-dir \
-i https://pypi.tuna.tsinghua.edu.cn/simple \
mcp \
anthropic
curl -fsSL https://claude.ai/install.sh | bash
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc
# Create working directories
mkdir -p /workspace/scripts
echo "✅ DALI environment setup completed!"

just copy ~/.kube/config
For example, the original config:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: <$file_path>
    extensions:
    - extension:
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://<$minikube_ip>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: <$file_path>
    client-key: <$file_path>
```

You need to rename clusters.cluster.certificate-authority, clusters.cluster.server, users.user.client-certificate, and users.user.client-key:
clusters.cluster.certificate-authority -> clusters.cluster.certificate-authority-data
clusters.cluster.server -> ip set to `localhost`
users.user.client-certificate -> users.user.client-certificate-data
users.user.client-key -> users.user.client-key-data

The data you paste after each key should be base64-encoded (with GNU coreutils, `base64 -w 0` keeps the output on one line):

```shell
cat <$file_path> | base64
```

Then the modified config file should look like this:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxxxxxxxx
    extensions:
    - extension:
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://127.0.0.1:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate-data: xxxxxxxxxxxx
    client-key-data: xxxxxxxxxxxxxxxx
```

Then forward the minikube port on your own PC:
```shell
# where you host minikube
MACHINE_IP_ADDRESS=10.200.60.102
USER=ayay
MINIKUBE_IP_ADDRESS=$(ssh -o 'UserKnownHostsFile /dev/null' $USER@$MACHINE_IP_ADDRESS '$HOME/bin/minikube ip')
ssh -o 'UserKnownHostsFile /dev/null' $USER@$MACHINE_IP_ADDRESS -L "*:8443:$MINIKUBE_IP_ADDRESS:8443" -N -f
```

For more information, you can check the 🔗link to install kubectl.
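Once `-f` has sent the tunnel to the background, a quick way to confirm the local end is up (a sketch; `ss` ships with iproute2, and 8443 matches the forward above):

```shell
# list listening TCP sockets and look for the forwarded port
ss -ltn 2>/dev/null | grep ':8443' || echo "port 8443 not listening"
```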
How to use it in devpod
Everything works fine. When you are in the pod and using kubectl, you should change clusters.cluster.server in ~/.kube/config to https://<$minikube_ip>:8443.
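That edit can be scripted; a minimal sketch with sed on a scratch copy of the config (the minikube IP 192.168.49.2 is a hypothetical example):

```shell
# scratch kubeconfig fragment standing in for ~/.kube/config
cat > /tmp/kubeconfig <<'EOF'
server: https://127.0.0.1:8443
EOF
# point the server entry at the minikube IP instead of localhost
sed -i 's#https://127.0.0.1:8443#https://192.168.49.2:8443#' /tmp/kubeconfig
cat /tmp/kubeconfig
# → server: https://192.168.49.2:8443
```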
exec into devpod

```shell
kubectl -n devpod exec -it <$resource_id> -c devpod -- /bin/bash
```

10.aaa.bbb.ccc gitee.zhejianglab.com

# check if port 8443 is already open
netstat -aon|findstr "8443"
# find PID
ps | grep ssh
# kill the process
taskkill /PID <$PID> /T /F

# check if port 8443 is already open
netstat -an | grep "8443"
# find PID
ps | grep ssh
# kill the process
kill -9 <$PID>

Local Jumpserver virtual node (develop/k3s)
________ _______ ________
╱ ╲ ╱ ╲╲ ╱ ╲
╱ ╱ ------ ╱ ╱╱ -------- ╱ ╱
╱ ╱ ╱ ╱ ╱ ╱
╲________╱ ╲________╱ ╲________╱
IP: 10.A.B.C IP: jumpserver.ay.dev IP: 192.168.100.xxx
Port 30022 serves SSH on the jumpserver.
cat .ssh/config

```
Host jumpserver
    HostName jumpserver.ay.dev
    Port 30022
    User ay
    IdentityFile ~/.ssh/id_rsa

Host virtual
    HostName 192.168.100.xxx
    Port 22
    User ay
    ProxyJump jumpserver
    IdentityFile ~/.ssh/id_rsa
```

And then you can directly connect to the virtual node.
Port 30022 serves SSH on the jumpserver, and 32524 is the service you want to forward:

```shell
ssh -o 'UserKnownHostsFile /dev/null' -o 'ServerAliveInterval=60' -L 32524:192.168.100.xxx:32524 -p 30022 ay@jumpserver.ay.dev
```
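The `-L` argument above has the form local_port:remote_host:remote_port; a pure-shell sketch of how that spec splits apart, using the values from this page:

```shell
SPEC="32524:192.168.100.xxx:32524"
LOCAL_PORT=${SPEC%%:*}    # everything before the first colon
REST=${SPEC#*:}           # drop the local port
REMOTE_HOST=${REST%%:*}   # host between the colons
REMOTE_PORT=${SPEC##*:}   # everything after the last colon
echo "$LOCAL_PORT -> $REMOTE_HOST:$REMOTE_PORT"
# → 32524 -> 192.168.100.xxx:32524
```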