Subsections of 🧪Demos

Subsections of Game

LOL Overlay Assistant

Using deep learning techniques to help you win the game.

State Machine · Event Bus · Python 3.6 · TensorFlow2

Screenshots

There are four main functions in this tool.

  1. The first one detects your game client process and recognizes which state you are in.

  2. The second one recommends champions to play: based on the champions banned by the enemy team, the tool offers three extra choices to counter your enemies.

  3. The third one scans the mini-map, and when someone is heading toward you, a notification window pops up.

  4. The last one provides gear recommendations based on your enemies' item lists.

Framework

(MVC architecture diagram)

Check it out on Bilibili

Check it out on YouTube

Repo

You can get the code from GitHub or Gitee.

Roller Coin Assistant

Using deep learning techniques to help you mine cryptocurrencies such as BTC, ETH, and DOGE.

Screenshots

There are two main functions in this tool.

  1. Helps you crack the games
  • Only the 'Coin-Flip' game is supported for now.

    Right, rollercoin.com has decreased the rewards from this game; that's why I made the repo public.

  2. Helps you pass the GeeTest captcha.

How to use

  1. Open a web browser.
  2. Go to https://rollercoin.com and create an account.
  3. Keep the language set to 'English' (you can click the button at the bottom to change it).
  4. Click the 'Game' button.
  5. Start the application, and enjoy it.

Tips

  1. Only 1920*1080, 2560*1440 and higher resolution screens are supported.
  2. If you use a 1920*1080 screen, it is strongly recommended to run your web browser in full screen.

Repo

You can get the code from Gitee.

Subsections of HPC

Slurm On K8S


Trying to run a Slurm cluster on Kubernetes.

Install

You can manage this Slurm chart directly with Helm:

  1. helm repo add ay-helm-mirror https://aaronyang0628.github.io/helm-chart-mirror/charts
  2. helm install slurm ay-helm-mirror/slurm --version 1.0.4

Then you should see something like this:

You can also modify values.yaml yourself and reinstall the Slurm cluster:

helm upgrade --create-namespace -n slurm --install -f ./values.yaml slurm ay-helm-mirror/slurm --version=1.0.4
Important

You can even build your own images, which is especially useful for people who want to use their own libraries. For now, the images we use are:

  • login -> docker.io/aaron666/slurm-login:intel-mpi
  • slurmd -> docker.io/aaron666/slurm-slurmd:intel-mpi
  • slurmctld -> docker.io/aaron666/slurm-slurmctld:latest
  • slurmdbd -> docker.io/aaron666/slurm-slurmdbd:latest
  • munged -> docker.io/aaron666/slurm-munged:latest

Slurm Operator

If you want to change the Slurm configuration, please check the Slurm configuration generator.

  • For Helm users

    just run for fun!

    1. helm repo add ay-helm-repo https://aaronyang0628.github.io/helm-chart-mirror/charts
    2. helm install slurm ay-helm-repo/slurm --version 1.0.4
  • For operator users

    Pull the image and apply the manifests:

    1. docker pull aaron666/slurm-operator:latest
    2. kubectl apply -f https://raw.githubusercontent.com/AaronYang0628/helm-chart-mirror/refs/heads/main/templates/slurm/install.yaml
    3. kubectl apply -f https://raw.githubusercontent.com/AaronYang0628/helm-chart-mirror/refs/heads/main/templates/slurm/slurmdeployment.values.yaml

Subsections of Plugins

Flink S3 F3 Multiple

Normally, Flink can access only one S3 endpoint at runtime, but we needed to process files from multiple MinIO instances simultaneously.

So I modified the original flink-s3-fs-hadoop plugin to enable Flink to do so.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
env.setParallelism(1);
env.setStateBackend(new HashMapStateBackend());
env.getCheckpointConfig().setCheckpointStorage("file:///./checkpoints");

final FileSource<String> source =
    FileSource.forRecordStreamFormat(
            new TextLineInputFormat(),
            new Path(
                "s3u://admin:ZrwpsezF1Lt85dxl@10.11.33.132:9000/user-data/home/conti/2024-02-08--10"))
        .build();

final FileSource<String> source2 =
    FileSource.forRecordStreamFormat(
            new TextLineInputFormat(),
            new Path(
                "s3u://minioadmin:minioadmin@10.101.16.72:9000/user-data/home/conti"))
        .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source")
    .union(env.fromSource(source2, WatermarkStrategy.noWatermarks(), "file-source2"))
    .print("union-result");
    
env.execute();

With the default flink-s3-fs-hadoop, configuration values are written into a single Hadoop configuration map. Only one set of values can be in effect at a time, so there is no way for the user to address different endpoints within a single job context.

Configuration pluginConfiguration = new Configuration();
pluginConfiguration.setString("s3a.access-key", "admin");
pluginConfiguration.setString("s3a.secret-key", "ZrwpsezF1Lt85dxl");
pluginConfiguration.setString("s3a.connection.maximum", "1000");
pluginConfiguration.setString("s3a.endpoint", "http://10.11.33.132:9000");
pluginConfiguration.setBoolean("s3a.path.style.access", Boolean.TRUE);
FileSystem.initialize(
    pluginConfiguration, PluginUtils.createPluginManagerFromRootFolder(pluginConfiguration));

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
env.setParallelism(1);
env.setStateBackend(new HashMapStateBackend());
env.getCheckpointConfig().setCheckpointStorage("file:///./checkpoints");

final FileSource<String> source =
    FileSource.forRecordStreamFormat(
            new TextLineInputFormat(), new Path("s3a://user-data/home/conti/2024-02-08--10"))
        .build();
env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source").print();

env.execute();

Usage


Install From

For now, you can directly download flink-s3-fs-hadoop-$VERSION.jar and load it into your project.
$VERSION is the Flink version you are using.

Gradle:

  implementation(files("flink-s3-fs-hadoop-$flinkVersion.jar"))

Maven:

  <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-s3-fs-hadoop</artifactId>
      <version>${flinkVersion}</version>
      <scope>system</scope>
      <systemPath>${project.basedir}/flink-s3-fs-hadoop-${flinkVersion}.jar</systemPath>
  </dependency>
The jar we provide is based on the original flink-s3-fs-hadoop plugin, so you should use the original protocol prefix s3a://.

Or you can wait for the PR: once it is merged into Flink master, you won't need to do anything except update your Flink version and use s3u:// directly.

Repo

You can get the code from GitHub or GitLab.

Subsections of Stream

Cosmic Antenna

Design Architecture

  • Objects

Continuously process antenna signal records, convert them into three-dimensional data matrices, and send them to different astronomical algorithm endpoints.

  • How data flows

(data-flow diagram)
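The streaming part of cosmic-antenna is a Flink job (Java, built with Gradle and deployed through the flink-kubernetes-operator in the steps below). As a rough illustration of the "records in, 3-D matrices out" idea only, here is a minimal, hypothetical Flink sketch; SignalRecord, the hard-coded source, and the reshaping logic are placeholders rather than the actual classes used in cosmic-antenna-demo.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AntennaMatrixSketch {

    /** Illustrative stand-in for one antenna signal record (not the repo's real class). */
    public static class SignalRecord {
        public int antennaId;
        public int channelId;
        public float[] samples;

        public SignalRecord() {}

        public SignalRecord(int antennaId, int channelId, float[] samples) {
            this.antennaId = antennaId;
            this.channelId = channelId;
            this.samples = samples;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the fpga-mock UDP source: two hard-coded records.
        DataStream<SignalRecord> records = env.fromElements(
                new SignalRecord(0, 0, new float[]{0.1f, 0.2f}),
                new SignalRecord(1, 3, new float[]{0.3f, 0.4f}));

        // Reshape each record into a tiny [antenna][channel][sample] slice; the real job
        // would key by antenna, window by time frame, and assemble full matrices before
        // handing them to the algorithm endpoints.
        DataStream<float[][][]> matrices = records
                .map(r -> {
                    float[][][] slice = new float[1][1][r.samples.length];
                    slice[0][0] = r.samples;
                    return slice;
                })
                .returns(float[][][].class);

        matrices.print(); // replace with a sink towards the astronomical algorithm endpoints
        env.execute("antenna-matrix-sketch");
    }
}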

Building From Zero

Following these steps, you can build cosmic-antenna from scratch.

1. install podman

You can check the article Install Podman.

2. install kind and kubectl

You can check the article Install kubectl.

# create a cluster using podman
curl -o kind.cluster.yaml -L https://gitlab.com/-/snippets/3686427/raw/main/kind-cluster.yaml \
&& export KIND_EXPERIMENTAL_PROVIDER=podman \
&& kind create cluster --name cs-cluster --image m.daocloud.io/docker.io/kindest/node:v1.27.3 --config=./kind.cluster.yaml
Modify ~/.kube/config

vim ~/.kube/config

On line 5, change server: http://::xxxx to server: http://0.0.0.0:xxxxx

3. [Optional] pre-download slow images

DOCKER_IMAGE_PATH=/root/docker-images && mkdir -p $DOCKER_IMAGE_PATH
BASE_URL="https://resource-ops-dev.lab.zjvis.net:32443/docker-images"
for IMAGE in "quay.io_argoproj_argocd_v2.9.3.dim" \
    "ghcr.io_dexidp_dex_v2.37.0.dim" \
    "docker.io_library_redis_7.0.11-alpine.dim" \
    "docker.io_library_flink_1.17.dim"
do
    IMAGE_FILE=$DOCKER_IMAGE_PATH/$IMAGE
    if [ ! -f $IMAGE_FILE ]; then
        TMP_FILE=$IMAGE_FILE.tmp \
        && curl -o "$TMP_FILE" -L "$BASE_URL/$IMAGE" \
        && mv $TMP_FILE $IMAGE_FILE
    fi
    kind -n cs-cluster load image-archive $IMAGE_FILE
done

4. install argocd

you can check article Install ArgoCD

5. install essential apps on argocd

# install cert-manager
curl -LO https://gitlab.com/-/snippets/3686424/raw/main/cert-manager.yaml \
&& kubectl -n argocd apply -f cert-manager.yaml \
&& argocd app sync argocd/cert-manager

# install ingress
curl -LO https://gitlab.com/-/snippets/3686426/raw/main/ingress-nginx.yaml \
&& kubectl -n argocd apply -f ingress-nginx.yaml \
&& argocd app sync argocd/ingress-nginx

# install flink-kubernetes-operator
curl -LO https://gitlab.com/-/snippets/3686429/raw/main/flink-operator.yaml \
&& kubectl -n argocd apply -f flink-operator.yaml \
&& argocd app sync argocd/flink-operator

6. install git

sudo dnf install -y git \
&& rm -rf $HOME/cosmic-antenna-demo \
&& mkdir $HOME/cosmic-antenna-demo \
&& git clone --branch pv_pvc_template https://github.com/AaronYang2333/cosmic-antenna-demo.git $HOME/cosmic-antenna-demo

7. prepare application image

# cd into  $HOME/cosmic-antenna-demo
sudo dnf install -y java-11-openjdk.x86_64 \
&& $HOME/cosmic-antenna-demo/gradlew :s3sync:buildImage \
&& $HOME/cosmic-antenna-demo/gradlew :fpga-mock:buildImage
# save and load into cluster
VERSION="1.0.3"
podman save --quiet -o $DOCKER_IMAGE_PATH/fpga-mock_$VERSION.dim localhost/fpga-mock:$VERSION \
&& kind -n cs-cluster load image-archive $DOCKER_IMAGE_PATH/fpga-mock_$VERSION.dim
podman save --quiet -o $DOCKER_IMAGE_PATH/s3sync_$VERSION.dim localhost/s3sync:$VERSION \
&& kind -n cs-cluster load image-archive $DOCKER_IMAGE_PATH/s3sync_$VERSION.dim
Modify role config

kubectl -n flink edit role/flink -o yaml

Add services and endpoints to rules.resources.

8. prepare k8s resources [pv, pvc, sts]

cp -rf $HOME/cosmic-antenna-demo/flink/*.yaml /tmp \
&& podman exec -d cs-cluster-control-plane mkdir -p /mnt/flink-job
# create persist volume
kubectl -n flink create -f /tmp/pv.template.yaml
# create pv claim
kubectl -n flink create -f /tmp/pvc.template.yaml
# start up flink application
kubectl -n flink create -f /tmp/job.template.yaml
# start up ingress
kubectl -n flink create -f /tmp/ingress.forward.yaml
# start up fpga UDP client, sending data 
cp $HOME/cosmic-antenna-demo/fpga-mock/client.template.yaml /tmp \
&& kubectl -n flink create -f /tmp/client.template.yaml

9. check dashboard in browser

http://job-template-example.flink.lab.zjvis.net

Repo

You can get the code from GitHub.


Reference

  1. https://github.com/ben-wangz/blog/tree/main/docs/content/6.kubernetes/7.installation/ha-cluster

Subsections of Design

Yaml Crawler

Steps

  1. Define which web URL you want to crawl, let's say https://www.xxx.com/aaa.apex
  2. Create a page POJO org.example.business.page.MainPage to describe that page

Then you can create a YAML file named root-pages.yaml with the following content:

- '@class': "org.example.business.page.MainPage"
  url: "https://www.xxx.com/aaa.apex"

  3. Then define a process-flow YAML file describing how the crawler should process the pages it encounters:
processorChain:
  - '@class': "org.example.crawler.core.processor.decorator.ExceptionRecord"
    processor:
      '@class': "org.example.crawler.core.processor.decorator.RetryControl"
      processor:
        '@class': "org.example.crawler.core.processor.decorator.SpeedControl"
        processor:
          '@class': "org.example.business.hs.code.MainPageProcessor"
          application: "app-name"
        time: 100
        unit: "MILLISECONDS"
      retryTimes: 1
  - '@class': "org.example.crawler.core.processor.decorator.ExceptionRecord"
    processor:
      '@class': "org.example.crawler.core.processor.decorator.RetryControl"
      processor:
        '@class': "org.example.crawler.core.processor.decorator.SpeedControl"
        processor:
          '@class': "org.example.crawler.core.processor.download.DownloadProcessor"
          pagePersist:
            '@class': "org.example.business.persist.DownloadPageDatabasePersist"
            downloadPageRepositoryBeanName: "downloadPageRepository"
          downloadPageTransformer:
            '@class': "org.example.crawler.download.DefaultDownloadPageTransformer"
          skipExists:
            '@class': "org.example.crawler.download.SkipExistsById"
        time: 1
        unit: "SECONDS"
      retryTimes: 1
nThreads: 1
pollWaitingTime: 30
pollWaitingTimeUnit: "SECONDS"
waitFinishedTimeout: 180
waitFinishedTimeUnit: "SECONDS" 

ExceptionRecord, RetryControl, and SpeedControl are provided by the yaml crawler itself, so don't worry about them. You only need to implement how to process your own page type; for MainPage, for example, you define a MainPageProcessor. Each processor produces a set of further pages or DownloadPage objects. A DownloadPage is like a ship carrying the information you need, and the framework will take care of processing each DownloadPage and downloading or persisting it. A rough sketch of such a processor is shown below.
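For orientation only, here is a minimal, self-contained sketch of what such a processor might look like. The MainPage, DownloadPage, and process names below are simplified placeholders; the real interfaces in the crawler repo are richer and may be shaped differently.

import java.util.List;

public class MainPageProcessorSketch {

    /** Placeholder for a fetched page: just a URL and its raw content. */
    static class MainPage {
        final String url;
        final String content;
        MainPage(String url, String content) { this.url = url; this.content = content; }
    }

    /** Placeholder for the "ship" that carries the data the framework will persist or download. */
    static class DownloadPage {
        final String url;
        final String payload;
        DownloadPage(String url, String payload) { this.url = url; this.payload = payload; }
    }

    /** Turn one fetched MainPage into the follow-up items the framework should handle next. */
    static List<DownloadPage> process(MainPage page) {
        // Parse page.content here (e.g. extract links or table rows), then wrap whatever
        // must be downloaded or persisted into DownloadPage objects.
        return List.of(new DownloadPage(page.url, page.content));
    }

    public static void main(String[] args) {
        MainPage page = new MainPage("https://www.xxx.com/aaa.apex", "<html>...</html>");
        process(page).forEach(d -> System.out.println(d.url));
    }
}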

  4. Voilà, then run your crawler.

Repo

You can get the code from GitHub or GitLab.