🧪 Subsections of Demos

Game

Subsections of Game

LOL Game Assistant

Uses deep learning methods to help you win the game.

Tags: State Machine, Event Bus, Python 3.6, TensorFlow2

Application Screenshots

This application has four functions:

  1. Monitors and recognizes the LOL game client and determines what running state the game is currently in.

  2. Recommends heroes to play. Based on the heroes the enemy team has already banned, the tool gives you three recommended picks so you can counter your opponents in advance and gain an early edge.

  3. Scans the minimap and pops up a notification window to warn you when someone is heading toward you.

  4. Recommends items based on the enemies' item lists.

Application Architecture

MVC architecture diagram

Video Links

Watch on Bilibili

Watch on YouTube

Repo

The source code is available on GitHub or GitLab.

Roller Coin Assistant

Uses deep learning techniques to help you mine cryptocurrencies such as BTC, ETH and DOGE.

Screenshots

This tool has two main functions:

  1. Helps you crack the game; go Watch Video.

RollerCoin Game Cracker

  • Only supports the ‘Coin-Flip’ game for now. (I know, rollercoin.com has lowered the rewards from this game; that's why I made the repo public. update)
  2. Helps you pass the GeeTest verification.

How to use

  1. Open a web browser.
  2. Go to https://rollercoin.com and create an account.
  3. Keep the language set to ‘English’ (you can click the button at the bottom to change it).
  4. Click the ‘Game’ button.
  5. Start the application and enjoy.

Tips

  1. Only supports 1920x1080, 2560x1440 and higher resolution screens.
  2. If you use a 1920x1080 screen, it is strongly recommended to run your web browser in full screen.

Watch on Bilibili

Watch on YouTube

HPC

Plugins

Subsections of Plugins

Flink S3 F3 Multiple

Normally, Flink can only access one S3 endpoint at runtime, but we need to process files from multiple MinIO instances simultaneously.

So I modified the original flink-s3-fs-hadoop plugin to enable Flink to do so.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
env.setParallelism(1);
env.setStateBackend(new HashMapStateBackend());
env.getCheckpointConfig().setCheckpointStorage("file:///./checkpoints");

final FileSource<String> source =
    FileSource.forRecordStreamFormat(
            new TextLineInputFormat(),
            new Path(
                "s3u://admin:ZrwpsezF1Lt85dxl@10.11.33.132:9000/user-data/home/conti/2024-02-08--10"))
        .build();

final FileSource<String> source2 =
    FileSource.forRecordStreamFormat(
            new TextLineInputFormat(),
            new Path(
                "s3u://minioadmin:minioadmin@10.101.16.72:9000/user-data/home/conti"))
        .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source")
    .union(env.fromSource(source2, WatermarkStrategy.noWatermarks(), "file-source2"))
    .print("union-result");
    
env.execute();

Using the default flink-s3-fs-hadoop, the configuration values are set into a single Hadoop configuration map. Only one set of values is in effect at a time, so there is no way for the user to address different S3 endpoints within a single job context.

Configuration pluginConfiguration = new Configuration();
pluginConfiguration.setString("s3a.access-key", "admin");
pluginConfiguration.setString("s3a.secret-key", "ZrwpsezF1Lt85dxl");
pluginConfiguration.setString("s3a.connection.maximum", "1000");
pluginConfiguration.setString("s3a.endpoint", "http://10.11.33.132:9000");
pluginConfiguration.setBoolean("s3a.path.style.access", Boolean.TRUE);
FileSystem.initialize(
    pluginConfiguration, PluginUtils.createPluginManagerFromRootFolder(pluginConfiguration));

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
env.setParallelism(1);
env.setStateBackend(new HashMapStateBackend());
env.getCheckpointConfig().setCheckpointStorage("file:///./checkpoints");

final FileSource<String> source =
    FileSource.forRecordStreamFormat(
            new TextLineInputFormat(), new Path("s3a://user-data/home/conti/2024-02-08--10"))
        .build();
env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source").print();

env.execute();

Usage

There are two ways to use this plugin.

Install From

For now, you can directly download flink-s3-fs-hadoop-$VERSION.jar and load it into your project via Gradle or Maven.
$VERSION is the Flink version you are using.

  implementation(files("flink-s3-fs-hadoop-$flinkVersion.jar"))

  <dependency>
      <groupId>org.apache</groupId>
      <artifactId>flink</artifactId>
      <version>$flinkVersion</version>
      <scope>system</scope>
      <systemPath>${project.basedir}/flink-s3-fs-hadoop-$flinkVersion.jar</systemPath>
  </dependency>
The jar we provide is based on the original flink-s3-fs-hadoop plugin, so you should use the original protocol prefix s3a://.

Or you can wait for the PR: once it is merged into Flink master, you won't need to do anything except update your Flink version, and you can then use s3u:// directly.

Stream

Subsections of Stream

Cosmic Antenna

Design Architecture

  • objects

Continuously processes antenna signals and sends 3-dimensional data matrices to different astronomical algorithms (a sketch of such a matrix object follows below).

  • how data flows

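To make the two bullets above a bit more concrete, here is a minimal, purely illustrative Java sketch of what one 3-dimensional data matrix object might look like as it moves from the antennas toward the algorithms. The class name, field names and dimension layout are assumptions for illustration, not the actual types used by cosmic-antenna.

// Hypothetical sketch only: names and dimensions are assumptions, not cosmic-antenna's real model.
public final class AntennaFrame implements java.io.Serializable {

    private final int antennaId;         // which antenna produced this block
    private final long timestampMillis;  // acquisition time of the block
    private final float[][][] samples;   // e.g. [channel][time][frequency]

    public AntennaFrame(int antennaId, long timestampMillis, float[][][] samples) {
        this.antennaId = antennaId;
        this.timestampMillis = timestampMillis;
        this.samples = samples;
    }

    public int getAntennaId() { return antennaId; }

    public long getTimestampMillis() { return timestampMillis; }

    public float[][][] getSamples() { return samples; }
}

Each such frame would then be routed to the different astronomical algorithms, for example as elements of a Flink stream.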

Building From Zero

Following these steps, you can build cosmic-antenna from scratch.

1. install podman

You can check the article Install Podman.

2. install kind and kubectl

mkdir -p $HOME/bin \
&& export PATH="$HOME/bin:$PATH" \
&& curl -o kind -L https://resource-ops.lab.zjvis.net:32443/binary/kind/v0.20.0/kind-linux-amd64 \
&& chmod u+x kind && mv kind $HOME/bin \
&& curl -o kubectl -L https://resource-ops.lab.zjvis.net:32443/binary/kubectl/v1.21.2/bin/linux/amd64/kubectl \
&& chmod u+x kubectl && mv kubectl $HOME/bin
# create a cluster using podman
curl -o kind.cluster.yaml -L https://gitlab.com/-/snippets/3686427/raw/main/kind-cluster.yaml \
&& export KIND_EXPERIMENTAL_PROVIDER=podman \
&& kind create cluster --name cs-cluster --image m.daocloud.io/docker.io/kindest/node:v1.27.3 --config=./kind.cluster.yaml
Modify ~/.kube/config

vim ~/.kube/config

In line 5, change server: http://::xxxx to server: http://0.0.0.0:xxxxx.


3. [Optional] pre-download slow images

DOCKER_IMAGE_PATH=/root/docker-images && mkdir -p $DOCKER_IMAGE_PATH
BASE_URL="https://resource-ops-dev.lab.zjvis.net:32443/docker-images"
for IMAGE in "quay.io_argoproj_argocd_v2.9.3.dim" \
    "ghcr.io_dexidp_dex_v2.37.0.dim" \
    "docker.io_library_redis_7.0.11-alpine.dim" \
    "docker.io_library_flink_1.17.dim"
do
    IMAGE_FILE=$DOCKER_IMAGE_PATH/$IMAGE
    if [ ! -f $IMAGE_FILE ]; then
        TMP_FILE=$IMAGE_FILE.tmp \
        && curl -o "$TMP_FILE" -L "$BASE_URL/$IMAGE" \
        && mv $TMP_FILE $IMAGE_FILE
    fi
    kind -n cs-cluster load image-archive $IMAGE_FILE
done

4. install argocd

You can check the article Install ArgoCD.

5. install essential apps on argocd

# install cert manager
curl -LO https://gitlab.com/-/snippets/3686424/raw/main/cert-manager.yaml \
&& kubectl -n argocd apply -f cert-manager.yaml \
&& argocd app sync argocd/cert-manager

# install ingress
curl -LO https://gitlab.com/-/snippets/3686426/raw/main/ingress-nginx.yaml \
&& kubectl -n argocd apply -f ingress-nginx.yaml \
&& argocd app sync argocd/ingress-nginx

# install flink-kubernetes-operator
curl -LO https://gitlab.com/-/snippets/3686429/raw/main/flink-operator.yaml \
&& kubectl -n argocd apply -f flink-operator.yaml \
&& argocd app sync argocd/flink-operator

6. install git

sudo dnf install -y git \
&& rm -rf $HOME/cosmic-antenna-demo \
&& mkdir $HOME/cosmic-antenna-demo \
&& git clone --branch pv_pvc_template https://github.com/AaronYang2333/cosmic-antenna-demo.git $HOME/cosmic-antenna-demo

7. prepare application image

# cd into  $HOME/cosmic-antenna-demo
sudo dnf install -y java-11-openjdk.x86_64 \
&& $HOME/cosmic-antenna-demo/gradlew :s3sync:buildImage \
&& $HOME/cosmic-antenna-demo/gradlew :fpga-mock:buildImage
# save and load into cluster
VERSION="1.0.3"
podman save --quiet -o $DOCKER_IMAGE_PATH/fpga-mock_$VERSION.dim localhost/fpga-mock:$VERSION \
&& kind -n cs-cluster load image-archive $DOCKER_IMAGE_PATH/fpga-mock_$VERSION.dim
podman save --quiet -o $DOCKER_IMAGE_PATH/s3sync_$VERSION.dim localhost/s3sync:$VERSION \
&& kind -n cs-cluster load image-archive $DOCKER_IMAGE_PATH/s3sync_$VERSION.dim
Modify role config

kubectl -n flink edit role/flink -o yaml

Add services and endpoints to the resources list under rules.

8. prepare k8s resources [pv, pvc, sts]

cp -rf $HOME/cosmic-antenna-demo/flink/*.yaml /tmp \
&& podman exec -d cs-cluster-control-plane mkdir -p /mnt/flink-job
# create persist volume
kubectl -n flink create -f /tmp/pv.template.yaml
# create pv claim
kubectl -n flink create -f /tmp/pvc.template.yaml
# start up flink application
kubectl -n flink create -f /tmp/job.template.yaml
# start up ingress
kubectl -n flink create -f /tmp/ingress.forward.yaml
# start up fpga UDP client, sending data 
cp $HOME/cosmic-antenna-demo/fpga-mock/client.template.yaml /tmp \
&& kubectl -n flink create -f /tmp/client.template.yaml

9. check dashboard in browser

http://job-template-example.flink.lab.zjvis.net


Reference

  1. https://github.com/ben-wangz/blog/tree/main/docs/content/6.kubernetes/7.installation/ha-cluster

Design

Subsections of Design

Yaml Crawler

Steps

  1. Define which URL you want to crawl, let's say https://www.xxx.com/aaa.apex
  2. Create a page POJO to describe what kind of web page you need to process

Then you can create a YAML file named root-pages.yaml with the following content:

- '@class': "org.example.business.hs.code.MainPage"
  url: "https://www.xxx.com/aaa.apex"
  3. Then define a process-flow YAML file describing how to process the web pages the crawler will encounter:
processorChain:
  - '@class': "net.zjvis.lab.nebula.crawler.core.processor.decorator.ExceptionRecord"
    processor:
      '@class': "net.zjvis.lab.nebula.crawler.core.processor.decorator.RetryControl"
      processor:
        '@class': "net.zjvis.lab.nebula.crawler.core.processor.decorator.SpeedControl"
        processor:
          '@class': "org.example.business.hs.code.MainPageProcessor"
          application: "hs-code"
        time: 100
        unit: "MILLISECONDS"
      retryTimes: 1
  - '@class': "net.zjvis.lab.nebula.crawler.core.processor.decorator.ExceptionRecord"
    processor:
      '@class': "net.zjvis.lab.nebula.crawler.core.processor.decorator.RetryControl"
      processor:
        '@class': "net.zjvis.lab.nebula.crawler.core.processor.decorator.SpeedControl"
        processor:
          '@class': "net.zjvis.lab.nebula.crawler.core.processor.download.DownloadProcessor"
          pagePersist:
            '@class': "org.example.business.hs.code.persist.DownloadPageDatabasePersist"
            downloadPageRepositoryBeanName: "downloadPageRepository"
          downloadPageTransformer:
            '@class': "net.nebula.crawler.download.DefaultDownloadPageTransformer"
          skipExists:
            '@class': "net.nebula.crawler.download.SkipExistsById"
        time: 1
        unit: "SECONDS"
      retryTimes: 1
nThreads: 1
pollWaitingTime: 30
pollWaitingTimeUnit: "SECONDS"
waitFinishedTimeout: 180
waitFinishedTimeUnit: "SECONDS" 

ExceptionRecord, RetryControl and SpeedControl are provided by the YAML crawler itself, so don't worry about them. You only need to define how to process your own page type, MainPage; for example, you define a MainPageProcessor. Each processor produces a set of other pages or DownloadPages. A DownloadPage is like a ship carrying the information you need, and the framework will process each DownloadPage and download or persist it.
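As a rough illustration of that extension point, the sketch below shows what a MainPage and MainPageProcessor pair could look like. Only the class names come from the configuration above; the method names, return types and the commented-out DownloadPage call are assumptions made for this example, not the crawler's actual API.

// Hypothetical sketch: method names and signatures are assumed, not the crawler's real API.
// Each class would normally live in its own file under org.example.business.hs.code.
public class MainPage {
    private final String url;

    public MainPage(String url) {
        this.url = url;
    }

    public String getUrl() {
        return url;
    }
}

class MainPageProcessor {
    // Parse one MainPage and emit the follow-up pages the crawler should handle next,
    // e.g. DownloadPage instances carrying the data to be persisted.
    public java.util.Set<Object> process(MainPage page) {
        java.util.Set<Object> next = new java.util.HashSet<>();
        // ... fetch and parse page.getUrl(), extract links or payloads ...
        // next.add(new DownloadPage(...));  // handled later by DownloadProcessor
        return next;
    }
}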

  4. Voilà, then run your crawler.