Subsections of 📃Articles
BackUp
Subsections of BackUp
ES [Local Disk]
Preliminary
ElasticSearch has been installed; if not, check this link
The elasticsearch.yml has path.repo configured, which should be set to the same value as settings.location (this will be handled by the helm chart, don't worry). Diff from the original file:
extraConfig:
  path:
    repo: /tmp
Methods
There are two ways to back up Elasticsearch:
- Export the data to text files, e.g. using tools such as elasticdump or esm to dump the data stored in Elasticsearch into files.
- Use the snapshot API, which supports incremental backup files.
The first approach is relatively simple and practical when the data volume is small, but for large data volumes the snapshot API is recommended.
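For the first approach, a minimal sketch with elasticdump might look like the following; the output paths are placeholders, and the books index is just the example index used later in this page.
# dump the mapping and the data of one index to local JSON files
# (for the self-signed cert used here, elasticdump/node may need NODE_TLS_REJECT_UNAUTHORIZED=0)
elasticdump --input=https://elastic-search.dev.tech:32443/books --output=/tmp/books_mapping.json --type=mapping
elasticdump --input=https://elastic-search.dev.tech:32443/books --output=/tmp/books_data.json --type=data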
Steps
- Create a snapshot repository -> my_fs_repository
curl -k -X PUT "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository?pretty" -H 'Content-Type: application/json' -d'
{
"type": "fs",
"settings": {
"location": "/tmp"
}
}
'
You can also use a storage-class to mount a path into the pod and keep the snapshot files on that mounted volume.
- Verify that every node in the cluster can use this snapshot repository
curl -k -X POST "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository/_verify?pretty"
- List the snapshot repositories
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/_all?pretty"
- View the settings of a specific snapshot repository
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository?pretty"
- Analyze a snapshot repository
curl -k -X POST "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&pretty"
- Take a snapshot manually (a status check is sketched at the end of this section)
curl -k -X PUT "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository/ay_snap_02?pretty"
- List the available snapshots in the specified repository
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository/*?verbose=false&pretty"
- Test the restore
# Delete an index
curl -k -X DELETE "https://elastic-search.dev.tech:32443/books?pretty"
# restore that index
curl -k -X POST "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository/ay_snap_02/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "books"
}
'
# query
curl -k -X GET "https://elastic-search.dev.tech:32443/books/_search?pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}
'
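To watch the snapshot taken above while it is running, or to confirm that it completed, you can also query the _status endpoint; this is a sketch reusing the same host, repository, and snapshot names as above.
# shows the snapshot state (IN_PROGRESS / SUCCESS / FAILED) and per-shard progress
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/my_fs_repository/ay_snap_02/_status?pretty"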
ES [S3 Compatible]
Preliminary
ElasticSearch has been installed; if not, check this link
Diff from the original file:
extraEnvVars:
  - name: S3_ACCESSKEY
    value: admin
  - name: S3_SECRETKEY
    value: ZrwpsezF1Lt85dxl
extraConfig:
  s3:
    client:
      default:
        protocol: http
        endpoint: "http://192.168.31.111:9090"
        path_style_access: true
initScripts:
  configure-s3-client.sh: |
    elasticsearch_set_key_value "s3.client.default.access_key" "${S3_ACCESSKEY}"
    elasticsearch_set_key_value "s3.client.default.secret_key" "${S3_SECRETKEY}"
hostAliases:
  - ip: 192.168.31.111
    hostnames:
      - minio-api.dev.tech
Methods
There are two ways to back up Elasticsearch:
- Export the data to text files, e.g. using tools such as elasticdump or esm to dump the data stored in Elasticsearch into files.
- Use the snapshot API, which supports incremental backup files.
The first approach is relatively simple and practical when the data volume is small, but for large data volumes the snapshot API is recommended.
Steps
- Create a snapshot repository -> my_s3_repository
curl -k -X PUT "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository?pretty" -H 'Content-Type: application/json' -d'
{
"type": "s3",
"settings": {
"bucket": "local-test",
"client": "default",
"endpoint": "http://192.168.31.111:9000"
}
}
'
You can also use a storage-class to mount a path into the pod and keep the snapshot files on that mounted volume.
- Verify that every node in the cluster can use this snapshot repository
curl -k -X POST "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository/_verify?pretty"
- List the snapshot repositories
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/_all?pretty"
- View the settings of a specific snapshot repository
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository?pretty"
- Analyze a snapshot repository
curl -k -X POST "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&pretty"
- Take a snapshot manually
curl -k -X PUT "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository/ay_s3_snap_02?pretty"
- List the available snapshots in the specified repository
curl -k -X GET "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository/*?verbose=false&pretty"
- Test the restore
# Delete an index
curl -k -X DELETE "https://elastic-search.dev.tech:32443/books?pretty"
# restore that index
curl -k -X POST "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository/ay_s3_snap_02/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "books"
}
'
# query
curl -k -X GET "https://elastic-search.dev.tech:32443/books/_search?pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}
'
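When a snapshot is no longer needed, it can be removed with a DELETE call; this is a sketch reusing the repository and snapshot names above. Elasticsearch only deletes the blobs that are not referenced by other snapshots.
# delete a single snapshot from the s3-backed repository
curl -k -X DELETE "https://elastic-search.dev.tech:32443/_snapshot/my_s3_repository/ay_s3_snap_02?pretty"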
ES Auto BackUp
Preliminary
ElasticSearch has been installed; if not, check this link
We use the local disk to store the snapshots; for more details, check this link
And security is enabled. Diff from the original file:
security:
  enabled: true
extraConfig:
  path:
    repo: /tmp
Methods
Steps
- Create a snapshot repository -> slm_fs_repository
curl --user elastic:L9shjg6csBmPZgCZ -k -X PUT "https://10.88.0.143:30294/_snapshot/slm_fs_repository?pretty" -H 'Content-Type: application/json' -d'
{
"type": "fs",
"settings": {
"location": "/tmp"
}
}
'
You can also use a storage-class to mount a path into the pod and keep the snapshot files on that mounted volume.
- Verify that every node in the cluster can use this snapshot repository
curl --user elastic:L9shjg6csBmPZgCZ -k -X POST "https://10.88.0.143:30294/_snapshot/slm_fs_repository/_verify?pretty"
- List the snapshot repositories
curl --user elastic:L9shjg6csBmPZgCZ -k -X GET "https://10.88.0.143:30294/_snapshot/_all?pretty"
- View the settings of a specific snapshot repository
curl --user elastic:L9shjg6csBmPZgCZ -k -X GET "https://10.88.0.143:30294/_snapshot/slm_fs_repository?pretty"
- Analyze a snapshot repository
curl --user elastic:L9shjg6csBmPZgCZ -k -X POST "https://10.88.0.143:30294/_snapshot/slm_fs_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&pretty"
- List the available snapshots in the specified repository
curl --user elastic:L9shjg6csBmPZgCZ -k -X GET "https://10.88.0.143:30294/_snapshot/slm_fs_repository/*?verbose=false&pretty"
- Create an SLM admin role
curl --user elastic:L9shjg6csBmPZgCZ -k -X POST "https://10.88.0.143:30294/_security/role/slm-admin?pretty" -H 'Content-Type: application/json' -d'
{
"cluster": [ "manage_slm", "cluster:admin/snapshot/*" ],
"indices": [
{
"names": [ ".slm-history-*" ],
"privileges": [ "all" ]
}
]
}
'
- Create the automatic backup cron job
curl --user elastic:L9shjg6csBmPZgCZ -k -X PUT "https://10.88.0.143:30294/_slm/policy/nightly-snapshots?pretty" -H 'Content-Type: application/json' -d'
{
"schedule": "0 30 1 * * ?",
"name": "<nightly-snap-{now/d}>",
"repository": "slm_fs_repository",
"config": {
"indices": "*",
"include_global_state": true
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
}
'
- Run the automatic backup policy immediately
curl --user elastic:L9shjg6csBmPZgCZ -k -X POST "https://10.88.0.143:30294/_slm/policy/nightly-snapshots/_execute?pretty"
- View the SLM backup history (a policy-level status check is also sketched at the end of this section)
curl --user elastic:L9shjg6csBmPZgCZ -k -X GET "https://10.88.0.143:30294/_slm/stats?pretty"
- Test the restore
# Delete an index
curl --user elastic:L9shjg6csBmPZgCZ -k -X DELETE "https://10.88.0.143:30294/books?pretty"
# restore that index
curl --user elastic:L9shjg6csBmPZgCZ -k -X POST "https://10.88.0.143:30294/_snapshot/slm_fs_repository/my_snapshot_2099.05.06/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "books"
}
'
# query
curl --user elastic:L9shjg6csBmPZgCZ -k -X GET "https://10.88.0.143:30294/books/_search?pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}
'
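Besides the stats history, you can also fetch the policy itself to see its last success, last failure, and next scheduled run; this is a sketch using the same credentials and policy name as above.
# show the nightly-snapshots policy, including last_success, last_failure and next_execution
curl --user elastic:L9shjg6csBmPZgCZ -k -X GET "https://10.88.0.143:30294/_slm/policy/nightly-snapshots?human&pretty"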
Cheat Sheet
Subsections of Cheat Sheet
Aliyun Related
Subsections of Aliyun Related
OSSutil
Aliyun's counterpart to MinIO (https://min.io/)
download ossutil
First, you need to download ossutil:
curl https://gosspublic.alicdn.com/ossutil/install.sh | sudo bash
curl -o ossutil-v1.7.19-windows-386.zip https://gosspublic.alicdn.com/ossutil/1.7.19/ossutil-v1.7.19-windows-386.zip
config ossutil
./ossutil config
Params | Description | Instruction |
---|---|---|
endpoint | the Endpoint of the region where the Bucket is located | |
accessKeyID | OSS AccessKey | get from user info panel |
accessKeySecret | OSS AccessKeySecret | get from user info panel |
stsToken | token for sts service | could be empty |
You can also modify the /home/<$user>/.ossutilconfig
file directly to change the configuration.
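If you prefer a non-interactive setup (e.g. inside a script), the same values can be passed as flags; this is a sketch, and the endpoint and keys are placeholders.
# write the config file in one shot instead of answering the interactive prompts
./ossutil config -e oss-cn-hangzhou.aliyuncs.com -i <$accessKeyID> -k <$accessKeySecret>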
list files
ossutil ls oss://<$PATH>
download file/dir
You can use cp to download or upload files.
ossutil cp -r oss://<$PATH> <$PTHER_PATH>
upload file/dir
ossutil cp -r <$SOURCE_PATH> oss://<$PATH>
ECS DNS
ZJADC (Aliyun Directed Cloud)
Append the following content to /etc/resolv.conf
options timeout:2 attempts:3 rotate
nameserver 10.255.9.2
nameserver 10.200.12.5
And then you probably need to modify yum.repos.d as well; check this link
YQGCY (Aliyun Directed Cloud)
Append the following content to /etc/resolv.conf
nameserver 172.27.205.79
And then restart the coredns-xxxx pods in the kube-system namespace, as shown below.
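A sketch of restarting coredns with kubectl, assuming a kubeadm-style cluster where coredns runs as a Deployment labelled k8s-app=kube-dns:
# roll the coredns pods so they pick up the nodes' new /etc/resolv.conf
kubectl -n kube-system rollout restart deployment coredns
# watch the new pods come up
kubectl -n kube-system get pods -l k8s-app=kube-dns -w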
Google DNS
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 223.5.5.5
nameserver 223.6.6.6
Restart DNS
vim /etc/NetworkManager/NetworkManager.conf
add "dns=none"
under '[main]'
part
systemctl restart NetworkManager
Modify ifcfg-ethX
[Optional]
If you cannot get an IPv4 address, you can try to modify ifcfg-ethX:
vim /etc/sysconfig/network-scripts/ifcfg-ens33
set ONBOOT=yes
OS Mirrors
Fedora
- Fedora 40 located in
/etc/yum.repos.d/
CentOS
CentOS 7 located in
/etc/yum.repos.d/
CentOS 8 stream located in
/etc/yum.repos.d/
Ubuntu
Ubuntu 18.04 located in
/etc/apt/sources.list
Ubuntu 20.04 located in
/etc/apt/sources.list
Ubuntu 22.04 located in
/etc/apt/sources.list
Debian
Debian Buster located in
/etc/apt/sources.list
Debian Bullseye located in
/etc/apt/sources.list
Anolis
Anolis 3 located in
/etc/yum.repos.d/
Anolis 2 located in
/etc/yum.repos.d/
Refresh Repo
dnf clean all && dnf makecache
yum clean all && yum makecache
apt-get clean && apt-get update
App Related
Subsections of App Related
Mirrors [Aliyun, Tsinghua]
Gradle Tencent Mirror
https://mirrors.cloud.tencent.com/gradle/gradle-8.0-bin.zip
PIP Tuna Mirror
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple some-package
Maven Mirror
<mirror>
<id>aliyunmaven</id>
<mirrorOf>*</mirrorOf>
<name>阿里云公共仓库</name>
<url>https://maven.aliyun.com/repository/public</url>
</mirror>
Git Related
Subsections of Git Related
Not Allow Push
Cannot push to your own branch
Edit the .git/config file under your repo directory. Find the url = entry under the [remote "origin"] section. Change it from:
url=https://gitlab.com/AaronYang2333/ska-src-dm-local-data-preparer.git/
to:
url=ssh://git@gitlab.com/AaronYang2333/ska-src-dm-local-data-preparer.git
Then try to push again.
Linux Related
Subsections of Linux Related
Disable Service
Disable the firewalld, selinux, dnsmasq, and swap services
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=permissive#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=permissive#SELINUX=disabled#g' /etc/selinux/config
reboot
getenforce
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
Example Shell Script
Init ES Backup Setting
Create an ES backup repository in S3, and take a snapshot after creation.
#!/bin/bash
ES_HOST="http://192.168.58.2:30910"
ES_BACKUP_REPO_NAME="s3_fs_repository"
S3_CLIENT="default"
ES_BACKUP_BUCKET_IN_S3="es-snapshot"
ES_SNAPSHOT_TAG="auto"
CHECK_RESPONSE=$(curl -s -k -X POST "$ES_HOST/_snapshot/$ES_BACKUP_REPO_NAME/_verify?pretty" )
CHECKED_NODES=$(echo "$CHECK_RESPONSE" | jq -r '.nodes')
if [ "$CHECKED_NODES" == null ]; then
echo "Doesn't exist an ES backup setting..."
echo "A default backup setting will be generated. (using '$S3_CLIENT' s3 client and all backup files will be saved in a bucket : '$ES_BACKUP_BUCKET_IN_S3'"
CREATE_RESPONSE=$(curl -s -k -X PUT "$ES_HOST/_snapshot/$ES_BACKUP_REPO_NAME?pretty" -H 'Content-Type: application/json' -d "{\"type\":\"s3\",\"settings\":{\"bucket\":\"$ES_BACKUP_BUCKET_IN_S3\",\"client\":\"$S3_CLIENT\"}}")
CREATE_ACKNOWLEDGED_FLAG=$(echo "$CREATE_RESPONSE" | jq -r '.acknowledged')
if [ "$CREATE_ACKNOWLEDGED_FLAG" == true ]; then
echo "Buckup setting '$ES_BACKUP_REPO_NAME' has been created successfully!"
else
echo "Failed to create backup setting '$ES_BACKUP_REPO_NAME', since $$CREATE_RESPONSE"
fi
else
echo "Already exist an ES backup setting '$ES_BACKUP_REPO_NAME'"
fi
CHECK_RESPONSE=$(curl -s -k -X POST "$ES_HOST/_snapshot/$ES_BACKUP_REPO_NAME/_verify?pretty" )
CHECKED_NODES=$(echo "$CHECK_RESPONSE" | jq -r '.nodes')
if [ "$CHECKED_NODES" != null ]; then
SNAPSHOT_NAME="meta-data-$ES_SNAPSHOT_TAG-snapshot-$(date +%s)"
SNAPSHOT_CREATION=$(curl -s -k -X PUT "$ES_HOST/_snapshot/$ES_BACKUP_REPO_NAME/$SNAPSHOT_NAME")
echo "Snapshot $SNAPSHOT_NAME has been created."
else
echo "Failed to verify backup repository '$ES_BACKUP_REPO_NAME'; no snapshot was created."
fi
Login Without Pwd
Copy id_rsa.pub to the other nodes
yum install sshpass -y
mkdir -p /extend/shell
cat >>/extend/shell/fenfa_pub.sh<< EOF
#!/bin/bash
ROOT_PASS=root123
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
for ip in 101 102 103
do
sshpass -p\$ROOT_PASS ssh-copy-id -o StrictHostKeyChecking=no 192.168.29.\$ip
done
EOF
cd /extend/shell
chmod +x fenfa_pub.sh
./fenfa_pub.sh
Set Http Proxy
set http proxy
export https_proxy=http://localhost:20171
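If other tools need the proxy too, you typically export http_proxy as well and exclude local addresses via no_proxy; this is a sketch using the same placeholder proxy address as above.
# route both http and https traffic through the local proxy
export http_proxy=http://localhost:20171
export https_proxy=http://localhost:20171
# skip the proxy for local and cluster-internal addresses
export no_proxy=localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16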
Storage Related
Subsections of Storage Related
User Based Policy
User Based Policy
You can change <$bucket> to control the permission.
${aws:username} is a built-in variable, indicating the logged-in user name.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Sid": "AllowRootAndHomeListingOfCompanyBucket",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<$bucket>"
],
"Condition": {
"StringEquals": {
"s3:prefix": [
"",
"<$path>/",
"<$path>/${aws:username}"
],
"s3:delimiter": [
"/"
]
}
}
},
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<$bucket>"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"<$path>/${aws:username}/*"
]
}
}
},
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::<$bucket>/<$path>/${aws:username}/*"
]
}
]
}
<$uid> is the Aliyun UID
{
"Version": "1",
"Statement": [{
"Effect": "Allow",
"Action": [
"oss:*"
],
"Principal": [
"<$uid>"
],
"Resource": [
"acs:oss:*:<$oss_id>:<$bucket>/<$path>/*"
]
}, {
"Effect": "Allow",
"Action": [
"oss:ListObjects",
"oss:GetObject"
],
"Principal": [
"<$uid>"
],
"Resource": [
"acs:oss:*:<$oss_id>:<$bucket>"
],
"Condition": {
"StringLike": {
"oss:Prefix": [
"<$path>/*"
]
}
}
}]
}
Example:
{
"Version": "1",
"Statement": [{
"Effect": "Allow",
"Action": [
"oss:*"
],
"Principal": [
"203415213249511533"
],
"Resource": [
"acs:oss:*:1007296819402486:conti-csst/test/*"
]
}, {
"Effect": "Allow",
"Action": [
"oss:ListObjects",
"oss:GetObject"
],
"Principal": [
"203415213249511533"
],
"Resource": [
"acs:oss:*:1007296819402486:conti-csst"
],
"Condition": {
"StringLike": {
"oss:Prefix": [
"test/*"
]
}
}
}]
}
Command
Subsections of Command
Git CMD
Init global config
git config --list
git config --global user.name "AaronYang"
git config --global user.email aaron19940628@gmail.com
git config --global pager.branch false
git config --global pull.ff only
git --no-pager diff
change user and email (locally)
git config user.name ""
git config user.email ""
list all remote repo
git remote -v
Get specific file from remote
git archive --remote=git@github.com:<$user>/<$repo>.git <$branch>:<$source_file_path> -o <$target_source_path>
Clone specific branch
git clone -b slurm-23.02 --single-branch --depth=1 https://github.com/SchedMD/slurm.git
Update submodule
git submodule add --depth 1 https://github.com/xxx/xxxx a/b/c
git submodule update --init --recursive
Save credential
login first and then execute this
git config --global credential.helper store
Delete Branch
- Deleting a remote branch
git push origin --delete <branch>   # Git version 1.7.0 or newer
git push origin -d <branch>         # Shorter version (Git 1.7.0 or newer)
git push origin :<branch>           # Git versions older than 1.7.0
- Deleting a local branch
git branch --delete <branch>
git branch -d <branch>              # Shorter version
git branch -D <branch>              # Force-delete un-merged branches
Prune remote branches
git remote prune origin
Update remote repo
git remote set-url origin http://xxxxx.git
Linux
useradd
sudo useradd <$name> -m -r -s /bin/bash -p <$password>
telnet
a command line interface for communication with a remote device or server
telnet <$ip> <$port>
lsof (list open files)
everything is a file
lsof <$option:value>
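For example, a common use is finding which process is listening on a given port (the port number is just an example):
# list the processes that have TCP/UDP port 8080 open
lsof -i :8080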
awk (Aho, Weinberger, and Kernighan [Names])
awk
is a scripting language used for manipulating data and generating reports.
# awk [params] 'script'
awk <$params> <$string_content>
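A small example: print the first column of /etc/passwd, using ':' as the field separator:
# -F sets the input field separator; $1 is the first field of each line
awk -F: '{print $1}' /etc/passwd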
ss (socket statistics)
view detailed information about your system’s network connections, including TCP/IP, UDP, and Unix domain sockets
ss [options]
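For example, to list all listening TCP and UDP sockets together with the owning process:
# -t tcp, -u udp, -l listening only, -p show process, -n numeric ports
ss -tulpn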
clean files 3 days ago
find /aaa/bbb/ccc/*.gz -mtime +3 -exec rm {} \;
ssh without affect $HOME/.ssh/known_hosts
ssh -o "UserKnownHostsFile /dev/null" root@aaa.domain.com
ssh -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking=no" root@aaa.domain.com
sync clock
[yum|dnf] install -y chrony \
&& systemctl enable chronyd \
&& (systemctl is-active chronyd || systemctl start chronyd) \
&& chronyc sources \
&& chronyc tracking \
&& timedatectl set-timezone 'Asia/Shanghai'
set hostname
hostnamectl set-hostname develop
add remote key to other server
ssh -o "UserKnownHostsFile /dev/null" \
root@aaa.bbb.ccc \
"mkdir -p /root/.ssh && chmod 700 /root/.ssh && echo '$SOME_PUBLIC_KEY' \
>> /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys"
set -x
This will print each command to the standard error before executing it, which is useful for debugging scripts.
set -x
set -e
Exit immediately if a command exits with a non-zero status.
set -e
sed (Stream Editor)
sed <$option> <$file_path>
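For example, an in-place substitution over a whole file (the pattern is a placeholder):
# replace every occurrence of 'old' with 'new' directly in the file
sed -i 's/old/new/g' <$file_path>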
fdisk
list all disk
fdisk -l
create XFS file system
Use the mkfs.xfs command to create an XFS file system with the internal log on the same disk; an example is shown below:
mkfs.xfs <$path>
modprobe
program to add and remove modules from the Linux Kernel
modprobe nfs && modprobe nfsd
disown
disown
command in Linux is used to remove jobs from the job table.
disown [options] jobID1 jobID2 ... jobIDN
generate SSH key
ssh-keygen -t rsa -b 4096 -C "aaron19940628@gmail.com"
create soft link
sudo ln -sf <$install_path>/bin/* /usr/local/bin
append dir into $PATH (temporary)
export PATH="/root/bin:$PATH"
copy public key to ECS
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.200.60.53
Maven
1. build from submodule
You don't need to build from the root of the project.
./mvnw clean package -DskipTests -rf :<$submodule-name>
You can find the <$submodule-name> in the submodule's pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.apache.flink</groupId>
<artifactId>flink-formats</artifactId>
<version>1.20-SNAPSHOT</version>
</parent>
<artifactId>flink-avro</artifactId>
<name>Flink : Formats : Avro</name>
Then you can modify the command as
./mvnw clean package -DskipTests -rf :flink-avro
2. skip some other tests
For example, you can skip the RAT check by doing this:
./mvnw clean package -DskipTests '-Drat.skip=true'
Gradle
1. spotless
Keep your code spotless; check more details at https://github.com/diffplug/spotless
Then, you can execute the following command to format your code.
./gradlew spotlessApply
./mvnw spotless:apply
2. shadowJar
ShadowJar can combine a project's dependency classes and resources into a single jar. Check https://imperceptiblethoughts.com/shadow/
./gradlew shadowJar
3. check dependency
list your project’s dependencies in tree view
./gradlew dependencies --configuration compileClasspath
./gradlew :<$module_name>:dependencies --configuration compileClasspath
Elastic Search DSL
Basic Query
Returns documents that contain an indexed value for a field.
GET /_search
{
"query": {
"exists": {
"field": "user"
}
}
}
The following search returns documents that are missing an indexed value for the user.id
field.
GET /_search
{
"query": {
"bool": {
"must_not": {
"exists": {
"field": "user.id"
}
}
}
}
}
Returns documents that contain terms similar to the search term, as measured by a Levenshtein edit distance.
GET /_search
{
"query": {
"fuzzy": {
"filed_A": {
"value": "ki"
}
}
}
}
Returns documents that contain terms similar to the search term, as measured by a Levenshtein edit distance.
GET /_search
{
"query": {
"fuzzy": {
"filed_A": {
"value": "ki",
"fuzziness": "AUTO",
"max_expansions": 50,
"prefix_length": 0,
"transpositions": true,
"rewrite": "constant_score_blended"
}
}
}
}
rewrite:
- constant_score_boolean
- constant_score_filter
- top_terms_blended_freqs_N
- top_terms_boost_N, top_terms_N
- frequent_terms, score_delegating
Returns documents based on their IDs. This query uses document IDs stored in the _id
field.
GET /_search
{
"query": {
"ids" : {
"values" : ["2NTC5ZIBNLuBWC6V5_0Y"]
}
}
}
The following search returns documents where the filed_A field contains a term that begins with ki.
GET /_search
{
"query": {
"prefix": {
"filed_A": {
"value": "ki",
"rewrite": "constant_score_blended",
"case_insensitive": true
}
}
}
}
You can simplify the prefix query syntax by combining the <field>
and value
parameters.
GET /_search
{
"query": {
"prefix" : { "filed_A" : "ki" }
}
}
Returns documents that contain terms within a provided range.
GET /_search
{
"query": {
"range": {
"filed_number": {
"gte": 10,
"lte": 20,
"boost": 2.0
}
}
}
}
GET /_search
{
"query": {
"range": {
"filed_timestamp": {
"time_zone": "+01:00",
"gte": "2020-01-01T00:00:00",
"lte": "now"
}
}
}
}
Returns documents that contain terms matching a regular expression.
GET /_search
{
"query": {
"regexp": {
"filed_A": {
"value": "k.*y",
"flags": "ALL",
"case_insensitive": true,
"max_determinized_states": 10000,
"rewrite": "constant_score_blended"
}
}
}
}
Returns documents that contain an exact term in a provided field.
You can use the term query to find documents based on a precise value such as a price, a product ID, or a username.
GET /_search
{
"query": {
"term": {
"filed_A": {
"value": "kimchy",
"boost": 1.0
}
}
}
}
Returns documents that contain terms matching a wildcard pattern.
A wildcard operator is a placeholder that matches one or more characters. For example, the * wildcard operator matches zero or more characters. You can combine wildcard operators with other characters to create a wildcard pattern.
GET /_search
{
"query": {
"wildcard": {
"filed_A": {
"value": "ki*y",
"boost": 1.0,
"rewrite": "constant_score_blended"
}
}
}
}