Testing disk performance with fio

Disk performance is often the main bottleneck in high-traffic servers and databases.

Humio's instance-sizing guide provides a good way to measure disk read/write bandwidth using fio. The test below simulates how Humio will read and write; change the parameters to match your own workload.

Read more: https://docs.humio.com/cluster-management/infrastructure/instance-sizing/

sudo fio --filename=/data/fio-test.tmp --filesize=1Gi --bs=256K --rw=read --time_based --runtime=5s --name=read_bandwidth_test --numjobs=8 --thread --direct=1
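The write side can be measured the same way; the command below is just the read test with --rw=write swapped in. It writes to the same scratch file, so make sure /data/fio-test.tmp is disposable:

sudo fio --filename=/data/fio-test.tmp --filesize=1Gi --bs=256K --rw=write --time_based --runtime=5s --name=write_bandwidth_test --numjobs=8 --thread --direct=1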

It can be executed using a configuration file as well.

Create a file, humio-read-test.fio, with the following contents:

[global]
thread
rw=read
bs=256Ki
directory=/data/fio-tmp-dir
direct=1

[read8]
stonewall
size=1Gi
numjobs=8

Run it:

fio --bandwidth-log ./humio-read-test.fio
# Clean tmp files from fio:
rm /data/fio-tmp-dir/read8.?.?

elasticsearch on docker

# docker pull elasticsearch:6.8.5
SCRIPT=`realpath $0`
BASE=`dirname $SCRIPT`

# First node: publishes port 9200 to the host and is master-eligible
mkdir -p $BASE/esdata1
docker run -p 9200:9200 --name elasticsearch -v $BASE/esdata1:/usr/share/elasticsearch/data \
    -e "http.host=0.0.0.0" \
    -e "cluster.name=elasticlogging" \
    -e "node.name=esnode1" \
    -e "node.master=true" \
    -e "node.data=true" \
    -e "http.cors.allow-origin=*" \
    -e "ES_JAVA_OPTS=-Xms256m -Xmx256m" \
    -e "discovery.zen.minimum_master_nodes=1" \
    -d elasticsearch:6.8.5

# Second node: discovers and joins the cluster through the linked first node
mkdir -p $BASE/esdata2
docker run --name elasticsearch2 -v $BASE/esdata2:/usr/share/elasticsearch/data --link elasticsearch \
    -e "http.host=0.0.0.0" \
    -e "cluster.name=elasticlogging" \
    -e "node.name=esnode2" \
    -e "http.cors.allow-origin=*" \
    -e "ES_JAVA_OPTS=-Xms256m -Xmx256m" \
    -e "discovery.zen.ping.unicast.hosts=elasticsearch" \
    -d elasticsearch:6.8.5
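Once both containers are up, a quick sanity check from the host confirms the two nodes formed one cluster (expect "number_of_nodes" : 2 in the output):

curl -s http://localhost:9200/_cluster/health?pretty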

All about docker

Docker installation (Ubuntu)

sudo apt update
sudo apt -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
sudo apt -y install docker-ce
sudo systemctl status docker
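Optionally, add your user to the docker group so docker can be run without sudo (takes effect after logging out and back in):

sudo usermod -aG docker $USER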
Add these to your ~/.bashrc:
alias drm="docker rm"
alias dps="docker ps -a"
alias dmi="docker images"
function da () {
    docker start "$1" && docker attach "$1"
}
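Example usage, assuming a container named web1 already exists:

dps       # list all containers
da web1   # start web1 and attach to it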

docker: moving a container to another server

I have a running container that has gone through edits and changes, and I need to move it to a new server.

commit

First, commit the running container to an image:

docker commit <containerid> myimages/lamp:v1.1

You can see the list of images you have with "docker images".

Save the image to a file:

sudo docker save -o <imagefile.tar> imageid

Transfer the imagefile.tar to the new server.
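A sketch of the transfer step, assuming SSH access to the new server (host and path are placeholders):

scp imagefile.tar user@newserver:/tmp/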

Load it back on the new server:

docker load -i <path to image tar file>
docker tag <Image-ID> myimages/lamp:v1.1

Run it again on the new server.
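A minimal sketch of starting the migrated image; the container name and port mapping here are assumptions for a typical LAMP container, so adjust them to match the original:

docker run -d --name lamp -p 80:80 myimages/lamp:v1.1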

Afterlogic webmail for cpanel

Afterlogic offers a fresh new look for cPanel's webmail. A guide is available at https://afterlogic.com/docs/webmail-lite/installation/install-on-cpanel

cd /root/

wget https://afterlogic.com/download/webmail-panel-installer.tar.gz

tar -xzvf ./webmail-panel-installer.tar.gz
cd ./webmail-panel-installer
chmod a+x ./installer
./installer -t lite -a install

elasticsearch reference

Tools
– head for Chrome (ElasticSearch Head – Chrome Web Store)
– Postman
– Insomnia
– elasticdump – nodejs

Monitoring
– ps_mem.py – monitor real memory utilization
ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | head -n 40
– netdata, can also run in Docker

System tuning
sysctl -w vm.max_map_count=262144
sysctl -w vm.swappiness=0

Verify:
sysctl vm.max_map_count
sysctl vm.swappiness
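To make both settings survive a reboot, persist them in a sysctl drop-in (the file name below is just a convention):

echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
echo "vm.swappiness=0" | sudo tee -a /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system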

Reference
https://stefanprodan.com/2016/elasticsearch-cluster-with-docker/

Memory tuning
https://qbox.io/blog/memory-considerations-in-elasticsearch-deployment
https://plumbr.io/handbook/gc-tuning-in-practice

Stuck shards
https://thoughts.t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode-ce196e20ba95
https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/

elasticdump

# Backup index mapping and data to files:
elasticdump \  
    --input=http://production.es.com:9200/my_index \  
    --output=/data/my_index_mapping.json \  
    --type=mapping
elasticdump \  
    --input=http://production.es.com:9200/my_index \  
    --output=/data/my_index.json \  
    --type=data 

# Backup an index to a gzip file using stdout:
elasticdump \  
    --input=http://production.es.com:9200/my_index \  
    --output=$ \  
           | gzip > /data/my_index.json.gz
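Restoring goes in the opposite direction; a sketch assuming a target cluster at staging.es.com (import the mapping before the data):

elasticdump \
    --input=/data/my_index_mapping.json \
    --output=http://staging.es.com:9200/my_index \
    --type=mapping
elasticdump \
    --input=/data/my_index.json \
    --output=http://staging.es.com:9200/my_index \
    --type=data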

Export elasticsearch to csv

docker pull nimmis/java-centos:oracle-8-jdk
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.1.tar.gz
tar zxf logstash-7.1.1.tar.gz
ln -s logstash-7.1.1 logstash
docker run -ti -d --name logstash -v `pwd`/logstash:/home/logstash nimmis/java-centos:oracle-8-jdk
docker exec logstash /home/logstash/bin/logstash-plugin install logstash-input-elasticsearch
docker exec logstash /home/logstash/bin/logstash-plugin install logstash-output-csv
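To confirm both plugins installed correctly, list them (the grep pattern just trims the output):

docker exec logstash /home/logstash/bin/logstash-plugin list | grep -E 'input-elasticsearch|output-csv'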
Put this into `pwd`/logstash/export-csv.conf:
input {
  elasticsearch {
    hosts => "elastic:9200"
    index => "datafeed"
    query => '
      {
        "query": {
          "match_all": {}
        }
      }
    '
  }
}
output {
  csv {
    # elastic field names to export
    fields => ["field1", "field2", "field3", "field4", "field5"]
    # path where the output file is stored
    path => "/home/logstash/exported-data.csv"
  }
}

# The filter stage runs between input and output regardless of its position in the file.
filter {
  mutate {
    convert => {
      "lat" => "float"
      "lon" => "float"
      "weight" => "float"
    }
  }
}
docker exec logstash /home/logstash/bin/logstash -f /home/logstash/export-csv.conf
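When the run finishes, a quick look at the first rows of the export verifies the field mapping:

docker exec logstash head -n 5 /home/logstash/exported-data.csv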