# Storware Cluster with 3 AlmaLinux servers and MariaDB replication
We have prepared three machines running the AlmaLinux 9 operating system on the same network:
- 10.5.24.58 vprotect1
- 10.5.24.51 vprotect2
- 10.5.24.23 vprotect3
We will use 10.5.24.200/24 as the floating IP of our cluster.
Run as root:
[almalinux@almasw-1 ~]$ sudo -s
Update all machines:
[root@almasw-1 almalinux]# yum update
Add host entries:
[root@almasw-1 almalinux]# vi /etc/hosts
Add the following lines:
10.5.24.58 vprotect1
10.5.24.51 vprotect2
10.5.24.23 vprotect3
Save and exit.
# Cloud Image: Enable Password Login via SSH
1. Open the file /etc/ssh/sshd_config.d/50-cloud-init.conf
$ sudo vi /etc/ssh/sshd_config.d/50-cloud-init.conf
2. Edit it to contain the following:
PasswordAuthentication yes
PermitRootLogin yes
3. Save and quit.
4. Restart the sshd service:
sudo systemctl restart sshd
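To confirm the change took effect, you can dump the effective sshd configuration after the restart; this quick check is not part of the original procedure:
```
# print the effective values for the two options edited above
sshd -T | grep -Ei 'passwordauthentication|permitrootlogin'
```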
# Configure SSH
[root@pcmk-1 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ""
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:h5AFPmXsGU4woOxRLYHW9lnU2wIQVOxpSRrsXbo/AX8 root@pcmk-1
The key's randomart image is:
+---[RSA 3072]----+
| o+*BX*. |
| .oo+.+*O o |
| .+. +=% O o |
| . . =o%.o . |
| . .S+.. |
| ..o E |
| . o |
| o |
| . |
+----[SHA256]-----+
[root@pcmk-1 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@pcmk-1 ~]# ssh-copy-id root@pcmk-2
[root@pcmk-1 ~]# ssh-copy-id root@pcmk-3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'pcmk-2 (10.5.24.***)' can't be established.
ED25519 key fingerprint is SHA256:QkJnJ3fmszY7kAuuZ7wxUC5CC+eQThSCF13XYWnZJPo.
This host key is known by the following other names/addresses:
~/.ssh/known_hosts:1: 192.168.122.102
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@pcmk-2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pcmk-2'"
and check to make sure that only the key(s) you wanted were added.
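A quick way to confirm passwordless root SSH to every node (hostnames as defined in /etc/hosts above; adjust if you use the pcmk-* names from the sample output):
```
# BatchMode makes ssh fail instead of prompting for a password
for h in vprotect1 vprotect2 vprotect3; do
  ssh -o BatchMode=yes root@$h hostname
done
```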
# 1\. Storware server installation
Run these steps on all machines in the Pacemaker cluster:
1. Add Storware repository `/etc/yum.repos.d/vProtect.repo`
vi /etc/yum.repos.d/vProtect.repo
```
# Storware Backup & Recovery - Enterprise backup solution for virtual environments repository
[vprotect]
name = vProtect
baseurl = https://repo.storware.eu/storware/current/el9/
gpgcheck = 0
```
2. Add MariaDB repository `/etc/yum.repos.d/MariaDB.repo`
vi /etc/yum.repos.d/MariaDB.repo
```
# MariaDB 10.10 RedHatEnterpriseLinux repository list - created 2023-08-23 08:49 UTC
# https://mariadb.org/download/
[mariadb]
name = MariaDB
# rpm.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
# baseurl = https://rpm.mariadb.org/10.10/rhel/$releasever/$basearch
baseurl = https://mirror.creoline.net/mariadb/yum/10.10/rhel/$releasever/$basearch
# gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgkey = https://mirror.creoline.net/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck = 1
```
3. Install Storware server
```
dnf install -y vprotect-server
```
4. Initialize Storware server
```
vprotect-server-configure
```
Set the MariaDB password when prompted.
5. Redirect port 443 to 8181 on the firewall
```
/opt/vprotect/scripts/ssl_port_forwarding_firewall-cmd.sh
```
6. Add a redirect so the local node can communicate with the server on the cluster IP
```
firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181
firewall-cmd --complete-reload
```
7. Open the firewall for MariaDB replication:
```
firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --complete-reload
```
# 3\. Storware node installation
Execute on all Pacemaker nodes and on any other Storware node machines.
1. Add Storware repository `/etc/yum.repos.d/vProtect.repo`
```
# Storware Backup & Recovery - Enterprise backup solution for virtual environments repository
[vprotect]
name = vProtect
baseurl = https://repo.storware.eu/storware/current/el9/
gpgcheck = 0
```
2. Install Storware node
```
dnf install -y vprotect-node
```
3. Initialize Storware node
```
vprotect-node-configure
```
4. Create directories for the mount service:
```
mkdir /vprotect_data
```
Create subdirectories for the backup destinations (run on a single node only):
```
mkdir /vprotect_data/backup
mkdir /vprotect_data/backup/synthetic
mkdir /vprotect_data/backup/filesystem
mkdir /vprotect_data/backup/dbbackup
```
5. Add privileges for the newly created shares
```
chown vprotect:vprotect -R /vprotect_data
```
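A quick check of the layout and ownership (expect vprotect:vprotect); this is only a suggested verification:
```
ls -ld /vprotect_data
# on the node where the backup subdirectories were created
ls -l /vprotect_data/backup
```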
# 5\. Cluster Configuration
The cluster is controlled by Pacemaker.
## 5.1 Prepare operating system
Run all steps as the root user.
Run these steps on all machines in the Pacemaker cluster:
1. Stop all services that will be controlled by the cluster, and disable their autostart.
```
systemctl stop vprotect-node
systemctl stop vprotect-server
systemctl disable vprotect-node
systemctl disable vprotect-server
```
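You can confirm that both services are stopped and will not start at boot (expected output: inactive and disabled):
```
systemctl is-active vprotect-node vprotect-server
systemctl is-enabled vprotect-node vprotect-server
```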
## 5.2 Set MariaDB replication
Run all steps as the root user.
Run on all cluster nodes to set a known MariaDB root password (the server is temporarily started with --skip-grant-tables to allow the change):
```
systemctl stop mariadb
mariadbd-safe --skip-grant-tables --skip-networking &
mariadb -u root
```
From the MariaDB client:
```
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY '<password>';
EXIT;
```
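Before continuing, it is worth shutting down the temporary instance started with --skip-grant-tables and bringing MariaDB back up under systemd. This cleanup step is our suggestion and is not spelled out in the original procedure:
```
# stop the temporary server started by mariadbd-safe
mariadb-admin -u root -p shutdown
# start MariaDB normally again
systemctl start mariadb
```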
1. Create a MariaDB user `replicator` with the password `vPr0tect` for replication:
```
mysql -u root -p
CREATE USER replicator@'10.5.24.%' IDENTIFIED BY 'vPr0tect';
GRANT SELECT,REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO replicator@'10.5.24.%' IDENTIFIED BY 'vPr0tect';
```
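To double-check the account, you can list its grants from the MariaDB client (a verification step, not required by the procedure):
```
SHOW GRANTS FOR replicator@'10.5.24.%';
```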
2. Add changes to /etc/my.cnf.d/server.cnf in \[mysqld\] section:
```
[mysqld]
lower_case_table_names=1
log-bin=mysql-bin
relay-log=relay-bin
log-slave-updates
max_allowed_packet=500M
log_bin_trust_function_creators=1
```
3. Set a unique `server-id` in the \[mysqld\] section of /etc/my.cnf.d/server.cnf on each host:
On vprotect1.local:
```
server-id=10
```
On vprotect2.local:
```
server-id=20
```
On vprotect3.local:
```
server-id=30
```
4. Restart MariaDB service:
```
systemctl restart mariadb
```
5. On each host, check the output of the following command (from the MariaDB client):
```
SHOW MASTER STATUS;
```
Output from vprotect3.local:
```output
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```
Output from vprotect1.local:
```output
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000007 |      358 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```
Output from vprotect2.local:
```output
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      358 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```
6. Set up replication on each MariaDB server. This creates a replication ring: vprotect1 replicates from vprotect2, vprotect2 from vprotect3, and vprotect3 from vprotect1. Use the File and Position values reported by `SHOW MASTER STATUS` on the host given in `MASTER_HOST` (the values below match the example outputs above):
Execute on vprotect1.local:
```
CHANGE MASTER TO
MASTER_HOST='10.5.24.51',
MASTER_PORT=3306,
MASTER_USER='replicator',
MASTER_PASSWORD='vPr0tect',
MASTER_LOG_FILE='mysql-bin.000004',
MASTER_LOG_POS=358;
```
Execute on vprotect2.local:
```
CHANGE MASTER TO
MASTER_HOST='10.5.24.23',
MASTER_PORT=3306,
MASTER_USER='replicator',
MASTER_PASSWORD='vPr0tect',
MASTER_LOG_FILE='mysql-bin.000006',
MASTER_LOG_POS=374;
```
Execute on vprotect3.local:
```
CHANGE MASTER TO
MASTER_HOST='10.5.24.58',
MASTER_PORT=3306,
MASTER_USER='replicator',
MASTER_PASSWORD='vPr0tect',
MASTER_LOG_FILE='mysql-bin.000007',
MASTER_LOG_POS=358;
```
7. Start MariaDB replication:
Execute on vprotect1.local:
```
START SLAVE;
```
Check the output of:
```
SHOW SLAVE STATUS\G
```
and wait until the output shows:
```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
```
Then move on to vprotect2.local and vprotect3.local, execute `START SLAVE;` again on each, and wait for the correct output from `SHOW SLAVE STATUS\G`.
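Once all three slaves report Yes/Yes, a simple smoke test is to create a throwaway database on one node and confirm it travels around the ring; the database name here is arbitrary:
```
-- on vprotect1
CREATE DATABASE repl_test;
-- on vprotect2 and vprotect3, once replication has caught up
SHOW DATABASES LIKE 'repl_test';
-- back on vprotect1, after confirming
DROP DATABASE repl_test;
```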
### 5.2.1 Set the same password for the vprotect user in MariaDB
#### 5.2.1.1 From this point on, run only on the primary node of the cluster.
1. Copy the password from the file `/opt/vprotect/payara.properties`
```
eu.storware.vprotect.db.password=SECRETPASSWORD
```
2. Log in to MariaDB
```
mysql -u root -p
```
3. Set the password for the vprotect user:
```
SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('SECRETPASSWORD');
quit;
```
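To verify the change, try connecting as the vprotect user with the password taken from payara.properties (a quick check, not part of the original steps):
```
mysql -u vprotect -p -e 'SELECT 1'
```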
#### 5.2.1.2 Run only on the first node of the cluster
1. Copy the configuration files from vprotect1.local to the other cluster hosts (add keystore.jks and license.key to the list if they are present on this node):
```
cd /opt/vprotect/
scp \
  log4j2-server.xml \
  payara.properties \
  vprotect.env \
  vprotect-keystore.jks \
  root@vprotect2:/opt/vprotect
scp \
  log4j2-server.xml \
  payara.properties \
  vprotect.env \
  vprotect-keystore.jks \
  root@vprotect3:/opt/vprotect
```
2. Add permissions for the copied files (run this on vprotect2 and vprotect3, where the files were copied to)
```
chown vprotect:vprotect -R /opt/vprotect/
```
## 5.3 Configure Pacemaker
Run all steps as the root user.
### 5.3.1 Run on all nodes of the cluster
1. Install the Pacemaker packages
```
dnf install -y pacemaker pcs psmisc policycoreutils-python-utils fence-agents-all
```
2. Create SSH keys and add them to known_hosts on the other hosts.
(Done earlier in this guide.)
3. Open the firewall for cluster traffic
```
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
```
4. Start and enable the pcsd service
```
systemctl start pcsd.service
systemctl enable pcsd.service
```
5. Set an identical password for the hacluster user on all nodes
```
passwd hacluster
```
### 5.3.2 Run only on the first node of the cluster
1. Authenticate the cluster nodes
```
pcs host auth vprotect1.local vprotect2.local vprotect3.local
```
2. Create the cluster
```
pcs cluster setup vp vprotect1.local vprotect2.local vprotect3.local
```
3. Start the cluster
```
pcs cluster start --all
```
4. Disable STONITH (fencing)
```
pcs property set stonith-enabled=false
```
5. Create the floating IP resource
```
pcs resource create vp-vip1 ocf:heartbeat:IPaddr2 ip=10.5.24.200 cidr_netmask=24 --group vpgrp
```
6. Add vprotect-server to the cluster
```
pcs resource create "vp-vprotect-server.service" systemd:vprotect-server.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
```
7. Add vprotect-node to the cluster
```
pcs resource create "vp-vprotect-node.service" systemd:vprotect-node.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
```
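At this point the resource group should start on one of the nodes. A quick verification (the floating IP should be visible on whichever node runs vpgrp):
```
pcs status
ip -brief address show | grep 10.5.24.200
```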
# 6\. Register the Storware node on the server (on all hosts)
1. Add the server certificate to the trusted store
```
/opt/vprotect/scripts/node_add_ssl_cert.sh 10.5.24.200 443
```
2. Register the node on the server
```
vprotect node -r ${HOSTNAME%%.*} admin https://10.5.24.200:443/api
```
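If registration succeeded, the node should appear in the server's node list. Assuming the `vprotect node -l` listing option is available in your CLI version, a quick check looks like this:
```
# list nodes known to the server (listing flag assumed from the Storware CLI)
vprotect node -l
```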
# 7\. Useful commands to control the cluster
For updates or servicing Storware, unmanage the services from the cluster:
```
pcs resource unmanage vpgrp
```
Return them to managed mode:
```
pcs resource manage vpgrp
```
Show cluster status:
```
pcs status
```
Stop a cluster node:
```
pcs cluster stop vprotect1.local
```
Stop all cluster nodes:
```
pcs cluster stop --all
```
Start all cluster nodes:
```
pcs cluster start --all
```
Clear old resource errors:
```
pcs resource cleanup
```
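For a controlled failover test, you can also move the resource group to a specific node and then clear the temporary constraint that the move creates (node names as defined when the cluster was set up):
```
pcs resource move vpgrp vprotect2.local
# remove the location constraint left behind by the move
pcs resource clear vpgrp
```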