Bartas1979
Mission Specialist

Global lock failed: check that global lockspace is started

Hi all,

I'm preparing for the EX436 exam in the RH labs and testing many scenarios. One of these scenarios is to prepare a cluster with two IP addresses assigned to each node, and then use an iSCSI target with DLM, multipath, and LVM.

Everything goes smoothly until I start on the LVM configuration, where I get the error "Global lock failed: check that global lockspace is started".

Below are the steps I use to reproduce the error:

1. pcs cluster setup prod1 \
node1.mydomain.com addr=172.25.250.10 addr=172.25.250.50 \
node2.mydomain.com addr=172.25.250.11 addr=172.25.250.51 \
node3.mydomain.com addr=172.25.250.12 addr=172.25.250.52
 
2. pcs stonith create fence_node1 fence_ipmilan pcmk_host_list=node1.mydomain.com  ip=192.168.0.101 username=myadmin password=secret_password lanplus=1 power_timeout=180
pcs stonith create fence_node2 fence_ipmilan pcmk_host_list=node2.mydomain.com  ip=192.168.0.101 username=myadmin password=secret_password lanplus=1 power_timeout=180
pcs stonith create fence_node3 fence_ipmilan pcmk_host_list=node3.mydomain.com ip=192.168.0.101 username=myadmin password=secret_password lanplus=1 power_timeout=180
 
3. cat /etc/corosync/corosync.conf
nodelist {
    node {
        ring0_addr: 172.25.250.10
        ring1_addr: 172.25.250.50
        name: node1.mydomain.com
        nodeid: 1
    }
    node {
        ring0_addr: 172.25.250.11
        ring1_addr: 172.25.250.51
        name: node2.mydomain.com
        nodeid: 2
    }
    node {
        ring0_addr: 172.25.250.12
        ring1_addr: 172.25.250.52
        name: node3.mydomain.com
        nodeid: 3
    }
}
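As a quick sanity check that every node block defines both rings, the nodelist can be scanned with awk. This is only a sketch, run here against a scratch copy so it is safe to execute anywhere; on a real node you would set conf=/etc/corosync/corosync.conf. The sample file below (with a deliberately missing ring1_addr on node2) is purely illustrative.

```shell
# Sketch: flag node blocks in a corosync-style nodelist that are missing
# one of the two ring addresses. Uses a scratch file; on a cluster node,
# point "conf" at /etc/corosync/corosync.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
nodelist {
    node {
        ring0_addr: 172.25.250.10
        ring1_addr: 172.25.250.50
        name: node1.mydomain.com
        nodeid: 1
    }
    node {
        ring0_addr: 172.25.250.11
        name: node2.mydomain.com
        nodeid: 2
    }
}
EOF
missing=$(awk '
$1 == "node" && $2 == "{" { r0 = 0; r1 = 0; nodename = "" }
$1 ~ /^ring0_addr/        { r0 = 1 }
$1 ~ /^ring1_addr/        { r1 = 1 }
$1 == "name:"             { nodename = $2 }
$1 == "}"                 { if (nodename != "" && r0 + r1 < 2)
                                print "missing ring: " nodename
                            nodename = "" }
' "$conf")
echo "$missing"
```

In the sample above, node2 lacks a ring1_addr, so the script reports it.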
 
4. dnf install -y dlm iscsi-initiator-utils lvm2-lockd device-mapper-multipath gfs2-utils [on all nodes]
 
5. Edit /etc/iscsi/initiatorname.iscsi file and set the IQN for the client initiator. 
(iqn.2023-11.com.mydomain:<short_hostname>)[on all nodes]
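For example, on node1 the file would then contain a single line like the following (substituting node1 for <short_hostname> is just an illustration):

```
InitiatorName=iqn.2023-11.com.mydomain:node1
```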
 
6. systemctl enable --now iscsid [on all nodes]
 
7. iscsiadm -m discovery -t st -p 192.168.1.15
iscsiadm -m node -T iqn.2023-11.com.mydomain:store-prod -p 192.168.1.15 -l
iscsiadm -m discovery -t st -p 192.168.2.15
iscsiadm -m node -T iqn.2023-11.com.mydomain:store-prod -p 192.168.2.15 -l
[on all nodes]
 
8. mpathconf --enable
 
9. iscsiadm -m session -P 3
 
10. udevadm info /dev/sdb | grep ID_SERIAL=
 
11. Edit the /etc/multipath.conf
multipaths {
multipath {
wwid 3600140562aeac25dc4c4eb5842574c7a
alias diska
}
}
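For context, mpathconf --enable (step 8) generates a /etc/multipath.conf with a defaults section, so after adding the multipaths block the whole file typically looks something like this (the defaults shown are the tool's usual output, included here as an assumption, not taken from the original post):

```
defaults {
        user_friendly_names yes
        find_multipaths yes
}

multipaths {
        multipath {
                wwid 3600140562aeac25dc4c4eb5842574c7a
                alias diska
        }
}
```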
 
12. Copy /etc/multipath.conf to the other two cluster nodes, node2 and node3:
scp /etc/multipath.conf root@node2:/etc/
scp /etc/multipath.conf root@node3:/etc/
 
13. systemctl enable --now multipathd [on all nodes]
 
14. pcs resource create dlm ocf:pacemaker:controld --group=locking
 
15. pcs resource create lvmlockd ocf:heartbeat:lvmlockd --group=locking
 
16. pcs resource clone locking interleave=true
 
17. pvcreate /dev/mapper/diska
 
Global lock failed: check that global lockspace is started
 
Could I ask you for help to find me where I made mistake? Thank you.
13 Replies
Boolabs
Cadet

Hello,

In my scenario, I have a two-node HA cluster with a separate QNet device for quorum. The cluster nodes have two network rings for cluster communication. I was running into the same error when trying to configure a GFS2 volume ("Global lock failed: check that global lockspace is started").

What appeared to have solved it for me is a combination of:

1. Enabling the "sctp" kernel module by loosely following https://access.redhat.com/solutions/6625041 (note that the module will initially be blacklisted)

2. Adding the line "rrp_mode: passive" in the "totem{}" section of /etc/corosync/corosync.conf on each cluster node.
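For reference, the resulting totem section would look something like this (the version and cluster_name values are illustrative; only the rrp_mode line is the addition described above):

```
totem {
        version: 2
        cluster_name: mycluster
        rrp_mode: passive
}
```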

In addition to setting "use_lvmlockd = 1" in "/etc/lvm/lvm.conf"
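That change can also be scripted. Below is a minimal sketch, run here against a scratch copy so it is safe to execute anywhere; on a real node you would operate on /etc/lvm/lvm.conf itself, on every cluster node, and then restart lvmlockd:

```shell
# Sketch: flip use_lvmlockd from 0 to 1 in an lvm.conf-style file.
# Uses a scratch copy; on a node, point "conf" at /etc/lvm/lvm.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
global {
        # use_lvmlockd = 0
        use_lvmlockd = 0
}
EOF
# Only touch the active (uncommented) setting, not the comment above it.
sed -i 's/^\([[:space:]]*\)use_lvmlockd = 0/\1use_lvmlockd = 1/' "$conf"
grep '^[[:space:]]*use_lvmlockd' "$conf"
```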

Only then did the "vgcreate ..." command work:

[booboo@server-01 ~]$ sudo vgcreate --shared iscsi-shared /dev/mapper/mpatha
Volume group "iscsi-shared" successfully created
VG iscsi-shared starting dlm lockspace
Starting locking. Waiting until locks are ready...
[booboo@server-01 ~]$ sudo vgdisplay
Devices file PVID ttFV7VFjsRfGtnTkYvOvbdtknr2yfpjn last seen on /dev/sdc not found.
--- Volume group ---
VG Name iscsi-shared
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <50.00 GiB
PE Size 4.00 MiB
Total PE 12799
Alloc PE / Size 0 / 0
Free PE / Size 12799 / <50.00 GiB
VG UUID pvfG10-GM4T-VY4N-GnQN-Y5LH-7iV2-gcASx8

NOTE: My dev device being set up here (/dev/mapper/mpatha) is multi-path to an iSCSI Target on a remote Ceph instance.

This was all essentially reverse-engineered through trial and error. I believe that in a cluster with multiple corosync rings configured, "dlm" must use the "sctp" protocol for lock management between cluster nodes; without the "rrp_mode: passive" setting it defaults to TCP, which does not work on a multi-homed server (you may indeed see similar messages on the console and in "dmesg").

See section 19.1 of https://documentation.suse.com/sle-ha/15-SP3/html/SLE-HA-all/cha-ha-storage-dlm.html for a little more detail.

With these two pieces in place ("sctp" and "rrp_mode"), dlm should show: "dlm: Using SCTP for communications" in "dmesg" and the subsequent commands for creating shared volume groups should work.
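A quick way to check which transport dlm actually chose is to grep the kernel log for the line quoted above. Sketched here against a sample string so it runs anywhere; on a real node you would grep dmesg instead:

```shell
# Simulated kernel log line; on a real node use: dmesg | grep 'dlm:'
logline='dlm: Using SCTP for communications'
if echo "$logline" | grep -q 'Using SCTP'; then
    transport=SCTP
else
    transport=TCP
fi
echo "dlm transport: $transport"
```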

jeet11
Cadet

Hello Everyone,

I'm also facing the same issue: I am not able to create a PV or VG,
even though use_lvmlockd is already set to 1 and the lvmlockd service is running.

Is there any solution?

Thanks

jeet11
Cadet

pvcreate /dev/mapper/mpatha

start a lock manager, lvmlockd did not find one running.
Global lock failed: check global lockspace is started.

jeet11
Cadet

I also tried rebooting both cluster nodes,
set use_lvmlockd = 1 in the /etc/lvm/lvm.conf file,
and manually restarted the lvmlockd service.
I can also see the "mpatha" device on both cluster nodes using "lsblk".
But none of the above solved the issue in my case. I'm preparing for the Red Hat exam EX436. Any solution will be appreciated.

Thank you in advance.
