Bartas1979
Mission Specialist
  • 5,827 Views

Global lock failed: check that global lockspace is started


Hi all,

I'm preparing for the EX436 exam in the RH labs and testing many scenarios. One of these scenarios is to prepare a cluster with two IP addresses assigned to each node and then use an iSCSI target with DLM, multipath, and LVM.

Everything goes smoothly until... I start with the LVM configuration. I get the error "Global lock failed: check that global lockspace is started".

Below are the steps I use to reproduce the error:

1. pcs cluster setup prod1 \
node1.mydomain.com addr=172.25.250.10 addr=172.25.250.50 \
node2.mydomain.com addr=172.25.250.11 addr=172.25.250.51 \
node3.mydomain.com addr=172.25.250.12 addr=172.25.250.52
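 
(As a side check, not part of the original steps: once the cluster is started, both links can be verified on every node with something like the following.)
 
corosync-cfgtool -s   # with two links configured, each node should list LINK ID 0 and LINK ID 1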
 
2. pcs stonith create fence_node1 fence_ipmilan pcmk_host_list=node1.mydomain.com  ip=192.168.0.101 username=myadmin password=secret_password lanplus=1 power_timeout=180
pcs stonith create fence_node2 fence_ipmilan pcmk_host_list=node2.mydomain.com  ip=192.168.0.101 username=myadmin password=secret_password lanplus=1 power_timeout=180
pcs stonith create fence_node3 fence_ipmilan pcmk_host_list=node2.mydomain.com  ip=192.168.0.101 username=myadmin password=secret_password lanplus=1 power_timeout=180
 
3. cat /etc/corosync/corosync.conf
nodelist {
    node {
        ring0_addr: 172.25.250.10
        ring1_addr: 172.25.250.50
        name: node1.mydomain.com
        nodeid: 1
    }

    node {
        ring0_addr: 172.25.250.11
        ring1_addr: 172.25.250.51
        name: node2.mydomain.com
        nodeid: 2
    }

    node {
        ring0_addr: 172.25.250.12
        ring1_addr: 172.25.250.52
        name: node3.mydomain.com
        nodeid: 3
    }
}
 
4. dnf install -y dlm iscsi-initiator-utils lvm2-lockd device-mapper-multipath gfs2-utils [on all nodes]
 
5. Edit the /etc/iscsi/initiatorname.iscsi file and set the IQN for the client initiator.
(iqn.2023-11.com.mydomain:<short_hostname>)[on all nodes]
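 
For example, on node1 the file would contain something like this (the short hostname is assumed from the naming scheme above):
 
InitiatorName=iqn.2023-11.com.mydomain:node1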
 
6. systemctl enable --now iscsid [on all nodes]
 
7. iscsiadm -m discovery -t st -p 192.168.1.15
iscsiadm -m node -T iqn.2023-11.com.mydomain:store-prod -p 192.168.1.15 -l
iscsiadm -m discovery -t st -p 192.168.2.15
iscsiadm -m node -T iqn.2023-11.com.mydomain:store-prod -p 192.168.2.15 -l
[on all nodes]
 
8. mpathconf --enable
 
9. iscsiadm -m session -P 3
 
10. udevadm info /dev/sdb | grep ID_SERIAL=
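 
(If udevadm is not handy, the same WWID can also be read with scsi_id - the path below is the RHEL default:)
 
/usr/lib/udev/scsi_id -g -u /dev/sdb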
 
11. Edit the /etc/multipath.conf
multipaths {
multipath {
wwid 3600140562aeac25dc4c4eb5842574c7a
alias diska
}
}
 
12. Copy /etc/multipath.conf to the other two cluster nodes, node2 and node3:
scp /etc/multipath.conf root@node2:/etc/
scp /etc/multipath.conf root@node3:/etc/
 
13. systemctl enable --now multipathd [on all nodes]
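 
(A quick check at this point, not in the original list - the alias should now resolve on every node:)
 
multipath -ll   # expect the "diska" alias with both sdX paths listed under it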
 
14. pcs resource create dlm ocf:pacemaker:controld --group=locking
 
15. pcs resource create lvmlockd ocf:heartbeat:lvmlockd --group=locking
 
16. pcs resource clone locking interleave=true
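 
(Before moving on to pvcreate, a sketch of how the locking stack can be verified on each node - not from the course material:)
 
pcs status --full | grep -E 'dlm|lvmlockd'   # both resources should be Started on every node
ps -C lvmlockd -o pid,cmd                    # the lvmlockd daemon must be running
dlm_tool ls                                  # lists DLM lockspaces; empty output is normal before any shared VG exists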
 
17. pvcreate /dev/mapper/diska
 
Global lock failed: check that global lockspace is started
 
Could you help me find where I made a mistake? Thank you.
2 Solutions

Accepted Solutions
AlexonOliveira
Flight Engineer
  • 3,391 Views

I could reproduce the reported issue, as follows:

 

[root@nodea ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─diska 253:0 0 10G 0 mpath
sdb 8:16 0 10G 0 disk
└─diska 253:0 0 10G 0 mpath
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
├─vda2 252:2 0 100M 0 part /boot/efi
└─vda3 252:3 0 9.9G 0 part /

[root@nodea ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence --group=locking

[root@nodea ~]# pcs resource create lvmlockd ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence --group=locking

[root@nodea ~]# pcs resource clone locking interleave=true

[root@nodea ~]# pvs
Skipping global lock: lockspace not found or started

[root@nodea ~]# pvs
Global lock failed: error -210

[root@nodea ~]# pvs
Skipping global lock: lockspace not found or started

[root@nodea ~]# pvcreate /dev/mapper/diska
Global lock failed: check that global lockspace is started

 

I'm also running my cluster with two links for each node (see the corosync nodelist below). Apparently, that's the issue, according to the following KB:

https://access.redhat.com/solutions/5099971

So, to solve it, remove the extra ring, as in the following example:

 

[root@nodea ~]# pcs cluster corosync | grep nodelist -A21
nodelist {
node {
ring0_addr: 192.168.0.10
ring1_addr: 192.168.2.10
name: nodea.private.example.com
nodeid: 1
}

node {
ring0_addr: 192.168.0.11
ring1_addr: 192.168.2.11
name: nodeb.private.example.com
nodeid: 2
}

node {
ring0_addr: 192.168.0.12
ring1_addr: 192.168.2.12
name: nodec.private.example.com
nodeid: 3
}
}

[root@nodea ~]# pcs cluster link remove 1
Sending updated corosync.conf to nodes...
nodea.private.example.com: Succeeded
nodeb.private.example.com: Succeeded
nodec.private.example.com: Succeeded
nodea.private.example.com: Corosync configuration reloaded

[root@nodea ~]# pcs cluster corosync | grep nodelist -A18
nodelist {
node {
ring0_addr: 192.168.0.10
name: nodea.private.example.com
nodeid: 1
}

node {
ring0_addr: 192.168.0.11
name: nodeb.private.example.com
nodeid: 2
}

node {
ring0_addr: 192.168.0.12
name: nodec.private.example.com
nodeid: 3
}
}

[root@nodea ~]# pcs cluster stop --all
nodea.private.example.com: Stopping Cluster (pacemaker)...
nodec.private.example.com: Stopping Cluster (pacemaker)...
nodeb.private.example.com: Stopping Cluster (pacemaker)...
nodea.private.example.com: Stopping Cluster (corosync)...
nodeb.private.example.com: Stopping Cluster (corosync)...
nodec.private.example.com: Stopping Cluster (corosync)...

[root@nodea ~]# reboot

[root@nodeb ~]# reboot

[root@nodec ~]# reboot

[root@nodea ~]# pcs status --full
Cluster name: cluster1
Cluster Summary:
* Stack: corosync
* Current DC: nodec.private.example.com (3) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
* Last updated: Wed May 29 21:38:34 2024
* Last change: Wed May 29 21:31:46 2024 by root via crm_resource on nodea.private.example.com
* 3 nodes configured
* 9 resource instances configured

Node List:
* Online: [ nodea.private.example.com (1) nodeb.private.example.com (2) nodec.private.example.com (3) ]

Full List of Resources:
* fence_nodea (stonith:fence_ipmilan): Started nodea.private.example.com
* fence_nodeb (stonith:fence_ipmilan): Started nodeb.private.example.com
* fence_nodec (stonith:fence_ipmilan): Started nodec.private.example.com
* Clone Set: locking-clone [locking]:
* Resource Group: locking:0:
* dlm (ocf::pacemaker:controld): Started nodec.private.example.com
* lvmlockd (ocf::heartbeat:lvmlockd): Started nodec.private.example.com
* Resource Group: locking:1:
* dlm (ocf::pacemaker:controld): Started nodea.private.example.com
* lvmlockd (ocf::heartbeat:lvmlockd): Started nodea.private.example.com
* Resource Group: locking:2:
* dlm (ocf::pacemaker:controld): Started nodeb.private.example.com
* lvmlockd (ocf::heartbeat:lvmlockd): Started nodeb.private.example.com

Migration Summary:

Tickets:

PCSD Status:
nodea.private.example.com: Online
nodeb.private.example.com: Online
nodec.private.example.com: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

[root@nodea ~]# pvs

[root@nodea ~]# pvcreate /dev/mapper/diska
Physical volume "/dev/mapper/diska" successfully created.

[root@nodea ~]# vgcreate --shared vg1 /dev/mapper/diska
Volume group "vg1" successfully created
VG vg1 starting dlm lockspace
Starting locking. Waiting until locks are ready...

[root@nodea ~]# ssh nodeb vgchange --lock-start vg1
VG vg1 starting dlm lockspace
Starting locking. Waiting until locks are ready...

[root@nodea ~]# ssh nodec vgchange --lock-start vg1
VG vg1 starting dlm lockspace
Starting locking. Waiting until locks are ready...

[root@nodea ~]# lvcreate --activate sy -L4G -n lv1 vg1
Logical volume "lv1" created.

[root@nodea ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/diska vg1 lvm2 a-- <10.00g <6.00g

[root@nodea ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg1 1 1 0 wz--ns <10.00g <6.00g

[root@nodea ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 -wi-a----- 4.00g
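
Not shown above: if the new LV also needs to be active on the other nodes, a shared activation there would look something like this:

[root@nodea ~]# ssh nodeb lvchange --activate sy vg1/lv1

[root@nodea ~]# ssh nodec lvchange --activate sy vg1/lv1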

Alexon Oliveira


sgvredhat
Cadet
  • 2,057 Views

I can see a KB article which explains that DLM does not support configuring more than one ring. Please refer to this KB article for details:

 

https://access.redhat.com/solutions/5099971

 


16 Replies
Chaitanya83
Flight Engineer
  • 4,090 Views
Hello,

Are you able to list the disks on all the nodes when you run lsblk on the machines?
Bartas1979
Mission Specialist
  • 4,062 Views

Yes. Without any problem. 

Chetan_Tiwary_
Community Manager
  • 4,080 Views

Hello @Bartas1979 !

Thanks for reaching out !

Could you please quickly let me know the guided exercise or the chapter/section of the RH436 course you are practising when you hit this issue?

Bartas1979
Mission Specialist
  • 4,049 Views

Hello,

Well... there is no section exactly like this. I'm basing it on "compreview-gfs2" with a few small changes (as described above). 

Chetan_Tiwary_
Community Manager
  • 4,023 Views

OK @Bartas1979 ! What about use_lvmlockd = 1 in the global section of /etc/lvm/lvm.conf? If it is currently set to 1, change it to 0 and then recheck.

Or ensure that only one heartbeat address is configured per cluster node in /etc/corosync/corosync.conf.

(Check /var/log/messages for the dlm error.)
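
A quick way to check both points could look like this (a sketch; assumes logging goes to the default /var/log/messages):

lvmconfig global/use_lvmlockd                      # prints the effective setting; 1 means LVM expects lvmlockd for shared VGs
grep -c ring /etc/corosync/corosync.conf           # a rough count of configured heartbeat addresses
grep -iE 'dlm|lvmlockd' /var/log/messages | tail   # recent dlm / lvmlockd messages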

Bartas1979
Mission Specialist
  • 3,983 Views

Hi @Chetan_Tiwary_ 

Thank you for your suggestions. Some time ago I tried changing the use_lvmlockd parameter to 0. The result is that the pvcreate /dev/mapper/diska command works, but the next step fails again:

vgcreate --shared vg1 /dev/mapper/diska

Using a shared lock type requires lvmlockd.

About the second part of your suggestion - "Ensure that only one heartbeat address is..." - I guess you mean removing one address from this section:

ring0_addr: 172.25.250.10
ring1_addr: 172.25.250.50
 
Unfortunately that is not an option, because in my case one of the goals I want to achieve is two addresses assigned to each node.
Bartas1979
Mission Specialist
  • 3,946 Views

Thank you @Chetan_Tiwary_ for the documentation links. I really appreciate it, but... it still does not cover my issue.

mohamed42
Mission Specialist
  • 3,351 Views

Were you able to fix this issue? I am facing the same scenario and I don't really understand where the issue is. The shared volume group requires "use_lvmlockd = 1", which should be activated automatically, so I shouldn't have to touch the /etc/lvm/lvm.conf file.

 
