So I'm going through RH234 right now, covering the creation and management of LVs. LVs have both an associated path and a UUID.
In the video classroom, the instructor adds the LV to the fstab file using the path. On my own system I mount using the UUID.
Obviously both work; I'm just curious whether there is a best-practices approach or if they are completely interchangeable.
I thought one of the reasons to prefer UUIDs was the rare but not unheard-of risk of HDDs not mounting in the same order every time and thus not having the correct absolute path.
Thanks
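For concreteness, here is what the two styles being compared might look like in /etc/fstab (the VG/LV names, UUID, and mount point are made-up placeholders for illustration):

```
# By LV path -- device-mapper names are stable for LVM, independent of scan order
/dev/mapper/vg_data-lv_home  /home  xfs  defaults  0 0

# By filesystem UUID -- works for any block device carrying a filesystem
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /home  xfs  defaults  0 0
```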
UUIDs would be preferred, in my humble opinion, for the very reason you've specified -- the order of the devices, as discovered by the system, can change.
UUIDs are applied to the filesystem itself. Because of this, you can take a block device and attach it to another system and use the same UUID. You might be wondering when would you do that? USB drives.
One thing: device node names for LVMs aren't /dev/sda, /dev/sdb1, etc. They're either
/dev/<vgname>/<lvname>
or
/dev/mapper/<vgname>-<lvname>
The device can be identified by a full path to a block device, a universally unique identifier (UUID), or a volume label. The device node name of a disk (/dev/sda, /dev/sdb1, etc.) may change in some situations. For example, after switching cables around or upgrading certain packages, sda and sdb could swap places. This causes problems when /etc/fstab references filesystems by disk name; use filesystem UUIDs or labels instead.
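To look up a filesystem's UUID or label before writing the fstab entry, blkid is the usual tool (the device name and output shown here are illustrative):

```
# Print UUID, label, and filesystem type for every block device
blkid

# Or query a single device
blkid /dev/mapper/vg_data-lv_home
```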
So it's good to know the UUID approach is the preferred one. That's what I recommend to my students.
What it does do, however, is beg the question of why the other approaches are discussed/covered in the class.
There is a lot of material and many concepts to cover, and it seems to add an unnecessary element of complexity.
I appreciate that one of the great things about Linux is the multitude of ways you can achieve the same end, but for someone just learning this, sometimes simpler is better. The alternate ways can be learned after you get into the field.
For my students, the whole process of understanding what LVs are and how they are built and managed is difficult enough without adding an "oh, you can also do it this way"... That just tends to freak them out.
....now, where do I put my nickel :)
"What it does do, however, is beg the question of why the other approaches are discussed/covered in the class."
Because not everything can be configured in /etc/fstab using a UUID. Most notably, NFS exports (<servername>:/<export_directory>) and Samba shares (//<servername>/<sharename>).
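For example, fstab entries for those look something like this (server and share names are placeholders):

```
# NFS export -- identified by server:path, not by UUID
fileserver:/exports/home  /mnt/nfs-home  nfs   defaults                          0 0

# Samba/CIFS share -- identified by //server/share
//fileserver/shared       /mnt/samba     cifs  credentials=/etc/samba/creds.txt  0 0
```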
I think I've boiled LVMs down for my students fairly well.
I tell them when they're creating a logical volume just go in order, 1 through 6: fdisk/gdisk/parted, pvcreate, vgcreate, lvcreate, mkfs, mount
When removing one, reverse the order, 6 through 1 (skipping 5 as there's no unformat <g>): umount, lvremove, vgremove, pvremove, fdisk/gdisk/parted
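A minimal walkthrough of the six create steps, assuming a spare disk at /dev/sdb (all device, VG, and LV names here are examples, and these commands require root):

```
parted /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 lvm on  # 1. partition
pvcreate /dev/sdb1                                               # 2. physical volume
vgcreate vg_data /dev/sdb1                                       # 3. volume group
lvcreate -n lv_home -L 10G vg_data                               # 4. logical volume
mkfs.xfs /dev/vg_data/lv_home                                    # 5. filesystem
mount /dev/vg_data/lv_home /mnt                                  # 6. mount
```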
Once they have that down, we then work on resizing VGs, LVs, and filesystems. And the difference between -L and -l as options when creating an LV.
I use pv*, vg*, and lv* because if you type those letters, you can then hit tab tab to see the commands available to you -- it really helps when working with LVMs.
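On the -L vs -l point, a quick illustration (the VG and LV names are assumptions):

```
# -L takes an absolute size
lvcreate -n lv_a -L 5G vg_data

# -l takes a number of extents, or a percentage such as %FREE or %VG
lvcreate -n lv_b -l 100%FREE vg_data
```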
Why not do pvcreate directly on the device? I'm really curious as to why we have to create a partition first.
Partitions define how data is to be stored in the disk. Without them, data cannot be written to the disk.
I did find this article: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager... and this is what I was actually thinking of.
"If you are using a whole disk device for your physical volume, the disk must have no partition table."
pvcreate /dev/sdd /dev/sde /dev/sdf
Then of course you create your groups and volumes. But I'm not seeing a reason why you should partition a drive when using LVM nowadays. I feel like in most cases these days, it's better to use the entire drive and then set the size at the volume group or logical volume stage. Just trying to think why this would be bad. Am I missing something?
According to that, and this, from the man pvcreate page:
EXAMPLES
Initialize a partition and a full device.
pvcreate /dev/sdc4 /dev/sde
I don't see why not.
I just tried it and it worked fine. So partitioning first does not seem to be required.
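For reference, that quick test might look like this (device name illustrative; requires root, and pvcreate will wipe the device's existing signature):

```
pvcreate /dev/sdd   # whole disk, no partition table
pvs /dev/sdd        # confirm the PV was created
```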
Now, with that being said, I'm far from knowing everything (or even half of everything). Perhaps someone else has some insight I can't provide.
Red Hat Learning Community