Trevor
Commander
  • 514 Views

Check Disk Health during System Boot

How can I check the health of the disk that contains the root file system during boot, using something equivalent to fsck or badblocks?

Trevor "Red Hat Evangelist" Chandler
10 Replies
Travis
Moderator
  • 338 Views

@Trevor -

You need to provide a little more information with this question. Without knowing the disks and layout (most importantly, the filesystem), it's almost impossible to answer directly.

In many cases, when a system fails it will mark the filesystem for a boot-time check, but I'm assuming the intent of the question is that you want to force this manually at boot for a particular reason.

One thing you could do is modify the grub boot menu and process and add ...

fsck.mode=force

This will force a check. If you are brave and want to allow it to automatically repair problems, you can also append

fsck.repair=yes
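If you want the parameters to persist rather than editing the kernel command line by hand at the GRUB menu, something along these lines should work on RHEL-family systems; try it on a test box first:

```shell
# Force a filesystem check on every boot by adding the kernel
# parameters to all installed kernels (requires root).
grubby --update-kernel=ALL --args="fsck.mode=force"

# Optionally allow automatic repairs as well:
grubby --update-kernel=ALL --args="fsck.repair=yes"

# To undo later:
grubby --update-kernel=ALL --remove-args="fsck.mode fsck.repair"
```

The change takes effect on the next reboot; you can confirm it afterward with `cat /proc/cmdline`.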


If available, tune2fs has the ability to mark a drive/partition as dirty, so when the kernel sees that flag at boot (just as after a system failure), the filesystem will be checked before it is mounted.
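As a sketch of the relevant tune2fs knobs, the following demonstrates them safely on a throwaway ext4 image file (no root needed); on a real system you would point tune2fs at the actual device, e.g. /dev/sda2:

```shell
# Build a scratch ext4 filesystem in a regular file.
truncate -s 16M /tmp/fs.img          # 16 MiB sparse file
mkfs.ext4 -q -F /tmp/fs.img          # -F: allow a non-block-device target

# Force a check on every mount by setting the maximum mount count to 1.
tune2fs -c 1 /tmp/fs.img

# Verify the setting in the superblock.
tune2fs -l /tmp/fs.img | grep -i "maximum mount count"
```

On the next boot, any ext filesystem whose mount count has reached its maximum gets checked before mounting, which is exactly the "dirty" behavior described above.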

For XFS the process is a bit different because you get automatic log recovery and replay, as it is a journaling filesystem; there is no routine boot-time fsck for XFS. If it is corrupted, the best thing to do is boot a LiveUSB and repair the unmounted device from there. If you want a nice LiveUSB, I have a Fedora 43 Remix you can download and try. It is available on GitHub via the link to my Google Drive: https://github.com/tmichett/Fedora_Remix
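For the XFS case, the usual procedure from the LiveUSB looks roughly like this; `/dev/sdXn` is a placeholder for the real device, and the filesystem must be unmounted:

```shell
# Dry run first: report problems without changing anything.
xfs_repair -n /dev/sdXn

# If the dry run looks sane, do the actual repair:
xfs_repair /dev/sdXn

# If xfs_repair refuses to run because the log is dirty, mounting and
# cleanly unmounting the filesystem once replays the log; -L (zero the
# log) exists as a destructive last resort and can lose recent data.
```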

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training
Trevor
Commander
  • 287 Views

Travis, shame on me for not providing more information.
At a minimum, I could have provided the filesystem type.
I'm going to punish myself by standing in a corner, holding
my head in shame, for less than 15 minutes.

I'm gonna go with my old friend, tune2fs, for starters, to see
if it exposes/uncovers anything.  

Thanks for the GRUB boot menu suggestions.  Gonna give
each of them a whirl after tune2fs.  

Trevor "Red Hat Evangelist" Chandler
Chetan_Tiwary_
Community Manager
  • 330 Views

@Trevor badblocks is recommended in offline mode, not automatically on each boot.

Forcing fsck, as mentioned in another reply, is a good option; however, I believe both options are meant for unmounted disks.

Travis
Moderator
  • 329 Views

@Chetan_Tiwary_ -

Yes, it can't be mounted when the checks are run. That is why there are only a few options to do this, and why you also might need the boot/LiveUSB. I guess I wasn't clear about that in what was originally posted. Since it is the "/" (root) filesystem, it is even harder to check, so there are only a couple of ways to accomplish this safely, as it must be unmounted.

If it were any other disk/partition and mount point, it would be easy to set up something like a systemd service to check the disk at every boot and mount it manually (via the service) instead of via /etc/fstab.
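A minimal sketch of that idea, assuming a data disk at /dev/sdb1 mounted on /data (both names are hypothetical). The unit checks the device before mounting it, so the device must not also appear in /etc/fstab:

```ini
# /etc/systemd/system/data-disk-check.service  (hypothetical name)
[Unit]
Description=Check and mount the data disk
DefaultDependencies=no
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# -a: automatically fix whatever is safe to fix
ExecStart=/usr/sbin/fsck -a /dev/sdb1
ExecStart=/usr/bin/mount /dev/sdb1 /data
ExecStop=/usr/bin/umount /data

[Install]
WantedBy=local-fs.target
```

Enable it with `systemctl enable data-disk-check.service`. Note that systemd already does roughly this for fstab entries via generated `systemd-fsck@.service` units, so a custom unit mainly earns its keep when you want non-default fsck behavior.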

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training
Trevor
Commander
  • 304 Views

Mount manually?  I'm too lazy for that.
Looks like I'm going to have to take that LiveUSB route.
I knew that was an option; I was just trying to take an alternative
approach that would be just as safe and effective.

I'm gonna give both of those GRUB boot menu options a shot.
The worst thing that can happen is that I'll learn something.

Thanks for the suggestion of setting something up with systemd - hadn't
thought about going that route.  That will give me a chance to get some
reps in with it - can never get enough exposure to systemd!!


Trevor "Red Hat Evangelist" Chandler
Trevor
Commander
  • 306 Views

Chetan, it's the unmounting of the filesystem that's prompting the query.
If I have an unmounted root filesystem, I don't have a functioning OS.

Just being a little lazy about using a LiveUSB, as Travis mentioned.

Trevor "Red Hat Evangelist" Chandler
Travis
Moderator
  • 304 Views

@Trevor -

I had just taken a wild guess there :-).

However, some of the solutions I gave you, like the grub parameters or marking the filesystem "dirty", do work. That is why, when servers fail and reboot, you sometimes see the filesystem check run before things boot. Part of the check process happens "before" mount processes /etc/fstab, which means that at that point "root" isn't yet mounted. Again, though, there are restrictions depending on the filesystem format type.
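The ordering described here is driven by the sixth field of /etc/fstab (fs_passno): 1 means the filesystem is checked first (conventionally root), 2 means it is checked afterward, and 0 disables boot-time checking entirely. An illustrative layout (the UUIDs are made up):

```
# <device>        <mount>  <type>  <options>  <dump>  <fsck order>
UUID=1111-aaaa    /        ext4    defaults   1       1
UUID=2222-bbbb    /home    ext4    defaults   1       2
UUID=3333-cccc    /data    xfs     defaults   0       0
```

XFS entries typically use 0 because, as noted above, XFS relies on journal replay rather than a boot-time fsck.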

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training
Trevor
Commander
  • 265 Views

Travis, 

I'm so glad that you mentioned tune2fs, cause it has been a while 
since I spent any time with it.  I was looking at an article, and it
showed the following:

/sbin/tune2fs  -l  /dev/sda[n]

where n is in {1, 2, 3, 5, 7, 8}, and /dev/sda[n] is an ext2, ext3, or ext4 partition.

Why do you suppose the author didn't include the values 4 or 6 for [n] in the example?
Trevor "Red Hat Evangelist" Chandler
Travis
Moderator
  • 259 Views

@Trevor -

This is 100% speculation, but I'm pretty sure if the author thinks like me, I'm right.

EXT2/3/4 are all older filesystems; some existed before LVM was a concept, and even before parted, so we were left with either using an entire disk as a single partition or using fdisk to make partitions. This could have been back before UEFI, when things were BIOS-only and filesystems and technologies were more limited.

If I recall correctly, there were rules about primary vs. extended partitions. You could have at most four primary partitions, and to get more partitions in total, you would create one special "extended" partition inside which the rest of your partitions (logical partitions) were created.

On newer systems this isn't quite as important, but MBR has a limit of 4 primary partitions while GPT allows 128. MBR also has a 2 TB disk-size limit, whereas GPT has essentially no limit (at least for current disks, since it is measured in zettabytes).

So on an MBR disk (which can still be common), the first few partitions are primary, and the last is an extended partition in which your logical partitions reside. It all comes down to the partition table.

So essentially /dev/sda4 is a dummy partition: not a real filesystem-bearing partition, but the container for the logical partitions. That is why there is no checkable /dev/sda4. As for /dev/sda6, I can't answer that, because I would think it should be a logical partition that contains a filesystem.

I had answered a similar question helping someone troubleshoot a filesystem, and we found that one of the reasons he couldn't mount a partition was that it was the "extended" partition container, not a real partition with a filesystem.

Hope this helps and have a great weekend!

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training