Q.) How can we configure and ensure that a specific process or service starts after all other service scripts and systemd init tasks have been completed?
Q.) I am using NFS shares for my kickstart configuration:
inst.repo=nfs:nfsvers=4:10.16.192.151:/var/www/html/rhel8 inst.ks=http://10.16.192.151/var/www/html/rhel8/rhel8.cfg inst.sshd
but getting this error later :
dracut-initqueue[1021]: anaconda: found /run/install/repo//images/install.img
kernel: loop: module loaded
dracut-initqueue[1021]: anaconda: kickstart locations are: nfs:nfsvers=4:10.16.192.151:/rhel8/rhel8.cfg
dracut-initqueue[1021]: anaconda: fetching kickstart from nfs:nfsvers=4:10.16.192.151:/rhel8/rhel8.cfg
dracut-initqueue[1021]: cp: cannot stat '/run/install/repo/isolinux/rhel8/rhel8.cfg': No such file or directory
dracut-initqueue[1021]: Warning: anaconda: failed to fetch kickstart from nfs:nfsvers=4:10.16.192.151:/rhel8/rhel8.cfg
How will you address this issue?
Q.) yum update fails with the following error. What would you do to resolve it?
[root@rhel9 ~]# yum update
rpmdb: PANIC: fatal region error detected; run recovery
error: db3 error(-30974) from dbenv->open: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 - (-30974)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Level - L2 and above.
I'll be posting a series of Linux-related questions covering various skill levels. Feel free to share your insights and expertise. Your contributions will benefit learners at all stages, from those in current roles to those preparing for Linux interviews.
Question 3: "yum update fails with the following error : What would you do to resolve this?"
My research shows that this happens because the RPM database on the local system is corrupt. To resolve the issue, clean the yum cache and rebuild the RPM database. Perform the following commands to achieve this:
# yum clean all
# rm -f /var/lib/rpm/__db*
The above commands clean the yum cache and remove the stale Berkeley DB environment files (__db*) from the RPM database directory. Removing those files won't affect the currently installed RPMs.
Now, rebuild the rpm database with command below:
# rpm --rebuilddb
Now, run the update again with the command below, and life should be good - at least as far as this issue is concerned!
# yum update -y
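The recovery sequence above can be rehearsed end to end. Here is a hedged sketch that runs against a scratch copy of the database directory rather than /var/lib/rpm, and adds a backup step first (always a good idea before deleting anything under /var/lib/rpm):

```shell
# Sketch of the rpmdb recovery sequence, rehearsed on a scratch directory.
# On a real system RPMDB would be /var/lib/rpm, and you would uncomment
# the rebuild/clean commands at the end.
RPMDB="$(mktemp -d)/rpm"                      # stand-in for /var/lib/rpm
mkdir -p "$RPMDB"
touch "$RPMDB/Packages"                       # the package data itself
touch "$RPMDB/__db.001" "$RPMDB/__db.002"     # stale Berkeley DB env files

# 1. Back up the database directory before touching it.
tar -czf "${RPMDB}.backup.tar.gz" -C "$(dirname "$RPMDB")" "$(basename "$RPMDB")"

# 2. Remove only the __db* environment files; Packages is left intact,
#    so the record of installed RPMs survives.
rm -f "$RPMDB"/__db*

# 3. On the real system, rebuild the database and clean the yum cache:
# rpm --rebuilddb
# yum clean all

ls "$RPMDB"    # only Packages should remain
```

The key point the sketch illustrates: only the `__db*` environment files are deleted, never the `Packages` file that holds the installed-package records.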
@Trevor You are on a ROLL!
No one has weighed in on that first question, so I guess I'll stick my toe in the water:
Q1: How can we configure and ensure that a specific process or service starts after all other service scripts and systemd init tasks have been completed ?
Here's my approach:
Step 1: Create a service unit file for that process or service, in the
/etc/systemd/system directory
Step 2: Enable that new service
Step 3: Reload the systemd daemon
Here's an example, with a little bit more detail of the steps I mentioned above.
For my example, I will assume that the process that will be launched after
everything else has started, will be based on a shell script named "tlc.sh", that
resides in the /tmp/ directory. Also, I will assume that the name of my service
unit file is "lastone.service".
Step 1: The content of the service unit file - could contain more, but this is the bare
minimum.
[Unit]
After=default.target

[Service]
Type=simple
ExecStart=/usr/bin/bash /tmp/tlc.sh

[Install]
WantedBy=default.target
Step 2: # systemctl enable lastone.service
Step 3: # systemctl daemon-reload
This is one approach to ensuring that a specific process or service will start after
all other service scripts and systemd init tasks have been completed.
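The steps above can be scripted in one go. A minimal sketch, reusing the hypothetical names from my example (lastone.service, /tmp/tlc.sh); DEST is parameterized so the sketch can be rehearsed in a scratch directory instead of /etc/systemd/system:

```shell
# Stage the unit file from Step 1. On a real system, set
# DEST=/etc/systemd/system before running.
DEST="${DEST:-$(mktemp -d)}"

# The [Install] section is what allows `systemctl enable` to wire the
# unit into default.target; After=default.target delays its start until
# the default target has been reached.
cat > "$DEST/lastone.service" <<'EOF'
[Unit]
After=default.target

[Service]
Type=simple
ExecStart=/usr/bin/bash /tmp/tlc.sh

[Install]
WantedBy=default.target
EOF

# Steps 2 and 3 apply only on a live system, so they are left commented:
# systemctl enable lastone.service
# systemctl daemon-reload
echo "unit staged at $DEST/lastone.service"
```

On a live system, `systemd-analyze critical-chain lastone.service` is a handy way to confirm the unit really does start last.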
That 2nd question, involving the use of nfs shares for a kickstart configuration,
appears to have an issue with the source information
'/run/install/repo/isolinux/rhel8/rhel8.cfg'
Why do I say that? Because of what appears in that error message:
cp: cannot stat '/run/install/repo/isolinux/rhel8/rhel8.cfg': No such file or directory
The status information on the file rhel8.cfg can't be accessed. So, what's preventing
that is either:
1) the filename rhel8.cfg doesn't exist in that path
2) one of the directories in the path is specified with an incorrect name, or doesn't
exist at all
I should say more, but I'm gonna leave it right there!
@Trevor for Q2.) When installing a system using NFS for multiple file systems, the installer may encounter an error if it attempts to mount the NFS root twice. This can happen even if the kickstart file, which contains the installation configuration, is already located on a mounted NFS share. This is known as the double-mount condition.
To avoid the double-mount issue between inst.repo and inst.ks and ensure a successful installation, you can either use a different NFS root for each, or change the fetch method for one of them (e.g., keep inst.repo on NFS and serve inst.ks over HTTP). This prevents the installer from attempting to mount the same NFS root twice and allows the installation to proceed without errors.
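As an illustration of the second option, here is what the boot line from the question might look like with the repo kept on NFS and the kickstart fetched over HTTP. This is a hedged sketch: it assumes the web server's document root is /var/www/html, so the HTTP path would not include that prefix (the original line did, which is worth double-checking):

```
inst.repo=nfs:nfsvers=4:10.16.192.151:/var/www/html/rhel8 inst.ks=http://10.16.192.151/rhel8/rhel8.cfg inst.sshd
```

With two different fetch methods, the installer never tries to mount the same NFS root twice.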
Red Hat
Learning Community
A collaborative learning environment, enabling open source skill development.