Happy Friday! Let's end the week with a real-world brain teaser.
This one is not about typing a command. It is about understanding how Linux really handles files under the hood. This scenario shows up in interviews and production more often than people expect.
You receive an urgent alert. The /var partition is 100% full.
You find the culprit fast. A huge 50GB log file:
/var/log/app_debug.log
You delete it immediately:
[root@server ~]# rm /var/log/app_debug.log
You confirm it is gone with ls -l.
But df -h still shows 100% usage. No space was freed.
Why is the disk still full? Time to solve the mystery.
Which lsof command would you run to find which process still holds the deleted file? And what could you have done instead of rm to safely clear the log without breaking the running application? Let's see who can crack the case of the ghost file.
1) The disk space was not released because a running process still has the file open. The rm command only removes the directory entry, which is just one link to the file. The running process still holds a file descriptor that points, via the kernel's open file table, to the inode on disk. The kernel will not free the inode's blocks until the link count is zero and no process has the file open, so the process keeps writing to a file that still occupies space even though it no longer appears in the directory.
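You can see the symptom directly before you even reach for lsof: du walks the directory tree and no longer counts the unlinked file, while df reports the blocks actually in use on the filesystem. A quick comparison (paths here match the scenario above) makes the mismatch obvious:

[root@server ~]# du -sh /var/log    # sums files still linked in the directory tree
[root@server ~]# df -h /var         # reports blocks actually allocated on the filesystem

If df shows far more usage than du can account for, a deleted-but-open file is the usual suspect.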
2) Running the following command will show which processes are still using the 'deleted' file:
lsof | grep /var/log/app_debug.log
In the output, lsof will mark the log file as (deleted), even though its data is still present on disk.
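If you did not already know the file name, lsof can also list every unlinked-but-open file on the filesystem. A minimal sketch, assuming /var is its own mount point:

[root@server ~]# lsof +L1 /var    # +L1 selects open files with a link count below 1, i.e. deleted files

The PID and FD columns in the output tell you which process and file descriptor are keeping the space allocated.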
3) Now that you have the process ID, the cleanest way to release the disk space is to restart the application, for example with systemctl restart <name>.service.
If the process is not managed as a systemd service, kill -15 <process ID> asks it to shut down gracefully; restart the process manually afterwards. Once the last open file descriptor to the unlinked file is closed, the kernel frees the blocks and df drops back down.
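If you are not sure which unit owns the PID reported by lsof, systemd can tell you. A sketch with a placeholder PID of 4321 and a hypothetical unit name, to be replaced with the values from your own system:

[root@server ~]# systemctl status 4321              # shows the unit (if any) that contains this PID
[root@server ~]# systemctl restart app_debug.service  # hypothetical unit name; use the one reported above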
Bonus:
The command truncate -s 0 /var/log/app_debug.log would have cleared the log and freed the space immediately, without breaking the running application: the inode stays in place, so the process keeps writing to the same (now empty) file.
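A plain shell redirection does the same thing, truncating the file in place while the writer keeps its file descriptor:

[root@server ~]# > /var/log/app_debug.log    # truncate to zero length without unlinking the file

One caveat worth noting: this reclaims space cleanly when the application opened the log in append mode; if it did not, the process may keep writing at its old offset and the file will appear large again as a sparse file.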