Hi, folks,
I'm greatly puzzled by this. We have a MySQL 8.0 database running on an RHEL 7.9 node and it will. not. swap. I've checked vm.swappiness and it's set to 30. I've seen this system get up to 98% memory utilization, and it still would. not. swap. I added another 4GB of RAM to the machine to keep it from losing its mind, but I'm still curious what's going on. Here's how it looks today:
free
              total        used        free      shared  buff/cache   available
Mem:       16313100     1421004     4708072      574916    10184024    14184440
Swap:       8384508           0     8384508
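For the record, here's how I've been checking the swappiness setting (either command reports the same value):
sysctl vm.swappiness
vm.swappiness = 30
cat /proc/sys/vm/swappiness
30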
It's a production machine, so I can't experiment on it.
Thanks,
John A
This is a known gotcha in how 'free' reports memory. The only columns you need to worry about are "total" and "used": 16 GB of RAM total, only about 1.4 GB used. The rest is free (about 4.5 GB) or used by disk buffers/cache (around 10 GB). The kernel will never swap out space used by buffers/cache; it simply drops those pages when there's memory pressure (demand from actual applications).
There's a good write-up about this whole thing at www.linuxatemyram.com .
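If you're curious where free gets its "available" figure, it comes from the kernel's own MemAvailable estimate, which you can read directly; on RHEL 7 it should match the "available" column free prints:
grep MemAvailable /proc/meminfo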
Hi, Fran_Garcia,
Since we also have Oracle databases in the house, I knew about this. Here's how it looked just before I bumped up the RAM:
free -k
              total        used        free      shared  buff/cache   available
Mem:       12118796     1737640      222212      541660    10158944     9517468
Swap:       8384508        2304     8382204
bc
scale=4
222212 / 12118796
.0183
That works out to better than 98% apparent overall usage. It's very hard to monitor a machine effectively when the (alleged) usage is that high.
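For comparison, running the same arithmetic against the "available" column of that output tells a different story: roughly 78% of the RAM was still available, so the real usage was closer to 21% than 98%.
bc
scale=4
9517468 / 12118796
.7853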
Further, once it'd gotten over 90%, free started shrinking, and the more it shrank, the faster it (seemed to) shrink and the slower the application it backed (seemed to) run. If it weren't production, I would've let it grow and see what happened. But I could not, in good conscience, satisfy my curiosity.
Possibly I should've approached this as a monitoring issue, but I'm not sure about that. The memory usage has been quite stable since increasing the RAM. It wasn't stable before.
Thanks,
John A
Hi, Fran_Garcia,
You know, I understood this, but I didn't know it till I tried to get this alert to go away. I kept throwing memory at this node until I got it through my skull that I was not having a real issue.
Would you agree it's appropriate for me to set memory utilization warnings at 100%, which would indicate an actual issue of some sort?
Thanks again,
John A
For me, a reasonable monitoring threshold would be to check that used/total stays below 70% or 80%. Beyond that you'll probably start seeing performance issues, and once you get over 90 or 95% the OOM killer can start killing your processes.
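If you script that check, base it on the "available" column rather than "free". A rough sketch (field 7 is the "available" column in the output you posted, and the 80 is just the threshold above):
free | awk '/^Mem:/ { pct = 100 * ($2 - $7) / $2; printf "memory in use: %.0f%%\n", pct; exit (pct > 80) }'
It exits non-zero when usage crosses the threshold, so it's easy to hang an alert off it.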
HTH
Fran
Hi, Fran,
The issue with that is that I've bumped the memory up to 64GB on that node and I am still pegging memory. The system runs fine with 16GB, except when I try to do a mysqldump of the database. When I do try that, and watch the system carefully, it remains responsive and nothing dies. I'm already uncomfortable with the amount of memory allocated.
I suppose I could nudge the memory up a little more, but it just seems unreasonable.
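For what it's worth, "watching the system carefully" just means leaving something like this running in another terminal while the dump goes, so I can see whether "available" actually collapses and whether any real swap activity (the si/so columns) starts:
free -h -s 10
vmstat 5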
Thanks,
John A