aluciade (Cadet)

GPU Quota Not Released After Pod Termination in OpenShift


System Configuration:

  • Cluster: single-node OpenShift cluster.
  • Hardware: Two NVIDIA A100 GPUs installed.
  • Software: NVIDIA GPU Operator successfully installed.

Issue Details:

  1. I deployed two pods, each of which was allocated one of the two available NVIDIA A100 GPUs.
  2. I then terminated one of these pods and confirmed that it terminated successfully.
  3. The problem: despite the pod being terminated, the cluster's GPU "used" quota did not decrease.
  4. Impact: because the "used" quota persists, no other pods requiring a GPU can be deployed; the system still reports both GPUs as allocated.
  5. I have already restarted the nvidia-device-plugin-daemonset, but this did not resolve the issue.

It appears there is a leak in the cluster's GPU quota management: GPU resources are not being released and accounted for after pod termination. Any advice on how to diagnose and resolve this GPU quota leak would be greatly appreciated. Thank you in advance!
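For reference, the allocation can be inspected roughly as follows; the node name is a placeholder, and the exact output depends on the cluster:

  # Node-level view: allocatable vs. currently requested nvidia.com/gpu
  # (<node-name> is a placeholder for the cluster's single node).
  oc describe node <node-name> | grep -A 10 "Allocated resources"

  # Pod-level view: list every pod together with its nvidia.com/gpu request,
  # to see which pods the scheduler still counts against the two GPUs.
  oc get pods --all-namespaces \
    -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,GPU:.spec.containers[*].resources.requests.nvidia\.com/gpu'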


3 Replies
Chetan_Tiwary_ (Community Manager)

@aluciade yes, I agree, it does look like a leak. What is the pod termination grace period set to? If you reboot the GPU node, does the GPU count go back to normal?

What about compatibility?

https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/troubleshooting-gpu-ocp.html  
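For reference, the grace period can be checked with something like this (pod name and namespace are placeholders; the default is 30 seconds if the pod spec does not set one):

  # Show the terminationGracePeriodSeconds configured on the pod spec.
  oc get pod <pod-name> -n <namespace> \
    -o jsonpath='{.spec.terminationGracePeriodSeconds}{"\n"}'

  # Confirm the pod is fully gone (not stuck in Terminating) before
  # judging whether the GPU count should have dropped.
  oc get pods -n <namespace>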

aluciade (Cadet) [Accepted Solution]

Hi, @Chetan_Tiwary_.

Thank you very much for your response. I just discovered that I was tracking the wrong problem and jumped to the wrong conclusion.

What actually happened is that the new deployment I'm trying to create is crashing (CrashLoopBackOff). However, even though it is crashing, one GPU is still allocated to it.

When I was trying to debug the crash, I received an error message stating that there were no GPUs available. This led me to the incorrect conclusion that the pod was crashing due to a lack of GPUs.

After I stopped the other deployment that was holding a GPU, one GPU became available again. When I tried to debug the crashed pod again, I noticed that OpenShift actually creates a second, temporary pod during debugging, which attempts to allocate an additional GPU. So the error message simply meant that there was no GPU available for this temporary pod, which is expected behavior.

My mistake was that I didn't know debugging creates a new pod.

Anyway, sorry for the confusion; I jumped to the wrong conclusion. Thank you once again for your attention.
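For anyone else who runs into this: the debug pod is a copy of the original pod, so it carries the same GPU request. A rough sketch of what I mean (deployment and pod names are placeholders):

  # "oc debug" starts a temporary copy of the pod; the copy keeps the
  # original resource requests, including nvidia.com/gpu, so it needs a
  # free GPU of its own before it can be scheduled.
  oc debug deployment/<deployment-name>

  # To look into a CrashLoopBackOff without spawning an extra pod, read
  # the pod events and the logs of the previously crashed container.
  oc describe pod <pod-name>
  oc logs <pod-name> --previous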

 

Chetan_Tiwary_ (Community Manager)

OK @aluciade, glad that it is resolved and clear for you!
