Trevor
Starfighter
  • 1,152 Views

Max Number of Pods on a Node

Hello all,

With OpenShift Container Platform, disregarding performance, CPU, and memory resources, is there a maximum number of pods that a node (master or worker) can accommodate?

Thanks in advance.

 

Trevor "Red Hat Evangelist" Chandler
4 Replies
mighty_quinn
Mission Specialist
Mission Specialist
  • 1,143 Views

Per the OpenShift Container Platform documentation, the default maximum is 250 pods per node. That limit is configurable and also depends on the resources available on the node: it is enforced by the kubelet and can be adjusted through the kubelet's maxPods setting (the --max-pods flag), which OpenShift manages via a KubeletConfig custom resource. See, for example, https://docs.openshift.com/container-platform/4.12/nodes/nodes/nodes-nodes-managing-max-pods.html
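If it helps, here is a minimal sketch of the approach that doc page describes: raising maxPods through a KubeletConfig custom resource that targets a labeled machine config pool. The name set-max-pods, the custom-kubelet label, and the value 500 are only illustrative.

```yaml
# Sketch: raise the kubelet's pod limit for all nodes in a labeled
# machine config pool. Names and values are examples only.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # the target pool must carry this label
  kubeletConfig:
    maxPods: 500
```

The matching label has to be added to the machine config pool first (for example, oc label machineconfigpool worker custom-kubelet=set-max-pods), and the change rolls out through the Machine Config Operator, so expect the nodes in that pool to be drained as it applies.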

Trevor
Starfighter
  • 1,111 Views

Very nice, mighty_quinn. Many thanks for this response!!!

Trevor "Red Hat Evangelist" Chandler
Fran_Garcia
Starfighter
  • 1,099 Views

In addition to Quinn's answer: capacity planning, scale planning, and maximums are usually a complex conversation, because performance is a multidimensional problem:

- What happens if you have hundreds of mostly idle pods (e.g., "hello world"-type pods)?

- What happens if you have just a few pods per node, but they are sending lots of network traffic?

- What happens if you are using a third-party SDN? Will it scale in the same manner as the regular OpenShift SDN?

- What happens if you have just a few operators, but they are constantly hammering the OpenShift API with hundreds of requests per second? Will they be as demanding with fewer but beefier OCP worker nodes (see the podsPerCore sketch below)?

- What happens if you are also using OpenShift Service Mesh / Istio to gain further observability and control over your microservices' communications?

 

Performance modeling and documented limits provide a way to estimate what has and has not been tested, and where the bottlenecks might be - but the more specialized a cluster is, the harder it is to get an idea of its specific limits (besides, of course, by testing it).
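For instance, on the "fewer but beefier worker nodes" point: the doc page Quinn linked also covers podsPerCore, which scales the pod limit with the node's CPU count instead of using a flat number (when both podsPerCore and maxPods are set, the lower resulting limit wins). A minimal sketch, with illustrative names and values:

```yaml
# Sketch: scale the pod limit with node size. With podsPerCore: 10, a node
# with 4 CPUs gets a limit of 40 pods. Names and values are examples only.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-pods-per-core
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-pods-per-core   # illustrative label on the target pool
  kubeletConfig:
    podsPerCore: 10
```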

Trevor
Starfighter
  • 1,089 Views

This is wonderful food for thought, Fran!!! Your comments are certainly going to prompt additional questions down the road.

Many thanks for taking the time to provide such insightful commentary.

Trevor "Red Hat Evangelist" Chandler