bsk
Cadet

OpenShift installation on bare metal (virtual) lab - UPI - PXE boot

Hi All,

Today I deployed a 5-node OCP cluster (practice lab). The installation reported success, but one of my worker nodes had its role assigned as master. I used the same configs I used yesterday, which didn't cause any problems.
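For context, this is roughly how the wrong role showed up (a sketch from my lab; node names like worker02 are from my setup):

  oc get nodes
  # the ROLES column listed worker02 as master instead of worker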

* Does anyone know from where / in which config file these roles get assigned by the openshift-installer?

Later I changed the label and taint to make it act as a worker role and schedulable -- fine as of now.
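Roughly the commands I used for that (a sketch; worker02 is the node name in my lab, adjust as needed):

  oc label node worker02 node-role.kubernetes.io/worker=""
  oc label node worker02 node-role.kubernetes.io/master-
  oc adm taint nodes worker02 node-role.kubernetes.io/master:NoSchedule-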

But I found that the authentication cluster operator is continuously restarting and a few pods below are failing (including kube-scheduler on worker02, an effect of the master role it seems; I later forcefully deleted that scheduler pod on worker02).

bsk_0-1736253222102.png

bsk_1-1736253632191.png
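For reference, this is roughly how I checked the operator and removed the stuck scheduler pod (a sketch; the pod name is a placeholder):

  oc get clusteroperator authentication
  oc -n openshift-kube-scheduler get pods -o wide
  oc -n openshift-kube-scheduler delete pod <scheduler-pod-on-worker02> --force --grace-period=0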

Upon checking the logs, I found that worker01 has NodeHasInsufficientMemory.
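The memory-pressure condition shows up in the node's conditions and events, e.g. (a sketch):

  oc describe node worker01 | grep -A 10 Conditions
  oc get events -A --field-selector involvedObject.name=worker01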

Then I deleted the remaining pods that were in Evicted and CrashLoopBackOff state in the openshift-operator-lifecycle-manager and openshift-marketplace namespaces, which immediately brought worker01 back to sufficient memory.
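Roughly what I ran to clear them (a sketch; Evicted pods report status.phase=Failed, the CrashLoopBackOff ones I deleted by name):

  oc get pods -n openshift-operator-lifecycle-manager --field-selector=status.phase=Failed
  oc delete pods -n openshift-operator-lifecycle-manager --field-selector=status.phase=Failed
  oc delete pods -n openshift-marketplace --field-selector=status.phase=Failed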

Eventually the authentication operator became stable, and the pods in openshift-operator-lifecycle-manager and openshift-marketplace were re-scheduled and are running well.

My questions to the folks are:

  • From where / in which config file do these roles get assigned by the openshift-installer?
  • How come the pods in openshift-operator-lifecycle-manager and openshift-marketplace affected the authentication operator, and how are they inter-related?

My understanding is that the entire failure occurred because of the wrongly assigned role on the worker, which put a burden on the other nodes since this lab has limited resources per node (and there are pods that need to run on all nodes, like a DaemonSet). Even though I later modified worker02 to be a worker, some pods were unable to recognize this and move to worker02 (the DS pods did move). Finally, deleting the pods fixed the issue.
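To double-check which pods actually landed on worker02 after the relabel, something like this helps (a sketch):

  oc get pods -A -o wide --field-selector spec.nodeName=worker02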
