Dear Team,
I have deployed a cluster with 2 NICs. They are bonded together to form bond0. I am using VLAN 150 for my baremetal network. Now I have 20 VMs across 8 VLANs that need to reach the physical network directly.
What is the best way to do this?
Hello All,
Any inputs?
What is the significance of VLAN 150?
Hello @Trevor,
VLAN 150 is used for the machine network. All hosts get their IP addresses from VLAN 150.
Also, the API and Ingress VIPs are part of VLAN 150.
Adding the network configuration for one of the hosts from install-config.yaml:
networkConfig:
  interfaces:
    - name: eno1
      type: ethernet
      state: up
      ipv4:
        enabled: false
      mtu: 9000
    - name: eno2
      type: ethernet
      state: up
      ipv4:
        enabled: false
      mtu: 9000
    - name: bond0
      description: Bond with ports eno1 and eno2
      type: bond
      state: up
      ipv4:
        enabled: false
      link-aggregation:
        mode: 802.3ad
        options:
          miimon: "100"
        port:
          - eno1
          - eno2
      mtu: 9000
    - name: bond0.150
      description: vlan150 using bond0
      type: vlan
      state: up
      ipv4:
        address:
          - ip: 172.21.20.162
            prefix-length: 24
        enabled: true
      vlan:
        base-iface: bond0
        id: 150
  dns-resolver:
    config:
      server:
        - 172.16.101.9
  routes:
    config:
      - destination: 0.0.0.0/0
        next-hop-address: 172.21.20.1
        next-hop-interface: bond0.150
Hello Team,
Any helpful inputs are highly appreciated.
Hello, how many NICs do your worker nodes have?
If you only have 2 NICs in the bond, my recommendation is to remove the bonding. OpenShift creates a bridge called br-ex on the interface you declare in the deployment ("machineNetwork") and uses it for pod network communication; for security, and to avoid inconsistencies in the OpenShift cluster, it is better not to touch that interface. If you split the bond, you can configure the freed interface with a NodeNetworkConfigurationPolicy adding VLAN 150, and then create a NetworkAttachmentDefinition for your projects.
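For example, something along these lines (just a rough sketch; the interface eno2, the bridge name br-prod, the namespace my-project and VLAN 200 are placeholders you would adapt): an NNCP that puts a Linux bridge on the freed interface, plus one bridge-type NAD per VLAN your VMs need.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-prod-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br-prod               # new Linux bridge dedicated to VM traffic
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eno2            # placeholder: the interface freed by splitting the bond
---
# One NetworkAttachmentDefinition per VLAN, for example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-200                    # placeholder name
  namespace: my-project             # placeholder namespace
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-prod
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan-200",
      "type": "cnv-bridge",
      "bridge": "br-prod",
      "vlan": 200,
      "macspoofchk": true
    }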
Regards!
Hello Dave,
Removing the bond will create a single point of failure for my cluster and production traffic. If one of the ToR switches fails, I will lose access to either my production workload or the cluster network.
I have tried reusing the br-ex bridge itself and it is working fine. I did the following (rough sketch below):
1. Created a new NNCP to add additional OVN bridge-mappings to the br-ex bridge.
2. Created a NAD for each production VLAN.
I used VLAN 150 (my baremetal VLAN) as native [untagged].
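Roughly like this (a simplified sketch; the mapping name vlan-200, the namespace my-project and the VLAN ID are examples, not my real values):
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-ex-vlan-mappings
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    ovn:
      bridge-mappings:
        - localnet: vlan-200        # logical network name, referenced by the NAD
          bridge: br-ex             # reuse the existing br-ex bridge
          state: present
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-200
  namespace: my-project
spec:
  # "name" must match the localnet value in the bridge mapping,
  # "netAttachDefName" must match <namespace>/<NAD name>.
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan-200",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "my-project/vlan-200",
      "vlanID": 200
    }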
Is this supported?