I have pre-existing virtual machines that I need to use to install OpenShift 4.1 and create a cluster. All the VMs can talk to each other and their IP addresses won't change. Can someone please provide me a detailed installation process for this setup? I saw the bare metal documentation but I am not sure if all the steps apply to my virtual machines, and some steps are not clear. In particular: how do I get CoreOS installed on my VMs? Where do I run the load balancer? Do I need 1 or 2 load balancers? Where is the infra node if 3 masters are needed? Where does the router run?
Also, is creating an HTTP server mandatory? Is there a chance I can use an existing one?
I also want to use Container Native Storage in 4.1. Is it possible? If yes, how do I install/configure it?
The bare metal instructions apply to any environment where the OpenShift installer is not able to provision VM instances, virtual networks, and so on. You could use them for cloud instances you manage manually, bare-metal hosts, and of course libvirt VMs. The product documentation provides detailed instructions, but this is an involved setup; be prepared to take some time and fail a few times until you get it all right.
You do need RHEL CoreOS on all your master VMs (and you need three of them); the product documentation tells you where to download the RHEL CoreOS images and how to create their bootstrap ignition files. Don't try to use the older CoreOS (pre-RHEL Container Linux): it supports many more installation options, but those will not work for OpenShift 4.x. I'd advise you to use RHEL CoreOS on your worker VMs too, so you don't need to learn and experiment with a second provisioning method for them (use the same process as for the masters).
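To give you an idea of what that provisioning looks like on plain VMs: you boot each VM from the RHCOS ISO and, at the boot prompt, append kernel arguments pointing at the raw image and the ignition file served from your HTTP server. This is only a sketch; the server address, file names, and target disk below are hypothetical, so take the exact argument names and image file names from the bare metal docs for your release:

```
coreos.inst=yes
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.1.10:8080/rhcos-metal-bios.raw.gz
coreos.inst.ignition_url=http://192.168.1.10:8080/master.ign
```

The bootstrap VM points at the bootstrap ignition file, masters at the master one, and workers at the worker one; all three are generated by openshift-install from your install-config.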
The load balancers run outside of the OpenShift cluster. They are usually provided as a cloud provider service, such as AWS ELB or OpenStack Neutron, or as a network appliance (such as from Cisco or F5/BIG-IP), but it could be anything, such as the load balancer from the RHEL High Availability Add-On. While it is technically possible to have a single load balancer performing both roles (master API and routes/ingress), it's recommended for performance and security reasons to have one for each role. Or more, for HA of the routers themselves.
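As one concrete option, here is a sketch of an HAProxy configuration covering the master API role (all IPs and node names are hypothetical). In 4.1 the masters are reached on 6443 (API) and 22623 (machine config server), and the routers on 80/443; the 22623 and ingress frontends/backends follow the same pattern as the one shown:

```
# /etc/haproxy/haproxy.cfg (fragment) -- hypothetical addresses, TCP passthrough
frontend api
    bind *:6443
    mode tcp
    default_backend masters-api

backend masters-api
    mode tcp
    balance roundrobin
    server bootstrap 192.168.1.20:6443 check  # remove once bootstrap completes
    server master-0  192.168.1.21:6443 check
    server master-1  192.168.1.22:6443 check
    server master-2  192.168.1.23:6443 check
```

Note that the bootstrap machine must be in the API backends only until the cluster bootstrap completes, then you take it out.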
OpenShift 4.x neither recommends nor requires dedicated infrastructure nodes, as OpenShift 3.x did. You cannot configure dedicated nodes during installation, but you can do this as a day-two operation if you want, creating a setup similar to OpenShift 3.x by configuring worker nodes dedicated to running things such as metrics and logging. You would follow the same process as for any kind of workload you want to segregate, by any criteria.
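As a rough sketch of that day-two process (the node name below is hypothetical): you label the workers you want to dedicate, then point workloads at them with a node selector. For example, moving the default router onto such nodes:

```shell
# Label a worker node as "infra" (node name is hypothetical)
oc label node worker-1 node-role.kubernetes.io/infra=""

# Tell the default ingress controller (router) to schedule onto nodes with that label
oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
  -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'
```

The same label-plus-selector idea applies to monitoring, logging, and any workload you want to segregate.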
Currently OpenShift Container Storage (OCS, the new name for Container Native Storage, CNS) supports only OpenShift 3.x. OCS 4.x is planned for some time after OpenShift 4.2 is released; it will be operator-based and use Ceph instead of Gluster as the storage back-end. Currently no form of Ceph or Gluster is supported for OpenShift 4.1.
Thank you for the detailed reply. The installation on bare metal involves a lot of manual steps.
Also, why do we need 2 load balancers and 3 masters? Can we do it with 1 load balancer and 1 master?
So on a high level, I see the below steps to get OpenShift 4.x running on my virtual machines:
1. Configure HTTP server
2. Configure 2 load balancers
3. DHCP (only needed if the IP addresses of my virtual machines change on reboot - which they don't... so I am assuming I don't need this)
4. Install OpenShift
Please correct me if I am wrong
I am also interested in exploring 4.2 since it is in developer preview now, and I need the RHCS for storage. But there doesn't seem to be any existing documentation other than what is in GitHub, and I don't know if that is up to date. It also doesn't seem to contain any info on how to configure storage during installation. Is there anywhere else I can look for instructions on how to install the 4.2 developer preview on bare metal with container native storage?
Also, for CoreOS on my VMs, do we need to boot it using an ISO, or will the OpenShift installer install the OS and boot it? In other words, what are the OS requirements for my masters and nodes before starting the OpenShift installation?
All requirements for your installation are in the product documentation. There is just no shortcut. I don't have the links right here, but there are also blog posts and videos on YouTube that provide an overview of the process.
Installing with a single master is possible but requires a lot of extra work. A single master will fail when it needs to renew internal certificates, after 24 hours, and you'll need to perform manual steps to recover from that failure.
OpenShift 4.2 is not yet released, its documentation will be released alongside the product. I think that OCS comes some time after the 4.2 release, but before 4.3.
It may be easier for you to check the "laptop" infrastructure provider, which gives you a single-master, single-node, all-in-one cluster: the CodeReady Containers package that replaces Minishift from the CDK.
This installation method is based on a pre-built VM image. The image was exported after the 24-hour certificate renewal was done.
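If you go the CRC route, getting a cluster is basically two commands once you have downloaded the crc binary and your pull secret from cloud.redhat.com (exact host-setup details vary by OS, so treat this as a sketch):

```shell
crc setup   # checks and prepares the host (virtualization, networking)
crc start   # boots the pre-built single-node cluster VM
```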
Also the bare metal installation docs mention:
Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
Is this a mandatory prerequisite? Can I install without preconfiguring storage in 4.1?
I asked a few people to see if I had the correct understanding about bare-metal installations and learned two things:
1. You can install without storage for logging and metrics. They will use ephemeral storage, and you are expected to add persistent storage as a day-two task. Installation will not fail, and no end-user application breaks because you lost logging and metrics data. Of course cluster admins, and developers troubleshooting some application issue, would not be happy. ;-)
2. Installation will proceed without persistent storage for the internal registry but will not complete, displaying the registry operator as pending until you provision storage. Before the installer times out, you have to either provide some storage or configure the registry operator to use ephemeral storage (and risk losing some container images developers build using S2I).
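For reference, configuring the registry operator to use ephemeral storage is a one-liner against the cluster; again, images pushed to the internal registry are lost when the registry pod restarts:

```shell
# Switch the image registry to emptyDir (ephemeral) storage
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
```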
You need to know how to configure a storage provider (or manually create PVs) using something available on your network. You could use NFS or iSCSI provided by a RHEL server for a POC, but these are not the recommended options for production scenarios. Unfortunately Gluster and Ceph are not supported for OpenShift 4.1, so you need to use some third-party storage product until OCS 4.x is released.
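For a POC with NFS, a manually created PV could look like the sketch below; the server address, export path, and size are hypothetical, and note the ReadWriteMany access mode the registry requires:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10
    path: /exports/registry
```

You would create it with `oc create -f` and then point the registry operator's storage configuration at a claim bound to it.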
May I ask about your goals in installing OpenShift 4.1 on bare metal? If you want to familiarize yourself with the product, you should prefer CRC or a cloud-based IPI installation. Installing on bare metal is a very advanced topic and requires previous experience managing operators, configuring storage, and other tasks. It is not the way to learn the product.
I know this may be hard to accept; it is not the usual expectation of a traditional sysadmin, who expects to install something in order to learn how to manage it. I had these issues myself.
Most real-world customers have a consulting engagement to perform a POC and later help installing the production environment, even when they install on a cloud provider. The documentation is there, but you need to use a lot of it. It is not a single step-by-step how-to because there are many possible variants. It is this complexity that prompted Red Hat to work with the community to develop operators, CoreOS, and other features in OpenShift 4.