Hyperconverged infrastructures combine the storage and compute
components in the same hardware to get the most out of existing
hardware investments. These infrastructures use software-defined
storage systems that can scale in parallel to the infrastructure's
compute needs. Red Hat currently supports hyperconverged
infrastructures with Red Hat Virtualization and Red Hat Gluster
Storage, and with Red Hat OpenStack Platform and Red Hat Ceph Storage.
Let's open the floor to:
- Discuss best practices on architectures for hyperconverged infrastructures
- Share your experience deploying and operating these infrastructures
Some references to Red Hat hyperconverged infrastructures:
I think the first question one should answer before even thinking about this kind of solution is: why am I doing this?
If the answer is "to save money", you're on the wrong path.
You will not save time installing, you will not save time troubleshooting, and your time and frustration are far more costly than the savings from a few fewer boxes for setting up Ceph/Gluster.
These solutions have limits, and the worst part is discovering that what started as just a little environment, just a demo lab, will require more and more resources.
So before choosing a hyperconverged solution, ensure:
- that you have a clear idea of why you're doing this, and that your use case fits the problems hyperconverged solutions aim to solve
- that you are aware of the limitations you'll face
- that the environment you're going to set up will tend to stay static in its configuration and will not require constant scaling
- that you're OK with greater complexity in the environment
Once you've completed this self-assessment, you will be ready to approach it.
Just 2 cents on this side - I do a fair amount of hyperconverged setups, and vendor-based products like VCE/Nutanix/Huawei FusionCube seem to be pretty popular; each has its pros and cons depending on the purpose of the deployment. Things start to get expensive when you consider officially supported Ceph/Gluster on the storage side, and unlike the other vendors, who use their own storage and filesystems, this can be a pain point.
If the setup is a relatively small one or a dev environment, you can also consider other distributed filesystems such as LizardFS (which is based on MooseFS) or XtreemFS - it all depends on your appetite and how specific you are about your needs.