Hi, I'm a newbie. My organization recently implemented OpenShift 3.11 Enterprise with the native EFK stack. We are looking to migrate to an external ELK stack. Is this straightforward? Where can I find a comprehensive guide for performing it? Should I engage Red Hat services for this migration, or is it possible to perform it with minimal supervision (bearing in mind that the organization is still taking baby steps in the container world)?
@cnwdoug Welcome to the Red Hat Learning Community! It is exciting that your organization is making strides to bring containers into your infrastructure. There is a great OpenShift community packed with documentation; here is a link to documentation related to OpenShift 3.11. As for getting additional support with the goal of shifting from native EFK to external ELK, I can point you toward training courses that will equip your team with general container/OpenShift skills, or toward consulting services for this specific use case. We may also have community members (hint, hint) who can help address specific questions you may have.
The Red Hat OpenShift Administration II (DO380) course, a follow-up to the Red Hat Certified Specialist in OpenShift Administration exam (EX280), offers an introduction to the EFK stack embedded in OpenShift 3.11 as the aggregated logging subsystem. That stack includes a number of configurations and custom container images designed to aggregate logs from the OpenShift cluster itself, and Red Hat does not yet support it as a general-purpose log aggregation platform. No Red Hat training course covers the scenario of sending OpenShift logs to an external log aggregator.
That said, Red Hat Consulting has extensive experience with customers who already had a centralized logging solution and wished to integrate OpenShift with it. They can certainly help you evaluate the pros and cons of using your ELK instance, as opposed to the internal OpenShift EFK instance, and also help you with the integration.
@cnwdoug - I would definitely recommend buying consulting services to implement this. It should be pretty straightforward: set up an external ELK cluster per Elastic's recommendations and simply point your apps to the external URL where Elasticsearch is running. All pods/containers can reach external IPs from within the cluster with no extra configuration.
While you are at it, look at wrapping external services with OpenShift services - https://docs.openshift.com/container-platform/3.11/dev_guide/integrating_external_services.html
If you do this, the client containers need not even know that Elastic is running outside the cluster!
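To make that concrete, here is a minimal sketch of that wrapping for an external Elasticsearch endpoint, assuming it is reachable at a fixed IP on port 9200; the service name and address below are placeholders, not values from this thread:

```yaml
# Sketch: a selector-less Service plus a manually managed Endpoints
# object, giving in-cluster clients a stable internal name for the
# external Elasticsearch.
apiVersion: v1
kind: Service
metadata:
  name: external-elasticsearch   # placeholder name
spec:
  ports:
  - port: 9200
    targetPort: 9200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-elasticsearch   # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10               # placeholder: your ELK cluster address
  ports:
  - port: 9200
```

With those objects created, pods in the project can reach the external cluster at external-elasticsearch:9200 like any other service. If your ELK endpoint is a DNS name rather than a fixed IP, an ExternalName service is the simpler alternative.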
This is not comprehensive, but it's a start:
This is the solution we'd implement from a Red Hat Consulting perspective.
It can be a bit tricky to get all the settings right, but the secure forwarder in Fluentd is the answer.
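For reference, OpenShift 3.11's aggregated logging configures that forwarder through the secure-forward.conf key of the logging-fluentd ConfigMap. Here is a rough sketch of what that fragment might look like; the shared key, certificate path, hostname, and port are all placeholders, so check the official aggregate logging documentation for the exact settings your setup needs:

```yaml
# Sketch: the relevant fragment of the logging-fluentd ConfigMap
# (editable with: oc edit configmap/logging-fluentd -n openshift-logging).
apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-fluentd
  namespace: openshift-logging
data:
  secure-forward.conf: |
    # Send a copy of the aggregated logs to an external receiver over TLS.
    <store>
      @type secure_forward
      self_hostname ${hostname}
      # Placeholder shared secret; must match the receiving side.
      shared_key example_shared_key
      secure yes
      # Placeholder CA certificate used to verify the remote endpoint.
      ca_cert_path /etc/fluent/keys/your_ca_cert
      <server>
        # Placeholder host/port of the external aggregator.
        host elk.example.com
        port 24284
      </server>
    </store>
```

Note that secure_forward is a Fluentd-to-Fluentd protocol, so the receiving end is typically another Fluentd instance sitting in front of your ELK cluster rather than Logstash or Elasticsearch directly.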
I'll also recommend RH Consulting if you're looking for experts to guide your organization through the process.
In addition to that recommendation and the official documentation already shared, take a look at these links for more info:
http://v1.uncontained.io/playbooks/operationalizing/secure-forward-splunk.html
http://v1.uncontained.io/playbooks/operationalizing/secure-forward-splunk-container.html
The articles describe using a Splunk forwarder to send logs from the internal EFK stack to an external log aggregator. They're a little dated and refer specifically to Splunk as the destination for the logs, but the concepts are similar enough.