Hello everyone,
I would like to share an experience that left me not only disappointed but deeply surprised by how a major IT company like Red Hat handles a fairly common issue.
I am a candidate and customer, asking only for the fairness and transparency that this company so proudly promotes.
I have around 30 years of experience in IT, albeit on different platforms. For professional reasons, after months of preparation, I took the RHCSA exam with interest and passion for this platform.
Unfortunately, my experience was marked by unexpected negative situations, which I will try to describe.
- 1. First Exam Session (RHEL 9.0)
I was provided with an environment affected by a known bug, which was not fixed due to an internal decision. In an already stressful exam setting, I discovered that the only official password reset procedure did not work because of this. This issue only appeared once the exam had started.
Red Hat did not provide a definitive fix, only a candidate-side workaround, communicated through ambiguous instructions that were released only after the exam began, despite their importance, and were easy to overlook because of their placement under “Other Information.”
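For readers who are not familiar with it, the procedure I am referring to is the standard documented root password reset on RHEL 9. Roughly (a sketch for context only, not exam content):

```
# Interrupt boot at the GRUB menu, press 'e', append rd.break to the
# line starting with 'linux', then continue boot with Ctrl-x.
# In the resulting emergency shell:
mount -o remount,rw /sysroot   # remount the real root filesystem read-write
chroot /sysroot                # switch into the installed system
passwd root                    # set a new root password
touch /.autorelabel            # trigger an SELinux relabel on next boot
exit                           # leave the chroot
exit                           # resume the boot process
```

This is the procedure that, in my 9.0 session, did not work as documented.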
For this reason, a first retake was granted.
- 2. Retake of the First Session (RHEL 9.3)
Another technical problem arose, this time with the keyboard: character mapping issues. Red Hat acknowledged the problem but minimized it as a “distraction” that its staff deemed “not determinant.” In practice, however, the exam was marked by continuous interruptions for proctor checks until the keyboard was finally replaced. In their analysis, they refer to the “backslash” key instead of the “pipe” key (which matters far more in shell work), which says a lot about the level of attention applied in their review.
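To put that in perspective: the pipe key is used constantly in everyday shell work, while the backslash is mainly needed for escaping or line continuation. A trivial illustration:

```
# The pipe chains commands and is essential for filtering output:
ps aux | grep sshd

# The backslash mostly just continues a long command onto the next line:
nmcli connection show \
    --active
```

Losing reliable access to the pipe key during an exam is therefore far more disruptive than losing the backslash.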
In this case, a further retake was also granted.
These two situations already indicate a certain lack of responsibility and attention to the tools provided, despite the company’s prestige and the costs involved.
- 3. Red Hat Analysis – Container Task
The issue concerns two sessions: one on RHEL 9.0 and another on RHEL 9.3, that is, two minor releases of the same RHEL 9 major version.
Essentially, it is the same product, with fixes and improvements, but no substantial change or impact on task management. The task was identical, the objective identical, and the work completed up to the same point was correct in both sessions.
Nevertheless, the score was evaluated with completely different criteria (33% vs. 0%), justified by a “product change.”
This claim, coming from a team of experts, appears even more unfounded: the commands required (e.g., podman) and the steps to complete the task were not affected by the update. Podman version 4 was used in both cases, so the minor release update had no impact in this specific context.
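To make the point concrete: a container task of the kind commonly associated with this exam (a hypothetical illustration only; the actual exam content is confidential) relies on the same podman commands on both releases, and the installed podman version is easy to verify:

```
# Verify the podman major version (4.x on both RHEL 9.0 and 9.3):
podman --version

# A typical rootless container workflow, identical on both minor releases:
podman pull registry.access.redhat.com/ubi9/httpd-24
podman run -d --name web -p 8080:8080 registry.access.redhat.com/ubi9/httpd-24

# Generate a systemd unit so the container runs as a user service:
podman generate systemd --name web --files --new
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# Let the user's services run without an active login session:
loginctl enable-linger "$USER"
```

None of these commands changed between 9.0 and 9.3, which is exactly why the “product change” justification makes no technical sense to me.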
I'd love to hear your feedback on this point, as I can't find a concrete reason for this decision. It has no technically plausible basis. The fact that no one on the team has responded further confirms the lack of concrete arguments to support it, other than an arbitrary shift in judgment.
This is the real problem. The grading rules have been changed and overturned. Two different standards have been applied to the same work. This compromises the consistency and reliability of a certification system that should be impartial and meritocratic. And it compromises an entire exam. But no one seems to care. Correctness doesn’t matter; what matters is protecting the image of a system that cannot allow any admission of guilt.
- 4. Red Hat Analysis – Network Configuration Task
The task required the five classic parameters for configuring a basic network. I omitted only the netmask, a slip caused by distraction and aggravated by the continuous interruptions from the keyboard problems.
The other four parameters were correct, and the network was functional: this is proven by the fact that I completed other tasks that depended on an operational network.
I would like to point out that the task was designed to simulate a real-world context where all systems were on the same network, making the netmask effectively irrelevant in this specific case.
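For context, a basic static configuration of this kind is typically done with nmcli, along the lines of the following (hypothetical names and addresses, not the actual exam values), where the netmask is expressed as the /24 prefix on the address:

```
# Hypothetical values for illustration only.
# The five parameters: address, netmask (the /24 prefix), gateway, DNS, hostname.
nmcli connection modify enp1s0 ipv4.addresses 192.168.0.10/24
nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1
nmcli connection modify enp1s0 ipv4.dns 192.168.0.1
nmcli connection modify enp1s0 ipv4.method manual
nmcli connection up enp1s0
hostnamectl set-hostname host1.example.com
```

In my session, four of these five parameters were configured correctly, and connectivity demonstrably worked.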
Despite all of this, the score given was 0%, as if the task had been ignored or done completely incorrectly.
This is inconsistent with their own “opportunity for evaluation” rule, which in other circumstances allows proportional logic and rewards partial work.
I do not dispute losing points on this task; the score obviously could not be 100%. But the complete lack of consistency is unacceptable: 4 correct parameters out of 5 cannot count as 0% in a system that calls itself fair and meritocratic, and that in other evaluations invokes “opportunity for evaluation,” implying a proportional logic.
Especially if this configuration, fully functional in that context, enabled the completion of several other tasks.
There is a clear, unfair, intermittent, and inconsistent application of the rule in question.
- 5. Final Considerations
The Certification Program Director, informed of my concerns, requested a direct meeting with me, promising a clear and constructive discussion that would ensure a fair and transparent resolution. This finally gave me hope that my arguments and responses would be considered in a balanced manner.
Instead, the analysis I received completely ignored my arguments, absolving Red Hat of any responsibility and placing all blame solely on me, in a tone that was sometimes peremptory, sometimes superficial, depending on the context and the desired outcome.
Each point was answered with justifications that were questionable at best, sometimes specious, and above all always aimed at demonstrating that Red Hat was entirely in the right, with no room for the constructive discussion that had been promised.
After carefully reading this analysis, which clarified nothing and only confirmed and deepened my doubts, I naturally wanted to send my detailed response.
I have written to every public address and department I could find and used all the channels known to me, but for about two weeks now, absolute silence has reigned. No response from the team, the administrators, or the director, who initially seemed so available and willing to intervene. Perhaps the verdict has already been passed, and Red Hat's standards prohibit further replies.
- 6. Final Reflection
Despite approaching every commitment with seriousness and perseverance, I have found a company that, on more than one occasion, failed to provide even the minimum tools needed to complete an exam under normal conditions. A system that, when faced with clear and repeatedly demonstrated evaluation errors, proved hostile and self-justifying - promising fairness and transparency while avoiding any real accountability.
A certification system that hides behind "confidentiality" to avoid admitting mistakes, applying double standards and offering pretextual justifications even for the most obvious evidence.
How can you trust such a system? How can you invest time and effort, only to see everything undermined by inadequate tools and assessments that favor form over substance and fail to ensure fair handling of their own errors?
I still hope for clarification and accountability from Red Hat.
A simple additional retake does not solve the problem: the issue is the lack of fairness and consistency in a system that should be based precisely on these principles.