Dear All,
I would like to bring to your attention a situation that is, quite frankly, keeping me up at night.
The session in question was already a retake, granted after a previous attempt was invalidated due to a technical bug in the exam lab environment.
Unfortunately, this second session was again affected by technical problems: some keys on the keyboard were unresponsive. Even after replacing the device, the issue persisted, severely affecting my concentration and leading to some careless mistakes.
Despite that, I was confident I had reached at least the minimum score: I attempted all the tasks, some completed perfectly, others partially, but none left untouched.
Instead, I received a final score of 195 points.
Reviewing the breakdown of scores, I found:
Manage basic networking: 0%
Understand and use essential tools: 44%
Operate running systems: 67%
Configure local storage: 100%
Create and configure file systems: 100%
Deploy, configure and maintain systems: 88%
Manage users and groups: 100%
Manage security: 100%
Manage containers: 0%
The network was fully functional. All of the tasks above that were evaluated successfully depended on it, which clearly demonstrates the network was up and running.
I was told one of the five parameters was not correctly set (possibly the netmask, though I was not given a chance to verify it).
Even if that’s accurate, one incorrect parameter out of five does not justify a 0% score; it would mean 80% of the task was done correctly.
A 0% score suggests either the task was not started or was completely incorrect, yet the functional outcome directly contradicts that.
This decision seems completely inconsistent and illogical. I’m not asking for anything I didn’t earn — I’m only asking that the facts and results be acknowledged and reflected in the score.
I’m currently requesting an official review. If the points for networking and containers had been awarded even minimally, based on the successful completion of relevant parts, my exam result would have been more than valid.
I was quickly offered another retake, but to me, this feels like a way to avoid addressing the underlying problem, a flawed assessment inconsistent with the evaluation logic applied to the rest of the exam.
This compromises the credibility of the entire scoring system because, in order to avoid being questioned, it sacrifices fairness, transparency, and accountability towards each individual candidate.
So I ask:
Can a fully functional network be scored 0% because of a single missing parameter?
And what about all the related work that depended on it? Where did that go?
Thank you for your attention.
@StefanoM for specific feedback on exam questions or grading or your exam experience the only way to provide feedback or get questions answered is through the official certification comment form,
https://rhtapps.redhat.com/comments
Since you have already raised it, please wait for their official response. You can reply to the mail with your followup and you will get a response soon.
Thanks for your understanding!
Hi @Chetan_Tiwary_ ,
In theory, there is support, of course. But after numerous email exchanges and a direct meeting, I found that, despite my arguments being well-founded and supported by evidence, that support turned out to be purely formal.
In my case, they completely disregarded my reasoning, relying on arguments that were, at times, unfounded or treated superficially — adjusting their explanations as needed to justify situations that clearly reflected inconsistent and unfair judgment.
They give the impression of constructive listening, but in reality, there is never a genuine, impartial evaluation of your points: they will always be right, no matter what.
Filing a case is pointless — it’s just another frustrating formality. At most, they grant a retake (which costs them nothing), which may highlight serious issues but never leads to a fair review or the reinstatement of exams that were unjustly cancelled, along with the arbitrary deduction of points.
Furthermore, once the "sentence" is issued (without appeal, obviously), they completely ignore you. Since July 15, I've been updating a case, commenting on their decisions, but for them it's resolved, based on their sole and exclusive conclusions. They'll close it any moment now, and you have to pay the price: in my case, an exam that was literally stolen from me on ambiguous and specious grounds. As a client, candidate, or company, you count for nothing, neither before nor after.
I am extremely disappointed. I never imagined I would have to go through everything I have endured over the past four months, only to be left with nothing in hand and consequences that will never be remedied.
Hi @StefanoM ,
I hear your frustration and I understand this has been a long and exhausting process. You have clearly put time and energy into both the exam and the follow-up.
The review process exists to make sure every exam is judged by the same standard. It is meant to protect fairness, even if the outcome is not what a candidate hopes for. I know that when someone feels their points are valid yet not accepted, it can feel like the process is closed.
From what you describe, you’ve already taken the formal steps and shared your reasoning. While I cannot alter the decision myself, I can help by ensuring your feedback is visible to the right teams. This includes your comments on how the review communication was handled and the effect this has had on your trust in the system.
Do you still have a retake left, or is the review team offering you another one? If yes, taking it may feel like a poor answer right now, but it is a direct way to earn the result you were aiming for without further delay. If you choose that path, I suggest revisiting the course and labs to make sure you get everything correct this time. Trust me, many learners have scored full marks on this exam, including me (I cleared it before I was a Red Hatter), and I can vouch for its accuracy and consistency; it has been taken by numerous learners around the world and is a very popular exam in this domain.
I will make sure your experience is shared internally so it is not ignored. While I can’t promise a different decision, I can make sure your case is not forgotten.
Thanks for your understanding and patience!
Thank you @Chetan_Tiwary_ for your kind response.
No, not even the Director tried to change anything; he himself requested a meeting with me, but then fully accepted an analysis that completely disregarded all my reasons and the damages I suffered, placing all the responsibility entirely on me.
Actually, I updated the case, but as I’ve said for weeks, I no longer receive any replies. I don’t believe they ever will now—the "verdict" has already been issued...
As mentioned, I even had a call with the Director, requested personally by him, who told me he found my observations more than reasonable. However, in practice, he simply forwarded me the analysis.
I’m still waiting for his personal evaluation, which I believe is the very least expected after our meeting and the long exchange of emails.
I don’t believe the reasons lie entirely on one side (it’s never like that in life), and this whole situation, including his intervention, was meant to ensure a fair and consistent judgment. I assure you, there are very valid reasons — it’s not just frustration because I didn’t get the expected outcome. I’m a mature, serious person with extensive experience; I don’t need this.
But I endured four months with three attempts, technical issues in two, and retakes granted (so the problems exist and repeat).
Finally, I also have to deal with a point deduction that was never adequately justified during the analysis and for which I have seen no willingness to consider my many well-supported observations.
Specifically, I took an exam on version 9.0 and one on 9.3: they claimed the difference in scores for the same task across these two exams was because they are TWO DIFFERENT PRODUCTS!!! In 30 years in IT, I’ve never heard a more far-fetched excuse! But 9.3 is a minor release of the same version — it’s the same product!
The task involved the same conceptual steps and commands, which I performed identically, stopping at the same point. In one case, I received 33%, in the other, 0%!
This is unacceptable, because in this particular task nothing changed between the two versions regarding the use of Podman. So either it's always 33% or always 0%; one or the other. If the earlier grading was correct, the later one is wrong (or vice versa). In effect, they changed the grading rules midway, so the same work was evaluated very differently. What is this, a game show, or a professional system designed to fairly and consistently assess knowledge and skills?
That’s just one example, but there’s more. For instance, a network task that I completed about 80% of was still scored 0%. This is the only case where they did not apply proportionality (what they call “scoring opportunities”) to credit completed intermediate steps; only here did they use an “all or nothing” approach.
Again, either you apply this consistently or you’re neither fair nor consistent.
I understand the scoring is a bit more complicated, but I have already discussed and responded to their points, and 0% cannot be a fair evaluation because I completed half the exam positively regarding the network.
I won’t go into further technical details here, which I’ve already addressed with concrete examples; it’s not worth elaborating...
In my opinion, given the many cases like this and the widespread dissatisfaction over the lack of transparency, this system is perceived as hostile, closed off, and impenetrable, and it leaves everyone full of doubts. Perhaps something more is needed to at least give an impression of fairness and equity, which is currently nowhere near the case. Maybe it works for Red Hat’s business, but it shows no respect for the candidate.
The candidate, the client, and the companies paying significant sums for courses and certifications must be taken into account, and the right to transparency must somehow be guaranteed — while maintaining confidentiality. It cannot be ZERO, nor just a facade of cooperation if in the end you always do as you please. We cannot be treated like fools!
In any case, the team already knows everything because I have written multiple times in the case comments, without any further response.
In fact, I am sure that at least some of my reasons cannot be ignored and that they can’t keep dodging responsibility just by offering a retake after deducting many points and canceling an exam that I had almost 100% completed.
This is unacceptable — not after everything I’ve had to endure over these four months, only to end up with NOTHING!
Not with analyses that are anything but consistent. It’s simply an unacceptable mockery.
Sorry for venting, but I’m truly furious and disappointed by all the problems and how they’ve been handled, from support to team management. It’s true, I made mistakes (otherwise I’d have scored 300/300…), but due to their faults, I took an exam under very poor concentration conditions, so they must also take some responsibility for my oversights, even on simple things. I cannot take all the blame alone, especially when the analysis is tailored to make them absolutely right about everything.
I’m only asking for fairness and equity in evaluations and mutual responsibilities.
But EVERYONE turns their back, pretending to cooperate while deciding over my head.
So one thing to keep in mind: some pieces of an objective may not have a score or "points" assigned at all. You are assuming things are weighted equally, like a multiple-choice test in school, where 20 questions are worth 5 points each to reach 100%. That isn't really how these exams work.
Another point: while a network "can" work in getting traffic from point A to point B, routing and netmask do matter, because a /24 (255.255.255.0) network is much different from a /22 network in terms of which addresses are in range. So while this may not seem like a big deal to you, and you think you should get partial credit, in a real-world scenario this is wrong and could cause multiple different failures in a production network, because gateway addresses, subnets, and routing could all be misconfigured as a result.
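The /24 vs /22 difference is easy to see with Python's standard `ipaddress` module (an illustrative sketch; the specific addresses are my own examples, not exam content):

```python
# Compare the address ranges implied by a /24 vs a /22 netmask.
import ipaddress

n24 = ipaddress.ip_network("192.168.15.0/24")  # netmask 255.255.255.0
n22 = ipaddress.ip_network("192.168.12.0/22")  # netmask 255.255.252.0

print(n24.netmask, n24.num_addresses)  # 255.255.255.0 256
print(n22.netmask, n22.num_addresses)  # 255.255.252.0 1024

# A host at 192.168.14.10 is inside the /22 but outside the /24, so a
# machine configured with the wrong mask will try to reach it through a
# gateway (or fail) instead of talking to it directly.
host = ipaddress.ip_address("192.168.14.10")
print(host in n24)  # False
print(host in n22)  # True
```

So two configurations that look almost identical define very different sets of directly reachable addresses, which is why a single wrong mask can break gateway selection and routing.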
As @Chetan_Tiwary_ mentioned, you can always contact the exam team for a re-review of the results. Everything you do is captured and can be analyzed. I will tell you we do not give detailed results on purpose and we also don't discuss questions and answers for content on the exam as that is a violation of the NDA.
Another example I've used with some people who have complained about the results and partial credit is I make up a pretend scenario ...
Pretend you are a new employee getting paid by a brand new tech company. You completed the hiring form and HR has sent everything to a system administrator to set up your account in the system. They create your account with a username and password, put in your banking information, and everything else. However, there is an issue with the routing or account number: the admin knew how to add you to the system, but the information was slightly incorrect. This is an all-or-nothing situation: because the information was wrong, you aren't getting paid.
So while I can't explain anything about weights or why you received the percentages you did, think of the above scenario: the admin got your name right, the username and password right, and your address right, but the bank routing number was off by a digit and the account number was off by a digit. You didn't get paid. And yet, counting the pieces: the routing number is 8 digits and they missed one, and the account number is 10 digits and they missed one.
So they were 87.5% correct on the routing number; 100% correct on the name, username/password, and address; and 90% correct on the account number. But the true objective was to get you paid, and because of what was in error, you didn't get paid. The objective was not met, so you get 0%.
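That all-or-nothing logic can be sketched in a few lines of Python (a purely illustrative toy, not how the real grader works): per-field accuracy is high, but the binary objective still fails.

```python
# Hypothetical illustration: field-level accuracy vs. an all-or-nothing objective.
fields = {
    "name": True,
    "username_password": True,
    "address": True,
    "routing_number": False,  # 1 digit wrong out of 8
    "account_number": False,  # 1 digit wrong out of 10
}

# Accuracy measured per digit looks close to perfect...
digit_accuracy = {"routing_number": 7 / 8, "account_number": 9 / 10}

# ...but the objective "employee gets paid" requires the banking
# details to be exact, so the objective scores 0.
paid = fields["routing_number"] and fields["account_number"]
score = 100 if paid else 0
print(score)  # 0, even though most of the work was nearly correct
```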
As @Chetan_Tiwary_ mentioned, the official review must be requested at https://rhtapps.redhat.com/comments. The results will be reviewed and can be corrected (if and only if something was incorrect); otherwise the scores will stand, and you likely will not receive any additional feedback as to why or how you received the score that you did.
Thanks for your reply: in just a few lines you've given me all the information that the various support teams haven't dared to provide in several weeks...
Unfortunately, for me, this exam is becoming a real undertaking. It began in April with two systems affected by a bug (known and left unfixed), continued with a retake also marred by technical malfunctions (which caused me constant interruptions and poor concentration), and now this misunderstanding. Allow me this brief digression, but all of this, in addition to losses of time and work that no one will ever repay, conveys negligence and a lack of attention to the tools, and a real lack of clarity and transparency in everything else. Why not provide this information clearly on the portals? It would avoid countless discussions and a lot of unnecessary work for everyone. In the end, though, there is no understanding when faced with 0 or 1, black or white, while the real world we're referring to is necessarily made up of shades of gray, for a thousand reasons.
That said, since I don't know the grading system, which is probably much more complex than I imagine, I simplified it, because that's the only thing I can do. But if I see percentages assigned consistently, even on tasks where I know exactly what I did wrong, forgot, or left incomplete, the message is that the good work done is still being credited. After all, the exam is passed with 210/300, which means 70% of the activities completed, some at 100% and others at partial percentages. From my direct experience of three sessions, 0% is a score for a task that wasn't even begun, and it is completely inconsistent with the objective reality.
Your example is instructive and indicative, but in my opinion my situation isn't quite the same: the employee in your example was, in effect, "paid." Perhaps he was lucky, because even by taking the wrong route (the netmask), the payment reached the intended recipient. Maybe the "transfer" notification didn't arrive, but the payment was made; the data traveled, even in multiple ways (the various activities performed), and the delivery was completed.
Therefore, I believe the 0% grade is unjustified, pretextual, and biased. An assessment tool must always be consistent, for better or worse, across every topic on the exam.
I believe this is the foundation of the credibility and accuracy of any assessment tool. And in my case, that wasn't the case at all.
If the approach and the evaluation criteria were truly consistent and coherent, then at least part of the assessment should be recognized—the part that allowed me to complete various other tasks, just as it was for the rest of the score. And in my case, even a small percentage nullifies all the work that demonstrates having learned and applied all the different topics on the exam.
But I know they'll never change their decision, and it doesn't take much to understand their reasons... Unfortunately, this means that every time (if I ever want to continue this journey) I will sit exams without the necessary peace of mind, because even with commitment and good preparation, everything hinges on a final verdict that applies double standards and never fully clarifies the reasons behind certain choices.
Thanks again for your time.
PS: "Only dead people and fools never change their minds," said James Russell Lowell.
I see your frustration and totally understand. I will add just a few more pieces to help set your mind at ease. The grading and evaluation are consistent across the exam, but the weight of questions may not be, and doesn't need to be, because some things are worth more points or considered higher value than others. As for what the values and weights are, that isn't known to anyone outside the certification team.
In terms of your networking example, though, I wanted to offer one more piece of information on why it might not have worked. Yes, your test appeared to succeed, but you tested from a machine on the same network, so traffic could technically get through (maybe you used ping). However, the netmask tells devices whether they are on the same network and can communicate directly with each other; if they need to reach a system outside their network, the communication must be routed through a gateway. Subnetting allows dividing a large network into smaller, manageable networks.
At home I was using a 192.168.15.0/24 (255.255.255.0) network. This was fine for the longest time, until more devices and IoT things arrived and I began running out of IP addresses. I've now switched to a /22 network (192.168.12.0/22, which contains the old range), giving me a much larger pool of addresses. The netmask determines which network a system believes it is on, even when its IP address stays the same. I had to make adjustments on older systems to ensure they got the correct netmask to communicate properly. So while the older systems appeared fine talking to each other, they didn't work properly (because of the netmask) in the larger environment.
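That mismatch can be sketched with Python's standard `ipaddress` module (the addresses here are made up for illustration): a host still configured with the old /24 mask concludes that a peer in the enlarged network is off-subnet and tries to route through a gateway, while a host with the /22 mask talks to it directly.

```python
import ipaddress

# A peer that only exists in the enlarged (/22) address range.
peer = ipaddress.ip_address("192.168.13.25")

# The same host IP, seen through the old mask vs. the new mask.
old_view = ipaddress.ip_interface("192.168.15.40/24").network
new_view = ipaddress.ip_interface("192.168.15.40/22").network

print(peer in old_view)  # False: the /24 host thinks the peer is off-subnet
print(peer in new_view)  # True: the /22 host reaches it directly
```

Same wire, same IP addresses; only the mask changed, and that alone decides whether the two hosts believe they can talk directly.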
Lastly, I would like to point out: take care to really read the questions, both for what is being asked and for where you are performing the work. I once failed an exam because I did the work on the wrong systems (by accident). I had checked all my work and thought I had completed it 100% correctly, but even though I did the tasks right, the work was performed on the wrong system, located in the wrong directory, or done as the wrong user.
As an exam taker myself, I definitely understand the frustration, and the OCD in me is also upset when I fail an exam attempt. But it is always a learning experience: when I've retaken an exam with a fresh set of eyes, sometimes I see the gotchas; sometimes I don't feel I did anything different but end up passing, and then I try to reflect on what the questions said and what might have happened.
As an examiner, I've delivered and graded several exams in person (in the room with the person taking the exam) where the exam taker asks questions for clarification and we can provide almost no guidance (since we can't help). But watching what they are doing, you can see the mistakes happening in real time: something gets interpreted wrong, or the user is accidentally in the wrong terminal or SSH session (so yes, the work is being done correctly, but it can't be graded). So again, there are very fine lines around what can and can't be communicated. One of the hallmarks of all Red Hat certifications is the integrity of each exam and a consistent grading experience for all learners (consistent even when, as in your case, the experience is bad because points and credit aren't given as expected). The evaluation is yes/no: you either get points for a portion of an exam objective or you don't.
As for the suggestion on explanations and being more transparent, this is something that has been discussed before and I do think a nice blog article or write-up would be good explaining our exams and how grading, scoring, and exam integrity work. I've copied in @Lene on here as I think she would be the best person from the certification team to see that effort through.
If you weren't aware already, I would also highly encourage you to watch some of the YouTube videos that Ben created on the Red Hat exam experience. When I was an instructor, I would often show students those videos so they would know the look and feel of the exam, helping them understand what the experience would be like before sitting their first exam. As you've already taken an exam, those might not be as valuable to you now, but you never know.
Thank you again for your comments.
It's true! The weight of exam questions is not—and should not be—the same across the board, because some tasks are objectively worth more points or are considered more valuable than others. You absolutely nailed it, and I appreciate it. There's always a weighting factor. It's quite obvious that creating a user is different from configuring a network or launching a container with persistent storage, and those tasks will naturally carry more points even at the same percentage level.
But if you look at the results, only tasks that were completely skipped or entirely incorrect get a score of 0%. My networking task fits neither of those cases. So why 0%? What kind of proportionality and fairness is that?
Yes, I’m very familiar with what you’re describing: you’re referring to CIDR (Classless Inter-Domain Routing), the subnetting scheme that replaced the old class-based system (Class A, B, C) and allows for more flexible prefix lengths and host management.
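For what it's worth, the classful-vs-CIDR difference is easy to demonstrate with Python's standard `ipaddress` module (the addresses are illustrative examples):

```python
# Classful addressing fixed the prefix by address class (a Class C block
# was always /24); CIDR lets the prefix be any length, e.g. /22.
import ipaddress

classful_c = ipaddress.ip_network("192.168.15.0/24")  # old Class C boundary
cidr = ipaddress.ip_network("192.168.12.0/22")        # classless supernet

print(classful_c.num_addresses)  # 256
print(cidr.num_addresses)        # 1024: four former Class C blocks merged

# The old /24 is simply one of the four /24 subnets inside the /22.
print(list(cidr.subnets(new_prefix=24))[3])  # 192.168.15.0/24
```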
If I forgot to include the netmask, it was a slip on my part. Not to make excuses, but these are relatively simple and almost automatic tasks, and the added stress, interruptions, and distractions I had to manage during the exam likely affected my overall performance.
That said, for the reasons we’ve already discussed regarding task weight, this is clearly not an absolute failure—especially when compared to how partial credit is granted in other tasks. That’s the core of the issue: the lack of consistency and fairness in judgment.
Without uniform criteria, the credibility and validity of the entire evaluation system are compromised.
Your comment about writing an article or blog post makes a lot of sense, especially in addition to what was said earlier. If candidates are clearly informed that some tasks might be scored as 0% or 100%, regardless of intermediate steps, it would prevent false expectations.
That would be a truly clear and transparent way to inform candidates, and it would likely avoid thousands of tickets on the issue. The fact that this remains one of the most common reasons for disputes clearly demonstrates the lack of transparency, and the superficiality with which it is assumed, from a purely internal perspective, that candidates are fully aware of what they're facing, not just technically but also procedurally.
Why isn’t it already like this? These are essential pieces of information, after all.
Put yourself in the shoes of someone taking the exam for the first time: they study, prepare, and practice by following Red Hat's official recommendations, taking the RH124 and RH134 courses and using official Red Hat materials. They certainly can't spend months digging through YouTube, watching hundreds of videos, especially since so much of what is out there is low quality and leads only to confusion and disorientation. Then, when things go wrong, the blame is often placed on the use of unofficial materials, saying: see what happens when you don't stick to the official content if you want proper preparation?
This leads people to believe that if they strictly follow the official path, they will be ready for the exam. Instead, they find themselves facing something more like a game show, where form matters more than substance, despite the supposed focus being on education and learning. It feels like sitting an exam on the exam itself, but without the tools or experience to do so. Do I really have to take the exam three times just to understand the trick? Because that's exactly what it feels like, a trick, if you deliberately keep it hidden. I don't know whether this is intentional, but the result is the same: a massive waste of time, energy, and money for the candidate and their company.
Let me digress for a moment. For Red Hat, it's easy to wipe everything clean in case of errors (and I have personally experienced too many, in two out of three attempts) and offer a second chance. For us, it's a serious problem of time, effort, and above all, missed professional opportunities. The damage costs Red Hat nothing, but no one will ever compensate the unlucky candidate for that loss. End of digression.
Back to the exam. You yourself have had these negative experiences. But I'd also add this: the exam questions are often unclear, and you have to believe me, it becomes more frightening not to understand the question than to fail at the task itself. Or worse: to take what seems like the correct technical approach, only to find out from the final score that it was considered wrong. That's precisely what the proctor should be there for. Even in university exams, it's allowed to ask, "Excuse me, I didn't understand the question," just to be sure before beginning. That's where exam integrity starts, with the possibility of clarification, so you don't end up taking a technically correct path that's marked wrong due to a formal misunderstanding.
To conclude, at the very least, this situation should be considered a shared responsibility. It's not fair to assume that everything imposed from one side is automatically right, especially when it's done with poor transparency and a clear lack of fairness in judgment.
I continue to believe I've been seriously harmed, and I'm the only one who bears the consequences directly. This is not just frustration; it truly feels like being the victim of an unfair imposition. I do not believe I can accept the easy fallback of a retake (as I've already said, it's far too convenient and costless for Red Hat), because doing so would mean accepting that the fault lies entirely with the candidate, when in reality there's a significant lack of transparency and fairness from those who hold the power and seem to take advantage of it.
I remain convinced that I've suffered a serious loss due to the unfair deduction of at least part of my score, which, by a very small margin, has nullified an enormous amount of work and dedication. I will do everything in my power, including requesting an external audit, to obtain a fair and accurate assessment of what really happened.
This experience with Red Hat certifications has been a disappointing and unexpected one.
Please forgive my frankness, and once again, thank you for your kind and helpful efforts.
I'd like to add something I hadn't yet highlighted, and which, in my opinion, is far from insignificant.
In a previous exam session, despite not having completed the container task, I was given a partial score of 33%. In the most recent session, exactly the same thing happened: same container task, same incomplete result, but a markedly different evaluation, with a score of 0%. The reason given for this second assessment? I had skipped an important configuration step, so the task did not complete, regardless of the steps actually performed (accessing the registry and downloading the image).
It's already clear that scoring is handled differently than one might imagine, but here we have the exact same situation in two different exams, scored in two clearly different ways. That can only raise serious and legitimate doubts about the correctness of the decision, which demonstrates an inconsistent and unfair application of the same rules to the same situation.
Given these precedents, it's fair to cast serious doubt on the entire evaluation system, particularly the score assigned to my network task. In that case, the approach was to completely discard all the work performed (4 of the 5 parameters) and assign a score of 0%, despite a perfectly functioning setup that was a prerequisite for several other tasks, all of which were evaluated positively.
Remember that the parameter in question is almost certainly the netmask (since no objective feedback is provided, I cannot verify it). I'd also point out that even in a real network, the netmask would be irrelevant if all the servers sit on the same subnet. In a certain sense, it wouldn't even be wrong to adapt to the environment provided for the exam and avoid setting parameters that have no impact on that specific situation.
Let me clarify: if the evaluation logic holds that in a real environment, with multiple networks, the netmask is a crucial parameter (as it obviously is), then the task should simulate that situation, for example by placing the servers outside the two exam nodes on different networks. Otherwise, the explanation rests on assumptions that, while correct in general, don't match the provided lab environment, where every object sits on the same network. And, ironically, the one parameter that isn't strictly necessary in that situation is treated as crucial, while everything else is completely ignored, resulting in the loss of every percentage point. No, something's wrong with this approach...
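The point about when the netmask actually matters can be sketched in a few lines. This is purely an illustration with made-up addresses (not the exam's actual values): the prefix length only changes behavior when it decides whether a peer is on-link or must go through a router.

```python
# Illustrative sketch: the netmask/prefix determines whether a peer
# address is considered part of the local network (on-link) or not.
import ipaddress

peer = ipaddress.ip_address("10.0.1.5")

# Host configured with a /24: the peer falls outside the local
# network, so reaching it would require a router.
print(peer in ipaddress.ip_network("10.0.0.0/24"))  # False

# Host configured with a /16: the very same peer is now inside
# the local network and directly reachable.
print(peer in ipaddress.ip_network("10.0.0.0/16"))  # True
```

When every machine in a lab sits on one subnet, both configurations behave identically in practice, which is the inconsistency being argued: the grading treats as decisive a parameter the lab environment never actually exercises.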
Therefore, when faced with similar or even overlapping cases, different and opposing criteria have been applied. It's not a question of knowing the evaluation system in its most minute detail; I respect the confidentiality of that information. But when the final results give the distinct impression of following an arbitrary or variable logic, doubts inevitably arise. And those doubts become even more frustrating when every request for clarification runs into a wall of confidentiality that blocks the real, constructive discussion one should be entitled to.
And this is precisely why, in all honesty and without any presumption, I have been requesting a score review for weeks now. It's a fact that on at least two exam points the grading appears ambiguous when compared to other similar situations, and this significantly impacted the final result. Without this arbitrary and unjustified deduction, considering the examples provided and the work done, I would have passed the test without difficulty.
Why then should I repeat everything from the beginning, assuming full responsibility and accepting all this without even a logical justification? I certainly have my share of blame (otherwise I would have received the highest grade), which, in all honesty, is also due to the negative circumstances beyond my control surrounding this exam; but all I ask is that the work done be given due and proportionate weight, using the same criteria adopted for all other assignments and exams. Consistency, fairness, and transparency, to put it simply.
In the meantime, I have the impression that every possible excuse is being sought to justify this behavior, confident that under the pretext of confidentiality, a final, unquestionable decision can be reached. However, this decision would be seriously tainted by legitimate doubts that cannot be answered in a verifiable manner.
Red Hat
Learning Community
A collaborative learning environment, enabling open source skill development.