I scored much lower than I expected on my first EX294 exam attempt, and as I prepare for a retake I'm starting to wonder whether the grading playbook had something to do with it.
To explain, while giving away as little info about the exam content as possible, let me mention a few potentially conflicting situations. For instance, I added the vault_password_file setting to ansible.cfg, as I would normally do at work. But if there is another vault that requires a different password, would that confuse the grading system, which I assume is also Ansible and could parse my ansible.cfg? Another situation: I salted the user passwords, again as I usually do. But if the grading script just compares the password hashes, it will conclude that my password is wrong, even though it is actually correct.
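The salted-password worry can be made concrete with a small sketch. This uses a plain salted SHA-512 digest purely for illustration (real /etc/shadow entries use the SHA-512-crypt scheme, and the function names here are made up); the point is only that the same password with two random salts produces two different hashes, so a grader comparing hash strings byte-for-byte would wrongly flag one of them:

```python
import hashlib
import os

def make_hash(password, salt=None):
    # Hypothetical illustration: a plain salted SHA-512 digest, NOT the
    # SHA-512-crypt scheme that /etc/shadow actually uses.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha512(salt + password.encode()).hexdigest()
    return salt, digest

# Same password, two random salts -> two different hash strings.
s1, h1 = make_hash("redhat")
s2, h2 = make_hash("redhat")
print(h1 != h2)  # True: naive string comparison would call one of these "wrong"

# A robust check re-hashes the candidate password with the *stored* salt
# and only then compares, so both users above verify correctly.
def verify(password, salt, stored_digest):
    return make_hash(password, salt)[1] == stored_digest

print(verify("redhat", s1, h1))  # True
print(verify("redhat", s2, h2))  # True
```

Whether the actual grading playbook verifies passwords this way, or just compares the literal hash strings it expects, is exactly what I cannot tell.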
The exam instructions state that we are allowed to make reasonable changes to the Ansible configuration, but that leaves us guessing what is actually reasonable. Is there some chance that we could get some feedback from someone who is familiar with the exam grading? Is the best strategy on the exam to do only exactly what is stated in the task and try not to touch configuration if possible? Or is the grading very robust and prepared for all sorts of situations, so I have nothing to worry about?
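For what it's worth, Ansible itself handles multiple vault passwords cleanly via vault IDs in ansible.cfg, so something like the sketch below (labels and file paths are made up for illustration) would not normally confuse ansible-playbook runs. Whether the grading playbook tolerates it is exactly the open question:

```ini
# ansible.cfg (sketch; vault ID labels and password file paths are hypothetical)
[defaults]
inventory = ./inventory
# One password file per vault ID; Ansible tries the matching ID when decrypting.
vault_identity_list = dev@~/.vault_pass_dev.txt, prod@~/.vault_pass_prod.txt
```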
"but leaves us guessing what is actually reasonable" - good point
"... the best strategy on the exam to do only exactly what is stated in the task and try not to touch configuration if possible?" - exactly
"Or is the grading very robust and prepared for all sorts of situations, so I have nothing to worry about?" - nooo, not at all. My candidate experience, based on receiving a much lower score than expected, is that the scripts look for some very specific things, and will fail or get confused if they do not find what they are looking for.
Unfortunately, besides the per-topic breakdown, you receive nothing, so if you fail you never know what went wrong.
My experience was different. At the end of the exam you get a breakdown of the percentage scored in each subject area. While you will not know the specific question, you will know the area that needs focus. If you are using the official learning materials and have access to the lab environments, you can go back, retake the practice exam questions, and see how they are scored. While I have no insider knowledge of the grading playbooks, it seemed to me that the grading of the practice exam was in alignment with the actual exam. If you think you missed a particular question because of a configuration change, go back, find the similar question in the practice exam, and attempt the same method; I would anticipate a similar grading result.
Lol, I think those breakdown percentages are total nonsense. For instance, on the RHCSA exam everyone in our company got exactly 67% on containers.
And unlike the RHCSA, the RHCE exam tasks are very different from the practice exams in the RH lab. While the necessary knowledge is in the course, the lab tasks are way too simple and do not prepare you sufficiently for the exam.
Same here: my colleague, a long-time senior RHEL engineer, just took the exam to re-certify. He finished every task, double-checked everything, and did pass, but the score was much lower than expected.
I'm going for EX280 currently, and I feel the same: the labs are nice, but the course content is not enough practice for the real exam. Also, ODF / DO370 is still on 4.7, whose support ended long ago, and much has changed since.
Red Hat Learning Community: a collaborative learning environment, enabling open source skill development.