I am on https://rol.redhat.com/rol/app/courses/jb183-7.0/pages/ch03s09.
It has the following step:
"Open a new terminal window and run the following command to grade the lab: [student@workstation ~]$lab ejb-assessment grade."
Please describe this process of grading. Whose functionality is it — Java/J2EE, a custom program, JBDS, or JAP?
Grading is performed by a shell script that does a lot of grepping to look for things that are meant to be configured, and gives you a PASS or FAIL based on the outcome.
Thanks for responding. What are the examples of things it would look for please? Thanks, Rama.
I'm currently working on DO280, so I can give you an example from OpenShift. There is a task to create a new OpenShift user and set a specific password.
The grading script greps the password file and looks for that username, and then counts the lines. If the line count is not zero, I get a PASS. Having said that, the logic for checking the password is identical - the script greps for the username and assumes that the password is set/correct. Go figure. What that means is that I can get a PASS even if my user has no password and is unable to log in. From the grading point of view the task would succeed.
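To make the flaw concrete, here is a minimal sketch of the kind of line-count check described above. The file path, username, and variable names are my own assumptions for illustration, not the actual DO280 script:

```shell
#!/bin/bash
# Hypothetical sketch of a grep-based grading check (not Red Hat's code).
# Paths and names below are illustrative assumptions.
PASSWD_FILE="/tmp/htpasswd-demo"
USER="demo-user"

# Create a demo password file containing the user but an EMPTY password
# field, to show how a pure line-count check still passes.
echo "${USER}:" > "${PASSWD_FILE}"

# The check: grep for the username and count matching lines.
count=$(grep -c "^${USER}:" "${PASSWD_FILE}")

if [ "${count}" -ne 0 ]; then
    echo "PASS: user ${USER} found"
else
    echo "FAIL: user ${USER} not found"
fi
```

Because the check only counts lines matching the username, the user above gets a PASS even though it has no password and could never log in.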
It is worth mentioning that these lab grading scripts are Red Hat Training-specific material.
Once the grading command for a specific lab has been run on your workstation, the script for that lab is copied to /usr/local/lib/.
Each script has a lab_grade() function where you can see exactly what the grading command does. This can be very useful, for example if you don't understand a FAIL assessment for a task that you think you have implemented correctly.
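For readers who haven't opened one of these scripts, a rough sketch of the structure might look like the following. The function and file names here are assumptions based on the thread, not Red Hat's actual implementation:

```shell
#!/bin/bash
# Hypothetical sketch of a lab grading script's structure.
# Function names, paths, and checks are illustrative assumptions.

pass_fail() {
    # Print PASS/FAIL for one check based on an exit code ($1)
    # and a short description of the check ($2).
    if [ "$1" -eq 0 ]; then
        echo "PASS: $2"
    else
        echo "FAIL: $2"
    fi
}

lab_grade() {
    # Check 1: a required line exists in a config file.
    grep -q 'example-setting = true' /tmp/demo.conf
    pass_fail $? "config contains example-setting"

    # Check 2: evidence the service is up (here faked with a PID file,
    # which illustrates the "config line vs. working service" gap).
    [ -f /tmp/demo-service.pid ]
    pass_fail $? "service appears to be running"
}

# Demo fixtures so this sketch is runnable on its own.
echo 'example-setting = true' > /tmp/demo.conf
touch /tmp/demo-service.pid

lab_grade
```

Reading the real lab_grade() function this way shows you which artifacts the script greps for, which is often enough to diagnose an unexpected FAIL.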
The more Red Hat courses I take, the more I realise how flawed the grading scripts are. In some cases a service doesn't even need to work: as long as there is a required line in the config file, I'll get a PASS.
We are always striving to make our grading scripts accurate and useful. At times, what developers use for the grading logic may not be the best way to do it, but we are always open to receiving feedback on how to improve that.
Ideally, we would grade every step; in practice, we try to verify things based on observable output (for example, we would curl a web page instead of checking whether or not a vhost has been properly configured by the student).
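The curl-the-page approach mentioned above could be sketched like this. The URL and port are assumptions for the demo; a real grading script would point at the lab's actual vhost:

```shell
#!/bin/bash
# Hypothetical sketch of output-based grading: instead of grepping the
# vhost config, fetch the page and check the HTTP response.
# The URL/port are illustrative assumptions.

# Start a throwaway local web server so the check has something to hit.
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# -s silences progress, -o discards the body, -w prints the status code.
status=$(curl -s -o /dev/null -w '%{http_code}' "http://127.0.0.1:8080/")

kill "${server_pid}"

if [ "${status}" = "200" ]; then
    echo "PASS: web page is being served"
else
    echo "FAIL: got HTTP ${status}"
fi
```

Unlike a config-file grep, this check fails if the service is down, misconfigured, or not listening, which is exactly the gap described earlier in the thread.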
In the case that you are describing, it sounds like we forgot to check whether or not the service was started.
Feel free to drop us a note with some feedback or suggestions on how to make those scripts better :)
I have reported RHLS training material-related issues before, but the response I received was somewhat typical: the relevant team is aware and will fix it in the future.
I also provided feedback during one of the exams (there is a feedback section available), but in that case received no response whatsoever. Not sure if I was meant to TBH.
I am happy to provide feedback if you're going to look into it, but I wouldn't bother otherwise.
@Razique Could you suggest a reliable forum to report such issues, please?
I too found a couple of such issues with the documentation.
You could send me a message here, but the best way is to contact support via ROL.
@Lisenet We always have a backlog for all of our courses, and we are always trying to prioritize our resources to ensure that we solve the most critical issues first. That is not to say that what you have reported isn't important, but it could be that we already have a defect open for it.
If you send me more information about the course, the version, and the suggested improvement, I'll see if there's an open defect for it; if not, I will take care of creating one.
For more general feedback, don't hesitate to send me a private message; I'm always open to new ideas to improve the grading infrastructure.