Hi all,
I'm running into an issue while self-studying the following course.
- Introduction to Containers, Kubernetes, and Red Hat OpenShift (DO180R)
Can anyone help with the issue below?
Command)
sudo podman run --name mysql-basic \
> -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
> -e MYSQL_DATABASE=items -e MYSQL_ROOT_PASSWORD=r00tpa55 \
> -d rhscl/mysql-57-rhel7:5.7-3.14
Error)
Trying to pull registry.lab.example.com/rhscl/mysql-57-rhel7:5.7-3.14 ...Failed
unable to pull rhscl/mysql-57-rhel7:5.7-3.14: 1 error occurred:
* Error determining manifest MIME type for docker://registry.lab.example.com/rhscl/mysql-57-rhel7:5.7-3.14: pinging docker registry returned: Get https://registry.lab.example.com/v2/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "registry.lab.example.com")
Thanks all.
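For anyone hitting the same thing: "x509: certificate signed by unknown authority" means podman does not trust the CA that signed the classroom registry's certificate. A quick way to see which CA that is (a sketch, not part of the course; cert_info is a hypothetical helper name):

```shell
# Diagnostic sketch: print the subject, issuer, and expiry of a PEM
# certificate read on stdin. Pipe in the certificate the registry
# serves to see which CA must be trusted:
#   openssl s_client -connect registry.lab.example.com:443 </dev/null 2>/dev/null | cert_info
cert_info() {
    openssl x509 -noout -subject -issuer -enddate
}
```

If the issuer shown is not in the system trust store, you will see exactly the pull failure above.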
Manual fix instructions:
Essentially you are copying the docker registry certificate from the Services machine to workstation, master0, worker0, and worker1, and then trusting it on each. You must then restart the cluster machines (master0, worker0, worker1) so the cluster recognizes the new cert.
Details:
Login to workstation as student then run:
sudo -i
scp root@services:/etc/pki/ca-trust/source/anchors/example.com.crt /etc/pki/ca-trust/source/anchors
It's okay to overwrite the existing one - now trust it
update-ca-trust extract
Repeat this process on master0, worker0, and worker1
sudo ssh core@master0
sudo -i
scp root@services:/etc/pki/ca-trust/source/anchors/example.com.crt /etc/pki/ca-trust/source/anchors
update-ca-trust extract
Repeat for worker0 and worker1
Restart the three VMs master0, worker0, and worker1
Once they are rebooted, it can take 5-10 minutes before the cluster allows you to log in.
To Test on Workstation:
sudo podman pull registry.lab.example.com/httpd:2.4
You should get no CA errors and the image should be visible via
sudo podman images
Cluster:
Login to the cluster using the kubeadmin credentials (see course for details)
oc new-project test
oc new-app registry.lab.example.com/httpd:2.4 --insecure-registry
Observe the output of the following command:
oc get events
You should see a successful pull of the image, but the container will error out because it needs to run as root. This can be ignored; the fact that you can pull the image shows the issue is fixed.
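As an extra sanity check after running update-ca-trust extract on each node, you can confirm the classroom CA actually landed in the trust store. This is a sketch under the assumption that p11-kit's trust tool is available, as it is on RHEL; ca_trusted is a hypothetical helper name:

```shell
# Sketch: report whether a CA whose label matches the given pattern
# is present in the system trust store (assumes p11-kit's `trust`).
ca_trusted() {
    trust list | grep -qi "$1"
}

# On each node, after update-ca-trust extract:
# ca_trusted example.com && echo "CA trusted" || echo "CA missing"
```

If the CA is missing here, the pull test below will still fail with the same x509 error.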
Dear jim_rigsbee,
It's working now after I fixed it following your guide. Thanks so much.
I am being asked for root@services's password, which is not provided in this thread...
Repeat this process on master0, worker0, and worker1
sudo ssh core@master0
sudo -i
scp root@services:/etc/pki/ca-trust/source/anchors/example.com.crt /etc/pki/ca-trust/source/anchors
root@services's password:
If this is expected, what are the required passwords?
The root password is redhat.
The kubeadmin ID also asks for a password, which I did not know.
I ignored the kubeadmin part since it was another test.
Section 2.3 did complete successfully after this fix was implemented.
Please upload a new lab environment ASAP.
Hi Jim,
Thanks! Your fix worked for me.
Rebooting master0, worker0, and worker1 was very fast for me (I connected to them from workstation via ssh). I only waited a minute or two and was able to run the guided SQL lab successfully.
Jack
Hmm... it didn't quite work, actually:
the image was pulled to my workstation (I can see it with "sudo podman images"), but the container is not running.
sudo podman ps
nothing is displayed with the above command.
??
sudo podman ps
This command shows the containers that are running. You pulled the image, but you haven't created a container from that image yet.
You can see containers that are not running with:
sudo podman ps -a
You can create a container with this command:
sudo podman run --name mysql-basic \
> -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
> -e MYSQL_DATABASE=items -e MYSQL_ROOT_PASSWORD=r00tpa55 \
> -d rhscl/mysql-57-rhel7:5.7-3.14
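To confirm the run actually left a container up, here is a small sketch (container_running is a hypothetical helper name; assumes podman is installed):

```shell
# Sketch: true if a container with exactly this name is running.
# `podman ps` only lists running containers, so a container that
# exited right after starting will not match.
container_running() {
    sudo podman ps --filter "name=$1" --format '{{.Names}}' | grep -qx "$1"
}

# Usage:
# container_running mysql-basic && echo up || sudo podman logs mysql-basic
```

If it is not running, the logs usually say why (for this image, often a missing or rejected environment variable).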
Hello Ahmad
Thanks, but I did run the long "sudo podman run ..." command; it pulled down the image but did not spin up a container.
No need to worry any more, though.
The classroom environment has been fixed as per a later post by zachgutterman. I deleted my lab and reprovisioned. This time the lab worked as expected!
Jack
You will encounter another error when trying to access the web console in another part of the course. The symptom is a message in the browser that it cannot connect to the server. Here is the fix:
From workstation:
ssh root@lb
vi /etc/haproxy/haproxy.cfg
Make the bottom of the file look like this (you're changing the ports on http and https):
backend http
#mode tcp
mode http
balance roundrobin
server http1 172.25.250.51:31577 check
server http2 172.25.250.52:31577 check
backend https
mode tcp
balance roundrobin
option ssl-hello-chk
server http1 172.25.250.51:31941 check
server http2 172.25.250.52:31941 check
Save the file and restart haproxy:
systemctl restart haproxy
Test the console with curl or Firefox
Browse: https://console-openshift-console.apps.cluster.lab.example.com
Use kubeadmin to login - password is on workstation /home/student/auth/kubeadmin-password
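Two optional checks after editing the file (a sketch, not from the course; probe is a hypothetical helper name). The haproxy/systemctl lines are commented out because they must run on lb as root:

```shell
# On lb, validate the edited config before restarting:
#   haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl restart haproxy

# From workstation, check that the console answers at all. -k skips
# certificate verification; only reachability matters for this test.
probe() {
    curl -k -s -o /dev/null -w '%{http_code}' "$1"
}
# probe https://console-openshift-console.apps.cluster.lab.example.com
```

Anything other than a 200 (or a 000 for no connection at all) points back at the haproxy backend ports.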
Hello Jim!
I'm at DO180's Section 6.10 and I have implemented your haproxy instructions above, but I still cannot connect to the OpenShift web console using Firefox from the workstation.
Please see my screen captures above and advise. Thanks!
Red Hat Learning Community: a collaborative learning environment, enabling open source skill development.