Travis
Moderator
  • 7,488 Views

AAP2 and Ansible Navigator - Execution Environments


The ansible-navigator command was introduced with the release of Ansible Automation Platform 2 (AAP2). Ansible Navigator allows the use of Execution Environments (EEs), which leverage container-based images known as Execution Environment Images (EEIs). Navigator uses the default container runtime on the system to launch the EE in order to run a playbook or perform other Ansible functions and subcommands (https://ansible.readthedocs.io/projects/navigator/) and (https://ansible.readthedocs.io/projects/navigator/installation/#install-ansible-navigator-windows).

There is a separate configuration file for ansible-navigator, called ansible-navigator.yml, which contains the basic settings and configuration information for how Navigator should interact with and launch the container images (EEIs). An overview of the settings can be found at (https://ansible.readthedocs.io/projects/navigator/settings/), and some of the container behavior can be modified through this configuration file. If there are multiple container runtimes (container engines) installed, it is possible to specify which one the ansible-navigator command uses:

 

   ansible-navigator:
     execution-environment:
       container-engine: podman
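
To point Navigator at a specific EEI (which is what the course-specific changes mentioned below involve), the image can be set in the same file. A minimal sketch, assuming a recent ansible-navigator version and a hypothetical registry/image name (the exact pull keys vary between Navigator 1.x and 2.x):

   ansible-navigator:
     execution-environment:
       container-engine: podman
       image: registry.example.com/ee-supported-rhel8:latest
       pull:
         policy: missing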

 

The other thing to keep in mind when using Ansible Navigator is that localhost now has a different meaning. Before EEs, a playbook was launched locally from a control node with the ansible-playbook command. Now the playbook, along with everything else in the working directory, is mounted inside the container and executed from the EE. As a result, any assets that would be copied or written to localhost now reside within the temporary (ephemeral) filesystem of the running container and are deleted as soon as the ansible-navigator command exits.
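
As a minimal sketch of the pitfall (hypothetical filename and content, not one of the demo playbooks below):

   ---
   - name: Write a file to "localhost"
     hosts: localhost
     tasks:
       - name: Create a file that lands inside the EE container
         ansible.builtin.copy:
           content: "this lives on the container's ephemeral filesystem\n"
           dest: /tmp/demo_output.txt

Run under ansible-navigator, /tmp/demo_output.txt exists only inside the running container and disappears when the command exits.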

I have created a quick demo using a set of dummy playbooks to illustrate the differences between localhost and workstation in the RH294/DO374/DO467 classroom environments. There are two playbooks, Localhost_Navigator_Demo.yml and Workstation_Navigator_Demo.yml, that give you the opportunity to run and observe the container, as each uses the wait_for module to look for a file in a specific directory. The demo is located at (https://github.com/tmichett/AnsiblePlaybooks/tree/master/AAP2/navigator) and is set up to run directly from the RH294 classroom. For the other courses, the EEI in the ansible-navigator.yml file will need to be changed to the correct image.

 

When running the playbooks, you should also open another terminal window so that you can use a podman exec -it <Container_Name> /bin/bash command to get a shell inside the container and look around. This will provide the most useful information. The playbook looks for a file called /tmp/navdemo on the system and expects to find DEMO COMPLETE in that file. It will wait at the Ansible task until the file exists with the correct content (similar to something you might need in a real Ansible task).
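
The waiting task is presumably something like this sketch (wait_for with path and search_regex is standard module usage; the exact task in the demo may differ):

   - name: Wait until the string "DEMO COMPLETE" is in the file /tmp/navdemo
     ansible.builtin.wait_for:
       path: /tmp/navdemo
       search_regex: DEMO COMPLETE
       timeout: 600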

Running the demo is easy …

Terminal 1:

[student@workstation navigator]$ ansible-navigator run Localhost_Navigator_Demo.yml

… OUTPUT OMITTED …

TASK [Display message to screen] *****************************************************
ok: [localhost] => {
    "msg": "Hello, I'm waiting on a file on localhost /tmp/navdemo. I will continue waiting until the file exists. Open a new terminal, use PODMAN to go into the container and then create the file /tmp/navdemo with contents 'DEMO COMPLETE'."
}

TASK [Wait until the string "DEMO COMPLETE" is in the file /tmp/foo before continuing] ***

 

Terminal 2:

[student@workstation navigator]$ ls /tmp | grep navdemo

[student@workstation navigator]$ podman exec -it ansible_runner_65c393c1-0d78-4126-a0db-d759f0884041 /bin/bash
bash-4.4#

bash-4.4# ls /tmp | grep navdemo

bash-4.4# ls
Demo_Clean.sh		      Workstation_Navigator_Demo.yml  ansible.cfg
Demo_Complete.sh	      ansible-navigator.log	      inventory
Localhost_Navigator_Demo.yml  ansible-navigator.yml	      playbook.yml

bash-4.4# ./Demo_Complete.sh

 

Repeat the above steps for Workstation_Navigator_Demo.yml. You should see that running the Demo_Complete.sh script inside the container does nothing; you must run it locally on the workstation system for the contents to exist in the correct directory on the correct system. Even though workstation is used to launch the same EEI as an EE with the ansible-navigator command, one playbook targets localhost and the other targets workstation.

These differences must be taken into account when refactoring older playbooks. Playbooks were often written to collect items “locally” on the control node (the node running the ansible-playbook command); now, if a playbook writes content to localhost, that content is written inside the running container, which sits on ephemeral storage and is deleted when the ansible-navigator command completes. There are various ways to modify and update the playbooks, or you can update the ansible-navigator.yml file so that persistent storage is presented in the running container and anything written to “localhost” is saved. More information and demos can be found at (https://github.com/tmichett/do374/tree/main/Demos/Misc) in the Delegation and Extra_Mounts directories.
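
For the persistent-storage approach, a minimal sketch of the relevant ansible-navigator.yml section (the volume-mounts setting with src/dest/options keys is part of the Navigator settings schema; the host path here is hypothetical):

   ansible-navigator:
     execution-environment:
       volume-mounts:
         - src: /home/student/persistent
           dest: /persistent
           options: Z

Anything the playbook writes under /persistent inside the EE then survives on workstation after ansible-navigator exits.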

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training
14 Replies
Travis
Moderator
  • 1,184 Views

@ConstantinB -

I've updated some of my "General" demo items in my AnsiblePlaybooks repository. I'm still working on things and it will always be a work in progress (WIP), but there are instructions on examining the Execution Environment in more depth, based especially on @bonnevil's input above.

https://github.com/tmichett/AnsiblePlaybooks/blob/master/AAP2/navigator/EE_Demo_Readme.adoc

This allows you to see how the Ansible user is different with some of the ad-hoc examples I provided. I generally do this as part of my RH294/DO374 deliveries, but this helps make things a little clearer (I think). I will be expanding the ADOC tutorial a bit more, but for now it shows the basic tests I teach people with ad-hoc commands to verify the inventory and ansible.cfg.
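
For example, the shape of those tests might look something like the following (ansible-navigator's exec subcommand runs an arbitrary command inside the EE; the specific commands here are my illustration, not necessarily the ones in the tutorial):

[student@workstation navigator]$ ansible-navigator exec -- ansible --version
[student@workstation navigator]$ ansible-navigator exec -- ansible-inventory --list
[student@workstation navigator]$ ansible-navigator exec -- id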

You will notice in this example that the ansible user was/is DEVOPS and that SSH keys and SUDOERS are both set up. Let me know of ways I can improve this further if you think anything is missing for your understanding.

@Trevor - same goes for you too!

 

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training
ConstantinB
Mission Specialist
  • 1,175 Views

Thanks @bonnevil, kudos for the input above! It's a much simpler way to get inside the container! TBH, even though "--exec" is mentioned in the ansible-navigator --help output, I overlooked it and was trying to find a way to get there using podman (which is how I landed here).
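
For reference, on a recent Navigator version the exec subcommand drops you straight into the EE without needing podman at all (a sketch; the container prompt will vary with the image):

[student@workstation navigator]$ ansible-navigator exec -- /bin/bash
bash-4.4#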

Thanks @Travis for a great demo! I totally agree with the approach!

I haven't seen any of these EE particularities mentioned in the documentation. So when I read this topic, I thought it worth mentioning the other two I was aware of, plus another one I discovered a bit later. :)

 

Scott-B
Cadet
  • 56 Views

So why make this so much more complex? Why make ansible-navigator use a container instead of just deploying software like ansible? It's not like DNF can't handle the dependencies, and RPM can include whatever you want. So far I just see a lot of downside and no valid reason to have made this a container, with all the drawbacks of a full-on EE.

bonnevil
Starfighter
  • 82 Views

There are a couple of reasons. In theory it allows you to use custom containers with different libraries and dependencies installed, and to avoid conflicts between two Ansible projects that have incompatible Python (or other) dependencies.

One of the things Red Hat observed with the old Ansible Tower is that folks using Ansible had to have a pile of separate Python virtual environments to manage conflicting Python library/package dependencies for their custom code and the versions of modules they wanted to use. This was a minor nightmare to keep updated, and had to be managed by pip instead of DNF or RPM, for each venv.

It also made development harder, because an automation dev could have different libraries (possibly from an earlier project) on their dev workstation than were available in the Ansible Tower environment, so when they tried to run the code on Ansible Tower (automation controller) it would break. And even if you *didn't* use Ansible Tower, two different devs could have different libraries and dependencies from each other, so they'd have to keep notes on what those were and make sure they stayed coordinated.

Folks with less complex or bleeding-edge automation didn't run into this as much, but it was an issue for a fair number of folks who needed special stuff in their execution environment.

So the solution, in the move from Ansible Tower to automation controller, was to containerize the "execution environments" that the venvs and the bare metal of the server had previously provided. You could have different containers for different automation, and you could put version tags on the containers to manage them, so you knew you had the latest set of modifications for a particular playbook or project and didn't have to jump through hoops to set things up or run them.

Tying this into ansible-navigator as well means two things:

  1. You can develop your code and customize your "execution environment" container image together, so that all you have to do is publish your Ansible project in Git and tell the folks who use it to pull a particular container image from your container registry.  This makes code distribution a lot simpler.

  2. Even if you don't use automation controller, this makes it a lot easier for two folks on different development workstations to ensure that they're using an execution environment with the correct versions of dependencies.

Another interesting wrinkle is that, to some extent, you can use this with different versions of Ansible.  This was useful during the transition from Ansible 2.9 to the new order of Ansible Core 2.11+ / AAP 2.x and collections, for example, where some folks had new playbooks alongside playbooks that still needed migration.
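
In practice that can be as simple as pointing different projects at different tagged images on the command line; --execution-environment-image (short form --eei) is the relevant ansible-navigator option, and the image names below are hypothetical:

[student@workstation legacy]$ ansible-navigator run old_playbook.yml --eei registry.example.com/ee-29:latest
[student@workstation migrated]$ ansible-navigator run site.yml --eei registry.example.com/ee-supported-rhel8:latest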

Scott-B
Cadet
  • 44 Views

That makes more sense. It's unfortunate that it was needed, but at least I understand more of the reasoning behind it.

Thank you very much for taking the time to give such a thorough explanation!
