Chetan_Tiwary_
Moderator

import_tasks vs include_tasks

Complex Ansible playbooks can be like a tangled web of tasks. If you're not the one who wrote it, it can be difficult to understand the logic of the playbook and to find the specific tasks that you need to modify. To make these playbooks more readable and maintainable, you can divide the tasks into separate files. This will help to untangle the web and make it easier to understand the playbook.

One such method is to reuse an existing task file by including or importing it into your playbook, so you don't have to write everything from scratch, using either import_tasks or include_tasks.


Import tasks are pre-processed at the time the playbook is parsed, while include tasks are processed as they are encountered during the execution of the playbook. This means that import tasks are evaluated once, at the beginning of the playbook, while include tasks are evaluated each time they are encountered.
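In playbook syntax the two directives look identical; only the processing model differs. A minimal sketch (the file name common_tasks.yml is just a placeholder):

```yaml
tasks:
  # Resolved once, when the playbook is parsed (static)
  - import_tasks: common_tasks.yml

  # Resolved each time this task is reached during the run (dynamic)
  - include_tasks: common_tasks.yml
```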

Let's understand this using the following cases:

We have 3 YAML files:

1. import.yaml:

[screenshot: contents of import.yaml]
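The file itself is only shown in the screenshot, so here is a hedged reconstruction of what an import.yaml like this typically contains, based on the run described below; the play and task names are assumptions:

```yaml
---
# import.yaml (reconstructed sketch - the actual file is in the screenshot)
- name: Demo of import_tasks
  hosts: all
  tasks:
    - name: Task printing Hello world
      ansible.builtin.debug:
        msg: "Hello world"

    # Tasks from this file are merged into the play at parse time
    - import_tasks: http_installation.yaml
```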

2. include.yaml:

[screenshot: contents of include.yaml]
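Likewise, a hedged sketch of include.yaml (names are assumptions; only the directive differs from the import case):

```yaml
---
# include.yaml (reconstructed sketch - the actual file is in the screenshot)
- name: Demo of include_tasks
  hosts: all
  tasks:
    - name: Task printing Hello world
      ansible.builtin.debug:
        msg: "Hello world"

    # This appears as its own task in the run output; the file's tasks
    # are loaded only when this task executes
    - include_tasks: http_installation.yaml
```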

3. http_installation.yaml:

[screenshot: contents of http_installation.yaml]
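Again as a hedged sketch (the module and package are assumptions drawn from the file name):

```yaml
---
# http_installation.yaml (reconstructed sketch) - a plain task file,
# not a play, so it can be imported or included by either playbook
- name: Install httpd
  ansible.builtin.dnf:
    name: httpd
    state: present
```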

****************************************************************

Now let us execute the playbooks one by one, starting with include.yaml:

[screenshot: output of running include.yaml]

Here one can see that a total of 4 tasks are executed:

  1. Gathering the facts
  2. Task printing Hello world
  3. A task being included
  4. Execution of the task that was inside the included task file

 

One can easily infer that task 4 was the result of the include in task 3, and that it ran because it was encountered during runtime processing (NOT pre-processed).

 

*************************************************************************************************

 

Now we will run import.yaml:

[screenshot: output of running import.yaml]

So only 3 tasks got executed, not 4 as in the include.yaml case.

 

Notice the arrow in the output - it points at the message: statically imported: http_installation.yaml

This means import_tasks pulls its task file in before execution even begins: the tasks are pre-processed when the playbook is parsed, so they appear as part of the play itself rather than as a separate included task.

 

Hence, we can conclude that : IMPORT is a STATIC operation while INCLUDE is a DYNAMIC operation.
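This distinction has practical consequences beyond the task count. Because include_tasks is resolved at runtime, it works with loops and with variables that only exist during the run, while import_tasks does not; a hedged sketch (the variable name web_hosts is a placeholder):

```yaml
# Works: the include is re-evaluated on every loop iteration
- include_tasks: http_installation.yaml
  loop: "{{ web_hosts | default(['web1', 'web2']) }}"

# Fails while the playbook is still being parsed - Ansible reports that
# loops cannot be used on 'import_tasks' statements:
# - import_tasks: http_installation.yaml
#   loop: "{{ web_hosts }}"
```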

 

Let’s refer to the summary provided at : https://docs.ansible.com/ansible/devel/playbook_guide/playbooks_reuse.html#dynamic-vs-static 

[screenshot: dynamic vs static comparison table from the Ansible docs]

10 Replies
kegmystadev
Mission Specialist

assume role issues thru codepipeline

Hey Trev, appreciate it,

tasks:
  - include_tasks: _assume_role.yml
    vars:
      assume_role_name: AWSAssumeRoleforCommonPipeline
      role_session_name: enc-common-pipeline-session

  - block:
      - command: aws sts get-caller-identity
        register: get_caller_id_output

      - debug: var=get_caller_id_output.stdout

      - include_tasks: roles/common.lambda_layers/tasks/main.yml
    environment:
      AWS_ACCESS_KEY_ID: "{{ (sts_creds|default({})).access_key|default() }}"
      AWS_SECRET_ACCESS_KEY: "{{ (sts_creds|default({})).secret_key|default() }}"
      AWS_SESSION_TOKEN: "{{ (sts_creds|default({})).session_token|default() }}"
 
That is my play. The file that gets called in main.yml creates an S3 bucket through the pipeline, and I keep getting this:
 
TASK [Deploy S3 bucket for zip files]
******************************************
An exception occurred during task execution.
To see the full traceback, use -vvv.
The error was: botocore.exceptions.ClientError: An error occurred
(AccessDenied) when calling the DescribeStacks operation:
User: arn:aws:sts::3081:assumed-role/AWSAssumeRoleforCommonPipeline/enc-common-pipeline-session
is not authorized to perform: cloudformation:DescribeStacks on resource:
arn:aws:cloudformation:ap-southeast-2:3081:stack/twe-enc-common-3081/7cab40a0-f15b-11ee-a816-0a91bc59e3ff
because no identity-based policy allows the cloudformation:DescribeStacks action
 
- name: Deploy S3 bucket for zip files
  cloudformation:
    region: "{{ aws_region }}"
    state: present
    stack_name: "{{ s3_bucket_name }}"
    template: "{{ s3_template }}"
    template_parameters:
      LayerBucketName: "{{ s3_bucket_name }}"
    tags: "{{ tags }}"
 
I have tested this code multiple times; the code is fine. The issue is assuming the role in the pipeline - I just don't seem to be able to give it the perms it needs, which confuses me. Your help is very much appreciated. I have ONLY come to Ansible in the last 3 months, and CFN templates are pretty new too, so any help is appreciated.
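For context (not part of the original reply): the AccessDenied error above means the assumed role itself lacks CloudFormation permissions, so the fix lives in IAM rather than in the playbook. A hedged sketch of an inline policy in CloudFormation YAML - the policy name, action list, and Resource scope are placeholders to adapt:

```yaml
# Hypothetical inline policy attached to the pipeline role; the Ansible
# cloudformation module needs at least DescribeStacks plus stack
# create/update permissions to deploy the bucket stack
CommonPipelinePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: allow-cloudformation-deploy
    Roles:
      - AWSAssumeRoleforCommonPipeline
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - cloudformation:DescribeStacks
            - cloudformation:CreateStack
            - cloudformation:UpdateStack
          Resource: "*"
```

Scoping Resource to the specific stack ARN instead of "*" would be tighter once the deploy works.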
 
 

 
