Ansible is a free and open-source configuration management tool that provides powerful features for automation. Ansible is very beginner friendly because it uses YAML documents for configuration, which are intuitive and human readable. It also offers a simple syntax with minimal dependencies.

Ansible provides a set of built-in tools and modules that allow engineers to automate repetitive tasks in simple steps. Using Ansible, you can perform anything from simple tasks such as gathering system information to complex ones such as managing services, cron jobs, cloning repositories, and more.
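
For instance, each of those tasks maps to a built-in module. Below is a minimal playbook sketch to give you a feel for the syntax; the host group, repository URL, and paths are placeholders for illustration, not part of this guide:

---
- name: Examples of common automation tasks
  hosts: all
  become: true
  tasks:
    - name: Ensure the cron service is running
      ansible.builtin.service:
        name: cron
        state: started

    - name: Schedule a nightly cleanup job
      ansible.builtin.cron:
        name: "nightly cleanup"
        minute: "0"
        hour: "2"
        job: "find /tmp -type f -mtime +7 -delete"

    - name: Clone a Git repository
      ansible.builtin.git:
        repo: "https://github.com/example/example-repo.git"
        dest: /opt/example-repo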

To learn more, check our Ansible series.

Let's dive in.

Installing Ansible on Debian

Before we dive into how to use Ansible, let us ensure we have Ansible installed and running on our system. The machine on which we install Ansible is known as the control node. This is the machine from which we initiate commands to be executed on the remote machines.

  1. Start by updating your system package index:
sudo apt-get update
  2. Next, run the apt command to install Ansible:
sudo apt-get install ansible -y

The -y flag automatically accepts any installation prompts.
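
Once the installation completes, you can confirm it by checking the installed version (the exact version and paths in your output will differ):

ansible --version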

And that's it, you have successfully installed Ansible on your system. We can now proceed to discuss how we can use it.

Ansible Setup Inventory File

Ansible has a special file called an inventory file. The inventory file holds information about the hosts that you will manage using Ansible.

The inventory holds details such as the IP addresses or hostnames of the machines you wish to manage. Luckily, you can organize your hosts into various groups and subgroups. This makes it easy to run specific tasks on a group or a subgroup of hosts.
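
For example, the INI inventory format lets a group contain other groups through the :children suffix. A quick sketch (these group names and addresses are placeholders, not the ones used later in this guide):

[webservers]
web1 ansible_host=192.168.132.140

[databases]
db1 ansible_host=192.168.132.141

[production:children]
webservers
databases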

Once you define your hosts in your inventory file, you can tell Ansible (in the playbook) to limit the scope of the task to a specific host or group of hosts.
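
For instance, a playbook's hosts keyword limits the play to a host or group from the inventory. A minimal sketch using the slave_servers group we define shortly (the debug task is just a placeholder):

---
- name: Target only the slave_servers group
  hosts: slave_servers
  tasks:
    - name: Print a message
      ansible.builtin.debug:
        msg: "This task runs only on hosts in slave_servers"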

To set up our inventory file, we need to create the Ansible configuration directory and add the inventory file to it.

The commands are as shown:

sudo mkdir /etc/ansible
sudo touch /etc/ansible/hosts

When you run an Ansible command or playbook, Ansible checks the default inventory file at /etc/ansible/hosts. However, Ansible allows you to specify a custom inventory file using the -i parameter.
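
For example, assuming you saved a custom inventory file as ./inventory.ini (a hypothetical path, as is playbook.yml), you could point Ansible at it like so:

ansible all -i ./inventory.ini -m ping
ansible-playbook -i ./inventory.ini playbook.yml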

Check our tutorial on using a custom inventory file to learn more.

Next, edit the inventory file with your favorite text editor:

sudo nano /etc/ansible/hosts

In the inventory file, we can create a group of hosts called slave_servers with aliases for each server and their respective addresses.

An example inventory file is as shown:

[slave_servers]
node1 ansible_host=192.168.132.128
node2 ansible_host=192.168.132.129

[all:vars]
ansible_python_interpreter=/usr/bin/python3

In the example inventory file above, we have a group of hosts called [slave_servers]. Keep in mind that we define a group of hosts using a pair of square brackets.

Next, we define an alias for each host followed by the address of the specified host. The syntax is as shown:

alias_name ansible_host=[address]

We also include the [all:vars] section, which defines variables for every host in the inventory. Here, it sets the path to the Python interpreter for all the hosts. Although this variable is optional, it is useful for standardizing the location of the Python interpreter.

NOTE: Python must be installed on the remote hosts for Ansible to perform its tasks.
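
If a particular host keeps its interpreter somewhere else, the same variable can also be set per host instead of under [all:vars]. A sketch (the path below is hypothetical):

node2 ansible_host=192.168.132.129 ansible_python_interpreter=/usr/local/bin/python3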

Once done, save and close the file.

We can now list the available hosts by running the command:

ansible-inventory --list

The command lists all the hosts in the inventory file and verifies that the file follows the correct syntax.

An example output is as shown:

{
    "_meta": {
        "hostvars": {
            "node1": {
                "ansible_host": "192.168.132.128",
                "ansible_python_interpreter": "/usr/bin/python3"
            },
            "node2": {
                "ansible_host": "192.168.132.129",
                "ansible_python_interpreter": "/usr/bin/python3"
            }
        }
    },
    "all": {
        "children": [
            "slave_servers",
            "ungrouped"
        ]
    },
    "slave_servers": {
        "hosts": [
            "node1",
            "node2"
        ]
    }
}

To show the inventory file in YAML format, use the -y parameter as shown:

ansible-inventory --list -y

You should get an output as shown:

all:
  children:
    slave_servers:
      hosts:
        node1:
          ansible_host: 192.168.132.128
          ansible_python_interpreter: /usr/bin/python3
        node2:
          ansible_host: 192.168.132.129
          ansible_python_interpreter: /usr/bin/python3
    ungrouped: {}
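
Another quick way to inspect the group structure is the --graph option:

ansible-inventory --graph

This prints a tree along the following lines:

@all:
  |--@slave_servers:
  |  |--node1
  |  |--node2
  |--@ungrouped: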

Congratulations, you have successfully set up your Ansible inventory file.

Ansible Check if Hosts are Up

Once we have set up the Ansible inventory file, the next step is to check whether the servers are up and whether Ansible can connect to them.

Before we can run the command, we need to ensure we can log in to the servers using SSH. The best way to do that is to generate an SSH key pair and upload the public key to the target hosts.

We can do that by running the command:

ssh-keygen

Accept the defaults and proceed.

Generating public/private rsa key pair.
Enter file in which to save the key (/home/debian/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/debian/.ssh/id_rsa
Your public key has been saved in /home/debian/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:yONsS61qQXjpJj939yw/5BoN9iULz35XSaNQeFvJThw debian@local
The key's randomart image is:
+---[RSA 3072]----+
|            . oEo|
|           . o * |
|   . .      o =  |
|  . +. .   . . + |
|   +  + S + o + o|
|  . +o o . B.= ..|
|   + .= . .o*   .|
|    +o.o. o+o . .|
|   ..+o. ..==o . |
+----[SHA256]-----+
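
If you would rather skip the prompts, the key pair can also be generated non-interactively (a sketch; the empty passphrase and default key path are assumptions):

ssh-keygen -t rsa -b 3072 -f ~/.ssh/id_rsa -N ""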

We now upload the public key to each target host by running the command:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@host_address

Feel free to replace user and host_address with the username and IP address (or hostname) of the remote host. An example command is as shown:

ssh-copy-id -i ~/.ssh/id_rsa.pub debian@192.168.132.128

The command will prompt you for the password for the specified user. Enter the password and proceed.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/debian/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
debian@192.168.132.128's password:
Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'debian@192.168.132.128'"
and check to make sure that only the key(s) you wanted were added.

Repeat the above commands for all the hosts you wish to manage.
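
If you are managing several hosts, a small shell loop saves some typing. A sketch based on the users and addresses used in this guide (adjust them to your own setup):

for target in debian@192.168.132.128 centos@192.168.132.129; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$target"
done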

Once we have the SSH keys set up, we can run the ansible command to check if the hosts are up and running. The command syntax is as shown:

ansible [group] -m ping

We start by calling the ansible command followed by the group name (or host alias) of the servers we wish to access. The -m flag tells Ansible that we wish to use the ping module. The Ansible ping module checks whether the hosts are reachable and returns pong if they are.

An example command is as shown:

ansible node1 -m ping -u debian

In this example, we call the ansible command followed by the alias of the host we wish to check. We then call the ping module using the -m parameter. Finally, we specify the username of the remote host using the -u flag.

Ansible will use the username and the public key we created earlier to log in to the server. If the host is up, Ansible will return an output as shown:

node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

We can repeat the same for node2, this time specifying the username for the second host.

ansible node2 -m ping -u centos

The command above should return:

node2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

We can see both hosts are up and running.

NOTE: If you want to check if all the hosts in the inventory file are up and running, you can use the all group as:

ansible all -m ping -u root

This will use the root account, assuming SSH login as root is enabled on all the hosts.
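
Since our two nodes use different usernames (debian and centos), another option is to record the user per host with the ansible_user variable so the -u flag is no longer needed. A sketch based on the addresses used in this guide:

[slave_servers]
node1 ansible_host=192.168.132.128 ansible_user=debian
node2 ansible_host=192.168.132.129 ansible_user=centos

With this in place, ansible slave_servers -m ping should reach both hosts.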

Ansible Ad-Hoc Commands

Once you are able to connect and interact with your remote hosts, you can run ad-hoc commands straight from your control node.

This is similar to connecting to your remote hosts via SSH and running the commands.

For example, to view the directory listing of the home (~) directory, we can run:

ansible node1 -a "ls -la ~" -u debian

The command will log in to the node1 host and return the directory listing of the home directory. An example output is as shown:

node1 | CHANGED | rc=0 >>
total 80
drwxr-xr-x 17 debian debian 4096 Jul 12 05:46 .
drwxr-xr-x  4 root   root   4096 Aug 17  2021 ..
drwx------  3 debian debian 4096 Jul 12 05:37 .ansible
-rw-------  1 debian debian    0 Aug 17  2021 .bash_history
-rw-r--r--  1 debian debian  220 Aug 17  2021 .bash_logout
-rw-r--r--  1 debian debian 3526 Aug 17  2021 .bashrc
drwx------ 10 debian debian 4096 Aug 17  2021 .cache
drwx------ 11 debian debian 4096 Aug 17  2021 .config
drwx------  3 debian debian 4096 Jul 12 05:46 ~debian:centos
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Desktop
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Documents
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Downloads
drwx------  2 debian debian 4096 Jul 12 05:22 .gnupg
drwxr-xr-x  3 debian debian 4096 Aug 17  2021 .local
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Music
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Pictures
-rw-r--r--  1 debian debian  807 Aug 17  2021 .profile
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Public
drwx------  2 debian debian 4096 Jul 12 05:33 .ssh
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Templates
drwxr-xr-x  2 debian debian 4096 Aug 17  2021 Videos

You can replace the ls -la ~ command with any command you want.

For example, to show the disk usage of node2, we can run:

ansible node2 -a "df -h" -u centos

The command will return an output as shown:

node2 | CHANGED | rc=0 >>
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             866M     0  866M   0% /dev
tmpfs                896M     0  896M   0% /dev/shm
tmpfs                896M   18M  878M   2% /run
tmpfs                896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl-root   70G  4.8G   66G   7% /
/dev/mapper/cl-home  439G  3.2G  436G   1% /home
/dev/sda1           1014M  243M  772M  24% /boot
tmpfs                180M  4.6M  175M   3% /run/user/1000
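
Ad-hoc commands are not limited to shell commands either; you can invoke any module with -m and pass its arguments with -a. For example, a sketch that ensures a package is installed on node1 (the package name is arbitrary, and --become with -K is needed to gain root privileges):

ansible node1 -m apt -a "name=htop state=present" -u debian --become -K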

Conclusion

Drum roll, please! In this guide, you have covered the basics of working with Ansible. We started by installing Ansible on our control node, then covered how to create an Ansible inventory, check if the hosts are up, run ad-hoc commands, and more.

You have successfully earned your GeekBits badge.

If you enjoy our content, please consider buying us a coffee to support our work:
