Ansible repo yaml auto format

Hi all,

I would like to know what you think about using an opinionated code
formatter for our ansible repo. I think this would help us enforce a
common style and make our yaml files easier to read and maintain.

I have been looking at https://prettier.io/ which supports yaml, and I
ran it on the repo's playbooks directory (see attached patch). Most of
the changes are indentation, but we could enforce more rules through
prettier's configuration options if we wish
(https://prettier.io/docs/en/configuration.html).
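
For example, we could check a minimal .prettierrc.yaml into the top of
the repo to pin the few options that matter for yaml. The values below
are only a suggestion to start the discussion, nothing is decided:

    # .prettierrc.yaml -- starting point, values open for debate
    printWidth: 120  # some of our playbooks have long shell one-liners
    tabWidth: 2      # matches upstream ansible documentation examples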

Prettier looks quite cool, but the downside is that it is a JavaScript
application and it is currently not packaged in Fedora; I have used a
container to run it on my laptop.
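
For anybody who wants to reproduce the run, something along these
lines should work with the upstream node image (details from memory,
adjust the image and mount flags to taste; :z is for SELinux):

    $ podman run --rm -v "$PWD":/work:z -w /work node:lts \
          npx prettier --write "playbooks/**/*.yml"

Passing --list-different instead of --write makes prettier exit
non-zero when a file is not formatted, so the same command could later
back a CI check if we decide to adopt this.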

Anyway, what do you think about it? Does anybody know an alternative
to prettier?

From 125219879e4e1fe961d1d293447953b4d5a6c108 Mon Sep 17 00:00:00 2001
From: Clement Verna <cverna@xxxxxxxxxxxx>
Date: Tue, 5 Mar 2019 09:14:45 +0100
Subject: [PATCH 1/1] Use prettier.io to format our playbooks.

This commit uses the opinionated code formatter prettier to format
the yaml files. This gives us consistency and makes it easier
to read and maintain our playbooks.

Signed-off-by: Clement Verna <cverna@xxxxxxxxxxxx>
---
 playbooks/check-for-nonvirt-updates.yml       |   36 +-
 playbooks/check-for-updates.yml               |   32 +-
 playbooks/check-host.yml                      |  506 ++-
 playbooks/clear_memcached.yml                 |    4 +-
 playbooks/clear_varnish.yml                   |    4 +-
 playbooks/cloud_prep.yml                      |   11 +-
 playbooks/deactivate_modernpaste_paste.yml    |    4 +-
 playbooks/death_to_postfix.yml                |   19 +-
 playbooks/destroy_cloud_inst.yml              |   20 +-
 playbooks/destroy_virt_inst.yml               |   56 +-
 playbooks/fix_arm_soc.yml                     |   30 +-
 playbooks/groups/arm-qa.yml                   |   26 +-
 playbooks/groups/autocloud-backend.yml        |   50 +-
 playbooks/groups/autocloud-web.yml            |   46 +-
 playbooks/groups/backup-server.yml            |   44 +-
 playbooks/groups/badges-backend.yml           |   49 +-
 playbooks/groups/badges-web.yml               |   55 +-
 playbooks/groups/basset.yml                   |   43 +-
 playbooks/groups/bastion.yml                  |   50 +-
 playbooks/groups/batcave.yml                  |   77 +-
 playbooks/groups/batcomputer.yml              |   30 +-
 playbooks/groups/beaker-virthosts.yml         |   34 +-
 playbooks/groups/beaker.yml                   |   52 +-
 playbooks/groups/blockerbugs.yml              |   39 +-
 playbooks/groups/bodhi-backend.yml            |  119 +-
 playbooks/groups/bugyou.yml                   |   56 +-
 playbooks/groups/bugzilla2fedmsg.yml          |   44 +-
 playbooks/groups/buildhw.yml                  |   63 +-
 playbooks/groups/busgateway.yml               |   65 +-
 playbooks/groups/certgetter.yml               |   31 +-
 playbooks/groups/ci.yml                       |   60 +-
 playbooks/groups/copr-backend.yml             |   39 +-
 playbooks/groups/copr-dist-git.yml            |   34 +-
 playbooks/groups/copr-frontend-cloud.yml      |   34 +-
 playbooks/groups/copr-frontend-upgrade.yml    |   14 +-
 playbooks/groups/copr-frontend.yml            |   36 +-
 playbooks/groups/copr-keygen.yml              |   50 +-
 playbooks/groups/datagrepper.yml              |   56 +-
 playbooks/groups/dhcp.yml                     |   32 +-
 playbooks/groups/dns.yml                      |   40 +-
 playbooks/groups/download.yml                 |  116 +-
 playbooks/groups/elections.yml                |   56 +-
 playbooks/groups/fas.yml                      |   44 +-
 playbooks/groups/fedimg.yml                   |   64 +-
 playbooks/groups/fedocal.yml                  |   55 +-
 playbooks/groups/freshmaker.yml               |   82 +-
 playbooks/groups/github2fedmsg.yml            |   47 +-
 playbooks/groups/gnome-backups.yml            |   40 +-
 playbooks/groups/hotness.yml                  |   60 +-
 playbooks/groups/hubs.yml                     |  116 +-
 playbooks/groups/infinote.yml                 |   54 +-
 playbooks/groups/ipa.yml                      |   98 +-
 playbooks/groups/ipsilon.yml                  |   67 +-
 playbooks/groups/kerneltest.yml               |   47 +-
 playbooks/groups/keyserver.yml                |   37 +-
 playbooks/groups/koji-hub.yml                 |  150 +-
 playbooks/groups/kojipkgs.yml                 |   52 +-
 playbooks/groups/koschei-backend.yml          |   44 +-
 playbooks/groups/koschei-web.yml              |   34 +-
 playbooks/groups/libravatar.yml               |   29 +-
 playbooks/groups/logserver.yml                |   81 +-
 playbooks/groups/loopabull.yml                |   64 +-
 playbooks/groups/mailman.yml                  |  137 +-
 playbooks/groups/maintainer-test.yml          |   85 +-
 playbooks/groups/mariadb-server.yml           |   32 +-
 playbooks/groups/mbs.yml                      |  116 +-
 playbooks/groups/mdapi.yml                    |   55 +-
 playbooks/groups/memcached.yml                |   32 +-
 playbooks/groups/minimal.yml                  |   28 +-
 playbooks/groups/mirrormanager.yml            |   89 +-
 playbooks/groups/modernpaste.yml              |   42 +-
 playbooks/groups/newcloud-undercloud.yml      |   54 +-
 playbooks/groups/noc.yml                      |   86 +-
 playbooks/groups/notifs-backend.yml           |   66 +-
 playbooks/groups/notifs-web.yml               |   37 +-
 playbooks/groups/nuancier.yml                 |  137 +-
 playbooks/groups/oci-registry.yml             |  100 +-
 playbooks/groups/odcs.yml                     |  158 +-
 playbooks/groups/openqa-workers.yml           |   34 +-
 playbooks/groups/openqa.yml                   |  125 +-
 playbooks/groups/openstack-compute-nodes.yml  |   33 +-
 playbooks/groups/osbs-cluster.yml             |  179 +-
 playbooks/groups/overcloud-config.yml         | 1168 ++++---
 playbooks/groups/packages.yml                 |   61 +-
 playbooks/groups/pagure-proxy.yml             |   32 +-
 playbooks/groups/pagure.yml                   |   73 +-
 playbooks/groups/pdc.yml                      |   64 +-
 playbooks/groups/people.yml                   |   87 +-
 playbooks/groups/pkgs.yml                     |  144 +-
 playbooks/groups/postgresql-server-bdr.yml    |   36 +-
 playbooks/groups/postgresql-server.yml        |   39 +-
 playbooks/groups/proxies.yml                  |  145 +-
 playbooks/groups/qa.yml                       |   77 +-
 playbooks/groups/rabbitmq.yml                 |   30 +-
 playbooks/groups/repospanner.yml              |   38 +-
 playbooks/groups/resultsdb.yml                |   72 +-
 playbooks/groups/retrace.yml                  |   73 +-
 playbooks/groups/rhel8beta.yml                |   24 +-
 playbooks/groups/secondary.yml                |  110 +-
 playbooks/groups/sign-bridge.yml              |   36 +-
 playbooks/groups/simple-koji-ci.yml           |   45 +-
 playbooks/groups/smtp-mm.yml                  |   31 +-
 playbooks/groups/sundries.yml                 |   89 +-
 playbooks/groups/tang.yml                     |   30 +-
 playbooks/groups/taskotron-client-hosts.yml   |   60 +-
 playbooks/groups/taskotron.yml                |  103 +-
 playbooks/groups/torrent.yml                  |   60 +-
 playbooks/groups/twisted-buildbots.yml        |   39 +-
 playbooks/groups/unbound.yml                  |   33 +-
 playbooks/groups/value.yml                    |   47 +-
 playbooks/groups/virthost.yml                 |   37 +-
 playbooks/groups/wiki.yml                     |   51 +-
 playbooks/groups/zanata2fedmsg.yml            |   47 +-
 playbooks/host_reboot.yml                     |   28 +-
 playbooks/host_update.yml                     |   29 +-
 .../ansiblemagazine.fedorainfracloud.org.yml  |   94 +-
 .../hosts/artboard.fedorainfracloud.org.yml   |  211 +-
 .../cloud-noc01.cloud.fedoraproject.org.yml   |   33 +-
 .../hosts/commops.fedorainfracloud.org.yml    |   28 +-
 .../communityblog.fedorainfracloud.org.yml    |   94 +-
 ...data-analysis01.phx2.fedoraproject.org.yml |  106 +-
 .../hosts/developer.fedorainfracloud.org.yml  |   28 +-
 .../elastic-dev.fedorainfracloud.org.yml      |   32 +-
 .../hosts/fas2-dev.fedorainfracloud.org.yml   |   28 +-
 .../hosts/fas3-dev.fedorainfracloud.org.yml   |   28 +-
 .../fed-cloud09.cloud.fedoraproject.org.yml   | 2951 ++++++++++-------
 .../hosts/fedimg-dev.fedorainfracloud.org.yml |   34 +-
 .../fedora-bootstrap.fedorainfracloud.org.yml |   54 +-
 ...littergallery-dev.fedorainfracloud.org.yml |   28 +-
 ...pinesspackets-stg.fedorainfracloud.org.yml |   40 +-
 .../happinesspackets.fedorainfracloud.org.yml |   37 +-
 .../hosts/hubs-dev.fedorainfracloud.org.yml   |   86 +-
 .../hosts/iddev.fedorainfracloud.org.yml      |   40 +-
 .../hosts/ipv6-test.fedoraproject.org.yml     |   28 +-
 .../hosts/lists-dev.fedorainfracloud.org.yml  |  255 +-
 .../hosts/magazine2.fedorainfracloud.org.yml  |  108 +-
 .../hosts/regcfp2.fedorainfracloud.org.yml    |   34 +-
 .../hosts/respins.fedorainfracloud.org.yml    |   28 +-
 .../hosts/taiga.fedorainfracloud.org.yml      |   34 +-
 .../hosts/taigastg.fedorainfracloud.org.yml   |   36 +-
 .../telegram-irc.fedorainfracloud.org.yml     |   34 +-
 .../hosts/testdays.fedorainfracloud.org.yml   |   45 +-
 .../upstreamfirst.fedorainfracloud.org.yml    |   92 +-
 playbooks/include/happy_birthday.yml          |   13 +-
 playbooks/include/proxies-certificates.yml    |  133 +-
 playbooks/include/proxies-fedora-web.yml      |  109 +-
 playbooks/include/proxies-fedorahosted.yml    |   19 +-
 playbooks/include/proxies-haproxy.yml         |   21 +-
 playbooks/include/proxies-miscellaneous.yml   |   97 +-
 playbooks/include/proxies-redirects.yml       | 1531 +++++----
 playbooks/include/proxies-reverseproxy.yml    | 1518 +++++----
 playbooks/include/proxies-rewrites.yml        |  108 +-
 playbooks/include/proxies-websites.yml        | 1984 ++++++-----
 playbooks/include/virt-create.yml             |   11 +-
 playbooks/list-vms-per-host.yml               |   15 +-
 playbooks/manual/autosign.yml                 |   49 +-
 playbooks/manual/get-system-packages.yml      |   18 +-
 playbooks/manual/history_undo.yml             |   36 +-
 playbooks/manual/kernel-qa.yml                |   25 +-
 playbooks/manual/openqa-restart-workers.yml   |   15 +-
 playbooks/manual/push-badges.yml              |   70 +-
 playbooks/manual/qadevel.yml                  |   38 +-
 .../releng-emergency-expire-old-repo.yml      |   20 +-
 playbooks/manual/remote_delldrive.yml         |   22 +-
 playbooks/manual/restart-fedmsg-services.yml  |   64 +-
 playbooks/manual/restart-pagure.yml           |   26 +-
 playbooks/manual/sign-and-import.yml          |   86 +-
 playbooks/manual/sign-vault.yml               |   32 +-
 playbooks/manual/sync-old-pkl.yml             |   48 +-
 playbooks/manual/update-firmware.yml          |  161 +-
 playbooks/manual/update-packages.yml          |   38 +-
 playbooks/openshift-apps/accountsystem.yml    |   88 +-
 playbooks/openshift-apps/asknot.yml           |  100 +-
 playbooks/openshift-apps/bodhi.yml            |  162 +-
 playbooks/openshift-apps/discourse2fedmsg.yml |   64 +-
 playbooks/openshift-apps/docsbuilding.yml     |   46 +-
 playbooks/openshift-apps/elections.yml        |   72 +-
 playbooks/openshift-apps/fedocal.yml          |   72 +-
 playbooks/openshift-apps/fpdc.yml             |   72 +-
 playbooks/openshift-apps/greenwave.yml        |  136 +-
 playbooks/openshift-apps/koschei.yml          |   30 +-
 playbooks/openshift-apps/mdapi.yml            |   94 +-
 .../openshift-apps/messaging-bridges.yml      |  253 +-
 playbooks/openshift-apps/modernpaste.yml      |   78 +-
 playbooks/openshift-apps/nuancier.yml         |   96 +-
 playbooks/openshift-apps/rats.yml             |   36 +-
 .../openshift-apps/release-monitoring.yml     |  108 +-
 playbooks/openshift-apps/silverblue.yml       |   96 +-
 playbooks/openshift-apps/the-new-hotness.yml  |  100 +-
 playbooks/openshift-apps/transtats.yml        |   78 +-
 playbooks/openshift-apps/waiverdb.yml         |  154 +-
 playbooks/rdiff-backup.yml                    |   26 +-
 playbooks/restart_unbound.yml                 |    6 +-
 playbooks/rkhunter_only.yml                   |   14 +-
 playbooks/rkhunter_update.yml                 |   26 +-
 playbooks/run_fasClient.yml                   |   20 +-
 playbooks/run_fasClient_simple.yml            |   14 +-
 playbooks/set_root_auth_keys.yml              |   16 +-
 playbooks/transient_cloud_instance.yml        |   54 +-
 playbooks/transient_newcloud_instance.yml     |   54 +-
 playbooks/update-proxy-dns.yml                |   80 +-
 playbooks/update_dns.yml                      |    5 +-
 playbooks/update_grokmirror_repos.yml         |    6 +-
 playbooks/update_ticketkey.yml                |   36 +-
 playbooks/vhost_halt_guests.yml               |   32 +-
 playbooks/vhost_poweroff.yml                  |   34 +-
 playbooks/vhost_reboot.yml                    |  115 +-
 playbooks/vhost_update.yml                    |   67 +-
 208 files changed, 11083 insertions(+), 10600 deletions(-)

diff --git a/playbooks/check-for-nonvirt-updates.yml b/playbooks/check-for-nonvirt-updates.yml
index 32d05f953..2d75431c0 100644
--- a/playbooks/check-for-nonvirt-updates.yml
+++ b/playbooks/check-for-nonvirt-updates.yml
@@ -13,24 +13,22 @@
   gather_facts: false
 
   tasks:
+    - name: check for updates (yum)
+      yum: list=updates update_cache=true
+      register: yumoutput
 
-  - name: check for updates (yum)
-    yum: list=updates update_cache=true
-    register: yumoutput
-
-  - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}"
+    - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}"
 
 - name: check for updates (Fedora)
   hosts: virt_host:&distro_Fedora
   gather_facts: false
 
   tasks:
+    - name: check for updates (dnf)
+      dnf: list=updates
+      register: dnfoutput
 
-  - name: check for updates (dnf)
-    dnf: list=updates
-    register: dnfoutput
-
-  - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}"
+    - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}"
 
 #
 # For some reason ansible detects aarch64/armv7 hosts as virt type "NA"
@@ -41,21 +39,19 @@
   gather_facts: false
 
   tasks:
+    - name: check for updates (yum)
+      yum: list=updates update_cache=true
+      register: yumoutput
 
-  - name: check for updates (yum)
-    yum: list=updates update_cache=true
-    register: yumoutput
-
-  - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}"
+    - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}"
 
 - name: check for updates (aarch64/armv7) Fedora
   hosts: virt_NA:&distro_Fedora
   gather_facts: false
 
   tasks:
+    - name: check for updates (dnf)
+      dnf: list=updates
+      register: dnfoutput
 
-  - name: check for updates (dnf)
-    dnf: list=updates
-    register: dnfoutput
-
-  - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}"
+    - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}"
diff --git a/playbooks/check-for-updates.yml b/playbooks/check-for-updates.yml
index d362da9a5..9df933425 100644
--- a/playbooks/check-for-updates.yml
+++ b/playbooks/check-for-updates.yml
@@ -13,30 +13,28 @@
   gather_facts: false
 
   tasks:
+    - name: check for updates (yum)
+      yum: list=updates update_cache=true
+      register: yumoutput
 
-  - name: check for updates (yum)
-    yum: list=updates update_cache=true
-    register: yumoutput
-
-  - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}"
-    when: yumoutput.results|length > 0
+    - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}"
+      when: yumoutput.results|length > 0
 
 - name: check for updates
   hosts: distro_Fedora:!*.app.os.fedoraproject.org:!*.app.os.stg.fedoraproject.org
   gather_facts: false
 
   tasks:
+    #
+    # We use the command module here because the real module can't expire
+    #
 
-#
-# We use the command module here because the real module can't expire
-#
-
-  - name: make dnf recheck for new metadata from repos
-    command: dnf clean expire-cache
+    - name: make dnf recheck for new metadata from repos
+      command: dnf clean expire-cache
 
-  - name: check for updates (dnf)
-    dnf: list=updates
-    register: dnfoutput
+    - name: check for updates (dnf)
+      dnf: list=updates
+      register: dnfoutput
 
-  - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}"
-    when: dnfoutput.results|length > 0
+    - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}"
+      when: dnfoutput.results|length > 0
diff --git a/playbooks/check-host.yml b/playbooks/check-host.yml
index 33bff7b99..d3cb88364 100644
--- a/playbooks/check-host.yml
+++ b/playbooks/check-host.yml
@@ -5,265 +5,257 @@
 - hosts: "{{ target }}"
   user: root
   vars:
-  - datadir_prfx_path: "/var/tmp/ansible-chk-host/"
+    - datadir_prfx_path: "/var/tmp/ansible-chk-host/"
 
   tasks:
-
-  - name: create temp dir for collecting info
-    shell: mktemp -d
-    register: temp_dir
-    changed_when: False
-
-  - name: Get list of active loaded services with systemctl
-    shell: '/bin/systemctl -t service --no-legend | egrep "loaded active" | tr -s " " | cut -d " " -f1'
-    changed_when: False
-    when: ansible_distribution_major_version|int > 6
-    register: loaded_active_services_systemctl
-    tags:
-      - check
-      - services
-
-  - name: Get list of inactive loaded services with systemctl
-    shell: '/bin/systemctl -t service --no-legend | egrep -v "loaded active" | tr -s " " | cut -d " " -f1'
-    changed_when: False
-    when: ansible_distribution_major_version|int > 6
-    register: loaded_inactive_services_systemctl
-    tags:
-      - check
-      - services
-
-
-  - name: Get list of enabled services with chkconfig at current runlevel
-    shell: "chkconfig | grep \"`runlevel | cut -d ' ' -f 2`:on\" | awk '{print $1}'"
-    changed_when: False
-    when: ansible_distribution_major_version|int <= 6
-    register: enabled_services_chkconfig
-    tags:
-      - check
-      - services
-
-  - name: Get list of disabled services with chkconfig at current runlevel
-    shell: "chkconfig | grep \"`runlevel | cut -d ' ' -f 2`:off\" | awk '{print $1}'"
-    changed_when: False
-    when: ansible_distribution_major_version|int <= 6
-    register: disabled_services_chkconfig
-    tags:
-      - check
-      - services
-
-
-  - name: output enabled service list chkconfig
-    shell: echo {{enabled_services_chkconfig.stdout_lines}} >> {{temp_dir.stdout}}/eservices
-    when: enabled_services_chkconfig is defined and enabled_services_chkconfig.rc == 0
-    changed_when: False
-    tags:
-      - check
-      - services
-
-  - name: output disabled loaded service list chkconfig
-    shell: echo {{disabled_services_chkconfig.stdout_lines}} >> {{temp_dir.stdout}}/dservices
-    when: disabled_services_chkconfig is defined and disabled_services_chkconfig.rc == 0
-    changed_when: False
-    tags:
-      - check
-      - services
-
-
-  - name: output loaded active service list systemctl
-    shell: echo {{loaded_active_services_systemctl.stdout_lines}} >> {{temp_dir.stdout}}/laservices
-    when: loaded_active_services_systemctl is defined and loaded_active_services_systemctl.rc == 0
-    changed_when: False
-    tags:
-      - check
-      - services
-
-  - name: output loaded inactive service list systemctl
-    shell: echo {{loaded_inactive_services_systemctl.stdout_lines}} >> {{temp_dir.stdout}}/liservices
-    when: loaded_inactive_services_systemctl is defined and loaded_inactive_services_systemctl.rc == 0
-    changed_when: False
-    tags:
-      - check
-      - services
-
-  - name: Check for pending updates
-#    script: {{ scripts }}/needs-updates --host {{ inventory_hostname  }}
-    script: needs-updates --host {{ inventory_hostname }}
-    register: list_update
-    delegate_to: 127.0.0.1
-    changed_when: False
-    tags:
-      - check
-      - updates
-
-  - name: Show pending updates
-    shell: echo {{list_update.stdout_lines}} >> {{temp_dir.stdout}}/pending_updates
-    changed_when: False
-    tags:
-      - check
-      - updates
-
-  - name: Get processes that need restarting
-    shell: needs-restarting
-    register: needs_restarting
-    changed_when: False
-    tags:
-      - check
-      - restart
-
-  - name: Show processes that need restarting
-    shell: echo {{needs_restarting.stdout_lines}} >> {{temp_dir.stdout}}/needing_restart
-    changed_when: False
-    tags:
-      - check
-      - restart
-
-  - name: Get locally changed files from the rpm package
-    shell: rpm_tmp_var=`mktemp` && ! rpm -Va 2>/dev/null > $rpm_tmp_var && [[ -s $rpm_tmp_var ]] && echo $rpm_tmp_var warn=no
-    register: localchanges
-    changed_when: False
-    tags:
-      - check
-      - fileverify
-
-  - name: Get locally changed files (excluding config files)
-    command: "egrep -v '  c /' {{ localchanges.stdout }}"
-    register: rpm_va_nc
-    changed_when: False
-    when: localchanges is defined and localchanges.stdout != ""
-    tags:
-      - check
-      - fileverify
-
-  - name: Show locally changed files (excluding config files)
-    shell: echo {{rpm_va_nc.stdout_lines}} >> {{temp_dir.stdout}}/local_changed
-    when: rpm_va_nc.stdout != ""
-    changed_when: False
-    tags:
-      - check
-      - fileverify
-
-  - name: 'Whitelist - Get locally changed files (config files)'
-    command: "egrep '  c /' {{ localchanges.stdout }}"
-    register: rpm_va_c
-    when: localchanges is defined and localchanges.stdout != ""
-    changed_when: False
-    tags:
-      - check
-      - fileverify
-
-  - name: 'Whitelist - Show locally changed files (config files)'
-    shell: echo {{rpm_va_c.stdout_lines}} >> {{temp_dir.stdout}}/local_config_changed
-    changed_when: False
-    when: rpm_va_c.stdout != ""
-    tags:
-      - check
-      - fileverify
-
-  - name: Check if using iptables
-    shell: /sbin/iptables -S
-    register: iptablesn
-    changed_when: False
-    tags:
-      - check
-      - iptables
-
-  - name: Show iptables rules
-    shell: echo "{{iptablesn.stdout_lines}}" >> {{ temp_dir.stdout }}/iptables
-    changed_when: False
-    tags:
-      - check
-      - iptables
-
-  - name: Show current SELinux status
-    shell: echo "SELinux is {{ ansible_selinux.status }} for this System" >> {{temp_dir.stdout}}/selinux
-    changed_when: False
-    tags:
-      - check
-      - selinux
-
-  - name: Show Boot SELinux mode
-    shell: echo "SELinux boots to {{ ansible_selinux.config_mode }} mode " >> {{temp_dir.stdout}}/selinux
-    when: ansible_selinux.status != "disabled"
-    changed_when: False
-    tags:
-      - check
-      - selinux
-
-  - name: Show Current SELinux mode
-    shell: echo "SELinux currently is in {{ ansible_selinux.mode }} mode" >> {{temp_dir.stdout}}/selinux
-    when: ansible_selinux.status != "disabled"
-    changed_when: False
-    tags:
-      - check
-      - selinux
-
-  - name: Match current SELinux status with boot status
-    shell: echo "SElinux Current and Boot modes are in sync" >> {{temp_dir.stdout}}/selinux
-    when: ansible_selinux.status != "disabled" and ansible_selinux.config_mode == ansible_selinux.mode
-    changed_when: False
-    tags:
-      - check
-      - selinux
-
-
-  - name: misMatch current SELinux status with boot status
-    shell: echo "SElinux Current and Boot modes are NOT in sync" >> {{temp_dir.stdout}}/selinux
-    when: ansible_selinux.status != "disabled" and ansible_selinux.config_mode != ansible_selinux.mode
-    changed_when: False
-    tags:
-      - check
-      - selinux
-
-  - name: resolve last persisted dir - if one is present
-    local_action: shell ls -d -1 {{datadir_prfx_path}}/{{inventory_hostname}}-* 2>/dev/null | sort -r | head -1
-    register: last_dir
-    changed_when: False
-    ignore_errors: True
-
-  - name: get file list
-    shell: ls -1 {{temp_dir.stdout}}/*
-    register: file_list
-    changed_when: False
-
-  - name: get timestamp
-    shell: "date +%Y-%m-%d-%H-%M-%S"
-    register: timestamp
-    changed_when: False
-
-  - name: create persisting-state directory
-    local_action: file path=/{{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}} state=directory
-    changed_when: False
-
-  - name: fetch file list
-    fetch: src={{item}} dest=/{{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}}/ flat=true
-    with_items: "{{file_list.stdout_lines}}"
-    changed_when: False
-
-
-  - name: diff the new files with last ones presisted
-    local_action: shell for file in {{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}}/*; do filename=$(basename $file); diff {{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}}/$filename {{last_dir.stdout.strip(':')}}/$filename; done
-    ignore_errors: True
-    changed_when: False
-    register: file_diff
-    when: last_dir is defined and last_dir.stdout != ""
-
-  - name: display diff
-    debug: var=file_diff.stdout_lines
-    ignore_errors: True
-    changed_when: False
-    when: file_diff is defined
-
-#clean up: can also be put as handlers
-
-  - name: clean remote temp dir
-    file: path={{temp_dir.stdout}} state=absent
-    changed_when: False
-
-  - name: clean rpm temp file
-    file: path={{localchanges.stdout}} state=absent
-    changed_when: False
-
-
+    - name: create temp dir for collecting info
+      shell: mktemp -d
+      register: temp_dir
+      changed_when: False
+
+    - name: Get list of active loaded services with systemctl
+      shell: '/bin/systemctl -t service --no-legend | egrep "loaded active" | tr -s " " | cut -d " " -f1'
+      changed_when: False
+      when: ansible_distribution_major_version|int > 6
+      register: loaded_active_services_systemctl
+      tags:
+        - check
+        - services
+
+    - name: Get list of inactive loaded services with systemctl
+      shell: '/bin/systemctl -t service --no-legend | egrep -v "loaded active" | tr -s " " | cut -d " " -f1'
+      changed_when: False
+      when: ansible_distribution_major_version|int > 6
+      register: loaded_inactive_services_systemctl
+      tags:
+        - check
+        - services
+
+    - name: Get list of enabled services with chkconfig at current runlevel
+      shell: 'chkconfig | grep "`runlevel | cut -d '' '' -f 2`:on" | awk ''{print $1}'''
+      changed_when: False
+      when: ansible_distribution_major_version|int <= 6
+      register: enabled_services_chkconfig
+      tags:
+        - check
+        - services
+
+    - name: Get list of disabled services with chkconfig at current runlevel
+      shell: 'chkconfig | grep "`runlevel | cut -d '' '' -f 2`:off" | awk ''{print $1}'''
+      changed_when: False
+      when: ansible_distribution_major_version|int <= 6
+      register: disabled_services_chkconfig
+      tags:
+        - check
+        - services
+
+    - name: output enabled service list chkconfig
+      shell: echo {{enabled_services_chkconfig.stdout_lines}} >> {{temp_dir.stdout}}/eservices
+      when: enabled_services_chkconfig is defined and enabled_services_chkconfig.rc == 0
+      changed_when: False
+      tags:
+        - check
+        - services
+
+    - name: output disabled loaded service list chkconfig
+      shell: echo {{disabled_services_chkconfig.stdout_lines}} >> {{temp_dir.stdout}}/dservices
+      when: disabled_services_chkconfig is defined and disabled_services_chkconfig.rc == 0
+      changed_when: False
+      tags:
+        - check
+        - services
+
+    - name: output loaded active service list systemctl
+      shell: echo {{loaded_active_services_systemctl.stdout_lines}} >> {{temp_dir.stdout}}/laservices
+      when: loaded_active_services_systemctl is defined and loaded_active_services_systemctl.rc == 0
+      changed_when: False
+      tags:
+        - check
+        - services
+
+    - name: output loaded inactive service list systemctl
+      shell: echo {{loaded_inactive_services_systemctl.stdout_lines}} >> {{temp_dir.stdout}}/liservices
+      when: loaded_inactive_services_systemctl is defined and loaded_inactive_services_systemctl.rc == 0
+      changed_when: False
+      tags:
+        - check
+        - services
+
+    - name: Check for pending updates
+      #    script: {{ scripts }}/needs-updates --host {{ inventory_hostname  }}
+      script: needs-updates --host {{ inventory_hostname }}
+      register: list_update
+      delegate_to: 127.0.0.1
+      changed_when: False
+      tags:
+        - check
+        - updates
+
+    - name: Show pending updates
+      shell: echo {{list_update.stdout_lines}} >> {{temp_dir.stdout}}/pending_updates
+      changed_when: False
+      tags:
+        - check
+        - updates
+
+    - name: Get processes that need restarting
+      shell: needs-restarting
+      register: needs_restarting
+      changed_when: False
+      tags:
+        - check
+        - restart
+
+    - name: Show processes that need restarting
+      shell: echo {{needs_restarting.stdout_lines}} >> {{temp_dir.stdout}}/needing_restart
+      changed_when: False
+      tags:
+        - check
+        - restart
+
+    - name: Get locally changed files from the rpm package
+      shell: rpm_tmp_var=`mktemp` && ! rpm -Va 2>/dev/null > $rpm_tmp_var && [[ -s $rpm_tmp_var ]] && echo $rpm_tmp_var warn=no
+      register: localchanges
+      changed_when: False
+      tags:
+        - check
+        - fileverify
+
+    - name: Get locally changed files (excluding config files)
+      command: "egrep -v '  c /' {{ localchanges.stdout }}"
+      register: rpm_va_nc
+      changed_when: False
+      when: localchanges is defined and localchanges.stdout != ""
+      tags:
+        - check
+        - fileverify
+
+    - name: Show locally changed files (excluding config files)
+      shell: echo {{rpm_va_nc.stdout_lines}} >> {{temp_dir.stdout}}/local_changed
+      when: rpm_va_nc.stdout != ""
+      changed_when: False
+      tags:
+        - check
+        - fileverify
+
+    - name: "Whitelist - Get locally changed files (config files)"
+      command: "egrep '  c /' {{ localchanges.stdout }}"
+      register: rpm_va_c
+      when: localchanges is defined and localchanges.stdout != ""
+      changed_when: False
+      tags:
+        - check
+        - fileverify
+
+    - name: "Whitelist - Show locally changed files (config files)"
+      shell: echo {{rpm_va_c.stdout_lines}} >> {{temp_dir.stdout}}/local_config_changed
+      changed_when: False
+      when: rpm_va_c.stdout != ""
+      tags:
+        - check
+        - fileverify
+
+    - name: Check if using iptables
+      shell: /sbin/iptables -S
+      register: iptablesn
+      changed_when: False
+      tags:
+        - check
+        - iptables
+
+    - name: Show iptables rules
+      shell: echo "{{iptablesn.stdout_lines}}" >> {{ temp_dir.stdout }}/iptables
+      changed_when: False
+      tags:
+        - check
+        - iptables
+
+    - name: Show current SELinux status
+      shell: echo "SELinux is {{ ansible_selinux.status }} for this System" >> {{temp_dir.stdout}}/selinux
+      changed_when: False
+      tags:
+        - check
+        - selinux
+
+    - name: Show Boot SELinux mode
+      shell: echo "SELinux boots to {{ ansible_selinux.config_mode }} mode " >> {{temp_dir.stdout}}/selinux
+      when: ansible_selinux.status != "disabled"
+      changed_when: False
+      tags:
+        - check
+        - selinux
+
+    - name: Show Current SELinux mode
+      shell: echo "SELinux currently is in {{ ansible_selinux.mode }} mode" >> {{temp_dir.stdout}}/selinux
+      when: ansible_selinux.status != "disabled"
+      changed_when: False
+      tags:
+        - check
+        - selinux
+
+    - name: Match current SELinux status with boot status
+      shell: echo "SElinux Current and Boot modes are in sync" >> {{temp_dir.stdout}}/selinux
+      when: ansible_selinux.status != "disabled" and ansible_selinux.config_mode == ansible_selinux.mode
+      changed_when: False
+      tags:
+        - check
+        - selinux
+
+    - name: misMatch current SELinux status with boot status
+      shell: echo "SElinux Current and Boot modes are NOT in sync" >> {{temp_dir.stdout}}/selinux
+      when: ansible_selinux.status != "disabled" and ansible_selinux.config_mode != ansible_selinux.mode
+      changed_when: False
+      tags:
+        - check
+        - selinux
+
+    - name: resolve last persisted dir - if one is present
+      local_action: shell ls -d -1 {{datadir_prfx_path}}/{{inventory_hostname}}-* 2>/dev/null | sort -r | head -1
+      register: last_dir
+      changed_when: False
+      ignore_errors: True
+
+    - name: get file list
+      shell: ls -1 {{temp_dir.stdout}}/*
+      register: file_list
+      changed_when: False
+
+    - name: get timestamp
+      shell: "date +%Y-%m-%d-%H-%M-%S"
+      register: timestamp
+      changed_when: False
+
+    - name: create persisting-state directory
+      local_action: file path=/{{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}} state=directory
+      changed_when: False
+
+    - name: fetch file list
+      fetch: src={{item}} dest=/{{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}}/ flat=true
+      with_items: "{{file_list.stdout_lines}}"
+      changed_when: False
+
+    - name: diff the new files with last ones presisted
+      local_action: shell for file in {{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}}/*; do filename=$(basename $file); diff {{datadir_prfx_path}}/{{inventory_hostname}}-{{timestamp.stdout}}/$filename {{last_dir.stdout.strip(':')}}/$filename; done
+      ignore_errors: True
+      changed_when: False
+      register: file_diff
+      when: last_dir is defined and last_dir.stdout != ""
+
+    - name: display diff
+      debug: var=file_diff.stdout_lines
+      ignore_errors: True
+      changed_when: False
+      when: file_diff is defined
+
+    #clean up: can also be put as handlers
+
+    - name: clean remote temp dir
+      file: path={{temp_dir.stdout}} state=absent
+      changed_when: False
+
+    - name: clean rpm temp file
+      file: path={{localchanges.stdout}} state=absent
+      changed_when: False
 #  handlers:
 #  - import_tasks: "{{ handlers_path }}/restart_services.yml"
 #  - import_tasks: "restart_services.yml"
diff --git a/playbooks/clear_memcached.yml b/playbooks/clear_memcached.yml
index eaae858da..a748ee612 100644
--- a/playbooks/clear_memcached.yml
+++ b/playbooks/clear_memcached.yml
@@ -3,5 +3,5 @@
   serial: 1
 
   tasks:
-  - name: clear memcache
-    command: echo flush_all | nc localhost 11211
+    - name: clear memcache
+      command: echo flush_all | nc localhost 11211
diff --git a/playbooks/clear_varnish.yml b/playbooks/clear_varnish.yml
index 3f833c46f..3021e3cd1 100644
--- a/playbooks/clear_varnish.yml
+++ b/playbooks/clear_varnish.yml
@@ -4,5 +4,5 @@
   serial: 1
 
   tasks:
-  - name: clear varnish
-    command: varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082 ban req.url == .
+    - name: clear varnish
+      command: varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082 ban req.url == .
diff --git a/playbooks/cloud_prep.yml b/playbooks/cloud_prep.yml
index 3cb6f6c08..8507734c3 100644
--- a/playbooks/cloud_prep.yml
+++ b/playbooks/cloud_prep.yml
@@ -2,13 +2,12 @@
 - hosts: 209.132.184.*
   user: root
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/deactivate_modernpaste_paste.yml b/playbooks/deactivate_modernpaste_paste.yml
index df1d59a87..e0e92fedc 100644
--- a/playbooks/deactivate_modernpaste_paste.yml
+++ b/playbooks/deactivate_modernpaste_paste.yml
@@ -7,5 +7,5 @@
   user: root
 
   tasks:
-      - name: Run deactivate-paste.py
-        command: "python /usr/local/bin/deactivate-paste.py {{paste}}"
+    - name: Run deactivate-paste.py
+      command: "python /usr/local/bin/deactivate-paste.py {{paste}}"
diff --git a/playbooks/death_to_postfix.yml b/playbooks/death_to_postfix.yml
index bdf357930..3e7400120 100644
--- a/playbooks/death_to_postfix.yml
+++ b/playbooks/death_to_postfix.yml
@@ -5,17 +5,16 @@
   hosts: "{{ target }}"
   user: root
 
-
   tasks:
-      - name: Try to stop postfix cleanly.
-        service:  name=postfix state=stopped
+    - name: Try to stop postfix cleanly.
+      service: name=postfix state=stopped
 
-      # This doesn't really remove the pid file.. but we say it does so ansible only runs it if the pid file is there..
-      - name: Really kill postfix master process
-        command:  pkill -u root master removes=/var/spool/postfix/pid/master.pid
+    # This doesn't really remove the pid file.. but we say it does so ansible only runs it if the pid file is there..
+    - name: Really kill postfix master process
+      command: pkill -u root master removes=/var/spool/postfix/pid/master.pid
 
-      - name: Clean up old pid lock file.
-        command: rm /var/spool/postfix/pid/master.pid removes=/var/spool/postfix/pid/master.pid
+    - name: Clean up old pid lock file.
+      command: rm /var/spool/postfix/pid/master.pid removes=/var/spool/postfix/pid/master.pid
 
-      - name: Try to start postfix cleanly
-        service:  name=postfix state=started
+    - name: Try to start postfix cleanly
+      service: name=postfix state=started
diff --git a/playbooks/destroy_cloud_inst.yml b/playbooks/destroy_cloud_inst.yml
index fc1cec723..935fd5f45 100644
--- a/playbooks/destroy_cloud_inst.yml
+++ b/playbooks/destroy_cloud_inst.yml
@@ -10,16 +10,16 @@
   gather_facts: false
 
   tasks:
-  - name: fail if the host/ip is not up
-    local_action: wait_for host={{ inventory_hostname }} port=22 delay=0 timeout=10
-    when: inventory_hostname not in result.list_vms
+    - name: fail if the host/ip is not up
+      local_action: wait_for host={{ inventory_hostname }} port=22 delay=0 timeout=10
+      when: inventory_hostname not in result.list_vms
 
-  - name: pause for 30s before doing it
-    pause: seconds=30 prompt="Destroying vm now {{ target }}, abort if this is wrong"
+    - name: pause for 30s before doing it
+      pause: seconds=30 prompt="Destroying vm now {{ target }}, abort if this is wrong"
 
-  - name: find the instance id from the builder
-    command: curl -s http://169.254.169.254/latest/meta-data/instance-id
-    register: instanceid
+    - name: find the instance id from the builder
+      command: curl -s http://169.254.169.254/latest/meta-data/instance-id
+      register: instanceid
 
-  - name: destroy the vm
-    command: /usr/sbin/halt -p
+    - name: destroy the vm
+      command: /usr/sbin/halt -p
diff --git a/playbooks/destroy_virt_inst.yml b/playbooks/destroy_virt_inst.yml
index 3dd25baf6..ab7a63033 100644
--- a/playbooks/destroy_virt_inst.yml
+++ b/playbooks/destroy_virt_inst.yml
@@ -15,31 +15,31 @@
   gather_facts: false
 
   tasks:
-  - name: get vm list on the vmhost
-    delegate_to: "{{ vmhost }}"
-    virt: command=list_vms
-    register: result
-
-  - name: fail if the host is not already defined/existent
-    local_action: fail msg="host does not exist on {{ vmhost }}"
-    when: inventory_hostname not in result.list_vms
-
-  - name: schedule 30m host downtime in nagios
-    nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
-
-  - name: pause for 30s before doing it
-    pause: seconds=30 prompt="Destroying (and lvremove for) vm now {{ target }}, abort if this is wrong"
-
-  - name: destroy the vm
-    virt: name={{ inventory_hostname }} command=destroy
-    delegate_to: "{{ vmhost }}"
-
-  - name: undefine the vm
-    virt: name={{ inventory_hostname }} command=undefine
-    delegate_to: "{{ vmhost }}"
-
-  - name: destroy the lv
-    command: /sbin/lvremove -f {{volgroup}}/{{inventory_hostname}}
-    delegate_to: "{{ vmhost }}"
+    - name: get vm list on the vmhost
+      delegate_to: "{{ vmhost }}"
+      virt: command=list_vms
+      register: result
+
+    - name: fail if the host is not already defined/existent
+      local_action: fail msg="host does not exist on {{ vmhost }}"
+      when: inventory_hostname not in result.list_vms
+
+    - name: schedule 30m host downtime in nagios
+      nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
+
+    - name: pause for 30s before doing it
+      pause: seconds=30 prompt="Destroying (and lvremove for) vm now {{ target }}, abort if this is wrong"
+
+    - name: destroy the vm
+      virt: name={{ inventory_hostname }} command=destroy
+      delegate_to: "{{ vmhost }}"
+
+    - name: undefine the vm
+      virt: name={{ inventory_hostname }} command=undefine
+      delegate_to: "{{ vmhost }}"
+
+    - name: destroy the lv
+      command: /sbin/lvremove -f {{volgroup}}/{{inventory_hostname}}
+      delegate_to: "{{ vmhost }}"
diff --git a/playbooks/fix_arm_soc.yml b/playbooks/fix_arm_soc.yml
index 23140ba4a..ac9e78d9f 100644
--- a/playbooks/fix_arm_soc.yml
+++ b/playbooks/fix_arm_soc.yml
@@ -9,25 +9,25 @@
   user: root
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
 
   tasks:
-  - name: power off
-    delegate_to: noc01.phx2.fedoraproject.org
-    command: /opt/calxeda/bin/ipmitool -U admin -P "{{ armsocipmipass }}" -H "{{inventory_hostname_short}}-mgmt.arm.fedoraproject.org" power off
-#    no_log: True
+    - name: power off
+      delegate_to: noc01.phx2.fedoraproject.org
+      command: /opt/calxeda/bin/ipmitool -U admin -P "{{ armsocipmipass }}" -H "{{inventory_hostname_short}}-mgmt.arm.fedoraproject.org" power off
+    #    no_log: True
 
-  - name: power on
-    delegate_to: noc01.phx2.fedoraproject.org
-    command: /opt/calxeda/bin/ipmitool -U admin -P "{{ armsocipmipass }}" -H "{{inventory_hostname_short}}-mgmt.arm.fedoraproject.org" power on
-#    no_log: True
+    - name: power on
+      delegate_to: noc01.phx2.fedoraproject.org
+      command: /opt/calxeda/bin/ipmitool -U admin -P "{{ armsocipmipass }}" -H "{{inventory_hostname_short}}-mgmt.arm.fedoraproject.org" power on
+    #    no_log: True
 
-  - name: wait for soc ssh to come back up
-    local_action: wait_for delay=10 host={{ target }} port=22 state=started timeout=1200
+    - name: wait for soc ssh to come back up
+      local_action: wait_for delay=10 host={{ target }} port=22 state=started timeout=1200
 
-  - name: make sure time is set
-    delegate_to: "{{target}}"
-    command: ntpdate -u bastion01.phx2.fedoraproject.org
+    - name: make sure time is set
+      delegate_to: "{{target}}"
+      command: ntpdate -u bastion01.phx2.fedoraproject.org
 
 - include_playbook: groups/buildhw.yml hosts="{{target}}"
diff --git a/playbooks/groups/arm-qa.yml b/playbooks/groups/arm-qa.yml
index a1ff00868..e6de90ab7 100644
--- a/playbooks/groups/arm-qa.yml
+++ b/playbooks/groups/arm-qa.yml
@@ -5,26 +5,26 @@
   user: root
   gather_facts: True
   tags:
-   - arm-qa
+    - arm-qa
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - sudo
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/autocloud-backend.yml b/playbooks/groups/autocloud-backend.yml
index 1699fb750..df7bf5d58 100644
--- a/playbooks/groups/autocloud-backend.yml
+++ b/playbooks/groups/autocloud-backend.yml
@@ -6,45 +6,45 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - nagios_client
-  - collectd/base
-  - fedmsg/base
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - nagios_client
+    - collectd/base
+    - fedmsg/base
+    - sudo
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: autocloud-backend:autocloud-backend-stg
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - redis
-  - fedmsg/hub
-  - autocloud/backend
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - redis
+    - fedmsg/hub
+    - autocloud/backend
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
diff --git a/playbooks/groups/autocloud-web.yml b/playbooks/groups/autocloud-web.yml
index a4a2bf18c..3c54626f1 100644
--- a/playbooks/groups/autocloud-web.yml
+++ b/playbooks/groups/autocloud-web.yml
@@ -6,32 +6,32 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - mod_wsgi
-  - fedmsg/base
-  - sudo
-  - role: openvpn/client
-    when: env != "staging"
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - mod_wsgi
+    - fedmsg/base
+    - sudo
+    - role: openvpn/client
+      when: env != "staging"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the app-specific configuration
   hosts: autocloud-web:autocloud-web-stg
@@ -39,11 +39,11 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - autocloud/frontend
+    - autocloud/frontend
diff --git a/playbooks/groups/backup-server.yml b/playbooks/groups/backup-server.yml
index 5f6c4d9a6..a4aa01914 100644
--- a/playbooks/groups/backup-server.yml
+++ b/playbooks/groups/backup-server.yml
@@ -9,32 +9,34 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - { role: nfs/client,
-      mnt_dir: '/fedora_backups',
-      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
-      nfs_src_dir: 'fedora_backups' }
-  - openvpn/client
-  - grokmirror_mirror
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - {
+        role: nfs/client,
+        mnt_dir: "/fedora_backups",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
+        nfs_src_dir: "fedora_backups",
+      }
+    - openvpn/client
+    - grokmirror_mirror
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-  - import_tasks: "{{ tasks_path }}/rdiff_backup_server.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/rdiff_backup_server.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/badges-backend.yml b/playbooks/groups/badges-backend.yml
index 35819663d..79e7f4f12 100644
--- a/playbooks/groups/badges-backend.yml
+++ b/playbooks/groups/badges-backend.yml
@@ -11,31 +11,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - fedmsg/base
-  - sudo
-  - { role: openvpn/client,
-       when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - fedmsg/base
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: badges-backend:badges-backend-stg
@@ -43,15 +42,15 @@
   gather_facts: True
 
   roles:
-  - fedmsg/hub
-  - badges/backend
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - fedmsg/hub
+    - badges/backend
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/badges-web.yml b/playbooks/groups/badges-web.yml
index 8aeeb7e6e..0195e39f2 100644
--- a/playbooks/groups/badges-web.yml
+++ b/playbooks/groups/badges-web.yml
@@ -11,39 +11,38 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - badges/frontend
-  - fedmsg/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
-  - role: collectd/web-service
-    site: frontpage
-    url: "http://localhost/";
-    interval: 10
-  - role: collectd/web-service
-    site: leaderboard
-    url: "http://localhost/leaderboard";
-    interval: 10
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - badges/frontend
+    - fedmsg/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
+    - role: collectd/web-service
+      site: frontpage
+      url: "http://localhost/";
+      interval: 10
+    - role: collectd/web-service
+      site: leaderboard
+      url: "http://localhost/leaderboard";
+      interval: 10
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/basset.yml b/playbooks/groups/basset.yml
index 09cc0b1f9..b73b3efb8 100644
--- a/playbooks/groups/basset.yml
+++ b/playbooks/groups/basset.yml
@@ -8,33 +8,32 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mongodb
-  - rabbitmq
-  - mod_wsgi
-  - basset/frontend
-  - basset/worker
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mongodb
+    - rabbitmq
+    - mod_wsgi
+    - basset/frontend
+    - basset/worker
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/bastion.yml b/playbooks/groups/bastion.yml
index f4b5d9eb6..7f981c768 100644
--- a/playbooks/groups/bastion.yml
+++ b/playbooks/groups/bastion.yml
@@ -6,32 +6,38 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - { role: openvpn/server, when: not inventory_hostname.startswith('bastion-comm01') or inventory_hostname.startswith('bastion13') }
-  - { role: openvpn/client, when: inventory_hostname.startswith('bastion13') }
-  - { role: packager_alias, when: not inventory_hostname.startswith('bastion-comm01') or inventory_hostname.startswith('bastion13') }
-  - opendkim
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - {
+        role: openvpn/server,
+        when: not inventory_hostname.startswith('bastion-comm01') or inventory_hostname.startswith('bastion13'),
+      }
+    - { role: openvpn/client, when: inventory_hostname.startswith('bastion13') }
+    - {
+        role: packager_alias,
+        when: not inventory_hostname.startswith('bastion-comm01') or inventory_hostname.startswith('bastion13'),
+      }
+    - opendkim
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure bastion-qa
   hosts: bastion-comm01.qa.fedoraproject.org
@@ -39,7 +45,7 @@
   gather_facts: True
 
   tasks:
-  - name: install needed packages
-    package: name={{ item }} state=present
-    with_items:
-    - ipmitool
+    - name: install needed packages
+      package: name={{ item }} state=present
+      with_items:
+        - ipmitool
diff --git a/playbooks/groups/batcave.yml b/playbooks/groups/batcave.yml
index e0f7213ac..fc9933701 100644
--- a/playbooks/groups/batcave.yml
+++ b/playbooks/groups/batcave.yml
@@ -6,41 +6,58 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - ansible-server
-  - sudo
-  - collectd/base
-  - git/hooks
-  - cgit/base
-  - cgit/clean_lock_cron
-  - cgit/make_pkgs_list
-  - rsyncd
-  - apache
-  - httpd/mod_ssl
-  - role: httpd/certificate
-    certname: "{{wildcard_cert_name}}"
-    SSLCertificateChainFile: "{{wildcard_int_file}}"
-  - openvpn/client
-  - batcave
-  - { role: repospanner/server, when: inventory_hostname.startswith('batcave01'), node: batcave01, region: ansible, spawn_repospanner_node: false, join_repospanner_node: repospanner01.ansible.fedoraproject.org }
-  - { role: nfs/client, when: inventory_hostname.startswith('batcave'), mnt_dir: '/srv/web/pub',  nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' }
-  - { role: nfs/client, when: inventory_hostname.startswith('batcave01'), mnt_dir: '/mnt/fedora/app',  nfs_src_dir: 'fedora_app/app' }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - ansible-server
+    - sudo
+    - collectd/base
+    - git/hooks
+    - cgit/base
+    - cgit/clean_lock_cron
+    - cgit/make_pkgs_list
+    - rsyncd
+    - apache
+    - httpd/mod_ssl
+    - role: httpd/certificate
+      certname: "{{wildcard_cert_name}}"
+      SSLCertificateChainFile: "{{wildcard_int_file}}"
+    - openvpn/client
+    - batcave
+    - {
+        role: repospanner/server,
+        when: inventory_hostname.startswith('batcave01'),
+        node: batcave01,
+        region: ansible,
+        spawn_repospanner_node: false,
+        join_repospanner_node: repospanner01.ansible.fedoraproject.org,
+      }
+    - {
+        role: nfs/client,
+        when: inventory_hostname.startswith('batcave'),
+        mnt_dir: "/srv/web/pub",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub",
+      }
+    - {
+        role: nfs/client,
+        when: inventory_hostname.startswith('batcave01'),
+        mnt_dir: "/mnt/fedora/app",
+        nfs_src_dir: "fedora_app/app",
+      }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/batcomputer.yml b/playbooks/groups/batcomputer.yml
index b615d993d..a5691b34f 100644
--- a/playbooks/groups/batcomputer.yml
+++ b/playbooks/groups/batcomputer.yml
@@ -6,26 +6,26 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - ansible-ansible-awx
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - ansible-ansible-awx
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/beaker-virthosts.yml b/playbooks/groups/beaker-virthosts.yml
index b056c4bfc..42d948dc5 100644
--- a/playbooks/groups/beaker-virthosts.yml
+++ b/playbooks/groups/beaker-virthosts.yml
@@ -10,28 +10,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - { role: iscsi_client, when: datacenter == "phx2" }
-  - sudo
-  - { role: openvpn/client, when: datacenter != "phx2" }
-  - { role: beaker/virthost, tags: ['beakervirthost'] }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - { role: iscsi_client, when: datacenter == "phx2" }
+    - sudo
+    - { role: openvpn/client, when: datacenter != "phx2" }
+    - { role: beaker/virthost, tags: ["beakervirthost"] }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/beaker.yml b/playbooks/groups/beaker.yml
index 188b57f77..519ffa9d9 100644
--- a/playbooks/groups/beaker.yml
+++ b/playbooks/groups/beaker.yml
@@ -10,32 +10,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - apache
-  - { role: openvpn/client,
-      when: env != "staging", tags: ['openvpn_client'] }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - apache
+    - { role: openvpn/client, when: env != "staging", tags: ["openvpn_client"] }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure beaker and required services
   hosts: beaker:beaker-stg
@@ -43,16 +42,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: mariadb_server, tags: ['mariadb'] }
-   - { role: beaker/base, tags: ['beakerbase'] }
-   - { role: beaker/server, tags: ['beakerserver'] }
-   - { role: beaker/labcontroller, tags: ['beakerlabcontroller'] }
+    - { role: mariadb_server, tags: ["mariadb"] }
+    - { role: beaker/base, tags: ["beakerbase"] }
+    - { role: beaker/server, tags: ["beakerserver"] }
+    - { role: beaker/labcontroller, tags: ["beakerlabcontroller"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/blockerbugs.yml b/playbooks/groups/blockerbugs.yml
index fa7c6e051..427b23ea0 100644
--- a/playbooks/groups/blockerbugs.yml
+++ b/playbooks/groups/blockerbugs.yml
@@ -6,31 +6,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - rsyncd
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
-  - blockerbugs
+    - base
+    - hosts
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - rsyncd
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
+    - blockerbugs
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/bodhi-backend.yml b/playbooks/groups/bodhi-backend.yml
index 39bd15e05..e9a3b8b06 100644
--- a/playbooks/groups/bodhi-backend.yml
+++ b/playbooks/groups/bodhi-backend.yml
@@ -15,76 +15,75 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/web/infra/ansible/vars/all/RelEngFrozen.yaml"
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/web/infra/ansible/vars/all/RelEngFrozen.yaml"
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - nagios_client
-  - collectd/base
-  - hosts
-  - builder_repo
-  - fas_client
-  - sudo
-  - rkhunter
+    - base
+    - nagios_client
+    - collectd/base
+    - hosts
+    - builder_repo
+    - fas_client
+    - sudo
+    - rkhunter
 
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji'
-    nfs_src_dir: 'fedora_koji'
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji"
+      nfs_src_dir: "fedora_koji"
 
-    # In staging, we mount fedora_koji as read only (see nfs_mount_opts)
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji_prod'
-    nfs_src_dir: 'fedora_koji'
-    when: env == 'staging'
+      # In staging, we mount fedora_koji as read only (see nfs_mount_opts)
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji_prod"
+      nfs_src_dir: "fedora_koji"
+      when: env == 'staging'
 
-  - role: nfs/client
-    mnt_dir: '/pub/'
-    nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/'
+    - role: nfs/client
+      mnt_dir: "/pub/"
+      nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub/"
 
-  - bodhi2/backend
-  - fedmsg/base
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
-    user: masher
-
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    extra_acl_user: fedmsg
-    service: bodhi
-    host: "bodhi.fedoraproject.org"
-    when: env == "production"
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    extra_acl_user: fedmsg
-    service: bodhi
-    host: "bodhi.stg.fedoraproject.org"
-    when: env == "staging"
-  - role: push-container-registry
-    cert_dest_dir: "/etc/docker/certs.d/registry{{ env_suffix }}.fedoraproject.org"
-    cert_src: "{{private}}/files/docker-registry/{{env}}/pki/issued/containerstable.crt"
-    key_src: "{{private}}/files/docker-registry/{{env}}/pki/private/containerstable.key"
-    certs_group: apache
+    - bodhi2/backend
+    - fedmsg/base
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
+      user: masher
 
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      extra_acl_user: fedmsg
+      service: bodhi
+      host: "bodhi.fedoraproject.org"
+      when: env == "production"
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      extra_acl_user: fedmsg
+      service: bodhi
+      host: "bodhi.stg.fedoraproject.org"
+      when: env == "staging"
+    - role: push-container-registry
+      cert_dest_dir: "/etc/docker/certs.d/registry{{ env_suffix }}.fedoraproject.org"
+      cert_src: "{{private}}/files/docker-registry/{{env}}/pki/issued/containerstable.crt"
+      key_src: "{{private}}/files/docker-registry/{{env}}/pki/private/containerstable.key"
+      certs_group: apache
 
   tasks:
-  - name: create secondary volume dir for stg bodhi
-    file: dest=/mnt/koji/vol state=directory owner=apache group=apache mode=0755
-    tags: bodhi
-    when: env == 'staging'
-  - name: create symlink for stg/prod secondary volume
-    file: src=/mnt/fedora_koji_prod/koji dest=/mnt/koji/vol/prod state=link
-    tags: bodhi
-    when: env == 'staging'
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - name: create secondary volume dir for stg bodhi
+      file: dest=/mnt/koji/vol state=directory owner=apache group=apache mode=0755
+      tags: bodhi
+      when: env == 'staging'
+    - name: create symlink for stg/prod secondary volume
+      file: src=/mnt/fedora_koji_prod/koji dest=/mnt/koji/vol/prod state=link
+      tags: bodhi
+      when: env == 'staging'
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/bugyou.yml b/playbooks/groups/bugyou.yml
index f39f4f0ca..2565101e0 100644
--- a/playbooks/groups/bugyou.yml
+++ b/playbooks/groups/bugyou.yml
@@ -11,28 +11,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - collectd/base
-  - hosts
-  - fas_client
-  - sudo
+    - base
+    - rkhunter
+    - nagios_client
+    - collectd/base
+    - hosts
+    - fas_client
+    - sudo
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg basics
   hosts: bugyou:bugyou-stg
@@ -40,15 +40,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: bugyou:bugyou-stg
@@ -56,16 +56,16 @@
   gather_facts: True
 
   roles:
-  - fedmsg/hub
-  - bugyou/bugyou-master
-  - bugyou/bugyou-plugins
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - fedmsg/hub
+    - bugyou/bugyou-master
+    - bugyou/bugyou-plugins
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/bugzilla2fedmsg.yml b/playbooks/groups/bugzilla2fedmsg.yml
index 8b06f2f0b..dc7dedd91 100644
--- a/playbooks/groups/bugzilla2fedmsg.yml
+++ b/playbooks/groups/bugzilla2fedmsg.yml
@@ -11,29 +11,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - fedmsg/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - fedmsg/base
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: bugzilla2fedmsg:bugzilla2fedmsg-stg
@@ -41,14 +41,14 @@
   gather_facts: True
 
   roles:
-  - bugzilla2fedmsg
-  - role: collectd/fedmsg-service
-    process: moksha-hub
+    - bugzilla2fedmsg
+    - role: collectd/fedmsg-service
+      process: moksha-hub
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/buildhw.yml b/playbooks/groups/buildhw.yml
index eb7700642..cc71d90be 100644
--- a/playbooks/groups/buildhw.yml
+++ b/playbooks/groups/buildhw.yml
@@ -6,41 +6,46 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-  - import_tasks: "{{ tasks_path }}/osbs_certs.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/osbs_certs.yml"
 
   roles:
-  - base
-  - { role: nfs/client, mnt_dir: '/mnt/fedora_koji',  nfs_src_dir: "{{ koji_hub_nfs }}", when: koji_hub_nfs is defined }
-  - koji_builder
-  - { role: bkernel, when: inventory_hostname.startswith('bkernel') }
-  - hosts
-  - { role: fas_client, when: not inventory_hostname.startswith('bkernel') }
-  - { role: sudo, when: not inventory_hostname.startswith('bkernel') }
-  - role: keytab/service
-    kt_location: /etc/kojid/kojid.keytab
-    service: compile
-  - role: keytab/service
-    owner_user: root
-    owner_group: root
-    service: innercompose
-    host: "odcs{{ env_suffix }}.fedoraproject.org"
-    kt_location: /etc/kojid/secrets/odcs_inner.keytab
-    when: env == "staging"
+    - base
+    - {
+        role: nfs/client,
+        mnt_dir: "/mnt/fedora_koji",
+        nfs_src_dir: "{{ koji_hub_nfs }}",
+        when: koji_hub_nfs is defined,
+      }
+    - koji_builder
+    - { role: bkernel, when: inventory_hostname.startswith('bkernel') }
+    - hosts
+    - { role: fas_client, when: not inventory_hostname.startswith('bkernel') }
+    - { role: sudo, when: not inventory_hostname.startswith('bkernel') }
+    - role: keytab/service
+      kt_location: /etc/kojid/kojid.keytab
+      service: compile
+    - role: keytab/service
+      owner_user: root
+      owner_group: root
+      service: innercompose
+      host: "odcs{{ env_suffix }}.fedoraproject.org"
+      kt_location: /etc/kojid/secrets/odcs_inner.keytab
+      when: env == "staging"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-    when: not inventory_hostname.startswith('bkernel')
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-    when: not inventory_hostname.startswith('bkernel')
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+      when: not inventory_hostname.startswith('bkernel')
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+      when: not inventory_hostname.startswith('bkernel')
 
-  - name: make sure kojid is running
-    service: name=kojid state=started enabled=yes
+    - name: make sure kojid is running
+      service: name=kojid state=started enabled=yes
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/busgateway.yml b/playbooks/groups/busgateway.yml
index 36bd745d7..2965216f9 100644
--- a/playbooks/groups/busgateway.yml
+++ b/playbooks/groups/busgateway.yml
@@ -6,31 +6,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - fedmsg/base
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - fedmsg/base
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: busgateway:busgateway-stg
@@ -38,23 +37,23 @@
   gather_facts: True
 
   roles:
-  - role: fedmsg/hub
-    enable_websocket_server: True
-  - role: fedmsg/datanommer
-  - role: fedmsg/relay
-  - role: fedmsg/gateway
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
-  - role: collectd/fedmsg-service
-    process: fedmsg-relay
-  - role: collectd/fedmsg-service
-    process: fedmsg-gateway
-  - role: collectd/fedmsg-activation
+    - role: fedmsg/hub
+      enable_websocket_server: True
+    - role: fedmsg/datanommer
+    - role: fedmsg/relay
+    - role: fedmsg/gateway
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
+    - role: collectd/fedmsg-service
+      process: fedmsg-relay
+    - role: collectd/fedmsg-service
+      process: fedmsg-gateway
+    - role: collectd/fedmsg-activation
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/certgetter.yml b/playbooks/groups/certgetter.yml
index 95290922d..e8b24a8dd 100644
--- a/playbooks/groups/certgetter.yml
+++ b/playbooks/groups/certgetter.yml
@@ -6,27 +6,26 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/ci.yml b/playbooks/groups/ci.yml
index d709159c1..f8a46f3a5 100644
--- a/playbooks/groups/ci.yml
+++ b/playbooks/groups/ci.yml
@@ -11,35 +11,37 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: base, tags: ['base'] }
-   - { role: rkhunter, tags: ['rkhunter'] }
-   - { role: nagios_client, tags: ['nagios_client'] }
-   - { role: hosts, tags: ['hosts']}
-   - { role: fas_client, tags: ['fas_client'] }
-   - { role: collectd/base, tags: ['collectd_base'] }
-   - { role: dnf-automatic, tags: ['dnfautomatic'] }
-   - { role: sudo, tags: ['sudo'] }
-   - { role: openvpn/client,
-       when: deployment_type == "prod", tags: ['openvpn_client'] }
-   - postgresql_server
-   - apache
+    - { role: base, tags: ["base"] }
+    - { role: rkhunter, tags: ["rkhunter"] }
+    - { role: nagios_client, tags: ["nagios_client"] }
+    - { role: hosts, tags: ["hosts"] }
+    - { role: fas_client, tags: ["fas_client"] }
+    - { role: collectd/base, tags: ["collectd_base"] }
+    - { role: dnf-automatic, tags: ["dnfautomatic"] }
+    - { role: sudo, tags: ["sudo"] }
+    - {
+        role: openvpn/client,
+        when: deployment_type == "prod",
+        tags: ["openvpn_client"],
+      }
+    - postgresql_server
+    - apache
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-  
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure resultsdb production
   hosts: ci
@@ -47,15 +49,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/resultsdb-backend, tags: ['resultsdb-be'] }
-   - { role: taskotron/resultsdb-frontend, tags: ['resultsdb-fe'] }
-   - { role: taskotron/execdb, tags: ['execdb'] }
-   - { role: ci_resultsdb, tags: ['ci_resultsdb'] }
+    - { role: taskotron/resultsdb-backend, tags: ["resultsdb-be"] }
+    - { role: taskotron/resultsdb-frontend, tags: ["resultsdb-fe"] }
+    - { role: taskotron/execdb, tags: ["execdb"] }
+    - { role: ci_resultsdb, tags: ["ci_resultsdb"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/copr-backend.yml b/playbooks/groups/copr-backend.yml
index b8b8793c8..19ba69463 100644
--- a/playbooks/groups/copr-backend.yml
+++ b/playbooks/groups/copr-backend.yml
@@ -4,30 +4,29 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
 - name: cloud basic setup
   hosts: copr-back-dev:copr-back-stg:copr-back
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{copr_hostbase}}.cloud.fedoraproject.org"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{copr_hostbase}}.cloud.fedoraproject.org"
 
 - name: provision instance
   hosts: copr-back-dev:copr-back-stg:copr-back
@@ -35,14 +34,14 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   # Roles are run first, before tasks, regardless of where you place them here.
   roles:
-  - base
-  - fedmsg/base
-  - copr/backend
-  - nagios_client
+    - base
+    - fedmsg/base
+    - copr/backend
+    - nagios_client
diff --git a/playbooks/groups/copr-dist-git.yml b/playbooks/groups/copr-dist-git.yml
index 437c14293..404f1680e 100644
--- a/playbooks/groups/copr-dist-git.yml
+++ b/playbooks/groups/copr-dist-git.yml
@@ -4,29 +4,29 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
 - name: cloud basic setup
   hosts: copr-dist-git-dev:copr-dist-git-stg:copr-dist-git
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{copr_hostbase}}.fedorainfracloud.org"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{copr_hostbase}}.fedorainfracloud.org"
 
 - name: provision instance
   hosts: copr-dist-git-dev:copr-dist-git-stg:copr-dist-git
@@ -34,13 +34,13 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - copr/dist_git
+    - base
+    - copr/dist_git
 
   handlers:
-  - import_tasks: "../../handlers/restart_services.yml"
+    - import_tasks: "../../handlers/restart_services.yml"
diff --git a/playbooks/groups/copr-frontend-cloud.yml b/playbooks/groups/copr-frontend-cloud.yml
index dd3935918..afedf5ac5 100644
--- a/playbooks/groups/copr-frontend-cloud.yml
+++ b/playbooks/groups/copr-frontend-cloud.yml
@@ -4,29 +4,29 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
 - name: cloud basic setup
   hosts: copr-front-dev:copr-front
   # hosts: copr-front
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{copr_hostbase}}.cloud.fedoraproject.org"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{copr_hostbase}}.cloud.fedoraproject.org"
 
 - name: provision instance
   hosts: copr-front:copr-front-dev
@@ -34,11 +34,11 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - base
-   - copr/frontend-cloud
-   - nagios_client
+    - base
+    - copr/frontend-cloud
+    - nagios_client
diff --git a/playbooks/groups/copr-frontend-upgrade.yml b/playbooks/groups/copr-frontend-upgrade.yml
index ab6ecefc6..635ad57d7 100644
--- a/playbooks/groups/copr-frontend-upgrade.yml
+++ b/playbooks/groups/copr-frontend-upgrade.yml
@@ -5,16 +5,16 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - copr/frontend
+    - copr/frontend
 
   tasks:
-  - name: Upgrade copr-frontend package
-    dnf: state=latest pkg=copr-frontend
+    - name: Upgrade copr-frontend package
+      dnf: state=latest pkg=copr-frontend
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/copr-frontend.yml b/playbooks/groups/copr-frontend.yml
index 7a2028d38..322aaa975 100644
--- a/playbooks/groups/copr-frontend.yml
+++ b/playbooks/groups/copr-frontend.yml
@@ -7,29 +7,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - { role: openvpn/client, when: env != "staging" }
-  - { role: sudo, sudoers: "{{ private }}/files/sudo/copr-sudoers" }
-  - redis
-  - mod_wsgi
-  - copr/frontend
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - { role: openvpn/client, when: env != "staging" }
+    - { role: sudo, sudoers: "{{ private }}/files/sudo/copr-sudoers" }
+    - redis
+    - mod_wsgi
+    - copr/frontend
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/copr-keygen.yml b/playbooks/groups/copr-keygen.yml
index 46a786d3f..69bc258ea 100644
--- a/playbooks/groups/copr-keygen.yml
+++ b/playbooks/groups/copr-keygen.yml
@@ -3,47 +3,47 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
-  - name: gather facts
-    setup:
-    check_mode: no
-    ignore_errors: True
-    register: facts
-  - name: install python2 and dnf stuff
-    raw: dnf -y install python-dnf libselinux-python yum
-    when: facts is failed
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - name: gather facts
+      setup:
+      check_mode: no
+      ignore_errors: True
+      register: facts
+    - name: install python2 and dnf stuff
+      raw: dnf -y install python-dnf libselinux-python yum
+      when: facts is failed
 
 - name: cloud basic setup
   hosts: copr-keygen-dev:copr-keygen-stg:copr-keygen
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{copr_hostbase}}.cloud.fedoraproject.org"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{copr_hostbase}}.cloud.fedoraproject.org"
 
 - name: provision instance
   hosts: copr-keygen-dev:copr-keygen-stg:copr-keygen
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - copr/keygen
-  - nagios_client
+    - base
+    - copr/keygen
+    - nagios_client
diff --git a/playbooks/groups/datagrepper.yml b/playbooks/groups/datagrepper.yml
index 1dc42cba3..0b2744702 100644
--- a/playbooks/groups/datagrepper.yml
+++ b/playbooks/groups/datagrepper.yml
@@ -8,33 +8,32 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - fedmsg/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - fedmsg/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: datagrepper:datagrepper-stg
@@ -42,20 +41,19 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - datagrepper
-  - role: collectd/web-service
-    site: datagrepper
-    url: "http://localhost/datagrepper/raw?delta=86400";
-    interval: 15
+    - datagrepper
+    - role: collectd/web-service
+      site: datagrepper
+      url: "http://localhost/datagrepper/raw?delta=86400";
+      interval: 15
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 # The gluster work here can be omitted for now.  It is used by a feature of
 # datagrepper that is partially in place, but not yet functional.
 #
diff --git a/playbooks/groups/dhcp.yml b/playbooks/groups/dhcp.yml
index da2929ff8..a482b0362 100644
--- a/playbooks/groups/dhcp.yml
+++ b/playbooks/groups/dhcp.yml
@@ -6,27 +6,27 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - dhcp_server
-  - tftp_server
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - dhcp_server
+    - tftp_server
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/dns.yml b/playbooks/groups/dns.yml
index fcff65c47..d6383544f 100644
--- a/playbooks/groups/dns.yml
+++ b/playbooks/groups/dns.yml
@@ -8,30 +8,32 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - collectd/base
-  - collectd/bind
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: datacenter != "phx2" and datacenter != "rdu" }
-  - dns
+    - base
+    - hosts
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - collectd/base
+    - collectd/bind
+    - rsyncd
+    - sudo
+    - {
+        role: openvpn/client,
+        when: datacenter != "phx2" and datacenter != "rdu",
+      }
+    - dns
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/download.yml b/playbooks/groups/download.yml
index d5ab95624..ea4044dc1 100644
--- a/playbooks/groups/download.yml
+++ b/playbooks/groups/download.yml
@@ -6,15 +6,14 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: post-initial-steps
   hosts: download
@@ -22,58 +21,75 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - apache
-  - download
-  - { role: mod_limitipconn, when: ansible_distribution_major_version|int != '7'}
-  - rsyncd
-  - { role: nfs/client, when: datacenter == "phx2", mnt_dir: '/srv/pub',  nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' }
-  - { role: nfs/client, when: datacenter == "phx2", mnt_dir: '/mnt/koji/compose',  nfs_src_dir: 'fedora_koji/koji/compose' }
-  - { role: nfs/client, when: datacenter == "rdu", mnt_dir: '/srv/pub',  nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' }
-  - sudo
-  - { role: openvpn/client, when: datacenter != "phx2" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - apache
+    - download
+    - {
+        role: mod_limitipconn,
+        when: ansible_distribution_major_version|int != '7',
+      }
+    - rsyncd
+    - {
+        role: nfs/client,
+        when: datacenter == "phx2",
+        mnt_dir: "/srv/pub",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub",
+      }
+    - {
+        role: nfs/client,
+        when: datacenter == "phx2",
+        mnt_dir: "/mnt/koji/compose",
+        nfs_src_dir: "fedora_koji/koji/compose",
+      }
+    - {
+        role: nfs/client,
+        when: datacenter == "rdu",
+        mnt_dir: "/srv/pub",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub",
+      }
+    - sudo
+    - { role: openvpn/client, when: datacenter != "phx2" }
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-  - name: put in script for syncing on download-ib01
-    copy: src="{{ files }}/download/sync-up-downloads.sh.ib01" dest=/usr/local/bin/sync-up-downloads owner=root group=root mode=755
-    when: inventory_hostname == 'download-ib01.fedoraproject.org'
-  - name: put in script for syncing on download-ib01
-    copy: src="{{ files }}/download/sync-up-other.sh.ib01" dest=/usr/local/bin/sync-up-other owner=root group=root mode=755
-    when: inventory_hostname == 'download-ib01.fedoraproject.org'
-  - name: put in cron job for syncing
-    copy: src="{{ files }}/download/download-sync.cron.ib01"  dest=/etc/cron.d/download-sync owner=root group=root mode=644
-    when: inventory_hostname == 'download-ib01.fedoraproject.org'
-  - name: put in last sync script for download-ib01
-    copy: src="{{ files}}/download/last-sync" dest=/usr/local/bin/last-sync mode=0755
-    when: inventory_hostname == 'download-ib01.fedoraproject.org'
-  - name: install bc so last-sync works.
-    package: name=bc state=present
-    when: inventory_hostname == 'download-ib01.fedoraproject.org'
+    - name: put in script for syncing on download-ib01
+      copy: src="{{ files }}/download/sync-up-downloads.sh.ib01" dest=/usr/local/bin/sync-up-downloads owner=root group=root mode=755
+      when: inventory_hostname == 'download-ib01.fedoraproject.org'
+    - name: put in script for syncing on download-ib01
+      copy: src="{{ files }}/download/sync-up-other.sh.ib01" dest=/usr/local/bin/sync-up-other owner=root group=root mode=755
+      when: inventory_hostname == 'download-ib01.fedoraproject.org'
+    - name: put in cron job for syncing
+      copy: src="{{ files }}/download/download-sync.cron.ib01"  dest=/etc/cron.d/download-sync owner=root group=root mode=644
+      when: inventory_hostname == 'download-ib01.fedoraproject.org'
+    - name: put in last sync script for download-ib01
+      copy: src="{{ files}}/download/last-sync" dest=/usr/local/bin/last-sync mode=0755
+      when: inventory_hostname == 'download-ib01.fedoraproject.org'
+    - name: install bc so last-sync works.
+      package: name=bc state=present
+      when: inventory_hostname == 'download-ib01.fedoraproject.org'
 
-  - name: put in script for syncing on download-cc-rdu01
-    copy: src="{{ files }}/download/sync-up-downloads.sh.cc-rdu01" dest=/usr/local/bin/sync-up-downloads owner=root group=root mode=755
-    when: inventory_hostname == 'download-cc-rdu01.fedoraproject.org'
-  - name: put in cron job for syncing
-    copy: src="{{ files }}/download/download-sync.cron"  dest=/etc/cron.d/download-sync owner=root group=root mode=644
-    when: inventory_hostname == 'download-cc-rdu01.fedoraproject.org'
+    - name: put in script for syncing on download-cc-rdu01
+      copy: src="{{ files }}/download/sync-up-downloads.sh.cc-rdu01" dest=/usr/local/bin/sync-up-downloads owner=root group=root mode=755
+      when: inventory_hostname == 'download-cc-rdu01.fedoraproject.org'
+    - name: put in cron job for syncing
+      copy: src="{{ files }}/download/download-sync.cron"  dest=/etc/cron.d/download-sync owner=root group=root mode=644
+      when: inventory_hostname == 'download-cc-rdu01.fedoraproject.org'
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/elections.yml b/playbooks/groups/elections.yml
index e1c51ca71..f75570ca5 100644
--- a/playbooks/groups/elections.yml
+++ b/playbooks/groups/elections.yml
@@ -6,32 +6,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
+    - collectd/base
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg on elections
   hosts: elections:elections-stg
@@ -39,15 +38,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy elections itself
   hosts: elections:elections-stg
@@ -55,13 +54,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - elections
+    - elections
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/fas.yml b/playbooks/groups/fas.yml
index fdeb984f1..f98ebb1aa 100644
--- a/playbooks/groups/fas.yml
+++ b/playbooks/groups/fas.yml
@@ -8,33 +8,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - memcached
-  - mod_wsgi
-  - fas_server
-  - fedmsg/base
-  - sudo
-  - yubikey
-  - totpcgi
-  - { role: openvpn/client, when: env != "staging" }
+    - base
+    - hosts
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - memcached
+    - mod_wsgi
+    - fas_server
+    - fedmsg/base
+    - sudo
+    - yubikey
+    - totpcgi
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/fedimg.yml b/playbooks/groups/fedimg.yml
index 3bc801553..114ee9d97 100644
--- a/playbooks/groups/fedimg.yml
+++ b/playbooks/groups/fedimg.yml
@@ -9,33 +9,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - fas_client
-  - nagios_client
-  - hosts
-  - collectd/base
-  - fedmsg/base
-  - sudo
+    - base
+    - rkhunter
+    - fas_client
+    - nagios_client
+    - hosts
+    - collectd/base
+    - fedmsg/base
+    - sudo
   # The proxies don't actually need to talk to these hosts so we won't bother
   # putting them on the vpn.
   #- { role: openvpn/client,
   #    when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: fedimg:fedimg-stg
@@ -43,24 +43,24 @@
   gather_facts: True
 
   roles:
-  - fedmsg/hub
-  - role: fedimg
-    aws_keyname: fedimg-dev
-    aws_keypath: /etc/pki/fedimg/fedimg-dev
-    aws_pubkeypath: /etc/pki/fedimg/fedimg-dev.pub
-    when: env == 'staging'
-  - role: fedimg
-    aws_keyname: releng-ap-northeast-1
-    aws_keypath: /etc/pki/fedimg/fedimg-prod
-    aws_pubkeypath: /etc/pki/fedimg/fedimg-prod.pub
-    when: env != 'staging'
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - fedmsg/hub
+    - role: fedimg
+      aws_keyname: fedimg-dev
+      aws_keypath: /etc/pki/fedimg/fedimg-dev
+      aws_pubkeypath: /etc/pki/fedimg/fedimg-dev.pub
+      when: env == 'staging'
+    - role: fedimg
+      aws_keyname: releng-ap-northeast-1
+      aws_keypath: /etc/pki/fedimg/fedimg-prod
+      aws_pubkeypath: /etc/pki/fedimg/fedimg-prod.pub
+      when: env != 'staging'
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/fedocal.yml b/playbooks/groups/fedocal.yml
index 541612286..9e45843d7 100644
--- a/playbooks/groups/fedocal.yml
+++ b/playbooks/groups/fedocal.yml
@@ -6,32 +6,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
+    - collectd/base
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg
   hosts: fedocal-stg:fedocal
@@ -39,15 +38,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy fedocal itself
   hosts: fedocal-stg:fedocal
@@ -55,12 +54,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - fedocal
+    - fedocal
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/freshmaker.yml b/playbooks/groups/freshmaker.yml
index 8eeb09cdf..eac339f9f 100644
--- a/playbooks/groups/freshmaker.yml
+++ b/playbooks/groups/freshmaker.yml
@@ -6,29 +6,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - collectd/base
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: openvpn on the prod frontend nodes
   hosts: freshmaker-frontend
@@ -36,15 +36,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - openvpn/client
+    - openvpn/client
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up Freshmaker frontend
   hosts: freshmaker-frontend:freshmaker-frontend-stg
@@ -52,19 +52,19 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - mod_wsgi
-  - role: freshmaker/frontend
-    # TLS is terminated for us at the proxy layer (like for every other app).
-    freshmaker_force_ssl: False
-    freshmaker_servername: null
+    - mod_wsgi
+    - role: freshmaker/frontend
+      # TLS is terminated for us at the proxy layer (like for every other app).
+      freshmaker_force_ssl: False
+      freshmaker_servername: null
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up Freshmaker backend
   hosts: freshmaker-backend:freshmaker-backend-stg
@@ -72,20 +72,20 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - fedmsg/base
-  - role: freshmaker/backend
-    freshmaker_servername: freshmaker{{env_suffix}}.fedoraproject.org
+    - fedmsg/base
+    - role: freshmaker/backend
+      freshmaker_servername: freshmaker{{env_suffix}}.fedoraproject.org
 
-  - role: keytab/service
-    service: freshmaker
-    owner_user: fedmsg
-    owner_group: fedmsg
-    host: "freshmaker{{env_suffix}}.fedoraproject.org"
+    - role: keytab/service
+      service: freshmaker
+      owner_user: fedmsg
+      owner_group: fedmsg
+      host: "freshmaker{{env_suffix}}.fedoraproject.org"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/github2fedmsg.yml b/playbooks/groups/github2fedmsg.yml
index 2ccc2b085..be2c7e185 100644
--- a/playbooks/groups/github2fedmsg.yml
+++ b/playbooks/groups/github2fedmsg.yml
@@ -11,32 +11,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy service-specific config
   hosts: github2fedmsg:github2fedmsg-stg
@@ -44,13 +43,13 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - github2fedmsg
-  - fedmsg/base
+    - github2fedmsg
+    - fedmsg/base
diff --git a/playbooks/groups/gnome-backups.yml b/playbooks/groups/gnome-backups.yml
index 0c583c7c0..e6cb475d7 100644
--- a/playbooks/groups/gnome-backups.yml
+++ b/playbooks/groups/gnome-backups.yml
@@ -6,30 +6,32 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - gnome_backups
-  - { role: nfs/client,
-      mnt_dir: '/gnome_backups',
-      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
-      nfs_src_dir: 'gnome_backups' }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - gnome_backups
+    - {
+        role: nfs/client,
+        mnt_dir: "/gnome_backups",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
+        nfs_src_dir: "gnome_backups",
+      }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/hotness.yml b/playbooks/groups/hotness.yml
index 33578f925..6b48fa6a6 100644
--- a/playbooks/groups/hotness.yml
+++ b/playbooks/groups/hotness.yml
@@ -11,35 +11,35 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - collectd/base
-  - hosts
-  - fas_client
-  - sudo
-  - role: keytab/service
-    service: hotness
-    owner_user: fedmsg
+    - base
+    - rkhunter
+    - nagios_client
+    - collectd/base
+    - hosts
+    - fas_client
+    - sudo
+    - role: keytab/service
+      service: hotness
+      owner_user: fedmsg
   # The proxies don't actually need to talk to these hosts so we won't bother
   # putting them on the vpn.
   #- { role: openvpn/client,
   #    when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg basics
   hosts: hotness:hotness-stg
@@ -47,15 +47,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: hotness:hotness-stg
@@ -63,15 +63,15 @@
   gather_facts: True
 
   roles:
-  - fedmsg/hub
-  - hotness
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - fedmsg/hub
+    - hotness
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/hubs.yml b/playbooks/groups/hubs.yml
index 5bf96741a..5b4225c14 100644
--- a/playbooks/groups/hubs.yml
+++ b/playbooks/groups/hubs.yml
@@ -8,31 +8,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 #
 # Database setup
@@ -44,10 +42,10 @@
   user: root
 
   tasks:
-  - name: install psycopg2 for the postgresql ansible modules
-    package: name=python-psycopg2 state=present
-    tags:
-    - packages
+    - name: install psycopg2 for the postgresql ansible modules
+      package: name=python-psycopg2 state=present
+      tags:
+        - packages
 
 - name: setup the database
   hosts: db01.stg.phx2.fedoraproject.org
@@ -55,21 +53,19 @@
   become: yes
   become_user: postgres
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - /srv/private/ansible/vars.yml
-  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   tasks:
-  #- name: hubs DB admin user
-  #  postgresql_user: name=hubsadmin password={{ hubs_admin_db_pass }}
-  #- name: databases creation
-  #  postgresql_db: name=hubs owner=hubsadmin encoding=UTF-8
-  - name: hubs DB user
-    postgresql_user: name=hubsapp password={{ hubs_db_pass }}
-  - name: databases creation
-    postgresql_db: name=hubs owner=hubsapp encoding=UTF-8
-
-
+    #- name: hubs DB admin user
+    #  postgresql_user: name=hubsadmin password={{ hubs_admin_db_pass }}
+    #- name: databases creation
+    #  postgresql_db: name=hubs owner=hubsadmin encoding=UTF-8
+    - name: hubs DB user
+      postgresql_user: name=hubsapp password={{ hubs_db_pass }}
+    - name: databases creation
+      postgresql_db: name=hubs owner=hubsapp encoding=UTF-8
 
 #
 # Real Hubs-specific work
@@ -81,35 +77,35 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - /srv/private/ansible/vars.yml
-  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   roles:
-  - fedmsg/base
-  - role: hubs
-    main_user: hubs
-    hubs_secret_key: "{{ hubs_session_secret }}"
-    hubs_db_type: postgresql
-    hubs_db_user: hubsapp
-    hubs_db_password: "{{ hubs_db_pass }}"
-    hubs_dev_mode: false
-    hubs_conf_dir: /etc/fedora-hubs
-    hubs_var_dir: /var/lib/fedora-hubs
-    # Set the SSL files to null because we use an SSL proxy
-    hubs_ssl_cert: null
-    hubs_ssl_key: null
-    hubs_fas_username: "{{ fedoraDummyUser }}"
-    hubs_fas_password: "{{ fedoraDummyUserPassword }}"
+    - fedmsg/base
+    - role: hubs
+      main_user: hubs
+      hubs_secret_key: "{{ hubs_session_secret }}"
+      hubs_db_type: postgresql
+      hubs_db_user: hubsapp
+      hubs_db_password: "{{ hubs_db_pass }}"
+      hubs_dev_mode: false
+      hubs_conf_dir: /etc/fedora-hubs
+      hubs_var_dir: /var/lib/fedora-hubs
+      # Set the SSL files to null because we use an SSL proxy
+      hubs_ssl_cert: null
+      hubs_ssl_key: null
+      hubs_fas_username: "{{ fedoraDummyUser }}"
+      hubs_fas_password: "{{ fedoraDummyUserPassword }}"
 
   tasks:
-  - name: add more hubs workers
-    service: name={{item}} enabled=yes state=started
-    with_items:
-    - fedora-hubs-triage@3
-    - fedora-hubs-triage@4
-    - fedora-hubs-worker@3
-    - fedora-hubs-worker@4
+    - name: add more hubs workers
+      service: name={{item}} enabled=yes state=started
+      with_items:
+        - fedora-hubs-triage@3
+        - fedora-hubs-triage@4
+        - fedora-hubs-worker@3
+        - fedora-hubs-worker@4
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/infinote.yml b/playbooks/groups/infinote.yml
index 41bef62f8..b9629ffd7 100644
--- a/playbooks/groups/infinote.yml
+++ b/playbooks/groups/infinote.yml
@@ -7,39 +7,39 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - openvpn/client
-  - cgit/base
-  - cgit/clean_lock_cron
-  - cgit/make_pkgs_list
-  - git/server
-  - role: apache
-  - role: httpd/mod_ssl
-  - infinote
-  - role: letsencrypt
-    site_name: 'infinote.fedoraproject.org'
-    certbot_addhost: 'infinote.fedoraproject.org'
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - openvpn/client
+    - cgit/base
+    - cgit/clean_lock_cron
+    - cgit/make_pkgs_list
+    - git/server
+    - role: apache
+    - role: httpd/mod_ssl
+    - infinote
+    - role: letsencrypt
+      site_name: "infinote.fedoraproject.org"
+      certbot_addhost: "infinote.fedoraproject.org"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-  - name: tweak ssl key
-    file: path=/etc/pki/tls/private/infinote.fedoraproject.org.key group=infinote mode=640
+    - name: tweak ssl key
+      file: path=/etc/pki/tls/private/infinote.fedoraproject.org.key group=infinote mode=640
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/ipa.yml b/playbooks/groups/ipa.yml
index 1a50ffdf1..08a7e05fe 100644
--- a/playbooks/groups/ipa.yml
+++ b/playbooks/groups/ipa.yml
@@ -6,31 +6,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy ipa itself
   hosts: ipa:ipa-stg
@@ -38,38 +37,40 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - ipa/server
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "id{{env_suffix}}.fedoraproject.org"
-    notify:
-    - combine IPA http keytabs
+    - ipa/server
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "id{{env_suffix}}.fedoraproject.org"
+      notify:
+        - combine IPA http keytabs
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-  - name: Combine IPA keytabs
-    shell: printf "%b" "read_kt /etc/httpd/conf/ipa.keytab\nread_kt /etc/krb5.HTTP_id{{env_suffix}}.fedoraproject.org.keytab\nwrite_kt /etc/krb5.HTTP_id{{env_suffix}}.fedoraproject.org.keytab.combined" | ktutil
-    changed_when: false
-    tags:
-    - krb5
-    - ipa/server
-  - name: Set owner and permissions on combined keytab
-    file: path="/etc/krb5.HTTP_id{{env_suffix}}.fedoraproject.org.keytab.combined"
-          owner=apache
-          group=apache
-          mode=0600
-    tags:
-    - krb5
-    - ipa/server
+    - name: Combine IPA keytabs
+      shell: printf "%b" "read_kt /etc/httpd/conf/ipa.keytab\nread_kt /etc/krb5.HTTP_id{{env_suffix}}.fedoraproject.org.keytab\nwrite_kt /etc/krb5.HTTP_id{{env_suffix}}.fedoraproject.org.keytab.combined" | ktutil
+      changed_when: false
+      tags:
+        - krb5
+        - ipa/server
+    - name: Set owner and permissions on combined keytab
+      file:
+        path="/etc/krb5.HTTP_id{{env_suffix}}.fedoraproject.org.keytab.combined"
+        owner=apache
+        group=apache
+        mode=0600
+      tags:
+        - krb5
+        - ipa/server
+
   # original: /etc/httpd/conf/ipa.keytab
   #- name: Make IPA HTTP use the combined keytab
   #  lineinfile: dest=/etc/httpd/conf.d/ipa.conf
@@ -87,19 +88,18 @@
   #  - krb5
   #  - ipa/server
   #  - config
-
 - name: do base role once more to revert any resolvconf changes
   hosts: ipa:ipa-stg
   user: root
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - base
+    - base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/ipsilon.yml b/playbooks/groups/ipsilon.yml
index f361f0bc4..e0bb1a7c5 100644
--- a/playbooks/groups/ipsilon.yml
+++ b/playbooks/groups/ipsilon.yml
@@ -11,43 +11,42 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "id.stg.fedoraproject.org"
-    when: env == "staging"
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "id.fedoraproject.org"
-    when: env == "production"
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "id.stg.fedoraproject.org"
+      when: env == "staging"
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "id.fedoraproject.org"
+      when: env == "production"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy ipsilon itself
   hosts: ipsilon:ipsilon-stg
@@ -55,12 +54,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - ipsilon
+    - ipsilon
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/kerneltest.yml b/playbooks/groups/kerneltest.yml
index c5bb0b7a0..a62c9d326 100644
--- a/playbooks/groups/kerneltest.yml
+++ b/playbooks/groups/kerneltest.yml
@@ -11,32 +11,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy service-specific config
   hosts: kerneltest-stg:kerneltest
@@ -44,13 +43,13 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-   - kerneltest
-   - fedmsg/base
+    - kerneltest
+    - fedmsg/base
diff --git a/playbooks/groups/keyserver.yml b/playbooks/groups/keyserver.yml
index 76598c25f..05ea0183b 100644
--- a/playbooks/groups/keyserver.yml
+++ b/playbooks/groups/keyserver.yml
@@ -11,30 +11,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - apache
-  - certbot
-  - keyserver
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - { role: openvpn/client, when: env != "staging" }
+    - apache
+    - certbot
+    - keyserver
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/koji-hub.yml b/playbooks/groups/koji-hub.yml
index 252ebe23a..df98dc1bf 100644
--- a/playbooks/groups/koji-hub.yml
+++ b/playbooks/groups/koji-hub.yml
@@ -12,9 +12,9 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
     - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
@@ -23,69 +23,72 @@
     - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - builder_repo
-  - collectd/base
-  - apache
-  - fedmsg/base
-  - role: keytab/service
-    service: kojira
-    host: "koji{{env_suffix}}.fedoraproject.org"
-  - role: keytab/service
-    service: koji-gc
-    owner_user: apache
-    host: "koji{{env_suffix}}.fedoraproject.org"
-  - koji_hub
-  - role: keytab/service
-    service: HTTP
-    owner_user: apache
-    host: "koji{{env_suffix}}.fedoraproject.org"
-    when: "fedmsg_koji_instance == 'primary'"
-  - role: keytab/service
-    service: HTTP
-    owner_user: apache
-    host: "{{fedmsg_koji_instance}}.koji.fedoraproject.org"
-    when: "fedmsg_koji_instance != 'primary'"
-  - role: keytab/service
-    service: shadow
-    owner_user: koji_shadow
-    host: "koji{{env_suffix}}.fedoraproject.org"
-    when: "fedmsg_koji_instance != 'primary'"
-  - { role: nfs/server, when: env == "staging" }
-  - { role: keepalived, when: env == "production" and inventory_hostname.startswith('koji') }
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji'
-    nfs_src_dir: 'fedora_koji'
-    when: env == 'production' and inventory_hostname.startswith('koji')
-  - role: nfs/client
-    mnt_dir: '/mnt/koji'
-    nfs_src_dir: 'fedora_s390/data'
-    when: env == 'production' and inventory_hostname.startswith('s390')
-  - role: nfs/client
-    mnt_dir: '/mnt/koji'
-    nfs_src_dir: 'fedora_ppc/data'
-    when: env == 'production' and inventory_hostname.startswith('ppc')
-  - role: nfs/client
-    mnt_dir: '/mnt/koji'
-    nfs_src_dir: 'fedora_arm/data'
-    when: env == 'production' and inventory_hostname.startswith('arm')
-    # In staging, we mount fedora_koji as read only (see nfs_mount_opts)
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji_prod'
-    nfs_src_dir: 'fedora_koji'
-    when: env == 'staging' and inventory_hostname.startswith('koji')
-  - sudo
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - builder_repo
+    - collectd/base
+    - apache
+    - fedmsg/base
+    - role: keytab/service
+      service: kojira
+      host: "koji{{env_suffix}}.fedoraproject.org"
+    - role: keytab/service
+      service: koji-gc
+      owner_user: apache
+      host: "koji{{env_suffix}}.fedoraproject.org"
+    - koji_hub
+    - role: keytab/service
+      service: HTTP
+      owner_user: apache
+      host: "koji{{env_suffix}}.fedoraproject.org"
+      when: "fedmsg_koji_instance == 'primary'"
+    - role: keytab/service
+      service: HTTP
+      owner_user: apache
+      host: "{{fedmsg_koji_instance}}.koji.fedoraproject.org"
+      when: "fedmsg_koji_instance != 'primary'"
+    - role: keytab/service
+      service: shadow
+      owner_user: koji_shadow
+      host: "koji{{env_suffix}}.fedoraproject.org"
+      when: "fedmsg_koji_instance != 'primary'"
+    - { role: nfs/server, when: env == "staging" }
+    - {
+        role: keepalived,
+        when: env == "production" and inventory_hostname.startswith('koji'),
+      }
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji"
+      nfs_src_dir: "fedora_koji"
+      when: env == 'production' and inventory_hostname.startswith('koji')
+    - role: nfs/client
+      mnt_dir: "/mnt/koji"
+      nfs_src_dir: "fedora_s390/data"
+      when: env == 'production' and inventory_hostname.startswith('s390')
+    - role: nfs/client
+      mnt_dir: "/mnt/koji"
+      nfs_src_dir: "fedora_ppc/data"
+      when: env == 'production' and inventory_hostname.startswith('ppc')
+    - role: nfs/client
+      mnt_dir: "/mnt/koji"
+      nfs_src_dir: "fedora_arm/data"
+      when: env == 'production' and inventory_hostname.startswith('arm')
+      # In staging, we mount fedora_koji as read only (see nfs_mount_opts)
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji_prod"
+      nfs_src_dir: "fedora_koji"
+      when: env == 'staging' and inventory_hostname.startswith('koji')
+    - sudo
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure sshfs target on koji01
   hosts: koji01.phx2.fedoraproject.org:koji01.stg.phx2.fedoraproject.org
@@ -95,22 +98,21 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: Put public sshfs key in place
-    authorized_key: user="root"
-                    key="{{ lookup('file', '{{ private }}/files/releng/sshkeys/primary-s390x-sshfs' + '-staging.pub' if env == 'staging' else '{{ private }}/files/releng/sshkeys/primary-s390x-sshfs.pub') }}"
-                    state=present
-                    key_options='command="internal-sftp",from="{{ '10.16.0.25' if env == 'staging' else '10.16.0.11' }}",restrict'
-    tags:
-    - sshfs
+    - name: Put public sshfs key in place
+      authorized_key: user="root"
+        key="{{ lookup('file', '{{ private }}/files/releng/sshkeys/primary-s390x-sshfs' + '-staging.pub' if env == 'staging' else '{{ private }}/files/releng/sshkeys/primary-s390x-sshfs.pub') }}"
+        state=present
+        key_options='command="internal-sftp",from="{{ '10.16.0.25' if env == 'staging' else '10.16.0.11' }}",restrict'
+      tags:
+        - sshfs
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 # Setup the rabbitmq user so fedora-messaging can send messages
 - name: setup RabbitMQ
diff --git a/playbooks/groups/kojipkgs.yml b/playbooks/groups/kojipkgs.yml
index 0ee3a7969..ec37768de 100644
--- a/playbooks/groups/kojipkgs.yml
+++ b/playbooks/groups/kojipkgs.yml
@@ -6,37 +6,37 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - apache
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji'
-    nfs_src_dir: 'fedora_koji'
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_app/app'
-    nfs_src_dir: 'fedora_app/app'
-  - role: nfs/client
-    mnt_dir: '/pub'
-    nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub'
-  - role: kojipkgs
-  - role: varnish
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - apache
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji"
+      nfs_src_dir: "fedora_koji"
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_app/app"
+      nfs_src_dir: "fedora_app/app"
+    - role: nfs/client
+      mnt_dir: "/pub"
+      nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub"
+    - role: kojipkgs
+    - role: varnish
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/koschei-backend.yml b/playbooks/groups/koschei-backend.yml
index 68d7666ec..41fa6ec50 100644
--- a/playbooks/groups/koschei-backend.yml
+++ b/playbooks/groups/koschei-backend.yml
@@ -6,33 +6,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - builder_repo
-  - collectd/base
-  - { role: sudo, sudoers: "{{ private }}/files/sudo/koschei01-sudoers" }
-  - koschei/backend
-  - role: keytab/service
-    owner_user: koschei
-    owner_group: koschei
-    service: koschei
-    host: "{{inventory_hostname}}"
-  - fedmsg/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - builder_repo
+    - collectd/base
+    - { role: sudo, sudoers: "{{ private }}/files/sudo/koschei01-sudoers" }
+    - koschei/backend
+    - role: keytab/service
+      owner_user: koschei
+      owner_group: koschei
+      service: koschei
+      host: "{{inventory_hostname}}"
+    - fedmsg/base
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/koschei-web.yml b/playbooks/groups/koschei-web.yml
index 65e1be458..ee7d0ec9e 100644
--- a/playbooks/groups/koschei-web.yml
+++ b/playbooks/groups/koschei-web.yml
@@ -7,28 +7,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - { role: sudo, sudoers: "{{ private }}/files/sudo/koschei01-sudoers" }
-  - { role: openvpn/client, when: env != "staging" }
-  - { role: mod_wsgi, when: env != "staging" }
-  - koschei/frontend
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - { role: sudo, sudoers: "{{ private }}/files/sudo/koschei01-sudoers" }
+    - { role: openvpn/client, when: env != "staging" }
+    - { role: mod_wsgi, when: env != "staging" }
+    - koschei/frontend
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/libravatar.yml b/playbooks/groups/libravatar.yml
index 4dad63955..9dce14554 100644
--- a/playbooks/groups/libravatar.yml
+++ b/playbooks/groups/libravatar.yml
@@ -3,37 +3,36 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
 - name: cloud basic setup
   hosts: libravatar-stg:libravatar
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
 - name: provision instance
   hosts: libravatar-stg:libravatar
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - base
-   - libravatar
+    - base
+    - libravatar
diff --git a/playbooks/groups/logserver.yml b/playbooks/groups/logserver.yml
index 6d9b2ed6e..82dd4a7d9 100644
--- a/playbooks/groups/logserver.yml
+++ b/playbooks/groups/logserver.yml
@@ -6,47 +6,46 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - apache
-  - collectd/base
-  - collectd/server
-  - sudo
-  - epylog
-  - openvpn/client
-  - awstats
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "admin.fedoraproject.org"
-    when: env == "production"
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - apache
+    - collectd/base
+    - collectd/server
+    - sudo
+    - epylog
+    - openvpn/client
+    - awstats
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "admin.fedoraproject.org"
+      when: env == "production"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-#
-# We exclude some dirs from restorecon on updates on logservers as they are very large
-# and it takes a long long time to run restorecon over them.
-#
-  - name: exclude some directories from selinux relabeling on updates
-    copy: src="{{ files }}/logserver/fixfiles_exclude_dirs" dest=/etc/selinux/fixfiles_exclude_dirs owner=root mode=0644
+    #
+    # We exclude some dirs from restorecon on updates on logservers as they are very large
+    # and it takes a long long time to run restorecon over them.
+    #
+    - name: exclude some directories from selinux relabeling on updates
+      copy: src="{{ files }}/logserver/fixfiles_exclude_dirs" dest=/etc/selinux/fixfiles_exclude_dirs owner=root mode=0644
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Cloud Image stats
   hosts: log01.phx2.fedoraproject.org
@@ -54,16 +53,16 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - cloudstats
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_stats'
-    nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3"
-    nfs_src_dir: 'fedora_stats'
+    - cloudstats
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_stats"
+      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3"
+      nfs_src_dir: "fedora_stats"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/loopabull.yml b/playbooks/groups/loopabull.yml
index f3fa1ff2a..22b127657 100644
--- a/playbooks/groups/loopabull.yml
+++ b/playbooks/groups/loopabull.yml
@@ -7,28 +7,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Deploy and configure loopabull
   hosts: loopabull:loopabull-stg
@@ -36,12 +36,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
     - name: ensure ~/.ssh dir exists
@@ -63,20 +63,20 @@
     - fedmsg/base
     - fedmsg/hub
     - {
-      role: loopabull,
+        role: loopabull,
         loglevel: info,
         plugin: fedmsgrabbitmq,
-        routing_keys: [
-          "org.fedoraproject.stg.buildsys.build.state.change",
-          "org.fedoraproject.prod.buildsys.build.state.change",
-          "org.centos.prod.ci.pipeline.allpackages-pr.complete",
-          "org.centos.prod.ci.pipeline.allpackages-pr.package.running",
-        ],
+        routing_keys:
+          [
+            "org.fedoraproject.stg.buildsys.build.state.change",
+            "org.fedoraproject.prod.buildsys.build.state.change",
+            "org.centos.prod.ci.pipeline.allpackages-pr.complete",
+            "org.centos.prod.ci.pipeline.allpackages-pr.package.running",
+          ],
         playbooks_dir: /srv/loopabull-tasks/playbooks,
         ansible_cfg_path: /etc/ansible/ansible.cfg,
-        playbook_cmd: /usr/bin/ansible-playbook
-    }
-
+        playbook_cmd: /usr/bin/ansible-playbook,
+      }
 
 - name: Post Loopabull install configuration
   hosts: loopabull:loopabull-stg
@@ -84,12 +84,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
     - name: Enable fedmsg-rabbitmq-serializer
diff --git a/playbooks/groups/mailman.yml b/playbooks/groups/mailman.yml
index 42abda951..25950a718 100644
--- a/playbooks/groups/mailman.yml
+++ b/playbooks/groups/mailman.yml
@@ -10,34 +10,32 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - spamassassin
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - spamassassin
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 #
 # Database setup
@@ -49,10 +47,10 @@
   user: root
 
   tasks:
-  - name: install psycopg2 for the postgresql ansible modules
-    package: name=python-psycopg2 state=present
-    tags:
-    - packages
+    - name: install psycopg2 for the postgresql ansible modules
+      package: name=python-psycopg2 state=present
+      tags:
+        - packages
 
 - name: setup the database
   hosts: db01.stg.phx2.fedoraproject.org:db01.phx2.fedoraproject.org
@@ -60,26 +58,25 @@
   become: yes
   become_user: postgres
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   tasks:
-  # mailman auto-updates its schema; there can only be one admin user
-  - name: mailman DB user
-    postgresql_user: name=mailmanadmin password={{ mailman_mm_db_pass }}
-  - name: hyperkitty DB admin user
-    postgresql_user: name=hyperkittyadmin password={{ mailman_hk_admin_db_pass }}
-  - name: hyperkitty DB user
-    postgresql_user: name=hyperkittyapp password={{ mailman_hk_db_pass }}
-  - name: databases creation
-    postgresql_db: name={{ item }} owner="{{ item }}admin" encoding=UTF-8
-    with_items:
-    - mailman
-    - hyperkitty
-  - name: test database creation
-    postgresql_db: name=test_hyperkitty owner=hyperkittyadmin encoding=UTF-8
-
+    # mailman auto-updates its schema; there can only be one admin user
+    - name: mailman DB user
+      postgresql_user: name=mailmanadmin password={{ mailman_mm_db_pass }}
+    - name: hyperkitty DB admin user
+      postgresql_user: name=hyperkittyadmin password={{ mailman_hk_admin_db_pass }}
+    - name: hyperkitty DB user
+      postgresql_user: name=hyperkittyapp password={{ mailman_hk_db_pass }}
+    - name: databases creation
+      postgresql_db: name={{ item }} owner="{{ item }}admin" encoding=UTF-8
+      with_items:
+        - mailman
+        - hyperkitty
+    - name: test database creation
+      postgresql_db: name=test_hyperkitty owner=hyperkittyadmin encoding=UTF-8
 
 # Real MM/HK-specific work
 - name: setup mailman and hyperkitty
@@ -88,37 +85,37 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   roles:
-  - role: mailman
-    mailman_mailman_db_pass: "{{ mailman_mm_db_pass }}"
-    mailman_hyperkitty_admin_db_pass: "{{ mailman_hk_admin_db_pass }}"
-    mailman_hyperkitty_db_pass: "{{ mailman_hk_db_pass }}"
-    mailman_hyperkitty_cookie_key: "{{ mailman_hk_cookie_key }}"
-  - fedmsg/base
+    - role: mailman
+      mailman_mailman_db_pass: "{{ mailman_mm_db_pass }}"
+      mailman_hyperkitty_admin_db_pass: "{{ mailman_hk_admin_db_pass }}"
+      mailman_hyperkitty_db_pass: "{{ mailman_hk_db_pass }}"
+      mailman_hyperkitty_cookie_key: "{{ mailman_hk_cookie_key }}"
+    - fedmsg/base
 
   tasks:
-  - name: install more needed packages
-    package: name={{ item }} state=present
-    with_items:
-    - tar
-    tags:
-    - packages
-
-  #- name: easy access to the postgresql databases
-  #  template: src=$files/mailman/pgpass.j2 dest=/root/.pgpass
-  #            owner=root group=root mode=0600
-
-  - name: start services
-    service: state=started enabled=yes name={{ item }}
-    with_items:
-    - httpd
-    - mailman3
-    - postfix
-    when: inventory_hostname.startswith('mailman01.phx2') or inventory_hostname.startswith('lists-dev')
+    - name: install more needed packages
+      package: name={{ item }} state=present
+      with_items:
+        - tar
+      tags:
+        - packages
+
+    #- name: easy access to the postgresql databases
+    #  template: src=$files/mailman/pgpass.j2 dest=/root/.pgpass
+    #            owner=root group=root mode=0600
+
+    - name: start services
+      service: state=started enabled=yes name={{ item }}
+      with_items:
+        - httpd
+        - mailman3
+        - postfix
+      when: inventory_hostname.startswith('mailman01.phx2') or inventory_hostname.startswith('lists-dev')
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/maintainer-test.yml b/playbooks/groups/maintainer-test.yml
index c92a5ff0d..fc54386c7 100644
--- a/playbooks/groups/maintainer-test.yml
+++ b/playbooks/groups/maintainer-test.yml
@@ -3,34 +3,33 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
 - name: Do some basic cloud setup on them
   hosts: maintainer-test
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{inventory_hostname}}"
-  - debug: var=FedoraRawhideNumber
-  - debug: var=FedoraBranchedNumber
-  - debug: var=FedoraCycleNumber
-  - debug: var=FedoraPreviousCycleNumber
-  - debug: var=FedoraPreviousPreviousCycleNumber
-  - debug: var=Frozen
-
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{inventory_hostname}}"
+    - debug: var=FedoraRawhideNumber
+    - debug: var=FedoraBranchedNumber
+    - debug: var=FedoraCycleNumber
+    - debug: var=FedoraPreviousCycleNumber
+    - debug: var=FedoraPreviousPreviousCycleNumber
+    - debug: var=Frozen
 
 - import_playbook: "/srv/web/infra/ansible/playbooks/include/happy_birthday.yml myhosts=arm-packager"
 
@@ -38,39 +37,39 @@
   hosts: arm-packager:maintainer-test
   gather_facts: True
   tags:
-   - maintainer-test
+    - maintainer-test
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - sudo
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-  - name: install packager tools (dnf)
-    dnf: state=present pkg={{ item }}
-    with_items:
-    - fedora-packager
-    when: ansible_distribution_major_version|int > 21
-    tags:
-    - packages
+    - name: install packager tools (dnf)
+      dnf: state=present pkg={{ item }}
+      with_items:
+        - fedora-packager
+      when: ansible_distribution_major_version|int > 21
+      tags:
+        - packages
 
-  - name: allow packagers to use mock
-    copy: dest=/etc/pam.d/mock src="{{ files }}/common/mock"
-    tags:
-    - config
+    - name: allow packagers to use mock
+      copy: dest=/etc/pam.d/mock src="{{ files }}/common/mock"
+      tags:
+        - config
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/mariadb-server.yml b/playbooks/groups/mariadb-server.yml
index 07b87675a..176344688 100644
--- a/playbooks/groups/mariadb-server.yml
+++ b/playbooks/groups/mariadb-server.yml
@@ -12,28 +12,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - fas_client
-  - nagios_client
-  - hosts
-  - mariadb_server
-  - collectd/base
-  - sudo
+    - base
+    - rkhunter
+    - fas_client
+    - nagios_client
+    - hosts
+    - mariadb_server
+    - collectd/base
+    - sudo
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-# TODO: add iscsi task
+  # TODO: add iscsi task
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/mbs.yml b/playbooks/groups/mbs.yml
index e2cc3c41a..1a7bb8946 100644
--- a/playbooks/groups/mbs.yml
+++ b/playbooks/groups/mbs.yml
@@ -6,29 +6,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - collectd/base
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: openvpn on the prod frontend nodes
   hosts: mbs-frontend
@@ -36,15 +36,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - openvpn/client
+    - openvpn/client
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Set up apache on the frontend MBS API app
   hosts: mbs-frontend:mbs-frontend-stg
@@ -52,15 +52,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - mod_wsgi
+    - mod_wsgi
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg configuration and common mbs files
   hosts: mbs:mbs-stg
@@ -68,16 +68,16 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - fedmsg/base
-  - mbs/common
+    - fedmsg/base
+    - mbs/common
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy the frontend MBS API app
   hosts: mbs-frontend:mbs-frontend-stg
@@ -85,27 +85,27 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - mbs/frontend
+    - mbs/frontend
 
   post_tasks:
-  # Shouldn't be necessary after this change makes it out
-  # https://src.fedoraproject.org/rpms/module-build-service/c/d19515a7c053aa90cddccd5e10a5615b773a7bd2
-  - name: Make sure fedmsg-hub isn't running on the frontend.
-    service:
-      name: fedmsg-hub
-      state: stopped
-      enabled: false
-    tags:
-    - mbs
-    - mbs/frontend
+    # Shouldn't be necessary after this change makes it out
+    # https://src.fedoraproject.org/rpms/module-build-service/c/d19515a7c053aa90cddccd5e10a5615b773a7bd2
+    - name: Make sure fedmsg-hub isn't running on the frontend.
+      service:
+        name: fedmsg-hub
+        state: stopped
+        enabled: false
+      tags:
+        - mbs
+        - mbs/frontend
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy the backend MBS scheduler daemon
   hosts: mbs-backend:mbs-backend-stg
@@ -113,22 +113,22 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - role: keytab/service
-    service: mbs
-    owner_user: fedmsg
-    host: "mbs{{env_suffix}}.fedoraproject.org"
-  - role: fedmsg/hub
-    tags: fedmsg/hub
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - role: keytab/service
+      service: mbs
+      owner_user: fedmsg
+      host: "mbs{{env_suffix}}.fedoraproject.org"
+    - role: fedmsg/hub
+      tags: fedmsg/hub
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
   # Amazingly, there's no need for an mbs/backend role.  The fedmsg/hub role
   # along with mbs/common is enough.
   #- mbs/backend
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/mdapi.yml b/playbooks/groups/mdapi.yml
index c6de2fa0a..d2e4634ba 100644
--- a/playbooks/groups/mdapi.yml
+++ b/playbooks/groups/mdapi.yml
@@ -6,31 +6,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - collectd/base
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy mdapi itself
   hosts: mdapi-stg:mdapi
@@ -38,16 +37,16 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - mdapi
-  - { role: plus-plus-service, when: env == "staging" }
+    - mdapi
+    - { role: plus-plus-service, when: env == "staging" }
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg
   hosts: mdapi-stg:mdapi
@@ -55,12 +54,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/memcached.yml b/playbooks/groups/memcached.yml
index 94b764bb2..45ce7a35b 100644
--- a/playbooks/groups/memcached.yml
+++ b/playbooks/groups/memcached.yml
@@ -6,27 +6,27 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - collectd/memcached
-  - sudo
-  - memcached
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - collectd/memcached
+    - sudo
+    - memcached
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/minimal.yml b/playbooks/groups/minimal.yml
index c6b50cdf2..55b76a172 100644
--- a/playbooks/groups/minimal.yml
+++ b/playbooks/groups/minimal.yml
@@ -6,25 +6,25 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - nagios_client
-  - collectd/base
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - nagios_client
+    - collectd/base
+    - sudo
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/mirrormanager.yml b/playbooks/groups/mirrormanager.yml
index 3405653f8..f51fe97a7 100644
--- a/playbooks/groups/mirrormanager.yml
+++ b/playbooks/groups/mirrormanager.yml
@@ -6,30 +6,38 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - { role: openvpn/client, when: env != "staging" and inventory_hostname.startswith('mm-frontend')  }
-  - { role: nfs/client, when: inventory_hostname.startswith('mm-backend01'), mnt_dir: '/srv/pub',  nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - {
+        role: openvpn/client,
+        when: env != "staging" and inventory_hostname.startswith('mm-frontend'),
+      }
+    - {
+        role: nfs/client,
+        when: inventory_hostname.startswith('mm-backend01'),
+        mnt_dir: "/srv/pub",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub",
+      }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Deploy the backend
   hosts: mm-backend:mm-backend-stg
@@ -37,17 +45,17 @@
   gather_facts: True
 
   vars_files:
-  -  /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - mirrormanager/backend
-  - s3-mirror
-  - geoip
+    - mirrormanager/backend
+    - s3-mirror
+    - geoip
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Deploy the crawler
   hosts: mm-crawler:mm-crawler-stg
@@ -55,18 +63,17 @@
   gather_facts: True
 
   vars_files:
-  -  /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - mirrormanager/crawler
-  - { role: rsyncd,
-      when: env != "staging" }
-  - { role: openvpn/client, when: datacenter != "phx2" }
+    - mirrormanager/crawler
+    - { role: rsyncd, when: env != "staging" }
+    - { role: openvpn/client, when: datacenter != "phx2" }
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Deploy the frontend (web-app)
   hosts: mm-frontend:mm-frontend-stg
@@ -74,15 +81,15 @@
   gather_facts: True
 
   vars_files:
-  -  /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - mirrormanager/frontend2
+    - mirrormanager/frontend2
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 # Do this one last, since the mirrormanager user needs to exist so that it can
 # own the fedmsg certs we put in place here.
@@ -92,12 +99,12 @@
   gather_facts: True
 
   vars_files:
-  -  /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/modernpaste.yml b/playbooks/groups/modernpaste.yml
index cbf2a0ff5..1d310bc38 100644
--- a/playbooks/groups/modernpaste.yml
+++ b/playbooks/groups/modernpaste.yml
@@ -11,30 +11,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - fedmsg/base
-  - { role: openvpn/client, when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - fedmsg/base
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: modernpaste-stg:modernpaste
@@ -42,12 +42,12 @@
   gather_facts: True
 
   roles:
-  - modernpaste
+    - modernpaste
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/newcloud-undercloud.yml b/playbooks/groups/newcloud-undercloud.yml
index 6d2244ed3..998ff365b 100644
--- a/playbooks/groups/newcloud-undercloud.yml
+++ b/playbooks/groups/newcloud-undercloud.yml
@@ -6,38 +6,38 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - sudo
-  - undercloud
-  - apache
-
-  - role: httpd/mod_ssl
-
-  - role: httpd/website
-    site_name: controller.fedorainfracloud.org
-    ssl: true
-    sslonly: true
-    certbot: true
-
-  - role: httpd/reverseproxy
-    website: controller.fedorainfracloud.org
-    destname: overcloud
-    balancer_name: controller.fedorainfracloud.org
-    balancer_members: ['192.168.20.51:80']
-    certbot_addhost: undercloud01.fedorainfracloud.org
-    http_not_https_yes_this_is_insecure_and_i_feel_bad: true
+    - base
+    - hosts
+    - sudo
+    - undercloud
+    - apache
+
+    - role: httpd/mod_ssl
+
+    - role: httpd/website
+      site_name: controller.fedorainfracloud.org
+      ssl: true
+      sslonly: true
+      certbot: true
+
+    - role: httpd/reverseproxy
+      website: controller.fedorainfracloud.org
+      destname: overcloud
+      balancer_name: controller.fedorainfracloud.org
+      balancer_members: ["192.168.20.51:80"]
+      certbot_addhost: undercloud01.fedorainfracloud.org
+      http_not_https_yes_this_is_insecure_and_i_feel_bad: true
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/noc.yml b/playbooks/groups/noc.yml
index d41f7c4a9..6bad3b645 100644
--- a/playbooks/groups/noc.yml
+++ b/playbooks/groups/noc.yml
@@ -7,45 +7,43 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - { role: rsyncd, when: datacenter == 'phx2' }
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "nagios{{env_suffix}}.fedoraproject.org"
-    when: datacenter == 'phx2' 
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "nagios-external{{env_suffix}}.fedoraproject.org"
-    when: datacenter != 'phx2' 
-   
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - { role: rsyncd, when: datacenter == 'phx2' }
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "nagios{{env_suffix}}.fedoraproject.org"
+      when: datacenter == 'phx2'
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "nagios-external{{env_suffix}}.fedoraproject.org"
+      when: datacenter != 'phx2'
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy service-specific config (just for production)
   hosts: nagios
@@ -53,22 +51,22 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - { role: dhcp_server, when: datacenter == 'phx2' }
-  - { role: tftp_server, when: datacenter == 'phx2' }
-  - nagios_server
-  - fedmsg/base
+    - { role: dhcp_server, when: datacenter == 'phx2' }
+    - { role: tftp_server, when: datacenter == 'phx2' }
+    - nagios_server
+    - fedmsg/base
 
   tasks:
-  - name: install some packages which aren't in playbooks
-    package: name={{ item }} state=present
-    with_items:
-      - nmap
-      - tcpdump
+    - name: install some packages which aren't in playbooks
+      package: name={{ item }} state=present
+      with_items:
+        - nmap
+        - tcpdump
diff --git a/playbooks/groups/notifs-backend.yml b/playbooks/groups/notifs-backend.yml
index c7794bfb0..b25e36637 100644
--- a/playbooks/groups/notifs-backend.yml
+++ b/playbooks/groups/notifs-backend.yml
@@ -11,33 +11,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - nagios_client
-  - collectd/base
-  - fedmsg/base
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - nagios_client
+    - collectd/base
+    - fedmsg/base
+    - sudo
   # The proxies don't actually need to talk to these hosts so we won't bother
   # putting them on the vpn.
   #- { role: openvpn/client,
   #    when: env != "staging" }
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: notifs-backend:notifs-backend-stg
@@ -45,27 +45,27 @@
   gather_facts: True
 
   pre_tasks:
-  - name: tell nagios to shush w.r.t. the backend since it usually complains
-    nagios: action=downtime minutes=25 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
-    tags:
-    - fedmsgdconfig
-    - notifs/backend
+    - name: tell nagios to shush w.r.t. the backend since it usually complains
+      nagios: action=downtime minutes=25 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
+      tags:
+        - fedmsgdconfig
+        - notifs/backend
 
   roles:
-  - fedmsg/hub
-  - redis
-  - rabbitmq
-  - memcached
-  - notifs/backend
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - fedmsg/hub
+    - redis
+    - rabbitmq
+    - memcached
+    - notifs/backend
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/notifs-web.yml b/playbooks/groups/notifs-web.yml
index 5df6efa35..9b185d69e 100644
--- a/playbooks/groups/notifs-web.yml
+++ b/playbooks/groups/notifs-web.yml
@@ -11,30 +11,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - mod_wsgi
-  - fedmsg/base
-  - notifs/frontend
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - mod_wsgi
+    - fedmsg/base
+    - notifs/frontend
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/nuancier.yml b/playbooks/groups/nuancier.yml
index a311c8a40..a7657d09b 100644
--- a/playbooks/groups/nuancier.yml
+++ b/playbooks/groups/nuancier.yml
@@ -11,31 +11,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up fedmsg
   hosts: nuancier:nuancier-stg
@@ -43,15 +42,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - fedmsg/base
+    - fedmsg/base
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up gluster on stg
   hosts: nuancier-stg
@@ -59,29 +58,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: gluster/server
-    glusterservername: gluster
-    username: "{{ nuancier_gluster_username }}"
-    password: "{{ nuancier_gluster_password }}"
-    owner: root
-    group: root
-    datadir: /srv/glusterfs/nuancier-stg
-
-  - role: gluster/client
-    glusterservername: gluster
-    servers:
-    - nuancier01.stg.phx2.fedoraproject.org
-    - nuancier02.stg.phx2.fedoraproject.org
-    username: "{{ nuancier_gluster_username }}"
-    password: "{{ nuancier_gluster_password }}"
-    owner: apache
-    group: root
-    mountdir: /var/cache/nuancier
+    - role: gluster/server
+      glusterservername: gluster
+      username: "{{ nuancier_gluster_username }}"
+      password: "{{ nuancier_gluster_password }}"
+      owner: root
+      group: root
+      datadir: /srv/glusterfs/nuancier-stg
+
+    - role: gluster/client
+      glusterservername: gluster
+      servers:
+        - nuancier01.stg.phx2.fedoraproject.org
+        - nuancier02.stg.phx2.fedoraproject.org
+      username: "{{ nuancier_gluster_username }}"
+      password: "{{ nuancier_gluster_password }}"
+      owner: apache
+      group: root
+      mountdir: /var/cache/nuancier
 
 - name: set up gluster on prod
   hosts: nuancier
@@ -89,29 +88,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: gluster/server
-    glusterservername: gluster
-    username: "{{ nuancier_gluster_username }}"
-    password: "{{ nuancier_gluster_password }}"
-    owner: root
-    group: root
-    datadir: /srv/glusterfs/nuancier
-
-  - role: gluster/client
-    glusterservername: gluster
-    servers:
-    - nuancier01.phx2.fedoraproject.org
-    - nuancier02.phx2.fedoraproject.org
-    username: "{{ nuancier_gluster_username }}"
-    password: "{{ nuancier_gluster_password }}"
-    owner: apache
-    group: root
-    mountdir: /var/cache/nuancier
+    - role: gluster/server
+      glusterservername: gluster
+      username: "{{ nuancier_gluster_username }}"
+      password: "{{ nuancier_gluster_password }}"
+      owner: root
+      group: root
+      datadir: /srv/glusterfs/nuancier
+
+    - role: gluster/client
+      glusterservername: gluster
+      servers:
+        - nuancier01.phx2.fedoraproject.org
+        - nuancier02.phx2.fedoraproject.org
+      username: "{{ nuancier_gluster_username }}"
+      password: "{{ nuancier_gluster_password }}"
+      owner: apache
+      group: root
+      mountdir: /var/cache/nuancier
 
 - name: deploy nuancier itself
   hosts: nuancier:nuancier-stg
@@ -119,12 +118,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - nuancier
+    - nuancier
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/oci-registry.yml b/playbooks/groups/oci-registry.yml
index ee94221b9..b1417d12b 100644
--- a/playbooks/groups/oci-registry.yml
+++ b/playbooks/groups/oci-registry.yml
@@ -1,5 +1,5 @@
 # create an osbs server
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=oci-registry:oci-registry-stg"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=oci-registry:oci-registry-stg"
 
 - name: make the box be real
   hosts: oci-registry:oci-registry-stg
@@ -7,43 +7,44 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - { role: nfs/client,
-      mnt_dir: '/srv/registry',
-      nfs_src_dir: "oci_registry",
-      when: "env != 'staging' and 'candidate' not in inventory_hostname" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/registry",
+        nfs_src_dir: "oci_registry",
+        when: "env != 'staging' and 'candidate' not in inventory_hostname",
+      }
 
   pre_tasks:
-  - name: Create /srv/registry on staging since it does not use NFS
-    file:
-      path: /srv/registry
-      state: directory
-      owner: root
-      group: root
-      mode: 0755
-    when: "env == 'staging' and 'candidate' not in inventory_hostname"
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - name: Create /srv/registry on staging since it does not use NFS
+      file:
+        path: /srv/registry
+        state: directory
+        owner: root
+        group: root
+        mode: 0755
+      when: "env == 'staging' and 'candidate' not in inventory_hostname"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup docker distribution registry
   hosts: oci-registry:oci-registry-stg
@@ -52,46 +53,33 @@
     - /srv/private/ansible/vars.yml
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
-
   # NOTE: tls is disabled for docker-distribution because we are listening only
   #       on localhost and all external connections will be through httpd which
   #       will be SSL enabled.
   roles:
     - {
-      role: docker-distribution,
+        role: docker-distribution,
         conf_path: "/etc/docker-distribution/registry/config.yml",
-        tls: {
-          enabled: False,
-        },
-        log: {
-          fields: {
-            service: "registry"
-          }
-        },
-        storage: {
-          filesystem: {
-            rootdirectory: "/srv/registry"
-          }
-        },
-        http: {
-          addr: ":5000"
-        }
+        tls: { enabled: False },
+        log: { fields: { service: "registry" } },
+        storage: { filesystem: { rootdirectory: "/srv/registry" } },
+        http: { addr: ":5000" },
       }
 
    # Set up compose-x86-01 to push docker images to the registry
     - {
-      role: push-docker,
+        role: push-docker,
         candidate_registry: "candidate-registry.stg.fedoraproject.org",
         candidate_registry_osbs_username: "{{candidate_registry_osbs_stg_username}}",
         candidate_registry_osbs_password: "{{candidate_registry_osbs_stg_password}}",
-      when: env == "staging",
-      delegate_to: compose-x86-01.phx2.fedoraproject.org
-    }
+        when: env == "staging",
+        delegate_to: compose-x86-01.phx2.fedoraproject.org,
+      }
     - {
-      role: push-docker,
+        role: push-docker,
         candidate_registry: "candidate-registry.fedoraproject.org",
         candidate_registry_osbs_username: "{{candidate_registry_osbs_prod_username}}",
         candidate_registry_osbs_password: "{{candidate_registry_osbs_prod_password}}",
-      when: env == "production",
-      delegate_to: compose-x86-01.phx2.fedoraproject.org
-    }
+        when: env == "production",
+        delegate_to: compose-x86-01.phx2.fedoraproject.org,
+      }
diff --git a/playbooks/groups/odcs.yml b/playbooks/groups/odcs.yml
index ea821fa67..826f47427 100644
--- a/playbooks/groups/odcs.yml
+++ b/playbooks/groups/odcs.yml
@@ -6,29 +6,29 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - collectd/base
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: openvpn on the prod frontend nodes
   hosts: odcs-frontend
@@ -36,15 +36,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - openvpn/client
+    - openvpn/client
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Set up a gluster share on the backend for the frontend
   hosts: odcs:odcs-stg
@@ -52,25 +52,25 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - role: gluster/consolidated
-    gluster_brick_dir: /srv/glusterfs
-    gluster_mount_dir: /srv/odcs
-    gluster_brick_name: odcs
-    gluster_server_group: odcs-stg
-    tags: gluster
-    when: env == 'staging'
-  - role: gluster/consolidated
-    gluster_brick_dir: /srv/glusterfs
-    gluster_mount_dir: /srv/odcs
-    gluster_brick_name: odcs
-    gluster_server_group: odcs
-    tags: gluster
-    when: env != 'staging'
+    - role: gluster/consolidated
+      gluster_brick_dir: /srv/glusterfs
+      gluster_mount_dir: /srv/odcs
+      gluster_brick_name: odcs
+      gluster_server_group: odcs-stg
+      tags: gluster
+      when: env == 'staging'
+    - role: gluster/consolidated
+      gluster_brick_dir: /srv/glusterfs
+      gluster_mount_dir: /srv/odcs
+      gluster_brick_name: odcs
+      gluster_server_group: odcs
+      tags: gluster
+      when: env != 'staging'
 
 - name: Set up odcs frontend service
   hosts: odcs-frontend:odcs-frontend-stg
@@ -78,25 +78,25 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - mod_wsgi
-  - fedmsg/base
-  - odcs/frontend
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji'
-    nfs_src_dir: 'fedora_koji'
-    when: env != 'staging'
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji_prod'
-    nfs_src_dir: 'fedora_koji'
-    when: env == 'staging'
+    - mod_wsgi
+    - fedmsg/base
+    - odcs/frontend
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji"
+      nfs_src_dir: "fedora_koji"
+      when: env != 'staging'
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji_prod"
+      nfs_src_dir: "fedora_koji"
+      when: env == 'staging'
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Set up odcs backend service
   hosts: odcs-backend:odcs-backend-stg
@@ -104,22 +104,22 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: odcs/backend
-  - role: fedmsg/base
-  - role: keytab/service
-    service: odcs
-    owner_user: odcs
-    owner_group: odcs
-    host: "odcs{{env_suffix}}.fedoraproject.org"
-  - role: fedmsg/hub
+    - role: odcs/backend
+    - role: fedmsg/base
+    - role: keytab/service
+      service: odcs
+      owner_user: odcs
+      owner_group: odcs
+      host: "odcs{{env_suffix}}.fedoraproject.org"
+    - role: fedmsg/hub
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Set up /mnt/koji on both the frontend and backend
   hosts: odcs:odcs-stg
@@ -127,25 +127,25 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji'
-    nfs_src_dir: 'fedora_koji'
-    when: env != 'staging'
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji"
+      nfs_src_dir: "fedora_koji"
+      when: env != 'staging'
 
-  # In staging, we mount fedora_koji as read only (see nfs_mount_opts)
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji_prod'
-    nfs_src_dir: 'fedora_koji'
-    when: env == 'staging'
+    # In staging, we mount fedora_koji as read only (see nfs_mount_opts)
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji_prod"
+      nfs_src_dir: "fedora_koji"
+      when: env == 'staging'
 
   post_tasks:
-  - file: src=/mnt/fedora_koji/koji dest=/mnt/koji state=link
-    tags: nfs/client
+    - file: src=/mnt/fedora_koji/koji dest=/mnt/koji state=link
+      tags: nfs/client
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/openqa-workers.yml b/playbooks/groups/openqa-workers.yml
index 9601ebde5..cb5cb1e40 100644
--- a/playbooks/groups/openqa-workers.yml
+++ b/playbooks/groups/openqa-workers.yml
@@ -4,28 +4,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
 
   roles:
-   - { role: base, tags: ['base'] }
-   - { role: rkhunter, tags: ['rkhunter'] }
-   - { role: nagios_client, tags: ['nagios_client'] }
-   - { role: hosts, tags: ['hosts']}
-   - { role: fas_client, tags: ['fas_client'] }
-   - { role: collectd/base, tags: ['collectd_base'] }
-   - { role: sudo, tags: ['sudo'] }
-   - { role: openqa/worker, tags: ['openqa_worker'] }
-   - apache
+    - { role: base, tags: ["base"] }
+    - { role: rkhunter, tags: ["rkhunter"] }
+    - { role: nagios_client, tags: ["nagios_client"] }
+    - { role: hosts, tags: ["hosts"] }
+    - { role: fas_client, tags: ["fas_client"] }
+    - { role: collectd/base, tags: ["collectd_base"] }
+    - { role: sudo, tags: ["sudo"] }
+    - { role: openqa/worker, tags: ["openqa_worker"] }
+    - apache
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/openqa.yml b/playbooks/groups/openqa.yml
index 153b93995..b150ae854 100644
--- a/playbooks/groups/openqa.yml
+++ b/playbooks/groups/openqa.yml
@@ -6,31 +6,34 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-   - { role: base, tags: ['base'] }
-   - { role: rkhunter, tags: ['rkhunter'] }
-   - { role: nagios_client, tags: ['nagios_client'] }
-   - { role: hosts, tags: ['hosts']}
-   - { role: fas_client, tags: ['fas_client'] }
-   - { role: collectd/base, tags: ['collectd_base'] }
-   - { role: sudo, tags: ['sudo'] }
-   - { role: openvpn/client,
-       when: deployment_type == "prod", tags: ['openvpn_client'] }
-   - apache
+    - { role: base, tags: ["base"] }
+    - { role: rkhunter, tags: ["rkhunter"] }
+    - { role: nagios_client, tags: ["nagios_client"] }
+    - { role: hosts, tags: ["hosts"] }
+    - { role: fas_client, tags: ["fas_client"] }
+    - { role: collectd/base, tags: ["collectd_base"] }
+    - { role: sudo, tags: ["sudo"] }
+    - {
+        role: openvpn/client,
+        when: deployment_type == "prod",
+        tags: ["openvpn_client"],
+      }
+    - apache
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure openQA
   hosts: openqa:openqa-stg
@@ -38,68 +41,68 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-# relvalconsumer and autocloudreporter aren't particularly related
-# to openQA in any way, we just put those role on these boxes. There's
-# nowhere more obviously correct for rvc and acr should be on an
-# Autocloud box but I don't know if they're authed for RDB.
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  # relvalconsumer and autocloudreporter aren't particularly related
+  # to openQA in any way, we just put those role on these boxes. There's
+  # nowhere more obviously correct for rvc and acr should be on an
+  # Autocloud box but I don't know if they're authed for RDB.
   roles:
-   - { role: openqa/server, tags: ['openqa_server'] }
-   - { role: openqa/dispatcher, tags: ['openqa_dispatcher'] }
-   - { role: check-compose, tags: ['check-compose'] }
-   - { role: fedmsg/base, tags: ['fedmsg_base', 'fedmsg'] }
-   - { role: fedmsg/hub, tags: ['fedmsg_hub', 'fedmsg'] }
-   - { role: relvalconsumer, tags: ['relvalconsumer'] }
-   - { role: autocloudreporter, tags: ['autocloudreporter'] }
+    - { role: openqa/server, tags: ["openqa_server"] }
+    - { role: openqa/dispatcher, tags: ["openqa_dispatcher"] }
+    - { role: check-compose, tags: ["check-compose"] }
+    - { role: fedmsg/base, tags: ["fedmsg_base", "fedmsg"] }
+    - { role: fedmsg/hub, tags: ["fedmsg_hub", "fedmsg"] }
+    - { role: relvalconsumer, tags: ["relvalconsumer"] }
+    - { role: autocloudreporter, tags: ["autocloudreporter"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up openQA server data NFS mounts (staging)
   hosts: openqa-stg
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: nfs/client
-    mnt_dir: '/var/lib/openqa/testresults'
-    nfs_src_dir: 'fedora_openqa_stg/testresults'
-    nfs_mount_opts: 'rw,bg,nfsvers=3'
-    tags: ['nfs_client']
-  - role: nfs/client
-    mnt_dir: '/var/lib/openqa/images'
-    nfs_src_dir: 'fedora_openqa_stg/images'
-    nfs_mount_opts: 'rw,bg,nfsvers=3'
-    tags: ['nfs_client']
+    - role: nfs/client
+      mnt_dir: "/var/lib/openqa/testresults"
+      nfs_src_dir: "fedora_openqa_stg/testresults"
+      nfs_mount_opts: "rw,bg,nfsvers=3"
+      tags: ["nfs_client"]
+    - role: nfs/client
+      mnt_dir: "/var/lib/openqa/images"
+      nfs_src_dir: "fedora_openqa_stg/images"
+      nfs_mount_opts: "rw,bg,nfsvers=3"
+      tags: ["nfs_client"]
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: set up openQA server data NFS mounts (prod)
   hosts: openqa
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: nfs/client
-    mnt_dir: '/var/lib/openqa/testresults'
-    nfs_src_dir: 'fedora_openqa/testresults'
-    nfs_mount_opts: 'rw,bg,nfsvers=3'
-    tags: ['nfs_client']
-  - role: nfs/client
-    mnt_dir: '/var/lib/openqa/images'
-    nfs_src_dir: 'fedora_openqa/images'
-    nfs_mount_opts: 'rw,bg,nfsvers=3'
-    tags: ['nfs_client']
+    - role: nfs/client
+      mnt_dir: "/var/lib/openqa/testresults"
+      nfs_src_dir: "fedora_openqa/testresults"
+      nfs_mount_opts: "rw,bg,nfsvers=3"
+      tags: ["nfs_client"]
+    - role: nfs/client
+      mnt_dir: "/var/lib/openqa/images"
+      nfs_src_dir: "fedora_openqa/images"
+      nfs_mount_opts: "rw,bg,nfsvers=3"
+      tags: ["nfs_client"]
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/openstack-compute-nodes.yml b/playbooks/groups/openstack-compute-nodes.yml
index cba763394..c9bb55b26 100644
--- a/playbooks/groups/openstack-compute-nodes.yml
+++ b/playbooks/groups/openstack-compute-nodes.yml
@@ -1,30 +1,29 @@
 ---
-
-- name:  deploy Open Stack compute nodes
+- name: deploy Open Stack compute nodes
   hosts: openstack-compute
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/RedHat.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - "/srv/private/ansible/files/openstack/passwords.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/RedHat.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - "/srv/private/ansible/files/openstack/passwords.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - sudo
-  - cloud_compute
+    - base
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - sudo
+    - cloud_compute
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/osbs-cluster.yml b/playbooks/groups/osbs-cluster.yml
index 9b89287da..f5b62511b 100644
--- a/playbooks/groups/osbs-cluster.yml
+++ b/playbooks/groups/osbs-cluster.yml
@@ -1,10 +1,10 @@
 # create an osbs server
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-control"
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-control-stg"
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-nodes:osbs-masters"
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-nodes-stg:osbs-masters-stg"
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-aarch64-masters-stg"
-- import_playbook:  "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-aarch64-masters"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-control"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-control-stg"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-nodes:osbs-masters"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-nodes-stg:osbs-masters-stg"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-aarch64-masters-stg"
+- import_playbook: "/srv/web/infra/ansible/playbooks/include/virt-create.yml myhosts=osbs-aarch64-masters"
 
 - name: make the box be real
   hosts: osbs-control:osbs-masters:osbs-nodes:osbs-control-stg:osbs-masters-stg:osbs-nodes-stg:osbs-aarch64-masters-stg:osbs-aarch64-masters
@@ -17,7 +17,7 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
     - base
@@ -34,7 +34,7 @@
     - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: OSBS control hosts pre-req setup
   hosts: osbs-control:osbs-control-stg
@@ -101,7 +101,6 @@
         src: "{{private}}/files/httpd/osbs-{{env}}.htpasswd"
         dest: /etc/origin/htpasswd
 
-
 - name: Setup cluster hosts pre-reqs
   hosts: osbs-masters-stg:osbs-nodes-stg:osbs-masters:osbs-nodes:osbs-aarch64-masters-stg:osbs-aarch64-masters
   tags:
@@ -161,7 +160,7 @@
     - name: copy docker-storage-setup config
       copy:
         src: "{{files}}/osbs/docker-storage-setup"
-        dest:  "/etc/sysconfig/docker-storage-setup"
+        dest: "/etc/sysconfig/docker-storage-setup"
 
 - name: Deploy kerberose keytab to cluster hosts
   hosts: osbs-masters-stg:osbs-nodes-stg:osbs-masters:osbs-nodes:osbs-aarch64-masters-stg:osbs-aarch64-masters
@@ -176,18 +175,18 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: keytab/service
-    owner_user: root
-    owner_group: root
-    service: osbs
-    host: "osbs.fedoraproject.org"
-    when: env == "production"
-  - role: keytab/service
-    owner_user: root
-    owner_group: root
-    service: osbs
-    host: "osbs.stg.fedoraproject.org"
-    when: env == "staging"
+    - role: keytab/service
+      owner_user: root
+      owner_group: root
+      service: osbs
+      host: "osbs.fedoraproject.org"
+      when: env == "production"
+    - role: keytab/service
+      owner_user: root
+      owner_group: root
+      service: osbs
+      host: "osbs.stg.fedoraproject.org"
+      when: env == "staging"
 
 - name: Deploy OpenShift Cluster x86_64
   hosts: osbs-control:osbs-control-stg
@@ -227,7 +226,7 @@
       openshift_ansible_python_interpreter: "/usr/bin/python3"
       openshift_ansible_use_crio: false
       openshift_ansible_crio_only: false
-      tags: ['openshift-cluster-x86','ansible-ansible-openshift-ansible']
+      tags: ["openshift-cluster-x86", "ansible-ansible-openshift-ansible"]
 
 - name: Deploy OpenShift Cluster aarch64
   hosts: osbs-control:osbs-control-stg
@@ -267,7 +266,7 @@
       openshift_ansible_python_interpreter: "/usr/bin/python3"
       openshift_ansible_use_crio: false
       openshift_ansible_crio_only: false
-      tags: ['openshift-cluster-aarch','ansible-ansible-openshift-ansible']
+      tags: ["openshift-cluster-aarch", "ansible-ansible-openshift-ansible"]
 
 - name: Setup OSBS requirements for OpenShift cluster hosts
   hosts: osbs-masters-stg:osbs-nodes-stg:osbs-masters:osbs-nodes
@@ -318,33 +317,32 @@
     - "/srv/private/ansible/vars.yml"
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   roles:
-  - role: osbs-secret
-    osbs_namespace: "{{ osbs_worker_namespace }}"
-    osbs_secret_name: odcs-oidc-secret
-    osbs_secret_files:
-    - source: "{{ private }}/files/osbs/{{ env }}/odcs-oidc-token"
-      dest: token
+    - role: osbs-secret
+      osbs_namespace: "{{ osbs_worker_namespace }}"
+      osbs_secret_name: odcs-oidc-secret
+      osbs_secret_files:
+        - source: "{{ private }}/files/osbs/{{ env }}/odcs-oidc-token"
+          dest: token
   tags:
     - osbs-worker-namespace
 
 - name: Create orchestrator namespace
   hosts: osbs-masters-stg[0]:osbs-masters[0]
   roles:
-  - role: osbs-namespace
-    osbs_orchestrator: true
-    osbs_worker_clusters: "{{ osbs_conf_worker_clusters }}"
-    osbs_cpu_limitrange: "{{ osbs_orchestrator_cpu_limitrange }}"
-    osbs_nodeselector: "{{ osbs_orchestrator_default_nodeselector|default('') }}"
-    osbs_sources_command: "{{ osbs_conf_sources_command }}"
-    osbs_readwrite_users: "{{ osbs_conf_readwrite_users }}"
-    osbs_service_accounts: "{{ osbs_conf_service_accounts }}"
-    koji_use_kerberos: true
-    koji_kerberos_keytab: "FILE:/etc/krb5.osbs_{{ osbs_url }}.keytab"
-    koji_kerberos_principal: "osbs/{{osbs_url}}@{{ ipa_realm }}"
+    - role: osbs-namespace
+      osbs_orchestrator: true
+      osbs_worker_clusters: "{{ osbs_conf_worker_clusters }}"
+      osbs_cpu_limitrange: "{{ osbs_orchestrator_cpu_limitrange }}"
+      osbs_nodeselector: "{{ osbs_orchestrator_default_nodeselector|default('') }}"
+      osbs_sources_command: "{{ osbs_conf_sources_command }}"
+      osbs_readwrite_users: "{{ osbs_conf_readwrite_users }}"
+      osbs_service_accounts: "{{ osbs_conf_service_accounts }}"
+      koji_use_kerberos: true
+      koji_kerberos_keytab: "FILE:/etc/krb5.osbs_{{ osbs_url }}.keytab"
+      koji_kerberos_principal: "osbs/{{osbs_url}}@{{ ipa_realm }}"
   tags:
     - osbs-orchestrator-namespace
 
-
 - name: Add the orchestrator labels to the nodes
   hosts: osbs-masters-stg[0]:osbs-masters[0]
   tags:
@@ -391,11 +389,11 @@
 - name: setup reactor config secret in orchestrator namespace
   hosts: osbs-masters-stg[0]:osbs-masters[0]
   roles:
-  - role: osbs-secret
-    osbs_secret_name: reactor-config-secret
-    osbs_secret_files:
-    - source: "/tmp/{{ osbs_namespace }}-reactor-config-secret.yml"
-      dest: config.yaml
+    - role: osbs-secret
+      osbs_secret_name: reactor-config-secret
+      osbs_secret_files:
+        - source: "/tmp/{{ osbs_namespace }}-reactor-config-secret.yml"
+          dest: config.yaml
   tags:
     - osbs-orchestrator-namespace
 
@@ -406,23 +404,22 @@
     - "/srv/private/ansible/vars.yml"
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   roles:
-  - role: osbs-secret
-    osbs_secret_name: odcs-oidc-secret
-    osbs_secret_files:
-    - source: "{{ private }}/files/osbs/{{ env }}/odcs-oidc-token"
-      dest: token
+    - role: osbs-secret
+      osbs_secret_name: odcs-oidc-secret
+      osbs_secret_files:
+        - source: "{{ private }}/files/osbs/{{ env }}/odcs-oidc-token"
+          dest: token
   tags:
     - osbs-orchestrator-namespace
 
-
 - name: setup client config secret in orchestrator namespace
   hosts: osbs-masters-stg[0]:osbs-masters[0]
   roles:
-  - role: osbs-secret
-    osbs_secret_name: client-config-secret
-    osbs_secret_files:
-    - source: "/tmp/{{ osbs_namespace }}-client-config-secret.conf"
-      dest: osbs.conf
+    - role: osbs-secret
+      osbs_secret_name: client-config-secret
+      osbs_secret_files:
+        - source: "/tmp/{{ osbs_namespace }}-client-config-secret.conf"
+          dest: osbs.conf
   tags:
     - osbs-orchestrator-namespace
 
@@ -448,18 +445,18 @@
     - "/srv/private/ansible/vars.yml"
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   roles:
-  - role: osbs-secret
-    osbs_secret_name: x86-64-orchestrator
-    osbs_secret_files:
-    - source: "/tmp/.orchestator-token-x86_64"
-      dest: token
+    - role: osbs-secret
+      osbs_secret_name: x86-64-orchestrator
+      osbs_secret_files:
+        - source: "/tmp/.orchestator-token-x86_64"
+          dest: token
 
   post_tasks:
-  - name: Delete the temporary secret file
-    local_action: >
-      file
-      state=absent
-      path="/tmp/.orchestator-token-x86_64"
+    - name: Delete the temporary secret file
+      local_action: >
+        file
+        state=absent
+        path="/tmp/.orchestator-token-x86_64"
   tags:
     - osbs-orchestrator-namespace
 
@@ -485,18 +482,18 @@
     - "/srv/private/ansible/vars.yml"
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   roles:
-  - role: osbs-secret
-    osbs_secret_name: aarch64-orchestrator
-    osbs_secret_files:
-    - source: "/tmp/.orchestator-token-aarch64"
-      dest: token
+    - role: osbs-secret
+      osbs_secret_name: aarch64-orchestrator
+      osbs_secret_files:
+        - source: "/tmp/.orchestator-token-aarch64"
+          dest: token
 
   post_tasks:
-  - name: Delete the temporary secret file
-    local_action: >
-      file
-      state=absent
-      path="/tmp/.orchestator-token-aarch64"
+    - name: Delete the temporary secret file
+      local_action: >
+        file
+        state=absent
+        path="/tmp/.orchestator-token-aarch64"
 
   tags:
     - osbs-orchestrator-namespace
@@ -526,12 +523,12 @@
         mode=0400
 
   roles:
-  - role: osbs-secret
-    osbs_secret_name: "v2-registry-dockercfg"
-    osbs_secret_type: kubernetes.io/dockercfg
-    osbs_secret_files:
-    - source: "/tmp/.dockercfg"
-      dest: .dockercfg
+    - role: osbs-secret
+      osbs_secret_name: "v2-registry-dockercfg"
+      osbs_secret_type: kubernetes.io/dockercfg
+      osbs_secret_files:
+        - source: "/tmp/.dockercfg"
+          dest: .dockercfg
 
   post_tasks:
     - name: Delete the temporary secret file
@@ -570,8 +567,8 @@
       osbs_secret_name: "v2-registry-dockercfg"
       osbs_secret_type: kubernetes.io/dockercfg
       osbs_secret_files:
-      - source: "/tmp/.dockercfg"
-        dest: .dockercfg
+        - source: "/tmp/.dockercfg"
+          dest: .dockercfg
 
   post_tasks:
     - name: Delete the temporary secret file
@@ -617,8 +614,7 @@
     - name: enable nrpe for monitoring (noc01)
       iptables: action=insert chain=INPUT destination_port=5666 protocol=tcp source=10.5.126.41 state=present jump=ACCEPT
       tags:
-      - iptables
-
+        - iptables
 
 - name: post-install osbs tasks
   hosts: osbs-nodes-stg:osbs-nodes:osbs-aarch64-nodes-stg:osbs-aarch64-nodes
@@ -639,7 +635,6 @@
     koji_builder_user: dockerbuilder
     osbs_builder_user: builder
 
-
   handlers:
     - name: Remove the previous buildroot image
       docker_image:
@@ -662,7 +657,7 @@
     - name: enable nrpe for monitoring (noc01)
       iptables: action=insert chain=INPUT destination_port=5666 protocol=tcp source=10.5.126.41 state=present jump=ACCEPT
       tags:
-      - iptables
+        - iptables
 
     - name: copy docker iptables script
       copy:
@@ -670,7 +665,7 @@
         dest: /usr/local/bin/fix-docker-iptables
         mode: 0755
       tags:
-      - iptables
+        - iptables
       notify:
         - restart and reload docker service
 
@@ -679,7 +674,7 @@
         src: "{{files}}/osbs/docker.firewall.service"
         dest: /etc/systemd/system/docker.service.d/firewall.conf
       tags:
-      - docker
+        - docker
       notify:
         - restart and reload docker service
 
diff --git a/playbooks/groups/overcloud-config.yml b/playbooks/groups/overcloud-config.yml
index 4734e55f2..b9a625539 100644
--- a/playbooks/groups/overcloud-config.yml
+++ b/playbooks/groups/overcloud-config.yml
@@ -4,564 +4,626 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-   - /srv/web/infra/ansible/vars/newcloud.yml
-   - /srv/private/ansible/files/openstack/overcloudrc.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/newcloud.yml
+    - /srv/private/ansible/files/openstack/overcloudrc.yml
 
   tasks:
-
-  - name: setup auth/connection vars
-    set_fact:
-      os_cloud:
-        auth:
+    - name: setup auth/connection vars
+      set_fact:
+        os_cloud:
+          auth:
             auth_url: http://192.168.20.51:5000//v3
             username: admin
             password: "{{ OS_PASSWORD }}"
             project_name: admin
             project_domain_name: default
             user_domain_name: default
-        auth_type: v3password
-        region_name: regionOne
-        auth_version: 3
-        identity_api_version: 3
-
-  - name: create non-standard flavor
-    os_nova_flavor:
-      cloud: "{{ os_cloud }}"
-      name: "{{item.name}}"
-      ram: "{{item.ram}}"
-      disk: "{{item.disk}}"
-      vcpus: "{{item.vcpus}}"
-      swap: "{{item.swap}}"
-      ephemeral: 0
-    with_items:
-      - { name: m1.builder, ram: 5120, disk: 50, vcpus: 2, swap: 5120 }
-      - { name: ms2.builder, ram: 5120, disk: 20, vcpus: 2, swap: 100000 }
-      - { name: m2.prepare_builder, ram: 5000, disk: 16, vcpus: 2, swap: 0 }
-      # same as m.* but with swap
-      - { name: ms1.tiny, ram: 512, disk: 1, vcpus: 1, swap: 512 }
-      - { name: ms1.small, ram: 2048, disk: 20, vcpus: 1, swap: 2048 }
-      - { name: ms1.medium, ram: 4096, disk: 40, vcpus: 2, swap: 4096 }
-      - { name: ms1.medium.bigswap, ram: 4096, disk: 40, vcpus: 2, swap: 40000 }
-      - { name: ms1.large, ram: 8192, disk: 50, vcpus: 4, swap: 4096 }
-      - { name: ms1.xlarge, ram: 16384, disk: 160, vcpus: 8, swap: 16384 }
-      # inspired by http://aws.amazon.com/ec2/instance-types/
-      - { name: c4.large, ram: 3072, disk: 0, vcpus: 2, swap: 0 }
-      - { name: c4.xlarge, ram: 7168, disk: 0, vcpus: 4, swap: 0 }
-      - { name: c4.2xlarge, ram: 14336, disk: 0, vcpus: 8, swap: 0 }
-      - { name: r3.large, ram: 16384, disk: 32, vcpus: 2, swap: 16384 }
-
-  - name: download images
-    get_url:
-      dest: "/var/tmp/{{ item.imagename }}"
-      url: "{{ item.url }}"
-    with_items:
-       - { imagename: Fedora-Cloud-Base-28-1.1.ppc64le.qcow2, 
-           url: "https://dl.fedoraproject.org/pub/fedora-secondary/releases/28/Cloud/ppc64le/images/Fedora-Cloud-Base-28-1.1.ppc64le.qcow2"; }
-       - { imagename: Fedora-Cloud-Base-28-1.1.x86_64.qcow2, 
-           url: "https://dl.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"; }
-       - { imagename: Fedora-Cloud-Base-29-1.2.x86_64.qcow2, 
-           url: "https://dl.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2"; }
-
-  - name: Add the images
-    os_image:
-      cloud: "{{ os_cloud }}"
-      name: "{{ item.name }}"
-      disk_format: qcow2
-      is_public: True
-      filename: "{{ item.filename }}"
-    with_items:
-      - { name: Fedora-Cloud-Base-28-1.1.ppc64le, filename: /var/tmp/Fedora-Cloud-Base-28-1.1.ppc64le.qcow2 }
-      - { name: Fedora-Cloud-Base-28-1.1.x86_64, filename: /var/tmp/Fedora-Cloud-Base-28-1.1.x86_64.qcow2 }
-      - { name: Fedora-Cloud-Base-29-1.2.x86_64, filename: /var/tmp/Fedora-Cloud-Base-29-1.2.x86_64.qcow2 }
-
-  - name: Create tenants
-    os_project:
-      cloud: "{{ os_cloud }}"
-      name: "{{ item.name }}"
-      description: "{{ item.desc }}"
-      state: present
-      enabled: True
-      domain_id: default
-    with_items:
-      - { name: persistent, desc: "persistent instances" }
-      - { name: qa, desc: "developmnet and test-day applications of QA" }
-      - { name: transient, desc: 'transient instances' }
-      - { name: infrastructure, desc: "one off instances for infrastructure folks to test or check something (proof-of-concept)" }
-      - { name: copr, desc: 'Space for Copr builders' }
-      - { name: coprdev, desc: 'Development version of Copr' }
-      - { name: pythonbots, desc: 'project for python build bot users - twisted, etc' }
-      - { name: openshift, desc: 'Tenant for openshift deployment' }
-      - { name: maintainertest, desc: 'Tenant for maintainer test machines' }
-      - { name: aos-ci-cd, desc: 'Tenant for aos-ci-cd' }
-
-  ##### NETWORK ####
-  # http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.html
-  #
-  # NEW:
-  # network is 38.145.48.0/23
-  # gateway is 38.145.49.254
-  # leave 38.145.49.250-253 unused for dcops
-  # leave 38.145.49.231-249 unused for future testing
-  #
-  # OLD:
-  # external network is a class C: 209.132.184.0/24
-  # 209.132.184.1  to .25 - reserved for hardware.
-  # 209.132.184.26 to .30 - reserver for test cloud external ips
-  # 209.132.184.31 to .69 - icehouse cloud
-  # 209.132.184.70 to .89 - reserved for arm03 SOCs
-  # 209.132.184.90 to .251 - folsom cloud
-  #
-  - name: Create an external network
-    os_network:
-      cloud: "{{ os_cloud }}"
-      name: external
-      provider_network_type: flat
-      provider_physical_network: datacentre
-      external: true
-      shared: true
-    register: EXTERNAL_ID
-  - name: Create an external subnet
-    os_subnet:
-      cloud: "{{ os_cloud }}"
-      name: external-subnet
-      network_name: external
-      cidr: 38.145.48.0/23
-      allocation_pool_start: 38.145.48.1
-      allocation_pool_end: 38.145.49.230
-      gateway_ip: 38.145.49.254
-      enable_dhcp: false
-    register: EXTERNAL_SUBNET_ID
-
-  #- shell: source /root/keystonerc_admin && nova floating-ip-create external
-  #  when: packstack_sucessfully_finished.stat.exists == False
-
-  # 172.16.0.1/16 -- 172.22.0.1/16 - free (can be split to /20)
-  # 172.23.0.1/16 - free (but used by old cloud)
-  # 172.24.0.1/24 - RESERVED it is used internally for OS
-  # 172.24.1.0/24 -- 172.24.255.0/24 - likely free (?)
-  # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
-  # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
-  # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
-  # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
-  # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
-  # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
-  # 172.25.96.1/20 - cloudsig (172.25.96.1 - 172.25.111.254)
-  # 172.25.112.1/20 - qa (172.25.112.1 - 172.25.127.254)
-  # 172.25.128.1/20 - pythonbots (172.25.128.1 - 172.25.143.254)
-  # 172.25.144.1/20 - coprdev (172.25.144.1 - 172.25.159.254)
-  # 172.25.160.1/20 -- 172.25.240.1/20 - free
-  # 172.26.0.1/16 -- 172.31.0.1/16 - free (can be split to /20)
-
-  - name: Create a router for all tenants
-    os_router:
-      cloud: "{{ os_cloud }}"
-      project: "{{ item }}"
-      name: "ext-to-{{ item }}"
-      network: "external"
-    with_items: "{{all_projects}}"
-  - name: Create a private network for all tenants
-    os_network:
-      cloud: "{{ os_cloud }}"
-      project: "{{ item.name }}"
-      name: "{{ item.name }}-net"
-      shared: "{{ item.shared }}"
-    with_items:
-      - { name: copr, shared: true }
-      - { name: coprdev, shared: true }
-      - { name: infrastructure, shared: false }
-      - { name: persistent, shared: false }
-      - { name: pythonbots, shared: false }
-      - { name: transient, shared: false }
-      - { name: openshift, shared: false }
-      - { name: maintainertest, shared: false }
-      - { name: aos-ci-cd, shared: false }
-  - name: Create a subnet for all tenants
-    os_subnet:
-      cloud: "{{ os_cloud }}"
-      project: "{{ item.name }}"
-      network_name: "{{ item.name }}-net"
-      name: "{{ item.name }}-subnet"
-      cidr: "{{ item.cidr }}"
-      gateway_ip: "{{ item.gateway }}"
-      dns_nameservers: "66.35.62.163,140.211.169.201"
-    with_items:
-      - { name: copr, cidr: '172.25.80.1/20', gateway: '172.25.80.1' }
-      - { name: coprdev, cidr: '172.25.144.1/20', gateway: '172.25.144.1' }
-      - { name: infrastructure, cidr: '172.25.16.1/20', gateway: '172.25.16.1' }
-      - { name: persistent, cidr: '172.25.32.1/20', gateway: '172.25.32.1' }
-      - { name: pythonbots, cidr: '172.25.128.1/20', gateway: '172.25.128.1' }
-      - { name: transient, cidr: '172.25.48.1/20', gateway: '172.25.48.1' }
-      - { name: openshift, cidr: '172.25.160.1/20', gateway: '172.25.160.1' }
-      - { name: maintainertest, cidr: '172.25.176.1/20', gateway: '172.25.176.1' }
-      - { name: aos-ci-cd, cidr: '172.25.180.1/20', gateway: '172.25.180.1' }
-
-  - name: "Connect routers interface to the TENANT-subnet"
-    os_router:
-      cloud: "{{ os_cloud }}"
-      project: "{{ item }}"
-      name: "ext-to-{{ item }}"
-      interfaces: ["{{ item }}-subnet"]
-    with_items: "{{all_projects}}"
-
-  #################
-  # Security Groups
-  ################
-
-  - name: "Change the quota of quotas"
-    os_quota:
-      cloud: "{{os_cloud}}"
-      name: "{{item}}"
-      security_group: 100
-      security_group_rule: 100
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'ssh-anywhere' security group"
-    os_security_group:
-      cloud: "{{ os_cloud }}"
-      name: 'ssh-anywhere-{{item}}'
-      description: "allow ssh from anywhere"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Add rules to security group ( ssh-anywhere )"
-    os_security_group_rule:
-          security_group: 'ssh-anywhere-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "22"
-          port_range_max: "22"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Allow nagios checks"
-    os_security_group:
-      cloud: "{{ os_cloud }}"
-      state: "present"
-      name: 'allow-nagios-{{item}}'
-      description: "allow nagios checks"
-      project: "{{item}}"
-    with_items:
-    - persistent
-
-  - name: Add rule to new security group (nagios)
-    os_security_group_rule:
-          security_group: 'allow-nagios-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "5666"
-          port_range_max: "5666"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "209.132.181.35/32"
-          project: "{{item}}"
-    with_items:
-    - persistent
-
-  - name: "Create 'ssh-from-persistent' security group"
-    os_security_group:
-      cloud: "{{ os_cloud }}"
-      state: "present"
-      name: 'ssh-from-persistent-{{item}}'
-      description: "allow ssh from persistent"
-      project: "{{item}}"
-    with_items:
-      - copr
-      - coprdev
-
-  - name: add rule to new security group (ssh-from-persistent)
-    os_security_group_rule:
-          security_group: 'ssh-from-persistent-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "22"
-          port_range_max: "22"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.32.1/20"
-          project: "{{item}}"
-    with_items:
-      - copr
-      - coprdev
-
-  - name: "Create 'ssh-internal' security group"
-    os_security_group:
-      state: "present"
-      cloud: "{{ os_cloud }}"
-      name: 'ssh-internal-{{item.name}}'
-      description: "allow ssh from {{item.name}}-network"
-      project: "{{ item.name }}"
-    with_items:
-      - { name: copr, prefix: '172.25.80.1/20' }
-      - { name: coprdev, prefix: '172.25.80.1/20' }
-      - { name: infrastructure, prefix: "172.25.16.1/20" }
-      - { name: persistent, prefix: "172.25.32.1/20" }
-      - { name: pythonbots, prefix: '172.25.128.1/20' }
-      - { name: transient, prefix: '172.25.48.1/20' }
-      - { name: openshift, prefix: '172.25.160.1/20' }
-      - { name: maintainertest, prefix: '172.25.180.1/20' }
-      - { name: aos-ci-cd, prefix: '172.25.200.1/20' }
-
-  - name: add rule to new security group (ssh-internal)
-    os_security_group_rule:
-          security_group: 'ssh-internal-{{item.name}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "22"
-          port_range_max: "22"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "{{ item.prefix }}"
-          project: "{{item.name}}"
-    with_items:
-      - { name: copr, prefix: '172.25.80.1/20' }
-      - { name: coprdev, prefix: '172.25.80.1/20' }
-      - { name: infrastructure, prefix: "172.25.16.1/20" }
-      - { name: persistent, prefix: "172.25.32.1/20" }
-      - { name: pythonbots, prefix: '172.25.128.1/20' }
-      - { name: transient, prefix: '172.25.48.1/20' }
-      - { name: openshift, prefix: '172.25.160.1/20' }
-      - { name: maintainertest, prefix: '172.25.180.1/20' }
-      - { name: aos-ci-cd, prefix: '172.25.200.1/20' }
-
-  - name: "Create 'web-80-anywhere' security group"
-    os_security_group:
-      state: "present"
-      name: 'web-80-anywhere-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow web-80 from anywhere"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (web-80-anywhere)
-    os_security_group_rule:
-          security_group: 'web-80-anywhere-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "80"
-          port_range_max: "80"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'web-443-anywhere' security group"
-    os_security_group:
-      state: "present"
-      name: 'web-443-anywhere-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow web-443 from anywhere"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (web-443-anywhere)
-    os_security_group_rule:
-          security_group: 'web-443-anywhere-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "443"
-          port_range_max: "443"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'oci-registry-5000-anywhere' security group"
-    os_security_group:
-      state: "present"
-      name: 'oci-registry-5000-anywhere-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow oci-registry-5000 from anywhere"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (oci-registry-5000-anywhere)
-    os_security_group_rule:
-          security_group: 'oci-registry-5000-anywhere-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "5000"
-          port_range_max: "5000"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'wide-open' security group"
-    os_security_group:
-      state: "present"
-      name: 'wide-open-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow anything from anywhere"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (wide-open/tcp)
-    os_security_group_rule:
-          security_group: 'wide-open-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "1"
-          port_range_max: "65535"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (wide-open/udp)
-    os_security_group_rule:
-          security_group: 'wide-open-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "1"
-          port_range_max: "65535"
-          ethertype: "IPv4"
-          protocol: "udp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'ALL ICMP' security group"
-    os_security_group:
-      state: "present"
-      name: 'all-icmp-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow all ICMP traffic"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (all-icmp)
-    os_security_group_rule:
-          security_group: 'all-icmp-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          ethertype: "IPv4"
-          protocol: "icmp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'keygen-persistent' security group"
-    os_security_group:
-      state: "present"
-      name: 'keygen-persistent-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "rules for copr-keygen"
-      project: "{{item}}"
-    with_items:
-      - copr
-      - coprdev
-
-  - name: add rule to new security group (keygen-persistent/5167)
-    os_security_group_rule:
-          security_group: 'keygen-persistent-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "5167"
-          port_range_max: "5167"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.32.1/20"
-          project: "{{item}}"
-    with_items:
-      - copr
-      - coprdev
-
-  - name: add rule to new security group (keygen-persistent/80)
-    os_security_group_rule:
-          security_group: 'keygen-persistent-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "80"
-          port_range_max: "80"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.32.1/20"
-          project: "{{item}}"
-    with_items:
-      - copr
-      - coprdev
-
-  - name: "Create 'pg-5432-anywhere' security group"
-    os_security_group:
-      state: "present"
-      name: 'pg-5432-anywhere-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow postgresql-5432 from anywhere"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (pg-5432-anywhere)
-    os_security_group_rule:
-          security_group: 'pg-5432-anywhere-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "5432"
-          port_range_max: "5432"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: "Create 'fedmsg-relay-persistent' security group"
-    os_security_group:
-      state: "present"
-      name: 'fedmsg-relay-persistent-{{item}}'
-      cloud: "{{ os_cloud }}"
-      description: "allow incoming 2003 and 4001 from internal network"
-      project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (fedmsg-relay-persistent/2003)
-    os_security_group_rule:
-          security_group: 'fedmsg-relay-persistent-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "2003"
-          port_range_max: "2003"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.80.1/16"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-  - name: add rule to new security group (fedmsg-relay-persistent/4001)
-    os_security_group_rule:
-          security_group: 'fedmsg-relay-persistent-{{item}}'
-          cloud: "{{ os_cloud }}"
-          direction: "ingress"
-          port_range_min: "4001"
-          port_range_max: "4001"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.80.1/16"
-          project: "{{item}}"
-    with_items: "{{all_projects}}"
-
-#########
-# quotas
-#########
-
-  - name: set quotas for copr
-    os_quota:
-      cloud: "{{ os_cloud }}"
-      cores: "{{ item.cores }}"
-      floating_ips: "{{ item.floating_ips }}"
-      instances: "{{ item.instances }}"
-      name: "{{ item.name }}"
-      security_group: "{{ item.security_group }}"
-    with_items:
-      - { name: copr, cores: 100, floating_ips: 10, instances: 50, ram: 350000, security_group: 15  }
-      - { name: coprdev, cores: 80, floating_ips: 10, instances: 40, ram: 300000, security_group: 15  }
-      - { name: persistent, cores: 175, floating_ips: 50, instances: 60, ram: 300000, security_group: 15  }
-      - { name: transient, cores: 70, floating_ips: 10, instances: 30, ram: 150000, security_group: 15  }
+          auth_type: v3password
+          region_name: regionOne
+          auth_version: 3
+          identity_api_version: 3
+
+    - name: create non-standard flavor
+      os_nova_flavor:
+        cloud: "{{ os_cloud }}"
+        name: "{{item.name}}"
+        ram: "{{item.ram}}"
+        disk: "{{item.disk}}"
+        vcpus: "{{item.vcpus}}"
+        swap: "{{item.swap}}"
+        ephemeral: 0
+      with_items:
+        - { name: m1.builder, ram: 5120, disk: 50, vcpus: 2, swap: 5120 }
+        - { name: ms2.builder, ram: 5120, disk: 20, vcpus: 2, swap: 100000 }
+        - { name: m2.prepare_builder, ram: 5000, disk: 16, vcpus: 2, swap: 0 }
+        # same as m.* but with swap
+        - { name: ms1.tiny, ram: 512, disk: 1, vcpus: 1, swap: 512 }
+        - { name: ms1.small, ram: 2048, disk: 20, vcpus: 1, swap: 2048 }
+        - { name: ms1.medium, ram: 4096, disk: 40, vcpus: 2, swap: 4096 }
+        - {
+            name: ms1.medium.bigswap,
+            ram: 4096,
+            disk: 40,
+            vcpus: 2,
+            swap: 40000,
+          }
+        - { name: ms1.large, ram: 8192, disk: 50, vcpus: 4, swap: 4096 }
+        - { name: ms1.xlarge, ram: 16384, disk: 160, vcpus: 8, swap: 16384 }
+        # inspired by http://aws.amazon.com/ec2/instance-types/
+        - { name: c4.large, ram: 3072, disk: 0, vcpus: 2, swap: 0 }
+        - { name: c4.xlarge, ram: 7168, disk: 0, vcpus: 4, swap: 0 }
+        - { name: c4.2xlarge, ram: 14336, disk: 0, vcpus: 8, swap: 0 }
+        - { name: r3.large, ram: 16384, disk: 32, vcpus: 2, swap: 16384 }
+
+    - name: download images
+      get_url:
+        dest: "/var/tmp/{{ item.imagename }}"
+        url: "{{ item.url }}"
+      with_items:
+        - {
+            imagename: Fedora-Cloud-Base-28-1.1.ppc64le.qcow2,
+            url: "https://dl.fedoraproject.org/pub/fedora-secondary/releases/28/Cloud/ppc64le/images/Fedora-Cloud-Base-28-1.1.ppc64le.qcow2";,
+          }
+        - {
+            imagename: Fedora-Cloud-Base-28-1.1.x86_64.qcow2,
+            url: "https://dl.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2";,
+          }
+        - {
+            imagename: Fedora-Cloud-Base-29-1.2.x86_64.qcow2,
+            url: "https://dl.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2";,
+          }
+
+    - name: Add the images
+      os_image:
+        cloud: "{{ os_cloud }}"
+        name: "{{ item.name }}"
+        disk_format: qcow2
+        is_public: True
+        filename: "{{ item.filename }}"
+      with_items:
+        - {
+            name: Fedora-Cloud-Base-28-1.1.ppc64le,
+            filename: /var/tmp/Fedora-Cloud-Base-28-1.1.ppc64le.qcow2,
+          }
+        - {
+            name: Fedora-Cloud-Base-28-1.1.x86_64,
+            filename: /var/tmp/Fedora-Cloud-Base-28-1.1.x86_64.qcow2,
+          }
+        - {
+            name: Fedora-Cloud-Base-29-1.2.x86_64,
+            filename: /var/tmp/Fedora-Cloud-Base-29-1.2.x86_64.qcow2,
+          }
+
+    - name: Create tenants
+      os_project:
+        cloud: "{{ os_cloud }}"
+        name: "{{ item.name }}"
+        description: "{{ item.desc }}"
+        state: present
+        enabled: True
+        domain_id: default
+      with_items:
+        - { name: persistent, desc: "persistent instances" }
+        - { name: qa, desc: "developmnet and test-day applications of QA" }
+        - { name: transient, desc: "transient instances" }
+        - {
+            name: infrastructure,
+            desc: "one off instances for infrastructure folks to test or check something (proof-of-concept)",
+          }
+        - { name: copr, desc: "Space for Copr builders" }
+        - { name: coprdev, desc: "Development version of Copr" }
+        - {
+            name: pythonbots,
+            desc: "project for python build bot users - twisted, etc",
+          }
+        - { name: openshift, desc: "Tenant for openshift deployment" }
+        - { name: maintainertest, desc: "Tenant for maintainer test machines" }
+        - { name: aos-ci-cd, desc: "Tenant for aos-ci-cd" }
+
+    ##### NETWORK ####
+    # http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.html
+    #
+    # NEW:
+    # network is 38.145.48.0/23
+    # gateway is 38.145.49.254
+    # leave 38.145.49.250-253 unused for dcops
+    # leave 38.145.49.231-249 unused for future testing
+    #
+    # OLD:
+    # external network is a class C: 209.132.184.0/24
+    # 209.132.184.1  to .25 - reserved for hardware.
+    # 209.132.184.26 to .30 - reserver for test cloud external ips
+    # 209.132.184.31 to .69 - icehouse cloud
+    # 209.132.184.70 to .89 - reserved for arm03 SOCs
+    # 209.132.184.90 to .251 - folsom cloud
+    #
+    - name: Create an external network
+      os_network:
+        cloud: "{{ os_cloud }}"
+        name: external
+        provider_network_type: flat
+        provider_physical_network: datacentre
+        external: true
+        shared: true
+      register: EXTERNAL_ID
+    - name: Create an external subnet
+      os_subnet:
+        cloud: "{{ os_cloud }}"
+        name: external-subnet
+        network_name: external
+        cidr: 38.145.48.0/23
+        allocation_pool_start: 38.145.48.1
+        allocation_pool_end: 38.145.49.230
+        gateway_ip: 38.145.49.254
+        enable_dhcp: false
+      register: EXTERNAL_SUBNET_ID
+
+    #- shell: source /root/keystonerc_admin && nova floating-ip-create external
+    #  when: packstack_sucessfully_finished.stat.exists == False
+
+    # 172.16.0.1/16 -- 172.22.0.1/16 - free (can be split to /20)
+    # 172.23.0.1/16 - free (but used by old cloud)
+    # 172.24.0.1/24 - RESERVED it is used internally for OS
+    # 172.24.1.0/24 -- 172.24.255.0/24 - likely free (?)
+    # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
+    # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
+    # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
+    # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
+    # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
+    # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
+    # 172.25.96.1/20 - cloudsig (172.25.96.1 - 172.25.111.254)
+    # 172.25.112.1/20 - qa (172.25.112.1 - 172.25.127.254)
+    # 172.25.128.1/20 - pythonbots (172.25.128.1 - 172.25.143.254)
+    # 172.25.144.1/20 - coprdev (172.25.144.1 - 172.25.159.254)
+    # 172.25.160.1/20 -- 172.25.240.1/20 - free
+    # 172.26.0.1/16 -- 172.31.0.1/16 - free (can be split to /20)
+
+    - name: Create a router for all tenants
+      os_router:
+        cloud: "{{ os_cloud }}"
+        project: "{{ item }}"
+        name: "ext-to-{{ item }}"
+        network: "external"
+      with_items: "{{all_projects}}"
+    - name: Create a private network for all tenants
+      os_network:
+        cloud: "{{ os_cloud }}"
+        project: "{{ item.name }}"
+        name: "{{ item.name }}-net"
+        shared: "{{ item.shared }}"
+      with_items:
+        - { name: copr, shared: true }
+        - { name: coprdev, shared: true }
+        - { name: infrastructure, shared: false }
+        - { name: persistent, shared: false }
+        - { name: pythonbots, shared: false }
+        - { name: transient, shared: false }
+        - { name: openshift, shared: false }
+        - { name: maintainertest, shared: false }
+        - { name: aos-ci-cd, shared: false }
+    - name: Create a subnet for all tenants
+      os_subnet:
+        cloud: "{{ os_cloud }}"
+        project: "{{ item.name }}"
+        network_name: "{{ item.name }}-net"
+        name: "{{ item.name }}-subnet"
+        cidr: "{{ item.cidr }}"
+        gateway_ip: "{{ item.gateway }}"
+        dns_nameservers: "66.35.62.163,140.211.169.201"
+      with_items:
+        - { name: copr, cidr: "172.25.80.1/20", gateway: "172.25.80.1" }
+        - { name: coprdev, cidr: "172.25.144.1/20", gateway: "172.25.144.1" }
+        - {
+            name: infrastructure,
+            cidr: "172.25.16.1/20",
+            gateway: "172.25.16.1",
+          }
+        - { name: persistent, cidr: "172.25.32.1/20", gateway: "172.25.32.1" }
+        - { name: pythonbots, cidr: "172.25.128.1/20", gateway: "172.25.128.1" }
+        - { name: transient, cidr: "172.25.48.1/20", gateway: "172.25.48.1" }
+        - { name: openshift, cidr: "172.25.160.1/20", gateway: "172.25.160.1" }
+        - {
+            name: maintainertest,
+            cidr: "172.25.176.1/20",
+            gateway: "172.25.176.1",
+          }
+        - { name: aos-ci-cd, cidr: "172.25.180.1/20", gateway: "172.25.180.1" }
+
+    - name: "Connect routers interface to the TENANT-subnet"
+      os_router:
+        cloud: "{{ os_cloud }}"
+        project: "{{ item }}"
+        name: "ext-to-{{ item }}"
+        interfaces: ["{{ item }}-subnet"]
+      with_items: "{{all_projects}}"
+
+    #################
+    # Security Groups
+    ################
+
+    - name: "Change the quota of quotas"
+      os_quota:
+        cloud: "{{os_cloud}}"
+        name: "{{item}}"
+        security_group: 100
+        security_group_rule: 100
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'ssh-anywhere' security group"
+      os_security_group:
+        cloud: "{{ os_cloud }}"
+        name: "ssh-anywhere-{{item}}"
+        description: "allow ssh from anywhere"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Add rules to security group ( ssh-anywhere )"
+      os_security_group_rule:
+        security_group: "ssh-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "22"
+        port_range_max: "22"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Allow nagios checks"
+      os_security_group:
+        cloud: "{{ os_cloud }}"
+        state: "present"
+        name: "allow-nagios-{{item}}"
+        description: "allow nagios checks"
+        project: "{{item}}"
+      with_items:
+        - persistent
+
+    - name: Add rule to new security group (nagios)
+      os_security_group_rule:
+        security_group: "allow-nagios-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "5666"
+        port_range_max: "5666"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "209.132.181.35/32"
+        project: "{{item}}"
+      with_items:
+        - persistent
+
+    - name: "Create 'ssh-from-persistent' security group"
+      os_security_group:
+        cloud: "{{ os_cloud }}"
+        state: "present"
+        name: "ssh-from-persistent-{{item}}"
+        description: "allow ssh from persistent"
+        project: "{{item}}"
+      with_items:
+        - copr
+        - coprdev
+
+    - name: add rule to new security group (ssh-from-persistent)
+      os_security_group_rule:
+        security_group: "ssh-from-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "22"
+        port_range_max: "22"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "172.25.32.1/20"
+        project: "{{item}}"
+      with_items:
+        - copr
+        - coprdev
+
+    - name: "Create 'ssh-internal' security group"
+      os_security_group:
+        state: "present"
+        cloud: "{{ os_cloud }}"
+        name: "ssh-internal-{{item.name}}"
+        description: "allow ssh from {{item.name}}-network"
+        project: "{{ item.name }}"
+      with_items:
+        - { name: copr, prefix: "172.25.80.1/20" }
+        - { name: coprdev, prefix: "172.25.80.1/20" }
+        - { name: infrastructure, prefix: "172.25.16.1/20" }
+        - { name: persistent, prefix: "172.25.32.1/20" }
+        - { name: pythonbots, prefix: "172.25.128.1/20" }
+        - { name: transient, prefix: "172.25.48.1/20" }
+        - { name: openshift, prefix: "172.25.160.1/20" }
+        - { name: maintainertest, prefix: "172.25.180.1/20" }
+        - { name: aos-ci-cd, prefix: "172.25.200.1/20" }
+
+    - name: add rule to new security group (ssh-internal)
+      os_security_group_rule:
+        security_group: "ssh-internal-{{item.name}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "22"
+        port_range_max: "22"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "{{ item.prefix }}"
+        project: "{{item.name}}"
+      with_items:
+        - { name: copr, prefix: "172.25.80.1/20" }
+        - { name: coprdev, prefix: "172.25.80.1/20" }
+        - { name: infrastructure, prefix: "172.25.16.1/20" }
+        - { name: persistent, prefix: "172.25.32.1/20" }
+        - { name: pythonbots, prefix: "172.25.128.1/20" }
+        - { name: transient, prefix: "172.25.48.1/20" }
+        - { name: openshift, prefix: "172.25.160.1/20" }
+        - { name: maintainertest, prefix: "172.25.180.1/20" }
+        - { name: aos-ci-cd, prefix: "172.25.200.1/20" }
+
+    - name: "Create 'web-80-anywhere' security group"
+      os_security_group:
+        state: "present"
+        name: "web-80-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow web-80 from anywhere"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (web-80-anywhere)
+      os_security_group_rule:
+        security_group: "web-80-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "80"
+        port_range_max: "80"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'web-443-anywhere' security group"
+      os_security_group:
+        state: "present"
+        name: "web-443-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow web-443 from anywhere"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (web-443-anywhere)
+      os_security_group_rule:
+        security_group: "web-443-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "443"
+        port_range_max: "443"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'oci-registry-5000-anywhere' security group"
+      os_security_group:
+        state: "present"
+        name: "oci-registry-5000-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow oci-registry-5000 from anywhere"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (oci-registry-5000-anywhere)
+      os_security_group_rule:
+        security_group: "oci-registry-5000-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "5000"
+        port_range_max: "5000"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'wide-open' security group"
+      os_security_group:
+        state: "present"
+        name: "wide-open-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow anything from anywhere"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (wide-open/tcp)
+      os_security_group_rule:
+        security_group: "wide-open-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "1"
+        port_range_max: "65535"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (wide-open/udp)
+      os_security_group_rule:
+        security_group: "wide-open-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "1"
+        port_range_max: "65535"
+        ethertype: "IPv4"
+        protocol: "udp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'ALL ICMP' security group"
+      os_security_group:
+        state: "present"
+        name: "all-icmp-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow all ICMP traffic"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (all-icmp)
+      os_security_group_rule:
+        security_group: "all-icmp-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        ethertype: "IPv4"
+        protocol: "icmp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'keygen-persistent' security group"
+      os_security_group:
+        state: "present"
+        name: "keygen-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "rules for copr-keygen"
+        project: "{{item}}"
+      with_items:
+        - copr
+        - coprdev
+
+    - name: add rule to new security group (keygen-persistent/5167)
+      os_security_group_rule:
+        security_group: "keygen-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "5167"
+        port_range_max: "5167"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "172.25.32.1/20"
+        project: "{{item}}"
+      with_items:
+        - copr
+        - coprdev
+
+    - name: add rule to new security group (keygen-persistent/80)
+      os_security_group_rule:
+        security_group: "keygen-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "80"
+        port_range_max: "80"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "172.25.32.1/20"
+        project: "{{item}}"
+      with_items:
+        - copr
+        - coprdev
+
+    - name: "Create 'pg-5432-anywhere' security group"
+      os_security_group:
+        state: "present"
+        name: "pg-5432-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow postgresql-5432 from anywhere"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (pg-5432-anywhere)
+      os_security_group_rule:
+        security_group: "pg-5432-anywhere-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "5432"
+        port_range_max: "5432"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "0.0.0.0/0"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: "Create 'fedmsg-relay-persistent' security group"
+      os_security_group:
+        state: "present"
+        name: "fedmsg-relay-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        description: "allow incoming 2003 and 4001 from internal network"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (fedmsg-relay-persistent/2003)
+      os_security_group_rule:
+        security_group: "fedmsg-relay-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "2003"
+        port_range_max: "2003"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "172.25.80.1/16"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    - name: add rule to new security group (fedmsg-relay-persistent/4001)
+      os_security_group_rule:
+        security_group: "fedmsg-relay-persistent-{{item}}"
+        cloud: "{{ os_cloud }}"
+        direction: "ingress"
+        port_range_min: "4001"
+        port_range_max: "4001"
+        ethertype: "IPv4"
+        protocol: "tcp"
+        remote_ip_prefix: "172.25.80.1/16"
+        project: "{{item}}"
+      with_items: "{{all_projects}}"
+
+    #########
+    # quotas
+    #########
+
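+    # Quota values are plain counts, except ram, which OpenStack expresses in MB.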
+    - name: set quotas for projects
+      os_quota:
+        cloud: "{{ os_cloud }}"
+        cores: "{{ item.cores }}"
+        floating_ips: "{{ item.floating_ips }}"
+        instances: "{{ item.instances }}"
+        name: "{{ item.name }}"
+        ram: "{{ item.ram }}"
+        security_group: "{{ item.security_group }}"
+      with_items:
+        - {
+            name: copr,
+            cores: 100,
+            floating_ips: 10,
+            instances: 50,
+            ram: 350000,
+            security_group: 15,
+          }
+        - {
+            name: coprdev,
+            cores: 80,
+            floating_ips: 10,
+            instances: 40,
+            ram: 300000,
+            security_group: 15,
+          }
+        - {
+            name: persistent,
+            cores: 175,
+            floating_ips: 50,
+            instances: 60,
+            ram: 300000,
+            security_group: 15,
+          }
+        - {
+            name: transient,
+            cores: 70,
+            floating_ips: 10,
+            instances: 30,
+            ram: 150000,
+            security_group: 15,
+          }
diff --git a/playbooks/groups/packages.yml b/playbooks/groups/packages.yml
index 8b53065e1..42951c03e 100644
--- a/playbooks/groups/packages.yml
+++ b/playbooks/groups/packages.yml
@@ -11,32 +11,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the new service specific config
   hosts: packages:packages-stg
@@ -44,20 +43,20 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: nfs/client
-    mnt_dir: /var/cache/fedoracommunity
-    nfs_src_dir: fedora_app_packages
-    when: env == "production"
-  - fedmsg/base
-  - fedmsg/hub
-  - packages3/web
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - role: nfs/client
+      mnt_dir: /var/cache/fedoracommunity
+      nfs_src_dir: fedora_app_packages
+      when: env == "production"
+    - fedmsg/base
+    - fedmsg/hub
+    - packages3/web
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/pagure-proxy.yml b/playbooks/groups/pagure-proxy.yml
index ef49f8485..2c27cdf65 100644
--- a/playbooks/groups/pagure-proxy.yml
+++ b/playbooks/groups/pagure-proxy.yml
@@ -6,28 +6,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-  - name: Enable ipv4_forward in sysctl
-    sysctl: name=net.ipv4.ip_forward value=1 state=present sysctl_set=yes reload=yes
+    - name: Enable ipv4_forward in sysctl
+      sysctl: name=net.ipv4.ip_forward value=1 state=present sysctl_set=yes reload=yes
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/pagure.yml b/playbooks/groups/pagure.yml
index a6dd601ec..17881fb66 100644
--- a/playbooks/groups/pagure.yml
+++ b/playbooks/groups/pagure.yml
@@ -6,30 +6,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - openvpn/client
-  - postgresql_server
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - openvpn/client
+    - postgresql_server
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy pagure itself
   hosts: pagure:pagure-stg
@@ -37,26 +37,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   pre_tasks:
-  - name: install fedmsg-relay
-    package: name=fedmsg-relay state=present
-    tags:
-    - pagure
-    - pagure/fedmsg
-  - name: and start it
-    service: name=fedmsg-relay state=started
-    tags:
-    - pagure
-    - pagure/fedmsg
+    - name: install fedmsg-relay
+      package: name=fedmsg-relay state=present
+      tags:
+        - pagure
+        - pagure/fedmsg
+    - name: and start it
+      service: name=fedmsg-relay state=started
+      tags:
+        - pagure
+        - pagure/fedmsg
 
   roles:
-  - pagure/frontend
-  - pagure/fedmsg
-  - { role: repospanner/server, when: inventory_hostname.startswith('pagure01'), node: pagure01, region: ansible, spawn_repospanner_node: false, join_repospanner_node: repospanner01.ansible.fedoraproject.org }
+    - pagure/frontend
+    - pagure/fedmsg
+    - {
+        role: repospanner/server,
+        when: inventory_hostname.startswith('pagure01'),
+        node: pagure01,
+        region: ansible,
+        spawn_repospanner_node: false,
+        join_repospanner_node: repospanner01.ansible.fedoraproject.org,
+      }
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/pdc.yml b/playbooks/groups/pdc.yml
index bc44c50ac..a7dd737b0 100644
--- a/playbooks/groups/pdc.yml
+++ b/playbooks/groups/pdc.yml
@@ -7,60 +7,60 @@
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - collectd/base
-  - hosts
-  - fas_client
-  - sudo
+    - base
+    - rkhunter
+    - nagios_client
+    - collectd/base
+    - hosts
+    - fas_client
+    - sudo
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
 - name: stuff for the web nodes
   hosts: pdc-web:pdc-web-stg
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - role: openvpn/client
-    when: env != "staging"
-  - mod_wsgi
-  - fedmsg/base
-  - pdc/frontend
+    - role: openvpn/client
+      when: env != "staging"
+    - mod_wsgi
+    - fedmsg/base
+    - pdc/frontend
 
 - name: stuff just for the backend nodes
   hosts: pdc-backend:pdc-backend-stg
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - fedmsg/base
-  - fedmsg/hub
-  - pdc/backend
-  - role: collectd/fedmsg-service
-    process: fedmsg-hub
+    - fedmsg/base
+    - fedmsg/hub
+    - pdc/backend
+    - role: collectd/fedmsg-service
+      process: fedmsg-hub
diff --git a/playbooks/groups/people.yml b/playbooks/groups/people.yml
index bce414474..9f7df1726 100644
--- a/playbooks/groups/people.yml
+++ b/playbooks/groups/people.yml
@@ -9,16 +9,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-
-  - name: mount project volume
-    mount: >
+    - name: mount project volume
+      mount: >
         name=/project
         src=/dev/mapper/GuestVolGroup00-project
         fstype=xfs
@@ -26,11 +25,11 @@
         passno=0
         dump=0
         state=mounted
-    tags:
-    - mount
+      tags:
+        - mount
 
-  - name: mount srv volume
-    mount: >
+    - name: mount srv volume
+      mount: >
         name=/srv
         src=/dev/mapper/GuestVolGroup00-srv
         fstype=xfs
@@ -38,14 +37,14 @@
         passno=0
         dump=0
         state=mounted
-    tags:
-    - mount
+      tags:
+        - mount
 
-  - name: create /srv/home directory
-    file: path=/srv/home state=directory owner=root group=root
+    - name: create /srv/home directory
+      file: path=/srv/home state=directory owner=root group=root
 
-  - name: bind mount home volume
-    mount: >
+    - name: bind mount home volume
+      mount: >
         name=/home
         src=/srv/home
         fstype=none
@@ -53,39 +52,39 @@
         passno=0
         dump=0
         state=mounted
-    tags:
-    - mount
+      tags:
+        - mount
 
   roles:
-  - base
-  - collectd/base
-  - fas_client
-  - hosts
-  - nagios_client
-  - rkhunter
-  - rsyncd
-  - sudo
-  - { role: openvpn/client, when: env != "staging" }
-  - cgit/base
-  - cgit/clean_lock_cron
-  - cgit/make_pkgs_list
-  - clamav
-  - planet
-  - { role: letsencrypt, site_name: 'fedoraplanet.org' }
-  - fedmsg/base
-  - git/server
+    - base
+    - collectd/base
+    - fas_client
+    - hosts
+    - nagios_client
+    - rkhunter
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - cgit/base
+    - cgit/clean_lock_cron
+    - cgit/make_pkgs_list
+    - clamav
+    - planet
+    - { role: letsencrypt, site_name: "fedoraplanet.org" }
+    - fedmsg/base
+    - git/server
 
-  - role: apache
+    - role: apache
 
-  - role: httpd/certificate
-    certname: wildcard-2017.fedorapeople.org
-    SSLCertificateChainFile: wildcard-2017.fedorapeople.org.intermediate.cert
+    - role: httpd/certificate
+      certname: wildcard-2017.fedorapeople.org
+      SSLCertificateChainFile: wildcard-2017.fedorapeople.org.intermediate.cert
 
-  - people
+    - people
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/pkgs.yml b/playbooks/groups/pkgs.yml
index 8e6a29564..b687abcfa 100644
--- a/playbooks/groups/pkgs.yml
+++ b/playbooks/groups/pkgs.yml
@@ -6,65 +6,75 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - collectd/base
-  - sudo
-  - apache
-  - { role: repospanner/server,
-      node: fedora01,
-      region: rpms,
-      spawn_repospanner_node: true,
-      when: env == "staging" }
-  - { role: repospanner/bridge,
-      zone: rpms,
-      zonecert: fedora_rpms_push,
-      baseurl: "fedora01.rpms.stg.fedoraproject.org:8443",
-      when: env == "staging" }
-  - gitolite/base
-  - cgit/base
-  - cgit/clean_lock_cron
-  - cgit/make_pkgs_list
-  - gitolite/check_fedmsg_hooks
-  - { role: git/make_checkout_seed, when: env != "staging" }
-  - git/hooks
-  - git/checks
-  - clamav
-  - { role: nfs/client,
-      when: env != "staging",
-      mnt_dir: '/srv/cache/lookaside',
-      nfs_src_dir: 'fedora_sourcecache', nfs_mount_opts='rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3' }
-  - { role: nfs/client,
-      when: env == "staging" and inventory_hostname.startswith('pkgs02'),
-      mnt_dir: '/srv/cache/lookaside_prod',
-      nfs_src_dir: 'fedora_sourcecache', nfs_mount_opts='ro,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3' }
-  - role: distgit/pagure
-  - role: distgit
-    tags: distgit
-  - { role: hosts, when: env == "staging" }
+    - base
+    - hosts
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - collectd/base
+    - sudo
+    - apache
+    - {
+        role: repospanner/server,
+        node: fedora01,
+        region: rpms,
+        spawn_repospanner_node: true,
+        when: env == "staging",
+      }
+    - {
+        role: repospanner/bridge,
+        zone: rpms,
+        zonecert: fedora_rpms_push,
+        baseurl: "fedora01.rpms.stg.fedoraproject.org:8443",
+        when: env == "staging",
+      }
+    - gitolite/base
+    - cgit/base
+    - cgit/clean_lock_cron
+    - cgit/make_pkgs_list
+    - gitolite/check_fedmsg_hooks
+    - { role: git/make_checkout_seed, when: env != "staging" }
+    - git/hooks
+    - git/checks
+    - clamav
+    - {
+        role: nfs/client,
+        when: env != "staging",
+        mnt_dir: "/srv/cache/lookaside",
+        nfs_src_dir: "fedora_sourcecache",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
+      }
+    - {
+        role: nfs/client,
+        when: env == "staging" and inventory_hostname.startswith('pkgs02'),
+        mnt_dir: "/srv/cache/lookaside_prod",
+        nfs_src_dir: "fedora_sourcecache",
+        nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
+      }
+    - role: distgit/pagure
+    - role: distgit
+      tags: distgit
+    - { role: hosts, when: env == "staging" }
 
   tasks:
-  - name: Copy keytab
-    copy: src={{private}}/files/keytabs/{{env}}/pkgs
-          dest=/etc/httpd.keytab
-          owner=apache group=apache mode=0600
-    tags:
-    - krb5
+    - name: Copy keytab
+      copy: src={{private}}/files/keytabs/{{env}}/pkgs
+        dest=/etc/httpd.keytab
+        owner=apache group=apache mode=0600
+      tags:
+        - krb5
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 # Set up for fedora-messaging
 - name: setup RabbitMQ
@@ -96,14 +106,14 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - { role: collectd/fedmsg-service, process: fedmsg-hub }
-  - fedmsg/base
-  - fedmsg/hub
+    - { role: collectd/fedmsg-service, process: fedmsg-hub }
+    - fedmsg/base
+    - fedmsg/hub
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/postgresql-server-bdr.yml b/playbooks/groups/postgresql-server-bdr.yml
index 682d22b98..b13948486 100644
--- a/playbooks/groups/postgresql-server-bdr.yml
+++ b/playbooks/groups/postgresql-server-bdr.yml
@@ -12,30 +12,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - fas_client
-  - nagios_client
-  - hosts
-  - collectd/base
-  - collectd/postgres  # This requires a 'databases' var to be set in host_vars
-  - sudo
-  - keepalived
-  - postgresql_server_bdr
+    - base
+    - rkhunter
+    - fas_client
+    - nagios_client
+    - hosts
+    - collectd/base
+    - collectd/postgres # This requires a 'databases' var to be set in host_vars
+    - sudo
+    - keepalived
+    - postgresql_server_bdr
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-# TODO: add iscsi task
+  # TODO: add iscsi task
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/postgresql-server.yml b/playbooks/groups/postgresql-server.yml
index a9f2abbea..b0f1f474a 100644
--- a/playbooks/groups/postgresql-server.yml
+++ b/playbooks/groups/postgresql-server.yml
@@ -12,30 +12,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - fas_client
-  - nagios_client
-  - hosts
-  - postgresql_server
-  - collectd/base
-  - collectd/postgres  # This requires a 'databases' var to be set in host_vars
-  - sudo
-  - { role: openvpn/client, when: inventory_hostname == "db-fas01.phx2.fedoraproject.org" or inventory_hostname == "db01.phx2.fedoraproject.org" }
+    - base
+    - rkhunter
+    - fas_client
+    - nagios_client
+    - hosts
+    - postgresql_server
+    - collectd/base
+    - collectd/postgres # This requires a 'databases' var to be set in host_vars
+    - sudo
+    - {
+        role: openvpn/client,
+        when: inventory_hostname == "db-fas01.phx2.fedoraproject.org" or inventory_hostname == "db01.phx2.fedoraproject.org",
+      }
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-# TODO: add iscsi task
+  # TODO: add iscsi task
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/proxies.yml b/playbooks/groups/proxies.yml
index 864c67d79..e0e0bf13c 100644
--- a/playbooks/groups/proxies.yml
+++ b/playbooks/groups/proxies.yml
@@ -8,31 +8,32 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - fas_client
-  - rkhunter
-  - nagios_client
-  - collectd/base
-  - sudo
-  - rsyncd
-  - { role: mirrormanager/mirrorlist_proxy,
-      when: env == "staging" or "'mirrorlist-proxy' in group_names" }
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - apache
+    - base
+    - fas_client
+    - rkhunter
+    - nagios_client
+    - collectd/base
+    - sudo
+    - rsyncd
+    - {
+        role: mirrormanager/mirrorlist_proxy,
+        when: env == "staging" or "'mirrorlist-proxy' in group_names",
+      }
+    - { role: openvpn/client, when: env != "staging" }
+    - apache
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   # You might think we would want these tasks_path on the proxy nodes, but they
   # actually deliver a configuration that our proxy-specific roles below then go
@@ -41,8 +42,7 @@
   #- import_tasks: "{{ tasks_path }}/mod_wsgi.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   # TODO
   #
@@ -60,8 +60,6 @@
   ## Not going to do
   # - smolt::proxy -- note going to do this.  smolt is dead.  long live smolt.
   # - domainnotarget stuff - only smolt used this
-
-
 - name: Set up the proxy basics
   hosts: proxies-stg:proxies
   strategy: free
@@ -69,28 +67,28 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   pre_tasks:
 
   roles:
-  - httpd/mod_ssl
-  - httpd/proxy
-  - varnish
-  #
-  # Re-run hosts here so things are ok for the haproxy check
-  #
-  - hosts
-  
+    - httpd/mod_ssl
+    - httpd/proxy
+    - varnish
+    #
+    # Re-run hosts here so things are ok for the haproxy check
+    #
+    - hosts
+
   tasks:
+
   #
   # When we have a prerelease we also need to drop the config files.
-
 #  - name: Remove prerelease-to-final-spins-1
 #    file: path=/etc/httpd/conf.d/spins.fedoraproject.org/prerelease-to-final-spins-1-redirectmatch.conf state=file
 #    tags:
@@ -125,7 +123,7 @@
 #    file: path=/etc/httpd/conf.d/alt.fedoraproject.org/prerelease-to-final-alt-1-redirectmatch.conf state=file
 #    tags:
 #    - httpd/redirect
-# 
+#
 #  - name: Remove prerelease-to-final-gfo-atomic-redirectmatch
 #    file: path=/etc/httpd/conf.d/getfedora.org/prerelease-to-final-gfo-atomic-redirectmatch.conf state=file
 #    tags:
@@ -162,43 +160,44 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   pre_tasks:
-  #
-  # If this is an initial deployment, we need the initial ticketkey
-  # If it's not, doesn't hurt to copy it over again
-  #
-  - name: deploy ticket key
-    copy: src=/root/ticketkey_{{env}}.tkey dest=/etc/httpd/ticketkey_{{env}}.tkey
-          owner=root group=root mode=0600
-    notify:
-    - reload proxyhttpd
-
-  #
-  # If this is an initial deployment, make sure docs are synced over.
-  # Do not count these as changed ever
-  #
-  - name: make sure docs are synced. This could take a very very very logtime to finish
-    shell: /usr/local/bin/lock-wrapper docs-sync "/usr/local/bin/docs-rsync" >& /dev/null
-    changed_when: false
-    ignore_errors: true
-
-  - name: make sure selinux contexts are right on srv
-    command: restorecon -R /srv
-    changed_when: false
-
-  - name: install restart ipv6 script on proxies that have problems keeping ipv6 routes
-    copy: src="{{ files }}/scripts/restart-broken-ipv6" dest=/usr/local/bin/restart-broken-ipv6 mode=0755
-    when: inventory_hostname.startswith('proxy11.fedoraproject')
-    tags: restart-ipv6
-
-  - name: setup cron job to check/fix ipv6
-    copy: src="{{ files }}/scripts/restart-broken-ipv6.cron" dest=/etc/cron.d/restart-broken-ipv6 mode=0644
-    when: inventory_hostname.startswith('proxy11.fedoraproject')
-    tags: restart-ipv6
+    #
+    # If this is an initial deployment, we need the initial ticketkey
+    # If it's not, doesn't hurt to copy it over again
+    #
+    - name: deploy ticket key
+      copy:
+        src=/root/ticketkey_{{env}}.tkey dest=/etc/httpd/ticketkey_{{env}}.tkey
+        owner=root group=root mode=0600
+      notify:
+        - reload proxyhttpd
+
+    #
+    # If this is an initial deployment, make sure docs are synced over.
+    # Do not count these as changed ever
+    #
+    - name: make sure docs are synced. This could take a very very very long time to finish
+      shell: /usr/local/bin/lock-wrapper docs-sync "/usr/local/bin/docs-rsync" >& /dev/null
+      changed_when: false
+      ignore_errors: true
+
+    - name: make sure selinux contexts are right on srv
+      command: restorecon -R /srv
+      changed_when: false
+
+    - name: install restart ipv6 script on proxies that have problems keeping ipv6 routes
+      copy: src="{{ files }}/scripts/restart-broken-ipv6" dest=/usr/local/bin/restart-broken-ipv6 mode=0755
+      when: inventory_hostname.startswith('proxy11.fedoraproject')
+      tags: restart-ipv6
+
+    - name: setup cron job to check/fix ipv6
+      copy: src="{{ files }}/scripts/restart-broken-ipv6.cron" dest=/etc/cron.d/restart-broken-ipv6 mode=0644
+      when: inventory_hostname.startswith('proxy11.fedoraproject')
+      tags: restart-ipv6
diff --git a/playbooks/groups/qa.yml b/playbooks/groups/qa.yml
index eb24c5747..071fd037d 100644
--- a/playbooks/groups/qa.yml
+++ b/playbooks/groups/qa.yml
@@ -11,32 +11,35 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: base, tags: ['base'] }
-   - { role: rkhunter, tags: ['rkhunter'] }
-   - { role: nagios_client, tags: ['nagios_client'] }
-   - hosts
-   - { role: fas_client, tags: ['fas_client'] }
-   - { role: collectd/base, tags: ['collectd_base'] }
-   - { role: sudo, tags: ['sudo'] }
-   - { role: openvpn/client,
-       when: deployment_type != "qa-stg", tags: ['openvpn_client'] }
-   - apache
+    - { role: base, tags: ["base"] }
+    - { role: rkhunter, tags: ["rkhunter"] }
+    - { role: nagios_client, tags: ["nagios_client"] }
+    - hosts
+    - { role: fas_client, tags: ["fas_client"] }
+    - { role: collectd/base, tags: ["collectd_base"] }
+    - { role: sudo, tags: ["sudo"] }
+    - {
+        role: openvpn/client,
+        when: deployment_type != "qa-stg",
+        tags: ["openvpn_client"],
+      }
+    - apache
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure qa buildbot CI
   hosts: qa-stg
@@ -44,18 +47,18 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/buildmaster, tags: ['buildmaster'] }
-   - { role: taskotron/buildmaster-configure, tags: ['buildmasterconfig'] }
-   - { role: taskotron/buildslave, tags: ['buildslave'] }
-   - { role: taskotron/buildslave-configure, tags: ['buildslaveconfig'] }
+    - { role: taskotron/buildmaster, tags: ["buildmaster"] }
+    - { role: taskotron/buildmaster-configure, tags: ["buildmasterconfig"] }
+    - { role: taskotron/buildslave, tags: ["buildslave"] }
+    - { role: taskotron/buildslave-configure, tags: ["buildslaveconfig"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure static sites for qa-stg
   hosts: qa-prod:qa-stg
@@ -63,9 +66,9 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
     - name: ensure ServerName is set in httpd.conf
@@ -82,19 +85,17 @@
         - qastaticsites
 
     - name: generate virtualhosts for static sites
-      template:  src={{ files }}/httpd/qadevel-virtualhost.conf.j2 dest=/etc/httpd/conf.d/{{ item.name }}.conf owner=root group=root mode=0644
+      template: src={{ files }}/httpd/qadevel-virtualhost.conf.j2 dest=/etc/httpd/conf.d/{{ item.name }}.conf owner=root group=root mode=0644
       with_items: "{{ static_sites }}"
       notify:
         - reload httpd
       tags:
         - qastaticsites
 
-# don't need this if buildbot is not enabled
-#  roles:
-#   - { role: taskotron/imagefactory-client,
-#       when: deployment_type != "qa-stg", tags: ['imagefactoryclient'] }
-#
+  # don't need this if buildbot is not enabled
+  #  roles:
+  #   - { role: taskotron/imagefactory-client,
+  #       when: deployment_type != "qa-stg", tags: ['imagefactoryclient'] }
+  #
   handlers:
-     - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/rabbitmq.yml b/playbooks/groups/rabbitmq.yml
index 169e78c5f..3b8b45450 100644
--- a/playbooks/groups/rabbitmq.yml
+++ b/playbooks/groups/rabbitmq.yml
@@ -6,26 +6,26 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - rabbitmq_cluster
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - rabbitmq_cluster
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/repospanner.yml b/playbooks/groups/repospanner.yml
index 8eb95da25..78a6fe104 100644
--- a/playbooks/groups/repospanner.yml
+++ b/playbooks/groups/repospanner.yml
@@ -8,30 +8,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - collectd/base
-  - sudo
-  - openvpn/client
-  - role: repospanner/server
-    node: repospanner01
-    region: ansible
-    spawn_repospanner_node: true
+    - base
+    - hosts
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - collectd/base
+    - sudo
+    - openvpn/client
+    - role: repospanner/server
+      node: repospanner01
+      region: ansible
+      spawn_repospanner_node: true
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/resultsdb.yml b/playbooks/groups/resultsdb.yml
index a1754b62e..974c9db2e 100644
--- a/playbooks/groups/resultsdb.yml
+++ b/playbooks/groups/resultsdb.yml
@@ -11,35 +11,33 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-   - { role: base, tags: ['base'] }
-   - { role: rkhunter, tags: ['rkhunter'] }
-   - { role: nagios_client, tags: ['nagios_client'] }
-   - { role: hosts, tags: ['hosts']}
-   - { role: fas_client, tags: ['fas_client'] }
-   - { role: collectd/base, tags: ['collectd_base'] }
-   - { role: sudo, tags: ['sudo'] }
-   - { role: openvpn/client,
-       when: deployment_type == "prod" }
-   - apache
-   - { role: fedmsg/base,
-       when: deployment_type != "dev" }
-   - { role: dnf-automatic, tags: ['dnfautomatic'] }
+    - { role: base, tags: ["base"] }
+    - { role: rkhunter, tags: ["rkhunter"] }
+    - { role: nagios_client, tags: ["nagios_client"] }
+    - { role: hosts, tags: ["hosts"] }
+    - { role: fas_client, tags: ["fas_client"] }
+    - { role: collectd/base, tags: ["collectd_base"] }
+    - { role: sudo, tags: ["sudo"] }
+    - { role: openvpn/client, when: deployment_type == "prod" }
+    - apache
+    - { role: fedmsg/base, when: deployment_type != "dev" }
+    - { role: dnf-automatic, tags: ["dnfautomatic"] }
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure resultsdb production
   hosts: resultsdb-dev:resultsdb-stg:resultsdb-prod
@@ -47,19 +45,23 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/resultsdb-fedmsg, tags: ['resultsdb-fedmsg'], when: deployment_type == "prod"}
-   - { role: taskotron/resultsdb-backend, tags: ['resultsdb-be'] }
-   - { role: taskotron/resultsdb-frontend, tags: ['resultsdb-fe'] }
-   - { role: taskotron/execdb, tags: ['execdb'] }
-   - { role: taskotron/vault, tags: ['vault'], when: deployment_type == "dev" }
+    - {
+        role: taskotron/resultsdb-fedmsg,
+        tags: ["resultsdb-fedmsg"],
+        when: deployment_type == "prod",
+      }
+    - { role: taskotron/resultsdb-backend, tags: ["resultsdb-be"] }
+    - { role: taskotron/resultsdb-frontend, tags: ["resultsdb-fe"] }
+    - { role: taskotron/execdb, tags: ["execdb"] }
+    - { role: taskotron/vault, tags: ["vault"], when: deployment_type == "dev" }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: Install rdbsync
   hosts: resultsdb-stg:resultsdb-prod
@@ -67,12 +69,12 @@
   gather_facts: True
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: rdbsync, tags: ['rdbsync']}
+    - { role: rdbsync, tags: ["rdbsync"] }
diff --git a/playbooks/groups/retrace.yml b/playbooks/groups/retrace.yml
index 45f53efd7..17613bffc 100644
--- a/playbooks/groups/retrace.yml
+++ b/playbooks/groups/retrace.yml
@@ -4,53 +4,74 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "{{ private }}/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "{{ private }}/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - fas_client
-  - rkhunter
-  - nagios_client
-  - sudo
-  - fedmsg/base
+    - base
+    - hosts
+    - fas_client
+    - rkhunter
+    - nagios_client
+    - sudo
+    - fedmsg/base
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup FAF server
   hosts: retrace
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "{{ private }}/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "{{ private }}/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - abrt/faf-local
-  - { role: abrt/faf, faf_web_on_root: false, faf_admin_mail: msuchy@xxxxxxxxxx, faf_web_openid_privileged_teams: "provenpackager,proventesters", faf_web_secret_key: "{{fedora_faf_web_secret_key}}", faf_spool_dir: /srv/faf/  }
-  - abrt/faf-local-post
+    - abrt/faf-local
+    - {
+        role: abrt/faf,
+        faf_web_on_root: false,
+        faf_admin_mail: msuchy@xxxxxxxxxx,
+        faf_web_openid_privileged_teams: "provenpackager,proventesters",
+        faf_web_secret_key: "{{fedora_faf_web_secret_key}}",
+        faf_spool_dir: /srv/faf/,
+      }
+    - abrt/faf-local-post
 
 - name: setup retrace server
   hosts: retrace:retrace-stg
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "{{ private }}/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "{{ private }}/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - abrt/retrace-local-pre
-  - { role: abrt/retrace, rs_require_gpg_check: false, rs_max_parallel_tasks: 12, rs_max_packed_size: 1024, rs_max_unpacked_size: 1280, rs_min_storage_left: 1280, rs_delete_task_after: 8, rs_delete_failed_task_after: 1, rs_repo_dir: /srv/retrace/repos, rs_save_dir: /srv/retrace/tasks, rs_faf_link_dir: /srv/retrace/hardlink-local, hostname: retrace.fedoraproject.org, faf_spool_dir: /srv/faf }
-  - abrt/retrace-local
+    - abrt/retrace-local-pre
+    - {
+        role: abrt/retrace,
+        rs_require_gpg_check: false,
+        rs_max_parallel_tasks: 12,
+        rs_max_packed_size: 1024,
+        rs_max_unpacked_size: 1280,
+        rs_min_storage_left: 1280,
+        rs_delete_task_after: 8,
+        rs_delete_failed_task_after: 1,
+        rs_repo_dir: /srv/retrace/repos,
+        rs_save_dir: /srv/retrace/tasks,
+        rs_faf_link_dir: /srv/retrace/hardlink-local,
+        hostname: retrace.fedoraproject.org,
+        faf_spool_dir: /srv/faf,
+      }
+    - abrt/retrace-local
diff --git a/playbooks/groups/rhel8beta.yml b/playbooks/groups/rhel8beta.yml
index 25a802852..dcc726290 100644
--- a/playbooks/groups/rhel8beta.yml
+++ b/playbooks/groups/rhel8beta.yml
@@ -8,23 +8,23 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - sudo
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/secondary.yml b/playbooks/groups/secondary.yml
index 67af1db1a..73d445eeb 100644
--- a/playbooks/groups/secondary.yml
+++ b/playbooks/groups/secondary.yml
@@ -6,73 +6,79 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - download
-  - rsyncd
-  - sudo
-  - { role: nfs/client,
-      mnt_dir: '/srv/pub/archive',
-      nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/archive' }
-  - { role: nfs/client,
-      mnt_dir: '/srv/pub/alt',
-      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
-      nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/alt' }
-  - { role: nfs/client,
-      mnt_dir: '/srv/pub/fedora-secondary',
-      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
-      nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/fedora-secondary' }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - download
+    - rsyncd
+    - sudo
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/pub/archive",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub/archive",
+      }
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/pub/alt",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub/alt",
+      }
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/pub/fedora-secondary",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3",
+        nfs_src_dir: "fedora_ftp/fedora.redhat.com/pub/fedora-secondary",
+      }
 
-  - role: apache
+    - role: apache
 
-  - role: httpd/mod_ssl
+    - role: httpd/mod_ssl
 
-  - role: httpd/certificate
-    certname: "{{wildcard_cert_name}}"
-    SSLCertificateChainFile: "{{wildcard_int_file}}"
+    - role: httpd/certificate
+      certname: "{{wildcard_cert_name}}"
+      SSLCertificateChainFile: "{{wildcard_int_file}}"
 
-  - role: httpd/website
-    vars:
-    - site_name: secondary.fedoraproject.org
-    - cert_name: "{{wildcard_cert_name}}"
-    server_aliases:
-    - archive.fedoraproject.org
-    - archives.fedoraproject.org
+    - role: httpd/website
+      vars:
+        - site_name: secondary.fedoraproject.org
+        - cert_name: "{{wildcard_cert_name}}"
+      server_aliases:
+        - archive.fedoraproject.org
+        - archives.fedoraproject.org
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
-  - name: Install some misc packages needed for various tasks
-    package: name={{ item }} state=present
-    with_items:
-    - createrepo
-    - koji
-    - python-scandir
-    - python2-productmd
+    - name: Install some misc packages needed for various tasks
+      package: name={{ item }} state=present
+      with_items:
+        - createrepo
+        - koji
+        - python-scandir
+        - python2-productmd
 
-  - name: add create-filelist script from quick-fedora-mirror
-    copy: src="{{ files }}/scripts/create-filelist" dest=/usr/local/bin/create-filelist mode=0755
+    - name: add create-filelist script from quick-fedora-mirror
+      copy: src="{{ files }}/scripts/create-filelist" dest=/usr/local/bin/create-filelist mode=0755
 
-  - name: add cron script to update fullfiletimelist
-    copy: src="{{ files }}/scripts/update-fullfiletimelist" dest=/usr/local/bin/update-fullfiletimelist mode=0755
+    - name: add cron script to update fullfiletimelist
+      copy: src="{{ files }}/scripts/update-fullfiletimelist" dest=/usr/local/bin/update-fullfiletimelist mode=0755
 
-  - name: Update fullfiletimelist job
-    cron: name="update-fullfiletimelist" hour="*" minute="55" user="root"
+    - name: Update fullfiletimelist job
+      cron: name="update-fullfiletimelist" hour="*" minute="55" user="root"
         job="/usr/local/bin/lock-wrapper update-fullfiletimelist '/usr/local/bin/update-fullfiletimelist -l /tmp/update-fullfiletimelist.lock -t /srv/pub alt'"
         cron_file=update-fullfiletimelist
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
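
Side note on the hunk above: prettier expands the inline role dicts into
multi-line flow mappings with trailing commas. If we find that style too
noisy, one option (a hand-written sketch, not prettier output) is to convert
those entries to the plain block mapping form the repo already uses for
other roles, which prettier leaves in block style:

    - role: nfs/client
      mnt_dir: /srv/pub/archive
      nfs_src_dir: fedora_ftp/fedora.redhat.com/pub/archive
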
diff --git a/playbooks/groups/sign-bridge.yml b/playbooks/groups/sign-bridge.yml
index dedfc852c..55c5185dc 100644
--- a/playbooks/groups/sign-bridge.yml
+++ b/playbooks/groups/sign-bridge.yml
@@ -14,29 +14,29 @@
   gather_facts: true
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - sudo
-  - nagios_client
-  - sigul/bridge
-  - role: keytab/service
-    service: sigul
-    owner_user: sigul
-    owner_group: sigul
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - sudo
+    - nagios_client
+    - sigul/bridge
+    - role: keytab/service
+      service: sigul
+      owner_user: sigul
+      owner_group: sigul
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/simple-koji-ci.yml b/playbooks/groups/simple-koji-ci.yml
index d0fff5490..6a68e6219 100644
--- a/playbooks/groups/simple-koji-ci.yml
+++ b/playbooks/groups/simple-koji-ci.yml
@@ -3,43 +3,42 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: simple-koji-ci-dev.fedorainfracloud.org:simple-koji-ci-prod.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-      # - base
-  - rkhunter
-  - nagios_client
-  - role: simple-koji-ci
-  - role: keytab/service
-    service: simple-koji-ci
-    owner_user: fedmsg
-
+    # - base
+    - rkhunter
+    - nagios_client
+    - role: simple-koji-ci
+    - role: keytab/service
+      service: simple-koji-ci
+      owner_user: fedmsg
diff --git a/playbooks/groups/smtp-mm.yml b/playbooks/groups/smtp-mm.yml
index e69cd4204..c6fda5937 100644
--- a/playbooks/groups/smtp-mm.yml
+++ b/playbooks/groups/smtp-mm.yml
@@ -8,27 +8,26 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/sundries.yml b/playbooks/groups/sundries.yml
index c78096caa..c418e91aa 100644
--- a/playbooks/groups/sundries.yml
+++ b/playbooks/groups/sundries.yml
@@ -11,56 +11,55 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - mod_wsgi
-  - geoip
-  - geoip-city-wsgi/app
-  - role: easyfix/gather
-    when: master_sundries_node
-  - role: regindexer/build
-    when: master_sundries_node
-  - role: bz_review_report
-    when: master_sundries_node and env != "staging"
-  - rsyncd
-  - freemedia
-  - sudo
-  - pager_server
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - role: review-stats/build
-    when: master_sundries_node
-  - role: zanata
-    when: master_sundries_node
-  - role: fedora-web/build
-    when: master_sundries_node
-  - role: fedora-budget/build
-    when: master_sundries_node
-  - role: fedora-docs/build
-    when: master_sundries_node
-  - role: membership-map/build
-    when: master_sundries_node
-  - role: developer/build
-    when: master_sundries_node
-  - role: fedmsg/base
-    when: master_sundries_node
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - mod_wsgi
+    - geoip
+    - geoip-city-wsgi/app
+    - role: easyfix/gather
+      when: master_sundries_node
+    - role: regindexer/build
+      when: master_sundries_node
+    - role: bz_review_report
+      when: master_sundries_node and env != "staging"
+    - rsyncd
+    - freemedia
+    - sudo
+    - pager_server
+    - { role: openvpn/client, when: env != "staging" }
+    - role: review-stats/build
+      when: master_sundries_node
+    - role: zanata
+      when: master_sundries_node
+    - role: fedora-web/build
+      when: master_sundries_node
+    - role: fedora-budget/build
+      when: master_sundries_node
+    - role: fedora-docs/build
+      when: master_sundries_node
+    - role: membership-map/build
+      when: master_sundries_node
+    - role: developer/build
+      when: master_sundries_node
+    - role: fedmsg/base
+      when: master_sundries_node
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-  - import_tasks: "{{ tasks_path }}/reg-server.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/reg-server.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/tang.yml b/playbooks/groups/tang.yml
index 8c722cd94..abf390498 100644
--- a/playbooks/groups/tang.yml
+++ b/playbooks/groups/tang.yml
@@ -6,26 +6,26 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - rsyncd
-  - sudo
-  - tang
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - rsyncd
+    - sudo
+    - tang
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/taskotron-client-hosts.yml b/playbooks/groups/taskotron-client-hosts.yml
index bb01aa542..85375f6ed 100644
--- a/playbooks/groups/taskotron-client-hosts.yml
+++ b/playbooks/groups/taskotron-client-hosts.yml
@@ -10,30 +10,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - { role: openvpn/client, when: datacenter != "phx2" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - { role: openvpn/client, when: datacenter != "phx2" }
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure taskotron imagefactory
   hosts: qa11.qa.fedoraproject.org:qa12.qa.fedoraproject.org
@@ -41,15 +41,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/imagefactory, tags: ['taskotronimagefactory'] }
+    - { role: taskotron/imagefactory, tags: ["taskotronimagefactory"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure taskotron client host
   hosts: taskotron-dev-client-hosts:taskotron-stg-client-hosts:taskotron-prod-client-hosts
@@ -57,17 +57,15 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/taskotron-client, tags: ['taskotronclient'] }
-   - { role: taskotron/imagefactory-client, tags: ['imagefactoryclient']}
-   - { role: taskotron/buildslave, tags: ['buildslave'] }
-   - { role: taskotron/buildslave-configure, tags: ['buildslaveconfigure'] }
+    - { role: taskotron/taskotron-client, tags: ["taskotronclient"] }
+    - { role: taskotron/imagefactory-client, tags: ["imagefactoryclient"] }
+    - { role: taskotron/buildslave, tags: ["buildslave"] }
+    - { role: taskotron/buildslave-configure, tags: ["buildslaveconfigure"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/taskotron.yml b/playbooks/groups/taskotron.yml
index d2ec99fd6..ece7834ef 100644
--- a/playbooks/groups/taskotron.yml
+++ b/playbooks/groups/taskotron.yml
@@ -11,34 +11,37 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-   - { role: base, tags: ['base'] }
-   - { role: rkhunter, tags: ['rkhunter'] }
-   - { role: nagios_client, tags: ['nagios_client'] }
-   - { role: hosts, tags: ['hosts']}
-   - { role: fas_client, tags: ['fas_client'] }
-   - { role: collectd/base, tags: ['collectd_base'] }
-   - { role: dnf-automatic, tags: ['dnfautomatic'] }
-   - { role: sudo, tags: ['sudo'] }
-   - { role: openvpn/client,
-       when: deployment_type == "prod", tags: ['openvpn_client'] }
-   - apache
-   - { role: fedmsg/base }
+    - { role: base, tags: ["base"] }
+    - { role: rkhunter, tags: ["rkhunter"] }
+    - { role: nagios_client, tags: ["nagios_client"] }
+    - { role: hosts, tags: ["hosts"] }
+    - { role: fas_client, tags: ["fas_client"] }
+    - { role: collectd/base, tags: ["collectd_base"] }
+    - { role: dnf-automatic, tags: ["dnfautomatic"] }
+    - { role: sudo, tags: ["sudo"] }
+    - {
+        role: openvpn/client,
+        when: deployment_type == "prod",
+        tags: ["openvpn_client"],
+      }
+    - apache
+    - { role: fedmsg/base }
 
   tasks:
-  # this is how you include other task lists
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    # this is how you include other task lists
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure taskotron master
   hosts: taskotron-dev:taskotron-stg:taskotron-prod
@@ -46,24 +49,42 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: nfs/client, mnt_dir: '/srv/taskotron/',  nfs_src_dir: 'fedora_taskotron_dev', nfs_mount_opts: 'rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3,sec=sys', when: deployment_type == 'dev' }
-   - { role: nfs/client, mnt_dir: '/srv/taskotron/',  nfs_src_dir: 'fedora_taskotron_stg', nfs_mount_opts: 'rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3,sec=sys', when: deployment_type == 'stg' }
-   - { role: nfs/client, mnt_dir: '/srv/taskotron/',  nfs_src_dir: 'fedora_taskotron_prod', nfs_mount_opts: 'rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3,sec=sys', when: deployment_type == 'prod' }
-   - { role: taskotron/grokmirror, tags: ['grokmirror'] }
-#   - { role: taskotron/cgit, tags: ['cgit'] }
-   - { role: taskotron/buildmaster, tags: ['buildmaster'] }
-   - { role: taskotron/buildmaster-configure, tags: ['buildmasterconfig'] }
-   - { role: taskotron/taskotron-trigger, tags: ['trigger'] }
-   - { role: taskotron/taskotron-frontend, tags: ['frontend'] }
-   - { role: taskotron/taskotron-master, tags: ['taskotronmaster'] }
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/taskotron/",
+        nfs_src_dir: "fedora_taskotron_dev",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3,sec=sys",
+        when: deployment_type == 'dev',
+      }
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/taskotron/",
+        nfs_src_dir: "fedora_taskotron_stg",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3,sec=sys",
+        when: deployment_type == 'stg',
+      }
+    - {
+        role: nfs/client,
+        mnt_dir: "/srv/taskotron/",
+        nfs_src_dir: "fedora_taskotron_prod",
+        nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3,sec=sys",
+        when: deployment_type == 'prod',
+      }
+    - { role: taskotron/grokmirror, tags: ["grokmirror"] }
+    #   - { role: taskotron/cgit, tags: ['cgit'] }
+    - { role: taskotron/buildmaster, tags: ["buildmaster"] }
+    - { role: taskotron/buildmaster-configure, tags: ["buildmasterconfig"] }
+    - { role: taskotron/taskotron-trigger, tags: ["trigger"] }
+    - { role: taskotron/taskotron-frontend, tags: ["frontend"] }
+    - { role: taskotron/taskotron-master, tags: ["taskotronmaster"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: configure standalone taskotron host
   hosts: taskotron-dev
@@ -71,14 +92,14 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/taskotron-proxy, tags: ['taskotronproxy'] }
-   - { role: taskotron/ssl-taskotron, tags: ['ssltaskotron'] }
-   - { role: letsencrypt, site_name: 'taskotron-dev.fedoraproject.org' }
+    - { role: taskotron/taskotron-proxy, tags: ["taskotronproxy"] }
+    - { role: taskotron/ssl-taskotron, tags: ["ssltaskotron"] }
+    - { role: letsencrypt, site_name: "taskotron-dev.fedoraproject.org" }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
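
Also visible above: prettier normalizes quoting, so the existing
tags: ['foo'] entries become tags: ["foo"]. If we would rather keep single
quotes, the singleQuote option covers that; a minimal config sketch
(.prettierrc.yaml -- option names are from the prettier docs, the values
are only a suggestion and not part of this patch):

    # .prettierrc.yaml -- suggestion only, to be agreed on
    singleQuote: true
    overrides:
      - files: "*.yml"
        options:
          tabWidth: 2
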
diff --git a/playbooks/groups/torrent.yml b/playbooks/groups/torrent.yml
index 3043b15eb..af3c2abad 100644
--- a/playbooks/groups/torrent.yml
+++ b/playbooks/groups/torrent.yml
@@ -6,37 +6,45 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - hosts
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - openvpn/client
-  - torrent
-  - apache
-
-  - role: httpd/mod_ssl
-
-  - role: httpd/certificate
-    certname: "{{wildcard_cert_name}}"
-    SSLCertificateChainFile: "{{wildcard_int_file}}"
-
-  - {role: httpd/website, vars: {site_name: torrent.fedoraproject.org, cert_name: "{{wildcard_cert_name}}", sslonly: true}}
+    - base
+    - hosts
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - openvpn/client
+    - torrent
+    - apache
+
+    - role: httpd/mod_ssl
+
+    - role: httpd/certificate
+      certname: "{{wildcard_cert_name}}"
+      SSLCertificateChainFile: "{{wildcard_int_file}}"
+
+    - {
+        role: httpd/website,
+        vars:
+          {
+            site_name: torrent.fedoraproject.org,
+            cert_name: "{{wildcard_cert_name}}",
+            sslonly: true,
+          },
+      }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
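
The nested vars: flow mapping above is probably the least readable output
prettier produces in this patch. Since prettier keeps block-style mappings
as they are, rewriting that entry by hand before formatting would give
something like this equivalent sketch (untested):

    - role: httpd/website
      vars:
        site_name: torrent.fedoraproject.org
        cert_name: "{{wildcard_cert_name}}"
        sslonly: true
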
diff --git a/playbooks/groups/twisted-buildbots.yml b/playbooks/groups/twisted-buildbots.yml
index 2a5c85302..ad610e32d 100644
--- a/playbooks/groups/twisted-buildbots.yml
+++ b/playbooks/groups/twisted-buildbots.yml
@@ -3,34 +3,33 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
 - name: setup all the things
   hosts: twisted-buildbots
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   tasks:
-
-  - name: add twisted key
-    authorized_key: user=root key="{{ item }}"
-    with_file:
-     - /srv/web/infra/ansible/files/twisted/ssh-pub-key
-    tags:
-    - config
-    - sshkeys
+    - name: add twisted key
+      authorized_key: user=root key="{{ item }}"
+      with_file:
+        - /srv/web/infra/ansible/files/twisted/ssh-pub-key
+      tags:
+        - config
+        - sshkeys
diff --git a/playbooks/groups/unbound.yml b/playbooks/groups/unbound.yml
index 56d2e3c5f..6344e243e 100644
--- a/playbooks/groups/unbound.yml
+++ b/playbooks/groups/unbound.yml
@@ -6,28 +6,27 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - unbound
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - unbound
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/value.yml b/playbooks/groups/value.yml
index 236c00e63..ac6d982ce 100644
--- a/playbooks/groups/value.yml
+++ b/playbooks/groups/value.yml
@@ -6,35 +6,34 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - apache
-  - fedmsg/base
-  - fedmsg/irc
-  - supybot
-  - sudo
-  - rsyncd
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - role: collectd/fedmsg-service
-    process: fedmsg-irc
-  - mote
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - apache
+    - fedmsg/base
+    - fedmsg/irc
+    - supybot
+    - sudo
+    - rsyncd
+    - { role: openvpn/client, when: env != "staging" }
+    - role: collectd/fedmsg-service
+      process: fedmsg-irc
+    - mote
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/virthost.yml b/playbooks/groups/virthost.yml
index 6c309e144..439eae75c 100644
--- a/playbooks/groups/virthost.yml
+++ b/playbooks/groups/virthost.yml
@@ -10,28 +10,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - { role: iscsi_client, when: "inventory_hostname.startswith(('bvirthost', 'buildvmhost'))" }
-  - sudo
-  - { role: openvpn/client, when: datacenter != "phx2" }
-  - virthost
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - {
+        role: iscsi_client,
+        when: "inventory_hostname.startswith(('bvirthost', 'buildvmhost'))",
+      }
+    - sudo
+    - { role: openvpn/client, when: datacenter != "phx2" }
+    - virthost
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/wiki.yml b/playbooks/groups/wiki.yml
index 8bc9d0742..3eca8e6dc 100644
--- a/playbooks/groups/wiki.yml
+++ b/playbooks/groups/wiki.yml
@@ -11,32 +11,41 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - apache
-  - fedmsg/base
-  - { role: nfs/client, when: env == "staging", mnt_dir: '/mnt/web/attachments',  nfs_src_dir: 'fedora_app_staging/app/attachments' }
-  - { role: nfs/client, when: env != "staging", mnt_dir: '/mnt/web/attachments',  nfs_src_dir: 'fedora_app/app/attachments' }
-  - mediawiki
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - apache
+    - fedmsg/base
+    - {
+        role: nfs/client,
+        when: env == "staging",
+        mnt_dir: "/mnt/web/attachments",
+        nfs_src_dir: "fedora_app_staging/app/attachments",
+      }
+    - {
+        role: nfs/client,
+        when: env != "staging",
+        mnt_dir: "/mnt/web/attachments",
+        nfs_src_dir: "fedora_app/app/attachments",
+      }
+    - mediawiki
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/groups/zanata2fedmsg.yml b/playbooks/groups/zanata2fedmsg.yml
index 39f5f2633..9c786ef1b 100644
--- a/playbooks/groups/zanata2fedmsg.yml
+++ b/playbooks/groups/zanata2fedmsg.yml
@@ -11,32 +11,31 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - rsyncd
-  - sudo
-  - { role: openvpn/client,
-      when: env != "staging" }
-  - mod_wsgi
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - rsyncd
+    - sudo
+    - { role: openvpn/client, when: env != "staging" }
+    - mod_wsgi
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy service-specific config
   hosts: zanata2fedmsg:zanata2fedmsg-stg
@@ -44,13 +43,13 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - zanata2fedmsg
-  - fedmsg/base
+    - zanata2fedmsg
+    - fedmsg/base
diff --git a/playbooks/host_reboot.yml b/playbooks/host_reboot.yml
index 554b284a0..8923f4fef 100644
--- a/playbooks/host_reboot.yml
+++ b/playbooks/host_reboot.yml
@@ -7,21 +7,21 @@
   serial: 1
 
   tasks:
-  - name: tell nagios to shush
-    nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
+    - name: tell nagios to shush
+      nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
 
-  - name: reboot the host
-    command: /sbin/shutdown -r 1
+    - name: reboot the host
+      command: /sbin/shutdown -r 1
 
-  - name: wait for host to come back - up to 15 minutes
-    local_action: wait_for host={{ target }} port=22 delay=120 timeout=900 search_regex=OpenSSH
+    - name: wait for host to come back - up to 15 minutes
+      local_action: wait_for host={{ target }} port=22 delay=120 timeout=900 search_regex=OpenSSH
 
-  - name: sync time
-    command: ntpdate -u 1.rhel.pool.ntp.org
+    - name: sync time
+      command: ntpdate -u 1.rhel.pool.ntp.org
 
-  - name: tell nagios to unshush
-    nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
+    - name: tell nagios to unshush
+      nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
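
One limitation worth flagging here: prettier treats module arguments in the
key=value shorthand (nagios: action=downtime minutes=60 ...) as one opaque
string, so it can re-indent the task but never wrap or align the arguments
themselves. Giving the formatter full reach would mean moving such tasks to
the YAML dict syntax first, e.g. this hand-converted sketch (untested):

    - name: tell nagios to shush
      nagios:
        action: downtime
        minutes: 60
        service: host
        host: "{{ inventory_hostname_short }}{{ env_suffix }}"
      delegate_to: noc01.phx2.fedoraproject.org
      ignore_errors: true
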
diff --git a/playbooks/host_update.yml b/playbooks/host_update.yml
index c7ba870cd..2c56f1166 100644
--- a/playbooks/host_update.yml
+++ b/playbooks/host_update.yml
@@ -2,31 +2,30 @@
 #
 # requires --extra-vars="target=somehostname yumcommand=update"
 
-
 - name: update the system
   hosts: "{{ target }}"
   gather_facts: false
   user: root
 
   tasks:
-  - name: expire-caches
-    command: yum clean expire-cache
+    - name: expire-caches
+      command: yum clean expire-cache
 
-  - name: yum -y {{ yumcommand }}
-    command: yum -y {{ yumcommand }}
-    async: 7200
-    poll: 30
+    - name: yum -y {{ yumcommand }}
+      command: yum -y {{ yumcommand }}
+      async: 7200
+      poll: 30
 
 - name: run rkhunter if installed
-  hosts:  "{{ target }}"
+  hosts: "{{ target }}"
   user: root
 
   tasks:
-  - name: check for rkhunter
-    command: /usr/bin/test -f /usr/bin/rkhunter
-    register: rkhunter
-    ignore_errors: true
+    - name: check for rkhunter
+      command: /usr/bin/test -f /usr/bin/rkhunter
+      register: rkhunter
+      ignore_errors: true
 
-  - name: run rkhunter --propupd
-    command: /usr/bin/rkhunter --propupd
-    when: rkhunter is success
+    - name: run rkhunter --propupd
+      command: /usr/bin/rkhunter --propupd
+      when: rkhunter is success
diff --git a/playbooks/hosts/ansiblemagazine.fedorainfracloud.org.yml b/playbooks/hosts/ansiblemagazine.fedorainfracloud.org.yml
index 17d0514d0..5cbb95f9c 100644
--- a/playbooks/hosts/ansiblemagazine.fedorainfracloud.org.yml
+++ b/playbooks/hosts/ansiblemagazine.fedorainfracloud.org.yml
@@ -3,71 +3,71 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: ansiblemagazine.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   tasks:
-  - name: add packages
-    package: state=present name={{ item }}
-    with_items:
-    - httpd
-    - php
-    - php-mysql
-    - mariadb-server
-    - mariadb
-    - mod_ssl
-    - php-mcrypt
-    - php-mbstring
-    - wget
-    - unzip
-    - postfix
-    - wordpress
+    - name: add packages
+      package: state=present name={{ item }}
+      with_items:
+        - httpd
+        - php
+        - php-mysql
+        - mariadb-server
+        - mariadb
+        - mod_ssl
+        - php-mcrypt
+        - php-mbstring
+        - wget
+        - unzip
+        - postfix
+        - wordpress
 
-  - name: enable httpd service
-    service: name=httpd enabled=yes state=started
+    - name: enable httpd service
+      service: name=httpd enabled=yes state=started
 
-  - name: configure postfix for ipv4 only
-    raw: postconf -e inet_protocols=ipv4
+    - name: configure postfix for ipv4 only
+      raw: postconf -e inet_protocols=ipv4
 
-  - name: enable local postfix service
-    service: name=postfix enabled=yes state=started
+    - name: enable local postfix service
+      service: name=postfix enabled=yes state=started
 
   roles:
-  - basessh
-  - nagios_client
-  - mariadb_server
+    - basessh
+    - nagios_client
+    - mariadb_server
 
   post_tasks:
-  - name: create databaseuser
-    mysql_user: name=ansiblemagazine
-                host=localhost
-                state=present
-                password="{{ ansiblemagazine_db_password }}"
-                priv="ansiblemagazine.*:ALL"
+    - name: create databaseuser
+      mysql_user: name=ansiblemagazine
+        host=localhost
+        state=present
+        password="{{ ansiblemagazine_db_password }}"
+        priv="ansiblemagazine.*:ALL"
 
-  - name: Wordpress cron
-    cron: name="Wordpress cron"
-          minute="*/10"
-          job="curl -s http://localhost:80/wp-cron.php >/dev/null"
+    - name: Wordpress cron
+      cron: name="Wordpress cron"
+        minute="*/10"
+        job="curl -s http://localhost:80/wp-cron.php >/dev/null"
diff --git a/playbooks/hosts/artboard.fedorainfracloud.org.yml b/playbooks/hosts/artboard.fedorainfracloud.org.yml
index fa3dae705..baf6f7fbb 100644
--- a/playbooks/hosts/artboard.fedorainfracloud.org.yml
+++ b/playbooks/hosts/artboard.fedorainfracloud.org.yml
@@ -3,126 +3,125 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: artboard.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
 
   tasks:
-
-  - name: Install common scripts
-    copy: src={{ item }} dest=/usr/local/bin/ owner=root group=root mode=0755
-    with_fileglob:
-    - "{{ roles_path }}/base/files/common-scripts/*"
-    tags:
-    - config
-    - base
-    - artboard
-
-  - name: set sebooleans so artboard can talk to the db
-    seboolean: name=httpd_can_network_connect_db state=true persistent=true
-    tags:
-    - selinux
-    - artboard
-
-  - name: mount up disk of persistent storage
-    mount: name=/srv/persist src='LABEL=artboard' fstype=ext4 state=mounted
-    tags:
-    - artboard
-
-  - name: check the selinux context of the artboard dirs
-    command: matchpathcon "/srv/persist/artboard/(.*)"
-    register: webcontext
-    check_mode: no
-    changed_when: false
-    tags:
-    - config
-    - selinux
-    - artboard
-
-  - name: set the SELinux policy for the artboard web dir
-    command: semanage fcontext -a -t httpd_sys_content_t "/srv/persist/artboard/(.*)"
-    when: webcontext.stdout.find('httpd_sys_content_t') == -1
-    tags:
-    - config
-    - selinux
-    - artboard
-
-  # packages needed
-  - name: add packages
-    package: state=present name={{ item }}
-    with_items:
-    - rsync
-    - openssh-clients
-    - httpd
-    - httpd-tools
-    - php
-    - php-gd
-    - php-mysql
-    - cronie-noanacron
-    - mod_ssl
-    tags:
-    - artboard
-
-  # packages needed to be gone
-  - name: erase packages
-    package: state=absent name={{ item }}
-    with_items:
-    - cronie-anacron
-    tags:
-    - artboard
-
-  - name: artboard backup thing
-    copy: src="{{ files }}/artboard/artboard-backup" dest=/etc/cron.daily/artboard-backup mode=0755
-    tags:
-    - artboard
-
-  - name: make artboard subdir
-    file: path=/srv/persist/artboard mode=0755 state=directory
-    tags:
-    - artboard
-
-  - name: link artboard into /var/www/html
-    file: state=link src=/srv/persist/artboard path=/var/www/html/artboard
-    tags:
-    - artboard
-
-  - name: add apache confs
-    copy: src="{{ files }}/artboard/{{ item }}" dest="/etc/httpd/conf.d/{{ item }}"  backup=true
-    with_items:
-    - artboard.conf
-    - redirect.conf
-    notify: reload httpd
-    tags:
-    - artboard
-
-  - name: startup apache
-    service: name=httpd state=started
-    tags:
-    - artboard
+    - name: Install common scripts
+      copy: src={{ item }} dest=/usr/local/bin/ owner=root group=root mode=0755
+      with_fileglob:
+        - "{{ roles_path }}/base/files/common-scripts/*"
+      tags:
+        - config
+        - base
+        - artboard
+
+    - name: set sebooleans so artboard can talk to the db
+      seboolean: name=httpd_can_network_connect_db state=true persistent=true
+      tags:
+        - selinux
+        - artboard
+
+    - name: mount up disk of persistent storage
+      mount: name=/srv/persist src='LABEL=artboard' fstype=ext4 state=mounted
+      tags:
+        - artboard
+
+    - name: check the selinux context of the artboard dirs
+      command: matchpathcon "/srv/persist/artboard/(.*)"
+      register: webcontext
+      check_mode: no
+      changed_when: false
+      tags:
+        - config
+        - selinux
+        - artboard
+
+    - name: set the SELinux policy for the artboard web dir
+      command: semanage fcontext -a -t httpd_sys_content_t "/srv/persist/artboard/(.*)"
+      when: webcontext.stdout.find('httpd_sys_content_t') == -1
+      tags:
+        - config
+        - selinux
+        - artboard
+
+    # packages needed
+    - name: add packages
+      package: state=present name={{ item }}
+      with_items:
+        - rsync
+        - openssh-clients
+        - httpd
+        - httpd-tools
+        - php
+        - php-gd
+        - php-mysql
+        - cronie-noanacron
+        - mod_ssl
+      tags:
+        - artboard
+
+    # packages needed to be gone
+    - name: erase packages
+      package: state=absent name={{ item }}
+      with_items:
+        - cronie-anacron
+      tags:
+        - artboard
+
+    - name: artboard backup thing
+      copy: src="{{ files }}/artboard/artboard-backup" dest=/etc/cron.daily/artboard-backup mode=0755
+      tags:
+        - artboard
+
+    - name: make artboard subdir
+      file: path=/srv/persist/artboard mode=0755 state=directory
+      tags:
+        - artboard
+
+    - name: link artboard into /var/www/html
+      file: state=link src=/srv/persist/artboard path=/var/www/html/artboard
+      tags:
+        - artboard
+
+    - name: add apache confs
+      copy: src="{{ files }}/artboard/{{ item }}" dest="/etc/httpd/conf.d/{{ item }}"  backup=true
+      with_items:
+        - artboard.conf
+        - redirect.conf
+      notify: reload httpd
+      tags:
+        - artboard
+
+    - name: startup apache
+      service: name=httpd state=started
+      tags:
+        - artboard
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml b/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml
index 60105abf4..dd0008a10 100644
--- a/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml
+++ b/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml
@@ -6,28 +6,27 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - dhcp_server
-  - tftp_server
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - dhcp_server
+    - tftp_server
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/commops.fedorainfracloud.org.yml b/playbooks/hosts/commops.fedorainfracloud.org.yml
index bea832062..78cf953ef 100644
--- a/playbooks/hosts/commops.fedorainfracloud.org.yml
+++ b/playbooks/hosts/commops.fedorainfracloud.org.yml
@@ -3,30 +3,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: commops.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
diff --git a/playbooks/hosts/communityblog.fedorainfracloud.org.yml b/playbooks/hosts/communityblog.fedorainfracloud.org.yml
index e0e00d10e..4468bd630 100644
--- a/playbooks/hosts/communityblog.fedorainfracloud.org.yml
+++ b/playbooks/hosts/communityblog.fedorainfracloud.org.yml
@@ -3,71 +3,71 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: communityblog.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   tasks:
-  - name: add packages
-    package: state=present name={{ item }}
-    with_items:
-    - httpd
-    - php
-    - php-mysql
-    - mariadb-server
-    - mariadb
-    - mod_ssl
-    - php-mcrypt
-    - php-mbstring
-    - wget
-    - unzip
-    - postfix
-    - wordpress
+    - name: add packages
+      package: state=present name={{ item }}
+      with_items:
+        - httpd
+        - php
+        - php-mysql
+        - mariadb-server
+        - mariadb
+        - mod_ssl
+        - php-mcrypt
+        - php-mbstring
+        - wget
+        - unzip
+        - postfix
+        - wordpress
 
-  - name: enable httpd service
-    service: name=httpd enabled=yes state=started
+    - name: enable httpd service
+      service: name=httpd enabled=yes state=started
 
-  - name: configure postfix for ipv4 only
-    raw: postconf -e inet_protocols=ipv4
+    - name: configure postfix for ipv4 only
+      raw: postconf -e inet_protocols=ipv4
 
-  - name: enable local postfix service
-    service: name=postfix enabled=yes state=started
+    - name: enable local postfix service
+      service: name=postfix enabled=yes state=started
 
   roles:
-  - basessh
-  - nagios_client
-  - mariadb_server
+    - basessh
+    - nagios_client
+    - mariadb_server
 
   post_tasks:
-  - name: create databaseuser
-    mysql_user: name=commbloguser
-                host=localhost
-                state=present
-                password="{{ communityblog_db_password }}"
-                priv="wp.*:ALL"
+    - name: create databaseuser
+      mysql_user: name=commbloguser
+        host=localhost
+        state=present
+        password="{{ communityblog_db_password }}"
+        priv="wp.*:ALL"
 
-  - name: Wordpress cron
-    cron: name="Wordpress cron"
-          minute="*/10"
-          job="curl http://localhost:8008/wp-cron.php >/dev/null"
+    - name: Wordpress cron
+      cron: name="Wordpress cron"
+        minute="*/10"
+        job="curl http://localhost:8008/wp-cron.php >/dev/null"
diff --git a/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml b/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml
index 62e8e3278..457bb82cb 100644
--- a/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml
+++ b/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml
@@ -6,77 +6,75 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - nagios_client
-  - collectd/base
-  - sudo
-  - role: keytab/service
-    owner_user: apache
-    owner_group: apache
-    service: HTTP
-    host: "data-analysis.fedoraproject.org"
-    when: env == "production"
-  - awstats
-  - web-data-analysis
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - nagios_client
+    - collectd/base
+    - sudo
+    - role: keytab/service
+      owner_user: apache
+      owner_group: apache
+      service: HTTP
+      host: "data-analysis.fedoraproject.org"
+      when: env == "production"
+    - awstats
+    - web-data-analysis
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: dole out the service-specific config
   hosts: data-analysis01.phx2.fedoraproject.org
   user: root
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-   - role: nfs/client
-     mnt_dir: '/mnt/fedora_stats'
-     nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3"
-     nfs_src_dir: 'fedora_stats'
-   - geoip
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_stats"
+      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3"
+      nfs_src_dir: "fedora_stats"
+    - geoip
 
   tasks:
-   - name: install needed packages
-     package: name={{ item }} state=present
-     with_items:
-       - httpd
-       - httpd-tools
-       - mod_ssl
-       - rsync
-       - openssh-clients
-       - emacs-nox
-       - emacs-git
-       - git
-       - bc
-       - python-geoip-geolite2
-       - php-pdo
-       - php-gd
-       - php-xml
-       - php
-       - php-pecl-geoip
-       - gnuplot
-       - htmldoc
-       - mod_auth_gssapi
-
-
+    - name: install needed packages
+      package: name={{ item }} state=present
+      with_items:
+        - httpd
+        - httpd-tools
+        - mod_ssl
+        - rsync
+        - openssh-clients
+        - emacs-nox
+        - emacs-git
+        - git
+        - bc
+        - python-geoip-geolite2
+        - php-pdo
+        - php-gd
+        - php-xml
+        - php
+        - php-pecl-geoip
+        - gnuplot
+        - htmldoc
+        - mod_auth_gssapi
 ##
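
(Side note, not part of the patch: besides the indentation, prettier also
normalises the quote style in this file, '...' becomes "...". If we wanted to
modernise the loop as well, recent Ansible versions let the package module
take the list directly instead of with_items; an untested sketch:

    - name: install needed packages
      package:
        name:
          - httpd
          - httpd-tools
          - mod_ssl
        state: present

That again is a manual change, prettier would not do it.)
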
diff --git a/playbooks/hosts/developer.fedorainfracloud.org.yml b/playbooks/hosts/developer.fedorainfracloud.org.yml
index ccaadfbde..5744e041b 100644
--- a/playbooks/hosts/developer.fedorainfracloud.org.yml
+++ b/playbooks/hosts/developer.fedorainfracloud.org.yml
@@ -3,30 +3,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: developer.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
diff --git a/playbooks/hosts/elastic-dev.fedorainfracloud.org.yml b/playbooks/hosts/elastic-dev.fedorainfracloud.org.yml
index ae072859b..5ca08074c 100644
--- a/playbooks/hosts/elastic-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/elastic-dev.fedorainfracloud.org.yml
@@ -3,33 +3,31 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-  - import_tasks: "{{ roles_path }}/basessh/handlers/main.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ roles_path }}/basessh/handlers/main.yml"
 
 - name: setup all the things
   hosts: elastic-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{inventory_hostname}}"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{inventory_hostname}}"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
diff --git a/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml b/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml
index a55e0e20b..c97c1e198 100644
--- a/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml
@@ -3,30 +3,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: fas2-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
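
(Side note, not part of the patch: if we agree on the style, prettier can read
its options from a config file committed at the repo root, so everybody gets
the same output. A minimal, hypothetical sketch of such a file:

    # .prettierrc.yaml (hypothetical, just spells out the defaults
    # that produced the attached patch)
    tabWidth: 2
    overrides:
      - files: "*.yml"
        options:
          parser: yaml

Nothing in the patch depends on it, it only pins what the defaults already do.)
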
diff --git a/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml b/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml
index fea251f09..1ea4cc1fc 100644
--- a/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml
@@ -3,30 +3,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: fas3-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix needs it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix needs it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
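
(Side note, not part of the patch: the next file shows how prettier handles a
long flow sequence, see the all_tenants list right below; it keeps the [ ... ]
flow style of the source and only reflows it over several lines. If we
preferred a block sequence we would have to rewrite it ourselves, e.g.:

    all_tenants:
      - cloudintern
      - cloudsig
      - copr
      # ...and so on for the rest of the tenants

Just a style choice, both forms are valid YAML.)
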
diff --git a/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml b/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml
index e54fc7a77..c5521dcc3 100644
--- a/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml
+++ b/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml
@@ -1,21 +1,21 @@
 ---
-- name:  Prepare storage on compute nodes
+- name: Prepare storage on compute nodes
   hosts: openstack-compute
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  # This is in fact a duplicate from the compute nodes, just to be sure in case we did
-  # not run the compute nodes playbook yet.
-  - name: Create logical volume for Swift
-    lvol: vg=vg_server lv=swift_store size=100g shrink=no
-  - name: Create FS on Swift storage
-    filesystem: fstype=ext4 dev=/dev/vg_server/swift_store
-  - name: SSH authorized key for root user
-    authorized_key: user=root key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}"
+    # This is in fact a duplicate from the compute nodes, just to be sure in case we did
+    # not run the compute nodes playbook yet.
+    - name: Create logical volume for Swift
+      lvol: vg=vg_server lv=swift_store size=100g shrink=no
+    - name: Create FS on Swift storage
+      filesystem: fstype=ext4 dev=/dev/vg_server/swift_store
+    - name: SSH authorized key for root user
+      authorized_key: user=root key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}"
 
 - name: deploy OpenStack controller
   hosts: fed-cloud09.cloud.fedoraproject.org
@@ -23,1285 +23,1666 @@
 
   vars:
     # this is actually all the tenants except the admin tenant
-    all_tenants: ['cloudintern', 'cloudsig', 'copr', 'coprdev', 'infrastructure',
-      'persistent', 'pythonbots', 'qa', 'scratch', 'transient', 'openshift', 'maintainertest', 'aos-ci-cd']
+    all_tenants:
+      [
+        "cloudintern",
+        "cloudsig",
+        "copr",
+        "coprdev",
+        "infrastructure",
+        "persistent",
+        "pythonbots",
+        "qa",
+        "scratch",
+        "transient",
+        "openshift",
+        "maintainertest",
+        "aos-ci-cd",
+      ]
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - sudo
+    - base
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - sudo
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-    vars:
-      root_auth_users: msuchy
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-
-  - name: set root passwd
-    user: name=root password={{ cloud_rootpw }} state=present
-    tags:
-    - rootpw
-  - name: Set the hostname
-    hostname: name={{ controller_hostname }}
-
-  - name: Deploy root private SSH key
-    copy: src={{ private }}/files/openstack/fed-cloud09-root.key dest=/root/.ssh/id_rsa mode=600 owner=root group=root
-  - name: Deploy root public SSH key
-    copy: src={{ files }}/fedora-cloud/fed09-ssh-key.pub dest=/root/.ssh/id_rsa.pub mode=600 owner=root group=root
-  - authorized_key: user=root key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}"
-
-  - name: install core pkgs
-    package: state=present pkg={{ item }}
-    with_items:
-    - libselinux-python
-    - ntp
-    - wget
-    - scsi-target-utils
-    - lvm2
-    - iptables-services
-
-  - name: disable selinux
-    selinux: policy=targeted state=permissive
-
-  - service: name=tgtd state=started enabled=yes
-
-  - name: Create logical volume for Swift
-    lvol: vg=vg_server lv=swift_store size=100g shrink=no
-  - name: Create FS on Swift storage
-    filesystem: fstype=ext4 dev=/dev/vg_server/swift_store
-
-  - template: src={{ files }}/fedora-cloud/hosts dest=/etc/hosts owner=root mode=0644
-
-  - stat: path=/etc/packstack_sucessfully_finished
-    register: packstack_sucessfully_finished
-
-  # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-networking.html
-  - service: name=NetworkManager state=stopped enabled=no
-  - service: name=network enabled=yes
-  - service: name=firewalld state=stopped enabled=no
-    ignore_errors: yes
-  - service: name=iptables state=started enabled=yes
-
-  - name: ensure iptables is configured to allow rabbitmq traffic (port 5672/tcp)
-    lineinfile:
-      dest=/etc/sysconfig/iptables
-      state=present
-      regexp="^.*INPUT.*172\.24\.0\.10/24.*tcp.*{{ item }}.*ACCEPT"
-      insertbefore="^.*INPUT.*RELATED,ESTABLISHED.*ACCEPT"
-      line="-A INPUT -s 172.24.0.10/24 -p tcp -m multiport --dports {{ item }} -m comment --comment \"added by fedora-infra ansible\" -j ACCEPT"
-      backup=yes
-    with_items:
-    - 80,443
-    - 3260
-    - 3306
-    - 5671
-    - 5672
-    - 6000,6001,6002,873
-    - 8777
-    - 27017
-    - 5900:5999,16509
-    - 16509,49152:49215
-    notify: restart iptables
-
-  # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-neutron-networking-controller-node.html
-  - command: ifdown br-tun
-    when: packstack_sucessfully_finished.stat.exists == False
-    ignore_errors: yes
-  - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="^ONBOOT=" line="ONBOOT=yes"
-    notify:
-      - restart network
-  # only for first run
-  - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="^NETMASK=" line="NETMASK=255.255.255.0"
-    when: packstack_sucessfully_finished.stat.exists == False
-    notify:
-      - restart network
-  - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="^IPADDR=" line="IPADDR={{controller_private_ip}}"
-    when: packstack_sucessfully_finished.stat.exists == False
-    notify:
-      - restart network
-  - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="BOOTPROTO=" line="BOOTPROTO=none"
-    notify:
-      - restart network
-  - template: src={{files}}/fedora-cloud/ifcfg-br-ex dest=/etc/sysconfig/network-scripts/ifcfg-br-ex owner=root mode=0644
-    when: packstack_sucessfully_finished.stat.exists == False
-    notify:
-      - restart network
-  - template: src={{files}}/fedora-cloud/ifcfg-eth0 dest=/etc/sysconfig/network-scripts/ifcfg-eth0 owner=root mode=0644
-    when: packstack_sucessfully_finished.stat.exists == False
-    notify:
-      - restart network
-  - command: ifup eth1
-    when: packstack_sucessfully_finished.stat.exists == False
-  - meta: flush_handlers
-
-  # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-ntp.html
-  - service: name=ntpd state=started enabled=yes
-
-  # these two steps could be done in one, but Ansible would then always show the action as changed
-  #- name: make sure epel-release is installed
-  #  get_url: url=http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm dest=/root/
-  #- package: state=present name=/root/epel-release-latest-7.noarch.rpm
-
-  #- name: make sure latest openvswitch is installed
-  #  get_url: url=http://people.redhat.com/~lkellogg/rpms/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm dest=/root/
-  #- package: state=present name=/root/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm
-
-  #- name: make sure latest openstack-utils is installed
-  #  get_url: url=https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-utils-2014.2-1.el7.centos.noarch.rpm dest=/root/
-  #- package: state=present name=/root/openstack-utils-2014.2-1.el7.centos.noarch.rpm
-
-  - name: install basic openstack packages
-    package: state=present name={{ item }}
-    with_items:
-    - openstack-utils
-    - openstack-selinux
-    - openstack-packstack
-    - python-glanceclient
-    - rabbitmq-server
-    - ansible-openstack-modules
-    - openstack-keystone
-    - openstack-neutron
-    - openstack-nova-common
-    - haproxy
-
-  - name: install etckeeper
-    package: state=present name=etckeeper
-  - name: init etckeeper
-    shell: cd /etc && etckeeper init
-
-
-  - name: add ssl cert files
-    copy: src={{ private }}/files/openstack/fedorainfracloud.org.{{item}} dest=/etc/pki/tls/certs/fedorainfracloud.org.{{item}} mode=0644 owner=root group=root
-    with_items:
-    - pem
-    - digicert.pem
-  - name: add ssl key file
-    copy: src={{ private }}/files/openstack/fedorainfracloud.org.key dest=/etc/pki/tls/private/fedorainfracloud.org.key mode=0600 owner=root group=root
-    changed_when: False
-
-  - name: allow services key access
-    acl: name=/etc/pki/tls/private/fedorainfracloud.org.key entity={{item}} etype=user permissions="r" state=present
-    with_items:
-    - keystone
-    - neutron
-    - nova
-    - rabbitmq
-    - cinder
-    - ceilometer
-    - swift
-
-  - file: state=directory path=/var/www/pub mode=0755
-  - copy: src={{ private }}/files/openstack/fedorainfracloud.org.pem dest=/var/www/pub/ mode=644
-
-  # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-database-controller.html
-  - name: install mysql packages
-    package: state=present pkg={{ item }}
-    with_items:
-    - mariadb-galera-server
-    - MySQL-python
-  - ini_file: dest=/etc/my.cnf section="mysqld" option="bind-address" value="{{ controller_public_ip }}"
-  - ini_file: dest=/etc/my.cnf section="mysqld" option="default-storage-engine" value="innodb"
-  - ini_file: dest=/etc/my.cnf section="mysqld" option="collation-server" value="utf8_general_ci"
-  - ini_file: dest=/etc/my.cnf section="mysqld" option="init-connect" value="'SET NAMES utf8'"
-  - ini_file: dest=/etc/my.cnf section="mysqld" option="character-set-server" value="utf8"
-  - service: name=mariadb state=started enabled=yes
-    # 'localhost' needs to be the last item for idempotency, see
-    # http://ansible.cc/docs/modules.html#mysql-user
-  - name: update mysql root password for localhost before setting .my.cnf
-    mysql_user: name=root host=localhost password={{ DBPASSWORD }}
-  - name: copy .my.cnf file with root password credentials
-    template: src={{ files }}/fedora-cloud/my.cnf dest=/root/.my.cnf owner=root mode=0600
-  - name: update mysql root password for all root accounts
-    mysql_user: name=root host={{ item }} password={{ DBPASSWORD }}
-    with_items:
-      - "{{ controller_public_ip }}"
-      - 127.0.0.1
-      - ::1
-  - name: copy .my.cnf file with root password credentials
-    template: src={{ files }}/fedora-cloud/my.cnf dest=/root/.my.cnf owner=root mode=0600
-  - name: delete anonymous MySQL server user for $server_hostname
-    mysql_user: user="" host="{{ controller_public_ip }}" state="absent"
-  - name: delete anonymous MySQL server user for localhost
-    mysql_user: user="" state="absent"
-  - name: remove the MySQL test database
-    mysql_db: db=test state=absent
-
-  # WORKAROUNDS - already reported to OpenStack team
-  - lineinfile:
-      dest=/usr/lib/python2.7/site-packages/packstack/plugins/dashboard_500.py
-      regexp="            host_resources\.append\(*ssl_key, 'ssl_ps_server.key'\)*"
-      line="            host_resources.append((ssl_key, 'ssl_ps_server.key'))"
-      backup=yes
-  - lineinfile:
-      dest=/usr/share/openstack-puppet/modules/rabbitmq/manifests/config.pp
-      regexp="RABBITMQ_NODE_PORT"
-      line="    'RABBITMQ_NODE_PORTTTTT'        => $port,"
-      backup=yes
-  - package: state=present pkg=mongodb-server
-  - ini_file: dest=/usr/lib/systemd/system/mongod.service section=Service option=PIDFile value=/var/run/mongodb/mongod.pid
-  - lineinfile:
-      dest=/usr/lib/python2.7/site-packages/packstack/puppet/templates/mongodb.pp
-      regexp="pidfilepath"
-      line="    pidfilepath => '/var/run/mongodb/mongod.pid'"
-      insertbefore="^}"
-  - meta: flush_handlers
-  # http://openstack.redhat.com/Quickstart
-  - template: src={{ files }}/fedora-cloud/packstack-controller-answers.txt dest=/root/ owner=root mode=0600
-  - command: packstack --answer-file=/root/packstack-controller-answers.txt
-    when: packstack_sucessfully_finished.stat.exists == False
-  - file: path=/etc/packstack_sucessfully_finished state=touch
-    when: packstack_sucessfully_finished.stat.exists == False
-  # FIXME we should really reboot here
-
-  - name: Set shell to nova user to allow cold migrations
-    user: name=nova shell=/bin/bash
-  - name: SSH authorized key for nova user
-    authorized_key: user=nova key="{{fed_cloud09_nova_public_key}}"
-  - name: SSH public key for nova user
-    template: src={{ files }}/fedora-cloud/fed_cloud09_nova_public_key dest=/var/lib/nova/.ssh/id_rsa.pub owner=nova group=nova
-  - name: Deploy private SSH key
-    copy: src={{ private }}/files/openstack/fed-cloud09-nova.key dest=/var/lib/nova/.ssh/id_rsa mode=600 owner=nova group=nova
-  - copy: src={{files}}/fedora-cloud/nova-ssh-config dest=/var/lib/nova/.ssh/config owner=nova group=nova mode=640
-
-  # http://docs.openstack.org/icehouse/install-guide/install/yum/content/basics-queue.html
-  # https://openstack.redhat.com/Securing_services#qpid
-  #### FIXME
-  - lineinfile: dest=/etc/rabbitmq/rabbitmq-env.conf regexp="^RABBITMQ_NODE_PORT=" state="absent"
-  - service: name=rabbitmq-server state=started
-
-  # flip endpoints internalurl to internal IP
-  # ceilometer
-  - shell: source /root/keystonerc_admin && keystone service-list | grep ceilometer | awk '{print $2}'
-    register: SERVICE_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    register: ENDPOINT_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8777'  --adminurl 'https://{{ controller_publicname }}:8777' --internalurl 'https://{{ controller_publicname }}:8777' ) || true
-  # cinder
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'cinder ' | awk '{print $2}'
-    register: SERVICE_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    register: ENDPOINT_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s'  --adminurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' ) || true
-  # cinderv2
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'cinderv2' | awk '{print $2}'
-    register: SERVICE_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    register: ENDPOINT_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s'  --adminurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' ) || true
-  # glance
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'glance' | awk '{print $2}'
-    register: SERVICE_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    register: ENDPOINT_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:9292'  --adminurl 'https://{{ controller_publicname }}:9292' --internalurl 'https://{{ controller_publicname }}:9292' ) || true
-  # neutron
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'neutron' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:9696/'  --adminurl 'https://{{ controller_publicname }}:9696/' --internalurl 'https://{{ controller_publicname }}:9696/' ) || true
-  # nova
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'nova ' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s'  --adminurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' ) || true
-  # nova_ec2
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'nova_ec2' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8773/services/Cloud'  --adminurl 'https://{{ controller_publicname }}:8773/services/Admin' --internalurl 'https://{{ controller_publicname }}:8773/services/Cloud' ) || true
-  # novav3
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'novav3' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8774/v3'  --adminurl 'https://{{ controller_publicname }}:8774/v3' --internalurl 'https://{{ controller_publicname }}:8774/v3' ) || true
-  # swift
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'swift ' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{controller_publicname}}:8080/v1/AUTH_%(tenant_id)s'  --adminurl 'https://{{controller_publicname}}:8080' --internalurl 'https://{{controller_publicname}}:8080/v1/AUTH_%(tenant_id)s' ) || true
-  # swift_s3
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'swift_s3' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8080'  --adminurl 'https://{{ controller_publicname }}:8080' --internalurl 'https://{{ controller_publicname }}:8080' ) || true
-  # keystone --- !!!!! we need to use ADMIN_TOKEN here - this MUST be last before we restart OS and set up haproxy
-  - shell: source /root/keystonerc_admin && keystone service-list | grep 'keystone' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: SERVICE_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: ENDPOINT_ID
-  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=certfile value=/etc/haproxy/fedorainfracloud.org.combined
-  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=keyfile value=/etc/pki/tls/private/fedorainfracloud.org.key
-  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=ca_certs value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone --os-token '{{ADMIN_TOKEN}}' --os-endpoint 'http://{{ controller_publicname }}:35357/v2.0' endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:5000/v2.0'  --adminurl 'https://{{ controller_publicname }}:35357/v2.0' --internalurl 'https://{{ controller_publicname }}:5000/v2.0' ) || true
-  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=enable value=True
-  - lineinfile: dest=/root/keystonerc_admin regexp="^export OS_AUTH_URL" line="export OS_AUTH_URL=https://{{ controller_publicname }}:5000/v2.0/"
-
-  # Setup sysconfig file for novncproxy
-  - copy: src={{ files }}/fedora-cloud/openstack-nova-novncproxy dest=/etc/sysconfig/openstack-nova-novncproxy mode=644 owner=root group=root
-
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_base_url value=https://{{ controller_publicname }}:6080/vnc_auto.html
-
-  # set SSL for services
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_admin_auth_url value=https://{{ controller_publicname }}:35357/v2.0
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_url value=https://{{ controller_publicname }}:9696
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=osapi_compute_listen_port value=6774
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ec2_listen_port value=6773
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=glance_api_servers value=https://{{ controller_publicname }}:9292
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=cert value=/etc/pki/tls/certs/fedorainfracloud.org.pem
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=key value=/etc/pki/tls/private/fedorainfracloud.org.key
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ca value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_host  value={{ controller_publicname }}
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ssl_only value=False
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=scheduler_default_filters value=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,DiskFilter
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=default_floating_pool value=external
-
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=bind_port value=7292
-  # configure Glance to use Swift as backend
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=default_store value=swift
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=stores value=glance.store.swift.Store
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_auth_address value=https://{{ controller_publicname }}:5000/v2.0
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_user value="services:swift"
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_key value="{{ SWIFT_PASS }}"
-  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_create_container_on_put value="True"
-  - shell: rsync /usr/share/glance/glance-api-dist-paste.ini /etc/glance/glance-api-paste.ini
-  - shell: rsync /usr/share/glance/glance-registry-dist-paste.ini /etc/glance/glance-registry-paste.ini
-
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-
-  - ini_file: dest=/etc/glance/glance-cache.conf section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:5000/v2.0
-
-  - ini_file: dest=/etc/glance/glance-scrubber.conf section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:5000/v2.0
-
-  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=backup_swift_url value=https://{{ controller_publicname }}:8080/v1/AUTH_
-  - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=osapi_volume_listen_port value=6776
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=service_protocol value=https
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=service_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_url value=https://{{ controller_publicname }}:8774/v2
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_admin_auth_url value=https://{{ controller_publicname }}:35357/v2.0
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=use_ssl value=False
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_cert_file value=/etc/pki/tls/certs/fedorainfracloud.org.pem
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_key_file value=/etc/pki/tls/private/fedorainfracloud.org.key
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_ca_file value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=bind_port value=8696
-  - lineinfile: dest=/etc/neutron/neutron.conf regexp="^service_provider = LOADBALANCER" line="service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default" insertafter="\[service_providers]"
-  - lineinfile: dest=/etc/neutron/neutron.conf regexp="^service_provider = FIREWALL" line="service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default" insertafter="\[service_providers]"
-
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-
-  - ini_file: dest=/etc/neutron/metadata_agent.ini section="filter:authtoken" option=auth_url value=https://{{ controller_publicname }}:35357/v2.0
-  - ini_file: dest=/etc/neutron/metadata_agent.ini section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:35357/v2.0
-
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/swift/proxy-server.conf section=DEFAULT option=bind_port value=7080
-  - ini_file: dest=/etc/swift/proxy-server.conf section=DEFAULT option=bind_ip value=127.0.0.1
-
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=service_credentials option=os_auth_url value=https://{{ controller_publicname }}:35357/v2.0
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=api option=port value=6777
-
-  # enable stunnel to neutron
-  - shell: cat /etc/pki/tls/certs/fedorainfracloud.org.pem /etc/pki/tls/certs/fedorainfracloud.org.digicert.pem /etc/pki/tls/private/fedorainfracloud.org.key > /etc/haproxy/fedorainfracloud.org.combined
-  - file: path=/etc/haproxy/fedorainfracloud.org.combined owner=haproxy mode=644
-  - copy: src={{ files }}/fedora-cloud/haproxy.cfg dest=/etc/haproxy/haproxy.cfg mode=644 owner=root group=root
-  # first OS has to free the ports so haproxy can bind them, then we start OS on the modified ports
-  #- shell: openstack-service stop
-  #- service: name=haproxy state=started enabled=yes
-  #- shell: openstack-service start
-
-  - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="^OPENSTACK_KEYSTONE_URL " line="OPENSTACK_KEYSTONE_URL = 'https://{{controller_publicname}}:5000/v2.0'"
-    notify:
-      - reload httpd
-  - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="OPENSTACK_SSL_CACERT " line="OPENSTACK_SSL_CACERT = '/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem'"
-    notify:
-      - reload httpd
-
-  # configure cinder with multi back-end
-  # https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html
-  - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option="enabled_backends" value="equallogic-1,lvmdriver-1"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  # LVM
-  - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_group" value="cinder-volumes"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_driver" value="cinder.volume.drivers.lvm.LVMISCSIDriver"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_backend_name" value="LVM_iSCSI"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  # Dell EqualLogic - http://docs.openstack.org/trunk/config-reference/content/dell-equallogic-driver.html
-  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="volume_driver" value="cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_ip" value="{{ IP_EQLX }}"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_login" value="{{ SAN_UNAME }}"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - name: set password for equallogic-1
-    ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_password" value="{{ SAN_PW }}"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="eqlx_group_name" value="{{ EQLX_GROUP }}"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="eqlx_pool" value="{{ EQLX_POOL }}"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="volume_backend_name" value="equallogic"
-    notify:
-    - restart cinder api
-    - restart cinder scheduler
-    - restart cinder volume
-
-  # flush handlers here in case cinder changed and we need to restart it.
-  - meta: flush_handlers
-
-  # create storage types
-  # note that existing keys can be retrieved using: cinder extra-specs-list
-  - shell: source /root/keystonerc_admin && cinder type-create lvm
-    ignore_errors: yes
-  - shell: source /root/keystonerc_admin && cinder type-key lvm set volume_backend_name=lvm
-  - shell: source /root/keystonerc_admin && cinder type-create equallogic
-    ignore_errors: yes
-  - shell: source /root/keystonerc_admin && cinder type-key equallogic set volume_backend_name=equallogic
-
-  # http://docs.openstack.org/icehouse/install-guide/install/yum/content/glance-verify.html
-  - file: path=/root/images state=directory
-  - get_url: url=http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img dest=/root/images/cirros-0.3.2-x86_64-disk.img mode=0440
-  - name: Add the cirros-0.3.2-x86_64 image
-    glance_image:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0";
-      name=cirros-0.3.2-x86_64
-      disk_format=qcow2
-      is_public=True
-      file=/root/images/cirros-0.3.2-x86_64-disk.img
-
-  - name: create non-standard flavor
-    nova_flavor:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0";
-      name="{{item.name}}" ram="{{item.ram}}" root="{{item.disk}}" vcpus="{{item.vcpus}}" swap="{{item.swap}}"
-      ephemeral=0
-    with_items:
-      - { name: m1.builder, ram: 5120, disk: 50, vcpus: 2, swap: 5120 }
-      - { name: ms2.builder, ram: 5120, disk: 20, vcpus: 2, swap: 100000 }
-      - { name: m2.prepare_builder, ram: 5000, disk: 16, vcpus: 2, swap: 0 }
-      # same as m.* but with swap
-      - { name: ms1.tiny, ram: 512, disk: 1, vcpus: 1, swap: 512 }
-      - { name: ms1.small, ram: 2048, disk: 20, vcpus: 1, swap: 2048 }
-      - { name: ms1.medium, ram: 4096, disk: 40, vcpus: 2, swap: 4096 }
-      - { name: ms1.medium.bigswap, ram: 4096, disk: 40, vcpus: 2, swap: 40000 }
-      - { name: ms1.large, ram: 8192, disk: 50, vcpus: 4, swap: 4096 }
-      - { name: ms1.xlarge, ram: 16384, disk: 160, vcpus: 8, swap: 16384 }
-      # inspired by http://aws.amazon.com/ec2/instance-types/
-      - { name: c4.large, ram: 3072, disk: 0, vcpus: 2, swap: 0 }
-      - { name: c4.xlarge, ram: 7168, disk: 0, vcpus: 4, swap: 0 }
-      - { name: c4.2xlarge, ram: 14336, disk: 0, vcpus: 8, swap: 0 }
-      - { name: r3.large, ram: 16384, disk: 32, vcpus: 2, swap: 16384 }
-
-
-  #####  download common Images #####
-  # restricted images (RHEL) are handled two steps below
-  - name: Add the images
-    glance_image:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0";
-      name="{{ item.name }}"
-      disk_format=qcow2
-      is_public=True
-      copy_from="{{ item.copy_from }}"
-    with_items:
-      - name: Fedora-x86_64-20-20131211.1
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2
-      - name: Fedora-x86_64-20-20140407
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2
-      - name: Fedora-Cloud-Base-20141203-21.x86_64
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
-      - name: Fedora-Cloud-Base-20141203-21.i386
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2
-      - name: Fedora-Cloud-Atomic-22_Alpha-20150305.x86_64
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Alpha/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Alpha-20150305.x86_64.qcow2
-      - name: Fedora-Cloud-Base-22_Alpha-20150305.x86_64
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Alpha/Cloud/x86_64/Images/Fedora-Cloud-Base-22_Alpha-20150305.x86_64.qcow2
-      - name: Fedora-Cloud-Atomic-22_Beta-20150415.x86_64
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Beta-20150415.x86_64.qcow2
-      - name: Fedora-Cloud-Base-22_Beta-20150415.x86_64
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Base-22_Beta-20150415.x86_64.qcow2
-      - name: Fedora-Cloud-Atomic-22-20150521.x86_64
-        copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
-      - name: Fedora-Cloud-Base-22-20150521.x86_64
-        copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-      - name: Fedora-Cloud-Base-23-20151030.x86_64
-        copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2
-      - name: CentOS-7-x86_64-GenericCloud-1503
-        copy_from: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2
-      - name: CentOS-6-x86_64-GenericCloud-20141129_01
-        copy_from: http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-20141129_01.qcow2
-      - name: Fedora-Cloud-Base-24_Alpha-7.x86_64.qcow2
-        copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/test/24_Alpha/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Alpha-7.x86_64.qcow2
-      - name: Fedora-Cloud-Base-24-1.2.x86_64.qcow2
-        copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
-      - name: Fedora-Cloud-Base-27-1.6.x86_64
-        copy_from: https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2
-      - name: Fedora-Cloud-Base-27-1.6.ppc64le
-        copy_from: https://download.fedoraproject.org/pub/fedora-secondary/releases/27/CloudImages/ppc64le/images/Fedora-Cloud-Base-27-1.6.ppc64le.qcow2
-  # RHEL6 can be downloaded from https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952
-  - stat: path=/root/images/rhel-guest-image-6.6-20141222.0.x86_64.qcow2
-    register: rhel6_image
-  - name: Add the RHEL6 image
-    glance_image:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0";
-      name="rhel-guest-image-6.6-20141222.0.x86_64"
-      disk_format=qcow2
-      is_public=True
-      file="/root/images/rhel-guest-image-6.6-20141222.0.x86_64.qcow2"
-    when: rhel6_image.stat.exists == True
-
-  # RHEL7 can be downloaded from https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.0/x86_64/product-downloads
-  - stat: path=/root/images/rhel-guest-image-7.0-20140930.0.x86_64.qcow2
-    register: rhel7_image
-  - name: Add the RHEL7 image
-    glance_image:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0";
-      name="rhel-guest-image-7.0-20140930.0.x86_64"
-      disk_format=qcow2
-      is_public=True
-      file="/root/images/rhel-guest-image-7.0-20140930.0.x86_64.qcow2"
-    when: rhel7_image.stat.exists == True
-
-
-  ##### PROJECTS ######
-  - name: Create tenants
-    keystone_user:
-      login_user="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      endpoint="https://{{controller_publicname}}:35357/v2.0";
-      tenant="{{ item.name }}"
-      tenant_description="{{ item.desc }}"
-      state=present
-    with_items:
-      - { name: persistent, desc: "persistent instances" }
-      - { name: qa, desc: "development and test-day applications of QA" }
-      - { name: transient, desc: 'transient instances' }
-      - { name: infrastructure, desc: "one off instances for infrastructure folks to test or check something (proof-of-concept)" }
-      - { name: cloudintern, desc: 'project for the cloudintern under mattdm' }
-      - { name: cloudsig, desc: 'Fedora cloud sig folks.' }
-      - { name: copr, desc: 'Space for Copr builders' }
-      - { name: coprdev, desc: 'Development version of Copr' }
-      - { name: pythonbots, desc: 'project for python build bot users - twisted, etc' }
-      - { name: scratch, desc: 'scratch and short term instances' }
-      - { name: openshift, desc: 'Tenant for openshift deployment' }
-      - { name: maintainertest, desc: 'Tenant for maintainer test machines' }
-      - { name: aos-ci-cd, desc: 'Tenant for aos-ci-cd' }
-
-
-  ##### USERS #####
-  - name: Create users
-    keystone_user:
-      login_user="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      endpoint="https://{{controller_publicname}}:35357/v2.0";
-      user="{{ item.name }}"
-      email="{{ item.email }}"
-      tenant="{{ item.tenant }}"
-      password="{{ item.password }}"
-      state=present
-    no_log: True
-    with_items:
-      - { name: anthomas, email: 'anthomas@xxxxxxxxxx', tenant: cloudintern, password: "{{anthomas_password}}" }
-      - { name: ausil, email: 'dennis@xxxxxxxx', tenant: infrastructure, password: "{{ausil_password}}" }
-      - { name: atomic, email: 'walters@xxxxxxxxxx', tenant: scratch, password: "{{cockpit_password}}" }
-      - { name: codeblock, email: 'codeblock@xxxxxxxx', tenant: infrastructure, password: "{{codeblock_password}}" }
-      - { name: copr, email: 'admin@xxxxxxxxxxxxxxxxx', tenant: copr, password: "{{copr_password}}" }
-      - { name: gholms, email: 'gholms@xxxxxxxxxxxxxxxxx', tenant: cloudintern, password: "{{gholms_password}}" }
-      - { name: jskladan, email: 'jskladan@xxxxxxxxxx', tenant: qa, password: "{{jskladan_password}}" }
-      - { name: kevin, email: 'kevin@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{kevin_password}}" }
-      - { name: laxathom, email: 'laxathom@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{laxathom_password}}" }
-      - { name: mattdm, email: 'mattdm@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{mattdm_password}}" }
-      - { name: msuchy, email: 'msuchy@xxxxxxxxxx', tenant: copr, password: "{{msuchy_password}}" }
-      - { name: nb, email: 'nb@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{nb_password}}" }
-      - { name: pingou, email: 'pingou@xxxxxxxxxxxx', tenant: infrastructure, password: "{{pingou_password}}" }
-      - { name: puiterwijk, email: 'puiterwijk@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{puiterwijk_password}}" }
-      - { name: stefw, email: 'stefw@xxxxxxxxxxxxxxxxx', tenant: scratch, password: "{{stefw_password}}" }
-      - { name: mizdebsk, email: 'mizdebsk@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{mizdebsk_password}}" }
-      - { name: kushal, email: 'kushal@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{kushal_password}}" }
-      - { name: red, email: 'red@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{red_password}}" }
-      - { name: samkottler, email: 'samkottler@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{samkottler_password}}" }
-      - { name: tflink, email: 'tflink@xxxxxxxxxxxxxxxxx', tenant: qa, password: "{{tflink_password}}" }
-      - { name: twisted, email: 'buildbot@xxxxxxxxxxxxxxxxx', tenant: pythonbots, password: "{{twisted_password}}" }
-      - { name: roshi, email: 'roshi@xxxxxxxxxxxxxxxxx', tenant: qa, password: "{{roshi_password}}" }
-      - { name: maxamillion, email: 'maxamillion@xxxxxxxxxxxxxxxxx', tenant: infrastructure, password: "{{maxamillion_password}}" }
-      - { name: clime, email: 'clime@xxxxxxxxxx', tenant: copr, password: "{{clime_password}}" }
-      - { name: jkadlcik, email: 'jkadlcik@xxxxxxxxxx', tenant: copr, password: "{{clime_password}}" }
-      - { name: misc, email: 'misc@xxxxxxxxxx', tenant: openshift, password: "{{misc_password}}" }
-      - { name: bowlofeggs, email: 'bowlofeggs@xxxxxxxxxxxxxxxxx', tenant: transient, password: "{{bowlofeggs_password}}" }
-      - { name: alivigni, email: 'alivigni@xxxxxxxxxx', tenant: aos-ci-cd, password: "{{alivigni_password}}" }
-      - { name: jbieren, email: 'jbieren@xxxxxxxxxx', tenant: aos-ci-cd, password: "{{jbieren_password}}" }
-      - { name: bpeck, email: 'bpeck@xxxxxxxxxx', tenant: aos-ci-cd, password: "{{bpeck_password}}" }
-      - { name: srallaba, email: 'srallaba@xxxxxxxxxx', tenant: aos-ci-cd, password: "{{srallaba_password}}" }
-      - { name: jburke, email: 'jburke@xxxxxxxxxx', tenant: aos-ci-cd, password: "{{jburke_password}}" }
-    tags:
-    - openstack_users
-
-  - name: upload SSH keys for users
-    nova_keypair:
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      login_username="{{ item.username }}"
-      login_password="{{ item.password }}" login_tenant_name="{{item.tenant}}" name="{{ item.name }}"
-      public_key="{{ item.public_key }}"
-    ignore_errors: yes
-    no_log: True
-    with_items:
-      - { username: anthomas, name: anthomas, tenant: cloudintern, password: "{{anthomas_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas anthomas') }}" }
-      - { username: ausil, name: ausil, tenant: infrastructure, password: "{{ausil_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas ausil') }}" }
-      - { username: codeblock, name: codeblock, tenant: infrastructure, password: "{{codeblock_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas codeblock') }}" }
-      - { username: buildsys, name: buildsys, tenant: copr, password: "{{copr_password}}", public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeTO0ddXuhDZYM9HyM0a47aeV2yIVWhTpddrQ7/RAIs99XyrsicQLABzmdMBfiZnP0FnHBF/e+2xEkT8hHJpX6bX81jjvs2bb8KP18Nh8vaXI3QospWrRygpu1tjzqZT0Llh4ZVFscum8TrMw4VWXclzdDw6x7csCBjSttqq8F3iTJtQ9XM9/5tCAAOzGBKJrsGKV1CNIrfUo5CSzY+IUVIr8XJ93IB2ZQVASK34T/49egmrWlNB32fqAbDMC+XNmobgn6gO33Yq5Ly7Dk4kqTUx2TEaqDkZfhsVu0YcwV81bmqsltRvpj6bIXrEoMeav7nbuqKcPLTxWEY/2icePF" }
-      - { username: gholms, name: gholms, tenant: cloudintern, password: "{{gholms_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas gholms') }}" }
-      - { username: jskladan, name: jskladan, tenant: qa, password: "{{jskladan_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jskladan') }}" }
-      - { username: kevin, name: kevin, tenant: infrastructure, password: "{{kevin_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kevin') }}" }
-      - { username: maxamillion, name: maxamillion, tenant: infrastructure, password: "{{maxamillion_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas maxamillion') }}" }
-      - { username: laxathom, name: laxathom, tenant: infrastructure, password: "{{laxathom_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas laxathom') }}" }
-      - { username: mattdm, name: mattdm, tenant: infrastructure, password: "{{mattdm_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas mattdm') }}" }
-      - { username: msuchy, name: msuchy, tenant: copr, password: "{{msuchy_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}" }
-      - { username: nb, name: nb, tenant: infrastructure, password: "{{nb_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas nb') }}" }
-      - { username: pingou, name: pingou, tenant: infrastructure, password: "{{pingou_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas pingou') }}" }
-      - { username: puiterwijk, name: puiterwijk, tenant: infrastructure, password: "{{puiterwijk_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas puiterwijk') }}" }
-      - { username: stefw, name: stefw, tenant: scratch, password: "{{stefw_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas stefw') }}" }
-      - { username: mizdebsk, name: mizdebsk, tenant: infrastructure, password: "{{mizdebsk_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas mizdebsk') }}" }
-      - { username: kushal, name: kushal, tenant: infrastructure, password: "{{kushal_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kushal') }}" }
-      - { username: red, name: red, tenant: infrastructure, password: "{{red_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas red') }}" }
-      - { username: roshi, name: roshi, tenant: qa, password: "{{roshi_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas roshi') }}" }
-      - { username: samkottler, name: samkottler, tenant: infrastructure, password: "{{samkottler_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas skottler') }}" }
-      - { username: tflink, name: tflink, tenant: qa, password: "{{tflink_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas tflink') }}" }
-      - { username: atomic, name: atomic, tenant: scratch, password: "{{cockpit_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas walters') }}" }
-#      - { name: twisted, tenant: pythonbots, password: "{{twisted_password}}", public_key: "" }
-      - { username: admin, name: fedora-admin-20130801, tenant: admin, password: "{{ADMIN_PASS}}", public_key: "{{ lookup('file', files + '/fedora-cloud/fedora-admin-20130801.pub') }}" }
-      - { username: asamalik, name: asamalik, tenant: scratch, password: "{{asamalik_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas asamalik') }}" }
-      - { username: clime, name: clime, tenant: copr, password: "{{clime_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas clime') }}" }
-      - { username: jkadlcik, name: jkadlcik, tenant: copr, password: "{{clime_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jkadlcik') }}" }
-      - { username: misc, name: misc, tenant: openshift, password: "{{misc_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas misc') }}" }
-      - { username: alivigni, name: alivigni, tenant: aos-ci-cd, password: "{{alivigni_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas alivigni') }}" }
-      - { username: jbieren, name: jbieren, tenant: aos-ci-cd, password: "{{jbieren_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jbieren') }}" }
-      - { username: bpeck, name: bpeck, tenant: aos-ci-cd, password: "{{bpeck_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas bpeck') }}" }
-      - { username: srallaba, name: srallaba, tenant: aos-ci-cd, password: "{{srallaba_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas srallaba') }}" }
-      - { username: jburke, name: jburke, tenant: aos-ci-cd, password: "{{jburke_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jburke') }}" }
-    tags:
-    - openstack_users
-
-  - name: Create roles for additional tenants
-    shell: source /root/keystonerc_admin && keystone role-list |grep ' {{item}} ' || keystone role-create --name {{ item }}
-    with_items: "{{all_tenants}}"
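-  # The shell idiom above and below (grep ... || create) keeps these tasks
-  # idempotent: the create/add command only runs when grep finds no match.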
-  - name: Assign users to secondary tenants
-    shell: source /root/keystonerc_admin && keystone user-role-list --user "{{item.user}}" --tenant "{{item.tenant}}" | grep ' {{item.tenant }} ' || keystone user-role-add --user {{item.user}} --role {{item.tenant}} --tenant {{item.tenant}} || true
-    #keystone_user:
-    #  endpoint="https://{{controller_publicname}}:35357/v2.0"
-    #  login_user="admin" login_password="{{ ADMIN_PASS }}"
-    #  role=coprdev user={{ item }} tenant=coprdev
-    with_items:
-      - { user: admin, tenant: cloudintern }
-      - { user: admin, tenant: cloudsig }
-      - { user: admin, tenant: copr }
-      - { user: admin, tenant: coprdev }
-      - { user: admin, tenant: persistent }
-      - { user: admin, tenant: pythonbots }
-      - { user: admin, tenant: qa }
-      - { user: admin, tenant: infrastructure }
-      - { user: admin, tenant: scratch }
-      - { user: admin, tenant: transient }
-      - { user: admin, tenant: maintainertest }
-      - { user: admin, tenant: aos-ci-cd }
-      - { user: copr, tenant: coprdev }
-      - { user: kevin, tenant: cloudintern }
-      - { user: kevin, tenant: cloudsig }
-      - { user: kevin, tenant: copr }
-      - { user: kevin, tenant: coprdev }
-      - { user: kevin, tenant: persistent }
-      - { user: kevin, tenant: pythonbots }
-      - { user: kevin, tenant: qa }
-      - { user: kevin, tenant: scratch }
-      - { user: kevin, tenant: transient }
-      - { user: kevin, tenant: maintainertest }
-      - { user: kevin, tenant: aos-ci-cd }
-      - { user: msuchy, tenant: cloudintern }
-      - { user: msuchy, tenant: cloudsig }
-      - { user: msuchy, tenant: coprdev }
-      - { user: msuchy, tenant: infrastructure }
-      - { user: msuchy, tenant: persistent }
-      - { user: msuchy, tenant: pythonbots }
-      - { user: msuchy, tenant: qa }
-      - { user: msuchy, tenant: scratch }
-      - { user: msuchy, tenant: transient }
-      - { user: pingou, tenant: persistent }
-      - { user: puiterwijk, tenant: cloudintern }
-      - { user: puiterwijk, tenant: cloudsig }
-      - { user: puiterwijk, tenant: copr }
-      - { user: puiterwijk, tenant: coprdev }
-      - { user: puiterwijk, tenant: persistent }
-      - { user: puiterwijk, tenant: pythonbots }
-      - { user: puiterwijk, tenant: qa }
-      - { user: puiterwijk, tenant: scratch }
-      - { user: puiterwijk, tenant: transient }
-      - { user: puiterwijk, tenant: maintainertest }
-      - { user: puiterwijk, tenant: aos-ci-cd }
-      - { user: mizdebsk, tenant: aos-ci-cd }
-      - { user: mizdebsk, tenant: cloudintern }
-      - { user: mizdebsk, tenant: cloudsig }
-      - { user: mizdebsk, tenant: copr }
-      - { user: mizdebsk, tenant: coprdev }
-      - { user: mizdebsk, tenant: infrastructure }
-      - { user: mizdebsk, tenant: maintainertest }
-      - { user: mizdebsk, tenant: openshift }
-      - { user: mizdebsk, tenant: persistent }
-      - { user: mizdebsk, tenant: pythonbots }
-      - { user: mizdebsk, tenant: qa }
-      - { user: mizdebsk, tenant: scratch }
-      - { user: mizdebsk, tenant: transient }
-      - { user: clime, tenant: coprdev }
-      - { user: clime, tenant: persistent }
-      - { user: jkadlcik, tenant: coprdev }
-    tags:
-     - openstack_users
-
-  ##### NETWORK ####
-  # http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.html
-  #
-  # external network is a class C: 209.132.184.0/24
-  # 209.132.184.1  to .25 - reserved for hardware.
-  # 209.132.184.26 to .30 - reserved for test cloud external IPs
-  # 209.132.184.31 to .69 - icehouse cloud
-  # 209.132.184.70 to .89 - reserved for arm03 SOCs
-  # 209.132.184.90 to .251 - folsom cloud
-  #
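-  # The external-subnet task below carves its allocation pool out of this space
-  # via public_floating_start / public_floating_end; the concrete values are
-  # defined elsewhere in the inventory, not in this play.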
-  - name: Create an external network
-    neutron_network:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      name=external
-      router_external=True
-      provider_network_type=flat
-      provider_physical_network=floatnet
-    register: EXTERNAL_ID
-  - name: Create an external subnet
-    neutron_subnet:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      name=external-subnet
-      network_name=external
-      cidr="{{ public_interface_cidr }}"
-      allocation_pool_start="{{ public_floating_start }}"
-      allocation_pool_end="{{ public_floating_end }}"
-      gateway_ip="{{ public_gateway_ip }}"
-      enable_dhcp=false
-    register: EXTERNAL_SUBNET_ID
-  #- shell: source /root/keystonerc_admin && nova floating-ip-create external
-  #  when: packstack_sucessfully_finished.stat.exists == False
-
-  # 172.16.0.1/16 -- 172.22.0.1/16 - free (can be split to /20)
-  # 172.23.0.1/16 - free (but used by old cloud)
-  # 172.24.0.1/24 - RESERVED; it is used internally for OS
-  # 172.24.1.0/24 -- 172.24.255.0/24 - likely free (?)
-  # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
-  # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
-  # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
-  # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
-  # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
-  # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
-  # 172.25.96.1/20 - cloudsig (172.25.96.1 - 172.25.111.254)
-  # 172.25.112.1/20 - qa (172.25.112.1 - 172.25.127.254)
-  # 172.25.128.1/20 - pythonbots (172.25.128.1 - 172.25.143.254)
-  # 172.25.144.1/20 - coprdev (172.25.144.1 - 172.25.159.254)
-  # 172.25.160.1/20 -- 172.25.240.1/20 - free
-  # 172.26.0.1/16 -- 172.31.0.1/16 - free (can be split to /20)
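-  # For scale: each /20 above spans 2^(32-20) = 4096 addresses, i.e. roughly
-  # 4094 usable hosts per tenant after the network and broadcast addresses.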
-
-  - name: Create a router for all tenants
-    neutron_router:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      tenant_name="{{ item }}"
-      name="ext-to-{{ item }}"
-    with_items: "{{all_tenants}}"
-  - name: "Connect router's gateway to the external network"
-    neutron_router_gateway:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      router_name="ext-to-{{ item }}"
-      network_name="external"
-    with_items: "{{all_tenants}}"
-  - name: Create a private network for all tenants
-    neutron_network:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      tenant_name="{{ item.name }}"
-      name="{{ item.name }}-net"
-      shared="{{ item.shared }}"
-    with_items:
-      - { name: cloudintern, shared: false }
-      - { name: cloudsig, shared: false }
-      - { name: copr, shared: true }
-      - { name: coprdev, shared: true }
-      - { name: infrastructure, shared: false }
-      - { name: persistent, shared: false }
-      - { name: pythonbots, shared: false }
-      - { name: qa, shared: false }
-      - { name: scratch, shared: false }
-      - { name: transient, shared: false }
-      - { name: openshift, shared: false }
-      - { name: maintainertest, shared: false }
-      - { name: aos-ci-cd, shared: false }
-  - name: Create a subnet for all tenants
-    neutron_subnet:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      tenant_name="{{ item.name }}"
-      network_name="{{ item.name }}-net"
-      name="{{ item.name }}-subnet"
-      cidr="{{ item.cidr }}"
-      gateway_ip="{{ item.gateway }}"
-      dns_nameservers="66.35.62.163,140.211.169.201"
-    with_items:
-      - { name: cloudintern, cidr: '172.25.0.1/20', gateway: '172.25.0.1' }
-      - { name: cloudsig, cidr: '172.25.96.1/20', gateway: '172.25.96.1' }
-      - { name: copr, cidr: '172.25.80.1/20', gateway: '172.25.80.1' }
-      - { name: coprdev, cidr: '172.25.144.1/20', gateway: '172.25.144.1' }
-      - { name: infrastructure, cidr: '172.25.16.1/20', gateway: '172.25.16.1' }
-      - { name: persistent, cidr: '172.25.32.1/20', gateway: '172.25.32.1' }
-      - { name: pythonbots, cidr: '172.25.128.1/20', gateway: '172.25.128.1' }
-      - { name: qa, cidr: '172.25.112.1/20', gateway: '172.25.112.1' }
-      - { name: scratch, cidr: '172.25.64.1/20', gateway: '172.25.64.1' }
-      - { name: transient, cidr: '172.25.48.1/20', gateway: '172.25.48.1' }
-      - { name: openshift, cidr: '172.25.160.1/20', gateway: '172.25.160.1' }
-      - { name: maintainertest, cidr: '172.25.176.1/20', gateway: '172.25.176.1' }
-      - { name: aos-ci-cd, cidr: '172.25.180.1/20', gateway: '172.25.180.1' }
-  - name: "Connect router's interface to the TENANT-subnet"
-    neutron_router_interface:
-      login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-      auth_url="https://{{controller_publicname}}:35357/v2.0"
-      tenant_name="{{ item }}"
-      router_name="ext-to-{{ item }}"
-      subnet_name="{{ item }}-subnet"
-    with_items: "{{all_tenants}}"
-
-  #################
-  # Security Groups
-  ################
-  - name: "Create 'ssh-anywhere' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'ssh-anywhere-{{item}}'
-      description: "allow ssh from anywhere"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "22"
-          port_range_max: "22"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
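-  # Most groups in this section are stamped out once per tenant via with_items,
-  # so instances opt in by name (e.g. ssh-anywhere-<tenant>).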
-
-  - name: "Allow nagios checks"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'allow-nagios-{{item}}'
-      description: "allow nagios checks"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "5666"
-          port_range_max: "5666"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "209.132.181.35/32"
-        - direction: "ingress"
-          ethertype: "IPv4"
-          protocol: "icmp"
-          remote_ip_prefix: "209.132.181.35/32"
-    with_items:
-    - persistent
-
-  - name: "Create 'ssh-from-persistent' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'ssh-from-persistent-{{item}}'
-      description: "allow ssh from persistent"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "22"
-          port_range_max: "22"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.32.1/20"
-    with_items:
-      - copr
-      - coprdev
-
-
-  - name: "Create 'ssh-internal' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'ssh-internal-{{item.name}}'
-      description: "allow ssh from {{item.name}}-network"
-      tenant_name: "{{ item.name }}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "22"
-          port_range_max: "22"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "{{ item.prefix }}"
-    with_items:
-      - { name: cloudintern, prefix: '172.25.0.1/20' }
-      - { name: cloudsig, prefix: '172.25.96.1/20' }
-      - { name: copr, prefix: '172.25.80.1/20' }
-      - { name: coprdev, prefix: '172.25.80.1/20' }
-      - { name: infrastructure, prefix: "172.25.16.1/20" }
-      - { name: persistent, prefix: "172.25.32.1/20" }
-      - { name: pythonbots, prefix: '172.25.128.1/20' }
-      - { name: qa, prefix: "172.25.112.1/20" }
-      - { name: scratch, prefix: '172.25.64.1/20' }
-      - { name: transient, prefix: '172.25.48.1/20' }
-      - { name: openshift, prefix: '172.25.160.1/20' }
-      - { name: maintainertest, prefix: '172.25.180.1/20' }
-      - { name: aos-ci-cd, prefix: '172.25.200.1/20' }
-
-  - name: "Create 'web-80-anywhere' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'web-80-anywhere-{{item}}'
-      description: "allow web-80 from anywhere"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "80"
-          port_range_max: "80"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
-
-  - name: "Create 'web-443-anywhere' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'web-443-anywhere-{{item}}'
-      description: "allow web-443 from anywhere"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "443"
-          port_range_max: "443"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
-
-  - name: "Create 'oci-registry-5000-anywhere' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'oci-registry-5000-anywhere-{{item}}'
-      description: "allow oci-registry-5000 from anywhere"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "5000"
-          port_range_max: "5000"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
-
-  - name: "Create 'wide-open' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'wide-open-{{item}}'
-      description: "allow anything from anywhere"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "0"
-          port_range_max: "65535"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-        - direction: "ingress"
-          port_range_min: "0"
-          port_range_max: "65535"
-          ethertype: "IPv4"
-          protocol: "udp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
-
-  - name: "Create 'ALL ICMP' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'all-icmp-{{item}}'
-      description: "allow all ICMP traffic"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          ethertype: "IPv4"
-          protocol: "icmp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
-
-  - name: "Create 'keygen-persistent' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'keygen-persistent'
-      description: "rules for copr-keygen"
-      tenant_name: "persistent"
-      rules:
-        - direction: "ingress"
-          port_range_min: "5167"
-          port_range_max: "5167"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.32.1/20"
-        - direction: "ingress"
-          port_range_min: "80"
-          port_range_max: "80"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.32.1/20"
-
-  - name: "Create 'pg-5432-anywhere' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'pg-5432-anywhere-{{item}}'
-      description: "allow postgresql-5432 from anywhere"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "5432"
-          port_range_max: "5432"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "0.0.0.0/0"
-    with_items: "{{all_tenants}}"
-
-  - name: "Create 'fedmsg-relay-persistent' security group"
-    neutron_sec_group:
-      login_username: "admin"
-      login_password: "{{ ADMIN_PASS }}"
-      login_tenant_name: "admin"
-      auth_url: "https://{{controller_publicname}}:35357/v2.0"
-      state: "present"
-      name: 'fedmsg-relay-persistent'
-      description: "allow incoming 2003 and 4001 from internal network"
-      tenant_name: "{{item}}"
-      rules:
-        - direction: "ingress"
-          port_range_min: "2003"
-          port_range_max: "2003"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.80.1/16"
-        - direction: "ingress"
-          port_range_min: "4001"
-          port_range_max: "4001"
-          ethertype: "IPv4"
-          protocol: "tcp"
-          remote_ip_prefix: "172.25.80.1/16"
-    with_items: "{{all_tenants}}"
-
-  # Update quota for Copr
-  #   SEE:
-  #   nova quota-defaults
-  #   nova quota-show --tenant $TENANT_ID
-  # default is 10 instances, 20 cores, 51200 MB RAM, 10 floating IPs
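-  # The tenant-id lookups below run even in check mode (check_mode: no) and
-  # are marked changed_when: false, since they only read state.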
-  - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'copr ' | awk '{print $2}'
-    register: TENANT_ID
-    check_mode: no
-    changed_when: false
-  - shell: source /root/keystonerc_admin && nova quota-update --instances 50 --cores 100 --ram 350000 --floating-ips 10 --security-groups 20 {{ TENANT_ID.stdout }}
-
-  - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'coprdev ' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: TENANT_ID
-  - shell: source /root/keystonerc_admin && nova quota-update --instances 40 --cores 80 --ram 300000 --floating-ips 10 --security-groups 20 {{ TENANT_ID.stdout }}
-
-#
-# Note that we manually set the volume quota for this tenant to 20 in the web
-# interface; nova quota-update cannot do so.
-#
-  - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'persistent ' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: TENANT_ID
-  - shell: source /root/keystonerc_admin && nova quota-update --instances 60 --cores 175 --ram 288300 --security-groups 20 {{ TENANT_ID.stdout }}
-
-# Transient quota
-  - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'transient ' | awk '{print $2}'
-    check_mode: no
-    changed_when: false
-    register: TENANT_ID
-  - shell: source /root/keystonerc_admin && nova quota-update --instances 30 --cores 70 --ram 153600 --security-groups 20 {{ TENANT_ID.stdout }}
-
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+      vars:
+        root_auth_users: msuchy
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+
+    - name: set root passwd
+      user: name=root password={{ cloud_rootpw }} state=present
+      tags:
+        - rootpw
+    - name: Set the hostname
+      hostname: name={{ controller_hostname }}
+
+    - name: Deploy root private SSH key
+      copy: src={{ private }}/files/openstack/fed-cloud09-root.key dest=/root/.ssh/id_rsa mode=600 owner=root group=root
+    - name: Deploy root public SSH key
+      copy: src={{ files }}/fedora-cloud/fed09-ssh-key.pub dest=/root/.ssh/id_rsa.pub mode=600 owner=root group=root
+    - authorized_key: user=root key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}"
+
+    - name: install core pkgs
+      package: state=present pkg={{ item }}
+      with_items:
+        - libselinux-python
+        - ntp
+        - wget
+        - scsi-target-utils
+        - lvm2
+        - iptables-services
+
+    - name: set selinux to permissive
+      selinux: policy=targeted state=permissive
+
+    - service: name=tgtd state=started enabled=yes
+
+    - name: Create logical volume for Swift
+      lvol: vg=vg_server lv=swift_store size=100g shrink=no
+    - name: Create FS on Swift storage
+      filesystem: fstype=ext4 dev=/dev/vg_server/swift_store
+
+    - template: src={{ files }}/fedora-cloud/hosts dest=/etc/hosts owner=root mode=0644
+
+    - stat: path=/etc/packstack_sucessfully_finished
+      register: packstack_sucessfully_finished
+
+    # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-networking.html
+    - service: name=NetworkManager state=stopped enabled=no
+    - service: name=network enabled=yes
+    - service: name=firewalld state=stopped enabled=no
+      ignore_errors: yes
+    - service: name=iptables state=started enabled=yes
+
+    - name: ensure iptables is configured to allow internal openstack traffic (rabbitmq, db, swift, vnc, ...)
+      lineinfile: dest=/etc/sysconfig/iptables
+        state=present
+        regexp="^.*INPUT.*172\.24\.0\.10/24.*tcp.*{{ item }}.*ACCEPT"
+        insertbefore="^.*INPUT.*RELATED,ESTABLISHED.*ACCEPT"
+        line="-A INPUT -s 172.24.0.10/24 -p tcp -m multiport --dports {{ item }} -m comment --comment \"added by fedora-infra ansible\" -j ACCEPT"
+        backup=yes
+      with_items:
+        - 80,443 # horizon / web
+        - 3260 # iscsi (tgtd)
+        - 3306 # mariadb
+        - 5671 # amqps
+        - 5672 # amqp (rabbitmq)
+        - 6000,6001,6002,873 # swift object/container/account + rsync
+        - 8777 # ceilometer api
+        - 27017 # mongodb
+        - 5900:5999,16509 # vnc consoles + libvirt
+        - 16509,49152:49215 # libvirt + live migration
+      notify: restart iptables
+
+    # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-neutron-networking-controller-node.html
+    - command: ifdown br-tun
+      when: packstack_sucessfully_finished.stat.exists == False
+      ignore_errors: yes
+    - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="^ONBOOT=" line="ONBOOT=yes"
+      notify:
+        - restart network
+    # only for first run
+    - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="^NETMASK=" line="NETMASK=255.255.255.0"
+      when: packstack_sucessfully_finished.stat.exists == False
+      notify:
+        - restart network
+    - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="^IPADDR=" line="IPADDR={{controller_private_ip}}"
+      when: packstack_sucessfully_finished.stat.exists == False
+      notify:
+        - restart network
+    - lineinfile: dest=/etc/sysconfig/network-scripts/ifcfg-eth1 regexp="BOOTPROTO=" line="BOOTPROTO=none"
+      notify:
+        - restart network
+    - template: src={{files}}/fedora-cloud/ifcfg-br-ex dest=/etc/sysconfig/network-scripts/ifcfg-br-ex owner=root mode=0644
+      when: packstack_sucessfully_finished.stat.exists == False
+      notify:
+        - restart network
+    - template: src={{files}}/fedora-cloud/ifcfg-eth0 dest=/etc/sysconfig/network-scripts/ifcfg-eth0 owner=root mode=0644
+      when: packstack_sucessfully_finished.stat.exists == False
+      notify:
+        - restart network
+    - command: ifup eth1
+      when: packstack_sucessfully_finished.stat.exists == False
+    - meta: flush_handlers
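+    # flush_handlers runs any queued handlers (e.g. "restart network") right
+    # here, so the tasks below see the eth1/br-ex configuration live.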
+
+    # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-ntp.html
+    - service: name=ntpd state=started enabled=yes
+
+    # these two steps could be done in one, but Ansible would then always show the action as changed
+    #- name: make sure epel-release is installed
+    #  get_url: url=http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm dest=/root/
+    #- package: state=present name=/root/epel-release-latest-7.noarch.rpm
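+    # A one-step sketch (assumes the yum backend, which accepts URLs for name);
+    # it works, but would report "changed" on every run:
+    #- package: state=present name=http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm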
+
+    #- name: make sure latest openvswitch is installed
+    #  get_url: url=http://people.redhat.com/~lkellogg/rpms/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm dest=/root/
+    #- package: state=present name=/root/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm
+
+    #- name: make sure latest openstack-utils is installed
+    #  get_url: url=https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-utils-2014.2-1.el7.centos.noarch.rpm dest=/root/
+    #- package: state=present name=/root/openstack-utils-2014.2-1.el7.centos.noarch.rpm
+
+    - name: install basic openstack packages
+      package: state=present name={{ item }}
+      with_items:
+        - openstack-utils
+        - openstack-selinux
+        - openstack-packstack
+        - python-glanceclient
+        - rabbitmq-server
+        - ansible-openstack-modules
+        - openstack-keystone
+        - openstack-neutron
+        - openstack-nova-common
+        - haproxy
+
+    - name: install etckeeper
+      package: state=present name=etckeeper
+    - name: init etckeeper
+      shell: cd /etc && etckeeper init
+
+    - name: add ssl cert files
+      copy: src={{ private }}/files/openstack/fedorainfracloud.org.{{item}} dest=/etc/pki/tls/certs/fedorainfracloud.org.{{item}} mode=0644 owner=root group=root
+      with_items:
+        - pem
+        - digicert.pem
+    - name: add ssl key file
+      copy: src={{ private }}/files/openstack/fedorainfracloud.org.key dest=/etc/pki/tls/private/fedorainfracloud.org.key mode=0600 owner=root group=root
+      changed_when: False
+
+    - name: allow services key access
+      acl: name=/etc/pki/tls/private/fedorainfracloud.org.key entity={{item}} etype=user permissions="r" state=present
+      with_items:
+        - keystone
+        - neutron
+        - nova
+        - rabbitmq
+        - cinder
+        - ceilometer
+        - swift
+
+    - file: state=directory path=/var/www/pub mode=0755
+    - copy: src={{ private }}/files/openstack/fedorainfracloud.org.pem dest=/var/www/pub/ mode=644
+
+    # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-database-controller.html
+    - name: install mysql packages
+      package: state=present pkg={{ item }}
+      with_items:
+        - mariadb-galera-server
+        - MySQL-python
+    - ini_file: dest=/etc/my.cnf section="mysqld" option="bind-address" value="{{ controller_public_ip }}"
+    - ini_file: dest=/etc/my.cnf section="mysqld" option="default-storage-engine" value="innodb"
+    - ini_file: dest=/etc/my.cnf section="mysqld" option="collation-server" value="utf8_general_ci"
+    - ini_file: dest=/etc/my.cnf section="mysqld" option="init-connect" value="'SET NAMES utf8'"
+    - ini_file: dest=/etc/my.cnf section="mysqld" option="character-set-server" value="utf8"
+    - service: name=mariadb state=started enabled=yes
+      # 'localhost' needs to be the last item for idempotency, see
+      # http://ansible.cc/docs/modules.html#mysql-user
+    - name: update mysql root password for localhost before setting .my.cnf
+      mysql_user: name=root host=localhost password={{ DBPASSWORD }}
+    - name: copy .my.cnf file with root password credentials
+      template: src={{ files }}/fedora-cloud/my.cnf dest=/root/.my.cnf owner=root mode=0600
+    - name: update mysql root password for all root accounts
+      mysql_user: name=root host={{ item }} password={{ DBPASSWORD }}
+      with_items:
+        - "{{ controller_public_ip }}"
+        - 127.0.0.1
+        - ::1
+    - name: copy .my.cnf file with root password credentials
+      template: src={{ files }}/fedora-cloud/my.cnf dest=/root/.my.cnf owner=root mode=0600
+    - name: delete anonymous MySQL server user for $server_hostname
+      mysql_user: user="" host="{{ controller_public_ip }}" state="absent"
+    - name: delete anonymous MySQL server user for localhost
+      mysql_user: user="" state="absent"
+    - name: remove the MySQL test database
+      mysql_db: db=test state=absent
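+    # Together the tasks above mirror mysql_secure_installation: root passwords
+    # set for every host entry, anonymous users dropped, test database removed.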
+
+    # WORKAROUNDS - already reported to OpenStack team
+    - lineinfile:
+        dest=/usr/lib/python2.7/site-packages/packstack/plugins/dashboard_500.py
+        regexp="            host_resources\.append\(*ssl_key, 'ssl_ps_server.key'\)*"
+        line="            host_resources.append((ssl_key, 'ssl_ps_server.key'))"
+        backup=yes
+    - lineinfile:
+        dest=/usr/share/openstack-puppet/modules/rabbitmq/manifests/config.pp
+        regexp="RABBITMQ_NODE_PORT"
+        line="    'RABBITMQ_NODE_PORTTTTT'        => $port,"
+        backup=yes
+    - package: state=present pkg=mongodb-server
+    - ini_file: dest=/usr/lib/systemd/system/mongod.service section=Service option=PIDFile value=/var/run/mongodb/mongod.pid
+    - lineinfile:
+        dest=/usr/lib/python2.7/site-packages/packstack/puppet/templates/mongodb.pp
+        regexp="pidfilepath"
+        line="    pidfilepath => '/var/run/mongodb/mongod.pid'"
+        insertbefore="^}"
+    - meta: flush_handlers
+    # http://openstack.redhat.com/Quickstart
+    - template: src={{ files }}/fedora-cloud/packstack-controller-answers.txt dest=/root/ owner=root mode=0600
+    - command: packstack --answer-file=/root/packstack-controller-answers.txt
+      when: packstack_sucessfully_finished.stat.exists == False
+    - file: path=/etc/packstack_sucessfully_finished state=touch
+      when: packstack_sucessfully_finished.stat.exists == False
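+    # packstack itself only runs on first deploy: the marker file created above
+    # gates every "exists == False" task in this play on later runs.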
+    # FIXME we should really reboot here
+
+    - name: Set shell for the nova user to allow cold migrations
+      user: name=nova shell=/bin/bash
+    - name: SSH authorized key for nova user
+      authorized_key: user=nova key="{{fed_cloud09_nova_public_key}}"
+    - name: SSH public key for nova user
+      template: src={{ files }}/fedora-cloud/fed_cloud09_nova_public_key dest=/var/lib/nova/.ssh/id_rsa.pub owner=nova group=nova
+    - name: Deploy private SSH key
+      copy: src={{ private }}/files/openstack/fed-cloud09-nova.key dest=/var/lib/nova/.ssh/id_rsa mode=600 owner=nova group=nova
+    - copy: src={{files}}/fedora-cloud/nova-ssh-config dest=/var/lib/nova/.ssh/config owner=nova group=nova mode=640
+
+    # http://docs.openstack.org/icehouse/install-guide/install/yum/content/basics-queue.html
+    # https://openstack.redhat.com/Securing_services#qpid
+    #### FIXME
+    - lineinfile: dest=/etc/rabbitmq/rabbitmq-env.conf regexp="^RABBITMQ_NODE_PORT=" state="absent"
+    - service: name=rabbitmq-server state=started
+
+    # flip endpoints internalurl to internal IP
+    # ceilometer
+    - shell: source /root/keystonerc_admin && keystone service-list | grep ceilometer | awk '{print $2}'
+      register: SERVICE_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      register: ENDPOINT_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8777'  --adminurl 'https://{{ controller_publicname }}:8777' --internalurl 'https://{{ controller_publicname }}:8777' ) || true
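+    # The same three-step pattern repeats for each service below: look up the
+    # service id, look up its endpoint id, then recreate the endpoint against
+    # {{ controller_publicname }} only if it still points somewhere else.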
+    # cinder
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'cinder ' | awk '{print $2}'
+      register: SERVICE_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      register: ENDPOINT_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s'  --adminurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' ) || true
+    # cinderv2
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'cinderv2' | awk '{print $2}'
+      register: SERVICE_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      register: ENDPOINT_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s'  --adminurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' ) || true
+    # glance
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'glance' | awk '{print $2}'
+      register: SERVICE_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      register: ENDPOINT_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:9292'  --adminurl 'https://{{ controller_publicname }}:9292' --internalurl 'https://{{ controller_publicname }}:9292' ) || true
+    # neutron
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'neutron' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:9696/'  --adminurl 'https://{{ controller_publicname }}:9696/' --internalurl 'https://{{ controller_publicname }}:9696/' ) || true
+    # nova
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'nova ' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s'  --adminurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' ) || true
+    # nova_ec2
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'nova_ec2' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8773/services/Cloud'  --adminurl 'https://{{ controller_publicname }}:8773/services/Admin' --internalurl 'https://{{ controller_publicname }}:8773/services/Cloud' ) || true
+    # novav3
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'novav3' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8774/v3'  --adminurl 'https://{{ controller_publicname }}:8774/v3' --internalurl 'https://{{ controller_publicname }}:8774/v3' ) || true
+    # swift
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'swift ' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{controller_publicname}}:8080/v1/AUTH_%(tenant_id)s'  --adminurl 'https://{{controller_publicname}}:8080' --internalurl 'https://{{controller_publicname}}:8080/v1/AUTH_%(tenant_id)s' ) || true
+    # swift_s3
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'swift_s3' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8080'  --adminurl 'https://{{ controller_publicname }}:8080' --internalurl 'https://{{ controller_publicname }}:8080' ) || true
+    # keystone --- NOTE: we need to use ADMIN_TOKEN here; this MUST be last before we restart OpenStack services and set up haproxy
+    - shell: source /root/keystonerc_admin && keystone service-list | grep 'keystone' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: SERVICE_ID
+    - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: ENDPOINT_ID
+    - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=certfile value=/etc/haproxy/fedorainfracloud.org.combined
+    - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=keyfile value=/etc/pki/tls/private/fedorainfracloud.org.key
+    - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=ca_certs value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone --os-token '{{ADMIN_TOKEN}}' --os-endpoint 'http://{{ controller_publicname }}:35357/v2.0' endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:5000/v2.0'  --adminurl 'https://{{ controller_publicname }}:35357/v2.0' --internalurl 'https://{{ controller_publicname }}:5000/v2.0' ) || true
+    - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=enable value=True
+    - lineinfile: dest=/root/keystonerc_admin regexp="^export OS_AUTH_URL" line="export OS_AUTH_URL=https://{{ controller_publicname }}:5000/v2.0/"
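+    # Rationale for the token auth above: recreating the keystone endpoint
+    # invalidates the URL keystonerc_admin points at mid-play, so the call goes
+    # through the admin token and keystonerc_admin is then updated to https.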
+
+    # Setup sysconfig file for novncproxy
+    - copy: src={{ files }}/fedora-cloud/openstack-nova-novncproxy dest=/etc/sysconfig/openstack-nova-novncproxy mode=644 owner=root group=root
+
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_base_url value=https://{{ controller_publicname }}:6080/vnc_auto.html
+
+    # set SSL for services
+    - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_protocol value=https
+    - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_admin_auth_url value=https://{{ controller_publicname }}:35357/v2.0
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_url value=https://{{ controller_publicname }}:9696
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=osapi_compute_listen_port value=6774
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ec2_listen_port value=6773
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=glance_api_servers value=https://{{ controller_publicname }}:9292
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=cert value=/etc/pki/tls/certs/fedorainfracloud.org.pem
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=key value=/etc/pki/tls/private/fedorainfracloud.org.key
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ca value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ssl_only value=False
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=scheduler_default_filters value=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,DiskFilter
+    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=default_floating_pool value=external
+
+    - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_protocol value=https
+    - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=bind_port value=7292
+    # configure Glance to use Swift as backend
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=default_store value=swift
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=stores value=glance.store.swift.Store
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_auth_address value=https://{{ controller_publicname }}:5000/v2.0
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_user value="services:swift"
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_key value="{{ SWIFT_PASS }}"
+    - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_create_container_on_put value="True"
+    - shell: rsync /usr/share/glance/glance-api-dist-paste.ini /etc/glance/glance-api-paste.ini
+    - shell: rsync /usr/share/glance/glance-registry-dist-paste.ini /etc/glance/glance-registry-paste.ini
+
+    - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_protocol value=https
+    - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+
+    - ini_file: dest=/etc/glance/glance-cache.conf section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:5000/v2.0
+
+    - ini_file: dest=/etc/glance/glance-scrubber.conf section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:5000/v2.0
+
+    - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_protocol value=https
+    - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=backup_swift_url value=https://{{ controller_publicname }}:8080/v1/AUTH_
+    - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=osapi_volume_listen_port value=6776
+    - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_protocol value=https
+    - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=service_protocol value=https
+    - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_protocol value=https
+    - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=service_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+
+    - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_protocol value=https
+    - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_url value=https://{{ controller_publicname }}:8774/v2
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_admin_auth_url value=https://{{ controller_publicname }}:35357/v2.0
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=use_ssl value=False
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_cert_file value=/etc/pki/tls/certs/fedorainfracloud.org.pem
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_key_file value=/etc/pki/tls/private/fedorainfracloud.org.key
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_ca_file value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=bind_port value=8696
+    - lineinfile: dest=/etc/neutron/neutron.conf regexp="^service_provider = LOADBALANCER" line="service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default" insertafter="\[service_providers]"
+    - lineinfile: dest=/etc/neutron/neutron.conf regexp="^service_provider = FIREWALL" line="service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default" insertafter="\[service_providers]"
+
+    - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_protocol value=https
+    - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+
+    - ini_file: dest=/etc/neutron/metadata_agent.ini section="filter:authtoken" option=auth_url value=https://{{ controller_publicname }}:35357/v2.0
+    - ini_file: dest=/etc/neutron/metadata_agent.ini section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:35357/v2.0
+
+    - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_protocol value=https
+    - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/swift/proxy-server.conf section=DEFAULT option=bind_port value=7080
+    - ini_file: dest=/etc/swift/proxy-server.conf section=DEFAULT option=bind_ip value=127.0.0.1
+
+    - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+    - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_protocol value=https
+    - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+    - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+    - ini_file: dest=/etc/ceilometer/ceilometer.conf section=service_credentials option=os_auth_url value=https://{{ controller_publicname }}:35357/v2.0
+    - ini_file: dest=/etc/ceilometer/ceilometer.conf section=api option=port value=6777
+
+    # enable stunnel to neutron
+    - shell: cat /etc/pki/tls/certs/fedorainfracloud.org.pem /etc/pki/tls/certs/fedorainfracloud.org.digicert.pem /etc/pki/tls/private/fedorainfracloud.org.key > /etc/haproxy/fedorainfracloud.org.combined
+    - file: path=/etc/haproxy/fedorainfracloud.org.combined owner=haproxy mode=644
+    - copy: src={{ files }}/fedora-cloud/haproxy.cfg dest=/etc/haproxy/haproxy.cfg mode=644 owner=root group=root
+    # first OS has to free the ports so haproxy can bind them, then we start OS on the modified ports
+    #- shell: openstack-service stop
+    #- service: name=haproxy state=started enabled=yes
+    #- shell: openstack-service start
+
+    - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="^OPENSTACK_KEYSTONE_URL " line="OPENSTACK_KEYSTONE_URL = 'https://{{controller_publicname}}:5000/v2.0'"
+      notify:
+        - reload httpd
+    - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="OPENSTACK_SSL_CACERT " line="OPENSTACK_SSL_CACERT = '/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem'"
+      notify:
+        - reload httpd
+
+    # configure cinder with multiple back-ends
+    # https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html
+    - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option="enabled_backends" value="equallogic-1,lvmdriver-1"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    # LVM
+    - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_group" value="cinder-volumes"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_driver" value="cinder.volume.drivers.lvm.LVMISCSIDriver"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_backend_name" value="LVM_iSCSI"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    # Dell EqualLogic - http://docs.openstack.org/trunk/config-reference/content/dell-equallogic-driver.html
+    - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="volume_driver" value="cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_ip" value="{{ IP_EQLX }}"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_login" value="{{ SAN_UNAME }}"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - name: set password for equallogic-1
+      ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_password" value="{{ SAN_PW }}"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="eqlx_group_name" value="{{ EQLX_GROUP }}"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="eqlx_pool" value="{{ EQLX_POOL }}"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+    - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="volume_backend_name" value="equallogic"
+      notify:
+        - restart cinder api
+        - restart cinder scheduler
+        - restart cinder volume
+
+    # flush handlers here in case cinder config changed and we need to restart it.
+    - meta: flush_handlers
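+    # (flush_handlers fires any handlers notified above right away, so the
+    # cinder restarts happen before the volume types below are created)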
+
+    # create storage types
+    # note that existing keys can be retrieved using: cinder extra-specs-list
+    - shell: source /root/keystonerc_admin && cinder type-create lvm
+      ignore_errors: yes
+    - shell: source /root/keystonerc_admin && cinder type-key lvm set volume_backend_name=lvm
+    - shell: source /root/keystonerc_admin && cinder type-create equallogic
+      ignore_errors: yes
+    - shell: source /root/keystonerc_admin && cinder type-key equallogic set volume_backend_name=equallogic
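+    # ignore_errors on the type-create steps keeps re-runs safe: cinder
+    # type-create exits non-zero when the type already exists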
+
+    # http://docs.openstack.org/icehouse/install-guide/install/yum/content/glance-verify.html
+    - file: path=/root/images state=directory
+    - get_url: url=http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img dest=/root/images/cirros-0.3.2-x86_64-disk.img mode=0440
+    - name: Add the cirros-0.3.2-x86_64 image
+      glance_image:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name=cirros-0.3.2-x86_64
+        disk_format=qcow2
+        is_public=True
+        file=/root/images/cirros-0.3.2-x86_64-disk.img
+
+    - name: create non-standard flavor
+      nova_flavor:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name="{{item.name}}" ram="{{item.ram}}" root="{{item.disk}}" vcpus="{{item.vcpus}}" swap="{{item.swap}}"
+        ephemeral=0
+      with_items:
+        - { name: m1.builder, ram: 5120, disk: 50, vcpus: 2, swap: 5120 }
+        - { name: ms2.builder, ram: 5120, disk: 20, vcpus: 2, swap: 100000 }
+        - { name: m2.prepare_builder, ram: 5000, disk: 16, vcpus: 2, swap: 0 }
+        # same as m.* but with swap
+        - { name: ms1.tiny, ram: 512, disk: 1, vcpus: 1, swap: 512 }
+        - { name: ms1.small, ram: 2048, disk: 20, vcpus: 1, swap: 2048 }
+        - { name: ms1.medium, ram: 4096, disk: 40, vcpus: 2, swap: 4096 }
+        - {
+            name: ms1.medium.bigswap,
+            ram: 4096,
+            disk: 40,
+            vcpus: 2,
+            swap: 40000,
+          }
+        - { name: ms1.large, ram: 8192, disk: 50, vcpus: 4, swap: 4096 }
+        - { name: ms1.xlarge, ram: 16384, disk: 160, vcpus: 8, swap: 16384 }
+        # inspired by http://aws.amazon.com/ec2/instance-types/
+        - { name: c4.large, ram: 3072, disk: 0, vcpus: 2, swap: 0 }
+        - { name: c4.xlarge, ram: 7168, disk: 0, vcpus: 4, swap: 0 }
+        - { name: c4.2xlarge, ram: 14336, disk: 0, vcpus: 8, swap: 0 }
+        - { name: r3.large, ram: 16384, disk: 32, vcpus: 2, swap: 16384 }
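+        # flavor units (nova semantics): ram and swap are in MiB, disk in GiB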
+
+    ##### Download common images #####
+    # restricted images (RHEL) are handled two steps below
+    - name: Add the images
+      glance_image:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name="{{ item.name }}"
+        disk_format=qcow2
+        is_public=True
+        copy_from="{{ item.copy_from }}"
+      with_items:
+        - name: Fedora-x86_64-20-20131211.1
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2
+        - name: Fedora-x86_64-20-20140407
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2
+        - name: Fedora-Cloud-Base-20141203-21.x86_64
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+        - name: Fedora-Cloud-Base-20141203-21.i386
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2
+        - name: Fedora-Cloud-Atomic-22_Alpha-20150305.x86_64
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Alpha/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Alpha-20150305.x86_64.qcow2
+        - name: Fedora-Cloud-Base-22_Alpha-20150305.x86_64
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Alpha/Cloud/x86_64/Images/Fedora-Cloud-Base-22_Alpha-20150305.x86_64.qcow2
+        - name: Fedora-Cloud-Atomic-22_Beta-20150415.x86_64
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Beta-20150415.x86_64.qcow2
+        - name: Fedora-Cloud-Base-22_Beta-20150415.x86_64
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Base-22_Beta-20150415.x86_64.qcow2
+        - name: Fedora-Cloud-Atomic-22-20150521.x86_64
+          copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
+        - name: Fedora-Cloud-Base-22-20150521.x86_64
+          copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+        - name: Fedora-Cloud-Base-23-20151030.x86_64
+          copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2
+        - name: CentOS-7-x86_64-GenericCloud-1503
+          copy_from: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2
+        - name: CentOS-6-x86_64-GenericCloud-20141129_01
+          copy_from: http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-20141129_01.qcow2
+        - name: Fedora-Cloud-Base-24_Alpha-7.x86_64.qcow2
+          copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/test/24_Alpha/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Alpha-7.x86_64.qcow2
+        - name: Fedora-Cloud-Base-24-1.2.x86_64.qcow2
+          copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
+        - name: Fedora-Cloud-Base-27-1.6.x86_64
+          copy_from: https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2
+        - name: Fedora-Cloud-Base-27-1.6.ppc64le
+          copy_from: https://download.fedoraproject.org/pub/fedora-secondary/releases/27/CloudImages/ppc64le/images/Fedora-Cloud-Base-27-1.6.ppc64le.qcow2
+    # RHEL6 can be downloaded from https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952
+    - stat: path=/root/images/rhel-guest-image-6.6-20141222.0.x86_64.qcow2
+      register: rhel6_image
+    - name: Add the RHEL6 image
+      glance_image:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name="rhel-guest-image-6.6-20141222.0.x86_64"
+        disk_format=qcow2
+        is_public=True
+        file="/root/images/rhel-guest-image-6.6-20141222.0.x86_64.qcow2"
+      when: rhel6_image.stat.exists == True
+
+    # RHEL7 can be downloaded from https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.0/x86_64/product-downloads
+    - stat: path=/root/images/rhel-guest-image-7.0-20140930.0.x86_64.qcow2
+      register: rhel7_image
+    - name: Add the RHEL7 image
+      glance_image:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name="rhel-guest-image-7.0-20140930.0.x86_64"
+        disk_format=qcow2
+        is_public=True
+        file="/root/images/rhel-guest-image-7.0-20140930.0.x86_64.qcow2"
+      when: rhel7_image.stat.exists == True
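+    # the stat + when-guard pattern above only registers each restricted image
+    # if its qcow2 has already been placed in /root/images by hand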
+
+    ##### PROJECTS ######
+    - name: Create tenants
+      keystone_user:
+        login_user="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        endpoint="https://{{controller_publicname}}:35357/v2.0";
+        tenant="{{ item.name }}"
+        tenant_description="{{ item.desc }}"
+        state=present
+      with_items:
+        - { name: persistent, desc: "persistent instances" }
+        - { name: qa, desc: "developmnet and test-day applications of QA" }
+        - { name: transient, desc: "transient instances" }
+        - {
+            name: infrastructure,
+            desc: "one off instances for infrastructure folks to test or check something (proof-of-concept)",
+          }
+        - {
+            name: cloudintern,
+            desc: "project for the cloudintern under mattdm",
+          }
+        - { name: cloudsig, desc: "Fedora cloud sig folks." }
+        - { name: copr, desc: "Space for Copr builders" }
+        - { name: coprdev, desc: "Development version of Copr" }
+        - {
+            name: pythonbots,
+            desc: "project for python build bot users - twisted, etc",
+          }
+        - { name: scratch, desc: "scratch and short term instances" }
+        - { name: openshift, desc: "Tenant for openshift deployment" }
+        - { name: maintainertest, desc: "Tenant for maintainer test machines" }
+        - { name: aos-ci-cd, desc: "Tenant for aos-ci-cd" }
+
+    ##### USERS #####
+    - name: Create users
+      keystone_user:
+        login_user="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        endpoint="https://{{controller_publicname}}:35357/v2.0";
+        user="{{ item.name }}"
+        email="{{ item.email }}"
+        tenant="{{ item.tenant }}"
+        password="{{ item.password }}"
+        state=present
+      no_log: True
+      with_items:
+        - {
+            name: anthomas,
+            email: "anthomas@xxxxxxxxxx",
+            tenant: cloudintern,
+            password: "{{anthomas_password}}",
+          }
+        - {
+            name: ausil,
+            email: "dennis@xxxxxxxx",
+            tenant: infrastructure,
+            password: "{{ausil_password}}",
+          }
+        - {
+            name: atomic,
+            email: "walters@xxxxxxxxxx",
+            tenant: scratch,
+            password: "{{cockpit_password}}",
+          }
+        - {
+            name: codeblock,
+            email: "codeblock@xxxxxxxx",
+            tenant: infrastructure,
+            password: "{{codeblock_password}}",
+          }
+        - {
+            name: copr,
+            email: "admin@xxxxxxxxxxxxxxxxx",
+            tenant: copr,
+            password: "{{copr_password}}",
+          }
+        - {
+            name: gholms,
+            email: "gholms@xxxxxxxxxxxxxxxxx",
+            tenant: cloudintern,
+            password: "{{gholms_password}}",
+          }
+        - {
+            name: jskladan,
+            email: "jskladan@xxxxxxxxxx",
+            tenant: qa,
+            password: "{{jskladan_password}}",
+          }
+        - {
+            name: kevin,
+            email: "kevin@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{kevin_password}}",
+          }
+        - {
+            name: laxathom,
+            email: "laxathom@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{laxathom_password}}",
+          }
+        - {
+            name: mattdm,
+            email: "mattdm@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{mattdm_password}}",
+          }
+        - {
+            name: msuchy,
+            email: "msuchy@xxxxxxxxxx",
+            tenant: copr,
+            password: "{{msuchy_password}}",
+          }
+        - {
+            name: nb,
+            email: "nb@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{nb_password}}",
+          }
+        - {
+            name: pingou,
+            email: "pingou@xxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{pingou_password}}",
+          }
+        - {
+            name: puiterwijk,
+            email: "puiterwijk@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{puiterwijk_password}}",
+          }
+        - {
+            name: stefw,
+            email: "stefw@xxxxxxxxxxxxxxxxx",
+            tenant: scratch,
+            password: "{{stefw_password}}",
+          }
+        - {
+            name: mizdebsk,
+            email: "mizdebsk@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{mizdebsk_password}}",
+          }
+        - {
+            name: kushal,
+            email: "kushal@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{kushal_password}}",
+          }
+        - {
+            name: red,
+            email: "red@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{red_password}}",
+          }
+        - {
+            name: samkottler,
+            email: "samkottler@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{samkottler_password}}",
+          }
+        - {
+            name: tflink,
+            email: "tflink@xxxxxxxxxxxxxxxxx",
+            tenant: qa,
+            password: "{{tflink_password}}",
+          }
+        - {
+            name: twisted,
+            email: "buildbot@xxxxxxxxxxxxxxxxx",
+            tenant: pythonbots,
+            password: "{{twisted_password}}",
+          }
+        - {
+            name: roshi,
+            email: "roshi@xxxxxxxxxxxxxxxxx",
+            tenant: qa,
+            password: "{{roshi_password}}",
+          }
+        - {
+            name: maxamillion,
+            email: "maxamillion@xxxxxxxxxxxxxxxxx",
+            tenant: infrastructure,
+            password: "{{maxamillion_password}}",
+          }
+        - {
+            name: clime,
+            email: "clime@xxxxxxxxxx",
+            tenant: copr,
+            password: "{{clime_password}}",
+          }
+        - {
+            name: jkadlcik,
+            email: "jkadlcik@xxxxxxxxxx",
+            tenant: copr,
+            password: "{{clime_password}}",
+          }
+        - {
+            name: misc,
+            email: "misc@xxxxxxxxxx",
+            tenant: openshift,
+            password: "{{misc_password}}",
+          }
+        - {
+            name: bowlofeggs,
+            email: "bowlofeggs@xxxxxxxxxxxxxxxxx",
+            tenant: transient,
+            password: "{{bowlofeggs_password}}",
+          }
+        - {
+            name: alivigni,
+            email: "alivigni@xxxxxxxxxx",
+            tenant: aos-ci-cd,
+            password: "{{alivigni_password}}",
+          }
+        - {
+            name: jbieren,
+            email: "jbieren@xxxxxxxxxx",
+            tenant: aos-ci-cd,
+            password: "{{jbieren_password}}",
+          }
+        - {
+            name: bpeck,
+            email: "bpeck@xxxxxxxxxx",
+            tenant: aos-ci-cd,
+            password: "{{bpeck_password}}",
+          }
+        - {
+            name: srallaba,
+            email: "srallaba@xxxxxxxxxx",
+            tenant: aos-ci-cd,
+            password: "{{srallaba_password}}",
+          }
+        - {
+            name: jburke,
+            email: "jburke@xxxxxxxxxx",
+            tenant: aos-ci-cd,
+            password: "{{jburke_password}}",
+          }
+      tags:
+        - openstack_users
+
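+    # the public keys are gathered on the control host: lookup('pipe', ...)
+    # runs the auth-keys-from-fas script locally and inlines its output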
+    - name: upload SSH keys for users
+      nova_keypair: auth_url="https://{{controller_publicname}}:35357/v2.0"
+        login_username="{{ item.username }}"
+        login_password="{{ item.password }}" login_tenant_name="{{item.tenant}}" name="{{ item.name }}"
+        public_key="{{ item.public_key }}"
+      ignore_errors: yes
+      no_log: True
+      with_items:
+        - {
+            username: anthomas,
+            name: anthomas,
+            tenant: cloudintern,
+            password: "{{anthomas_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas anthomas') }}",
+          }
+        - {
+            username: ausil,
+            name: ausil,
+            tenant: infrastructure,
+            password: "{{ausil_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas ausil') }}",
+          }
+        - {
+            username: codeblock,
+            name: codeblock,
+            tenant: infrastructure,
+            password: "{{codeblock_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas codeblock') }}",
+          }
+        - {
+            username: buildsys,
+            name: buildsys,
+            tenant: copr,
+            password: "{{copr_password}}",
+            public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeTO0ddXuhDZYM9HyM0a47aeV2yIVWhTpddrQ7/RAIs99XyrsicQLABzmdMBfiZnP0FnHBF/e+2xEkT8hHJpX6bX81jjvs2bb8KP18Nh8vaXI3QospWrRygpu1tjzqZT0Llh4ZVFscum8TrMw4VWXclzdDw6x7csCBjSttqq8F3iTJtQ9XM9/5tCAAOzGBKJrsGKV1CNIrfUo5CSzY+IUVIr8XJ93IB2ZQVASK34T/49egmrWlNB32fqAbDMC+XNmobgn6gO33Yq5Ly7Dk4kqTUx2TEaqDkZfhsVu0YcwV81bmqsltRvpj6bIXrEoMeav7nbuqKcPLTxWEY/2icePF",
+          }
+        - {
+            username: gholms,
+            name: gholms,
+            tenant: cloudintern,
+            password: "{{gholms_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas gholms') }}",
+          }
+        - {
+            username: jskladan,
+            name: jskladan,
+            tenant: qa,
+            password: "{{jskladan_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jskladan') }}",
+          }
+        - {
+            username: kevin,
+            name: kevin,
+            tenant: infrastructure,
+            password: "{{kevin_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kevin') }}",
+          }
+        - {
+            username: maxamillion,
+            name: maxamillion,
+            tenant: infrastructure,
+            password: "{{maxamillion_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas maxamillion') }}",
+          }
+        - {
+            username: laxathom,
+            name: laxathom,
+            tenant: infrastructure,
+            password: "{{laxathom_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas laxathom') }}",
+          }
+        - {
+            username: mattdm,
+            name: mattdm,
+            tenant: infrastructure,
+            password: "{{mattdm_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas mattdm') }}",
+          }
+        - {
+            username: msuchy,
+            name: msuchy,
+            tenant: copr,
+            password: "{{msuchy_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}",
+          }
+        - {
+            username: nb,
+            name: nb,
+            tenant: infrastructure,
+            password: "{{nb_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas nb') }}",
+          }
+        - {
+            username: pingou,
+            name: pingou,
+            tenant: infrastructure,
+            password: "{{pingou_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas pingou') }}",
+          }
+        - {
+            username: puiterwijk,
+            name: puiterwijk,
+            tenant: infrastructure,
+            password: "{{puiterwijk_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas puiterwijk') }}",
+          }
+        - {
+            username: stefw,
+            name: stefw,
+            tenant: scratch,
+            password: "{{stefw_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas stefw') }}",
+          }
+        - {
+            username: mizdebsk,
+            name: mizdebsk,
+            tenant: infrastructure,
+            password: "{{mizdebsk_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas mizdebsk') }}",
+          }
+        - {
+            username: kushal,
+            name: kushal,
+            tenant: infrastructure,
+            password: "{{kushal_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kushal') }}",
+          }
+        - {
+            username: red,
+            name: red,
+            tenant: infrastructure,
+            password: "{{red_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas red') }}",
+          }
+        - {
+            username: roshi,
+            name: roshi,
+            tenant: qa,
+            password: "{{roshi_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas roshi') }}",
+          }
+        - {
+            username: samkottler,
+            name: samkottler,
+            tenant: infrastructure,
+            password: "{{samkottler_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas skottler') }}",
+          }
+        - {
+            username: tflink,
+            name: tflink,
+            tenant: qa,
+            password: "{{tflink_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas tflink') }}",
+          }
+        - {
+            username: atomic,
+            name: atomic,
+            tenant: scratch,
+            password: "{{cockpit_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas walters') }}",
+          }
+        #      - { name: twisted, tenant: pythonbots, password: "{{twisted_password}}", public_key: "" }
+        - {
+            username: admin,
+            name: fedora-admin-20130801,
+            tenant: admin,
+            password: "{{ADMIN_PASS}}",
+            public_key: "{{ lookup('file', files + '/fedora-cloud/fedora-admin-20130801.pub') }}",
+          }
+        - {
+            username: asamalik,
+            name: asamalik,
+            tenant: scratch,
+            password: "{{asamalik_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas asamalik') }}",
+          }
+        - {
+            username: clime,
+            name: clime,
+            tenant: copr,
+            password: "{{clime_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas clime') }}",
+          }
+        - {
+            username: jkadlcik,
+            name: jkadlcik,
+            tenant: copr,
+            password: "{{clime_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jkadlcik') }}",
+          }
+        - {
+            username: misc,
+            name: misc,
+            tenant: openshift,
+            password: "{{misc_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas misc') }}",
+          }
+        - {
+            username: alivigni,
+            name: alivigni,
+            tenant: aos-ci-cd,
+            password: "{{alivigni_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas alivigni') }}",
+          }
+        - {
+            username: jbieren,
+            name: jbieren,
+            tenant: aos-ci-cd,
+            password: "{{jbieren_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jbieren') }}",
+          }
+        - {
+            username: bpeck,
+            name: bpeck,
+            tenant: aos-ci-cd,
+            password: "{{bpeck_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas bpeck') }}",
+          }
+        - {
+            username: srallaba,
+            name: srallaba,
+            tenant: aos-ci-cd,
+            password: "{{srallaba_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas srallaba') }}",
+          }
+        - {
+            username: jburke,
+            name: jburke,
+            tenant: aos-ci-cd,
+            password: "{{jburke_password}}",
+            public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jburke') }}",
+          }
+      tags:
+        - openstack_users
+
+    - name: Create roles for additional tenants
+      shell: source /root/keystonerc_admin && keystone role-list |grep ' {{item}} ' || keystone role-create --name {{ item }}
+      with_items: "{{all_tenants}}"
+    - name: Assign users to secondary tenants
+      shell: source /root/keystonerc_admin && keystone user-role-list --user "{{item.user}}" --tenant "{{item.tenant}}" | grep ' {{item.tenant }} ' || keystone user-role-add --user {{item.user}} --role {{item.tenant}} --tenant {{item.tenant}} || true
+      #keystone_user:
+      #  endpoint="https://{{controller_publicname}}:35357/v2.0";
+      #  login_user="admin" login_password="{{ ADMIN_PASS }}"
+      #  role=coprdev user={{ item }} tenant=coprdev
+      with_items:
+        - { user: admin, tenant: cloudintern }
+        - { user: admin, tenant: cloudsig }
+        - { user: admin, tenant: copr }
+        - { user: admin, tenant: coprdev }
+        - { user: admin, tenant: persistent }
+        - { user: admin, tenant: pythonbots }
+        - { user: admin, tenant: qa }
+        - { user: admin, tenant: infrastructure }
+        - { user: admin, tenant: scratch }
+        - { user: admin, tenant: transient }
+        - { user: admin, tenant: maintainertest }
+        - { user: admin, tenant: aos-ci-cd }
+        - { user: copr, tenant: coprdev }
+        - { user: kevin, tenant: cloudintern }
+        - { user: kevin, tenant: cloudsig }
+        - { user: kevin, tenant: copr }
+        - { user: kevin, tenant: coprdev }
+        - { user: kevin, tenant: persistent }
+        - { user: kevin, tenant: pythonbots }
+        - { user: kevin, tenant: qa }
+        - { user: kevin, tenant: scratch }
+        - { user: kevin, tenant: transient }
+        - { user: kevin, tenant: maintainertest }
+        - { user: kevin, tenant: aos-ci-cd }
+        - { user: msuchy, tenant: cloudintern }
+        - { user: msuchy, tenant: cloudsig }
+        - { user: msuchy, tenant: coprdev }
+        - { user: msuchy, tenant: infrastructure }
+        - { user: msuchy, tenant: persistent }
+        - { user: msuchy, tenant: pythonbots }
+        - { user: msuchy, tenant: qa }
+        - { user: msuchy, tenant: scratch }
+        - { user: msuchy, tenant: transient }
+        - { user: pingou, tenant: persistent }
+        - { user: puiterwijk, tenant: cloudintern }
+        - { user: puiterwijk, tenant: cloudsig }
+        - { user: puiterwijk, tenant: copr }
+        - { user: puiterwijk, tenant: coprdev }
+        - { user: puiterwijk, tenant: persistent }
+        - { user: puiterwijk, tenant: pythonbots }
+        - { user: puiterwijk, tenant: qa }
+        - { user: puiterwijk, tenant: scratch }
+        - { user: puiterwijk, tenant: transient }
+        - { user: puiterwijk, tenant: maintainertest }
+        - { user: puiterwijk, tenant: aos-ci-cd }
+        - { user: mizdebsk, tenant: aos-ci-cd }
+        - { user: mizdebsk, tenant: cloudintern }
+        - { user: mizdebsk, tenant: cloudsig }
+        - { user: mizdebsk, tenant: copr }
+        - { user: mizdebsk, tenant: coprdev }
+        - { user: mizdebsk, tenant: infrastructure }
+        - { user: mizdebsk, tenant: maintainertest }
+        - { user: mizdebsk, tenant: openshift }
+        - { user: mizdebsk, tenant: persistent }
+        - { user: mizdebsk, tenant: pythonbots }
+        - { user: mizdebsk, tenant: qa }
+        - { user: mizdebsk, tenant: scratch }
+        - { user: mizdebsk, tenant: transient }
+        - { user: clime, tenant: coprdev }
+        - { user: clime, tenant: persistent }
+        - { user: jkadlcik, tenant: coprdev }
+      tags:
+        - openstack_users
+
+    ##### NETWORK ####
+    # http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.html
+    #
+    # external network is a class C: 209.132.184.0/24
+    # 209.132.184.1  to .25 - reserved for hardware.
+    # 209.132.184.26 to .30 - reserved for test cloud external IPs
+    # 209.132.184.31 to .69 - icehouse cloud
+    # 209.132.184.70 to .89 - reserved for arm03 SOCs
+    # 209.132.184.90 to .251 - folsom cloud
+    #
+    - name: Create an external network
+      neutron_network:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name=external
+        router_external=True
+        provider_network_type=flat
+        provider_physical_network=floatnet
+      register: EXTERNAL_ID
+    - name: Create an external subnet
+      neutron_subnet:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        name=external-subnet
+        network_name=external
+        cidr="{{ public_interface_cidr }}"
+        allocation_pool_start="{{ public_floating_start }}"
+        allocation_pool_end="{{ public_floating_end }}"
+        gateway_ip="{{ public_gateway_ip }}"
+        enable_dhcp=false
+      register: EXTERNAL_SUBNET_ID
+    #- shell: source /root/keystonerc_admin && nova floating-ip-create external
+    #  when: packstack_sucessfully_finished.stat.exists == False
+
+    # 172.16.0.1/16 -- 172.22.0.1/16 - free (can be split to /20)
+    # 172.23.0.1/16 - free (but used by old cloud)
+    # 172.24.0.1/24 - RESERVED, it is used internally for OS
+    # 172.24.1.0/24 -- 172.24.255.0/24 - likely free (?)
+    # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
+    # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
+    # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
+    # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
+    # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
+    # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
+    # 172.25.96.1/20 - cloudsig (172.25.96.1 - 172.25.111.254)
+    # 172.25.112.1/20 - qa (172.25.112.1 - 172.25.127.254)
+    # 172.25.128.1/20 - pythonbots (172.25.128.1 - 172.25.143.254)
+    # 172.25.144.1/20 - coprdev (172.25.144.1 - 172.25.159.254)
+    # 172.25.160.1/20 -- 172.25.240.1/20 - free
+    # 172.26.0.1/16 -- 172.31.0.1/16 - free (can be split to /20)
+
+    - name: Create a router for all tenants
+      neutron_router:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        tenant_name="{{ item }}"
+        name="ext-to-{{ item }}"
+      with_items: "{{all_tenants}}"
+    - name: "Connect router's gateway to the external network"
+      neutron_router_gateway:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        router_name="ext-to-{{ item }}"
+        network_name="external"
+      with_items: "{{all_tenants}}"
+    - name: Create a private network for all tenants
+      neutron_network:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        tenant_name="{{ item.name }}"
+        name="{{ item.name }}-net"
+        shared="{{ item.shared }}"
+      with_items:
+        - { name: cloudintern, shared: false }
+        - { name: cloudsig, shared: false }
+        - { name: copr, shared: true }
+        - { name: coprdev, shared: true }
+        - { name: infrastructure, shared: false }
+        - { name: persistent, shared: false }
+        - { name: pythonbots, shared: false }
+        - { name: qa, shared: false }
+        - { name: scratch, shared: false }
+        - { name: transient, shared: false }
+        - { name: openshift, shared: false }
+        - { name: maintainertest, shared: false }
+        - { name: aos-ci-cd, shared: false }
+    - name: Create a subnet for all tenants
+      neutron_subnet:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        tenant_name="{{ item.name }}"
+        network_name="{{ item.name }}-net"
+        name="{{ item.name }}-subnet"
+        cidr="{{ item.cidr }}"
+        gateway_ip="{{ item.gateway }}"
+        dns_nameservers="66.35.62.163,140.211.169.201"
+      with_items:
+        - { name: cloudintern, cidr: "172.25.0.1/20", gateway: "172.25.0.1" }
+        - { name: cloudsig, cidr: "172.25.96.1/20", gateway: "172.25.96.1" }
+        - { name: copr, cidr: "172.25.80.1/20", gateway: "172.25.80.1" }
+        - { name: coprdev, cidr: "172.25.144.1/20", gateway: "172.25.144.1" }
+        - {
+            name: infrastructure,
+            cidr: "172.25.16.1/20",
+            gateway: "172.25.16.1",
+          }
+        - { name: persistent, cidr: "172.25.32.1/20", gateway: "172.25.32.1" }
+        - { name: pythonbots, cidr: "172.25.128.1/20", gateway: "172.25.128.1" }
+        - { name: qa, cidr: "172.25.112.1/20", gateway: "172.25.112.1" }
+        - { name: scratch, cidr: "172.25.64.1/20", gateway: "172.25.64.1" }
+        - { name: transient, cidr: "172.25.48.1/20", gateway: "172.25.48.1" }
+        - { name: openshift, cidr: "172.25.160.1/20", gateway: "172.25.160.1" }
+        - {
+            name: maintainertest,
+            cidr: "172.25.176.1/20",
+            gateway: "172.25.176.1",
+          }
+        - { name: aos-ci-cd, cidr: "172.25.180.1/20", gateway: "172.25.180.1" }
+    - name: "Connect router's interface to the TENANT-subnet"
+      neutron_router_interface:
+        login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+        auth_url="https://{{controller_publicname}}:35357/v2.0";
+        tenant_name="{{ item }}"
+        router_name="ext-to-{{ item }}"
+        subnet_name="{{ item }}-subnet"
+      with_items: "{{all_tenants}}"
+
+    #################
+    # Security Groups
+    ################
+    - name: "Create 'ssh-anywhere' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "ssh-anywhere-{{item}}"
+        description: "allow ssh from anywhere"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "22"
+            port_range_max: "22"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Allow nagios checks"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "allow-nagios-{{item}}"
+        description: "allow nagios checks"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "5666"
+            port_range_max: "5666"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "209.132.181.35/32"
+          - direction: "ingress"
+            ethertype: "IPv4"
+            protocol: "icmp"
+            remote_ip_prefix: "209.132.181.35/32"
+      with_items:
+        - persistent
+
+    - name: "Create 'ssh-from-persistent' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "ssh-from-persistent-{{item}}"
+        description: "allow ssh from persistent"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "22"
+            port_range_max: "22"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "172.25.32.1/20"
+      with_items:
+        - copr
+        - coprdev
+
+    - name: "Create 'ssh-internal' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "ssh-internal-{{item.name}}"
+        description: "allow ssh from {{item.name}}-network"
+        tenant_name: "{{ item.name }}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "22"
+            port_range_max: "22"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "{{ item.prefix }}"
+      with_items:
+        - { name: cloudintern, prefix: "172.25.0.1/20" }
+        - { name: cloudsig, prefix: "172.25.96.1/20" }
+        - { name: copr, prefix: "172.25.80.1/20" }
+        - { name: coprdev, prefix: "172.25.80.1/20" }
+        - { name: infrastructure, prefix: "172.25.16.1/20" }
+        - { name: persistent, prefix: "172.25.32.1/20" }
+        - { name: pythonbots, prefix: "172.25.128.1/20" }
+        - { name: qa, prefix: "172.25.112.1/20" }
+        - { name: scratch, prefix: "172.25.64.1/20" }
+        - { name: transient, prefix: "172.25.48.1/20" }
+        - { name: openshift, prefix: "172.25.160.1/20" }
+        - { name: maintainertest, prefix: "172.25.180.1/20" }
+        - { name: aos-ci-cd, prefix: "172.25.200.1/20" }
+
+    - name: "Create 'web-80-anywhere' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "web-80-anywhere-{{item}}"
+        description: "allow web-80 from anywhere"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "80"
+            port_range_max: "80"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Create 'web-443-anywhere' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "web-443-anywhere-{{item}}"
+        description: "allow web-443 from anywhere"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "443"
+            port_range_max: "443"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Create 'oci-registry-5000-anywhere' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "oci-registry-5000-anywhere-{{item}}"
+        description: "allow oci-registry-5000 from anywhere"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "5000"
+            port_range_max: "5000"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Create 'wide-open' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "wide-open-{{item}}"
+        description: "allow anything from anywhere"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "0"
+            port_range_max: "65535"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "0.0.0.0/0"
+          - direction: "ingress"
+            port_range_min: "0"
+            port_range_max: "65535"
+            ethertype: "IPv4"
+            protocol: "udp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Create 'ALL ICMP' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "all-icmp-{{item}}"
+        description: "allow all ICMP traffic"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            ethertype: "IPv4"
+            protocol: "icmp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Create 'keygen-persistent' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "keygen-persistent"
+        description: "rules for copr-keygen"
+        tenant_name: "persistent"
+        rules:
+          - direction: "ingress"
+            port_range_min: "5167"
+            port_range_max: "5167"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "172.25.32.1/20"
+          - direction: "ingress"
+            port_range_min: "80"
+            port_range_max: "80"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "172.25.32.1/20"
+
+    - name: "Create 'pg-5432-anywhere' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "pg-5432-anywhere-{{item}}"
+        description: "allow postgresql-5432 from anywhere"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "5432"
+            port_range_max: "5432"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "0.0.0.0/0"
+      with_items: "{{all_tenants}}"
+
+    - name: "Create 'fedmsg-relay-persistent' security group"
+      neutron_sec_group:
+        login_username: "admin"
+        login_password: "{{ ADMIN_PASS }}"
+        login_tenant_name: "admin"
+        auth_url: "https://{{controller_publicname}}:35357/v2.0";
+        state: "present"
+        name: "fedmsg-relay-persistent"
+        description: "allow incoming 2003 and 4001 from internal network"
+        tenant_name: "{{item}}"
+        rules:
+          - direction: "ingress"
+            port_range_min: "2003"
+            port_range_max: "2003"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "172.25.80.1/16"
+          - direction: "ingress"
+            port_range_min: "4001"
+            port_range_max: "4001"
+            ethertype: "IPv4"
+            protocol: "tcp"
+            remote_ip_prefix: "172.25.80.1/16"
+      with_items: "{{all_tenants}}"
+
+    # Update quota for Copr
+    #   SEE:
+    #   nova quota-defaults
+    #   nova quota-show --tenant $TENANT_ID
+    # default is 10 instances, 20 cores, 51200 RAM, 10 floating IPs
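+    # the tenant-list lookups below are read-only: check_mode: no lets them run
+    # under --check, and changed_when: false keeps them from reporting a change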
+    - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'copr ' | awk '{print $2}'
+      register: TENANT_ID
+      check_mode: no
+      changed_when: false
+    - shell: source /root/keystonerc_admin && nova quota-update --instances 50 --cores 100 --ram 350000 --floating-ips 10 --security-groups 20 {{ TENANT_ID.stdout }}
+
+    - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'coprdev ' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: TENANT_ID
+    - shell: source /root/keystonerc_admin && nova quota-update --instances 40 --cores 80 --ram 300000 --floating-ips 10 --security-groups 20 {{ TENANT_ID.stdout }}
+
+    #
+    # Note that we manually set the volume quota for this tenant to 20 in the web interface.
+    # nova quota-update cannot do so.
+    #
+    - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'persistent ' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: TENANT_ID
+    - shell: source /root/keystonerc_admin && nova quota-update --instances 60 --cores 175 --ram 288300 --security-groups 20 {{ TENANT_ID.stdout }}
+
+    # Transient quota
+    - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'transient ' | awk '{print $2}'
+      check_mode: no
+      changed_when: false
+      register: TENANT_ID
+    - shell: source /root/keystonerc_admin && nova quota-update --instances 30 --cores 70 --ram 153600 --security-groups 20 {{ TENANT_ID.stdout }}
diff --git a/playbooks/hosts/fedimg-dev.fedorainfracloud.org.yml b/playbooks/hosts/fedimg-dev.fedorainfracloud.org.yml
index 9a8cb9196..d8d3403e2 100644
--- a/playbooks/hosts/fedimg-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/fedimg-dev.fedorainfracloud.org.yml
@@ -3,37 +3,37 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: fedimg-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-  - basessh
+    - basessh
diff --git a/playbooks/hosts/fedora-bootstrap.fedorainfracloud.org.yml b/playbooks/hosts/fedora-bootstrap.fedorainfracloud.org.yml
index 840e7c6ef..02dbdf634 100644
--- a/playbooks/hosts/fedora-bootstrap.fedorainfracloud.org.yml
+++ b/playbooks/hosts/fedora-bootstrap.fedorainfracloud.org.yml
@@ -3,45 +3,45 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: fedora-bootstrap.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
 
   tasks:
-  - name: add packages
-    package: state=present name={{ item }}
-    with_items:
-    - httpd
-    - php
-    - mariadb-server
-    - mariadb
-    - mod_ssl
-    - wget
-    - unzip
-
-  - name: enable httpd service
-    service: name=httpd enabled=yes state=started
+    - name: add packages
+      package: state=present name={{ item }}
+      with_items:
+        - httpd
+        - php
+        - mariadb-server
+        - mariadb
+        - mod_ssl
+        - wget
+        - unzip
+
+    - name: enable httpd service
+      service: name=httpd enabled=yes state=started
diff --git a/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml b/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml
index dc7e49255..c6256251c 100644
--- a/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml
@@ -3,30 +3,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: glittergallery-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
diff --git a/playbooks/hosts/happinesspackets-stg.fedorainfracloud.org.yml b/playbooks/hosts/happinesspackets-stg.fedorainfracloud.org.yml
index 85f2e93b8..17b542b0f 100644
--- a/playbooks/hosts/happinesspackets-stg.fedorainfracloud.org.yml
+++ b/playbooks/hosts/happinesspackets-stg.fedorainfracloud.org.yml
@@ -3,39 +3,41 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: happinesspackets-stg.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - basessh
-  - fedmsg/base
-  - { role: letsencrypt, site_name: 'happinesspackets-stg.fedorainfracloud.org' }
+    - basessh
+    - fedmsg/base
+    - {
+        role: letsencrypt,
+        site_name: "happinesspackets-stg.fedorainfracloud.org",
+      }
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
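
Two prettier behaviours are visible in the hunk above: the letsencrypt role
dict gets exploded over four lines once it passes the default 80-column
printWidth, and single-quoted strings are normalized to double quotes. Both
are tunable if we would rather keep the old look; a minimal .prettierrc.yaml
sketch (the option names come from prettier's docs, the values are only a
starting point to discuss):

    # let short inline dicts stay on one line (prettier's default is 80)
    printWidth: 100
    # keep the single quotes most of our playbooks already use
    singleQuote: true
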
diff --git a/playbooks/hosts/happinesspackets.fedorainfracloud.org.yml b/playbooks/hosts/happinesspackets.fedorainfracloud.org.yml
index 2cd1acd56..bf90d87c6 100644
--- a/playbooks/hosts/happinesspackets.fedorainfracloud.org.yml
+++ b/playbooks/hosts/happinesspackets.fedorainfracloud.org.yml
@@ -3,39 +3,38 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: happinesspackets.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - basessh
-  - fedmsg/base
-  - { role: letsencrypt, site_name: 'happinesspackets.fedorainfracloud.org' }
+    - basessh
+    - fedmsg/base
+    - { role: letsencrypt, site_name: "happinesspackets.fedorainfracloud.org" }
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/hubs-dev.fedorainfracloud.org.yml b/playbooks/hosts/hubs-dev.fedorainfracloud.org.yml
index 0c0fe030d..46c1f2285 100644
--- a/playbooks/hosts/hubs-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/hubs-dev.fedorainfracloud.org.yml
@@ -3,65 +3,61 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: hubs-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - basessh
-
-  - role: hubs
-    main_user: hubs
-    hubs_url_hostname: "{{ ansible_fqdn }}"
-    hubs_secret_key: demotestinghubsmachine
-    hubs_db_type: postgresql
-    hubs_dev_mode: false
-    hubs_conf_dir: /etc/fedora-hubs
-    hubs_var_dir: /var/lib/fedora-hubs
-    hubs_ssl_cert: /etc/letsencrypt/live/{{ ansible_fqdn }}/fullchain.pem
-    hubs_ssl_key: /etc/letsencrypt/live/{{ ansible_fqdn }}/privkey.pem
-    hubs_fas_username: "{{ fedoraDummyUser }}"
-    hubs_fas_password: "{{ fedoraDummyUserPassword }}"
-
+    - basessh
+
+    - role: hubs
+      main_user: hubs
+      hubs_url_hostname: "{{ ansible_fqdn }}"
+      hubs_secret_key: demotestinghubsmachine
+      hubs_db_type: postgresql
+      hubs_dev_mode: false
+      hubs_conf_dir: /etc/fedora-hubs
+      hubs_var_dir: /var/lib/fedora-hubs
+      hubs_ssl_cert: /etc/letsencrypt/live/{{ ansible_fqdn }}/fullchain.pem
+      hubs_ssl_key: /etc/letsencrypt/live/{{ ansible_fqdn }}/privkey.pem
+      hubs_fas_username: "{{ fedoraDummyUser }}"
+      hubs_fas_password: "{{ fedoraDummyUserPassword }}"
 
   tasks:
-  - dnf: name={{item}} state=present
-    with_items:
-    - htop
-    - tmux
-    - vim
-
-  - name: add more hubs workers
-    service: name={{item}} enabled=yes state=started
-    with_items:
-    - fedora-hubs-triage@3
-    - fedora-hubs-triage@4
-    - fedora-hubs-worker@3
-    - fedora-hubs-worker@4
+    - dnf: name={{item}} state=present
+      with_items:
+        - htop
+        - tmux
+        - vim
+
+    - name: add more hubs workers
+      service: name={{item}} enabled=yes state=started
+      with_items:
+        - fedora-hubs-triage@3
+        - fedora-hubs-triage@4
+        - fedora-hubs-worker@3
+        - fedora-hubs-worker@4
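
One more aside, not part of the patch: the with_items package loops that
prettier keeps re-indenting (like the dnf task above) can take the list
directly, since the dnf module accepts a list for name; that installs
everything in one transaction and sidesteps the loop indentation question
entirely. A sketch:

    - name: install interactive tools
      dnf:
        state: present
        name:
          - htop
          - tmux
          - vim
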
diff --git a/playbooks/hosts/iddev.fedorainfracloud.org.yml b/playbooks/hosts/iddev.fedorainfracloud.org.yml
index d54829691..afafcdee3 100644
--- a/playbooks/hosts/iddev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/iddev.fedorainfracloud.org.yml
@@ -3,40 +3,40 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: iddev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - basessh
-  - sudo
-  - hosts
-  - mod_wsgi
-  - base
+    - basessh
+    - sudo
+    - hosts
+    - mod_wsgi
+    - base
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/ipv6-test.fedoraproject.org.yml b/playbooks/hosts/ipv6-test.fedoraproject.org.yml
index e3065bd10..461d4085e 100644
--- a/playbooks/hosts/ipv6-test.fedoraproject.org.yml
+++ b/playbooks/hosts/ipv6-test.fedoraproject.org.yml
@@ -8,25 +8,25 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - hosts
-  - fas_client
-  - nagios_client
-  - collectd/base
-  - sudo
+    - base
+    - rkhunter
+    - hosts
+    - fas_client
+    - nagios_client
+    - collectd/base
+    - sudo
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/lists-dev.fedorainfracloud.org.yml b/playbooks/hosts/lists-dev.fedorainfracloud.org.yml
index 8074ca926..6d47bb17c 100644
--- a/playbooks/hosts/lists-dev.fedorainfracloud.org.yml
+++ b/playbooks/hosts/lists-dev.fedorainfracloud.org.yml
@@ -3,94 +3,95 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: lists-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   vars:
-  - tcp_ports: [22, 25, 80, 443]
-  - udp_ports: []
-  - postfix_maincf: "{{ roles_path }}/base/files/postfix/main.cf/main.cf.{{ inventory_hostname }}"
+    - tcp_ports: [22, 25, 80, 443]
+    - udp_ports: []
+    - postfix_maincf: "{{ roles_path }}/base/files/postfix/main.cf/main.cf.{{ inventory_hostname }}"
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
-  - sudo
-  - hosts
-  - mod_wsgi
-  - base
+    - basessh
+    - sudo
+    - hosts
+    - mod_wsgi
+    - base
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/postfix_basic.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-
-  # Basic Apache config
-  - name: install mod_ssl
-    package: name=mod_ssl  state=present
-
-  - name: copy ssl.conf
-    copy: src="{{ files }}/lists-dev/ssl.conf" dest=/etc/httpd/conf.d/ssl.conf
-          owner=root group=root mode=0644
-    notify:
-    - reload httpd
-
-  - name: basic apache virtualhost config
-    template: src="{{ files }}/lists-dev/apache.conf.j2" dest=/etc/httpd/conf.d/lists-dev.conf
-              owner=root group=root mode=0644
-    notify:
-    - reload httpd
-
-  # Database
-  - name: install postgresql server packages
-    package: name={{ item }}  state=present
-    with_items:
-    - postgresql-server
-    - postgresql-contrib
-    - python-psycopg2
-
-  - name: initialize postgresql
-    command: /usr/bin/postgresql-setup initdb
-             creates=/var/lib/pgsql/data/postgresql.conf
-
-  - name: copy pg_hba.conf
-    copy: src="{{ files }}/lists-dev/pg_hba.conf" dest=/var/lib/pgsql/data/pg_hba.conf
-          owner=postgres group=postgres
-    notify:
-    - restart postgresql
-
-  - name: start postgresql
-    service: state=started enabled=yes name=postgresql
-
-  - name: allow running sudo commands as postgresql for ansible
-    copy: src="{{ files }}/lists-dev/sudoers-norequiretty-postgres" dest=/etc/sudoers.d/norequiretty-postgres
-          owner=root group=root mode=0440
+    - import_tasks: "{{ tasks_path }}/postfix_basic.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
+
+    # Basic Apache config
+    - name: install mod_ssl
+      package: name=mod_ssl  state=present
+
+    - name: copy ssl.conf
+      copy: src="{{ files }}/lists-dev/ssl.conf" dest=/etc/httpd/conf.d/ssl.conf
+        owner=root group=root mode=0644
+      notify:
+        - reload httpd
+
+    - name: basic apache virtualhost config
+      template:
+        src="{{ files }}/lists-dev/apache.conf.j2" dest=/etc/httpd/conf.d/lists-dev.conf
+        owner=root group=root mode=0644
+      notify:
+        - reload httpd
+
+    # Database
+    - name: install postgresql server packages
+      package: name={{ item }}  state=present
+      with_items:
+        - postgresql-server
+        - postgresql-contrib
+        - python-psycopg2
+
+    - name: initialize postgresql
+      command: /usr/bin/postgresql-setup initdb
+        creates=/var/lib/pgsql/data/postgresql.conf
+
+    - name: copy pg_hba.conf
+      copy:
+        src="{{ files }}/lists-dev/pg_hba.conf" dest=/var/lib/pgsql/data/pg_hba.conf
+        owner=postgres group=postgres
+      notify:
+        - restart postgresql
+
+    - name: start postgresql
+      service: state=started enabled=yes name=postgresql
+
+    - name: allow running sudo commands as postgresql for ansible
+      copy:
+        src="{{ files }}/lists-dev/sudoers-norequiretty-postgres" dest=/etc/sudoers.d/norequiretty-postgres
+        owner=root group=root mode=0440
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-  - name: restart postgresql
-    service: name=postgresql state=restarted
-
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - name: restart postgresql
+      service: name=postgresql state=restarted
 
 #
 # Database setup
@@ -102,75 +103,71 @@
   become: yes
   become_user: postgres
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - "{{ vars_path }}/{{ ansible_distribution }}.yml"
-
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   tasks:
-  # mailman auto-updates its schema, there can only be one admin user
-  - name: mailman DB user
-    postgresql_user: name=mailmanadmin password={{ lists_dev_mm_db_pass }}
-  - name: hyperkitty DB admin user
-    postgresql_user: name=hyperkittyadmin password={{ lists_dev_hk_db_pass }}
-  - name: hyperkitty DB user
-    postgresql_user: name=hyperkittyapp password={{ lists_dev_hk_db_pass }}
-  - name: databases creation
-    postgresql_db: name={{ item }} owner="{{ item }}admin" encoding=UTF-8
-    with_items:
-    - mailman
-    - hyperkitty
-  - name: test database creation
-    postgresql_db: name=test_hyperkitty owner=hyperkittyadmin encoding=UTF-8
-
+    # mailman auto-updates its schema, there can only be one admin user
+    - name: mailman DB user
+      postgresql_user: name=mailmanadmin password={{ lists_dev_mm_db_pass }}
+    - name: hyperkitty DB admin user
+      postgresql_user: name=hyperkittyadmin password={{ lists_dev_hk_db_pass }}
+    - name: hyperkitty DB user
+      postgresql_user: name=hyperkittyapp password={{ lists_dev_hk_db_pass }}
+    - name: databases creation
+      postgresql_db: name={{ item }} owner="{{ item }}admin" encoding=UTF-8
+      with_items:
+        - mailman
+        - hyperkitty
+    - name: test database creation
+      postgresql_db: name=test_hyperkitty owner=hyperkittyadmin encoding=UTF-8
 
 - name: setup mailman and hyperkitty
   hosts: lists-dev.fedorainfracloud.org
   gather_facts: True
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-  - role: mailman
-    mailman_db_server: localhost
-    mailman_mailman_db_pass: "{{ lists_dev_mm_db_pass }}"
-    mailman_hyperkitty_admin_db_pass: "{{ lists_dev_hk_db_pass }}"
-    mailman_hyperkitty_db_pass: "{{ lists_dev_hk_db_pass }}"
-    mailman_hyperkitty_cookie_key: "randomstringusedasacookiesecurekey-yesthisshouldbeinaprivaterepo_butidonthaveaccesstoit"
-  - collectd/base
+    - role: mailman
+      mailman_db_server: localhost
+      mailman_mailman_db_pass: "{{ lists_dev_mm_db_pass }}"
+      mailman_hyperkitty_admin_db_pass: "{{ lists_dev_hk_db_pass }}"
+      mailman_hyperkitty_db_pass: "{{ lists_dev_hk_db_pass }}"
+      mailman_hyperkitty_cookie_key: "randomstringusedasacookiesecurekey-yesthisshouldbeinaprivaterepo_butidonthaveaccesstoit"
+    - collectd/base
 
   tasks:
-
-  - name: install more needed packages
-    package: name={{ item }} state=present
-    with_items:
-    - tar
-    - vim
-    - tmux
-    - patch
-    tags:
-    - packages
-
-  #- name: easy access to the postgresql databases
-  #  template: src="{{ files }}/lists-dev/pgpass.j2" dest=/root/.pgpass
-  #            owner=root group=root mode=0600
-
-  - name: send root mail to abompard
-    lineinfile: dest=/etc/aliases regexp='^root:' line="root:abompard@xxxxxxxxxxxxxxxxx"
-    notify:
-    - reload aliases
-
-  - name: start services
-    service: state=started enabled=yes name={{ item }}
-    with_items:
-    - httpd
-    - mailman3
-    - postfix
-
+    - name: install more needed packages
+      package: name={{ item }} state=present
+      with_items:
+        - tar
+        - vim
+        - tmux
+        - patch
+      tags:
+        - packages
+
+    #- name: easy access to the postgresql databases
+    #  template: src="{{ files }}/lists-dev/pgpass.j2" dest=/root/.pgpass
+    #            owner=root group=root mode=0600
+
+    - name: send root mail to abompard
+      lineinfile: dest=/etc/aliases regexp='^root:' line="root:abompard@xxxxxxxxxxxxxxxxx"
+      notify:
+        - reload aliases
+
+    - name: start services
+      service: state=started enabled=yes name={{ item }}
+      with_items:
+        - httpd
+        - mailman3
+        - postfix
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-  - name: reload aliases
-    command: newaliases
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - name: reload aliases
+      command: newaliases
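
On the lists-dev hunks above: where a long key=value module line was
continued on a second line, prettier only re-indents the continuation, and
YAML folds a multi-line plain scalar back into one space-separated string,
so the modules see exactly the same arguments as before. Those tasks would
still survive any formatter better in native YAML form; a sketch of the
pg_hba.conf copy rewritten that way (same paths, owner and handler,
untested):

    - name: copy pg_hba.conf
      copy:
        src: "{{ files }}/lists-dev/pg_hba.conf"
        dest: /var/lib/pgsql/data/pg_hba.conf
        owner: postgres
        group: postgres
      notify:
        - restart postgresql
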
diff --git a/playbooks/hosts/magazine2.fedorainfracloud.org.yml b/playbooks/hosts/magazine2.fedorainfracloud.org.yml
index 6e0414b19..b5a8c6fbd 100644
--- a/playbooks/hosts/magazine2.fedorainfracloud.org.yml
+++ b/playbooks/hosts/magazine2.fedorainfracloud.org.yml
@@ -3,82 +3,82 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: magazine2.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   tasks:
-  - name: add packages
-    package: state=present name={{ item }}
-    with_items:
-    - httpd
-    - php
-    - php-mysql
-    - mariadb-server
-    - mariadb
-    - mod_ssl
-    - php-mcrypt
-    - php-mbstring
-    - wget
-    - unzip
-    - postfix
-    - wordpress
+    - name: add packages
+      package: state=present name={{ item }}
+      with_items:
+        - httpd
+        - php
+        - php-mysql
+        - mariadb-server
+        - mariadb
+        - mod_ssl
+        - php-mcrypt
+        - php-mbstring
+        - wget
+        - unzip
+        - postfix
+        - wordpress
 
-  - name: enable httpd service
-    service: name=httpd enabled=yes state=started
+    - name: enable httpd service
+      service: name=httpd enabled=yes state=started
 
-  - name: configure postfix for ipv4 only
-    raw: postconf -e inet_protocols=ipv4
+    - name: configure postfix for ipv4 only
+      raw: postconf -e inet_protocols=ipv4
 
-  - name: enable local postfix service
-    service: name=postfix enabled=yes state=started
+    - name: enable local postfix service
+      service: name=postfix enabled=yes state=started
 
-  - name: allow httpd to send mail
-    seboolean: name=httpd_can_sendmail state=true persistent=true
+    - name: allow httpd to send mail
+      seboolean: name=httpd_can_sendmail state=true persistent=true
 
   roles:
-  - basessh
-  - nagios_client
-  - mariadb_server
+    - basessh
+    - nagios_client
+    - mariadb_server
 
   post_tasks:
-  - name: create databaseuser
-    mysql_user: name=magazine
-                host=localhost
-                state=present
-                password="{{ magazine_db_password }}"
-                priv="magazine.*:ALL"
+    - name: create databaseuser
+      mysql_user: name=magazine
+        host=localhost
+        state=present
+        password="{{ magazine_db_password }}"
+        priv="magazine.*:ALL"
 
-  - name: Wordpress cron
-    cron: name="Wordpress cron"
-          minute="*/10"
-          job="curl -s http://localhost:8008/wp-cron.php >/dev/null"
+    - name: Wordpress cron
+      cron: name="Wordpress cron"
+        minute="*/10"
+        job="curl -s http://localhost:8008/wp-cron.php >/dev/null"
 
-  - name: Wordpress nightly update check
-    cron: name="Wordpress nightly update check"
-          special_time="daily"
-          job="yum -y -q update wordpress"
+    - name: Wordpress nightly update check
+      cron: name="Wordpress nightly update check"
+        special_time="daily"
+        job="yum -y -q update wordpress"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/regcfp2.fedorainfracloud.org.yml b/playbooks/hosts/regcfp2.fedorainfracloud.org.yml
index 3242f9c19..4302e6f18 100644
--- a/playbooks/hosts/regcfp2.fedorainfracloud.org.yml
+++ b/playbooks/hosts/regcfp2.fedorainfracloud.org.yml
@@ -3,35 +3,35 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: regcfp2.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
-  - nagios_client
-  - postgresql_server
-  - regcfp
+    - basessh
+    - nagios_client
+    - postgresql_server
+    - regcfp
 
   tasks:
diff --git a/playbooks/hosts/respins.fedorainfracloud.org.yml b/playbooks/hosts/respins.fedorainfracloud.org.yml
index d34336d29..b65be5d6f 100644
--- a/playbooks/hosts/respins.fedorainfracloud.org.yml
+++ b/playbooks/hosts/respins.fedorainfracloud.org.yml
@@ -3,30 +3,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: respins.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
+    - basessh
diff --git a/playbooks/hosts/taiga.fedorainfracloud.org.yml b/playbooks/hosts/taiga.fedorainfracloud.org.yml
index 8f1650fdc..ecca55a19 100644
--- a/playbooks/hosts/taiga.fedorainfracloud.org.yml
+++ b/playbooks/hosts/taiga.fedorainfracloud.org.yml
@@ -3,32 +3,32 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: taiga.fedorainfracloud.org
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
-  - role: taiga
-    taiga_back_version: stable
-    taiga_front_version: stable
+    - basessh
+    - role: taiga
+      taiga_back_version: stable
+      taiga_front_version: stable
diff --git a/playbooks/hosts/taigastg.fedorainfracloud.org.yml b/playbooks/hosts/taigastg.fedorainfracloud.org.yml
index 43d9359b5..859abd7ce 100644
--- a/playbooks/hosts/taigastg.fedorainfracloud.org.yml
+++ b/playbooks/hosts/taigastg.fedorainfracloud.org.yml
@@ -3,34 +3,34 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: taigastg.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
-  - role: certbot
-  - role: taiga
-    taiga_back_version: stable
-    taiga_front_version: stable
+    - basessh
+    - role: certbot
+    - role: taiga
+      taiga_back_version: stable
+      taiga_front_version: stable
diff --git a/playbooks/hosts/telegram-irc.fedorainfracloud.org.yml b/playbooks/hosts/telegram-irc.fedorainfracloud.org.yml
index b11f11906..4f0c877c9 100644
--- a/playbooks/hosts/telegram-irc.fedorainfracloud.org.yml
+++ b/playbooks/hosts/telegram-irc.fedorainfracloud.org.yml
@@ -3,38 +3,36 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: telegram-irc.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   roles:
-  - basessh
+    - basessh
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/testdays.fedorainfracloud.org.yml b/playbooks/hosts/testdays.fedorainfracloud.org.yml
index 20982c3a6..5694e6c17 100644
--- a/playbooks/hosts/testdays.fedorainfracloud.org.yml
+++ b/playbooks/hosts/testdays.fedorainfracloud.org.yml
@@ -4,34 +4,34 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: setup all the things
   hosts: testdays.fedorainfracloud.org
   gather_facts: True
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
-  - name: set hostname (required by some services, at least postfix need it)
-    hostname: name="{{inventory_hostname}}"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - name: set hostname (required by some services, at least postfix need it)
+      hostname: name="{{inventory_hostname}}"
 
   roles:
-  - basessh
-  - postgresql_server
+    - basessh
+    - postgresql_server
 
 - name: configure resultsdb and testdays
   hosts: testdays.fedorainfracloud.org
@@ -39,15 +39,14 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - { role: taskotron/resultsdb-backend, tags: ['resultsdb-be'] }
-   - { role: taskotron/resultsdb-frontend, tags: ['resultsdb-fe'] }
-   - { role: testdays, tags: ['testdays'] }
+    - { role: taskotron/resultsdb-backend, tags: ["resultsdb-be"] }
+    - { role: taskotron/resultsdb-frontend, tags: ["resultsdb-fe"] }
+    - { role: testdays, tags: ["testdays"] }
 
   handlers:
-   - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/hosts/upstreamfirst.fedorainfracloud.org.yml b/playbooks/hosts/upstreamfirst.fedorainfracloud.org.yml
index 1821b1e5c..69f1f0152 100644
--- a/playbooks/hosts/upstreamfirst.fedorainfracloud.org.yml
+++ b/playbooks/hosts/upstreamfirst.fedorainfracloud.org.yml
@@ -3,16 +3,16 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/persistent_cloud.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: do base configuration
   hosts: upstreamfirst.fedorainfracloud.org
@@ -20,30 +20,30 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - sudo
-  - collectd/base
-  - postgresql_server
-  - certbot
+    - base
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - sudo
+    - collectd/base
+    - postgresql_server
+    - certbot
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy pagure
   hosts: upstreamfirst.fedorainfracloud.org
@@ -51,28 +51,28 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
-
-#  pre_tasks:
-#  - name: install fedmsg-relay
-#    package: name=fedmsg-relay state=present
-#    tags:
-#    - pagure
-#    - pagure/fedmsg
-#  - name: and start it
-#    service: name=fedmsg-relay state=started
-#    tags:
-#    - pagure
-#    - pagure/fedmsg
-#
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+
+  #  pre_tasks:
+  #  - name: install fedmsg-relay
+  #    package: name=fedmsg-relay state=present
+  #    tags:
+  #    - pagure
+  #    - pagure/fedmsg
+  #  - name: and start it
+  #    service: name=fedmsg-relay state=started
+  #    tags:
+  #    - pagure
+  #    - pagure/fedmsg
+  #
   roles:
-      - pagure/upstreamfirst-frontend
-        #  - pagure/fedmsg
+    - pagure/upstreamfirst-frontend
+      #  - pagure/fedmsg
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: deploy ufmonitor
   hosts: upstreamfirst.fedorainfracloud.org
@@ -80,12 +80,12 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
   roles:
-      - { role: ufmonitor, tags: ['ufmonitor'] }
+    - { role: ufmonitor, tags: ["ufmonitor"] }
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/include/happy_birthday.yml b/playbooks/include/happy_birthday.yml
index 6d7a41ce2..514ee62ea 100644
--- a/playbooks/include/happy_birthday.yml
+++ b/playbooks/include/happy_birthday.yml
@@ -3,14 +3,13 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/happy_birthday.yml"
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/happy_birthday.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/include/proxies-certificates.yml b/playbooks/include/proxies-certificates.yml
index ffb5ec481..4423926e2 100644
--- a/playbooks/include/proxies-certificates.yml
+++ b/playbooks/include/proxies-certificates.yml
@@ -4,74 +4,73 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-
-  - role: httpd/mod_ssl
-
-  - role: httpd/certificate
-    certname: wildcard-2017.fedoraproject.org
-    SSLCertificateChainFile: wildcard-2017.fedoraproject.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: wildcard-2017.fedorahosted.org
-    SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: wildcard-2017.id.fedoraproject.org
-    SSLCertificateChainFile: wildcard-2017.id.fedoraproject.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: wildcard-2017.stg.fedoraproject.org
-    SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
-    when: env == "staging"
-
-  - role: httpd/certificate
-    certname: wildcard-2017.app.os.stg.fedoraproject.org
-    SSLCertificateChainFile: wildcard-2017.app.os.stg.fedoraproject.org.intermediate.cert
-    when: env == "staging"
-    tags:
-    - app.os.fedoraproject.org
-
-  - role: httpd/certificate
-    certname: wildcard-2017.app.os.fedoraproject.org
-    SSLCertificateChainFile: wildcard-2017.app.os.fedoraproject.org.intermediate.cert
-    tags:
-    - app.os.fedoraproject.org
-
-  - role: httpd/certificate
-    certname: fedoramagazine.org
-    SSLCertificateChainFile: fedoramagazine.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: getfedora.org
-    SSLCertificateChainFile: getfedora.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: flocktofedora.org
-    SSLCertificateChainFile: flocktofedora.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: qa.stg.fedoraproject.org
-    SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
-    when: env == "staging"
-
-  - role: httpd/certificate
-    certname: qa.fedoraproject.org
-    SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
-
-  - role: httpd/certificate
-    certname: secondary.koji.fedoraproject.org.letsencrypt
-    SSLCertificateChainFile: secondary.koji.fedoraproject.org.letsencrypt.intermediate.crt
-
-  - role: httpd/certificate
-    certname: fedoracommunity.org
-    SSLCertificateChainFile: fedoracommunity.org.intermediate.cert
-    tags:
-    - fedoracommunity.org
+    - role: httpd/mod_ssl
+
+    - role: httpd/certificate
+      certname: wildcard-2017.fedoraproject.org
+      SSLCertificateChainFile: wildcard-2017.fedoraproject.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: wildcard-2017.fedorahosted.org
+      SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: wildcard-2017.id.fedoraproject.org
+      SSLCertificateChainFile: wildcard-2017.id.fedoraproject.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: wildcard-2017.stg.fedoraproject.org
+      SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
+      when: env == "staging"
+
+    - role: httpd/certificate
+      certname: wildcard-2017.app.os.stg.fedoraproject.org
+      SSLCertificateChainFile: wildcard-2017.app.os.stg.fedoraproject.org.intermediate.cert
+      when: env == "staging"
+      tags:
+        - app.os.fedoraproject.org
+
+    - role: httpd/certificate
+      certname: wildcard-2017.app.os.fedoraproject.org
+      SSLCertificateChainFile: wildcard-2017.app.os.fedoraproject.org.intermediate.cert
+      tags:
+        - app.os.fedoraproject.org
+
+    - role: httpd/certificate
+      certname: fedoramagazine.org
+      SSLCertificateChainFile: fedoramagazine.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: getfedora.org
+      SSLCertificateChainFile: getfedora.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: flocktofedora.org
+      SSLCertificateChainFile: flocktofedora.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: qa.stg.fedoraproject.org
+      SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
+      when: env == "staging"
+
+    - role: httpd/certificate
+      certname: qa.fedoraproject.org
+      SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
+
+    - role: httpd/certificate
+      certname: secondary.koji.fedoraproject.org.letsencrypt
+      SSLCertificateChainFile: secondary.koji.fedoraproject.org.letsencrypt.intermediate.crt
+
+    - role: httpd/certificate
+      certname: fedoracommunity.org
+      SSLCertificateChainFile: fedoracommunity.org.intermediate.cert
+      tags:
+        - fedoracommunity.org
diff --git a/playbooks/include/proxies-fedora-web.yml b/playbooks/include/proxies-fedora-web.yml
index 9a2cbe467..896a5488c 100644
--- a/playbooks/include/proxies-fedora-web.yml
+++ b/playbooks/include/proxies-fedora-web.yml
@@ -4,66 +4,65 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
+    - role: fedora-web/main
+      website: fedoraproject.org
+    - role: fedora-web/spins
+      website: spins.fedoraproject.org
+    - role: fedora-web/start
+      website: start.fedoraproject.org
+    - role: fedora-web/boot
+      website: boot.fedoraproject.org
+    - role: fedora-web/mirrors
+      website: mirrors.fedoraproject.org
+    - role: fedora-web/communityblog
+      website: communityblog.fedoraproject.org
+    - role: fedora-web/community
+      website: fedoracommunity.org
+    - role: fedora-web/fudcon
+      website: fudcon.fedoraproject.org
+    - role: fedora-web/magazine
+      website: fedoramagazine.org
+    - role: fedora-web/getfedora
+      website: getfedora.org
+    - role: fedora-web/flocktofedora
+      website: flocktofedora.org
+    - role: fedora-web/labs
+      website: labs.fedoraproject.org
+    - role: fedora-web/arm
+      website: arm.fedoraproject.org
+    - role: fedora-web/iot
+      website: iot.fedoraproject.org
+      when: env == "staging"
+    - role: fedora-web/registry
+      website: registry.fedoraproject.org
+    - role: fedora-web/ostree
+      website: ostree.fedoraproject.org
+    - role: fedora-web/candidate-registry
+      website: candidate-registry.fedoraproject.org
+    - role: fedora-web/codecs
+      website: codecs.fedoraproject.org
+    - role: fedora-web/alt
+      website: alt.fedoraproject.org
+    - role: fedora-web/src
+      website: src.fedoraproject.org
 
-  - role: fedora-web/main
-    website: fedoraproject.org
-  - role: fedora-web/spins
-    website: spins.fedoraproject.org
-  - role: fedora-web/start
-    website: start.fedoraproject.org
-  - role: fedora-web/boot
-    website: boot.fedoraproject.org
-  - role: fedora-web/mirrors
-    website: mirrors.fedoraproject.org
-  - role: fedora-web/communityblog
-    website: communityblog.fedoraproject.org
-  - role: fedora-web/community
-    website: fedoracommunity.org
-  - role: fedora-web/fudcon
-    website: fudcon.fedoraproject.org
-  - role: fedora-web/magazine
-    website: fedoramagazine.org
-  - role: fedora-web/getfedora
-    website: getfedora.org
-  - role: fedora-web/flocktofedora
-    website: flocktofedora.org
-  - role: fedora-web/labs
-    website: labs.fedoraproject.org
-  - role: fedora-web/arm
-    website: arm.fedoraproject.org
-  - role: fedora-web/iot
-    website: iot.fedoraproject.org
-    when: env == "staging"
-  - role: fedora-web/registry
-    website: registry.fedoraproject.org
-  - role: fedora-web/ostree
-    website: ostree.fedoraproject.org
-  - role: fedora-web/candidate-registry
-    website: candidate-registry.fedoraproject.org
-  - role: fedora-web/codecs
-    website: codecs.fedoraproject.org
-  - role: fedora-web/alt
-    website: alt.fedoraproject.org
-  - role: fedora-web/src
-    website: src.fedoraproject.org
+    # Some other static content, not strictly part of "fedora-web" goes below here
+    - role: fedora-budget/proxy
+      website: budget.fedoraproject.org
 
-  # Some other static content, not strictly part of "fedora-web" goes below here
-  - role: fedora-budget/proxy
-    website: budget.fedoraproject.org
+    - role: fedora-docs/proxy
+      website: docs.fedoraproject.org
 
-  - role: fedora-docs/proxy
-    website: docs.fedoraproject.org
+    - role: fedora-docs-old/proxy
+      website: docs-old.fedoraproject.org
 
-  - role: fedora-docs-old/proxy
-    website: docs-old.fedoraproject.org
-
-  - role: developer/website
-    website: developer.fedoraproject.org
+    - role: developer/website
+      website: developer.fedoraproject.org
diff --git a/playbooks/include/proxies-fedorahosted.yml b/playbooks/include/proxies-fedorahosted.yml
index 8f413175b..e82cddb60 100644
--- a/playbooks/include/proxies-fedorahosted.yml
+++ b/playbooks/include/proxies-fedorahosted.yml
@@ -1,20 +1,19 @@
-- name: Fedorahosted. No more on our servers, but still in our hearts... 
+- name: Fedorahosted. No more on our servers, but still in our hearts...
   hosts: proxies-stg:proxies
   user: root
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-  - name: install special fedorahosted-redirects.conf with fedorahosted redirects
-    copy: src={{ files }}/httpd/fedorahosted-redirects.conf dest=/etc/httpd/conf.d/fedorahosted.org/fedorahosted-redirects.conf
-
-  - name: install special git.fedorahosted-redirects.conf with git.fedorahosted redirects
-    copy: src={{ files }}/httpd/git.fedorahosted-redirects.conf dest=/etc/httpd/conf.d/git.fedorahosted.org/fedorahosted-redirects.conf
+    - name: install special fedorahosted-redirects.conf with fedorahosted redirects
+      copy: src={{ files }}/httpd/fedorahosted-redirects.conf dest=/etc/httpd/conf.d/fedorahosted.org/fedorahosted-redirects.conf
 
+    - name: install special git.fedorahosted-redirects.conf with git.fedorahosted redirects
+      copy: src={{ files }}/httpd/git.fedorahosted-redirects.conf dest=/etc/httpd/conf.d/git.fedorahosted.org/fedorahosted-redirects.conf
diff --git a/playbooks/include/proxies-haproxy.yml b/playbooks/include/proxies-haproxy.yml
index 2b5d38a3b..8a324c862 100644
--- a/playbooks/include/proxies-haproxy.yml
+++ b/playbooks/include/proxies-haproxy.yml
@@ -4,19 +4,18 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
+    # The base haproxy role that sets it all up
+    - role: haproxy
 
-  # The base haproxy role that sets it all up
-  - role: haproxy
-
-  # And an additional apache rewrite so we can access the web stats
-  - role: haproxy/rewrite
-    website: admin.fedoraproject.org
-    path: /haproxy
+    # And an additional apache rewrite so we can access the web stats
+    - role: haproxy/rewrite
+      website: admin.fedoraproject.org
+      path: /haproxy
diff --git a/playbooks/include/proxies-miscellaneous.yml b/playbooks/include/proxies-miscellaneous.yml
index 3d6ddcee6..1bd33215f 100644
--- a/playbooks/include/proxies-miscellaneous.yml
+++ b/playbooks/include/proxies-miscellaneous.yml
@@ -4,58 +4,57 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-      # We retired this in favor of PDC
-      # https://lists.fedoraproject.org/archives/list/rel-eng@xxxxxxxxxxxxxxxxxxxxxxx/thread/LOWVTF6WTS43LNPWDEISLXUELXAH5YXR/#LOWVTF6WTS43LNPWDEISLXUELXAH5YXR
-      - file:
-          dest=/etc/httpd/conf.d/apps.fedoraproject.org/fedora-releng-dash.conf
-          state=absent
-        tags: releng-dash
-        notify: reload proxyhttpd
+    # We retired this in favor of PDC
+    # https://lists.fedoraproject.org/archives/list/rel-eng@xxxxxxxxxxxxxxxxxxxxxxx/thread/LOWVTF6WTS43LNPWDEISLXUELXAH5YXR/#LOWVTF6WTS43LNPWDEISLXUELXAH5YXR
+    - file:
+        dest=/etc/httpd/conf.d/apps.fedoraproject.org/fedora-releng-dash.conf
+        state=absent
+      tags: releng-dash
+      notify: reload proxyhttpd
 
   roles:
-
-  - role: httpd/mime-type
-    website: fedoraproject.org
-    mimetype: image/vnd.microsoft.icon
-    extensions:
-    - .ico
-
-  - role: fedmsg/crl
-    website: fedoraproject.org
-    path: /fedmsg
-
-  - role: fedmsg/gateway/slave
-    stunnel_service: "websockets"
-    stunnel_source_port: 9939
-    stunnel_destination_port: 9938
-
-  - role: httpd/fingerprints
-    website: admin.fedoraproject.org
-
-  - role: easyfix/proxy
-    website: fedoraproject.org
-    path: /easyfix
-
-  - role: review-stats/proxy
-    website: fedoraproject.org
-    path: /PackageReviewStatus
-
-  - role: membership-map/proxy
-    website: fedoraproject.org
-    path: /membership-map
-
-  - role: apps-fp-o
-    website: apps.fedoraproject.org
-    path: /
-
-  - role: pkgdb-proxy
-    tags:
-    - pkgdb2
+    - role: httpd/mime-type
+      website: fedoraproject.org
+      mimetype: image/vnd.microsoft.icon
+      extensions:
+        - .ico
+
+    - role: fedmsg/crl
+      website: fedoraproject.org
+      path: /fedmsg
+
+    - role: fedmsg/gateway/slave
+      stunnel_service: "websockets"
+      stunnel_source_port: 9939
+      stunnel_destination_port: 9938
+
+    - role: httpd/fingerprints
+      website: admin.fedoraproject.org
+
+    - role: easyfix/proxy
+      website: fedoraproject.org
+      path: /easyfix
+
+    - role: review-stats/proxy
+      website: fedoraproject.org
+      path: /PackageReviewStatus
+
+    - role: membership-map/proxy
+      website: fedoraproject.org
+      path: /membership-map
+
+    - role: apps-fp-o
+      website: apps.fedoraproject.org
+      path: /
+
+    - role: pkgdb-proxy
+      tags:
+        - pkgdb2
diff --git a/playbooks/include/proxies-redirects.yml b/playbooks/include/proxies-redirects.yml
index 17df6f63a..87ac6018e 100644
--- a/playbooks/include/proxies-redirects.yml
+++ b/playbooks/include/proxies-redirects.yml
@@ -4,775 +4,770 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
-
-  # An exceptional rewrite for bugz.fp.o
-  - role: packages3/bugz.fp.o
-    website: bugz.fedoraproject.org
-
-
-  # Various app redirects
-  - role: httpd/redirect
-    shortname: neuro
-    website: neuro.fedoraproject.org
-    path: /
-    target: https://docs.fedoraproject.org/en-US/neurofedora/overview/
-    tags:
-    - neuro
-
-  - role: httpd/redirect
-    shortname: community
-    website: admin.fedoraproject.org
-    path: /community
-    target: https://apps.fedoraproject.org/packages
-
-  - role: httpd/redirect
-    shortname: nagios
-    website: admin.fedoraproject.org
-    path: /nagios
-    target: https://nagios.fedoraproject.org/nagios/
-
-  - role: httpd/redirect
-    shortname: docs
-    website: fedoraproject.org
-    path: /docs
-    target: https://docs.fedoraproject.org/
-
-  - role: httpd/redirect
-    shortname: people-fp-o
-    website: people.fedoraproject.org
-    target: https://fedorapeople.org/
-
-  - role: httpd/redirect
-    shortname: fas
-    website: fas.fedoraproject.org
-    target: https://admin.fedoraproject.org/accounts/
-
-  - role: httpd/redirectmatch
-    shortname: codecs
-    website: codecs.fedoraproject.org
-    regex: ^.*/(.*openh264.*.rpm$)
-    target: http://ciscobinary.openh264.org/$1
-
-  - role: httpd/redirect
-    shortname: jenkins
-    website: jenkins.fedorainfracloud.org
-    target: https://jenkins-fedora-infra.apps.ci.centos.org/
-    tags: jenkins
-
-  - role: httpd/redirectmatch
-    shortname: fpaste
-    website: fpaste.org
-    regex: /(.*)$
-    target: https://paste.fedoraproject.org/$1
-
-  - role: httpd/redirectmatch
-    shortname: mailman
-    website: admin.fedoraproject.org
-    regex: /mailman/(.*)$
-    target: https://lists.fedoraproject.org/mailman/$1
-
-  - role: httpd/redirectmatch
-    shortname: mailman-pipermail
-    website: admin.fedoraproject.org
-    regex: /pipermail/(.*)$
-    target: https://lists.fedoraproject.org/pipermail/$1
-
-  - role: httpd/redirectmatch
-    shortname: 00-bodhi2-cutover-users
-    website: admin.fedoraproject.org
-    regex: /updates/user/(.*)$
-    target: https://bodhi.fedoraproject.org/users/$1
-
-  - role: httpd/redirectmatch
-    shortname: 01-bodhi2-cutover-comments-list
-    website: admin.fedoraproject.org
-    regex: /updates/comments$
-    target: https://bodhi.fedoraproject.org/comments/
-
-  # This one is sub-optimal, but we have no way to map /mine to /$username
-  - role: httpd/redirectmatch
-    shortname: 02-bodhi2-mine-fallback
-    website: admin.fedoraproject.org
-    regex: /updates/mine$
-    target: https://bodhi.fedoraproject.org/
-
-  # This is similar to /mine.  Ideally, we would redirect to
-  # /overrides/?user=$USERNAME, but we can't get that username afaik.
-  - role: httpd/redirectmatch
-    shortname: 03-bodhi2-cutover-overrides-list
-    website: admin.fedoraproject.org
-    regex: /updates/override/list$
-    target: https://bodhi.fedoraproject.org/overrides/
-
-  - role: httpd/redirectmatch
-    shortname: 04-bodhi2-new-update-gotcha
-    website: admin.fedoraproject.org
-    regex: /updates/new/*$
-    target: https://bodhi.fedoraproject.org/updates/new
-
-  - role: httpd/redirectmatch
-    shortname: 05-bodhi2-api-version
-    website: admin.fedoraproject.org
-    regex: /updates/api_version$
-    target: https://bodhi.fedoraproject.org/api_version
-
-  - role: httpd/redirectmatch
-    shortname: 06-bodhi2-login
-    website: admin.fedoraproject.org
-    regex: /updates/login$
-    target: https://bodhi.fedoraproject.org/login
-
-  - role: httpd/redirectmatch
-    shortname: 07-bodhi2-logout
-    website: admin.fedoraproject.org
-    regex: /updates/logout$
-    target: https://bodhi.fedoraproject.org/logout
-
-  - role: httpd/redirectmatch
-    shortname: 08-bodhi2-rss
-    website: admin.fedoraproject.org
-    regex: /updates/rss/rss2\.0
-    target: https://bodhi.fedoraproject.org/updates
-
-  - role: httpd/redirectmatch
-    shortname: 09-bodhi2-old-search-new-search
-    website: admin.fedoraproject.org
-    regex: /updates/search/(.+)$
-    target: https://bodhi.fedoraproject.org/updates/?like=$1
-
-  - role: httpd/redirectmatch
-    shortname: 89-bodhi2-icon
-    website: admin.fedoraproject.org
-    regex: /updates/static/images/bodhi-icon-48.png$
-    target: https://apps.fedoraproject.org/img/icons/bodhi.png
-
-  - role: httpd/redirectmatch
-    shortname: 90-bodhi2-cutover-updates
-    website: admin.fedoraproject.org
-    regex: /updates/(.+)$
-    target: https://bodhi.fedoraproject.org/updates/$1
-
-  - role: httpd/redirectmatch
-    shortname: 91-bodhi2-cutover-baseline
-    website: admin.fedoraproject.org
-    regex: /updates/*$
-    target: https://bodhi.fedoraproject.org/
-
-  # See https://github.com/fedora-infra/bodhi/issues/476
-  - role: httpd/redirectmatch
-    shortname: send-user-to-users
-    website: bodhi.fedoraproject.org
-    regex: /user/(.*)$
-    target: https://bodhi.fedoraproject.org/users/$1
-
-  - role: httpd/redirect
-    shortname: get-fedora
-    website: get.fedoraproject.org
-    target: https://getfedora.org/
-
-  - role: httpd/redirect
-    shortname: flocktofedora
-    website: flocktofedora.net
-    target: https://flocktofedora.org/
-
-  - role: httpd/redirect
-    shortname: fedoramy
-    website: fedora.my
-    target: http://www.fedora.my/
-
-  - role: httpd/redirect
-    shortname: copr
-    website: copr.fedoraproject.org
-    target: https://copr.fedorainfracloud.org/
-    when: env != "staging"
-    tags: copr
-
-  - role: httpd/redirect
-    shortname: join-fedora
-    website: join.fedoraproject.org
-    target: https://fedoraproject.org/wiki/Join
-
-  - role: httpd/redirect
-    shortname: get-help
-    website: help.fedoraproject.org
-    target: https://fedoraproject.org/get-help
-
-  - role: httpd/redirect
-    shortname: l10n
-    website: l10n.fedoraproject.org
-    target: https://translate.fedoraproject.org/
-
-  # This is just a redirect to developer, to make it easier for people to get
-  # here from Red Hat's developers.redhat.com (ticket #5216).
-  - role: httpd/redirect
-    shortname: developers
-    website: developers.fedoraproject.org
-    target: https://developer.fedoraproject.org/
-
-  # Redirect fudcon.fedoraproject.org to flocktofedora.org
-  - role: httpd/redirect
-    shortname: fudcon
-    website: fudcon.fedoraproject.org
-    path: /index.html
-    target: https://flocktofedora.org/
-
-  # Redirect specific websites from fedoraproject.org to getfedora.org
-  - role: httpd/redirect
-    shortname: main-fedoraproject
-    website: fedoraproject.org
-    path: /index.html
-    target: https://getfedora.org/
-
-  - role: httpd/redirect
-    shortname: get-fedora-old
-    website: fedoraproject.org
-    path: /get-fedora
-    target: https://getfedora.org/
-
-  - role: httpd/redirect
-    shortname: sponsors
-    website: fedoraproject.org
-    path: /sponsors
-    target: https://getfedora.org/sponsors
-
-  - role: httpd/redirect
-    shortname: code-of-conduct
-    website: fedoraproject.org
-    path: /code-of-conduct
-    target: https://docs.fedoraproject.org/fedora-project/project/code-of-conduct.html
-
-  - role: httpd/redirect
-    shortname: code-of-conduct-2
-    website: getfedora.org
-    path: /code-of-conduct
-    target: https://docs.fedoraproject.org/fedora-project/project/code-of-conduct.html
-
-  - role: httpd/redirect
-    shortname: verify
-    website: fedoraproject.org
-    path: /verify
-    target: https://getfedora.org/verify
-
-  - role: httpd/redirect
-    shortname: keys
-    website: fedoraproject.org
-    path: /keys
-    target: https://getfedora.org/keys
-
-  - role: httpd/redirect
-    shortname: release-banner
-    website: fedoraproject.org
-    path: /static/js/release-counter-ext.js
-    target: https://getfedora.org/static/js/release-counter-ext.js
-
-#
-# When there is no prerelease we redirect the prerelease urls
-# back to the main release.
-# This should be disabled when there is a prerelease
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-gfo-ws
-    website: getfedora.org
-    regex: /(.*)workstation/prerelease.*$
-    target: https://stg.getfedora.org/$1/workstation
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-gfo-srv
-    website: getfedora.org
-    regex: /(.*)server/prerelease.*$
-    target: https://stg.getfedora.org/$1/server
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-gfo-atomic
-    website: getfedora.org
-    regex: /(.*)atomic/prerelease.*$
-    target: https://stg.getfedora.org/$1/atomic
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-labs-1
-    website: labs.fedoraproject.org
-    regex: /(.*)prerelease.*$
-    target: https://labs.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-spins-1
-    website: spins.fedoraproject.org
-    regex: /(.*)prerelease.*$
-    target: https://spins.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-arm-1
-    website: arm.fedoraproject.org
-    regex: /(.*)prerelease.*$
-    target: https://arm.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-labs-2
-    website: labs.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://labs.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-spins-2
-    website: spins.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://spins.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-arm-2
-    website: arm.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://arm.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: cloud-to-atomic
-    website: getfedora.org
-    regex: /cloud/.*$
-    target: https://alt.stg.fedoraproject.org/cloud/$1
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: cloud-to-atomic-download
-    website: getfedora.org
-    regex: /(.*)/cloud/download.*$
-    target: https://alt.stg.fedoraproject.org/$1/cloud
-    when: env == 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-alt-1
-    website: alt.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://alt.stg.fedoraproject.org/$1
-    when: env == 'staging'
-
-# end staging
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-gfo-ws
-    website: getfedora.org
-    regex: /(.*)workstation/prerelease.*$
-    target: https://getfedora.org/$1/workstation
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-gfo-srv
-    website: getfedora.org
-    regex: /(.*)server/prerelease.*$
-    target: https://getfedora.org/$1/server
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-gfo-atomic
-    website: getfedora.org
-    regex: /(.*)atomic/prerelease.*$
-    target: https://getfedora.org/$1/atomic
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-labs-1
-    website: labs.fedoraproject.org
-    regex: /(.*)/prerelease.*$
-    target: https://labs.fedoraproject.org/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-spins-1
-    website: spins.fedoraproject.org
-    regex: /(.*)/prerelease.*$
-    target: https://spins.fedoraproject.org/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-arm-1
-    website: arm.fedoraproject.org
-    regex: /(.*)/prerelease.*$
-    target: https://arm.fedoraproject.org/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-labs-2
-    website: labs.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://labs.fedoraproject.org/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-spins-2
-    website: spins.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://spins.fedoraproject.org/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-arm-2
-    website: arm.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://arm.fedoraproject.org/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: prerelease-to-final-alt-1
-    website: alt.fedoraproject.org
-    regex: /prerelease.*$
-    target: https://alt.fedoraproject.org/$1
-    when: env != 'staging'
-
-# end of prod prerelease
-
-  - role: httpd/redirectmatch
-    shortname: cloud-to-atomic
-    website: getfedora.org
-    regex: /cloud/.*$
-    target: https://alt.fedoraproject.org/cloud/$1
-    when: env != 'staging'
-
-  - role: httpd/redirectmatch
-    shortname: cloud-to-atomic-download
-    website: getfedora.org
-    regex: /(.*)/cloud/download.*$
-    target: https://alt.fedoraproject.org/$1/cloud
-    when: env != 'staging'
-
-  - role: httpd/redirect
-    shortname: store
-    website: store.fedoraproject.org
-    target: "https://redhat.corpmerchandise.com/ProductList.aspx?did=20588";
-
-  # Fonts on the wiki
-  - role: httpd/redirect
-    shortname: fonts-wiki
-    website: fonts.fedoraproject.org
-    target: https://fedoraproject.org/wiki/Category:Fonts_SIG
-
-  # Releng
-  - role: httpd/redirect
-    shortname: nightly
-    website: nightly.fedoraproject.org
-    target: https://www.happyassassin.net/nightlies.html
-
-  # We retired releng-dash in favor of PDC
-  # https://lists.fedoraproject.org/archives/list/rel-eng@xxxxxxxxxxxxxxxxxxxxxxx/thread/LOWVTF6WTS43LNPWDEISLXUELXAH5YXR/#LOWVTF6WTS43LNPWDEISLXUELXAH5YXR
-  - role: httpd/redirect
-    shortname: releng-dash
-    website: apps.fedoraproject.org
-    path: /releng-dash
-    target: https://pdc.fedoraproject.org/
-
-
-  # Send fp.com to fp.org
-  - role: httpd/redirect
-    shortname: site
-    website: fedoraproject.com
-    target: https://getfedora.org/
-
-  # Planet/people convenience
-  - role: httpd/redirect
-    shortname: infofeed
-    website: fedoraproject.org
-    path: /infofeed
-    target: http://fedoraplanet.org/infofeed
-
-  - role: httpd/redirect
-    shortname: people
-    website: fedoraproject.org
-    path: /people
-    target: http://fedoraplanet.org/
-
-  - role: httpd/redirect
-    shortname: fedorapeople
-    website: fedoraproject.org
-    path: /fedorapeople
-    target: http://fedoraplanet.org/
-
-  - role: httpd/redirect
-    shortname: planet.fedoraproject.org
-    website: planet.fedoraproject.org
-    target: http://fedoraplanet.org/
-
-  # QA
-  - role: httpd/redirect
-    shortname: qa
-    website: qa.fedoraproject.org
-    target: https://fedoraproject.org/wiki/QA
-    when: env != 'staging'
-
-
-  # Various community sites
-  - role: httpd/redirect
-    shortname: it-fedoracommunity-redirect
-    website: it.fedoracommunity.org
-    target: http://www.fedoraonline.it/
-
-  - role: httpd/redirect
-    shortname: uk-fedoracommunity-redirect
-    website: uk.fedoracommunity.org
-    target: http://www.fedora-uk.org/
-
-  - role: httpd/redirect
-    shortname: tw-fedoracommunity-redirect
-    website: tw.fedoracommunity.org
-    target: https://fedora-tw.org/
-
-  # Spins
-  - role: httpd/redirect
-    shortname: kde
-    website: kde.fedoraproject.org
-    target: https://spins.fedoraproject.org/kde/
-
-
-  # Various sites that we are friends with
-  - role: httpd/redirect
-    shortname: port389
-    website: port389.org
-    target: http://directory.fedoraproject.org/
-
-  - role: httpd/redirect
-    shortname: k12linux
-    website: k12linux.org
-    target: https://fedorahosted.org/k12linux/
-
-  - role: httpd/redirect
-    shortname: dogtagpki
-    website: pki.fedoraproject.org
-    target: http://dogtagpki.org/
-
-  # Cloudy bits
-  - role: httpd/redirect
-    shortname: cloud-front-page
-    website: cloud.fedoraproject.org
-    target: https://alt.fedoraproject.org/cloud/
-
-  - role: httpd/redirectmatch
-    shortname: redirect-cloudstart
-    website: redirect.fedoraproject.org
-    regex: /(console\.aws\.amazon\.com/ec2/v2/home.*)$
-    target: https://$1
-
-  ## Cloud image redirects
-
-  # Redirects/pointers for fedora 25 BASE cloud images
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-25
-    website: cloud.fedoraproject.org
-    path: /fedora-25.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/25/CloudImages/x86_64/images/Fedora-Cloud-Base-25-1.3.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-25-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-25.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/25/CloudImages/x86_64/images/Fedora-Cloud-Base-25-1.3.x86_64.raw.xz
-
-  # Redirects/pointers for fedora 24 BASE cloud images
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-24
-    website: cloud.fedoraproject.org
-    path: /fedora-24.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-24-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-24.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.raw.xz
-
-  # Redirects/pointers for fedora 23 BASE cloud images
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-23
-    website: cloud.fedoraproject.org
-    path: /fedora-23.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-23-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-23.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-base-32bit-23-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-23.i386.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/i386/Images/Fedora-Cloud-Base-23-20151030.i386.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-base-32bit-23
-    website: cloud.fedoraproject.org
-    path: /fedora-23.i386.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/i386/Images/Fedora-Cloud-Base-23-20151030.i386.qcow2
-
-  # Redirects/pointers for fedora 23 ATOMIC cloud images
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-23
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-23.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Atomic-23-20151030.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-23-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-23.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Atomic-23-20151030.x86_64.raw.xz
-
-  # Redirects/pointers for fedora 22 BASE cloud images
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-22
-    website: cloud.fedoraproject.org
-    path: /fedora-22.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-22-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-22.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-base-32bit-22-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-22.i386.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-base-32bit-22
-    website: cloud.fedoraproject.org
-    path: /fedora-22.i386.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.qcow2
-
-  # Redirects/pointers for fedora 22 ATOMIC cloud images
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-22
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-22.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-22-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-22.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.raw.xz
-
-  # Redirects/pointers for fedora 21 BASE cloud images
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-21
-    website: cloud.fedoraproject.org
-    path: /fedora-21.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-base-64bit-21-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-21.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-base-32bit-21-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-21.i386.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-base-32bit-21
-    website: cloud.fedoraproject.org
-    path: /fedora-21.i386.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2
-
-  # Redirects/pointers for fedora 21 ATOMIC cloud images
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-21
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-21.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-21-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-21.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.raw.xz
-
-  # Except, there are no 32bit atomic images atm.
-  #- role: httpd/redirect
-  #  shortname: cloud-atomic-32bit-21-raw
-  #  website: cloud.fedoraproject.org
-  #  path: /fedora-atomic-21.i386.raw.xz
-  #  target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.raw.xz
-
-  #- role: httpd/redirect
-  #  shortname: cloud-atomic-32bit-21
-  #  website: cloud.fedoraproject.org
-  #  path: /fedora-atomic-21.i386.qcow2
-  #  target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.qcow2
-
-  # Redirects/pointers for fedora 20 cloud images
-  - role: httpd/redirect
-    shortname: cloud-64bit-20
-    website: cloud.fedoraproject.org
-    path: /fedora-20.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-32bit-20
-    website: cloud.fedoraproject.org
-    path: /fedora-20.i386.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-64bit-20-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-20.x86_64.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.raw.xz
-
-  - role: httpd/redirect
-    shortname: cloud-32bit-20-raw
-    website: cloud.fedoraproject.org
-    path: /fedora-20.i386.raw.xz
-    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.raw.xz
-
-  # Redirects/pointers for fedora 19 cloud images
-  - role: httpd/redirect
-    shortname: cloud-64bit-19
-    website: cloud.fedoraproject.org
-    path: /fedora-19.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/x86_64/Fedora-x86_64-19-20140407-sda.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-32bit-19
-    website: cloud.fedoraproject.org
-    path: /fedora-19.i386.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/i386/Fedora-i386-19-20140407-sda.qcow2
-
-  # Redirects/pointers for latest fedora cloud images.
-  - role: httpd/redirect
-    shortname: cloud-64bit-latest
-    website: cloud.fedoraproject.org
-    path: /fedora-latest.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-32bit-latest
-    website: cloud.fedoraproject.org
-    path: /fedora-latest.i386.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.qcow2
-
-  - role: httpd/redirect
-    shortname: cloud-atomic-64bit-latest
-    website: cloud.fedoraproject.org
-    path: /fedora-atomic-latest.x86_64.qcow2
-    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
+    # An exceptional rewrite for bugz.fp.o
+    - role: packages3/bugz.fp.o
+      website: bugz.fedoraproject.org
+
+    # Various app redirects
+    - role: httpd/redirect
+      shortname: neuro
+      website: neuro.fedoraproject.org
+      path: /
+      target: https://docs.fedoraproject.org/en-US/neurofedora/overview/
+      tags:
+        - neuro
+
+    - role: httpd/redirect
+      shortname: community
+      website: admin.fedoraproject.org
+      path: /community
+      target: https://apps.fedoraproject.org/packages
+
+    - role: httpd/redirect
+      shortname: nagios
+      website: admin.fedoraproject.org
+      path: /nagios
+      target: https://nagios.fedoraproject.org/nagios/
+
+    - role: httpd/redirect
+      shortname: docs
+      website: fedoraproject.org
+      path: /docs
+      target: https://docs.fedoraproject.org/
+
+    - role: httpd/redirect
+      shortname: people-fp-o
+      website: people.fedoraproject.org
+      target: https://fedorapeople.org/
+
+    - role: httpd/redirect
+      shortname: fas
+      website: fas.fedoraproject.org
+      target: https://admin.fedoraproject.org/accounts/
+
+    - role: httpd/redirectmatch
+      shortname: codecs
+      website: codecs.fedoraproject.org
+      regex: ^.*/(.*openh264.*.rpm$)
+      target: http://ciscobinary.openh264.org/$1
+
+    - role: httpd/redirect
+      shortname: jenkins
+      website: jenkins.fedorainfracloud.org
+      target: https://jenkins-fedora-infra.apps.ci.centos.org/
+      tags: jenkins
+
+    - role: httpd/redirectmatch
+      shortname: fpaste
+      website: fpaste.org
+      regex: /(.*)$
+      target: https://paste.fedoraproject.org/$1
+
+    - role: httpd/redirectmatch
+      shortname: mailman
+      website: admin.fedoraproject.org
+      regex: /mailman/(.*)$
+      target: https://lists.fedoraproject.org/mailman/$1
+
+    - role: httpd/redirectmatch
+      shortname: mailman-pipermail
+      website: admin.fedoraproject.org
+      regex: /pipermail/(.*)$
+      target: https://lists.fedoraproject.org/pipermail/$1
+
+    - role: httpd/redirectmatch
+      shortname: 00-bodhi2-cutover-users
+      website: admin.fedoraproject.org
+      regex: /updates/user/(.*)$
+      target: https://bodhi.fedoraproject.org/users/$1
+
+    - role: httpd/redirectmatch
+      shortname: 01-bodhi2-cutover-comments-list
+      website: admin.fedoraproject.org
+      regex: /updates/comments$
+      target: https://bodhi.fedoraproject.org/comments/
+
+    # This one is sub-optimal, but we have no way to map /mine to /$username
+    - role: httpd/redirectmatch
+      shortname: 02-bodhi2-mine-fallback
+      website: admin.fedoraproject.org
+      regex: /updates/mine$
+      target: https://bodhi.fedoraproject.org/
+
+    # This is similar to /mine.  Ideally, we would redirect to
+    # /overrides/?user=$USERNAME, but we can't get that username afaik.
+    - role: httpd/redirectmatch
+      shortname: 03-bodhi2-cutover-overrides-list
+      website: admin.fedoraproject.org
+      regex: /updates/override/list$
+      target: https://bodhi.fedoraproject.org/overrides/
+
+    - role: httpd/redirectmatch
+      shortname: 04-bodhi2-new-update-gotcha
+      website: admin.fedoraproject.org
+      regex: /updates/new/*$
+      target: https://bodhi.fedoraproject.org/updates/new
+
+    - role: httpd/redirectmatch
+      shortname: 05-bodhi2-api-version
+      website: admin.fedoraproject.org
+      regex: /updates/api_version$
+      target: https://bodhi.fedoraproject.org/api_version
+
+    - role: httpd/redirectmatch
+      shortname: 06-bodhi2-login
+      website: admin.fedoraproject.org
+      regex: /updates/login$
+      target: https://bodhi.fedoraproject.org/login
+
+    - role: httpd/redirectmatch
+      shortname: 07-bodhi2-logout
+      website: admin.fedoraproject.org
+      regex: /updates/logout$
+      target: https://bodhi.fedoraproject.org/logout
+
+    - role: httpd/redirectmatch
+      shortname: 08-bodhi2-rss
+      website: admin.fedoraproject.org
+      regex: /updates/rss/rss2\.0
+      target: https://bodhi.fedoraproject.org/updates
+
+    - role: httpd/redirectmatch
+      shortname: 09-bodhi2-old-search-new-search
+      website: admin.fedoraproject.org
+      regex: /updates/search/(.+)$
+      target: https://bodhi.fedoraproject.org/updates/?like=$1
+
+    - role: httpd/redirectmatch
+      shortname: 89-bodhi2-icon
+      website: admin.fedoraproject.org
+      regex: /updates/static/images/bodhi-icon-48.png$
+      target: https://apps.fedoraproject.org/img/icons/bodhi.png
+
+    - role: httpd/redirectmatch
+      shortname: 90-bodhi2-cutover-updates
+      website: admin.fedoraproject.org
+      regex: /updates/(.+)$
+      target: https://bodhi.fedoraproject.org/updates/$1
+
+    - role: httpd/redirectmatch
+      shortname: 91-bodhi2-cutover-baseline
+      website: admin.fedoraproject.org
+      regex: /updates/*$
+      target: https://bodhi.fedoraproject.org/
+
+    # See https://github.com/fedora-infra/bodhi/issues/476
+    - role: httpd/redirectmatch
+      shortname: send-user-to-users
+      website: bodhi.fedoraproject.org
+      regex: /user/(.*)$
+      target: https://bodhi.fedoraproject.org/users/$1
+
+    - role: httpd/redirect
+      shortname: get-fedora
+      website: get.fedoraproject.org
+      target: https://getfedora.org/
+
+    - role: httpd/redirect
+      shortname: flocktofedora
+      website: flocktofedora.net
+      target: https://flocktofedora.org/
+
+    - role: httpd/redirect
+      shortname: fedoramy
+      website: fedora.my
+      target: http://www.fedora.my/
+
+    - role: httpd/redirect
+      shortname: copr
+      website: copr.fedoraproject.org
+      target: https://copr.fedorainfracloud.org/
+      when: env != "staging"
+      tags: copr
+
+    - role: httpd/redirect
+      shortname: join-fedora
+      website: join.fedoraproject.org
+      target: https://fedoraproject.org/wiki/Join
+
+    - role: httpd/redirect
+      shortname: get-help
+      website: help.fedoraproject.org
+      target: https://fedoraproject.org/get-help
+
+    - role: httpd/redirect
+      shortname: l10n
+      website: l10n.fedoraproject.org
+      target: https://translate.fedoraproject.org/
+
+    # This is just a redirect to developer, to make it easier for people to get
+    # here from Red Hat's developers.redhat.com (ticket #5216).
+    - role: httpd/redirect
+      shortname: developers
+      website: developers.fedoraproject.org
+      target: https://developer.fedoraproject.org/
+
+    # Redirect fudcon.fedoraproject.org to flocktofedora.org
+    - role: httpd/redirect
+      shortname: fudcon
+      website: fudcon.fedoraproject.org
+      path: /index.html
+      target: https://flocktofedora.org/
+
+    # Redirect specific websites from fedoraproject.org to getfedora.org
+    - role: httpd/redirect
+      shortname: main-fedoraproject
+      website: fedoraproject.org
+      path: /index.html
+      target: https://getfedora.org/
+
+    - role: httpd/redirect
+      shortname: get-fedora-old
+      website: fedoraproject.org
+      path: /get-fedora
+      target: https://getfedora.org/
+
+    - role: httpd/redirect
+      shortname: sponsors
+      website: fedoraproject.org
+      path: /sponsors
+      target: https://getfedora.org/sponsors
+
+    - role: httpd/redirect
+      shortname: code-of-conduct
+      website: fedoraproject.org
+      path: /code-of-conduct
+      target: https://docs.fedoraproject.org/fedora-project/project/code-of-conduct.html
+
+    - role: httpd/redirect
+      shortname: code-of-conduct-2
+      website: getfedora.org
+      path: /code-of-conduct
+      target: https://docs.fedoraproject.org/fedora-project/project/code-of-conduct.html
+
+    - role: httpd/redirect
+      shortname: verify
+      website: fedoraproject.org
+      path: /verify
+      target: https://getfedora.org/verify
+
+    - role: httpd/redirect
+      shortname: keys
+      website: fedoraproject.org
+      path: /keys
+      target: https://getfedora.org/keys
+
+    - role: httpd/redirect
+      shortname: release-banner
+      website: fedoraproject.org
+      path: /static/js/release-counter-ext.js
+      target: https://getfedora.org/static/js/release-counter-ext.js
+
+    #
+    # When there is no prerelease we redirect the prerelease urls
+    # back to the main release.
+    # This should be disabled when there is a prerelease
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-gfo-ws
+      website: getfedora.org
+      regex: /(.*)workstation/prerelease.*$
+      target: https://stg.getfedora.org/$1/workstation
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-gfo-srv
+      website: getfedora.org
+      regex: /(.*)server/prerelease.*$
+      target: https://stg.getfedora.org/$1/server
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-gfo-atomic
+      website: getfedora.org
+      regex: /(.*)atomic/prerelease.*$
+      target: https://stg.getfedora.org/$1/atomic
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-labs-1
+      website: labs.fedoraproject.org
+      regex: /(.*)prerelease.*$
+      target: https://labs.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-spins-1
+      website: spins.fedoraproject.org
+      regex: /(.*)prerelease.*$
+      target: https://spins.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-arm-1
+      website: arm.fedoraproject.org
+      regex: /(.*)prerelease.*$
+      target: https://arm.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-labs-2
+      website: labs.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://labs.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-spins-2
+      website: spins.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://spins.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-arm-2
+      website: arm.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://arm.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: cloud-to-atomic
+      website: getfedora.org
+      regex: /cloud/.*$
+      target: https://alt.stg.fedoraproject.org/cloud/$1
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: cloud-to-atomic-download
+      website: getfedora.org
+      regex: /(.*)/cloud/download.*$
+      target: https://alt.stg.fedoraproject.org/$1/cloud
+      when: env == 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-alt-1
+      website: alt.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://alt.stg.fedoraproject.org/$1
+      when: env == 'staging'
+
+    # end staging
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-gfo-ws
+      website: getfedora.org
+      regex: /(.*)workstation/prerelease.*$
+      target: https://getfedora.org/$1/workstation
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-gfo-srv
+      website: getfedora.org
+      regex: /(.*)server/prerelease.*$
+      target: https://getfedora.org/$1/server
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-gfo-atomic
+      website: getfedora.org
+      regex: /(.*)atomic/prerelease.*$
+      target: https://getfedora.org/$1/atomic
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-labs-1
+      website: labs.fedoraproject.org
+      regex: /(.*)/prerelease.*$
+      target: https://labs.fedoraproject.org/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-spins-1
+      website: spins.fedoraproject.org
+      regex: /(.*)/prerelease.*$
+      target: https://spins.fedoraproject.org/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-arm-1
+      website: arm.fedoraproject.org
+      regex: /(.*)/prerelease.*$
+      target: https://arm.fedoraproject.org/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-labs-2
+      website: labs.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://labs.fedoraproject.org/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-spins-2
+      website: spins.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://spins.fedoraproject.org/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-arm-2
+      website: arm.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://arm.fedoraproject.org/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: prerelease-to-final-alt-1
+      website: alt.fedoraproject.org
+      regex: /prerelease.*$
+      target: https://alt.fedoraproject.org/$1
+      when: env != 'staging'
+
+    # end of prod prerelease
+
+    - role: httpd/redirectmatch
+      shortname: cloud-to-atomic
+      website: getfedora.org
+      regex: /cloud/.*$
+      target: https://alt.fedoraproject.org/cloud/$1
+      when: env != 'staging'
+
+    - role: httpd/redirectmatch
+      shortname: cloud-to-atomic-download
+      website: getfedora.org
+      regex: /(.*)/cloud/download.*$
+      target: https://alt.fedoraproject.org/$1/cloud
+      when: env != 'staging'
+
+    - role: httpd/redirect
+      shortname: store
+      website: store.fedoraproject.org
+      target: "https://redhat.corpmerchandise.com/ProductList.aspx?did=20588";
+
+    # Fonts on the wiki
+    - role: httpd/redirect
+      shortname: fonts-wiki
+      website: fonts.fedoraproject.org
+      target: https://fedoraproject.org/wiki/Category:Fonts_SIG
+
+    # Releng
+    - role: httpd/redirect
+      shortname: nightly
+      website: nightly.fedoraproject.org
+      target: https://www.happyassassin.net/nightlies.html
+
+    # We retired releng-dash in favor of PDC
+    # https://lists.fedoraproject.org/archives/list/rel-eng@xxxxxxxxxxxxxxxxxxxxxxx/thread/LOWVTF6WTS43LNPWDEISLXUELXAH5YXR/#LOWVTF6WTS43LNPWDEISLXUELXAH5YXR
+    - role: httpd/redirect
+      shortname: releng-dash
+      website: apps.fedoraproject.org
+      path: /releng-dash
+      target: https://pdc.fedoraproject.org/
+
+    # Send fp.com to fp.org
+    - role: httpd/redirect
+      shortname: site
+      website: fedoraproject.com
+      target: https://getfedora.org/
+
+    # Planet/people convenience
+    - role: httpd/redirect
+      shortname: infofeed
+      website: fedoraproject.org
+      path: /infofeed
+      target: http://fedoraplanet.org/infofeed
+
+    - role: httpd/redirect
+      shortname: people
+      website: fedoraproject.org
+      path: /people
+      target: http://fedoraplanet.org/
+
+    - role: httpd/redirect
+      shortname: fedorapeople
+      website: fedoraproject.org
+      path: /fedorapeople
+      target: http://fedoraplanet.org/
+
+    - role: httpd/redirect
+      shortname: planet.fedoraproject.org
+      website: planet.fedoraproject.org
+      target: http://fedoraplanet.org/
+
+    # QA
+    - role: httpd/redirect
+      shortname: qa
+      website: qa.fedoraproject.org
+      target: https://fedoraproject.org/wiki/QA
+      when: env != 'staging'
+
+    # Various community sites
+    - role: httpd/redirect
+      shortname: it-fedoracommunity-redirect
+      website: it.fedoracommunity.org
+      target: http://www.fedoraonline.it/
+
+    - role: httpd/redirect
+      shortname: uk-fedoracommunity-redirect
+      website: uk.fedoracommunity.org
+      target: http://www.fedora-uk.org/
+
+    - role: httpd/redirect
+      shortname: tw-fedoracommunity-redirect
+      website: tw.fedoracommunity.org
+      target: https://fedora-tw.org/
+
+    # Spins
+    - role: httpd/redirect
+      shortname: kde
+      website: kde.fedoraproject.org
+      target: https://spins.fedoraproject.org/kde/
+
+    # Various sites that we are friends with
+    - role: httpd/redirect
+      shortname: port389
+      website: port389.org
+      target: http://directory.fedoraproject.org/
+
+    - role: httpd/redirect
+      shortname: k12linux
+      website: k12linux.org
+      target: https://fedorahosted.org/k12linux/
+
+    - role: httpd/redirect
+      shortname: dogtagpki
+      website: pki.fedoraproject.org
+      target: http://dogtagpki.org/
+
+    # Cloudy bits
+    - role: httpd/redirect
+      shortname: cloud-front-page
+      website: cloud.fedoraproject.org
+      target: https://alt.fedoraproject.org/cloud/
+
+    - role: httpd/redirectmatch
+      shortname: redirect-cloudstart
+      website: redirect.fedoraproject.org
+      regex: /(console\.aws\.amazon\.com/ec2/v2/home.*)$
+      target: https://$1
+
+    ## Cloud image redirects
+
+    # Redirects/pointers for fedora 25 BASE cloud images
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-25
+      website: cloud.fedoraproject.org
+      path: /fedora-25.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/25/CloudImages/x86_64/images/Fedora-Cloud-Base-25-1.3.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-25-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-25.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/25/CloudImages/x86_64/images/Fedora-Cloud-Base-25-1.3.x86_64.raw.xz
+
+    # Redirects/pointers for fedora 24 BASE cloud images
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-24
+      website: cloud.fedoraproject.org
+      path: /fedora-24.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-24-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-24.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.raw.xz
+
+    # Redirects/pointers for fedora 23 BASE cloud images
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-23
+      website: cloud.fedoraproject.org
+      path: /fedora-23.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-23-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-23.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-base-32bit-23-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-23.i386.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/i386/Images/Fedora-Cloud-Base-23-20151030.i386.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-base-32bit-23
+      website: cloud.fedoraproject.org
+      path: /fedora-23.i386.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/i386/Images/Fedora-Cloud-Base-23-20151030.i386.qcow2
+
+    # Redirects/pointers for fedora 23 ATOMIC cloud images
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-23
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-23.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Atomic-23-20151030.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-23-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-23.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Atomic-23-20151030.x86_64.raw.xz
+
+    # Redirects/pointers for fedora 22 BASE cloud images
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-22
+      website: cloud.fedoraproject.org
+      path: /fedora-22.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-22-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-22.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-base-32bit-22-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-22.i386.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-base-32bit-22
+      website: cloud.fedoraproject.org
+      path: /fedora-22.i386.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.qcow2
+
+    # Redirects/pointers for fedora 22 ATOMIC cloud images
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-22
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-22.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-22-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-22.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.raw.xz
+
+    # Redirects/pointers for fedora 21 BASE cloud images
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-21
+      website: cloud.fedoraproject.org
+      path: /fedora-21.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-base-64bit-21-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-21.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-base-32bit-21-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-21.i386.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-base-32bit-21
+      website: cloud.fedoraproject.org
+      path: /fedora-21.i386.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2
+
+    # Redirects/pointers for fedora 21 ATOMIC cloud images
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-21
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-21.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-21-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-21.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.raw.xz
+
+    # Except, there are no 32bit atomic images atm.
+    #- role: httpd/redirect
+    #  shortname: cloud-atomic-32bit-21-raw
+    #  website: cloud.fedoraproject.org
+    #  path: /fedora-atomic-21.i386.raw.xz
+    #  target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.raw.xz
+
+    #- role: httpd/redirect
+    #  shortname: cloud-atomic-32bit-21
+    #  website: cloud.fedoraproject.org
+    #  path: /fedora-atomic-21.i386.qcow2
+    #  target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.qcow2
+
+    # Redirects/pointers for fedora 20 cloud images
+    - role: httpd/redirect
+      shortname: cloud-64bit-20
+      website: cloud.fedoraproject.org
+      path: /fedora-20.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-32bit-20
+      website: cloud.fedoraproject.org
+      path: /fedora-20.i386.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-64bit-20-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-20.x86_64.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.raw.xz
+
+    - role: httpd/redirect
+      shortname: cloud-32bit-20-raw
+      website: cloud.fedoraproject.org
+      path: /fedora-20.i386.raw.xz
+      target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.raw.xz
+
+    # Redirects/pointers for fedora 19 cloud images
+    - role: httpd/redirect
+      shortname: cloud-64bit-19
+      website: cloud.fedoraproject.org
+      path: /fedora-19.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/x86_64/Fedora-x86_64-19-20140407-sda.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-32bit-19
+      website: cloud.fedoraproject.org
+      path: /fedora-19.i386.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/i386/Fedora-i386-19-20140407-sda.qcow2
+
+    # Redirects/pointers for latest fedora cloud images.
+    - role: httpd/redirect
+      shortname: cloud-64bit-latest
+      website: cloud.fedoraproject.org
+      path: /fedora-latest.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-32bit-latest
+      website: cloud.fedoraproject.org
+      path: /fedora-latest.i386.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.qcow2
+
+    - role: httpd/redirect
+      shortname: cloud-atomic-64bit-latest
+      website: cloud.fedoraproject.org
+      path: /fedora-atomic-latest.x86_64.qcow2
+      target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
diff --git a/playbooks/include/proxies-reverseproxy.yml b/playbooks/include/proxies-reverseproxy.yml
index 2859a6351..2250fe83d 100644
--- a/playbooks/include/proxies-reverseproxy.yml
+++ b/playbooks/include/proxies-reverseproxy.yml
@@ -4,772 +4,768 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   vars:
-  - varnish_url: http://localhost:6081
+    - varnish_url: http://localhost:6081
 
   pre_tasks:
-
-  - name: Remove some crusty files from bygone eras
-    file: dest=/etc/httpd/conf.d/{{item}} state=absent
-    with_items:
-    - meetbot.fedoraproject.org/reversepassproxy.conf
-    - meetbot.fedoraproject.org/meetbot.conf
-    notify:
-    - reload proxyhttpd
-    tags:
-    - httpd
-    - httpd/reverseproxy
-
+    - name: Remove some crusty files from bygone eras
+      file: dest=/etc/httpd/conf.d/{{item}} state=absent
+      with_items:
+        - meetbot.fedoraproject.org/reversepassproxy.conf
+        - meetbot.fedoraproject.org/meetbot.conf
+      notify:
+        - reload proxyhttpd
+      tags:
+        - httpd
+        - httpd/reverseproxy
 
   roles:
-
-  - role: httpd/reverseproxy
-    website: copr.fedoraproject.org
-    destname: coprapi
-    when: env != "staging"
-    tags: copr
-
-  - role: httpd/reverseproxy
-    website: copr.fedoraproject.org
-    destname: copr
-    proxyurl: http://localhost:10070
-    keephost: true
-    when: env == "staging"
-    tags: copr
-
-  - role: httpd/reverseproxy
-    website: nagios.fedoraproject.org
-    destname: nagios
-    remotepath: /
-    proxyurl: http://noc01.phx2.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: lists.fedoraproject.org
-    destname: mailman3
-    localpath: /
-    remotepath: /
-    header_scheme: true
-    keephost: true
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: lists.fedorahosted.org
-    destname: mailman3
-    localpath: /
-    remotepath: /
-    header_scheme: true
-    keephost: true
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: lists.pagure.io
-    destname: mailman3
-    localpath: /
-    remotepath: /
-    header_scheme: true
-    keephost: true
-    proxyurl: "{{ varnish_url }}"
-
-  # The place for the raw originals
-  - role: httpd/reverseproxy
-    website: meetbot-raw.fedoraproject.org
-    destname: meetbot
-    remotepath: /meetbot/
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://value01
-
-  # The place for the fancy mote view
-  - role: httpd/reverseproxy
-    website: meetbot.fedoraproject.org
-    destname: mote
-    #remotepath: /mote/
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://value01
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: nuancier
-    localpath: /nuancier
-    remotepath: /nuancier
-    header_scheme: true
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: github2fedmsg
-    localpath: /github2fedmsg
-    remotepath: /github2fedmsg
-    header_scheme: true
-    proxyurl: http://localhost:10037
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: fedora-notifications
-    localpath: /notifications
-    remotepath: /notifications
-    header_scheme: true
-    proxyurl: http://localhost:10036
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: packages
-    localpath: /packages
-    remotepath: /packages
-    proxyurl: http://localhost:10016
-
-  - role: httpd/reverseproxy
-    website: ask.fedoraproject.org
-    destname: askbot
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: paste.fedoraproject.org
-    destname: modernpaste
-    keephost: true
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: awx.fedoraproject.org
-    destname: awx
-    remotepath: /
-    localpath: /
-    proxyurl: http://localhost:10069
-    when: env == "production"
-    tags:
-    - awx.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: totpcgiprovision
-    localpath: /totpcgiprovision
-    proxyurl: http://localhost:10019
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: fas
-    remotepath: /accounts
-    localpath: /accounts
-    proxyurl: http://localhost:10004
-
-  # Fedoauth is odd here -- it has an entry for both stg and prod.
-  - role: httpd/reverseproxy
-    website: id.stg.fedoraproject.org
-    destname: id
-    proxyurl: http://localhost:10020
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: username.id.stg.fedoraproject.org
-    destname: usernameid
-    proxyurl: http://localhost:10020
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: id.stg.fedoraproject.org
-    destname: 00-kdcproxy
-    remotepath: /KdcProxy
-    localpath: /KdcProxy
-    proxyurl: http://localhost:10053
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: id.stg.fedoraproject.org
-    destname: 00-ipa
-    remotepath: /ipa
-    localpath: /ipa
-    proxyurl: http://localhost:10061
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: id.fedoraproject.org
-    destname: id
-    proxyurl: http://localhost:10020
-    tags:
-    - id.fedoraproject.org
-    when: env != "staging"
-
-  - role: httpd/reverseproxy
-    website: username.id.fedoraproject.org
-    destname: usernameid
-    proxyurl: http://localhost:10020
-    tags:
-    - id.fedoraproject.org
-    when: env != "staging"
-
-  - role: httpd/reverseproxy
-    website: id.fedoraproject.org
-    destname: 00-kdcproxy
-    remotepath: /KdcProxy
-    localpath: /KdcProxy
-    proxyurl: http://localhost:10053
-    when: env != "staging"
-
-  - role: httpd/reverseproxy
-    website: id.fedoraproject.org
-    destname: 00-ipa
-    remotepath: /ipa
-    localpath: /ipa
-    proxyurl: http://localhost:10061
-    when: env != "staging"
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: datagrepper
-    remotepath: /datagrepper
-    localpath: /datagrepper
-    rewrite: true
-    proxyurl: http://localhost:10028
-
-  - role: httpd/reverseproxy
-    website: badges.fedoraproject.org
-    destname: badges
-    proxyurl: http://localhost:10032
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: fedocal
-    remotepath: /calendar
-    localpath: /calendar
-    header_scheme: true
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: kerneltest
-    remotepath: /kerneltest
-    localpath: /kerneltest
-    header_scheme: true
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: qa.fedoraproject.org
-    destname: blockerbugs
-    remotepath: /blockerbugs
-    localpath: /blockerbugs
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: fedoraproject.org
-    destname: fp-wiki
-    wpath: /w
-    wikipath: /wiki
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: bodhi.fedoraproject.org
-    destname: bodhi
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: bodhi
-
-  - role: httpd/reverseproxy
-    website: caiapi.fedoraproject.org
-    destname: caiapi
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: caiapi
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: transtats.fedoraproject.org
-    destname: transtats
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: transtats
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: mirrormanager
-    remotepath: /mirrormanager
-    localpath: /mirrormanager
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: mirrors.fedoraproject.org
-    destname: mirrormanager-mirrorlist
-    proxyurl: http://localhost:10002
-
-  - role: httpd/reverseproxy
-    website: download.fedoraproject.org
-    destname: mirrormanager-redirector
-    proxyurl: http://localhost:10002
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: koschei
-    localpath: /koschei
-    remotepath: /koschei
-    proxyurl: "{{ varnish_url }}"
-
-  - role: httpd/reverseproxy
-    website: koschei.fedoraproject.org
-    destname: koschei
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: koschei
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: mdapi
-    remotepath: /mdapi
-    localpath: /mdapi
-    proxyurl: http://localhost:10043
-
-  - role: httpd/reverseproxy
-    website: openqa.fedoraproject.org
-    destname: openqa
-    balancer_name: openqa
-    balancer_members: ['openqa01:80']
-    http_not_https_yes_this_is_insecure_and_i_feel_bad: true
-    when: env == "production"
-    tags: openqa
-
-  - role: httpd/reverseproxy
-    website: openqa.fedoraproject.org
-    destname: openqa
-    balancer_name: openqa-stg
-    balancer_members: ['openqa-stg01.qa.fedoraproject.org:80']
-    http_not_https_yes_this_is_insecure_and_i_feel_bad: true
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: autocloud
-    localpath: /autocloud
-    remotepath: /autocloud
-    proxyurl: http://localhost:10041
-
-  - role: httpd/reverseproxy
-    website: pdc.fedoraproject.org
-    destname: pdc
-    proxyurl: http://localhost:10045
-    header_scheme: true
-    tags: pdc
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: zanata2fedmsg
-    localpath: /zanata2fedmsg
-    remotepath: /zanata2fedmsg
-    proxyurl: http://localhost:10046
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: yk-val
-    remotepath: /yk-val/verify
-    localpath: /yk-val/verify
-    proxyurl: http://localhost:10004
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: pager
-    remotepath: /pager
-    localpath: /pager
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://sundries01
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: awstats
-    remotepath: /awstats
-    localpath: /awstats
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://log01
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: epylog
-    remotepath: /epylog
-    localpath: /epylog
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://log01
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: maps
-    remotepath: /maps
-    localpath: /maps
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://log01
-
-  - role: httpd/reverseproxy
-    website: fedoraproject.org
-    destname: freemedia
-    remotepath: /freemedia
-    localpath: /freemedia
-    proxyurl: http://localhost:10011
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: collectd
-    localpath: /collectd
-    remotepath: /collectd
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://log01
-
-  ### Four entries for taskotron for production
-  - role: httpd/reverseproxy
-    website: taskotron.fedoraproject.org
-    destname: taskotron
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://taskotron01.vpn.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: taskotron.fedoraproject.org
-    destname: taskotron-resultsdb
-    localpath: /resultsdb
-    remotepath: /resultsdb
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb01.vpn.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: taskotron.fedoraproject.org
-    destname: taskotron-resultsdbapi
-    localpath: /resultsdb_api
-    remotepath: /resultsdb_api
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb01.vpn.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: taskotron.fedoraproject.org
-    destname: taskotron-execdb
-    localpath: /execdb
-    remotepath: /execdb
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb01.vpn.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: taskotron.fedoraproject.org
-    destname: taskotron-vault
-    localpath: /vault
-    remotepath: /vault
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb01.vpn.fedoraproject.org
-
-
-  ### And four entries for taskotron for staging
-  - role: httpd/reverseproxy
-    website: taskotron.stg.fedoraproject.org
-    destname: taskotron
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://taskotron-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: taskotron.stg.fedoraproject.org
-    destname: taskotron-resultsdb
-    localpath: /resultsdb
-    remotepath: /resultsdb
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: taskotron.stg.fedoraproject.org
-    destname: taskotron-resultsdbapi
-    localpath: /resultsdb_api
-    remotepath: /resultsdb_api
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: taskotron.stg.fedoraproject.org
-    destname: taskotron-execdb
-    localpath: /execdb
-    remotepath: /execdb
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  ### Beaker production
-  - role: httpd/reverseproxy
-    website: beaker.qa.fedoraproject.org
-    destname: beaker
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://beaker01.vpn.fedoraproject.org
-    when: env == "production"
-
-  ### Beaker staging
-  - role: httpd/reverseproxy
-    website: beaker.stg.fedoraproject.org
-    destname: beaker-stg
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://beaker-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  ### QA staging
-
-  - role: httpd/reverseproxy
-    website: qa.stg.fedoraproject.org
-    destname: qa-stg
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://qa-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: qa.stg.fedoraproject.org
-    destname: blockerbugs
-    remotepath: /blockerbugs
-    localpath: /blockerbugs
-    proxyurl: "{{ varnish_url }}"
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: phab.qa.stg.fedoraproject.org
-    destname: qa-stg-phab
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://phab.qa-stg01.qa.fedoraproject.org
-    keephost: true
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: docs.qa.stg.fedoraproject.org
-    destname: qa-stg-docs
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://docs.qa-stg01.qa.fedoraproject.org
-    when: env == "staging"
-
-  ### QA production
-
-  - role: httpd/reverseproxy
-    website: qa.fedoraproject.org
-    destname: qa-prod
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://qa-prod01.vpn.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: phab.qa.fedoraproject.org
-    destname: qa-prod-phab
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://phab.qa-prod01.vpn.fedoraproject.org
-    keephost: true
-
-  - role: httpd/reverseproxy
-    website: docs.qa.fedoraproject.org
-    destname: qa-prod-docs
-    # Talk directly to the app server, not haproxy
-    proxyurl: http://docs.qa-prod01.vpn.fedoraproject.org
-
-  # This one gets its own role (instead of httpd/reverseproxy) so that it can
-  # copy in some silly static resources (globe.png, index.html)
-  - role: geoip-city-wsgi/proxy
-    website: geoip.fedoraproject.org
-    proxyurl: http://localhost:10029
-
-  - role: httpd/reverseproxy
-    website: src.fedoraproject.org
-    destname: git
-    proxyurl: http://localhost:10057
-    header_scheme: true
-    keephost: true
-
-  - role: httpd/reverseproxy
-    website: osbs.fedoraproject.org
-    destname: osbs
-    proxyurl: http://localhost:10047
-
-  - role: httpd/reverseproxy
-    website: registry.fedoraproject.org
-    destname: registry-fedora
-    # proxyurl in this one is totally ignored, because Docker.
-    # (turns out it uses PATCH requests that Varnish cannot deal with)
-    proxyurl: "{{ varnish_url }}"
-    tags:
-    - registry
-
-  - role: httpd/reverseproxy
-    website: registry.centos.org
-    destname: registry-centos
-    # proxyurl in this one is totally ignored, because Docker.
-    # (turns out it uses PATCH requests that Varnish cannot deal with)
-    proxyurl: "{{ varnish_url }}"
-    tags:
-    - registry
-
-  - role: httpd/reverseproxy
-    website: candidate-registry.fedoraproject.org
-    destname: candidate-registry
-    proxyurl: http://localhost:10054
-
-  - role: httpd/reverseproxy
-    website: retrace.fedoraproject.org
-    destname: retrace
-    proxyurl: http://localhost:10049
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: faf.fedoraproject.org
-    destname: faf
-    proxyurl: http://localhost:10050
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: apps.fedoraproject.org
-    destname: pps
-    remotepath: /pps
-    localpath: /pps
-    proxyurl: http://localhost:10051
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: fas3
-    remotepath: /fas3
-    localpath: /fas3
-    proxyurl: http://localhost:10052
-    when: env == "staging"
-
-  - role: httpd/reverseproxy
-    website: mbs.fedoraproject.org
-    destname: mbs
-    proxyurl: http://localhost:10063
-
-  - role: httpd/reverseproxy
-    website: koji.fedoraproject.org
-    destname: koji
-    proxyurl: http://localhost:10056
-    keephost: true
-
-  - role: httpd/reverseproxy
-    website: s390.koji.fedoraproject.org
-    destname: s390koji
-    proxyurl: http://localhost:10059
-    keephost: true
-
-  - role: httpd/reverseproxy
-    website: kojipkgs.fedoraproject.org
-    destname: kojipkgs
-    proxyurl: http://localhost:10062
-    keephost: true
-
-  - role: httpd/reverseproxy
-    website: "os{{ env_suffix }}.fedoraproject.org"
-    destname: os
-    balancer_name: os
-    targettype: openshift
-    balancer_members: "{{ openshift_masters }}"
-    keephost: true
-    tags:
-    - os.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: "app.os{{ env_suffix }}.fedoraproject.org"
-    destname: app.os
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags:
-    - app.os.fedoraproject.org
-
-  - role: httpd/reverseproxy
-    website: odcs.fedoraproject.org
-    destname: odcs
-    proxyurl: http://localhost:10066
-    tags:
-    - odcs
-
-  - role: httpd/reverseproxy
-    website: freshmaker.fedoraproject.org
-    destname: freshmaker
-    proxyurl: http://localhost:10067
-    tags:
-    - freshmaker
-
-  - role: httpd/reverseproxy
-    website: greenwave.fedoraproject.org
-    destname: greenwave
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: greenwave
-
-  - role: httpd/reverseproxy
-    website: waiverdb.fedoraproject.org
-    destname: waiverdb
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: waiverdb
-
-  - role: httpd/reverseproxy
-    website: elections.fedoraproject.org
-    destname: elections
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: elections
-
-  - role: httpd/reverseproxy
-    website: calendar.fedoraproject.org
-    destname: calendar
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: calendar
-
-  - role: httpd/reverseproxy
-    website: mdapi.fedoraproject.org
-    destname: mdapi
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: mdapi
-
-  - role: httpd/reverseproxy
-    website: wallpapers.fedoraproject.org
-    destname: wallpapers
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: wallpapers
-
-  - role: httpd/reverseproxy
-    website: silverblue.fedoraproject.org
-    destname: silverblue
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: silverblue
-
-  - role: httpd/reverseproxy
-    website: release-monitoring.org
-    destname: release-monitoring
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: release-montoring.org
-
-  - role: httpd/reverseproxy
-    website: whatcanidoforfedora.org
-    destname: whatcanidoforfedora
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: whatcanidoforfedora.org
-
-  - role: httpd/reverseproxy
-    website: fpdc.fedoraproject.org
-    destname: fpdc
-    balancer_name: app-os
-    targettype: openshift
-    keephost: true
-    tags: fpdc
-
-  - role: httpd/reverseproxy
-    website: data-analysis.fedoraproject.org
-    destname: awstats
-    remotepath: /
-    localpath: /
-    proxyurl: http://data-analysis01.phx2.fedoraproject.org
+    - role: httpd/reverseproxy
+      website: copr.fedoraproject.org
+      destname: coprapi
+      when: env != "staging"
+      tags: copr
+
+    - role: httpd/reverseproxy
+      website: copr.fedoraproject.org
+      destname: copr
+      proxyurl: http://localhost:10070
+      keephost: true
+      when: env == "staging"
+      tags: copr
+
+    - role: httpd/reverseproxy
+      website: nagios.fedoraproject.org
+      destname: nagios
+      remotepath: /
+      proxyurl: http://noc01.phx2.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: lists.fedoraproject.org
+      destname: mailman3
+      localpath: /
+      remotepath: /
+      header_scheme: true
+      keephost: true
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: lists.fedorahosted.org
+      destname: mailman3
+      localpath: /
+      remotepath: /
+      header_scheme: true
+      keephost: true
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: lists.pagure.io
+      destname: mailman3
+      localpath: /
+      remotepath: /
+      header_scheme: true
+      keephost: true
+      proxyurl: "{{ varnish_url }}"
+
+    # The place for the raw originals
+    - role: httpd/reverseproxy
+      website: meetbot-raw.fedoraproject.org
+      destname: meetbot
+      remotepath: /meetbot/
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://value01
+
+    # The place for the fancy mote view
+    - role: httpd/reverseproxy
+      website: meetbot.fedoraproject.org
+      destname: mote
+      #remotepath: /mote/
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://value01
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: nuancier
+      localpath: /nuancier
+      remotepath: /nuancier
+      header_scheme: true
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: github2fedmsg
+      localpath: /github2fedmsg
+      remotepath: /github2fedmsg
+      header_scheme: true
+      proxyurl: http://localhost:10037
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: fedora-notifications
+      localpath: /notifications
+      remotepath: /notifications
+      header_scheme: true
+      proxyurl: http://localhost:10036
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: packages
+      localpath: /packages
+      remotepath: /packages
+      proxyurl: http://localhost:10016
+
+    - role: httpd/reverseproxy
+      website: ask.fedoraproject.org
+      destname: askbot
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: paste.fedoraproject.org
+      destname: modernpaste
+      keephost: true
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: awx.fedoraproject.org
+      destname: awx
+      remotepath: /
+      localpath: /
+      proxyurl: http://localhost:10069
+      when: env == "production"
+      tags:
+        - awx.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: totpcgiprovision
+      localpath: /totpcgiprovision
+      proxyurl: http://localhost:10019
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: fas
+      remotepath: /accounts
+      localpath: /accounts
+      proxyurl: http://localhost:10004
+
+    # Fedoauth is odd here -- it has an entry for both stg and prod.
+    - role: httpd/reverseproxy
+      website: id.stg.fedoraproject.org
+      destname: id
+      proxyurl: http://localhost:10020
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: username.id.stg.fedoraproject.org
+      destname: usernameid
+      proxyurl: http://localhost:10020
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: id.stg.fedoraproject.org
+      destname: 00-kdcproxy
+      remotepath: /KdcProxy
+      localpath: /KdcProxy
+      proxyurl: http://localhost:10053
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: id.stg.fedoraproject.org
+      destname: 00-ipa
+      remotepath: /ipa
+      localpath: /ipa
+      proxyurl: http://localhost:10061
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: id.fedoraproject.org
+      destname: id
+      proxyurl: http://localhost:10020
+      tags:
+        - id.fedoraproject.org
+      when: env != "staging"
+
+    - role: httpd/reverseproxy
+      website: username.id.fedoraproject.org
+      destname: usernameid
+      proxyurl: http://localhost:10020
+      tags:
+        - id.fedoraproject.org
+      when: env != "staging"
+
+    - role: httpd/reverseproxy
+      website: id.fedoraproject.org
+      destname: 00-kdcproxy
+      remotepath: /KdcProxy
+      localpath: /KdcProxy
+      proxyurl: http://localhost:10053
+      when: env != "staging"
+
+    - role: httpd/reverseproxy
+      website: id.fedoraproject.org
+      destname: 00-ipa
+      remotepath: /ipa
+      localpath: /ipa
+      proxyurl: http://localhost:10061
+      when: env != "staging"
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: datagrepper
+      remotepath: /datagrepper
+      localpath: /datagrepper
+      rewrite: true
+      proxyurl: http://localhost:10028
+
+    - role: httpd/reverseproxy
+      website: badges.fedoraproject.org
+      destname: badges
+      proxyurl: http://localhost:10032
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: fedocal
+      remotepath: /calendar
+      localpath: /calendar
+      header_scheme: true
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: kerneltest
+      remotepath: /kerneltest
+      localpath: /kerneltest
+      header_scheme: true
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: qa.fedoraproject.org
+      destname: blockerbugs
+      remotepath: /blockerbugs
+      localpath: /blockerbugs
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: fedoraproject.org
+      destname: fp-wiki
+      wpath: /w
+      wikipath: /wiki
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: bodhi.fedoraproject.org
+      destname: bodhi
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: bodhi
+
+    - role: httpd/reverseproxy
+      website: caiapi.fedoraproject.org
+      destname: caiapi
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: caiapi
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: transtats.fedoraproject.org
+      destname: transtats
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: transtats
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: mirrormanager
+      remotepath: /mirrormanager
+      localpath: /mirrormanager
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: mirrors.fedoraproject.org
+      destname: mirrormanager-mirrorlist
+      proxyurl: http://localhost:10002
+
+    - role: httpd/reverseproxy
+      website: download.fedoraproject.org
+      destname: mirrormanager-redirector
+      proxyurl: http://localhost:10002
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: koschei
+      localpath: /koschei
+      remotepath: /koschei
+      proxyurl: "{{ varnish_url }}"
+
+    - role: httpd/reverseproxy
+      website: koschei.fedoraproject.org
+      destname: koschei
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: koschei
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: mdapi
+      remotepath: /mdapi
+      localpath: /mdapi
+      proxyurl: http://localhost:10043
+
+    - role: httpd/reverseproxy
+      website: openqa.fedoraproject.org
+      destname: openqa
+      balancer_name: openqa
+      balancer_members: ["openqa01:80"]
+      http_not_https_yes_this_is_insecure_and_i_feel_bad: true
+      when: env == "production"
+      tags: openqa
+
+    - role: httpd/reverseproxy
+      website: openqa.fedoraproject.org
+      destname: openqa
+      balancer_name: openqa-stg
+      balancer_members: ["openqa-stg01.qa.fedoraproject.org:80"]
+      http_not_https_yes_this_is_insecure_and_i_feel_bad: true
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: autocloud
+      localpath: /autocloud
+      remotepath: /autocloud
+      proxyurl: http://localhost:10041
+
+    - role: httpd/reverseproxy
+      website: pdc.fedoraproject.org
+      destname: pdc
+      proxyurl: http://localhost:10045
+      header_scheme: true
+      tags: pdc
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: zanata2fedmsg
+      localpath: /zanata2fedmsg
+      remotepath: /zanata2fedmsg
+      proxyurl: http://localhost:10046
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: yk-val
+      remotepath: /yk-val/verify
+      localpath: /yk-val/verify
+      proxyurl: http://localhost:10004
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: pager
+      remotepath: /pager
+      localpath: /pager
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://sundries01
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: awstats
+      remotepath: /awstats
+      localpath: /awstats
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://log01
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: epylog
+      remotepath: /epylog
+      localpath: /epylog
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://log01
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: maps
+      remotepath: /maps
+      localpath: /maps
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://log01
+
+    - role: httpd/reverseproxy
+      website: fedoraproject.org
+      destname: freemedia
+      remotepath: /freemedia
+      localpath: /freemedia
+      proxyurl: http://localhost:10011
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: collectd
+      localpath: /collectd
+      remotepath: /collectd
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://log01
+
+    ### Four entries for taskotron for production
+    - role: httpd/reverseproxy
+      website: taskotron.fedoraproject.org
+      destname: taskotron
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://taskotron01.vpn.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: taskotron.fedoraproject.org
+      destname: taskotron-resultsdb
+      localpath: /resultsdb
+      remotepath: /resultsdb
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb01.vpn.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: taskotron.fedoraproject.org
+      destname: taskotron-resultsdbapi
+      localpath: /resultsdb_api
+      remotepath: /resultsdb_api
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb01.vpn.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: taskotron.fedoraproject.org
+      destname: taskotron-execdb
+      localpath: /execdb
+      remotepath: /execdb
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb01.vpn.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: taskotron.fedoraproject.org
+      destname: taskotron-vault
+      localpath: /vault
+      remotepath: /vault
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb01.vpn.fedoraproject.org
+
+    ### And four entries for taskotron for staging
+    - role: httpd/reverseproxy
+      website: taskotron.stg.fedoraproject.org
+      destname: taskotron
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://taskotron-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: taskotron.stg.fedoraproject.org
+      destname: taskotron-resultsdb
+      localpath: /resultsdb
+      remotepath: /resultsdb
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: taskotron.stg.fedoraproject.org
+      destname: taskotron-resultsdbapi
+      localpath: /resultsdb_api
+      remotepath: /resultsdb_api
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: taskotron.stg.fedoraproject.org
+      destname: taskotron-execdb
+      localpath: /execdb
+      remotepath: /execdb
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    ### Beaker production
+    - role: httpd/reverseproxy
+      website: beaker.qa.fedoraproject.org
+      destname: beaker
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://beaker01.vpn.fedoraproject.org
+      when: env == "production"
+
+    ### Beaker staging
+    - role: httpd/reverseproxy
+      website: beaker.stg.fedoraproject.org
+      destname: beaker-stg
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://beaker-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    ### QA staging
+
+    - role: httpd/reverseproxy
+      website: qa.stg.fedoraproject.org
+      destname: qa-stg
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://qa-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: qa.stg.fedoraproject.org
+      destname: blockerbugs
+      remotepath: /blockerbugs
+      localpath: /blockerbugs
+      proxyurl: "{{ varnish_url }}"
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: phab.qa.stg.fedoraproject.org
+      destname: qa-stg-phab
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://phab.qa-stg01.qa.fedoraproject.org
+      keephost: true
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: docs.qa.stg.fedoraproject.org
+      destname: qa-stg-docs
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://docs.qa-stg01.qa.fedoraproject.org
+      when: env == "staging"
+
+    ### QA production
+
+    - role: httpd/reverseproxy
+      website: qa.fedoraproject.org
+      destname: qa-prod
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://qa-prod01.vpn.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: phab.qa.fedoraproject.org
+      destname: qa-prod-phab
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://phab.qa-prod01.vpn.fedoraproject.org
+      keephost: true
+
+    - role: httpd/reverseproxy
+      website: docs.qa.fedoraproject.org
+      destname: qa-prod-docs
+      # Talk directly to the app server, not haproxy
+      proxyurl: http://docs.qa-prod01.vpn.fedoraproject.org
+
+    # This one gets its own role (instead of httpd/reverseproxy) so that it can
+    # copy in some silly static resources (globe.png, index.html)
+    - role: geoip-city-wsgi/proxy
+      website: geoip.fedoraproject.org
+      proxyurl: http://localhost:10029
+
+    - role: httpd/reverseproxy
+      website: src.fedoraproject.org
+      destname: git
+      proxyurl: http://localhost:10057
+      header_scheme: true
+      keephost: true
+
+    - role: httpd/reverseproxy
+      website: osbs.fedoraproject.org
+      destname: osbs
+      proxyurl: http://localhost:10047
+
+    - role: httpd/reverseproxy
+      website: registry.fedoraproject.org
+      destname: registry-fedora
+      # proxyurl in this one is totally ignored, because Docker.
+      # (turns out it uses PATCH requests that Varnish cannot deal with)
+      proxyurl: "{{ varnish_url }}"
+      tags:
+        - registry
+
+    - role: httpd/reverseproxy
+      website: registry.centos.org
+      destname: registry-centos
+      # proxyurl in this one is totally ignored, because Docker.
+      # (turns out it uses PATCH requests that Varnish cannot deal with)
+      proxyurl: "{{ varnish_url }}"
+      tags:
+        - registry
+
+    - role: httpd/reverseproxy
+      website: candidate-registry.fedoraproject.org
+      destname: candidate-registry
+      proxyurl: http://localhost:10054
+
+    - role: httpd/reverseproxy
+      website: retrace.fedoraproject.org
+      destname: retrace
+      proxyurl: http://localhost:10049
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: faf.fedoraproject.org
+      destname: faf
+      proxyurl: http://localhost:10050
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: apps.fedoraproject.org
+      destname: pps
+      remotepath: /pps
+      localpath: /pps
+      proxyurl: http://localhost:10051
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: admin.fedoraproject.org
+      destname: fas3
+      remotepath: /fas3
+      localpath: /fas3
+      proxyurl: http://localhost:10052
+      when: env == "staging"
+
+    - role: httpd/reverseproxy
+      website: mbs.fedoraproject.org
+      destname: mbs
+      proxyurl: http://localhost:10063
+
+    - role: httpd/reverseproxy
+      website: koji.fedoraproject.org
+      destname: koji
+      proxyurl: http://localhost:10056
+      keephost: true
+
+    - role: httpd/reverseproxy
+      website: s390.koji.fedoraproject.org
+      destname: s390koji
+      proxyurl: http://localhost:10059
+      keephost: true
+
+    - role: httpd/reverseproxy
+      website: kojipkgs.fedoraproject.org
+      destname: kojipkgs
+      proxyurl: http://localhost:10062
+      keephost: true
+
+    - role: httpd/reverseproxy
+      website: "os{{ env_suffix }}.fedoraproject.org"
+      destname: os
+      balancer_name: os
+      targettype: openshift
+      balancer_members: "{{ openshift_masters }}"
+      keephost: true
+      tags:
+        - os.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: "app.os{{ env_suffix }}.fedoraproject.org"
+      destname: app.os
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags:
+        - app.os.fedoraproject.org
+
+    - role: httpd/reverseproxy
+      website: odcs.fedoraproject.org
+      destname: odcs
+      proxyurl: http://localhost:10066
+      tags:
+        - odcs
+
+    - role: httpd/reverseproxy
+      website: freshmaker.fedoraproject.org
+      destname: freshmaker
+      proxyurl: http://localhost:10067
+      tags:
+        - freshmaker
+
+    - role: httpd/reverseproxy
+      website: greenwave.fedoraproject.org
+      destname: greenwave
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: greenwave
+
+    - role: httpd/reverseproxy
+      website: waiverdb.fedoraproject.org
+      destname: waiverdb
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: waiverdb
+
+    - role: httpd/reverseproxy
+      website: elections.fedoraproject.org
+      destname: elections
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: elections
+
+    - role: httpd/reverseproxy
+      website: calendar.fedoraproject.org
+      destname: calendar
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: calendar
+
+    - role: httpd/reverseproxy
+      website: mdapi.fedoraproject.org
+      destname: mdapi
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: mdapi
+
+    - role: httpd/reverseproxy
+      website: wallpapers.fedoraproject.org
+      destname: wallpapers
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: wallpapers
+
+    - role: httpd/reverseproxy
+      website: silverblue.fedoraproject.org
+      destname: silverblue
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: silverblue
+
+    - role: httpd/reverseproxy
+      website: release-monitoring.org
+      destname: release-monitoring
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: release-montoring.org
+
+    - role: httpd/reverseproxy
+      website: whatcanidoforfedora.org
+      destname: whatcanidoforfedora
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: whatcanidoforfedora.org
+
+    - role: httpd/reverseproxy
+      website: fpdc.fedoraproject.org
+      destname: fpdc
+      balancer_name: app-os
+      targettype: openshift
+      keephost: true
+      tags: fpdc
+
+    - role: httpd/reverseproxy
+      website: data-analysis.fedoraproject.org
+      destname: awstats
+      remotepath: /
+      localpath: /
+      proxyurl: http://data-analysis01.phx2.fedoraproject.org
diff --git a/playbooks/include/proxies-rewrites.yml b/playbooks/include/proxies-rewrites.yml
index 88981555e..6cea79eca 100644
--- a/playbooks/include/proxies-rewrites.yml
+++ b/playbooks/include/proxies-rewrites.yml
@@ -4,70 +4,68 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   roles:
+    - role: httpd/domainrewrite
+      destname: admin
+      website: admin.fedoraproject.org
+      target: https://apps.fedoraproject.org/
 
-  - role: httpd/domainrewrite
-    destname: admin
-    website: admin.fedoraproject.org
-    target: https://apps.fedoraproject.org/
+    - role: httpd/domainrewrite
+      destname: apache-status
+      website: admin.fedoraproject.org
+      path: /status
 
-  - role: httpd/domainrewrite
-    destname: apache-status
-    website: admin.fedoraproject.org
-    path: /status
+    - role: httpd/domainrewrite
+      destname: 00-admin
+      website: admin.fedoraproject.org
+      path: ^/favicon.ico$
+      status: 301
+      target: https://fedoraproject.org/static/images/favicon.ico
 
-  - role: httpd/domainrewrite
-    destname: 00-admin
-    website: admin.fedoraproject.org
-    path: ^/favicon.ico$
-    status: 301
-    target: https://fedoraproject.org/static/images/favicon.ico
+    - role: httpd/domainrewrite
+      destname: 00-docs
+      website: docs.fedoraproject.org
+      path: ^/favicon.ico$
+      status: 301
+      target: https://fedoraproject.org/static/images/favicon.ico
 
-  - role: httpd/domainrewrite
-    destname: 00-docs
-    website: docs.fedoraproject.org
-    path: ^/favicon.ico$
-    status: 301
-    target: https://fedoraproject.org/static/images/favicon.ico
+    - role: httpd/domainrewrite
+      destname: 00-start
+      website: start.fedoraproject.org
+      path: ^/favicon.ico$
+      status: 301
+      target: https://fedoraproject.org/static/images/favicon.ico
 
-  - role: httpd/domainrewrite
-    destname: 00-start
-    website: start.fedoraproject.org
-    path: ^/favicon.ico$
-    status: 301
-    target: https://fedoraproject.org/static/images/favicon.ico
+    - role: httpd/domainrewrite
+      destname: translate
+      website: translate.fedoraproject.org
+      # TODO - At some point, this will switch to fedora.zanata.org
+      target: https://fedora.transifex.net/
 
-  - role: httpd/domainrewrite
-    destname: translate
-    website: translate.fedoraproject.org
-    # TODO - At some point, this will switch to fedora.zanata.org
-    target: https://fedora.transifex.net/
+    - role: httpd/domainrewrite
+      destname: 00-translate-icon
+      website: translate.fedoraproject.org
+      path: ^/favicon.ico$
+      status: 301
+      target: https://fedoraproject.org/static/images/favicon.ico
 
-  - role: httpd/domainrewrite
-    destname: 00-translate-icon
-    website: translate.fedoraproject.org
-    path: ^/favicon.ico$
-    status: 301
-    target: https://fedoraproject.org/static/images/favicon.ico
-
-  - role: httpd/domainrewrite
-    destname: 00-registry-icon
-    website: registry.fedoraproject.org
-    path: ^/favicon.ico$
-    status: 301
-    target: https://fedoraproject.org/static/images/favicon.ico
-
-  - role: httpd/domainrewrite
-    destname: 00-community-icon
-    website: communityblog.fedoraproject.org
-    path: ^/favicon.ico$
-    status: 301
-    target: https://fedoraproject.org/static/images/favicon.ico
+    - role: httpd/domainrewrite
+      destname: 00-registry-icon
+      website: registry.fedoraproject.org
+      path: ^/favicon.ico$
+      status: 301
+      target: https://fedoraproject.org/static/images/favicon.ico
 
+    - role: httpd/domainrewrite
+      destname: 00-community-icon
+      website: communityblog.fedoraproject.org
+      path: ^/favicon.ico$
+      status: 301
+      target: https://fedoraproject.org/static/images/favicon.ico
diff --git a/playbooks/include/proxies-websites.yml b/playbooks/include/proxies-websites.yml
index df22e22ff..489fba08f 100644
--- a/playbooks/include/proxies-websites.yml
+++ b/playbooks/include/proxies-websites.yml
@@ -4,1004 +4,1002 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
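+    # note: wildcard_cert_name and the other cert variables referenced below
+    # are assumed to be defined in these vars files / the private vars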
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   pre_tasks:
-  - name: Install policycoreutils-python
-    package: name={{item}} state=present
-    with_items:
-    - policycoreutils-python-utils
-    - policycoreutils-python
-
-  - name: Create /srv/web/ for all the goodies.
-    file: >
+    - name: Install policycoreutils-python
+      package: name={{item}} state=present
+      with_items:
+        - policycoreutils-python-utils
+        - policycoreutils-python
+
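+    # the ">" folded scalar below is plain key=value module-arg style,
+    # just spread over multiple lines for readability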
+    - name: Create /srv/web/ for all the goodies.
+      file: >
         dest=/srv/web state=directory
         owner=root group=root mode=0755
-    tags:
-    - httpd
-    - httpd/website
-
-  - name: check the selinux context of webdir
-    command: matchpathcon /srv/web
-    register: webdir
-    check_mode: no
-    changed_when: "1 != 1"
-    tags:
-    - config
-    - selinux
-    - httpd
-    - httpd/website
-
-  - name: /srv/web file contexts
-    command: semanage fcontext -a -t httpd_sys_content_t "/srv/web(/.*)?"
-    when: webdir.stdout.find('httpd_sys_content_t') == -1
-    tags:
-    - config
-    - selinux
-    - httpd
-    - httpd/website
+      tags:
+        - httpd
+        - httpd/website
+
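+    # idempotent selinux setup: matchpathcon only inspects the current
+    # context (changed_when forces it to always report "ok"), and semanage
+    # runs only when the httpd_sys_content_t rule is actually missing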
+    - name: check the selinux context of webdir
+      command: matchpathcon /srv/web
+      register: webdir
+      check_mode: no
+      changed_when: "1 != 1"
+      tags:
+        - config
+        - selinux
+        - httpd
+        - httpd/website
+
+    - name: /srv/web file contexts
+      command: semanage fcontext -a -t httpd_sys_content_t "/srv/web(/.*)?"
+      when: webdir.stdout.find('httpd_sys_content_t') == -1
+      tags:
+        - config
+        - selinux
+        - httpd
+        - httpd/website
 
   roles:
-
-  - role: httpd/website
-    site_name: fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    server_aliases:
-    - stg.fedoraproject.org
-    - localhost
-    - www.fedoraproject.org
-    - hotspot-nocache.fedoraproject.org
-    - infinote.fedoraproject.org
-
-  # This is for all the other domains we own
-  # that redirect to https://fedoraproject.org
-  - role: httpd/website
-    site_name: fedoraproject.com
-    cert_name: "{{wildcard_cert_name}}"
-    server_aliases:
-    - epel.io
-    - fedp.org
-    - fedora.asia
-    - fedora.com.my
-    - fedora.cr
-    - fedora.events
-    - fedora.me
-    - fedora.mobi
-    - fedora.my
-    - fedora.org
-    - fedora.org.cn
-    - fedora.pe
-    - fedora.pt
-    - fedora.redhat.com
-    - fedora.software
-    - fedora.tk
-    - fedora.us
-    - fedora.wiki
-    - fedoralinux.com
-    - fedoralinux.net
-    - fedoralinux.org
-    - fedoraproject.asia
-    - fedoraproject.cn
-    - fedoraproject.co.uk
-    - fedoraproject.com
-    - fedoraproject.com.cn
-    - fedoraproject.com.gr
-    - fedoraproject.com.my
-    - fedoraproject.cz
-    - fedoraproject.eu
-    - fedoraproject.gr
-    - fedoraproject.info
-    - fedoraproject.net
-    - fedoraproject.net.cn
-    - fedoraproject.org.uk
-    - fedoraproject.pe
-    - fedoraproject.su
-    - projectofedora.org
-    - www.fedora.asia
-    - www.fedora.com.my
-    - www.fedora.cr
-    - www.fedora.events
-    - www.fedora.me
-    - www.fedora.mobi
-    - www.fedora.org
-    - www.fedora.org.cn
-    - www.fedora.pe
-    - www.fedora.pt
-    - www.fedora.redhat.com
-    - www.fedora.software
-    - www.fedora.tk
-    - www.fedora.us
-    - www.fedora.wiki
-    - www.fedoralinux.com
-    - www.fedoralinux.net
-    - www.fedoralinux.org
-    - www.fedoraproject.asia
-    - www.fedoraproject.cn
-    - www.fedoraproject.co.uk
-    - www.fedoraproject.com
-    - www.fedoraproject.com.cn
-    - www.fedoraproject.com.gr
-    - www.fedoraproject.com.my
-    - www.fedoraproject.cz
-    - www.fedoraproject.eu
-    - www.fedoraproject.gr
-    - www.fedoraproject.info
-    - www.fedoraproject.net
-    - www.fedoraproject.net.cn
-    - www.fedoraproject.org.uk
-    - www.fedoraproject.pe
-    - www.fedoraproject.su
-    - www.projectofedora.org
-    - www.getfedora.com
-    - getfedora.com
-    - fedoraplayground.org
-    - fedoraplayground.com
-
-  - role: httpd/website
-    site_name: admin.fedoraproject.org
-    server_aliases: [admin.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: cloud.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: mirrors.fedoraproject.org
-    server_aliases:
-    - mirrors.stg.fedoraproject.org
-    - fedoramirror.net
-    - www.fedoramirror.net
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: src.fedoraproject.org
-    server_aliases: [src.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    sslonly: true
-    use_h2: false
-
-  - role: httpd/website
-    site_name: download.fedoraproject.org
-    server_aliases:
-    - download01.fedoraproject.org
-    - download02.fedoraproject.org
-    - download03.fedoraproject.org
-    - download04.fedoraproject.org
-    - download05.fedoraproject.org
-    - download06.fedoraproject.org
-    - download07.fedoraproject.org
-    - download08.fedoraproject.org
-    - download09.fedoraproject.org
-    - download10.fedoraproject.org
-    - download-rdu01.fedoraproject.org
-    - download.stg.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: translate.fedoraproject.org
-    server_aliases: [translate.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: pki.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: spins.fedoraproject.org
-    server_aliases:
-    - spins.stg.fedoraproject.org
-    - spins-test.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: labs.fedoraproject.org
-    server_aliases:
-    - labs.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: arm.fedoraproject.org
-    server_aliases:
-    - arm.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: iot.fedoraproject.org
-    server_aliases:
-    - iot.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: budget.fedoraproject.org
-    server_aliases:
-    - budget.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: boot.fedoraproject.org
-    server_aliases: [boot.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: smolts.org
-    ssl: false
-    server_aliases:
-    - smolt.fedoraproject.org
-    - stg.smolts.org
-    - www.smolts.org
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: docs.fedoraproject.org
-    server_aliases:
-    - doc.fedoraproject.org
-    - docs.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: docs-old.fedoraproject.org
-    server_aliases:
-    - docs-old.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: bodhi.fedoraproject.org
-    sslonly: true
-    server_aliases: [bodhi.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: caiapi.fedoraproject.org
-    sslonly: true
-    server_aliases: [caiapi.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: caiapi
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: ostree.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    tags: ostree
-
-  - role: httpd/website
-    site_name: hubs.fedoraproject.org
-    sslonly: true
-    server_aliases: [hubs.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: flocktofedora.org
-    server_aliases:
-    - flocktofedora.org
-    - www.flocktofedora.org
-    ssl: true
-    sslonly: true
-    cert_name: flocktofedora.org
-    SSLCertificateChainFile: flocktofedora.org.intermediate.cert
-
-  - role: httpd/website
-    site_name: flocktofedora.net
-    server_aliases:
-    - flocktofedora.com
-    - www.flocktofedora.net
-    - www.flocktofedora.com
-    ssl: false
-
-  - role: httpd/website
-    site_name: fedora.my
-    server_aliases:
-    - fedora.my
-    ssl: false
-
-  - role: httpd/website
-    site_name: copr.fedoraproject.org
-    sslonly: true
-    server_aliases: [copr.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: copr
-
-  - role: httpd/website
-    site_name: bugz.fedoraproject.org
-    server_aliases: [bugz.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: fas.fedoraproject.org
-    server_aliases:
-    - fas.stg.fedoraproject.org
-    - accounts.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: fedoracommunity.org
-    server_aliases:
-    - www.fedoracommunity.org
-    - stg.fedoracommunity.org
-    - fedoraproject.community
-    - fedora.community
-    - www.fedora.community
-    - www.fedoraproject.community
-    ssl: true
-    cert_name: fedoracommunity.org
-    SSLCertificateChainFile: fedoracommunity.org.intermediate.cert
-
-  - role: httpd/website
-    site_name: get.fedoraproject.org
-    server_aliases: [get.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: help.fedoraproject.org
-    server_aliases: [help.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: it.fedoracommunity.org
-    server_aliases: [it.fedoracommunity.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: uk.fedoracommunity.org
-    server_aliases:
-    - uk.fedoracommunity.org
-    - www.uk.fedoracommunity.org
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: tw.fedoracommunity.org
-    server_aliases:
-    - tw.fedoracommunity.org
-    - www.tw.fedoracommunity.org
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: communityblog.fedoraproject.org
-    server_aliases: [communityblog.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: people.fedoraproject.org
-    server_aliases: [people.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: join.fedoraproject.org
-    server_aliases: [join.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: l10n.fedoraproject.org
-    server_aliases: [l10n.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: start.fedoraproject.org
-    server_aliases: [start.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: kde.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: nightly.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: store.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: port389.org
-    server_aliases:
-    - www.port389.org
-    - 389tcp.org
-    - www.389tcp.org
-    ssl: false
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: transtats.fedoraproject.org
-    sslonly: true
-    server_aliases: [transtats.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: 
-    - transtats
-
-  - role: httpd/website
-    site_name: whatcanidoforfedora.org
-    server_aliases:
-    - www.whatcanidoforfedora.org
-    - stg.whatcanidoforfedora.org
-    ssl: true
-    sslonly: true
-    certbot: true
-    tags:
-    - whatcanidoforfedora.org
-
-  - role: httpd/website
-    site_name: fedoramagazine.org
-    server_aliases: [www.fedoramagazine.org stg.fedoramagazine.org]
-    cert_name: fedoramagazine.org
-    SSLCertificateChainFile: fedoramagazine.org.intermediate.cert
-    sslonly: true
-
-  - role: httpd/website
-    site_name: k12linux.org
-    server_aliases:
-    - www.k12linux.org
-    ssl: false
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: fonts.fedoraproject.org
-    server_aliases: [fonts.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: meetbot.fedoraproject.org
-    server_aliases: [meetbot.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: meetbot-raw.fedoraproject.org
-    server_aliases: [meetbot-raw.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: fudcon.fedoraproject.org
-    server_aliases: [fudcon.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: ask.fedoraproject.org
-    server_aliases: [ask.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: badges.fedoraproject.org
-    server_aliases: [badges.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: paste.fedoraproject.org
-    server_aliases:
-    - paste.stg.fedoraproject.org
-    - modernpaste.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: awx.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    when: env == "production"
-    tags:
-    - awx.fedoraproject.org
-
-
-#
-# Make a website here so we can redirect it to paste.fedoraproject.org
-#
-  - role: httpd/website
-    site_name: fpaste.org
-    certbot: true
-    server_aliases:
-    - www.fpaste.org
-    tags:
-    - fpaste.org
-
-  - role: httpd/website
-    site_name: koji.fedoraproject.org
-    sslonly: true
-    server_aliases:
-    - koji.stg.fedoraproject.org
-    - kojipkgs.stg.fedoraproject.org
-    - buildsys.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: s390.koji.fedoraproject.org
-    sslonly: true
-    certbot: true
-    server_aliases:
-    - s390pkgs.fedoraproject.org
-    tags:
-    - s390.koji.fedoraproject.org
-
-  - role: httpd/website
-    site_name: kojipkgs.fedoraproject.org
-    sslonly: true
-    server_aliases:
-    - kojipkgs01.fedoraproject.org
-    - kojipkgs02.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: apps.fedoraproject.org
-    server_aliases: [apps.stg.fedoraproject.org]
-    sslonly: true
-    gzip: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: pdc.fedoraproject.org
-    server_aliases: [pdc.stg.fedoraproject.org]
-    sslonly: true
-    gzip: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: developer.fedoraproject.org
-    server_aliases: [developer.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  # This is just a redirect to developer, to make it easier for people to get
-  # here from Red Hat's developers.redhat.com (ticket #5216).
-  - role: httpd/website
-    site_name: developers.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: osbs.fedoraproject.org
-    server_aliases: [osbs.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: os.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    # The Connection and Upgrade headers don't work for h2
-    # So non-h2 is needed to fix websockets.
-    use_h2: false
-    tags:
-    - os.fedoraproject.org
-
-  - role: httpd/website
-    site_name: app.os.fedoraproject.org
-    server_aliases: ["*.app.os.fedoraproject.org"]
-    sslonly: true
-    cert_name: "{{os_wildcard_cert_name}}"
-    SSLCertificateChainFile: "{{os_wildcard_int_file}}"
-    # The Connection and Upgrade headers don't work for h2
-    # So non-h2 is needed to fix websockets.
-    use_h2: false
-    tags:
-    - app.os.fedoraproject.org
-
-  - role: httpd/website
-    site_name: os.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    # The Connection and Upgrade headers don't work for h2
-    # So non-h2 is needed to fix websockets.
-    use_h2: false
-    tags:
-    - os.stg.fedoraproject.org
-
-  - role: httpd/website
-    site_name: app.os.stg.fedoraproject.org
-    server_aliases: ["*.app.os.stg.fedoraproject.org"]
-    sslonly: true
-    cert_name: "{{os_wildcard_cert_name}}"
-    SSLCertificateChainFile: "{{os_wildcard_int_file}}"
-    # The Connection and Upgrade headers don't work for h2
-    # So non-h2 is needed to fix websockets.
-    use_h2: false
-    tags:
-    - app.os.stg.fedoraproject.org
-
-  - role: httpd/website
-    site_name: registry.fedoraproject.org
-    server_aliases: [registry.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: registry.centos.org
-    server_aliases: [registry.stg.centos.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: candidate-registry.fedoraproject.org
-    server_aliases: [candidate-registry.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: retrace.fedoraproject.org
-    server_aliases: [retrace.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: faf.fedoraproject.org
-    server_aliases: [faf.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: alt.fedoraproject.org
-    server_aliases:
-    - alt.stg.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  # Kinda silly that we have two entries here, one for prod and one for stg.
-  # This is inherited from our puppet setup -- we can collapse them as soon as
-  # is convenient.  -- threebean
-  - role: httpd/website
-    site_name: taskotron.fedoraproject.org
-    server_aliases: [taskotron.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: taskotron.stg.fedoraproject.org
-    server_aliases: [taskotron.stg.fedoraproject.org]
-    # Set this explicitly to stg here.. as per the original puppet config.
-    SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: lists.fedoraproject.org
-    server_aliases: [lists.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: lists.fedorahosted.org
-    server_aliases: [lists.stg.fedorahosted.org]
-    sslonly: true
-    SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
-    cert_name: wildcard-2017.fedorahosted.org
-
-  - role: httpd/website
-    site_name: id.fedoraproject.org
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    SSLCertificateChainFile: wildcard-2017.fedoraproject.org.intermediate.cert
-    stssubdomains: false
-    tags:
-    - id.fedoraproject.org
-
-  - role: httpd/website
-    site_name: username.id.fedoraproject.org
-    server_aliases:
-    - "*.id.fedoraproject.org"
-    # Must not be sslonly, because example.id.fedoraproject.org must be reachable
-    # via plain http for openid identity support
-    sslonly: false
-    cert_name: wildcard-2017.id.fedoraproject.org
-    SSLCertificateChainFile: wildcard-2017.id.fedoraproject.org.intermediate.cert
-    tags:
-    - id.fedoraproject.org
-
-  - role: httpd/website
-    site_name: id.stg.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-    SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
-    sslonly: true
-    tags:
-    - id.fedoraproject.org
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: username.id.stg.fedoraproject.org
-    server_aliases:
-    - "*.id.stg.fedoraproject.org"
-    # Must not be sslonly, because example.id.fedoraproject.org must be reachable
-    # via plain http for openid identity support
-    cert_name: "{{wildcard_cert_name}}"
-    SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
-    tags:
-    - id.fedoraproject.org
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: getfedora.org
-    server_aliases: [stg.getfedora.org]
-    sslonly: true
-    cert_name: getfedora.org
-    SSLCertificateChainFile: getfedora.org.intermediate.cert
-
-  - role: httpd/website
-    site_name: qa.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-    sslonly: true
-
-  - role: httpd/website
-    site_name: openqa.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-    server_aliases: [openqa.stg.fedoraproject.org]
-    sslonly: true
-
-  - role: httpd/website
-    site_name: redirect.fedoraproject.org
-    server_aliases: [redirect.stg.fedoraproject.org]
-    sslonly: true
-    gzip: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: geoip.fedoraproject.org
-    server_aliases: [geoip.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: codecs.fedoraproject.org
-    server_aliases: [codecs.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: jenkins.fedorainfracloud.org
-    cert_name: jenkins.fedorainfracloud.org
-    certbot: true
-
-  - role: httpd/website
-    site_name: beaker.qa.fedoraproject.org
-    server_aliases: [beaker.qa.fedoraproject.org]
-    # Set this explicitly to stg here.. as per the original puppet config.
-    SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
-    sslonly: true
-    cert_name: "qa.fedoraproject.org"
-
-  - role: httpd/website
-    site_name: beaker.stg.fedoraproject.org
-    server_aliases: [beaker.stg.fedoraproject.org]
-    # Set this explicitly to stg here.. as per the original puppet config.
-    SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: qa.stg.fedoraproject.org
-    server_aliases: [qa.stg.fedoraproject.org]
-    cert_name: qa.stg.fedoraproject.org
-    SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
-    sslonly: true
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: phab.qa.stg.fedoraproject.org
-    server_aliases: [phab.qa.stg.fedoraproject.org]
-    cert_name: qa.stg.fedoraproject.org
-    SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
-    sslonly: true
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: docs.qa.stg.fedoraproject.org
-    server_aliases: [docs.qa.stg.fedoraproject.org]
-    cert_name: qa.stg.fedoraproject.org
-    SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
-    sslonly: true
-    when: env == "staging"
-
-  - role: httpd/website
-    site_name: phab.qa.fedoraproject.org
-    server_aliases: [phab.qa.fedoraproject.org]
-    cert_name: qa.fedoraproject.org
-    SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
-    sslonly: true
-
-  - role: httpd/website
-    site_name: data-analysis.fedoraproject.org
-    server_aliases: [data-analysis.stg.fedoraproject.org]
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: docs.qa.fedoraproject.org
-    server_aliases: [docs.qa.fedoraproject.org]
-    cert_name: qa.fedoraproject.org
-    SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
-    sslonly: true
-
-  - role: httpd/website
-    site_name: nagios.fedoraproject.org
-    server_aliases: [nagios.stg.fedoraproject.org]
-    SSLCertificateChainFile: wildcard-2017.fedoraproject.org.intermediate.cert
-    sslonly: true
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: mbs.fedoraproject.org
-    sslonly: true
-    server_aliases: [mbs.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: odcs.fedoraproject.org
-    sslonly: true
-    server_aliases: [odcs.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: odcs
-
-  - role: httpd/website
-    site_name: freshmaker.fedoraproject.org
-    sslonly: true
-    server_aliases: [freshmaker.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: freshmaker
-
-  - role: httpd/website
-    site_name: greenwave.fedoraproject.org
-    sslonly: true
-    server_aliases: [greenwave.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: koschei.fedoraproject.org
-    sslonly: true
-    server_aliases: [koschei.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: waiverdb.fedoraproject.org
-    sslonly: true
-    server_aliases: [waiverdb.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: silverblue.fedoraproject.org
-    sslonly: true
-    server_aliases: [silverblue.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-
-  - role: httpd/website
-    site_name: release-monitoring.org
-    sslonly: true
-    certbot: true
-    server_aliases: [stg.release-monitoring.org]
-    tags:
-    - release-monitoring.org
-
-  - role: httpd/website
-    site_name: lists.pagure.io
-    sslonly: true
-    certbot: true
-    tags:
-    - lists.pagure.io
-
-  - role: httpd/website
-    site_name: fpdc.fedoraproject.org
-    sslonly: true
-    server_aliases: [fpdc.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: fpdc
-
-  - role: httpd/website
-    site_name: neuro.fedoraproject.org
-    sslonly: true
-    server_aliases: [neuro.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: neuro
-
-  - role: httpd/website
-    site_name: elections.fedoraproject.org
-    sslonly: true
-    server_aliases: [elections.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: elections
-
-  - role: httpd/website
-    site_name: wallpapers.fedoraproject.org
-    sslonly: true
-    server_aliases: [wallpapers.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: wallpapers
-
-  - role: httpd/website
-    site_name: mdapi.fedoraproject.org
-    sslonly: true
-    server_aliases: [mdapi.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: mdapi
-
-  - role: httpd/website
-    site_name: calendar.fedoraproject.org
-    sslonly: true
-    server_aliases: [calendar.stg.fedoraproject.org]
-    cert_name: "{{wildcard_cert_name}}"
-    tags: calendar
-
-# fedorahosted is retired. We have the site here so we can redirect it.
-
-  - role: httpd/website
-    site_name: fedorahosted.org
-    sslonly: true
-    server_aliases: [bzr.fedorahosted.org hg.fedorahosted.org svn.fedorahosted.org]
-    SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
-    cert_name: wildcard-2017.fedorahosted.org
-
-  - role: httpd/website
-    site_name: git.fedorahosted.org
-    sslonly: true
-    SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
-    cert_name: wildcard-2017.fedorahosted.org
-
-# planet.fedoraproject.org is not to be used, it's fedoraplanet.org
-# We only have it here so we can redirect it with the correct cert
-
-  - role: httpd/website
-    site_name: planet.fedoraproject.org
-    cert_name: "{{wildcard_cert_name}}"
-
-# pkgs.fp.o will be an alias of src.fp.o once we get everyone over to https
-# git push/pull. For now, we just want a cert via the certbot system.
-  - role: httpd/website
-    site_name: pkgs.fedoraproject.org
-    ssl: true
-    sslonly: true
-    certbot: true
-    certbot_addhost: pkgs02.phx2.fedoraproject.org
-    tags:
-    - pkgs.fedoraproject.org
-    when: env == "production" and "phx2" in inventory_hostname
-
-  - role: httpd/website
-    site_name: pkgs.stg.fedoraproject.org
-    ssl: true
-    sslonly: true
-    certbot: true
-    certbot_addhost: pkgs01.stg.phx2.fedoraproject.org
-    tags:
-    - pkgs.fedoraproject.org
-    when: env == "staging" and "phx2" in inventory_hostname
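+    # note on the knobs below (an assumption, not verified against the
+    # httpd/website role defaults): sslonly serves the site over https only,
+    # ssl: false skips the TLS vhost entirely, and cert_name /
+    # SSLCertificateChainFile select the certificate files to use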
+    - role: httpd/website
+      site_name: fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      server_aliases:
+        - stg.fedoraproject.org
+        - localhost
+        - www.fedoraproject.org
+        - hotspot-nocache.fedoraproject.org
+        - infinote.fedoraproject.org
+
+    # This is for all the other domains we own
+    # that redirect to https://fedoraproject.org
+    - role: httpd/website
+      site_name: fedoraproject.com
+      cert_name: "{{wildcard_cert_name}}"
+      server_aliases:
+        - epel.io
+        - fedp.org
+        - fedora.asia
+        - fedora.com.my
+        - fedora.cr
+        - fedora.events
+        - fedora.me
+        - fedora.mobi
+        - fedora.my
+        - fedora.org
+        - fedora.org.cn
+        - fedora.pe
+        - fedora.pt
+        - fedora.redhat.com
+        - fedora.software
+        - fedora.tk
+        - fedora.us
+        - fedora.wiki
+        - fedoralinux.com
+        - fedoralinux.net
+        - fedoralinux.org
+        - fedoraproject.asia
+        - fedoraproject.cn
+        - fedoraproject.co.uk
+        - fedoraproject.com
+        - fedoraproject.com.cn
+        - fedoraproject.com.gr
+        - fedoraproject.com.my
+        - fedoraproject.cz
+        - fedoraproject.eu
+        - fedoraproject.gr
+        - fedoraproject.info
+        - fedoraproject.net
+        - fedoraproject.net.cn
+        - fedoraproject.org.uk
+        - fedoraproject.pe
+        - fedoraproject.su
+        - projectofedora.org
+        - www.fedora.asia
+        - www.fedora.com.my
+        - www.fedora.cr
+        - www.fedora.events
+        - www.fedora.me
+        - www.fedora.mobi
+        - www.fedora.org
+        - www.fedora.org.cn
+        - www.fedora.pe
+        - www.fedora.pt
+        - www.fedora.redhat.com
+        - www.fedora.software
+        - www.fedora.tk
+        - www.fedora.us
+        - www.fedora.wiki
+        - www.fedoralinux.com
+        - www.fedoralinux.net
+        - www.fedoralinux.org
+        - www.fedoraproject.asia
+        - www.fedoraproject.cn
+        - www.fedoraproject.co.uk
+        - www.fedoraproject.com
+        - www.fedoraproject.com.cn
+        - www.fedoraproject.com.gr
+        - www.fedoraproject.com.my
+        - www.fedoraproject.cz
+        - www.fedoraproject.eu
+        - www.fedoraproject.gr
+        - www.fedoraproject.info
+        - www.fedoraproject.net
+        - www.fedoraproject.net.cn
+        - www.fedoraproject.org.uk
+        - www.fedoraproject.pe
+        - www.fedoraproject.su
+        - www.projectofedora.org
+        - www.getfedora.com
+        - getfedora.com
+        - fedoraplayground.org
+        - fedoraplayground.com
+
+    - role: httpd/website
+      site_name: admin.fedoraproject.org
+      server_aliases: [admin.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: cloud.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: mirrors.fedoraproject.org
+      server_aliases:
+        - mirrors.stg.fedoraproject.org
+        - fedoramirror.net
+        - www.fedoramirror.net
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: src.fedoraproject.org
+      server_aliases: [src.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      sslonly: true
+      use_h2: false
+
+    - role: httpd/website
+      site_name: download.fedoraproject.org
+      server_aliases:
+        - download01.fedoraproject.org
+        - download02.fedoraproject.org
+        - download03.fedoraproject.org
+        - download04.fedoraproject.org
+        - download05.fedoraproject.org
+        - download06.fedoraproject.org
+        - download07.fedoraproject.org
+        - download08.fedoraproject.org
+        - download09.fedoraproject.org
+        - download10.fedoraproject.org
+        - download-rdu01.fedoraproject.org
+        - download.stg.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: translate.fedoraproject.org
+      server_aliases: [translate.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: pki.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: spins.fedoraproject.org
+      server_aliases:
+        - spins.stg.fedoraproject.org
+        - spins-test.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: labs.fedoraproject.org
+      server_aliases:
+        - labs.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: arm.fedoraproject.org
+      server_aliases:
+        - arm.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: iot.fedoraproject.org
+      server_aliases:
+        - iot.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: budget.fedoraproject.org
+      server_aliases:
+        - budget.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: boot.fedoraproject.org
+      server_aliases: [boot.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: smolts.org
+      ssl: false
+      server_aliases:
+        - smolt.fedoraproject.org
+        - stg.smolts.org
+        - www.smolts.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: docs.fedoraproject.org
+      server_aliases:
+        - doc.fedoraproject.org
+        - docs.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: docs-old.fedoraproject.org
+      server_aliases:
+        - docs-old.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: bodhi.fedoraproject.org
+      sslonly: true
+      server_aliases: [bodhi.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: caiapi.fedoraproject.org
+      sslonly: true
+      server_aliases: [caiapi.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: caiapi
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: ostree.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      tags: ostree
+
+    - role: httpd/website
+      site_name: hubs.fedoraproject.org
+      sslonly: true
+      server_aliases: [hubs.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: flocktofedora.org
+      server_aliases:
+        - flocktofedora.org
+        - www.flocktofedora.org
+      ssl: true
+      sslonly: true
+      cert_name: flocktofedora.org
+      SSLCertificateChainFile: flocktofedora.org.intermediate.cert
+
+    - role: httpd/website
+      site_name: flocktofedora.net
+      server_aliases:
+        - flocktofedora.com
+        - www.flocktofedora.net
+        - www.flocktofedora.com
+      ssl: false
+
+    - role: httpd/website
+      site_name: fedora.my
+      server_aliases:
+        - fedora.my
+      ssl: false
+
+    - role: httpd/website
+      site_name: copr.fedoraproject.org
+      sslonly: true
+      server_aliases: [copr.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: copr
+
+    - role: httpd/website
+      site_name: bugz.fedoraproject.org
+      server_aliases: [bugz.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: fas.fedoraproject.org
+      server_aliases:
+        - fas.stg.fedoraproject.org
+        - accounts.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: fedoracommunity.org
+      server_aliases:
+        - www.fedoracommunity.org
+        - stg.fedoracommunity.org
+        - fedoraproject.community
+        - fedora.community
+        - www.fedora.community
+        - www.fedoraproject.community
+      ssl: true
+      cert_name: fedoracommunity.org
+      SSLCertificateChainFile: fedoracommunity.org.intermediate.cert
+
+    - role: httpd/website
+      site_name: get.fedoraproject.org
+      server_aliases: [get.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: help.fedoraproject.org
+      server_aliases: [help.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: it.fedoracommunity.org
+      server_aliases: [it.fedoracommunity.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: uk.fedoracommunity.org
+      server_aliases:
+        - uk.fedoracommunity.org
+        - www.uk.fedoracommunity.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: tw.fedoracommunity.org
+      server_aliases:
+        - tw.fedoracommunity.org
+        - www.tw.fedoracommunity.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: communityblog.fedoraproject.org
+      server_aliases: [communityblog.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: people.fedoraproject.org
+      server_aliases: [people.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: join.fedoraproject.org
+      server_aliases: [join.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: l10n.fedoraproject.org
+      server_aliases: [l10n.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: start.fedoraproject.org
+      server_aliases: [start.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: kde.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: nightly.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: store.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: port389.org
+      server_aliases:
+        - www.port389.org
+        - 389tcp.org
+        - www.389tcp.org
+      ssl: false
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: transtats.fedoraproject.org
+      sslonly: true
+      server_aliases: [transtats.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags:
+        - transtats
+
+    - role: httpd/website
+      site_name: whatcanidoforfedora.org
+      server_aliases:
+        - www.whatcanidoforfedora.org
+        - stg.whatcanidoforfedora.org
+      ssl: true
+      sslonly: true
+      certbot: true
+      tags:
+        - whatcanidoforfedora.org
+
+    - role: httpd/website
+      site_name: fedoramagazine.org
+      server_aliases: [www.fedoramagazine.org, stg.fedoramagazine.org]
+      cert_name: fedoramagazine.org
+      SSLCertificateChainFile: fedoramagazine.org.intermediate.cert
+      sslonly: true
+
+    - role: httpd/website
+      site_name: k12linux.org
+      server_aliases:
+        - www.k12linux.org
+      ssl: false
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: fonts.fedoraproject.org
+      server_aliases: [fonts.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: meetbot.fedoraproject.org
+      server_aliases: [meetbot.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: meetbot-raw.fedoraproject.org
+      server_aliases: [meetbot-raw.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: fudcon.fedoraproject.org
+      server_aliases: [fudcon.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: ask.fedoraproject.org
+      server_aliases: [ask.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: badges.fedoraproject.org
+      server_aliases: [badges.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: paste.fedoraproject.org
+      server_aliases:
+        - paste.stg.fedoraproject.org
+        - modernpaste.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: awx.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      when: env == "production"
+      tags:
+        - awx.fedoraproject.org
+
+    #
+    # Make a website here so we can redirect it to paste.fedoraproject.org
+    #
+    - role: httpd/website
+      site_name: fpaste.org
+      certbot: true
+      server_aliases:
+        - www.fpaste.org
+      tags:
+        - fpaste.org
+
+    - role: httpd/website
+      site_name: koji.fedoraproject.org
+      sslonly: true
+      server_aliases:
+        - koji.stg.fedoraproject.org
+        - kojipkgs.stg.fedoraproject.org
+        - buildsys.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: s390.koji.fedoraproject.org
+      sslonly: true
+      certbot: true
+      server_aliases:
+        - s390pkgs.fedoraproject.org
+      tags:
+        - s390.koji.fedoraproject.org
+
+    - role: httpd/website
+      site_name: kojipkgs.fedoraproject.org
+      sslonly: true
+      server_aliases:
+        - kojipkgs01.fedoraproject.org
+        - kojipkgs02.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: apps.fedoraproject.org
+      server_aliases: [apps.stg.fedoraproject.org]
+      sslonly: true
+      gzip: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: pdc.fedoraproject.org
+      server_aliases: [pdc.stg.fedoraproject.org]
+      sslonly: true
+      gzip: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: developer.fedoraproject.org
+      server_aliases: [developer.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    # This is just a redirect to developer, to make it easier for people to get
+    # here from Red Hat's developers.redhat.com (ticket #5216).
+    - role: httpd/website
+      site_name: developers.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: osbs.fedoraproject.org
+      server_aliases: [osbs.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: os.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      # The Connection and Upgrade headers don't work for h2
+      # So non-h2 is needed to fix websockets.
+      use_h2: false
+      tags:
+        - os.fedoraproject.org
+
+    - role: httpd/website
+      site_name: app.os.fedoraproject.org
+      server_aliases: ["*.app.os.fedoraproject.org"]
+      sslonly: true
+      cert_name: "{{os_wildcard_cert_name}}"
+      SSLCertificateChainFile: "{{os_wildcard_int_file}}"
+      # The Connection and Upgrade headers don't work for h2
+      # So non-h2 is needed to fix websockets.
+      use_h2: false
+      tags:
+        - app.os.fedoraproject.org
+
+    - role: httpd/website
+      site_name: os.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      # The Connection and Upgrade headers don't work for h2
+      # So non-h2 is needed to fix websockets.
+      use_h2: false
+      tags:
+        - os.stg.fedoraproject.org
+
+    - role: httpd/website
+      site_name: app.os.stg.fedoraproject.org
+      server_aliases: ["*.app.os.stg.fedoraproject.org"]
+      sslonly: true
+      cert_name: "{{os_wildcard_cert_name}}"
+      SSLCertificateChainFile: "{{os_wildcard_int_file}}"
+      # The Connection and Upgrade headers don't work for h2
+      # So non-h2 is needed to fix websockets.
+      use_h2: false
+      tags:
+        - app.os.stg.fedoraproject.org
+
+    - role: httpd/website
+      site_name: registry.fedoraproject.org
+      server_aliases: [registry.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: registry.centos.org
+      server_aliases: [registry.stg.centos.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: candidate-registry.fedoraproject.org
+      server_aliases: [candidate-registry.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: retrace.fedoraproject.org
+      server_aliases: [retrace.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: faf.fedoraproject.org
+      server_aliases: [faf.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: alt.fedoraproject.org
+      server_aliases:
+        - alt.stg.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    # Kinda silly that we have two entries here, one for prod and one for stg.
+    # This is inherited from our puppet setup -- we can collapse them as soon as
+    # is convenient.  -- threebean
+    - role: httpd/website
+      site_name: taskotron.fedoraproject.org
+      server_aliases: [taskotron.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: taskotron.stg.fedoraproject.org
+      server_aliases: [taskotron.stg.fedoraproject.org]
+      # Set this explicitly to stg here, as per the original puppet config.
+      SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: lists.fedoraproject.org
+      server_aliases: [lists.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: lists.fedorahosted.org
+      server_aliases: [lists.stg.fedorahosted.org]
+      sslonly: true
+      SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
+      cert_name: wildcard-2017.fedorahosted.org
+
+    - role: httpd/website
+      site_name: id.fedoraproject.org
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      SSLCertificateChainFile: wildcard-2017.fedoraproject.org.intermediate.cert
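+      # assumption: stssubdomains toggles includeSubDomains on the HSTS
+      # header; kept off so the *.id.fedoraproject.org sites (next entry)
+      # can still be served over plain http for openid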
+      stssubdomains: false
+      tags:
+        - id.fedoraproject.org
+
+    - role: httpd/website
+      site_name: username.id.fedoraproject.org
+      server_aliases:
+        - "*.id.fedoraproject.org"
+      # Must not be sslonly, because example.id.fedoraproject.org must be reachable
+      # via plain http for openid identity support
+      sslonly: false
+      cert_name: wildcard-2017.id.fedoraproject.org
+      SSLCertificateChainFile: wildcard-2017.id.fedoraproject.org.intermediate.cert
+      tags:
+        - id.fedoraproject.org
+
+    - role: httpd/website
+      site_name: id.stg.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+      SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
+      sslonly: true
+      tags:
+        - id.fedoraproject.org
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: username.id.stg.fedoraproject.org
+      server_aliases:
+        - "*.id.stg.fedoraproject.org"
+      # Must not be sslonly, because example.id.stg.fedoraproject.org must
+      # be reachable via plain http for openid identity support
+      cert_name: "{{wildcard_cert_name}}"
+      SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
+      tags:
+        - id.fedoraproject.org
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: getfedora.org
+      server_aliases: [stg.getfedora.org]
+      sslonly: true
+      cert_name: getfedora.org
+      SSLCertificateChainFile: getfedora.org.intermediate.cert
+
+    - role: httpd/website
+      site_name: qa.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+      sslonly: true
+
+    - role: httpd/website
+      site_name: openqa.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+      server_aliases: [openqa.stg.fedoraproject.org]
+      sslonly: true
+
+    - role: httpd/website
+      site_name: redirect.fedoraproject.org
+      server_aliases: [redirect.stg.fedoraproject.org]
+      sslonly: true
+      gzip: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: geoip.fedoraproject.org
+      server_aliases: [geoip.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: codecs.fedoraproject.org
+      server_aliases: [codecs.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: jenkins.fedorainfracloud.org
+      cert_name: jenkins.fedorainfracloud.org
+      certbot: true
+
+    - role: httpd/website
+      site_name: beaker.qa.fedoraproject.org
+      server_aliases: [beaker.qa.fedoraproject.org]
+      # Set this explicitly here, as per the original puppet config.
+      SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
+      sslonly: true
+      cert_name: "qa.fedoraproject.org"
+
+    - role: httpd/website
+      site_name: beaker.stg.fedoraproject.org
+      server_aliases: [beaker.stg.fedoraproject.org]
+      # Set this explicitly to stg here, as per the original puppet config.
+      SSLCertificateChainFile: wildcard-2017.stg.fedoraproject.org.intermediate.cert
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: qa.stg.fedoraproject.org
+      server_aliases: [qa.stg.fedoraproject.org]
+      cert_name: qa.stg.fedoraproject.org
+      SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
+      sslonly: true
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: phab.qa.stg.fedoraproject.org
+      server_aliases: [phab.qa.stg.fedoraproject.org]
+      cert_name: qa.stg.fedoraproject.org
+      SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
+      sslonly: true
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: docs.qa.stg.fedoraproject.org
+      server_aliases: [docs.qa.stg.fedoraproject.org]
+      cert_name: qa.stg.fedoraproject.org
+      SSLCertificateChainFile: qa.stg.fedoraproject.org.intermediate.cert
+      sslonly: true
+      when: env == "staging"
+
+    - role: httpd/website
+      site_name: phab.qa.fedoraproject.org
+      server_aliases: [phab.qa.fedoraproject.org]
+      cert_name: qa.fedoraproject.org
+      SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
+      sslonly: true
+
+    - role: httpd/website
+      site_name: data-analysis.fedoraproject.org
+      server_aliases: [data-analysis.stg.fedoraproject.org]
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: docs.qa.fedoraproject.org
+      server_aliases: [docs.qa.fedoraproject.org]
+      cert_name: qa.fedoraproject.org
+      SSLCertificateChainFile: qa.fedoraproject.org.intermediate.cert
+      sslonly: true
+
+    - role: httpd/website
+      site_name: nagios.fedoraproject.org
+      server_aliases: [nagios.stg.fedoraproject.org]
+      SSLCertificateChainFile: wildcard-2017.fedoraproject.org.intermediate.cert
+      sslonly: true
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: mbs.fedoraproject.org
+      sslonly: true
+      server_aliases: [mbs.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: odcs.fedoraproject.org
+      sslonly: true
+      server_aliases: [odcs.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: odcs
+
+    - role: httpd/website
+      site_name: freshmaker.fedoraproject.org
+      sslonly: true
+      server_aliases: [freshmaker.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: freshmaker
+
+    - role: httpd/website
+      site_name: greenwave.fedoraproject.org
+      sslonly: true
+      server_aliases: [greenwave.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: koschei.fedoraproject.org
+      sslonly: true
+      server_aliases: [koschei.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: waiverdb.fedoraproject.org
+      sslonly: true
+      server_aliases: [waiverdb.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: silverblue.fedoraproject.org
+      sslonly: true
+      server_aliases: [silverblue.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+
+    - role: httpd/website
+      site_name: release-monitoring.org
+      sslonly: true
+      certbot: true
+      server_aliases: [stg.release-monitoring.org]
+      tags:
+        - release-monitoring.org
+
+    - role: httpd/website
+      site_name: lists.pagure.io
+      sslonly: true
+      certbot: true
+      tags:
+        - lists.pagure.io
+
+    - role: httpd/website
+      site_name: fpdc.fedoraproject.org
+      sslonly: true
+      server_aliases: [fpdc.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: fpdc
+
+    - role: httpd/website
+      site_name: neuro.fedoraproject.org
+      sslonly: true
+      server_aliases: [neuro.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: neuro
+
+    - role: httpd/website
+      site_name: elections.fedoraproject.org
+      sslonly: true
+      server_aliases: [elections.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: elections
+
+    - role: httpd/website
+      site_name: wallpapers.fedoraproject.org
+      sslonly: true
+      server_aliases: [wallpapers.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: wallpapers
+
+    - role: httpd/website
+      site_name: mdapi.fedoraproject.org
+      sslonly: true
+      server_aliases: [mdapi.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: mdapi
+
+    - role: httpd/website
+      site_name: calendar.fedoraproject.org
+      sslonly: true
+      server_aliases: [calendar.stg.fedoraproject.org]
+      cert_name: "{{wildcard_cert_name}}"
+      tags: calendar
+
+    # fedorahosted is retired. We have the site here so we can redirect it.
+
+    - role: httpd/website
+      site_name: fedorahosted.org
+      sslonly: true
+      server_aliases:
+        [bzr.fedorahosted.org hg.fedorahosted.org svn.fedorahosted.org]
+      SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
+      cert_name: wildcard-2017.fedorahosted.org
+
+    - role: httpd/website
+      site_name: git.fedorahosted.org
+      sslonly: true
+      SSLCertificateChainFile: wildcard-2017.fedorahosted.org.intermediate.cert
+      cert_name: wildcard-2017.fedorahosted.org
+
+    # planet.fedoraproject.org is not to be used, it's fedoraplanet.org
+    # We only have it here so we can redirect it with the correct cert
+
+    - role: httpd/website
+      site_name: planet.fedoraproject.org
+      cert_name: "{{wildcard_cert_name}}"
+
+    # pkgs.fp.o will be an alias of src.fp.o once we get everyone over to https
+    # git push/pull. For now, we just want a cert via the certbot system.
+    - role: httpd/website
+      site_name: pkgs.fedoraproject.org
+      ssl: true
+      sslonly: true
+      certbot: true
+      certbot_addhost: pkgs02.phx2.fedoraproject.org
+      tags:
+        - pkgs.fedoraproject.org
+      when: env == "production" and "phx2" in inventory_hostname
+
+    - role: httpd/website
+      site_name: pkgs.stg.fedoraproject.org
+      ssl: true
+      sslonly: true
+      certbot: true
+      certbot_addhost: pkgs01.stg.phx2.fedoraproject.org
+      tags:
+        - pkgs.fedoraproject.org
+      when: env == "staging" and "phx2" in inventory_hostname
diff --git a/playbooks/include/virt-create.yml b/playbooks/include/virt-create.yml
index 48efb79b1..9ff866233 100644
--- a/playbooks/include/virt-create.yml
+++ b/playbooks/include/virt-create.yml
@@ -3,13 +3,12 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/virt_instance_create.yml"
+    - import_tasks: "{{ tasks_path }}/virt_instance_create.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
-
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/list-vms-per-host.yml b/playbooks/list-vms-per-host.yml
index a7e4ef4b7..c4b2a4cbd 100644
--- a/playbooks/list-vms-per-host.yml
+++ b/playbooks/list-vms-per-host.yml
@@ -7,14 +7,13 @@
   gather_facts: True
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
 
   tasks:
+    - virt: command=info
+      register: virt_info
 
-  - virt: command=info
-    register: virt_info
-
-  - template: src={{files}}/virthost-lists.j2 dest=/tmp/virthost-lists.out
-    delegate_to: localhost
+    - template: src={{files}}/virthost-lists.j2 dest=/tmp/virthost-lists.out
+      delegate_to: localhost
diff --git a/playbooks/manual/autosign.yml b/playbooks/manual/autosign.yml
index bf3c1d399..685f2b98d 100644
--- a/playbooks/manual/autosign.yml
+++ b/playbooks/manual/autosign.yml
@@ -12,36 +12,35 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - basessh
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-  - fedmsg/base
-  - fedmsg/hub
-  - role: nfs/client
-    mnt_dir: '/mnt/fedora_koji'
-    nfs_src_dir: 'fedora_koji'
-    when: env != 'staging'
-  - role: keytab/service
-    service: autosign
-  - robosignatory
+    - base
+    - basessh
+    - rkhunter
+    - nagios_client
+    - hosts
+    - fas_client
+    - collectd/base
+    - sudo
+    - fedmsg/base
+    - fedmsg/hub
+    - role: nfs/client
+      mnt_dir: "/mnt/fedora_koji"
+      nfs_src_dir: "fedora_koji"
+      when: env != 'staging'
+    - role: keytab/service
+      service: autosign
+    - robosignatory
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
-
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/manual/get-system-packages.yml b/playbooks/manual/get-system-packages.yml
index e08b21a65..7494f747d 100644
--- a/playbooks/manual/get-system-packages.yml
+++ b/playbooks/manual/get-system-packages.yml
@@ -1,6 +1,6 @@
 #
-# A playbook to get all the rpms installed on a set of systems. 
-# 
+# A playbook to get all the rpms installed on a set of systems.
+#
 
 - name: Get installed packages
   hosts: builders:releng-compose:data-analysis01.phx2.fedoraproject.org
@@ -8,14 +8,12 @@
   user: root
 
   tasks:
+    - name: RPM_output
+      shell: "/usr/bin/rpm -qa"
+      register: rpm_output
+      args:
+        warn: false # set warn=false to prevent warning
 
-  - name: RPM_output
-    shell: "/usr/bin/rpm -qa"
-    register: rpm_output
-    args:
-      warn: false # set warn=false to prevent warning
-
-
-  - debug: var=rpm_output.stdout_lines
+    - debug: var=rpm_output.stdout_lines
 #    when: rpm_output is defined and rpm_output.results|length > 0
 
diff --git a/playbooks/manual/history_undo.yml b/playbooks/manual/history_undo.yml
index 30ec0e404..c03552fa2 100644
--- a/playbooks/manual/history_undo.yml
+++ b/playbooks/manual/history_undo.yml
@@ -13,26 +13,26 @@
   user: root
 
   tasks:
-  - name: find the ID of the last yum transaction
-    shell: yum history package {{ package }} | sed -n 3p | awk -F "|" '{ print $1 }' | tr -d ' '
-    register: transaction_id
+    - name: find the ID of the last yum transaction
+      shell: yum history package {{ package }} | sed -n 3p | awk -F "|" '{ print $1 }' | tr -d ' '
+      register: transaction_id
 
-  # If transaction_id.stderr == "", then that means that the $PACKAGE we're
-  # looking for was never installed: it does not appear in the yum history.
-  - debug: var=transaction_id.stdout
-    when: transaction_id.stderr == ""
+    # If transaction_id.stderr == "", then that means that the $PACKAGE we're
+    # looking for was never installed: it does not appear in the yum history.
+    - debug: var=transaction_id.stdout
+      when: transaction_id.stderr == ""
 
-  - name: get info on that transaction
-    command: yum history info {{ transaction_id.stdout }}
-    register: transaction_info
-    when: transaction_id.stderr == ""
+    - name: get info on that transaction
+      command: yum history info {{ transaction_id.stdout }}
+      register: transaction_info
+      when: transaction_id.stderr == ""
 
-  - debug: var=transaction_info.stdout_lines
-    when: transaction_id.stderr == ""
+    - debug: var=transaction_info.stdout_lines
+      when: transaction_id.stderr == ""
 
-  #- pause: seconds=30 prompt="Undoing that yum transaction.  Abort if this is wrong."
-  #  when: transaction_id.stderr == ""
+    #- pause: seconds=30 prompt="Undoing that yum transaction.  Abort if this is wrong."
+    #  when: transaction_id.stderr == ""
 
-  - name: Okay.. undo that transaction now
-    command: yum -y history undo {{ transaction_id.stdout }}
-    when: transaction_id.stderr == ""
+    - name: Okay.. undo that transaction now
+      command: yum -y history undo {{ transaction_id.stdout }}
+      when: transaction_id.stderr == ""
diff --git a/playbooks/manual/kernel-qa.yml b/playbooks/manual/kernel-qa.yml
index 792842441..47b9ecf98 100644
--- a/playbooks/manual/kernel-qa.yml
+++ b/playbooks/manual/kernel-qa.yml
@@ -8,22 +8,21 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - sudo
-  - hosts
+    - base
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - sudo
+    - hosts
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
-
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/manual/openqa-restart-workers.yml b/playbooks/manual/openqa-restart-workers.yml
index 786fbdbc4..838eb8eeb 100644
--- a/playbooks/manual/openqa-restart-workers.yml
+++ b/playbooks/manual/openqa-restart-workers.yml
@@ -2,14 +2,13 @@
   hosts: openqa-workers:openqa-stg-workers
   user: root
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-  - name: restart all the worker services
-    service: name=openqa-worker@{{ item }} state=restarted
-    with_sequence: "count={{ openqa_workers }}"
-
+    - name: restart all the worker services
+      service: name=openqa-worker@{{ item }} state=restarted
+      with_sequence: "count={{ openqa_workers }}"
diff --git a/playbooks/manual/push-badges.yml b/playbooks/manual/push-badges.yml
index ca1713c5f..2bf43f9e4 100644
--- a/playbooks/manual/push-badges.yml
+++ b/playbooks/manual/push-badges.yml
@@ -13,44 +13,44 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   vars:
-   upstream: "https://pagure.io/fedora-badges.git";
-   workingdir: /srv/web/infra/badges/
+    upstream: "https://pagure.io/fedora-badges.git";
+    workingdir: /srv/web/infra/badges/
 
   tasks:
-  - name: Make a tmp directory
-    tempfile:
-      state: directory
-      suffix: _badges_tempdir
-    register: tmp
+    - name: Make a tmp directory
+      tempfile:
+        state: directory
+        suffix: _badges_tempdir
+      register: tmp
 
-  - set_fact:
-      tempdir: "{{tmp.path}}"
+    - set_fact:
+        tempdir: "{{tmp.path}}"
 
-  - name: clone the local bare repo
-    git: dest={{tempdir}} repo=/git/badges remote=origin update=yes
+    - name: clone the local bare repo
+      git: dest={{tempdir}} repo=/git/badges remote=origin update=yes
 
-  - name: add pagure as a second remote
-    command: git remote add pagure {{upstream}} chdir={{tempdir}}
+    - name: add pagure as a second remote
+      command: git remote add pagure {{upstream}} chdir={{tempdir}}
 
-  - name: pull down changes from pagure
-    command: git pull pagure master chdir={{tempdir}}
+    - name: pull down changes from pagure
+      command: git pull pagure master chdir={{tempdir}}
 
-  - name: push pagure changes back to the lockbox bare repo
-    command: git push origin master chdir={{tempdir}}
+    - name: push pagure changes back to the lockbox bare repo
+      command: git push origin master chdir={{tempdir}}
 
-  - name: clean up that temporary {{tempdir}} dir
-    file: dest={{tempdir}} state=absent
+    - name: clean up that temporary {{tempdir}} dir
+      file: dest={{tempdir}} state=absent
 
-  - name: and pull those commits from the bare repo to the working dir
-    command: git pull origin master chdir={{workingdir}}
+    - name: and pull those commits from the bare repo to the working dir
+      command: git pull origin master chdir={{workingdir}}
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: copy new badge art over to the badges web nodes
   hosts: badges-web:badges-web-stg
@@ -58,15 +58,15 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - badges/frontend
+    - badges/frontend
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: copy any new badges rules over to the badges backend and restart it
   hosts: badges-backend:badges-backend-stg
@@ -74,12 +74,12 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-   - badges/backend
+    - badges/backend
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/manual/qadevel.yml b/playbooks/manual/qadevel.yml
index d01500fc1..c04902ecf 100644
--- a/playbooks/manual/qadevel.yml
+++ b/playbooks/manual/qadevel.yml
@@ -9,15 +9,15 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/virt_instance_create.yml"
+    - import_tasks: "{{ tasks_path }}/virt_instance_create.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: make the box be real
   hosts: qadevel:qadevel-stg
@@ -25,25 +25,25 @@
   gather_facts: True
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - fas_client
-  - collectd/base
-  - sudo
+    - base
+    - rkhunter
+    - nagios_client
+    - fas_client
+    - collectd/base
+    - sudo
 
   pre_tasks:
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/hosts.yml"
-  - import_tasks: "{{ tasks_path }}/2fa_client.yml"
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/hosts.yml"
+    - import_tasks: "{{ tasks_path }}/2fa_client.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/manual/releng-emergency-expire-old-repo.yml b/playbooks/manual/releng-emergency-expire-old-repo.yml
index 797c3a6f8..84184b6a9 100644
--- a/playbooks/manual/releng-emergency-expire-old-repo.yml
+++ b/playbooks/manual/releng-emergency-expire-old-repo.yml
@@ -20,19 +20,19 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: Expire old files
-    command: /usr/bin/mm2_emergency-expire-repo {{product}} {{version}}
+    - name: Expire old files
+      command: /usr/bin/mm2_emergency-expire-repo {{product}} {{version}}
 
-  - name: Recreate pickle
-    command: /usr/bin/mm2_update-mirrorlist-server
+    - name: Recreate pickle
+      command: /usr/bin/mm2_update-mirrorlist-server
 
-  - name: Sync the pickle
-    command: /usr/local/bin/sync_pkl_to_mirrorlists.sh
+    - name: Sync the pickle
+      command: /usr/local/bin/sync_pkl_to_mirrorlists.sh
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/manual/remote_delldrive.yml b/playbooks/manual/remote_delldrive.yml
index 066ff7dc2..222fa912d 100644
--- a/playbooks/manual/remote_delldrive.yml
+++ b/playbooks/manual/remote_delldrive.yml
@@ -3,19 +3,19 @@
   hosts: "{{target}}"
   user: root
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: Copy script over to {{target}}
-    copy: src={{private}}/scripts/drivestatus.py dest=/root/drivestatus.py
+    - name: Copy script over to {{target}}
+      copy: src={{private}}/scripts/drivestatus.py dest=/root/drivestatus.py
 
-  - name: Run it for {{mgmt}}
-    shell: python /root/drivestatus.py {{mgmt}}
-    register: out
+    - name: Run it for {{mgmt}}
+      shell: python /root/drivestatus.py {{mgmt}}
+      register: out
 
-  - name: Remove it
-    file: path=/root/drivestatus.py state=absent
+    - name: Remove it
+      file: path=/root/drivestatus.py state=absent
 
-  - debug: var=out.stdout_lines
+    - debug: var=out.stdout_lines
diff --git a/playbooks/manual/restart-fedmsg-services.yml b/playbooks/manual/restart-fedmsg-services.yml
index a5e6628e3..294ab4187 100644
--- a/playbooks/manual/restart-fedmsg-services.yml
+++ b/playbooks/manual/restart-fedmsg-services.yml
@@ -11,13 +11,13 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: bounce the fedmsg-gateway service
-    service: name=fedmsg-gateway state=restarted
+    - name: bounce the fedmsg-gateway service
+      service: name=fedmsg-gateway state=restarted
 
 - name: restart fedmsg-relay instances
   hosts: fedmsg-relays:fedmsg-relays-stg
@@ -25,13 +25,13 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: bounce the fedmsg-relay service
-    service: name=fedmsg-relay state=restarted
+    - name: bounce the fedmsg-relay service
+      service: name=fedmsg-relay state=restarted
 
 - name: restart fedmsg-irc instances
   hosts: fedmsg-ircs:fedmsg-ircs-stg
@@ -39,13 +39,13 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: bounce the fedmsg-irc service
-    service: name=fedmsg-irc state=restarted
+    - name: bounce the fedmsg-irc service
+      service: name=fedmsg-irc state=restarted
 
 - name: tell nagios to be quiet about FMN for the moment
   hosts: notifs-backend:notifs-backend-stg
@@ -53,15 +53,15 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: schedule a 25 minute downtime.  give notifs backend time to start up.
-    nagios: action=downtime minutes=25 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
+    - name: schedule a 25 minute downtime.  give notifs backend time to start up.
+      nagios: action=downtime minutes=25 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
 #  - name: bounce the fmn-digests service
 #    service: name=fmn-digests@1 state=restarted
 
@@ -71,13 +71,13 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: bounce the fedmsg-hub service
-    service: name=fedmsg-hub state=restarted
+    - name: bounce the fedmsg-hub service
+      service: name=fedmsg-hub state=restarted
 
 - name: restart moksha-hub instances
   hosts: moksha-hubs:moksha-hubs-stg
@@ -85,10 +85,10 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: bounce the moksha-hub service
-    service: name=moksha-hub state=restarted
+    - name: bounce the moksha-hub service
+      service: name=moksha-hub state=restarted
diff --git a/playbooks/manual/restart-pagure.yml b/playbooks/manual/restart-pagure.yml
index 9f9630127..92123e7c2 100644
--- a/playbooks/manual/restart-pagure.yml
+++ b/playbooks/manual/restart-pagure.yml
@@ -2,23 +2,23 @@
   hosts: pagure:pagure-stg
   user: root
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-  - name: ask puiterwijk if he would like to capture debug info before restarting.
-    pause: seconds=30 prompt="Restarting pagure, abort if you want to get puiterwijk's attention first."
+    - name: ask puiterwijk if he would like to capture debug info before restarting.
+      pause: seconds=30 prompt="Restarting pagure, abort if you want to get puiterwijk's attention first."
 
-  - debug: msg=Karate Chop!
+    - debug: msg=Karate Chop!
 
-  - name: Reload apache...
-    service: name="httpd" state=reloaded
+    - name: Reload apache...
+      service: name="httpd" state=reloaded
 
   post_tasks:
-  - name: tell nagios to unshush w.r.t. apache
-    nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
+    - name: tell nagios to unshush w.r.t. apache
+      nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
diff --git a/playbooks/manual/sign-and-import.yml b/playbooks/manual/sign-and-import.yml
index 0f2d11103..49365d034 100644
--- a/playbooks/manual/sign-and-import.yml
+++ b/playbooks/manual/sign-and-import.yml
@@ -23,58 +23,58 @@
   # repo.  Since we're in freeze right now, we'll default to the testing repo.
   # It would be nice to be able to toggle this from the command line.
   vars:
-  - repodir: /mnt/fedora/app/fi-repo/{% if testing %}testing/{% endif %}{{ rhel }}
-  - testing: False
+    - repodir: /mnt/fedora/app/fi-repo/{% if testing %}testing/{% endif %}{{ rhel }}
+    - testing: False
 
   tasks:
-  - fail: msg="Please use the infra tags from now on"
-    when: no_use_infratags is not defined
+    - fail: msg="Please use the infra tags from now on"
+      when: no_use_infratags is not defined
 
-  - fail: msg="Please specify rhel version with rhel=6/7"
-    when: rhel is not defined
+    - fail: msg="Please specify rhel version with rhel=6/7"
+      when: rhel is not defined
 
-  - name: Fail if no rpmdir provided
-    fail: msg="No rpmdir provided"
-    when: rpmdir is not defined
-  # TODO -- I'd also like to fail if rpmdir does not exist.
-  # TODO -- I'd also like to fail if there are no *.rpm files in there.
+    - name: Fail if no rpmdir provided
+      fail: msg="No rpmdir provided"
+      when: rpmdir is not defined
+    # TODO -- I'd also like to fail if rpmdir does not exist.
+    # TODO -- I'd also like to fail if there are no *.rpm files in there.
 
-  - name: sign all the rpms with our gpg key
-    shell: /bin/rpm --resign {{ rpmdir }}/*.rpm
+    - name: sign all the rpms with our gpg key
+      shell: /bin/rpm --resign {{ rpmdir }}/*.rpm
 
-  - name: make a directory where we store the rpms afterwards
-    file: path={{ rpmdir }}-old state=directory
+    - name: make a directory where we store the rpms afterwards
+      file: path={{ rpmdir }}-old state=directory
 
-  - name: copy the source rpms to the SRPMS dir of {{ repodir }}
-    copy: src={{ item }} dest={{ repodir }}/SRPMS/
-    with_fileglob:
-     - "{{ rpmdir }}/*.src.rpm"
+    - name: copy the source rpms to the SRPMS dir of {{ repodir }}
+      copy: src={{ item }} dest={{ repodir }}/SRPMS/
+      with_fileglob:
+        - "{{ rpmdir }}/*.src.rpm"
 
-  - name: move processed srpms out to {{ rpmdir }}-old
-    command: /bin/mv {{ item }} {{ rpmdir }}-old/
-    when: not testing
-    with_fileglob:
-     - "{{ rpmdir }}/*.src.rpm"
+    - name: move processed srpms out to {{ rpmdir }}-old
+      command: /bin/mv {{ item }} {{ rpmdir }}-old/
+      when: not testing
+      with_fileglob:
+        - "{{ rpmdir }}/*.src.rpm"
 
-  - name: copy the binary rpms to the x86_64 dir of {{ repodir }}
-    copy: src={{ item }} dest={{ repodir }}/x86_64/
-    with_fileglob:
-     - "{{ rpmdir }}/*.rpm"
+    - name: copy the binary rpms to the x86_64 dir of {{ repodir }}
+      copy: src={{ item }} dest={{ repodir }}/x86_64/
+      with_fileglob:
+        - "{{ rpmdir }}/*.rpm"
 
-  - name: copy the binary rpms to the i386 dir of {{ repodir }}
-    copy: src={{ item }} dest={{ repodir }}/i386/
-    with_fileglob:
-     - "{{ rpmdir }}/*.rpm"
+    - name: copy the binary rpms to the i386 dir of {{ repodir }}
+      copy: src={{ item }} dest={{ repodir }}/i386/
+      with_fileglob:
+        - "{{ rpmdir }}/*.rpm"
 
-  - name: move processed rpms out to {{ rpmdir }}-old
-    command: /bin/mv {{ item }} {{ rpmdir }}-old/
-    when: not testing
-    with_fileglob:
-     - "{{ rpmdir }}/*.rpm"
+    - name: move processed rpms out to {{ rpmdir }}-old
+      command: /bin/mv {{ item }} {{ rpmdir }}-old/
+      when: not testing
+      with_fileglob:
+        - "{{ rpmdir }}/*.rpm"
 
-  - name: Run createrepo on each repo
-    command: createrepo --update {{ repodir }}/{{ item }}/
-    with_items:
-    - SRPMS
-    - x86_64
-    - i386
+    - name: Run createrepo on each repo
+      command: createrepo --update {{ repodir }}/{{ item }}/
+      with_items:
+        - SRPMS
+        - x86_64
+        - i386
diff --git a/playbooks/manual/sign-vault.yml b/playbooks/manual/sign-vault.yml
index 85d814495..2d670bd6b 100644
--- a/playbooks/manual/sign-vault.yml
+++ b/playbooks/manual/sign-vault.yml
@@ -12,15 +12,15 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/virt_instance_create.yml"
+    - import_tasks: "{{ tasks_path }}/virt_instance_create.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
 - name: make sign vault server
   hosts: sign-vault
@@ -28,22 +28,22 @@
   gather_facts: true
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - base
-  - rkhunter
-  - serial-console
-  - sigul/server
+    - base
+    - rkhunter
+    - serial-console
+    - sigul/server
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
-  - import_tasks: "{{ tasks_path }}/yumrepos.yml"
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - import_tasks: "{{ tasks_path }}/yumrepos.yml"
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/motd.yml"
+    - import_tasks: "{{ tasks_path }}/motd.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/manual/sync-old-pkl.yml b/playbooks/manual/sync-old-pkl.yml
index 4c1f5b2f2..9147684e5 100644
--- a/playbooks/manual/sync-old-pkl.yml
+++ b/playbooks/manual/sync-old-pkl.yml
@@ -3,41 +3,41 @@
   user: root
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: Copy borked pkl
-    copy: src=/var/lib/mirrormanager/mirrorlist_cache.pkl dest=/root/mirrorlist_cache.pkl-{{ ansible_date_time.date }} remote_src=yes
+    - name: Copy borked pkl
+      copy: src=/var/lib/mirrormanager/mirrorlist_cache.pkl dest=/root/mirrorlist_cache.pkl-{{ ansible_date_time.date }} remote_src=yes
 
-  - name: Nuke borked pkl
-    file: path=/var/lib/mirrormanager/mirrorlist_cache.pkl state=absent
+    - name: Nuke borked pkl
+      file: path=/var/lib/mirrormanager/mirrorlist_cache.pkl state=absent
 
-  - name: Copy old pkl/files into place
-    copy: src=/var/lib/mirrormanager/old/{{item}} dest=/var/lib/mirrormanager/{{item}} force=yes remote_src=yes
-    with_items:
-    - mirrorlist_cache.pkl
-    - i2_netblocks.txt
-    - global_netblocks.txt
+    - name: Copy old pkl/files into place
+      copy: src=/var/lib/mirrormanager/old/{{item}} dest=/var/lib/mirrormanager/{{item}} force=yes remote_src=yes
+      with_items:
+        - mirrorlist_cache.pkl
+        - i2_netblocks.txt
+        - global_netblocks.txt
 
-  - name: Sync the pkl
-    command: /usr/local/bin/sync_pkl_to_mirrorlists.sh
-    become: yes
-    become_user: mirrormanager
+    - name: Sync the pkl
+      command: /usr/local/bin/sync_pkl_to_mirrorlists.sh
+      become: yes
+      become_user: mirrormanager
 
 - name: Do mm-proxy stuff
   hosts: mirrorlist-proxies
   user: root
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: Restart mirrorlist1
-    command: systemctl restart mirrorlist1
+    - name: Restart mirrorlist1
+      command: systemctl restart mirrorlist1
 
-  - name: Stop mirrorlist2
-    command: systemctl stop mirrorlist2
+    - name: Stop mirrorlist2
+      command: systemctl stop mirrorlist2
diff --git a/playbooks/manual/update-firmware.yml b/playbooks/manual/update-firmware.yml
index 301880a94..3a60546e4 100644
--- a/playbooks/manual/update-firmware.yml
+++ b/playbooks/manual/update-firmware.yml
@@ -12,101 +12,100 @@
 - name: Show warning
   hosts: localhost
   tasks:
-  - pause: prompt="DO NOT ABORT THIS PLAYBOOK, IT WILL TAKE LONG! Press enter to confirm"
-  - pause: prompt="Giving you time to read the above warnings..." minutes=5
-  - pause: prompt="Hit enter one more time to confirm..."
+    - pause: prompt="DO NOT ABORT THIS PLAYBOOK, IT WILL TAKE LONG! Press enter to confirm"
+    - pause: prompt="Giving you time to read the above warnings..." minutes=5
+    - pause: prompt="Hit enter one more time to confirm..."
 
 - name: Copy and apply firmware upgrades
   hosts: all
   user: root
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   vars:
-  - updates:
-    - dirname: Dell-R520
-      vendor: "Dell Inc."
-      product: "PowerEdge R520"
-      files:
-      - iDRAC-with-Lifecycle-Controller_Firmware_VV01T_LN_2.21.21.21_A00.BIN
-      - R520_BIOS_35C9T_LN_2.4.2.BIN
-    - dirname: Dell-R630
-      vendor: "Dell Inc."
-      product: "PowerEdge R630"
-      files:
-      - iDRAC-with-Lifecycle-Controller_Firmware_1X82C_LN_2.21.21.21_A00.BIN
-      - BIOS_1RMMP_LN_1.5.4.BIN
-    - dirname: Dell-R720xd
-      vendor: "Dell Inc."
-      product: "PowerEdge R720xd"
-      files:
-      - iDRAC-with-Lifecycle-Controller_Firmware_VV01T_LN_2.21.21.21_A00.BIN
-      - BIOS_MKCTM_LN_2.5.2.BIN
+    - updates:
+        - dirname: Dell-R520
+          vendor: "Dell Inc."
+          product: "PowerEdge R520"
+          files:
+            - iDRAC-with-Lifecycle-Controller_Firmware_VV01T_LN_2.21.21.21_A00.BIN
+            - R520_BIOS_35C9T_LN_2.4.2.BIN
+        - dirname: Dell-R630
+          vendor: "Dell Inc."
+          product: "PowerEdge R630"
+          files:
+            - iDRAC-with-Lifecycle-Controller_Firmware_1X82C_LN_2.21.21.21_A00.BIN
+            - BIOS_1RMMP_LN_1.5.4.BIN
+        - dirname: Dell-R720xd
+          vendor: "Dell Inc."
+          product: "PowerEdge R720xd"
+          files:
+            - iDRAC-with-Lifecycle-Controller_Firmware_VV01T_LN_2.21.21.21_A00.BIN
+            - BIOS_MKCTM_LN_2.5.2.BIN
 
   tasks:
-  - name: Create drop place for upgrades
-    check_mode: no
-    when: ansible_virtualization_role == "host"
-    file: path=/root/firmware-upgrades
-          state=directory
+    - name: Create drop place for upgrades
+      check_mode: no
+      when: ansible_virtualization_role == "host"
+      file: path=/root/firmware-upgrades
+        state=directory
 
-  - name: Check which updates to copy
-    check_mode: no
-    stat: path=/root/firmware-upgrades/{{ item.1}}.applied
-    register: is_applied_results
-    when: item.0.vendor == ansible_system_vendor and item.0.product == ansible_product_name
-    with_subelements:
-    - updates
-    - files
+    - name: Check which updates to copy
+      check_mode: no
+      stat: path=/root/firmware-upgrades/{{ item.1}}.applied
+      register: is_applied_results
+      when: item.0.vendor == ansible_system_vendor and item.0.product == ansible_product_name
+      with_subelements:
+        - updates
+        - files
 
-  - name: Copy updates
-    check_mode: no
-    copy: src={{ bigfiles }}/firmware/{{ item.item.0.dirname }}/{{ item.item.1}}
-          dest=/root/firmware-upgrades/
-          mode=0700
-    register: copy_results
-    when: "'stat' in item and not item.stat.exists"
-    with_items: "{{is_applied_results.results}}"
+    - name: Copy updates
+      check_mode: no
+      copy:
+        src={{ bigfiles }}/firmware/{{ item.item.0.dirname }}/{{ item.item.1}}
+        dest=/root/firmware-upgrades/
+        mode=0700
+      register: copy_results
+      when: "'stat' in item and not item.stat.exists"
+      with_items: "{{is_applied_results.results}}"
 
+    # Dell updates here
+    - name: Check Dell updates
+      check_mode: no
+      command: /root/firmware-upgrades/{{ item.item.1}} -qc
+      register: check_results
+      failed_when: "'System(s) supported by this package' in check_results.stdout"
+      changed_when: "'is the same' not in check_results.stdout"
+      when: "ansible_system_vendor == 'Dell Inc.' and 'stat' in item and not item.stat.exists"
+      with_items: "{{is_applied_results.results}}"
 
-  # Dell updates here
-  - name: Check Dell updates
-    check_mode: no
-    command: /root/firmware-upgrades/{{ item.item.1}} -qc
-    register: check_results
-    failed_when: "'System(s) supported by this package' in check_results.stdout"
-    changed_when: "'is the same' not in check_results.stdout"
-    when: "ansible_system_vendor == 'Dell Inc.' and 'stat' in item and not item.stat.exists"
-    with_items: "{{is_applied_results.results}}"
+    - name: Apply Dell updates
+      command: /root/firmware-upgrades/{{ item.item.item.1}} -q
+      register: update_results
+      failed_when: "'System(s) supported by this package:' in update_results.stdout"
+      changed_when: "'should be restarted' in update_results.stdout or 'completed successfully' in update_results.stdout"
+      when: ansible_system_vendor == "Dell Inc." and item.changed
+      with_items: "{{check_results.results}}"
 
-  - name: Apply Dell updates
-    command: /root/firmware-upgrades/{{ item.item.item.1}} -q
-    register: update_results
-    failed_when: "'System(s) supported by this package:' in update_results.stdout"
-    changed_when: "'should be restarted' in update_results.stdout or 'completed successfully' in update_results.stdout"
-    when: ansible_system_vendor == "Dell Inc." and item.changed
-    with_items: "{{check_results.results}}"
+    # Note: IBM updates were considered, but IBM does not allow checking of
+    # downloaded firmware packages: at the moment of writing they do not
+    # publish a GPG signature or checksums of downloaded files. (2016-01-21)
 
-  # Note: IBM updates were considered, but IBM does not allow checking of
-  # downloaded firmware packages: at the moment of writing they do not
-  # publish a GPG signature or checksums of downloaded files. (2016-01-21)
+    # Generic stuff continues here
+    - name: Mark updates as done
+      file: path=/root/firmware-upgrades/{{ item.item.1 }}.applied
+        state=touch owner=root mode=644
+      when: "'stat' in item and not item.stat.exists"
+      with_items: "{{is_applied_results.results}}"
 
-
-  # Generic stuff continues here
-  - name: Mark updates as done
-    file: path=/root/firmware-upgrades/{{ item.item.1 }}.applied
-          state=touch owner=root mode=644
-    when: "'stat' in item and not item.stat.exists"
-    with_items: "{{is_applied_results.results}}"
-
-  # We are cleaning up all files we copied, regardless of update result
-  - name: Delete update files
-    check_mode: no
-    file: path=/root/firmware-upgrades/{{ item.item.1 }}
-          state=absent
-    when: "'stat' in item and not item.stat.exists"
-    with_items: "{{is_applied_results.results}}"
+    # We are cleaning up all files we copied, regardless of update result
+    - name: Delete update files
+      check_mode: no
+      file: path=/root/firmware-upgrades/{{ item.item.1 }}
+        state=absent
+      when: "'stat' in item and not item.stat.exists"
+      with_items: "{{is_applied_results.results}}"
diff --git a/playbooks/manual/update-packages.yml b/playbooks/manual/update-packages.yml
index 7b6eb9344..98f1ca8d5 100644
--- a/playbooks/manual/update-packages.yml
+++ b/playbooks/manual/update-packages.yml
@@ -14,28 +14,26 @@
     testing: False
 
   tasks:
+    - name: yum update {{ package }} from main repo
+      yum: name="{{ package }}" state=latest update_cache=yes
+      when: not testing and ansible_distribution_major_version|int < 22
 
-  - name: yum update {{ package }} from main repo
-    yum: name="{{ package }}" state=latest update_cache=yes
-    when: not testing and ansible_distribution_major_version|int < 22
+    - name: yum update {{ package }} from testing repo
+      yum: name="{{ package }}" state=latest enablerepo=infrastructure-tags-stg update_cache=yes
+      when: testing and ansible_distribution_major_version|int < 22
 
-  - name: yum update {{ package }} from testing repo
-    yum: name="{{ package }}" state=latest enablerepo=infrastructure-tags-stg update_cache=yes
-    when: testing and ansible_distribution_major_version|int < 22
+    - name: dnf clean all (since we can't do it when updating)
+      command: dnf clean all
+      when: not testing and ansible_distribution_major_version|int > 21
 
-  - name: dnf clean all (since we can't do it when updating)
-    command: dnf clean all
-    when: not testing and ansible_distribution_major_version|int > 21
+    - name: dnf update {{ package }} from main repo
+      dnf: name="{{ package }}" state=latest
+      when: not testing and ansible_distribution_major_version|int > 21
 
-  - name: dnf update {{ package }} from main repo
-    dnf: name="{{ package }}" state=latest
-    when: not testing and ansible_distribution_major_version|int > 21
-
-  - name: dnf clean all (since we can't do it when updating)
-    command: dnf clean all --enablerepo=infrastructure-tags-stg
-    when: testing and ansible_distribution_major_version|int > 21
-
-  - name: dnf update {{ package }} from testing repo
-    dnf: name="{{ package }}" state=latest enablerepo=infrastructure-tags-stg
-    when: testing and ansible_distribution_major_version|int > 21
+    - name: dnf clean all (since we can't do it when updating)
+      command: dnf clean all --enablerepo=infrastructure-tags-stg
+      when: testing and ansible_distribution_major_version|int > 21
 
+    - name: dnf update {{ package }} from testing repo
+      dnf: name="{{ package }}" state=latest enablerepo=infrastructure-tags-stg
+      when: testing and ansible_distribution_major_version|int > 21
diff --git a/playbooks/openshift-apps/accountsystem.yml b/playbooks/openshift-apps/accountsystem.yml
index 72278ef0a..3a93960a1 100644
--- a/playbooks/openshift-apps/accountsystem.yml
+++ b/playbooks/openshift-apps/accountsystem.yml
@@ -9,47 +9,47 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: accountsystem
-    description: CAIAPI and Noggin
-    appowners:
-    - puiterwijk
-  - role: openshift/imagestream
-    app: accountsystem
-    imagename: caiapi
-  - role: openshift/imagestream
-    app: accountsystem
-    imagename: noggin
-  - role: openshift/object
-    app: accountsystem
-    objectname: buildconfig_caiapi.yml
-    template: buildconfig_caiapi.yml
-  - role: openshift/start-build
-    app: accountsystem
-    buildname: caiapi-build
-  - role: openshift/object
-    app: accountsystem
-    template: configmap_caiapi.yml
-    objectname: configmap_caiapi.yml
-  - role: openshift/secret-file
-    app: accountsystem
-    key: oidc
-    secret_name: oidc
-    privatefile: "caiapi/{{env}}/oidc.json"
-  - role: openshift/object
-    app: accountsystem
-    file: service_caiapi.yml
-    objectname: service_caiapi.yml
-  - role: openshift/route
-    app: accountsystem
-    routename: caiapi
-    host: "caiapi{{ env_suffix }}.fedoraproject.org"
-    servicename: caiapi
-    serviceport: 8080
-  - role: openshift/object
-    app: accountsystem
-    file: deploymentconfig_caiapi.yml
-    objectname: deploymentconfig_caiapi.yml
-  - role: openshift/rollout
-    app: accountsystem
-    dcname: caiapi
+    - role: openshift/project
+      app: accountsystem
+      description: CAIAPI and Noggin
+      appowners:
+        - puiterwijk
+    - role: openshift/imagestream
+      app: accountsystem
+      imagename: caiapi
+    - role: openshift/imagestream
+      app: accountsystem
+      imagename: noggin
+    - role: openshift/object
+      app: accountsystem
+      objectname: buildconfig_caiapi.yml
+      template: buildconfig_caiapi.yml
+    - role: openshift/start-build
+      app: accountsystem
+      buildname: caiapi-build
+    - role: openshift/object
+      app: accountsystem
+      template: configmap_caiapi.yml
+      objectname: configmap_caiapi.yml
+    - role: openshift/secret-file
+      app: accountsystem
+      key: oidc
+      secret_name: oidc
+      privatefile: "caiapi/{{env}}/oidc.json"
+    - role: openshift/object
+      app: accountsystem
+      file: service_caiapi.yml
+      objectname: service_caiapi.yml
+    - role: openshift/route
+      app: accountsystem
+      routename: caiapi
+      host: "caiapi{{ env_suffix }}.fedoraproject.org"
+      servicename: caiapi
+      serviceport: 8080
+    - role: openshift/object
+      app: accountsystem
+      file: deploymentconfig_caiapi.yml
+      objectname: deploymentconfig_caiapi.yml
+    - role: openshift/rollout
+      app: accountsystem
+      dcname: caiapi
diff --git a/playbooks/openshift-apps/asknot.yml b/playbooks/openshift-apps/asknot.yml
index c8e6d5f49..8b4c10a57 100644
--- a/playbooks/openshift-apps/asknot.yml
+++ b/playbooks/openshift-apps/asknot.yml
@@ -9,53 +9,53 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: asknot
-    description: What can I do for Fedora
-    appowners:
-    - cverna
-
-  - role: openshift/object
-    app: asknot
-    template: imagestream.yml
-    objectname: imagestream.yml
-
-  - role: openshift/object
-    app: asknot
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-
-  - role: openshift/start-build
-    app: asknot
-    buildname: asknot-build
-    objectname: asknot-build
-
-  - role: openshift/object
-    app: asknot
-    file: service.yml
-    objectname: service.yml
-
-  - role: openshift/route
-    app: asknot
-    routename: asknot
-    host: "stg.whatcanidoforfedora.org"
-    serviceport: 8080-tcp
-    servicename: asknot
-    when: env == "staging"
-
-  - role: openshift/route
-    app: asknot
-    routename: asknot
-    host: "whatcanidoforfedora.org"
-    serviceport: 8080-tcp
-    servicename: asknot
-    when: env == "production"
-
-  - role: openshift/object
-    app: asknot
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-
-  - role: openshift/rollout
-    app: asknot
-    dcname: asknot
+    - role: openshift/project
+      app: asknot
+      description: What can I do for Fedora
+      appowners:
+        - cverna
+
+    - role: openshift/object
+      app: asknot
+      template: imagestream.yml
+      objectname: imagestream.yml
+
+    - role: openshift/object
+      app: asknot
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+
+    - role: openshift/start-build
+      app: asknot
+      buildname: asknot-build
+      objectname: asknot-build
+
+    - role: openshift/object
+      app: asknot
+      file: service.yml
+      objectname: service.yml
+
+    - role: openshift/route
+      app: asknot
+      routename: asknot
+      host: "stg.whatcanidoforfedora.org"
+      serviceport: 8080-tcp
+      servicename: asknot
+      when: env == "staging"
+
+    - role: openshift/route
+      app: asknot
+      routename: asknot
+      host: "whatcanidoforfedora.org"
+      serviceport: 8080-tcp
+      servicename: asknot
+      when: env == "production"
+
+    - role: openshift/object
+      app: asknot
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+
+    - role: openshift/rollout
+      app: asknot
+      dcname: asknot
diff --git a/playbooks/openshift-apps/bodhi.yml b/playbooks/openshift-apps/bodhi.yml
index e761e090f..16c3b80e8 100644
--- a/playbooks/openshift-apps/bodhi.yml
+++ b/playbooks/openshift-apps/bodhi.yml
@@ -9,88 +9,88 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   pre_tasks:
-  - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
+    - include_vars: dir=/srv/web/infra/ansible/vars/all/ ignore_files=README
 
   roles:
-  - role: openshift/project
-    app: bodhi
-    description: bodhi
-    appowners:
-    - bowlofeggs
-  - role: openshift/keytab
-    app: bodhi
-    key: koji-keytab
-    secret_name: bodhi-keytab
-    service: bodhi
-    host: "bodhi{{ env_suffix }}.fedoraproject.org"
-  - role: openshift/secret-file
-    app: bodhi
-    secret_name: bodhi-fedmsg-key
-    key: fedmsg-bodhi.key
-    privatefile: fedmsg-certs/keys/bodhi-bodhi01.stg.phx2.fedoraproject.org.key
-    when: env == "staging"
-  - role: openshift/secret-file
-    app: bodhi
-    secret_name: bodhi-fedmsg-key
-    key: fedmsg-bodhi.key
-    privatefile: fedmsg-certs/keys/bodhi-bodhi-web-temp-bodhi.app.os.fedoraproject.org.key
-    when: env != "staging"
-  - role: openshift/secret-file
-    app: bodhi
-    secret_name: bodhi-fedmsg-crt
-    key: fedmsg-bodhi.crt
-    privatefile: fedmsg-certs/keys/bodhi-bodhi01.stg.phx2.fedoraproject.org.crt
-    when: env == "staging"
-  - role: openshift/secret-file
-    app: bodhi
-    secret_name: bodhi-fedmsg-crt
-    key: fedmsg-bodhi.crt
-    privatefile: fedmsg-certs/keys/bodhi-bodhi-web-temp-bodhi.app.os.fedoraproject.org.crt
-    when: env != "staging"
-  - role: openshift/imagestream
-    app: bodhi
-    imagename: bodhi-web
-  - role: openshift/object
-    app: bodhi
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-    bodhi_version: 3.13.3-1.fc29.infra
-    when: env == "staging"
-  - role: openshift/object
-    app: bodhi
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-    bodhi_version: 3.13.3-1.fc29.infra
-    when: env != "staging"
-  - role: openshift/start-build
-    app: bodhi
-    buildname: bodhi-web
-  - role: openshift/object
-    app: bodhi
-    template_fullpath: "{{roles_path}}/bodhi2/base/templates/configmap.yml"
-    objectname: configmap.yml
-  - role: openshift/object
-    app: bodhi
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/route
-    app: bodhi
-    routename: bodhi-web
-    host: "bodhi{{ env_suffix }}.fedoraproject.org"
-    serviceport: web
-    servicename: bodhi-web
-  - role: openshift/object
-    app: bodhi
-    template: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-  - role: openshift/rollout
-    app: bodhi
-    dcname: bodhi-web
+    - role: openshift/project
+      app: bodhi
+      description: bodhi
+      appowners:
+        - bowlofeggs
+    - role: openshift/keytab
+      app: bodhi
+      key: koji-keytab
+      secret_name: bodhi-keytab
+      service: bodhi
+      host: "bodhi{{ env_suffix }}.fedoraproject.org"
+    - role: openshift/secret-file
+      app: bodhi
+      secret_name: bodhi-fedmsg-key
+      key: fedmsg-bodhi.key
+      privatefile: fedmsg-certs/keys/bodhi-bodhi01.stg.phx2.fedoraproject.org.key
+      when: env == "staging"
+    - role: openshift/secret-file
+      app: bodhi
+      secret_name: bodhi-fedmsg-key
+      key: fedmsg-bodhi.key
+      privatefile: fedmsg-certs/keys/bodhi-bodhi-web-temp-bodhi.app.os.fedoraproject.org.key
+      when: env != "staging"
+    - role: openshift/secret-file
+      app: bodhi
+      secret_name: bodhi-fedmsg-crt
+      key: fedmsg-bodhi.crt
+      privatefile: fedmsg-certs/keys/bodhi-bodhi01.stg.phx2.fedoraproject.org.crt
+      when: env == "staging"
+    - role: openshift/secret-file
+      app: bodhi
+      secret_name: bodhi-fedmsg-crt
+      key: fedmsg-bodhi.crt
+      privatefile: fedmsg-certs/keys/bodhi-bodhi-web-temp-bodhi.app.os.fedoraproject.org.crt
+      when: env != "staging"
+    - role: openshift/imagestream
+      app: bodhi
+      imagename: bodhi-web
+    - role: openshift/object
+      app: bodhi
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+      bodhi_version: 3.13.3-1.fc29.infra
+      when: env == "staging"
+    - role: openshift/object
+      app: bodhi
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+      bodhi_version: 3.13.3-1.fc29.infra
+      when: env != "staging"
+    - role: openshift/start-build
+      app: bodhi
+      buildname: bodhi-web
+    - role: openshift/object
+      app: bodhi
+      template_fullpath: "{{roles_path}}/bodhi2/base/templates/configmap.yml"
+      objectname: configmap.yml
+    - role: openshift/object
+      app: bodhi
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/route
+      app: bodhi
+      routename: bodhi-web
+      host: "bodhi{{ env_suffix }}.fedoraproject.org"
+      serviceport: web
+      servicename: bodhi-web
+    - role: openshift/object
+      app: bodhi
+      template: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+    - role: openshift/rollout
+      app: bodhi
+      dcname: bodhi-web
 
   post_tasks:
-  - name: Scale up pods
-    command: oc -n bodhi scale dc/bodhi-web --replicas={{ hostvars[groups['bodhi2'][0]]['openshift_pods'] }}
-    when: env != "staging"
-  - name: Scale up pods
-    command: oc -n bodhi scale dc/bodhi-web --replicas={{ hostvars[groups['bodhi2-stg'][0]]['openshift_pods'] }}
-    when: env == "staging"
+    - name: Scale up pods
+      command: oc -n bodhi scale dc/bodhi-web --replicas={{ hostvars[groups['bodhi2'][0]]['openshift_pods'] }}
+      when: env != "staging"
+    - name: Scale up pods
+      command: oc -n bodhi scale dc/bodhi-web --replicas={{ hostvars[groups['bodhi2-stg'][0]]['openshift_pods'] }}
+      when: env == "staging"
diff --git a/playbooks/openshift-apps/discourse2fedmsg.yml b/playbooks/openshift-apps/discourse2fedmsg.yml
index ca605b376..e2643dc4e 100644
--- a/playbooks/openshift-apps/discourse2fedmsg.yml
+++ b/playbooks/openshift-apps/discourse2fedmsg.yml
@@ -9,40 +9,40 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: discourse2fedmsg
-    description: discourse2fedmsg
-    appowners:
-    - puiterwijk
-  - role: openshift/object
-    app: discourse2fedmsg
-    file: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: discourse2fedmsg
-    file: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/project
+      app: discourse2fedmsg
+      description: discourse2fedmsg
+      appowners:
+        - puiterwijk
+    - role: openshift/object
+      app: discourse2fedmsg
+      file: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: discourse2fedmsg
+      file: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/start-build
-    app: discourse2fedmsg
-    buildname: discourse2fedmsg-build
+    - role: openshift/start-build
+      app: discourse2fedmsg
+      buildname: discourse2fedmsg-build
 
-  - role: openshift/object
-    app: discourse2fedmsg
-    file: service.yml
-    objectname: service.yml
+    - role: openshift/object
+      app: discourse2fedmsg
+      file: service.yml
+      objectname: service.yml
 
-  - role: openshift/route
-    app: discourse2fedmsg
-    routename: discourse2fedmsg
-    serviceport: 8080-tcp
-    servicename: discourse2fedmsg
+    - role: openshift/route
+      app: discourse2fedmsg
+      routename: discourse2fedmsg
+      serviceport: 8080-tcp
+      servicename: discourse2fedmsg
 
-  - role: openshift/object
-    app: discourse2fedmsg
-    template: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: discourse2fedmsg
+      template: deploymentconfig.yml
+      objectname: deploymentconfig.yml
 
-  - role: openshift/rollout
-    app: discourse2fedmsg
-    dcname: discourse2fedmsg
+    - role: openshift/rollout
+      app: discourse2fedmsg
+      dcname: discourse2fedmsg
diff --git a/playbooks/openshift-apps/docsbuilding.yml b/playbooks/openshift-apps/docsbuilding.yml
index 476c0cbb9..4cf31cec5 100644
--- a/playbooks/openshift-apps/docsbuilding.yml
+++ b/playbooks/openshift-apps/docsbuilding.yml
@@ -9,26 +9,26 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: docsbuilding
-    description: Documentation building
-    appowners:
-    - asamalik
-  - role: openshift/imagestream
-    app: docsbuilding
-    imagename: builder
-  - role: openshift/object
-    app: docsbuilding
-    objectname: buildconfig.yml
-    template: buildconfig.yml
-  - role: openshift/start-build
-    app: docsbuilding
-    buildname: builder-build
-  - role: openshift/object
-    app: docsbuilding
-    file: cron.yml
-    objectname: cron.yml
-  - role: openshift/object
-    app: docsbuilding
-    file: pvc.yml
-    objectname: pvc.yml
+    - role: openshift/project
+      app: docsbuilding
+      description: Documentation building
+      appowners:
+        - asamalik
+    - role: openshift/imagestream
+      app: docsbuilding
+      imagename: builder
+    - role: openshift/object
+      app: docsbuilding
+      objectname: buildconfig.yml
+      template: buildconfig.yml
+    - role: openshift/start-build
+      app: docsbuilding
+      buildname: builder-build
+    - role: openshift/object
+      app: docsbuilding
+      file: cron.yml
+      objectname: cron.yml
+    - role: openshift/object
+      app: docsbuilding
+      file: pvc.yml
+      objectname: pvc.yml
diff --git a/playbooks/openshift-apps/elections.yml b/playbooks/openshift-apps/elections.yml
index 20c9adcbe..5f9220c8f 100644
--- a/playbooks/openshift-apps/elections.yml
+++ b/playbooks/openshift-apps/elections.yml
@@ -9,44 +9,44 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: elections
-    description: Fedora Elections apps
-    appowners:
-    - cverna
-    - pingou
-  - role: openshift/object
-    app: elections
-    template: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: elections
-    template: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/project
+      app: elections
+      description: Fedora Elections apps
+      appowners:
+        - cverna
+        - pingou
+    - role: openshift/object
+      app: elections
+      template: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: elections
+      template: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/object
-    app: elections
-    template: configmap.yml
-    objectname: configmap.yml
+    - role: openshift/object
+      app: elections
+      template: configmap.yml
+      objectname: configmap.yml
 
-  - role: openshift/start-build
-    app: elections
-    buildname: elections-build
-    objectname: elections-build
+    - role: openshift/start-build
+      app: elections
+      buildname: elections-build
+      objectname: elections-build
 
-  - role: openshift/object
-    app: elections
-    file: service.yml
-    objectname: service.yml
+    - role: openshift/object
+      app: elections
+      file: service.yml
+      objectname: service.yml
 
-  - role: openshift/route
-    app: elections
-    routename: elections
-    #    host: "elections{{ env_suffix }}.fedoraproject.org"
-    serviceport: 8080-tcp
-    servicename: elections
+    - role: openshift/route
+      app: elections
+      routename: elections
+      #    host: "elections{{ env_suffix }}.fedoraproject.org"
+      serviceport: 8080-tcp
+      servicename: elections
 
-  - role: openshift/object
-    app: elections
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: elections
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
diff --git a/playbooks/openshift-apps/fedocal.yml b/playbooks/openshift-apps/fedocal.yml
index 8135e11f1..494fd36fd 100644
--- a/playbooks/openshift-apps/fedocal.yml
+++ b/playbooks/openshift-apps/fedocal.yml
@@ -9,44 +9,44 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: fedocal
-    description: Fedora calendar apps
-    appowners:
-    - cverna
-    - pingou
-  - role: openshift/object
-    app: fedocal
-    template: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: fedocal
-    template: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/project
+      app: fedocal
+      description: Fedora calendar apps
+      appowners:
+        - cverna
+        - pingou
+    - role: openshift/object
+      app: fedocal
+      template: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: fedocal
+      template: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/object
-    app: fedocal
-    template: configmap.yml
-    objectname: configmap.yml
+    - role: openshift/object
+      app: fedocal
+      template: configmap.yml
+      objectname: configmap.yml
 
-  - role: openshift/start-build
-    app: fedocal
-    buildname: fedocal-build
-    objectname: fedocal-build
+    - role: openshift/start-build
+      app: fedocal
+      buildname: fedocal-build
+      objectname: fedocal-build
 
-  - role: openshift/object
-    app: fedocal
-    file: service.yml
-    objectname: service.yml
+    - role: openshift/object
+      app: fedocal
+      file: service.yml
+      objectname: service.yml
 
-  - role: openshift/route
-    app: fedocal
-    routename: fedocal
-    host: "calendar{{ env_suffix }}.fedoraproject.org"
-    serviceport: 8080-tcp
-    servicename: fedocal
+    - role: openshift/route
+      app: fedocal
+      routename: fedocal
+      host: "calendar{{ env_suffix }}.fedoraproject.org"
+      serviceport: 8080-tcp
+      servicename: fedocal
 
-  - role: openshift/object
-    app: fedocal
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: fedocal
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
diff --git a/playbooks/openshift-apps/fpdc.yml b/playbooks/openshift-apps/fpdc.yml
index e0d9f0645..9d523ac9a 100644
--- a/playbooks/openshift-apps/fpdc.yml
+++ b/playbooks/openshift-apps/fpdc.yml
@@ -9,44 +9,44 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: fpdc
-    description: Fedora Product Definition Center
-    appowners:
-    - cverna
-    - abompard
-  - role: openshift/object
-    app: fpdc
-    template: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: fpdc
-    template: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/project
+      app: fpdc
+      description: Fedora Product Definition Center
+      appowners:
+        - cverna
+        - abompard
+    - role: openshift/object
+      app: fpdc
+      template: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: fpdc
+      template: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/object
-    app: fpdc
-    template: configmap.yml
-    objectname: configmap.yml
+    - role: openshift/object
+      app: fpdc
+      template: configmap.yml
+      objectname: configmap.yml
 
-  - role: openshift/start-build
-    app: fpdc
-    buildname: fpdc-build
-    objectname: fpdc-build
+    - role: openshift/start-build
+      app: fpdc
+      buildname: fpdc-build
+      objectname: fpdc-build
 
-  - role: openshift/object
-    app: fpdc
-    file: service.yml
-    objectname: service.yml
+    - role: openshift/object
+      app: fpdc
+      file: service.yml
+      objectname: service.yml
 
-  - role: openshift/route
-    app: fpdc
-    routename: fpdc
-    host: "fpdc{{ env_suffix }}.fedoraproject.org"
-    serviceport: 8080-tcp
-    servicename: fpdc
+    - role: openshift/route
+      app: fpdc
+      routename: fpdc
+      host: "fpdc{{ env_suffix }}.fedoraproject.org"
+      serviceport: 8080-tcp
+      servicename: fpdc
 
-  - role: openshift/object
-    app: fpdc
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: fpdc
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
diff --git a/playbooks/openshift-apps/greenwave.yml b/playbooks/openshift-apps/greenwave.yml
index b1002abcf..4ffe0547d 100644
--- a/playbooks/openshift-apps/greenwave.yml
+++ b/playbooks/openshift-apps/greenwave.yml
@@ -9,71 +9,71 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  # The openshift/project role breaks if the project already exists:
-  # https://pagure.io/fedora-infrastructure/issue/6404
-  - role: openshift/project
-    app: greenwave
-    description: greenwave
-    appowners:
-    - dcallagh
-    - gnaponie
-    - lholecek
-    - ralph
-  - role: openshift/secret-file
-    app: greenwave
-    secret_name: greenwave-fedmsg-key
-    key: fedmsg-greenwave.key
-    privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.stg.fedoraproject.org.key
-    when: env == "staging"
-  - role: openshift/secret-file
-    app: greenwave
-    secret_name: greenwave-fedmsg-crt
-    key: fedmsg-greenwave.crt
-    privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.stg.fedoraproject.org.crt
-    when: env == "staging"
-  - role: openshift/secret-file
-    app: greenwave
-    secret_name: greenwave-fedmsg-key
-    key: fedmsg-greenwave.key
-    privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.fedoraproject.org.key
-    when: env != "staging"
-  - role: openshift/secret-file
-    app: greenwave
-    secret_name: greenwave-fedmsg-crt
-    key: fedmsg-greenwave.crt
-    privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.fedoraproject.org.crt
-    when: env != "staging"
-  - role: openshift/object
-    app: greenwave
-    template: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: greenwave
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-  - role: openshift/object
-    app: greenwave
-    template: configmap.yml
-    objectname: configmap.yml
-  - role: openshift/object
-    app: greenwave
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/route
-    app: greenwave
-    routename: web-pretty
-    host: "greenwave{{ env_suffix }}.fedoraproject.org"
-    serviceport: web
-    servicename: greenwave-web
-  # TODO -- someday retire this old route in favor of the pretty one above.
-  - role: openshift/object
-    app: greenwave
-    file: route.yml
-    objectname: route.yml
-  - role: openshift/object
-    app: greenwave
-    template: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-  - role: openshift/rollout
-    app: greenwave
-    dcname: greenwave-web
+    # The openshift/project role breaks if the project already exists:
+    # https://pagure.io/fedora-infrastructure/issue/6404
+    - role: openshift/project
+      app: greenwave
+      description: greenwave
+      appowners:
+        - dcallagh
+        - gnaponie
+        - lholecek
+        - ralph
+    - role: openshift/secret-file
+      app: greenwave
+      secret_name: greenwave-fedmsg-key
+      key: fedmsg-greenwave.key
+      privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.stg.fedoraproject.org.key
+      when: env == "staging"
+    - role: openshift/secret-file
+      app: greenwave
+      secret_name: greenwave-fedmsg-crt
+      key: fedmsg-greenwave.crt
+      privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.stg.fedoraproject.org.crt
+      when: env == "staging"
+    - role: openshift/secret-file
+      app: greenwave
+      secret_name: greenwave-fedmsg-key
+      key: fedmsg-greenwave.key
+      privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.fedoraproject.org.key
+      when: env != "staging"
+    - role: openshift/secret-file
+      app: greenwave
+      secret_name: greenwave-fedmsg-crt
+      key: fedmsg-greenwave.crt
+      privatefile: fedmsg-certs/keys/greenwave-greenwave-web-greenwave.app.os.fedoraproject.org.crt
+      when: env != "staging"
+    - role: openshift/object
+      app: greenwave
+      template: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: greenwave
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+    - role: openshift/object
+      app: greenwave
+      template: configmap.yml
+      objectname: configmap.yml
+    - role: openshift/object
+      app: greenwave
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/route
+      app: greenwave
+      routename: web-pretty
+      host: "greenwave{{ env_suffix }}.fedoraproject.org"
+      serviceport: web
+      servicename: greenwave-web
+    # TODO -- someday retire this old route in favor of the pretty one above.
+    - role: openshift/object
+      app: greenwave
+      file: route.yml
+      objectname: route.yml
+    - role: openshift/object
+      app: greenwave
+      template: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+    - role: openshift/rollout
+      app: greenwave
+      dcname: greenwave-web
diff --git a/playbooks/openshift-apps/koschei.yml b/playbooks/openshift-apps/koschei.yml
index 113015b8f..d5a5b4a6b 100644
--- a/playbooks/openshift-apps/koschei.yml
+++ b/playbooks/openshift-apps/koschei.yml
@@ -11,19 +11,19 @@
     - /srv/web/infra/ansible/roles/openshift-apps/koschei/vars/{{ env }}.yml
 
   roles:
-  - openshift/project
+    - openshift/project
 
-  - role: openshift/keytab
-    secret_name: keytab
-    key: krb5.keytab
-    service: koschei
-    host: "koschei-backend01{{ env_suffix }}.phx2.fedoraproject.org"
+    - role: openshift/keytab
+      secret_name: keytab
+      key: krb5.keytab
+      service: koschei
+      host: "koschei-backend01{{ env_suffix }}.phx2.fedoraproject.org"
 
-  - role: openshift/route
-    routename: frontend
-    host: "koschei{{ env_suffix }}.fedoraproject.org"
-    serviceport: web
-    servicename: frontend
+    - role: openshift/route
+      routename: frontend
+      host: "koschei{{ env_suffix }}.fedoraproject.org"
+      serviceport: web
+      servicename: frontend
 
   tasks:
     - name: Apply objects
@@ -48,10 +48,10 @@
         min_mem: "{{ item.memory[0] }}"
         max_mem: "{{ item.memory[1] }}"
       with_items:
-        - { name: polling,        cpu: [ 1000, 1500 ],  memory: [  256,  512 ] }
-        - { name: scheduler,      cpu: [  200,  500 ],  memory: [   64,  128 ] }
-        - { name: build-resolver, cpu: [ 1000, 1500 ],  memory: [ 1024, 4096 ] }
-        - { name: repo-resolver,  cpu: [ 2000, 8000 ],  memory: [ 1024, 4096 ] }
+        - { name: polling, cpu: [1000, 1500], memory: [256, 512] }
+        - { name: scheduler, cpu: [200, 500], memory: [64, 128] }
+        - { name: build-resolver, cpu: [1000, 1500], memory: [1024, 4096] }
+        - { name: repo-resolver, cpu: [2000, 8000], memory: [1024, 4096] }
       loop_control:
         label: "{{ item.name }}"
 
diff --git a/playbooks/openshift-apps/mdapi.yml b/playbooks/openshift-apps/mdapi.yml
index 30bf6faab..393c99976 100644
--- a/playbooks/openshift-apps/mdapi.yml
+++ b/playbooks/openshift-apps/mdapi.yml
@@ -9,60 +9,60 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: mdapi
-    description: mdapi is a small API exposing the metadata contained in different RPM repositories.
-    appowners:
-    - cverna
-    - pingou
+    - role: openshift/project
+      app: mdapi
+      description: mdapi is a small API exposing the metadata contained in different RPM repositories.
+      appowners:
+        - cverna
+        - pingou
 
-  - role: openshift/object
-    app: mdapi
-    template: imagestream.yml
-    objectname: imagestream.yml
+    - role: openshift/object
+      app: mdapi
+      template: imagestream.yml
+      objectname: imagestream.yml
 
-  - role: openshift/object
-    app: mdapi
-    template: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/object
+      app: mdapi
+      template: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/object
-    app: mdapi
-    file: storage.yml
-    objectname: storage.yml
+    - role: openshift/object
+      app: mdapi
+      file: storage.yml
+      objectname: storage.yml
 
-  - role: openshift/object
-    app: mdapi
-    template: configmap.yml
-    objectname: configmap.yml
+    - role: openshift/object
+      app: mdapi
+      template: configmap.yml
+      objectname: configmap.yml
 
-  - role: openshift/object
-    app: mdapi
-    file: cron.yml
-    objectname: cron.yml
+    - role: openshift/object
+      app: mdapi
+      file: cron.yml
+      objectname: cron.yml
 
-  - role: openshift/start-build
-    app: mdapi
-    buildname: mdapi-build
-    objectname: mdapi-build
+    - role: openshift/start-build
+      app: mdapi
+      buildname: mdapi-build
+      objectname: mdapi-build
 
-  - role: openshift/object
-    app: mdapi
-    file: service.yml
-    objectname: service.yml
+    - role: openshift/object
+      app: mdapi
+      file: service.yml
+      objectname: service.yml
 
-  - role: openshift/route
-    app: mdapi
-    routename: mdapi
-    host: "mdapi{{env_suffix}}.fedoraproject.org"
-    serviceport: 8080-tcp
-    servicename: mdapi
+    - role: openshift/route
+      app: mdapi
+      routename: mdapi
+      host: "mdapi{{env_suffix}}.fedoraproject.org"
+      serviceport: 8080-tcp
+      servicename: mdapi
 
-  - role: openshift/object
-    app: mdapi
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: mdapi
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
 
-  - role: openshift/rollout
-    app: mdapi
-    dcname: mdapi
+    - role: openshift/rollout
+      app: mdapi
+      dcname: mdapi
diff --git a/playbooks/openshift-apps/messaging-bridges.yml b/playbooks/openshift-apps/messaging-bridges.yml
index 0c052fc5a..7b5b6b635 100644
--- a/playbooks/openshift-apps/messaging-bridges.yml
+++ b/playbooks/openshift-apps/messaging-bridges.yml
@@ -11,68 +11,69 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: Create the RabbitMQ user
-    rabbitmq_user:
-      user: "messaging-bridge{{ env_suffix }}.fedoraproject.org"
-      vhost: /pubsub
-      read_priv: "((a|z)mq\\.topic|amqp_to_zmq|amqp_bridge_verify_missing)"
-      write_priv: "((a|z)mq\\.topic|amqp_to_zmq|amqp_bridge_verify_missing)"
-      configure_priv: "((a|z)mq\\.topic|amqp_to_zmq|amqp_bridge_verify_missing)"
-    tags:
-    - config
-  - name: Create the RabbitMQ exchanges
-    rabbitmq_exchange:
-      name: "{{item}}"
-      exchange_type: topic
-      vhost: /pubsub
-      login_user: admin
-      login_password: "{{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }}"
-    with_items:
-    - amq.topic
-    - zmq.topic
-    tags:
-    - config
-  - name: Create the RabbitMQ queue amqp_to_zmq
-    rabbitmq_queue:
-      name: amqp_to_zmq
-      vhost: /pubsub
-      login_user: admin
-      login_password: "{{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }}"
-    tags:
-    - config
-  - name: Create the RabbitMQ queue for verify-missing
-    rabbitmq_queue:
-      name: amqp_bridge_verify_missing
-      vhost: /pubsub
-      durable: True
-      auto_delete: False
-      message_ttl: 60000
-      login_user: admin
-      login_password: "{{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }}"
-    tags:
-    - config
-  # Do this manually until an Ansible bugfix is deployed
-  # https://github.com/ansible/ansible/pull/45109
-  #
-  # == Dirty manual way while bug 45109 isn't fixed
-  - name: Get the rabbitmqadmin command
-    get_url:
-      url: http://localhost:15672/cli/rabbitmqadmin
-      dest: /usr/local/bin/rabbitmqadmin
-      mode: 0755
-    tags:
-    - config
-  - name: Create the amqp-to-zmq bindings
-    command: /usr/local/bin/rabbitmqadmin -V /pubsub -u admin -p {{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }} declare binding source=amq.topic destination=amqp_to_zmq destination_type=queue
-    tags:
-    - config
-  - name: Create the verify-missing bindings
-    command: /usr/local/bin/rabbitmqadmin -V /pubsub -u admin -p {{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }} declare binding source={{ item }} destination=amqp_bridge_verify_missing destination_type=queue
-    with_items:
-    - amq.topic
-    - zmq.topic
-    tags:
-    - config
+    - name: Create the RabbitMQ user
+      rabbitmq_user:
+        user: "messaging-bridge{{ env_suffix }}.fedoraproject.org"
+        vhost: /pubsub
+        read_priv: "((a|z)mq\\.topic|amqp_to_zmq|amqp_bridge_verify_missing)"
+        write_priv: "((a|z)mq\\.topic|amqp_to_zmq|amqp_bridge_verify_missing)"
+        configure_priv: "((a|z)mq\\.topic|amqp_to_zmq|amqp_bridge_verify_missing)"
+      tags:
+        - config
+    - name: Create the RabbitMQ exchanges
+      rabbitmq_exchange:
+        name: "{{item}}"
+        exchange_type: topic
+        vhost: /pubsub
+        login_user: admin
+        login_password: "{{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }}"
+      with_items:
+        - amq.topic
+        - zmq.topic
+      tags:
+        - config
+    - name: Create the RabbitMQ queue amqp_to_zmq
+      rabbitmq_queue:
+        name: amqp_to_zmq
+        vhost: /pubsub
+        login_user: admin
+        login_password: "{{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }}"
+      tags:
+        - config
+    - name: Create the RabbitMQ queue for verify-missing
+      rabbitmq_queue:
+        name: amqp_bridge_verify_missing
+        vhost: /pubsub
+        durable: True
+        auto_delete: False
+        message_ttl: 60000
+        login_user: admin
+        login_password: "{{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }}"
+      tags:
+        - config
+    # Do this manually until an Ansible bugfix is deployed
+    # https://github.com/ansible/ansible/pull/45109
+    #
+    # == Dirty manual way while bug 45109 isn't fixed
+    - name: Get the rabbitmqadmin command
+      get_url:
+        url: http://localhost:15672/cli/rabbitmqadmin
+        dest: /usr/local/bin/rabbitmqadmin
+        mode: 0755
+      tags:
+        - config
+    - name: Create the amqp-to-zmq bindings
+      command: /usr/local/bin/rabbitmqadmin -V /pubsub -u admin -p {{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }} declare binding source=amq.topic destination=amqp_to_zmq destination_type=queue
+      tags:
+        - config
+    - name: Create the verify-missing bindings
+      command: /usr/local/bin/rabbitmqadmin -V /pubsub -u admin -p {{ (env == 'production')|ternary(rabbitmq_admin_password_production, rabbitmq_admin_password_staging) }} declare binding source={{ item }} destination=amqp_bridge_verify_missing destination_type=queue
+      with_items:
+        - amq.topic
+        - zmq.topic
+      tags:
+        - config
+
   #
   # == Proper ansible way of doing it
   #- name: Create the amqp-to-zmq bindings
@@ -98,8 +99,6 @@
   #  - zmq.topic
   #  tags:
   #  - config
-
-
 # Now create the app
 
 - name: make the app be real
@@ -113,72 +112,72 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: messaging-bridges
-    description: "ZeroMQ <-> AMQP bridges"
-    appowners:
-    - abompard
-    - jcline
+    - role: openshift/project
+      app: messaging-bridges
+      description: "ZeroMQ <-> AMQP bridges"
+      appowners:
+        - abompard
+        - jcline
 
-  - role: openshift/secret-file
-    app: messaging-bridges
-    secret_name: fedmsg-key
-    key: fedmsg-fedmsg-migration-tools.key
-    privatefile: "fedmsg-certs/keys/fedmsg-migration-tools{{env_suffix}}.fedoraproject.org.key"
-  - role: openshift/secret-file
-    app: messaging-bridges
-    secret_name: fedmsg-cert
-    key: fedmsg-fedmsg-migration-tools.crt
-    privatefile: "fedmsg-certs/keys/fedmsg-migration-tools{{env_suffix}}.fedoraproject.org.crt"
+    - role: openshift/secret-file
+      app: messaging-bridges
+      secret_name: fedmsg-key
+      key: fedmsg-fedmsg-migration-tools.key
+      privatefile: "fedmsg-certs/keys/fedmsg-migration-tools{{env_suffix}}.fedoraproject.org.key"
+    - role: openshift/secret-file
+      app: messaging-bridges
+      secret_name: fedmsg-cert
+      key: fedmsg-fedmsg-migration-tools.crt
+      privatefile: "fedmsg-certs/keys/fedmsg-migration-tools{{env_suffix}}.fedoraproject.org.crt"
 
-  - role: openshift/secret-file
-    app: messaging-bridges
-    secret_name: rabbitmq-ca
-    key: rabbitmq-ca.crt
-    privatefile: "rabbitmq/{{env}}/pki/ca.crt"
-  - role: openshift/secret-file
-    app: messaging-bridges
-    secret_name: rabbitmq-key
-    key: rabbitmq-fedmsg-migration-tools.key
-    privatefile: "rabbitmq/{{env}}/pki/private/messaging-bridge{{env_suffix}}.fedoraproject.org.key"
-  - role: openshift/secret-file
-    app: messaging-bridges
-    secret_name: rabbitmq-cert
-    key: rabbitmq-fedmsg-migration-tools.crt
-    privatefile: "rabbitmq/{{env}}/pki/issued/messaging-bridge{{env_suffix}}.fedoraproject.org.crt"
+    - role: openshift/secret-file
+      app: messaging-bridges
+      secret_name: rabbitmq-ca
+      key: rabbitmq-ca.crt
+      privatefile: "rabbitmq/{{env}}/pki/ca.crt"
+    - role: openshift/secret-file
+      app: messaging-bridges
+      secret_name: rabbitmq-key
+      key: rabbitmq-fedmsg-migration-tools.key
+      privatefile: "rabbitmq/{{env}}/pki/private/messaging-bridge{{env_suffix}}.fedoraproject.org.key"
+    - role: openshift/secret-file
+      app: messaging-bridges
+      secret_name: rabbitmq-cert
+      key: rabbitmq-fedmsg-migration-tools.crt
+      privatefile: "rabbitmq/{{env}}/pki/issued/messaging-bridge{{env_suffix}}.fedoraproject.org.crt"
 
-  - role: openshift/object
-    app: messaging-bridges
-    file: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: messaging-bridges
-    template: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/object
+      app: messaging-bridges
+      file: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: messaging-bridges
+      template: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/start-build
-    app: messaging-bridges
-    buildname: messaging-bridges-build
+    - role: openshift/start-build
+      app: messaging-bridges
+      buildname: messaging-bridges-build
 
-  - role: openshift/object
-    app: messaging-bridges
-    template: configmap.yml
-    objectname: configmap.yml
-  - role: openshift/object
-    app: messaging-bridges
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/object
-    app: messaging-bridges
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: messaging-bridges
+      template: configmap.yml
+      objectname: configmap.yml
+    - role: openshift/object
+      app: messaging-bridges
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/object
+      app: messaging-bridges
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
 
-  - role: openshift/rollout
-    app: messaging-bridges
-    dcname: amqp-to-zmq
-  - role: openshift/rollout
-    app: messaging-bridges
-    dcname: zmq-to-amqp
-  - role: openshift/rollout
-    app: messaging-bridges
-    dcname: verify-missing
+    - role: openshift/rollout
+      app: messaging-bridges
+      dcname: amqp-to-zmq
+    - role: openshift/rollout
+      app: messaging-bridges
+      dcname: zmq-to-amqp
+    - role: openshift/rollout
+      app: messaging-bridges
+      dcname: verify-missing
diff --git a/playbooks/openshift-apps/modernpaste.yml b/playbooks/openshift-apps/modernpaste.yml
index 6521180aa..29ea9991d 100644
--- a/playbooks/openshift-apps/modernpaste.yml
+++ b/playbooks/openshift-apps/modernpaste.yml
@@ -9,42 +9,42 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: modernpaste
-    description: modernpaste
-    appowners:
-    - codeblock
-  - role: openshift/object
-    app: modernpaste
-    file: imagestream.yml
-  - role: openshift/object
-    app: modernpaste
-    template: secret.yml
-  - role: openshift/object
-    app: modernpaste
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-  - role: openshift/start-build
-    app: modernpaste
-    buildname: modernpaste-docker-build
-  - role: openshift/object
-    app: modernpaste
-    template: configmap.yml
-    objectname: configmap.yml
-  - role: openshift/object
-    app: modernpaste
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/object
-    app: modernpaste
-    file: route.yml
-    routename: modernpaste
-    serviceport: web
-    servicename: modernpaste
-  - role: openshift/object
-    app: modernpaste
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-  - role: openshift/rollout
-    app: modernpaste
-    dcname: modernpaste-web
+    - role: openshift/project
+      app: modernpaste
+      description: modernpaste
+      appowners:
+        - codeblock
+    - role: openshift/object
+      app: modernpaste
+      file: imagestream.yml
+    - role: openshift/object
+      app: modernpaste
+      template: secret.yml
+    - role: openshift/object
+      app: modernpaste
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+    - role: openshift/start-build
+      app: modernpaste
+      buildname: modernpaste-docker-build
+    - role: openshift/object
+      app: modernpaste
+      template: configmap.yml
+      objectname: configmap.yml
+    - role: openshift/object
+      app: modernpaste
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/object
+      app: modernpaste
+      file: route.yml
+      routename: modernpaste
+      serviceport: web
+      servicename: modernpaste
+    - role: openshift/object
+      app: modernpaste
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+    - role: openshift/rollout
+      app: modernpaste
+      dcname: modernpaste-web
diff --git a/playbooks/openshift-apps/nuancier.yml b/playbooks/openshift-apps/nuancier.yml
index 31e8d2eca..fe8c2e5cd 100644
--- a/playbooks/openshift-apps/nuancier.yml
+++ b/playbooks/openshift-apps/nuancier.yml
@@ -9,51 +9,51 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: nuancier
-    description: Fedora nuancier apps
-    appowners:
-    - cverna
-    - pingou
-
-  - role: openshift/object
-    app: nuancier
-    template: imagestream.yml
-    objectname: imagestream.yml
-
-  - role: openshift/object
-    app: nuancier
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-
-  - role: openshift/object
-    app: nuancier
-    template: storage.yml
-    objectname: storage.yml
-
-  - role: openshift/object
-    app: nuancier
-    template: configmap.yml
-    objectname: configmap.yml
-
-  - role: openshift/start-build
-    app: nuancier
-    buildname: nuancier-build
-    objectname: nuancier-build
-
-  - role: openshift/object
-    app: nuancier
-    file: service.yml
-    objectname: service.yml
-
-  - role: openshift/route
-    app: nuancier
-    routename: nuancier
-    host: "wallpapers{{ env_suffix }}.fedoraproject.org"
-    serviceport: 8080-tcp
-    servicename: nuancier
-
-  - role: openshift/object
-    app: nuancier
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/project
+      app: nuancier
+      description: Fedora nuancier apps
+      appowners:
+        - cverna
+        - pingou
+
+    - role: openshift/object
+      app: nuancier
+      template: imagestream.yml
+      objectname: imagestream.yml
+
+    - role: openshift/object
+      app: nuancier
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+
+    - role: openshift/object
+      app: nuancier
+      template: storage.yml
+      objectname: storage.yml
+
+    - role: openshift/object
+      app: nuancier
+      template: configmap.yml
+      objectname: configmap.yml
+
+    - role: openshift/start-build
+      app: nuancier
+      buildname: nuancier-build
+      objectname: nuancier-build
+
+    - role: openshift/object
+      app: nuancier
+      file: service.yml
+      objectname: service.yml
+
+    - role: openshift/route
+      app: nuancier
+      routename: nuancier
+      host: "wallpapers{{ env_suffix }}.fedoraproject.org"
+      serviceport: 8080-tcp
+      servicename: nuancier
+
+    - role: openshift/object
+      app: nuancier
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
diff --git a/playbooks/openshift-apps/rats.yml b/playbooks/openshift-apps/rats.yml
index a3fdb0bb4..ecacea165 100644
--- a/playbooks/openshift-apps/rats.yml
+++ b/playbooks/openshift-apps/rats.yml
@@ -9,21 +9,21 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: rats
-    description: rats
-    appowners:
-    - pingou
-  # RabbitMQ
-  - role: openshift/object
-    app: rats
-    file: rabbitmq/imagestream.yml
-  - role: openshift/object
-    app: rats
-    file: rabbitmq/deploymentconfig.yml
-  - role: openshift/object
-    app: rats
-    file: rabbitmq/service.yml
-  - role: openshift/rollout
-    app: rats
-    dcdname: rats-queue
+    - role: openshift/project
+      app: rats
+      description: rats
+      appowners:
+        - pingou
+    # RabbitMQ
+    - role: openshift/object
+      app: rats
+      file: rabbitmq/imagestream.yml
+    - role: openshift/object
+      app: rats
+      file: rabbitmq/deploymentconfig.yml
+    - role: openshift/object
+      app: rats
+      file: rabbitmq/service.yml
+    - role: openshift/rollout
+      app: rats
+      dcdname: rats-queue
diff --git a/playbooks/openshift-apps/release-monitoring.yml b/playbooks/openshift-apps/release-monitoring.yml
index 823d4fbe7..da73507f9 100644
--- a/playbooks/openshift-apps/release-monitoring.yml
+++ b/playbooks/openshift-apps/release-monitoring.yml
@@ -34,57 +34,57 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: release-monitoring
-    description: release-monitoring
-    appowners:
-    - zlopez
-  - role: openshift/secret-file
-    app: release-monitoring
-    secret_name: release-monitoring-fedora-messaging-ca
-    key: fedora-messaging-release-monitoring-ca.crt
-    privatefile: "rabbitmq/{{env}}/pki/ca.crt"
-  - role: openshift/secret-file
-    app: release-monitoring
-    secret_name: release-monitoring-fedora-messaging-key
-    key: fedora-messaging-release-monitoring.key
-    privatefile: "rabbitmq/{{env}}/pki/private/anitya{{env_suffix}}.key"
-  - role: openshift/secret-file
-    app: release-monitoring
-    secret_name: release-monitoring-fedora-messaging-cert
-    key: fedora-messaging-release-monitoring.crt
-    privatefile: "rabbitmq/{{env}}/pki/issued/anitya{{env_suffix}}.crt"
-  - role: openshift/object
-    app: release-monitoring
-    file: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: release-monitoring
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-  - role: openshift/start-build
-    app: release-monitoring
-    buildname: release-monitoring-web-build
-  - role: openshift/object
-    app: release-monitoring
-    template: configmap.yml
-    objectname: configmap.yml
-  - role: openshift/object
-    app: release-monitoring
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/object
-    app: release-monitoring
-    template: route.yml
-    objectname: route.yml
-  - role: openshift/object
-    app: release-monitoring
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-  - role: openshift/object
-    app: release-monitoring
-    file: cron.yml
-    objectname: cron.yml
-  - role: openshift/rollout
-    app: release-monitoring
-    dcname: release-monitoring-web
+    - role: openshift/project
+      app: release-monitoring
+      description: release-monitoring
+      appowners:
+        - zlopez
+    - role: openshift/secret-file
+      app: release-monitoring
+      secret_name: release-monitoring-fedora-messaging-ca
+      key: fedora-messaging-release-monitoring-ca.crt
+      privatefile: "rabbitmq/{{env}}/pki/ca.crt"
+    - role: openshift/secret-file
+      app: release-monitoring
+      secret_name: release-monitoring-fedora-messaging-key
+      key: fedora-messaging-release-monitoring.key
+      privatefile: "rabbitmq/{{env}}/pki/private/anitya{{env_suffix}}.key"
+    - role: openshift/secret-file
+      app: release-monitoring
+      secret_name: release-monitoring-fedora-messaging-cert
+      key: fedora-messaging-release-monitoring.crt
+      privatefile: "rabbitmq/{{env}}/pki/issued/anitya{{env_suffix}}.crt"
+    - role: openshift/object
+      app: release-monitoring
+      file: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: release-monitoring
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+    - role: openshift/start-build
+      app: release-monitoring
+      buildname: release-monitoring-web-build
+    - role: openshift/object
+      app: release-monitoring
+      template: configmap.yml
+      objectname: configmap.yml
+    - role: openshift/object
+      app: release-monitoring
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/object
+      app: release-monitoring
+      template: route.yml
+      objectname: route.yml
+    - role: openshift/object
+      app: release-monitoring
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: release-monitoring
+      file: cron.yml
+      objectname: cron.yml
+    - role: openshift/rollout
+      app: release-monitoring
+      dcname: release-monitoring-web
diff --git a/playbooks/openshift-apps/silverblue.yml b/playbooks/openshift-apps/silverblue.yml
index 534fcade8..95f210a26 100644
--- a/playbooks/openshift-apps/silverblue.yml
+++ b/playbooks/openshift-apps/silverblue.yml
@@ -9,51 +9,51 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: silverblue
-    description: teamsilverblue.org static website
-    appowners:
-    - misc
-    - sanja
-  - role: openshift/object
-    app: silverblue
-    template: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: silverblue
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-
-  - role: openshift/start-build
-    app: silverblue
-    buildname: silverblue-build
-    objectname: silverblue-build
-
-  - role: openshift/object
-    app: silverblue
-    file: service.yml
-    objectname: service.yml
-
-  - role: openshift/route
-    app: silverblue
-    routename: silverblue
-    host: "teamsilverblue.org"
-    serviceport: 8080-tcp
-    servicename: silverblue
-    when: env == "production" 
-
-  - role: openshift/route
-    app: silverblue
-    routename: silverblue
-    host: "silverblue{{ env_suffix }}.fedoraproject.org"
-    serviceport: 8080-tcp
-    servicename: silverblue
-
-  - role: openshift/object
-    app: silverblue
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-
-  - role: openshift/rollout
-    app: silverblue
-    dcname: silverblue
+    - role: openshift/project
+      app: silverblue
+      description: teamsilverblue.org static website
+      appowners:
+        - misc
+        - sanja
+    - role: openshift/object
+      app: silverblue
+      template: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: silverblue
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+
+    - role: openshift/start-build
+      app: silverblue
+      buildname: silverblue-build
+      objectname: silverblue-build
+
+    - role: openshift/object
+      app: silverblue
+      file: service.yml
+      objectname: service.yml
+
+    - role: openshift/route
+      app: silverblue
+      routename: silverblue
+      host: "teamsilverblue.org"
+      serviceport: 8080-tcp
+      servicename: silverblue
+      when: env == "production"
+
+    - role: openshift/route
+      app: silverblue
+      routename: silverblue
+      host: "silverblue{{ env_suffix }}.fedoraproject.org"
+      serviceport: 8080-tcp
+      servicename: silverblue
+
+    - role: openshift/object
+      app: silverblue
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+
+    - role: openshift/rollout
+      app: silverblue
+      dcname: silverblue
diff --git a/playbooks/openshift-apps/the-new-hotness.yml b/playbooks/openshift-apps/the-new-hotness.yml
index dfad65a9e..cfa51a84d 100644
--- a/playbooks/openshift-apps/the-new-hotness.yml
+++ b/playbooks/openshift-apps/the-new-hotness.yml
@@ -9,63 +9,63 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: rabbit/queue
-    username: the-new-hotness{{ env_suffix }}
-    queue_name: the-new-hotness{{ env_suffix }}
-    routing_keys:
-      - "org.release-monitoring.*.anitya.project.version.update"
-      - "org.release-monitoring.*.anitya.project.map.new"
-      - "org.fedoraproject.*.buildsys.task.state.change"
+    - role: rabbit/queue
+      username: the-new-hotness{{ env_suffix }}
+      queue_name: the-new-hotness{{ env_suffix }}
+      routing_keys:
+        - "org.release-monitoring.*.anitya.project.version.update"
+        - "org.release-monitoring.*.anitya.project.map.new"
+        - "org.fedoraproject.*.buildsys.task.state.change"
 
-  - role: openshift/project
-    app: the-new-hotness
-    description: Fedora-messaging consumer that listens to the-new-hotness.org and files bugzilla bugs in response.
-    appowners:
-    - zlopez
+    - role: openshift/project
+      app: the-new-hotness
+      description: Fedora-messaging consumer that listens to the-new-hotness.org and files bugzilla bugs in response.
+      appowners:
+        - zlopez
 
-  - role: openshift/secret-file
-    app: the-new-hotness
-    secret_name: the-new-hotness-fedora-messaging-ca
-    key: fedora-messaging-the-new-hotness-ca.crt
-    privatefile: "rabbitmq/{{env}}/pki/ca.crt"
+    - role: openshift/secret-file
+      app: the-new-hotness
+      secret_name: the-new-hotness-fedora-messaging-ca
+      key: fedora-messaging-the-new-hotness-ca.crt
+      privatefile: "rabbitmq/{{env}}/pki/ca.crt"
 
-  - role: openshift/secret-file
-    app: the-new-hotness
-    secret_name: the-new-hotness-fedora-messaging-key
-    key: fedora-messaging-the-new-hotness.key
-    privatefile: "rabbitmq/{{env}}/pki/private/the-new-hotness{{env_suffix}}.key"
+    - role: openshift/secret-file
+      app: the-new-hotness
+      secret_name: the-new-hotness-fedora-messaging-key
+      key: fedora-messaging-the-new-hotness.key
+      privatefile: "rabbitmq/{{env}}/pki/private/the-new-hotness{{env_suffix}}.key"
 
-  - role: openshift/secret-file
-    app: the-new-hotness
-    secret_name: the-new-hotness-fedora-messaging-cert
-    key: fedora-messaging-the-new-hotness.crt
-    privatefile: "rabbitmq/{{env}}/pki/issued/the-new-hotness{{env_suffix}}.crt"
+    - role: openshift/secret-file
+      app: the-new-hotness
+      secret_name: the-new-hotness-fedora-messaging-cert
+      key: fedora-messaging-the-new-hotness.crt
+      privatefile: "rabbitmq/{{env}}/pki/issued/the-new-hotness{{env_suffix}}.crt"
 
-  - role: openshift/object
-    app: the-new-hotness
-    file: imagestream.yml
-    objectname: imagestream.yml
+    - role: openshift/object
+      app: the-new-hotness
+      file: imagestream.yml
+      objectname: imagestream.yml
 
-  - role: openshift/object
-    app: the-new-hotness
-    template: buildconfig.yml
-    objectname: buildconfig.yml
+    - role: openshift/object
+      app: the-new-hotness
+      template: buildconfig.yml
+      objectname: buildconfig.yml
 
-  - role: openshift/object
-    app: the-new-hotness
-    template: configmap.yml
-    objectname: configmap.yml
+    - role: openshift/object
+      app: the-new-hotness
+      template: configmap.yml
+      objectname: configmap.yml
 
-  - role: openshift/start-build
-    app: the-new-hotness
-    buildname: the-new-hotness-build
-    objectname: the-new-hotness-build
+    - role: openshift/start-build
+      app: the-new-hotness
+      buildname: the-new-hotness-build
+      objectname: the-new-hotness-build
 
-  - role: openshift/object
-    app: the-new-hotness
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
+    - role: openshift/object
+      app: the-new-hotness
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
 
-  - role: openshift/rollout
-    app: the-new-hotness
-    dcname: the-new-hotness
+    - role: openshift/rollout
+      app: the-new-hotness
+      dcname: the-new-hotness
diff --git a/playbooks/openshift-apps/transtats.yml b/playbooks/openshift-apps/transtats.yml
index 5df79ef3a..2427fce31 100644
--- a/playbooks/openshift-apps/transtats.yml
+++ b/playbooks/openshift-apps/transtats.yml
@@ -9,42 +9,42 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  - role: openshift/project
-    app: transtats
-    description: transtats
-    appowners:
-    - suanand
-  - role: openshift/object
-    app: transtats
-    objectname: secret.yml
-    template: secret.yml
-  - role: openshift/imagestream
-    app: transtats
-    imagename: transtats
-  - role: openshift/object
-    app: transtats
-    file: buildconfig.yml
-    objectname: buildconfig.yml
-  - role: openshift/start-build
-    app: transtats
-    buildname: transtats-build
-  - role: openshift/object
-    app: transtats
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/route
-    app: transtats
-    routename: transtats-web
-    host: transtats{{ env_suffix }}.fedoraproject.org
-    file: route.yml
-    serviceport: web
-    servicename: transtats-web
-    annotations:
-      haproxy.router.openshift.io/timeout: 8m
-  - role: openshift/object
-    app: transtats
-    file: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-  - role: openshift/rollout
-    app: transtats
-    dcname: transtats-web
+    - role: openshift/project
+      app: transtats
+      description: transtats
+      appowners:
+        - suanand
+    - role: openshift/object
+      app: transtats
+      objectname: secret.yml
+      template: secret.yml
+    - role: openshift/imagestream
+      app: transtats
+      imagename: transtats
+    - role: openshift/object
+      app: transtats
+      file: buildconfig.yml
+      objectname: buildconfig.yml
+    - role: openshift/start-build
+      app: transtats
+      buildname: transtats-build
+    - role: openshift/object
+      app: transtats
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/route
+      app: transtats
+      routename: transtats-web
+      host: transtats{{ env_suffix }}.fedoraproject.org
+      file: route.yml
+      serviceport: web
+      servicename: transtats-web
+      annotations:
+        haproxy.router.openshift.io/timeout: 8m
+    - role: openshift/object
+      app: transtats
+      file: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+    - role: openshift/rollout
+      app: transtats
+      dcname: transtats-web
diff --git a/playbooks/openshift-apps/waiverdb.yml b/playbooks/openshift-apps/waiverdb.yml
index 9008fe398..99fd0e52d 100644
--- a/playbooks/openshift-apps/waiverdb.yml
+++ b/playbooks/openshift-apps/waiverdb.yml
@@ -9,80 +9,80 @@
     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   roles:
-  # The openshift/project role breaks if the project already exists:
-  # https://pagure.io/fedora-infrastructure/issue/6404
-  - role: openshift/project
-    app: waiverdb
-    description: waiverdb
-    appowners:
-    - ralph
-    - mjia
-    - dcallagh
-    - gnaponie
-  - role: openshift/object
-    app: waiverdb
-    template: secret.yml
-    objectname: secret.yml
-  - role: openshift/secret-file
-    app: waiverdb
-    secret_name: waiverdb-stg-secret
-    key: client_secrets.json
-    template: client_secrets.json
-  - role: openshift/secret-file
-    app: waiverdb
-    secret_name: waiverdb-fedmsg-key
-    key: fedmsg-waiverdb.key
-    privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.stg.fedoraproject.org.key
-    when: env == "staging"
-  - role: openshift/secret-file
-    app: waiverdb
-    secret_name: waiverdb-fedmsg-crt
-    key: fedmsg-waiverdb.crt
-    privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.stg.fedoraproject.org.crt
-    when: env == "staging"
-  - role: openshift/secret-file
-    app: waiverdb
-    secret_name: waiverdb-fedmsg-key
-    key: fedmsg-waiverdb.key
-    privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.fedoraproject.org.key
-    when: env != "staging"
-  - role: openshift/secret-file
-    app: waiverdb
-    secret_name: waiverdb-fedmsg-crt
-    key: fedmsg-waiverdb.crt
-    privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.fedoraproject.org.crt
-    when: env != "staging"
-  - role: openshift/object
-    app: waiverdb
-    template: imagestream.yml
-    objectname: imagestream.yml
-  - role: openshift/object
-    app: waiverdb
-    template: buildconfig.yml
-    objectname: buildconfig.yml
-  - role: openshift/object
-    app: waiverdb
-    template: configmap.yml
-    objectname: configmap.yml
-  - role: openshift/object
-    app: waiverdb
-    file: service.yml
-    objectname: service.yml
-  - role: openshift/route
-    app: waiverdb
-    routename: web-pretty
-    host: "waiverdb{{ env_suffix }}.fedoraproject.org"
-    serviceport: web
-    servicename: waiverdb-web
-  # TODO -- someday retire this old route in favor of the pretty one above.
-  - role: openshift/object
-    app: waiverdb
-    file: route.yml
-    objectname: route.yml
-  - role: openshift/object
-    app: waiverdb
-    template: deploymentconfig.yml
-    objectname: deploymentconfig.yml
-  - role: openshift/rollout
-    app: waiverdb
-    dcname: waiverdb-web
+    # The openshift/project role breaks if the project already exists:
+    # https://pagure.io/fedora-infrastructure/issue/6404
+    - role: openshift/project
+      app: waiverdb
+      description: waiverdb
+      appowners:
+        - ralph
+        - mjia
+        - dcallagh
+        - gnaponie
+    - role: openshift/object
+      app: waiverdb
+      template: secret.yml
+      objectname: secret.yml
+    - role: openshift/secret-file
+      app: waiverdb
+      secret_name: waiverdb-stg-secret
+      key: client_secrets.json
+      template: client_secrets.json
+    - role: openshift/secret-file
+      app: waiverdb
+      secret_name: waiverdb-fedmsg-key
+      key: fedmsg-waiverdb.key
+      privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.stg.fedoraproject.org.key
+      when: env == "staging"
+    - role: openshift/secret-file
+      app: waiverdb
+      secret_name: waiverdb-fedmsg-crt
+      key: fedmsg-waiverdb.crt
+      privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.stg.fedoraproject.org.crt
+      when: env == "staging"
+    - role: openshift/secret-file
+      app: waiverdb
+      secret_name: waiverdb-fedmsg-key
+      key: fedmsg-waiverdb.key
+      privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.fedoraproject.org.key
+      when: env != "staging"
+    - role: openshift/secret-file
+      app: waiverdb
+      secret_name: waiverdb-fedmsg-crt
+      key: fedmsg-waiverdb.crt
+      privatefile: fedmsg-certs/keys/waiverdb-waiverdb-web-waiverdb.app.os.fedoraproject.org.crt
+      when: env != "staging"
+    - role: openshift/object
+      app: waiverdb
+      template: imagestream.yml
+      objectname: imagestream.yml
+    - role: openshift/object
+      app: waiverdb
+      template: buildconfig.yml
+      objectname: buildconfig.yml
+    - role: openshift/object
+      app: waiverdb
+      template: configmap.yml
+      objectname: configmap.yml
+    - role: openshift/object
+      app: waiverdb
+      file: service.yml
+      objectname: service.yml
+    - role: openshift/route
+      app: waiverdb
+      routename: web-pretty
+      host: "waiverdb{{ env_suffix }}.fedoraproject.org"
+      serviceport: web
+      servicename: waiverdb-web
+    # TODO -- someday retire this old route in favor of the pretty one above.
+    - role: openshift/object
+      app: waiverdb
+      file: route.yml
+      objectname: route.yml
+    - role: openshift/object
+      app: waiverdb
+      template: deploymentconfig.yml
+      objectname: deploymentconfig.yml
+    - role: openshift/rollout
+      app: waiverdb
+      dcname: waiverdb-web
diff --git a/playbooks/rdiff-backup.yml b/playbooks/rdiff-backup.yml
index d6c46fea6..da92a254b 100644
--- a/playbooks/rdiff-backup.yml
+++ b/playbooks/rdiff-backup.yml
@@ -16,20 +16,20 @@
   # FIXME - coping with errors?
 
   vars:
-  - global_backup_targets: ['/etc', '/home']
+    - global_backup_targets: ["/etc", "/home"]
 
   tasks:
-  - name: run rdiff-backup hitting all the global targets
-    local_action: "shell rdiff-backup --remote-schema 'ssh -p {{ ansible_port|default(22) }} -C %s rdiff-backup --server' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@xxxxxxxxxxxxxxxxx -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@xxxxxxxxxxxxxxxxx"
-    with_items: '{{ global_backup_targets }}'
-    when: global_backup_targets is defined
+    - name: run rdiff-backup hitting all the global targets
+      local_action: "shell rdiff-backup --remote-schema 'ssh -p {{ ansible_port|default(22) }} -C %s rdiff-backup --server' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@xxxxxxxxxxxxxxxxx -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@xxxxxxxxxxxxxxxxx"
+      with_items: "{{ global_backup_targets }}"
+      when: global_backup_targets is defined
 
-  - name: copy new database dumps into the backup server database dir
-    local_action: "shell rsync -a {{ inventory_hostname }}:{{ item }}/ /fedora_backups/databases/{{ inventory_hostname }}/"
-    with_items: '{{ db_backup_dir }}'
-    when: db_backup_dir is defined
+    - name: copy new database dumps into the backup server database dir
+      local_action: "shell rsync -a {{ inventory_hostname }}:{{ item }}/ /fedora_backups/databases/{{ inventory_hostname }}/"
+      with_items: "{{ db_backup_dir }}"
+      when: db_backup_dir is defined
 
-  - name: run rdiff-backup hitting all the host targets
-    local_action: "shell rdiff-backup --remote-schema 'ssh -p {{ ansible_port|default(22) }} -C %s rdiff-backup --server' --exclude='**git-seed*' --exclude='**git_seed' --exclude='**.snapshot' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@xxxxxxxxxxxxxxxxx -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@xxxxxxxxxxxxxxxxx"
-    with_items: '{{ host_backup_targets }}'
-    when: host_backup_targets is defined
+    - name: run rdiff-backup hitting all the host targets
+      local_action: "shell rdiff-backup --remote-schema 'ssh -p {{ ansible_port|default(22) }} -C %s rdiff-backup --server' --exclude='**git-seed*' --exclude='**git_seed' --exclude='**.snapshot' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@xxxxxxxxxxxxxxxxx -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@xxxxxxxxxxxxxxxxx"
+      with_items: "{{ host_backup_targets }}"
+      when: host_backup_targets is defined
diff --git a/playbooks/restart_unbound.yml b/playbooks/restart_unbound.yml
index 782b14e14..624eb9656 100644
--- a/playbooks/restart_unbound.yml
+++ b/playbooks/restart_unbound.yml
@@ -9,8 +9,8 @@
   user: root
 
   vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/restart_unbound.yml"
+    - import_tasks: "{{ tasks_path }}/restart_unbound.yml"
diff --git a/playbooks/rkhunter_only.yml b/playbooks/rkhunter_only.yml
index 92f5b35af..d57d6c32b 100644
--- a/playbooks/rkhunter_only.yml
+++ b/playbooks/rkhunter_only.yml
@@ -5,11 +5,11 @@
   user: root
 
   tasks:
-  - name: check for rkhunter
-    command: /usr/bin/test -f /usr/bin/rkhunter
-    register: rkhunter
-    ignore_errors: true
+    - name: check for rkhunter
+      command: /usr/bin/test -f /usr/bin/rkhunter
+      register: rkhunter
+      ignore_errors: true
 
-  - name: run rkhunter --propupd
-    command: /usr/bin/rkhunter --propupd
-    when: rkhunter is success
+    - name: run rkhunter --propupd
+      command: /usr/bin/rkhunter --propupd
+      when: rkhunter is success
diff --git a/playbooks/rkhunter_update.yml b/playbooks/rkhunter_update.yml
index 3e5927892..a84f8c73c 100644
--- a/playbooks/rkhunter_update.yml
+++ b/playbooks/rkhunter_update.yml
@@ -5,19 +5,19 @@
   user: root
 
   tasks:
-  - name: expire-caches
-    command: yum clean expire-cache
+    - name: expire-caches
+      command: yum clean expire-cache
 
-  - name: yum -y {{ yumcommand }}
-    command: yum -y {{ yumcommand }}
-    async: 7200
-    poll: 15
+    - name: yum -y {{ yumcommand }}
+      command: yum -y {{ yumcommand }}
+      async: 7200
+      poll: 15
 
-  - name: check for rkhunter
-    command: /usr/bin/test -f /usr/bin/rkhunter
-    register: rkhunter
-    ignore_errors: true
+    - name: check for rkhunter
+      command: /usr/bin/test -f /usr/bin/rkhunter
+      register: rkhunter
+      ignore_errors: true
 
-  - name: run rkhunter --propupd
-    command: /usr/bin/rkhunter --propupd
-    when: rkhunter is success
+    - name: run rkhunter --propupd
+      command: /usr/bin/rkhunter --propupd
+      when: rkhunter is success
diff --git a/playbooks/run_fasClient.yml b/playbooks/run_fasClient.yml
index 53ea2f2e7..2bdeddcfb 100644
--- a/playbooks/run_fasClient.yml
+++ b/playbooks/run_fasClient.yml
@@ -9,10 +9,10 @@
   gather_facts: False
 
   tasks:
-  - name: actually run fasClient -a
-    command: fasClient -a
-    ignore_errors: true
-    when: inventory_hostname_short.startswith('bastion0')
+    - name: actually run fasClient -a
+      command: fasClient -a
+      ignore_errors: true
+      when: inventory_hostname_short.startswith('bastion0')
 
 - name: run fasClient on people and pkgs first as these are the ones most people want updated
   hosts: people02.fedoraproject.org:pkgs02.phx2.fedoraproject.org
@@ -20,9 +20,9 @@
   gather_facts: False
 
   tasks:
-  - name: actually run fasClient -i
-    command: fasClient -i
-    ignore_errors: true
+    - name: actually run fasClient -i
+      command: fasClient -i
+      ignore_errors: true
 
 - name: run fasClient -i on the rest of hosts which only affects sysadmins
   hosts: all:!builders:!*cloud*:!*composer*:!people*:!pkgs02*:!*.stg.*:!twisted*:!*.fedorainfracloud.org:!bkernel*:!autosign*:!*.app.os.fedoraproject.org:!*.app.os.stg.fedoraproject.org
@@ -30,6 +30,6 @@
   gather_facts: False
 
   tasks:
-  - name: actually run fasClient -i
-    command: fasClient -i
-    ignore_errors: true
+    - name: actually run fasClient -i
+      command: fasClient -i
+      ignore_errors: true
diff --git a/playbooks/run_fasClient_simple.yml b/playbooks/run_fasClient_simple.yml
index 8176d978b..cefdb1f27 100644
--- a/playbooks/run_fasClient_simple.yml
+++ b/playbooks/run_fasClient_simple.yml
@@ -8,10 +8,10 @@
   gather_facts: False
 
   tasks:
-  - name: actually run fasClient -a
-    command: fasClient -a
-    when: inventory_hostname_short.startswith('bastion0')
-    ignore_errors: true
+    - name: actually run fasClient -a
+      command: fasClient -a
+      when: inventory_hostname_short.startswith('bastion0')
+      ignore_errors: true
 
 - name: run fasClient on people and pkgs first as these are the ones most people want updated
   hosts: people02.fedoraproject.org:pkgs02.phx2.fedoraproject.org
@@ -19,6 +19,6 @@
   gather_facts: False
 
   tasks:
-  - name: actually run fasClient -i
-    command: fasClient -i
-    ignore_errors: true
+    - name: actually run fasClient -i
+      command: fasClient -i
+      ignore_errors: true
diff --git a/playbooks/set_root_auth_keys.yml b/playbooks/set_root_auth_keys.yml
index ee431de36..419ca64cb 100644
--- a/playbooks/set_root_auth_keys.yml
+++ b/playbooks/set_root_auth_keys.yml
@@ -5,14 +5,14 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
   vars:
-   - root_auth_users: ''
+    - root_auth_users: ""
 
   tasks:
-   - name: add root keys for sysadmin-main and other allowed users
-     action: authorized_key user=root key={{ item }}
-     with_lines:
-     - "{{ auth_keys_from_fas}} @sysadmin-main {{ root_auth_users }}"
+    - name: add root keys for sysadmin-main and other allowed users
+      action: authorized_key user=root key={{ item }}
+      with_lines:
+        - "{{ auth_keys_from_fas}} @sysadmin-main {{ root_auth_users }}"
diff --git a/playbooks/transient_cloud_instance.yml b/playbooks/transient_cloud_instance.yml
index 3ce042cf7..dad563d4b 100644
--- a/playbooks/transient_cloud_instance.yml
+++ b/playbooks/transient_cloud_instance.yml
@@ -30,30 +30,30 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
   vars:
     image: "{{ centos70_x86_64 }}"
     instance_type: m1.small
 
   tasks:
-  - name: fail when name is not provided
-    fail: msg="Please specify the name of the instance"
-    when: name is not defined
+    - name: fail when name is not provided
+      fail: msg="Please specify the name of the instance"
+      when: name is not defined
 
-  - import_tasks: "{{ tasks_path }}/transient_cloud.yml"
+    - import_tasks: "{{ tasks_path }}/transient_cloud.yml"
 
-  - name: gather facts
-    setup:
-    check_mode: no
-    ignore_errors: True
-    register: facts
+    - name: gather facts
+      setup:
+      check_mode: no
+      ignore_errors: True
+      register: facts
 
-  - name: install python2 and dnf stuff
-    raw: dnf -y install python-dnf libselinux-python
-    when: facts is failed
+    - name: install python2 and dnf stuff
+      raw: dnf -y install python-dnf libselinux-python
+      when: facts is failed
 
 - name: provision instance
   hosts: tmp_just_created
@@ -62,20 +62,20 @@
     ANSIBLE_HOST_KEY_CHECKING: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: install cloud-utils (yum)
-    package: name=cloud-utils state=present
-    when: ansible_distribution_major_version|int < 22
+    - name: install cloud-utils (yum)
+      package: name=cloud-utils state=present
+      when: ansible_distribution_major_version|int < 22
 
-  - name: install cloud-utils (dnf)
-    command: dnf install -y cloud-utils
-    when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined
+    - name: install cloud-utils (dnf)
+      command: dnf install -y cloud-utils
+      when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined
 
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/transient_newcloud_instance.yml b/playbooks/transient_newcloud_instance.yml
index 3b01af90b..0f8952ff9 100644
--- a/playbooks/transient_newcloud_instance.yml
+++ b/playbooks/transient_newcloud_instance.yml
@@ -30,20 +30,20 @@
   gather_facts: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/private/ansible/vars.yml
-   - /srv/web/infra/ansible/vars/fedora-cloud.yml
-   - /srv/private/ansible/files/openstack/passwords.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/private/ansible/vars.yml
+    - /srv/web/infra/ansible/vars/fedora-cloud.yml
+    - /srv/private/ansible/files/openstack/passwords.yml
   vars:
     image: "{{ centos70_x86_64 }}"
     instance_type: m1.small
 
   tasks:
-  - name: fail when name is not provided
-    fail: msg="Please specify the name of the instance"
-    when: name is not defined
+    - name: fail when name is not provided
+      fail: msg="Please specify the name of the instance"
+      when: name is not defined
 
-  - import_tasks: "{{ tasks_path }}/transient_newcloud.yml"
+    - import_tasks: "{{ tasks_path }}/transient_newcloud.yml"
 
 - name: Install Pythonic stuff.
   hosts: tmp_just_created
@@ -52,15 +52,15 @@
     ANSIBLE_HOST_KEY_CHECKING: False
 
   tasks:
-  - name: gather facts
-    setup:
-    check_mode: no
-    ignore_errors: True
-    register: facts
+    - name: gather facts
+      setup:
+      check_mode: no
+      ignore_errors: True
+      register: facts
 
-  - name: install python2 and dnf stuff
-    raw: dnf -y install python-dnf libselinux-python
-    when: facts is failed
+    - name: install python2 and dnf stuff
+      raw: dnf -y install python-dnf libselinux-python
+      when: facts is failed
 
 - name: provision instance
   hosts: tmp_just_created
@@ -69,20 +69,20 @@
     ANSIBLE_HOST_KEY_CHECKING: False
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - name: install cloud-utils (yum)
-    package: name=cloud-utils state=present
-    when: ansible_distribution_major_version|int < 22
+    - name: install cloud-utils (yum)
+      package: name=cloud-utils state=present
+      when: ansible_distribution_major_version|int < 22
 
-  - name: install cloud-utils (dnf)
-    command: dnf install -y cloud-utils
-    when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined
+    - name: install cloud-utils (dnf)
+      command: dnf install -y cloud-utils
+      when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined
 
-  - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
+    - import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
diff --git a/playbooks/update-proxy-dns.yml b/playbooks/update-proxy-dns.yml
index 0730617a3..26aedc9f0 100644
--- a/playbooks/update-proxy-dns.yml
+++ b/playbooks/update-proxy-dns.yml
@@ -8,57 +8,57 @@
   user: root
   serial: 1
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   vars:
-  - userstring: "Ansible update-proxy-dns.yml <root@xxxxxxxxxxxxxxxxx>"
+    - userstring: "Ansible update-proxy-dns.yml <root@xxxxxxxxxxxxxxxxx>"
 
   tasks:
-  - name: Make up a tempdir..
-    local_action: command mktemp -p /var/tmp -d dns-checkout.XXXXXXXX
-    register: tmp
-    when: nodns is not defined or not "true" in nodns
+    - name: Make up a tempdir..
+      local_action: command mktemp -p /var/tmp -d dns-checkout.XXXXXXXX
+      register: tmp
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Clone the dns repo into /var/tmp/dns-checkout.....
-    local_action: git repo=/git/dns/ dest={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Clone the dns repo into /var/tmp/dns-checkout.....
+      local_action: git repo=/git/dns/ dest={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Run zone-template (fedoraproject.org)
-    local_action: command {{tmp.stdout}}/zone-template {{tmp.stdout}}/fedoraproject.org.cfg {{status}} {{inventory_hostname}} chdir={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Run zone-template (fedoraproject.org)
+      local_action: command {{tmp.stdout}}/zone-template {{tmp.stdout}}/fedoraproject.org.cfg {{status}} {{inventory_hostname}} chdir={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Run zone-template (getfedora.org)
-    local_action: command {{tmp.stdout}}/zone-template {{tmp.stdout}}/getfedora.org.cfg {{status}} {{inventory_hostname}} chdir={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Run zone-template (getfedora.org)
+      local_action: command {{tmp.stdout}}/zone-template {{tmp.stdout}}/getfedora.org.cfg {{status}} {{inventory_hostname}} chdir={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Commit once
-    local_action: command git commit -a -m '{{status}} {{inventory_hostname}}' --author '{{userstring}}' chdir={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Commit once
+      local_action: command git commit -a -m '{{status}} {{inventory_hostname}}' --author '{{userstring}}' chdir={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Do domains
-    local_action: command {{tmp.stdout}}/do-domains chdir={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Do domains
+      local_action: command {{tmp.stdout}}/do-domains chdir={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Commit second time
-    local_action: command git commit -a -m 'done build' --author '{{userstring}}' chdir={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Commit second time
+      local_action: command git commit -a -m 'done build' --author '{{userstring}}' chdir={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Push our changes back
-    local_action: command git push chdir={{tmp.stdout}}
-    when: nodns is not defined or not "true" in nodns
+    - name: Push our changes back
+      local_action: command git push chdir={{tmp.stdout}}
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Destroy our temporary clone of /git/dns/ in /var/tmp/dns-checkout....
-    local_action: file dest={{tmp.stdout}} state=absent
-    when: nodns is not defined or not "true" in nodns
+    - name: Destroy our temporary clone of /git/dns/ in /var/tmp/dns-checkout....
+      local_action: file dest={{tmp.stdout}} state=absent
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Run update-dns on each nameserver
-    command: /usr/local/bin/update-dns
-    delegate_to: "{{item}}"
-    with_items: "{{groups.dns}}"
-    when: nodns is not defined or not "true" in nodns
+    - name: Run update-dns on each nameserver
+      command: /usr/local/bin/update-dns
+      delegate_to: "{{item}}"
+      with_items: "{{groups.dns}}"
+      when: nodns is not defined or not "true" in nodns
 
-  - name: Wait for dns to percolate (1 minute)
-    pause: minutes=1
-    when: nodns is not defined or not "true" in nodns
+    - name: Wait for dns to percolate (1 minute)
+      pause: minutes=1
+      when: nodns is not defined or not "true" in nodns
diff --git a/playbooks/update_dns.yml b/playbooks/update_dns.yml
index d5d9253b5..be39a78c6 100644
--- a/playbooks/update_dns.yml
+++ b/playbooks/update_dns.yml
@@ -3,6 +3,5 @@
   user: root
 
   tasks:
-
-  - name: push dns changes out
-    command: /usr/local/bin/update-dns
+    - name: push dns changes out
+      command: /usr/local/bin/update-dns
diff --git a/playbooks/update_grokmirror_repos.yml b/playbooks/update_grokmirror_repos.yml
index b41b67ff2..1cf7e37ad 100644
--- a/playbooks/update_grokmirror_repos.yml
+++ b/playbooks/update_grokmirror_repos.yml
@@ -7,6 +7,6 @@
   gather_facts: false
 
   tasks:
-     - name: update grokmirror repos
-       command: chdir={{ grokmirror_basedir }}/{{ item.name }} git fetch origin {{ grokmirror_default_branch }}:{{ grokmirror_default_branch }}
-       with_items: "{{ grokmirror_repos }}"
+    - name: update grokmirror repos
+      command: chdir={{ grokmirror_basedir }}/{{ item.name }} git fetch origin {{ grokmirror_default_branch }}:{{ grokmirror_default_branch }}
+      with_items: "{{ grokmirror_repos }}"
diff --git a/playbooks/update_ticketkey.yml b/playbooks/update_ticketkey.yml
index 7b77f4e5c..4c529122b 100644
--- a/playbooks/update_ticketkey.yml
+++ b/playbooks/update_ticketkey.yml
@@ -3,36 +3,36 @@
   user: root
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-  - name: create new production ticket key
-    command: /usr/local/bin/generate_ticketkey /root/ticketkey_production.tkey fpprod
+    - name: create new production ticket key
+      command: /usr/local/bin/generate_ticketkey /root/ticketkey_production.tkey fpprod
 
-  - name: create new staging ticket key
-    command: /usr/local/bin/generate_ticketkey /root/ticketkey_staging.tkey fpstag
+    - name: create new staging ticket key
+      command: /usr/local/bin/generate_ticketkey /root/ticketkey_staging.tkey fpstag
 
 - name: Push out new ticket key
   hosts: proxies:proxies-stg
   user: root
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - "/srv/private/ansible/vars.yml"
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   handlers:
-  - import_tasks: "{{ handlers_path }}/restart_services.yml"
+    - import_tasks: "{{ handlers_path }}/restart_services.yml"
 
   tasks:
-
-  - name: deploy ticket key
-    copy: src=/root/ticketkey_{{env}}.tkey dest=/etc/httpd/ticketkey_{{env}}.tkey
-          owner=root group=root mode=0600
-    notify:
-    - reload proxyhttpd
+    - name: deploy ticket key
+      copy:
+        src=/root/ticketkey_{{env}}.tkey dest=/etc/httpd/ticketkey_{{env}}.tkey
+        owner=root group=root mode=0600
+      notify:
+        - reload proxyhttpd
diff --git a/playbooks/vhost_halt_guests.yml b/playbooks/vhost_halt_guests.yml
index 083147c35..429ba6e59 100644
--- a/playbooks/vhost_halt_guests.yml
+++ b/playbooks/vhost_halt_guests.yml
@@ -18,20 +18,18 @@
 # ansible 0.9 should allow us to preserve content of two registered variables
 # across multiple plays
 
-
 - name: find instances
   hosts: "{{ vhost }}"
   user: root
 
   tasks:
-  - name: get list of guests
-    virt: command=list_vms
-    register: vmlist
-
-  - name: add them to myvms_new group
-    local_action: add_host hostname={{ item }} groupname=myvms_new
-    with_items: "{{ vmlist.list_vms }}"
+    - name: get list of guests
+      virt: command=list_vms
+      register: vmlist
 
+    - name: add them to myvms_new group
+      local_action: add_host hostname={{ item }} groupname=myvms_new
+      with_items: "{{ vmlist.list_vms }}"
 
 - name: halt instances
   hosts: myvms_new
@@ -39,14 +37,14 @@
   serial: 1
 
   tasks:
-  - name: tell nagios to shush
-    nagios: action=silence host={{ inventory_hostname_short }}
-    delegate_to: noc01.phx2.fedoraproject.org
+    - name: tell nagios to shush
+      nagios: action=silence host={{ inventory_hostname_short }}
+      delegate_to: noc01.phx2.fedoraproject.org
 
-  - name: echo-y
-    command: /sbin/halt -p
-    ignore_errors: true
-    # if one of them is down we don't care
+    - name: echo-y
+      command: /sbin/halt -p
+      ignore_errors: true
 
-  - name: wait for them to die
-    local_action: wait_for port=22 delay=30 timeout=300 state=stopped host={{ inventory_hostname }}
+      # if one of them is down we don't care
+    - name: wait for them to die
+      local_action: wait_for port=22 delay=30 timeout=300 state=stopped host={{ inventory_hostname }}
diff --git a/playbooks/vhost_poweroff.yml b/playbooks/vhost_poweroff.yml
index 6c2e1712d..b75a5966f 100644
--- a/playbooks/vhost_poweroff.yml
+++ b/playbooks/vhost_poweroff.yml
@@ -20,17 +20,17 @@
   user: root
 
   tasks:
-  - name: get list of guests
-    virt: command=list_vms
-    register: vmlist
+    - name: get list of guests
+      virt: command=list_vms
+      register: vmlist
 
-#  - name: get info on guests (prereboot)
-#    virt: command=info
-#    register: vminfo_pre
+    #  - name: get info on guests (prereboot)
+    #    virt: command=info
+    #    register: vminfo_pre
 
-  - name: add them to myvms_new group
-    local_action: add_host hostname={{ item }} groupname=myvms_new
-    with_items: "{{ vmlist.list_vms }}"
+    - name: add them to myvms_new group
+      local_action: add_host hostname={{ item }} groupname=myvms_new
+      with_items: "{{ vmlist.list_vms }}"
 
 - name: halt instances
   hosts: myvms_new
@@ -39,10 +39,10 @@
   serial: 1
 
   tasks:
-  - name: halt the vm instances - to poweroff
-    command: /sbin/shutdown -h 1
-    ignore_errors: true
-    # if one of them is down we don't care
+    - name: halt the vm instances - to poweroff
+      command: /sbin/shutdown -h 1
+      ignore_errors: true
+      # if one of them is down we don't care
 
 - name: wait for the whole set to die.
   hosts: myvms_new
@@ -50,8 +50,8 @@
   user: root
 
   tasks:
-  - name: wait for them to die
-    local_action: wait_for port=22 delay=30 timeout=300 state=stopped host={{ inventory_hostname }}
+    - name: wait for them to die
+      local_action: wait_for port=22 delay=30 timeout=300 state=stopped host={{ inventory_hostname }}
 
 - name: reboot vhost
   hosts: "{{ target }}"
@@ -59,5 +59,5 @@
   user: root
 
   tasks:
-  - name: halt the virthost
-    command: /sbin/shutdown -h 1
+    - name: halt the virthost
+      command: /sbin/shutdown -h 1
diff --git a/playbooks/vhost_reboot.yml b/playbooks/vhost_reboot.yml
index 98477367f..a0fe21475 100644
--- a/playbooks/vhost_reboot.yml
+++ b/playbooks/vhost_reboot.yml
@@ -21,17 +21,17 @@
   user: root
 
   tasks:
-  - name: get list of guests
-    virt: command=list_vms state=running
-    register: vmlist
+    - name: get list of guests
+      virt: command=list_vms state=running
+      register: vmlist
 
-#  - name: get info on guests (prereboot)
-#    virt: command=info
-#    register: vminfo_pre
+    #  - name: get info on guests (prereboot)
+    #    virt: command=info
+    #    register: vminfo_pre
 
-  - name: add them to myvms_new group
-    local_action: add_host hostname={{ item }} groupname=myvms_new
-    with_items: "{{ vmlist.list_vms }}"
+    - name: add them to myvms_new group
+      local_action: add_host hostname={{ item }} groupname=myvms_new
+      with_items: "{{ vmlist.list_vms }}"
 
 # Call out to another playbook.  Disable any proxies that may live here
 - import_playbook: update-proxy-dns.yml status=disable proxies=myvms_new:&proxies
@@ -44,21 +44,21 @@
   serial: 1
 
   tasks:
-  - name: drain OS node if necessary
-    command: oc adm drain {{inventory_hostname }} --ignore-daemonsets --delete-local-data
-    delegate_to: os-master01{{env_suffix}}.phx2.fedoraproject.org
-    when: inventory_hostname.startswith(('os-node', 'os-master'))
-
-  - name: schedule regular host downtime
-    nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
-    when: nonagios is not defined or not nonagios
-
-  - name: halt the vm instances - to poweroff
-    command: /sbin/shutdown -h 1
-    ignore_errors: true
-    # if one of them is down we don't care
+    - name: drain OS node if necessary
+      command: oc adm drain {{inventory_hostname }} --ignore-daemonsets --delete-local-data
+      delegate_to: os-master01{{env_suffix}}.phx2.fedoraproject.org
+      when: inventory_hostname.startswith(('os-node', 'os-master'))
+
+    - name: schedule regular host downtime
+      nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
+      when: nonagios is not defined or not nonagios
+
+    - name: halt the vm instances - to poweroff
+      command: /sbin/shutdown -h 1
+      ignore_errors: true
+      # if one of them is down we don't care
 
 - name: wait for the whole set to die.
   hosts: myvms_new
@@ -66,8 +66,8 @@
   user: root
 
   tasks:
-  - name: wait for them to die
-    local_action: wait_for port=22 delay=30 timeout=300 state=stopped host={{ inventory_hostname }}
+    - name: wait for them to die
+      local_action: wait_for port=22 delay=30 timeout=300 state=stopped host={{ inventory_hostname }}
 
 - name: reboot vhost
   hosts: "{{ target }}"
@@ -75,37 +75,37 @@
   user: root
 
   tasks:
-  - name: tell nagios to shush
-    nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
-    when: nonagios is not defined or not nonagios
+    - name: tell nagios to shush
+      nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
+      when: nonagios is not defined or not nonagios
 
-  - name: reboot the virthost
-    command: /sbin/shutdown -r 1
+    - name: reboot the virthost
+      command: /sbin/shutdown -r 1
 
-  - name: wait for virthost to come back - up to 15 minutes
-    local_action: wait_for host={{ target }} port=22 delay=120 timeout=900 search_regex=OpenSSH
+    - name: wait for virthost to come back - up to 15 minutes
+      local_action: wait_for host={{ target }} port=22 delay=120 timeout=900 search_regex=OpenSSH
 
-  - name: wait for libvirtd to come back on the virthost
-    wait_for: path=/var/run/libvirtd.pid state=present delay=10
+    - name: wait for libvirtd to come back on the virthost
+      wait_for: path=/var/run/libvirtd.pid state=present delay=10
 
-  - name: look up vmlist
-    virt: command=list_vms
-    register: newvmlist
+    - name: look up vmlist
+      virt: command=list_vms
+      register: newvmlist
 
-  - name: add them to myvms_postreboot group
-    local_action: add_host hostname={{ item }} groupname=myvms_postreboot
-    with_items: "{{ newvmlist.list_vms }}"
+    - name: add them to myvms_postreboot group
+      local_action: add_host hostname={{ item }} groupname=myvms_postreboot
+      with_items: "{{ newvmlist.list_vms }}"
 
-#  - name: sync time
-#    command: ntpdate -u 1.rhel.pool.ntp.org
+    #  - name: sync time
+    #    command: ntpdate -u 1.rhel.pool.ntp.org
 
-  - name: tell nagios to unshush
-    nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
-    when: nonagios is not defined or not nonagios
+    - name: tell nagios to unshush
+      nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
+      when: nonagios is not defined or not nonagios
 
 - name: post reboot tasks
   hosts: myvms_postreboot
@@ -114,10 +114,10 @@
   serial: 1
 
   tasks:
-  - name: Add back to openshift
-    command: oc adm uncordon {{inventory_hostname}}
-    delegate_to: os-master01{{env_suffix}}.phx2.fedoraproject.org
-    when: inventory_hostname.startswith(('os-node', 'os-master'))
+    - name: Add back to openshift
+      command: oc adm uncordon {{inventory_hostname}}
+      delegate_to: os-master01{{env_suffix}}.phx2.fedoraproject.org
+      when: inventory_hostname.startswith(('os-node', 'os-master'))
 
 # Call out to that dns playbook.  Put proxies back in now that they're back
 - import_playbook: update-proxy-dns.yml status=enable proxies=myvms_new:&proxies
@@ -129,12 +129,11 @@
   user: root
 
   vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+    - /srv/web/infra/ansible/vars/global.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
   tasks:
-  - import_tasks: "{{ tasks_path }}/restart_unbound.yml"
-
+    - import_tasks: "{{ tasks_path }}/restart_unbound.yml"
 #  - name: get info on guests (postreboot)
 #    virt: command=info
 #    register: vminfo_post
diff --git a/playbooks/vhost_update.yml b/playbooks/vhost_update.yml
index e0879029d..543458fef 100644
--- a/playbooks/vhost_update.yml
+++ b/playbooks/vhost_update.yml
@@ -10,13 +10,13 @@
   user: root
 
   tasks:
-  - name: get list of guests
-    virt: command=list_vms
-    register: vmlist
+    - name: get list of guests
+      virt: command=list_vms
+      register: vmlist
 
-  - name: add them to myvms_new group
-    local_action: add_host hostname={{ item }} groupname=myvms_new
-    with_items: '{{vmlist.list_vms}}'
+    - name: add them to myvms_new group
+      local_action: add_host hostname={{ item }} groupname=myvms_new
+      with_items: "{{vmlist.list_vms}}"
 
 # Call out to another playbook.  Disable any proxies that may live here
 #- include_playbook: update-proxy-dns.yml status=disable proxies=myvms_new:&proxies
@@ -28,13 +28,12 @@
   serial: 1
 
   tasks:
-
-  - name: schedule regular host downtime
-    nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
-    delegate_to: noc01.phx2.fedoraproject.org
-    ignore_errors: true
-    failed_when: no
-    when: nonagios is not defined or not "true" in nonagios
+    - name: schedule regular host downtime
+      nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+      delegate_to: noc01.phx2.fedoraproject.org
+      ignore_errors: true
+      failed_when: no
+      when: nonagios is not defined or not "true" in nonagios
 
 - name: update the system
   hosts: "{{ target }}:myvms_new"
@@ -42,32 +41,32 @@
   user: root
 
   tasks:
-  - name: expire-caches
-    command: yum clean expire-cache
-    when: ansible_distribution_major_version|int < 22
+    - name: expire-caches
+      command: yum clean expire-cache
+      when: ansible_distribution_major_version|int < 22
 
-  - name: yum -y {{ yumcommand }}
-    command: yum -y {{ yumcommand }}
-    async: 7200
-    poll: 30
-    when: ansible_distribution_major_version|int < 22
+    - name: yum -y {{ yumcommand }}
+      command: yum -y {{ yumcommand }}
+      async: 7200
+      poll: 30
+      when: ansible_distribution_major_version|int < 22
 
-  - name: dnf -y {{ yumcommand }} --refresh
-    command: dnf -y {{ yumcommand }} --refresh
-    async: 7200
-    poll: 30
-    when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined
+    - name: dnf -y {{ yumcommand }} --refresh
+      command: dnf -y {{ yumcommand }} --refresh
+      async: 7200
+      poll: 30
+      when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined
 
 - name: run rkhunter if installed
-  hosts:  "{{ target }}:myvms_new"
+  hosts: "{{ target }}:myvms_new"
   user: root
 
   tasks:
-  - name: check for rkhunter
-    command: /usr/bin/test -f /usr/bin/rkhunter
-    register: rkhunter
-    ignore_errors: true
+    - name: check for rkhunter
+      command: /usr/bin/test -f /usr/bin/rkhunter
+      register: rkhunter
+      ignore_errors: true
 
-  - name: run rkhunter --propupd
-    command: /usr/bin/rkhunter --propupd
-    when: rkhunter is success
+    - name: run rkhunter --propupd
+      command: /usr/bin/rkhunter --propupd
+      when: rkhunter is success
-- 
2.20.1
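
P.S. For completeness, here is the kind of config we could commit at the
repo root to pin the style this patch applies. This is only a sketch on
my side (nothing like it is in the repo yet), using two options that
prettier documents:

  # .prettierrc.yaml -- hypothetical example, not included in the patch
  tabWidth: 2         # two-space indent; prettier nests YAML list items under their parent key
  singleQuote: false  # prefer double quotes, which is what produced the quote changes above

With a config like that in place, running
"prettier --check 'playbooks/**/*.yml'" (the --check flag exits non-zero
when files are not formatted) would let CI catch any style drift.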

_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
