Hi all,
I would like to apply the following patch to solve https://pagure.io/fedora-infrastructure/issue/8147.
We quite often have builds failing in OpenShift because the nodes are running out of disk space. This patch adds a cron job that runs every week (on Monday) and deletes Docker "dangling" images.
A dangling image, for Docker, is an image that is not used, or has not been used, by a container.
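
For reference, the job is just stock docker commands, roughly like this (a quick sketch of what it does; note that docker rmi errors out when the image list is empty, so cron would send a mail on weeks where there is nothing to delete, and appending "|| true" would silence that if we care):

    # list the IDs of dangling images on a node
    docker images --filter dangling=true -q

    # remove them all in one go (this is what the cron job runs)
    docker rmi $(docker images --filter dangling=true -q)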
+1s ?
diff --git a/playbooks/groups/os-cluster.yml b/playbooks/groups/os-cluster.yml
index 4b56286dc..52a4e2635 100644
--- a/playbooks/groups/os-cluster.yml
+++ b/playbooks/groups/os-cluster.yml
@@ -248,3 +248,18 @@
- name: Enable wildcard routes
command: oc -n default set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
changed_when: false
+
+
+- name: Add a cleanup cron job to the nodes
+ hosts: os_nodes_stg:os_nodes
+ tags:
+ - os-node-cleanup
+ tasks:
+  - name: Ensure a job that runs every Monday to clean old docker images from the nodes.
+ cron:
+ name: "remove docker dangling images"
+ weekday: 1 #Monday
+ minute: "0"
+ hour: "0"
+ job: "docker rmi $(docker images --filter dangling=true -q)"
+      state: present
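
If it helps review, the cron module should end up writing a crontab entry on each node roughly like this (a sketch; the marker comment comes from the task's "name" parameter):

    #Ansible: remove docker dangling images
    0 0 * * 1 docker rmi $(docker images --filter dangling=true -q)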