Tracking down high writes in GlusterFS volume

I'm running an oVirt 4.3 HCI 3-way replica cluster with SSD-backed storage.  I've noticed that the SSD writes (SMART Total_LBAs_Written) are quite high on one particular drive.  Specifically, one volume has a much higher total of bytes written than the others, despite using less overall space.  That volume is writing over 1 TB of data per day (by my manual calculation, and confirmed with GlusterFS profiling) and is wearing my SSDs quickly.  How can I best determine which VM or process is at fault here?
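For context, this is roughly how I'm doing the manual calculation.  It is only a rough sketch in Python: it assumes the drive reports Total_LBAs_Written in 512-byte sectors (some models use a different unit) and uses /dev/sda as a placeholder device path, so adjust for your hardware.

#!/usr/bin/env python3
# Rough estimate of bytes written per day from SMART Total_LBAs_Written.
# Assumes the attribute is reported in 512-byte sectors; some drives use a
# different unit, so treat the result as an approximation.
import subprocess
import time

DEVICE = "/dev/sda"          # placeholder device, substitute the real SSD
SECTOR_BYTES = 512           # assumed unit for Total_LBAs_Written
SAMPLE_SECONDS = 3600        # sample interval; the delta is scaled up to a day

def total_lbas_written(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            # the raw value is the last column of the attribute line
            return int(line.split()[-1])
    raise RuntimeError("Total_LBAs_Written not reported by %s" % device)

first = total_lbas_written(DEVICE)
time.sleep(SAMPLE_SECONDS)
second = total_lbas_written(DEVICE)

bytes_per_day = (second - first) * SECTOR_BYTES * (86400 / SAMPLE_SECONDS)
print("~%.1f GiB written/day on %s" % (bytes_per_day / 2**30, DEVICE))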

There are 5 low-use VMs using the volume in question.  I'm attempting to track iostats on each of the VMs individually, but so far I'm not seeing anything obvious that would account for the 1 TB of writes per day that the Gluster volume is reporting.
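The per-VM check I've been running looks roughly like the following.  Again, this is only a sketch: the domain names are placeholders, it assumes vda is the guest disk target, and it relies on virsh domblkstat reporting a cumulative wr_bytes counter per device.

#!/usr/bin/env python3
# Sample cumulative write bytes per VM via virsh domblkstat and print the
# delta over an interval, to see which guest accounts for the volume's writes.
import subprocess
import time

VMS = ["vm1", "vm2", "vm3", "vm4", "vm5"]   # placeholder domain names
DISK = "vda"                                # assumed guest disk target
INTERVAL = 600                              # seconds between samples

def wr_bytes(domain):
    out = subprocess.run(["virsh", "domblkstat", domain, DISK],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        parts = line.split()
        # domblkstat lines look like: "vda wr_bytes 123456"
        if len(parts) == 3 and parts[1] == "wr_bytes":
            return int(parts[2])
    return 0

before = {vm: wr_bytes(vm) for vm in VMS}
time.sleep(INTERVAL)
after = {vm: wr_bytes(vm) for vm in VMS}

for vm in sorted(VMS, key=lambda v: after[v] - before[v], reverse=True):
    delta = after[vm] - before[vm]
    print("%-20s %8.1f MiB written in %ds" % (vm, delta / 2**20, INTERVAL))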
