Re: Constant write load on 4 node ceph cluster

Great, this helped a lot. Although "ceph iostat" didn't give per-image iostats, just a general overview of the cluster's I/O, I remembered the new Nautilus RBD performance monitoring.

https://ceph.com/rbd/new-in-nautilus-rbd-performance-monitoring/

With a "simple"
> rbd perf image iotop
I was able to see that the writes indeed come from the Log Server and the Zabbix Monitoring Server. I didn't expect them to cause that much I/O... unbelievable...
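(For archive readers, a minimal sketch of the Nautilus commands involved; the pool name "rbd" is just a placeholder for your own pool, and since the counters are gathered by the mgr's rbd_support module, images may take a moment to show up.)

    # live, top-style view of per-image IOPS and throughput
    rbd perf image iotop --pool rbd

    # the same per-image counters as a periodic listing
    rbd perf image iostat --pool rbd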

----- Original Message -----
From: "Ashley Merrick" <singapore@xxxxxxxxxxxxxx>
To: "i schmidt" <i.schmidt@xxxxxxxxxxx>
CC: "ceph-users" <ceph-users@xxxxxxx>
Sent: Monday, 14 October 2019 15:20:46
Subject: Re: Constant write load on 4 node ceph cluster

Is the storage being used for the whole VM disk?

If so, have you checked that none of your software is writing constant logs, or anything else that could continuously write to disk?
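(A minimal sketch of how to spot such a writer, assuming a Linux guest with the sysstat and iotop packages installed:)

    # per-process disk writes, refreshed every second
    pidstat -d 1

    # only processes doing I/O, with accumulated totals to catch bursty writers
    iotop -o -a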

If you're running a recent enough version, you can use the mgr iostat module ( https://docs.ceph.com/docs/mimic/mgr/iostat/ ) to locate the exact RBD image.
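(A minimal sketch of enabling and running that module:)

    # enable the iostat module on the active mgr
    ceph mgr module enable iostat

    # print cluster-wide read/write throughput and IOPS, refreshed periodically
    ceph iostat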




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



