Show IOps per VM/client to find heavy users...

Will definitely do so, thanks Wido and Dan...
Cheers guys


On 8 August 2014 16:13, Wido den Hollander <wido at 42on.com> wrote:

> On 08/08/2014 03:44 PM, Dan Van Der Ster wrote:
>
>> Hi,
>> Here's what we do to identify our top RBD users.
>>
>> First, enable log level 10 for the filestore so you can see all the IOs
>> coming from the VMs. Then use a script like this (used on a dumpling
>> cluster):
>>
>> https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
>>
>> to summarize the osd logs and identify the top clients.
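>>
>> For example, roughly along these lines (a sketch based on our dumpling
>> setup; the exact invocation and log paths may differ on your cluster,
>> and level 10 logging is verbose, so turn it back down afterwards):
>>
>>   # raise filestore logging on all OSDs at runtime
>>   ceph tell osd.* injectargs '--debug-filestore 10'
>>
>>   # after collecting for a while, summarize the logs on each OSD host
>>   ./rbd-io-stats.pl /var/log/ceph/ceph-osd.*.log
>>
>>   # revert when done (use whatever level you normally run with)
>>   ceph tell osd.* injectargs '--debug-filestore 0'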
>>
>> Then it's just a matter of scripting to figure out the ops/sec per
>> volume, but for us at least the main use case has been to identify who
>> is responsible for a new peak in overall ops, and daily-granular
>> statistics from the above script tend to suffice.
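>>
>> A rough sketch of that last step, assuming the script prints
>> "volume op-count" pairs and the logs cover a single day (check the
>> real output format before relying on this):
>>
>>   ./rbd-io-stats.pl ceph-osd.*.log | awk '{ printf "%s %.1f ops/s\n", $1, $2/86400 }'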
>>
>> BTW, do you throttle your clients? We found that it's absolutely
>> necessary, since without a throttle just a few active VMs can eat up
>> the entire IOPS capacity of the cluster.
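>>
>> On KVM the throttle can be applied per disk via libvirt's <iotune>
>> element, something like this minimal sketch (the numbers and the
>> surrounding disk definition are just placeholders):
>>
>>   <disk type='network' device='disk'>
>>     ...
>>     <iotune>
>>       <read_iops_sec>1500</read_iops_sec>
>>       <write_iops_sec>750</write_iops_sec>
>>     </iotune>
>>   </disk>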
>>
>
> +1
>
> I'd strongly advise setting I/O limits for Instances. I've had multiple
> occasions where a runaway script inside a VM was hammering the
> underlying storage and killing all I/O.
>
> Not only with Ceph, but with every storage system I've worked with over
> the years: I/O == expensive.
>
> CloudStack supports I/O limiting, so I recommend you set a limit, for
> example 750 write IOPS. That way one Instance can't kill the whole
> cluster, but it still (usually) has enough I/O to run.
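>
> For example, a disk offering with a hypervisor-level throttle created
> through cloudmonkey (parameter names as I recall them from the 4.x API;
> verify against the docs for your version):
>
>   create diskoffering name=rbd-throttled displaytext="RBD 750 write IOPS" disksize=20 iopsreadrate=1500 iopswriterate=750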
>
> Wido
>
>
>> Cheers, Dan
>>
>> -- Dan van der Ster || Data & Storage Services || CERN IT Department --
>>
>>
>> On 08 Aug 2014, at 13:51, Andrija Panic <andrija.panic at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> we just added some new clients and have suffered a very big
>>> degradation in Ceph performance for some reason (we are using
>>> CloudStack).
>>>
>>> I'm wondering if there is a way to monitor ops/s or similar usage per
>>> connected client, so we can isolate the heavy client?
>>>
>>> Also, what is the general best practice for monitoring these kinds of
>>> changes in Ceph? I'm talking about R/W or ops/s changes or similar...
>>>
>>> Thanks,
>>> --
>>>
>>> Andrija Panić
>>>
>>
>>
>>
>>
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>



-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------

