Re: Cephfs total throughput

Yes, those numbers are both delayed and a guesstimate: the OSDs send
periodic reports to the monitors about the state of their PGs, which
include the amount of data read from and written to them. The monitor
extrapolates the throughput for each report interval from the PG
updates it received during that time.
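
As a rough illustration (not the actual ceph-mon code; the field names
below are made up), the extrapolation boils down to dividing the change
in the cumulative PG counters by the reporting interval:

    # Illustrative sketch only -- not real Ceph code. 'bytes_written' and
    # 'bytes_read' stand in for the cumulative counters carried in the
    # periodic PG stat reports.
    def estimate_throughput(prev, curr, interval_secs):
        """Rate = delta of cumulative counters / reporting interval."""
        dw = curr["bytes_written"] - prev["bytes_written"]
        dr = curr["bytes_read"] - prev["bytes_read"]
        return {"write_Bps": dw / interval_secs, "read_Bps": dr / interval_secs}

    # e.g. two aggregated reports 5 seconds apart, 1 GiB written in between:
    prev = {"bytes_written": 10 * 2**30, "bytes_read": 4 * 2**30}
    curr = {"bytes_written": 11 * 2**30, "bytes_read": 4 * 2**30}
    print(estimate_throughput(prev, curr, 5))  # ~215 MB/s write, 0 B/s read

Because the counters only arrive with each periodic report, the figure
both lags reality and can be jumpy.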
-Greg

On Tue, Sep 15, 2015 at 1:08 PM, Barclay Jameson
<almightybeeij@xxxxxxxxx> wrote:
> Good point. I have seen some really weird numbers, something like 7x my
> normal client IO. This happens very rarely, though.
>
> On Tue, Sep 15, 2015 at 2:25 PM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
>> FWIW, I wouldn't totally trust these numbers. At one point a while back I
>> had Ceph reporting 226 GB/s sustained for several seconds. While that would
>> have been really fantastic, I suspect it probably wasn't the case. ;)
>>
>> Mark
>>
>>
>> On 09/15/2015 11:25 AM, Barclay Jameson wrote:
>>>
>>> Unfortunately, it's no longer idle, as my CephFS cluster is now in
>>> production :)
>>>
>>> On Tue, Sep 15, 2015 at 11:17 AM, Gregory Farnum <gfarnum@xxxxxxxxxx>
>>> wrote:
>>>>
>>>> On Tue, Sep 15, 2015 at 9:10 AM, Barclay Jameson
>>>> <almightybeeij@xxxxxxxxx> wrote:
>>>>>
>>>>> So, I asked this on IRC, but I will ask it here as well.
>>>>>
>>>>> When one does 'ceph -s', it shows client IO.
>>>>>
>>>>> The question is simple.
>>>>>
>>>>> Is this total throughput or what the clients would see?
>>>>>
>>>>> Since the replication factor is 3, that means for every client write,
>>>>> 3 copies are actually written.
>>>>>
>>>>> First, let's assume I have only one CephFS client writing data.
>>>>>
>>>>> If this is total throughput, then to get the maximum throughput a
>>>>> client would see, do I need to divide it by 3?
>>>>>
>>>>> Otherwise, if this is what my client sees, do I need to multiply it by
>>>>> 3 to see what my maximum cluster throughput would be?
>>>>
>>>>
>>>> I believe this is client-facing IO. It's pretty simple to check if
>>>> you've got an idle cluster: run rados bench and see whether the client
>>>> IO reported by 'ceph -s' is about the same as the bench throughput or
>>>> about three times as large. ;)
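>>>>
>>>> For example (the pool name below is just a placeholder; use a test
>>>> pool you can safely write to), something like:
>>>>
>>>>     rados bench -p testpool 30 write   # drive client writes for 30s
>>>>     watch ceph -s                      # in another terminal, watch the client IO line
>>>>
>>>> With a replication factor of 3, raw replicated throughput would show
>>>> up as roughly 3x the bench number, while client-facing IO should
>>>> roughly match it.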
>>>> -Greg
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


