Re: strange cache tier behaviour with cephfs

The basic logic is that if an IO is not in the cache tier, it is proxied, meaning the IO is done directly on the base tier. The throttle is designed to minimise the latency impact of promotions and flushes.

So yes, during testing it will not promote everything, but during normal workloads it makes things much better. The defaults were chosen after benchmarks showed they were the turning point where performance started to become affected.
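
To see why a read test promotes so few objects, here is a back-of-envelope sketch. The numbers are assumptions taken from this thread (a reported ~5 MB/s default for osd_tier_promote_max_bytes_sec) and the usual 4 MB RADOS object size, not verified figures:

```python
# Rough model of the cache tier promotion throttle discussed above.
THROTTLE_BYTES_PER_SEC = 5 * 1024 * 1024   # assumed default, ~5 MB/s
OBJECT_SIZE = 4 * 1024 * 1024              # typical RADOS object size

def promoted_objects(read_duration_sec):
    """Upper bound on objects the tier agent may promote during one read."""
    return int(read_duration_sec * THROTTLE_BYTES_PER_SEC / OBJECT_SIZE)

# A 1.5 GB file is ~384 objects; if a sequential read of it takes ~6 s,
# only a handful of those objects can be promoted per pass.
file_objects = (1536 * 1024 * 1024) // OBJECT_SIZE
print(file_objects, promoted_objects(6))   # 384 7
```

That roughly matches the "around 8 objects each time" observed earlier in the thread: the whole file is never promoted in one pass by design.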

But yes, I think there could be a better section on tuning the cache tier. It's not an easy task, though, as there are a lot of variables that change depending on the hardware and workload.

Sent from Nine

From: Oliver Dzombic <info@xxxxxxxxxxxxxxxxx>
Sent: 14 Jun 2016 12:11 p.m.
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] strange cache tier behaviour with cephfs

Hi,

OK, the write test now also shows a more expected behaviour.

As it seems to me, if there is more writing than

osd_tier_promote_max_bytes_sec

allows, the writes go directly to the cold pool (which is really good behaviour, seriously).

But that should definitely be added to the documentation. Otherwise (new) people have no chance of finding it.
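
For reference, a sketch of where these throttles could be set in ceph.conf. The option names are the ones discussed in this thread; the values are only illustrative, and the ~5 MB/s default is the figure reported here, not verified against the docs:

```ini
# [osd] section of ceph.conf -- cache tier promotion throttles
[osd]
# Max bytes/sec the tier agent will promote per OSD.
# The default was reported in this thread as ~5 MB/s (5242880);
# raising it promotes more aggressively at the cost of latency.
osd_tier_promote_max_bytes_sec = 20971520
# Companion cap on promoted objects/sec (value here is illustrative).
osd_tier_promote_max_objects_sec = 100
```

The same settings can reportedly also be changed at runtime with `ceph tell osd.* injectargs`.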

The search engines show fewer than 10 hits for "osd_tier_promote_max_bytes_sec", one of them in

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007632.html

which covers a totally different topic.

Anyway, super super big thanks for your time !

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG (haftungsbeschraenkt)
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, registered at Amtsgericht Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


Am 14.06.2016 um 07:47 schrieb Nick Fisk:
> osd_tier_promote_max_objects_sec
> and
> osd_tier_promote_max_bytes_sec
>
> is what you are looking for. I think by default it's set to 5 MB/s, which
> would roughly correlate with why you are only seeing around 8 objects
> promoted each time. It was done like this because too many promotions hurt
> performance, so you don't actually want to promote on every IO.
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>> Christian Balzer
>> Sent: 14 June 2016 02:00
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: [ceph-users] strange cache tier behaviour with cephfs
>>
>>
>> Hello,
>>
>> On Tue, 14 Jun 2016 02:52:43 +0200 Oliver Dzombic wrote:
>>
>>> Hi Christian,
>>>
>>> if I read a 1.5 GB file, which is not changing at all,
>>>
>>> then I expect the agent to copy it one time from the cold pool to the
>>> cache pool.
>>>
>> Before Jewel, that is what you would have seen, yes.
>>
>> Did you read what Sam wrote and me in reply to him?
>>
>>> In fact it makes a new copy every time.
>>>
>> Is it?
>> Is there 1.5GB of data copied into the cache tier each time?
>> An object is 4MB, you only had 8 in your first run, then 16...
>>
>>> I can see that by the increasing disk usage of the cache and the
>>> increasing object number.
>>>
>>> And the non existing improvement of speed.
>>>
>> That could be down to your network or other factors on your client.
>>
>> Christian
>> --
>> Christian Balzer        Network/Systems Engineer
>> chibi@xxxxxxx   Global OnLine Japan/Rakuten Communications
>> http://www.gol.com/
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


