Re: strange cache tier behaviour with cephfs

Hi,

wow.

After setting the following in ceph.conf and restarting the whole cluster:

osd tier promote max bytes sec = 1610612736
osd tier promote max objects sec = 20000

And repeating the test, the cache pool received the full 11 GB of the test
file, with 2560 objects copied from the cold pool.
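
(Side note: the same limits can presumably also be injected at runtime
instead of restarting the whole cluster, something like

ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 1610612736 --osd_tier_promote_max_objects_sec 20000'

I have not tried that here; the underscore names are assumed to be the
runtime equivalents of the spaced ceph.conf keys above.)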



Aaand, repeating the test multiple times showed that each time there is
some movement within the cache pool WITHOUT any copy from the cold pool.
It shifts some MB within the cache pool from one OSD to another.

So it changes, for example, from:

/dev/sde1       234315556   2559404  231756152   2% /var/lib/ceph/osd/ceph-0
/dev/sdf1       234315556   2848300  231467256   2% /var/lib/ceph/osd/ceph-1
/dev/sdi1       234315556   2820596  231494960   2% /var/lib/ceph/osd/ceph-2
/dev/sdj1       234315556   2712796  231602760   2% /var/lib/ceph/osd/ceph-3

to

/dev/sde1       234315556   2670360  231645196   2% /var/lib/ceph/osd/ceph-0
/dev/sdf1       234315556   2951116  231364440   2% /var/lib/ceph/osd/ceph-1
/dev/sdi1       234315556   2903000  231412556   2% /var/lib/ceph/osd/ceph-2
/dev/sdj1       234315556   2831992  231483564   2% /var/lib/ceph/osd/ceph-3


So around 400 MB has been shifted within the cache pool (for whatever reason).

The number of objects is stable and unchanged.
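
(For anyone reproducing this: besides the plain df output above, the
per-pool object counts and the per-OSD usage can also be checked with the
standard tools, roughly

rados df
ceph osd df

for per-pool objects/bytes and per-OSD utilisation respectively -- the
exact columns may differ between releases.)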

The speed goes from ~100 MB/s up to ~170 MB/s, which is close to the
network maximum, considering the client is busy too.

So this hidden and undocumented config option changed the behaviour to
the behaviour expected according to the documentation.
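
(Rough back-of-envelope, assuming ~4 MB objects and the 5 MB/s default
Nick mentions: the default throttle allows only about one object
promotion per second, which would explain why only a handful of objects
showed up per run before; with the limits raised to 1610612736 bytes/s
(~1.5 GiB/s) and 20000 objects/s, all 2560 objects of the 11 GB file can
be promoted within a single run.)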

Thank you very much for this hint!

I will repeat now all the testing.

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


On 14.06.2016 at 07:47, Nick Fisk wrote:
> osd_tier_promote_max_objects_sec
> and
> osd_tier_promote_max_bytes_sec
> 
> is what you are looking for. I think by default it's set to 5MB/s, which
> would roughly correlate to why you are only seeing around 8 objects being
> promoted each time. This was done because too many promotions hurt
> performance, so you don't actually want to promote on every IO.
> 
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>> Christian Balzer
>> Sent: 14 June 2016 02:00
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re:  strange cache tier behaviour with cephfs
>>
>>
>> Hello,
>>
>> On Tue, 14 Jun 2016 02:52:43 +0200 Oliver Dzombic wrote:
>>
>>> Hi Christian,
>>>
>>> if I read a 1.5 GB file, which is not changing at all.
>>>
>>> Then I expect the agent to copy it one time from the cold pool to the
>>> cache pool.
>>>
>> Before Jewel, that is what you would have seen, yes.
>>
>> Did you read what Sam wrote and me in reply to him?
>>
>>> In fact it's making a new copy every time.
>>>
>> Is it?
>> Is there 1.5GB of data copied into the cache tier each time?
>> An object is 4MB, you only had 8 in your first run, then 16...
>>
>>> I can see that by the increasing disk usage of the cache and the
>>> increasing object count.
>>>
>>> And the non-existent improvement in speed.
>>>
>> That could be down to your network or other factors on your client.
>>
>> Christian
>> --
>> Christian Balzer        Network/Systems Engineer
>> chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
>> http://www.gol.com/
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



