osd_tier_promote_max_objects_sec and osd_tier_promote_max_bytes_sec are what
you are looking for. I think the bytes throttle defaults to 5MB/s, which would
roughly correlate with why you are only seeing around 8 objects being promoted
each time. It was done this way because too many promotions hurt performance,
so you don't actually want to promote on every IO. If you want to experiment,
there is a quick example of checking and raising these throttles below the
quoted mail.

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Christian Balzer
> Sent: 14 June 2016 02:00
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: strange cache tier behaviour with cephfs
>
> Hello,
>
> On Tue, 14 Jun 2016 02:52:43 +0200 Oliver Dzombic wrote:
>
> > Hi Christian,
> >
> > If I read a 1.5 GB file, which is not changing at all, then I expect
> > the agent to copy it one time from the cold pool to the cache pool.
> >
> Before Jewel, that is what you would have seen, yes.
>
> Did you read what Sam wrote, and my reply to him?
>
> > In fact it is making a new copy every time.
> >
> Is it?
> Is there 1.5GB of data copied into the cache tier each time?
> An object is 4MB, and you only had 8 in your first run, then 16...
>
> > I can see that by the increasing disk usage of the cache and the
> > increasing object count.
> >
> > And the non-existent improvement in speed.
> >
> That could be down to your network or other factors on your client.
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
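
A minimal sketch of checking and raising the promotion throttles at runtime,
assuming the Jewel option names above; osd.0, 100 MB/s and 200 objects/s are
just example values, not recommendations:

  # On the OSD host: show the current throttle values via the admin socket
  ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec
  ceph daemon osd.0 config get osd_tier_promote_max_objects_sec

  # Raise them cluster-wide at runtime (values are bytes/sec and objects/sec)
  ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 104857600'
  ceph tell osd.* injectargs '--osd_tier_promote_max_objects_sec 200'

To make the change persistent, the same options can be set under [osd] in
ceph.conf. Bear in mind the throttles exist for the reason given above:
raising them too far just trades slow cold reads for constant promotion
traffic into the cache tier.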