Re: strange cache tier behaviour with cephfs

Hi,

OK, let's go through it step by step:

Before the `dd if=file of=/dev/zero` run:

[root@cephmon1 ~]# rados -p ssd_cache cache-flush-evict-all
-> moves everything out of the cache pool
[root@cephmon1 ~]# rados -p ssd_cache ls
[root@cephmon1 ~]#
-> empty
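
For completeness, the same thing can be cross-checked with read-only status
commands (ssd_cache is the cache pool from above; nothing here changes state):

rados df | grep ssd_cache   # object count / bytes currently held by the cache pool
ceph df detail              # per-pool usage as reported by the cluster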

cache osds at that point:

/dev/sde1       234315556     84368  234231188   1% /var/lib/ceph/osd/ceph-0
/dev/sdf1       234315556    106716  234208840   1% /var/lib/ceph/osd/ceph-1
/dev/sdi1       234315556     97132  234218424   1% /var/lib/ceph/osd/ceph-2
/dev/sdj1       234315556     87584  234227972   1% /var/lib/ceph/osd/ceph-3

/dev/sde1       234315556     90252  234225304   1% /var/lib/ceph/osd/ceph-8
/dev/sdf1       234315556    107424  234208132   1% /var/lib/ceph/osd/ceph-9
/dev/sdi1       234315556    378104  233937452   1% /var/lib/ceph/osd/ceph-10
/dev/sdj1       234315556     94856  234220700   1% /var/lib/ceph/osd/ceph-11


Now we run the dd.


20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 85.6032 s, 125 MB/s

[root@cephmon1 ~]# rados -p ssd_cache ls | wc -l
40
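
Just as a sketch, the actual bytes behind those 40 objects can be summed with
`rados stat`, assuming its usual "<pool>/<obj> mtime <ts>, size <bytes>" output
where the size is the last field:

total=0
for obj in $(rados -p ssd_cache ls); do
    # last field of "rados stat" is the object size in bytes
    size=$(rados -p ssd_cache stat "$obj" | awk '{print $NF}')
    total=$((total + size))
done
echo "$total bytes in ssd_cache"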

/dev/sde1       234315556    624896  233690660   1% /var/lib/ceph/osd/ceph-0
/dev/sdf1       234315556    643200  233672356   1% /var/lib/ceph/osd/ceph-1
/dev/sdi1       234315556    596744  233718812   1% /var/lib/ceph/osd/ceph-2
/dev/sdj1       234315556    615868  233699688   1% /var/lib/ceph/osd/ceph-3

/dev/sde1       234315556    573496  233742060   1% /var/lib/ceph/osd/ceph-8
/dev/sdf1       234315556    570240  233745316   1% /var/lib/ceph/osd/ceph-9
/dev/sdi1       234315556    624032  233691524   1% /var/lib/ceph/osd/ceph-10
/dev/sdj1       234315556    627216  233688340   1% /var/lib/ceph/osd/ceph-11

So cache usage went from ~1 GB to ~4 GB (for an 11 GB file).
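
For reference, that figure is just the sum of the df "Used" column (KiB) over
the eight cache OSDs before and after the dd. A throwaway sketch like this
reproduces it, assuming ceph-0..3 sit on one node and ceph-8..11 on the other
as the listing suggests (run it on each node and add the results):

df -P /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-1 \
      /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-3 |
  awk 'NR > 1 { used += $3 } END { printf "%.1f GiB used\n", used / 1024 / 1024 }'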

That means roughly 3 GB were copied from the cold pool to the cache pool.

So I assume that maybe 3 GB were served from the cache pool, while the other
8 GB were served directly from the cold storage.

The documentation says this about writeback mode:

" When a Ceph client needs data that resides in the storage tier, the
cache tiering agent migrates the data to the cache tier on read, then it
is sent to the Ceph client.
"

This is clearly not happening here.

And the question is why.
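
Since Christian points at the Jewel promotion changes below, it is probably
worth double-checking which values currently gate promotion on read. These
are read-only queries of the standard cache-tier pool options (ssd_cache is
the pool from above):

# if no hit sets are configured, or the recency requirement cannot be met,
# reads may not trigger a promotion
ceph osd pool get ssd_cache hit_set_type
ceph osd pool get ssd_cache hit_set_count
ceph osd pool get ssd_cache hit_set_period
ceph osd pool get ssd_cache min_read_recency_for_promote
ceph osd pool get ssd_cache min_write_recency_for_promote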


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


On 14.06.2016 at 03:00, Christian Balzer wrote:
> 
> Hello,
> 
> On Tue, 14 Jun 2016 02:52:43 +0200 Oliver Dzombic wrote:
> 
>> Hi Christian,
>>
>> If I read a 1.5 GB file, which is not changing at all,
>>
>> then I expect the agent to copy it once from the cold pool to the
>> cache pool.
>>
> Before Jewel, that is what you would have seen, yes.
> 
> Did you read what Sam wrote and me in reply to him?
> 
>> In fact it's making a new copy every time.
>>
> Is it?
> Is there 1.5GB of data copied into the cache tier each time?
> An object is 4MB, you only had 8 in your first run, then 16...
> 
>> I can see that from the increasing disk usage of the cache and the
>> increasing object count.
>>
>> And from the non-existent improvement in speed.
>>
> That could be down to your network or other factors on your client.
> 
> Christian
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



