Re: Write back mode Cache-tier behavior


The hit set count/period is supposed to control whether an object lives in the cache pool or in the cold storage pool. By setting it to 0, every access promotes the object. That is fine for writes, but in my use case, for example, I don't want every read operation to promote an object, and that is exactly what happens. There is no way to adjust the warm-up. I hope that is clear.
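For what it's worth, later Ceph releases expose recency settings that target exactly this: a read only promotes an object after it has appeared in several consecutive hit sets, while writes can still promote immediately. A rough sketch; the pool name hot-cache and the values here are illustrative, not recommendations:

# Keep 12 bloom-filter hit sets, each covering a 4-hour window:
ceph osd pool set hot-cache hit_set_type bloom
ceph osd pool set hot-cache hit_set_count 12
ceph osd pool set hot-cache hit_set_period 14400
# Require the object to appear in the 2 most recent hit sets before
# a read promotes it; keep promoting on every write:
ceph osd pool set hot-cache min_read_recency_for_promote 2
ceph osd pool set hot-cache min_write_recency_for_promote 0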


On Tue, Jun 6, 2017 at 05:26, TYLin <wooertim@xxxxxxxxx> wrote:

On Jun 6, 2017, at 11:18 AM, jiajia zhong <zhong2plus@xxxxxxxxx> wrote:

It's very similar to ours, but is there any need to separate the OSDs for different pools? Why?
Below is our crushmap.

-98   6.29997 root tier_cache                                       
-94   1.39999     host cephn1-ssd                                   
 95   0.70000         osd.95           up  1.00000          1.00000 
101   0.34999         osd.101          up  1.00000          1.00000 
102   0.34999         osd.102          up  1.00000          1.00000 
-95   1.39999     host cephn2-ssd                                   
 94   0.70000         osd.94           up  1.00000          1.00000 
103   0.34999         osd.103          up  1.00000          1.00000 
104   0.34999         osd.104          up  1.00000          1.00000 
-96   1.39999     host cephn3-ssd                                   
105   0.34999         osd.105          up  1.00000          1.00000 
106   0.34999         osd.106          up  1.00000          1.00000 
 93   0.70000         osd.93           up  1.00000          1.00000 
-93   0.70000     host cephn4-ssd                                   
 97   0.34999         osd.97           up  1.00000          1.00000 
 98   0.34999         osd.98           up  1.00000          1.00000 
-97   1.39999     host cephn5-ssd                                   
 96   0.70000         osd.96           up  1.00000          1.00000 
 99   0.34999         osd.99           up  1.00000          1.00000 
100   0.34999         osd.100          up  1.00000          1.00000
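
A dedicated root like tier_cache only takes effect once a CRUSH rule selects it and the cache pools are switched to that rule. A minimal sketch using pre-Luminous commands; the rule name, pool name, and ruleset id are illustrative:

# Create a replicated rule that picks hosts under the tier_cache root:
ceph osd crush rule create-simple cache_rule tier_cache host
# Check the id assigned to the new rule:
ceph osd crush rule dump cache_rule
# Point the cache pool at that rule (assuming it got ruleset id 1):
ceph osd pool set hot-cache crush_ruleset 1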


Because Ceph cannot distinguish metadata requests from data requests. If we used the same set of OSDs for both the metadata cache and the data cache, data requests could consume the bandwidth needed by metadata requests and lead to long response times.
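For reference, putting a separate cache pool in front of the metadata pool looks roughly like this; the pool names cephfs_metadata and meta-cache are examples, not our actual names:

# Attach meta-cache as a writeback cache tier in front of cephfs_metadata:
ceph osd tier add cephfs_metadata meta-cache
ceph osd tier cache-mode meta-cache writeback
# Route client I/O for cephfs_metadata through the cache tier:
ceph osd tier set-overlay cephfs_metadata meta-cache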

Thanks,
Ting Yi Lin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
