cache questions

Hi,

I have been doing some tests with rados bench write on an EC storage
pool with a writeback cache pool (replicated, size 3) in front of it,
and I have some questions:
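
For reference, the setup was roughly along the lines below (the pool
names match the rados df output further down; the erasure-code profile,
PG counts and bench duration are placeholders, not the exact values I
used):

# erasure-coded backing pool (profile and PG counts are placeholders)
ceph osd erasure-code-profile set ecprofile k=2 m=1
ceph osd pool create ecdata 128 128 erasure ecprofile

# replicated cache pool, put in front of the EC pool as a writeback tier
ceph osd pool create cache 128 128
ceph osd pool set cache size 3
ceph osd tier add ecdata cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay ecdata cache

# limit the cache tier to 280G (280 * 1024^3 bytes)
ceph osd pool set cache target_max_bytes 300647710720

# the benchmark itself (duration is a placeholder)
rados bench -p ecdata 600 write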

* I set target_max_bytes to 280G, and after some time of writing the
cache pool stays filled with around 250G of data. rados df output:

pool name  category          KB   objects  clones  degraded  unfound       rd  rd KB       wr        wr KB
cache      -          244371669     60685       0         0        0        0      0  3521019   7329726464
data       -                  0         0       0         0        0        0      0        0            0
ecdata     -         7211409408   1760598       0         0        0        0      0  1760598   7211409408
metadata   -                  2        20       0         0        0        0      0       21            8
   total used     11019283916      1821303
   total avail    83093579980
   total space    94112863896

I thought target_max_bytes referred to raw bytes in the cache pool, but
I suppose it is the actual (logical) data then? (I did not find this in
the docs.) So the cache pool will use up to about 3*280G of raw space?
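
Related to this: as far as I understand, the flushing/eviction
thresholds are fractions of target_max_bytes, which might explain why
the pool levels off below 280G. These are the settings I am referring
to (values shown are what I believe to be the defaults):

# current limit on the cache pool
ceph osd pool get cache target_max_bytes

# fraction of target_max_bytes at which dirty objects start being flushed
ceph osd pool set cache cache_target_dirty_ratio 0.4

# fraction of target_max_bytes at which clean objects start being evicted
ceph osd pool set cache cache_target_full_ratio 0.8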


* Another question about that rados df output: there are no reads
reported on the cache pool, but of course objects have to be read when
they get flushed/evicted to the ecdata pool. I guess this internal
traffic is just not taken into account in the rados df output?
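
A simple way to keep an eye on this while the bench runs would be
something like the following (maybe the tiering agent traffic simply
doesn't count as client I/O?):

# poll the per-pool counters
watch -n 10 rados df

# client I/O rates per pool
ceph osd pool stats cache
ceph osd pool stats ecdata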



* After the rados bench test has finished, rados starts to clean up the
written data. I see that it then reads data from the ecdata pool (is
this for verification?), but nothing is read from the cache:


pool name  category          KB   objects  clones  degraded  unfound       rd  rd KB       wr        wr KB
cache      -          147251372   2975965       0         0        0        1      1  8976363  12472766465
data       -                  0         0       0         0        0        0      0        0            0
ecdata     -         9528655873   2326333       0         0        0  2919104      0  3763883  12472758273
metadata   -                  2        20       0         0        0        0      0       21            8
   total used     16216299992      5302318
   total avail    77896563904
   total space    94112863896


Is this normal behaviour?
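
In case it helps to compare: the write and cleanup phases can also be
split, so one can check in between where the benchmark objects actually
live (object names start with benchmark_data, as far as I remember):

# write phase only, keep the objects around
rados bench -p ecdata 600 write --no-cleanup

# count benchmark objects in the cache tier vs. the backing pool
rados -p cache ls | grep -c benchmark_data
rados -p ecdata ls | grep -c benchmark_data

# remove the benchmark objects afterwards
# (newer rados versions have a cleanup subcommand for this)
rados -p ecdata cleanup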


Thanks !!

Kenneth


