Re: Cache Pool and EC: objects didn't flush to a cold EC storage

On 08-Mar-16 00:41, Robert LeBlanc wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Did you also set "target_max_bytes" to the size of the pool? That bit
us when we didn't have it set. The ratio then uses the
target_max_bytes to know when to flush.

Yes, I set this option later.
But the cluster started working again after I added a new OSD to the cache tier pool and the 'full OSD' status was cleared.
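
For reference, a sketch of setting both knobs on the cache pool (the 1 TiB value is only a placeholder; cache_target_dirty_ratio and cache_target_full_ratio are computed as fractions of target_max_bytes):

ceph osd pool set hotec target_max_bytes 1099511627776
ceph osd pool set hotec cache_target_dirty_ratio 0.4
ceph osd pool set hotec cache_target_full_ratio 0.8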

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Sun, Mar 6, 2016 at 2:17 AM, Mike Almateia <mike.almateia@xxxxxxxxx> wrote:
Hello Cephers!

When my cluster hit the 'full ratio' setting, objects from the cache pool didn't
flush to the cold storage.

1. Hit the 'full ratio':

2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:35:55.447205 osd.64 10.22.11.21:6824/31423 4329 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:36:29.255815 osd.64 10.22.11.21:6824/31423 4332 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:37:04.769765 osd.64 10.22.11.21:6824/31423 4333 : cluster
[WRN] OSD near full (90%)
...

2. Well, OK. I set the option 'ceph osd pool set hotec cache_target_full_ratio
0.8'.
But none of the objects flushed at all.
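
(For what it's worth, a quick check along these lines shows whether target_max_bytes is actually non-zero on the cache pool; the ratios are computed against it, and with it unset the tiering agent never flushes or evicts on its own:)

ceph osd pool get hotec target_max_bytes
ceph osd pool get hotec cache_target_full_ratio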

3. OK. I tried to flush all objects manually:
[root@c1 ~]# rados -p hotec cache-flush-evict-all
         rbd_data.34d1f5746d773.0000000000016ba9
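
(A sketch of pushing a single object through by hand, using the object name printed above; cache-try-flush and cache-evict are the per-object variants of the command above:)

rados -p hotec cache-try-flush rbd_data.34d1f5746d773.0000000000016ba9
rados -p hotec cache-evict rbd_data.34d1f5746d773.0000000000016ba9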

4. After a full day the objects were still in the cache pool; nothing had flushed:
[root@c1 ~]# rados df
pool name          KB           objects    clones  degraded  unfound  rd       rd KB       wr        wr KB
data               0            0          0       0         0        6        4           158212    215700473
hotec              797656118    25030755   0       0         0        370599   163045649   69947951  17786794779
rbd                0            0          0       0         0        0        0           0         0
  total used       2080570792   25030755
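
(As a sketch, 'ceph df detail' should also report a DIRTY column per pool, which shows whether the tiering agent still considers these objects dirty:)

ceph df detail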

Is this a bug or expected behavior?

--
Mike. runs!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



