Re: Cache Pool and EC: objects didn't flush to a cold EC storage

On 06-Mar-16 17:28, Christian Balzer wrote:
On Sun, 6 Mar 2016 12:17:48 +0300 Mike Almateia wrote:

Hello Cephers!

When my cluster hit the "full ratio" settings, objects from the cache pool
didn't flush to the cold storage.

As always, versions of everything, Ceph foremost.

Yes, of course. I think this problem is independent of the versions involved.


1. Hit the 'full ratio':

2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:35:55.447205 osd.64 10.22.11.21:6824/31423 4329 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:36:29.255815 osd.64 10.22.11.21:6824/31423 4332 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:37:04.769765 osd.64 10.22.11.21:6824/31423 4333 : cluster
[WRN] OSD near full (90%)
...
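A quick way to see how close each OSD is to the nearfull/full ratios (assuming
a Hammer or later release, where 'ceph osd df' is available):

# per-OSD utilisation and variance against the cluster ratios
ceph osd df
# cluster-wide and per-pool usage, including the cache pool
ceph df detail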

You want to:
a) read the latest (master) documentation for cache tiering
b) this ML and its archives, in particular the current thread titled
"Cache tier operation clarifications"

In short, target_max_bytes or target_max_objects NEEDS to be set.
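
For example, a minimal sketch against the 'hotec' pool from below (the limits
here are only placeholders; size them to the real cache tier capacity):

# cap the cache tier at ~1 TiB or 1M objects, whichever is reached first
ceph osd pool set hotec target_max_bytes 1099511627776
ceph osd pool set hotec target_max_objects 1000000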

Yes, I later set 'target_max_bytes' too, but nothing happened.
The cluster "got back to work" after I added new OSDs to the cache tier pool.
Maybe the "full osd" status drops all requests to the data, including 'delete'.


2. Well, OK. I set the option 'ceph osd pool set hotec
cache_target_full_ratio 0.8'.
But not a single object was flushed at all.

Flush and evict are 2 different things.

cache_target_dirty_ratio needs to be set as well (below full) for this to
work, aside from the issue above.
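
A sketch of the two ratios together, keeping dirty below full (the 0.4/0.8
values are only examples):

# start flushing dirty objects at 40% of target_max_bytes/objects
ceph osd pool set hotec cache_target_dirty_ratio 0.4
# start evicting clean objects at 80%
ceph osd pool set hotec cache_target_full_ratio 0.8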


Thanks.


3. OK. I tried to flush all objects manually:
[root@c1 ~]# rados -p hotec cache-flush-evict-all

          rbd_data.34d1f5746d773.0000000000016ba9

4. After a full day the objects were still in the cache pool; nothing had been flushed at all:
[root@c1 ~]# rados df
pool name        KB           objects    clones  degraded  unfound  rd      rd KB      wr        wr KB
data             0            0          0       0         0        6       4          158212    215700473
hotec            797656118    25030755   0       0         0        370599  163045649  69947951  17786794779
rbd              0            0          0       0         0        0       0          0         0
  total used     2080570792   25030755

Is it a bug or expected behaviour?

If you didn't set the cache to forward mode first, it will fill up again
immediately.
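
Roughly, a manual drain with the mode switched first would look like this
(pool name taken from above; newer releases may require appending
--yes-i-really-mean-it to the cache-mode change):

# send new writes straight to the backing EC pool
ceph osd tier cache-mode hotec forward
# flush/evict whatever is already cached
rados -p hotec cache-flush-evict-all
# switch back once the pool has drained
ceph osd tier cache-mode hotec writeback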

Ok. I get this. Thanks.


Christian


Thanks Christian. After adding new OSDs to the pool, Ceph re-balanced in ~18 hours and is working normally now.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



