Re: Cache Pool and EC: objects didn't flush to a cold EC storage

Did you also set "target_max_bytes" to the size of the pool? That bit
us when we didn't have it set. The cache_target_full_ratio (and the
dirty ratio) are calculated as a fraction of target_max_bytes, so
without it the tiering agent never knows when to start flushing.
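
For reference, a rough sketch of the kind of thing we set on our cache
pools. The pool name hotec and the 1 TiB figure below are only
placeholders based on your output; substitute the actual capacity of
your cache tier:

# Tell the tiering agent how large the cache pool may grow; the
# flush/evict ratios are computed against this value.
ceph osd pool set hotec target_max_bytes 1099511627776   # example: 1 TiB

# Optionally also cap by object count; the agent acts on whichever
# limit is reached first.
ceph osd pool set hotec target_max_objects 1000000

# Sanity-check what the pool currently has configured.
ceph osd pool get hotec target_max_bytes
ceph osd pool get hotec cache_target_dirty_ratio
ceph osd pool get hotec cache_target_full_ratio

With those in place, flushing should start once dirty data exceeds
cache_target_dirty_ratio * target_max_bytes, and eviction once the pool
passes cache_target_full_ratio * target_max_bytes.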
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Sun, Mar 6, 2016 at 2:17 AM, Mike Almateia <mike.almateia@xxxxxxxxx> wrote:
> Hello Cephers!
>
> When my cluster hit the "full ratio" setting, objects from the cache pool were
> not flushed to the cold EC storage.
>
> 1. Hit the 'full ratio':
>
> 2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster [WRN] OSD near full (90%)
> 2016-03-06 11:35:55.447205 osd.64 10.22.11.21:6824/31423 4329 : cluster [WRN] OSD near full (90%)
> 2016-03-06 11:36:29.255815 osd.64 10.22.11.21:6824/31423 4332 : cluster [WRN] OSD near full (90%)
> 2016-03-06 11:37:04.769765 osd.64 10.22.11.21:6824/31423 4333 : cluster [WRN] OSD near full (90%)
> ...
>
> 2. Well, ok. I set the option 'ceph osd pool set hotec cache_target_full_ratio
> 0.8'.
> But none of the objects were flushed at all.
>
> 3. Ok. I tried to flush all objects manually:
> [root@c1 ~]# rados -p hotec cache-flush-evict-all
>         rbd_data.34d1f5746d773.0000000000016ba9
>
> 4. After a full day the objects were still in the cache pool, nothing had been flushed:
> [root@c1 ~]# rados df
> pool name            KB   objects  clones  degraded  unfound      rd      rd KB        wr        wr KB
> data                  0         0       0         0        0       6          4    158212    215700473
> hotec         797656118  25030755       0         0        0  370599  163045649  69947951  17786794779
> rbd                   0         0       0         0        0       0          0         0            0
>   total used 2080570792  25030755
>
> Is this a bug or expected behavior?
>
> --
> Mike. runs!