too few pgs in cache tier

Hi all,
we use an EC pool with a small cache tier in front of it for our
archive data (4 * 16 TB VM disks).
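
For context, a cache tier like ours is typically wired up roughly like
this (only a sketch; the EC pool name "ecpool" and the target_max_bytes
value are placeholders, not our exact commands):

ceph osd tier add ecpool ssd-archiv             # attach the cache pool to the EC pool
ceph osd tier cache-mode ssd-archiv writeback   # cache absorbs reads and writes
ceph osd tier set-overlay ecpool ssd-archiv     # route client I/O through the cache
ceph osd pool set ssd-archiv hit_set_type bloom
ceph osd pool set ssd-archiv target_max_bytes 500000000000   # flush/evict limit (placeholder value)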

The EC pool has k=3;m=2 because we started with 5 nodes, and we want to
migrate to a new EC pool with k=5;m=2. Therefore we moved one VM disk
(16 TB) from the Ceph cluster to an FC RAID using the Proxmox VE
"move disk" function.

The move finished, but while the Ceph VM image was being removed, the
warnings "'ssd-archiv' at/near target max" and "pool ssd-archiv has too
few pgs" occurred.

Some hours later only the second warning remained.

ceph health detail
HEALTH_WARN pool ssd-archiv has too few pgs
pool ssd-archiv objects per pg (51196) is more than 14.7709 times
cluster average (3466)
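
As far as I understand, this check compares each pool's objects per PG
against the cluster average; the inputs can be looked at roughly like
this (the skew option name is from the docs, default 10 as far as I know):

ceph df                               # object count per pool
ceph osd pool get ssd-archiv pg_num   # PG count of the cache pool
# the warning fires when objects/PG exceed mon_pg_warn_max_object_skew
# (default 10) times the cluster average - here the ratio is 14.77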

Info about the image that was deleted:
rbd image 'vm-409-disk-1':
        size 16384 GB in 4194304 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.2b8fda574b0dc51
        format: 2
        features: layering
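
Quick sanity check of those numbers: order 22 means 4 MiB objects, so a
16 TiB image is striped into exactly the object count shown above:

echo $(( 16 * 1024 * 1024 / 4 ))   # 16 TiB / 4 MiB per object = 4194304 objects

If a large share of those objects got promoted into the small cache
pool during the move/delete, that alone would explain the objects-per-PG
skew.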

I think we hit http://tracker.ceph.com/issues/8103,
but normally a single read should not promote the data into the cache
tier, should it? Does deleting count as a second read?
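
One way to see what is currently sitting in the cache pool and to push
it out again (a sketch; the flush/evict will of course generate load on
the EC pool):

rados -p ssd-archiv ls | wc -l              # count objects currently in the cache pool
rados -p ssd-archiv cache-flush-evict-all   # flush dirty objects to the EC pool, evict clean ones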

Our ceph version: 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)


Regards

Udo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



