Re: PGs allocated to osd with weights 0


 



Hi,

I can’t get data flushed out of OSDs whose weights are set to 0. Is there any way to check the tasks queued for PG remapping? Thank you.

Can you give some more details about your cluster (replicated or EC pools, applied CRUSH rules, etc.)? My first guess would be that the other OSDs are (near) full, so the PGs can't be recovered on the remaining servers. Alternatively, your CRUSH rules may not allow redistribution of those PGs since your OSD tree has changed.
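To check which PGs are still waiting to move off the drained OSDs, a few standard Ceph diagnostic commands may help (this is a sketch of common checks, not a definitive procedure; the exact output depends on your cluster and Ceph release):

```shell
#!/bin/sh
# Overall health, including any full/nearfull OSD warnings that would
# block recovery onto the remaining servers.
ceph health detail

# Utilization and weights per OSD, arranged by CRUSH tree.
ceph osd df tree

# PGs that are stuck unclean (not yet remapped/recovered).
ceph pg dump_stuck unclean

# PGs currently in the "remapped" state, i.e. queued to move.
ceph pg ls remapped

# Dump the CRUSH rules to verify they still allow placing the PGs
# on the remaining hosts after the OSD tree changed.
ceph osd crush rule dump
```

For a single problematic PG, `ceph pg <pgid> query` shows its acting and up sets and the reason it is not making progress.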

The output of

ceph osd df tree

would help.

Regards,
Eugen


Quoting Yanko Davila <davila@xxxxxxxxxxxx>:

Hello

I can’t get data flushed out of OSDs whose weights are set to 0. Is there any way to check the tasks queued for PG remapping? Thank you.

Yanko.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com








