Re: OSD down after PG increase

2014-03-13 9:02 GMT+01:00 Andrey Korolyov <andrey@xxxxxxx>:
> Yes, if you have an essentially high amount of committed data in the cluster
> and/or a large number of PGs (tens of thousands).

I've increased the pool from 64 to 8192 PGs.
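
For context, that means the usual pg_num/pgp_num bump (<pool> stands in
for the actual pool name; pgp_num has to follow pg_num before any
rebalancing actually starts):

    ceph osd pool set <pool> pg_num 8192
    ceph osd pool set <pool> pgp_num 8192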

> If you have room to experiment with this transition from scratch, you may
> want to play with the numbers in the OSD queues, since they cause
> deadlock-like behaviour on operations like increasing the PG count or
> deleting a large pool. If the cluster has no I/O at all at the moment, such
> behaviour is definitely not expected.

My cluster was totally idle; it's a test deployment built from the
ceph-ansible repository, and nobody was using it.
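
Which queue settings do you have in mind, exactly? I'm guessing the usual
recovery/backfill throttles in ceph.conf, something like the sketch below
(values are illustrative, not recommendations):

    [osd]
        osd max backfills = 1          ; limit concurrent backfills per OSD
        osd recovery max active = 1    ; limit in-flight recovery ops per OSD
        osd recovery op priority = 1   ; deprioritise recovery vs. client I/O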