Re: about PG_Number

In our experience, too few PGs leads to non-uniform disk load as well. We had lots of blocked I/O when many of the disks were idle while an equal number were always at 100% utilization, and more blocked I/O when OSDs restarted and scanned the huge PGs. Scrubs were also painful for similar reasons. Since increasing the PG number, things have gotten a lot better. There is still a pretty big discrepancy between the active and idle disks; after we get our SSD cache tier in, we will change to straw2 and do some balancing.
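For anyone sizing this, the commonly cited guideline is to target roughly 100 PGs per OSD across the cluster, i.e. total PGs ~= (OSD count * 100) / replica count, rounded up to a power of two. A minimal sketch of that arithmetic (the function name and the 100-PGs-per-OSD target are illustrative assumptions, not an official tool):

```python
def recommended_pg_num(num_osds, replicas, pgs_per_osd=100):
    """Return a power-of-two pg_num near the rule-of-thumb target
    of ``pgs_per_osd`` placement groups per OSD, divided by the
    pool's replication factor."""
    target = (num_osds * pgs_per_osd) / replicas
    # Round up to the next power of two, as the guideline suggests.
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

# Example: 40 OSDs with 3x replication -> target ~1333 -> 2048.
print(recommended_pg_num(40, 3))
```

Note pg_num can be increased later but (in releases of this era) not decreased, so erring slightly low and growing is the safer direction.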

Robert LeBlanc

Sent from a mobile device please excuse any typos.

On Nov 13, 2015 5:34 AM, "Francois Lafont" <flafdivers@xxxxxxx> wrote:
Hi,

On 13/11/2015 09:13, Vickie ch wrote:

> If you have a large number of OSDs but a low PG number, you will find
> your data is written unevenly.
> Some OSDs get no chance to receive data.
> On the other side, a PG number that is too large for a small number of
> OSDs has a chance to cause data loss.

Data loss, are you sure?

Personally, I would have said:

          few PGs per OSD                    many PGs per OSD
              ------------------------------------>
 * Data distributed less evenly            * Well-balanced distribution of data
 * Uses less CPU and RAM                   * Uses more CPU and RAM

No?


François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
