Re: increase pg_num after adjusting OSD reweight

Thanks Christian.
This cluster has 7 nodes with 69 OSDs.
I know this version is quite old, but it is hard to stop the service in order to upgrade.

I will increase it slowly, in steps of 100, along the lines of the sketch below.
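
A minimal sketch of what I have in mind. Assumptions on my side: the pool is
named rbd, the ceph CLI is in the PATH, and polling 'ceph health' is a
good-enough signal that recovery has settled. The 4096 target also matches the
common rule of thumb of about 100 PGs per OSD divided by the replica count,
rounded up to a power of two (69 * 100 / 3 is about 2300, assuming 3 replicas):

#!/usr/bin/env python
# Sketch only: raise pg_num on the rbd pool in steps of 100, waiting for
# the cluster to return to HEALTH_OK between steps. The pool name, step
# size, and health polling are my assumptions, not tested advice.
import subprocess
import time

POOL = "rbd"
TARGET = 4096
STEP = 100

def ceph(*args):
    # Run one ceph CLI command and return its output as a string.
    return subprocess.check_output(("ceph",) + args).decode()

def wait_for_health_ok(poll=30):
    # Block until 'ceph health' reports HEALTH_OK again.
    while not ceph("health").startswith("HEALTH_OK"):
        time.sleep(poll)

current = 2048  # the pool's pg_num today
while current < TARGET:
    current = min(current + STEP, TARGET)
    ceph("osd", "pool", "set", POOL, "pg_num", str(current))
    wait_for_health_ok()  # let PG creation settle first
    # pgp_num has to follow pg_num before data actually rebalances.
    ceph("osd", "pool", "set", POOL, "pgp_num", str(current))
    wait_for_health_ok()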
Thanks again.


2016-04-25 15:55 GMT+08:00 Christian Balzer <chibi@xxxxxxx>:
>
> Hello,
>
> On Mon, 25 Apr 2016 13:23:04 +0800 lin zhou wrote:
>
>> Hi,Cephers:
>>
>> Recently I ran into a problem with full OSDs, and I have used reweight to
>> adjust it.
>> But now I want to increase pg_num before I can add new nodes to the
>> cluster.
>>
> How many more nodes, OSDs?
>
>> The current pg_num is 2048 and the total OSD count is 69. I want to
>> increase it to 4096.
>>
>> So what are the recommended steps: a one-time increase directly to 4096,
>> or increasing it slowly, e.g. by 200 each time?
>>
>> The ceph version is 0.67.8, and I use the rbd pool only.
>>
> That's quite old; AFAIK there are some changes in current Ceph versions
> that improve data placement.
>
> Also, current versions won't allow you to make large changes to pg_num
> because of the massive and prolonged impact that can have.
>
> So you're better off doing it in small steps, unless you can afford to
> have your cluster perform poorly for a long time.
>
>> The table of OSD id, reweight, pg count, and used capacity is below:
>>
> It's still quite uneven; I'd be worried that any OSD with more than 85%
> utilization might become near_full or even full during the data
> re-balancing.
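
A quick way to watch for exactly this while the increase runs is a small
check script. This is a minimal sketch; it assumes that
'ceph pg dump --format json' exposes per-OSD kb and kb_used figures under
an "osd_stats" key, as Dumpling-era releases do:

#!/usr/bin/env python
# Sketch only: flag OSDs close to the near_full threshold, so the pg_num
# increase can be paused before rebalancing fills them up. The JSON
# layout of "ceph pg dump" is assumed, not verified against 0.67.8.
import json
import subprocess

WARN_AT = 0.85  # flag anything above 85% used, per the advice above

dump = json.loads(
    subprocess.check_output(["ceph", "pg", "dump", "--format", "json"]))
for stat in dump["osd_stats"]:
    used = float(stat["kb_used"]) / float(stat["kb"])
    if used > WARN_AT:
        print("osd.%d is %.0f%% full, at risk during rebalancing"
              % (stat["osd"], used * 100))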
>
> Christian
>> dumped all in format plain
>> osd reweight        pg_num used
>> 0   0.89            106 83%
>> 1   1               102 73%
>> 2   0.9             104 87%
>> 3   0.9192          107 80%
>> 5   0.89            106 85%
>> 6   0.9271          108 82%
>> 7   1               112 77%
>> 8   0.9477          113 82%
>> 9   1               112 78%
>> 10  0.9177          109 79%
>> 11  1               108 76%
>> 12  0.9266          109 84%
>> 13  1               105 75%
>> 14  0.846           103 80%
>> 15  0.91            109 80%
>> 16  1               99   68%
>> 17  1               108 79%
>> 18  1               109 77%
>> 19  0.8506          109 84%
>> 20  0.9504          111 79%
>> 21  1               95   71%
>> 22  0.9178          106 76%
>> 23  1               108 76%
>> 24  0.9274          118 82%
>> 25  0.923           117 86%
>> 26  1               107 76%
>> 27  1               111 80%
>> 28  0.9254          101 80%
>> 29  0.9445          104 82%
>> 30  1               115 81%
>> 31  0.9285          105 75%
>> 32  0.7823          105 81%
>> 33  0.9002          111 81%
>> 34  0.8024          106 79%
>> 35  1               100 71%
>> 36  1               117 81%
>> 37  0.7949          106 79%
>> 38  0.9356          108 78%
>> 39  0.866           106 76%
>> 40  0.8322          105 76%
>> 41  0.9297          97   81%
>> 42  1               97   68%
>> 43  0.8393          115 81%
>> 44  0.9355          108 78%
>> 45  0.8429          115 84%
>> 46  1               100 71%
>> 47  1               105 73%
>> 48  0.9476          109 80%
>> 49  1               117 82%
>> 50  0.8642          100 74%
>> 51  1               101 76%
>> 56  1               104 77%
>> 57  1               102 70%
>> 62  1               106 79%
>> 63  0.9332          99 82%
>> 68  1               103 76%
>> 69  1               100 71%
>> 74  1               105 77%
>> 75  1               104 80%
>> 80  1               101 73%
>> 81  1               112 78%
>> 86  0.866           104 76%
>> 87  1               97   70%
>> 92  1               104 79%
>> 93  0.9464          102 75%
>> 98  0.9082          113 80%
>> 99  1               108 77%
>> 104 1               107 79%
>> 105 1               109 77%
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


