Re: How should I deal with placement group numbers when reducing number of OSDs


We are in a situation where we need to decrease the PG count for a pool as well. One thought is to live-migrate with a block copy to a new pool that has the right number of PGs and, once everything has moved, delete the old pool. We don't have a lot of data in that pool yet; if you do, that may not be feasible for you.
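Very roughly, the offline half of that idea could look like the untested sketch below, driving the stock ceph/rbd CLI. The pool names and pg_num are placeholders (not our real values), and note that "rbd cp" makes a flat copy that drops snapshots/clones, so attached volumes would have to be detached or handled live at the hypervisor first.

#!/usr/bin/env python
# Untested sketch: copy every RBD image from the old pool into a new pool
# created with the desired (smaller) pg_num, using the stock ceph/rbd CLI.
# Pool names and pg_num are placeholders, not our actual values.
# Caveat: "rbd cp" makes a flat copy and drops snapshots/clones, and the
# images must not be in active use while they are copied.
import subprocess

OLD_POOL = "volumes"        # hypothetical source pool
NEW_POOL = "volumes-new"    # hypothetical target pool
PG_NUM = 4096               # hypothetical, size it with the PG calculator

def run(*cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# Create the replacement pool with the smaller PG count (pg_num == pgp_num).
run("ceph", "osd", "pool", "create", NEW_POOL, str(PG_NUM), str(PG_NUM))

# Copy each image across.
for image in subprocess.check_output(["rbd", "ls", "-p", OLD_POOL]).split():
    if isinstance(image, bytes):
        image = image.decode()
    run("rbd", "cp", "%s/%s" % (OLD_POOL, image), "%s/%s" % (NEW_POOL, image))

# Once everything is verified on the new pool:
#   ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
#   ceph osd pool rename volumes-new volumes

That only covers the offline half; the "live" part of the idea is to let the hypervisor do the block copy for attached volumes so the VMs never notice.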

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1

On Tue, Sep 1, 2015 at 6:19 AM, Jan Schermer wrote:
Hi,
we're in the process of swapping 480G drives for 1200G drives, which should cut the number of OSDs I have to roughly a third.

My largest pool, "volumes" (for OpenStack volumes), has 16384 PGs at the moment, and I have 36K PGs in total. That works out to ~180 PGs/OSD now and would become ~500 PGs/OSD.
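Roughly, the math behind those figures (assuming 3x replication; the OSD counts here are rounded so the numbers line up):

# PG density sanity check. replicas=3 and the OSD counts are assumptions,
# rounded to match the ~180 and ~500 figures quoted above.
total_pgs = 36 * 1024           # ~36K PGs across all pools
replicas = 3                    # assumed pool size
osds_now = 600                  # approximate current OSD count
osds_after = osds_now // 3      # 480G -> 1200G drives => ~1/3 the OSDs

print(total_pgs * replicas / float(osds_now))    # ~184 PGs per OSD today
print(total_pgs * replicas / float(osds_after))  # ~553 PGs per OSD afterwards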

I know I can't actually decrease the number of PGs in a pool, so I'm wondering whether it's worth working around that to bring the numbers down. It's possible I'll be expanding the storage in the future, but probably not three-fold.

I think it's not worth bothering with and I'll just have to disable the "too many PGs per OSD" warning if I upgrade.
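(If I'm reading the Hammer-era docs right, the knob for that warning is mon_pg_warn_max_per_osd, set on the mons; the option name is worth re-checking against whatever release I end up on.)

[mon]
# Assumed option name from the Hammer-era docs; 0 disables the
# "too many PGs per OSD" health warning entirely.
mon_pg_warn_max_per_osd = 0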

I already put some new drives in and the OSDs seem to work fine (though I had to restart them after backfilling - they were spinning CPU for no apparent reason).

Your thoughts?

Thanks
Jan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
