I'm not convinced that a backing pool can be removed from a caching tier. I just haven't been able to get around to trying it.

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Tue, Sep 1, 2015 at 10:29 AM, Jan Schermer wrote:

Unfortunately we are not in control of the VMs using this pool, so something like "sync -> stop VM -> incremental sync -> start VM on new pool" would be extremely complicated. I _think_ it's possible to misuse a cache tier to do this (add a cache tier, remove the underlying tier, add a new pool, and remove the cache tier), but that's a hack at best.

So before we even consider this - will there be any significant gains? When we increased the PG count it had a very positive effect on the cluster, but with only 1/3 of the drives I am worried there will be too much contention on the OSDs. I've already seen higher CPU usage, and while some latency metrics went down thanks to the new Intel drives, other metrics went up, of course, so I'm not sure how it will perform in real life...

Jan

On 01 Sep 2015, at 18:08, Robert LeBlanc wrote:

We are in a situation where we need to decrease the PGs for a pool as well. One thought is to live migrate with block copy to a new pool with the right number of PGs, and then once they are all moved, delete the old pool. We don't have a lot of data in that pool yet; that may not be feasible for you.

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Tue, Sep 1, 2015 at 6:19 AM, Jan Schermer wrote:

Hi,
we're in the process of changing 480G drives for 1200G drives, which should cut the number of OSDs I have to roughly 1/3. My largest pool, "volumes" (for OpenStack volumes), has 16384 PGs at the moment, and I have 36K PGs in total. That equals ~180 PGs/OSD now and would become ~500 PGs/OSD.

I know I can't actually decrease the number of PGs in a pool, so I'm wondering whether it's worth working around that to decrease the numbers. It is possible I'll be expanding the storage in the future, but probably not 3-fold. I think it's not worth bothering with, and I'll just have to disable the "too many PGs per OSD" warning if I upgrade. I already put some new drives in and the OSDs seem to work fine (though I had to restart them after backfilling - they were spinning CPU for no apparent reason).

Your thoughts?
Thanks
Jan
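To put rough numbers on Jan's question, here is a quick sanity check of the PG-per-OSD math, with assumptions of mine filled in: replicated pools with size 3 and ~600 OSDs today. Neither figure is stated outright in the thread, but they reproduce his "~180 PGs/OSD".

# pg_math.py - back-of-the-envelope PG-per-OSD check
# (replica size and OSD count are assumptions, see above)

def pg_replicas_per_osd(total_pgs, size, num_osds):
    # Each PG is stored on `size` OSDs, so an OSD carries on average
    # total_pgs * size / num_osds PG replicas.
    return total_pgs * size / num_osds

TOTAL_PGS = 36000  # "36K PGs in total"
SIZE = 3           # assumed replication factor
OSDS_NOW = 600     # assumed; reproduces Jan's "~180 PGs/OSD"

print(pg_replicas_per_osd(TOTAL_PGS, SIZE, OSDS_NOW))       # 180.0
print(pg_replicas_per_osd(TOTAL_PGS, SIZE, OSDS_NOW // 3))  # 540.0, i.e. his "~500"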
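For Robert's copy-to-a-new-pool idea, a minimal sketch using the rados/rbd Python bindings could look like the following. Pool names 'volumes' and 'volumes-new' are placeholders, and note this is the plain offline copy (images must be quiesced while they are copied) - the "stop VM" workflow Jan wants to avoid, not QEMU live block copy.

# copy_pool.py - flat-copy every RBD image from one pool to another.
# Offline sketch: images must not be written to while being copied.

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    src = cluster.open_ioctx('volumes')      # old pool (16384 PGs)
    dst = cluster.open_ioctx('volumes-new')  # new, right-sized pool
    try:
        for name in rbd.RBD().list(src):
            img = rbd.Image(src, name)
            try:
                img.copy(dst, name)  # full flat copy; snapshots are not carried over
            finally:
                img.close()
    finally:
        src.close()
        dst.close()
finally:
    cluster.shutdown()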
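And for reference on the cache-tier hack: the supported parts of the lifecycle map onto the `ceph osd tier` commands roughly as in the sketch below (wrapped in Python for consistency; pool names are placeholders). The crucial middle step of Jan's idea - detaching the base pool from under a live cache tier - has no command here at all, which is exactly the part Robert doubts is possible.

# tiering.py - the supported cache-tier setup/teardown steps only.

import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI; raises on a non-zero exit.
    subprocess.run(["ceph", *args], check=True)

base, cache = "volumes", "volumes-cache"  # placeholder pool names

# Supported setup of a writeback cache tier:
ceph("osd", "tier", "add", base, cache)
ceph("osd", "tier", "cache-mode", cache, "writeback")
ceph("osd", "tier", "set-overlay", base, cache)

# Supported teardown (flush the cache first, e.g. with
# `rados -p volumes-cache cache-flush-evict-all`):
ceph("osd", "tier", "cache-mode", cache, "forward")
ceph("osd", "tier", "remove-overlay", base)
ceph("osd", "tier", "remove", base, cache)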
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com