Depending on your actual OSD setup (separate RocksDB/WAL devices), simply
deleting pools won't immediately delete the remaining objects. The DBs
are cleaned up quite slowly, which can leave you with completely
saturated disks. This has been explained multiple times here; I just
don't have a link at hand. If this is just a test cluster, it can be
much faster to rebuild the OSDs from scratch. Or you can try the pool
deletion first, watch how quickly the space is actually freed, and only
rebuild the OSDs (and then recreate your pools) if the cleanup drags on.
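For reference, a rough sketch of both options, assuming LVM-backed OSDs
deployed with ceph-volume; the OSD id 7, the device /dev/sdX and the
pool name are placeholders for your own cluster:

  # Option 1: drop the leftover pools and watch how fast space comes back
  ceph config set mon mon_allow_pool_delete true
  ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it
  ceph df                                    # re-run to see if usage drops

  # Option 2: if the cleanup is too slow, rebuild each OSD from scratch
  systemctl stop ceph-osd@7                  # on the OSD host
  ceph osd purge 7 --yes-i-really-mean-it    # remove it from the CRUSH map
  ceph-volume lvm zap /dev/sdX --destroy     # wipe the data device
                                             # (zap the DB/WAL device too
                                             #  if it is separate)
  ceph-volume lvm create --data /dev/sdX     # recreate the empty OSD

With 38 OSDs that is a fair amount of per-disk work, so on anything other
than a test cluster I would try the pool deletion first.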
Quoting Dallas Jones <djones@xxxxxxxxxxxxxxxxx>:
Stumbling closer toward a usable production cluster with Ceph, but I have
yet another stupid n00b question I'm hoping you all will tolerate.
I have 38 OSDs up and in across 4 hosts. I (maybe prematurely) removed my
test filesystem as well as the metadata and data pools used by the deleted
filesystem.
This leaves me with 38 OSDs with a bunch of data on them.
Is there a simple way to just whack all of the data on all of those OSDs
before I create new pools and a new filesystem?
Version:
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus
(stable)
As you can see from the partial output of ceph -s, I left a bunch of crap
spread across the OSDs...
    pools:   8 pools, 32 pgs
    objects: 219 objects, 1.2 KiB
    usage:   45 TiB used, 109 TiB / 154 TiB avail
    pgs:     32 active+clean
Thanks in advance for a shove in the right direction.
-Dallas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx