On 26.02.2018 17:31, David Turner wrote:
> That was a good way to check for the recovery sleep. Does your `ceph status` show 128 PGs
> backfilling (or a number near that at least)? The PGs not backfilling will say 'backfill+wait'.

Yes:
    pgs:     37778254/593342240 objects degraded (6.367%)
             2036 active+clean
              128 active+undersized+degraded+remapped+backfilling
                6 active+clean+scrubbing+deep
                6 active+clean+scrubbing

The 2048 PGs are from our data pool; by now they are finally clean again and just scrubbing.
The 128 PGs are all from the metadata pool, so they live exclusively on the two "clean" OSDs
and the two OSDs being backfilled:

# ceph osd df | head
ID CLASS WEIGHT  REWEIGHT SIZE USE    AVAIL %USE  VAR  PGS
 0   ssd 0.21829  1.00000 223G 11227M  212G  4.90 0.89   0
 1   ssd 0.21829  1.00000 223G 11225M  212G  4.90 0.89   0
 2   ssd 0.21819  1.00000 223G 77813M  147G 34.00 6.16 128
 3   ssd 0.21819  1.00000 223G 77590M  147G 33.91 6.15 128

It's just... very slow.

> On Mon, Feb 26, 2018 at 11:25 AM Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
>
>     On 26.02.2018 16:59, Patrick Donnelly wrote:
>     > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
>     > <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
>     >> Looking with:
>     >>     ceph daemon osd.2 perf dump
>     >> I get:
>     >>     "bluefs": {
>     >>         "gift_bytes": 0,
>     >>         "reclaim_bytes": 0,
>     >>         "db_total_bytes": 84760592384,
>     >>         "db_used_bytes": 78920024064,
>     >>         "wal_total_bytes": 0,
>     >>         "wal_used_bytes": 0,
>     >>         "slow_total_bytes": 0,
>     >>         "slow_used_bytes": 0,
>     >> so it seems this is almost exclusively RocksDB usage.
>     >>
>     >> Is this expected?
>     >
>     > Yes. The directory entries are stored in the omap of the objects. This
>     > will be stored in the RocksDB backend of Bluestore.
>     >
>     >> Is there a recommendation on how much MDS storage is needed for a CephFS with 450 TB?
>     >
>     > It seems in the above test you're using about 1KB per inode (file).
>     > Using that you can extrapolate how much space the metadata pool needs
>     > based on your file system usage. (If all you're doing is filling the
>     > file system with empty files, of course you're going to need an
>     > unusually large metadata pool.)
>
>     Many thanks, this helps!
>     We naturally hope our users will not do this - this stress test was a worst case -
>     but the rough number (1 kB per inode) does indeed help a lot, as does the expected
>     increase with file modifications, as laid out by David.
>
>     Is the slow backfilling also normal?
>     Will such an increase in storage usage (from many file modifications) also be reduced
>     again at some point, i.e. is the database compacted / can one trigger that / is there
>     something like an SQL "vacuum"?
>
>     To also answer David's questions in parallel:
>     - Concerning the slow backfill, I am only talking about the "metadata OSDs".
>       They are fully SSD-backed and have no separate device for block.db / WAL.
>     - I adjusted backfills up to 128 for those metadata OSDs; the cluster is currently
>       completely idle, i.e. no clients are doing anything.
>       There are no slow requests.
>       Since no clients are doing anything and the rest of the cluster is now clean (apart
>       from the two backfilling OSDs), there is also no memory pressure at all right now.
>       The "clean" OSDs are each reading at 7 MB/s with 5 % CPU load.
>       The OSDs being backfilled are at 3.3 % CPU load with about 250 kB/s of write throughput.
>       Network traffic between the node with the clean OSDs and the node with the
>       "being-backfilled" OSDs is about 1.5 Mbit/s, while significantly more bandwidth
>       is available...
>     - Checking the sleeps with:
>       # ceph -n osd.1 --show-config | grep sleep
>       osd_recovery_sleep = 0.000000
>       osd_recovery_sleep_hdd = 0.100000
>       osd_recovery_sleep_hybrid = 0.025000
>       osd_recovery_sleep_ssd = 0.000000
>       shows there should be 0 sleep. Or is there another way to query?
>
>     Cheers and many thanks for the valuable replies!
>         Oliver

-- 
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax:  +49 228 73 7869
--
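
A rough back-of-the-envelope for the "1 kB per inode" figure discussed above; the file count
and replication factor are round numbers assumed for illustration, not values from this thread:

    100,000,000 files x ~1 kB of omap metadata per inode  ~ 100 GB in the metadata pool
    100 GB            x 3 replicas                         ~ 300 GB raw on the metadata OSDs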
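
On the compaction question asked above: RocksDB compacts itself in the background, but a
compaction can also be requested explicitly. A minimal sketch, assuming the release in use
exposes the compact command on the OSD admin socket and, for the offline variant, that the
OSD lives in the default data directory:

    # online, run on the host carrying osd.2
    ceph daemon osd.2 compact

    # offline, with osd.2 stopped
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-2 compact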
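
On the "is there another way to query" question above: `--show-config` reports what a freshly
started process would compute from the defaults and ceph.conf, not necessarily what the running
daemon is using. The admin socket queries the daemon itself - a sketch, assuming it is run on
the host carrying osd.1:

    # value currently in effect in the running OSD
    ceph daemon osd.1 config get osd_recovery_sleep_ssd

    # or dump the full running configuration and filter
    ceph daemon osd.1 config show | grep sleep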