> Date: Fri, 22 Feb 2019 16:26:34 -0800
> From: solarflow99 <solarflow99@xxxxxxxxx>
>
> Aren't you undersized at only 30GB? I thought you should have 4% of your
> OSDs

The 4% guidance is new. Until relatively recently the oft-suggested and
default value was 1%.

> From: "Vitaliy Filippov" <vitalif@xxxxxxxxxx>
>
> Numbers are easy to calculate from RocksDB parameters, however I also
> don't understand why it's 3 -> 30 -> 300...
>
> Default memtables are 256 MB, there are 4 of them, so L0 should be 1 GB,
> L1 should be 10 GB, and L2 should be 100 GB?

I'm very curious as well; one would think that in practice the size and
usage of the OSD would be factors, as the docs imply. This is an area
where we could really use more concrete guidance.

Clusters built on HDDs are often built that way for $/TB reasons, so
economics and available drive slots constrain how much faster WAL+DB
storage can be provisioned.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
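As a sketch of the arithmetic Vitaliy is referring to, here is the level
sizing under the assumption of 256 MB memtables, four memtables, and the
default 10x per-level multiplier (the RocksDB option names are real, but
the specific values here are assumptions, not read from a live cluster;
whether this maps cleanly onto the 3/30/300 guidance is exactly what the
thread questions):

```python
MB = 1024 ** 2
GB = 1024 ** 3

# Assumed RocksDB tuning (not read from a running OSD):
write_buffer_size = 256 * MB      # size of one memtable
max_write_buffer_number = 4       # memtables that can accumulate before flush
level_multiplier = 10             # max_bytes_for_level_multiplier

# L0 target: all memtables flushed together
l0 = write_buffer_size * max_write_buffer_number

# Each deeper level is level_multiplier times the previous one
levels = [l0 * level_multiplier ** n for n in range(3)]

for i, size in enumerate(levels):
    print(f"L{i}: {size / GB:g} GB")
# L0: 1 GB, L1: 10 GB, L2: 100 GB
```

This reproduces the 1 / 10 / 100 GB level sizes from the quoted message,
but not the 3 / 30 / 300 figures, which is the gap in the docs being
discussed.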