On 3 November 2017 at 07:45, Martin Overgaard Hansen <moh@xxxxxxxxxxxxx> wrote:
> I want to bring this subject back in the light and hope someone can provide
> insight regarding the issue, thanks.

Thanks Martin, I was going to do the same.

Is it possible to make the DB partition (on the fastest device) too big? In other words, is there a point where, for a given set of OSDs (number and size), the DB partition is so large that it simply wastes resources? I recall a comment by someone proposing to split a single large (fast) SSD into 100 GB partitions, one per OSD.

Presumably the answer is some rule of thumb at the intersection of pool type (RBD / RADOS / CephFS), object change (update) intensity, OSD size, and so on.

An idea occurred to me: by monitoring for the logged spill message (the event where the DB partition overflows onto the OSD's data device), OSDs could be lazily destroyed and recreated with a DB partition that is larger each time, say by 10%.
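To make that concrete, here is a minimal sketch of the kind of watcher I have in mind. It assumes the BlueFS counters exposed through the admin socket ("bluefs" / "slow_used_bytes" etc. from "ceph daemon osd.N perf dump") are the right thing to poll, and that it runs on the host owning the OSDs; I haven't verified the counter names across releases, so treat it more as pseudocode than a tool:

    #!/usr/bin/env python
    # Sketch: poll local OSDs for BlueFS spillover via the admin socket.
    # Assumes "ceph daemon osd.N perf dump" exposes a "bluefs" section with
    # db_used_bytes / db_total_bytes / slow_used_bytes (check your release).
    import json
    import subprocess
    import sys

    def bluefs_stats(osd_id):
        # Query the OSD's admin socket; must run on the host owning the OSD.
        out = subprocess.check_output(
            ["ceph", "daemon", "osd.{0}".format(osd_id), "perf", "dump"])
        return json.loads(out)["bluefs"]

    def has_spilled(osd_id):
        stats = bluefs_stats(osd_id)
        spilled = stats.get("slow_used_bytes", 0)
        if spilled:
            # Non-zero slow_used_bytes means the DB has overflowed onto the
            # slow (data) device: a candidate for the lazy destroy/recreate
            # with a DB partition ~10% larger.
            print("osd.{0}: {1} bytes spilled to slow device "
                  "(db {2}/{3} bytes used)".format(
                      osd_id, spilled,
                      stats["db_used_bytes"], stats["db_total_bytes"]))
        return bool(spilled)

    if __name__ == "__main__":
        # Usage: spillcheck.py 0 1 2 ...
        flagged = [osd for osd in sys.argv[1:] if has_spilled(osd)]
        if flagged:
            print("OSDs to recreate with a larger DB: " + ", ".join(flagged))

The destroy/recreate step itself would still have to be scripted around whatever your deployment uses (ceph-disk here), since I wouldn't trust resizing the DB partition in place.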