> On 10 August 2017 at 11:14, Marcus Haarmann <marcus.haarmann@xxxxxxxxx> wrote:
>
>
> Hi,
>
> we have done some testing with bluestore and found that the memory consumption of the osd
> processes depends not on the real amount of data stored but on the number of stored
> objects.
> This means that e.g. a 100 GB block device spread over 100 objects has a different
> memory usage than storing 10000000 smaller objects (the bluestore block size should be tuned
> for that kind of setup). (1000000 objects of 4k to 100k in size had a memory consumption of
> ~4GB on the osd with the standard block size, while the amount of data was only ~15GB.)

Yes, the number of objects and PGs will determine how much memory an OSD will use.

> So it depends on the usage: a cephfs stores each file as a single object, while the rbd is
> configured to allocate larger objects.
>

Not true in this case. Both CephFS and RBD stripe over 4MB RADOS objects, so a 1024MB file in CephFS will result in 256 RADOS objects of 4MB in size.

This is configurable using directory layouts, but 4MB is the default.

Wido

> Marcus Haarmann
>
>
> From: "Stijn De Weirdt" <stijn.deweirdt@xxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Thursday, 10 August 2017 10:34:48
> Subject: luminous/bluestore osd memory requirements
>
> hi all,
>
> we are planning to purchase new OSD hardware, and we are wondering if for
> upcoming luminous with bluestore OSDs, anything in the hardware
> recommendations from
> http://docs.ceph.com/docs/master/start/hardware-recommendations/
> will be different, especially the memory/cpu part. i understand from colleagues
> that the async messenger makes a big difference in memory usage (maybe
> also cpu load?); but we are also interested in the "1GB of RAM per TB"
> recommendation/requirement.
>
> many thanks,
>
> stijn
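
To illustrate the layout behaviour Wido describes above: the default 4MB object size can be inspected and changed per directory through the virtual xattrs that CephFS exposes, and per image at creation time for RBD. A minimal sketch (the mount point, directory, file and image names are placeholders; --object-size needs a reasonably recent rbd client, older releases use --order instead):

    # CephFS: show the layout of an existing file, then set a larger object size
    # on a directory; new files created under that directory inherit the layout
    getfattr -n ceph.file.layout /mnt/cephfs/somefile
    setfattr -n ceph.dir.layout.object_size -v 8388608 /mnt/cephfs/somedir   # 8 MB objects

    # RBD: create an image with a non-default object size and verify it
    rbd create --size 102400 --object-size 8M rbd/testimg   # 100 GB image, 8 MB objects
    rbd info rbd/testimg

Fewer, larger objects mean less per-object metadata on the OSD, which is the knob Marcus is referring to when he says the block size should be tuned for small-object workloads.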
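
On the memory side, a large part of what an OSD uses with BlueStore is its cache plus per-object and per-PG metadata, and Luminous exposes cache size options that bound the cache portion. A rough sketch of the relevant ceph.conf settings (the values shown are, to my knowledge, the Luminous defaults rather than a recommendation, and the actual RSS of the osd process will still be higher than the cache limit):

    [osd]
    # BlueStore cache per OSD; the _hdd/_ssd values apply when bluestore_cache_size = 0
    bluestore_cache_size_hdd = 1073741824   # 1 GB for HDD-backed OSDs
    bluestore_cache_size_ssd = 3221225472   # 3 GB for SSD-backed OSDs

The effective values and the memory actually held by the various pools can be checked at runtime on an OSD node via the admin socket, e.g.:

    ceph daemon osd.0 config get bluestore_cache_size_hdd
    ceph daemon osd.0 dump_mempools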