On 07/08/18 17:10, Robert Stanford wrote:
>
> I was surprised to see an email on this list a couple of days ago,
> which said that write performance would actually fall with BlueStore.
> I thought the reason BlueStore existed was to increase performance.
> Nevertheless, it seems like filestore is going away and everyone
> should upgrade.
>
> My question is: I have SSDs for filestore journals, for spinning OSDs.
> When upgrading to BlueStore, am I better off using the SSDs for
> WAL/DB, or am I better off keeping everything (data, WAL, DB) on the
> spinning disks (from a performance perspective)?
>
> Thanks
> R

Your performance will always be better if you put journals/WAL/DB on
faster storage. An all-in-one HDD BlueStore OSD is not more performant
than a properly set up SSD/HDD split OSD.

You do, however, need to take care to size the partitions on the SSD
that you designate as the DB devices appropriately - if you naively
leave it to the default settings and trust the tool to do it, you'll
probably end up with tiny partitions that are of no use (unless that's
changed since the last time I looked). There's a rough sizing sketch at
the end of this mail.

BlueStore's performance is better overall than Filestore's. Write
performance is potentially worse than Filestore largely in one specific
case: a Filestore OSD with an SSD journal (compared to a BlueStore OSD
with an SSD DB) that is subject to bursty, high-bandwidth writes.
Because *all* writes to Filestore hit the journal first and are acked
once committed to the journal, that path is potentially faster than
BlueStore, which will usually write directly to main storage on the
data HDD and so acks more slowly. If writing were constant, you would
still hit a bottleneck when the Filestore journal has to be flushed to
the HDD, at which point BlueStore wins out again because it writes to
long-term storage more efficiently than Filestore does.
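
On the sizing point above, a rough illustration only: the ~4% figure is
the guideline commonly cited in the BlueStore docs, and the device
paths and disk size are made-up examples rather than anything from
Robert's setup. The point is just that you work out the block.db size
yourself and hand the tool a pre-sized partition, instead of trusting
the default:

#!/usr/bin/env python3
"""Rough helper for sizing a BlueStore block.db partition on an SSD.

A minimal sketch, assuming the commonly cited guideline that block.db
should be at least ~4% of the data (HDD) device size so RocksDB
metadata doesn't spill over onto the slow device. Device paths below
are hypothetical examples.
"""

def suggested_db_size_gib(data_size_gib: float, ratio: float = 0.04) -> float:
    """Return a suggested block.db size in GiB for a given data device size."""
    return data_size_gib * ratio

if __name__ == "__main__":
    hdd_gib = 4000  # e.g. a 4 TB spinning data disk
    db_gib = suggested_db_size_gib(hdd_gib)
    print(f"Suggested block.db partition: ~{db_gib:.0f} GiB")
    # With a partition or LV of roughly that size created up front, the
    # OSD can then be built pointing at it explicitly, something like
    # (paths are placeholders):
    #   ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1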