On 08/15/2018 06:15 PM, Robert Stanford wrote:
> 
>  The workload is relatively high read/write of objects through radosgw.
> Gbps+ in both directions.  The OSDs are spinning disks, the journals (up
> until now filestore) are on SSDs.  Four OSDs / journal disk.
> 

RGW isn't always a heavy enough workload for this; it depends on your
use case. I've deployed many RGW-only workloads without WAL+DB on SSD
and they work fine.

RBD is the perfect use case for it: RBD needs very low (<10ms) write
latency, and that's not always the case with RGW.

Just having the WAL on an SSD device can also help.

Keep in mind that the 'journal' doesn't apply anymore with BlueStore.
That was a FileStore thing.

Wido

> On Wed, Aug 15, 2018 at 10:58 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
> 
>     On 08/15/2018 05:57 PM, Robert Stanford wrote:
>     >
>     >  Thank you Wido.  I don't want to make any assumptions so let me
>     > verify, that's 10GB of DB per 1TB storage on that OSD alone, right?
>     > So if I have 4 OSDs sharing the same SSD journal, each 1TB, there
>     > are 4 10GB DB partitions for each?
>     >
> 
>     Yes, that is correct.
> 
>     Each OSD needs 10GB of DB per 1TB of storage, so size your SSD
>     according to your storage needs.
> 
>     However, whether you need to offload WAL+DB to an SSD depends on
>     the workload. What is the workload?
> 
>     Wido
> 
>     > On Wed, Aug 15, 2018 at 1:59 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>     >
>     >     On 08/15/2018 04:17 AM, Robert Stanford wrote:
>     >     >  I am keeping the wal and db for a ceph cluster on an SSD.
>     >     > I am using the masif_bluestore_block_db_size /
>     >     > masif_bluestore_block_wal_size parameters in ceph.conf to
>     >     > specify how big they should be.  Should these values be the
>     >     > same, or should one be much larger than the other?
>     >     >
>     >
>     >     This has been answered multiple times on this mailing list in
>     >     the last months; a bit of searching would have helped.
>     >
>     >     Nevertheless, 1GB for the WAL is sufficient, and then allocate
>     >     about 10GB of DB per TB of storage. That should be enough in
>     >     most use cases.
>     >
>     >     Now, if you can spare more DB space, do so!
>     >
>     >     Wido
>     >
>     >     > R

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
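
As a concrete illustration of the sizing advice in the thread, here is a
minimal ceph.conf sketch for a hypothetical 4TB spinning OSD, following
the rule of thumb of 1GB of WAL plus roughly 10GB of DB per TB of data.
The option names below are the standard BlueStore ones (the 'masif_'
prefix quoted in the original question is not an actual Ceph option);
the values are in bytes, and the numbers are only an example, not a
recommendation for any particular hardware:

    [osd]
    # ~10 GB of DB per 1 TB of data: a 4 TB OSD gets a 40 GB DB partition
    bluestore_block_db_size = 42949672960
    # 1 GB of WAL is sufficient in most use cases
    bluestore_block_wal_size = 1073741824

These sizes only take effect when the deployment tooling creates the
DB/WAL partitions for you; if you carve out the partitions or logical
volumes on the shared SSD yourself (for example when pointing
ceph-volume's --block.db / --block.wal at them), the size of those
devices is what counts.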