On Fri, Sep 7, 2018 at 9:02 AM, Muhammad Junaid <junaid.fsd.pk@xxxxxxxxx> wrote:
> Thanks Alfredo. Just to be clear, my configuration has 5 OSDs (7200 rpm
> SAS HDDs), which are slower than the 200G SSD. That's why I asked for a
> 10G WAL partition for each OSD on the SSD.
>
> Are you asking us to do 40GB * 5 partitions on the SSD just for block.db?

Yes. You don't need a separate WAL defined. It only makes sense when you
have something *faster* than where block.db will live. In your case 'data'
will go on the slower spinning devices, 'block.db' will go on the SSD, and
there is no need for a WAL.

You would only benefit from a WAL if you had another device, like an NVMe,
where 2GB partitions (or LVs) could be created for block.wal.

> On Fri, Sep 7, 2018 at 5:36 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>>
>> On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid <junaid.fsd.pk@xxxxxxxxx>
>> wrote:
>> > Hi there
>> >
>> > Asking these questions as a newbie. They may have been asked a number
>> > of times before by many, but sorry, it is not clear yet to me.
>> >
>> > 1. The WAL device is just like the journaling device used before
>> > BlueStore, and Ceph confirms the write to the client after writing to
>> > it (before the actual write to the primary device)?
>> >
>> > 2. If we have, let's say, 5 OSDs (4 TB SAS) and one 200GB SSD, should
>> > we partition the SSD into 10 partitions? Should/can we set the WAL
>> > partition size for each OSD to 10GB? Or what min/max should we set for
>> > the WAL partition? And can we set the remaining 150GB as (30GB * 5)
>> > for 5 db partitions for all the OSDs?
>>
>> A WAL partition would only help if you have a device faster than the
>> SSD where the block.db would go.
>>
>> We recently updated our sizing recommendations for block.db: at least
>> 4% of the size of block (also referenced as the data device):
>>
>> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
>>
>> In your case, what you want is to create 5 logical volumes from your
>> 200GB SSD at 40GB each, without a need for a WAL device.
>>
>> > Thanks in advance. Regards.
>> >
>> > Muhammad Junaid
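
For reference, a minimal sketch of how that layout could be created with
LVM and ceph-volume. The device names (/dev/sdb and /dev/sdc for two of the
HDDs, /dev/sdg for the SSD) and the ceph-db volume group name are only
placeholders for illustration, not taken from the thread above; substitute
your own devices and names:

    # Carve the 200GB SSD (hypothetical device /dev/sdg) into
    # 5 x 40GB logical volumes for block.db
    pvcreate /dev/sdg
    vgcreate ceph-db /dev/sdg
    for i in 0 1 2 3 4; do
        lvcreate -L 40G -n db-$i ceph-db
    done

    # One OSD per spinning disk: data on the HDD, block.db on an SSD LV.
    # No --block.wal is given, so the WAL stays on the block.db device.
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-0
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db ceph-db/db-1
    # ... repeat for the remaining three HDDs

Without a separate --block.wal, BlueStore keeps the write-ahead log on the
same device as block.db, which is the behaviour described above. (In
practice the fifth LV may need to be slightly under 40G once LVM metadata
overhead is accounted for.)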