Re: Bluestore nvme DB/WAL size

I'm in a similar situation: currently running Filestore with spinners and journals on NVMe partitions that are about 1% of the size of the OSD. If I migrate to Bluestore, I'll still only have that 1% available. Per the docs, if my block.db device fills up, the metadata will spill back onto the block device, which will incur an understandable performance penalty. The question is: will the performance hit in that scenario be worse than if the block.db were on the spinner and just the WAL were on the NVMe?
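
For anyone who wants to watch for that spillover in practice, here is a minimal sketch that reads the bluefs counters from an OSD's admin socket. It assumes the "bluefs" section of `ceph daemon osd.N perf dump` (present in Luminous and later); osd.0 is just a placeholder for your OSD id.

#!/usr/bin/env python3
# Sketch: report block.db usage and any spillover onto the slow device
# for one OSD, via its admin socket. Run on the OSD host as a user that
# can reach the admin socket. osd.0 is a placeholder -- adjust it.
import json
import subprocess

osd = "osd.0"  # hypothetical OSD id
out = subprocess.check_output(["ceph", "daemon", osd, "perf", "dump"])
bluefs = json.loads(out)["bluefs"]

db_used = bluefs["db_used_bytes"]
db_total = bluefs["db_total_bytes"]
slow_used = bluefs["slow_used_bytes"]  # metadata spilled to the HDD

print(f"{osd}: db {db_used / 2**30:.1f} GiB / {db_total / 2**30:.1f} GiB")
if slow_used:
    print(f"WARNING: {slow_used / 2**30:.1f} GiB spilled onto the slow device")

A non-zero slow_used_bytes means the DB has already overflowed onto the spinner.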

On Fri, Dec 21, 2018 at 9:01 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
On Thu, 20 Dec 2018 at 22:45, Vladimir Brik
<vladimir.brik@xxxxxxxxxxxxxxxx> wrote:
> Hello
> I am considering using logical volumes of an NVMe drive as DB or WAL
> devices for OSDs on spinning disks.
> The documentation recommends against DB devices smaller than 4% of slow
> disk size. Our servers have 16x 10TB HDDs and a single 1.5TB NVMe, so
> dividing it equally will result in each OSD getting ~90GB DB NVMe
> volume, which is a lot less than 4%. Will this cause problems down the road?
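
(For concreteness, the arithmetic behind those figures, as a quick sketch; the exact numbers shift a little depending on TB vs. TiB and partition overhead:)

# Sketch of the sizing arithmetic above: 16 OSDs sharing one 1.5 TB
# NVMe, versus the documented 4%-of-slow-device guideline.
TB = 10**12
GB = 10**9

hdd_size = 10 * TB
nvme_size = 1.5 * TB
osds = 16

per_osd_db = nvme_size / osds
recommended = 0.04 * hdd_size

print(f"per-OSD DB volume: {per_osd_db / GB:.0f} GB")   # ~94 GB
print(f"4% recommendation: {recommended / GB:.0f} GB")  # 400 GB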

Well, apart from the reply you already got about "one NVMe failing takes
down all the HDDs it serves as WAL/DB for", the recommendations are about
getting the best out of the devices, especially for the DB, I suppose.

If you can size things up front, then following the recommendations is a
good choice, but I think you should test using the NVMe just for WALs, for
instance, and bench that against another host with data, WAL, and DB all
on the HDD, to see whether it helps much in your expected use case.
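
A minimal way to run that comparison, assuming a throwaway pool named "benchpool" already exists and that rados bench approximates your workload well enough (fio against an RBD image would be closer for block use):

#!/usr/bin/env python3
# Sketch of a simple A/B bench run. Run the same script against a host
# with WAL/DB on NVMe and against one with everything on the HDD, then
# compare the throughput lines rados prints. "benchpool" is a
# hypothetical test pool -- create it first.
import subprocess

POOL = "benchpool"
SECS = "60"

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("rados", "bench", "-p", POOL, SECS, "write", "--no-cleanup")
run("rados", "bench", "-p", POOL, SECS, "seq")    # sequential reads
run("rados", "-p", POOL, "cleanup")               # remove bench objects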

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
