Re: WAL/DB size

30gb already includes WAL, see http://yourcmc.ru/wiki/Ceph_performance#About_block.db_sizing

On 15 August 2019 at 01:15:58 GMT+03:00, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
Good points in both posts, but I think there's still some lack of clarity.

Absolutely let’s talk about DB and WAL together. By “bluestore goes on flash” I assume you mean WAL+DB?

“Simply allocate DB and WAL will appear there automatically”

Please forgive me if this is obvious, but I'd like to see a holistic explanation of WAL and DB sizing *together*; I think it would help folks put these concepts together and plan deployments with some sense of confidence.

We’ve seen good explanations on the list of why only specific DB sizes, say 30GB, are actually used _for the DB_.
If the WAL goes along with the DB, shouldn’t we also explicitly determine an appropriate size N for the WAL, and make the partition (30+N) GB?
If so, how do we derive N? Or is it a constant?
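As a sketch of where the oft-quoted ~30GB figure comes from: RocksDB only benefits from space that can hold a whole extra level, and levels grow geometrically. Assuming a level base of 256 MB and a 10x multiplier (these are illustrative assumptions, not numbers confirmed in this thread), the useful sizes fall out of simple arithmetic:

```shell
# Illustrative sketch: RocksDB level sizes, assuming a 256 MB base
# for L1 and a 10x size multiplier per level (assumed values).
base_mb=256
mult=10
size_mb=$base_mb
total_mb=0
for level in 1 2 3; do
  total_mb=$((total_mb + size_mb))   # running total of L1..Ln
  echo "L${level}: ${size_mb} MB, cumulative ${total_mb} MB"
  size_mb=$((size_mb * mult))
done
# Cumulative through L3 is 28416 MB (~28 GB), hence the ~30 GB figure:
# a DB partition between two level boundaries buys nothing extra.
```

Under these assumptions, only sizes just above a cumulative level boundary (~3 GB, ~30 GB, ~300 GB) are fully used by the DB.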

Filestore was so much simpler: 10GB, set and forget, for the journal. Not that I miss XFS, mind you.


Actually, a standalone WAL is required only when you have either a very small fast
device (and don't want the DB to use it) or three devices of different
performance behind the OSD (e.g. HDD, SSD, NVMe). In that case the WAL should be
located on the fastest one.

For the given use case you have just an HDD and an NVMe, so the DB and WAL can
safely be collocated. That means you don't need to allocate a specific volume
for the WAL, and hence there is no need to answer the question of how much space
the WAL needs. Simply allocate the DB, and the WAL will be placed there automatically.
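As a sketch of the collocated layout described above, assuming placeholder device names (/dev/sdb for the HDD, /dev/nvme0n1p1 for a partition on the NVMe), you pass only --block.db and omit --block.wal entirely:

```shell
# Sketch: HDD for data, NVMe partition for the DB. With no --block.wal
# given, BlueStore keeps the WAL inside the DB device automatically.
# Device paths are placeholders for this example.
ceph-volume lvm prepare \
    --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1
```

The point is what is absent: no --block.wal argument, and no separate WAL volume to size.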


Yes, I'm surprised how often people talk about the DB and WAL separately
for no good reason. In common setups BlueStore metadata goes on flash and the
data goes on the HDDs; simple.

In the event the flash device is hundreds of GB and would otherwise be wasted,
is there anything that needs to be done to configure RocksDB to use the highest
level? 600 GB, I believe.
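If tuning is needed at all, the knob would be bluestore_rocksdb_options, which overrides RocksDB's settings wholesale. As a hedged sketch only (the parameter values here are illustrative assumptions, and overriding this option replaces Ceph's tuned defaults, so it should not be done casually):

```
[osd]
# Illustrative only: enlarging the level base/multiplier changes which
# DB sizes are fully usable. Values are assumptions, not recommendations.
bluestore_rocksdb_options = max_bytes_for_level_base=536870912,max_bytes_for_level_multiplier=10
```

Whether this is actually necessary for a large DB device is exactly the open question in this thread.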

ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
With best regards,
Vitaliy Filippov
