Re: Shared WAL/DB device partition for multiple OSDs?

Dear David,

On 11.05.2018 22:10, David Turner wrote:
> For if you should do WAL only on the NVMe vs use a filestore journal, that depends on your write patterns, use case, etc.

We mostly use CephFS, for scientific data processing. It's
mainly larger files (10 MB to 10 GB, but sometimes also
a bunch of small files), typically written once and read
several times. We also keep Singularity software container
images on CephFS; here, the read patterns are more scattered.

We currently have about 1 PB raw capacity on ca. 150 OSDs, and
I'll now add another 45 OSDs with 10 TB disks. The old
OSDs are all filestore; I'm planning to switch them
over to bluestore bit by bit once the new OSDs are online.


> that ceph will prioritize things such that the WAL won't spill over at all and just have the DB going over to the HDD. I didn't want to deal with speed differentials between OSDs.

So would it make sense to just assign a 15 GB SSD partition
for WAL+DB per 10 TB OSD (more than that I don't have
available), and let the DB spill over?
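To put the 15 GB into perspective: the commonly cited guideline is that block.db should be somewhere around 1-4% of the data device. A quick back-of-the-envelope check (the percentages are that general guideline, not anything specific to this cluster):

```python
# Compare the available 15 GB SSD partition against the commonly
# cited 1-4% block.db sizing guideline for a 10 TB data device.
osd_size_gb = 10_000          # 10 TB data device
available_db_gb = 15          # SSD partition actually available

for pct in (0.01, 0.04):
    suggested_gb = osd_size_gb * pct
    print(f"{pct:.0%} guideline: {suggested_gb:.0f} GB "
          f"(available: {available_db_gb} GB)")
```

Even the low end of the guideline (100 GB) is far above 15 GB, so DB spill-over to the HDD would be expected rather than exceptional.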

Or do you think it would make for more predictable/uniform
OSD performance to use a few GB of WAL and always keep the
DB on the HDDs (like in your cluster), for our use case?

Or should I try to use a 2 GB WAL and a 13 GB DB partition
per 10 TB OSD - maybe 13 GB for the DB is just way too small
to give any benefit here?
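For reference, the split between WAL-only and WAL+DB comes down to which devices are passed at creation time. A sketch of the ceph-volume invocation (the device paths are placeholders):

```shell
# Create a bluestore OSD with explicit DB and WAL partitions.
ceph-volume lvm create --bluestore \
    --data /dev/sdX \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
# Omit --block.wal to co-locate the WAL on the DB partition,
# or omit --block.db to keep the DB on the data HDD and put
# only the WAL on flash.
```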

One important question - is it possible to change the WAL and
DB devices later on without deleting and re-creating the OSD(s)?
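In newer Ceph releases, ceph-bluestore-tool can attach or migrate BlueFS devices on an existing OSD; whether this is available depends on the release running at the time. A sketch (OSD id and device path are placeholders; the OSD must be stopped first):

```shell
# Attach a new DB device to an existing bluestore OSD without
# recreating it (needs a recent ceph-bluestore-tool).
systemctl stop ceph-osd@12
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-12 \
    --dev-target /dev/nvme0n1p1
systemctl start ceph-osd@12
```

On releases without these subcommands, the fallback is the destroy-and-recreate cycle.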


> The troubleshooting slow requests of that just sounds awful.

So true!


Thanks again for all the advice,

Oliver
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



