Re: SSD Sizing for DB/WAL: 4% for large drives?

Thank you for all the detailed and useful information :)

I'm tempted to ask a related question on SSD endurance...

If 60 GB is the sweet spot for each DB/WAL partition, and the SSD has
spare capacity (for example, I'd budgeted 266 GB per DB/WAL), would it
then be better to create 60 GB "sweet spot" sized DB/WAL partitions and
leave the remaining SSD capacity unused, since that would maximise the
lifespan of the SSD and speed up garbage collection?
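
For what it's worth, that would leave most of the budgeted space as
spare area for the controller. A back-of-the-envelope sketch (the
266 GB figure is just my own budget, not a recommendation):

    # Over-provisioning if only a 60 GB DB/WAL partition is created
    # out of the 266 GB budgeted per DB/WAL on the SSD.
    budgeted_gb = 266   # space set aside per DB/WAL
    used_gb = 60        # "sweet spot" sized DB/WAL partition
    spare_gb = budgeted_gb - used_gb
    print(f"spare: {spare_gb} GB "
          f"({spare_gb / budgeted_gb:.0%} of the budget)")
    # -> spare: 206 GB (77% of the budget)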

many thanks

Jake



On 5/29/19 9:56 AM, Mattia Belluco wrote:
> On 5/29/19 5:40 AM, Konstantin Shalygin wrote:
>> block.db should be 30 GB or 300 GB - anything in between is pointless.
>> The reasoning is described here:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html
> 
> Following some discussions we had at the last Cephalocon, I beg to differ
> on this point: when RocksDB needs to compact a layer it rewrites it
> *before* deleting the old data; if you'd like to be sure your db does not
> spill over to the spindle, you should allocate twice the size of the
> biggest layer to allow for compaction. I guess ~60 GB would be the sweet
> spot, assuming you don't plan to mess with the size and multiplier of the
> RocksDB layers and don't want to go all the way to 600 GB (300 GB x2).
> 
> regards,
> Mattia
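
For reference, a quick sketch of the level arithmetic behind those
30 GB / 300 GB / 60 GB figures, assuming RocksDB's default 256 MB base
level and 10x multiplier (check your bluestore_rocksdb_options if
you've tuned them):

    # Cumulative RocksDB level sizes, assuming the defaults
    # max_bytes_for_level_base = 256 MB and
    # max_bytes_for_level_multiplier = 10.
    # A level only helps on the fast device if it fits there entirely.
    base_gb = 0.256
    multiplier = 10
    levels = [base_gb * multiplier**i for i in range(4)]  # L1..L4 in GB

    for n in range(1, len(levels) + 1):
        cumulative = sum(levels[:n])
        print(f"L1..L{n}: {cumulative:7.2f} GB, "
              f"x2 for compaction headroom: {2 * cumulative:7.2f} GB")

    # L1..L3 ~  28 GB -> the ~30 GB figure,  ~57 GB with headroom (~60 GB)
    # L1..L4 ~ 284 GB -> the ~300 GB figure, ~569 GB with headroom (~600 GB)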

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


