Re: Write amplification for CephFS?

I think you will have to do the science there yourself; it depends on
thousands of factors: OSD size, WAL and DB size and placement, whether
the incoming data arrives as many small ops or a few large ones, and
so on. I don't think any single answer would hold across many cases.

Most of us just buy a fast NVMe or SSD drive with a high DWPD rating
and put the WAL/DB on it. My gut feeling is that the WAL and DB see
lots of small writes, so "MB/s" matters less than the total amount of
data written to them, which is what drives wear. Then again, the whole
point of keeping the WAL/DB separate is that the OSD data drive should
last a lot longer, since the fast drive absorbs those small metadata
writes.
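As a back-of-the-envelope sketch (my own illustration, not a Ceph
tool), the raw write amplification from the pool layout alone can be
computed like this. It deliberately ignores WAL/DB traffic, BlueStore
deferred writes, and RocksDB compaction, which are exactly the parts
you would have to measure:

```python
def raw_write_amplification(data_bytes, replicas=None, ec_k=None, ec_m=None):
    """Raw bytes hitting disks for a client write, layout overhead only.

    Ignores WAL/DB traffic, deferred writes, compaction and metadata
    overhead -- those depend on the workload and must be measured.
    Returns (total_bytes_written, amplification_factor).
    """
    if replicas is not None:
        # Replication: each of the N replicas stores a full copy.
        total = data_bytes * replicas
    else:
        # EC k+m: data is split into k equal chunks, and m parity
        # chunks of the same size are added, so total = data * (k+m)/k.
        total = data_bytes * (ec_k + ec_m) / ec_k
    return total, total / data_bytes

gib = 1024 ** 3  # a 1 GiB client write
print(raw_write_amplification(gib, replicas=3))      # 3 GiB total, 3.0x
print(raw_write_amplification(gib, ec_k=8, ec_m=3))  # 1.375 GiB total, 1.375x
```

So on pure layout math, EC 8+3 writes less than half the raw bytes of
repl=3 for the same client data; the open question for wear is how
much the WAL/DB small-write traffic adds on top of that.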


Den mån 30 jan. 2023 kl 15:34 skrev Manuel Holtgrewe <zyklenfrei@xxxxxxxxx>:
>
> OK. How much data will be written to the WAL and elsewhere?
>
> On Mon, Jan 30, 2023 at 3:17 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>>
>> > I'm concerned with the potential increased NVME wear. Assuming writes of multiples of the block size, when I write 1GB of data to the CephFS, how much data is written to the disks?
>>
>> In that case, repl=3 writes the full 1GB to each of three OSDs (3GB
>> total). EC 8+3 splits the 1GB into eight 125M data chunks plus three
>> 125M parity chunks, writing 125M (i.e., 1/8 of a GB) to each of 11
>> (8+3) drives, or 1.375GB total.
>>
>> --
>> May the most significant bit of your life be positive.



-- 
May the most significant bit of your life be positive.



