Re: ec performance in small random io testing

Hi Mark:

We are using bluestore. Let me explain: consider an object whose 4M range
has never been written. If the first write is 4K and falls at the 3M
offset, EC will append it, filling everything from offset 0, so the 4K
write is amplified to 3M+4K. If a second 4K write then falls at the 2M
offset, it will need a read-modify-write and a clone, which is additional
overhead...
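
A back-of-the-envelope model of that first write (a simplified sketch of
the append behaviour described above, not actual Ceph code):

    # Simplified model: a write past the current object size is appended,
    # so everything from the old size up to the write's end gets written.
    MB, KB = 1024 * 1024, 1024

    object_size = 0                        # the 4M range was never written

    # First write: 4K at offset 3M.
    write_off, write_len = 3 * MB, 4 * KB
    appended = (write_off + write_len) - object_size   # the gap is filled too
    object_size = write_off + write_len
    print(f"{appended // KB}K written for {write_len // KB}K of user data")
    # -> 3076K written for 4K of user data

    # A second 4K write at offset 2M now lands inside the appended range,
    # so it is an overwrite: read-modify-write plus a clone for rollback.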

On Saturday, July 22, 2017, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
>
> Hello,
>
> Out of curiosity, were you using filestore or bluestore?
>
> Mark
>
> On 07/21/2017 04:26 AM, zengran zhang wrote:
>>
>> Hi:
>>
>>    we found that the EC pool's IOPS is low in a 4K randwrite test. I
>> think it's because the append op amplifies most random small IOs, and
>> the next small write on the pending area causes a rmw and clone range...
>>
>>     could we change the `append` to a `write`, and change the
>> `truncate` to an `unmap` on rollback?
>>
>>     now a write IO falling on one object splits into at most one append
>> plus one overwrite; if we change to write+overwrite, a write may split
>> into many writes and overwrites, depending on how many holes are in the
>> write range...
>>
>>     UNMAP is not yet implemented; we want to know whether it would be hard to do.
>>
>>     thanks and regards!
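
To make the splitting concrete, here is a toy sketch of the proposed
write+overwrite scheme (a hypothetical model, not Ceph code): one incoming
write is split against the object's allocated extents into plain writes
(for holes) and overwrites (for already-allocated ranges).

    # Hypothetical sketch: split a write against allocated extents.
    # extents: sorted, non-overlapping (start, end) allocated ranges.
    def split_write(extents, off, length):
        segs, pos, end = [], off, off + length
        for s, e in extents:
            if e <= pos or s >= end:
                continue                  # no overlap with this extent
            if s > pos:                   # hole before the extent
                segs.append(("write", pos, s))
                pos = s
            seg_end = min(e, end)         # overlap: must overwrite
            segs.append(("overwrite", pos, seg_end))
            pos = seg_end
        if pos < end:                     # trailing hole
            segs.append(("write", pos, end))
        return segs

    # e.g. with [1M,2M) and [3M,4M) allocated, a write covering [0,4M)
    # splits into: write [0,1M), overwrite [1M,2M), write [2M,3M),
    # overwrite [3M,4M) -- four ops instead of one append + one overwrite.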