Re: Ceph Bluestore OSD CPU utilization

On 07/31/2017 09:29 PM, Jianjian Huo wrote:
> On Sat, Jul 29, 2017 at 8:34 PM, Mark Nelson <mark.a.nelson@xxxxxxxxx> wrote:
>>
>> https://drive.google.com/uc?export=download&id=0B2gTBZrkrnpZbE50QUdtZlBxdFU
> Thanks for sharing this data, Mark.
> From your data of last March, for RBD EC overwrite on NVMe, EC
> sequential writes are faster than 3x replication for all IO sizes,
> including small 4K/16K. Is that right? I am not seeing this on my
> setup (all NVMe drives, 12 of them per node); in my case EC
> sequential writes are 2-3 times slower than 3x. Maybe I have too
> many drives per node?
>

FWIW, we've seen EC random writes be 3x to 4x slower than replication
in terms of IOPS at a 4 KB block size. Similar setup: 10 NVMe disks
per node.
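
For anyone who wants to compare against their own cluster, the kind of
measurement behind these numbers can be reproduced with a small fio job
using the rbd ioengine. This is just a sketch; the pool name, image
name, and client name below are placeholders you would replace with
your own, and the image must already exist on the pool under test:

    [global]
    ioengine=rbd
    clientname=admin      ; placeholder cephx client
    pool=rbd              ; placeholder: EC or 3x replicated pool
    rbdname=bench         ; placeholder: pre-created RBD image
    direct=1
    time_based
    runtime=60

    [randwrite-4k]
    rw=randwrite
    bs=4k
    iodepth=32

Running the same job file once against a replicated pool and once
against an EC pool (changing only pool=) gives a like-for-like IOPS
comparison at the 4 KB block size discussed above.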

Mohamad



