Re: CephFs performance data

Hi Ben,

That was a great presentation you gave this morning, and thank you for
sending out the slides.  Could you also provide the ceph.conf file you
used for the tests?  I believe you mentioned during the presentation
that you increased the buffer size on the FUSE-mounted clients.  If
so, how did you do it, and what setting did you use?

Thanks,
Eric
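
For reference, client-side caching and readahead for CephFS clients are tuned in the [client] section of ceph.conf. The settings below are a hypothetical sketch of that kind of tuning; whether these are the options (or values) Ben actually changed is unknown.

```ini
[client]
# Hypothetical example values -- not confirmed to be Ben's settings.
# Size of the client object cache (default ~200 MiB).
client_oc_size = 209715200
# Maximum client readahead, in bytes (0 = no limit beyond heuristics).
client_readahead_max_bytes = 4194304
```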

On Wed, Aug 3, 2016 at 1:33 PM, Ben England <bengland@xxxxxxxxxx> wrote:
> Thanks Somnath, here are the Cephfs slides from this morning.
>
> https://s3.amazonaws.com/ben.england/Cephfs+performance+results+June+2016.pdf
>
> Let me know if you have trouble reading them.
>
> As for sequential 4-KB I/O performance, I believe the kernel Cephfs client is aggregating reads and writes; otherwise you are quite right that the IOPS level could not be sustained.  This is common practice in other distributed filesystems as well, such as NFS and Gluster.  For RANDOM I/O, coalescing of I/O requests is far less likely.  I have some performance data on these runs that should be useful in confirming the result; I will try to look that up for you.
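
[To illustrate the kind of write coalescing discussed above: this is a minimal, self-contained sketch using Python's standard io module, not CephFS itself. A BufferedWriter plays the role of the client-side cache, batching many small sequential 4 KB writes into far fewer, larger low-level writes.]

```python
import io

class CountingRaw(io.RawIOBase):
    """Raw byte sink that records the size of each low-level write."""
    def __init__(self):
        self.writes = []

    def writable(self):
        return True

    def write(self, b):
        # Each call here models one I/O request actually sent downstream.
        self.writes.append(len(b))
        return len(b)

# 256 sequential 4 KB application writes through a 64 KB buffer,
# analogous to small sequential I/Os passing through a client cache.
raw = CountingRaw()
buf = io.BufferedWriter(raw, buffer_size=64 * 1024)
chunk = b"x" * 4096
for _ in range(256):
    buf.write(chunk)
buf.flush()

print(len(raw.writes), "low-level writes for 256 application writes")
print("largest low-level write:", max(raw.writes), "bytes")
```

With direct I/O the buffering layer is bypassed, so each 4 KB application write would become its own low-level request, which is Somnath's point below about why coalescing should not happen in that case.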
>
> I'm looking forward to hearing other people's results with Cephfs as well.
>
> -ben
>
>
> ----- Original Message -----
>> From: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
>> To: "Ben England (bengland@xxxxxxxxxx)" <bengland@xxxxxxxxxx>
>> Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
>> Sent: Wednesday, August 3, 2016 12:23:21 PM
>> Subject: CephFs performance data
>>
>> Ben,
>> Thanks for the first-cut data on CephFS; it was really helpful.
>> Would you mind sending out the slides?
>> Also, continuing the discussion on the ~1 GB/s 4K SW throughput: if you are
>> running the test with direct I/O, the client Linux system can't coalesce the
>> writes, and I didn't see much coalescing at the OSD servers either. Unless
>> CephFS has some coalescing logic, it would be good to find out how we are
>> getting this higher throughput for 4K. Let me know if I am missing
>> something.
>>
>> Thanks & Regards
>> Somnath
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html