Re: cephfs performance


Ceph version is v11.2, with no special config changes, just some regular
configuration for the filestore backend.

Fio is using the libaio engine, with direct=1 and a 4KB block size.
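
For reference, a fio job file matching that description might look like the
sketch below. The mount path, file size, queue depth, and job count are
assumptions for illustration, not values taken from this thread:

```ini
; Hypothetical fio job approximating the described test:
; libaio, direct I/O, 4KB blocks, multiple jobs against files
; in the CephFS mount. Adjust numjobs to reproduce the scaling test.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
iodepth=16          ; assumed queue depth
runtime=60
time_based=1
directory=/mnt/cephfs   ; assumed FUSE mount point

[workers]
numjobs=8           ; increase to test scaling across threads
size=300m           ; matches the 300MB per-file size mentioned earlier
```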

I will try the kernel client to see if there is any improvement.
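
For anyone following along, the two client options can be mounted roughly as
below. Monitor hostname, port, and paths are placeholders, and the kernel
mount assumes cephx auth with a secret file:

```shell
# FUSE client (what was tested so far); mon address and path are assumed
ceph-fuse -m mon-host:6789 /mnt/cephfs

# Kernel client alternative; requires CephFS support in the running kernel
mount -t ceph mon-host:6789:/ /mnt/cephfs-kernel \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```

The kernel client generally avoids the FUSE context-switch overhead, which is
why it is worth comparing for small-block direct I/O workloads like this one.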

Thanks,
Sheng

On Tue, Jul 18, 2017 at 8:58 AM, Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
> On Mon, Jul 17, 2017 at 10:27 PM, sheng qiu <herbert1984106@xxxxxxxxx> wrote:
>> hi,
>>
>> I am evaluating CephFS performance and seeing unreasonably low
>> performance when multiple threads access the same mount point.
>>
>> there are 10k files (300MB each) in the FUSE mount point. When
>> increasing the number of fio threads accessing those 10k files, the
>> performance does not scale and is bounded at ~40MB/s.
>>
>> the Ceph cluster has 36 OSDs on three high-end servers, each OSD
>> backed by an NVMe drive. There's another server running one MDS
>> process and the monitors.
>>
>> if pushing IO to the cluster directly via the fio rbd engine, it has
>> pretty reasonable performance. Is there any special configuration I
>> should pay attention to when setting up CephFS, or is the FUSE client
>> the problem? Or should I use more MDS daemons?
>>
>> any suggestions would be appreciated.
>
> What version are you running? Any special config option changes?
>
> What I/O engines are you testing with fio? We need more details about
> the testing to give you better feedback.
>
> --
> Patrick Donnelly
--


