Re: slow mds requests with random read test

Thank you, Patrick, for the help.
The random write tests perform well enough, though. I wonder why the read
test is so poor with the same configuration (read bandwidth of about
15 MB/s vs. 400 MB/s for writes). In particular, the slow requests in the
logs do not appear related to the test ops at all. Could it be something in
the CephFS kernel client?

Any other thoughts?
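
In case it is useful, this is roughly what I plan to check next to see
whether the slow requests line up with the fio ops (a rough sketch only; it
assumes debugfs is mounted on the node holding the kernel mount, and that
the ceph daemon command is run on the host of the MDS, here called
mds.<name> as a placeholder):

  # ops currently stuck in the MDS, with their age and flag points
  ceph daemon mds.<name> dump_ops_in_flight

  # requests the kernel client is still waiting on, MDS side and OSD side
  cat /sys/kernel/debug/ceph/*/mdsc
  cat /sys/kernel/debug/ceph/*/osdc

If mdsc/osdc stay mostly empty while fio is crawling, I would read that as
pointing away from the kernel client and back at the OSDs, but please
correct me if that reasoning is off.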

Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote on Wed, May 31, 2023 at 00:58:

> On Tue, May 30, 2023 at 8:42 AM Ben <ruidong.gao@xxxxxxxxx> wrote:
> >
> > Hi,
> >
> > We are running a couple of performance tests on CephFS using fio. fio
> > runs in a k8s pod, and 3 pods run concurrently, all mounting the same
> > PVC backed by a CephFS volume. Here is the command line for random read:
> > fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G
> > -numjobs=5 -runtime=500 -group_reporting -directory=/tmp/cache
> > -name=Rand_Read_Testing_$BUILD_TIMESTAMP
> > The random read performs very slowly. Here is the cluster log from the
> > dashboard:
> > [...]
> > Any suggestions on the problem?
>
> Your random read workload is too extreme for your cluster of OSDs.
> It's causing slow metadata ops for the MDS. To resolve this we would
> normally suggest allocating a set of OSDs on SSDs for use by the
> CephFS metadata pool to isolate the workloads.
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Red Hat Partner Engineer
> IBM, Inc.
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
>
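
If we do go the route suggested above and give the CephFS metadata pool its
own SSD OSDs, my understanding is the change would look roughly like the
following (a rough sketch only; it assumes the metadata pool is named
cephfs_metadata, the SSDs already report the ssd device class, and a
replicated pool with host as the failure domain):

  # CRUSH rule that places data only on OSDs with the ssd device class
  ceph osd crush rule create-replicated replicated_ssd default host ssd

  # switch the metadata pool to that rule; its PGs will move onto the SSDs
  ceph osd pool set cephfs_metadata crush_rule replicated_ssd

  # confirm which rule the pool uses now
  ceph osd pool get cephfs_metadata crush_rule

Please correct me if that is not what was meant.
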
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



