Re: Inaccurate client io stats

I'm using filestore with an SSD journal and 3x replication. I've only noticed the low client IO since the Luminous upgrade; the actual traffic should be much higher. It has never been this low since my Giant deployment (yup, it is a very old cluster).
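
As a rough sanity check (a sketch only: it assumes sda/sdb and sde-sdh on each node are the six OSD data disks, sdc/sdd are the SSD journals, and the other two nodes see similar load), the atop figures quoted below imply far more client writes than ceph -s reports. With the filestore journals on SSD, the spinners should see roughly replication-factor times the client writes:

    # Estimate client write throughput from the per-disk atop figures
    # below (one node) and compare with what "ceph -s" reports.
    data_disk_mbw = [5.6, 4.6, 1.9, 2.3, 2.5, 1.3]  # MBw/s: sdb, sde, sdg, sdf, sdh, sda
    nodes = 3                                       # assume similar load on all 3 nodes
    replication = 3                                 # journal writes land on the SSDs

    raw_writes = sum(data_disk_mbw) * nodes         # ~54.6 MB/s cluster-wide to spinners
    est_client_writes = raw_writes / replication    # ~18 MB/s of implied client writes
    reported = 2677 / 1024                          # ~2.6 MB/s per "ceph -s"
    print(f"estimated ~{est_client_writes:.1f} MB/s vs reported ~{reported:.1f} MB/s")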

Regards,
Horace Ng


----- Original Message -----
From: "John Spray" <jspray@xxxxxxxxxx>
To: "horace" <horace@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, May 11, 2018 7:04:56 PM
Subject: Re:  Inaccurate client io stats

On Fri, May 11, 2018 at 4:51 AM, Horace <horace@xxxxxxxxx> wrote:
> Hi everyone,
>
> I've got a 3-node cluster running without any issues. However, I found that since upgrading to Luminous, the client IO stat is way off from the real figure. I have no idea how to troubleshoot this after going through all the logs. Any help would be appreciated.

The ratio of logical IO (from clients) to raw IO (to disks) depends
on the configuration:
 - Are you using filestore or bluestore?  Any SSD journals?
 - What replication level is in use?  3x?

If you're using filestore with no SSD journals and 3x replication, then
there will be a factor of six amplification between the client IO and
the disk IO.  The cluster IO stats do still look rather low, though...
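
As a sketch of that arithmetic (the 2677 kB/s figure is taken from the ceph -s output below; the rest is the usual filestore accounting, not something measured here):

    # Where the 6x comes from with filestore and no SSD journals:
    # each client write hits every OSD's journal and data partition
    # (2 writes) and is replicated to 3 OSDs.
    writes_per_osd = 2                             # co-located journal + data partition
    replication = 3                                # 3x replicated pools
    amplification = writes_per_osd * replication   # = 6

    reported_client_kb = 2677                      # client write rate from "ceph -s"
    raw_kb = reported_client_kb * amplification
    print(f"{reported_client_kb} kB/s client writes -> ~{raw_kb / 1024:.0f} MB/s raw disk writes")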

John

> I've got more than 10 client hosts connecting to the cluster, running around 300 VMs.
>
> ceph version 12.2.4
>
> #ceph -s
>
>   cluster:
>     id:     xxxxxxxxxxxxxxx
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum ceph0,ceph1,ceph2
>     mgr: ceph1(active), standbys: ceph0, ceph2
>     osd: 24 osds: 24 up, 24 in
>     rgw: 1 daemon active
>
>   data:
>     pools:   17 pools, 956 pgs
>     objects: 4225k objects, 14495 GB
>     usage:   43424 GB used, 16231 GB / 59656 GB avail
>     pgs:     956 active+clean
>
>   io:
>     client:   123 kB/s rd, 2677 kB/s wr, 38 op/s rd, 278 op/s wr
>
> (at one of the node)
> #atop
>
> DSK |          sdb | busy     42% | read     268 | write    519 |  KiB/w    109 | MBr/s    2.4 | MBw/s    5.6 | avio 5.26 ms |
> DSK |          sde | busy     26% | read     129 | write    313 |  KiB/w    150 | MBr/s    0.7 | MBw/s    4.6 | avio 5.94 ms |
> DSK |          sdg | busy     24% | read      90 | write    230 |  KiB/w     86 | MBr/s    0.5 | MBw/s    1.9 | avio 7.50 ms |
> DSK |          sdf | busy     21% | read     109 | write    148 |  KiB/w    162 | MBr/s    0.8 | MBw/s    2.3 | avio 8.12 ms |
> DSK |          sdh | busy     19% | read     100 | write    221 |  KiB/w    118 | MBr/s    0.5 | MBw/s    2.5 | avio 5.78 ms |
> DSK |          sda | busy     18% | read     170 | write    163 |  KiB/w     83 | MBr/s    1.6 | MBw/s    1.3 | avio 5.35 ms |
> DSK |          sdc | busy      3% | read       0 | write   1545 |  KiB/w     58 | MBr/s    0.0 | MBw/s    8.8 | avio 0.21 ms |
> DSK |          sdd | busy      3% | read       0 | write   1195 |  KiB/w     57 | MBr/s    0.0 | MBw/s    6.7 | avio 0.24 ms |
>
> Regards,
> Horace Ng
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


