Re: Reply: Re: why is my "fs_commit_latency" so high? Is it normal?

I'm not sure what's normal when you're sharing the journal and the data on the same disk drive. You might do better if you partition the drive and put the journals on an unformatted partition; obviously providing each journal its own spindle/ssd/whatever would be progressively better.
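For example (the device name below is purely illustrative; substitute whichever unformatted partition you set aside), pointing an OSD's journal at a raw partition is just a per-OSD ceph.conf setting:

[osd.0]
host = storage1
# hypothetical unformatted partition reserved for osd.0's journal
osd journal = /dev/sdb2
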
There was a conversation a few days ago in which someone else saw some pretty high commit latencies for shared-disk systems, but I believe their averages were still a lot lower than yours. Disks and RAID/disk controllers are just wildly variable in how they handle that sort of workload. :(
-Greg

Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sun, Mar 16, 2014 at 10:21 PM, <duan.xufeng@xxxxxxxxxx> wrote:

Hi Gregory,
        The latency numbers are from my Ceph test environment. It has only 1 host with 16 2TB SATA disks (16 OSDs) and a 10Gb/s NIC,
        and the journal and data are on the same disk, which is formatted ext4 (a quick check of the journal path is shown below).
        My cluster config is as follows. Is this normal for this configuration, and if not, how can I improve performance?

Thanks.
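
(With the default layout, the colocated journal shows up as a file inside each OSD's data directory; assuming standard paths, you can confirm with:)

[root@storage1 ~]# ls -l /var/lib/ceph/osd/ceph-0/journal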

[root@storage1 ~]# ceph -s
    cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
     health HEALTH_OK
     monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch 1, quorum 0 storage1
     osdmap e373: 16 osds: 16 up, 16 in
      pgmap v3515: 1024 pgs, 1 pools, 15635 MB data, 3946 objects
            87220 MB used, 27764 GB / 29340 GB avail
                1024 active+clean

[root@storage1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      16      root default
-2      16              host storage1
0       1                       osd.0   up      1
1       1                       osd.1   up      1
2       1                       osd.2   up      1
3       1                       osd.3   up      1
4       1                       osd.4   up      1
5       1                       osd.5   up      1
6       1                       osd.6   up      1
7       1                       osd.7   up      1
8       1                       osd.8   up      1
9       1                       osd.9   up      1
10      1                       osd.10  up      1
11      1                       osd.11  up      1
12      1                       osd.12  up      1
13      1                       osd.13  up      1
14      1                       osd.14  up      1
15      1                       osd.15  up      1

[root@storage1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 3429fd17-4a92-4d3b-a7fa-04adedb0da82
public network = 193.168.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true

[mon]
debug paxos = 0/5

[mon.storage1]
host = storage1
mon addr = 193.168.1.100:6789

[osd.0]
host = storage1

[osd.1]
host = storage1

[osd.2]
host = storage1

[osd.3]
host = storage1

[osd.4]
host = storage1

[osd.5]
host = storage1

[osd.6]
host = storage1

[osd.7]
host = storage1

[osd.8]
host = storage1

[osd.9]
host = storage1

[osd.10]
host = storage1

[osd.11]
host = storage1

[osd.12]
host = storage1

[osd.13]
host = storage1

[osd.14]
host = storage1

[osd.15]
host = storage1


Re: why is my "fs_commit_latency" so high? Is it normal?

Gregory Farnum   To: duan.xufeng
2014/03/17 13:08

Cc: "ceph-users@xxxxxxxxxxxxxx"

That seems a little high; how do you have your system configured? That
latency is how long it takes for the hard drive to durably write out
something to the journal.
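If you want to see that number broken down per OSD, the filestore journal counters on the admin socket are worth a look (a sketch, assuming the default socket path on the OSD host):

# dump osd.0's perf counters and pull out the journal latency entry
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | python -mjson.tool | grep -A 2 journal_latency
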
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sun, Mar 16, 2014 at 9:59 PM,  <duan.xufeng@xxxxxxxxxx> wrote:
>
> [root@storage1 ~]# ceph osd perf
> osdid fs_commit_latency(ms) fs_apply_latency(ms)
>     0                   149                   52
>     1                   201                   61
>     2                   176                  166
>     3                   240                   57
>     4                   219                   49
>     5                   167                   56
>     6                   201                   54
>     7                   175                  188
>     8                   192                  124
>     9                   367                  193
>    10                   343                  160
>    11                   183                  110
>    12                   158                  143
>    13                   267                  147
>    14                   150                  155
>    15                   159                   54


