A 4k randread performance question about BlueStore

Hi all,

I am using Ceph 12.2.5 in a cluster of 6 nodes with 15 SAS OSDs per
node. I created three 100 GB NBD devices on the cluster and ran two
fio jobs to put some write pressure on it, with the following
commands:
fio --filename=/dev/nbd1 -iodepth=8 -rw=randwrite -ioengine=libaio
-bs=4k -size=100G -thread -numjobs=1 -group_reporting -direct=1
-name=write_1M --runtime=60000
fio --filename=/dev/nbd2 -iodepth=8 -rw=randwrite -ioengine=libaio
-bs=4k -size=100G -thread -numjobs=1 -group_reporting -direct=1
-name=write_1M --runtime=60000
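
While these two write jobs run, a quick way to see the pressure they
put on the OSDs (just the standard per-OSD latency listing, added
here for context) is:

ceph osd perf    # per-OSD commit/apply latency in ms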

Then I ran a third, rate-limited fio job:
fio --filename=/dev/nbd3 -iodepth=1 -rate_iops=20 -rw=randread
-ioengine=libaio -bs=4k -size=100G -thread -numjobs=1 -group_reporting
-direct=1 -name=write_1M --runtime=60000

I would expect rate_iops=20 to be a very light load, but the nbd3
device's util stays at 100% the whole time.
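
For what it's worth, here is my back-of-the-envelope for that 100%
util (my own reasoning, not fio output): with iodepth=1,

    util ~= IOPS x avg latency = 20 x latency

so util reaches 100% as soon as a single read takes 1/20 s = 50 ms,
and with reads taking ~2.4 s (see below) the job cannot even sustain
20 IOPS.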
So I checked the slow ops on the OSD admin socket with
"ceph daemon osd.87 dump_historic_ops_by_duration"; one entry looks
like this:
"description": "osd_op(client.1833906.0:12612752 13.5b2
13:4dbdb965:::rbd_data.968ca6b8b4567.0000000000000c39:head [read
819200~4096] sn
apc 0=[] ondisk+read+known_if_redirected e11516)",
            "initiated_at": "2018-12-05 15:55:28.887058",
            "age": 58.740290,
            "duration": 2.411657,
            "type_data": {
                "flag_point": "started",
                "client_info": {
                    "client": "client.1833906",
                    "client_addr": "10.182.24.92:0/2707867326",
                    "tid": 12612752
                },
                "events": [
                    {
                        "time": "2018-12-05 15:55:28.887058",
                        "event": "initiated"
                    },
                    {
                        "time": "2018-12-05 15:55:28.887094",
                        "event": "queued_for_pg"
                    },
                    {
                        "time": "2018-12-05 15:55:28.887114",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2018-12-05 15:55:28.887142",
                        "event": "started"
                    },
                    {
                        "time": "2018-12-05 15:55:31.298714",
                        "event": "done"
                    }
                ]
            }
        },

The whole 2.4 s is spent between the "started" and "done" events,
i.e. inside the BlueStore read itself. So I turned on the BlueStore
debug log and found the matching read:
2018-12-05 14:12:37.143892 7f892dc16700 15
bluestore(/var/lib/ceph/osd/ceph-87) read 13.5b2_head
#13:4dbdb965:::rbd_data.968ca6b8b4567.0000000000000c39:head#
0xd5000~1000
2018-12-05 14:12:37.155899 7f892dc16700 10
bluestore(/var/lib/ceph/osd/ceph-87) read 13.5b2_head
#13:4dbdb965:::rbd_data.968ca6b8b4567.0000000000000c39:head#
0xd5000~1000 = 4096
2018-12-05 15:55:28.887147 7f8935425700 15
bluestore(/var/lib/ceph/osd/ceph-87) read 13.5b2_head
#13:4dbdb965:::rbd_data.968ca6b8b4567.0000000000000c39:head#
0xc8000~1000
2018-12-05 15:55:31.298666 7f8935425700 10
bluestore(/var/lib/ceph/osd/ceph-87) read 13.5b2_head
#13:4dbdb965:::rbd_data.968ca6b8b4567.0000000000000c39:head#
0xc8000~1000 = 4096

The first read above (0xd5000~1000) completed in about 12 ms, but the
second (0xc8000~1000) took 2.4 s. Why does a BlueStore object read
cost 2.4 s?
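
In case it is relevant, a check I can run next (my own suggestion;
the device name below is a placeholder) is to watch per-disk read
latency on the OSD node while the two randwrite jobs are running:

iostat -x 1 /dev/sdX    # sdX = the SAS disk behind osd.87; if r_await
                        # is in the hundreds of ms or more, the 4k
                        # reads are simply queueing behind the random
                        # writes on the spindle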



