ceph luminous - ceph tell osd bench performance

Hi,

For every consecutive run of the "ceph tell osd.x bench" command
I get different, and much worse, results.

Is this expected? If not, what could cause it?

The OSD was created with the following command (/dev/sda is an enterprise-class SSD):

ceph-deploy osd create --zap-disk --bluestore osd01:sdc --block-db /dev/sda --block-wal /dev/sda
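
To make the trend easier to see, the bench can be looped and reduced to the throughput figure alone. A quick sketch (assumes jq is installed; it is not part of Ceph):

# run the bench ten times, printing only bytes_per_sec from each JSON result
for i in $(seq 1 10); do
    ceph tell osd.0 bench | jq .bytes_per_sec
done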

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 440630335
}

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 313287177
}

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 251350160
}

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 244450342
}

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 253622108
}

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 253355474
}

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 252890400
}


[root@osd01 ~]# megacli -LDGetProp  -DskCache -L3 -a0

Adapter 0-VD 3(target id: 3): Disk Write Cache : Enabled

Exit Code: 0x00
[root@osd01 ~]# megacli -LDGetProp  -Cache -L3 -a0

Adapter 0-VD 3(target id: 3): Cache Policy:WriteBack, ReadAdaptive, Cached, Write Cache OK if bad BBU
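
To rule the controller cache out, one test would be to switch the VD to write-through and disable the drives' own write cache before re-running the bench. A sketch (option spelling can vary between MegaCLI versions, so verify with megacli -h first):

# set VD 3 on adapter 0 to write-through
megacli -LDSetProp WT -L3 -a0
# disable the on-disk write cache for that VD
megacli -LDSetProp -DisDskCache -L3 -a0
# re-run the bench and compare
ceph tell osd.0 bench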

[root@osd01 ~]# mount
/dev/sdd1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,nodiratime,swalloc,attr2,largeio,inode64,allocsize=4096k,logbufs=8,logbsize=256k,noquota)

[root@osd01 ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdf      8:80   0  59.8G  0 disk
├─sdf5   8:85   0    37G  0 part /var
├─sdf3   8:83   0     6G  0 part [SWAP]
├─sdf1   8:81   0     1G  0 part /boot
├─sdf4   8:84   0     1K  0 part
└─sdf2   8:82   0  15.5G  0 part /
sdd      8:48   0 558.4G  0 disk
├─sdd2   8:50   0 558.3G  0 part
└─sdd1   8:49   0   100M  0 part /var/lib/ceph/osd/ceph-0
sdb      8:16   0 558.4G  0 disk
sr0     11:0    1  1024M  0 rom
sde      8:64   0 558.4G  0 disk
sdc      8:32   0 558.4G  0 disk
├─sdc2   8:34   0 558.3G  0 part
└─sdc1   8:33   0   100M  0 part /var/lib/ceph/osd/ceph-3
sda      8:0    0   372G  0 disk
├─sda4   8:4    0     1G  0 part
├─sda2   8:2    0     1G  0 part
├─sda3   8:3    0    30G  0 part
└─sda1   8:1    0    30G  0 part
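
For completeness, the DB/WAL placement that ceph-deploy set up on /dev/sda can be checked from the OSD's data directory (assuming the default mount path shown above):

# the block, block.db and block.wal symlinks show where each component lives
ls -l /var/lib/ceph/osd/ceph-0/block*
# show-label prints the bluestore label recorded on the main device
ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block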
