Ceph Performance puzzle.


Hope someone can shed some light; I'm not able to reason out what I am seeing.
OSD bench shows bandwidth increasing with block size,
while librados bench shows bandwidth falling off after 16k.
I have 3 SSD OSDs in the SSD pool.

vjujjuri@wsl6:~$ iperf -csl2 -P 16
 [SUM]  0.0-10.1 sec  1.11 GBytes   945 Mbits/sec
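For reference, here is the iperf figure converted into the MB/s units that rados bench reports (a quick back-of-the-envelope conversion, assuming iperf's usual decimal megabits and rados bench's MiB-based MB):

```python
# Convert the iperf result (945 Mbits/sec) into the MB/s units that
# rados bench reports, to see the client's network ceiling.
# iperf reports decimal megabits; rados bench reports MiB-based MB/s.
bits_per_sec = 945e6
mb_per_sec = bits_per_sec / 8 / 1048576
print(f"network ceiling ~= {mb_per_sec:.1f} MB/s")  # ~112.7 MB/s
```

So the 1 Gbit link caps a single client at roughly 112 MB/s, well above the rados bench numbers below.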

vjujjuri@wsl13:/media/data$ ceph --version
ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)


vjujjuri@wsl13:/media/data$ ceph tell osd.0 bench 10485760 8192
{
    "bytes_written": 10485760,
    "blocksize": 8192,
    "bytes_per_sec": 47176008.000000
}

vjujjuri@wsl13:/media/data$ ceph tell osd.0 bench 10485760 16384
{
    "bytes_written": 10485760,
    "blocksize": 16384,
    "bytes_per_sec": 109491958.000000
}

vjujjuri@wsl13:/media/data$ ceph tell osd.0 bench 10485760 32768
{
    "bytes_written": 10485760,
    "blocksize": 32768,
    "bytes_per_sec": 242963276.000000
}
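To make the osd bench output directly comparable with the rados bench output below, here is the same data converted from bytes_per_sec into MB/s (using 1 MB = 1048576 bytes, matching rados bench's reporting):

```python
# Put the osd bench bytes_per_sec numbers into the same MB/s units
# that rados bench reports (1 MB = 1048576 bytes).
results = {8192: 47176008, 16384: 109491958, 32768: 242963276}
for bs, bps in results.items():
    print(f"blocksize {bs:6d}: {bps / 1048576:6.1f} MB/s")
# blocksize   8192:   45.0 MB/s
# blocksize  16384:  104.4 MB/s
# blocksize  32768:  231.7 MB/s
```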

Whereas rados bench shows:
vjujjuri@wsl13:/media/data$ rados bench -p sfdc_ssd -b 8192 10 write
--no-cleanup
Bandwidth (MB/sec):     25.878

vjujjuri@wsl13:/media/data$ rados bench -p sfdc_ssd -b 16384 10 write
--no-cleanup
Bandwidth (MB/sec):     48.425

vjujjuri@wsl13:/media/data$ rados bench -p sfdc_ssd -b 32768 10 write
--no-cleanup
Bandwidth (MB/sec):     35.750
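One way to look at the drop after 16k, assuming the runs used rados bench's default of 16 concurrent ops (`-t 16`, an assumption since the flag isn't shown above): convert each bandwidth into ops/sec and the implied average per-op latency. This is back-of-the-envelope arithmetic, not a measured latency.

```python
# Implied IOPS and average per-op latency from the rados bench numbers,
# assuming the rados bench default of 16 ops in flight (-t 16).
CONCURRENCY = 16  # rados bench default; an assumption for these runs
runs = {8192: 25.878, 16384: 48.425, 32768: 35.750}  # blocksize -> MB/s
for bs, mbps in runs.items():
    iops = mbps * 1048576 / bs
    latency_ms = CONCURRENCY / iops * 1000
    print(f"blocksize {bs:6d}: {iops:7.0f} ops/s, ~{latency_ms:.1f} ms/op")
# blocksize   8192:    3312 ops/s, ~4.8 ms/op
# blocksize  16384:    3099 ops/s, ~5.2 ms/op
# blocksize  32768:    1144 ops/s, ~14.0 ms/op
```

If the implied per-op latency really does jump from ~5 ms to ~14 ms between 16k and 32k, that would point at per-write round-trip cost over the 1 Gbit client link (plus replication) rather than raw OSD throughput, which osd bench shows scaling fine; worth re-running with different `-t` values to check.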

Thanks in advance


--
Jvrao
---
First they ignore you, then they laugh at you, then they fight you,
then you win. - Mahatma Gandhi
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html