Re: full ssd setup preliminary hammer bench

>> From the version number it looks buggy. I'm really interested in what fixed the issue for you.

I'll test with a Debian client on my new hardware to compare.

Currently, the client differences vs the previous test are:

- CentOS 7.1 vs Debian Wheezy
- librbd hammer vs giant
- CPU E5-2687W v3 @ 3.1GHz vs CPU E5-2603 v2 @ 1.80GHz
- 10GbE network with Mellanox ConnectX-3 vs 1GbE network with Intel e1000


I have just done a 4K randread bench, without data in the OSD buffer cache;
it seems to use a lot more CPU on the OSD side.
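
For reference, one way (not necessarily what was done here) to make sure such a run starts with a cold page cache is to flush and drop caches on the OSD host right before launching fio:

sync                                  # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache + dentries/inodes (needs root on the OSD node)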

iops: 43K
cpu osd server: 75.0% idle (vs 89% idle with data in the buffer cache).

So around twice as much CPU (roughly 25% busy vs 11% busy).
I'll try to patch gperftools to see if it improves CPU usage on the OSD side.
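
A minimal sketch of the usual tcmalloc-side knob, assuming (and this is exactly what needs verifying) that the installed gperftools build actually honors it:

# bump tcmalloc's aggregate thread cache from the 32MB default to 128MB
export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
# start the OSD in the foreground from this shell so it inherits the variable (quick test only)
ceph-osd -i 0 -f

If idle time on the OSD node recovers with this set, the extra CPU is very likely the thread-cache behaviour rather than the OSD code itself.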

----- Original Message -----
From: "Stefan Priebe" <s.priebe@xxxxxxxxxxxx>
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "Mark Nelson" <mnelson@xxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Saturday, April 18, 2015 07:28:48
Subject: Re: full ssd setup preliminary hammer bench

On 18.04.2015 at 07:24, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote: 

>>> any idea whether this might be the tcmalloc bug? 
> 
> I still don't know whether the CentOS/RedHat packages also have the bug or not. 
> gperftools.x86_64 2.1-1.el7 

From the version number it looks buggy. I'm really interested in what fixed the issue for you. 

> 
> 
> 
> 
> ----- Original Message ----- 
> From: "Stefan Priebe" <s.priebe@xxxxxxxxxxxx> 
> To: "aderumier" <aderumier@xxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx> 
> Sent: Friday, April 17, 2015 20:57:42 
> Subject: Re: full ssd setup preliminary hammer bench 
> 
>> On 17.04.2015 at 17:37, Alexandre DERUMIER wrote: 
>> Hi Mark, 
>> 
>> I finally got my hardware for my production full ssd cluster. 
>> 
>> Here a first preliminary bench. (1osd). 
>> 
>> I got around 45K iops with 4K randread on a small 10GB rbd volume. 
>> 
>> 
>> I'm pretty happy because I no longer see a huge CPU difference between krbd and librbd. 
>> In my previous bench I was using Debian Wheezy as the client; 
>> now it's CentOS 7.1, so maybe something is different (glibc, ...). 
> 
> any idea whether this might be the tcmalloc bug? 
> 
>> 
>> I'm planning to do a big benchmark of CentOS vs Ubuntu vs Debian, client and server, to compare. 
>> I have 18 SSD OSDs for the benchmarks. 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> results : rand 4K : 1 osd 
>> ------------------------- 
>> 
>> fio + librbd: 
>> ------------ 
>> iops: 45.1K 
>> 
>> clat percentiles (usec): 
>> | 1.00th=[ 358], 5.00th=[ 406], 10.00th=[ 446], 20.00th=[ 556], 
>> | 30.00th=[ 676], 40.00th=[ 1048], 50.00th=[ 1192], 60.00th=[ 1304], 
>> | 70.00th=[ 1400], 80.00th=[ 1496], 90.00th=[ 1624], 95.00th=[ 1720], 
>> | 99.00th=[ 1880], 99.50th=[ 1928], 99.90th=[ 2064], 99.95th=[ 2128], 
>> | 99.99th=[ 2512] 
>> 
>> cpu server: 89.1% idle 
>> cpu client: 92.5% idle 
>> 
>> fio + krbd 
>> ---------- 
>> iops: 47.5K 
>> 
>> clat percentiles (usec): 
>> | 1.00th=[ 620], 5.00th=[ 636], 10.00th=[ 644], 20.00th=[ 652], 
>> | 30.00th=[ 668], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 692], 
>> | 70.00th=[ 708], 80.00th=[ 724], 90.00th=[ 756], 95.00th=[ 820], 
>> | 99.00th=[ 1004], 99.50th=[ 1032], 99.90th=[ 1144], 99.95th=[ 1448], 
>> | 99.99th=[ 2224] 
>> 
>> cpu server: 92.4% idle 
>> cpu client: 96.8% idle 
>> 
>> 
>> 
>> 
>> hardware (ceph node and client node): 
>> ------------------------------------ 
>> ceph: hammer 
>> os: CentOS 7.1 
>> 2 x 10-core Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz 
>> 64GB RAM 
>> 2 x Intel S3700 100GB: RAID1: OS + monitor 
>> 6 x Intel S3500 160GB: OSDs 
>> 2 x 10Gb Mellanox ConnectX-3 (LACP) 
>> 
>> network 
>> ------- 
>> Mellanox SX1012 with breakout cables (10Gb) 
>> 
>> 
>> centos tuning: 
>> -------------- 
>> - noop scheduler 
>> - tuned-adm profile latency-performance 
>> 
>> ceph.conf 
>> --------- 
>> auth_cluster_required = cephx 
>> auth_service_required = cephx 
>> auth_client_required = cephx 
>> filestore_xattr_use_omap = true 
>> 
>> 
>> osd pool default min size = 1 
>> 
>> debug lockdep = 0/0 
>> debug context = 0/0 
>> debug crush = 0/0 
>> debug buffer = 0/0 
>> debug timer = 0/0 
>> debug journaler = 0/0 
>> debug osd = 0/0 
>> debug optracker = 0/0 
>> debug objclass = 0/0 
>> debug filestore = 0/0 
>> debug journal = 0/0 
>> debug ms = 0/0 
>> debug monc = 0/0 
>> debug tp = 0/0 
>> debug auth = 0/0 
>> debug finisher = 0/0 
>> debug heartbeatmap = 0/0 
>> debug perfcounter = 0/0 
>> debug asok = 0/0 
>> debug throttle = 0/0 
>> 
>> osd_op_threads = 5 
>> filestore_op_threads = 4 
>> 
>> 
>> osd_op_num_threads_per_shard = 1 
>> osd_op_num_shards = 10 
>> filestore_fd_cache_size = 64 
>> filestore_fd_cache_shards = 32 
>> ms_nocrc = true 
>> ms_dispatch_throttle_bytes = 0 
>> 
>> cephx sign messages = false 
>> cephx require signatures = false 
>> 
>> [client] 
>> rbd_cache = false 
>> 
>> 
>> 
>> 
>> 
>> rand 4K : rbd volume size: 10GB (data in osd node buffer - no access to disk) 
>> ------------------------------------------------------------------------------ 
>> fio + librbd 
>> ------------ 
>> [global] 
>> ioengine=rbd 
>> clientname=admin 
>> pool=pooltest 
>> rbdname=rbdtest 
>> invalidate=0 
>> rw=randread 
>> direct=1 
>> bs=4k 
>> numjobs=2 
>> group_reporting=1 
>> iodepth=32 
>> 
>> 
>> 
>> fio + krbd 
>> ----------- 
>> [global] 
>> ioengine=aio 
>> invalidate=1 # mandatory 
>> rw=randread 
>> bs=4K 
>> direct=1 
>> numjobs=2 
>> group_reporting=1 
>> size=10G 
>> 
>> iodepth=32 
>> filename=/dev/rbd0 (noop scheduler) 
>> 
>> 
>> 
>> 
>> 
>> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




