Re: Cephfs meta data pool to ssd and measuring performance difference

Try IOR mdtest for metadata performance. 
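
Something along these lines as a rough starting point (assuming mdtest from the IOR suite is built with MPI; the mount point /mnt/cephfs and the numbers are just examples):

# 4 MPI ranks, 3 iterations, 10000 items per rank, each rank in its own subdir
mpirun -np 4 mdtest -i 3 -n 10000 -u -d /mnt/cephfs/mdtest

It reports creation/stat/removal rates for files and directories, which is mostly MDS (and therefore metadata pool) traffic.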


From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
Sent: Friday, 3 August 2018 7:49:13 PM
To: dcsysengineer
Cc: ceph-users
Subject: Re: Cephfs meta data pool to ssd and measuring performance difference
 

I have moved the pool, but the strange thing is that if I do something like this:

for object in `cat out`; do rados -p fs_meta get $object /dev/null ;
done
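
(out is simply a listing of the object names in the pool, e.g. from:
rados -p fs_meta ls > out)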

I do not see any activity on the ssd drives with something like dstat (checked sdh on all nodes):

net/eth4.60-net/eth4.52 --dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sde-----dsk/sdf-----dsk/sdg-----dsk/sdh-----dsk/sdi--
recv send: recv send| read writ: read writ: read writ: read writ: read writ: read writ: read writ: read writ: read writ
0 0 : 0 0 | 415B 201k:3264k 1358k:4708k 3487k:5779k 3046k: 13M 6886k:9055B 487k:1118k 836k: 327k 210k:3497k 1569k
154k 154k: 242k 336k| 0 0 : 0 0 : 0 12k: 0 0 : 0 0 : 0 0 : 0 0 : 0 0 : 0 0
103k 92k: 147k 199k| 0 164k: 0 0 : 0 32k: 0 0 :8192B 12k: 0 0 : 0 0 : 0 0 : 0 0
96k 124k: 108k 96k| 0 4096B: 0 0 :4096B 20k: 0 0 :4096B 12k: 0 8192B: 0 0 : 0 0 : 0 0
175k 375k: 330k 266k| 0 69k: 0 0 : 0 0 : 0 0 : 0 0 :8192B 136k: 0 0 : 0 0 : 0 0
133k 102k: 124k 103k| 0 0 : 0 0 : 0 76k: 0 0 : 0 32k: 0 0 : 0 0 : 0 0 : 0 0
350k 185k: 318k 1721k| 0 57k: 0 0 : 0 16k: 0 0 : 0 36k:1416k 0 : 0 0 : 0 144k: 0 0
206k 135k: 164k 797k| 0 0 : 0 0 :8192B 44k: 0 0 : 0 28k: 660k 0 : 0 0 :4096B 260k: 0 0
138k 136k: 252k 273k| 0 51k: 0 0 :4096B 16k: 0 0 : 0 0 : 0 0 : 0 0 : 0 0 : 0 0
158k 117k: 436k 369k| 0 0 : 0 0 : 0 0 : 0 20k: 0 0 :4096B 20k: 0 20k: 0 0 : 0 0
146k 106k: 327k 988k| 0 63k: 0 16k: 0 52k: 0 0 : 0 0 : 0 52k: 0 0 : 0 0 : 0 0
77k 74k: 361k 145k| 0 0 : 0 0 : 0 16k: 0 0 : 0 0 : 0 0 : 0 0 : 0 0 : 0 0
186k 149k: 417k 824k| 0 51k: 0 0 : 0 28k: 0 0 : 0 28k: 0 0 : 0 0 : 0 36k: 0 0
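
Maybe relevant here (not sure): reads on a replicated pool normally go to the primary osd only, and these metadata objects are tiny, so they could be served from the osd's cache without dstat showing anything. To double check that the objects really map to the ssd osds, something like this should show the acting sets (using an object name from the out file):

ceph osd map fs_meta $object    # pg and acting osds for one object
ceph pg ls-by-pool fs_meta      # or all pgs of the pool with their acting sets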

But this did show some activity:

[@c01 ~]# ceph osd pool stats | grep fs_meta -A 2
pool fs_meta id 19
client io 0 B/s rd, 17 op/s rd, 0 op/s wr

It took maybe around 20h to move the fs_meta pool (only 200MB, 2483328 objects) from hdd to ssd, but that was maybe also because of some other remapping from one replaced and one added hdd. (I have slow hdd's.)

I did not manage to do a good test, because the results seem to be similar to those before the move. I did not want to create files, because I thought that would involve the fs_data pool too much, which is on my slow hdd's. So I did the readdir and stat tests.

I checked that mds.a was active and limited the cache of mds.a to 1000 inodes (I think) with:
ceph daemon mds.a config set mds_cache_size 1000
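
(If I understood correctly, on luminous the mds cache is limited by memory rather than inode count, so the equivalent would be something like the following; the 128MB is just an example value.)

ceph daemon mds.a config set mds_cache_memory_limit 134217728
ceph mds stat    # shows which mds is active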

Flushed caches on the nodes with:
free && sync && echo 3 > /proc/sys/vm/drop_caches && free

And ran these tests:
python ../smallfile-master/smallfile_cli.py --operation stat --threads 1 --file-size 128 --files-per-dir 50000 --files 500000 --top /home/backup/test/kernel/
python ../smallfile-master/smallfile_cli.py --operation readdir --threads 1 --file-size 128 --files-per-dir 50000 --files 500000 --top /home/backup/test/kernel/

Maybe this is helpful in selecting a better test for your move.


-----Original Message-----
From: David C [mailto:dcsysengineer@xxxxxxxxx]
Sent: Monday, 30 July 2018 14:23
To: Marc Roos
Cc: ceph-users
Subject: Re: Cephfs meta data pool to ssd and measuring performance difference

Something like smallfile perhaps? https://github.com/bengland2/smallfile

Or you could just time creating/reading lots of files.
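
Something quick and dirty like this could already give an indication (paths and counts are just examples):

mkdir -p /mnt/cephfs/bench
# time creating 10000 small files
time bash -c 'for i in $(seq 1 10000); do echo data > /mnt/cephfs/bench/file_$i; done'
# drop caches first, then time stat-ing them all again
time find /mnt/cephfs/bench -type f -exec stat {} + > /dev/null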

With read benching you would want to ensure you've cleared your mds
cache or use a dataset larger than the cache.
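
To get the MDS side cold too (not just the client page cache), shrinking the mds cache limit or restarting the mds is the blunt way, e.g. something like:

systemctl restart ceph-mds@<name>    # unit name depends on how the mds was deployed

I believe newer releases also have a 'cache drop' mds admin socket command, but I'm not sure which version introduced it.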

I'd be interested in seeing your results; I have this on the to-do list myself.

On 25 Jul 2018 15:18, "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx> wrote:




From this thread I learned how to move the metadata pool from the hdd's to the ssd's:
https://www.spinics.net/lists/ceph-users/msg39498.html

ceph osd pool get fs_meta crush_rule
ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd
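
For reference, on luminous such an ssd rule can be created from the device class, roughly like this (root and failure domain here are just the common defaults):

ceph osd crush rule create-replicated replicated_ruleset_ssd default host ssd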

I guess this can be done on a live system?

What would be a good test to show the performance difference between the old hdd and the new ssd?


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




