Hi,
no, not by flushing the ceph journal! I'm talking about the caching Linux
does (the page cache).
If you run free, you can see how much is cached, for example:
# free
              total        used        free      shared  buff/cache   available
Mem:       41189692    16665960     4795700      124780    19728032    28247464
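If you want to see the cache filling up while a bench runs, you can watch it
live (standard procps tools, nothing ceph-specific):

watch -n 1 free -m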
To free the cache (normally not done on production systems):
sync; echo 3 > /proc/sys/vm/drop_caches
Check with free afterwards and then run your (read-only) bench again.
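If you want to script it, something like the following flushes the cache on
every OSD node and reruns the read bench in one pass. This is only a sketch:
the hostnames osd1-osd3 are placeholders for your OSD nodes, and it assumes
passwordless ssh as root; the pool name Data is taken from your bench below.

for host in osd1 osd2 osd3; do
    # drop the page cache on each OSD node (placeholder hostnames)
    ssh root@"$host" 'sync; echo 3 > /proc/sys/vm/drop_caches'
done
# rerun the sequential read bench against the now-cold OSDs
rados bench -p Data 10 seq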
Udo
On 2017-11-20 13:06, Rudi Ahlers wrote:
Hi,
So are you saying this isn't the true speed?
Do I just flush the journal and test again? i.e.
ceph-osd -i osd.0 --flush-journal && ceph-osd -i osd.2 --flush-journal
&& ceph-osd -i osd.3 --flush-journal etc, etc?
On Mon, Nov 20, 2017 at 2:02 PM, <ulembke@xxxxxxxxxxxx> wrote:
Hi Rudi,
On 2017-11-20 11:58, Rudi Ahlers wrote:
...
Some more stats:
root@virt2:~# rados bench -p Data 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0            -           0
    1      16       402       386   1543.69      1544   0.00182802   0.0395421
    2      16       773       757   1513.71      1484   0.00243911   0.0409455
These values are due to cached OSD data on your OSD nodes.
If you flush the cache (on all OSD nodes), your reads will be much
worse, because they will then come from the HDDs.
Udo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com