Re: Ceph performance is too good (impossible..)...

Hi,
when you write from a client, the data is written to one (or more) placement groups in 4 MB chunks. These PGs are written to the journal and to the OSD disk, and as a side effect the data also ends up in the Linux file buffer (page cache) on the OSD node, until the OS needs that memory for something else.

If the client then reads the same data back, the OSD node serves it from the file buffer instead of reading it again from the slow disks. This is why lots of RAM in the OSD nodes speeds up Ceph ;-)
Normally that's nice, but it makes benchmarking difficult.
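One common way to keep this cache out of read benchmarks (a sketch; the device name and sizes are only examples, and dropping caches requires root on each OSD node):

```shell
#!/bin/sh
# On each OSD node: flush dirty pages, then drop the page cache
sync
echo 3 > /proc/sys/vm/drop_caches

# On the client: read with O_DIRECT so the client-side page cache is
# bypassed too (note this does NOT bypass the cache on the OSD nodes,
# which is why the drop above is still needed)
dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct
```

Repeating the drop before every run gives cold-cache numbers instead of measuring RAM speed.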

Udo

Am 2016-12-12 05:51, schrieb V Plus:
Hi Udo,
I am not sure I understood what you said.
Did you mean that the 'dd' command also got cached on the OSD node? Or?


On Sun, Dec 11, 2016 at 10:46 PM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:

Hi,
but I assume you are also measuring cache effects in this scenario - the
OSD nodes have cached the writes in the file buffer
(which is why the latency is so small).

Udo

On 12.12.2016 03:00, V Plus wrote:
> Thanks Somnath!
> As you recommended, I executed:
> dd if=/dev/zero bs=1M count=4096 of=/dev/rbd0
> dd if=/dev/zero bs=1M count=4096 of=/dev/rbd1
>
> Then the output results look much more reasonable!
> Could you tell me why?
>
> Btw, the purpose of my run is to test the performance of RBD in Ceph.
> Does my case mean that before every test, I have to "initialize" all
> the images?
>
> Great thanks!
>
>
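A likely explanation for the "why", not spelled out in the thread: a freshly created RBD image is thin-provisioned, so reads of extents that were never written return zeros without touching any disk, which makes read benchmarks on a new image look impossibly fast. Prefilling with `dd` forces every extent to actually be allocated on the OSDs. A sketch (device name, size, and fio parameters are examples, not from the thread):

```shell
#!/bin/sh
# Prefill the image so every 4 MB RADOS object actually exists on the OSDs
dd if=/dev/zero of=/dev/rbd0 bs=1M count=4096 oflag=direct

# Then benchmark reads with the client page cache bypassed
fio --name=seqread --filename=/dev/rbd0 --rw=read --bs=4M \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based
```

The prefill only needs to be done once per image (not before every run), but cache dropping on the OSD nodes is still needed between runs.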
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


