The benchmark on both nodes is about 30 MB/s, but I can only get 10 MB/s on /dev/rbd0, which is confusing. The performance on the Ceph fs is strongly affected by the buffer cache: if I write twice the size of system memory to the Ceph fs with dd (so the buffer cache effect should be minor), the rate is about 30 MB/s, which matches the single-OSD benchmark very well (see the direct-I/O comparison sketched at the end of this mail):

964 1 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 34.203317 sec at 30657 KB/sec
10.08.11_13:25:41.370628 log 10.08.11_13:25:40.319855 osd0 172.16.12.1:6801/23964 2 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 33.138057 sec at 31642 KB/sec
10.08.11_13:26:32.776497 log 10.08.11_13:27:07.812769 osd1 172.16.12.2:6800/31012 1 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 33.503103 sec at 31297 KB/sec
10.08.11_13:27:10.954734 log 10.08.11_13:27:46.003163 osd1 172.16.12.2:6800/31012 2 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 34.287723 sec at 30581 KB/sec

On Tue, Aug 10, 2010 at 4:27 PM, Wido den Hollander <wido@xxxxxxxxx> wrote:
> Hi,
>
> It might be useful to benchmark your individual OSDs first.
>
> Wiki: http://ceph.newdream.net/wiki/Troubleshooting#OSD_performance
>
> There might be a slow OSD that drags down the whole cluster.
>
> With some tests on the Ceph filesystem I was able to reach speeds of up to
> 150 MB/sec (dd with conv=sync) writing to my Ceph filesystem.
>
> Haven't tried RBD yet on this cluster, but in my previous setup I was
> reaching about 30 ~ 35 MB/sec with RBD.
>
> Wido
>
> On Tue, 2010-08-10 at 10:18 +0200, Ulrich Oelmann wrote:
>> Hi there,
>>
>> I don't think that Xiaoguang saw the speed of 30-60 MB/s because of cache
>> effects, as the speed should then be on the order of at least a few hundred MB/s.
>> Do you agree?
>>
>> Could perhaps someone with a running rbd setup confirm or reject
>> Xiaoguang's measurements?
>>
>> Best regards
>> Ulrich
>>
>>
>> -----Original Message-----
>> From: Haifeng Liu <haifeng@xxxxxxxxxxxxx>
>> Sent: Aug 10, 2010 7:46:52 AM
>> To: "ceph-devel@xxxxxxxxxxxxxxx" <ceph-devel@xxxxxxxxxxxxxxx>
>> Subject: Re: RBD performance is not good
>>
>> >Xiaoguang,
>> >
>> >This should be a normal result – the client's page cache buffered the writes
>> >when you dd onto a Ceph fs, while there is no such buffering when you dd to an
>> >rbd device. What do you think?
>> >
>> >Thanks
>> >-haifeng
>> >
>> >
>> >On 8/10/10 11:23 AM, "Xiaoguang Liu" wrote:
>> >
>> >> on my 2-OSD-node cluster, dd can only reach 13 MB/s on the rbd device. This
>> >> is much lower than what I expected.
>> >> I can get 30-60 MB/s on a ceph filesystem over the same cluster.
>> >>
>> >> any idea?
>> >>
>> >> [root@ceph-1 ~]# dd if=/dev/zero of=/dev/rbd0 bs=1M count=5000
>> >> 5000+0 records in
>> >> 5000+0 records out
>> >> 5242880000 bytes (5.2 GB) copied, 391.508 s, 13.4 MB/s
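
P.S. For a more apples-to-apples comparison I plan to repeat both writes with the page cache taken out of the picture. A rough sketch (the /mnt/ceph mount point, the test file name, and the 2 GB size are just placeholders for my setup):

  # write 2 GB to the Ceph filesystem, bypassing the client page cache
  dd if=/dev/zero of=/mnt/ceph/ddtest bs=1M count=2048 oflag=direct

  # the same write straight to the rbd block device
  dd if=/dev/zero of=/dev/rbd0 bs=1M count=2048 oflag=direct

If oflag=direct turns out not to be supported on the Ceph mount, conv=fdatasync at least forces a flush before dd reports the final rate, so the two numbers should still be comparable.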