@Mark Nelson: thanks for the precision, I'll think about that the next time I build an array. It was RAID 1 with 2 disks (no broken array).

@Plaetinck, Dieter: sorry, I made a little mistake. I was referring to the system cache (page cache), the one which considers a write operation to the storage system complete as soon as the data has been copied into it, and then to the disk write cache, the one embedded in the hard disk drive itself. I'm going to make the sentence clearer and remove the disk write cache part.

@Jerker Nyberg: I took some measurements on each system during a write, but they are more in my head than on paper. As far as I can tell, the commodity cluster was struggling during the writes. The other machines barely showed any load; even when I took all the cores offline except one or two, everything was fine. The CPU load from the OSDs wasn't that high.

@Tommi Virtanen: nice catch! I'm gonna update the article :)

Thank you all for the feedback, I'll try to perform some of the tests you mentioned above :)

On Tue, Aug 28, 2012 at 3:11 PM, Tommi Virtanen <tv@xxxxxxxxxxx> wrote:
> On Mon, Aug 27, 2012 at 1:47 PM, Sébastien Han <han.sebastien@xxxxxxxxx> wrote:
>> For those of you who are interested, I performed several benchmarks of
>> RADOS and RBD on different types of hardware and use cases.
>> You can find my results here:
>> http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/
>
> Nice!
>
> Minor nit: "sudo echo 3 | tee /proc/sys/vm/drop_caches && sudo sync"
> you probably want, say, "echo 3 | sudo tee ... && sync"
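
For reference, the corrected cache-dropping line Tommi is pointing at would look like the sketch below. The problem with the original is that sudo applies to echo, which needs no privilege, while tee is the process that actually writes to /proc and therefore is the one that needs root; sync needs no root either:

    # tee runs under sudo so it can write to /proc; sync needs no root
    echo 3 | sudo tee /proc/sys/vm/drop_caches && sync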
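
About the two caches mentioned above: the disk write cache can also be toggled per drive for a test run. A rough sketch with hdparm (assuming /dev/sdb as an example device, adjust to your setup):

    # Disable the on-drive write cache before the benchmark
    sudo hdparm -W 0 /dev/sdb
    # Re-enable it once the run is done
    sudo hdparm -W 1 /dev/sdb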
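
And for anyone who wants to repeat the core-count experiment, this is roughly how cores can be taken offline through sysfs (cpu0 usually cannot be offlined; the loop is just a sketch, not exactly what I ran):

    # Take every core except cpu0 offline
    for c in /sys/devices/system/cpu/cpu[1-9]*; do
        echo 0 | sudo tee "$c/online"
    done
    # Bring cpu1 back for the 2-core run
    echo 1 | sudo tee /sys/devices/system/cpu/cpu1/online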