Re: production ready?

On 30.10.2012 14:45, Gregory Farnum wrote:
But there's still the problem of slow random-write IOPS. At least I haven't
seen any good benchmarks.

It's not magic. I haven't done extensive testing, but I believe people
see aggregate IOPS of about what you can calculate:
(number of storage disks * IOPS per disk) / (replication level)
The journaling bumps that up a little bit for bursts, of course;
similarly, if you're writing to a brand-new RBD image it can be a bit
slower, since you need to create all the objects as well as write data
to them. You need to architect your storage system to match your
requirements. If you want to run write-heavy databases on RBD, there
are people doing that. They're using SSDs and are very pleased with
the performance. *shrug*
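
Put as a minimal Python sketch of that back-of-the-envelope estimate (the function name and the example numbers are illustrative, not something anyone benchmarked):

def theoretical_iops(num_disks: int, iops_per_disk: int, replication: int) -> float:
    """Rough upper bound on aggregate client IOPS for a replicated pool:
    every client write is committed `replication` times across the OSDs,
    so the raw disk IOPS budget is divided by the replication level."""
    return num_disks * iops_per_disk / replication

# e.g. 12 spinning disks at ~150 IOPS each, replication 3:
print(theoretical_iops(12, 150, 3))  # -> 600.0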

My last test was with 0.49, so I can't speak for 0.52, but as far as I know nothing has changed in this regard.

I had 6 dedicated servers, each with 4x Intel 520 series SSDs running 4 OSDs (one OSD per disk). I had the journal running in a 1 GB tmpfs to be sure it wasn't the bottleneck. Replication was set to 2.

Each SSD is capable of 30,000 random 4k IOPS.

But with RBD I wasn't able to get more than 20,000 IOPS, even though the theoretical aggregate was:
6 dedicated servers * 4 SSDs => 24 OSDs/SSDs; 24 * 30,000 IOPS / replication level 2 => 360,000 IOPS theoretical overall performance

But I didn't get more than 20,000, even with 3.6 GHz Xeon CPUs and dual 10 GbE.
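
The same estimate worked through in Python with the figures above (just arithmetic on the numbers already quoted; nothing new measured):

# Setup above: 6 servers * 4 SSDs = 24 OSDs, 30,000 random-4k IOPS per SSD,
# replication level 2.
disks = 6 * 4
iops_per_disk = 30_000
replication = 2

theoretical = disks * iops_per_disk / replication
observed = 20_000

print(f"theoretical: {theoretical:,.0f} IOPS")           # 360,000 IOPS
print(f"observed:    {observed:,} IOPS "
      f"({observed / theoretical:.1%} of theoretical)")  # about 5.6%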

Greets,
Stefan
--

