Christian,
I think you are going to suffer the effects of spindle contention with
this type of setup. Based on your email and my assumptions, I will use
the following inputs:
- 4 OSDs, each backed by a 12-disk RAID 6 set
- 75 IOPS for each 7200 RPM 3TB drive
- RAID 6 write penalty of 6
- OSD Journal co-located with OSD
- Ceph replication size of 2
4 OSDs * 12 disks * 75 IOPS / 6 (RAID6 write penalty) / 2 (OSD journal hit)
/ 2 (Ceph replication) = 150 writes/second max
4 OSDs * 12 disks * 75 IOPS / 2 (Ceph replication) = 1800 reads/second max
My guess is that 150 writes/second is far lower than what your 500 VMs will
require. After all, this setup will likely give you fewer writes/second
than a single 15K SAS drive. Further, if you need to replace a drive, I
suspect this setup would grind to a halt while the RAID6 set attempts
to rebuild.
On the other hand, if you planned for 48 individual drives with OSD
journals on SSDs, at a typical ratio of perhaps 5:1 or lower of HDDs to
journal SSDs, the calculation would look like:
48 OSDs * 75 IOPS / 2 (Ceph replication) = 1800 writes/second max
48 OSDs * 75 IOPS / 2 (Ceph replication) = 1800 reads/second max
As you can see, I estimate 12x more random writes once you drop RAID6 (a 6x
penalty) and co-located OSD journals (a 2x penalty).
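To make the arithmetic easy to replay with your own numbers, here is a
quick sketch of both estimates in Python. The per-drive IOPS, penalties,
and replication factor are the assumptions listed above, not measured
values:

    #!/usr/bin/env python
    # Back-of-the-envelope IOPS model for the two layouts discussed above.
    DRIVE_IOPS = 75    # assumed for a 7200 RPM 3TB SATA drive
    REPLICATION = 2    # Ceph pool size

    def raid6_layout(osds=4, disks_per_osd=12):
        spindle_iops = osds * disks_per_osd * DRIVE_IOPS
        # RAID6 write penalty (6), co-located journal (2), replication (2)
        writes = spindle_iops // 6 // 2 // REPLICATION
        reads = spindle_iops // REPLICATION
        return writes, reads

    def jbod_layout(osds=48):
        spindle_iops = osds * DRIVE_IOPS
        # journals on SSD, no RAID penalty; only replication applies
        writes = spindle_iops // REPLICATION
        reads = spindle_iops // REPLICATION
        return writes, reads

    print(raid6_layout())  # (150, 1800)
    print(jbod_layout())   # (1800, 1800)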
Plus you'll be able to configure 12x more placement groups for your pools
by going from 4 OSDs to 48 OSDs. That will allow Ceph's pseudo-random
placement to significantly improve the distribution of data and I/O load
across the cluster and decrease the risk of hot-spots.
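For a rough sense of the difference, the usual rule of thumb (around 100
PGs per OSD, divided by the replica count and rounded up to a power of
two; my formula, not something from your mail) gives:

    def pg_count(osds, replicas=2, per_osd=100):
        # ~100 PGs per OSD divided by replicas, rounded up to a power of two
        raw = osds * per_osd // replicas
        pgs = 1
        while pgs < raw:
            pgs *= 2
        return pgs

    print(pg_count(4))   # 256
    print(pg_count(48))  # 4096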
A few other notes:
- You'll certainly want QEMU 1.4.2 or later to get asynchronous I/O for RBD.
- You'll likely want to enable the RBD writeback cache. It helps coalesce
small writes before they hit the disks.
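For reference, enabling the cache is a [client] section setting in
ceph.conf on the hypervisor side. A minimal sketch, with the sizes shown
being the stock defaults rather than tuned values:

    [client]
        rbd cache = true
        rbd cache size = 33554432                  # 32MB per client
        rbd cache max dirty = 25165824             # 24MB dirty before writeback starts
        rbd cache writethrough until flush = true  # stay safe until the guest flushes

Note that with recent QEMU the guest drive's cache=writeback setting is
what actually lets the cache operate in writeback mode.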
Cheers,
Mike
On 12/17/2013 2:44 AM, Christian Balzer wrote:
Hello,
I've been doing a lot of reading and am looking at the following design
for a storage cluster based on Ceph. I will address all the likely
knee-jerk reactions and reasoning below, so hold your guns until you've
read it all. I also have a number of questions I've not yet found the
answers to or been able to determine by experimentation.
Hardware:
2x 4U (can you say Supermicro? ^.^) servers with 24 3.5" hotswap bays, 2
internal OS (journal?) drives, probably Opteron 4300 CPUs (see below),
Areca 1882 controller with 4GB cache, 2 or 3 2-port Infiniband HCAs.
24 3TB HDs (30% of the price of a 4TB one!) in one or two RAID6s, 2 of them
hotspares; a single 22-disk RAID6 leaves 20 data disks, giving us 60TB per
node, and thus with a replication factor of 2 that's also the usable space.
Space for 2 more identical servers if need be.
Network:
Infiniband QDR, 2x 18port switches (interconnected of course), redundant
paths everywhere, including to the clients (compute nodes).
Ceph configuration:
An additional server with a mon, mons also on the 2 storage nodes, and at
least 2 OSDs per node (see below).
This is for a private cloud with about 500 VMs at most. There will be 2
types of VMs: the majority writing a small amount of log chatter to their
volumes, the other type (a few dozen) writing a more substantial data
stream.
I estimate less than 100MB/s of read/writes at full build out, which
should be well within the abilities of this setup.
Now for the rationale behind this design, which goes contrary to what
normal Ceph layouts suggest:
1. Idiot (aka NOC monkey) proof hotswap of disks.
This will be deployed in a remote data center, meaning that qualified
people will not be available locally and thus would have to travel there
each time a disk or two fails.
In short, telling somebody to pull the disk tray with the red flashing LED
and put a new one from the spare pile in there is a lot more likely to
result in success than telling them to pull the 3rd row, 4th column disk
in server 2. ^o^
2. Density, TCO
Ideally I would love to deploy something like this:
http://www.mbx.com/60-drive-4u-storage-server/
but they seem to not really have a complete product description, price
list, etc. ^o^ With a monster like that, I'd be willing to reconsider local
RAIDs and just overspec things so that a LOT of disks can fail before
somebody (with a clue) needs to visit that DC.
However, failing that, the typical approach of using many smaller servers
for OSDs increases the costs and/or reduces density. Replacing the 4U servers
with 2U ones (that hold 12 disks) would require some sort of controller (to
satisfy my #1 requirement) and similar amounts of HCAs per node, clearly
driving the TCO up. 1U servers with typically 4 disks would be even worse.
3. Increased reliability/stability
Failure of a single disk has no impact on the whole cluster, no need for
any CPU/network intensive rebalancing.
Questions/remarks:
Given that there will be redundancy and reliability at the disk level, and
that there will be only 2 storage nodes initially, I'm planning to disable
rebalancing.
Or will Ceph realize that making replicas on the same server won't really
save the day and refrain from doing so?
If more nodes are added later, I will likely set an appropriate full ratio
and activate rebalancing on a permanent basis again (except during planned
maintenance, of course).
My experience tells me that an actual node failure will be due to:
1. Software bugs, kernel or otherwise.
2. Marginal hardware (CPU/memory/mainboard hairline cracks, I've seen it
all)
Actual total loss of power in the DC doesn't worry me, because if that
happens I'm likely under a ton of rubble, this being Japan. ^_^
Given that a RAID6 with just 7 disks connected to an Areca 1882
controller in a different cluster I'm running here gives me about
800MB/s writes and 1GB/s reads, I have a feeling that putting the journal
on SSDs (Intel DC S3700) would be a waste, if not outright harmful.
But I guess I shall determine that by testing; maybe the higher IOPS rate
will still be beneficial.
Since the expected performance of this RAID will be at least double the
bandwidth available on a single IB interface, I'm thinking of splitting it
in half and running an OSD on each half, each bound to a different
interface.
One hopes that nothing in the OSD design stops it from dealing with these
speeds/bandwidths.
The plan is to use Ceph only for RBD, so would "filestore xattr use omap"
really be needed in case tests determine ext4 to be faster than xfs in my
setup?
Given the above configuration, I'm wondering how many CPU cores would be
sufficient in the storage nodes.
Somewhere in the documentation
http://ceph.com/docs/master/start/hardware-recommendations/
is a recommendation for 1GB RAM per 1TB of storage, but later on the same
page we see a storage server example with 36TB and 16GB RAM.
Ideally I would love to use just one 6 or 8 core Opteron 4300 with 32GB of
memory, thus having only one NUMA domain and keeping all the
processes dealing with I/O and interrupts (and there will be lots of them)
on the same CPU. (I very much agree with Kyle Bader's mail about NUMA last
week; I've seen this happen myself.)
According to this:
http://ceph.com/docs/master/rados/configuration/network-config-ref/
a monitor doesn't need to be on the cluster network, only the OSDs do.
However, my initial tests seemed to tell me otherwise. If the mons don't
need to be on the cluster network, I'd consider for the initial deployment
a direct IB interconnect between the 2 storage nodes.
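In that case, a minimal sketch of the networking section for such a split
(addresses invented purely for illustration) would presumably be:

    [global]
        public network = 192.168.0.0/24    # mons, clients and OSDs
        cluster network = 10.0.0.0/24      # OSD replication over the direct IB link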
This actually brings me to my next question: should the cluster network
fail, would the OSDs still continue to function and use the public network
instead?
I hope this wasn't tl;dr. ^o^
Regards,
Christian