Agreed. In an ideal world I would have interleaved all my compute, long-term storage, and POSIX processing. Unfortunately, business doesn't always work out so nicely, so I'm left with buying and building out to match changing needs. In this case we are a small
part of a larger org and have been allocated X racks in the cage, which is at this point landlocked with no room to expand, so it is actual floor space that's limited. Hence the necessity to go as dense as possible when adding any new capacity. Luckily Ceph
is flexible enough to function fine when deployed like an EMC solution; it's just much cheaper and more fun to operate!
Aaron
It's not a requirement to build out homogeneous racks of Ceph gear. Most larger shops don't do that (it creates weird hot spots). If you have 5 racks of gear, you're better off spreading servers across those 5 than concentrating them in just a pair of
racks that are really built up. In Aaron's case, he can easily do that since he's not using a cluster network.
Just be sure to dial in your CRUSH map and failure domains if you only have a pair of installed cabinets.
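For illustration, a minimal CRUSH rule sketch for that two-cabinet situation (the rule name and id are hypothetical; `default` is Ceph's standard root bucket). With only two racks, a rack-level failure domain can't satisfy a 3-replica pool, so the failure domain is dropped to host until more cabinets come online:

```
# Hypothetical rule for a cluster with only two installed racks.
# Replicas spread across distinct hosts rather than distinct racks,
# since size=3 cannot be satisfied by rack-level placement with two racks.
rule replicated_two_racks {
    id 1
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

Once a third rack is installed, changing `type host` to `type rack` restores rack-level fault tolerance; expect data movement when the rule changes.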
Thanks for sharing Christian! It's always good to hear about how others are using and deploying Ceph, while coming to similar and different conclusions.
Also, when you say datacenter space is expensive, are you referring to power or actual floor space? Datacenter space is almost always sold by power, with floor space usually secondary. Are there markets where it's the opposite? If so,
those are ripe for new entrants!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com