Musings

We are looking to deploy Ceph in our environment and I have some musings
that I would like some feedback on. There are concerns about scaling a
single Ceph cluster to the PBs of capacity we would need, so the idea is
to start small, say one Ceph cluster per rack or two, and then
expand/combine clusters into larger systems as we get more comfortable
with it. I'm not sure it is even possible to combine discrete Ceph
clusters. It also seems to make sense to build a CRUSH map now that
defines regions, data centers, sections, rows, racks, and hosts, so that
there is less data migration later, but I'm not sure how a merge would
work.
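For reference, this is roughly the kind of bucket hierarchy I was
picturing, built with the standard crush CLI (names like "dc1" and
"row1" are just placeholders, and "section" isn't a default bucket
type, though the types list in a decompiled CRUSH map can be edited to
add one):

    # create the buckets for each level up front, even if some levels
    # only hold a single bucket for now
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush add-bucket row1 row
    ceph osd crush add-bucket rack1 rack

    # nest them: datacenter under the default root, row under the
    # datacenter, rack under the row, and existing hosts under the rack
    ceph osd crush move dc1 root=default
    ceph osd crush move row1 datacenter=dc1
    ceph osd crush move rack1 row=row1
    ceph osd crush move host1 rack=rack1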

I've also been toying with the idea of an SSD journal per node versus an
SSD cache tier pool versus lots of RAM for cache. Based on the
performance webinar today, it seems that cache misses in the cache pool
cause a lot of writing to the cache pool and severely degrade
performance. I certainly like the idea of a heat map so that a single
read of an entire VM (backup, rsync) won't kill the cache pool.
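To make the comparison concrete, the cache-tier setup I've been
picturing is something like the following (pool names and numbers are
placeholders rather than recommendations, and the cache pool would need
to be created on an SSD-only CRUSH rule first):

    # attach an SSD pool as a writeback cache tier in front of the
    # backing pool and route client I/O through it
    ceph osd tier add rbd rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay rbd rbd-cache

    # hit sets give the tier a record of recent object accesses
    # ("heat"): hourly bloom filters, keeping the last four
    ceph osd pool set rbd-cache hit_set_type bloom
    ceph osd pool set rbd-cache hit_set_count 4
    ceph osd pool set rbd-cache hit_set_period 3600

    # cap the cache so the tiering agent knows when to flush/evict
    ceph osd pool set rbd-cache target_max_bytes 1099511627776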

I've also been bouncing around the idea of getting data locality by
configuring the CRUSH map to keep two of the three replicas within the
same row and the third replica just somewhere else in the data center.
Based on a conversation on IRC a couple of days ago, it seems that this
could work very well if min_size is 2. But the documentation and the
objectives of Ceph seem to indicate that min_size only applies in
degraded situations. During normal operation a write would have to be
acknowledged by all three replicas before being returned to the client;
otherwise it would be eventually consistent rather than strongly
consistent (I do like the idea of eventual consistency for replication,
as long as we can be strongly consistent in some form at the same time,
like 2 out of 3).
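For the placement part at least, the CRUSH rule I had in mind is
something along these lines (a sketch only; the ruleset number and
bucket names are illustrative):

    rule two_in_row {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            # pick two rows, then two hosts in each; with a pool size
            # of 3 this puts two replicas in the first row and the
            # third in the second row
            step choose firstn 2 type row
            step chooseleaf firstn 2 type host
            step emit
    }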

I've read through the online manual, so now I'm looking for personal
perspectives that you may have.

Thanks,
Robert LeBlanc