Re: Applicability and migration path

On Fri, 10 Aug 2018 at 04:33, Matthew Pounsett <matt@xxxxxxxxxxxxx> wrote:

First, in my tests and reading I haven't encountered anything that suggests I should expect problems from using a small number of large file servers in a cluster.  But I recognize that this isn't the preferred configuration, and I'm wondering if I should be worried about any operational issues.  Obviously I/O won't be as good as it could be, but I don't expect it would be worse than software RAID served over NFS.  Is there anything there I've missed?  Eventually the plan will be to swap out the large file servers for a larger number of smaller servers, but that will take years of regular hardware refresh cycles.


Having a few large OSD hosts means that the loss of one (for planned or unplanned maintenance) will have a high impact on the cluster as a whole.
By default Ceph (via CRUSH) will place each replica on a separate host, which is good, but it also means that the loss of a host requires a lot of data to
be shuffled and rebuilt from the other replicas, and that will of course take time.
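
To be concrete (and this is just stock Ceph behaviour, not anything specific to your cluster): the default replicated CRUSH rule uses the host as the
failure domain, which is what forces each copy onto a different box. A decompiled crushmap usually contains something like:

    rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

The "step chooseleaf firstn 0 type host" line is what protects you from a host failure, and it is also what guarantees that recovering from one means
pulling every affected PG back over the network from the remaining hosts.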

Compared to the previous setup, this also adds some inter-host traffic: each write lands on the PG's primary OSD, and that host then replicates it
over the network to X other hosts to reach the required number of replicas. So where raid/zfs/lvm did the copying internally on the same host before,
it will now cross the network once or twice. Not a huge problem, but worth noting. When repairs happen, for example when a host dies, the host-to-host
traffic will be quite large, given the 80-130T described above.
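
To put a rough number on it (my assumptions, not figures from the thread): say a lost host held about 100 TB and backfill runs over a single 10 GbE
link per host. Then, ignoring everything else:

    100 TB            ~ 100 x 10^12 bytes
    10 GbE            ~ 1.25 x 10^9 bytes/s wire speed
    100e12 / 1.25e9   ~ 80,000 s, roughly 22 hours

and that is pure transfer time, before backfill throttling, client traffic and disk overhead are taken into account, so a day or more of reduced
redundancy per lost host is a realistic ballpark.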

I still think Ceph can be a decent solution for your problem, but rolling maintenance on a cluster is easier when the impact of losing a single host is
small, which is what you get when the cluster is made up of many smaller hosts. So, while the above situation is better than what you came from, it
would not be an optimal Ceph setup. Then again, who has an optimal setup anyhow? Everyone would fix some part of their cluster if money and time
were endless.


--
May the most significant bit of your life be positive.
