You can always tell the cluster (ceph osd set noout) not to rebalance if a host crashes, provided your pool size and CRUSH map are "redundant enough".

Wolfgang

On 03/06/2013 09:30 AM, Olivier Bonvalet wrote:
> Hi,
>
> I think it depends on your total number of OSDs.
>
> Take my case: I have 8 OSDs per host, and 5 hosts. If one host
> crashes, I lose 20% of the cluster and have a huge amount of data to
> rebalance.
>
> For fault tolerance, that was not a good idea.
>
>
> On Tuesday, 05 March 2013 at 02:39 -0800, waed Albataineh wrote:
>> Hi there,
>> I believe the Ceph quick start gives us two OSDs; even though it's
>> not recommended, I want to increase that number.
>> My question: will it end badly if I end up with 10 OSDs per host?
>> And to add them I must edit the configuration file, right?
>> Finally, if I finish the installation and then realize I need to
>> change something in the configuration file, is that possible, or
>> will it crash?
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbichler@xxxxxxxxxxxxxxxx
http://www.risc-software.at
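On the question of growing from two OSDs to ten per host: in the configuration style of that era, each OSD daemon gets its own section in ceph.conf. A hypothetical excerpt (hostname and numbering are placeholders, not from the thread):

```ini
; Sketch of /etc/ceph/ceph.conf for one host carrying several OSDs --
; one [osd.N] section, and normally one data disk, per daemon
[osd.0]
    host = node1

[osd.1]
    host = node1

; ... continue up to [osd.9] for 10 OSDs on node1
```

Editing ceph.conf after installation does not crash a running cluster; changes simply take effect when the affected daemons are restarted.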
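The noout flag mentioned above is typically set before planned maintenance and cleared afterwards. A minimal sketch (these must be run with admin credentials against a live cluster, so outputs will vary):

```shell
# Tell the cluster not to mark down OSDs "out" of the CRUSH map,
# so no rebalancing starts while the host is offline
ceph osd set noout

# ... reboot or service the host ...

# Clear the flag so normal failure handling resumes
ceph osd unset noout
```

Note that while noout is set the cluster reports HEALTH_WARN and PGs served by the down OSDs stay degraded, so the flag should not be left set indefinitely.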