Hello,

On Sun, 18 Mar 2018 10:59:15 -0400 Mark Steffen wrote:

> Hello,
>
> I have a Ceph newb question I would appreciate some advice on.
>
> Presently I have 4 hosts in my Ceph cluster, each with 4 480GB eMLC
> drives in them. These 4 hosts have 2 more empty slots each.
>
A lot of the answers would become clearer and more relevant if you could
tell us first of all the exact SSD models (old and new) and the rest of
the cluster HW config (controllers, network).

When I read 480GB, the only DC-level SSDs with 3 DWPD are Samsungs, and
those 3 DWPD may or may not be sufficient for your use case, of course.
I frequently managed to wear out SSDs more during testing and burn-in
(i.e. several RAID rebuilds) than in a year of actual usage.
A full data re-balancing with Ceph (or more than one, depending on how you
bring those new SSDs and hosts online) is a significant write storm.

> Also, I have some new servers that could also become hosts in the
> cluster (I deploy Ceph in a 'hyperconverged' configuration with KVM
> hypervisor; I find that I usually tend to run out of disk and RAM before
> I run out of CPU, so why not make the most of it, at least for now).
>
> The new hosts have only 4 available drive slots each (there are 3 of
> them).
>
> Am I OK (since these are SSDs, so I doubt the major IO bottleneck that I
> undoubtedly would see with spinners) to just go ahead and add an
> additional two 1TB drives to each of the first 4 hosts, as well as put
> 4 x 1TB SSDs in the 3 new hosts? This would give each host a similar
> amount of storage, though an unequal number of OSDs each.
>
Some SSDs tend to react much worse to being written to at full speed than
others, so tuning Ceph not to use all available bandwidth might still be a
good idea.

> Since the failure domain is by host, and the OSDs are SSDs (with 1TB
> drives typically being faster than 480GB drives anyway), is this
> reasonable? Or do I really need to keep the configuration identical
> across the board and just add additional 480GB drives to the new hosts
> and have it all match?
>
Larger SSDs are not always faster (i.e. they do not always have more
parallelism) than smaller ones, thus the question about your exact models.

Having differently sized OSDs is not a problem per se, but it needs a full
understanding of what is going on: your larger OSDs will see twice the
action, so are they a) really twice as fast, or b) is your load never
going to be an issue anyway?

Christian

> I'm also using Luminous/Bluestore, if it matters.
>
> Thanks in advance!
>
> *Mark Steffen*
> *"Don't believe everything you read on the Internet." -Abraham Lincoln*

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Rakuten Communications
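
If you do end up pushing a full re-balance through those SSDs, a minimal
sketch of the usual Luminous-era knobs for keeping backfill from running at
full speed is below; the values (and the norebalance step) are only
illustrative starting points, not a recommendation for this exact hardware:

  # Limit concurrent backfills and recovery ops per OSD (illustrative
  # values; injected at runtime, persist them in ceph.conf if they work out).
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # Keep data from moving while the new OSDs are being created, then let
  # everything rebalance in one controlled pass.
  ceph osd set norebalance
  # ... create the new OSDs on the new drives/hosts here ...
  ceph osd unset norebalance

Watching "ceph -s" and "ceph osd df" while the backfill runs shows whether
the limits can be loosened again once the write load looks sustainable.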
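On the mixed-size question, a quick sanity check is to look at how CRUSH
actually distributes the load and, if the 1TB OSDs really do become the hot
spot, to trade some of their capacity for less traffic by lowering their
CRUSH weight. This is only a sketch: osd.16 and the 0.7 weight below are
made-up examples, not values from this cluster.

  # Per-OSD CRUSH weight, utilisation and PG count; the 1TB OSDs should
  # carry roughly twice the weight (and therefore PGs/IO) of the 480GB ones.
  ceph osd df tree

  # Example only: reduce a (hypothetical) 1TB OSD's weight below its
  # size-based default, accepting that the capacity above the new weight
  # will simply go unused.
  ceph osd crush reweight osd.16 0.7

Lowering the weight wastes capacity, so it only makes sense if the larger
OSDs turn out to be neither "twice as fast" (case a) nor irrelevant to the
actual load (case b).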