On Tue, 22 Jan 2019 at 00:50, Brian Topping <brian.topping@xxxxxxxxx> wrote:
>
> > I've scrounged up 5 old Atom Supermicro nodes and would like to run
> > them 365/7 for limited production as RBD with Bluestore (ideally the
> > latest 13.2.4 Mimic), triple copy redundancy. Underlying OS is a
> > Debian 9 64 bit, minimal install.
>
> The other thing to consider about a lab is "what do you want to learn?"
> If reliability isn't an issue (i.e. you aren't putting your family
> pictures on it), regardless of the cluster technology, you can often
> learn the basics more quickly without the overhead of maintaining
> quorums and all that stuff on day one. So at the risk of being a
> heretic, start small, for instance with a single mon/manager, and add
> more later.

Well, if you start small with one OSD, you are going to run into "the defaults work against you": as soon as you create your first pool, Ceph will want to place three copies on separate hosts. So not only are you trying to get accustomed to Ceph terms and technologies, you are also working against the whole cluster idea by not building a cluster at all, and you will hit problems that regular Ceph admins never see, so the chances of getting help are smaller.

Things like "the OSD pre-allocates so much data that a 10G OSD crashes at start" or "my pool won't go active because my PGs are in a bad state, since I have only one OSD or only one host and didn't change the CRUSH rules" are issues only people who start small will ever run into. Anyone with three or more real hosts with real drives attached will simply never see them.

Telling people to learn clusters by building a non-cluster might be counter-productive. Once you have a working Ceph cluster, you can practice getting it to run on an RPi with a USB stick for a drive, but starting there means fighting two or more unknowns at the same time: Ceph being new to you, and un-clustering a piece of cluster software (and possibly running on non-x86_64 as a third unknown).

-- 
May the most significant bit of your life be positive.
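PS: For anyone who does want a one-host (or even one-OSD) lab despite the above, the knobs in question are the pool replica counts and the CRUSH failure domain. A rough, untested sketch; the rule name "onehost", the pool name "labpool" and the PG count are just example values:

  # Create a replicated CRUSH rule that spreads copies across OSDs
  # instead of across hosts ("onehost" is an example name).
  ceph osd crush rule create-replicated onehost default osd

  # Create a pool that uses that rule ("labpool" and 64 PGs are examples).
  ceph osd pool create labpool 64 64 replicated onehost

  # Lower the replica counts so the pool can go active on a single box.
  # A literal one-OSD setup would need size 1, which is only acceptable
  # for throwaway test data.
  ceph osd pool set labpool size 2
  ceph osd pool set labpool min_size 1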