Hi,

As people asked me on IRC, it seems there was a lack of communication regarding the hardware for the FOSP project (https://pagure.io/atomic-wg/issue/153).

Due to internal changes (on top of the regular ones) on the other side of the RH firewall, the acquisition of the servers and the allocation of the budget took much longer than planned; we got the funds allocated last week. So we started working right away on a quote for hardware, and for various reasons we settled on the following, to be shared between the requirements of my team (OSAS) and Fedora Cloud:

- 1 Supermicro 6U MicroBlade chassis (https://www.supermicro.com/products/MicroBlade/index.cfm). It can hold 28 blade servers.

To fill the chassis, we selected:

- 2 * MBI-6418A-T5H, each with 4 nodes, each node having 4G of RAM and a small disk (128G or 64G SSD, depending on price)
- 2 * MBI-6118D-T4, each with 4 disks of 1T (or 2T) per disk
- 16 * MBI-6128R-T2, each with 128G of RAM and a 256G SSD if possible

That is 20 blades (2 + 2 + 16), which leaves 8 free slots for later.

The first 4 blades are to be used by OSAS, to host various small services and backups. The rest is for Fedora: the 16 MBI-6128R-T2 blades would each have at least 128G of RAM, a 256G SSD, 2 Xeon CPUs with 18 cores, and 1G connectivity. I did try to optimize so as not to have too many unused resources, but that is still going to be a lot of resources.

We plan to host that in the new space we are going to have near Raleigh, in the community cage. I do not have a public page to point to yet, since we are still working on hosting public pages and a website about that, but people can ping me if they want more information on this.

For the deployment itself, I had emergencies (server, laptop) that prevented me from working more on it for now.

--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS