Seeking a (paid) consultant - Ceph expert.

We are a small non-profit company that develops technologies for affordable delivery of broadband Internet access to rural communities in developing regions.

We are experimenting with a small Ceph cluster that is housed in a street-side cabinet.
Our goals are to maintain availability and avoid data loss, even under extreme hardware failures.
We are NOT sensitive to performance, scalability, or storage efficiency --- we would rather have a high degree of replication and a strong focus on redundancy.
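
To give a concrete sense of what I mean, my (quite possibly wrong) understanding is that the main knob here is the pool replication setting, along the lines of the sketch below -- "vm-pool" is just a placeholder for whatever pool Proxmox created, and the values are a guess:

    ceph osd pool set vm-pool size 4        # keep 4 copies of every object
    ceph osd pool set vm-pool min_size 2    # stay writable while at least 2 copies are healthy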

Our current experiment consists of 3 Dell PowerEdge C6220 rack-mount chassis.
Each chassis holds 4 server blades, for a total of 12 blades.
Each blade has two 10 GbE network cards, used in an MLAG (Multi-chassis Link Aggregation Group) configuration to two redundant network switches.
Each blade has 4 SAS drives (two SSDs and two HDDs).
Each blade has 192 GB of RAM and 24 cores of Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (2 sockets).
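
Given that the 12 blades sit in only 3 chassis, I gather from the docs (not from experience) that we'd want CRUSH to treat each chassis as the failure domain, so that no single chassis holds all the copies of an object. A rough sketch of what I think that looks like -- the blade/chassis names are made up, and I may well have the details wrong:

    # assumes the hosts already exist in the CRUSH map and uses the
    # built-in "chassis" bucket type; blade-01 etc. are placeholder names
    ceph osd crush add-bucket chassis-1 chassis
    ceph osd crush move chassis-1 root=default
    ceph osd crush move blade-01 chassis=chassis-1
    ceph osd crush move blade-02 chassis=chassis-1
    # ... repeat for the remaining blades and chassis ...
    ceph osd crush rule create-replicated rep-across-chassis default chassis
    ceph osd pool set vm-pool crush_rule rep-across-chassis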

We are using Proxmox (we're no experts, but we got it working pretty nicely, migration of workloads and all).
We are using Ceph version 16.2.9 (Pacific) and manage its basics via the Proxmox integration/GUI, with nearly default settings and without understanding much of what we're doing.
(We made minor edits to the CRUSH maps - but I'll admit that I don't really understand Ceph's building blocks and nuances.)
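
For reference, our "minor edits" amounted to the standard dump/decompile/edit/recompile cycle from the documentation, roughly as below (file names are just examples, and I'm not claiming we did it correctly):

    ceph osd getcrushmap -o crushmap.bin            # export the compiled map
    crushtool -d crushmap.bin -o crushmap.txt       # decompile to editable text
    crushtool -c crushmap.txt -o crushmap-new.bin   # recompile after hand-editing
    ceph osd setcrushmap -i crushmap-new.bin        # inject the edited map
    ceph osd tree                                   # sanity-check the resulting hierarchy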

We need hand-holding in re-designing the system around our specific (extreme) availability goals -- we don't need step-by-step spoon-feeding, but rather big-picture design guidelines.
We're looking for someone who deeply understands the specifics and nuances of Ceph, but can also maintain a big-picture operational view of our needs.

Ideally, reply directly to me via email - although a discussion on the list could be useful as well.
(I don't even know what questions to ask).

Thanks!

Yahel.

Yahel Ben-David, Ph.D.
De Novo Group - Executive Director
Bridging the gap between research and impact.