Re: Ceph server

Many thanks
Ignazio

On Fri, 12 Mar 2021 at 00:04, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:

> I'm going to echo what Stefan said.
>
> I would ditch the 2x SATA drives to free up your slots.
> Replace with an M.2 or SATADOM.
>
> I would also recommend moving from the 2x X710-DA2 cards to 1x X710-DA4
> card.
> A single DA4 can't saturate the x8 slot anyway, and it frees up a PCIe slot
> for another NVMe card or something else if you need it down the line.
>
> The only other thing I would say is to make sure the endurance of the P4510
> is enough for your workload long term.
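> As a rough way to keep an eye on that over time (just a sketch, assuming
> nvme-cli is installed; the device name is only an example):
>
>   nvme smart-log /dev/nvme0 | grep -iE 'percentage_used|data_units_written'
>
> percentage_used reports how much of the drive's rated endurance has been
> consumed so far.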
>
> Reed
>
> > On Mar 10, 2021, at 1:12 PM, Stefan Kooman <stefan@xxxxxx> wrote:
> >
> > On 3/10/21 5:43 PM, Ignazio Cassano wrote:
> >> Hello, what do you think of a Ceph cluster made up of 6 nodes, each one
> >> with the following configuration?
> >> A+ Server 1113S-WN10RT
> >> Barebone
> >> Supermicro A+ Server 1113S-WN10RT - 1U - 10x U.2 NVMe - 2x M.2 - Dual
> >> 10-Gigabit LAN - 750W Redundant
> >> Processor
> >> AMD EPYC™ 7272 Processor 12-core 2.90GHz 64MB Cache (120W)
> >> Memory
> >> 8 x 8GB PC4-25600 3200MHz DDR4 ECC RDIMM
> >
> > ^^ I would double that amount of RAM, especially (see below) if you plan
> > on adding more NVMe drives.
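> > As a rough back-of-the-envelope (assuming the default osd_memory_target
> > of 4 GiB per OSD): a fully populated chassis with 10 NVMe OSDs x 4 GiB is
> > already ~40 GiB before the OS and recovery overhead, so 64 GiB is tight
> > and 128 GiB leaves headroom. If the extra RAM is there, the per-OSD
> > target can also be raised, e.g.:
> >
> >   ceph config set osd osd_memory_target 6442450944   # 6 GiB, example only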
> >
> >> U.2/U.3 NVMe Drive
> >> 5 x 8.0TB Intel® SSD DC P4510 Series U.2 PCIe 3.1 x4 NVMe Solid State Drive
> >> Hard Drive
> >
> > ^^ Why 5 * 8.0 TB instead of 10 * 4.0 TB? Are you planning on upgrading
> > later? Ceph does better with more, smaller OSDs than with fewer large ones.
> > Recovery will be faster as well, and the impact of one NVMe dying will be
> > lower.
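> > To put rough numbers on it: with 5 x 8 TB per node, one failed OSD means
> > re-replicating up to 8 TB and losing 1/30 of the OSDs in a 6-node cluster;
> > with 10 x 4 TB it is at most 4 TB and 1/60 of the OSDs, with twice as many
> > OSDs to share the recovery work.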
> >
> >> 2 x 240GB Intel® SSD D3-S4610 Series 2.5" SATA 6.0Gb/s Solid State Drive
> >
> > ^^ Do you sacrifice two NVMe ports for two SATA OS disks? If so, I would
> > advise getting a U.2 NVMe or SATADOM or similar instead (redundancy
> > optional).
> >
> >> Network Card
> >> 2 x Intel® 10-Gigabit Ethernet Converged Network Adapter X710-DA2 (2x SFP+)
> >> Server Management
> >
> > ^ Why two? One for "public" and one for "cluster"? In that case you most
> > probably won't need that, and one bond would suffice (see current Ceph best
> > practices). If you need 40 Gb/s in one LACP trunk: perfectly fine.
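> > For what it's worth, a minimal sketch of such a bond with iproute2
> > (interface names and the address are only examples; the switch side needs
> > a matching 802.3ad/LACP configuration, and a netplan or ifupdown
> > equivalent would be used to make it persistent):
> >
> >   ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4
> >   ip link set enp65s0f0 down && ip link set enp65s0f0 master bond0
> >   ip link set enp65s0f1 down && ip link set enp65s0f1 master bond0
> >   ip link set bond0 up
> >   ip addr add 192.168.1.11/24 dev bond0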
> >
> > Gr. Stefan
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



