Re: Feedback for proof of concept OSD Node

What about the network cards? The motherboard I'm looking at has 2 x 10GbE; with that and the CPU frequency, I think the bottleneck will be the HDDs. Is that overkill? Thanks!
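
For reference, here is the rough math behind that assumption (a sketch assuming ~200 MB/s sequential per HDD and up to 5 HDDs per node; assumed numbers, not measurements):

    # Can the HDDs in one node saturate 2 x 10GbE?
    hdds_per_node = 5                                   # planned maximum per node
    hdd_mb_per_s = 200                                  # assumption, not a measurement
    hdd_gbit = hdds_per_node * hdd_mb_per_s * 8 / 1000  # ~8 Gbit/s aggregate
    nic_gbit = 2 * 10                                   # 2 x 10GbE
    print(f"HDDs ~{hdd_gbit:.0f} Gbit/s aggregate vs NICs {nic_gbit} Gbit/s")

Even with 5 HDDs the node stays under a single 10GbE link, so the second port would mainly be useful for a separate Ceph cluster network rather than raw throughput.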

Ignacio Ocampo

> On 2 Oct 2020, at 0:38, Martin Verges <martin.verges@xxxxxxxx> wrote:
> 
> 
> For private projects, you can look for small 1U servers with up to four 3.5" disk slots and an E3-1230 v3/4/5 CPU. They can be bought used for 250-350 € and then you just plug in a disk.
> They are also good for SATA SSDs and work quite well. You can mix both drive types in the same system as well. 
> 
> --
> Martin Verges
> Managing director
> 
> Mobile: +49 174 9335695
> E-Mail: martin.verges@xxxxxxxx
> Chat: https://t.me/MartinVerges
> 
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> 
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
> 
> 
>> On Fri, 2 Oct 2020 at 08:32, Ignacio Ocampo <nafiux@xxxxxxxxx> wrote:
>> Hi Brian,
>> 
>> Here is more context about what I want to accomplish: I've migrated a bunch of
>> services from AWS to a local server, but having everything on a single
>> server is not safe. Instead of investing in RAID, I would like to start
>> setting up a small Ceph cluster to get redundancy and a robust way to
>> recover if any component fails.
>> 
>> Also, in the mid-term, I have plans to deploy a small OpenStack cluster.
>> 
>> Because of that, I would like to set up a first small Ceph cluster that
>> can scale as my needs grow. The idea is to have 3 OSD nodes with the same
>> characteristics and add HDDs as needed, up to 5 HDDs per OSD
>> node, starting with 1 HDD per node.
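>>
>> As a quick capacity sanity check (a sketch assuming the default 3-way replication and a placeholder 4 TB disk size):
>>
>>     # usable capacity as HDDs are added: 3 nodes, replicated pools (size=3)
>>     nodes = 3
>>     replica_size = 3                 # Ceph's default replication factor
>>     disk_tb = 4                      # placeholder HDD size
>>     for hdds_per_node in (1, 3, 5):
>>         raw_tb = nodes * hdds_per_node * disk_tb
>>         print(f"{hdds_per_node} HDD/node: {raw_tb} TB raw, ~{raw_tb / replica_size:.0f} TB usable")
>>
>> With those placeholder sizes, that starts around 4 TB usable and grows to roughly 20 TB usable at 5 HDDs per node, before reserving headroom for recovery.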
>> 
>> Thanks!
>> 
>> On Thu, Oct 1, 2020 at 11:35 AM Brian Topping <brian.topping@xxxxxxxxx>
>> wrote:
>> 
>> > Welcome to Ceph!
>> >
>> > I think the better question to start with is “what are your objectives in
>> > your study?” Is it just seeing Ceph run with many disks, or are you trying
>> > to see how much performance you can get out of distributed disks?
>> > What is your budget? Do you want to try different combinations of storage
>> > devices to learn how they differ in performance, or do you just want to jump
>> > to the fastest things out there?
>> >
>> > One often doesn’t need a bunch of machines to determine that Ceph is a
>> > really versatile and robust solution. I pretty regularly deploy Ceph on a
>> > single node using Kubernetes and Rook. Some would ask “why would one ever
>> > do that, just use direct storage!”. The answer is that when I later want to
>> > expand the cluster, I am willing to have traded some initial performance
>> > overhead for letting Ceph redistribute the data at that point. And the overhead
>> > is far lower than one might think when there is no network bottleneck to deal with. I
>> > do use direct storage on LVM when I have distributed workloads such as
>> > Kafka that abstract away the storage a service instance depends on. It doesn’t
>> > make much sense in my mind for Kafka or Cassandra to use Ceph, because with
>> > those services I can afford to lose nodes.
>> >
>> > In other words, Ceph is virtualized storage. You have likely come to it
>> > because your workloads need to be able to come up anywhere on your network
>> > and reach that storage. How do you see those workloads exercising the
>> > capabilities of Ceph? That’s where your interesting use cases come from,
>> > and they can help you decide which lab platform is best to get started with.
>> >
>> > Hope that helps, Brian
>> >
>> > On Sep 29, 2020, at 12:44 AM, Ignacio Ocampo <nafiux@xxxxxxxxx> wrote:
>> >
>> > Hi All :),
>> >
>> > I would like to get your feedback on the components below for building a
>> > PoC OSD node (I will build 3 of these).
>> >
>> > SSD for OS.
>> > NVMe for cache.
>> > HDD for storage.
>> >
>> > The Supermicro motherboard has 2 x 10Gb NICs, and I will use ECC memory.
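>> >
>> > For the NVMe cache, a rough sizing sketch (assuming BlueStore with block.db on the NVMe and the commonly cited ~4% of each data device; the 5 HDDs and 4 TB size are placeholders):
>> >
>> >     # rough BlueStore block.db sizing per OSD node
>> >     hdds_per_node = 5
>> >     hdd_tb = 4                      # placeholder disk size
>> >     db_fraction = 0.04              # ~4% of the data device (common guidance)
>> >     db_gb_per_hdd = hdd_tb * 1000 * db_fraction
>> >     print(f"block.db per HDD: {db_gb_per_hdd:.0f} GB, per node: {db_gb_per_hdd * hdds_per_node:.0f} GB")
>> >
>> > With those placeholder numbers, a single ~1 TB NVMe per node would cover the block.db partitions for all 5 HDDs.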
>> >
>> > <image.png>
>> >
>> > Thanks for your feedback!
>> >
>> > --
>> > Ignacio Ocampo
>> >
>> 
>> -- 
>> Ignacio Ocampo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



