Re: OT: How to Build a poor man's storage with ceph

Since you mention NextCloud, it will probably be an RGW deployment. Also,
it's not clear why 3 nodes? Is rack space at a premium?

Just to compare with your suggestion:
3 x 24 slots (I guess 4U?) x 8 TB with 3x replication = 576 TB raw storage,
192 TB usable.

Let's go with 6 x 12 slots (2U) x 4 TB with EC 3+2 = 288 TB raw storage,
~172 TB usable. Same rack space, a little less usable storage, and smaller
drives (which means faster recovery). Put an SSD or NVMe in each server for
the RGW index pool, plus maybe 2 NVMe drives for the WAL/DB.
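
To make the capacity math explicit, here is a quick back-of-the-envelope
sketch in plain Python (the node counts, drive sizes, and the EC 3+2
profile are just the two layouts above; function names are my own, and
real-world Ceph overheads like nearfull ratios and metadata are ignored,
so actual usable numbers will be somewhat lower):

    def capacity_replicated(nodes, slots, drive_tb, size=3):
        # Raw and usable TB for a pool replicated `size` times.
        raw = nodes * slots * drive_tb
        return raw, raw / size

    def capacity_ec(nodes, slots, drive_tb, k=3, m=2):
        # Raw and usable TB for an erasure-coded k+m pool:
        # only k of every k+m chunks are payload.
        raw = nodes * slots * drive_tb
        return raw, raw * k / (k + m)

    # Option A: 3 nodes x 24 slots x 8 TB, 3x replication
    print(capacity_replicated(3, 24, 8))   # -> (576, 192.0)

    # Option B: 6 nodes x 12 slots x 4 TB, EC 3+2
    print(capacity_ec(6, 12, 4))           # -> (288, 172.8)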

On Tue, 8 Jun 2021 at 21:39, Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:

> Hello List,
>
> I used to build 3-node clusters with spinning rust and later with
> (enterprise) SSDs.
> All I did was buy a 19" server with 10/12 slots, plug in the disks,
> and I was done.
> The requirements were just 10-15 TB of disk usage (30-45 TB raw).
>
> Now I was asked if I could also build a cheap 200-500 TB cluster
> storage, which should also scale. Just for data storage such as
> NextCloud/OwnCloud.
>
> Buying 3x 24-slot servers with 8 TB enterprise SSDs ends up at about
> 3x 45k EUR = 135k EUR, where the SSDs are 90% of the price (about
> 1,700 EUR per 8 TB SSD).
>
> How do the "big boys" do this? Just throw money at it?
> Would a mix of SSDs for OSD metadata + spinning rust do the job?
>
> My experience so far is that each time I had a crash/problem, it was
> always such a pain to wait for the spinning rust.
>
> Do you have any experience/hints on this?
> Maybe combine 3x 10 TB HDDs into a 30 TB RAID0/striped disk, which
> would speed up performance but have a bigger impact when a disk
> dies.
>
> My requirements are more or less low I/O traffic but a lot of disk
> space usage.
>
> Any hints/ideas/links are welcome.
>
> Cheers,
> Michael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


