OT: How to build a poor man's storage with Ceph

Hello List,

I used to build 3-node clusters with spinning rust and later with
(enterprise) SSDs.
All I did was buy a 19" server with 10/12 slots, plug in the disks,
and I was done.
The requirements were just 10-15TB of usable space (30-45TB raw).

Now I have been asked whether I could also build a cheap 200-500TB
cluster storage that should also scale, just for data storage such as
NextCloud/OwnCloud.

Buying 3x 24-slot servers with 8TB enterprise SSDs ends up at about
3x 45k EUR = 135k EUR, where the SSDs are 90% of the price
(about 1,700 EUR per 8TB SSD).
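
As a rough back-of-the-envelope (assuming all 72 slots filled and
3-way replication, which may not be what you'd actually run):

    3 servers x 24 slots x 1,700 EUR  = ~122k EUR for the SSDs alone
    3 servers x 24 slots x 8TB        = 576TB raw
    576TB raw / 3 (replication)       = ~192TB usable

So if the 200-500TB target is meant as usable space, even the full
135k EUR build barely reaches the low end of it.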

How do the "big boys" do this? Just throw money at it?
Would a mix of SSDs for the OSD metadata plus spinning rust for the
data do the job?
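
To make that concrete: as far as I understand, the usual hybrid layout
is BlueStore with the data on the HDD and the DB/WAL on a small
SSD/NVMe partition, per OSD roughly like this (device names are just
placeholders, and I'm not sure this is the right approach):

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1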

My experience so far is that every time I had a crash/problem, it was
always such a pain waiting for the spinning rust.

Do you have any experience/hints on this?
Maybe combine 3x 10TB HDDs into one 30TB RAID0/striped disk, which
would speed up performance, but a dying disk would then have a much
bigger impact.
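
Something like this under each OSD is what I have in mind (just a
sketch, device names are placeholders, and I know Ceph normally wants
one OSD per disk):

    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    ceph-volume lvm create --data /dev/md0   # assuming ceph-volume accepts the md device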

My requirements are more or less low I/O traffic but lots of disk space.

Any hints/ideas/links are welcome.

Cheers,
Michael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


