Re: Is Ceph appropriate for small installations?

Hi!

Do you have a replication factor of 2?
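A quick way to check (a sketch only; "rbd" is just an example pool name):

    ceph osd dump | grep 'replicated size'   # shows the replica count of every pool
    ceph osd pool get rbd size               # shows the replica count of one pool

With size 2 on only 3 OSDs, losing one OSD means a sizeable share of the data has to be re-replicated over your 100 Mbit/s link, so recovery will take a while.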

To test recovery you can, for example, kill one OSD process and observe when Ceph notices it and starts moving data. Then reformat the OSD partition, remove the killed OSD from the cluster, and add a new OSD on the freshly formatted partition. Once you have three OSDs again, observe when the data migration finishes; until then the system will be loaded with recovery traffic.
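A minimal sketch of that drill with the standard ceph CLI (osd.2, /dev/sda1 and the journal path are only examples; adapt them to your OSD IDs, devices and init system):

    # 1. Kill the OSD daemon and watch the cluster react
    service ceph stop osd.2          # or: systemctl stop ceph-osd@2
    ceph -w                          # watch osd.2 get marked down/out and recovery start

    # 2. Remove the dead OSD from the cluster and reformat its partition
    ceph osd out 2
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm 2
    mkfs.xfs -f /dev/sda1

    # 3. Re-create an OSD on that partition; the second argument is the
    #    journal partition (an assumed MMC path here). ceph-disk will lay
    #    down its own filesystem as part of prepare.
    ceph-disk prepare /dev/sda1 /dev/mmcblk0p3
    ceph-disk activate /dev/sda1

    # 4. Wait until all PGs are active+clean again
    ceph -s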

J.

On 02.09.2015 12:15, Marcin Przyczyna wrote:
> On 08/31/2015 09:39 AM, Wido den Hollander wrote:
>
>> True, but your performance is greatly impacted during recovery. So a
>> three node cluster might work well when the skies are clear and the sun
>> is shining, but it has a hard time dealing with a complete node failure.
> The question of "how tiny a cluster can be" I can answer with
> some performance data I collected a few days ago.
>
> My setup:
> - 3 armhf-based servers (a sort of Raspberry Pi),
> - only one 100 Mbit/s LAN for all kinds of access,
> - 3 USB sticks, one plugged into each armhf server, each with a
>   dedicated 4 GB /dev/sda1 partition formatted with XFS,
> - 3 MMC cards for the OS,
> - OSD journal on MMC, OSD data on the USB stick,
> - 1 ordinary PC as client,
> - 3 OSDs, 3 MONs, 1 MDS,
> - Debian 8, 64-bit everywhere.
>
> The I/O performance test, based on
> "time dd if=/dev/zero of=./test bs=1024k count=1024",
> revealed:
>
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 81.3964 s, 13.2 MB/s
>
> real    1m28.619s
> user    0m0.052s
> sys     0m1.724s
>
> Result:
> Creating a 1 GB zero-filled file takes about 1 min 30 s.
>
> From my point of view it is possible to set up a poor man's
> fileserver cluster on typical home-LAN hardware (e.g. my switch
> is a SOHO DSL modem/router) with 3 low-power servers built
> around smartphone chips and 3 cheap USB sticks as data storage.
>
> It is not quick, but it works. It consumes a tiny amount of
> electrical power (the whole "farm" needs about 25 W) and has
> no rotating mechanical parts. The cluster uses passive heat
> sinks only and produces almost no heat: no fans are needed
> at all. During the very hot summer this year I noticed no
> temperature-related failures at all.
>
> HW cost: about 800 euros.
> Hint: try to find a Centera on the EMC² webpage at that price :-)
>
> My question:
> how can I "damage" one of my OSDs in an intelligent way
> to test the cluster's performance during recovery?
>
> Cheers,
> Marcin.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



