Re: Low cost, extendable, failure tolerant home cloud storage question

>Main conditions:
> - working like stripe sets, so unit failures will not cause data loss (one or two server failures are allowed)
Gluster replica or dispersed volumes can tolerate that.
> - scalable, and therefore
> - can be extended with one (or maybe two) servers at a time (due to low budget)
To be safe you need 'replica 3' or a dispersed volume.
In both cases extending by 1 brick is usually not possible (a brick is not the same as a node; it is a combination of 'server + directory', and bricks of the same set should sit on separate systems, or that server becomes a potential single point of failure). For example, with 'replica 3' you have to add 3 more bricks at a time. Dispersed volumes likewise have to be extended in multiples of the disperse count: with a 4+2 volume (6 bricks, of which at most 2 may be lost without data loss) you have to add another 6 bricks to extend, as sketched below.

> - cheap nodes (with 1-2GB of RAM) able to handle the task (like RPi, Odroid XU4 or even HP T610)
You need a bit more RAM for daily usage and most probably more cores, as healing of data in a replica is demanding (dispersed volumes work like RAID parity and need some CPU).

>As I read a lot about GlusterFS, I recently concluded that this might also not be possible with this FS.
>At the beginning I thought that I'd create a 2+2 disperse volume (for optimal sector size) that I can later extend to 4+2, even later 6+2, and continue...
Actually, a 2+2 disperse volume is extended with another 4 bricks and becomes 2 x (2+2): you end up with 2 subvolumes of 2+2 each, and the algorithm distributes the files across the 2 subvolumes.
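As a sketch (the volume name 'ec22vol' and the node names are hypothetical), that extension is a single add-brick with a full 2+2 set:

# adds a second 2+2 subvolume; the volume becomes
# distributed-dispersed: 2 x (2+2)
gluster volume add-brick ec22vol \
    node3:/gluster_bricks/HDD1/brick node3:/gluster_bricks/HDD2/brick \
    node4:/gluster_bricks/HDD1/brick node4:/gluster_bricks/HDD2/brick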
 
>Now the question: Can I achieve this with GlusterFS? How? What configuration must I choose?
The idea with the ITX boards is not so bad. You can get 2 small systems and set up your erasure coding across them.

So you will have something like this:
ITX1:/gluster_bricks/HDD1/brick
ITX1:/gluster_bricks/HDD2/brick
ITX1:/gluster_bricks/HDD3/brick


ITX2:/gluster_bricks/HDD1/brick
ITX2:/gluster_bricks/HDD2/brick
ITX2:/gluster_bricks/HDD3/brick


Then, when you create your dispersed volume, take 3 bricks from ITX1 and 3 from ITX2 (4+2). Once the volume is created, you can expand it with another full set of 6 disks.
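A minimal sketch of the create command for that layout (the volume name 'myvol' is hypothetical):

# 6 bricks total: 4 data + 2 redundancy
# gluster will ask for confirmation because several bricks
# share a server (or append 'force')
gluster volume create myvol disperse 6 redundancy 2 \
    ITX1:/gluster_bricks/HDD1/brick ITX1:/gluster_bricks/HDD2/brick \
    ITX1:/gluster_bricks/HDD3/brick ITX2:/gluster_bricks/HDD1/brick \
    ITX2:/gluster_bricks/HDD2/brick ITX2:/gluster_bricks/HDD3/brick
gluster volume start myvol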

In theory, the loss of any 2 bricks is tolerable. Note, though, that each system here holds 3 bricks, so the failure of a whole system would exceed the redundancy of 2.

Yet I would prefer the 'replica 3 arbiter 1' approach, as it doesn't take up as much space and extending requires only 2 data disks (the arbiter brick holds only metadata, so it is tiny).
Have you thought about 'replica 3 arbiter 1' volumes?
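For illustration (the volume name 'arbvol' and the node names are hypothetical), creating and extending such a volume looks roughly like this:

# each set: 2 data bricks + 1 metadata-only arbiter brick
gluster volume create arbvol replica 3 arbiter 1 \
    node1:/gluster_bricks/HDD1/brick \
    node2:/gluster_bricks/HDD1/brick \
    node3:/gluster_bricks/arbiter1/brick

# extending adds another 2 data bricks + 1 arbiter brick
gluster volume add-brick arbvol \
    node1:/gluster_bricks/HDD2/brick \
    node2:/gluster_bricks/HDD2/brick \
    node3:/gluster_bricks/arbiter2/brick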
Of course, you can build a software/hardware RAID and manage the disks underneath without problems, but every time you change the RAID shape you will need to wipe the filesystem and 'reset-brick'.
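Sketched with a hypothetical volume and brick, the 'reset-brick' workflow is:

# take the brick offline before reshaping the underlying RAID
gluster volume reset-brick arbvol node1:/gluster_bricks/HDD1/brick start
# ... reshape the RAID, recreate the filesystem, remount ...
# bring the (now empty) brick back; self-heal repopulates it
gluster volume reset-brick arbvol node1:/gluster_bricks/HDD1/brick \
    node1:/gluster_bricks/HDD1/brick commit force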


Best Regards,
Strahil Nikolov




