Replicated and Non-Replicated Bricks on the Same Partition

On Mon, Apr 29, 2013 at 8:44 PM, Robert Hajime Lanning
<lanning at lanning.cc> wrote:

> On 04/29/13 20:28, Anand Avati wrote:
>
>
>> On Mon, Apr 29, 2013 at 9:19 AM, Heath Skarlupka
>> <heath.skarlupka at ssec.wisc.edu> wrote:
>>
>>     Gluster-Users,
>>
>>     We currently have a 30 node Gluster Distributed-Replicate 15 x 2
>>     filesystem.  Each node has a ~20TB xfs filesystem mounted to /data
>>     and the bricks live on /data/brick.  We have been very happy with
>>     this setup, but are now collecting more data that doesn't need to
>>     be replicated because it can be easily regenerated.  Most of this
>>     data currently lives on our replicated volume and is starting to waste
>>     space.  My plan was to create a second directory under the /data
>>     partition called /data/non_replicated_brick on each of the 30
>>     nodes and start up a second Gluster filesystem.  This would allow
>>     me to dynamically size the replicated and non_replicated space
>>     based on our current needs.
>>
>>     I'm a bit worried about going forward with this because I haven't
>>     seen many users talk about putting two gluster bricks on the same
>>     underlying filesystem.  I've gotten past the technical hurdle
>>     and know that it is technically possible, but I'm worried about
>>     corner cases and issues that might crop up when we add more bricks
>>     and need to rebalance both gluster volumes at once.  Does anybody
>>     have any insight into the caveats of doing this, or are there any
>>     users putting multiple bricks on a single filesystem in the 50-100
>>     node size range?  Thank you all for your insights and help!
>>
>>
>> This is a very common use case and should work fine. We are exploring better
>> integration with dm-thinp so that, in the future, each brick can have its own
>> XFS filesystem on a thin-provisioned logical volume. But for now you can
>> create a second volume on the same XFS filesystems.
>>
>> Avati
>>
>>
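Concretely, a second non-replicated (pure distribute) volume on the same
partitions could be created along the lines of the sketch below; the volume
name "scratch" and the hostnames server1 .. server30 are only placeholders:

  # On each of the 30 nodes, create a second brick directory on the
  # existing /data partition, next to the replicated brick:
  mkdir -p /data/non_replicated_brick

  # From any one node, create and start a distribute-only volume
  # (no "replica" keyword, so nothing is replicated).  The shell's brace
  # expansion turns server{1..30} into server1 server2 ... server30:
  gluster volume create scratch \
      server{1..30}:/data/non_replicated_brick
  gluster volume start scratch
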
> There is an issue when replicated bricks fill unevenly.  The
> non-replicated volume consumes space unevenly, so the bricks will also
> fill unevenly as seen from the replicated volume.
>
> I am not sure how ENOSPC is handled asymmetrically, but if the fuller
> brick happens to be down during a write that would have caused ENOSPC,
> you won't get the error, and replication will fail later when the
> self-heal kicks in.
>
>
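To see how unevenly the shared partitions are filling, per-brick disk usage
can be checked with something like the following (the volume names are the
same placeholders as above):

  # Free and total disk space per brick, as reported by gluster
  # (here "replicated-vol" stands in for the existing replicated volume):
  gluster volume status replicated-vol detail
  gluster volume status scratch detail

  # Or check the shared partition directly on each node:
  df -h /data
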
Yes, self-heal will keep failing until enough free space is made available.
Ideally you should set the "min-free-disk" parameter so that new file creations
are redirected to a different server once a brick reaches about 80-90%
utilization, and only existing files grow bigger.
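
A minimal sketch, assuming the two volumes are named "replicated-vol" and
"scratch" as above: with min-free-disk at 15%, new file creations are
redirected away from a brick once its partition falls below 15% free space,
i.e. at roughly 85% utilization, while existing files there can still grow:

  gluster volume set replicated-vol cluster.min-free-disk 15%
  gluster volume set scratch cluster.min-free-disk 15%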

Avati