Re: Production cluster planning

On Wed, Oct 5, 2016 at 2:14 PM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:
2016-10-05 20:50 GMT+02:00 David Gossage <dgossage@xxxxxxxxxxxxxxxxxx>:
> The mirrored slog will be useful.  Depending on what you put on the pool
> l2arc may not get used much.  I removed mine as it got such a low hit rate
> serving VM's.

I'll use shards. The most accessed shard isn't cached in L2ARC?

L2ARC filled up somewhat, but the hit ratio never got above 8%, so I just freed it up for the system to use as it would. Haven't noticed any difference since doing that.

I'll also use other pools for plain file hosting (websites and so on). In
that case an L2ARC would be useful, I think.
Is it possible to share the same L2ARC between multiple ZFS pools?

L2ARC is per pool I believe. 
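Right: cache (L2ARC) vdevs belong to a specific pool, so sharing one SSD across pools means partitioning it and giving each pool its own slice. A rough sketch (pool and device names here are hypothetical):

```shell
# L2ARC is attached per pool: each pool gets its own cache vdev.
# To split one SSD between two pools, partition it first.
zpool add tank1 cache /dev/disk/by-id/nvme-SSD-part1
zpool add tank2 cache /dev/disk/by-id/nvme-SSD-part2

# Check whether the cache is actually earning its keep:
arcstat 1    # or inspect l2_hits / l2_misses in /proc/spl/kstat/zfs/arcstats
```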

> I also re-did my bricks from raidz2 to using mirrored pairs. I felt that
> with 3 server redundancy the performance benefits from a more raid10 type
> layout would be more useful than raidz2 for the pool.  I could see where
> some might want more security though in their data

So, you removed the RAIDZ2 and created multiple mirrors (like RAID-1)?
I can do that, but it's really a waste of space. With replica 3, you
are replicating 6 times.
Or did you create a RAID-10? I really hate RAID-10. If you lose 2
disks from the same mirror (and in a 12-disk server this could be
frequent) you lose the whole pool, and the rebuild time over the
network could be HUGE. Try to rebuild 24TB from the network. In a
perfect world, that requires at least 24,000 GB / 1.25 GB/s = 19,200
seconds: more than 5 hours using the whole 10Gb network only for
healing.

I feel more comfortable with 2x RAIDZ-2 than a RAID-10.

That's fine.  For my VM case I currently use only around 1TB per host, so the worst case I found was that with one server down I could rebuild the brick overnight.  For your case that's not as easy.  Also, I use 1TB disks in my mirrors, so if one drive goes down it doesn't take long to resilver on replace.  That slightly reduces my chance of having 2 disks in the same mirror die at the same time.
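Gandalf's back-of-envelope rebuild estimate above is easy to sanity-check (assuming the full 10GbE link, about 1.25 GB/s, is dedicated to healing, with protocol overhead ignored):

```python
# Back-of-envelope rebuild time for 24 TB over a dedicated 10 GbE link.
data_gb = 24_000            # 24 TB expressed in GB
link_gb_per_s = 10 / 8      # 10 Gbit/s ~= 1.25 GB/s, ignoring overhead

seconds = data_gb / link_gb_per_s
hours = seconds / 3600
print(f"{seconds:.0f} s ~= {hours:.1f} h")   # 19200 s ~= 5.3 h
```

In practice resilver speed is usually limited by disk seeks and ongoing pool I/O rather than the network, so this is a best-case floor.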

By using a SLOG, the RAIDZ2 write penalty should be removed, as gluster
always writes to the SLOG (SSD) and to the RAIDZ-2 only in the background,
right?
Is it possible to change the ZIL flush timeout from 5 seconds to
something bigger, as I'm using SSDs with power-loss protection?

I believe you can though I haven't played with that.   
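On ZFS on Linux the 5-second interval is the `zfs_txg_timeout` module parameter, which controls how often transaction groups are committed to the main pool. A sketch of tuning it (the value 10 is purely illustrative; note the SLOG only absorbs synchronous writes, so async data still sits in RAM until the txg commits):

```shell
# Show the current transaction-group commit interval (seconds):
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Raise it at runtime:
echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout

# To persist across reboots, in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_txg_timeout=10
```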

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
