Recommended underlying disk storage environment

At 04:15 PM 12/5/2008, Stas Oskin wrote:
>Hi.
>
>Thanks so much for your replies; they have given me a good head start.
>
>A few remaining questions:
>
>
>you first expand the underlying block device with LVM, then you grow
>your filesystem.  Some filesystems support this, some don't.
>
>
>Isn't this usually reversed: first you grow the underlying
>filesystem, then you increase the LVM size?
The filesystem cannot exceed the size of the device it's sitting on:
if the block device or logical volume is 200GB, you can't expand the
filesystem beyond that. So you first expand the volume/block device
to 300GB, then grow the filesystem to 300GB, for example.
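
To make the order concrete, here is a minimal sketch, assuming an
ext3 filesystem sitting on a logical volume (vg0/data is a made-up
name; substitute your own):

  lvextend -L 300G /dev/vg0/data   # grow the block device: 200GB -> 300GB
  resize2fs /dev/vg0/data          # then grow ext3 to fill the new space

Shrinking works in the opposite order: shrink the filesystem first,
then the logical volume.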

>If you have 3 drives striped together and one filesystem on top of
>them, then you will have a problem.
>If you have 3 drives each with their own filesystem on top and you
>"unify" them with Gluster or something, then you can keep running
>but will lose access to the files on the failed drive.
>
>
>Actually, this sounds like a good idea! By having all the drives
>unified via GlusterFS, this basically means any of them could be
>lost without affecting the other drives on the same server.
>
>Have you ever tried such a setup?

Not with Gluster. And keep the performance trade-off in mind: with an
LVM stripe, your data reads are distributed over multiple physical
devices, whereas with Unify you'd be reading any individual file from
only one spindle. However, this is the price we pay for availability,
so I think it depends on your performance requirements. If you don't
need blazing fast reads, then Unify will give you better availability.
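
For contrast, setting up the striped arrangement would look something
like this (a sketch only; device and volume names are invented):

  pvcreate /dev/sdb /dev/sdc /dev/sdd              # three whole disks
  vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd
  lvcreate -i 3 -I 64 -l 100%FREE -n striped vg0   # stripe across all 3
  mkfs.ext3 /dev/vg0/striped

Reads get spread over all three spindles, but losing any one disk
takes the whole filesystem with it.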

>Also, I presume it would still be possible to have one of the disks
>function as the system disk? In the event it's lost, a simple
>restore of the root, boot and swap partitions to a new disk + AFR
>healing for the data should do the job. What do you think?

Any subdirectory can be the root of the Gluster filesystem, so you
could have this layout, for example:
/dev/sda1 /
/dev/sda2 /boot
/dev/sda3 /home
/dev/sdb /home2
/dev/sdc1 /home3
/dev/sdc2 /junk

and then unify /home, /home2, /home3, and /junk/stuff/home4 into
/tree1, or something like that.
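
A volfile for that unify might look roughly like this (1.3/1.4-era
syntax from memory, so treat it as a sketch; unify also needs a
dedicated namespace volume, shown here as "ns", and all names are
examples; the remaining directories are added the same way):

  volume home
    type storage/posix
    option directory /home
  end-volume

  volume home2
    type storage/posix
    option directory /home2
  end-volume

  volume ns
    type storage/posix
    option directory /gluster-ns
  end-volume

  volume tree1
    type cluster/unify
    option namespace ns
    option scheduler rr
    subvolumes home home2
  end-volume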

>At some point you'll saturate something: either your disk I/O or
>your network. Most likely the network, so try to make sure that the
>network you use for the AFR connections doesn't have anything else
>competing for the bandwidth, and I think you'll be fine.
>
>
>This makes sense indeed.
>
>By the way, how do you manage all the bricks?
>Do you have some centralized way to add new bricks and update the
>config files for clients/servers?

My configuration is pretty simple: I have one brick on each server
with AFR between them.
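
Roughly, the client side of that looks like this (again a 1.3-era
sketch; hostnames, volume names, and the brick name are made up):

  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick
  end-volume

  volume afr0
    type cluster/afr
    subvolumes remote1 remote2
  end-volume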
However, I believe there are a few features targeted for 1.5 that
will allow dynamic reconfiguration, as well as a configuration
editor/manager, which should simplify things.

Once you're comfortable with the way the config files are parsed,
you'll get the hang of it. But if you're going to reconfigure your
setup frequently, it'll get inconvenient pretty quickly.

Keith 



