Re: Need help for production setup

Hi Juan,

Understood... but what if I am using replica 3? Using HW RAID with commodity HDDs would not be a good choice, and if I choose HW RAID with enterprise-grade HDDs the cost will be higher and there would be no point in choosing GlusterFS for storage...

Thanks,
Punit


On Mon, Aug 18, 2014 at 10:55 AM, Juan José Pavlik Salles <jjpavlik@xxxxxxxxx> wrote:
I know what you mean, Punit, but think about this: 

-Let's say you have 2 servers (serverA and serverB) with 12 drives each, with no RAID and no LVM (which, as Ryan pointed out, could be even worse without RAID).
-You create a replicated volume between the servers, where drive1 in serverA is replicated to drive1 in serverB.
-Then one of your drives fails, say drive1 in serverA. You are still covered, because its replica brick in serverB is working.
-BUT how long will it take you to replace the drive and heal it? What if drive1 (the replica brick) in serverB fails in the meantime? If that happens... you will lose all the data on those replicated bricks.

RAID would save you in that scenario, so you wouldn't have to worry about that particular problem. Since you are using fast drives, the healing time may not be a real problem, but... you should think about it.
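For reference, a minimal sketch of the per-drive replica pairing described above. Server names, volume name and brick paths (serverA, serverB, gv0, /bricks/drive1) are hypothetical, not from the thread:

```shell
# Create a 2-way replicated volume where drive1 on serverA
# is mirrored by drive1 on serverB (hypothetical paths).
gluster volume create gv0 replica 2 \
    serverA:/bricks/drive1/brick serverB:/bricks/drive1/brick
gluster volume start gv0

# After replacing a failed drive, trigger a full self-heal
# so the new brick catches up with its surviving replica.
gluster volume heal gv0 full

# Watch healing progress to judge the exposure window.
gluster volume heal gv0 info
```

The window between the heal starting and finishing is exactly the period Juan describes, where a second failure on the surviving brick loses the data.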

Regards  



2014-08-17 23:20 GMT-03:00 Ryan Nix <ryan.nix@xxxxxxxxx>:

I have mixed feelings on the use of RAID.  If you don't use RAID on a machine that has eight drives, for instance, what happens if you lose one drive with something like LVM?  As far as I know, that whole system would be rendered useless.  Granted, when you're using Gluster, the client systems that are attached to the storage can use round robin DNS to find the next Gluster node with its data and not skip a beat.  At least with RAID you can afford to lose a drive or even two.  Still, I see your point. I too would like to not have to waste drives on a RAID set.  I understand that Ceph recommends against the use of RAID and I don't know the official position with RAID and Gluster.

Sent from my iPad

On Aug 17, 2014, at 9:00 PM, Punit Dambiwal <hypunit@xxxxxxxxx> wrote:

Hi Juan,

Thanks for the advice... but if we use hardware RAID, then what is the purpose of using GlusterFS? We don't want to use HW RAID; we want to leverage Gluster replication...

If we use HW RAID, that means another layer of overhead (first the HW RAID redundancy, then the GlusterFS replication)...

Thanks,
Punit 


On Sat, Aug 16, 2014 at 9:47 PM, Juan José Pavlik Salles <jjpavlik@xxxxxxxxx> wrote:
Hi Punit, I'm not a Gluster expert; however, I'm also on my way to building a production Gluster setup like you. What I've read is that even though you are using a replicated volume, it is good practice to have RAID 5 or 6 (depending on the number of disks) under it. RAID will help you when drives die (which they will), and replication will help you when nodes die (which they eventually will, or during maintenance tasks, power shortages), so they are two different solutions for two different problems. 

As far as I know, if you are using the native client to access the Gluster volume, you already have HA.
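To illustrate that point, a hedged sketch of mounting with the native FUSE client; the server names, volume name and mount point (serverA, serverB, gv0, /mnt/gluster) are hypothetical. The native client fetches the volume layout from the first reachable server and then talks to all bricks directly, so losing one replica node does not interrupt access:

```shell
# Mount via the native client. backup-volfile-servers is only used
# to fetch the volume file if serverA is unreachable at mount time;
# after that, failover between replicas is handled by the client itself.
mount -t glusterfs -o backup-volfile-servers=serverB \
    serverA:/gv0 /mnt/gluster
```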

Regards!


2014-08-15 4:39 GMT-03:00 Punit Dambiwal <hypunit@xxxxxxxxx>:
Hi,

I want to use Gluster distributed-replicated storage across 4 nodes with the configuration below: 

  • 4 x Quanta STRATOS S210-X22RQ
  • 4 x 12 x 512 GB, 2.5″ SSD (front, hot-swappable main storage)
  • 4 x 2 x 128 GB, 2.5″ SSD (rear, hot-swappable OS drives, awesome feature!)
  • 2 x quality stacked switches (with one leg of each bond device out to each switch)
  • IPMI: for fencing in oVirt
I want to use this setup with oVirt... I'd appreciate your help on how to best utilize the storage with redundancy (surviving one node failure or any HDD failure).

Do I need to use HW RAID as well? It seems Gluster replication can do the same... any advice?
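One possible layout for the four nodes above, sketched with hypothetical hostnames (node1..node4), volume name and brick paths. With replica 2 and bricks listed in this order, Gluster pairs consecutive bricks into replica sets and distributes files across the pairs, so any single node or drive failure leaves a copy available:

```shell
# Form the trusted pool (run from node1; all names are hypothetical).
gluster peer probe node2
gluster peer probe node3
gluster peer probe node4

# 4 bricks with replica 2 -> a 2 x 2 distributed-replicate volume:
# (node1,node2) and (node3,node4) become the replica pairs.
gluster volume create vmstore replica 2 \
    node1:/bricks/ssd1/brick node2:/bricks/ssd1/brick \
    node3:/bricks/ssd1/brick node4:/bricks/ssd1/brick
gluster volume start vmstore
```

Note that brick order matters: listing two bricks from the same node consecutively would put both replicas on one node.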

I also want it to be highly available... if any node goes down, the data should still be accessible. Can you point me to any good document on HA with keepalived or similar?
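For clients that cannot use the native Gluster client (e.g. plain NFS mounts), keepalived can float a virtual IP between two nodes. Everything below (interface name, VIP, router id, priorities) is a hypothetical sketch of a keepalived.conf, not a tested config:

```
vrrp_instance gluster_vip {
    state MASTER            # use BACKUP on the second node
    interface eth0          # hypothetical interface name
    virtual_router_id 51
    priority 100            # use a lower value (e.g. 90) on the backup node
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # clients mount via this floating VIP
    }
}
```

With the native FUSE client this extra layer is unnecessary, since the client fails over between replicas on its own.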

Thanks,
Punit

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users



--
Pavlik Salles Juan José




--
Pavlik Salles Juan José

