Re: Raid over 48 disks ... for real now

It is quite a box. There's a picture of the box with the cover removed
on Sun's website:

http://www.sun.com/images/k3/k3_sunfirex4500_4.jpg

From the X4500 homepage, there's a gallery of additional pictures. The
drives drop in from the top, and massive fans channel air through the
small gaps between them. It doesn't look like there's much room
between the disks, but a lot of cold air gets sucked in the front and
a lot of hot air comes out the back, so it must be doing its job :).

I have not tried an fsck on it yet. I'll probably set up a lot of 2TB
partitions rather than a single large one, then write software to
handle storing data across the partitions.
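Spreading data across many small partitions can be as simple as hashing each object key to a mount point, so every key lands on the same partition across runs without a separate index. A minimal sketch of that idea (the mount-point names and the partition count are assumptions for illustration, not details from my setup):

```python
import hashlib

# Hypothetical mount points for the 2TB partitions -- the paths and the
# count of 24 are assumptions, not the actual layout on the box.
MOUNTS = [f"/data/part{i:02d}" for i in range(24)]

def mount_for_key(key: str, mounts=MOUNTS) -> str:
    """Pick the partition that should hold a given object key.

    A stable hash (MD5 here, used only for placement, not security)
    maps the key to the same mount point on every run, so reads can
    recompute the location instead of consulting an index.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return mounts[int(digest, 16) % len(mounts)]
```

The trade-off with plain modulo placement is that adding or removing a partition remaps most keys; consistent hashing avoids that, but for a fixed set of partitions the simple version is enough.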

Norman

On 1/18/08, michael@xxxxxxxxx <michael@xxxxxxxxx> wrote:
> Quoting Norman Elton <normelton@xxxxxxxxx>:
>
> > I posed the question a few weeks ago about how to best accommodate
> > software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
> > Thumper). I appreciate all the suggestions.
> >
> > Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
> > controllers, each with eight 1TB drives, for a total raw storage of
> > 48TB. I must admit, it's quite impressive. And loud. More information
> > about the hardware is available online...
> >
> > http://www.sun.com/servers/x64/x4500/arch-wp.pdf
> >
> > It came loaded with Solaris, configured with ZFS. Things seemed to
> > work fine. I did not do any benchmarks, but I can revert to that
> > configuration if necessary.
> >
> > Now I've loaded RHEL onto the box. For a first shot, I've created one
> > RAID-5 array (+ 1 spare) on each of the controllers, then used LVM to
> > create a VolGroup across the arrays.
> >
> > So now I'm trying to figure out what to do with this space. So far,
> > I've tested mke2fs on a 1TB and a 5TB LogVol.
> >
> > I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
> > Am I better off sticking with relatively small partitions (2-5 TB), or
> > should I crank up the block size and go for one big partition?
>
> Impressive system. I'm curious what the storage drives look like and
> how they attach to the server with that many disks.
> Sounds like you have some time to play around before shoving it into
> production.
> I wonder how long it would take to run an fsck on one large filesystem?
>
> Cheers,
> Mike
>
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
