Performance

The numbers look very disappointing. I destroyed the RAID0 array. Does
this mean the disks on all of the servers are bad?

The disks report themselves as "Vendor: FUJITSU   Model: MBD2300RC         Rev: D809".
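
I have not checked SMART health yet; assuming smartmontools is installed,
something like the following should show whether any of the drives is
reporting a failing health status:

 smartctl -H /dev/sda
 smartctl -H /dev/sdb
 smartctl -H /dev/sdc
 smartctl -H /dev/sdd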

 fdisk -l

Disk /dev/sda: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1          13      104391   83  Linux
/dev/sde2              14       36404   292310707+  8e  Linux LVM
[root@dslg1 ~]# dd if=/dev/zero of=/dev/sda bs=128k count=80k oflag=direct
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 572.117 seconds, 18.8 MB/s
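
For reference, the matching per-disk read test from the suggestion below
would be the following (I have not run it yet):

 dd of=/dev/null if=/dev/sda bs=128k count=80k iflag=direct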


On Wed, Apr 20, 2011 at 12:53 PM, Joe Landman
<landman at scalableinformatics.com> wrote:
> On 04/20/2011 03:43 PM, Mohit Anchlia wrote:
>>
>> Thanks! Is there any recommended configuration you want me to use when
>> using mdadm?
>>
>> I got this link:
>>
>> http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html#ss5.1
>
> First things first, break the RAID0, and then let's measure performance per
> disk, to make sure nothing else bad is going on.
>
>        dd if=/dev/zero of=/dev/DISK bs=128k count=80k oflag=direct
>        dd of=/dev/null if=/dev/DISK bs=128k count=80k iflag=direct
>
> for /dev/DISK being one of the drives in your existing RAID0.  Once we know
> the raw performance, I'd suggest something like this
>
>        mdadm --create /dev/md0 --metadata=1.2 --chunk=512 --level=0 \
>                --raid-devices=4 /dev/DISK1 /dev/DISK2     \
>                                 /dev/DISK3 /dev/DISK4
>        mdadm --examine --scan | grep "md\/0" >> /etc/mdadm.conf
>
> then
>
>        dd if=/dev/zero of=/dev/md0 bs=128k count=80k oflag=direct
>        dd of=/dev/null if=/dev/md0 bs=128k count=80k iflag=direct
>
> and let's see how it behaves.  If these are good, then
>
>        mkfs.xfs -l version=2 -d su=512k,sw=4,agcount=32 /dev/md0
>
> (yeah, I know, gluster folk have a preference for ext* ... we generally
> don't recommend ext* for anything other than OS drives ... you might need to
> install xfsprogs and the xfs kernel module ... which kernel are you using
> BTW?)
>
> then
>
>        mount -o logbufs=4,logbsize=64k /dev/md0 /data
>        mkdir /data/stress
>
>
>        dd if=/dev/zero of=/data/big.file bs=128k count=80k oflag=direct
>        dd of=/dev/null if=/data/big.file bs=128k count=80k iflag=direct
>
> and see how it handles things.
>
> When btrfs finally stabilizes enough to be used, it should be a reasonable
> replacement for xfs, but that is likely a few years away.
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman at scalableinformatics.com
> web  : http://scalableinformatics.com
>        http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
>
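
Side note: once the array is rebuilt as above, I would first confirm it
assembled cleanly before running the mkfs.xfs step, e.g. with:

 cat /proc/mdstat
 mdadm --detail /dev/md0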

