1/4 glusterfsd's runs amok; performance suffers;

On Sat, Aug 11, 2012 at 12:11:39PM +0100, Nux! wrote:
> On 10.08.2012 22:16, Harry Mangalam wrote:
> >pbs3:/dev/md127  8.2T  5.9T  2.3T  73% /bducgl  <---
> 
> Harry,
> 
> The name of that md device (127) indicates there may be something
> dodgy going on there. A device shouldn't be named 127 unless some
> problems have occurred. Are you sure your drives are OK?

I have systems with /dev/md127 all the time, and there's no problem. It
seems to number downwards from /dev/md127 - if I create another md array on
the same system it becomes /dev/md126.
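
If you want a quick sanity check anyway, /proc/mdstat will tell you whether
an array is degraded (as will "mdadm --detail /dev/md127"). Here's a rough
sketch - assuming the usual /proc/mdstat layout, not tested on your boxes -
that just flags arrays whose member-status string shows a missing disk:

#!/usr/bin/env python
# Rough sketch: flag md arrays that look degraded by scanning /proc/mdstat.
# Assumes the usual mdstat format where each array's status line ends with
# something like "[2/2] [UU]"; an underscore in that bracket means a missing
# member.
import re

def degraded_arrays(path="/proc/mdstat"):
    bad = []
    current = None
    with open(path) as f:
        for line in f:
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
                continue
            # status lines look like: "... blocks ... [2/2] [UU]"
            if current and re.search(r"\[[U_]+\]\s*$", line):
                if "_" in line.rsplit("[", 1)[-1]:
                    bad.append(current)
                current = None
    return bad

if __name__ == "__main__":
    broken = degraded_arrays()
    if broken:
        print("Degraded arrays: " + ", ".join(broken))
    else:
        print("All md arrays look healthy.")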

However, this does suggest that the nodes are not configured identically:
two use /dev/sda or /dev/sdb, which suggests either a plain disk or hardware
RAID, while two use /dev/md0 or /dev/md127, which indicates software RAID.

Although this could explain performance differences between the nodes, the
choice of block device is transparent to gluster and doesn't explain why the
files are unevenly balanced - unless there is one huge file which happens to
have been allocated to this node.
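
If you want to test the "one huge file" theory, something like the sketch
below, run against each brick's local path (I'm using /bducgl from your df
output purely as an example - substitute your own brick directories), would
list the largest files on each brick so you can compare:

#!/usr/bin/env python
# Rough sketch: report the N largest files under a brick's local path, to
# check whether one huge file explains the uneven usage across bricks.
# The default path below is only an example; point it at your actual brick.
import heapq
import os
import sys

def largest_files(root, n=10):
    biggest = []  # min-heap of (size, path), keeps the n largest seen so far
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                size = os.lstat(full).st_size
            except OSError:
                continue  # file vanished or unreadable; skip it
            if len(biggest) < n:
                heapq.heappush(biggest, (size, full))
            elif size > biggest[0][0]:
                heapq.heapreplace(biggest, (size, full))
    return sorted(biggest, reverse=True)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/bducgl"
    for size, path in largest_files(root):
        print("%12d  %s" % (size, path))

If one brick's top entry dwarfs everything else, that would account for the
imbalance without anything being wrong with the distribution itself.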

Regards,

Brian.


