Re: Adding applications to GlusterHPC ?


 



On Thu, Feb 14, 2008 at 12:10 PM, Keshetti Mahesh
<keshetti85-student@xxxxxxxxxxx> wrote:
> >  Can you send me /proc/partitions file of master and slave nodes after
>  >  booting into GlusterHPC?
>
>  ====================================================
>  ----> On server (n1),
>
>  # cat /proc/partitions
>
>  i) When booted with the GlusterHPC,
>  major minor #blocks         name
>   22        0    80418240       hdc
>   22        1        104391      hdc1
>   22        2     19077187      hdc2
>   22        3     49145715      hdc3
>   22        4                 1      hdc4
>   22        5      8385898      hdc5
>   22        6      1090071      hdc6
>
>  ii) when booted with OS (FC7),
>  major minor #blocks         name
>   8        0    80418240       sda
>   8        1        104391      sda1
>   8        2     19077187      sda2
>   8        3     49145715      sda3
>   8        4                 1      sda4
>   8        5      8385898      sda5
>   8        6      1090071      sda6
>
>  ====================================================
>
>  ----> On client (n2) : there is no OS installed on client
>
>  # cat /proc/partitions
>
>  (When booted with GlusterHPC)
>
>  major minor   #blocks         name
>   3        0      80418240       hda
>
>  ====================================================
>
>  If you need any more information please feel free to ask.
>
>  -Mahesh
>

Just now I got a hint about what is happening with the partition naming
in GlusterHPC.
On my server (n1), the SATA disk is attached to IDE channel #1 as master,
whereas on the client node (n2) the same-capacity SATA disk is attached
to IDE channel #0 as master. Because of this difference in disk location,
GlusterHPC names the hard disk devices differently on the two nodes,
i.e. the SATA disk is named hdc on the server and hda on the client.

Due to this difference the client is generating errors like
"Can't read from /dev/hdc3: there is no device with that name" and aborting.

If this is the actual reason behind the problem I have reported, then it
really is a bug in GlusterHPC.
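
For what it's worth, a per-node install script could look up the whole-disk
device from /proc/partitions instead of assuming a fixed name like /dev/hdc.
The following is only a sketch of a possible workaround, not GlusterHPC's
actual code; the awk filter and the variable name are my own:

  #!/bin/sh
  # Take the first whole-disk entry (name not ending in a digit) from
  # /proc/partitions, so hda on one node and hdc on another both work.
  disk=$(awk 'NR > 2 && $4 !~ /[0-9]$/ { print $4; exit }' /proc/partitions)
  echo "Would partition /dev/$disk on $(hostname)"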

-Mahesh



