Re: Gluster-devel Digest, Vol 41, Issue 13


 



All,

When I use ext3 as the filesystem on the server, I run into a new problem: a single directory can hold at most 31,998 subdirectories. Do you have any advice for me?

Thanks,
Yaomin
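
For reference: ext3 caps an inode's hard-link count at 32,000. A directory's "." entry and its entry in its parent account for two of those links, and every subdirectory adds one more through its "..", which is where the 31,998 limit comes from. A quick way to reproduce it on an ext3 mount (the path here is illustrative):

# count up until mkdir hits the ext3 link limit
mkdir /locfs/linktest && cd /locfs/linktest
for i in $(seq 1 32000); do
    mkdir "d$i" || { echo "stopped at $i"; break; }   # expect "Too many links" at 31999
done

Common workarounds are hashing entries into a two-level directory tree, or putting the bricks on a filesystem without this cap, such as XFS.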

--------------------------------------------------
From: <gluster-devel-request@xxxxxxxxxx>
Sent: Tuesday, January 06, 2009 8:21 PM
To: <gluster-devel@xxxxxxxxxx>
Subject: Gluster-devel Digest, Vol 41, Issue 13

Send Gluster-devel mailing list submissions to
gluster-devel@xxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.nongnu.org/mailman/listinfo/gluster-devel
or, via email, send a message with subject or body 'help' to
gluster-devel-request@xxxxxxxxxx

You can reach the person managing the list at
gluster-devel-owner@xxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-devel digest..."


Today's Topics:

  1. Re: Cascading different translator doesn't work as
     expectation (yaomin @ gmail)
  2. Re: Cascading different translator doesn't work as
     expectation (Krishna Srinivas)
  3. Re: Cascading different translator doesn't work as
     expectation (yaomin @ gmail)


----------------------------------------------------------------------

Message: 1
Date: Tue, 6 Jan 2009 17:13:49 +0800
From: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Subject: Re: Cascading different translator doesn't
work as expectation
To: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Cc: gluster-devel@xxxxxxxxxx
Message-ID: <D7CA065B4BF644348A6DE543D8213029@yangyaomin>
Content-Type: text/plain; charset="iso-8859-1"

Krishna,

   Thank you for your kind help before.

Following your advice, I have run into a new error. The storage node has no log information, and the client's log looks like the following:

/lib64/libc.so.6[0x3fbb2300a0]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(afr_setxattr+0x6a)[0x2aaaaaf0658a]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/stripe.so(notify+0x220)[0x2aaaab115c80]
/usr/local/lib/libglusterfs.so.0(default_notify+0x25)[0x2aaaaaab8f55]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(notify+0x16d)[0x2aaaaaefc19d]
/usr/local/lib/glusterfs/1.3.9/xlator/protocol/client.so(notify+0x681)[0x2aaaaacebac1]
/usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xbb)[0x2aaaaaabe14b]
/usr/local/lib/libglusterfs.so.0(poll_iteration+0x79)[0x2aaaaaabd509]
[glusterfs](main+0x66a)[0x4026aa]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3fbb21d8a4]
[glusterfs][0x401b69]
---------


[root@IP6 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.5G  6.8G  2.2G  76% /
/dev/sda1             190M   12M  169M   7% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
/dev/sda4             447G  2.8G  422G   1% /locfs
/dev/sdb1             459G  199M  435G   1% /locfsb
df: `/mnt/new': Transport endpoint is not connected

Thanks,
Yaomin
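
A client crash like the one above leaves the FUSE mount in this "Transport endpoint is not connected" state until it is unmounted. A sketch of the usual recovery, with the spec-file path being an assumption:

# clear the stale FUSE mount, then restart the client
umount -l /mnt/new                                  # or: fusermount -u /mnt/new
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/new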

--------------------------------------------------
From: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Sent: Tuesday, January 06, 2009 1:09 PM
To: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Cc: <gluster-devel@xxxxxxxxxx>
Subject: Re: Cascading different translator doesn't work as expectation

Alfred,
Your vol files are wrong. You need to remove all the volume
definitions below "writeback" in the client vol file. In the server vol
file, the definitions of the performance translators have no effect.
Also, you need to use the "features/locks" translator above
"storage/posix".
Krishna
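
A minimal sketch of the server-side layering Krishna describes (in the 1.3.x series the translator is spelled "features/posix-locks"; the volume names here are illustrative):

volume brick-posix
 type storage/posix                 # on-disk storage
 option directory /locfs/brick
end-volume

volume brick
 type features/posix-locks          # locking layered directly above storage/posix
 subvolumes brick-posix
end-volume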

On Tue, Jan 6, 2009 at 8:51 AM, yaomin @ gmail <yangyaomin@xxxxxxxxx> wrote:
All,

    This problem seems to be a difficult one.

    There is a new problem when I tested.

When I kill all the storage nodes, the client still tries to send data
and doesn't quit.

Thanks,
Alfred
From: yaomin @ gmail
Sent: Monday, January 05, 2009 10:52 PM
To: Krishna Srinivas
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: Cascading different translator doesn't work as expectation
Krishna,
    Thank you for your quick response.
There are two log entries in the client's log file when the client starts up:
2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0

  There is no information in the storage node's log file.

  Although I changed the scheduler from ALU to RR, only the
No. 3 (192.168.13.5) and No. 4 (192.168.13.7) storage nodes are doing any work.

  Each machine has 2GB memory.

Thanks,
Alfred

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.gnu.org/pipermail/gluster-devel/attachments/20090106/76074e85/attachment.html

------------------------------

Message: 2
Date: Tue, 6 Jan 2009 15:06:42 +0530
From: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Subject: Re: Cascading different translator doesn't
work as expectation
To: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Cc: gluster-devel@xxxxxxxxxx
Message-ID:
<ad4bc5820901060136k2a3c0943nd89b0d4f41240e22@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Yaomin,

Can you:
* mention what version you are using
* give the modified client and server vol file (to see if there are any errors)
* give gdb backtrace from the core file? "gdb -c /core.pid glusterfs"
and then type "bt"

Krishna
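
Spelled out, with an illustrative core-file name and the binary located via the shell:

gdb -c /core.12345 $(which glusterfs)   # load the core against the glusterfs binary
(gdb) bt                                # print the backtrace of the crashing thread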

On Tue, Jan 6, 2009 at 2:43 PM, yaomin @ gmail <yangyaomin@xxxxxxxxx> wrote:
Krishna,

    Thank you for your kind help before.

Following your advice, I have run into a new error. The storage node has
no log information, and the client's log looks like the following:

/lib64/libc.so.6[0x3fbb2300a0]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(afr_setxattr+0x6a)[0x2aaaaaf0658a]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/stripe.so(notify+0x220)[0x2aaaab115c80]
/usr/local/lib/libglusterfs.so.0(default_notify+0x25)[0x2aaaaaab8f55]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(notify+0x16d)[0x2aaaaaefc19d]
/usr/local/lib/glusterfs/1.3.9/xlator/protocol/client.so(notify+0x681)[0x2aaaaacebac1]
/usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xbb)[0x2aaaaaabe14b]
/usr/local/lib/libglusterfs.so.0(poll_iteration+0x79)[0x2aaaaaabd509]
[glusterfs](main+0x66a)[0x4026aa]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3fbb21d8a4]
[glusterfs][0x401b69]
---------

[root@IP6 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.5G  6.8G  2.2G  76% /
/dev/sda1             190M   12M  169M   7% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
/dev/sda4             447G  2.8G  422G   1% /locfs
/dev/sdb1             459G  199M  435G   1% /locfsb
df: `/mnt/new': Transport endpoint is not connected

Thanks,
Yaomin
--------------------------------------------------
From: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Sent: Tuesday, January 06, 2009 1:09 PM
To: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Cc: <gluster-devel@xxxxxxxxxx>
Subject: Re: Cascading different translator doesn't work as expectation

Alfred,
Your vol files are wrong. You need to remove all the volume
definitions below "writeback" in the client vol file. In the server vol
file, the definitions of the performance translators have no effect.
Also, you need to use the "features/locks" translator above
"storage/posix".
Krishna

On Tue, Jan 6, 2009 at 8:51 AM, yaomin @ gmail <yangyaomin@xxxxxxxxx>
wrote:
All,

    This problem seems to be a difficult one.

    There is a new problem when I tested.

When I kill all the storage nodes, the client still tries to send data
and doesn't quit.

Thanks,
Alfred
From: yaomin @ gmail
Sent: Monday, January 05, 2009 10:52 PM
To: Krishna Srinivas
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: Cascading different translator doesn't work as expectation
Krishna,
    Thank you for your quick response.
    There are two log entries in the client's log file when the client starts up:
    2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
    2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0

  There is no information in the storage node's log file.

  Although I changed the scheduler from ALU to RR, only the
No. 3 (192.168.13.5) and No. 4 (192.168.13.7) storage nodes are doing any work.

  Each machine has 2GB memory.

Thanks,
Alfred





------------------------------

Message: 3
Date: Tue, 6 Jan 2009 20:21:35 +0800
From: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Subject: Re: Cascading different translator doesn't
work as expectation
To: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Cc: gluster-devel@xxxxxxxxxx
Message-ID: <CA759DEFFF2C42AA877946E88BD53E42@yangyaomin>
Content-Type: text/plain; charset="iso-8859-1"

Krishna,

   1. The version is 1.3.9.
   2. The client and server vol files are in the attachments.
   3. The gdb result is "No Stack" (see the note after this message).

Thanks,
Yaomin
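
A note on item 3: "No Stack" means gdb did not load a usable core file. Core dumps are often disabled by default, so a sketch for capturing one on the next crash (the core path is illustrative):

ulimit -c unlimited                     # allow core files in the shell that starts glusterfs
# reproduce the crash, then:
gdb -c /core.12345 $(which glusterfs)
(gdb) bt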

--------------------------------------------------
From: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Sent: Tuesday, January 06, 2009 5:36 PM
To: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Cc: <gluster-devel@xxxxxxxxxx>
Subject: Re: Cascading different translator doesn't work as expectation

Yaomin,

Can you:
* mention what version you are using
* give the modified client and server vol file (to see if there are any errors)
* give gdb backtrace from the core file? "gdb -c /core.pid glusterfs"
and then type "bt"

Krishna

On Tue, Jan 6, 2009 at 2:43 PM, yaomin @ gmail <yangyaomin@xxxxxxxxx>
wrote:
Krishna,

    Thank you for your kind help before.

    Following your advice, I have run into a new error. The storage node has
no log information, and the client's log looks like the following:

/lib64/libc.so.6[0x3fbb2300a0]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(afr_setxattr+0x6a)[0x2aaaaaf0658a]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/stripe.so(notify+0x220)[0x2aaaab115c80]
/usr/local/lib/libglusterfs.so.0(default_notify+0x25)[0x2aaaaaab8f55]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(notify+0x16d)[0x2aaaaaefc19d]
/usr/local/lib/glusterfs/1.3.9/xlator/protocol/client.so(notify+0x681)[0x2aaaaacebac1]
/usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xbb)[0x2aaaaaabe14b]
/usr/local/lib/libglusterfs.so.0(poll_iteration+0x79)[0x2aaaaaabd509]
[glusterfs](main+0x66a)[0x4026aa]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3fbb21d8a4]
[glusterfs][0x401b69]
---------

[root@IP6 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.5G  6.8G  2.2G  76% /
/dev/sda1             190M   12M  169M   7% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
/dev/sda4             447G  2.8G  422G   1% /locfs
/dev/sdb1             459G  199M  435G   1% /locfsb
df: `/mnt/new': Transport endpoint is not connected

Thanks,
Yaomin
--------------------------------------------------
From: "Krishna Srinivas" <krishna@xxxxxxxxxxxxx>
Sent: Tuesday, January 06, 2009 1:09 PM
To: "yaomin @ gmail" <yangyaomin@xxxxxxxxx>
Cc: <gluster-devel@xxxxxxxxxx>
Subject: Re: Cascading different translator doesn't work as expectation

Alfred,
Your vol files are wrong. You need to remove all the volume
definitions below "writeback" in the client vol file. In the server vol
file, the definitions of the performance translators have no effect.
Also, you need to use the "features/locks" translator above
"storage/posix".
Krishna

On Tue, Jan 6, 2009 at 8:51 AM, yaomin @ gmail <yangyaomin@xxxxxxxxx>
wrote:
All,

    This problem seems to be a difficult one.

    There is a new problem when I tested.

    When I kill all the storage nodes, the client still tries to send data
and doesn't quit.

Thanks,
Alfred
From: yaomin @ gmail
Sent: Monday, January 05, 2009 10:52 PM
To: Krishna Srinivas
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: Cascading different translator doesn't work as expectation
Krishna,
    Thank you for your quick response.
    There are two log entries in the client's log file when the client starts up:
    2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
    2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0

  There is no information in the storage node's log file.

  Although I changed the scheduler from ALU to RR, only the
No. 3 (192.168.13.5) and No. 4 (192.168.13.7) storage nodes are doing any work.

  Each machine has 2GB memory.

Thanks,
Alfred

-------------- next part --------------
volume client-ns
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.2        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume name_space          # name of the remote volume
end-volume

volume client11
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.2        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick1          # name of the remote volume
end-volume

volume client12
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.2        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick2          # name of the remote volume
end-volume


volume client21
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.4        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick1          # name of the remote volume
end-volume

volume client22
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.4        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick2          # name of the remote volume
end-volume

volume client31
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.5        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick1          # name of the remote volume
end-volume

volume client32
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.5        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick2          # name of the remote volume
end-volume

volume client41
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.7        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick1          # name of the remote volume
end-volume

volume client42
 type protocol/client
 option transport-type tcp/client       # for TCP/IP transport
 option remote-host 192.168.13.7        # IP address of the remote brick
# option remote-port 6996                # default server port is 6996
# option transport-timeout 30            # seconds to wait for a response
                                        # from server for each request
 option remote-subvolume brick2          # name of the remote volume
end-volume

volume afr1
 type cluster/afr
 subvolumes client11 client21
 option debug off       # detailed debug messages in the log; off by default
 option self-heal on    # self-healing; on by default
end-volume

volume afr2
 type cluster/afr
 subvolumes client31 client41
 option debug off       # detailed debug messages in the log; off by default
 option self-heal on    # self-healing; on by default
end-volume

volume afr3
 type cluster/afr
 subvolumes client12 client22
 option debug off       # detailed debug messages in the log; off by default
 option self-heal on    # self-healing; on by default
end-volume

volume afr4
 type cluster/afr
 subvolumes client32 client42
 option debug off       # detailed debug messages in the log; off by default
 option self-heal on    # self-healing; on by default
end-volume

volume stripe1
  type cluster/stripe
  option block-size 1MB                 #default size is 128KB
  subvolumes afr1 afr2
end-volume

volume stripe2
  type cluster/stripe
  option block-size 1MB                 #default size is 128KB
  subvolumes afr3 afr4
end-volume



volume bricks
 type cluster/unify
 subvolumes stripe1 stripe2
 option namespace client-ns
 option scheduler rr
end-volume


### Add io-threads feature
volume iot
 type performance/io-threads
 option thread-count 1  # default is 1
 option cache-size 16MB #64MB

 subvolumes bricks #stripe #afr #bricks
end-volume

### Add readahead feature
volume readahead
 type performance/read-ahead
 option page-size 1MB      # unit in bytes
 option page-count 4       # cache per file  = (page-count x page-size)
 subvolumes iot
end-volume

### Add IO-Cache feature
volume iocache
 type performance/io-cache
 option page-size 256KB
 option page-count 8
 subvolumes readahead
end-volume

### Add writeback feature
volume writeback
 type performance/write-behind
 option aggregate-size 1MB
 option window-size 3MB        # default is 0 bytes
# option flush-behind on       # default is 'off'
 subvolumes iocache
end-volume
-------------- next part --------------
volume name_space
 type storage/posix
 option directory /locfsb/name_space
end-volume

volume brick_1
 type storage/posix               # POSIX FS translator
 option directory /locfs/brick    # Export this directory
end-volume


volume brick1
 type features/posix-locks               # POSIX FS translator
 subvolumes brick_1
end-volume

volume brick_2
 type storage/posix               # POSIX FS translator
 option directory /locfsb/brick    # Export this directory
end-volume


volume brick2
 type features/posix-locks               # POSIX FS translator
 subvolumes brick_2
end-volume

volume server
 type protocol/server
 option transport-type tcp/server       # For TCP/IP transport
# option listen-port 6996                # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
 subvolumes brick1 brick2 name_space
 option auth.ip.brick1.allow 192.168.13.*     # Allow access to "brick1" volume
 option auth.ip.brick2.allow 192.168.13.*     # Allow access to "brick2" volume
 option auth.ip.name_space.allow 192.168.13.* # Allow access to "name_space" volume
end-volume
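
For completeness, a sketch of how these two vol files would typically be used in the 1.3.x series; the file paths and mount point are assumptions:

# on each storage node:
glusterfsd -f /etc/glusterfs/glusterfs-server.vol
# on the client:
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/new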

------------------------------

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel


End of Gluster-devel Digest, Vol 41, Issue 13
*********************************************




