Gluster-users Digest, Vol 24, Issue 35

Can anyone tell me why I am still getting emails even after I was
assured I would not receive any more?

Please remove ANY zerowait.com email addresses. 



Jonathan Burnett 
http://www.zerowait.com 


Main number       302.996.9408 
8-5 support       888.811.0808 
24 hour support   888.850.0808 

Affordable service, support & upgrades for Network Appliance Equipment 
Monitoring -  Maintenance -  Management 
707 Kirkwood Hwy, Wilmington, DE 19805 

Being busy does not always mean real work. The object of all work is production
or accomplishment and to either of these ends there must be forethought, system,
planning, intelligence, and honest purpose, as well as perspiration. Seeming to do
is not doing.
Thomas A. Edison 

On Thu, 2010-04-22 at 02:09 -0700, gluster-users-request at gluster.org
wrote:

> Send Gluster-users mailing list submissions to
> 	gluster-users at gluster.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> 	http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> or, via email, send a message with subject or body 'help' to
> 	gluster-users-request at gluster.org
> 
> You can reach the person managing the list at
> 	gluster-users-owner at gluster.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-users digest..."
> 
> 
> Today's Topics:
> 
>    1. Re: Memory usage high on server sides (Raghavendra Bhat)
>    2. iscsi with gluster (harris narang)
>    3. Re: Memory usage high on server sides (Chris Jin)
>    4. Re: iscsi with gluster (Liam Slusser)
>    5. Re: iscsi with gluster (miloska at gmail.com)
>    6. Re: iscsi with gluster (Tejas N. Bhise)
>    7. Re: iscsi with gluster (Liam Slusser)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Wed, 21 Apr 2010 22:38:11 -0600 (CST)
> From: Raghavendra Bhat <raghavendrabhat at gluster.com>
> Subject: Re: Memory usage high on server sides
> To: Chris Jin <chris at pikicentral.com>
> Cc: gluster-users <gluster-users at gluster.org>
> Message-ID: <888786402.1758748.1271911091465.JavaMail.root at mb2>
> Content-Type: text/plain; charset=utf-8
> 
> 
> Hi Chris,
> 
> http://patches.gluster.com/patch/3151/
> Can you please apply this patch and see if this works for you?
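> 
> (For anyone following along, a rough sketch of applying such a patch to a
> glusterfs source tree and rebuilding; the directory names and patch level
> below are assumptions, not part of this thread.)
> 
> # from the top of a glusterfs source tree matching the installed version
> cd ~/src/glusterfs                  # assumed source location
> patch -p1 < ~/3151.patch            # the patch downloaded from the URL above
> ./configure && make && sudo make install
> # then restart the glusterfsd / glusterfs processes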
> 
> Thanks
> 
> 
> Regards,
> Raghavendra Bhat
> 
> > Tejas,
> > 
> > We still have hundreds of GB to copy, and have not yet put the new file
> > system to the test. So far the clients work fine, I mean commands like
> > ls, mkdir, touch, etc.
> > 
> > Thanks again for your time.
> > 
> > regards,
> > 
> > Chris
> > 
> > On Wed, 2010-04-14 at 23:04 -0600, Tejas N. Bhise wrote:
> > > Chris,
> > > 
> > > By the way, after the copy is done, how is the system responding to
> > > regular access? That is, did the problem seen during the copy carry
> > > forward into further trouble when subsequently accessing data over
> > > glusterfs?
> > > 
> > > Regards,
> > > Tejas.
> > > 
> > > ----- Original Message -----
> > > From: "Chris Jin" <chris at pikicentral.com>
> > > To: "Tejas N. Bhise" <tejas at gluster.com>
> > > Cc: "gluster-users" <gluster-users at gluster.org>
> > > Sent: Thursday, April 15, 2010 9:48:42 AM
> > > Subject: Re: Memory usage high on server sides
> > > 
> > > Hi Tejas,
> > > 
> > > > Problems you saw - 
> > > > 
> > > > 1) High memory usage on client where gluster volume is mounted
> > > 
> > > Memory usage for clients is 0% after copying.
> > > $ps auxf
> > > USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> > > root     19692  1.3  0.0 262148  6980 ?        Ssl  Apr12  61:33 /sbin/glusterfs --log-level=NORMAL --volfile=/u2/git/modules/shared/glusterfs/clients/r2/c2.vol /gfs/r2/f2
> > > 
> > > > 2) High memory usage on server
> > > Yes.
> > > $ps auxf
> > > USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> > > root     26472  2.2 29.1 718100 600260 ?       Ssl  Apr09 184:09 glusterfsd -f /etc/glusterfs/servers/r2/f1.vol
> > > root     26485  1.8 39.8 887744 821384 ?       Ssl  Apr09 157:16 glusterfsd -f /etc/glusterfs/servers/r2/f2.vol
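> > > 
> > > (Illustrative only: to see whether the glusterfsd resident size keeps
> > > growing over time, something like the following can be left running on
> > > each server; the process name match is an assumption about this setup.)
> > > 
> > > # print memory figures for all glusterfsd processes every 60 seconds
> > > watch -n 60 "ps -C glusterfsd -o pid,rss,vsz,%mem,etime,args"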
> > > 
> > > > 3) 2 days to copy 300 GB data
> > > More than 700GB. There are two folders. The first one is copied to
> > > server 1 and server 2, and the second one is copied to server 2 and
> > > server 3. The vol files are below.
> > > 
> > > > About the config, can you provide the following for both old and
> > > > new systems -
> > > > 
> > > > 1) OS and kernel level on gluster servers and clients
> > > Debian Kernel 2.6.18-6-amd64
> > > 
> > > $uname -a
> > > Linux fs2 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64
> > > GNU/Linux
> > > 
> > > > 2) volume file from servers and clients
> > > 
> > > #####Server Vol file (f1.vol)
> > > # The same settings for f2.vol and f3.vol, just different dirs and ports
> > > # f1 f3 for Server 1, f1 f2 for Server 2, f2 f3 for Server 3
> > > volume posix1
> > >   type storage/posix
> > >   option directory /gfs/r2/f1
> > > end-volume
> > > 
> > > volume locks1
> > >     type features/locks
> > >     subvolumes posix1
> > > end-volume
> > > 
> > > volume brick1
> > >     type performance/io-threads
> > >     option thread-count 8
> > >     subvolumes locks1
> > > end-volume
> > > 
> > > volume server-tcp
> > >     type protocol/server
> > >     option transport-type tcp
> > >     option auth.addr.brick1.allow 192.168.0.*
> > >     option transport.socket.listen-port 6991
> > >     option transport.socket.nodelay on
> > >     subvolumes brick1
> > > end-volume
> > > 
> > > #####Client Vol file (c1.vol)
> > > # The same settings for c2.vol and c3.vol
> > > # s2 s3 for c2, s3 s1 for c3
> > > volume s1
> > >     type protocol/client
> > >     option transport-type tcp
> > >     option remote-host 192.168.0.31
> > >     option transport.socket.nodelay on
> > >     option transport.remote-port 6991
> > >     option remote-subvolume brick1
> > > end-volume
> > > 
> > > volume s2
> > >     type protocol/client
> > >     option transport-type tcp
> > >     option remote-host 192.168.0.32
> > >     option transport.socket.nodelay on
> > >     option transport.remote-port 6991
> > >     option remote-subvolume brick1
> > > end-volume
> > > 
> > > volume mirror
> > >     type cluster/replicate
> > >     option data-self-heal off
> > >     option metadata-self-heal off
> > >     option entry-self-heal off
> > >     subvolumes s1 s2
> > > end-volume
> > > 
> > > volume writebehind
> > >     type performance/write-behind
> > >     option cache-size 100MB
> > >     option flush-behind off
> > >     subvolumes mirror
> > > end-volume
> > > 
> > > volume iocache
> > >     type performance/io-cache
> > >     option cache-size `grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
> > >     option cache-timeout 1
> > >     subvolumes writebehind
> > > end-volume
> > > 
> > > volume quickread
> > >     type performance/quick-read
> > >     option cache-timeout 1
> > >     option max-file-size 256Kb
> > >     subvolumes iocache
> > > end-volume
> > > 
> > > volume statprefetch
> > >     type performance/stat-prefetch
> > >     subvolumes quickread
> > > end-volume
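> > > 
> > > (A side note on the io-cache cache-size above: the glusterfs vol-file
> > > parser does not expand backticks itself, so presumably this file is run
> > > through a shell or template before glusterfs reads it. The embedded
> > > command simply works out to roughly 20% of total RAM in MB:)
> > > 
> > > # MemTotal is reported in kB; this prints ~20% of it in MB, truncated
> > > grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.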
> > > 
> > > 
> > > > 3) Filesystem type of backend gluster subvolumes
> > > ext3
> > > 
> > > > 4) How close to full the backend subvolumes are
> > > New 2T hard disks for each server.
> > > 
> > > > 5) The exact copy command .. did you mount the volumes from
> > > > old and new system on a single machine and did cp or used rsync
> > > > or some other method ? If something more than just a cp, please
> > > > send the exact command line you used.
> > > The old file system uses DRBD and NFS.
> > > The exact command is
> > > sudo cp -R -v -p -P /nfsmounts/nfs3/photo .
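> > > 
> > > (Purely as an illustration, not something from this thread: a resumable
> > > equivalent over the same mount could look like
> > > rsync -a --partial --progress /nfsmounts/nfs3/photo .
> > > which makes it easier to restart an interrupted multi-day copy.)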
> > > 
> > > > 6) How many files/directories (tentative) in that 300GB data
> > > > (would help in trying to reproduce in-house with a smaller test bed).
> > > I cannot tell exactly, but the file sizes are between 1KB and 200KB,
> > > averaging around 20KB.
> > > 
> > > > 7) Was there other load on the new or old system ?
> > > The old systems are still used for web servers.
> > > The new systems are on the same servers but different hard disks. 
> > > 
> > > > 8) Any other patterns you noticed.
> > > There was one instance where a client tried to connect to a server
> > > using its external IP address.
> > > Using the distribute translator across all three mirrors makes the
> > > system twice as slow as using three separately mounted folders.
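> > > 
> > > (For clarity, the distribute setup referred to here would presumably be
> > > one more translator layered over the three replicate volumes on the
> > > client side, roughly like the sketch below; the subvolume names are made
> > > up for illustration.)
> > > 
> > > volume dist
> > >     type cluster/distribute
> > >     # mirror1/mirror2/mirror3 = the three cluster/replicate volumes
> > >     subvolumes mirror1 mirror2 mirror3
> > > end-volume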
> > > 
> > > Is this information enough?
> > > 
> > > Please take a look.
> > > 
> > > Regards,
> > > 
> > > Chris
> > > 
> > > 
> > > 
> > 
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Thu, 22 Apr 2010 11:26:19 +0530
> From: harris narang <harish.narang2010 at gmail.com>
> Subject: iscsi with gluster
> To: gluster-users at gluster.org
> Message-ID:
> 	<n2v9e276b21004212256q2b2a3b83t2b87a5fd7ee1cd2c at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Dear sir/madam,
> I want to use gluster with iSCSI. Please suggest whether it is possible
> or not.
> 
> with regards
> harish narang
> 
> ------------------------------
> 
> Message: 3
> Date: Thu, 22 Apr 2010 15:58:38 +1000
> From: Chris Jin <chris at pikicentral.com>
> Subject: Re: Memory usage high on server sides
> To: Raghavendra Bhat <raghavendrabhat at gluster.com>
> Cc: gluster-users <gluster-users at gluster.org>
> Message-ID: <1271915918.4792.1.camel at Chris-Ubuntu.dascom.office>
> Content-Type: text/plain
> 
> Thanks Raghavendra,
> 
> We will test it soon.
> 
> Regards,
> 
> Chris
> 
> On Wed, 2010-04-21 at 22:38 -0600, Raghavendra Bhat wrote:
> > Hi Chris,
> > 
> > http://patches.gluster.com/patch/3151/
> > Can you please apply this patch and see if this works for you?
> > 
> > Thanks
> > 
> > 
> > Regards,
> > Raghavendra Bhat
> 
> 
> 
> ------------------------------
> 
> Message: 4
> Date: Thu, 22 Apr 2010 00:38:46 -0700
> From: Liam Slusser <lslusser at gmail.com>
> Subject: Re: iscsi with gluster
> To: harris narang <harish.narang2010 at gmail.com>
> Cc: gluster-users at gluster.org
> Message-ID:
> 	<v2p104d4e0e1004220038m300c8b49g376138d22b5827a3 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> You COULD run gluster on top of an iSCSI-mounted volume...but why would
> you want to?  If you already have an iSCSI SAN, why not use GFS2 or
> something like that?
> 
> liam
> 
> On Wed, Apr 21, 2010 at 10:56 PM, harris narang
> <harish.narang2010 at gmail.com> wrote:
> > Dear sir/madam,
> > I want to use gluster with iSCSI. Please suggest whether it is possible
> > or not.
> >
> > with regards
> > harish narang
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> >
> >
> 
> 
> ------------------------------
> 
> Message: 5
> Date: Thu, 22 Apr 2010 09:29:51 +0100
> From: "miloska at gmail.com" <miloska at gmail.com>
> Subject: Re: iscsi with gluster
> To: Liam Slusser <lslusser at gmail.com>
> Cc: gluster-users at gluster.org
> Message-ID:
> 	<x2m34dd39fb1004220129hc0c41accx580cbec46ab8419b at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Thu, Apr 22, 2010 at 8:38 AM, Liam Slusser <lslusser at gmail.com> wrote:
> > You COULD run gluster on top of an iSCSI-mounted volume...but why would
> > you want to? If you already have an iSCSI SAN, why not use GFS2 or
> > something like that?
> >
> 
> You need a full cluster infrastructure for that - Gluster is a much
> simpler solution.
> 
> GFS2 is also _very_ slow. I have never run a test to compare it with
> Gluster, but my feeling is that Gluster is much faster.
> 
> 
> ------------------------------
> 
> Message: 6
> Date: Thu, 22 Apr 2010 02:34:01 -0600 (CST)
> From: "Tejas N. Bhise" <tejas at gluster.com>
> Subject: Re: iscsi with gluster
> To: harris narang <harish.narang2010 at gmail.com>
> Cc: gluster-users at gluster.org
> Message-ID: <1355050767.2542040.1271925241115.JavaMail.root at mb1>
> Content-Type: text/plain; charset=utf-8
> 
> Hi Harish,
> 
> Gluster Server aggregates local filesystems (or directories on them) into a single glusterfs volume. These backend local filesystems can use local disk, FC SAN, or iSCSI - it does not matter, as long as the host and the backend filesystem work with it.
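> 
> (As a rough illustration only - the target address, IQN, device name and
> mount point below are assumptions, not a tested recipe - an iSCSI LUN
> simply shows up as a local block device that you format, mount, and then
> point a storage/posix volume at:)
> 
> # log in to the iSCSI target so the LUN appears as e.g. /dev/sdb
> iscsiadm -m discovery -t sendtargets -p 192.168.0.50
> iscsiadm -m node -T iqn.2010-04.example:storage.lun1 -p 192.168.0.50 --login
> # put a local filesystem on it and mount it
> mkfs.ext3 /dev/sdb
> mount /dev/sdb /export/brick1
> # /export/brick1 can then be used as "option directory" in a storage/posix volume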
> 
> Hope that answers your question.
> 
> Regards,
> Tejas.
> ----- Original Message -----
> From: "harris narang" <harish.narang2010 at gmail.com>
> To: gluster-users at gluster.org
> Sent: Thursday, April 22, 2010 11:26:19 AM
> Subject: iscsi with gluster
> 
> Dear sir/madam,
> I want to use gluster with iSCSI. Please suggest whether it is possible
> or not.
> 
> with regards
> harish narang
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> ------------------------------
> 
> Message: 7
> Date: Thu, 22 Apr 2010 02:07:43 -0700
> From: Liam Slusser <lslusser at gmail.com>
> Subject: Re: iscsi with gluster
> To: "miloska at gmail.com" <miloska at gmail.com>
> Cc: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Message-ID: <D89A807E-A3A8-4CC1-94BB-DB2F9DC714FE at gmail.com>
> Content-Type: text/plain;	charset=us-ascii;	format=flowed;	delsp=yes
> 
> I have a gfs2 cluster and have found the performance to be outstanding.
> It's great with small files. It's hard to say how it compares to my
> gluster cluster since I designed them to do different tasks. But since
> the storage is all shared block level it does have many advantages.
> 
> Liam
> 
> 
> 
> On Apr 22, 2010, at 1:29 AM, "miloska at gmail.com" <miloska at gmail.com>  
> wrote:
> 
> > On Thu, Apr 22, 2010 at 8:38 AM, Liam Slusser <lslusser at gmail.com>  
> > wrote:
> >> You COULD run gluster on top of an iSCSI-mounted volume...but why
> >> would you want to? If you already have an iSCSI SAN, why not use GFS2
> >> or something like that?
> >>
> >
> > You need a full cluster infrastructure for that - Gluster is a much
> > simpler solution.
> >
> > GFS2 is also _very_ slow. I have never run a test to compare it
> > with Gluster, but my feeling is that Gluster is much faster.
> 
> 
> ------------------------------
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> End of Gluster-users Digest, Vol 24, Issue 35
> *********************************************

