filling gluster cluster with large file doesn't crash the system?!

Craig,

A) stat on the server that generated the file:

jeff@riff:/mnt/gluster$ stat y.out
  File: `y.out'
  Size: 214574579712    Blocks: 291441832  IO Block: 65536  regular file
Device: 16h/22d    Inode: 14213316644377695875  Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-11-09 10:32:16.000000000 -0800
Modify: 2010-11-09 13:57:05.000000000 -0800
Change: 2010-11-09 13:57:05.000000000 -0800

B) stat on the gluster brick (brick2) housing the file:

[root@brick2 ~]# stat /exp2/y.out
  File: `/exp2/y.out'
  Size: 214574579712    Blocks: 291441832  IO Block: 4096   regular file
Device: fd00h/64768d    Inode: 655412      Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-11-09 10:32:16.000000000 -0800
Modify: 2010-11-09 13:57:05.000000000 -0800
Change: 2010-11-09 13:57:05.000000000 -0800
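
A quick sanity check on those numbers, assuming stat's Blocks field
counts 512-byte units (the Linux default):

$ echo $((291441832 * 512))                # bytes actually allocated
149218217984                               # ~139 GiB on disk
$ echo $((214574579712 - 149218217984))
65356361728                                # ~61 GiB of apparent size with no blocks behind it

So the brick holds only ~139GB of real data; the 200GB "size" is just
the highest offset reached, which suggests the backend file is sparse,
with writes dropped once the brick filled.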

C) df on brick2:

[root@brick2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       143G  143G     0 100% /
/dev/hda1              99M   13M   82M  13% /boot
tmpfs                 470M     0  470M   0% /dev/shm
172.16.1.76:/gs-test  283G  158G  119G  58% /mnt/gluster
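
Note that / on brick2, which backs /exp2, is at 100% even though the
gluster mount still shows 119G free; and since the NFS client's async
writes are acknowledged out of its page cache, the writer may never see
the ENOSPC. One way to make the failure surface immediately
(hypothetical invocation, not what we ran):

jeff@riff:/mnt/gluster$ dd if=/dev/zero of=y.out bs=1M oflag=sync    # synchronous writes report ENOSPC to dd right away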


Interesting that the reported block size differs depending on who you ask.

[root@brick2 ~]# dumpe2fs /dev/hda1 | grep -i 'Block size'
dumpe2fs 1.39 (29-May-2006)
Block size:               1024
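
One caveat: per the df above, /dev/hda1 is the /boot partition; /exp2
actually lives on the root LV. To query the filesystem backing the
brick, something like:

[root@brick2 ~]# dumpe2fs -h /dev/mapper/VolGroup00-LogVol00 | grep -i 'Block size'
[root@brick2 ~]# stat -f /exp2    # should match the 4096 IO Block stat reported

The 65536 on the client side is presumably the preferred transfer size
the gluster mount advertises, not an on-disk block size.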

-Matt


On Nov 11, 2010, at 9:44 PM, Craig Carl wrote:

> Matt,
>   Based on your Gluster servers' configs that file is bigger than the
> available disk space; obviously that isn't right.
>
> Can you send us the output of `stat y.out` taken from the Gluster
> mount point and from the back end of the server Gluster created the
> file on?
>
>   I'm also going to try to reproduce the problem here on 3.1 and
> 3.1.1qa5.
>
>
> Thanks,
> Craig
>
> -->
> Craig Carl
> Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Gtalk - craig.carl@gmail.com
>
>
> From: "Matt Hodson" <matth at geospiza.com>
> To: "Craig Carl" <craig at gluster.com>
> Cc: "Jeff Kozlowski" <jeff at genesifter.net>, gluster-users at gluster.org
> Sent: Wednesday, November 10, 2010 9:21:40 AM
> Subject: Re: filling gluster cluster with large file doesn't crash the system?!
>
> Craig,
> inline...
>
> On Nov 10, 2010, at 7:17 AM, Craig Carl wrote:
>
> Matt -
>    A couple of questions -
>
> What is your volume config? (`gluster volume info all`)
>
> gluster> volume info all
>
> Volume Name: gs-test
> Type: Distribute
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 172.16.1.76:/exp1
> Brick2: 172.16.2.117:/exp2
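>
> (Worth noting: with a plain Distribute volume DHT places each file
> whole on exactly one brick, so a single file can never use more than
> that brick's free space, even though df on the mount aggregates both
> bricks. A quick check of where the file landed, assuming the backend
> paths mirror the volume root:)
>
> [root@brick1 ~]# ls -lh /exp1/y.out    # should be absent here
> [root@brick2 ~]# ls -lh /exp2/y.out    # the whole file lives here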
>
> What is the hardware config for each storage server?
>
> brick 1 = 141GB
> brick 2 = 143GB
>
> What command did you run to create the test data?
>
> # perl -e 'print rand while 1' > y.out &
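>
> (A bounded variant that stops at a fixed size, useful for a repeatable
> test; hypothetical, not what was run:)
>
> # dd if=/dev/urandom of=y.out bs=1M count=204800    # writes ~200 GiB, then exits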
>
> What process is still writing to the file?
>
> same one as above.
>
>
> Thanks,
> Craig
>
> -->
> Craig Carl
> Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Gtalk - craig.carl@gmail.com
>
>
> From: "Matt Hodson" <matth at geospiza.com>
> To: gluster-users at gluster.org
> Cc: "Jeff Kozlowski" <jeff at genesifter.net>
> Sent: Tuesday, November 9, 2010 10:46:04 AM
> Subject: Re: filling gluster cluster with large file doesn't crash the system?!
>
> I should also note that on this non-production test rig the block size
> on both bricks is 1KB (1024), so the theoretical file size limit is
> 16GB. So how then did I get a file of 200GB?
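>
> (That 16GB ceiling assumes 1KB blocks, but the 1KB figure came from
> /dev/hda1, which is /boot; stat on the brick shows a 4KB IO block.
> Rough ext3 indirect-block arithmetic, as a sketch:
>
> $ echo $(( (12 + 256 + 256**2 + 256**3) * 1024 ))     # 1KB blocks: ~16 GiB cap
> $ echo $(( (12 + 1024 + 1024**2 + 1024**3) * 4096 ))  # 4KB blocks: ~4 TiB, capped at 2 TiB by i_blocks
>
> so a 200GB file is well within ext3's limits at 4KB.)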
> -matt
>
> On Nov 9, 2010, at 10:34 AM, Matt Hodson wrote:
>
> > Craig et al,
> >
> > I have a 2-brick distributed 283GB gluster cluster on CentOS 5. We
> > NFS-mounted the cluster from a 3rd machine and wrote random junk to
> > a file. I watched the file grow to 200GB on the cluster, when it
> > appeared to stop. However, the machine writing to the file still
> > lists the file as growing; it's now at over 320GB. What's going on?
> >
> > -matt
> >
> > -------
> > Matt Hodson
> > Scientific Customer Support, Geospiza
> > (206) 633-4403, Ext. 111
> > http://www.geospiza.com
> >
> >
> >
> >
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>


