Re: filling gluster cluster with large file doesn't crash the system?!


 



Rudi Ahlers wrote:
> On Tue, Nov 9, 2010 at 8:46 PM, Matt Hodson <matth at geospiza.com> wrote:
>   
>> I should also note that on this non-production test rig the block size on
>> both bricks is 1 KB (1024 bytes), so the theoretical file size limit is
>> 16 GB. So how, then, did I get a file of 200 GB?
>> -matt
>>
>> On Nov 9, 2010, at 10:34 AM, Matt Hodson wrote:
>>
>>     
>>> Craig et al.,
>>>
>>> I have a 2-brick distributed 283 GB Gluster cluster on CentOS 5. We
>>> NFS-mounted the cluster from a third machine and wrote random junk to a
>>> file. I watched the file grow to 200 GB on the cluster, where it appeared
>>> to stop. However, the machine writing to the file still lists the file as
>>> growing; it's now at over 320 GB. What's going on?
>>>
>>> -matt
>>>
>>> -------
>>> Matt Hodson
>>> Scientific Customer Support, Geospiza
>>> (206) 633-4403, Ext. 111
>>> http://www.geospiza.com
>>>
>>>       
>
>
> How, exactly, did you fill the file with junk?
>
>
>
>   
#perl -e 'print rand while 1' > y.out &
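A diagnostic sketch, not from the thread itself: two things worth checking here are the actual block size of the filesystems backing the bricks (ext3 with 1 KB blocks does cap files at 16 GB, matching the figure above), and whether the 320 GB the client reports is the file's *apparent* size rather than blocks actually allocated. Apparent size can run far ahead of allocated size, e.g. with sparse files or writes still buffered client-side over NFS. All paths and mount points below are examples, not taken from the thread:

```shell
# 1) Block size of the filesystem backing a path (run on each brick;
#    "/" is only an example mount point):
stat -f -c 'fundamental block size: %S bytes' /

# 2) Apparent size vs. blocks actually allocated. A sparse file shows
#    how far the two can diverge: this creates a file with a 1 GiB
#    apparent size that allocates (almost) no disk blocks.
dd if=/dev/zero of=/tmp/sparse.out bs=1 count=0 seek=1G 2>/dev/null
stat -c 'apparent size: %s bytes, allocated: %b blocks (512 B each)' /tmp/sparse.out
du -h --apparent-size /tmp/sparse.out   # reports the 1 GiB apparent size
du -h /tmp/sparse.out                   # reports (almost) nothing on disk
rm -f /tmp/sparse.out
```

Comparing `du` against `ls -l` for the test file, on both the NFS client and each brick, would show whether the bricks ever allocated the blocks the client thinks it wrote.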




