tailing active files

This seems to indicate that concurrent (read-write or write-write)
file access from the same FUSE mount point is broken in general.
Is this the case?  Is there a fix coming soon?


-Bryan




On Jan 13, 2009, at 3:20 AM, Krishna Srinivas wrote:

> I think Avati's mail did not reach the list; here are the contents:
>
> ---
> The problem is that FUSE does not invalidate its cache when data is
> written through an fd opened in direct IO mode. Opening the write fd
> in direct IO mode was necessary for performance reasons in FUSE until
> very recent versions (otherwise writes happen in 4KB chunks). You can
> mount the filesystem with --disable-direct-io-mode and you will
> observe that tail -f works fine.
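>
> For example, a client mount along these lines (the volfile path and
> mount point here are illustrative, not from the original report):
>
>     glusterfs --disable-direct-io-mode -f /etc/glusterfs/client.vol /mnt/gluster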
>
> We plan to get rid of the whole direct IO mess when a newer version
> of FUSE is available (where large writes are permitted).
> ---
>
> On Tue, Jan 13, 2009 at 1:49 PM, Krishna Srinivas <krishna at zresearch.com> wrote:
>> It is a bug; thanks for reporting!
>> Krishna
>>
>> On Tue, Jan 13, 2009 at 12:07 PM, Andrew McGill <list2008 at lunch.za.net> wrote:
>>> Same on 1.3.12:  tail -f doesn't -f :
>>>
>>>       glusterfs 1.3.12
>>>       Repository revision: glusterfs--mainline--2.5--patch-797
>>>
>>> Here's something interesting though -- it is actually working, it is
>>> just reading nulls instead of actual data.  Here is the transition
>>> from '8', '9' to '10', '11':
>>>
>>> strace tail -f nums.txt     # .......
>>>
>>> nanosleep({1, 0}, NULL)                 = 0
>>> fstat64(3, {st_mode=S_IFREG|0644, st_size=422, ...}) = 0
>>> read(3, "\0\0", 8192)                   = 2
>>> read(3, "", 8192)                       = 0
>>> fstat64(3, {st_mode=S_IFREG|0644, st_size=422, ...}) = 0
>>> write(1, "\0\0", 2)                     = 2
>>> nanosleep({1, 0}, NULL)                 = 0
>>> fstat64(3, {st_mode=S_IFREG|0644, st_size=425, ...}) = 0
>>> read(3, "\0\0\0", 8192)                 = 3
>>> read(3, "", 8192)                       = 0
>>> fstat64(3, {st_mode=S_IFREG|0644, st_size=425, ...}) = 0
>>> write(1, "\0\0\0", 3)                   = 3
>>> nanosleep({1, 0}, NULL)                 = 0
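>>>
>>> (As a hypothetical cross-check, not from the trace above: reading the
>>> same tail of the file through a fresh fd,
>>>
>>>       tail -c 5 nums.txt | od -c
>>>
>>> should print the real digits, matching the observation below that cat
>>> sees all the numbers -- only the long-lived fd reads back NULs.)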
>>>
>>>
>>>
>>> &:-)
>>>
>>>
>>> On Tuesday 13 January 2009 05:04:53 Keith Freedman wrote:
>>>> This is interesting.
>>>>
>>>> I'm running glusterfs--mainline--3.0--patch-840
>>>>
>>>> using AFR.
>>>> I did your test.
>>>> On the local machine, running tail does exactly what you
>>>> indicated... it acts like it was run without the -f.
>>>> On the other replication server, lines show up 2 at a time.
>>>> So it started at 9 or something, then I got 10 & 11; 2 seconds
>>>> later, 12, 13, etc. At the same time, tail -f on the server I was
>>>> running the script on sat there a while, then produced some output.
>>>>
>>>> Again, the AFR machine updated more frequently, but the local
>>>> machine just needed to get past some buffering.
>>>>
>>>> What I saw (you'll notice this faster if you remove the sleep) was
>>>> that it would show some numbers, then jump and show another batch of
>>>> numbers, then pause, then show another batch.
>>>> Here's an output from tail -f.
>>>> Notice that at 63 there was some weirdness, like it was trying to
>>>> print 1041 and 1860, I'm guessing; then I got 1861 -.... and then
>>>> I'd get a jump in numbers. If I cat the file, all the numbers are
>>>> there.
>>>>
>>>> Also, in some of my tests I got input/output errors -- I believe
>>>> this was due to having tail -f running on the other AFR server while
>>>> the code you provided used >, which truncates the file. It seems AFR
>>>> has a little bug there if a file is open for reading on the other
>>>> server and is truncated. The I/O error went away when I killed the
>>>> tail process on the other machine.
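>>>>
>>>> (Roughly, as a sketch of that reproduction -- on the other AFR
>>>> server:
>>>>
>>>>       tail -f /mnt/gluster/nums.txt
>>>>
>>>> then re-run the generator script here; its > redirection truncates
>>>> the file the remote tail still holds open, and that tail starts
>>>> failing with input/output errors until it is killed.)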
>>>>
>>>> 55
>>>> 56
>>>> 57
>>>> 58
>>>> 59
>>>> 60
>>>> 61
>>>> 62
>>>> 63
>>>> 041
>>>> 60
>>>> 1861
>>>> 1862
>>>> 1863
>>>> 1864
>>>> 1865
>>>> 1866
>>>> 1867
>>>> 1868
>>>> 1869
>>>> 1870
>>>> 1871
>>>> 1872
>>>>
>>>> I also noticed
>>>>
>>>> At 02:25 PM 1/12/2009, Bryan Talbot wrote:
>>>>> I'm running 1.4rc7 (glusterfs--mainline--3.0--patch-814) and seeing
>>>>> some odd behavior when tailing a file that is being written to by a
>>>>> single process -- a log file in this case.
>>>>>
>>>>> The odd behaviors that I've noticed are that "tail -f" behaves like
>>>>> plain "tail" and doesn't show any updates. In addition,
>>>>> /usr/bin/less seems to show binary values (at least that's what I
>>>>> assume the "^@" characters are supposed to be -- ^@ is how less
>>>>> renders a NUL byte) when the bottom of the file is accessed with
>>>>> "G", instead of the new data added to the file after less was
>>>>> started.
>>>>>
>>>>> Is this a known issue?  Is there a work-around?
>>>>>
>>>>> Here's how I'm able to reproduce it. Run the script below and
>>>>> direct the output to a gluster-hosted file. Then attempt to
>>>>> "tail -f" or use /usr/bin/less on the file from another terminal.
>>>>>
>>>>>
>>>>> $> num=0; while [ 1 ]; do echo $((num++)); sleep 1; done > /mnt/gluster/nums.txt
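>>>>>
>>>>> And then, from the second terminal on the same mount:
>>>>>
>>>>> $> tail -f /mnt/gluster/nums.txt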
>>>>>
>>>>>
>>>>> The output from /usr/bin/less ends up looking like this:
>>>>> ...
>>>>> 301
>>>>> 302
>>>>> 303
>>>>> 304
>>>>> 305
>>>>> 306
>>>>> 307
>>>>> 308
>>>>> 309
>>>>> 310
>>>>> 311
>>>>> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
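>>>>>
>>>>> (A hypothetical way to confirm the raw bytes, e.g.
>>>>>
>>>>> $> od -c /mnt/gluster/nums.txt | tail -4
>>>>>
>>>>> od renders NUL bytes as \0, which is what less displays as ^@.)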
>>>>>
>>>>>
>>>>> Gluster configs are very basic:
>>>>> ## Server
>>>>> volume brick
>>>>>   type storage/posix
>>>>>   option directory /glusterfs/export
>>>>> end-volume
>>>>>
>>>>> volume lock
>>>>>   type features/posix-locks
>>>>>   subvolumes brick
>>>>> end-volume
>>>>>
>>>>> volume export
>>>>>   type performance/io-threads
>>>>>   subvolumes lock
>>>>>   option thread-count 4  # default value is 1
>>>>> end-volume
>>>>>
>>>>> volume server
>>>>>   type protocol/server
>>>>>   option transport-type tcp
>>>>>   subvolumes export
>>>>>   option auth.addr.export.allow 10.10.10.*
>>>>> end-volume
>>>>>
>>>>>
>>>>>
>>>>> ## Client
>>>>> volume volume1
>>>>>   type protocol/client
>>>>>   option transport-type tcp/client
>>>>>   option remote-host    10.10.10.2
>>>>>   option remote-subvolume export
>>>>> end-volume
>>>>>
>>>>> volume volume2
>>>>>   type protocol/client
>>>>>   option transport-type tcp/client
>>>>>   option remote-host    10.10.10.3
>>>>>   option remote-subvolume export
>>>>> end-volume
>>>>>
>>>>> volume mirror1
>>>>>   type cluster/afr
>>>>>   subvolumes volume1 volume2
>>>>> end-volume
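>>>>>
>>>>> For completeness, the client volfile above is then mounted in the
>>>>> usual way; the volfile path and mount point here are illustrative:
>>>>>
>>>>> $> glusterfs -f /etc/glusterfs/client.vol /mnt/gluster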
>>>>>
>>>>>
>>>>>
>>>>> -Bryan
>>>>>
>>>>>
>>>>>
>>>>>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users



