Re: Reduce memcpy in glfs read and write

Would https://bugzilla.redhat.com/show_bug.cgi?id=1233136 be related to
Sachin's problem?

Milind

On 06/21/2016 06:28 PM, Pranith Kumar Karampuri wrote:
Hey!!
        Hope you are doing well. I took a look at the bt. When a flush
comes in, write-behind has to flush all the pending writes down. I see the
following frame hung in iobref_unref:
Thread 7 (Thread 0x7fa601a30700 (LWP 16218)):
#0  0x00007fa60cc55225 in pthread_spin_lock () from
/lib64/libpthread.so.0 <<---- Does it always hang there?
#1  0x00007fa60e1f373e in iobref_unref (iobref=0x19dc7e0) at iobuf.c:907
#2  0x00007fa60e246fb2 in args_wipe (args=0x19e70ec) at default-args.c:1593
#3  0x00007fa60e1ea534 in call_stub_wipe_args (stub=0x19e709c) at
call-stub.c:2466
#4  0x00007fa60e1ea5de in call_stub_destroy (stub=0x19e709c) at
call-stub.c:2482

Is this on top of the master branch? It looks like either we missed an
unlock of the spin-lock, or the iobref holds junk values that make it
appear to be in a locked state (maybe a double free?). Do you have any
extra patches in your repo that make changes to iobuf?
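To make the double-free theory concrete, here is a minimal sketch
(illustrative only, not the actual iobuf.c code) of a spin-lock
protected refcount like the one iobref uses. If a second unref runs
after the last one has already freed the object, pthread_spin_lock()
spins on freed memory whose lock word may hold any value, which would
match the hung frame #0 above:

#include <pthread.h>
#include <stdlib.h>

struct iobref_sketch {
        pthread_spinlock_t lock;   /* guards ref */
        int                ref;
};

static struct iobref_sketch *
iobref_new_sketch (void)
{
        struct iobref_sketch *iobref = calloc (1, sizeof (*iobref));

        if (!iobref)
                return NULL;
        pthread_spin_init (&iobref->lock, PTHREAD_PROCESS_PRIVATE);
        iobref->ref = 1;
        return iobref;
}

static void
iobref_unref_sketch (struct iobref_sketch *iobref)
{
        int ref;

        pthread_spin_lock (&iobref->lock);    /* frame #0 in the bt */
        ref = --iobref->ref;
        pthread_spin_unlock (&iobref->lock);

        if (ref == 0) {
                /* Last ref: the object is freed here.  Any unref that
                 * arrives after this point (a double free on the
                 * caller's side) operates on freed memory and can spin
                 * forever on a junk lock word. */
                pthread_spin_destroy (&iobref->lock);
                free (iobref);
        }
}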

On Tue, Jun 21, 2016 at 4:07 AM, Sachin Pandit <spandit@xxxxxxxxxxxxx> wrote:

    Hi all,

    I bid adieu to you all with the hope of crossing paths again, and the
    time has come rather quickly. It feels great to work on GlusterFS again.

    Currently we are trying to write data backed up by Commvault Simpana
    to a glusterfs volume (a disperse volume). To improve performance, I
    have implemented the proposal put forward by Rafi K C [1]. I have some
    questions regarding libgfapi and the iobuf pool.

    To reduce an extra level of copy in glfs read and write, I have
    implemented a few APIs to request a buffer (similar to the ones
    described in [1]) from the iobuf pool, which the application can write
    data into directly. With this implementation, when I try to reuse the
    buffer for consecutive writes, I see a hang in syncop_flush during
    glfs_close (the backtrace of the hang can be found in [2]). I wanted
    to know whether reusing the buffer is recommended. If not, do we need
    to request a new buffer for each write?
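    For clarity, here is a sketch of the two usage patterns in question.
    The glfs_get_buffer()/glfs_release_buffer() names and signatures
    below are hypothetical placeholders for the new buffer-request APIs,
    not the actual implementation:

    #include <glusterfs/api/glfs.h>
    #include <string.h>

    /* Hypothetical zero-copy helpers in the spirit of [1]; names and
     * signatures are illustrative only. */
    extern void *glfs_get_buffer (glfs_fd_t *fd, size_t size);
    extern void  glfs_release_buffer (glfs_fd_t *fd, void *buf);

    #define CHUNK (128 * 1024)

    /* Pattern (a): one buffer from the iobuf pool, reused for every
     * write -- the pattern that leads to the hang at close time. */
    static void
    write_reusing_buffer (glfs_fd_t *fd, int n_chunks)
    {
            void *buf = glfs_get_buffer (fd, CHUNK);

            for (int i = 0; i < n_chunks; i++) {
                    memset (buf, i & 0xff, CHUNK);  /* app fills the buffer */
                    glfs_write (fd, buf, CHUNK, 0); /* no extra memcpy */
            }
            glfs_release_buffer (fd, buf);
    }

    /* Pattern (b): a fresh buffer per write, released as soon as the
     * write returns -- the alternative being asked about. */
    static void
    write_buffer_per_write (glfs_fd_t *fd, int n_chunks)
    {
            for (int i = 0; i < n_chunks; i++) {
                    void *buf = glfs_get_buffer (fd, CHUNK);

                    memset (buf, i & 0xff, CHUNK);
                    glfs_write (fd, buf, CHUNK, 0);
                    glfs_release_buffer (fd, buf);
            }
    }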


    Setup: Distributed-Disperse (4 x (2+1)), with bricks spread across 3
    nodes.

    [1] http://www.gluster.org/pipermail/gluster-devel/2015-February/043966.html

    [2] Attached file - bt.txt

    Thanks & Regards,
    Sachin Pandit.



--
Pranith



_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


