Re: Rebalance data migration and corruption

On 02/09/2016 12:30 PM, Raghavendra G wrote:
             Right. But if there is simultaneous access to the same
             file from any other client and the rebalance process,
             delegations will not be granted, or will be revoked if
             already granted, even though the two are operating at
             different offsets. So if you rely only on delegations,
             migration may not proceed while an application holds a
             lock or is doing any I/O.


        Does the brick process wait for a response from the delegation
        holder (the rebalance process here) before it wipes out the
        delegations/locks? If that's the case, the rebalance process
        can complete one transaction of (read, src) and (write, dst)
        before responding to a delegation recall. That way there is no
        starvation for either the applications or the rebalance process
        (though this makes both of them slower, but that cannot be
        helped, I think).
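        The recall-then-complete idea in that question can be sketched
        roughly like this (a minimal single-threaded Python model;
        RebalanceMigrator and every other name here are made up for
        illustration and are not GlusterFS code):

```python
import threading

class RebalanceMigrator:
    """Sketch: finish the in-flight (read src, write dst) transaction
    before acknowledging a delegation recall, so neither the
    application nor the rebalance process starves."""

    def __init__(self):
        self._txn_lock = threading.Lock()   # serializes migration transactions
        self._recalled = threading.Event()  # set when the brick recalls the lease

    def on_recall(self, return_delegation):
        # A recall arrives from the brick: note it, then wait for the
        # current transaction (if any) to finish before returning the
        # delegation.
        self._recalled.set()
        with self._txn_lock:       # blocks while a transaction is in flight
            return_delegation()    # now safe to give the delegation back

    def migrate_chunk(self, read_src, write_dst):
        # One transaction: (read, src) then (write, dst), atomic with
        # respect to recall handling.
        with self._txn_lock:
            data = read_src()
            write_dst(data)
        # After the lock is dropped, a pending recall can proceed.
        # Returns False once recalled, i.e. stop and re-acquire.
        return not self._recalled.is_set()
```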


    Yes. The brick process should wait for a certain period before
    revoking the delegations forcefully if they are not returned by
    the client. Also, if required (as NFS servers do), we can choose
    to increase this timeout value at run time if the client is
    diligently flushing the data.
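    As a rough illustration of that run-time extension (RecallTimer,
    RECALL_TIMEOUT and the progress counter are all hypothetical names,
    not the actual implementation), the brick could keep renewing the
    forceful-revoke deadline as long as the client keeps making flush
    progress:

```python
import time

RECALL_TIMEOUT = 60  # seconds; a made-up default for the sketch

class RecallTimer:
    """Sketch: extend the forceful-revoke deadline while the client is
    diligently flushing, i.e. its flushed-byte count keeps growing."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self.deadline = self._now() + RECALL_TIMEOUT
        self._last_progress = 0

    def note_flush_progress(self, bytes_flushed):
        # Renew the deadline only when the client actually advances;
        # a stalled client gets no extension.
        if bytes_flushed > self._last_progress:
            self._last_progress = bytes_flushed
            self.deadline = self._now() + RECALL_TIMEOUT

    def must_revoke(self):
        # True once the (possibly extended) deadline has passed.
        return self._now() >= self.deadline
```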


hmm.. I would prefer an infinite timeout. The only scenario where the
brick process should forcefully flush leases is a connection loss with
the rebalance process. The more scenarios in which the brick can flush
leases without the knowledge of the rebalance process, the more race
windows we open up for this bug to occur.

In fact, at least in theory, to be correct the rebalance process should
replay all the transactions that happened under the lease which got
flushed out by the brick (after re-acquiring that lease). So we would
like to avoid any such scenarios.

Btw, what is the necessity of timeouts? Is it insurance against rogue
clients that won't respond to lease recalls?
Yes. It is to protect against rogue clients and to prevent starvation
of other clients.

In the current design, every lease is associated with a lease-id (like
the lock-owner in the case of locks), and all further fops (I/Os) have
to be done using this lease-id. So if any fop reaches the brick process
with the lease-id of a lease which has already been flushed by the
brick process, we can send a special error, and the rebalance process
can then replay all those fops. Will that be sufficient?
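If that scheme holds, the brick-side check and the client-side replay
might look roughly like this (a Python sketch; ESTALELEASE and all the
other names are invented for illustration, since the source does not
name the special error):

```python
ESTALELEASE = "ESTALELEASE"  # hypothetical special error for a flushed lease

class Brick:
    """Sketch: every fop carries a lease-id; a fop arriving under a
    lease the brick has already flushed gets the special error back
    instead of being applied silently."""

    def __init__(self):
        self.valid_leases = set()

    def grant_lease(self, lease_id):
        self.valid_leases.add(lease_id)

    def flush_lease(self, lease_id):
        self.valid_leases.discard(lease_id)

    def fop(self, lease_id, op):
        if lease_id not in self.valid_leases:
            return ESTALELEASE  # tell the client its lease is gone
        return op()

def rebalance_fop(brick, lease_id, op, log, reacquire):
    # Client side: log every fop issued under the lease; on the special
    # error, re-acquire the lease and replay the whole log in order.
    log.append(op)
    result = brick.fop(lease_id, op)
    if result == ESTALELEASE:
        new_id = reacquire()
        for queued in log:
            result = brick.fop(new_id, queued)
    return result
```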

CCing Poornima, who has been implementing it.


Thanks,
Soumya
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


