Re: Need inputs on patch #17985

Update: two more days to go until the deadline. So far, no open issues have been identified against this patch.

On Fri, Sep 8, 2017 at 6:54 AM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


----- Original Message -----
> From: "FNU Raghavendra Manjunath" <rabhat@xxxxxxxxxx>
> To: "Raghavendra Gowdappa" <rgowdapp@xxxxxxxxxx>
> Cc: "Raghavendra G" <raghavendra.hg@xxxxxxxxx>, "Nithya Balachandran" <nbalacha@xxxxxxxxxx>, anoopcs@xxxxxxxxxx,
> "Gluster Devel" <gluster-devel@xxxxxxxxxxx>, "Raghavendra Bhat" <raghavendra@xxxxxxxxxx>
> Sent: Thursday, September 7, 2017 6:44:51 PM
> Subject: Re: Need inputs on patch #17985
>
> From the snapview-client perspective, there is one important thing to note.
> To build the context for the entry point (by default ".snaps"), an explicit
> lookup has to be done on it. The dentry for ".snaps" is not returned when
> readdir is done on its parent directory (not even with ls -a). So, to build
> the context of ".snaps" (in which snapview-client records whether the inode
> is a real or a virtual inode), we need a lookup.

Since the dentry corresponding to ".snaps" is not returned, there won't be an inode for this directory linked in the inode table. Also, glusterfs wouldn't have handed out a nodeid corresponding to ".snaps" in the readdir response (as the dentry itself is not returned). So, the kernel will do an explicit lookup before performing any operation on ".snaps" (unlike dentries that do carry a nodeid, where the kernel can choose to skip the lookup), and we are safe. So, #17985 is safe in its current form.
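
To make the mechanism concrete, here is a minimal, self-contained C sketch of the rule being relied on (stand-in types only, not the actual fuse-bridge code): a readdir(p) entry carries a nodeid only when a dentry with a linked inode is part of the reply, and a name the kernel holds no nodeid for must be looked up explicitly.

    #include <stdint.h>
    #include <stddef.h>

    /* Stand-ins for the real types; the actual logic lives in
     * xlators/mount/fuse/src/fuse-bridge.c. */
    struct sv_inode { uint64_t ino; };

    struct dirent_reply {
        const char *name;
        uint64_t    nodeid;     /* 0 => kernel must send LOOKUP first */
    };

    /* An entry is given a nodeid only when a dentry with a linked
     * inode is part of the readdir reply.  ".snaps" is never listed,
     * so the kernel never learns a nodeid for it and is forced to
     * issue an explicit LOOKUP before any other fop on that name. */
    static void fill_entry(struct dirent_reply *de, const char *name,
                           struct sv_inode *linked)
    {
        de->name   = name;
        de->nodeid = linked ? linked->ino : 0;   /* no inode => 0 */
    }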

>
> From the snapview-server perspective, a lookup might be needed as well. In
> snapview-server, a glfs handle is established between the snapview-server
> process and the snapshot brick. So an inode in the snapview-server process
> contains the glfs handle for the object being accessed from the snapshot.
> In snapview-server, readdirp does not build the inode context (which
> contains the glfs handle etc.) because the glfs handle is returned only in
> lookup.

The same argument given above holds for this case too. The important point is that there is no dentry, and hence no nodeid, corresponding to ".snaps" passed to the kernel, so the kernel is forced to do an explicit lookup.
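
For illustration, a minimal sketch of why the handle can only be cached on lookup. It assumes recent libgfapi headers and the five-argument glfs_h_lookupat(); the svs_ctx struct below is hypothetical, not the real snapview-server inode context.

    #include <sys/stat.h>
    #include <glusterfs/api/glfs.h>
    #include <glusterfs/api/glfs-handles.h>

    /* hypothetical per-inode context, for illustration only */
    struct svs_ctx {
        struct glfs_object *handle;   /* obtained only via lookup */
    };

    static int svs_build_ctx(struct glfs *fs, struct glfs_object *parent,
                             const char *name, struct svs_ctx *ctx)
    {
        struct stat st;

        /* A glfs handle is returned only by a lookup-style call;
         * readdirp never hands one back, so the context stays empty
         * until an explicit lookup reaches snapview-server. */
        ctx->handle = glfs_h_lookupat(fs, parent, name, &st, 0);
        return ctx->handle ? 0 : -1;
    }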

>
> Regards,
> Raghavendra
>
>
> On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
> wrote:
>
> >
> >
> > ----- Original Message -----
> > > From: "Raghavendra G" <raghavendra.hg@xxxxxxxxx>
> > > To: "Nithya Balachandran" <nbalacha@xxxxxxxxxx>
> > > Cc: "Raghavendra Gowdappa" <rgowdapp@xxxxxxxxxx>, anoopcs@xxxxxxxxxx,
> > "Gluster Devel" <gluster-devel@xxxxxxxxxxx>,
> > > raghavendra@xxxxxxxxxx
> > > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > > Subject: Re: Need inputs on patch #17985
> > >
> > > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <
> > nbalacha@xxxxxxxxxx>
> > > wrote:
> > >
> > > > It has been a while, but iirc snapview-client (loaded above dht/tier
> > > > etc.) had some issues when we ran tiering tests. Rafi might have more
> > > > info on this - basically it was expecting to find the inode_ctx
> > > > populated, but it was not.
> > > >
> > >
> > > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> > > ownership of,
> > >
> > > * Identifying whether the patch in question causes the issue?
> >
> > gf_svc_readdirp_cbk sets the relevant state in the inode [1]. I quickly
> > checked whether it is the same state stored by gf_svc_lookup_cbk, and it
> > looks like it is. So, I guess readdirp is handled correctly by
> > snapview-client and an explicit lookup is not required. But I will wait
> > for inputs from rabhat and rafi.
> >
> > [1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
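
For reference, the equivalence being checked in [1] boils down to both callbacks recording the same one-word inode state. A self-contained sketch of that pattern, with stand-in types rather than the real xlator structs:

    #include <stdint.h>

    typedef enum { SVC_REAL = 1, SVC_VIRTUAL = 2 } svc_type_t;

    struct inode_stub { uint64_t ctx; };   /* stand-in for inode_t */

    /* inode_ctx_set() in the real code stores one word per xlator;
     * both callbacks below record the identical value, so an inode
     * linked via readdirp is initialised exactly as one linked via
     * lookup. */
    static void svc_ctx_set(struct inode_stub *i, svc_type_t t)
    {
        i->ctx = (uint64_t)t;
    }

    static void svc_lookup_cbk(struct inode_stub *i, int virt)
    {
        svc_ctx_set(i, virt ? SVC_VIRTUAL : SVC_REAL);
    }

    static void svc_readdirp_cbk(struct inode_stub *i, int virt)
    {
        svc_ctx_set(i, virt ? SVC_VIRTUAL : SVC_REAL);
    }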
> >
> > > * Send a fix or at least evaluate whether a fix is possible.
> > >
> > > @Others,
> > >
> > > To get some traction on this, is it ok if we:
> > > * Set a deadline of around 15 days to complete the review (or testing
> > > with the patch in question) of the respective components and to raise
> > > any issues found.
> > > * After the deadline, if there are no open issues, go ahead and merge
> > > the patch?
> > >
> > > If that time is not enough, let us know and we can come up with a
> > > reasonable timeline.
> > >
> > > regards,
> > > Raghavendra
> > >
> > >
> > > > On 24 August 2017 at 10:13, Raghavendra G <raghavendra.hg@xxxxxxxxx>
> > > > wrote:
> > > >
> > > >> Note that we need to consider xlators on the brick stack too. I've
> > > >> added maintainers/peers of xlators on the brick stack. Please
> > > >> explicitly ack/nack whether this patch affects your component.
> > > >>
> > > >> For reference, the following are the xlators loaded in the brick stack:
> > > >>
> > > >> storage/posix
> > > >> features/trash
> > > >> features/changetimerecorder
> > > >> features/changelog
> > > >> features/bitrot-stub
> > > >> features/access-control
> > > >> features/locks
> > > >> features/worm
> > > >> features/read-only
> > > >> features/leases
> > > >> features/upcall
> > > >> performance/io-threads
> > > >> features/selinux
> > > >> features/marker
> > > >> features/barrier
> > > >> features/index
> > > >> features/quota
> > > >> debug/io-stats
> > > >> performance/decompounder
> > > >> protocol/server
> > > >>
> > > >>
> > > >> For those not following this thread, the question we need to answer
> > > >> is: "does the xlator you are associated with work fine if a
> > > >> non-lookup fop (like open, setattr, stat etc.) hits it without a
> > > >> lookup ever having been done on that inode?"
> > > >>
> > > >> regards,
> > > >> Raghavendra
> > > >>
> > > >> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> > > >> rgowdapp@xxxxxxxxxx> wrote:
> > > >>
> > > >>> Thanks Pranith and Ashish for your inputs.
> > > >>>
> > > >>> ----- Original Message -----
> > > >>> > From: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
> > > >>> > To: "Ashish Pandey" <aspandey@xxxxxxxxxx>
> > > >>> > Cc: "Raghavendra Gowdappa" <rgowdapp@xxxxxxxxxx>, "Xavier
> > Hernandez" <
> > > >>> xhernandez@xxxxxxxxxx>, "Gluster Devel"
> > > >>> > <gluster-devel@xxxxxxxxxxx>
> > > >>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
> > > >>> > Subject: Re: Need inputs on patch #17985
> > > >>> >
> > > >>> > Raghavendra,
> > > >>> >     As Ashish mentioned, there aren't any known problems in EC at
> > > >>> > the moment if upper xlators don't send lookups.
> > > >>> >
> > > >>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey <
> > aspandey@xxxxxxxxxx>
> > > >>> wrote:
> > > >>> >
> > > >>> > > Raghavendra,
> > > >>> > >
> > > >>> > > I have provided my comment on this patch.
> > > >>> > > I think EC will not have any issue with this approach.
> > > >>> > > However, I would welcome comments from Xavi and Pranith too for
> > > >>> > > any side effects which I may not be able to foresee.
> > > >>> > >
> > > >>> > > Ashish
> > > >>> > >
> > > >>> > > ------------------------------
> > > >>> > > *From: *"Raghavendra Gowdappa" <rgowdapp@xxxxxxxxxx>
> > > >>> > > *To: *"Ashish Pandey" <aspandey@xxxxxxxxxx>
> > > >>> > > *Cc: *"Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>, "Xavier
> > > >>> Hernandez"
> > > >>> > > <xhernandez@xxxxxxxxxx>, "Gluster Devel" <
> > gluster-devel@xxxxxxxxxxx>
> > > >>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> > > >>> > > *Subject: *Need inputs on patch #17985
> > > >>> > >
> > > >>> > >
> > > >>> > > Hi Ashish,
> > > >>> > >
> > > >>> > > The following are the blockers for deciding whether patch [1]
> > > >>> > > can be merged or not:
> > > >>> > > * Evaluation of dentry operations (like rename etc.) in dht
> > > >>> > > * Whether EC works fine if a non-lookup fop (like open(dir),
> > > >>> > > stat, chmod etc.) hits EC without a single lookup performed on
> > > >>> > > the file/inode
> > > >>> > >
> > > >>> > > Can you please comment on the patch? I'll take care of dht part.
> > > >>> > >
> > > >>> > > [1] https://review.gluster.org/#/c/17985/
> > > >>> > >
> > > >>> > > regards,
> > > >>> > > Raghavendra
> > > >>> > >
> > > >>> > >
> > > >>> >
> > > >>> >
> > > >>> > --
> > > >>> > Pranith
> > > >>> >
> > > >>> _______________________________________________
> > > >>> Gluster-devel mailing list
> > > >>> Gluster-devel@xxxxxxxxxxx
> > > >>> http://lists.gluster.org/mailman/listinfo/gluster-devel
> > > >>>
> > > >>> --
> > > >>> Raghavendra G
> > > >>>
> > > >>>
> > > >>
> > > >> _______________________________________________
> > > >> Gluster-devel mailing list
> > > >> Gluster-devel@xxxxxxxxxxx
> > > >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> > > >>
> > > >
> > > >
> > >
> > >
> > > --
> > > Raghavendra G
> > >
> >
>



--
Raghavendra G
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
