RE: [RFC PATCH 04/35] ceph: Convert ceph_mds_request::r_pagelist to a databuf

On Mon, 2025-03-17 at 11:52 +0000, David Howells wrote:
> slava@xxxxxxxxxxx wrote:
> 
> > > -		err = ceph_pagelist_reserve(pagelist, len + val_size1 + 8);
> > > +		err = ceph_databuf_reserve(dbuf, len + val_size1 + 8,
> > > +					   GFP_KERNEL);
> > 
> > I know that it's a simple change. But this len + val_size1 + 8 looks
> > confusing anyway. What does this hardcoded 8 mean? :)
> 
> You tell me.  The '8' is pre-existing.
> 

Yeah, I know. I am simply thinking aloud that we need to rework the CephFS code
somehow to make it clearer and easier to understand. But that has no relation
to your change.

> > > -	if (req->r_pagelist) {
> > > -		iinfo.xattr_len = req->r_pagelist->length;
> > > -		iinfo.xattr_data = req->r_pagelist->mapped_tail;
> > > +	if (req->r_dbuf) {
> > > +		iinfo.xattr_len = ceph_databuf_len(req->r_dbuf);
> > > +		iinfo.xattr_data = kmap_ceph_databuf_page(req->r_dbuf, 0);
> > 
> > Possibly, it's in another patch. Have we removed req->r_pagelist from
> > the structure?
> 
> See patch 20 "libceph: Remove ceph_pagelist".
> 
> It cannot be removed here as the kernel must still compile and work at this
> point.
> 
> > Do we always have memory pages in ceph_databuf? How will
> > kmap_ceph_databuf_page() behave if it's not a memory page?
> 
> Are there other sorts of pages?
> 

My point is simple: I assumed that if ceph_databuf can handle multiple types of
memory representation, then it might hold something other than ordinary memory
pages. Potentially, CXL memory could require some special management in the
future (or maybe not). :) But if we always keep regular memory pages under the
ceph_databuf abstraction, then I don't see any problem here.

> > Maybe we need to hide kunmap_local() behind something like
> > kunmap_ceph_databuf_page()?
> 
> Actually, probably better to rename kmap_ceph_databuf_page() to
> kmap_local_ceph_databuf().
> 
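That naming sounds good to me, and a paired unmap helper would still be nice so
that callers don't mix the databuf helper with a bare kunmap_local(). Just a
sketch of what I have in mind (I haven't checked how the patch actually
implements the mapping, so I'm assuming it boils down to kmap_local_page() on
bvec[ix].bv_page):

	static inline void *kmap_local_ceph_databuf(struct ceph_databuf *dbuf,
						    size_t ix)
	{
		return kmap_local_page(dbuf->bvec[ix].bv_page);
	}

	static inline void kunmap_local_ceph_databuf(void *addr)
	{
		kunmap_local(addr);
	}
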
> > Maybe it makes sense to call something like ceph_databuf_length()
> > instead of accessing dbuf->nr_bvec directly?
> 
> Sounds reasonable.  Better to hide the internal workings.
> 
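Agreed. Even something as trivial as this would be enough to hide the field
(the helper name and return type here are only my guess):

	static inline size_t ceph_databuf_nr_bvec(const struct ceph_databuf *dbuf)
	{
		return dbuf->nr_bvec;
	}
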
> > > +	if (as_ctx->dbuf) {
> > > +		req->r_dbuf = as_ctx->dbuf;
> > > +		as_ctx->dbuf = NULL;
> > 
> > Maybe we need something like a swap() method? :)
> 
> I could point out that you were complaining about ceph_databuf_get() returning
> a pointer rather than void ;-).
> 
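Fair enough. :) What I meant was something along these lines (assuming
req->r_dbuf is known to be NULL at this point, so swapping is equivalent to
move-and-clear):

	if (as_ctx->dbuf)
		swap(req->r_dbuf, as_ctx->dbuf);
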
> > > +	dbuf = ceph_databuf_req_alloc(2, 0, GFP_KERNEL);
> > 
> > So, do we allocate 2 items of zero length here?
> 
> You don't.  One is the bvec[] count (2) and the other is the amount of memory
> to preallocate (0) and attach to that bvec[].
> 

Aaah. I see now. Thanks.
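
So, spelling it out for myself (my own annotation, not code from the patch, and
I'm assuming the function returns NULL on allocation failure):

	/* 2 = number of bvec[] slots, 0 = bytes of memory to preallocate */
	dbuf = ceph_databuf_req_alloc(2, 0, GFP_KERNEL);
	if (!dbuf)
		return -ENOMEM;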

> Now, it may make sense to split the API calls to handle a number of different
> scenarios, e.g.: request with just protocol, no pages; request with just
> pages; request with both protocol bits and page list.
> 
> > > +	if (ceph_databuf_insert_frag(dbuf, 0, sizeof(*header), GFP_KERNEL) < 0)
> > > +		goto out;
> > > +	if (ceph_databuf_insert_frag(dbuf, 1, PAGE_SIZE, GFP_KERNEL) < 0)
> > >  		goto out;
> > >  
> > > +	iov_iter_bvec(&iter, ITER_DEST, &dbuf->bvec[1], 1, len);
> > 
> > Is &dbuf->bvec[1] correct? Why do we work with item #1? I think it
> > looks confusing.
> 
> Because you have a protocol element (in dbuf->bvec[0]) and a buffer (in
> dbuf->bvec[1]).

It sounds to me like we need two declarations (something like this):

#define PROTOCOL_ELEMENT_INDEX    0
#define BUFFER_INDEX              1
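
Then the two places that touch the fragments would read more obviously, for
example (the same calls as in your patch, only with the indices named):

	iov_iter_bvec(&iter, ITER_DEST, &dbuf->bvec[BUFFER_INDEX], 1, len);
	header = kmap_ceph_databuf_page(dbuf, PROTOCOL_ELEMENT_INDEX);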

> 
> An iterator is attached to the buffer and the iterator then conveys it to
> __ceph_sync_read() as the destination.
> 
> If you look a few lines further on in the patch, you can see the first
> fragment being accessed:
> 
> > +	header = kmap_ceph_databuf_page(dbuf, 0);
> > +
> 
> Note that, because the read buffer is very likely a whole page, I split them
> into separate sections rather than trying to allocate an order-1 page as that
> would be more likely to fail.
> 
> > > -		header.data_len = cpu_to_le32(8 + 8 + 4);
> > > -		header.file_offset = 0;
> > > +		header->data_len = cpu_to_le32(8 + 8 + 4);
> > 
> > The same difficulty in understanding here for me. What does this hardcoded
> > 8 + 8 + 4 value mean? :)
> 
> You need to ask a ceph expert.  This is nothing specifically to do with my
> changes.  However, I suspect it's the size of the message element.
> 

Yeah, I see. :)
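
If it really is the fixed size of the message element, then even a named
constant or a one-line comment would help future readers. Purely as an
illustration (the name below is made up, and the breakdown into fields is
only a guess that a ceph expert would have to confirm):

	/* Hypothetical name; 8 + 8 + 4 presumed to be the element's wire size. */
	#define CEPH_FSCRYPT_TRUNC_DATA_LEN	(8 + 8 + 4)

	header->data_len = cpu_to_le32(CEPH_FSCRYPT_TRUNC_DATA_LEN);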

> > > -		memset(iov.iov_base + boff, 0, PAGE_SIZE - boff);
> > > +		p = kmap_ceph_databuf_page(dbuf, 1);
> > 
> > Maybe we need to introduce some constants for addressing pages #0 and #1?
> > Because #0 is the header, and I assume #1 is some content.
> 
> Whilst that might be useful, I don't know that 0 and 1 being header and
> content respectively always holds.  I haven't checked, but there could even be
> a protocol trailer in some cases as well.
> 
> > > -	err = ceph_pagelist_reserve(pagelist,
> > > -				    4 * 2 + name_len + as_ctx->lsmctx.len);
> > > +	err = ceph_databuf_reserve(dbuf, 4 * 2 + name_len + as_ctx->lsmctx.len,
> > > +				   GFP_KERNEL);
> > 
> > The 4 * 2 + name_len + as_ctx->lsmctx.len looks unclear to me. It would
> > be good to have some well-defined constants here.
> 
> Again, nothing specifically to do with my changes.
> 

I completely agree.
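
Though just to illustrate the kind of cleanup I mean (my guess, not verified
against the encoding code): if the 4 * 2 stands for two le32 length prefixes,
then spelling that out would already read better:

	err = ceph_databuf_reserve(dbuf,
				   2 * sizeof(__le32) + name_len + as_ctx->lsmctx.len,
				   GFP_KERNEL);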

Thanks,
Slava.




