Re: [PATCH v2] pylibfdt: add FdtRo.get_path()

On Tue, Jan 25, 2022 at 07:36:17PM +0100, Luca Weiss wrote:
> Hi David,
> 
> On Tuesday, 25 January 2022 05:48:34 CET David Gibson wrote:
> > On Sat, Jan 22, 2022 at 11:56:54AM +0100, Luca Weiss wrote:
> > > Add a new Python method wrapping fdt_get_path() from the C API.
> > > 
> > > Also add a test for the new method.
> > > 
> > > Signed-off-by: Luca Weiss <luca@xxxxxxxxx>
> > > ---
> > > Changes in v2:
> > > - dynamically increase size of string buffer until it fits.
> > > 
> > >   The values are chosen somewhat arbitrarily, and the loop gives up at
> > >   lengths over 4096, as I don't expect paths to be longer and there
> > >   should be an exit condition so that a bug cannot eat up RAM.
> > >  
> > >  pylibfdt/libfdt.i       | 27 +++++++++++++++++++++++++++
> > >  tests/pylibfdt_tests.py |  7 +++++++
> > >  2 files changed, 34 insertions(+)
> > > 
> > > diff --git a/pylibfdt/libfdt.i b/pylibfdt/libfdt.i
> > > index ac70762..d103bb6 100644
> > > --- a/pylibfdt/libfdt.i
> > > +++ b/pylibfdt/libfdt.i
> > > 
> > > @@ -443,6 +443,28 @@ class FdtRo(object):
> > >          """
> > >          return fdt_get_alias(self._fdt, name)
> > > 
> > > +    def get_path(self, nodeoffset):
> > > +        """Get the full path of a node
> > > +
> > > +        Args:
> > > +            nodeoffset: Node offset to check
> > > +
> > > +        Returns:
> > > +            Full path to the node
> > > +
> > > +        Raises:
> > > +            FdtException if the path is longer than 'size' or other error occurs
> > > +        """
> > > +        size = 64
> > > +        while size < 4096:
> > > +            ret, path = fdt_get_path(self._fdt, nodeoffset, size)
> > > +            if ret == -NOSPACE:
> > > +                size += 64
> > > +                continue
> > > +            check_err(ret)
> > > +            return path
> > > +        raise ValueError('Node path is unexpectedly long')
> > 
> > Ah... sorry, this isn't quite what I had in mind.  The idea of
> > resizing the buffer is to avoid having an arbitrary limit, but you
> > still have the 4096 byte arbitrary limit here.  As Simon suggests, 4k
> > would probably be a good *starting* point, rather than an ending
> > point.  Not only are repeated calls expensive in Python, but
> > fdt_get_path() itself is also quite expensive, since it needs to scan
> > the dtb from the start.  I'd also suggest doubling the buffer size on
> > each attempt, rather than increasing linearly.
> 
> I do think that there should be a limit *somewhere* instead of hitting OOM
> if there were some bug in the underlying implementation or in the file
> provided. I'm fine with limiting it at 100MB or whatever, but I would
> rather have a limit there than not have one.

Well, a node's path can never exceed the size of the whole dtb,
structurally, so I don't think it's really a problem.
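
For illustration, a minimal sketch of what the suggested scheme might
look like (untested): start at 4k, double the buffer on -NOSPACE, and
bound the growth by fdt_totalsize(), since no path can be longer than
the blob it lives in.  The fdt_totalsize() call is assumed from the C
API rather than taken from the patch.

    def get_path(self, nodeoffset):
        size = 4096
        while True:
            ret, path = fdt_get_path(self._fdt, nodeoffset, size)
            if ret == -NOSPACE and size < fdt_totalsize(self._fdt):
                # Path didn't fit; double the buffer, bounded by the
                # size of the whole blob, which no path can exceed.
                size *= 2
                continue
            # Any other error (or -NOSPACE once we hit the bound)
            # raises FdtException here.
            check_err(ret)
            return path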

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
