Re: [PATCH 0/3] Extend dtc with data type handling

On Mon, Dec 23, 2013 at 08:00:54PM +0100, Tomasz Figa wrote:
> On Monday 23 of December 2013 23:08:14 David Gibson wrote:
> > On Fri, Dec 13, 2013 at 05:49:09PM +0100, Tomasz Figa wrote:
> > > This series intends to extend dtc with appropriate infrastructure
> > > to handle property data types. First, dtc is modified to preserve
> > > type information when parsing DTS. Then type guessing is implemented
> > > for flat and fs trees where type data is not available. After that,
> > > DTS generation code is modified to use only type information when
> > > printing property data.
> > 
> > Hrm.  I think this is completely backwards to the right approach.
> > 
> > For dt checking what we want is a schema.  That can specify datatypes
> > amongst other constraints, but in the end you can check the actual
> > bytes of the DT against the constraints imposed by the schema,
> > regardless of how those bytes are presented to the checker.
> 
> Those constraints are usually types, so you need a way to extract them
> from source DTS.

You really don't.

Consumers of the dt interpret it without type information, because they
know what's supposed to be there.  Likewise, if we know the schema, we
can check whether the dt data fits it without needing it to be
annotated with type information from another source.
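
As a rough sketch (not libfdt or kernel code; parse_string_u32 is a
made-up name), this is all a consumer expecting "a string followed by
one 32-bit cell" has to do with the raw property bytes - the expected
layout lives entirely in the consumer, not in the dt:

	/*
	 * Sketch only: the consumer expects a NUL-terminated string
	 * followed by one big-endian 32-bit cell, and applies that
	 * layout to the raw bytes itself.
	 */
	#include <stdint.h>
	#include <string.h>

	static int parse_string_u32(const uint8_t *prop, size_t len,
				    const char **str, uint32_t *val)
	{
		size_t slen = strnlen((const char *)prop, len);

		if (slen == len)		/* no terminating NUL */
			return -1;
		if (len - (slen + 1) < 4)	/* no room for the cell */
			return -1;

		*str = (const char *)prop;
		*val = (uint32_t)prop[slen + 1] << 24 |
		       (uint32_t)prop[slen + 2] << 16 |
		       (uint32_t)prop[slen + 3] <<  8 |
		       (uint32_t)prop[slen + 4];
		return 0;
	}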

> > By adding type information to the in-flight tree you're checking not
> > the actual content of the DT, but how you've chosen to express that
> > information in DTS format.
> > 
> > More concretely, we should be able to schema-check trees in binary
> > format, not just source.  With these patches that will fail if -I dtb
> > makes the wrong type guesses.
> 
> I'm not quite sure what you want to check in binary format, where
> properties are just strings of bytes. All you can check is whether there
> are enough bytes, or possibly some value constraints on particular parts
> of it, but not types.

You can check that if a consumer interprets the dt bytes according to
the schema, it will get sane data - and that's exactly what matters.

If a schema specifies a string followed by a 32-bit integer, that
implies constraints on the bytes, and that's what matters to a dt
consumer.  So we might expect:
	prop = "abc", <0xabcdef00>;
But:
	prop = <0x61626300>, "\xab\xcd\xef";

will still be interpreted correctly by the consumer, even though the
"type" information is wrong.

However:
	prop = "a\0b", <0xabcdef00>;

will NOT be parsed correctly by the consumer, even though the "type"
information is correct.
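
Spelled out at the byte level (a sketch assuming the usual dtc
encoding: NUL-terminated strings, big-endian 32-bit cells), the first
two forms are indistinguishable in the dtb, while the third is
genuinely different:

	/* Sketch of the flat-tree bytes the three forms above produce. */
	#include <stdio.h>
	#include <string.h>
	#include <stdint.h>

	int main(void)
	{
		/* prop = "abc", <0xabcdef00>; */
		static const uint8_t a[] = { 'a', 'b', 'c', 0, 0xab, 0xcd, 0xef, 0x00 };
		/* prop = <0x61626300>, "\xab\xcd\xef"; */
		static const uint8_t b[] = { 0x61, 0x62, 0x63, 0x00, 0xab, 0xcd, 0xef, 0x00 };
		/* prop = "a\0b", <0xabcdef00>; */
		static const uint8_t c[] = { 'a', 0, 'b', 0, 0xab, 0xcd, 0xef, 0x00 };

		printf("a == b: %d\n", !memcmp(a, b, sizeof(a)));	/* 1: same bytes */
		printf("a == c: %d\n", !memcmp(a, c, sizeof(a)));	/* 0: different bytes */
		return 0;
	}

A checker working on -I dtb input only ever sees those byte strings.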

> IMHO what we primarily need is source validation. Of course using schemas,
> but that's already decided.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
