Re: My current dummy_hcd queue

On Fri, 16 Dec 2011, Sebastian Andrzej Siewior wrote:

> The driver usually calls complete once the hardware has told it that it
> is done with it.  So nothing can go wrong with it.  If you do this for
> an IN endpoint that early, you can't tell the gadget if something went
> wrong afterwards, especially if the host side did not enqueue an URB.
> I admit that this example is not really a problem because the host will
> enqueue an URB.  And most likely nothing goes wrong with the packet on
> its way unless someone pulls the cable, but that will be noticed by
> both sides and nobody cares about that one packet.

In addition to all that, it's possible that the host could crash just
after the data had been transferred.  The gadget driver wouldn't know
anything was wrong.

If the gadget driver really needs to know that the host has processed
the data safely, the communication protocol has to include some kind of
a higher-level handshake.

> What is the problem with u32? It is a shortcut for unsigned int.

No, it isn't.  The fact that on all current systems it happens to be
the same is merely a coincidence.  You can just as logically claim that
"Obama" is a shortcut for "The last name of the U.S. president".  
Right now they happen to be the same.  In a few years they won't.

The intention behind the words is at least as important (to a human
reader, if not to a computer) as the literal meaning of the words.  The
intention behind "unsigned int" is "an unsigned integer of whatever
size the CPU handles most conveniently", whereas the intention behind
"u32" is "an unsigned integer of 32 bits".  These are not the same
thing.

> So you are saying that using an int or unsigned is okay but u32 confuses
> people because a 32-bit value is expected, while the former provides the
> exact same data type?

That's right.  When I first read your code I thought: "Why did he
specify u32?  He must have some particular reason for requiring exactly
32 bits rather than whatever size the compiler decides on.  What could
that reason be?"  This is very distracting; it interferes with
understanding what's really going on.

Furthermore, my programming career goes back a fair distance.  I can
easily remember days when "unsigned int" on a PC would _not_ have
provided the same type as "u32" -- not in Linux, but in other
environments.  In fact, this must still be true today in the embedded
world; plenty of 8-bit and 16-bit processors are still being sold.  
Probably even more of them than 32-bit or 64-bit processors.

As far as I know, there is no guarantee anywhere in Linux that "u32"  
will _always_ be a typedef for "unsigned int", even though it is now.

Alan Stern


