On Tue, 20 Mar 2012 19:27:31 +1300
"Michael Kerrisk (man-pages)" <mtk.manpages@xxxxxxxxx> wrote:

> On Tue, Mar 20, 2012 at 1:50 PM, Mike Frysinger <vapier@xxxxxxxxxx>
> wrote:
> > On Monday 19 March 2012 16:45:41 Michael Kerrisk (man-pages) wrote:
> >> A quick question about one piece:
> >> > +The count values might be individually capped according to
> >> > \fIUIO_MAXIOV\fP. +If the Linux kernel is capped at smaller
> >> > values, the C library will take care +of emulating the limit it
> >> > exposes (if it is bigger) so the user only needs to +care about
> >> > that (what the C library defines).
> >>
> >> I don't see anything in glibc that does this.  Have I missed
> >> something?
> >
> > i think you're correct.  the code in glibc atm is merely a
> > syscall().  i think the idea was to have the C library guarantee
> > that and if moving forward the kernel changes, the C library would
> > update by adding a wrapper.  maybe just drop this sentence until
> > that day comes ?
>
> So, does the kernel currently impose a limit on the size of the iovec?
> It wasn't immediately clear to me from a quick scan of the source.
>

The code calls rw_copy_check_uvector, which does check that the iovecs
are smaller than UIO_MAXIOV.

Note that the following bit is not strictly true:

"So if the counts are too big, or the vectors invalid, or the addresses
refer to regions that are inaccessible, none of the previous vectors
will be processed and an +error will be returned immediately."

Whilst the code does check that the memory regions in the process
calling the system call are accessible before any work is done, it
does not check the memory regions in the remote process until just
before doing the read/write.  So in that case you can end up with a
partial read/write if one of the iovec elements for the remote process
points to an invalid memory region.  No further reads/writes will be
attempted after that point, though.

Regards,

Chris
-- 
cyeoh@xxxxxxxxxx
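
For illustration only (not part of the thread above): a minimal sketch
of how a caller of process_vm_readv() can notice the partial transfer
Chris describes, by comparing the syscall's return value against the
total number of bytes requested.  The target PID and remote address are
placeholders supplied on the command line, and error handling is kept
to a minimum; it assumes Linux >= 3.2 and the glibc >= 2.15 wrapper.

/* Sketch: detect a partial process_vm_readv() transfer.
 * PID and remote address are hypothetical command-line inputs. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/uio.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "Usage: %s <pid> <remote-hex-addr>\n", argv[0]);
        return EXIT_FAILURE;
    }

    pid_t pid = (pid_t) atol(argv[1]);
    char *remote_addr = (char *) strtoul(argv[2], NULL, 16);

    char buf1[64], buf2[64];

    /* Two local destination buffers ... */
    struct iovec local[2] = {
        { .iov_base = buf1, .iov_len = sizeof(buf1) },
        { .iov_base = buf2, .iov_len = sizeof(buf2) },
    };

    /* ... filled from two adjacent ranges in the remote process.
     * If the second range is unmapped in the target, the kernel may
     * return a partial count covering only the first range rather
     * than failing the whole call up front. */
    struct iovec remote[2] = {
        { .iov_base = remote_addr,                 .iov_len = sizeof(buf1) },
        { .iov_base = remote_addr + sizeof(buf1),  .iov_len = sizeof(buf2) },
    };

    ssize_t requested = sizeof(buf1) + sizeof(buf2);
    ssize_t nread = process_vm_readv(pid, local, 2, remote, 2, 0);

    if (nread == -1) {
        perror("process_vm_readv");
        return EXIT_FAILURE;
    }

    if (nread < requested)
        fprintf(stderr, "partial read: got %zd of %zd bytes\n",
                nread, requested);
    else
        printf("read %zd bytes from pid %ld\n", nread, (long) pid);

    return EXIT_SUCCESS;
}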