Re: [PATCH v2 3/3] builtin/unpack-objects.c: change xwrite to write_in_full to avoid truncation

"Randall S. Becker" <the.n.e.key@xxxxxxxxx> writes:

> From: "Randall S. Becker" <rsbecker@xxxxxxxxxxxxx>
>
> This change is required because some platforms do not support file writes of
> arbitrary sizes (e.g., NonStop). xwrite ends up truncating the output to the
> maximum single I/O size possible for the destination device if the supplied
> len value exceeds the supported value. Replacing xwrite with write_in_full
> corrects this problem. Future optimisations could remove the loop in favour
> of just calling write_in_full.
>
> Signed-off-by: Randall S. Becker <rsbecker@xxxxxxxxxxxxx>
> ---
>  builtin/unpack-objects.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/builtin/unpack-objects.c b/builtin/unpack-objects.c
> index e0a701f2b3..6935c4574e 100644
> --- a/builtin/unpack-objects.c
> +++ b/builtin/unpack-objects.c
> @@ -680,7 +680,7 @@ int cmd_unpack_objects(int argc, const char **argv, const char *prefix UNUSED)
>  
>  	/* Write the last part of the buffer to stdout */
>  	while (len) {
> -		int ret = xwrite(1, buffer + offset, len);
> +		int ret = write_in_full(1, buffer + offset, len);
>  		if (ret <= 0)
>  			break;
>  		len -= ret;

Why do we need this, when the surrounding retry loop is specifically
prepared to handle a short write(2) like this?

If xwrite() calls the underlying write(2) with too large a value,
then your MAX_IO_SIZE is misconfigured, and the fix should go there,
not here in a loop that already expects a working xwrite() that is
allowed to return a short write(2), I would think.
