Re: [PATCH v3 5/8] odb: teach read_blob_entry to use size_t

"Matt Cooper via GitGitGadget" <gitgitgadget@xxxxxxxxx> writes:

> -void *read_blob_entry(const struct cache_entry *ce, unsigned long *size)
> +void *read_blob_entry(const struct cache_entry *ce, size_t *size)
>  {
>  	enum object_type type;
> -	void *blob_data = read_object_file(&ce->oid, &type, size);
> +	unsigned long ul;
> +	void *blob_data = read_object_file(&ce->oid, &type, &ul);
>  
> +	*size = ul;

It is a bit of a curious place to draw the line; we want to make sure
that read_blob_entry() can represent huge data, but in this step we do
not mind that read_object_file() is not capable of going full 64-bit?

I guess I'll see soon enough why by reading later steps.  I can see
that, for the purpose of making write_entry() aware of size_t, this
is necessary at a minimum.

Looking good.
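
For reference, this is roughly what read_blob_entry() ends up looking
like after this step, assembled from the hunks quoted here (the tail
of the function is from memory, so take it as a sketch rather than the
exact file contents):

    void *read_blob_entry(const struct cache_entry *ce, size_t *size)
    {
            enum object_type type;
            unsigned long ul;
            void *blob_data = read_object_file(&ce->oid, &type, &ul);

            /* bridge the still-"unsigned long" object layer into size_t */
            *size = ul;
            if (blob_data) {
                    if (type == OBJ_BLOB)
                            return blob_data;
                    free(blob_data);
            }
            return NULL;
    }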

>  	if (blob_data) {
>  		if (type == OBJ_BLOB)
>  			return blob_data;
> @@ -270,7 +272,7 @@ static int write_entry(struct cache_entry *ce, char *path, struct conv_attrs *ca
>  	int fd, ret, fstat_done = 0;
>  	char *new_blob;
>  	struct strbuf buf = STRBUF_INIT;
> -	unsigned long size;
> +	size_t size;
>  	ssize_t wrote;
>  	size_t newsize = 0;
>  	struct stat st;
> diff --git a/entry.h b/entry.h
> index b8c0e170dc7..61ee8c17604 100644
> --- a/entry.h
> +++ b/entry.h
> @@ -51,7 +51,7 @@ int finish_delayed_checkout(struct checkout *state, int *nr_checkouts);
>   */
>  void unlink_entry(const struct cache_entry *ce);
>  
> -void *read_blob_entry(const struct cache_entry *ce, unsigned long *size);
> +void *read_blob_entry(const struct cache_entry *ce, size_t *size);
>  int fstat_checkout_output(int fd, const struct checkout *state, struct stat *st);
>  void update_ce_after_write(const struct checkout *state, struct cache_entry *ce,
>  			   struct stat *st);
> diff --git a/parallel-checkout.c b/parallel-checkout.c
> index 6b1af32bb3d..b6f4a25642e 100644
> --- a/parallel-checkout.c
> +++ b/parallel-checkout.c
> @@ -261,7 +261,7 @@ static int write_pc_item_to_fd(struct parallel_checkout_item *pc_item, int fd,
>  	struct stream_filter *filter;
>  	struct strbuf buf = STRBUF_INIT;
>  	char *blob;
> -	unsigned long size;
> +	size_t size;
>  	ssize_t wrote;
>  
>  	/* Sanity check */
> diff --git a/t/t1051-large-conversion.sh b/t/t1051-large-conversion.sh
> index bff86c13208..8b23d862600 100755
> --- a/t/t1051-large-conversion.sh
> +++ b/t/t1051-large-conversion.sh
> @@ -85,7 +85,7 @@ test_expect_success 'ident converts on output' '
>  
>  # This smudge filter prepends 5GB of zeros to the file it checks out. This
>  # ensures that smudging doesn't mangle large files on 64-bit Windows.
> -test_expect_failure EXPENSIVE,SIZE_T_IS_64BIT,!LONG_IS_64BIT \
> +test_expect_success EXPENSIVE,SIZE_T_IS_64BIT,!LONG_IS_64BIT \
>  		'files over 4GB convert on output' '
>  	test_commit test small "a small file" &&
>  	test_config filter.makelarge.smudge \
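
A side note on the prerequisites, mostly for readers of the archive:
SIZE_T_IS_64BIT together with !LONG_IS_64BIT selects exactly the LLP64
platforms (i.e. 64-bit Windows), where a size that fits in size_t no
longer fits in "unsigned long".  A toy example of the truncation this
series guards against (illustration only, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
            size_t size = (size_t)5 * 1024 * 1024 * 1024;   /* 5 GiB; assumes 64-bit size_t */
            unsigned long ul = (unsigned long)size;         /* 32 bits wide on LLP64 */

            /*
             * On 64-bit Windows this prints 1073741824 for the second
             * value (only the low 32 bits survive); on LP64 platforms
             * such as Linux both numbers match.
             */
            printf("size_t: %llu, unsigned long: %lu\n",
                   (unsigned long long)size, ul);
            return 0;
    }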


