Re: [PATCHv2 2/2] receive-pack: Add receive.objectCountLimit to refuse push with too many objects

On Fri, May 13, 2011 at 19:03, Johan Herland <johan@xxxxxxxxxxx> wrote:
> The new receive.objectCountLimit config variable defines an upper limit
> on the number of objects to accept in a single push. If the number of
> objects in a push exceeds this limit, the entire push is discarded
> without storing the pushed objects on the server at all.
...
> +           ntohl(hdr.hdr_entries) > receive_object_count_limit) {
> +               char buf[1024];
> +               while (xread(0, buf, 1024) > 0)
> +                       ; /* nothing */
> +               return "received pack exceeds configured receive.objectCountLimit";

Discarding in core is painful. We are still consuming bandwidth to
toss away the data.

I wonder... should we instead export the objectCountLimit as part of
the advertisement to the client, and teach send-pack to look at this
and pass it down to pack-objects? If pack-objects winds up with more
than this limit, it aborts and the client doesn't even transmit data.
Newer clients would abort cleanly with a nice error.

I don't see it as a problem to advertise to a client "This server will
only accept X objects from you, sending X + 1 is an error and will be
rejected." If we are worried about an evil client using this
advertisement to try and DoS a server... he can easily do that with a
single giant blob. Or a very long delta chain of a single blob and
many, many tiny deltas applied onto it. Knowing what the remote's
objectCountLimit is doesn't increase the risk of a DoS attack.

Older clients that don't know this new advertised capability would still
fail hard after transferring all of this data. In my experience
when a user gets these strange errors from his Git client, he contacts
his server administrator with the screen output. At which point the
administrator can see the Counting objects line, check the repository
configuration, and tell the user what the problem is... and encourage
them to upgrade their client to a newer version.

If we are going to put limits in, does it make sense to try and push
these limits back to pack-objects in a more detailed way? You talked
about depth of history, or size of pack. pack-objects could
approximate both. If it's the depth of history, it might even be able to
cut off before it enumerates too many commits. :-)

The remote side obviously cannot abort early with number of commits or
pack size limits, but if those were soft limits suggested to a client,
while the object count was a hard limit, you might get a better
approximation for what you want. A server administrator might
configure a soft limit of 10 commits, but a hard limit of 5,000
objects. For most users, a bad push would abort very early on the soft
limit of 10 commits if they did an incorrect rebase. Meanwhile a user
who made 1 commit but changed every GPL header in the linux-2.6
repository (26,000+ files) would also be stopped for exceeding the
(hard) 5000 object limit.

-- 
Shawn.

