On 2024-03-15 at 08:58:35, Christopher Lindee wrote:
> Is this a potential avenue for DoS?

No, it's not.  In our implementation, there is a functional limit on ref updates, and if you exceed it, the operation fails.  We also have rate limiting that estimates the cost of future requests based on the cost of previous ones, and delays or fails requests that are projected to exceed a reasonable threshold.  (Thus, you can make many more cheap requests, or fewer expensive ones; your choice.)  All of this is per-repository, so generally only you (and maybe your colleagues or collaborators) experience negative consequences if you attempt excessive use.

I can't speak to other implementations, but robust rate limits are common.  I'm sure all major implementations open to the public have some sort of rate limiting, because otherwise they'd be down a lot.

The difference is that failing operations, and even well-explained, well-documented rate limits, cause a poor user experience, user frustration, and user inquiries (e.g., support requests), as well as possibly noisy alerting and pages for engineers.  Any time users hit a rate limit, they have to change their behaviour, and people don't like change.

I'd prefer we left the default as the cheap no-op because, as I said, that scales much better, and thus we let users do the obvious thing for much longer and it just works for them.  That, in turn, provides everyone a better experience.

Certainly, if people start using this option by default, it will be a problem if they engage in excessive use, and server implementations will probably scale worse.  But usually users don't use non-default options unless they need them, so I don't think your proposed new option is a problem.
-- 
brian m. carlson (they/them or he/him)
Toronto, Ontario, CA
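For illustration, the cost-projection style of rate limiting described above might look something like the sketch below.  The per-minute budget, the averaging window, and the fail-rather-than-delay policy are all my assumptions for the example, not details of any actual server implementation:

```python
import time


class CostProjectingLimiter:
    """Sketch of a per-repository rate limiter that projects the cost of
    the next request from the observed cost of previous ones.  All
    parameters here are illustrative assumptions."""

    def __init__(self, budget_per_minute=600.0, window=20):
        self.budget_per_minute = budget_per_minute
        self.window = window           # how many past requests to average over
        self.recent_costs = []         # observed costs of recent requests
        self.spent = 0.0               # cost consumed in the current minute
        self.minute_start = time.monotonic()

    def _projected_cost(self):
        # Estimate the next request's cost as the mean of recent costs;
        # assume a nominal cost of 1.0 when there is no history yet.
        if not self.recent_costs:
            return 1.0
        return sum(self.recent_costs) / len(self.recent_costs)

    def admit(self):
        """Return True if the request may proceed, False to reject it
        (a real server might delay instead of rejecting outright)."""
        now = time.monotonic()
        if now - self.minute_start >= 60.0:
            self.spent = 0.0
            self.minute_start = now
        # Reject requests projected to exceed the remaining budget.
        return self.spent + self._projected_cost() <= self.budget_per_minute

    def record(self, cost):
        """Record the observed cost of a completed request."""
        self.spent += cost
        self.recent_costs.append(cost)
        if len(self.recent_costs) > self.window:
            self.recent_costs.pop(0)
```

Because admission is based on projected cost, a client making cheap requests gets many of them through before the budget is exhausted, while a client whose recent requests were expensive is cut off much sooner, which matches the "many cheap requests, or fewer expensive ones" trade-off.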