Re: [PATCH] change the unpack limit threshold to a saner value

Nicolas Pitre <nico@xxxxxxx> writes:

> Let's assume the average object size is x. Given n objects, the needed 
> storage size is n*(x + b), where b is the average wasted block size on 
> disk.
> ...
> This is why I think the current default threshold should be 3 instead of
> the insane value of 5000.  But since it feels a bit odd to go from 5000
> to 3 I settled on 10.
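
To put illustrative numbers on the quoted formula (x and b here are
assumed for the sake of the example, not measured): on a filesystem with
4KB blocks the average waste per loose object is about half a block, so
with x = 4KB, b = 2KB and n = 100 objects,

    loose:  n*(x + b) = 100 * (4KB + 2KB)  = 600KB
    packed: n*x + b  ~= 100 * 4KB  + 2KB  ~= 402KB

i.e. keeping those objects in a single pack saves roughly the per-file
block waste, before any delta compression is even considered.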

I see you are optimizing for disk footprint, and this will
result in tons of tiny packs accumulating between runs of "repack -a".

I have not benchmarked it yet, but the runtime pack handling code
was written assuming we have only a handful of big packs; I
suspect this change would affect runtime performance quite
badly.
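
To make that concern concrete: roughly speaking, an object lookup probes
each pack in turn before falling back to loose objects, so with hundreds
of tiny packs every miss pays a per-pack cost.  A simplified sketch of
that shape (hypothetical, not the actual pack lookup code):

    #include <string.h>

    /* Hypothetical sketch, not git's real pack code: the point is only
     * that a lookup probes each pack in turn, so a miss costs O(number
     * of packs) index probes before falling back to loose objects.
     */
    struct pack {
        struct pack *next;
        const unsigned char *only_sha1; /* stand-in for a real pack index */
    };

    static int find_in_pack(const struct pack *p, const unsigned char *sha1)
    {
        return !memcmp(p->only_sha1, sha1, 20);
    }

    int lookup_object(const struct pack *packs, const unsigned char *sha1)
    {
        const struct pack *p;

        for (p = packs; p; p = p->next) /* one probe per pack */
            if (find_in_pack(p, sha1))
                return 1;
        return 0;                       /* caller would then try loose objects */
    }

With a handful of big packs that loop is a few iterations; with thousands
of tiny packs it is thousands.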


