> Jonathan Tan <jonathantanmy@xxxxxxxxxx> writes:
>
> >> +	if (has_promisor_remote())
> >> +		prefetch_to_pack(0);
> >> +
> >>  	for (i = 0; i < to_pack.nr_objects; i++) {
> >>
> >>
> >> Was the patch done this way because scanning the entire array twice
> >> is expensive?
> >
> > Yes. If we called prefetch_to_pack(0) first (as you suggest), this first
> > scan involves checking the existence of all objects in the array, so I
> > thought it would be expensive. (Checking the existence of an object
> > probably brings the corresponding pack index into disk cache on
> > platforms like Linux, so 2 object reads might not take much more time
> > than 1 object read, but I didn't want to rely on this when I could just
> > avoid the extra read.)
> >
> >> The optimization makes sense to me if certain
> >> conditions are met, like...
> >>
> >> - Most of the time there is no missing object due to promisor, even
> >>   if has_promissor_to_remote() is true;
> >
> > I think that optimizing for this condition makes sense - most pushes (I
> > would think) are pushes of objects we create locally, and thus no
> > objects are missing.
> >
> >> - When there are missing objects due to promisor, pack_offset_sort
> >>   will keep them near the end of the array; and
>
> I do not see this one got answered, but it is crucial if you want to
> argue that the "lazy decision to prefetch at the last moment" is a
> good optimization. If an object in the early part of to_pack array
> is missing, you'd end up doing the same amount of work as the
> simpler "if promissor is there, prefetch what is missing".

My argument is that typically *no* objects are missing, so we should
delay the prefetch as much as possible in the hope that we don't need
it at all. I admit that if some objects are missing, I don't know
where they will be in the to_pack list.

> >> - Given the oid, oid_object_info_extended() on it with
> >>   OBJECT_INFO_FOR_PREFETCH is expensive.
> >
> > I see this as expensive since it involves checking of object existence.
>
> But doesn't the "prefetch before starting the main loop" change the
> equation? If we prefetch, we can mark the objects to be prefetched
> in prefetch_to_pack() so that the main loop do not even have to
> check, so the non-lazy loop taken outside the check_object() and
> placed before the main loop would have to run .nr_objects times, in
> addition to the main loop that runs .nr_objects times, but you won't
> have to call oid_object_info_extended() twice on the same object?

The main loop (in get_object_details()) calls check_object() for each
iteration, and check_object() calls oid_object_info_extended()
(oid_object_info() before patch 1 of this series) in order to get the
object's type. I don't see how the prefetch oid_object_info_extended()
(in order to check existence) would eliminate the need for the
main-loop oid_object_info_extended() (which obtains the object type),
unless we record the type somewhere during the prefetch - but that
would make things more complicated than they are now, I think.
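
To make the shape of the lazy approach concrete, the fallback inside
check_object() looks roughly like this (only a sketch, not the exact
patch text; it assumes check_object() is told the index of its entry,
here called "object_index", and "entry" is the object_entry being
checked):

	enum object_type type;
	struct object_info oi = OBJECT_INFO_INIT;

	oi.typep = &type;

	/* One object read to learn the type; we need this anyway. */
	if (oid_object_info_extended(the_repository, &entry->idx.oid, &oi,
				     OBJECT_INFO_LOOKUP_REPLACE) < 0) {
		/*
		 * Only now do we know the object is missing, so only now
		 * do we pay for a prefetch: fetch everything missing from
		 * this entry to the end of to_pack, then retry.
		 */
		if (has_promisor_remote()) {
			prefetch_to_pack(object_index);
			if (oid_object_info_extended(the_repository,
						     &entry->idx.oid, &oi,
						     OBJECT_INFO_LOOKUP_REPLACE) < 0)
				type = -1; /* still missing */
		} else {
			type = -1;
		}
	}

If no object is missing, the inner block never runs and we do exactly
one oid_object_info_extended() call per object, which the main loop
had to do anyway to learn the type.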

> >> Only when all these conditions are met, it would avoid unnecessary
> >> overhead by scanning only a very later part of the array by delaying
> >> the point in the array where prefetch_to_pack() starts scanning.
> >
> > Yes (and when there are no missing objects at all, there is no
> > double-scanning).
>
> In any case, the design choice needs to be justified in the log
> message. I am not sure if the lazy decision to prefetch at the last
> moment is really worth the code smell. Perhaps it is, if there is a
> reason to believe that it would save extra work compared to the more
> naive "if we have promissor remote, prefetch what is missing", but I
> do not think I've heard that reason yet.

I still think that there is a reason (the extra existence check), but
if we think that the extra existence check is fast enough (compared to
the other operations in pack-objects) or that there is a way to avoid
calling oid_object_info_extended() twice for the same object (even
with moving the prefetch loop to the beginning), then I agree that we
don't need the lazy decision. (Or if we want to write the simpler code
now and only improve the performance if we need it later, that's fine
with me too.)
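
For reference, prefetch_to_pack() itself is essentially one pass over
the tail of to_pack that checks existence and batches the missing
objects into a single fetch - roughly like this (again a sketch from
memory, not the exact patch text):

static void prefetch_to_pack(uint32_t object_index_start)
{
	struct oid_array to_fetch = OID_ARRAY_INIT;
	uint32_t i;

	for (i = object_index_start; i < to_pack.nr_objects; i++) {
		struct object_entry *entry = to_pack.objects + i;

		/*
		 * The existence check: one oid_object_info_extended()
		 * call per object, which is the extra cost when this
		 * runs over the whole array up front.
		 */
		if (!oid_object_info_extended(the_repository,
					      &entry->idx.oid, NULL,
					      OBJECT_INFO_FOR_PREFETCH))
			continue; /* already available locally */
		oid_array_append(&to_fetch, &entry->idx.oid);
	}
	promisor_remote_get_direct(the_repository,
				   to_fetch.oid, to_fetch.nr);
	oid_array_clear(&to_fetch);
}

With the non-lazy placement, this pass runs over all of
to_pack.nr_objects before the main loop (which then calls
oid_object_info_extended() again per object to get the type); with
the lazy placement it runs only from the first missing object onward,
and not at all when nothing is missing.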