On Tue, 12 Sep 2023 19:14:35 -0700
Rob Clark <robdclark@xxxxxxxxx> wrote:

> On Tue, Sep 12, 2023 at 6:46 PM Rob Clark <robdclark@xxxxxxxxx> wrote:
> >
> > On Tue, Sep 12, 2023 at 2:32 AM Boris Brezillon
> > <boris.brezillon@xxxxxxxxxxxxx> wrote:
> > >
> > > On Tue, 12 Sep 2023 09:37:00 +0100
> > > Adrián Larumbe <adrian.larumbe@xxxxxxxxxxxxx> wrote:
> > >
> > > > The current implementation will try to pick the highest available size
> > > > display unit as soon as the BO size exceeds that of the previous
> > > > multiplier. That can lead to loss of precision in BO's whose size is
> > > > not a multiple of a MiB.
> > > >
> > > > Fix it by changing the unit selection criteria.
> > > >
> > > > For much bigger BO's, their size will naturally be aligned on something
> > > > bigger than a 4 KiB page, so in practice it is very unlikely their display
> > > > unit would default to KiB.
> > >
> > > Let's wait for Rob's opinion on this.
> >
> > This would mean that if you have SZ_1G + SZ_1K worth of buffers, you'd
> > report the result in KiB.. which seems like overkill to me, esp given
> > that the result is just a snapshot in time of a figure that
> > realistically is dynamic.

Yeah, my point was that, generally, such big buffers tend to have a
bigger size alignment (like 2MB for anything bigger than 1GB), but
maybe this assumption doesn't stand for all drivers.

> >
> > Maybe if you have SZ_1G+SZ_1K worth of buffers you should report the
> > result with more precision than GiB, but more than MiB seems a bit
> > overkill.
> >
> > BR,
> > -R
> >
> > >
> > > > Signed-off-by: Adrián Larumbe <adrian.larumbe@xxxxxxxxxxxxx>
> > > > ---
> > > >  drivers/gpu/drm/drm_file.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> > > > index 762965e3d503..bf7d2fe46bfa 100644
> > > > --- a/drivers/gpu/drm/drm_file.c
> > > > +++ b/drivers/gpu/drm/drm_file.c
> > > > @@ -879,7 +879,7 @@ static void print_size(struct drm_printer *p, const char *stat,
> > > >  	unsigned u;
> > > >
> > > >  	for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
> > > > -		if (sz < SZ_1K)
>
> btw, I was thinking more along the lines of:
>
>   if (sz < 10*SZ_1K)
>
> (or perhaps maybe 100*SZ_1K)

I think I suggested doing that at some point:

	if ((sz & (SZ_1K - 1)) && sz < UPPER_UNIT_THRESHOLD * SZ_1K)
		break;

so we can keep using the upper unit if the size is a multiple of this
upper unit, even if it's smaller than the selected threshold.

> I mean, any visualization tool is going to scale the y axis based on
> the order of magnitude.. and if I'm looking at the fdinfo with my
> eyeballs I don't want to count the # of digits manually to do the
> conversion in my head.  The difference btwn 4 or 5 or maybe 6 digits
> is easy enough to eyeball, but more than that is too much for my
> eyesight, and I'm not seeing how it is useful ;-)
>
> But if someone really has a valid use case for having precision in 1KB
> then I'm willing to be overruled.

So, precision loss was one aspect, but my main concern was having
things displayed in KiB when they could have been displayed in MiB,
because the size is a multiple of a MiB but still not big enough to
pass the threshold test (which was set to 10000x in the previous
version).

> But I'm not a fan of the earlier
> approach of different drivers reporting results differently, the whole
> point of fdinfo was to have some standardized reporting.

Totally agree with that.
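Just to make the behaviour easier to see, here is a rough, untested
sketch of the whole selection loop with that hybrid rule, written as a
small userspace program. UPPER_UNIT_THRESHOLD (and its value of 100)
is a placeholder, not an existing macro, and the units[] list and
output format are simplified compared to the real print_size() in
drm_file.c:

  /*
   * Sketch of the hybrid unit-selection rule discussed above, as a
   * userspace test.  UPPER_UNIT_THRESHOLD is a placeholder name/value.
   */
  #include <stdio.h>
  #include <stdint.h>

  #define SZ_1K			1024ULL
  #define SZ_1M			(SZ_1K * SZ_1K)
  #define SZ_1G			(SZ_1K * SZ_1M)
  #define UPPER_UNIT_THRESHOLD	100	/* "100x the smaller unit" */

  static void print_size(const char *stat, uint64_t sz)
  {
  	static const char * const units[] = { "", " KiB", " MiB", " GiB" };
  	unsigned int u;

  	for (u = 0; u < sizeof(units) / sizeof(units[0]) - 1; u++) {
  		/*
  		 * Stop scaling when dividing again would drop a remainder
  		 * (the size is not a multiple of the next unit) and the
  		 * current value is still small enough to be readable.
  		 */
  		if ((sz & (SZ_1K - 1)) && sz < UPPER_UNIT_THRESHOLD * SZ_1K)
  			break;
  		sz /= SZ_1K;
  	}

  	printf("%s:\t%llu%s\n", stat, (unsigned long long)sz, units[u]);
  }

  int main(void)
  {
  	print_size("drm-total-memory", SZ_1G + SZ_1K);	/* "1 GiB" */
  	print_size("drm-total-memory", 8 * SZ_1M);	/* "8 MiB" */
  	print_size("drm-total-memory", 600 * SZ_1K);	/* "600 KiB" */
  	return 0;
  }

With those numbers, SZ_1G + SZ_1K collapses to "1 GiB" (the stray KiB
is dropped, so no KiB-sized figures for big totals), 8 MiB is still
printed as "8 MiB" rather than "8192 KiB" because it is a multiple of
the upper unit even though it is below the threshold, and 600 KiB
stays in KiB.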
>
> BR,
> -R
>
> > > > +		if (sz & (SZ_1K - 1))
> > > >  			break;
> > > >  		sz = div_u64(sz, SZ_1K);
> > > >  	}
> > >