11.09.2020 12:59, Mikko Perttunen wrote:
> On 9/11/20 12:57 AM, Dmitry Osipenko wrote:
>> 09.09.2020 11:36, Mikko Perttunen wrote:
>> ...
>>>>
>>>> Does it make sense to have the timeout in microseconds?
>>>>
>>>
>>> Not sure, but better to have it a bit more fine-grained rather than
>>> coarse-grained. This still gives a maximum timeout of 71 minutes, so I
>>> don't think it has any downsides compared to milliseconds.
>>
>> If there is no good reason to use microseconds right now, then it
>> should be better to default to milliseconds, IMO. It shouldn't be a
>> problem to extend the IOCTL with a microseconds entry if that is ever
>> needed:
>>
>> {
>> 	__u32 timeout_ms;
>> 	...
>> 	__u32 timeout_us;
>> }
>>
>> timeout = timeout_ms * 1000 + timeout_us;
>>
>> There shouldn't be a need for long timeouts, since a job that takes
>> over 100ms is probably impractical. It also should be possible to
>> detect a progressing job and then defer the timeout in the driver. At
>> least this is what other drivers do, the etnaviv driver for example:
>>
>> https://elixir.bootlin.com/linux/v5.9-rc4/source/drivers/gpu/drm/etnaviv/etnaviv_sched.c#L107
>>
>>
>
> I still don't quite understand why it's better to default to
> milliseconds? As you say, there is no need for a long timeout, and if
> we go with microseconds now, then there won't be a need to extend it in
> the future.

It will be nicer to avoid unnecessary unit conversions in the code, in
order to keep it cleaner.

I'm now also a bit dubious that the timeout field of the submit IOCTL
will survive into the final UAPI version, because it should become
obsolete once the DRM scheduler is hooked up: the hang-check timeout
will then be specified per hardware engine within the kernel driver, and
there won't be much use for a user-defined timeout.
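As a side note on the struct sketched in the quote: combining a millisecond base field with a later microsecond extension could look roughly like the snippet below. This is only an illustration of the unit arithmetic; the struct and function names are hypothetical and do not correspond to any real Tegra UAPI.

```c
#include <stdint.h>

/* Hypothetical submit parameters, mirroring the sketch in the quote:
 * a base timeout in milliseconds plus an optional microsecond refinement
 * appended at the end of the IOCTL struct (zero for old userspace). */
struct submit_timeout {
	uint32_t timeout_ms;
	uint32_t timeout_us;
};

/* Combine both fields into a single microsecond value. Widening to
 * 64 bits before the multiply avoids overflow for any pair of u32
 * inputs, so no extra range checking is needed here. */
static uint64_t submit_timeout_us(const struct submit_timeout *t)
{
	return (uint64_t)t->timeout_ms * 1000u + t->timeout_us;
}
```

With this layout, old userspace that only sets `timeout_ms` keeps working unchanged, while new userspace can refine the value without a new IOCTL number.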