Re: Fuzzing media before changes hit upstream

On Thu, Dec 5, 2019 at 11:31 AM Ezequiel Garcia <ezequiel@xxxxxxxxxxxxx> wrote:
>
> Hi Dmitry,
>
> My name is Ezequiel Garcia; I work for Collabora
> as a kernel developer. Currently, we are helping the
> ChromeOS team with media subsystem driver
> upstreaming and other upstream core changes.
>
> I've been following syzkaller media fuzzing
> progress on vivid, vimc and other drivers, and I'd
> like to thank you for your hard work. It's very
> impressive.
>
> I'm currently exploring the possibility of using
> syzkaller as part of our development process,
> fuzzing core changes, new ioctls, etc.
>
> The configuration allows restricting the syscalls
> used, but I fail to see if there is a way to
> restrict the device nodes syzkaller will use.
>
> Also, someone mentioned that it was possible
> to "train" the system, so subsequent runs
> would be shorter. Is that the case, or did I
> maybe get the wrong idea?
>
> Ideally, if we could have something that runs
> on a developer's laptop for just a few hours
> (say 6, 8 or even 24 hours), then we could run
> it before submitting patches and somehow
> increase the level of confidence in the changes.
>
> Thanks,
> Ezequiel

+syzkaller, linux-media mailing list

Hi Ezequiel,

Great to hear! Thanks.

Re restricting device nodes. The set of device nodes accessed is
dictated by the set of enabled syscalls.
Say, if you enable only "openat$vim2m" and "ioctl*", then syzkaller
will only access /dev/video35 and the ioctls applicable to it. So
restricting the set of syscalls should do what you want.
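
Concretely, that's the "enable_syscalls" list in the syz-manager
config. A minimal sketch of such a config (paths and VM settings are
placeholders, only the "enable_syscalls" part matters here):

  {
      "target": "linux/amd64",
      "http": "127.0.0.1:56741",
      "workdir": "/path/to/workdir",
      "kernel_obj": "/path/to/linux",
      "image": "/path/to/image",
      "sshkey": "/path/to/key",
      "syzkaller": "/path/to/syzkaller",
      "type": "qemu",
      "vm": {
          "count": 4,
          "cpu": 2,
          "mem": 2048,
          "kernel": "/path/to/linux/arch/x86/boot/bzImage"
      },
      "enable_syscalls": [
          "openat$vim2m",
          "ioctl*"
      ]
  }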

Re training. Generally every syzkaller run is infinite, so there is no
shorter or longer :)
But, yes, while it runs it collects a corpus of interesting inputs and
persists it. Then, on subsequent runs it will use the collected corpus
to more-or-less "continue" from where it stopped. You get this
automatically provided you have code coverage enabled (which you
should enable for efficiency anyway). A 6-24h run should be good
enough (especially if you have already accumulated some corpus).
The corpus is a single local file, so it can be copied across machines.
We also have syz-hub, which allows a team to connect all their local
instances together and continuously reuse each other's progress:
https://github.com/google/syzkaller/blob/master/docs/hub.md
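
(The single file in question is corpus.db in the manager's workdir, so
copying the corpus across machines is literally copying that file.)
If you go the syz-hub route, each manager config grows a few
hub-related keys, roughly like this (the names, address and key below
are placeholders; see hub.md above for the details):

  "name": "media-laptop-1",
  "hub_client": "media-team",
  "hub_addr": "hub-host:port",
  "hub_key": "some-shared-secret"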

But note that syzkaller will not auto-magically test every piece of
kernel code. It needs descriptions of the tested interfaces and some
special setup for some subsystems, e.g. device nodes present and
arranged in a particular way. Have you looked at the current syzbot
coverage of the media subsystem by any chance? Those are the coverage
links on the dashboard:
https://syzkaller.appspot.com/upstream
Have you checked the existing syzkaller descriptions?
https://github.com/google/syzkaller/blob/master/sys/linux/dev_video4linux.txt
I can't guarantee the completeness or quality of the media coverage,
and as far as I know none of the kernel developers have looked at the
coverage/descriptions.
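
To give a feel for what such descriptions look like, here is a rough,
simplified sketch in the syzkaller description language (written from
memory, not the actual contents of dev_video4linux.txt; the resource
name is illustrative):

  resource fd_video[fd]

  openat$vim2m(fd const[AT_FDCWD], file ptr[in, string["/dev/video35"]], flags flags[open_flags], mode const[0]) fd_video
  ioctl$VIDIOC_QUERYCAP(fd fd_video, cmd const[VIDIOC_QUERYCAP], arg ptr[out, v4l2_capability])

  v4l2_capability {
      driver        array[int8, 16]
      card          array[int8, 32]
      bus_info      array[int8, 32]
      version       int32
      capabilities  int32
      device_caps   int32
      reserved      array[int32, 3]
  }

Adding fuzzing support for a new ioctl is mostly a matter of adding an
entry like the ioctl line above, plus descriptions of the structs it
takes.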

If you run syzkaller locally to test a particular piece of kernel
code, it always helps to check the coverage report to assess the
coverage actually achieved.

This may be useful for configuring the kernel:
https://github.com/google/syzkaller/blob/master/docs/linux/kernel_configs.md
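
In short: you want coverage instrumentation and KASAN, plus the media
drivers you care about built in. Roughly something like the fragment
below (not exhaustive, the doc above has the full list; the specific
media drivers are just examples):

  CONFIG_KCOV=y
  CONFIG_DEBUG_INFO=y
  CONFIG_KALLSYMS=y
  CONFIG_KALLSYMS_ALL=y
  CONFIG_KASAN=y
  CONFIG_KASAN_INLINE=y
  CONFIG_MEDIA_SUPPORT=y
  CONFIG_VIDEO_VIVID=y
  CONFIG_VIDEO_VIM2M=y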


