I think there's a misunderstanding in how all the "frameworks" are stacked on top of each other. I'm not very knowledgeable in this area, but I think the layers are more or less like this:

6. Totem | Firefox | Rhythmbox | Audacity
--------------------------
5. GStreamer | FFmpeg
--------------------------
4. PulseAudio | JACK
--------------------------
3. PipeWire
--------------------------
2. ALSA | OSS
--------------------------
1. Hardware
This is from the headlines at pipewire.org:
PipeWire provides a low-latency, graph based processing engine on top of audio and video devices
that can be used to support the use cases currently handled by both pulseaudio and JACK.
To me, this sounds more like a replacement of layer 4 (at least). Am I wrong to think that devices = hardware? Also, what Wim has told me:

"PipeWire should be an under-the-hood change. No workflow or tools or apis are changed, so we still use pulseaudio API, jack API, jack tools and pulseaudio tools for everything. Evaluation of this should be on how similar the old setup was to the new one, there should ideally be no difference, nobody should notice a change, ideally."

does not have to imply that sound data goes to PulseAudio first and then to PipeWire. I believe that PipeWire mimics the PulseAudio ports to handle the situation by itself. However, I was not able to find any accurate description of how that actually works in the system, so if you have a link to a place where the above diagram is shown or confirmed, please share it.
The application from layer 6 of course doesn't need to use any framework from layer 5; it can talk to lower layers directly (it can even talk directly to ALSA on layer 2, but then you lose some benefits, like software sound multiplexing). But the important thing to notice is that GStreamer, FFmpeg, etc. are way above PipeWire.

If we pick a particular app and say "app X must be able to play sound at Beta" (or alternatively "you must be able to find an app that plays sound at Beta"), that app X might talk directly to e.g. PulseAudio, and while it works, you might not detect that 90% of other apps using a framework like GStreamer don't work. That's why I believe GStreamer is mentioned in the criterion: it ensures that a) the hardware works properly, and b) a large portion of our apps are likely to work properly as well (once you test at least one of them).
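To make that concrete, here is a minimal sketch of a "layer 6" check that plays a test tone through GStreamer (layer 5) and therefore exercises everything below it. This assumes the GObject-introspection bindings for GStreamer are installed (which I believe is the case on a default Workstation, but take that with a grain of salt):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# autoaudiosink lets GStreamer pick the default output, so whatever
# server (layers 3-4) and ALSA (layer 2) provide gets exercised,
# not one hand-picked backend.
pipeline = Gst.parse_launch("audiotestsrc num-buffers=100 ! autoaudiosink")
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                             Gst.MessageType.EOS | Gst.MessageType.ERROR)
if msg and msg.type == Gst.MessageType.ERROR:
    err, _ = msg.parse_error()
    print("playback failed:", err.message)
else:
    print("tone played through the default GStreamer sink")
pipeline.set_state(Gst.State.NULL)

If this plays a tone, everything from layer 5 down is working together.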
So ideally, I would like to avoid the situation where application X is fine while application Y has no sound, simply because they talk to different layers. However, this could be tested in the scope of the Audio Test Day, because it creates too many combinations for daily validation testing.
Of course, saying "at least one app must be able to produce sound" (which is basically what you proposed in the first email) is also valid; it's just a weaker version of that criterion (it will validate the hardware and some low levels like PipeWire and ALSA, but it might or might not validate anything above them). Audacity is a good example of an app using layer 4 at most, so layer 5 would not be covered.
Audacity could basically be an application that checks almost everything, because you can set it up to talk to ALSA directly, to JACK, or to PulseAudio, depending on the settings. The quality of the recording experience might differ, however.
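Just to illustrate that backend choice from the command line (again a hedged sketch: hw:0,0 is a placeholder device name that varies per machine, and direct hw: access can fail if a sound server already holds the device):

import subprocess

WAV = "test.wav"  # hypothetical test file

# Through the sound server (layers 3-4): paplay speaks the PulseAudio API.
subprocess.run(["paplay", WAV])

# Straight to ALSA hardware (layer 2), bypassing the server and losing
# software mixing along the way.
subprocess.run(["aplay", "-D", "hw:0,0", WAV])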
This is, of course, beyond the scope of our testing.
Perhaps I'm misunderstanding you, but it seems to me that the current criterion is very much in line with what you're saying you want to do. It tests something from the (almost) very top all the way down to the bottom.
Originally, my motivation was to introduce a recording criterion and couple it with the playback criterion, but since there is no recording application installed by default and most audio applications with recording support do not rely on GStreamer, I thought decoupling the GStreamer thing would help.
However, if you suggest that a recording test should mean checking that sound can be captured by Firefox, then we actually do not need anything. We say that capturing sound is the "default functionality" of Firefox and hey, we are set. No changes needed.
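That said, if we ever wanted a framework-level recording check to mirror the playback one, a minimal sketch could look like the following (capture.wav is just a placeholder name, and the default source obviously has to exist and be unmuted):

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Capture ~3 seconds from the default input through GStreamer (layer 5)
# and write a WAV file -- the recording mirror image of the playback test.
pipeline = Gst.parse_launch(
    "autoaudiosrc ! audioconvert ! wavenc ! filesink location=capture.wav")
pipeline.set_state(Gst.State.PLAYING)
time.sleep(3)
pipeline.send_event(Gst.Event.new_eos())  # let wavenc finalize the header
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)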
--
Lukáš Růžička
FEDORA QE, RHCE
Purkyňova 115
612 45 Brno - Královo Pole