Hi Dmitry,
Trimming down my last reply to just the points that need your attention/ack.
I made some small edits for the sake of clarity.
> On Apr 16, 2024, at 6:28 PM, Dmitry Torokhov <dmitry.torokhov@xxxxxxxxx> wrote:
>
> On Mon, Apr 08, 2024 at 03:32:13PM +0000, James Ogletree wrote:
>>
>> +/* Describes an area in DSP memory populated by effects */
>> +struct cs40l50_bank {
>> + enum cs40l50_bank_type type;
>> + u32 base_index;
>> + u32 max_index;
>
> This looks like it is written to the device, so I think this needs
> proper endianness annotation. Is there any concern about padding between
> the type and base_index?
The structure itself is never written to the device, so padding is not a concern.
Only base_index is written, and regmap handles the endianness conversion.
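For concreteness, that write boils down to a single 32-bit regmap access
along these lines (the register name is illustrative, not from the patch):

	/* regmap formats the u32 for the bus; the struct itself is never sent */
	error = regmap_write(vibra->regmap, CS40L50_TRIGGER_INDEX_REG,
			     bank->base_index + effect->index);
	if (error)
		return error;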
>> +static void cs40l50_start_worker(struct work_struct *work)
>> +{
>> + struct cs40l50_work *work_data = container_of(work, struct cs40l50_work, work);
>> + struct cs40l50_vibra *vibra = work_data->vibra;
>> + struct cs40l50_effect *start_effect;
>> +
>> + if (pm_runtime_resume_and_get(vibra->dev) < 0)
>> + goto err_free;
>> +
>> + mutex_lock(&vibra->lock);
>> +
>> + start_effect = cs40l50_find_effect(work_data->effect->id, &vibra->effect_head);
>> + if (start_effect) {
>> + while (--work_data->count >= 0) {
>> + vibra->dsp.write(vibra->dev, vibra->regmap, start_effect->index);
>> + usleep_range(work_data->effect->replay.length,
>> + work_data->effect->replay.length + 100);
>
> If I am reading this right you are spinning here playing the effect. It
> would be much better if you tracked outstanding number of replays for an
> effect and had a work re-scheduled that would play an instance.
> Similarly to what code in ff-memless.c is doing.
Yes, you are reading it right.
ff-memless.c appears to handle repeats atomically; however, for the
reasons explained below, this driver must queue work for playback
executions.
That opens a window where, if a playback is issued with a high enough
repeat value, an erase operation could arrive in the middle of the
re-scheduling chain and disrupt the playbacks that have not yet been
queued. With the current approach, the effect is guaranteed to repeat
any number of times without risk of being erased partway through.
That is my concern with adopting the re-scheduling approach for this
driver; a sketch of the window is below. Please let me know your
thoughts.
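To make that window concrete, a re-scheduled variant would look roughly
like this (rough, untested sketch; the delayed_work field and the
index/len_us members are hypothetical):

	static void cs40l50_start_worker(struct work_struct *work)
	{
		struct cs40l50_work *work_data =
			container_of(to_delayed_work(work), struct cs40l50_work, dwork);
		struct cs40l50_vibra *vibra = work_data->vibra;

		mutex_lock(&vibra->lock);
		vibra->dsp.write(vibra->dev, vibra->regmap, work_data->index);
		if (--work_data->count > 0)
			queue_delayed_work(vibra->vibe_wq, &work_data->dwork,
					   usecs_to_jiffies(work_data->len_us));
		mutex_unlock(&vibra->lock);
		/*
		 * Here is the window: an erase queued at this point runs
		 * before the next delayed instance and can free the effect
		 * mid-sequence.
		 */
	}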
>> +static int cs40l50_erase(struct input_dev *dev, int effect_id)
>> +{
>> + struct cs40l50_vibra *vibra = input_get_drvdata(dev);
>> + struct cs40l50_work work_data;
>> +
>> + work_data.vibra = vibra;
>> + work_data.effect = &dev->ff->effects[effect_id];
>> +
>> + INIT_WORK(&work_data.work, cs40l50_erase_worker);
>> +
>> + /* Push to workqueue to serialize with playbacks */
>> + queue_work(vibra->vibe_wq, &work_data.work);
>> + flush_work(&work_data.work);
>
> You already take the lock when you play, upload or erase an effect. Why
> do we need additional single-thread workqueue for this? Why do we need
> additional ordering and synchronization?
The workqueue is necessary because playback blocks (in
pm_runtime_resume_and_get()) while its callback runs in atomic context,
so the playback work must be deferred. Upload and erase are not called
in atomic context, but without queueing those executions as well, they
could cut in front of playbacks waiting in the workqueue.
But since the callbacks are already serialized by the workqueue, I do
think the locks are excessive. That's my thinking; a sketch of the
playback path is below. Let me know if it is sound.
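For reference, the shape of the playback path I am describing (sketch
only; the single-shot work item is illustrative):

	/* Called by the input core in atomic context, so it cannot sleep */
	static int cs40l50_playback(struct input_dev *dev, int effect_id, int val)
	{
		struct cs40l50_vibra *vibra = input_get_drvdata(dev);

		if (!val)
			return 0;

		/* Defer: the worker sleeps in pm_runtime_resume_and_get() */
		queue_work(vibra->vibe_wq, &vibra->play_work);
		return 0;
	}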
Best,
James