Re: [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

On 2012-06-25 10:57, Avi Kivity wrote:
> On 06/25/2012 10:01 AM, Jan Kiszka wrote:
>> Flush pending coalesced MMIO before performing mapping or state changes
>> that could affect the event orderings or route the buffered requests to
>> a wrong region.
>>
>> Signed-off-by: Jan Kiszka <jan.kiszka@xxxxxxxxxxx>
>>
>> In addition, we also have to
> 
> Yes, we do.
> 
>>  
>>  void memory_region_transaction_begin(void)
>>  {
>> +    qemu_flush_coalesced_mmio_buffer();
>>      ++memory_region_transaction_depth;
>>  }
> 
> Why is this needed?
> 
>>  
>> @@ -1109,6 +1110,9 @@ void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
>>  
>>  void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
>>  {
>> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
>> +        qemu_flush_coalesced_mmio_buffer();
>> +    }
> 
> The readonly attribute is inherited by subregions and alias targets, so
> this check is insufficient.  See render_memory_region().  Need to flush
> unconditionally.
> 
>>      if (mr->readonly != readonly) {
>>          mr->readonly = readonly;
>>          memory_region_update_topology(mr);
>> @@ -1117,6 +1121,9 @@ void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
>>  
>>  void memory_region_rom_device_set_readable(MemoryRegion *mr, bool readable)
>>  {
>> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
>> +        qemu_flush_coalesced_mmio_buffer();
>> +    }
> 
> This property is not inherited, but let's flush unconditionally just the
> same, to reduce fragility.
> 
>> @@ -1219,6 +1228,9 @@ void memory_region_add_eventfd(MemoryRegion *mr,
>>      };
>>      unsigned i;
>>  
>> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
>> +        qemu_flush_coalesced_mmio_buffer();
>> +    }
> 
> Ditto.  It's possible that an eventfd overlays a subregion which has
> coalescing enabled.  It's not really defined what happens in this case,
> and such code and its submitter should be perma-nacked, but let's play
> it safe here since there isn't much to be gained by avoiding the flush.
>  This code is a very slow path anyway, including an rcu and/or srcu
> synchronization, and a rebuild of the dispatch radix tree (trees when we
> dma-enable this code).
> 
>>      for (i = 0; i < mr->ioeventfd_nb; ++i) {
>>          if (memory_region_ioeventfd_before(mrfd, mr->ioeventfds[i])) {
>>              break;
>> @@ -1249,6 +1261,9 @@ void memory_region_del_eventfd(MemoryRegion *mr,
>>      };
>>      unsigned i;
>>  
>> +    if (!QTAILQ_EMPTY(&mr->coalesced)) {
>> +        qemu_flush_coalesced_mmio_buffer();
>> +    }
> 
> Same.

OK.
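
I.e., the QTAILQ_EMPTY() guards simply go away; untested sketch for the
readonly case:

void memory_region_set_readonly(MemoryRegion *mr, bool readonly)
{
    /* Flush unconditionally: coalescing may be active on a subregion
     * or alias target even if mr->coalesced itself is empty. */
    qemu_flush_coalesced_mmio_buffer();

    if (mr->readonly != readonly) {
        mr->readonly = readonly;
        memory_region_update_topology(mr);
    }
}

Same pattern for rom_device_set_readable and the eventfd add/del paths.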

> 
> The repetitiveness of this code suggests a different way of doing this:
> make every API call be its own subtransaction and perform the flush in
> memory_region_transaction_begin() (maybe that's the answer to my
> question above).

So you want me to wrap the core of those services in
memory_region_transaction_begin/commit instead? Just to be sure I got
the idea right.
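
I.e., something along these lines (untested sketch; assuming
memory_region_transaction_commit() still triggers the deferred topology
update once the nesting depth drops back to zero):

void memory_region_transaction_begin(void)
{
    /* One flush per (sub)transaction instead of one per API call. */
    qemu_flush_coalesced_mmio_buffer();
    ++memory_region_transaction_depth;
}

void memory_region_rom_device_set_readable(MemoryRegion *mr, bool readable)
{
    if (mr->readable != readable) {
        memory_region_transaction_begin();
        mr->readable = readable;
        memory_region_update_topology(mr);
        memory_region_transaction_commit();
    }
}

The other entry points would then follow the same pattern.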

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

