On 07/21/2012 03:52 AM, Marcelo Tosatti wrote:
> On Fri, Jul 20, 2012 at 09:15:44PM +0800, Xiao Guangrong wrote:
>> On 07/20/2012 06:58 PM, Marcelo Tosatti wrote:
>>> On Fri, Jul 20, 2012 at 10:17:36AM +0800, Xiao Guangrong wrote:
>>>> On 07/20/2012 07:58 AM, Marcelo Tosatti wrote:
>>>>
>>>>>> -	}
>>>>>> +	rc = ctxt->ops->read_emulated(ctxt, addr, mc->data + mc->end, size,
>>>>>> +				      &ctxt->exception);
>>>>>> +	if (rc != X86EMUL_CONTINUE)
>>>>>> +		return rc;
>>>>>> +
>>>>>> +	mc->end += size;
>>>>>> +
>>>>>> +read_cached:
>>>>>> +	memcpy(dest, mc->data + mc->pos, size);
>>>>>
>>>>> What prevents read_emulated(size > 8) call, with
>>>>> mc->pos == (mc->end - 8) now?
>>>>
>>>> Marcelo,
>>>>
>>>> The splitting has been done in emulator_read_write_onepage:
>>>>
>>>> 	while (bytes) {
>>>> 		unsigned now = min(bytes, 8U);
>>>>
>>>> 		frag = &vcpu->mmio_fragments[vcpu->mmio_nr_fragments++];
>>>> 		frag->gpa = gpa;
>>>> 		frag->data = val;
>>>> 		frag->len = now;
>>>> 		frag->write_readonly_mem = (ret == -EPERM);
>>>>
>>>> 		gpa += now;
>>>> 		val += now;
>>>> 		bytes -= now;
>>>> 	}
>>>>
>>>> So I think it is safe to remove the splitting in read_emulated.
>>>
>>> Yes, it is fine to remove it.
>>>
>>> But splitting in emulate.c prevented the case of _cache read_ with size
>>> > 8 beyond end of mc->data. Must handle that case in read_emulated.
>>>
>>> "What prevents read_emulated(size > 8) call, with mc->pos == (mc->end - 8) now?"
>>
>> You mean the mmio region is partly cached?
>>
>> I think it cannot happen. Now, we pass the whole size to
>> emulator_read_write_onepage(); after it is finished, it saves the whole
>> data into mc->data[], so the cache read can always get the whole data
>> from mc->data[].
>
> I mean that nothing prevents a caller from reading beyond the end of
> mc->data array (but then again this was the previous behavior).

1024 bytes should be enough for instructions; maybe we can add a WARN_ON
to check for buffer overflow.

>
> ACK
>

Thank you, Marcelo!