Re: Process Scheduling Issue using sg/libata

Fajun Chen wrote:
On 11/17/07, Mark Lord <liml@xxxxxx> wrote:
Fajun Chen wrote:
On 11/16/07, Mark Lord <liml@xxxxxx> wrote:
Fajun Chen wrote:
..
This problem also happens with R/W DMA ops. Below are simplified code snippets:
    // Open one sg device for read
    if ((sg_fd = open(dev_name, O_RDWR)) < 0)
    {
        ...
    }
    read_buffer = (U8 *)mmap(NULL, buf_sz, PROT_READ | PROT_WRITE,
                             MAP_SHARED, sg_fd, 0);

    // Open the same sg device for write
    if ((sg_fd_wr = open(dev_name, O_RDWR)) < 0)
    {
        ...
    }
    write_buffer = (U8 *)mmap(NULL, buf_sz, PROT_READ | PROT_WRITE,
                              MAP_SHARED, sg_fd_wr, 0);
..

Mmmm.. what is the purpose of those two mmap'd areas?
I think this is important and relevant here: what are they used for?

As coded above, these are memory-mapped areas that (1) overlap,
and (2) will be demand-paged automatically to/from the disk
as they are accessed/modified.  This *will* conflict with any SG_IO
operations happening at the same time on the same device.
..
The purpose of using two memory-mapped areas is to meet our
requirement that certain data patterns for writing be preserved
across commands. For instance, if one buffer were used for both reads
and writes, it would have to be re-populated with the write data
after each read command, which would be very costly for mixed
write/read operations. Keeping separate read and write buffers also
makes data comparison easier.

These buffers are not used at the same time (one is used only after
the command on the other has completed). My application is the only
program accessing the disk via sg/libata; the rest of the programs
run from a ramdisk. Also, each buffer is only about 0.5MB, and we
have 64MB of RAM on the target board.
With this setup, these two buffers should be pretty much independent
of the block layer/file system, correct?
..

No.  Those "buffers" as coded above are actually mmap'd representations
of portions of the device (the disk drive).  So any write into one of those
buffers will trigger disk writes, and merely reading from the buffers
may trigger disk reads.

So what could be happening here is this: when you trigger manual disk
accesses via SG_IO that result in data being copied into those "buffers",
the kernel then automatically schedules disk writes to update the on-disk
copies of those mmap'd regions.
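
(To make the demand-paging concrete, here's a hypothetical sketch --
/dev/sdX is just a stand-in for a device node mapped this way:)

      /* needs <fcntl.h> and <sys/mman.h> */
      int fd = open("/dev/sdX", O_RDWR);      /* hypothetical node */
      U8 *p = (U8 *)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
      p[0] ^= 0xff;   /* dirtying the page marks it for writeback */
      /* Even without an explicit msync(), the kernel's writeback
       * will eventually flush the dirty page to the device -- the
       * "automatic" disk write described above. */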

What you probably intended to do instead was to use mmap just to allocate
some page-aligned RAM, not to actually mmap any on-disk data.  Right?

Here's how that's done:

      read_buffer = (U8 *)mmap(NULL, buf_sz, PROT_READ | PROT_WRITE,
                             MAP_SHARED|MAP_ANONYMOUS, -1, 0);
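
For example, a minimal sketch for both buffers, reusing the buf_sz and
U8 from your snippet above (illustrative only, not your exact code):

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      typedef unsigned char U8;

      int main(void)
      {
          size_t buf_sz = 512 * 1024;   /* ~0.5MB, as you described */

          /* Anonymous, page-aligned RAM: not backed by any device,
           * so the kernel never schedules writeback for these pages. */
          U8 *read_buffer  = (U8 *)mmap(NULL, buf_sz,
                                        PROT_READ | PROT_WRITE,
                                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
          U8 *write_buffer = (U8 *)mmap(NULL, buf_sz,
                                        PROT_READ | PROT_WRITE,
                                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
          if (read_buffer == MAP_FAILED || write_buffer == MAP_FAILED) {
              perror("mmap");
              return 1;
          }

          /* ... open the sg device once and point sg_io_hdr_t.dxferp
           * at these buffers for SG_IO transfers ... */

          munmap(read_buffer, buf_sz);
          munmap(write_buffer, buf_sz);
          return 0;
      }

Note that anonymous mappings take no file descriptor (hence the -1),
so the second open() of the sg device is no longer needed at all.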

Cheers