Re: Is the driver's open() method called on fork or dup of an fd?

Thanks for your generosity and your patience; I really benefit from your great help.

>and ultimately, userspace open()-ing of the driver will definitely map
>to (through sys_open->do_sys_open->do_filp_open()->dv1394_open()) the
>per-device's open() API.   this means THAT EVERY PROCESS that want to
>trigger the deviceXXX_open()  to be called must use userspace API
>open() to open the device.
>
>does that answer your question?

Your answer is quite clear and helpful. In my spare time I will dig more information out of the code, the way you do.

>of course, if u want to get the process pid inside the open() api, it
>will be something like current->pid, but there are many other ways.

I think the origin of this question is the following case:
I mmap()ed some physical pages as a single buf into user space through the driver's mmap() method. I don't want to
access the contents of the buf one page at a time; I want to access them in a continuous way, as we access a buf
with memcpy or strcpy in user space. In other words, I don't want to care which page to access, or at which
offset within that page to start.

Maybe you will ask why I don't allocate a contiguous buf in my driver. That's because I haven't tried any way
to allocate memory other than the nopage member of struct vm_operations_struct combined with __get_free_pages(),
which is described clearly in LDD3.

Used this way, the order parameter of __get_free_pages() must be zero, i.e. the buf must be allocated
ONE PAGE at a time, so the allocated pages are very likely not contiguous.
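For reference, the allocate-one-page-per-fault scheme I mean looks roughly like the LDD3 scullp example. This is only a sketch under my assumptions (a 2.6-era kernel where vm_operations_struct still has nopage; the names mux_dev and mux_vma_nopage are my own):

```c
/* Sketch only: one page per fault, order 0, so pages need not be contiguous.
 * mux_dev / mux_vma_nopage are this driver's (hypothetical) names. */
struct page *mux_vma_nopage(struct vm_area_struct *vma,
                            unsigned long address, int *type)
{
	struct mux_dev *dev = vma->vm_private_data;
	unsigned long off = address - vma->vm_start;
	int pg = off >> PAGE_SHIFT;

	if (pg >= dev->pgs)
		return NOPAGE_SIGBUS;

	if (!dev->data[pg])
		/* order == 0: exactly one page per call */
		dev->data[pg] = (void *)__get_free_pages(GFP_KERNEL, 0);
	if (!dev->data[pg])
		return NOPAGE_OOM;

	if (type)
		*type = VM_FAULT_MINOR;
	return virt_to_page(dev->data[pg]);
}
```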

To avoid accessing the buf with page arithmetic, I maintain the mmap()ed virtual address for every process
mapping those physical pages: I record vma->vm_start in the nopage method of struct vm_operations_struct, and
the process id, in the following struct:

struct vmt_entry
{
	int used;                      /* slot-in-use flag */
	pid_t pro_id;                  /* corresponding process */
	unsigned long vma_start_addr;  /* vm_start from the vma struct */
};

struct mux_dev
{
	void **data;                            /* physical pages pool */
	struct vmt_entry vmt[MAX_PROCESS_CNT];  /* mapping VMA table */
	int vmas;                               /* how many processes are active */
	int pgs;                                /* total pages in our dev */
	size_t size;                            /* total bytes in dev */
	spinlock_t lock;
};

I store vma->vm_start in vma_start_addr of struct vmt_entry, and store the process id in pro_id when the driver's open() gets called.
Then, when needed, I access the buf using:

dev->vmt[index].vma_start_addr + SOME_OFFSET 

where dev->vmt[index].pro_id equals the pid of the current process. This seems rather redundant, but it's the only
way I have found, and fortunately it works.
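To make the lookup concrete, here is a self-contained user-space sketch of that per-pid table walk; vmt_lookup() is a hypothetical helper of mine standing in for what the driver would do internally:

```c
#include <sys/types.h>

#define MAX_PROCESS_CNT 8

struct vmt_entry {
	int used;                      /* slot-in-use flag */
	pid_t pro_id;                  /* corresponding process */
	unsigned long vma_start_addr;  /* vm_start from the vma struct */
};

/* Hypothetical helper: return the user-space address that `offset`
 * bytes into the mapped buf corresponds to for process `pid`, or 0
 * when that pid has no recorded slot. */
static unsigned long vmt_lookup(const struct vmt_entry *vmt, pid_t pid,
                                unsigned long offset)
{
	for (int i = 0; i < MAX_PROCESS_CNT; i++) {
		if (vmt[i].used && vmt[i].pro_id == pid)
			return vmt[i].vma_start_addr + offset;
	}
	return 0;
}
```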

Because I need to find a slot to store the pid in the driver's open() method, and then store the vma_start_addr in that slot, I want
to know whether the driver's open() gets called during fork(). If open() doesn't get called, I think I have no opportunity
to reserve a slot for the child process. I don't know whether that is true; maybe the misgiving is redundant.


Yihe Chen

