Re: ioremap bug? (was RE: DSPBRIDGE: segmentation fault after reloading dspbridge several times due to a memory leak.)

My two cents on this, based on previous experience (which may be
outdated): are any memory-space limits, physical or virtual, being
crossed with these values?

I.e., are you at the boundary of a memory page or of a memory bank?

Just an idea. If I'm totally off base, my apologies; I just wanted to help.

Best regards,

Alex B.

On Thu, Mar 12, 2009 at 11:27 AM, Kevin Hilman
<khilman@xxxxxxxxxxxxxxxxxxx> wrote:
> "Menon, Nishanth" <nm@xxxxxx> writes:
>
>>> -----Original Message-----
>>> From: linux-omap-owner@xxxxxxxxxxxxxxx [mailto:linux-omap-
>>> owner@xxxxxxxxxxxxxxx] On Behalf Of Guzman Lugo, Fernando
>>> Sent: Thursday, March 12, 2009 2:04 AM
>>> To: linux-omap@xxxxxxxxxxxxxxx
>>> Subject: DSPBRIDGE: segmentation fault after reloading dspbridge several
>>> times due to a memory leak.
>>>         Reloading the dspbridge several times, I get a segmentation
>>> fault. From the log, it looks like memory was exhausted.
>>>
>>> The error happens when ioremap is called
>>>
>>> void MEM_ExtPhysPoolInit(u32 poolPhysBase, u32 poolSize)
>>> {
>>>             u32 poolVirtBase;
>>>
>>>             /* get the virtual address for the physical memory
>>>              * pool passed in */
>>>             poolVirtBase = (u32)ioremap(poolPhysBase, poolSize);
>>> ...
>>>
>>> Putting in some printk's and printing the address returned by ioremap,
>>> I realized that the address returned is different each time I reload
>>> the dspbridge; in fact, it keeps increasing. I also put a printk where
>>> iounmap is called to make sure it is reached, and it is indeed called.
>>> However, testing with a kernel + bridge for Linux 23x, I always get
>>> the same address for the pool memory. Any idea what the problem is?
>>> I have included the console output where you can see the address
>>> increasing.
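For reference, the creeping addresses described above can also be observed from userspace through /proc/vmallocinfo (available since 2.6.26), which prints one "0xSTART-0xEND size caller" line per allocation in the vmalloc area. A minimal sketch; the helper name and the optional snapshot-file argument are illustrative, not from this thread:

```shell
#!/bin/sh
# List the address range and size of every ioremap entry in
# vmallocinfo-format input. Reads the live /proc/vmallocinfo by
# default; pass a saved snapshot file to inspect it offline.
# (The helper name is illustrative, not from the original thread.)
list_ioremaps() {
    awk '/ioremap/ { print $1, $2 }' "${1:-/proc/vmallocinfo}"
}
```

Running this after each insmod/rmmod cycle would show whether the ioremap ranges really drift upward across reloads, without patching printk's into the driver.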
>>
>> I reproduced this with the following dummy driver, which performs the same ioremap calls that the bridge driver would have done:
>>
>> #include <linux/kernel.h>
>> #include <linux/module.h>
>> #include <linux/slab.h>
>> #include <linux/mm.h>
>> #include <linux/io.h>
>> #include <linux/dma-mapping.h>
>>
>> #define BASE 0x87000000
>> #define SIZE 0x600000
>>
>> struct mem_s {
>>         void __iomem *vir;
>>         u32 phy;
>>         u32 size;
>> };
>>
>> struct mem_s b[] = {
>>         { NULL, BASE,       SIZE  },
>>         { NULL, 0x48306000, 4096  },
>>         { NULL, 0x48004000, 4096  },
>>         { NULL, 0x48094000, 4096  },
>>         { NULL, 0x48002000, 4096  },
>>         { NULL, 0x5c7f8000, 98304 },
>>         { NULL, 0x5ce00000, 32768 },
>>         { NULL, 0x5cf04000, 81920 },
>>         { NULL, 0x48005000, 4096  },
>>         { NULL, 0x48307000, 4096  },
>>         { NULL, 0x48306a00, 4096  },
>>         { NULL, 0x5d000000, 4096  },
>> };
>
> Nishant,
>
> Which of these physical addresses causes an increasing virtual
> address?  The addresses in the 0x48xxxxxx (L4, L4_WKUP) range should
> just trigger static mapping via the arch-specific ioremap, so those
> should always map to the same virt address.
>
> Could you do the experiment with a smaller number of mappings?  Maybe
> just one at a time of the non L4 mappings?  Probably starting with
> only the BASE,SIZE mapping.
>
> Kevin
>
>>
>> static int __init dummy_init(void)
>> {
>>         int i;
>>
>>         for (i = 0; i < ARRAY_SIZE(b); i++) {
>>                 b[i].vir = ioremap(b[i].phy, b[i].size);
>>                 if (b[i].vir == NULL) {
>>                         printk(KERN_ERR "Allocation failed idx=%d\n", i);
>>                         /* Free up all the previous mappings */
>>                         while (--i >= 0)
>>                                 iounmap(b[i].vir);
>>                         return -ENOMEM;
>>                 }
>>         }
>>         return 0;
>> }
>> module_init(dummy_init);
>>
>> static void __exit dummy_exit(void)
>> {
>>         int i;
>>
>>         for (i = 0; i < ARRAY_SIZE(b); i++)
>>                 iounmap(b[i].vir);
>> }
>> module_exit(dummy_exit);
>> MODULE_LICENSE("GPL");
>>
>>
>> Regression script:
>> #!/bin/bash
>> i=0
>> slee()
>> {
>>         echo "Sleep "
>> #sleep 5
>> }
>> while [ $i -lt 100 ]; do
>>         echo "insmod $i"
>>         insmod  ./dummy.ko
>>         if [ $? -ne 0 ]; then
>>                 echo "QUIT IN INSMOD $i"
>>                 exit 1;
>>         fi
>>         slee
>>         echo "rmmod $i"
>>         rmmod dummy
>>         if [ $? -ne 0 ]; then
>>                 echo "QUIT IN RMMOD $i"
>>                 exit 1;
>>         fi
>>         i=$((i + 1))
>>         slee
>> done
>>
>>
>>
>> after around 38 iterations:
>> <4>vmap allocation failed: use vmalloc=<size> to increase size.
>> vmap allocation failed: use vmalloc=<size> to increase size.
>> <3>Allocation failed idx=0
>> Allocation failed idx=0
>>
>> However, cat /proc/meminfo after this error shows:
>> cat /proc/meminfo
>> MemTotal:          61920 kB
>> MemFree:           56900 kB
>> Buffers:               0 kB
>> Cached:             2592 kB
>> SwapCached:            0 kB
>> Active:             1920 kB
>> Inactive:           1252 kB
>> Active(anon):        580 kB
>> Inactive(anon):        0 kB
>> Active(file):       1340 kB
>> Inactive(file):     1252 kB
>> Unevictable:           0 kB
>> Mlocked:               0 kB
>> SwapTotal:             0 kB
>> SwapFree:              0 kB
>> Dirty:                 0 kB
>> Writeback:             0 kB
>> AnonPages:           616 kB
>> Mapped:              688 kB
>> Slab:               1296 kB
>> SReclaimable:        480 kB
>> SUnreclaim:          816 kB
>> PageTables:           96 kB
>> NFS_Unstable:          0 kB
>> Bounce:                0 kB
>> WritebackTmp:          0 kB
>> CommitLimit:       30960 kB
>> Committed_AS:       2932 kB
>> VmallocTotal:     319488 kB
>> VmallocUsed:           8 kB
>> VmallocChunk:     319448 kB
>>
>>
>> We seem to have more than enough vmalloc space according to this. Am I right in thinking this is a kernel vmalloc handling issue?
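One way to cross-check the VmallocUsed figure is to sum the per-area sizes straight from /proc/vmallocinfo. If that total stays small while vmap allocations keep failing, the problem is consumption of vmalloc address space (fragmentation or leaked vm_area ranges) rather than of the bytes accounted in meminfo. A sketch; the helper name and the optional snapshot-file argument are illustrative, not from this thread:

```shell
#!/bin/sh
# Sum the size column (bytes) of vmallocinfo-format input to get the
# total currently reserved in the vmalloc area. Reads the live
# /proc/vmallocinfo by default; accepts a snapshot file instead.
# (The helper name is illustrative, not from the original thread.)
vmalloc_used_bytes() {
    awk '{ total += $2 } END { print total + 0 }' "${1:-/proc/vmallocinfo}"
}
```

Comparing this sum before and after a round of insmod/rmmod cycles would distinguish "bytes still mapped" from "address ranges no longer reusable".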
>>
>> Regards,
>> Nishanth Menon
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-omap" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
