Comment #4 on bug 66519 from Justin Piszcz
(In reply to comment #3)
> Does it work if you build the driver as a module and load it manually after
> the system has booted to a non-X runlevel?

No, same problem:

[ 13.533427] igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 13.533706] IPv6: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready

(end of boot, single-user mode)

Then, load the module (modprobe radeon):

[ 189.052514] [drm] radeon kernel modesetting enabled.
[ 189.052851] [drm] initializing kernel modesetting (CEDAR 0x1002:0x68E1 0x1787:0x3000).
[ 189.052936] [drm] register mmio base: 0xFBC20000
[ 189.052984] [drm] register mmio size: 131072
[ 189.053098] ATOM BIOS: PARK
[ 189.053195] radeon 0000:05:00.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[ 189.053251] radeon 0000:05:00.0: GTT: 512M 0x0000000040000000 - 0x000000005FFFFFFF
[ 189.070227] [drm] Detected VRAM RAM=1024M, BAR=256M
[ 189.070279] [drm] RAM width 64bits DDR
[ 189.070449] [TTM] Zone kernel: Available graphics memory: 33022834 kiB
[ 189.070499] [TTM] Zone dma32: Available graphics memory: 2097152 kiB
[ 189.070548] [TTM] Initializing pool allocator
[ 189.070599] [TTM] Initializing DMA pool allocator
[ 189.070677] [drm] radeon: 1024M of VRAM memory ready
[ 189.070729] [drm] radeon: 512M of GTT memory ready.
[ 189.070855] radeon 0000:05:00.0: ffff88103d246c00 unpin not necessary
[ 189.189112] radeon 0000:05:00.0: fence driver on ring 5 use gpu addr 0x000000000005c418 and cpu addr 0xffffc900159ba418
[ 189.189170] [drm] GART: num cpu pages 131072, num gpu pages 131072
[ 189.189646] [drm] enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0
[ 189.189739] [drm] Loading CEDAR Microcode
[ 189.208711] [drm] PCIE GART of 512M enabled (table at 0x0000000000040000).
[ 189.208891] radeon 0000:05:00.0: WB enabled
[ 189.208947] radeon 0000:05:00.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff88103a3eec00
[ 189.209018] radeon 0000:05:00.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff88103a3eec0c
[ 189.222481] radeon 0000:05:00.0: fence driver on ring 5 use gpu addr 0x000000000015e418 and cpu addr 0xffffc9001621c418
[ 189.222554] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
[ 189.222612] [drm] Driver supports precise vblank timestamp query.
[ 189.222691] radeon 0000:05:00.0: irq 130 for MSI/MSI-X
[ 189.222701] radeon 0000:05:00.0: radeon: using MSI.
[ 189.222784] [drm] radeon: irq initialized.
[ 189.239494] [drm] ring test on 0 succeeded in 1 usecs
[ 189.239608] [drm] ring test on 3 succeeded in 1 usecs
[ 190.415928] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 191.436153] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 192.456377] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 193.476604] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 194.496828] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 195.517052] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 196.537273] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 197.557497] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 198.577725] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 199.597950] [drm:r600_uvd_init] *ERROR* UVD not responding, trying to reset the VCPU!!!
[ 199.618022] [drm:r600_uvd_init] *ERROR* UVD not responding, giving up!!!
[ 199.618083] [drm:evergreen_startup] *ERROR* radeon: error initializing UVD (-1).
[ 199.618369] [drm] ib test on ring 0 succeeded in 0 usecs
[ 199.618452] [drm] ib test on ring 3 succeeded in 0 usecs
[ 199.619870] [drm] Radeon Display Connectors
[ 199.619930] [drm] Connector 0:
[ 199.619985] [drm]   DP-1
[ 199.620039] [drm]   HPD2
[ 199.620095] [drm]   DDC: 0x6460 0x6460 0x6464 0x6464 0x6468 0x6468 0x646c 0x646c
[ 199.620163] [drm]   Encoders:
[ 199.620218] [drm]     DFP1: INTERNAL_UNIPHY1
[ 199.620274] [drm] Connector 1:
[ 199.620330] [drm]   DVI-I-1
[ 199.620384] [drm]   HPD4
[ 199.620439] [drm]   DDC: 0x6450 0x6450 0x6454 0x6454 0x6458 0x6458 0x645c 0x645c
[ 199.620508] [drm]   Encoders:
[ 199.620563] [drm]     DFP2: INTERNAL_UNIPHY
[ 199.620623] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
[ 199.620680] [drm] Connector 2:
[ 199.620735] [drm]   VGA-1
[ 199.620790] [drm]   DDC: 0x6430 0x6430 0x6434 0x6434 0x6438 0x6438 0x643c 0x643c
[ 199.620859] [drm]   Encoders:
[ 199.620913] [drm]     CRT2: INTERNAL_KLDSCP_DAC2
[ 199.621012] [drm] Internal thermal controller with fan control
[ 199.621137] [drm] radeon: power management initialized
[ 199.715497] [drm] fb mappable at 0x3C0FE035F000
[ 199.715554] [drm] vram apper at 0x3C0FE0000000
[ 199.715608] [drm] size 9216000
[ 199.715661] [drm] fb depth is 24
[ 199.715714] [drm]    pitch is 7680
[ 199.715797] fbcon: radeondrmfb (fb0) is primary device
[ 199.990513] Console: switching to colour frame buffer device 240x75
[ 200.042891] radeon 0000:05:00.0: fb0: radeondrmfb frame buffer device
[ 200.043109] radeon 0000:05:00.0: registered panic notifier
[ 200.043296] [drm] Initialized radeon 2.33.0 20080528 for 0000:05:00.0 on minor 0

Justin.
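For reference, a minimal sketch of the test sequence described above. The blacklist file name is arbitrary and how you reach a non-X runlevel varies by distribution; the grep pattern is only illustrative:

    # Keep radeon from loading automatically at boot
    # (may also require rebuilding the initramfs):
    echo "blacklist radeon" > /etc/modprobe.d/radeon-blacklist.conf

    # Boot to a non-X runlevel (e.g. "single" or "3" on the kernel
    # command line), then load the module by hand:
    modprobe radeon

    # Watch for the UVD timeout in the kernel log:
    dmesg | grep -i uvd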