I have a couple of development servers running CentOS 6.3 64-bit with LSI 9211-8i SAS2 controllers connected to a SAS2 backplane. These work fine with SATA hard disks (populated with a bunch of 3TB SATA drives). I'm trying to install an OCZ Vertex3 SSD in each of the two servers to do some SSD caching tests. The system sees the drive, so I do the following:

# parted -a min /dev/sdu
parted> mklabel gpt
parted> mkpart primary ext4 2048s -1s
parted> q
# mkfs.ext4 /dev/sdu1

and it hangs at 'Discarding device blocks: 0/58607505', which I gather is related to 'trim' aka 'discard'. iostat shows this drive as 100% busy with no activity, and 1 pending operation in the IO queue. Every 30 secs or so /var/log/messages gets...

Oct 3 12:48:42 svfis-sg2 kernel: sd 0:0:22:0: attempting task abort! scmd(ffff88062c893d80)
Oct 3 12:48:42 svfis-sg2 kernel: sd 0:0:22:0: [sdu] CDB: Write same(16): 93 08 00 00 00 00 00 00 08 00 00 40 00 00 00 00
Oct 3 12:48:42 svfis-sg2 kernel: scsi target0:0:22: handle(0x001e), sas_address(0x500304800191d8e3), phy(35)
Oct 3 12:48:42 svfis-sg2 kernel: scsi target0:0:22: enclosure_logical_id(0x500304800191d8ff), slot(23)
Oct 3 12:48:42 svfis-sg2 kernel: sd 0:0:22:0: task abort: SUCCESS scmd(ffff88062c893d80)

and after a few of those, I see...

Oct 3 12:49:12 svfis-sg2 kernel: INFO: task mkfs.ext4:3545 blocked for more than 120 seconds.
Oct 3 12:49:12 svfis-sg2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 3 12:49:12 svfis-sg2 kernel: mkfs.ext4 D 0000000000000000 0 3545 3304 0x00000080
Oct 3 12:49:12 svfis-sg2 kernel: ffff88062bbfbc28 0000000000000086 0000000000000000 ffff88062a8f6938
Oct 3 12:49:12 svfis-sg2 kernel: 0000000000000201 0000000000000003 ffff88062bbfbbc8 ffffffff81248cfa
Oct 3 12:49:12 svfis-sg2 kernel: ffff88062b699ab8 ffff88062bbfbfd8 000000000000f4e8 ffff88062b699ab8
Oct 3 12:49:12 svfis-sg2 kernel: Call Trace:
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff81248cfa>] ? __elv_add_request+0x4a/0x90
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff814edf15>] schedule_timeout+0x215/0x2e0
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff812503c2>] ? generic_make_request+0x2b2/0x5c0
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff814edb93>] wait_for_common+0x123/0x180
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff8105ea30>] ? default_wake_function+0x0/0x20
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff8125075f>] ? submit_bio+0x8f/0x120
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff814edcad>] wait_for_completion+0x1d/0x20
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff812574a2>] blkdev_issue_discard+0x152/0x1e0
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff81257e8f>] blkdev_ioctl+0x65f/0x6e0
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff811aefcc>] block_ioctl+0x3c/0x40
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff81189682>] vfs_ioctl+0x22/0xa0
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff81189824>] do_vfs_ioctl+0x84/0x580
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff81189da1>] sys_ioctl+0x81/0xa0
Oct 3 12:49:12 svfis-sg2 kernel: [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b

We're not running the /latest/ kernel; it looks like it's 2.6.32-220.17.1.el6.x86_64, so if that's significant I can try an update.

For what it's worth, the SSDs seem to work fine plugged into a 9261-8i SAS2 MegaRAID on the same backplanes, but I want to do my caching tests without the RAID controller in the way.

Has anyone else seen issues like this with EL6 and SSDs on SAS?
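In case it helps narrow this down, my working assumption is that only the discard pass is wedging (the aborted commands above are all WRITE SAME(16), and the hung task is sitting in blkdev_issue_discard), so the next things I'd poke at are what the kernel advertises for discard on this drive, and retrying the mkfs with the discard step skipped. Treat the exact sysfs paths and flags below as assumptions for this kernel/e2fsprogs combination rather than something I've verified:

# cat /sys/block/sdu/queue/discard_granularity
# cat /sys/block/sdu/queue/discard_max_bytes
# mkfs.ext4 -K /dev/sdu1

(-K is mke2fs's "keep, don't discard blocks at mkfs time" flag; newer e2fsprogs spells it -E nodiscard. That should at least let the filesystem get created for the caching tests, even if it doesn't explain why WRITE SAME(16) times out through the 9211-8i.)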
--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast