Re: pm8001 performance degradation?

Jack,

I think the apparent degradation was the result of profiling flags in
the .config file.

I turned off TASKSTATS, AUDIT, OPTIMIZE_FOR_SIZE, PROFILING (including
OPROFILE), and GCOV_KERNEL.
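
For reference, those correspond to .config entries roughly like the
following (a sketch assuming the standard Kconfig symbol names, not a
verbatim diff of my config):

# CONFIG_TASKSTATS is not set
# CONFIG_AUDIT is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
# CONFIG_PROFILING is not set
# CONFIG_OPROFILE is not set
# CONFIG_GCOV_KERNEL is not set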

Somewhere in that set of changes, the performance came back.

Since I was not intending to run any of the profiling tools at the time
of my tests, I did not expect them to have this effect (I would only
have expected overhead if I were actually using a tool).

Apologies for any confusion I passed to others.


David



On Tue, Jul 12, 2011 at 12:34 PM, ersatz splatt <ersatzsplatt@xxxxxxxxx> wrote:
> Jack,
>
> fio script is:
> [global]
> rw=read
> direct=1
> time_based
> runtime=1m
> ioengine=libaio
> iodepth=32
> bs=512
> [dB]
> filename=/dev/sdb
> cpus_allowed=2
> [dC]
> filename=/dev/sdc
> cpus_allowed=3
> [dD]
> filename=/dev/sdd
> cpus_allowed=4
> [dE]
> filename=/dev/sde
> cpus_allowed=5
>
> (keep in mind this is a system with several cores, so each job is
> pinned to its own CPU via cpus_allowed)
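>
> (For reference, the whole job file is run with a single fio invocation,
> e.g. "fio jobfile.fio" -- the file name is arbitrary -- which starts
> all four jobs together, one per device and one per CPU.)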
>
>
> Before running the script I (of course) disabled request merging
> (coalescing) and set the noop scheduler on each device:
> echo "2"> /sys/block/sdb/queue/nomerges
> echo "2"> /sys/block/sdc/queue/nomerges
> echo "2"> /sys/block/sdd/queue/nomerges
> echo "2"> /sys/block/sde/queue/nomerges
>
> echo noop > /sys/block/sdb/queue/scheduler
> echo noop > /sys/block/sdc/queue/scheduler
> echo noop > /sys/block/sdd/queue/scheduler
> echo noop > /sys/block/sde/queue/scheduler
>
> As you know, disk details are shown in the log on driver load:
> pm8001 0000:05:00.0: pm8001: driver version 0.1.36
> pm8001 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
> scsi4 : pm8001
> scsi 4:0:0:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> sd 4:0:0:0: [sdb] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:0:0: Attached scsi generic sg1 type 0
> sd 4:0:0:0: [sdb] Write Protect is off
> sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sdb: unknown partition table
> sd 4:0:0:0: [sdb] Attached SCSI disk
> scsi 4:0:1:0: Direct-Access     SEAGATE  ST9146803SS      0006 PQ: 0 ANSI: 5
> sd 4:0:1:0: Attached scsi generic sg2 type 0
> sd 4:0:1:0: [sdc] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:1:0: [sdc] Write Protect is off
> sd 4:0:1:0: [sdc] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sdc: unknown partition table
> sd 4:0:1:0: [sdc] Attached SCSI disk
> scsi 4:0:2:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> sd 4:0:2:0: [sdd] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:2:0: Attached scsi generic sg3 type 0
> sd 4:0:2:0: [sdd] Write Protect is off
> sd 4:0:2:0: [sdd] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sdd: unknown partition table
> sd 4:0:2:0: [sdd] Attached SCSI disk
> scsi 4:0:3:0: Direct-Access     SEAGATE  ST9146803SS      0004 PQ: 0 ANSI: 5
> sd 4:0:3:0: [sde] 286749488 512-byte logical blocks: (146 GB/136 GiB)
> sd 4:0:3:0: Attached scsi generic sg4 type 0
> sd 4:0:3:0: [sde] Write Protect is off
> sd 4:0:3:0: [sde] Write cache: enabled, read cache: enabled, supports
> DPO and FUA
>  sde: unknown partition table
> sd 4:0:3:0: [sde] Attached SCSI disk
>
>
> The firmware version is 1.11.
>
> Let me know if you have any other questions, and please tell me whether
> you can confirm the performance degradation with the driver as it stands.
>
>
> David
>
>
> On Mon, Jul 11, 2011 at 9:18 PM, Jack Wang <jack_wang@xxxxxxxxx> wrote:
>> Could you share your fio test scripts? Disk details and the HBA
>> firmware version would also be helpful, if available.
>>
>> Jack
>>>
>>> I have one HBA connected directly to 4 SAS drives ... using a single
>>> one-to-four cable.
>>>
>>>
>>> On Mon, Jul 11, 2011 at 6:27 PM, Jack Wang <jack_wang@xxxxxxxxx> wrote:
>>> >> Hello Jack Wang and Lindar Liu,
>>> >>
>>> >>
>>> >> I am running the pm8001 driver (on applicable hardware, including a
>>> >> multi-core SMP server).
>>> >>
>>> >> When I run on an older kernel -- e.g. 2.6.34.7 -- I get about 73K IOPS
>>> >> via an fio test.
>>> >>
>>> >> When I run a current kernel -- e.g. 2.6.39.2 -- on the same system and
>>> >> same storage I get about 15K IOPS running the same fio test.
>>> >>
>>> >> Perhaps something has changed in the kernel that is not being
>>> >> accounted for?
>>> >> Are you two still maintaining this driver?
>>> >>
>>> >>
>>> >> Regards,
>>> >> David
>>> > [Jack Wang]  Could you give your detailed topology? I will try to
>>> > investigate the performance issue later, but as I recall an Intel
>>> > developer reported on the mailing list that some changes in the block
>>> > layer led to JBOD performance degradation.
>>> >
>>> >
>>
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

