Fwd: Problem with LIO, OCManager, or Config?

Accidentally replied off-list.


---------- Forwarded message ----------
From: Schlacta, Christ <aarcane@xxxxxxxxxxx>
Date: Thu, Mar 31, 2016 at 12:29 AM
Subject: Re: Problem with LIO, OCManager, or Config?
To: "Nicholas A. Bellinger" <nab@xxxxxxxxxxxxxxx>


Thank you Nicholas,

On Wed, Mar 30, 2016 at 10:59 PM, Nicholas A. Bellinger
<nab@xxxxxxxxxxxxxxx> wrote:
> Hi Christ,
>
> On Wed, 2016-03-16 at 19:36 -0700, Schlacta, Christ wrote:
>> I have most of my targets exported to Windows desktops at the moment.
>> My first goal is to centralize big games onto a central file server
>> with an SSD cache (check).  I have a few desktops running Emulex
>> cards, using OneCommand Manager to manage them all, and on the
>> server side, most volumes are snapshot+clone images from a single
>> shared master, so that updates take up space but the original install
>> is shared (right now circa 1 GB per machine, with 250 GB shared).
>> This works well, but I've encountered some strange problems on the
>> management side, and I want to know where the problem lies: is it
>> with LIO in general, with my LIO config, with OneCommand Manager from
>> Emulex, or possibly with the Windows Emulex drivers?  If it is a
>> client bug and there's no easy server-side workaround, I'll
>> contact Emulex, but I'm hopeful someone knows exactly what's wrong and
>> can tell me how to fix it.
>
> Just curious if you were able to sort out this Emulex OCM + MSFT host
> side issue using block_size=4k..?

No joy; I still have this issue.  I worked around it with a read-only
LUN 0 using a 512-byte block size.
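For reference, a minimal sketch of what that workaround backstore could look like, in the same config syntax as the file quoted below. The disk name, path, size, and wwn here are placeholders, not my actual values, and the read-only flag itself is applied at the LUN mapping level, which this fragment doesn't show:

```
storage fileio {
    disk dummy.lun0 {
        buffered no
        path /vdisk/dummy/img
        size 1.0MB
        wwn 00000000-0000-0000-0000-000000000000
        attribute {
            block_size 512
        }
    }
}
```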

>
> Btw, AFAICT 4k sector support is a Windows 8 + Server 2012 feature, I
> assume you're running one of those, right..?

Windows 10.  Need to test Server 2012, but I've been lazy.

>
>>
>> The problem is twofold.  First, any LUN exported as 4K shows in OCM
>> with "n/a" in EVERY field of the LUN Information tab.  It's as if
>> OCM acknowledges that it exists, but knows nothing about it.  Other
>> than this, it functions sufficiently at the OS level, in that I can
>> read and write data to the drive.
>>
>> Second, if LUN 0 is a 4K drive, OCM shows absolutely no information
>> about any LUNs, and actually goes so far as to say "No LUNs are
>> available", even as I happily run Diablo 3 from my SAN.  I assume
>> this second facet is in fact an OCM bug and not a LIO or config bug,
>> but I hope fixing the first will resolve this as well.
>>
>> Finally, it's worth noting that 4K volumes perform more than 4x
>> faster on random IOPS and sequential reads, in large part because of
>> the backing ZFS filesystem and COW operations on a wide raidz pool.
>> I'd get better performance with an even larger block size, but alas,
>> that introduces other issues on the Windows side.
>
> Interesting..

That's an intrinsic property of COW filesystems and RAID stripes.
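A toy model of that effect (my own simplification, not anything taken from LIO or ZFS internals): when the exported block size is smaller than the backing ZFS block, every small write forces a read-modify-write cycle under COW, while 4K-aligned writes map cleanly onto the backing blocks.

```python
# Illustrative only: count read-modify-write cycles when the guest
# block size is smaller than the backing ZFS block (assumed 4K here).
def rmw_ops(io_bytes, guest_block, backing_block=4096):
    """Return how many writes become read-modify-write cycles."""
    writes = io_bytes // guest_block
    if guest_block < backing_block:
        # each sub-block write must read the old backing block,
        # merge the new data, and write the whole block back (COW)
        return writes
    return 0

# 1 MiB of random writes: 2048 RMW cycles at 512B, none at 4K
assert rmw_ops(1 << 20, 512) == 2048
assert rmw_ops(1 << 20, 4096) == 0
```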

>
>>
>> Below is an inline copy of my config file, with some superfluous
>> snippets removed.  All attributes and parameters are preserved to
>> ensure nothing important is omitted.
>>
>> Thank you for any and all assistance.  I'll probably have 1 more
>> request for help after this one is resolved.  Wish there was something
>> like an issue tracker to make this easier.
>>
>> storage fileio {
>>     disk common.empty {
>>         buffered no
>>         path /vdisk/common/img
>>         size 1.0GB
>>         wwn 31b83880-1ea3-40d6-a72e-3547361a6e15
>>         attribute {
>>             block_size 512
>>             emulate_3pc yes
>>             emulate_caw yes
>>             emulate_dpo yes
>>             emulate_fua_read yes
>>             emulate_fua_write yes
>>             emulate_model_alias yes
>>             emulate_rest_reord no
>>             emulate_tas yes
>>             emulate_tpu no
>>             emulate_tpws no
>>             emulate_ua_intlck_ctrl no
>>             emulate_write_cache no
>>             enforce_pr_isids yes
>>             fabric_max_sectors 8192
>>             is_nonrot yes
>>             max_unmap_block_desc_count 1
>>             max_unmap_lba_count 8192
>>             max_write_same_len 4096
>>             optimal_sectors 16384
>>             queue_depth 128
>>             unmap_granularity 1
>>             unmap_granularity_alignment 0
>>         }
>>     }
>>     disk izanami.games {
>>         buffered no
>>         path /vdisk/izanami/games/img
>>         size 256.0GB
>>         wwn 4aba9819-afa7-4746-b03b-d6456d4fb802
>>         attribute {
>>             block_size 4096
>>             emulate_3pc yes
>>             emulate_caw yes
>>             emulate_dpo yes
>>             emulate_fua_read yes
>>             emulate_fua_write yes
>>             emulate_model_alias yes
>>             emulate_rest_reord no
>>             emulate_tas yes
>>             emulate_tpu no
>>             emulate_tpws no
>>             emulate_ua_intlck_ctrl no
>>             emulate_write_cache no
>>             enforce_pr_isids yes
>>             fabric_max_sectors 8192
>>             is_nonrot yes
>>             max_unmap_block_desc_count 1
>>             max_unmap_lba_count 8192
>>             max_write_same_len 4096
>>             optimal_sectors 2048
>
> Might be worth-while to try with optimal_sectors=4096..

targetcli actually added "optimal_sectors=16384" to my config, and I
had to lower it incrementally by powers of two before the config would
load.  I need to post another issue about targetcli/configfs failing
on "invalid parameters" when the values clearly should be valid.  But
I think that's a targetcli bug, and it might be fixed in -fb.

Incidentally, it occurs to me that these power-of-two parameters might
be better handled as a shift exponent instead of raw integers, similar
to how ZFS handles ashift.
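A quick sketch of that idea (hypothetical helpers, not part of targetcli): storing the exponent instead of the byte count makes non-power-of-two values unrepresentable, the same way ashift=12 means 4K sectors in ZFS.

```python
# Hypothetical illustration: power-of-two sizes as shift exponents.
def shift_to_size(shift):
    """Convert a shift exponent to a byte count, e.g. 12 -> 4096."""
    return 1 << shift

def size_to_shift(size):
    """Convert a power-of-two byte count back to its exponent."""
    if size <= 0 or size & (size - 1):
        raise ValueError(f"{size} is not a power of two")
    return size.bit_length() - 1

assert shift_to_size(12) == 4096   # like ZFS ashift=12 (4K sectors)
assert size_to_shift(512) == 9     # a 512B block is shift 9
```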

>
> Beyond that, the backend configuration looks as expected.
>

I actually had to manually remove several invalid lines and adjust
some values in the config generated by targetcli before it would
load.  I'll get back with more details if it's a kernel bug I should
report, or take it to targetcli if it's their problem, but using the
raw Python shell it still fails with the default generated config.


