Re: vmware + iscsi + tgt + reservations

Hi Nick,

Yes, VAAI is disabled successfully.

Performance and everything is good.

The only problem is that ONE LUN can only be used on ONE node at the
same time. So the reservations are not working.

There is no ATS and no VAAI active. But of course VMware will still,
just like any system, make SCSI reservations; they just never get
released automatically, I guess because tgt does not support that.
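
( For reference, the VAAI state can be checked on the ESXi host; the
three primitives are exposed as advanced settings, where 0 = disabled:

esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
)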

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at Amtsgericht Hanau
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107


On 02.09.2016 at 11:12, Nick Fisk wrote:
> Have you disabled the VAAI functions in ESXi? I can't remember off the top of my head, but one of them makes everything slow to a crawl.
> 
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Oliver Dzombic
>> Sent: 02 September 2016 09:50
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re:  vmware + iscsi + tgt + reservations
>>
>> Hi,
>>
>> VMFS-5.61 file system spanning 1 partitions.
>> Mode: public
>>
>> The filesystem is working fine ( on the 1st node, where multiple instances are started ), and it continues to work fine after
>> mounting the same LUN on the 2nd node and trying write operations there.
>>
>> So I have no reason to think that anything is corrupted.
>>
>> The problem seems to be that tgt simply does not support reservations.
>>
>> And this way, the reservation of the LUN by the 1st node never got released.
>>
>> I could release it now by force, and maybe that would work, but I can't manually release locks every time vMotion or whatever
>> administrative operation is done.
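>>
>> ( A stale reservation can usually be cleared by hand from an ESXi host
>> with vmkfstools, e.g. for the device named in the log further down:
>>
>> vmkfstools -L lunreset /vmfs/devices/disks/naa.60000000000000000e00000000010001
>>
>> but exactly this kind of manual step is what I want to avoid. )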
>>
>> So if there is no solution with tgt, I would have to try mapping the RBD locally on the iSCSI servers and providing the targets via LIO
>> ( which is a slower and more complex solution, because I have to map the RBD locally ). I would like to avoid that.
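>>
>> ( A rough sketch of that fallback, with pool and image names only as
>> placeholders: map the image with krbd and export the resulting block
>> device through LIO, which does implement SCSI reservations:
>>
>> rbd map rbd/vmware-lun1    # appears as e.g. /dev/rbd0
>> targetcli /backstores/block create name=vmware-lun1 dev=/dev/rbd0
>> targetcli /iscsi create iqn.2016-09.local.iscsi:vmware
>> targetcli /iscsi/iqn.2016-09.local.iscsi:vmware/tpg1/luns create /backstores/block/vmware-lun1
>> )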
>>
>> In that case, maybe it would be better to switch to NFS, but that cannot be made highly available without a lot of configuration
>> complexity ( a loadbalancer in front of everything ).
>>
>> Changing the whole cluster to SuSE because of that is also not an option.
>>
>> --
>> Mit freundlichen Gruessen / Best regards
>>
>> Oliver Dzombic
>> IP-Interactive
>>
>> mailto:info@xxxxxxxxxxxxxxxxx
>>
>> Address:
>>
>> IP Interactive UG ( haftungsbeschraenkt )
>> Zum Sonnenberg 1-3
>> 63571 Gelnhausen
>>
>> HRB 93402 at Amtsgericht Hanau
>> Managing Director: Oliver Dzombic
>>
>> Tax No.: 35 236 3622 1
>> VAT ID: DE274086107
>>
>>
>> On 02.09.2016 at 01:28, Brad Hubbard wrote:
>>> On Fri, Sep 2, 2016 at 7:41 AM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>>>> Hi,
>>>>
>>>> I know this is not really Ceph related anymore, but I guess it could
>>>> be helpful for others too.
>>>>
>>>> I was using:
>>>>
>>>> https://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
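>>>>
>>>> ( i.e. tgt exports the LUN directly from RBD, roughly along these
>>>> lines; the target and image names here are just examples:
>>>>
>>>> tgtadm --lld iscsi --mode target --op new --tid 1 \
>>>>     --targetname iqn.2016-09.local.iscsi:vmware
>>>> tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
>>>>     --bstype rbd --backing-store rbd/vmware-lun1
>>>> )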
>>>>
>>>> and I am currently running into a problem where
>>>>
>>>> ONE LUN
>>>>
>>>> is connected to
>>>>
>>>> TWO nodes ( ESXi 6.0 )
>>>
>>> What filesystem are you using on the LUN?
>>>
>>>>
>>>> And the 2nd node is unable to perform any kind of write operation on
>>>> the ( successfully mounted and readable ) LUN.
>>>
>>> Depending on the filesystem, you just corrupted it by mounting it
>>> concurrently on two hosts.
>>>
>>>>
>>>> It seems this has to do with reservations.
>>>>
>>>> So the question is now, how to solve that.
>>>>
>>>> The VMware log says:
>>>>
>>>> 2016-09-01T21:09:54.281Z cpu18:33538)NMP: nmp_PathDetermineFailure:3002:
>>>> SCSI cmd RESERVE failed on path vmhba37:C0:T0:L1, reservation state
>>>> on device naa.60000000000000000e00000000010001 is unknown.
>>>>
>>>>
>>>> tgtd --version
>>>> 1.0.55
>>>>
>>>> Any help / idea is appreciated!
>>>>
>>>> Thank you!
>>>>
>>>> --
>>>> Mit freundlichen Gruessen / Best regards
>>>>
>>>> Oliver Dzombic
>>>> IP-Interactive
>>>>
>>>> mailto:info@xxxxxxxxxxxxxxxxx
>>>>
>>>> Address:
>>>>
>>>> IP Interactive UG ( haftungsbeschraenkt )
>>>> Zum Sonnenberg 1-3
>>>> 63571 Gelnhausen
>>>>
>>>> HRB 93402 at Amtsgericht Hanau
>>>> Managing Director: Oliver Dzombic
>>>>
>>>> Tax No.: 35 236 3622 1
>>>> VAT ID: DE274086107
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



