Hi Jake,

Thanks for this. I have been going through it and have a pretty good idea of what you are doing now. I may be missing something in your scripts, but I'm still not quite understanding how you are making sure locking happens with the ESXi ATS SCSI command.

From this slide, it seems that for a true active/active setup the two targets need to be aware of each other and exchange locking information for it to work reliably. I've also watched the video from the Ceph developer summit where this is discussed, and it seems that Ceph and the kernel need changes to allow this locking to be pushed back to the RBD layer so it can be shared. From what I can see browsing the Linux git repo, those patches haven't made it into the mainline kernel yet. Can you shed any light on this?

As tempting as active/active is, I'm wary of using the configuration until I understand how the locking works and whether fringe cases involving multiple ESXi hosts writing to the same LUN through different targets could spell disaster.

Many thanks,
Nick

From: Jake Young [mailto:jak3kaj@xxxxxxxxx]

Yes, it's active/active, and I found that VMware can switch from path to path with no issues or service impact.

I posted some config files here: github.com/jak3kaj/misc

One set is from my LIO nodes, both the primary and secondary configs, so you can see what I needed to make unique. The other set (targets.conf) is from my tgt nodes. They are both 4 LUN configs.

Like I said in my previous email, there is no performance difference between LIO and tgt. The only service I'm running on these nodes is a single iscsi target instance (either LIO or tgt).

Jake

On Wed, Jan 14, 2015 at 8:41 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
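For anyone following the thread, the actual 4-LUN files are in Jake's repo above. Purely as an illustration, a single-LUN tgt target backed directly by an RBD image might look roughly like the sketch below. The IQN, pool, and image names are placeholders (not taken from Jake's configs), and this assumes tgt was built with the rbd backing store; whether his setup uses bs-type rbd or a kernel-mapped /dev/rbd device is only visible in the repo itself.

# /etc/tgt/targets.conf -- illustrative single-LUN sketch, names are placeholders
<target iqn.2015-01.com.example:rbd-lun0>
    driver iscsi
    # rbd backing store is only available if tgt was built with Ceph/librbd support
    bs-type rbd
    # backing-store is given as pool/image
    backing-store rbd/vmware-lun0
</target>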