Re: how to script with targetcli

On 10/14/2013 03:37 PM, Andy Grover wrote:
> On 10/14/2013 01:11 PM, Tregaron Bayly wrote:
>> It was something else.  Using strace and strace -c we could see that
>> targetcli was doing several times the number of syscalls strictly
>> necessary to parse out all the configfs nodes and their values - opening
>> and closing the same files over and over again.  We wrote the skinniest
>> code we could to simply spin through the tree and then write out the
>> correct JSON, and that let us scale up.  This may have been improved by
>> now, since it was a year ago that we built all this out, but the
>> saveconfig time and slow lvm processing were the biggest challenges we
>> had to overcome.
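
As a rough illustration of the approach (a from-scratch sketch, not our
actual tool - the configfs path and the JSON layout here are assumptions
for illustration, not rtslib's saveconfig format), something like this
walks the tree while reading each attribute file exactly once:

#!/usr/bin/env python
# Sketch: dump the target configfs hierarchy as JSON in a single pass,
# opening each attribute file only once.
import json
import os

CONFIGFS_ROOT = "/sys/kernel/config/target"  # assumed mount point

def walk(path):
    node = {}
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.islink(full):
            # configfs uses symlinks (e.g. LUN -> backstore); record the
            # target instead of recursing into it
            node[name] = os.readlink(full)
        elif os.path.isdir(full):
            node[name] = walk(full)
        else:
            try:
                # one open()/read()/close() per attribute file
                with open(full) as f:
                    node[name] = f.read().strip()
            except (IOError, OSError):
                node[name] = None  # some attributes are write-only
    return node

if __name__ == "__main__":
    print(json.dumps(walk(CONFIGFS_ROOT), indent=2))

Running something like that under strace -c is also a quick way to confirm
you are down to roughly one open() per configfs file.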

> The saveconfig (restoreconfig?) performance issue was, I think, addressed
> in January 2013.  So, very interested in any reports using rtslib-fb28 or
> later.
>
> Slow LVM processing?  Can you elaborate on the issue and how you overcame
> it?  (Sorry if a little OT.)  Were you using lvmetad?

Yeah, probably way off topic, so apologies in advance, everybody.

Once you start adding a couple hundred or more logical volumes, every lvm-related operation gets pretty slow, and it degrades further with each subsequent lv - by the time you've got 1,000 targets you can be waiting a minute for pvs, lvdisplay or lvcreate to finish (longer if your disks are busy).  At first I tried to address this by maintaining my own cache of the information so I didn't have to look it up every time, but an lvcreate/lvremove/lvrename could still take minutes, and there was no avoiding those commands.

In the end I realized it was getting slower simply because the tools scan all the block devices in /dev looking for lvm metadata, so the amount of work grew with each additional logical volume.  The maddening part is that it was all wasted effort - lvm metadata lives on physical volumes (a fixed number), not on logical volumes.  I changed lvm.conf to scan /dev/lvm instead of /dev and then wrote udev rules (and patched dracut with similar rules) to create symlinks for pvs in /dev/lvm:

# for any device blkid identifies as an LVM member, add a /dev/lvm/<kernel name> symlink
ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", SYMLINK+="lvm/%k"
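
For reference, the lvm.conf side of that just points the device scan at
the new directory (a sketch of the relevant devices section; adjust for
your own config):

devices {
    # only scan the directory holding our pv symlinks, not all of /dev
    scan = [ "/dev/lvm" ]
}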

With that change the time to run a given lvm command is pretty constant regardless of how many logical volumes you have.  We've got machines with 5,000 lvs that can create new volumes in 3 seconds.


> Any other pain points from anyone, besides the stuff you all mentioned
> already?  Usability or other stuff you miss from using other targets?
>
> Regards -- Andy

