Re: [LSF/MM TOPIC] linux servers as a storage server - what's missing?

On 12/22/2011 09:54 PM, Shyam_Iyer@xxxxxxxx wrote:
> 
> 
>> -----Original Message-----
>> From: linux-scsi-owner@xxxxxxxxxxxxxxx [mailto:linux-scsi-
>> owner@xxxxxxxxxxxxxxx] On Behalf Of Vivek Goyal
>> Sent: Thursday, December 22, 2011 10:59 AM
>> To: Iyer, Shyam
>> Cc: rwheeler@xxxxxxxxxx; linux-fsdevel@xxxxxxxxxxxxxxx; linux-
>> scsi@xxxxxxxxxxxxxxx
>> Subject: Re: [LSF/MM TOPIC] linux servers as a storage server - what's
>> missing?
>>
>> On Thu, Dec 22, 2011 at 01:44:16PM +0530, Shyam_Iyer@xxxxxxxx wrote:
>>
>> [..]
>>
>>> Simple asks -
>>> 1) Provide a consistent storage and fs management library that
>>> discourages folks from writing their own userspace storage library.
>>> Include things like fs formatting (fs profiles), transport
>>> configuration (e.g. iscsiadm as a library), thin provisioning
>>> watermarks, cluster management, APIs for cgroups etc.
>>                                  ^^^^^^^^^^^^^^^^
>> For cgroups we have the libcgroup library. Not many people like to
>> use it, though, as cgroup is exported as a filesystem and they prefer
>> to use the normal libc API to traverse and configure cgroups (instead
>> of going through another library). Some examples are libvirt and
>> systemd.
>>
>> Thanks
>> Vivek
> 
> Well, honestly, I think that is a libvirt/systemd issue, and libvirt
> also invokes things like iscsiadm, dcb etc. as binaries :-/
> 
> Someone could always use qemu command lines to invoke KVM/Xen
> directly, but libvirt has saved me many a day by letting me do a
> quick operation without having to work out a qemu command line.
>  
> I am also asking for ideas on how to avoid this fragmentation,
> because in the absence of a common storage management framework
> others are encouraged to roll their own thing, just as libvirt did.
> 
> Does the standard interface for Linux end at the user/kernel boundary
> or at the user/libc boundary? If so, I feel we will continue to lag
> behind other OSes in features because of this model.
> 
StorageAPI _again_.

I was under the impression RH had someone working on it.
(Actually I was trying to give it a go myself, but then got buried
under customer escalations.)

So yes, we know there is a shortcoming.
And yes, we should improve things.

But I fear that another discussion about this will only give us more
insight without actually moving things forward.

What about having a separate session at the storage summit (or even
at the collab summit) to hammer out the requirements here?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@xxxxxxx			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)

