[rhos-list] [gluster-swift] Gluster UFO 3.4 swift Multi tenant question

This is great! Thank you. The only issue is that we use GitHub as a 
public repo and we do not use GitHub for our normal development 
workflow. Do you mind resubmitting the change using the development 
workflow described here:
https://github.com/gluster/gluster-swift/blob/master/doc/markdown/dev_guide.md
More information can be found at https://launchpad.net/gluster-swift
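
Roughly, the submission looks something like this (a sketch only, assuming the
dev guide's Gerrit/git-review style flow; the branch name is made up and the
guide above is the authoritative reference):

    # clone the project and work on a topic branch
    git clone https://github.com/gluster/gluster-swift.git
    cd gluster-swift
    git checkout -b gen-builders-mount-ip    # hypothetical branch name

    # make the change, run the tests, then commit and submit it for review
    git commit -a
    git review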

- Luis

On 09/19/2013 05:23 PM, Paul Robert Marino wrote:
> Thank you everyone for your help, especially Luis.
>
> I tested the RPM and it went well; everything is working now.
>
> I did have to use the tenant IDs as the volume names. I may submit an
> update to the documentation to clarify this for people.
>
> So, in other words, the volume names have to match the output of:
>
> "
> keystone tenant-list|grep -v + | \
> grep -v -P '^\|\s+id\s+\|\s+name\s+\|\s+enabled\s+\|$' | \
> grep -v -P '^\w+:' | awk '{print $2}'
> "
>
>
> I've created an updated copy of gluster-swift-gen-builders that grabs
> the value of mount_ip from /etc/swift/fs.conf and posted it on GitHub;
> you should see a pull request on the site for the change.
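>
> The gist of the change is roughly the following (a sketch of the idea, not
> the exact patch):
>
> "
> # read mount_ip from /etc/swift/fs.conf, falling back to 127.0.0.1 if unset
> mount_ip=$(awk -F= '/^mount_ip/ {gsub(/[[:space:]]/, "", $2); print $2}' /etc/swift/fs.conf)
> mount_ip=${mount_ip:-127.0.0.1}
> "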
>
> Thank you everyone for your help.
>
> On Tue, Sep 17, 2013 at 4:38 PM, Paul Robert Marino <prmarino1 at gmail.com> wrote:
>> Luis
>> Thanks for the timely response.
>>
>> On Tue, Sep 17, 2013 at 1:52 PM, Luis Pabon <lpabon at redhat.com> wrote:
>>> On 09/17/2013 11:13 AM, Paul Robert Marino wrote:
>>>> Luis
>>>> Well, that's interesting, because it was my impression that Gluster UFO
>>>> 3.4 was based on the Grizzly version of Swift.
>>> [LP] Sorry, the gluster-ufo RPM is Essex only.
>> [PRM] The source of my confusion was here
>> http://www.gluster.org/community/documentation/index.php/Features34
>> and here http://www.gluster.org/2013/06/glusterfs-3-4-and-swift-where-are-all-the-pieces/
>> These pages on the gluster site should probably be updated to reflect
>> the changes.
>>
>>
>>>
>>>> Also, I was previously unaware of this new RPM, which doesn't seem to be
>>>> in a repo anywhere.
>>> [LP] gluster-swift project RPMs have been submitted to Fedora and are
>>> currently being reviewed.
>> [PRM] Cool. If they are in the EPEL testing repo I'll look for them
>> there, because I would rather pull the properly signed EPEL RPMs if
>> they exist, just to make node deployments easier. If not, I'll ask some
>> of my friends offline if they can help expedite it.
>>
>>>
>>>> Also, there is a line in this new howto that is extremely unclear:
>>>>
>>>> "
>>>> /usr/bin/gluster-swift-gen-builders test
>>>> "
>>>> In place of "test", what should go there: the tenant ID string,
>>>> the tenant name, or just a generic volume you can name whatever you
>>>> want? In other words, how should the Gluster volumes be named?
>>> [LP] We will clarify that in the quick start guide.  Thank you for pointing
>>> it out.  While we update the community site, please refer to the
>>> documentation available here http://goo.gl/bQFI8o for a usage guide.
>>>
>>> As for the tool, the format is:
>>> gluster-swift-gen-builders [VOLUME] [VOLUME...]
>>>
>>> Where VOLUME is the name of the GlusterFS volume to use for object storage.
>>> For example, if the following two GlusterFS volumes, volume1 and volume2,
>>> need to be accessed over Swift, then you can type the following:
>>>
>>> # gluster-swift-gen-builders volume1 volume2
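>>>
>>> If it runs cleanly, a quick sanity check (assuming the rings are written to
>>> the usual Swift location) is to confirm the ring files exist:
>>>
>>> # ls /etc/swift/*.ring.gz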
>> [PRM] That part I understood; however, it doesn't answer the question exactly.
>>
>> Correct me if I'm wrong, but looking over the code briefly it looks as
>> though the volume name needs to be the same as the tenant ID number,
>> like it did with Gluster UFO 3.3. So, for example, if I do a
>> "keystone tenant-list" and see tenant1 with an ID of
>> "f6da0a8151ff43b7be10d961a20c94d6", then I would need to create a
>> volume named f6da0a8151ff43b7be10d961a20c94d6.
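>>
>> i.e. something along the lines of (the brick host and path here are just
>> placeholders for illustration):
>>
>> # gluster volume create f6da0a8151ff43b7be10d961a20c94d6 gfs01:/bricks/tenant1
>> # gluster volume start f6da0a8151ff43b7be10d961a20c94d6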
>>
>> If I could name the volumes whatever I want, or give them the same name
>> as the tenant, that would be great, because it would be easier to understand
>> for other SAs who are not directly working with OpenStack but may need to
>> mount the volumes; but it's not urgently needed.
>>
>> One thing I was glad to see: with Gluster UFO 3.3 I had to add
>> mount points to /etc/fstab for each volume and manually create the
>> directories for the mount points; this looks to have been corrected in
>> gluster-swift.
>>
>>> For more information please read: http://goo.gl/gd8LkW
>>>
>>> Let us know if you have any more questions or comments.
>> [PRM] I may fork the GitHub repo and add some changes that may be
>> beneficial, so they can be reviewed and possibly merged.
>> For example, it would be nice if the gluster-swift-gen-builders script
>> used the value of the mount_ip field in /etc/swift/fs.conf instead of
>> 127.0.0.1 if it's defined.
>> I might also make a more robust version that allows create, add,
>> remove, and list options, roughly like the sketch below.
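>>
>> Purely as a sketch of the interface I have in mind (the subcommand names and
>> the ring handling here are hypothetical, not working code):
>>
>> "
>> #!/bin/bash
>> # hypothetical wrapper around the existing tools
>> case "$1" in
>>     create) shift; gluster-swift-gen-builders "$@" ;;        # rebuild rings for the given volumes
>>     list)   swift-ring-builder /etc/swift/object.builder ;;  # show the current object ring
>>     add|remove) echo "not implemented yet" >&2; exit 1 ;;
>>     *) echo "usage: $0 {create|add|remove|list} [volume...]" >&2; exit 1 ;;
>> esac
>> "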
>>
>>
>> I'll do testing tomorrow and let everyone know how it goes.
>>
>>
>>> - Luis
>>>
>>>>
>>>> On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon <lpabon at redhat.com> wrote:
>>>>> First thing I can see is that you have the Essex-based gluster-ufo-*
>>>>> packages, which have been replaced by the gluster-swift project.  We are
>>>>> currently in the process of replacing gluster-ufo-* with RPMs from the
>>>>> gluster-swift project in Fedora.
>>>>>
>>>>> Please check out the following quickstart guide, which shows how to download
>>>>> the Grizzly version of gluster-swift:
>>>>>
>>>>> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md
>>>>>
>>>>> For more information please visit: https://launchpad.net/gluster-swift
>>>>>
>>>>> - Luis
>>>>>
>>>>>
>>>>> On 09/16/2013 05:02 PM, Paul Robert Marino wrote:
>>>>>
>>>>> Sorry for the delay in reporting the details. I got temporarily pulled
>>>>> off the project and dedicated to a different project that my employer
>>>>> considered a higher priority. I'm just getting back to doing my normal
>>>>> work today.
>>>>>
>>>>> First, here are the RPMs I have installed:
>>>>> "
>>>>>    rpm -qa |grep -P -i '(gluster|swift)'
>>>>> glusterfs-libs-3.4.0-8.el6.x86_64
>>>>> glusterfs-server-3.4.0-8.el6.x86_64
>>>>> openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch
>>>>> openstack-swift-proxy-1.8.0-2.el6.noarch
>>>>> glusterfs-3.4.0-8.el6.x86_64
>>>>> glusterfs-cli-3.4.0-8.el6.x86_64
>>>>> glusterfs-geo-replication-3.4.0-8.el6.x86_64
>>>>> glusterfs-api-3.4.0-8.el6.x86_64
>>>>> openstack-swift-1.8.0-2.el6.noarch
>>>>> openstack-swift-container-1.8.0-2.el6.noarch
>>>>> openstack-swift-object-1.8.0-2.el6.noarch
>>>>> glusterfs-fuse-3.4.0-8.el6.x86_64
>>>>> glusterfs-rdma-3.4.0-8.el6.x86_64
>>>>> openstack-swift-account-1.8.0-2.el6.noarch
>>>>> glusterfs-ufo-3.4.0-8.el6.noarch
>>>>> glusterfs-vim-3.2.7-1.el6.x86_64
>>>>> python-swiftclient-1.4.0-1.el6.noarch
>>>>>
>>>>> Here are some key config files. Note that I've changed the passwords
>>>>> and hostnames I'm using.
>>>>> "
>>>>>    cat /etc/swift/account-server.conf
>>>>> [DEFAULT]
>>>>> mount_check = true
>>>>> bind_port = 6012
>>>>> user = root
>>>>> log_facility = LOG_LOCAL2
>>>>> devices = /swift/tenants/
>>>>>
>>>>> [pipeline:main]
>>>>> pipeline = account-server
>>>>>
>>>>> [app:account-server]
>>>>> use = egg:gluster_swift_ufo#account
>>>>> log_name = account-server
>>>>> log_level = DEBUG
>>>>> log_requests = true
>>>>>
>>>>> [account-replicator]
>>>>> vm_test_mode = yes
>>>>>
>>>>> [account-auditor]
>>>>>
>>>>> [account-reaper]
>>>>>
>>>>> "
>>>>>
>>>>> "
>>>>>    cat /etc/swift/container-server.conf
>>>>> [DEFAULT]
>>>>> devices = /swift/tenants/
>>>>> mount_check = true
>>>>> bind_port = 6011
>>>>> user = root
>>>>> log_facility = LOG_LOCAL2
>>>>>
>>>>> [pipeline:main]
>>>>> pipeline = container-server
>>>>>
>>>>> [app:container-server]
>>>>> use = egg:gluster_swift_ufo#container
>>>>>
>>>>> [container-replicator]
>>>>> vm_test_mode = yes
>>>>>
>>>>> [container-updater]
>>>>>
>>>>> [container-auditor]
>>>>>
>>>>> [container-sync]
>>>>> "
>>>>>
>>>>> "
>>>>>    cat /etc/swift/object-server.conf
>>>>> [DEFAULT]
>>>>> mount_check = true
>>>>> bind_port = 6010
>>>>> user = root
>>>>> log_facility = LOG_LOCAL2
>>>>> devices = /swift/tenants/
>>>>>
>>>>> [pipeline:main]
>>>>> pipeline = object-server
>>>>>
>>>>> [app:object-server]
>>>>> use = egg:gluster_swift_ufo#object
>>>>>
>>>>> [object-replicator]
>>>>> vm_test_mode = yes
>>>>>
>>>>> [object-updater]
>>>>>
>>>>> [object-auditor]
>>>>> "
>>>>>
>>>>> "
>>>>> cat /etc/swift/proxy-server.conf
>>>>> [DEFAULT]
>>>>> bind_port = 8080
>>>>> user = root
>>>>> log_facility = LOG_LOCAL1
>>>>> log_name = swift
>>>>> log_level = DEBUG
>>>>> log_headers = True
>>>>>
>>>>> [pipeline:main]
>>>>> pipeline = healthcheck cache authtoken keystone proxy-server
>>>>>
>>>>> [app:proxy-server]
>>>>> use = egg:gluster_swift_ufo#proxy
>>>>> allow_account_management = true
>>>>> account_autocreate = true
>>>>>
>>>>> [filter:tempauth]
>>>>> use = egg:swift#tempauth
>>>>> # Here you need to add users explicitly. See the OpenStack Swift Deployment
>>>>> # Guide for more information. The user and user64 directives take the
>>>>> # following form:
>>>>> #     user_<account>_<username> = <key> [group] [group] [...] [storage_url]
>>>>> #     user64_<account_b64>_<username_b64> = <key> [group] [group] [...] [storage_url]
>>>>> # Where you use user64 for accounts and/or usernames that include underscores.
>>>>> #
>>>>> # NOTE (and WARNING): The account name must match the device name specified
>>>>> # when generating the account, container, and object build rings.
>>>>> #
>>>>> # E.g.
>>>>> #     user_ufo0_admin = abc123 .admin
>>>>>
>>>>> [filter:healthcheck]
>>>>> use = egg:swift#healthcheck
>>>>>
>>>>> [filter:cache]
>>>>> use = egg:swift#memcache
>>>>>
>>>>>
>>>>> [filter:keystone]
>>>>> use = egg:swift#keystoneauth
>>>>> #paste.filter_factory = keystone.middleware.swift_auth:filter_factory
>>>>> operator_roles = Member,admin,swiftoperator
>>>>>
>>>>>
>>>>> [filter:authtoken]
>>>>> paste.filter_factory = keystone.middleware.auth_token:filter_factory
>>>>> auth_host = keystone01.vip.my.net
>>>>> auth_port = 35357
>>>>> auth_protocol = http
>>>>> admin_user = swift
>>>>> admin_password = PASSWORD
>>>>> admin_tenant_name = service
>>>>> signing_dir = /var/cache/swift
>>>>> service_port = 5000
>>>>> service_host = keystone01.vip.my.net
>>>>>
>>>>> [filter:swiftauth]
>>>>> use = egg:keystone#swiftauth
>>>>> auth_host = keystone01.vip.my.net
>>>>> auth_port = 35357
>>>>> auth_protocol = http
>>>>> keystone_url = https://keystone01.vip.my.net:5000/v2.0
>>>>> admin_user = swift
>>>>> admin_password = PASSWORD
>>>>> admin_tenant_name = service
>>>>> signing_dir = /var/cache/swift
>>>>> keystone_swift_operator_roles = Member,admin,swiftoperator
>>>>> keystone_tenant_user_admin = true
>>>>>
>>>>> [filter:catch_errors]
>>>>> use = egg:swift#catch_errors
>>>>> "
>>>>>
>>>>> "
>>>>> cat /etc/swift/swift.conf
>>>>> [DEFAULT]
>>>>>
>>>>>
>>>>> [swift-hash]
>>>>> # random unique string that can never change (DO NOT LOSE)
>>>>> swift_hash_path_suffix = gluster
>>>>> #3d60c9458bb77abe
>>>>>
>>>>>
>>>>> # The swift-constraints section sets the basic constraints on data
>>>>> # saved in the swift cluster.
>>>>>
>>>>> [swift-constraints]
>>>>>
>>>>> # max_file_size is the largest "normal" object that can be saved in
>>>>> # the cluster. This is also the limit on the size of each segment of
>>>>> # a "large" object when using the large object manifest support.
>>>>> # This value is set in bytes. Setting it to lower than 1MiB will cause
>>>>> # some tests to fail. It is STRONGLY recommended to leave this value at
>>>>> # the default (5 * 2**30 + 2).
>>>>>
>>>>> # FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting
>>>>> # web service handle such a size? I think with UFO, we need to keep with the
>>>>> # default size from Swift and encourage users to research what size their web
>>>>> # services infrastructure can handle.
>>>>>
>>>>> max_file_size = 18446744073709551616
>>>>>
>>>>>
>>>>> # max_meta_name_length is the max number of bytes in the utf8 encoding
>>>>> # of the name portion of a metadata header.
>>>>>
>>>>> #max_meta_name_length = 128
>>>>>
>>>>>
>>>>> # max_meta_value_length is the max number of bytes in the utf8 encoding
>>>>> # of a metadata value
>>>>>
>>>>> #max_meta_value_length = 256
>>>>>
>>>>>
>>>>> # max_meta_count is the max number of metadata keys that can be stored
>>>>> # on a single account, container, or object
>>>>>
>>>>> #max_meta_count = 90
>>>>>
>>>>>
>>>>> # max_meta_overall_size is the max number of bytes in the utf8 encoding
>>>>> # of the metadata (keys + values)
>>>>>
>>>>> #max_meta_overall_size = 4096
>>>>>
>>>>>
>>>>> # max_object_name_length is the max number of bytes in the utf8 encoding of an
>>>>> # object name: Gluster FS can handle much longer file names, but the length
>>>>> # between the slashes of the URL is handled below. Remember that most web
>>>>> # clients can't handle anything greater than 2048, and those that do are
>>>>> # rather clumsy.
>>>>>
>>>>> max_object_name_length = 2048
>>>>>
>>>>> # max_object_name_component_length (GlusterFS) is the max number of bytes in
>>>>> # the utf8 encoding of an object name component (the part between the
>>>>> # slashes); this is a limit imposed by the underlying file system (for XFS it
>>>>> # is 255 bytes).
>>>>>
>>>>> max_object_name_component_length = 255
>>>>>
>>>>> # container_listing_limit is the default (and max) number of items
>>>>> # returned for a container listing request
>>>>>
>>>>> #container_listing_limit = 10000
>>>>>
>>>>>
>>>>> # account_listing_limit is the default (and max) number of items returned
>>>>> # for an account listing request
>>>>>
>>>>> #account_listing_limit = 10000
>>>>>
>>>>>
>>>>> # max_account_name_length is the max number of bytes in the utf8 encoding of
>>>>> # an account name: Gluster FS Filename limit (XFS limit?), must be the same
>>>>> # size as max_object_name_component_length above.
>>>>>
>>>>> max_account_name_length = 255
>>>>>
>>>>>
>>>>> # max_container_name_length is the max number of bytes in the utf8 encoding
>>>>> # of a container name: Gluster FS Filename limit (XFS limit?), must be the
>>>>> # same size as max_object_name_component_length above.
>>>>>
>>>>> max_container_name_length = 255
>>>>>
>>>>> "
>>>>>
>>>>>
>>>>> The volumes
>>>>> "
>>>>>    gluster volume list
>>>>> cindervol
>>>>> unified-storage-vol
>>>>> a07d2f39117c4e5abdeba722cf245828
>>>>> bd74a005f08541b9989e392a689be2fc
>>>>> f6da0a8151ff43b7be10d961a20c94d6
>>>>> "
>>>>>
>>>>> If I run the command
>>>>> "
>>>>>    gluster-swift-gen-builders unified-storage-vol
>>>>> a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc
>>>>> f6da0a8151ff43b7be10d961a20c94d6
>>>>> "
>>>>>
>>>>> the gluster-swift-gen-builders script only takes the first option and
>>>>> ignores the rest, because of a change in the script in this version
>>>>> compared to the version I got from
>>>>> http://repos.fedorapeople.org/repos/kkeithle/glusterfs/
>>>>>
>>>>> Other than the location of the config files, none of the changes I've
>>>>> made are functionally different from the ones mentioned in
>>>>>
>>>>> http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
>>>>>
>>>>> The result is that the first volume, named "unified-storage-vol", winds
>>>>> up being used for everything regardless of the tenant, and users can
>>>>> see and manage each other's objects regardless of what tenant they are
>>>>> members of, whether through the swift command or via Horizon.
>>>>>
>>>>> In a way this is a good thing for me; it simplifies things significantly,
>>>>> and it would be fine if it just created a directory for each tenant and
>>>>> only allowed each user to access their individual directory, not the
>>>>> whole Gluster volume.
>>>>> By the way, "seeing everything" includes the service tenant's data, so
>>>>> unprivileged users can delete Glance images without being a member of
>>>>> the service group.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino <prmarino1 at gmail.com>
>>>>> wrote:
>>>>>
>>>>> Well, I'll give you the full details in the morning, but simply put: I used
>>>>> the stock ring builder script that came with the 3.4 RPMs. The old version
>>>>> from 3.3 took the list of volumes and would add all of them; the version
>>>>> with 3.4 only takes the first one.
>>>>>
>>>>> I ran the script expecting the same behavior, but instead everything
>>>>> used the first volume in the list.
>>>>>
>>>>> Now, I knew from the docs I read that per-tenant directories in a single
>>>>> volume were one possible plan for 3.4 to deal with the scaling issue with a
>>>>> large number of tenants, so when I saw the difference in the script and that
>>>>> it worked, I just assumed that this had been done and I had missed something.
>>>>>
>>>>>
>>>>>
>>>>> -- Sent from my HP Pre3
>>>>>
>>>>> ________________________________
>>>>> On Sep 2, 2013 20:55, Ramana Raja <rraja at redhat.com> wrote:
>>>>>
>>>>> Hi Paul,
>>>>>
>>>>> Currently, gluster-swift doesn't support the feature of multiple
>>>>> accounts/tenants accessing the same volume. Each tenant still needs his own
>>>>> gluster volume. So I'm wondering how you were able to observe the reported
>>>>> behaviour.
>>>>>
>>>>> How did you prepare the ringfiles for the different tenants, which use the
>>>>> same gluster volume? Did you change the configuration of the servers? Also,
>>>>> how did you access the files that you mention? It'd be helpful if you could
>>>>> share the commands you used to perform these actions.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Ram
>>>>>
>>>>>
>>>>> ----- Original Message -----
>>>>> From: "Vijay Bellur" <vbellur at redhat.com>
>>>>> To: "Paul Robert Marino" <prmarino1 at gmail.com>
>>>>> Cc: rhos-list at redhat.com, "Luis Pabon" <lpabon at redhat.com>, "Ramana Raja"
>>>>> <rraja at redhat.com>, "Chetan Risbud" <crisbud at redhat.com>
>>>>> Sent: Monday, September 2, 2013 4:17:51 PM
>>>>> Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question
>>>>>
>>>>> On 09/02/2013 01:39 AM, Paul Robert Marino wrote:
>>>>>
>>>>> I have Gluster UFO installed as a back end for swift from here
>>>>> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/
>>>>> with RDO 3
>>>>>
>>>>> It's working well except for one thing. All of the tenants are seeing
>>>>> one Gluster volume, which is somewhat nice, especially when compared
>>>>> to the old 3.3 behavior of creating one volume per tenant, named after
>>>>> the tenant ID number.
>>>>>
>>>>> The problem is that I expected to see a subdirectory created under the
>>>>> volume root for each tenant, but instead what I'm seeing is that all of
>>>>> the tenants can see the root of the Gluster volume. The result is that
>>>>> all of the tenants can access each other's files and even delete them.
>>>>> Even scarier is that the tenants can see and delete each other's
>>>>> Glance images and snapshots.
>>>>>
>>>>> Can anyone suggest options to look at or documents to read to try to
>>>>> figure out how to modify the behavior?
>>>>>
>>>>> Adding gluster swift developers who might be able to help.
>>>>>
>>>>> -Vijay
>>>>>
>>>>>
