Re: Radosgw and large files

After doing a bit more digging, it seems I was getting a 400-level HTTP response when trying to upload the large file (10 GB).

I was able to work around it by renaming the file (which had no extension) to a .txt file.

I had created the file on a mac using the 'mkfile' command for testing.

I have tested uploading small files that do not have an extension, and I do not run into the same issue.

This may be a bug of some sort, so I will open a bug report when I get a chance.
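
For reference, this is roughly how I reproduce it (using s3cmd here as an example; the bucket name is just a placeholder):

    mkfile 10g bigtestfile                  # extensionless 10 GB test file, created on the mac
    s3cmd put bigtestfile s3://linux        # fails with a 400-level response
    mv bigtestfile bigtestfile.txt
    s3cmd put bigtestfile.txt s3://linux    # succeeds after the rename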

Thanks,

Shain

Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

________________________________________
From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Shain Miley [SMiley@xxxxxxx]
Sent: Saturday, October 26, 2013 8:16 PM
To: derek@xxxxxxxxxxxxxx; ceph-users@xxxxxxxx
Subject: Re:  Radosgw and large files

Derek,
I also just got a 30-day Pro evaluation license for CrossFTP... even though I am using the 'pro' version at this point, I am still getting the same 'permission denied' error.

Can you tell me what client you are using with files over 5 GB, and whether you have anything special in your ceph.conf related to radosgw?

Thanks again,

Shain



Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

________________________________________
From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Shain Miley [SMiley@xxxxxxx]
Sent: Saturday, October 26, 2013 7:25 PM
To: derek@xxxxxxxxxxxxxx; ceph-users@xxxxxxxx
Subject: Re:  Radosgw and large files

I'll try the Pro version of CrossFTP as soon as I have a chance.

Here is the output using s3cmd version 1.1.0-beta3:

root@theneykov:/mnt/samba-rbd/Vantage/Incoming/ascvid# s3cmd -v put --debug 2 20130718_ascvid_cheyennemizeTEST4.mov s3://linux
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: access_key->PK...17_chars...F
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->nprs3.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.nprs3.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->J7...37_chars...Y
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'put' using UTF-8
DEBUG: Unicodising '2' using UTF-8
DEBUG: Unicodising '20130718_ascvid_cheyennemizeTEST4.mov' using UTF-8
DEBUG: Unicodising 's3://linux' using UTF-8
DEBUG: Command: put
INFO: Compiling list of local files...
DEBUG: DeUnicodising u'' using UTF-8
DEBUG: DeUnicodising u'2' using UTF-8
INFO: Compiling list of local files...
DEBUG: DeUnicodising u'' using UTF-8
DEBUG: DeUnicodising u'20130718_ascvid_cheyennemizeTEST4.mov' using UTF-8
DEBUG: Unicodising '20130718_ascvid_cheyennemizeTEST4.mov' using UTF-8
DEBUG: Unicodising '20130718_ascvid_cheyennemizeTEST4.mov' using UTF-8
INFO: Applying --exclude/--include
DEBUG: CHECK: 20130718_ascvid_cheyennemizeTEST4.mov
DEBUG: PASS: 20130718_ascvid_cheyennemizeTEST4.mov
INFO: Summary: 1 local files to upload
DEBUG: Unicodising '20130718_ascvid_cheyennemizeTEST4.mov' using UTF-8
WARNING: File can not be uploaded: 20130718_ascvid_cheyennemizeTEST4.mov: Permission denied

We will need a command-line tool to use in cron jobs, etc., so I am hoping I can get this working soon.
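
For example, something along these lines is the kind of crontab entry we have in mind (the path, bucket, and log file are placeholders):

    # upload the nightly archive to radosgw at 2am
    0 2 * * * s3cmd put /mnt/samba-rbd/backup/nightly.tar s3://linux/ >> /var/log/s3-upload.log 2>&1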

Thanks again for the help so far.

Shain


Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

________________________________________
From: Derek Yarnell [derek@xxxxxxxxxxxxxx]
Sent: Saturday, October 26, 2013 6:46 PM
To: Shain Miley; ceph-users@xxxxxxxx
Subject: Re:  Radosgw and large files

Hi Shain,

Yes, we have tested and have working S3 multipart support for files >5GB
(RHEL64 / 0.67.4).

However, it would seem that CrossFTP does not support multipart unless
you have the Pro version.  DragonDisk gives the error I have seen when
using a plain PUT rather than multipart, EntityTooLarge, so my guess is
that it is not doing multipart either.  s3cmd's source shows that it does
support multipart; have you tried giving it a --debug 2, which I believe
will output boto's debug output?  That should tell you more about whether
it is correctly doing a multipart upload.
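
Something along these lines is the invocation I have in mind (file and bucket names are placeholders; note that depending on the s3cmd version, --debug may be a bare flag that takes no level argument):

    # force multipart with an explicit chunk size and dump full debug output
    s3cmd --debug put --multipart-chunk-size-mb=100 bigfile.bin s3://bucket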

Thanks,
derek

On 10/26/13, 5:10 PM, Shain Miley wrote:
> Hi,
>
> I am wondering if anyone has successfully been able to upload files
> larger than 5GB using radosgw.
>
> I have tried using various clients, including dragondisk, crossftp,
> s3cmd, etc...all of them have failed with a 'permission denied' response.
>
> Each of the clients says it supports multipart (and it appears that
> they do, as files larger than 15 MB get split into multiple pieces);
> however, I can only upload files smaller than 5 GB up to this point.
>
> I am using 0.67.4 and Ubuntu 12.04.
>
> Thanks in advance,
>
> Shain
>
> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
> smiley@xxxxxxx | 202.513.3649
>


--
---
Derek T. Yarnell
University of Maryland
Institute for Advanced Computer Studies


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



