On Tue, Feb 19, 2013 at 1:58 PM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
> On 02/19/2013 01:33 PM, Sam Lang wrote:
>
>> These two values are too large. There's currently a limit in the
>> cephfs tool that doesn't support setting an object size greater than
>> 4GB (UINT_MAX).
>>
>>>
>>> Your line verbatim:
>>>
>>> # cephfs /ceph/blastdb set_layout -s $[1024*1024*1024*2]
>>> Error setting layout: Invalid argument
>>>
>>> Pre-calculated:
>>>
>>> # cephfs /ceph/blastdb set_layout -s 2147483648
>>> Error setting layout: Invalid argument
>>
>> These two are a different error. Is /ceph/blastdb a directory? Which
>> version of ceph are you using?
>
> OK, let's backtrack a little.
>
> I have a 32GB (at the last count) haystack. It is split into >=2GB files
> because of a limit in our "needle search" application. I want all of it
> to be local on each host running the application.
>
> So from what you're saying, I have to set_layout on every file
> individually; I can't set it on the haystack directory because it's too
> big. Correct?

No, you shouldn't have to do that; I'm just trying to narrow down and
diagnose the failure that you're seeing. This might be something that has
already been fixed, depending on which version you're using.

>
> The haystack is being generated before the run, so I don't know how many
> files there are. So if I understand correctly:
> - I create the files,
> - set_layout for each,
> - wait for them to get copied to every host (osd)?

Your best bet is to set the layout on the parent directory. But again,
none of this will do any good if htcondor schedules jobs on nodes without
awareness of which files a job will access and where those files are
located.

>
> Ceph is bobtail from rpms; the kernel is more of a problem, as it's a
> stock elrepo build of 3.0 (.65 at the moment).

OK, so this may be due to that old(ish) kernel. Sounds like you got cephfs
(mostly) working, though...

-sam

>
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
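
For reference, a minimal sketch of the directory-level approach Sam suggests,
assuming the same bobtail-era cephfs tool syntax quoted above and keeping the
object size below the 4GB (UINT_MAX) limit. The 1GB value and the example
filename are illustrative, not taken from the thread:

# cephfs /ceph/blastdb set_layout -s $[1024*1024*1024]       (1GB object size, under UINT_MAX)
# cephfs /ceph/blastdb/chunk00.db show_layout                (check the layout a newly created file inherited)

Files created under /ceph/blastdb after the set_layout should pick up the
directory's layout, so the per-file step in the list above would not be needed.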