Re: Grid data placement

On Tue, Jan 15, 2013 at 11:00 AM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
> On 01/15/2013 12:36 PM, Gregory Farnum wrote:
>> On Tue, Jan 15, 2013 at 10:33 AM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
>
>>> At the start of the batch, #cores-in-the-cluster processes try to mmap
>>> the same 2GB file and start reading it from SEEK_SET at the same time. I
>>> won't know until I try, but I suspect it won't like that.
>>
>> Well, it'll be #servers-in-cluster serving up 4MB chunks out of cache.
>> It's possible you could overwhelm their networking, but my bet is the
>> readers will just get spread out slightly on the first block and then
>> not contend in the future.
>
> In the future the application spreads out the reads as well: running
> instances go through the data at different speeds, and when one's
> finished, the next one starts on the same core and mmaps the first
> chunk again.
>
>> Just as long as you're thinking of it as a test system that would make
>> us very happy. :)
>
> Well, IRL this is throw-away data generated at the start of a batch, and
> we're good if one batch a month runs to completion. So as long as it
> doesn't crash every time, that should actually be good enough for me.
> However, not all of the nodes have spare disk slots, so I couldn't do a
> full-scale deployment anyway, not without rebuilding half the nodes.

In that case you are my favorite kind of user and you should install
and try it out right away! :D
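
For concreteness, the access pattern under discussion looks roughly like
the minimal sketch below. This is an illustration, not code from the
actual application: the file path is made up, and touching one page per
4MB stride stands in for consuming each chunk in full.

/* Sketch: each worker process mmaps the same large input file and
 * walks it from the start in 4MB strides (matching the default RADOS
 * object size). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define STRIDE (4UL << 20)                       /* 4MB */

int main(void)
{
    const char *path = "/mnt/ceph/batch-input.dat";  /* hypothetical */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Tell the kernel we read front-to-back, so it can read ahead
     * aggressively and drop pages behind us. */
    madvise(base, st.st_size, MADV_SEQUENTIAL);

    /* Start at offset 0 -- the SEEK_SET start every worker shares --
     * and fault the mapping in stride by stride.  volatile keeps the
     * loads from being optimized away. */
    volatile char sink = 0;
    for (off_t off = 0; off < st.st_size; off += STRIDE)
        sink ^= base[off];
    (void)sink;

    munmap(base, st.st_size);
    close(fd);
    return 0;
}

Run one copy per core and they all fault in the same 4MB objects at the
start of a batch, which is the contention case raised above; after the
first block the readers drift apart, as described in the thread.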