Re: [announce] Accord, a coordination service for write-intensive workload

On Tue, Oct 4, 2011 at 9:06 PM, OZAWA Tsuyoshi <ozawa.tsuyoshi@xxxxxxxxxxxxx> wrote:

> (2011/10/05 3:55), Larry Brigman wrote:
>
> Since the corosync flatiron branch runs on RHEL5 and others, is there
> going to be any effort placed in getting it to compile on that platform?
>
>>
>
> Not yet; however, I'm going to support RHEL5 if there is demand.
> Ideally, Accord should run on every platform that corosync supports,
> because it aims to build up corosync's scalability.
> On the other hand, supporting RHEL5 increases maintenance costs, and I
> don't know whether new projects still need to support it. Does anyone
> have an opinion on this?
>

Well, at least +1 here.  I have a few developers who are working with me
on a cluster project that uses corosync and pacemaker.  This looks like a
good fit for a metadata server that we need to build.  I can help on the
packaging front too.

I understand not wanting to support RHEL5 now that RHEL6 has been
released, but there are still a lot of HPC systems running RHEL5 because
it is a known stable system.

>
>  On Mon, Oct 3, 2011 at 8:52 PM, OZAWA Tsuyoshi
>> <ozawa.tsuyoshi@xxxxxxxxxxxxx> wrote:
>>
>>    CCing the corosync ML, since there may be cluster developers there
>>    interested in participating in this project.
>>
>>    I am pleased to announce the release of Accord, a coordination service
>>    like Apache ZooKeeper that uses corosync as a total-order messaging
>>    infrastructure. Accord is part of the Sheepdog project and will take
>>    charge of Sheepdog's cluster management to improve its scalability.
>>
>>    Sheepdog's current design has a scalability limit rooted in corosync.
>>    Accord addresses this by factoring corosync's coordination features
>>    out into an external system.
>>    Accord provides the complex building blocks of distributed systems,
>>    such as distributed locking, message ordering, and membership
>>    management, for a large number of clients, aiming to be a kernel for
>>    distributed systems. Concretely, it features:
>>
>>    - Accord is a distributed, transactional, and fully replicated (no
>>    SPoF) key-value store with strong consistency.
>>    - Accord can scale out to tens of nodes.
>>    - Accord servers can handle tens of thousands of clients.
>>    - Any client can issue I/O requests, and any server can accept and
>>    process them.
>>    - The changes made by one client's write request can be pushed as
>>    notifications to the other clients.
>>    - Accord detects client join/leave events and notifies the other
>>    clients of them.
>>
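As a rough illustration of the feature set above, here is a toy, single-process sketch of a key-value store whose writes notify registered watchers. All class and method names are invented for illustration; Accord's real client API is C-based and its store is replicated across servers.

```python
# Toy model of a strongly consistent KVS that pushes write
# notifications to registered watchers. In a single process there is
# only one copy of the data, so "strong consistency" is trivial here;
# the point is the read/write/delete/watch contract, not replication.

class WatchableKVS:
    def __init__(self):
        self._data = {}
        self._watchers = []          # callbacks: fn(key, value)

    def watch(self, callback):
        """Register a callback invoked after every successful write."""
        self._watchers.append(callback)

    def write(self, key, value):
        self._data[key] = value
        for cb in self._watchers:    # push the change to "other clients"
            cb(key, value)

    def read(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)

# One client writes; a watching client observes the change.
seen = []
kvs = WatchableKVS()
kvs.watch(lambda k, v: seen.append((k, v)))
kvs.write("cluster/leader", "node-1")
```

In the real system the watcher would be a separate client process, and the notification would travel over corosync's total-order messaging rather than a local callback.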
>>    For instance, a distributed lock manager or a leader election feature
>>    handling tens of thousands of clients can be implemented easily by
>>    combining the join/leave notifications with the strongly consistent
>>    KVS. In other words, Accord builds on and extends the scalability of
>>    corosync.
>>
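The leader-election pattern described above can be sketched with a compare-and-set primitive, which a strongly consistent KVS makes safe: the first client to claim the key wins, and when the leader's departure is observed the survivors race to claim it again. Every name here is hypothetical; Accord's actual API differs.

```python
# Leader election on top of a strongly consistent KVS. Strong
# consistency is what makes the compare-and-set race safe: at most one
# contender can see the expected old value and install itself.

class ElectionKVS:
    def __init__(self):
        self._data = {}

    def cas(self, key, expected, new):
        """Atomic compare-and-set: write only if the current value matches."""
        if self._data.get(key) == expected:
            self._data[key] = new
            return True
        return False

    def delete(self, key):
        self._data.pop(key, None)

def try_become_leader(kvs, me):
    # Claim the leader key only if nobody holds it yet.
    return kvs.cas("leader", None, me)

kvs = ElectionKVS()
assert try_become_leader(kvs, "client-A")        # A wins the race
assert not try_become_leader(kvs, "client-B")    # B loses while A holds it
kvs.delete("leader")    # A leaves; the leave notification clears the key
assert try_become_leader(kvs, "client-B")        # B takes over
```

In a real deployment the `delete` would be triggered by the service's client-leave notification rather than by the departing client itself.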
>>    Most of these features are also provided by ZooKeeper. The
>>    differences between Accord and ZooKeeper are:
>>    - Accord focuses on write-intensive workloads, unlike ZooKeeper.
>>    ZooKeeper forwards all write requests to a single master server,
>>    which can become a bottleneck under write-intensive workloads. Our
>>    benchmarks show that Accord's write throughput is much higher than
>>    ZooKeeper's (up to 20 times higher in persistent mode, and up to 18
>>    times higher in in-memory mode).
>>    - More flexible transaction support. Not only write and del
>>    operations but also value-based cmp, copy, and read operations can
>>    be combined in a single transaction.
>>
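The flexible transaction model described above can be sketched as a list of operations applied all-or-nothing, where any failing value-based cmp aborts the whole batch. The operation names mirror the email, but the executor itself is an invented stand-in, not Accord's actual API or wire format.

```python
# All-or-nothing transaction over a key-value store: stage every
# operation on a copy, abort on a failed "cmp", commit only at the end.

def run_transaction(store, ops):
    """Apply ops atomically; return (committed, read_results)."""
    staged = dict(store)   # work on a copy so an abort changes nothing
    results = []
    for op in ops:
        kind = op[0]
        if kind == "cmp":                      # ("cmp", key, expected)
            if staged.get(op[1]) != op[2]:
                return False, []               # abort: store untouched
        elif kind == "write":                  # ("write", key, value)
            staged[op[1]] = op[2]
        elif kind == "del":                    # ("del", key)
            staged.pop(op[1], None)
        elif kind == "copy":                   # ("copy", src, dst)
            staged[op[2]] = staged.get(op[1])
        elif kind == "read":                   # ("read", key)
            results.append(staged.get(op[1]))
    store.clear()
    store.update(staged)                       # commit
    return True, results

store = {"config": "v1"}
ok, out = run_transaction(store, [
    ("cmp", "config", "v1"),       # proceed only if nobody raced us
    ("copy", "config", "backup"),  # keep the old value around
    ("write", "config", "v2"),
    ("read", "config"),
])
# ok is True, out is ["v2"], and store now holds both "config" and "backup".
```

The `cmp`-then-`write` shape is the building block for optimistic concurrency: a client re-reads and retries the whole transaction when the cmp fails.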
>>    As mentioned above, Accord targets write-intensive workloads, which
>>    extends the application scope of coordination services.
>>    Example applications include:
>>    - a distributed lock manager whose lock operations arrive at high
>>    frequency from thousands of clients;
>>    - a metadata management service for large-scale distributed storage,
>>    including Sheepdog, HDFS, etc.;
>>    - a replicated message queue or logger (for instance, a replicated
>>    RabbitMQ);
>>    and so on.
>>
>>    Other distributed systems can use Accord's features easily because
>>    Accord provides general-purpose APIs (read/write/del/transaction).
>>
>>    More information, including a getting-started guide, benchmarks, and
>>    API docs, is available from our project page:
>>    http://www.osrg.net/accord
>>
>>    and all code is available from:
>>    https://github.com/collie/accord
>>
>>    Please try it out, and let me know about any opinions or problems via
>>    the Sheepdog ML.
>>
>>    Best regards,
>>    OZAWA Tsuyoshi
>>    <ozawa.tsuyoshi@xxxxxxxxxxxxx>
>>
>>    _______________________________________________
>>    discuss mailing list
>>    discuss@xxxxxxxxxxxx
>>    http://lists.corosync.org/mailman/listinfo/discuss
>>
>
>
> --
> OZAWA Tsuyoshi
> NTT Cyber Space Laboratories
> OSS Computing Project
> Distributed Virtual Computing Technology Group
> TEL 046-859-2351
> FAX 046-855-1152
> Email ozawa.tsuyoshi@xxxxxxxxxxxxx
>
