Mix HDDs and SSDs together

Hi Jiajia Zhong,

I'm running mixed SSDs and HDDs on the same node, following
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/,
and I haven't had any problems running SSDs and HDDs together. Now I want to
increase Ceph throughput by moving the network to 20Gbps (I want a single
network stream to reach 20Gbps, as measured with iperf). Could you please share
your experience with HA networking for Ceph? What type of bonding do you use?
Are you using stackable switches?

I really appreciate your help.
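
For reference, a minimal LACP (802.3ad) bond sketch in Debian-style
/etc/network/interfaces, assuming two 10GbE ports (the interface names and
address below are placeholders, not taken from this thread):

    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bond-slaves eth2 eth3            # the two 10GbE ports
        bond-mode 802.3ad                # LACP; the switch ports need a matching LAG
        bond-miimon 100                  # link monitoring interval in ms
        bond-xmit-hash-policy layer3+4   # hash flows across both links

One caveat: with 802.3ad each TCP flow is hashed onto a single physical link,
so one iperf stream will top out at roughly 10Gbps; you only approach 20Gbps
aggregate with several parallel streams (e.g. iperf -P 4). balance-rr can push
a single stream past one link's speed, but it needs matching switch support
and can cause TCP reordering.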

On Mon, Mar 6, 2017 at 11:45 AM, jiajia zhong <zhong2plus at gmail.com> wrote:

> We are using a mixed setup too: 8 x Intel 400GB PCIe SSDs for the metadata
> pool and the cache-tier pool of our CephFS.
>
> Plus: 'osd crush update on start = false', as Vladimir replied.
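
For anyone wiring up a similar cache tier, a rough sketch under the assumption
of a base pool cephfs_data and an SSD-backed pool cephfs_cache (both names and
all sizing values below are placeholders, not the actual setup described above):

    # cephfs_cache should already be created on the SSD CRUSH rule
    ceph osd tier add cephfs_data cephfs_cache
    ceph osd tier cache-mode cephfs_cache writeback
    ceph osd tier set-overlay cephfs_data cephfs_cache
    # basic sizing/behaviour knobs; tune for the actual workload
    ceph osd pool set cephfs_cache hit_set_type bloom
    ceph osd pool set cephfs_cache target_max_bytes 1099511627776   # ~1 TiB
    ceph osd pool set cephfs_cache cache_target_dirty_ratio 0.4
    ceph osd pool set cephfs_cache cache_target_full_ratio 0.8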
>
> 2017-03-03 20:33 GMT+08:00 Vladimir <vlad at itgorod.ru>:
>
>> Hi, Matteo!
>>
>>   Yes, I'm running a mixed cluster in production, but it's pretty small at
>> the moment. I wrote a small step-by-step manual for myself when I did this
>> for the first time and have now published it as a gist:
>> https://gist.github.com/vheathen/cf2203aeb53e33e3f80c8c64a02263bc#file-manual-txt.
>> It may be a little outdated by now, since it was written some time ago.
>>
>>   The CRUSH map modifications will persist across reboots and maintenance
>> if you put 'osd crush update on start = false' in the [osd] section of
>> ceph.conf.
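
In ceph.conf that amounts to:

    [osd]
    # keep OSDs where the custom CRUSH map places them across restarts
    osd crush update on start = false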
>>
>>   But I would also recommend starting from this article:
>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
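
The text-editor route that article describes boils down to roughly this
workflow (file names are arbitrary; the ssd/hdd layout you add inside the map
is up to you):

    ceph osd getcrushmap -o crushmap.bin        # export the compiled map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
    # edit crushmap.txt: add separate ssd/hdd roots, per-host ssd/hdd
    # buckets, and one replicated rule per root
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    ceph osd setcrushmap -i crushmap.new        # inject the new map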
>>
>>   P.S. While I was writing this reply I saw a message from Maxime Guyot.
>> His method seems much easier, if it leads to the same results.
>>
>> Best regards,
>> Vladimir
>>
>> 2017-03-03 16:30 GMT+05:00 Matteo Dacrema <mdacrema at enter.eu>:
>>
>>> Hi all,
>>>
>>> Does anyone run a production cluster with a modified CRUSH map that
>>> creates two pools, one backed by HDDs and one by SSDs?
>>> What's the best method: modifying the CRUSH map via the ceph CLI or via a
>>> text editor?
>>> Will the modifications to the CRUSH map persist across reboots and
>>> maintenance operations?
>>> Is there anything to consider when doing upgrades or other operations, or
>>> anything that differs from running with the "original" CRUSH map?
>>>
>>> Thank you
>>> Matteo
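
For the CLI-vs-text-editor part of the question, a hand-wavy CLI-only sketch
for the SSD branch (bucket, rule and pool names, OSD IDs, weights and PG
counts are placeholders; the HDD side is done the same way):

    ceph osd crush add-bucket ssd root                  # separate SSD root
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=ssd
    ceph osd crush set osd.10 1.0 host=node1-ssd        # pin an SSD OSD under it
    ceph osd crush rule create-simple ssd_rule ssd host
    ceph osd pool create ssd_pool 128 128 replicated ssd_rule

With 'osd crush update on start = false' set, the OSDs stay in that hand-made
hierarchy across restarts.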