Re: Strange Ceph architecture with SAN storage

After thinking about this more, you may also consider just adding the SAN in as a different device class (or classes). I wouldn't be scared of doing it, but you will want to paint a picture of this transitional environment, the end goal, and any steps the customer will need to take to get there. Also, spend some time thinking through all the different failure scenarios and how those map to Ceph.
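
For reference, a minimal sketch of the device-class approach (the class name "san", OSD IDs 10-12, and the pool name "mypool" are assumptions for illustration; substitute the real ones):

    # Clear any auto-detected class on the SAN-backed OSDs, then tag them
    ceph osd crush rm-device-class osd.10 osd.11 osd.12
    ceph osd crush set-device-class san osd.10 osd.11 osd.12

    # CRUSH rule that places data only on the "san" class, replicating across hosts
    ceph osd crush rule create-replicated replicated_san default host san

    # Point a pool at that rule
    ceph osd pool set mypool crush_rule replicated_san

Migrating off the SAN later is then largely a matter of adding the new OSDs under their own class and switching the pool's crush_rule, which keeps the transitional story clean.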

-Brett

On Thu, Aug 22, 2019 at 2:11 PM Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
In a past life I had a bunch of SAN gear dumped in my lap; it had been spec’d by someone else misinterpreting vague specs. It was SAN gear with an AoE driver. I wasn’t using Ceph, but sending it back and getting a proper solution wasn’t an option, so I ended up using the SAN gear as a NAS with a single client, effectively DAS.

It was a nightmare:  cabling, monitoring, maintenance.

This could be done, but as Kai says, latency would be an issue. One would also need to pay *very* close attention to mappings and failure domains; there is considerable opportunity here to shoot oneself in the foot when a component has issues.
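
To make the failure-domain point concrete, here is one way it could be expressed in CRUSH (a sketch; the bucket and host names are invented): if several hosts consume LUNs from the same array, the array itself should become a failure domain so replicas never land on LUNs that share one backend:

    # Represent each array as a CRUSH bucket (reusing the "chassis" type here)
    ceph osd crush add-bucket arrayA chassis
    ceph osd crush move arrayA root=default
    ceph osd crush move host1 chassis=arrayA
    ceph osd crush move host2 chassis=arrayA

    # Replicate across arrays instead of across hosts
    ceph osd crush rule create-replicated rep_across_arrays default chassis

Without something like this, a single array or director failure can take out every replica of a PG at once.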

> Just my quick two cents here.
>
> Technically this is possible, which doesn't mean it's a good idea. I wouldn't use such a setup in a production environment. I don't think they'll really save a lot, and with the added latency etc. on top, I'm not sure this is what they're really looking for.
>
> Just for testing and giving it a try, sure, but beyond that I would go with a clear "no" instead of encouraging them to do that.
>
> Maybe you can tell us more about their use case? What are they looking for, how large should this get, which access protocols, etc.?
>
> Kai
>
> On 22.08.19 17:12, Brett Chancellor wrote:
>> It's certainly possible, but it makes things a little more complex. Some questions you may want to consider during the design:
>> - Is the customer aware this won't preserve any data on the LUNs they are hoping to reuse?
>> - Is the plan to eventually replace the SAN with JBOD in the same systems? If so, you may want to make your LUNs match the eventual drive size and count.
>> - Is the plan to use a few systems with SAN and add standalone systems later? Then you need to calculate expected speeds and divide the two kinds of systems between failure domains.
>> - Is the plan to use a couple of hosts with SAN to save money, and have the rest be traditional Ceph storage? If so, consider putting the SAN hosts all in one failure domain.
>> - Depending on the SAN, you may consider aligning your failure domains to different arrays, switches, or even array directors.
>> - Remember to take each host's network speed into consideration when calculating how many LUNs to put on it (see the back-of-envelope sketch after this list).
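>>
>> For the last point, a purely illustrative back-of-envelope (all numbers are assumptions, not from this thread): a host with 2x10 GbE has roughly 2.4 GB/s of usable bandwidth, so if each LUN can sustain around 400 MB/s, about six LUNs are enough to saturate the NICs; LUNs beyond that mostly add OSD overhead and recovery work rather than client throughput.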
>>
>> Hope that helps.
>>
>> -Brett
>>
>> On Thu, Aug 22, 2019, 4:14 AM Mohsen Mottaghi <mohsenmottaghi@xxxxxxxxxxx> wrote:
>> Hi
>>
>>
>> Yesterday one of our customers came to us with a strange request. He asked us to use a SAN as the Ceph storage space, adding the SAN arrays he currently has to the cluster to reduce the cost of purchasing additional disks.
>>
>>
>> Does anybody know whether we can do this or not? And if it is possible, how should we start to architect this strange Ceph? Is it a good idea or not?
>>
>>
>> Thanks for your help.
>>
>> Mohsen Mottaghi
>>
> --
> SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg
> Geschäftsführer: Felix Imendörffer (HRB 247165, AG München)
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
