Re: Ceph Future

Hmmm, I have to disagree with:

'too many services'
What do you mean? There is a process for each OSD, MON, MGR and MDS. 
There are fewer processes running than on a default Windows file server. 
What is the complaint here?
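
(For instance, on a systemd host you can see exactly which daemons a 
node runs; the output naturally depends on what the node hosts:)

systemctl list-units 'ceph*' --type=service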

'manage everything by your command-line'
What is so bad about this? Even Microsoft saw the advantages and 
introduced PowerShell etc. I would recommend hiring a Ceph admin; then 
you don't even need to use the web interface. You will have voice 
control over Ceph, how cool is that! ;)
(Actually, maybe we could file a feature request to integrate Apple 
Siri (not forgetting, of course, the Google/Amazon assistants?))

'iscsi'
AFAIK this is not even part of a default Ceph install, nor a Ceph 
package. I also don't complain to Ceph that my Nespresso machine lacks 
triple redundancy.

'check hardware below the hood'
Why waste development effort on this when there are already enough 
solutions out there? As if it were even possible to make a 
one-size-fits-all solution.

AFAIAC, the Ceph team has done a great job. I was pleasantly surprised 
by how easy it is to install, just by installing the RPMs (not using 
ceph-deploy). Besides this, I think it is good to have some sort of 
'threshold' to keep the WordPress admins at a distance. Ceph clusters 
hold TB/PB of other people's data, and we don't want rookies destroying 
that, nor blaming Ceph for it when they do.




-----Original Message-----
From: Alex Gorbachev [mailto:ag@xxxxxxxxxxxxxxxxxxx] 
Sent: Tuesday, 16 January 2018 6:18
To: Massimiliano Cuttini
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Ceph Future

Hi Massimiliano,


On Thu, Jan 11, 2018 at 6:15 AM, Massimiliano Cuttini 
<max@xxxxxxxxxxxxx> wrote:
> Hi everybody,
>
> I'm always looking at Ceph for the future.
> But I see several issues that are left unresolved and block near-term 
> adoption.
> I would like to know whether there are answers already:
>
> 1) Separation between client and server distribution.
> At this time you always have to update client & server together to 
> match the same Ceph release.
> This was OK in the early releases, but going forward I expect the 
> ceph-client to be ONE, not a different one for every major version.
> The client should determine by itself which protocol version and 
> features can be enabled, and connect to at least the 3 or 5 previous 
> major versions of Ceph.
>
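(As an aside, a Luminous-era cluster can already report the feature 
bits and release that each connected client advertises, assuming the 
command exists on your release:)

ceph features    # feature bits and release per mon/osd/client group
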
> 2) Kernel is old -> feature mismatch
> OK, the kernel client is old, so what? Just don't use it and fall 
> back to NBD.
> And don't even make me aware of it; just virtualize it under the hood.
>
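(For what it's worth, the NBD route is already usable today via the 
rbd-nbd package; the pool and image names below are only placeholders:)

rbd-nbd map rbd/vm-disk-1      # attaches the image, prints e.g. /dev/nbd0
rbd-nbd unmap /dev/nbd0        # detaches it again
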
> 3) Management complexity
> Ceph is amazing, but it is just too big to keep everything under 
> control (too many services).
> There is now a management console, but as far as I have read, it only 
> shows basic performance data.
> So it doesn't manage anything at all... it's just a monitor...
>
> In the end you have to manage everything from your command line.
> To manage it from the web, the following are mandatory:
>
> Create, delete, enable, and disable services. If I need to run a 
> redundant iSCSI gateway, do I really need to cut & paste commands 
> from your online docs?
> Of course not. You can script it better than any admin can.
> Just take a few arguments from an HTML form, and that's all.
>
> Create, delete, enable, and disable users.
> I have to create users and keys for 24 servers. Do you really think 
> it's possible to do that without some mistranscription or bad 
> cut & paste of the keys across all the servers?
> Everybody ends up just copying the admin key to all servers, giving 
> very insecure full permissions to all clients.
>
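(For reference, the usual fix is to stop copying client.admin around 
and mint one restricted key per host from the admin node; the client 
and pool names here are just examples:)

# a key that can read the cluster maps and only touch one pool
ceph auth get-or-create client.web01 \
    mon 'allow r' \
    osd 'allow rwx pool=volumes' \
    -o /etc/ceph/ceph.client.web01.keyring
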
> Create CRUSH maps (server, datacenter, rack, node, OSD).
> This is mandatory to design how the data needs to be replicated.
> Creating this by script or shell is not good; what's needed is a 
> graph editor that can give you a view of what will be copied where.
>
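(For comparison, the shell version of that map editing is a handful of 
crush commands; the bucket names are only examples:)

ceph osd crush add-bucket rack1 rack    # create a rack bucket
ceph osd crush move rack1 root=default  # hang it under the default root
ceph osd crush move node01 rack=rack1   # move a host into the rack
ceph osd tree                           # show the resulting hierarchy
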
> Check the hardware under the hood.
> Checking the health of the underlying hardware is missing.
> Ceph was born as storage software that ensures redundancy and 
> protects you from single failures.
> So WHY does it simply ignore checking the health of disks with SMART?
> FreeNAS just does a better job on this, giving lots of tools to 
> understand which disk is which and whether it will fail in the near 
> future.
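
(The usual building block for that on Linux is smartmontools; the 
device name is just an example:)

smartctl -H /dev/sda    # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/sda    # vendor attributes, e.g. Reallocated_Sector_Ct
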
> Of course Ceph could also forecast issues by itself, and it needs to 
> start integrating with basic hardware I/O.
> For example, it should be possible to enable/disable the UID LED on 
> the disks in order to know which one needs to be replaced.

As a technical note, we ran into this need with Storcium, and it is 
pretty easy to use the UID indicators with both Areca and LSI/Avago 
HBAs.  You will need the standard control tools available from their 
web sites, as well as hardware that supports SGPIO (most enterprise 
JBODs and drives do).  There are likely similar options for other HBAs.

Areca:

UID on:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<controller #> disk identify drv=<drive ID>

UID OFF:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<controller #> disk identify drv=0

LSI/Avago:

UID on:

sas2ircu <controller #> locate <enclosure #>:<slot #> ON

UID OFF:

sas2ircu <controller #> locate <enclosure #>:<slot #> OFF
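
A trivial wrapper makes that scriptable across a fleet. A sketch only; 
the controller, enclosure and slot numbers are assumptions you would 
look up first with 'sas2ircu <controller #> display':

#!/bin/sh
# blink-uid.sh -- toggle the locate LED on one LSI-attached drive slot
# usage: blink-uid.sh <controller> <enclosure> <slot> [ON|OFF]
ctrl=$1; enc=$2; slot=$3; state=${4:-ON}
sas2ircu "$ctrl" locate "$enc":"$slot" "$state"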

HTH,
Alex Gorbachev
Storcium

> I guess this kind of feature is quite standard across all Linux 
> distributions.
>
> The management complexity could be completely overcome by a great web 
> manager.
> A web manager, in the end, is just a wrapper that runs shell commands 
> from the Ceph admin node on the others.
> If you think about it, such a wrapper is far easier to develop than 
> what has already been developed.
> I really do see that Ceph is the future of storage, but there is some 
> easily avoidable complexity that needs to be reduced.
>
> If there are already plans for these issues, I would really like to 
> know.
>
> Thanks,
> Max
>
>
>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



