Re: Ceph Future

I have to disagree with you, Marc.


Hmmm, I have to disagree with

'too many services'
What do you mean? There is a process for each osd, mon, mgr and mds.
There are fewer processes running than on a default Windows fileserver.
What is the complaint here?

I wrote: "Ceph is amazing, but is just too big to have everything under control (too many services)" under the point "MANAGEMENT COMPLICATIONS".
I found this sentence pretty CLEAR. Here I did NOT complain about the fact that there are too many services to run the software.
Instead I was talking about the management complexity of easily finding out whether something is wrong.
There is no clear view of whether everything is running right or not.

Is this a complaint about Ceph having fewer or more processes than a Windows fileserver? Of course that was not the point.
Please read carefully before answering with a nonsensical comparison with other services.

'manage everything by your command-line'
What is so bad about this? Even Microsoft is seeing the advantages and
introduced PowerShell etc.
I'm saying that there is nothing else EXCEPT the command line.
Again, I said something different; please read again what I wrote.
Why call a service a "manager" when it just acts as a "performance monitor"?


I would recommend hiring a ceph admin, then 
you don't even need to use the web interface. You will have voice 
control on ceph, how cool is that! ;)
(actually maybe we can do feature request to integrate apple siri (not 
forgetting of course google/amazon talk?))

Wow, you are so funny and cool!
Probably you love having all your colleagues think about how great you are at managing such a complex system without any kind of error.

Instead, I need to delegate to others.
The shell is powerful. But with great power comes great responsibility.
If you don't see the issue of giving the shell to lower-level techs even when all they need to do is set up a new RBD image, that's your business.

The web interface is needed because command lines are prone to typos.
Moreover, the command lines of Ceph (due to the complexity of the software) are very long and even more prone to typos.
Wrapping inputs in forms before sending them to the command line is just a safer way to handle a solution that holds TB/PB of customers' delicate data.
If you don't agree with such a simple statement, it's probably because you are the only one who can brag about never having typed a wrong command in your life.

'iscsi'
Afaik this is not even a default install with Ceph or a Ceph package. I
am also not complaining to Ceph that my Nespresso machine does not have
triple redundancy.
Nice metaphor. However, you are wrong.
I'm not complaining about the lack of triple redundancy in your Nespresso machine or about anything else that has nothing to do with Ceph.

Ceph is storage software. iSCSI is a storage connectivity technology.
If you miss the connection between them, I'm so, so sorry for you.

Many people have asked for this feature since the beginning of this project.
Ceph has already integrated iSCSI in the latest release: http://docs.ceph.com/docs/master/rbd/iscsi-overview/#
However, it's not clear whether the support for this technology is optimized or just an early feature.

iSCSI is still widely used by older systems as the only way to connect remote storage.
XenServer simply has no way to connect to Ceph without a proper iSCSI connection.
If you don't feel the need for iSCSI while others do... that's OK, but please also don't feel the need to comment on everything with silly metaphors.
Thanks.

'check hardware below the hood'
Why waste development on this when there are already enough solutions 
out there? As if it is even possible to make a one size fits all 
solution.
SMART is widely used.
I don't think it's stupid to read some data that can forecast your next disk failure.
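Reading SMART data to forecast a failing disk really is straightforward to automate. As a rough illustration (not an existing Ceph feature), a hypothetical monitor could parse `smartctl -A` output and flag the attributes that usually precede a failure; the attribute names and column layout below are assumptions based on typical ATA disks:

```python
import re

# Attributes that commonly forecast disk failure (assumption: typical
# ATA SMART attribute names as printed by `smartctl -A /dev/sdX`).
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def failing_attributes(smartctl_output: str) -> dict:
    """Return the watched attributes whose raw value is non-zero."""
    found = {}
    for line in smartctl_output.splitlines():
        parts = line.split()
        # smartctl -A rows look like:
        # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in WATCHED:
            raw = int(re.match(r"\d+", parts[9]).group())
            if raw > 0:
                found[parts[1]] = raw
    return found

sample = """  5 Reallocated_Sector_Ct   0x0033   094   094   010    Pre-fail  Always       -       128
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0"""
print(failing_attributes(sample))  # {'Reallocated_Sector_Ct': 128}
```

A per-OSD check like this, surfaced in the dashboard, is all I'm asking for.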

Afaic I think the Ceph team has done a great job. I was pleasantly
surprised by how easy it is to install, just by installing the rpms
(not using ceph-deploy).
Installation is easy.
Managing it with a clear overview is not.

Next to this, I think it is good to have some 
sort of 'threshold' to keep the wordpress admin's at a distance. Ceph 
solutions are holding TB/PB of other peoples data, and we don’t want 
rookies destroying that, nor blame ceph for that matter.
You are completely off the point.
Everybody already knows that no "wordpress" admin can manage enterprise storage.
Ceph is run and built by people who also know how to set up the OS, LAN, bonding and so on.
So your talk about dangerous rookies is about nothing.

My opinion is pretty simple: the more complex a piece of software is, the more prone to errors you'll be.
It's not a matter of how good you are with the command line (you should never overestimate yourself).
It's just a matter of time before it happens.
Maybe because you are bored of running the same command every time, or maybe because after tons of working hours you fail to run one command properly before another.

A web interface can make basic checks before submitting a new command to the cluster.
Review the input, check whether the elements in the argument list exist, and then ask you again whether you are sure you want to proceed.
This is just a clever way to handle delicate data.
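As a sketch of what such a wrapper could do (all the names, checks, and limits here are hypothetical, not an existing Ceph UI), a form handler could validate every field and only then build the exact command line it would run:

```python
import re

def build_rbd_create(pool: str, image: str, size_gb: int, existing_pools: set) -> list:
    """Validate the form inputs, then build the `rbd create` argv,
    instead of letting a human type the long command by hand.
    All checks and limits are illustrative assumptions."""
    if pool not in existing_pools:
        raise ValueError(f"pool '{pool}' does not exist")
    if not re.fullmatch(r"[A-Za-z0-9_.-]+", image):
        raise ValueError(f"'{image}' is not a valid image name")
    if not 1 <= size_gb <= 65536:
        raise ValueError("size out of the allowed range")
    return ["rbd", "create", f"{pool}/{image}", "--size", f"{size_gb}G"]

# A web form would call this, show the resulting command,
# and ask "are you sure?" before actually running it.
print(build_rbd_create("rbd", "vm-disk-01", 100, {"rbd", "cephfs_data"}))
# ['rbd', 'create', 'rbd/vm-disk-01', '--size', '100G']
```

A typo in a form field raises an error before anything touches the cluster; a typo in a raw shell command touches the cluster first.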

To say "Ceph is not for rookies, it's better to have a threshold" can only come from a person who doesn't really love his own data (by keeping its management as error-free as possible), but instead just wants to be the only one allowed to manage it.

Less complexity, fewer errors, faster deployment of new customers.
Sorry if this sounds so strange to you.





-----Original Message-----
From: Alex Gorbachev [mailto:ag@xxxxxxxxxxxxxxxxxxx] 
Sent: Tuesday, January 16, 2018 6:18
To: Massimiliano Cuttini
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Ceph Future

Hi Massimiliano,


On Thu, Jan 11, 2018 at 6:15 AM, Massimiliano Cuttini 
<max@xxxxxxxxxxxxx> wrote:
Hi everybody,

I'm always looking at Ceph for the future.
But I do see several issues that are left unresolved and block
near-future adoption.
I would like to know if there are some answers already:

1) Separation between client and server distribution.
At this time you always have to update client & server in order to
match the same distribution of Ceph.
This is OK in the early releases, but in the future I expect the
ceph-client to be ONE, not one for every major version.
The client should be able to determine by itself what version of the
protocol and which features can be enabled, and connect to at least 3 or 5
older major versions of Ceph.

2) Kernel is old -> feature mismatch
OK, the kernel is old, and so? Just do not use it and turn to NBD.
And please don't even make me aware of it; just virtualize it under the hood.

3) Management complexity
Ceph is amazing, but it is just too big to have everything under control
(too many services).
Now there is a management console, but as far as I have read, this
management console just shows basic data about performance.
So it doesn't manage at all... it's just a monitor...

In the end you just have to manage everything from the command line.
In order to manage via the web, it's mandatory to:

create, delete, enable, disable services
If I need to run a redundant iSCSI gateway, do I really need to
cut & paste commands from your online docs?
Of course not. You can script it better than any admin can.
Just ask for a few arguments in an HTML form and that's all.

create, delete, enable, disable users
I have to create users and keys for 24 servers. Do you really think
it's possible to do that without some bad transcription or bad
cut & paste of the keys across all servers?
Everybody ends up just copying the admin keys across all servers, giving
very insecure full permissions to all clients.
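Generating one restricted key per client is exactly the kind of thing a form could script. A minimal sketch (the host names and pool are placeholders; the capability strings assume the `profile rbd` caps available in recent Ceph releases):

```python
def auth_commands(hosts, pool):
    """Build one `ceph auth get-or-create` command per host, so each
    client gets its own key restricted to a single pool, instead of
    everyone sharing the admin key."""
    cmds = []
    for host in hosts:
        client = f"client.{host}"
        cmds.append(
            ["ceph", "auth", "get-or-create", client,
             "mon", "profile rbd",
             "osd", f"profile rbd pool={pool}",
             "-o", f"/etc/ceph/ceph.{client}.keyring"]
        )
    return cmds

# 24 servers -> 24 commands, one keyring file per client.
cmds = auth_commands([f"node{i:02d}" for i in range(1, 25)], "rbd")
print(len(cmds))  # 24
```

No transcription, no cut & paste of keys, and each keyring can be shipped only to the host it belongs to.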

create MAPS (server, datacenter, rack, node, osd)
This is mandatory to design how the data needs to be replicated.
It's not good to create this by script or shell; what's needed is a graph
editor which can give you the perspective of what will be copied where.

check hardware below the hood
The checking of the health of the hardware below is missing.
But Ceph was born as storage software that ensures redundancy and
protects you from single failures.
So WHY just ignore checking the health of disks with SMART?
FreeNAS just does a better job on this, giving lots of tools to
understand which disk is which and whether it will fail in the near
future.
Of course Ceph too could really forecast issues by itself and needs to
start integrating with basic hardware I/O.
For example, it should be possible to enable/disable the UID on the disks
in order to know which one needs to be replaced.
As a technical note, we ran into this need with Storcium, and it is
pretty easy to utilize UID indicators using both Areca and LSI/Avago
HBAs.  You will need the standard control tools available from their web
sites, as well as hardware that supports SGPIO (most enterprise JBODs
and drives do).  There are likely similar options for other HBAs.

Areca:

UID on:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<controller #> disk identify drv=<drive ID>

UID OFF:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<controller #> disk identify drv=0

LSI/Avago:

UID on:

sas2ircu <controller #> locate <enclosure #>:<slot #> ON

UID OFF:

sas2ircu <controller #> locate <enclosure #>:<slot #> OFF

HTH,
Alex Gorbachev
Storcium

I guess this kind of feature is quite standard across all Linux
distributions.

The management complexity could be completely overcome with a great Web
Manager.
A Web Manager, in the end, is just a wrapper for shell commands from the
Ceph admin node to the others.
If you think about it, a wrapper is tons of times easier to develop
than what has already been developed.
I do really see that Ceph is the future of storage. But there is some
easily avoidable complexity that needs to be reduced.

If there are already some plans for these issues, I would really like to
know.
Thanks,
Max




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





