Re: Cephadm: unable to copy ceph.conf.new

Hi,

Sorry! fixed.

The configuration is as follows:
root@management-node1 # cat /etc/sudoers.d/ceph
ceph ALL=(ALL)       NOPASSWD: ALL

So.. no restrictions :^)
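In case it helps others reading along: a quick sketch of how to sanity-check a drop-in like this. On a real host you would run `visudo -cf /etc/sudoers.d/ceph` and `sudo -l -U ceph` as root; the runnable part below just checks a copy of the rule offline for illustration.

```shell
# On the host itself (needs root):
#   visudo -cf /etc/sudoers.d/ceph   # syntax-check the drop-in without installing it
#   sudo -l -U ceph                  # list the sudo rules that apply to the ceph user
# Offline illustration against a copy of the rule shown above:
rule='ceph ALL=(ALL)       NOPASSWD: ALL'
printf '%s\n' "$rule" > /tmp/ceph-sudoers-check
grep -q 'NOPASSWD: ALL' /tmp/ceph-sudoers-check && echo "NOPASSWD rule present"
```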
________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: 7 August 2024 10:38
To: Magnus Larsen <magnusfynbo@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Sv: Re: Cephadm: unable to copy ceph.conf.new

Hi,

please don't drop the ML from your response.

Is this the first upgrade you're attempting or did previous upgrades
work with the current config?

> I wonder if we can generate a new ssh configuration for the root user,
> and then use that to upgrade to the fixed version.
> The permissions will then be owned by root, which means we can't use
> the ceph user, no?

I do remember having an issue with a non-root user on a customer
cluster, but IIRC it was due to insufficient sudo permissions. In
the end they switched to the root user, and there haven't been any
issues since; at least, nobody has reported anything to me.
Do you mind sharing your sudo config for the ceph user?

Thanks,
Eugen
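For reference, if switching cephadm to the root user does turn out to be the way forward, the orchestrator has its own commands for managing the SSH identity. A sketch based on the cephadm operations docs (verify against the documentation for your release before running; this needs a working mgr and is not something to paste blindly):

```shell
# Tell cephadm to connect as root instead of the current ceph user:
ceph cephadm set-user root
# Export the public key cephadm uses, so it can be installed into
# root's authorized_keys on every host:
ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@<host>   # repeat per host
# (only if needed) generate a fresh key pair first:
#   ceph cephadm generate-key
```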

Quoting Magnus Larsen <magnusfynbo@xxxxxxxxxxx>:

> Hi,
>
> We do have client-keyring with the label:
> # ceph orch client-keyring ls
> ENTITY        PLACEMENT     MODE       OWNER  PATH
> client.admin  label:_admin  rw-------  0:0
> /etc/ceph/ceph.client.admin.keyring
>
> And the SSH config is also correct (verified just now) - though we
> use ceph as the user, not the default root,
> which normally works, except that we can't upgrade until we get the
> fix in... which is in the next release :<
>
> I wonder if we can generate a new ssh configuration for the root user,
> and then use that to upgrade to the fixed version.
> The permissions will then be owned by root, which means we can't use
> the ceph user, no?
>
> ref: https://docs.ceph.com/en/octopus/cephadm/operations/#ssh-configuration
>
> Thanks!
> Magnus Larsen
>
> ________________________________
> Fra: Eugen Block <eblock@xxxxxx>
> Sendt: 7. august 2024 09:15
> Til: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> Emne:  Re: Cephadm: unable to copy ceph.conf.new
>
> Hi,
>
> I commented a similar issue a couple of months ago:
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/IQX2VXA6QQQPEZQ7GU3QY2WPHAIVPIUN/
>
> Can you check if that applies to your cluster?
>
> Quoting Magnus Larsen <magnusfynbo@xxxxxxxxxxx>:
>
>> Hi Ceph-users!
>>
>> Ceph version: ceph version 17.2.6
>> (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
>> Using cephadm to orchestrate the Ceph cluster
>>
>> I'm running into https://tracker.ceph.com/issues/59189, which is
>> fixed in the next version, quincy 17.2.7, via
>> https://github.com/ceph/ceph/pull/50906
>>
>> But I am unable to upgrade to the fixed version because of that very bug.
>>
>> When I try to upgrade (using "ceph orch upgrade start --image
>> internal_mirror/ceph:v17.2.7"), we see the same error message:
>> executing _write_files((['dkcphhpcadmin01', 'dkcphhpcmgt028',
>> 'dkcphhpcmgt029', 'dkcphhpcmgt031', 'dkcphhpcosd033',
>> 'dkcphhpcosd034', 'dkcphhpcosd035', 'dkcphhpcosd036',
>> 'dkcphhpcosd037', 'dkcphhpcosd038', 'dkcphhpcosd039',
>> 'dkcphhpcosd040', 'dkcphhpcosd041', 'dkcphhpcosd042',
>> 'dkcphhpcosd043', 'dkcphhpcosd044'],)) failed.
>> Traceback (most recent call last):
>>   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 240, in _write_remote_file
>>     conn = await self._remote_connection(host, addr)
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 922, in scp
>>     await source.run(srcpath)
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 458, in run
>>     self.handle_error(exc)
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 307, in handle_error
>>     raise exc from None
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 456, in run
>>     await self._send_files(path, b'')
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 438, in _send_files
>>     self.handle_error(exc)
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 307, in handle_error
>>     raise exc from None
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 434, in _send_files
>>     await self._send_file(srcpath, dstpath, attrs)
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 365, in _send_file
>>     await self._make_cd_request(b'C', attrs, size, srcpath)
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 343, in _make_cd_request
>>     self._fs.basename(path))
>>   File "/lib/python3.6/site-packages/asyncssh/scp.py", line 224, in make_request
>>     raise exc
>> asyncssh.sftp.SFTPFailure: scp: /tmp/etc/ceph/ceph.conf.new: Permission denied
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>>   File "/usr/share/ceph/mgr/cephadm/utils.py", line 79, in do_work
>>     return f(*arg)
>>   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1088, in _write_files
>>     self._write_client_files(client_files, host)
>>   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1107, in _write_client_files
>>     self.mgr.ssh.write_remote_file(host, path, content, mode, uid, gid)
>>   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 261, in write_remote_file
>>     self.mgr.wait_async(self._write_remote_file(
>>   File "/usr/share/ceph/mgr/cephadm/module.py", line 615, in wait_async
>>     return self.event_loop.get_result(coro)
>>   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 56, in get_result
>>     return asyncio.run_coroutine_threadsafe(coro, self._loop).result()
>>   File "/lib64/python3.6/concurrent/futures/_base.py", line 432, in result
>>     return self.__get_result()
>>   File "/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
>>     raise self._exception
>>   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 249, in _write_remote_file
>>     logger.exception(msg)
>> orchestrator._interface.OrchestratorError: Unable to write
>> dkcphhpcmgt028:/etc/ceph/ceph.conf: scp:
>> /tmp/etc/ceph/ceph.conf.new: Permission denied
>>
>> We were thinking about removing the keyring from the Ceph
>> orchestrator
>> (https://docs.ceph.com/en/latest/cephadm/operations/#putting-a-keyring-under-management),
>> which would then make Ceph not try to copy over a new ceph.conf,
>> alleviating the problem
>> (https://docs.ceph.com/en/latest/cephadm/operations/#client-keyrings-and-configs),
>> but in doing so, Ceph will kindly remove the key from all nodes
>> (https://docs.ceph.com/en/latest/cephadm/operations/#disabling-management-of-a-keyring-file)
>> leaving us without the admin keyring. So that doesn’t sound like a
>> path we want to take :S
>>
>> Does anybody know how to get around this issue, so I can get to a
>> version where it is fixed for good?
>>
>> Thanks,
>> Magnus
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>


