Re: PG export import


Yeah, it finally started, just super slow.
Currently I want to export/import the PGs from the dead OSDs so the cluster can start CephFS and the data can be saved. I'm also looking for some space to hold the exported PGs, because they are quite big, 100s of GB.
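Since the exports are hundreds of GB, one option is to avoid a second local copy and compress on the fly, e.g. writing to a network mount. This is only a sketch: whether ceph-objectstore-tool on your release accepts `--file -` for stdout is an assumption you should verify first, and `/mnt/remote` is a made-up path. The pipe shape itself is demonstrated below on stand-in data:

```shell
# Hedged sketch: stream the PG export through gzip instead of staging the
# raw dump on local disk. The ceph-objectstore-tool line is illustrative
# only (check that your version supports --file - before relying on it):
#
#   ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-33 \
#       --no-mon-config --pgid 44.c0s0 --op export --file - \
#       | gzip > /mnt/remote/pg44c0s0.gz
#
# Demonstrate the same pipe shape with stand-in data:
printf 'pg-export-bytes' > /tmp/fake_pg
gzip -c /tmp/fake_pg > /tmp/fake_pg.gz          # compress the "export"
gunzip -c /tmp/fake_pg.gz > /tmp/fake_pg.out    # restore it for import
cmp /tmp/fake_pg /tmp/fake_pg.out && echo "roundtrip ok"
```

If stdout export is not supported, pointing `--file` at an NFS/CIFS mount (or at a FIFO read by gzip) achieves the same thing.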

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Frank Schilder <frans@xxxxxx> 
Sent: Thursday, March 18, 2021 6:16 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>; Ceph Users <ceph-users@xxxxxxx>
Subject: Re: PG export import

It sounds like there is a general problem on this cluster with OSDs not starting. You probably need to go back to the logs and try to find out why the MONs don't allow the OSDs to join. MON IPs, cluster ID, network config in ceph.conf and on host, cluster name, authentication, ports, messenger version etc.
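A first step on that checklist is making sure the fsid and MON addresses the host is configured with match what the OSD reports at startup. A minimal sketch of pulling those values out of a ceph.conf-style file follows; the config contents and path are invented for the example, and in practice you would compare them against `ceph mon dump` output and the OSD's startup log:

```shell
# Minimal, illustrative sketch: extract fsid and mon_host from a
# ceph.conf-style file so they can be checked against the running cluster.
# The sample config below is invented for this example.
cat > /tmp/sample-ceph.conf <<'EOF'
[global]
fsid = 11111111-2222-3333-4444-555555555555
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
EOF

# Split "key = value" lines on the equals sign and pick out the values.
conf_fsid=$(awk -F' *= *' '$1 == "fsid" {print $2}' /tmp/sample-ceph.conf)
mon_hosts=$(awk -F' *= *' '$1 == "mon_host" {print $2}' /tmp/sample-ceph.conf)

echo "fsid:     $conf_fsid"
echo "mon_host: $mon_hosts"
```

If the fsid in the OSD's log lines differs from the one the MONs report, the MONs will refuse to let the OSD join, which would match the symptom of a "running" OSD that stays down in the cluster map.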

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Sent: 18 March 2021 10:48:05
To: Ceph Users
Subject:  PG export import

Hi,

I’ve tried to save some PGs from a dead OSD. This is what I did:

I picked an OSD on the same server that is barely used, stopped it, and imported the PG exported from the dead OSD.

root@server:~# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-33 --no-mon-config --pgid 44.c0s0 --op export --file ./pg44c0s0
Exporting 44.c0s0 info 44.c0s0( empty local-lis/les=0/0 n=0 ec=192123/175799 lis/c=4865474/4851556 les/c/f=4865475/4851557/0 sis=4865493)
Export successful

root@server:~# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-34 --no-mon-config --op import --file ./pg44c0s0
get_pg_num_history pg_num_history pg_num_history(e5583546 pg_nums {20={173213=256},21={219434=64},22={220991=64},24={219240=32},25={1446965=128},42={175793=32},43={197388=64},44={192123=512}} deleted_pools )
Importing pgid 44.c0s0
write_pg epoch 4865498 info 44.c0s0( empty local-lis/les=0/0 n=0 ec=192123/175799 lis/c=4865474/4851556 les/c/f=4865475/4851557/0 sis=4865493)
Import successful

I started osd.34 back up, and systemd says the OSD is running, but in the cluster map it is still down :/

root@server:~# systemctl status ceph-osd@34 -l
● ceph-osd@34.service - Ceph object storage daemon osd.34
     Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: enabled)
     Active: active (running) since Thu 2021-03-18 10:38:00 CET; 8min ago
    Process: 45388 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 34 (code=exited, sta>
   Main PID: 45392 (ceph-osd)
      Tasks: 60
     Memory: 856.2M
     CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@34.service
             └─45392 /usr/bin/ceph-osd -f --cluster ceph --id 34 --setuser ceph --setgroup ceph

Mar 18 10:38:00 server systemd[1]: Starting Ceph object storage daemon osd.34...
Mar 18 10:38:00 server systemd[1]: Started Ceph object storage daemon osd.34.
Mar 18 10:38:21 server ceph-osd[45392]: 2021-03-18T10:38:21.817+0100 7f41738d5dc0 -1 osd.34 5583546 log_to_mon>
Mar 18 10:38:21 server ceph-osd[45392]: 2021-03-18T10:38:21.825+0100 7f41738d5dc0 -1 osd.34 5583546 mon_cmd_ma>


Any idea?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
