Re: Missing ceph data

Hi,

If you can verify which data has been removed, and the client that removed it is still connected, you might be able to find out who was responsible. Do you know which files in which directories are missing? Does that perhaps already point to one or several users/clients? You can query the MDS daemons and inspect the session output; it shows which directories are mounted (if you use the kernel client):

quincy-1:~ # ceph tell mds.quincy-1.yrgpqm session ls
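
If you want to narrow that down quickly, you could filter the JSON that "session ls" prints, e.g. with jq. This is just a sketch; the exact field names can differ a bit between releases and between kernel and fuse clients:

quincy-1:~ # ceph tell mds.quincy-1.yrgpqm session ls | \
    jq -r '.[] | [.id, .client_metadata.hostname, .client_metadata.root] | @tsv'

That should print one line per session with the session id, the client hostname and the directory that client has mounted as its root.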


I doubt that you'll find much in the logs if you don't have debug logging enabled, but it might be worth checking anyway.
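
For next time, you could temporarily raise the MDS debug level while you investigate, roughly like this (just a sketch; it produces a lot of log output, so turn it back down when you're done):

quincy-1:~ # ceph config set mds debug_mds 10
quincy-1:~ # ceph config rm mds debug_mds    # revert to the default afterwards

With a higher debug_mds level the MDS logs incoming client requests, including unlinks, so a future deletion should show up together with the client session that issued it.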

Quoting dhivagar selvam <s.dhivagar.cse@xxxxxxxxx>:

Hi,

We are not using cephfs snapshots. Is there any other way to find this out?

On Thu, May 30, 2024 at 5:20 PM Eugen Block <eblock@xxxxxx> wrote:

Hi,

I've never heard of automatic data deletion. Maybe just some snapshots
were removed? Or someone deleted data on purpose because of the
nearfull state of some OSDs? And there's no trash function for cephfs
(for rbd there is). Do you use cephfs snapshots?
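
If you're not sure, snapshots live in the hidden snapshot directory on the mounted filesystem (".snap" by default, unless client_snapdir was changed), so a quick check on any client could look like this (the mount point is just an example):

ls /mnt/cephfs/.snap
ls /mnt/cephfs/some/subdir/.snap

If those listings are empty for the directories in question, no snapshots were taken there.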


Quoting Prabu GJ <gjprabu@xxxxxxxxxxxx>:

> Hi Team,
>
>
> We are running Ceph Octopus with a total disk size of 136 TB,
> configured with two replicas. Currently, our usage is 57 TB, and the
> available size is 5.3 TB. An incident occurred yesterday in which
> around 3 TB of data was deleted automatically. Upon analysis, we
> couldn't find the reason for the deletion. All OSDs are functioning
> properly and actively running.
>
> We have 3 MDS daemons and have tried restarting all MDS services. Is
> there any way to recover that data? Can anyone please help us find the
> issue?
>
>
>
>
>
> cluster:
>
>     id:     0d605d58-5caf-4f76-b6bd-e12402a22296
>
>     health: HEALTH_WARN
>
>             insufficient standby MDS daemons available
>
>             5 nearfull osd(s)
>
>             3 pool(s) nearfull
>
>             1 pool(s) have non-power-of-two pg_num
>
>
>
>   services:
>
>     mon: 4 daemons, quorum download-mon3,download-mon4,download-mon1,download-mon2 (age 14h)
>
>     mgr: download-mon2(active, since 14h), standbys: download-mon1, download-mon3
>
>     mds: integdownload:2 {0=download-mds3=up:active,1=download-mds1=up:active}
>
>     osd: 39 osds: 39 up (since 16h), 39 in (since 4d)
>
>
>
>   data:
>
>     pools:   3 pools, 1087 pgs
>
>     objects: 71.76M objects, 51 TiB
>
>     usage:   105 TiB used, 31 TiB / 136 TiB avail
>
>     pgs:     1087 active+clean
>
>
>
>   io:
>
>     client:   414 MiB/s rd, 219 MiB/s wr, 513 op/s rd, 1.22k op/s wr
>
> ================================================================
>
> ID  HOST             USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
> 0   download-osd1   2995G   581G     14     4785k      6     6626k  exists,up
> 1   download-osd2   2578G   998G     84     3644k     18     10.1M  exists,up
> 2   download-osd3   3093G   483G     17     5114k      5     4152k  exists,nearfull,up
> 3   download-osd4   2757G   819G     12      996k      2     4107k  exists,up
> 4   download-osd5   2889G   687G     28     3355k     20     8660k  exists,up
> 5   download-osd6   2448G  1128G    183     3312k     10     9435k  exists,up
> 6   download-osd7   2814G   762G      7     1667k      4     6354k  exists,up
> 7   download-osd8   2872G   703G     14     1672k     15     10.5M  exists,up
> 8   download-osd9   2577G   999G     10     6615k      3     6960k  exists,up
> 9   download-osd10  2651G   924G     16     4736k      3     7378k  exists,up
> 10  download-osd11  2889G   687G     15     4810k      6     8980k  exists,up
> 11  download-osd12  2912G   664G     11     2516k      2     4106k  exists,up
> 12  download-osd13  2785G   791G     74     4643k     11     3717k  exists,up
> 13  download-osd14  3150G   426G    214     6133k      4     7389k  exists,nearfull,up
> 14  download-osd15  2728G   848G     11     4959k      4     6603k  exists,up
> 15  download-osd16  2682G   894G     13     3170k      3     2503k  exists,up
> 16  download-osd17  2555G  1021G     53     2183k      7     5058k  exists,up
> 17  download-osd18  3013G   563G     18     3497k      3     4427k  exists,up
> 18  download-osd19  2924G   651G     24     3534k     12     10.4M  exists,up
> 19  download-osd20  3003G   573G     19     5149k      3     2531k  exists,up
> 20  download-osd21  2757G   819G     16     3707k      9     9816k  exists,up
> 21  download-osd22  2576G   999G     15     2526k      8     7739k  exists,up
> 22  download-osd23  2758G   818G     13     4412k     16     7125k  exists,up
> 23  download-osd24  2862G   714G     18     4424k      6     5787k  exists,up
> 24  download-osd25  2792G   783G     16     1972k      9     9749k  exists,up
> 25  download-osd26  2397G  1179G     14     4296k      9     12.0M  exists,up
> 26  download-osd27  2308G  1267G      8     3149k     22     6280k  exists,up
> 27  download-osd29  2732G   844G     12     3357k      3     7372k  exists,up
> 28  download-osd28  2814G   761G     11      476k      5     3316k  exists,up
> 29  download-osd30  3069G   507G     15     9043k     17     5628k  exists,nearfull,up
> 30  download-osd31  2660G   916G     15      841k     14     7798k  exists,up
> 31  download-osd32  2037G  1539G     10     1153k     15     3719k  exists,up
> 32  download-osd33  3116G   460G     20     7704k     12     9041k  exists,nearfull,up
> 33  download-osd34  2847G   728G     19     5788k      4     9014k  exists,up
> 34  download-osd35  3088G   488G     17     7178k      7     5730k  exists,nearfull,up
> 35  download-osd36  2414G  1161G     27     2017k     14     7612k  exists,up
> 36  download-osd37  2760G   815G     17     4292k      5     10.6M  exists,up
> 37  download-osd38  2679G   897G     12     2610k      5     10.0M  exists,up
> 38  download-osd39  3013G   563G     18     1804k      7     9235k  exists,up
>
>
>
>
>
> Regards
>
> Prabu GJ





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



