Re: replace dead SSD journal

Checked the SMART status. All of the Samsungs have a Wear Leveling Count of 99 (raw values 29, 36 and 15). I'm going to have to monitor them - I could afford losing one of them, but losing two would mean data loss.
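For monitoring, the normalized and raw values can be pulled out of `smartctl` output. A minimal sketch, parsing a sample attribute line rather than a live device (the column layout shown is an assumption based on typical Samsung SMART output - adjust for your drives):

```shell
# Sample Wear_Leveling_Count line as printed by `smartctl -A` on a Samsung
# SSD (assumed layout: normalized VALUE is field 4, RAW_VALUE is the last field).
sample='177 Wear_Leveling_Count 0x0013 099 099 000 Pre-fail Always - 29'

normalized=$(echo "$sample" | awk '{print $4}')  # counts down from 100 as the drive wears
raw=$(echo "$sample" | awk '{print $NF}')        # raw wear-leveling counter

echo "normalized=$normalized raw=$raw"
# prints: normalized=099 raw=29

# On a live system, something like:
#   smartctl -A /dev/sdX | grep Wear_Leveling_Count
```

A cron job running the live version and alerting when the normalized value drops below a threshold would cover the "monitor them" part.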

On Fri, 17 Apr 2015 at 21:22, Josef Johansson <josef86@xxxxxxxxx> wrote:

The massive rebalancing does not affect the SSDs in a good way either. But from what I've gathered, the Pro should be fine. Are there massive amounts of write errors in the logs?

/Josef

On 17 Apr 2015 21:07, "Andrija Panic" <andrija.panic@xxxxxxxxx> wrote:
Nah... Samsung 850 PRO 128GB - dead after 3 months - 2 of these died... The wear level is at 96%, so only 4% worn... (yes, I know these are not enterprise drives, etc...)

On 17 April 2015 at 21:01, Josef Johansson <josef86@xxxxxxxxx> wrote:

Tough luck, hope everything comes up OK afterwards. What models are the SSDs?

/Josef

On 17 Apr 2015 20:05, "Andrija Panic" <andrija.panic@xxxxxxxxx> wrote:
An SSD that hosted journals for 6 OSDs died - 2 such SSDs died in total, so 12 OSDs are down, and the rebalancing is about to finish... after which I need to fix the OSDs.

On 17 April 2015 at 19:01, Josef Johansson <josef@xxxxxxxxxxx> wrote:
Hi,

Did 6 other OSDs go down when re-adding?

/Josef

On 17 Apr 2015, at 18:49, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

12 OSDs down - I expect that to be less work than removing and re-adding the OSDs?

On Apr 17, 2015 6:35 PM, "Krzysztof Nowicki" <krzysztof.a.nowicki@xxxxxxxxx> wrote:
Why not just wipe the OSD filesystem, run ceph-osd --mkfs with the existing OSD UUID, copy the keyring and let it repopulate itself?

On Fri, 17 Apr 2015 at 18:31, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

Thanks guys, that's what I will be doing in the end.

Cheers

On Apr 17, 2015 6:24 PM, "Robert LeBlanc" <robert@xxxxxxxxxxxxx> wrote:
Delete and re-add all six OSDs.

On Fri, Apr 17, 2015 at 3:36 AM, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:
Hi guys,

I have 1 SSD that hosted the journals for 6 OSDs, and it is dead, so 6 OSDs went down and Ceph rebalanced, etc.

Now I have a new SSD installed and I will partition it etc. - but I would like to know how to proceed with the journal recreation for those 6 OSDs that are down.

Should I flush the journals (to where? the journals don't exist any more...), or just recreate the journals from scratch (making the symbolic links again: ln -s /dev/$DISK$PART /var/lib/ceph/osd/ceph-$ID/journal) and start the OSDs?

I expect the following procedure, but would like confirmation please:

rm -f /var/lib/ceph/osd/ceph-$ID/journal   # remove the stale symlink
ln -s /dev/SDAxxx /var/lib/ceph/osd/ceph-$ID/journal
ceph-osd -i $ID --mkjournal
ls -l /var/lib/ceph/osd/ceph-$ID/journal   # verify the symlink
service ceph start osd.$ID
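For reference, the steps above can be collected into a small per-OSD script. This is a dry-run sketch that only prints the commands it would execute; the OSD id and partition below are placeholders, not values from the thread, and since the old SSD is dead there is nothing to flush, so --mkjournal simply starts a fresh journal:

```shell
# Dry-run sketch of the journal recreation procedure for one OSD: build the
# commands and print them instead of executing them (replace the heredoc with
# the actual commands once the values are verified).
ID=12                      # placeholder OSD id
PART=/dev/sdX1             # placeholder journal partition on the new SSD
JOURNAL=/var/lib/ceph/osd/ceph-$ID/journal

cat <<EOF
rm -f $JOURNAL
ln -s $PART $JOURNAL
ceph-osd -i $ID --mkjournal
ls -l $JOURNAL
service ceph start osd.$ID
EOF
```

Looping this over the six affected OSD ids (with the matching journal partitions) would cover the whole replacement SSD in one pass.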

Any thoughts greatly appreciated!

Thanks,

--

Andrija Panić

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com










