damn, good news for me, possibly bad news for you :)
what is the wear leveling (smartctl -a /dev/sdX) - the attribute near the end of the attribute list...
thx
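
(for reference - on Samsung drives the relevant attribute is usually 177 Wear_Leveling_Count, on Intel 233 Media_Wearout_Indicator; something like the line below should pull just that value, but treat the attribute names as an assumption and check the full smartctl output for your drive:)

    # dump SMART attributes and filter out the wear indicator
    # (names differ per vendor - Samsung: Wear_Leveling_Count, Intel: Media_Wearout_Indicator)
    smartctl -a /dev/sdX | grep -i -E 'wear_leveling|media_wearout'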
On 17 April 2015 at 21:12, Krzysztof Nowicki <krzysztof.a.nowicki@xxxxxxxxx> wrote:
I have two of them in my cluster (plus one 256GB version) for about half a year now. So far so good. I'll be keeping a closer eye on them.
On Fri, 17 Apr 2015 at 21:07, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

nah... Samsung 850 PRO 128GB - dead after 3 months - 2 of these died... wear leveling is 96%, so only 4% wasted... (yes, I know these are not enterprise, etc...)

On 17 April 2015 at 21:01, Josef Johansson <josef86@xxxxxxxxx> wrote:

Tough luck, hope everything comes up OK afterwards. What models are the SSDs?
/Josef
On 17 Apr 2015 20:05, "Andrija Panic" <andrija.panic@xxxxxxxxx> wrote:

The SSD that hosted the journals for 6 OSDs died - 2 x SSD died, so 12 OSDs are down, and rebalancing is about to finish... after which I need to fix the OSDs.

On 17 April 2015 at 19:01, Josef Johansson <josef@xxxxxxxxxxx> wrote:

Hi,

Did 6 other OSDs go down when re-adding?

/Josef

On 17 Apr 2015, at 18:49, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

12 OSDs down - I expect less work with removing and re-adding the OSDs?
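
(for reference, the usual remove/re-add cycle per OSD is roughly the following - a sketch only; the device names in the ceph-disk step are placeholders to adapt to the actual layout:)

    # remove the dead OSD from the cluster
    ceph osd out $ID
    service ceph stop osd.$ID              # if it is still running
    ceph osd crush remove osd.$ID
    ceph auth del osd.$ID
    ceph osd rm $ID
    # re-create it on the data disk, with its journal partition on the new SSD
    # (/dev/sdDATA and /dev/sdSSD1 are placeholders)
    ceph-disk prepare /dev/sdDATA /dev/sdSSD1
    ceph-disk activate /dev/sdDATA1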
On Apr 17, 2015 6:35 PM, "Krzysztof Nowicki" <krzysztof.a.nowicki@xxxxxxxxx> wrote:

Why not just wipe out the OSD filesystem, run ceph-osd --mkfs with the existing OSD UUID, copy the keyring and let it populate itself?

On Fri, 17 Apr 2015 at 18:31, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

Thx guys, that's what I will be doing in the end.
Cheers
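
(a rough sketch of what Krzysztof describes - wipe, re-run mkfs with the same UUID, restore the keyring; the XFS filesystem, device name and UUID lookup below are assumptions to adapt:)

    # 1. note the OSD's UUID before wiping (last field of its line in 'ceph osd dump')
    ceph osd dump | grep "^osd\.$ID "
    # 2. stop the OSD and save its existing key
    service ceph stop osd.$ID
    cp /var/lib/ceph/osd/ceph-$ID/keyring /root/osd-$ID-keyring
    # 3. wipe and recreate the data filesystem (XFS on /dev/sdY1 assumed)
    umount /var/lib/ceph/osd/ceph-$ID
    mkfs.xfs -f /dev/sdY1
    mount /dev/sdY1 /var/lib/ceph/osd/ceph-$ID
    # 4. recreate the OSD data dir with the same id and UUID, put the key back, start it
    ceph-osd -i $ID --mkfs --osd-uuid <UUID-from-step-1>
    cp /root/osd-$ID-keyring /var/lib/ceph/osd/ceph-$ID/keyring
    service ceph start osd.$ID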
On Apr 17, 2015 6:24 PM, "Robert LeBlanc" <robert@xxxxxxxxxxxxx> wrote:

Delete and re-add all six OSDs.

On Fri, Apr 17, 2015 at 3:36 AM, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:

Hi guys,

I have 1 SSD that hosted the journals for 6 OSDs, and it is dead, so 6 OSDs went down and ceph rebalanced etc.

Now I have a new SSD inside, and I will partition it etc - but I would like to know how to proceed with recreating the journals for those 6 OSDs that are down now.

Should I flush the journals (flush to where - the journals don't exist any more...?), or just recreate the journals from scratch (making symbolic links again: ln -s /dev/$DISK$PART /var/lib/ceph/osd/ceph-$ID/journal) and start the OSDs?

I expect the following procedure, but would like confirmation please:

rm -f /var/lib/ceph/osd/ceph-$ID/journal (remove the old sym link)
ln -s /dev/SDAxxx /var/lib/ceph/osd/ceph-$ID/journal
ceph-osd -i $ID --mkjournal
ll /var/lib/ceph/osd/ceph-$ID/journal
service ceph start osd.$ID

Any thoughts greatly appreciated!

Thanks,
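
(the same steps wrapped in a loop over the six affected OSDs - purely a sketch; the OSD ids and the id-to-partition mapping are made-up placeholders:)

    # recreate the journal on the new SSD for each affected OSD and start it
    for ID in 10 11 12 13 14 15; do            # placeholder OSD ids
        PART=/dev/sda$((ID - 9))               # placeholder journal partition per OSD
        rm -f /var/lib/ceph/osd/ceph-$ID/journal
        ln -s $PART /var/lib/ceph/osd/ceph-$ID/journal
        ceph-osd -i $ID --mkjournal
        service ceph start osd.$ID
    done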
--
Andrija Panić
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com