Re: [Warning: Forged Email] Ceph 10.2.11 - Status not working

Hi Oliver, Peter

Thanks. About an hour after my second email I sat back, thought about it
some more, and realised this was the case.

I've also fixed the Ceph issue: a simple set of problems compounded into
the ceph-mons not working correctly.

1. We had a power failure 7 days ago, which for some reason the C3000
chassis did not tell me about.

2. The management platform does not check whether the file system is
mounted before putting backups on it.

3. Backups on two systems were therefore being written onto the disk
space used by Ceph.

4. As a result, two ceph-mons were having issues but not reporting why
(normally there would be a disk space warning).

5. After a mon restart attempt, two mons would not run, but again did not
report why until I raised the debug level (see the commands sketched below).
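
For the record, the checks that finally showed the problem, as a rough
sketch (the backup mount point /srv/backup here is only an example, and
mon.1 is the mon on blade3 per my ceph.conf; adjust for your own setup):

    # is the backup filesystem actually mounted, and is the mon's disk full?
    mountpoint /srv/backup
    df -h /var/lib/ceph/mon

    # run the mon in the foreground with verbose logging to see why it won't start
    ceph-mon -i 1 -d --debug_mon 20
    # or raise the level in ceph.conf under [mon] and restart the mon:
    #     debug mon = 20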


Again, thanks. I've had a hell of a few weeks and I think the stress is
getting to me; I'm not thinking clearly when new issues hit me.

Really need the next few weeks to go well so I can get some de-stress time.

Mike





On 18/12/18 1:44 pm, Oliver Freyermuth wrote:
> That's kind of unrelated to Ceph, but since you wrote two mails already,
> and I believe it is caused by the mailing list software for ceph-users... 
>
> Your original mail distributed via the list ("Ceph 10.2.11 - Status not working") did 
> *not* have the forged warning. 
> Only the subsequent "Re:"-replies by yourself had it. That also matches what you will find in the archives. 
>
> So my guess is that "[Warning: Forged Email]" was added by your own mailing system for the mail incoming to you after it was distributed by the ceph-users list server. 
>
> That's probably because the mailman sending mail for ceph-users leaves the "From:" intact,
> and that contains your domain (oeg.com.au). So the mailman server for ceph-users is "forging",
> since it sends mail with "From: mike@xxxxxxxxxx" but from its own IP, hence violating your SPF record. 
> It also breaks DKIM by adding the footer (ceph-users mailing list, ceph-users@xxxxxxxxxxxxxx, http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com)
> thus manipulating the body of the mail. 
>
> So in short: The mailman used for ceph-users breaks both SPF and DKIM (most mailing lists still do that). My guess is that your mailing system
> adds a tag "[Warning: Forged Email]" at least for mail with a "From:" matching your domain in case SPF and / or DKIM is broken. 
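>
> (For reference, the published SPF records can be checked directly; this is only
> a sketch using the domains mentioned above:
>
>     dig +short TXT oeg.com.au        # the SPF record published for your domain
>     dig +short TXT lists.ceph.com    # what the list's sending domain publishes
>
> The DKIM selector would need checking too, but its name depends on your setup.)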
>
> If somebody wants to "fix" this: The reason is sadly that SPF and DKIM are not well suited for mailing lists :-(. But workarounds exist. 
> Newer mailing list software (including modern mailman releases) allows manipulating the "From:" before sending out mail,
> e.g. writing in the header:
>   From: "Mike O'Connor (via ceph-users list)" <ceph-users@xxxxxxxxxxxxxx>
>   Reply-To: "Mike O'Connor" <mike@xxxxxxxxxx>
> With this, SPF is fine, since the mail server sending the mail is allowed to do so for @lists.ceph.com. Users can still reply just fine. 
> Concerning DKIM, there's also a middle ground. The cleanest (I believe) is pruning all previous DKIM signatures on the list server and re-signing before sending it out. 
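>
> (Where that From-rewriting lives, as a sketch only; option names vary by Mailman version:
>
>     # Mailman 2.1.18+: Privacy options -> Sender filters in the admin UI,
>     # or via bin/config_list with an input file containing something like:
>     dmarc_moderation_action = 1      # 1 = "Munge From"; check your version's values
>
>     # Mailman 3: roughly the equivalent list setting
>     dmarc_mitigate_action = munge_from
>
> Exact names and values should be checked against the Mailman actually in use.)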
>
> S/MIME will still break by adding the footer, but that's another matter. 
>
> Cheers,
> 	Oliver
>
> On 18.12.18 at 01:34, Mike O'Connor wrote:
>> Hmm, I wonder why the list is saying my email is forged, and what I have
>> set up wrong.
>>
>> My email is sent via an outbound spam filter, but I was sure I had the
>> SPF set correctly.
>>
>> Mike
>>
>> On 18/12/18 10:53 am, Mike O'Connor wrote:
>>> Hi All
>>>
>>> I have a Ceph cluster which has been working without issues for about 2
>>> years now; it was upgraded to 10.2.11 about 6 months ago.
>>>
>>> root@blade3:/var/lib/ceph/mon# ceph status
>>> 2018-12-18 10:42:39.242217 7ff770471700  0 -- 10.1.5.203:0/1608630285 >>
>>> 10.1.5.207:6789/0 pipe(0x7ff768000c80 sd=4 :0 s=1 pgs=0 cs=0 l=1
>>> c=0x7ff768001f90).fault
>>> 2018-12-18 10:42:45.242745 7ff770471700  0 -- 10.1.5.203:0/1608630285 >>
>>> 10.1.5.207:6789/0 pipe(0x7ff7680051e0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>>> c=0x7ff768002410).fault
>>> 2018-12-18 10:42:51.243230 7ff770471700  0 -- 10.1.5.203:0/1608630285 >>
>>> 10.1.5.207:6789/0 pipe(0x7ff7680051e0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>>> c=0x7ff768002f40).fault
>>> 2018-12-18 10:42:54.243452 7ff770572700  0 -- 10.1.5.203:0/1608630285 >>
>>> 10.1.5.205:6789/0 pipe(0x7ff768000c80 sd=4 :0 s=1 pgs=0 cs=0 l=1
>>> c=0x7ff768008060).fault
>>> 2018-12-18 10:42:57.243715 7ff770471700  0 -- 10.1.5.203:0/1608630285 >>
>>> 10.1.5.207:6789/0 pipe(0x7ff7680051e0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>>> c=0x7ff768003580).fault
>>> 2018-12-18 10:43:03.244280 7ff7781b9700  0 -- 10.1.5.203:0/1608630285 >>
>>> 10.1.5.205:6789/0 pipe(0x7ff7680051e0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>>> c=0x7ff768003670).fault
>>>
>>> All systems can ping each other. I simply cannot see why it's failing.
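>>>
>>> (For reference, the mon ports themselves can be checked directly; this is only
>>> a sketch, with hosts and ports taken from the fault messages above, and mon.1
>>> being the local mon on blade3 per the config below.)
>>>
>>>     nc -zv 10.1.5.205 6789
>>>     nc -zv 10.1.5.207 6789
>>>     ceph daemon mon.1 mon_status    # query the local mon over its admin socket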
>>>
>>>
>>> ceph.conf
>>>
>>> [global]
>>>      auth client required = cephx
>>>      auth cluster required = cephx
>>>      auth service required = cephx
>>>      cluster network = 10.1.5.0/24
>>>      filestore xattr use omap = true
>>>      fsid = 42a0f015-76da-4f47-b506-da5cdacd030f
>>>      keyring = /etc/pve/priv/$cluster.$name.keyring
>>>      osd journal size = 5120
>>>      osd pool default min size = 1
>>>      public network = 10.1.5.0/24
>>>      mon_pg_warn_max_per_osd = 0
>>>
>>> [client]
>>>      rbd cache = true
>>> [osd]
>>>      keyring = /var/lib/ceph/osd/ceph-$id/keyring
>>>      osd max backfills = 1
>>>      osd recovery max active = 1
>>>      osd_disk_threads = 1
>>>      osd_disk_thread_ioprio_class = idle
>>>      osd_disk_thread_ioprio_priority = 7
>>> [mon.2]
>>>      host = blade5
>>>      mon addr = 10.1.5.205:6789
>>> [mon.1]
>>>      host = blade3
>>>      mon addr = 10.1.5.203:6789
>>> [mon.3]
>>>      host = blade7
>>>      mon addr = 10.1.5.207:6789
>>> [mon.0]
>>>      host = blade1
>>>      mon addr = 10.1.5.201:6789
>>> [mds]
>>>          mds data = /var/lib/ceph/mds/mds.$id
>>>          keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring
>>> [mds.0]
>>>          host = blade1
>>> [mds.1]
>>>          host = blade3
>>> [mds.2]
>>>          host = blade5
>>> [mds.3]
>>>          host = blade7
>>>
>>>
>>> Any ideas? Do you need more information?
>>>
>>>
>>> Mike
>>>
>>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



