----- "Kevin Fenzi" <kevin@xxxxxxxxx> wrote:
> So, I got to looking at search engines again the other day. In
> particular the horrible horrible mediawiki one we are using on the
> wiki.
>
> This pointed me to sphinx.
>
> - There is a mediawiki sphinx plugin. (needs packaging)
> - sphinx is c++ and already packaged.
> - sphinx uses mysql directly to index the database contents.
> - You can pass other data into it via an xml format. This could be a
>   pain for any non-wiki setups.
>
> It was noted that the new tagger application uses xapian as its
> search engine.
>
> - xapian is also c++
> - xapian has a web crawler/indexer (omega) that could index our other
>   stuff more easily than sphinx.
> - There's no mediawiki plugin for xapian, but we could point the wiki
>   search box to a site-wide search using xapian.
>
> So, there are tradeoffs either way.
>
> Would anyone care to lead an effort to test these two?
> xapian would probably be easy to test from anywhere.
> sphinx might require some access to our mediawiki database, but you
> could also just set up a new mediawiki, the plugin, and sphinx and
> see how it works there.
>
> If no one steps up I can look at doing it next week. ;)

My concern has always been that the wiki content search is horrible,
as Kevin also mentioned. From the description provided, sphinx sounds
like the best out-of-the-box tool for that job.

Since xapian would be acting as a crawler, I have a couple of concerns
we need to be sure to test:

- Will it find pages on the wiki that are already hard to discover
  because they are not linked to from anywhere?
- Are we sure it will work on docs.fp.o and its JavaScript navigation
  menu?

I am willing to help out with testing if someone else can take the
lead on it.

--
Bob
_______________________________________________
infrastructure mailing list
infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/infrastructure
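P.S. For whoever picks up the sphinx test: the "mysql directly" indexing Kevin describes is driven by a source block in sphinx.conf. The sketch below is just an illustration of the shape of that config; the host, credentials, and paths are placeholders, and the join reflects the classic MediaWiki page/revision/text schema, so check it against the actual database before using it.

```ini
# Hypothetical sphinx.conf fragment for indexing mediawiki page text
# straight out of mysql. All connection details are placeholders.
source wiki_pages
{
    type      = mysql
    sql_host  = localhost
    sql_user  = sphinx
    sql_pass  = CHANGEME
    sql_db    = mediawiki

    # MediaWiki keeps page text in the text table, reached via
    # page -> revision -> text; this query flattens that join so
    # sphinx sees one row per page.
    sql_query = \
        SELECT page.page_id, page.page_title, text.old_text \
        FROM page \
        JOIN revision ON revision.rev_page = page.page_id \
        JOIN text ON text.old_id = revision.rev_text_id
}

index wiki_pages
{
    source = wiki_pages
    path   = /var/lib/sphinx/wiki_pages
}
```

Kevin's other point (passing non-wiki data in via xml) would mean swapping `type = mysql` for an xmlpipe2 source, which is where I suspect it gets painful for the rest of our sites.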