[linux-audio-user] Re: sound & midi applications page, was Re: [Consortium] linuxaudio.org

On Sun, Feb 01, 2004 at 07:06:06 -0500, Paul Winkler wrote:
> On Sun, Feb 01, 2004 at 05:14:25PM +0000, Steve Harris wrote:
> > The RDF Primer is OK: http://www.w3.org/TR/rdf-primer/
> 
> OK. I've read the first 1/4 of it and skimmed the rest for now.
> lots to digest. 
> 
> A couple of questions:
> 
> - If you embed a domain name in all your URIs, does your stored
> rdf data need to be mass-updated if/when the site changes domain
> names? Or does appropriate use of Qnames make this a non-issue?

The URIs do not have to be resolvable. They're just unique-ish identifiers.
The fact that they sometimes look like URLs is a pain. I think there
should have been a 'namespace:' URI prefix, but it's a bit late now.

So to answer your question: don't update your URIs.
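
For illustration, a minimal sketch of the point, assuming Python's
rdflib and a made-up URI under a dead domain - nothing below ever
tries to resolve the URI over the network, it's purely a string key:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF

    # olddomain.example may not even resolve any more - doesn't matter,
    # the URI is only used as an opaque identifier.
    APPS = Namespace("http://olddomain.example/apps#")

    g = Graph()
    g.add((APPS.ardour, FOAF.name, Literal("Ardour")))

    # Lookup works on string identity of the URI, no network involved.
    for name in g.objects(APPS.ardour, FOAF.name):
        print(name)  # -> Ardour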
 
> - The primer is long on "how" and short on "why",
> even in the example applications section.
> Are you advocating RDF primarily to allow third parties
> to do unspecified cool stuff with the data?  Or for the
> internal implementation of the website? Or both?

Providing RDF data representing the things in the site is the most
important, but I now find it easier to run sites off RDF because the
site code tends to depend less on the data than with SQL. I guess it
depends how much SQL you've done and whether you are willing to change.
There are new things to learn, so it's understandable if you wouldn't want to.

If you're going to build RDF for your content, you may as well use it :)
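
Roughly what I mean, sketched in Python with rdflib (my choice for
illustration - the URIs and properties are invented): when the data
grows a new property, the query below keeps working, and an OPTIONAL
field simply comes back empty where it's absent. There's no ALTER
TABLE equivalent to run.

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        @prefix apps: <http://olddomain.example/apps#> .
        apps:ardour foaf:name "Ardour" ;
                    foaf:homepage <http://ardour.org/> .
        apps:jack   foaf:name "JACK" .
    """, format="turtle")

    rows = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name ?home WHERE {
            ?app foaf:name ?name .
            OPTIONAL { ?app foaf:homepage ?home }
        }
    """)
    for name, home in rows:
        print(name, home)  # JACK comes back with home = None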

> AFAICT the advantages lie with the former.  Could you give an 
> example of such a third-party app?  I'm not really coming up with 
> compelling use cases.

OK, take the LAD conference website. They might want to include a short
bio of each presenter - they can take all the project-related data from
<insert name of new site> and add a few paras of text. After the event has
run they could publish photos on the website and mark up who's shown in
each photo, and publish the photo and bio stuff (against the same URIs).
The original site can then load this new stuff into their KB and suddenly
you have a whole load of new data. It's like data for free.
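
The merge step itself is trivial - a sketch with rdflib and invented
URIs again: two sources publish triples about the same URI, and
loading both into one graph joins them up on URI identity, with no
import or resync code to write.

    from rdflib import Graph

    kb = Graph()

    # What the original site already knows about a presenter.
    kb.parse(data="""
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        <http://newsite.example/people#steve> foaf:name "Steve Harris" .
    """, format="turtle")

    # Third-party data published elsewhere, against the same URI.
    kb.parse(data="""
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        <http://newsite.example/people#steve>
            foaf:depiction <http://conf.example/photos/42.jpg> .
    """, format="turtle")

    # One resource, both facts: "data for free".
    print(kb.serialize(format="turtle"))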

You /can/ do all this stuff by web-scraping HTML into your internal
database format and resyncing with the source site every week, but who
would bother?

As an example, look at my entry in the KB at work:
http://triplestore.aktors.org/browse/?resource=http%3A%2F%2Fwww.ecs.soton.ac.uk%2Finfo%2F%23person-00384
I hardly wrote any of that; most of it came from other sources, e.g.
someone's online photo album and random databases published as RDF.
Click "show sources" (top right) to see where it all came from.

- Steve
