Category Archives: Semantics

Ontology Game: Humans Matching Concepts

A new “ontology game” has recently been announced: a “game with a purpose” designed to get humans to categorize objects properly according to a formal ontology.

How it Works

The game operates much like Google’s image tagging application: pairs of users who do not know one another are shown the abstract from a Wikipedia page, and each chooses the category in an upper ontology that best describes the article. (E.g., does it correspond to an abstract concept? An agent? A happening?) Both players earn points when they choose the same category for an article. As the game goes on, the categorization gets more and more specific until it “bottoms out” in the upper ontology. At that point, you jump to a new article and start the process over again.
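The consensus mechanic described above can be sketched in a few lines. This is a toy illustration, not the actual OntoGame implementation; the taxonomy fragment and function names are made up for the example.

```python
# A toy fragment of an upper-ontology taxonomy: category -> subcategories.
# Categorization "bottoms out" when a category has no subcategories.
TAXONOMY = {
    "thing": ["abstract concept", "agent", "happening"],
    "agent": ["person", "organization"],
    "person": [],  # leaf: jump to a new article from here
}

def play_round(choice_a, choice_b, score=0):
    """Two anonymous players each pick a category; points only on agreement."""
    if choice_a == choice_b:
        return score + 1, choice_a   # agreed: score a point and descend
    return score, None               # disagreed: no points this round

score, agreed = play_round("agent", "agent")
assert score == 1 and agreed == "agent"
# The next round would offer TAXONOMY["agent"] = ["person", "organization"].
```

Because only matching answers score, neither player can game the system alone, which is what makes the agreed-upon categorizations trustworthy as data.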


In terms of gameplay, it feels a little rough, in part because the game doesn’t choose articles very intelligently. (In one case, I got the same article twice in a row.) Also, after you tag 5-6 different articles, you have a good working knowledge of the taxonomy of the upper ontology, and the game becomes less fun as it devolves into categorization along lines you’ve seen many times before. The key difference from Google’s image tagging game is that in Google’s game, people enter free-form words, so your input is almost limitless. Oh, and one other thing – in order to categorize properly, you have to read the 2-3 sentence descriptions of what the categories mean, which can take some time on the first pass when you have 6-7 categories to choose from.

These don’t appear to me to be fatal problems for the game, though – just teething problems. It could be fun if the data set were widened substantially and the category choice perhaps narrowed a bit. And of course, in the background, they’re building an interesting data set mapping Wikipedia articles to high-level concepts of what they represent.


Here’s the original announcement email from Martin Hepp at DERI:

We are proud to release the first one in our series of online computer
games that turn core tasks of weaving the Semantic Web into challenging
and interesting entertainment – check it out today at

A very early paper written in late Summer is in Springer LNCS Vol.
4806, 2007, pp. 1222-1232 [1].

A complete Technical Report including our quantitative evidence and video footage will be released shortly on our project Web page at

The next series of games for other tasks of building the Semantic Web is already in the pipeline, so please stay tuned 🙂

Please subscribe to our OntoGame mailing list if you want to be informed once new gaming scenarios or results are available. See [2] for details on how to subscribe.

What is it good for?
Despite significant advancement in technology and tools, building ontologies, annotating data, and aligning multiple ontologies remain tasks that highly depend on human intelligence, both as a source of domain expertise and for making conceptual choices. This means that people need to contribute time, and sometimes other resources, to this endeavor.

As a novel solution, we have proposed to masquerade core tasks of weaving the Semantic Web behind on-line, multi-player game scenarios, in order to create proper incentives for humans to contribute. In doing so, we adopt the findings from the already famous “games with a purpose” by von Ahn, who has shown that presenting a useful task, which requires human intelligence, in the form of an on-line game can motivate a large number of people to work intensively on this task, and to do so for free.

Since our first experiments in May 2007, we have gained preliminary evidence that (1) users are willing to dedicate a lot of time to those games, (2) are able to produce high-quality conceptual choices, and, by doing so, (3) can unknowingly weave the Semantic Web.

Acknowledgments: OntoGame is possible only thanks to the hard work of the OntoGame team – special thanks to Michael Waltl, Werner Huber, Andreas Klotz, Roberta Hart-Hiller, and David Peer for their dedication and continuous contributions! The work on OntoGame has been funded in part by the Austrian BMVIT/FFG under the FIT-IT Semantic Systems project myOntology (grant no. 812515/9284), which we gratefully acknowledge.

And now… play and enjoy!

Best wishes

Martin Hepp and Katharina Siorpaes


Ontology Matching Book

Ontology Matching

Euzenat and Shvaiko’s book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching finds correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence.

With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the presented work and techniques can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The book presents the state of the art and the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.

Missing Data and Causal Chains

One of my colleagues today suggested an interesting way of looking at the problem of missing data. He referred back to a lot of work on process modeling, where people in essence try to “reverse engineer” existing business processes.

Let’s say you discover a process and are able to identify steps 1, 2, and 4. Obviously there was a step 3 somewhere, and the name of the game from that point becomes locating and describing step 3 – the missing data – in the process.

More broadly, steps in a process or information that is gathered are part of some causal chain. If you can identify the causal chain that the missing data belongs to, you at least have a framework for understanding how the missing data relates to other observations, and a starting point for asking the question “what does it mean that this information is missing?”  This causal chain might be thought of as the context surrounding the missing data.
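The “missing step” idea above can be made concrete with a trivial sketch. The process names here are hypothetical; the point is only that once you know the causal chain the data belongs to, locating the gap is mechanical.

```python
def missing_steps(observed, expected_chain):
    """Return the steps of the expected causal chain absent from the observations."""
    seen = set(observed)
    return [step for step in expected_chain if step not in seen]

# A hypothetical reverse-engineered business process (the causal chain).
process = ["receive order", "check stock", "pick items", "ship order"]

# What the analyst actually observed: steps 1, 2, and 4.
observed = ["receive order", "check stock", "ship order"]

assert missing_steps(observed, process) == ["pick items"]
```

The hard part, of course, is identifying the right causal chain in the first place; once you have it, it frames the question “what does it mean that this information is missing?”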

Judea Pearl and Causality

Judea Pearl, who is one of the more influential authors on knowledge representation, causal reasoning, and AI, wrote a book in 2000 titled “Causality: Models, Reasoning, and Inference”. (Dr. Pearl is also, as it happens, Daniel Pearl’s father.)

On his page that describes why he wrote the book, he relates an interesting anecdote about the scientific community’s avoidance of discussion about causality, and how that is a problem. His book was intended to help “students of statistics who wonder why instructors are reluctant to discuss causality in class; and students of epidemiology who wonder why simple concepts such as ‘confounding’ are so terribly complex when expressed mathematically”. His summation:

“Causality is not mystical or metaphysical.
It can be understood in terms of simple processes,
and it can be expressed in a friendly mathematical
language, ready for computer analysis.”

Comparing Upper Ontology Definitions of “Information Resource”

Copia has an interesting post about comparing the definitions of “information resource” in various upper ontologies.

Just looking at the isolated semantic differences between a simple concept across all of these upper ontologies is a bit of a reality check. I have read papers that claim the data interoperability problem will be solved by simply mapping all of the upper ontologies together and having each domain describe its data in terms of a particular ontology that uses the upper ontology of its choice. Looking at the complexity and subtlety of just “information resource” across these different conceptualizations makes that approach look pretty silly.

Ontology Annotation Use Cases

I recently found another interesting link dealing with ontology evolution and annotation. This paper is primarily interested in annotation from the perspective of maintenance and evolution. Indeed, the Protégé collaborative software that’s being developed is focused on making suggestions for changes to a single existing ontology.

Work progressing on Ontrospect…

As I posted about before, I have a little project I’m working on. Over the past couple of days, I managed to implement annotations in the system – basically the ability to add an arbitrary RDF triple to a separate model that “describes” another model. For example, take the pizza ontology. Of course, that file is read-only because it’s on someone else’s site. Annotations in Ontrospect would allow you to add extra triples to another model (let’s call it “Annotations: Pizza”) that make statements about resources in the pizza ontology.

The annotation could be a regular OWL or RDF assertion (such as indicating the type of a resource, or claiming that something is a subclass of something else) but it could also be a comment, a text tag, or more importantly a link to a completely different ontology.
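To make the idea concrete, here is a minimal sketch using plain (subject, predicate, object) tuples rather than a real RDF library. The URIs and triples are illustrative, not the actual pizza ontology namespace or Ontrospect’s data model.

```python
PIZZA = "http://example.org/pizza#"  # stands in for the real pizza ontology's namespace

# The read-only model, as published on someone else's site.
pizza_model = {
    (PIZZA + "Margherita", "rdf:type", "owl:Class"),
}

# "Annotations: Pizza" – a separate model making statements about the pizza
# ontology's resources without touching the original file. Each kind of
# annotation mentioned above appears once: an OWL/RDF assertion, a comment,
# and a link out to a completely different ontology.
annotations = {
    (PIZZA + "Margherita", "rdfs:subClassOf", PIZZA + "VegetarianPizza"),
    (PIZZA + "Margherita", "rdfs:comment", "my note: a classic Neapolitan pizza"),
    (PIZZA + "Margherita", "rdfs:seeAlso", "http://example.org/food-ontology#Pizza"),
}
```

The key design point is that the annotation triples live in their own model, keyed to the original ontology’s resources by URI, so the original stays untouched.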

I’m interested in this as a prototype system for building mappings between ontologies. Hypothetically, you could take two upper ontologies and create links between them using these annotations. For example, load the SUMO ontology into the program, and then link specific classes in SUMO to classes within DOLCE-lite. (It could be something as simple as owl:sameAs, but it would probably need to be something more sophisticated.)

Where this becomes useful is when you union multiple models together with their annotation models. You can then use that as a basis for reasoning across multiple ontologies, allowing you to work with mixed data. Another interesting consequence is that it (hopefully) could make the semantic web read/write instead of read-only. The pizza ontology is out there, and you can’t change it. What you can do is create your own set of annotations and then publish a new model that is the pizza ontology unioned with your annotations – in effect, a different model.
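The union step is the simple part. Continuing the plain-tuples sketch (all names and triples are illustrative, not real ontology URIs), a set union of the read-only model and the annotation model yields the new, effectively writable model:

```python
# The published, read-only model.
pizza_model = {
    ("pizza:Margherita", "rdf:type", "owl:Class"),
}

# My annotation model, including a mapping link into my own ontology
# (something like owl:sameAs, though likely more sophisticated in practice).
my_annotations = {
    ("pizza:Margherita", "rdfs:comment", "my note about this class"),
    ("pizza:Margherita", "owl:sameAs", "mykb:MargheritaPizza"),
}

# The union is the new publishable model: original plus annotations.
combined = pizza_model | my_annotations
assert ("pizza:Margherita", "rdf:type", "owl:Class") in combined
assert len(combined) == 3
```

A reasoner working over `combined` sees both the original assertions and the mapping links, which is what enables reasoning across the mixed data.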