Within knowledge representation and automated reasoning, ontologies are usually defined as "explicit specifications of conceptualizations". More concretely, the terms "ontology" and "ontological reasoning" are usually associated with tools and formalisms such as RDF, OWL or description logics. In this context, ontologies are representations of information designed so that logical reasoning can be performed over them.
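As a minimal illustration of what "reasoning over an ontology" means in this setting (using the rdflib Python library, which is independent of the work described here), the snippet below asserts that Fido is a dog, that dogs are mammals and that mammals are animals, and then uses a SPARQL property path to derive that Fido is also a mammal and an animal:

```python
from rdflib import Graph

# A tiny ontology: a small class hierarchy plus one individual.
ttl = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Dog    rdfs:subClassOf ex:Mammal .
ex:Mammal rdfs:subClassOf ex:Animal .
ex:fido   a ex:Dog .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# A SPARQL 1.1 property path gives a simple form of reasoning here:
# it follows rdfs:subClassOf transitively from the asserted type of ex:fido.
query = """
PREFIX ex:   <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls WHERE { ex:fido a/rdfs:subClassOf* ?cls }
"""
for row in g.query(query):
    print(row.cls)  # prints ex:Dog, ex:Mammal and ex:Animal (in some order)
```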
In many cases, there is a need to transform more unstructured data (for example, natural language) into an ontological representation, or to use several ontologies representing intersecting parts of reality in conjunction. These and other situations give rise to automatically generated ontologies, and the generation process may introduce faults into them. Moreover, ontologies created manually by humans may also contain faults, whether because of the inexperience of the ontology designer, because multiple designers work on the same ontology, or for many other possible reasons.
The most general classification of ontological faults divides them into inconsistencies and incompletenesses. Simply put, an inconsistency means that opposite statements are included in (or inferred from) the ontology, while an incompleteness means that neither a statement nor its opposite is part of the ontology. An alternative, more practical definition is given with respect to some ground truth that we assume exists (which we call the preferred model), even if we may not have direct or complete access to it: an incompatibility is the inclusion in the ontology of a statement that is false in the preferred model, and an insufficiency occurs when a statement that is true in the preferred model is not a consequence of the ontology.
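To make these two fault types concrete, here is a deliberately simplified sketch in Python (my own toy simplification, not the formalism used in the thesis) in which both the consequences of an ontology and the preferred model are reduced to plain sets of ground statements, and "is a consequence of" is reduced to set membership:

```python
# Toy illustration of incompatibilities and insufficiencies with respect
# to a preferred model, where statements are just ground triples.

def incompatibilities(ontology_consequences, preferred_model):
    """Statements the ontology entails that are false in the preferred model."""
    return ontology_consequences - preferred_model

def insufficiencies(ontology_consequences, preferred_model):
    """Statements true in the preferred model that the ontology fails to entail."""
    return preferred_model - ontology_consequences

preferred = {("penguin", "is_a", "bird"), ("penguin", "can_fly", "false")}
entailed  = {("penguin", "is_a", "bird"), ("penguin", "can_fly", "true")}

print(incompatibilities(entailed, preferred))  # {('penguin', 'can_fly', 'true')}
print(insufficiencies(entailed, preferred))    # {('penguin', 'can_fly', 'false')}
```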
My PhD looked at ontology debugging from a general and abstract point of view, and aimed to develop formalisms and methods to automatically detect and repair faults in logical ontologies. More precisely, it focused on approaches for encoding abstract or general knowledge about typical faults and common correct structures of ontologies as patterns in meta-ontologies, whose instances can then be detected automatically with theorem-proving mechanisms.
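As a rough, hypothetical illustration of what such a pattern can look like (the actual meta-ontology encoding in the thesis is richer than this), the sketch below expresses a classic fault pattern, a class asserted to be a subclass of two classes that are declared disjoint, and searches an ontology, given as a set of triples, for instantiations of that pattern:

```python
# Toy fault-pattern detection over an ontology represented as triples.
ontology = {
    ("subClassOf", "Penguin", "Bird"),
    ("subClassOf", "Penguin", "Reptile"),
    ("disjointWith", "Bird", "Reptile"),
}

# Pattern atoms; strings starting with "?" are variables.
fault_pattern = [
    ("subClassOf", "?X", "?C1"),
    ("subClassOf", "?X", "?C2"),
    ("disjointWith", "?C1", "?C2"),
]

def match_atom(atom, fact, binding):
    """Extend `binding` so that `atom` equals `fact`, or return None."""
    new = dict(binding)
    for a, f in zip(atom, fact):
        if a.startswith("?"):
            if new.get(a, f) != f:
                return None
            new[a] = f
        elif a != f:
            return None
    return new

def find_instances(pattern, facts):
    """All variable bindings under which every pattern atom matches some fact."""
    bindings = [{}]
    for atom in pattern:
        bindings = [b2 for b in bindings for fact in facts
                    if (b2 := match_atom(atom, fact, b)) is not None]
    return bindings

for b in find_instances(fault_pattern, ontology):
    print(b)  # {'?X': 'Penguin', '?C1': 'Bird', '?C2': 'Reptile'}
```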
One of the main technical tools I developed to tackle this problem is dependency graph unification, a form of second-order unification specifically aimed at finding instantiations of patterns such as the ones described above. It uses a novel data structure, called a dependency graph, to represent partially solved unification problems. This is described in more detail in my PhD thesis.
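The snippet below is only a loose, hypothetical sketch of the underlying idea, not the data structure or algorithm from the thesis: equations between pattern terms and ontology terms become nodes, and equations that share meta-variables are linked, since committing to an instantiation in one of them constrains the others.

```python
from collections import defaultdict

def variables(term):
    """Meta-variables (strings starting with '?') occurring in a nested tuple term."""
    if isinstance(term, str):
        return {term} if term.startswith("?") else set()
    return set().union(*(variables(t) for t in term))

class DependencyGraph:
    """Toy graph of unsolved pattern/term equations linked by shared meta-variables."""

    def __init__(self):
        self.equations = []            # (pattern_term, target_term) pairs
        self.edges = defaultdict(set)  # equation index -> indices it depends on

    def add_equation(self, pattern_term, target_term):
        idx = len(self.equations)
        vs = variables(pattern_term)
        for j, (p, _) in enumerate(self.equations):
            if vs & variables(p):      # shared meta-variables => dependency
                self.edges[idx].add(j)
                self.edges[j].add(idx)
        self.equations.append((pattern_term, target_term))
        return idx

g = DependencyGraph()
g.add_equation(("?P", "?X", "Bird"), ("subClassOf", "Penguin", "Bird"))
g.add_equation(("?P", "?X", "Reptile"), ("subClassOf", "Penguin", "Reptile"))
print(dict(g.edges))  # the two equations depend on each other via ?P and ?X
```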
My principal supervisor for this work was Alan Bundy, and my second supervisor, Perdita Stevens. This work was done under the CDT in Data Science programme, and was partially funded by Brainn Wave.
My PhD thesis was accepted in January 2023. You can download it directly here or from the Edinburgh Research Archive (DOI: http://dx.doi.org/10.7488/era/2941). The GitHub repository with the implementation developed during my PhD can be found here.
Other previous results include my master's dissertation. A separate GitHub repository, with the implementation of meta-ontology fault detection as presented in my master's dissertation, can be found here.