NL is a finite-domain, first-order formal theory meant to be interpreted in the natural language as it is used to build the sciences. We will
discuss further below the exact extent of "natural language" as the domain of interpretation
of NL, but before that, I shall present the motivation behind NL
with a short historical introduction.
The problems we try to tackle here are those of representing knowledge and of reasoning automatically with it, and the approach taken is based on symbolic logic. This necessarily entails designing some kind of formalization of the way we use the natural language, because knowledge is expressed in the natural language, and automatic reasoning can only be done within a formal language. There is one obvious program for trying to achieve this, and it starts by establishing a correspondence between the copular uses of the verb "to be" and the
basic predicates of naïve set theory (NST). Proper nouns are taken as ur-elements, common nouns as sets, and the connectives of the natural language as their obvious logical counterparts.
It is easy to see that
John is a man
can be translated into set theory as John belonging to the set of men,
A man is an animal
as the set of men is a subset of the set of animals, and
John is the author of the book
as equality among individuals.
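The three copular readings above can be sketched directly with Python's built-in set operations; the individuals and sets below are hypothetical illustrations, not part of NL itself:

```python
# Hypothetical denotations for the common nouns "man" and "animal".
men = {"john", "peter"}
animals = {"john", "peter", "rex"}

# "John is a man": membership of an ur-element in a set.
print("john" in men)          # True

# "A man is an animal": the subset relation between two sets.
print(men <= animals)         # True

# "John is the author of the book": identity between individuals.
author_of_the_book = "john"
print(author_of_the_book == "john")  # True
```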
The next step in the naïve program is to introduce the non-copular verbs as additional predicates of the theory, and to relate them to the basic predicates through the use of unrestricted comprehension (UC). The problem with this is that UC leads to higher-order logic: NST cannot be expressed as a first-order theory. Yet the "variables" that we use in the natural language have the form of the variables of a first-order language: they can range over anything that can be the range of a variable, under any mixture of constraints. For example, we can have:
john feels that he wants to eat food,
and rules of the form
if john feels something, then ...
if john feels that he wants something, then ...
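The point can be made concrete with a small structural matcher. This is only an illustrative sketch (the tuple encoding and names are my own, not NL's): the same variable marker may end up bound to a whole embedded predication or to any of its parts.

```python
def var(name):
    """A variable marker; it may align with any subterm."""
    return ("?", name)

def match(pattern, fact, bindings=None):
    """Structurally match pattern against fact; a variable binds to
    whatever subterm it aligns with, noun or whole predication."""
    bindings = dict(bindings or {})
    if isinstance(pattern, tuple) and len(pattern) == 2 and pattern[0] == "?":
        bindings[pattern[1]] = fact
        return bindings
    if (isinstance(pattern, tuple) and isinstance(fact, tuple)
            and len(pattern) == len(fact)):
        for p, f in zip(pattern, fact):
            result = match(p, f, bindings)
            if result is None:
                return None
            bindings = result
        return bindings
    return bindings if pattern == fact else None

fact = ("john", "feels", ("he", "wants", ("eat", "food")))

# "if john feels something, then ...": X binds a whole predication.
print(match(("john", "feels", var("X")), fact))
# -> {'X': ('he', 'wants', ('eat', 'food'))}

# "if john feels that he wants something, then ...": X binds a verb phrase.
print(match(("john", "feels", ("he", "wants", var("X"))), fact))
# -> {'X': ('eat', 'food')}
```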
Logic has historically been a study of the scientific use of the natural language. Syllogisms are obviously forms found in scientific speech, and propositional logic is a study of truth and falsity and of the use of the connectives of the natural language in the sciences. Frege realized that this study could be more fine-grained, and developed predicate logic, along with a particular formal theory that he would probably have named a theory of "concepts", but that was later to be called NST. The form of these concepts he took from Cantor's theory of sets, but he identified them with the natural-language concepts, ideas, meaningful expressions. He founded arithmetic on that theory, but arithmetic as it is used within the natural language by the scientists, not in isolation. He never forgot that 1 is used to say "there is one moon", and investigated the nature of numbers in that context.
His theory of concepts was, as far as he could see, a final model for the scientific language. He did not see the problem outlined in the previous section, because he had the axiom schema of UC, and at the same time he only had first-order variables.
Around 1901, Russell communicated his famous paradox (does the set of all sets that don't belong to themselves belong to itself?) to Frege. This paradox did away with UC, and with all possibility of first-order quantification over predications, because here was a predication, "to belong to the set of all sets that don't belong to themselves", that could not possibly correspond to a consistent class. This meant, for them, that their theory was not after all a model for the use of the natural language in the sciences.
The momentum gained up to that moment then split in two separate directions. On one hand, the logicists did away with UC, and attempted, and very much succeeded at, establishing a formal foundation for mathematics. They substituted for UC other, more esoteric axioms, which defined the basic predicates of set theory in a way that completely estranged them from the natural copular verbs. On the other hand, the logical positivists assumed the existence of a true formalization of the scientific natural language, searched for it, and pretty much failed.
One of the last encounters with this problem can be appreciated in the
impossibility of providing the OWL-Full specification of the
semantic web with an inference engine. To touch upon this matter, we must
speak of ontologies and description logics. An ontology can be many
things. In its original meaning, Ontology is the science that talks
about what is: the science of "to be"; not of the form of Being, which is
the subject matter of Metaphysics, but of what things are. In semantic-web circles, an ontology is often a set of sentences
in a description logic (DL). A DL is a reasoning system that implements, not
some approximation to general first-order logic, but a specific first-order theory, a variation on set theory. Theories developed in these systems are called ontologies, probably because the basic predicates of set theory implemented in the systems correspond to the copular uses of the verb "to be" in the natural
language. This way, you get a system that allows you to speak and reason about things
that "are" something, as with Ontology, and that at the same time is more efficient than any approximation to full classical first-order logic.
There is, of course, one problem with the expectations that may be placed in this scheme. To look into it again, let us also consider "ontology" as the term might be applied within the natural language. In this sense, an ontology would be a set of sentences that seems obviously correct, consistent, and meaningful to anyone who studies it, even if she doesn't believe it true. An example might be Darwin's theory of evolution. If we compare the ontologies implementable in DLs with the ontologies developed in the natural language, we find that the ones in natural language are an order of magnitude more complex than those expressible in DLs. This degree of complexity is given by the possibility of having the equivalent of first-order variables ranging over predications of the language. In the theory of evolution, for example, we might speak about traits, and traits might correspond to arbitrary predications on the individuals of the species. To speak comfortably about that, you need first-order variables capable of ranging over those predications, and formal logic cannot provide that.
The semantic web started as a set of grammars for building ontologies, modelled on the way we build ontologies in the natural language. Only later were inference engines added to it, and this addition established OWL-DL as the most complex ontology system that could be used for reasoning in the semantic web. In OWL-Full, classes (which correspond, unrestrictedly, to predicates of the language) can be taken as individuals, and therefore Russell's paradox applies.
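Why letting classes stand as individuals reopens the paradox can be seen in a toy form. This is a Python caricature, not actual OWL semantics: if a class is just a membership test, and classes may be applied to classes, the Russell class has no stable answer.

```python
# A toy "class" is a membership predicate; since we allow classes as
# individuals, a class may be asked about itself.
def russell(cls):
    """The class of all classes that do not belong to themselves."""
    return not cls(cls)

# "Does the Russell class belong to itself?" has no consistent answer:
# evaluating it just asks the same question again, forever.
try:
    russell(russell)
except RecursionError:
    print("no consistent answer: the question never terminates")
```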
I will now try to define the exact extent of "natural language" as the domain of interpretation of NL. We shall take an expression to be a unit of meaning in the natural language, excepting copular sentences: any meaningful sequence of one or more words that is not a copular sentence is an expression. NL's domain of interpretation is, then, the set of expressions of the natural language. Its predicates have the basic form of the predicates of set theory, without UC or any other comprehension axiom at all. They are interpreted as relations among expressions, and we assume these relations to be given by the copular sentences of the natural language. So we follow the first steps of the program outlined above.
With this, the initial assertion that NL is interpreted in the natural language means that any theory developed on top of NL is to be interpreted in the set of expressions, and relations among them, of an ontology (or scientific theory) developed within the natural language.
NL tries to solve the problem discussed above by translating the verb phrases of the natural
language (except those in copular sentences) to operations instead of
predications (i.e., by translating non-copular verbs to operators instead of to
predicates). This allows variables to range, if not over formal
predications, at least over the formal equivalents of natural language verb
phrases. This means that non-copular sentences (let's call them facts)
do not correspond to sentences of NL; only copular sentences correspond
to sentences in NL. But this is fine, because facts are expressions, so we can use the predicates of the language to relate facts
with other expressions; for example, with "fact". So, whereas
john eats some food
cannot represent a sentence in NL, we can have a
sentence in NL with the form
that john eats some food is a fact.
Copular sentences would be translated in their common straight-forward form, and we might have a sentence in NL with the form
john is a man.
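The translation scheme can be sketched in Python (the representation below is my own illustration, not the actual Python/CLIPS implementation): non-copular verbs become operators that build terms, and only the copular predicate produces sentences.

```python
# A non-copular verb is an operator: applying it yields a term (an
# expression of NL), not a sentence.
def eats(subject, obj):
    return ("eats", subject, obj)

fact_term = eats("john", "some food")   # "john eats some food": just a term

# Only copular sentences are sentences of NL; they relate expressions.
ontology = set()

def is_a(expression, noun):
    """The basic predicate, read as 'belongs to', not identity."""
    ontology.add((expression, "is a", noun))

is_a("john", "man")        # "john is a man."
is_a(fact_term, "fact")    # "that john eats some food is a fact."

print((fact_term, "is a", "fact") in ontology)  # True
```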
Instead of UC, we might have: "for every verb phrase that can appear in a fact, there exists a noun, such that everything that makes a fact with that verb phrase is (is in the sense of belonging, not of identity) that noun". That is almost what Frege, Russell, and OWL-Full were looking for; it has the same form, except that it leaves the copular verb out of the deal. For example, "john wants to be happy" would be inexpressible in NL, since that "to be" would have to correspond to a basic predicate of NL, and we might want to say: "if john wants something, then...". However, it is not difficult to express almost the same thing with a non-copular verb: "john wants to feel happy". And, in any case, we don't need such an axiom in NL, since we can directly quantify over the equivalents of the verb phrases.
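That last point, that the equivalents of verb phrases can be quantified over directly, can be sketched as follows (hypothetical data and encoding, for illustration only):

```python
# Facts encoded as (subject, verb-phrase) pairs; a verb phrase may itself
# embed another verb-phrase equivalent as its complement.
facts = {
    ("john", ("wants", ("feel", "happy"))),
    ("john", ("wants", ("eat", "food"))),
    ("mary", ("wants", ("feel", "happy"))),
}

# "if john wants something, then ...": the variable ranges directly over
# verb-phrase equivalents, with no comprehension axiom needed.
wanted_by_john = {vp[1] for subject, vp in facts
                  if subject == "john" and vp[0] == "wants"}

print(sorted(wanted_by_john))  # [('eat', 'food'), ('feel', 'happy')]
```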
I conclude two things from the above discussion. The first is that NL is not a perfect model of the scientific use of the natural language, because its variables cannot range over copular predicates. The second is that it almost is, and that it can be useful in that sense.
There is a Python/CLIPS implementation of NL here.