Machine learning or Noam Chomsky's approach to NLP?

Post date: Oct 18, 2014 4:13:49 AM

People from the machine learning community sometimes use the following argument to justify a model-less approach to natural language processing (NLP): a language is best learned by experimenting and practicing rather than by memorizing its grammatical rules, hence grammatical rules are useless. Obviously, practicing has its place, but so do the grammatical rules of a language. These rules breathe structure into a language, making it easier for a learner to make sense of it and absorb it. If a language were just random noise, no intelligent creature would be able to comprehend it. Furthermore, it is widely known that learning a new language becomes much easier once you have already learned a second or third language. This is because all languages share underlying structures that inherently "make sense" and flatten the learning curve, which I believe is the essence of Noam Chomsky's approach to NLP. To the best of my knowledge, this critical characteristic of language is not yet widely exploited by any machine learning approach.