Machine Learning has been a big success story during the AI resurgence. One particular standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. Knowledge seems to play a central role in human learning and intelligence, including our superior cognitive and perceptual abilities. This inspires us to seek approaches for incorporating knowledge into applications that can benefit from big data. Our ability to create or deploy just the right knowledge in our computing processes will improve machine intelligence, perhaps in a manner similar to the central role knowledge has played in human intelligence.
In this talk, we discuss the indispensable role of knowledge in achieving a deeper understanding of content and in exploiting big data where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability to understand and exploit multimodal data more deeply, and the continued incorporation of knowledge into learning techniques.
You can read the presentation by Prof. Amit Sheth here.
Over the past few decades, dramatic improvements in communication and computing technologies have driven the growth of the consumer internet, which has improved efficiencies, increased customer bases, and created new business models in many industries, including retail, banking, hospitality, and transportation. Newer technologies such as additive manufacturing, sensing technologies, and artificial intelligence are changing our lives in ways we did not anticipate. A confluence of these technologies is rapidly changing the landscape of what is possible in manufacturing systems and asset health monitoring. This Industrial Internet is going to have a big impact on heavy industries such as power, water, airlines, manufacturing, and oil and gas, by driving new outcomes and efficiencies. For example, a 1% fuel saving in the airline industry today is worth $30B to the industry over the next 15 years.
This talk will focus on the Digital Twin technologies GE is driving for the Digital Transformation of industries, improving the efficiency of industrial assets by bringing together sensing, monitoring, control, prognostics, and optimization. Digital Twins are personalized learning models of individual assets that support improved decision making about the maintenance and operation of those assets. For example, Digital Twin models of the turbine system in the GE90 engine are used to optimize maintenance, saving tens of millions of dollars in unnecessary overhauls. In rail, Digital Twin models of locomotives enable customers to minimize fuel consumption and emissions by optimizing the driving profile. In manufacturing, the Digital Thread and Digital Twins are being connected to reduce cycle time and improve manufacturing productivity.
We will take the audience through the end-to-end process of training supervised and, time permitting, clustering models. We will use Python and scikit-learn for most of the tutorial. The emphasis will be on gaining enough confidence to train a classifier end-to-end, and on exploring features, making sense of the data, and understanding how the classifier works. More than the theory, we will make sure that our audience is able to go back to their respective institutions and carry out hands-on classification work!
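A minimal sketch of the kind of end-to-end workflow such a tutorial covers is shown below: load data, hold out a test set, fit a classifier, and evaluate it. The specific dataset (scikit-learn's bundled iris data) and model (a logistic regression pipeline) are assumptions for illustration, not necessarily what the tutorial itself uses.

```python
# Illustrative end-to-end supervised learning workflow with scikit-learn.
# Dataset and model choice are assumptions, not the tutorial's own materials.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load feature matrix X and label vector y, then hold out a test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

# Scale features and fit a simple linear classifier in one pipeline.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Evaluate on held-out data to see how well the classifier generalizes.
accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {accuracy:.2f}")
```

Inspecting the fitted pipeline afterwards (e.g., the logistic regression coefficients) is one concrete way to "make sense of the data" and understand what the classifier has learned.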