CALL FOR PAPERS
DeCoDeML 2019 - 1st Workshop on Deep Continuous-Discrete Machine Learning
September 16, 2019, Würzburg, Germany
Affiliated to ECML-PKDD 2019, http://ecmlpkdd2019.org
Workshop site: https://sites.google.com/view/decodeml-workshop-2019/
Since the beginnings of machine learning – and indeed already hinted at in Alan Turing's groundbreaking 1950 paper "Computing Machinery and Intelligence" – two opposing approaches have been pursued: on the one hand, approaches that relate learning to knowledge and mostly use "discrete" formalisms of formal logic; on the other hand, approaches which, mostly motivated by biological models, investigate learning in artificial neural networks and predominantly use "continuous" methods from numerical optimization and statistics. The recent successes of deep learning can be attributed to the latter, "continuous" approach, and are currently opening up new opportunities for computers to "perceive" the world and to act, with far-reaching consequences for industry, science and society. The massive success in recognizing "continuous" patterns is the catalyst for a new enthusiasm for artificial intelligence methods. However, today's artificial neural networks are hardly suitable for learning and understanding "discrete" logical structures, and this is one of the major hurdles to further progress.
Accordingly, one of the biggest open problems is to clarify the connection between these two learning approaches (logical-discrete and neural-continuous). In particular, the role and benefits of prior knowledge need to be reassessed and clarified. The role of formal logic in ensuring sound reasoning must be related to perception through deep networks. Further, the question of how prior knowledge can be used to make the results of deep learning more stable, and to explain and justify them, needs to be discussed. The extraction of symbolic knowledge from networks is a topic that needs to be reexamined against the background of the successes of deep learning. Finally, it is an open question whether and how the principles responsible for the success of deep learning methods can be transferred to symbolic learning.
Workshop format:
This is a half-day workshop. We aim for a genuine workshop with plenty of interaction, and we consider a workshop the right format because the topic is cutting-edge with much ongoing work. Note that the workshop focuses on basic research questions (continuous/discrete and learning/knowledge in the era of deep learning), not on their consequences or the like. The workshop will consist of:
Paper submission:
Authors should submit a PDF version in Springer LNCS style using the workshop EasyChair site: https://easychair.org/my/conference?conf=decodeml2019
We request extended abstracts of two to three pages in Springer LNCS style on work in progress, finished work, published work, position statements, etc. Author names and affiliations should be included (no blind reviewing). Submission takes place via the EasyChair site listed above.
All submissions will be reviewed by at least three PC members. Accepted papers will be published on the workshop webpage. We are planning a special topic in the Machine Learning and Artificial Intelligence section of the journal Frontiers in Big Data. Further possibilities and future events will be discussed at the workshop.
Important Dates:
PC members:
Organizers and contact: