Beyond Machine Intelligence:

Understanding Cognitive Bias and Humanity for Well-being AI

Description of the symposium

[Aims and New Challenges]

Recent AI technologies (e.g., deep learning and other advanced machine learning methods) will definitely change the world. However, excessive expectations of AI (e.g., science-fiction visions of general-purpose AI) and threat narratives (e.g., that AI will take away jobs) distort the judgement of many people. What we must do first is to correctly understand the possibilities and limitations of current machine intelligence.

Understanding machine intelligence in human health and wellness domains remains especially challenging. Although statistical machine learning predicts the future based on past data, it struggles to respond to new events that have never been seen before. How to create new value that truly makes people happy is one of the most important challenges in well-being AI. For this purpose, we need to share interdisciplinary scientific findings between the human sciences (brain science, biomedical healthcare, psychology, etc.) and AI.

One of the important keywords of this year’s symposium is “cognitive bias”. As big data becomes increasingly personal, AI technologies that exploit the cognitive biases inherent in people’s minds have evolved, e.g., social media such as Twitter and Facebook, and commercial recommendation systems. The “echo chamber effect” is known to make it easy for people with the same opinion to form communities, which creates the impression that everyone shares that opinion. Recently, there has also been a movement to exploit such cognitive biases in politics. As big data and machine learning advance, we should not overlook these new threats to enlightenment thought.

The second important keyword of this symposium is “humanity”. One of the purposes of AI is to pursue the question “what is intelligence?”. Early AI researchers focused their efforts on rational thinking, such as mathematical theorem proving and chess. Recently, however, rational thinking is rapidly being taken over by machines, and many people may have begun to believe that irrational thinking is the root of humanity. Empirical and philosophical discussions on “AI and humanity” are welcome at this symposium.

This symposium aims to share the latest progress, current challenges, and potential applications of AI for health and well-being. The evaluation of digital experiences and the understanding of human health and well-being are also welcome.

[Background and our previous symposium]

We organized the AAAI Spring Symposium on “Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing” at Stanford University, from March 27th to 29th, 2017. The symposium succeeded in inspiring new ideas among roughly 30 participants from diverse fields, and the participants expressed the desire to continue this initiative in further events. This year we extend our scope by incorporating the new themes of “cognitive bias and humanity”.

This symposium will present important interdisciplinary challenges for guiding future advances in the AI community.

Scope of Interests

We will address the following four technical challenges on well-being AI and one philosophical discussion on “AI and Humanity”. Technical research clarifying the possibilities and limitations of machine intelligence, as well as philosophical discussions on “AI and Humanity”, are welcome.

(1) Representation of cognitive biases and personal traits.

First, we need to represent cognitive biases and tacit, subjective human health/wellness knowledge in an explicit and quantifiable way. Much of the knowledge in well-being science is subjective. For example, the fuzzy properties of subjective health and wellness vocabulary might be better captured by concrete mathematical structures such as word embeddings. Discussions evaluating the possibilities and limitations of current technologies are welcome.
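
As a minimal sketch of the kind of quantification we have in mind (the terms and vectors below are hypothetical toy values, not outputs of any real embedding model), subjective wellness vocabulary can be mapped to vectors and compared numerically:

    import numpy as np

    # Hypothetical toy "embeddings" for subjective wellness terms.
    # In practice these vectors would come from a model trained on health/wellness text.
    embeddings = {
        "relaxed":  np.array([0.9, 0.1, 0.2]),
        "calm":     np.array([0.8, 0.2, 0.1]),
        "stressed": np.array([0.1, 0.9, 0.7]),
    }

    def cosine_similarity(u, v):
        # Cosine similarity: one simple way to quantify how close two subjective terms are.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine_similarity(embeddings["relaxed"], embeddings["calm"]))      # high
    print(cosine_similarity(embeddings["relaxed"], embeddings["stressed"]))  # lower

Whether such geometric representations adequately capture the fuzziness of subjective experience is exactly the kind of question we hope participants will debate.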

(2) Machine Learning and other advanced analyses for Health & Wellness

Second, we need to explore advanced machine learning technologies, such as deep learning and other quantitative methods, in health and wellness domains. Right now, machine learning research focuses on getting computers to understand the kinds of data that humans do: images, text, sounds, and so on. The focus, however, will shift to getting computers to understand things that humans do not, and we need to build a bridge that allows humans to understand these things as well. Discussions evaluating the possibilities and limitations of current technologies are welcome.
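
For concreteness, here is a minimal sketch of applying an off-the-shelf classifier to wellness-style data (the data are synthetic and the features, thresholds, and labels are illustrative assumptions, not findings):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic daily records: [steps (thousands), sleep hours, resting heart rate]
    X = np.column_stack([
        rng.normal(7, 2, 500),     # daily steps (thousands)
        rng.normal(6.5, 1, 500),   # sleep hours
        rng.normal(65, 8, 500),    # resting heart rate
    ])
    # Hypothetical "good well-being" label, loosely tied to activity and sleep.
    y = ((X[:, 0] > 6) & (X[:, 1] > 6.5)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

The real research questions begin where this sketch ends: whether such models generalize to events never seen in past data, and how their outputs can be made meaningful to the people they describe.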

(3) Models, Reasoning and Inference

Third, reasoning about data through learned representations should be understandable and accountable to humans. For example, we need to develop powerful tools for understanding what exactly deep neural networks and other quantitative methods are doing. Beyond improving prediction accuracy, we need to understand causality through reliable models, reasoning, and inference. Discussions evaluating the possibilities and limitations of current technologies are welcome.
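
One simple illustration of this direction (a sketch only, using synthetic data and a generic permutation test rather than any specific method from the literature) is to measure how much a trained model relies on each input feature:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Synthetic wellness-style data: [steps (thousands), sleep hours, resting heart rate]
    X = np.column_stack([rng.normal(7, 2, 500),
                         rng.normal(6.5, 1, 500),
                         rng.normal(65, 8, 500)])
    y = (X[:, 1] > 6.5).astype(int)  # label driven mainly by sleep, by construction

    model = RandomForestClassifier(random_state=0).fit(X, y)
    baseline = model.score(X, y)  # reused training data for brevity; use held-out data in practice

    # Permutation test: shuffle one feature at a time and record the drop in accuracy.
    for i, name in enumerate(["steps", "sleep", "resting_hr"]):
        X_perm = X.copy()
        X_perm[:, i] = rng.permutation(X_perm[:, i])
        print(f"{name}: accuracy drop = {baseline - model.score(X_perm, y):.3f}")

Such attributions describe what a model uses, not what causes what; closing that gap with reliable causal models, reasoning, and inference is part of the challenge this topic addresses.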

(4) Better well-being systems design.

Furthermore, we need to understand humans themselves. While recent technological advances bring many great benefits, there is an opportunity to rethink the impact of these fruits. We need to understand how the AI revolution affects our emotions and quality of life, and how to design better well-being systems that put humans at the center. Discussions evaluating the possibilities and limitations of current technologies are welcome.

(5) Discussion on “AI and Humanity”.

We welcome empirical and philosophical discussions on “AI and Humanity”. Topics include “machine intelligence vs. human intelligence” and “how AI affects our society and ways of thinking”.

Our scope of interest includes, but is not limited to, the following topics:

1. How to quantify our cognitive biases or personal traits.

Word2vec analysis, sleep monitoring, diet monitoring, vital data, diabetes monitoring, running/sport calorie monitoring, personal genome, personal medicine, new types of self-tracking devices, portable mobile tools, health data collection, Quantified Self tools, experiments, affective computing, wearables and cognition, brain fitness and training, learning enhancement strategies, sleep, dreaming, relaxation, meditation, yoga, physiology, nutrition, chemicals, electrical stimulation (tDCS, rTMS, CES, EEG, neurofeedback)

2. How to analyze health and wellness data to discover new meanings.

Discovery informatics technologies: deep learning, data mining and knowledge modeling for wellness, collective intelligence/knowledge, life-log analysis (e.g., vital data analysis, Twitter-based analysis), data visualization, human computation, biomedical informatics, personal medicine.

Cognitive and biomedical modeling: brain science, brain interfaces, physiological modeling, biomedical informatics, systems biology, network analysis, mathematical modeling, disease dynamics, personal genome, gene networks, genetics and lifestyle with the microbiome, health/disease risk.

3. How to design better health and well-being spaces.

Social data analysis and social relation design, mood analysis, human-computer interaction, healthcare communication systems, natural language dialogue systems, personal behavior discovery, Kansei, zone and creativity, compassion, calming technology, Kansei engineering, gamification, assistive technologies, Ambient Assisted Living (AAL) technology.

4. Applications, platforms, and field studies

Medical recommendation systems, care support systems for the elderly, web services for personal wellness, games for health and happiness, life-log applications, disease improvement experiments (e.g., metabolic syndrome, diabetes), sleep improvement experiments, healthcare/disability support systems, community computing platforms.

5. AI and Humanity

Empirical or philosophical discussions on “AI and Humanity” are welcome. Topics include “machine intelligence vs. human intelligence” and “how AI affects our society and ways of thinking”. Issues of cognitive bias in the recent trend of big data becoming personal are of particular interest. The topics are not limited to the examples above.

Format of Symposium

The symposium will consist of invited talks, presentations, posters, and interactive demos.

Submission Requirements

Interested participants should submit either a full paper (8 pages maximum) or an extended abstract (2 pages maximum). Extended abstracts should state the intended presentation type (long paper (6-8 pages), short paper (1-2 pages), demonstration, or poster presentation). The electronic version of your paper should be sent to aaai2018-bmi@cas.lab.uec.ac.jp by October 27th.

Important Dates


Submission deadline (extended): November 17th, 2017

Author Notification: November 27th, 2017

Camera-ready Papers: January 23rd, 2018

Registration deadline: March 2nd, 2018

Symposium: March 26th-28th, 2018

Invited Speakers

We are planning to invite keynote speakers from Stanford University and the international academic and industrial communities working on AI and healthcare (well-being sciences). The invited speakers will be listed here.


  • Canceled: John C. Havens (Executive Director, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems)
  • Prof. Anshul Kundaje’s lab (Dr. Avanti Shrikumar, Dr. Amr Mohamed, Stanford)
  • Pang Wei Koh (Prof. Percy Liang’s lab, Stanford; first author of the ICML 2017 Best Paper Award paper)

Organizing Committee

Co-chairs

  • Takashi Kido (Preferred Networks, Inc., Japan)
  • Keiki Takadama (The University of Electro-Communications, Japan)

“Cognitive Bias” committee

  • Melanie Swan (DIYgenomics, U.S.A.)
  • Katarzyna Wac (Stanford University, U.S.A and University of Geneva, Switzerland)
  • Ikuko Eguchi Yairi (Sophia University, Japan)

“Humanity” committee

  • Fumiko Kano (Copenhagen Business School, Denmark)
  • Takashi Maruyama (Stanford, U.S.A)

“Discovery Informatics and machine learning” committee

  • Chirag Patel (Stanford University, U.S.A)
  • Rui Chen (Stanford University, U.S.A)
  • Ryota Kanai (University of Sussex, UK.)
  • Yoni Donner (Stanford, U.S.A)
  • Yutaka Matsuo (University of Tokyo, Japan)

“Designing space for health and happiness” committee

  • Eiji Aramaki (Nara Institute of Science and Technology, Japan)
  • Pamela Day (Stanford, U.S.A)
  • Tomohiro Hoshi (Stanford, U.S.A)

“Application, Platform, Field Study” committee

  • Miho Otake (Chiba University, Japan)
  • Yotam Hineberg (Stanford, U.S.A)
  • Yukiko Shiki (Kansai University, Japan)

Advisory committee

  • Atul J. Butte (University of California, San Francisco (UCSF))
  • Seiji Nishino (Stanford University, U.S.A.)
  • Katsunori Shimohara (Doshisha University, Japan)
  • Takashi Maeno (Keio University, Japan)
  • Hiroshi Maruyama (Preferred Networks Inc.)

Note: Since we are still contacting other researchers, more program committee members will be added to the above list.

Contact

Takashi Kido

Preferred Networks, Inc., Otemachi Building 2F, 1-6-1 Otemachi, Chiyoda-ku, Tokyo 〒100-0004, Japan

https://www.preferred-networks.jp/

kido.takashi@gmail.com