Journal papers
[1] Hancock, P. A., Kajaks, T., Caird, J. K., Chignell, M. H., Mizobuchi, S., Burns, P. C., Feng, J., Fernie, G. R., Lavallière, M., Noy, I. Y., Redelmeier, D. A. & Vrkljan, B. H. (2020). Challenges to human drivers in increasingly automated vehicles. Human Factors, 62(2), 310-328. https://doi.org/10.1177/0018720819900402
[2] Mizobuchi, S., Terasaki, S., Häkkinen, J., Heinonen, E., Bergquist, J. and Chignell, M. (2008) The effect of stereoscopic viewing in a word-search task with a layered background, Journal of the Society for Information Display 16 (11), pp. 1105-1113. https://doi.org/10.1889/JSID16.11.1105
[3] Mizobuchi, S. and Yasumura, M. (2004) Tapping vs. Circling Selections on Pen-based Devices: Evidence for Different Performance-Shaping Factors. Journal of Human Interface Society 6(3), pp. 257-264. (In Japanese)
[4] Mizobuchi, S., Ren, X. and Yasumura, M. (2002) An empirical study of the minimum required size and the number of targets with a pen and with a cursor key on a small display, IPSJ Journal, Vol. 43, No. 12, pp. 3733-3743. (In Japanese)
International Conferences
[1] Jiang, H., Mizobuchi, S., & Chignell, M. (2023, December). Scenario Fidelity and Perceived Driver Mental Workload: Can Workload Assessment be Crowdsourced? In Proceedings of the 13th International Conference on Advances in Information Technology (pp. 1-6). https://doi.org/10.1145/3628454.3628458
[2] Jiang, H., Mizobuchi, S., & Chignell, M. (2023, October). Lower Executive Function Ability May Lead to Higher Perceived Mental Workload in Driving Scenarios. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Sage CA: Los Angeles, CA: SAGE Publications. https://doi.org/10.1177/21695067231192859
[3] Henderson, J., Neshati, A., Mizobuchi, S., Zhou, W., Vogel, D., & Lank, E. (2023, April). Interaction Region Characteristics for Midair Barehand Targeting on a Television. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-7). https://doi.org/10.1145/3544549.3585877
[4] Zhang, F., Mizobuchi, S., Zhou, W., & Lank, E. (2021). Analyzing Midair Object Pointing Mappings for Smart Display Input. Proceedings of the ACM on Human-Computer Interaction, 5(ISS), 1-18. https://doi.org/10.1145/3488535
[5] Zhang, F., Mizobuchi, S., Zhou, W., Khan, T. A., Li, W., & Lank, E. (2021, August). Leveraging CD Gain for Precise Barehand Video Timeline Browsing on Smart Displays. In IFIP Conference on Human-Computer Interaction (pp. 72-91). Springer, Cham. https://doi.org/10.1007/978-3-030-85610-6_5
[6] Jiang, H., Chignell, M., Mizobuchi, S., Farhadi Niaki, F., Liu, Z., Zhou, W., & Li, W. (2021, June). Demographic Effects on Mid-Air Gesture Preference for Control of Devices: Implications for Design. In Congress of the International Ergonomics Association (pp. 379-386). Springer, Cham. https://doi.org/10.1007/978-3-030-74614-8_47
[7] Saniee-Monfared, G., Fan, K., Xu, Q., Mizobuchi, S., Zhou, L., Irani, P. P., & Li, W. (2020, October). Tent Mode Interactions: Exploring Collocated Multi-User Interaction on a Foldable Device. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-12). https://doi.org/10.1145/3379503.3403566
[8] Henderson, J., Mizobuchi, S., Li, W., & Lank, E. (2019, October). Exploring Cross-Modal Training via Touch to Learn a Mid-Air Marking Menu Gesture Set. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-9). https://doi.org/10.1145/3338286.3340119
[9] Duangcham, P., Vanijja, V., & Mizobuchi, S. (2017, April). The effect of Aging on Visual Attention Shifting in Collaborative Document Editing. In Proceedings of the 26th International Conference on World Wide Web Companion (pp. 1117-1120). https://doi.org/10.1145/3041021.3054932
[10] Mizobuchi, S., Chignell, M., Delange, T., and Ho, W. (2014) Sensitivity of a Voluntary Interruption of Occlusion Measure to Cognitive Distraction During a Pedal Tracking Task, Proceedings of the Human Factors and Ergonomics Society Annual Meeting September 2014 (HFES2014), vol. 58, no. 1, pp. 2229-2233. https://doi.org/10.1177/1541931214581468
[11] Chignell, M., Tong, T., Mizobuchi, S. and Walmsley, W. (2014) Combining Speed and Accuracy into a Global Measure of Performance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting September 2014 (HFES2014), vol. 58, no. 1, pp. 1442-1446. https://doi.org/10.1177/1541931214581301
[12] Mizobuchi, S., Chignell, M., Canella, D., Eizenman, M., Yoshizu, S., Sannomiya, C., and Nawa, K. (2013) The Effect of Secondary Task Timing and Difficulty on Driving-Related Performance and Modality Selection, Proceedings of the 20th ITS World Congress Tokyo 2013, No. 4171. https://trid.trb.org/view/1322507
[13] Mizobuchi, S., Chignell, M., Canella, D., Eizenman, M., Yoshizu, S., Sannomiya, C., Nawa, K. (2013) Looking or Listening?: Impacts of Secondary Task Timing and Difficulty on Tracking Performance, Proceedings of the Human Factors and Ergonomics Society Annual Meeting September 2013 (HFES2013), vol. 57, no. 1, pp. 1894-1898. https://doi.org/10.1177/1541931213571422
[14] Mizobuchi, S., Chignell, M., Canella, D., and Eizenman, M (2013) Individual Differences in Driving-Related Multitasking, Proceedings of the 3rd International Conference on Driver Distraction and Inattention (DDI2013), No. 72-P. http://www.distractionconference.com/ddi2013-en/program/program-papers
[15] Mizobuchi, S., Chignell, M., Suzuki, J., Koga, K. & Nawa, K. (2013) Shifting Between Cognitive and Visual Distraction: The Impact of Cognitive Ability on Distraction Caused by Secondary Tasks, Proceedings of the 7th International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design (DA2013), pp. 142-148. https://doi.org/10.17077/drivingassessment.1480
[16] Mizobuchi, S., Chignell, M., Suzuki, J., Koga, K. & Nawa, K. (2012) The Impact of Central Executive Function Loadings on Driving-Related Performance, Adjunct Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2012), pp. 68-75. https://www.auto-ui.org/12/proceedings.php
[17] Mizobuchi, S., Chignell, M., Suzuki, J., Koga, K. and Nawa, K. (2011) Central Executive Functions Likely Mediate the Impact of Device Operation When Driving, Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2011), pp. 129-136. https://doi.org/10.1145/2381416.2381437
[18] Kuriyagawa, Y., Kageyama, I., Watanabe, S., Kuramoto, H., Mizobuchi, S., Nawa, K. and Kumon, H. (2010) Evaluating Human Machine Interface of In-Vehicle Information Systems, FISITA 2010 (International Federation of Automotive Engineering Societies) congress proceedings, F2010-H-013.
[19] Kiritani, Y., Mizobuchi, S., and Ohhashi, H. (2006) Effect of latency of response on life-like communication using a dog-like robot, Extended Abstracts of CHI2006, pp. 971-976. https://doi.org/10.1145/1125451.1125638
[20] Mizobuchi, S., Terasaki, S., Keski-Jaskari, T., Nousiainen, J., Ryynanen, M. and Silfverberg, M. (2005) Making an Impression: Force-Controlled Pen Input for Handheld Devices, Extended Abstracts of CHI2005. pp. 1661-1664. https://doi.org/10.1145/1056808.1056991
[21] Mizobuchi, S., Chignell, M. H. and Newton, D. (2005) Mobile text entry: relationship between walking speed and text input task difficulty. In Proceedings of the 7th international conference on human computer interaction with mobile devices & services (MobileHCI '05). ACM, New York, NY, USA, pp. 122-128. https://doi.org/10.1145/1085777.1085798
[22] Ren, X., & Mizobuchi, S. (2005). Investigating the usability of the stylus pen on handheld devices. SIGHCI 2005 Proceedings, 12.
[23] Mizobuchi, S. and Yasumura, M. (2004) Tapping vs. Circling Selections on Pen-based Devices: Evidence for Different Performance-Shaping Factors. Proceedings of CHI2004, pp. 607-614. https://doi.org/10.1145/985692.985769
[24] Mizobuchi, S., Mori, K., Ren, X. and Yasumura, M. (2002) An Empirical Study of the Minimum Required Size and the Minimum Number of Targets for Pen Input on the Small Display, in Paterno, F. (Ed.) Human Computer Interaction with Mobile Devices, proceedings for 4th International Symposium, Mobile HCI 2002, pp. 184-194. https://doi.org/10.1007/3-540-45756-9_15
[25] Mizobuchi, S. and Wanibe, E. (2001) How long is a "long" key press?, in Hirose, M. (Ed.) Human-Computer Interaction - INTERACT'01, proceedings for IFIP TC.13 International Conference on Human-Computer Interaction, pp. 735-736.
[26] Mizobuchi, S. (2001) What do you expect when you press the "send" key?, Poster presentation in Annual conference of Usability Professionals' Association (UPA2001)
[27] Mizobuchi, S. and Kurosu, M. (2000) The effect of "virtual nodding" on the interaction over the computer communication system, International Congress of Psychology (ICP2000)
[28] Mizobuchi, S. (1996) Access methods to the representations in memory, Poster presentation at the International Congress of Psychology (ICP96); abstract in International Journal of Psychology, Vol. 31, Issues 3-4.
Research Report
Coyne, K. P. (2002). Beyond ALT text: making the web easy to use for users with disabilities. Nielsen Norman Group Report. http://www.nngroup.com/reports/accessibility/
Patents
[1] Zhou, W., Mizobuchi, S., Singh, G., Saniee-Monfared, G., Wang, S., Zhang, F., Li, W. (2022) Methods and systems for providing feedback for multi-precision mid-air gestures on gesture-controlled device, WO-2022242430-A1
[2] Li, W., Zhou, W., Mizobuchi, S., Saniee-Monfared, G., Liu, J., Arefin Khan, T., Veras-Guimaraes, R. (2022) Method and device for adjusting the control-display gain of a gesture controlled electronic device, US-11474614-B2
[3] Zhou, W., Mizobuchi, S., Veras-Guimaraes, R., Saniee-Monfared, G., Li, W. (2022) Methods and systems for multi-precision discrete control of a user interface control element of a gesture-controlled device, US-2022197392-A1
[4] Li, W., Mizobuchi, S. (2022) Devices and methods of multi-surface gesture interaction, US-2022197494-A1
[5] Luo, W., Mizobuchi, S., Zhou, W., Li, W. (2022) Methods and devices for hand-on-wheel gesture interaction for controls, WO-2022116656-A1
[6] Zhou, W., Hosseinkhani-Loorak, M., Mizobuchi, S., Yi, X., Ye, J., Hu, L., Li, W. (2022) Methods, systems, and media for context-aware estimation of student attention in online learning, WO-2022052084-A1
[7] Mizobuchi, S., Zhou, S. (2022) Method and apparatus for video conferencing, WO-2022011653-A1
[8] Li, W., Mizobuchi, S., Xu, Q., Zhou, W., Xu, J. (2021) Timeline user interface, US-2021294485-A1
[9] Mizobuchi, S., Koga, K., Kumon, H. (2012) User interface, JP-2012003690-A
[10] Mizobuchi, S., Watanabe, S., Kumon, H. (2012) User interface device, JP-2012003585-A
[11] Chipchase, J., Mizobuchi, S., Sugano, M., Waris, H. (2005) Method and apparatus for improved handset multi-tasking, including pattern recognition and augmentation of camera images, WO-2005065013-A3
[12] Mizobuchi, S. (2005) Real time communication system, transceiver and method for real time communication system, JP-2005341387-A
[13] Mizobuchi, S., Mori, E. (2004) Touch screen user interface featuring stroke-based object selection and functional object activation, US-7554530-B2
[14] Mizobuchi, S. (2004) Stylus UI system and stylus, JP-2004030329-A