Advances in machine learning hold great potential for supporting people with disabilities. However, recognition models and their application scenarios are typically defined by experts, which makes it difficult to adequately reflect the diverse needs of users with disabilities. To fully harness the potential of machine learning, it is essential to incorporate feedback and ideas from the affected users themselves, even when they have no specialized knowledge. In this study, we examined how deaf and hard-of-hearing (DHH) individuals understand machine learning technologies and how they might design sound recognition systems based on them, through a workshop built around an interactive machine learning environment. Through hands-on experience with machine learning and communication among the workshop participants, we demonstrate the potential for these individuals to discuss concrete ideas for machine learning applications.
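To illustrate the kind of interactive machine learning workflow such a workshop might build on, the sketch below shows a minimal example in which a user records a few labelled sound clips, a lightweight classifier is trained on them immediately, and a new recording is then recognised. This is not the system used in the study; it assumes a Teachable-Machine-style workflow using librosa and scikit-learn, and the file names and sound labels are hypothetical placeholders.

```python
# Minimal sketch of an interactive sound-recognition loop (hypothetical,
# not the authors' system): train on a handful of user-labelled clips,
# then recognise a newly recorded sound right away.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def embed(path, sr=16000, n_mfcc=20):
    """Summarise an audio clip as the mean of its MFCC frames."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical participant-recorded examples: label -> list of clip paths.
examples = {
    "doorbell": ["doorbell_1.wav", "doorbell_2.wav"],
    "kettle":   ["kettle_1.wav", "kettle_2.wav"],
}

X = np.stack([embed(p) for clips in examples.values() for p in clips])
y = [label for label, clips in examples.items() for _ in clips]

# A nearest-neighbour classifier is enough for a few examples per class
# and retrains instantly, which suits an interactive workshop setting.
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# Recognise a newly recorded sound and report the predicted label.
print(clf.predict(embed("new_recording.wav").reshape(1, -1))[0])
```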
Reference Literature
- Yuri Nakao and Yusuke Sugano, “Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception,” in Proceedings of the 11th Nordic Conference on Human-Computer Interaction (NordiCHI 2020).