Abstract
Many smart glasses technologies are being developed to improve working efficiency or quality of life in various fields. In enterprises, these technologies help improve work quality and productivity and minimize data loss; in everyday life, smart glasses serve as entertainment devices for augmented/virtual reality or as assistive manipulators for the physically challenged. Accordingly, these devices have adopted various operating methods depending on their use, such as a touchpad, a remote control, and voice recognition. However, conventional operating methods have limitations in non-verbal or noisy situations and in situations where both hands are occupied. In this study, we present a method of detecting facial signals for touchless activation using a transducer. We acquired facial signals amplified by a lever mechanism using a load cell mounted on the hinge of an eyewear frame. We then classified the signals with a machine learning technique, the support vector machine, and evaluated accuracy by computing the confusion matrix over the classified categories. Using the eyewear-type signal transducer, a classified facial signal can activate an actuator such as a radio-controlled car. Overall, our operating method can be used to activate an actuator or transmit a message through classified facial activities in non-verbal situations and in situations where both hands cannot be used.
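As a rough illustration of the classification step described above, the following is a minimal sketch in Python using scikit-learn: an SVM is trained on feature vectors, its accuracy and confusion matrix are computed, and a predicted class is mapped to an actuator command. The feature names, class labels, synthetic data, and command mapping are placeholder assumptions for illustration, not the authors' actual dataset or implementation.

```python
# Hypothetical sketch: SVM classification of load-cell signal features
# with a confusion matrix, loosely following the pipeline in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: each row is a feature vector (e.g., mean,
# peak, and duration of a windowed load-cell signal); each label is an
# assumed facial-activity class.
X = rng.normal(size=(300, 3)) + np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(["rest", "single_clench", "double_clench"], 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Support vector machine classifier (RBF kernel is an assumption here).
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred, labels=clf.classes_))

# A classified facial activity could then be mapped to an actuator
# command, e.g., for a radio-controlled car (mapping is hypothetical).
command_map = {"rest": "stop", "single_clench": "forward", "double_clench": "turn"}
print("command:", command_map[y_pred[0]])
```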
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1035-1046 |
| Number of pages | 12 |
| Journal | International Journal of Precision Engineering and Manufacturing |
| Volume | 21 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 1 Jun 2020 |
Bibliographical note
Publisher Copyright: © 2020, Korean Society for Precision Engineering.
Keywords
- Facial muscle activities
- Leverage
- Pattern recognition
- Sensor and actuator
- Smart glasses
- Support vector machine