+@g.ucla.edu, &@seas.ucla.edu. *These authors contributed equally to this work. 1 University of California, Los Angeles.
TL;DR: LowKeyEMG is a real-time interface that uses RWKV to enable efficient one-handed text entry from only 7 (or fewer) sEMG-decoded gestures, achieving up to 98.2% top-3 word accuracy. In real-time experiments, participants achieved average typing speeds of 23.3 WPM.
We introduce LowKeyEMG, a real-time human-computer interface that enables efficient text entry using only 7 gesture classes decoded from surface electromyography (sEMG). Prior work has attempted full-alphabet decoding from sEMG, but decoding large character sets remains unreliable, especially for individuals with motor impairments. Instead, LowKeyEMG reduces the English alphabet to 4 gesture keys, with 3 more for space and system interaction, and leverages the recurrent transformer-based language model RWKV to efficiently resolve the resulting ambiguity, reliably translating simple one-handed gestures into text. In real-time experiments, participants achieved average one-handed, keyboardless typing speeds of 23.3 words per minute with LowKeyEMG, and improved gesture efficiency by 17% (relative to typed phrase length). When typing with only 7 keys, LowKeyEMG can achieve 98.2% top-3 word accuracy, demonstrating that this low-key typing paradigm can maintain practical communication rates. Our results have implications for assistive technologies and for any interface where input bandwidth is constrained.
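To make the key reduction concrete, below is a minimal Python sketch of reduced-keyset word disambiguation in the spirit of T9 predictive text: several letters share each gesture key, and a language-model score ranks the words consistent with a key sequence. The 4-group layout, toy vocabulary, and unigram log-scores are illustrative assumptions only; they are not the paper's optimized layout, and the scoring is a stand-in for RWKV.

# Minimal sketch of reduced-keyset word disambiguation (T9-style).
# Layout, vocabulary, and scores are illustrative placeholders, not
# the paper's optimized layout; scoring stands in for RWKV.
GROUPS = ["abcdef", "ghijkl", "mnopqrs", "tuvwxyz"]  # hypothetical 4-key layout
LETTER_TO_KEY = {ch: k for k, group in enumerate(GROUPS) for ch in group}
VOCAB = {"the": -1.0, "vie": -7.0, "that": -1.5, "this": -2.0}  # toy log-probs

def key_sequence(word: str) -> tuple[int, ...]:
    """Map a word to the gesture-key sequence that types it."""
    return tuple(LETTER_TO_KEY[ch] for ch in word)

def candidates(keys: tuple[int, ...]) -> list[str]:
    """All vocabulary words consistent with a key sequence, best-scored first."""
    matches = [w for w in VOCAB if key_sequence(w) == keys]
    return sorted(matches, key=VOCAB.get, reverse=True)

# "the" and "vie" collide under this layout; the score breaks the tie.
print(candidates(key_sequence("the")))  # -> ['the', 'vie']

In the full system, RWKV additionally proposes word completions and conditions on passage context, so a word can often be selected before all of its keys have been typed.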
Supplementary Video 1: Participant H1 performs the typing task using LowKeyEMG with 4 alphabetic keys, 1 space key, 1 select key, and 1 undo key. Each of the 7 one-handed gestures is mapped to 1 key. H1 types 3 repetitions of a phrase under conditions C: completion+context, A: base, and B: completion, in that order.
Experimental performance
(a) Typing speeds across three conditions for the 3 participants, A: LowKeyEMG-base, B: LowKeyEMG-completion, C: LowKeyEMG-completion+context. Lines connect the same phrase typed in each condition. Purple triangle: mean, orange bar: median. One-sided Wilcoxon signed-rank test: *p < 0.05, **p < 0.01, ***p < 0.001. (b) Participants achieved reduced gestures per character (GPC, with and without error gestures) with word completion and context. Colors represent whether each gesture corresponded to a correct key (blue), an incorrect key (shaded red), or the undo of an incorrect key (solid red).
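For reference, panel (b)'s efficiency metric can be computed as in the sketch below, assuming GPC is the total number of gestures issued (with or without counting error and undo gestures) divided by the character length of the final phrase; the example numbers are hypothetical.

def gestures_per_character(num_gestures: int, phrase: str) -> float:
    """GPC: gestures issued divided by characters in the final phrase.
    GPC below 1.0 is possible because selecting a word completion
    commits several characters with a single gesture."""
    return num_gestures / len(phrase)

# Hypothetical example: 38 gestures to produce a 44-character phrase.
print(gestures_per_character(38, "the quick brown fox jumps over the lazy dog."))  # ~0.86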
Typing speed for the 3 LowKeyEMG conditions, per participant, in words per minute (wpm, mean ± std).
Closed-loop one-handed typing selection statistics
(a) The number of gestures before selection is shown for each participant for conditions A: base, B: completion, and C: completion+context. One-sided rank-sum test: *p < 0.05, **p < 0.01, ***p < 0.001. All participants select word suggestions significantly sooner with word completion (B), and when RWKV receives additional passage context (C). (b) Probability distribution of the position of the selected candidate suggestion, aggregated (equally weighted) across participants, for conditions A: base, B: completion, and C: completion+context. When participants select a word, it is in the top-2 candidates at least 94.8% of the time (condition B); for conditions A and C, the selected candidate is in the top-2 97.3% and 98.6% of the time, respectively.
Simulated results
(a) Cumulative distribution functions of the position of each word among candidates after typing the entirety of each word and a space, using an optimized layout with 4 alphabetic classes, for LowKeyEMG and for a 4-gram word LM. (b) Same as (a), but for optimized layouts of 2-8 alphabetic classes, using LowKeyEMG. (c) Gestures per character (GPC) for a simulated user who minimizes GPC, shown for a 4-gram word LM, for LowKeyEMG with alphabetical and optimized layouts (distance-1 matching on all but 3 classes), and for LowKeyEMG where a space selects the top candidate once the next character is typed.
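The sketch below illustrates the kind of computation behind panel (a): after a word's full key sequence is typed, find the rank of the intended word among all candidates sharing that sequence, then accumulate ranks over a corpus into a CDF. The layout and scores are the same illustrative stand-ins as in the earlier sketch, not the optimized layouts or the LMs evaluated here.

from collections import Counter

GROUPS = ["abcdef", "ghijkl", "mnopqrs", "tuvwxyz"]  # hypothetical layout
LETTER_TO_KEY = {ch: k for k, g in enumerate(GROUPS) for ch in g}
SCORES = {"the": -1.0, "vie": -7.0, "that": -1.5, "this": -2.0}  # toy log-probs

def keys_of(word: str) -> tuple[int, ...]:
    return tuple(LETTER_TO_KEY[ch] for ch in word)

def rank_of_target(target: str) -> int:
    """1-indexed rank of the intended word among all candidates
    sharing its full key sequence, best-scored first."""
    matches = sorted((w for w in SCORES if keys_of(w) == keys_of(target)),
                     key=SCORES.get, reverse=True)
    return matches.index(target) + 1

def rank_cdf(corpus: list[str]) -> dict[int, float]:
    """Cumulative fraction of corpus words found at or below each rank."""
    ranks = Counter(rank_of_target(w) for w in corpus)
    cdf, cum = {}, 0
    for r in sorted(ranks):
        cum += ranks[r]
        cdf[r] = cum / len(corpus)
    return cdf

print(rank_cdf(["the", "vie", "that", "this"]))  # -> {1: 0.75, 2: 1.0}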
@article{lee2025lowkeyemg,
  title={{LowKeyEMG: Electromyographic typing with a reduced keyset}},
  author={Lee, Johannes Y and Xiao, Derek and Kaasyap, Shreyas
    and Hadidi, Nima R and Zhou, John L and Cunningham, Jacob
    and Gore, Rakshith R and Eren, Deniz O and Kao, Jonathan C},
  year={2025},
  journal={arXiv:.......}
}