Saturday, November 6

The Emperor's Smart Clothes...

Ever since reading Rainbow’s End by Vernor Vinge in 2007, I’ve been fascinated by the idea of wearable computing and alternate forms of computer user interfaces. The book came out in 2006, but the story is set in 2025, and it envisions amazing leaps in how technology can and will transform our lives. While much of it may seem obvious now, the book was released before the first iPhone, when the RAZR flip phone was all the rage.


When I got my first iPhone in 2008, I was captivated by its potential but immediately felt constrained by its onscreen keyboard. I could suddenly get online and do all these great things, but I had to fumble with the onscreen keyboard to type anything.


In Rainbow’s End, children could chat with friends in class, search online, and more, all without the teacher really noticing. I desperately wanted the smart clothing from the book, which provided custom gesture recognition and, with enough practice and experience, would make text entry effortless whether sitting at a desk or walking down the street.


The basic goals were simple:

  1. Support an interface for wearable computing.

  2. Build it around a learning system to best personalize itself for each user.

  3. Have it work without touching or moving against anything.

  4. Make it always readily available.

  5. Require no vision to operate.


In my research, the closest thing I could find was a product called the Twiddler, a one-handed chording keyboard for mobile entry. I wanted a solution that didn’t require holding something in your hand, though, to cover points #3 and #4 above.


Unfortunately, nothing like it existed yet, but I thought that if something even half as good as what was in the book could be built, it would still be a huge improvement. The sensitivity of the iPhone’s accelerometer and gyroscope demonstrated what was already possible, so I believed the key building blocks existed.


Surface-mount accelerometer/gyroscope chips were only about 1 square cm, so the idea was to mount a number of sensors on rings and/or a glove to capture small hand movements and gestures without having to actually hold anything. Since the user could be swinging their arms while walking down the street, additional sensors above the palms/wrists/forearms could be used to help normalize the movement of individual fingers. Like the Twiddler, chording could be used to expand the range of possible inputs.
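
The normalization idea can be sketched in a few lines: subtract the wrist sensor’s reading from each finger sensor’s reading, so shared motion like arm swing cancels out and only deliberate finger movement remains. The function name and the 3-axis tuple format here are my own assumptions for illustration, not any real sensor API.

```python
# Hypothetical sketch: isolate per-finger motion by subtracting a wrist
# reference reading, so walking or arm swing doesn't register as a gesture.
def normalize_finger_motion(finger_accel, wrist_accel):
    """Return finger acceleration relative to the wrist (x, y, z in g)."""
    return tuple(f - w for f, w in zip(finger_accel, wrist_accel))

# While walking, both sensors see the same arm swing...
wrist = (0.10, 0.95, 0.05)
finger = (0.10, 0.95, 0.35)  # ...plus a deliberate fingertip flick on z

relative = normalize_finger_motion(finger, wrist)
print(relative)  # only the intentional z-axis finger movement remains
```

In practice a real system would need calibration and filtering, but the principle is just this vector difference.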


As I experimented more and more with my iPhone’s sensors, I realized the gestures didn’t need to be rigidly defined. Enabling or disabling keyboard mode could be as simple as wiggling both hands up and down simultaneously. Each use of `backspace` would be a valuable input to the ML model: it would not only flag that something was handled incorrectly, but the user’s final intent could be captured as well (e.g., gesture "A" was misrecognized, and whatever the user enters next reveals what was actually intended).
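
The backspace-as-training-signal idea can be sketched as a small logger: every recognized gesture is buffered with its raw features, a backspace marks the last one as wrong, and the next character entered supplies the corrected label. The class name, feature format, and gesture labels here are invented for illustration.

```python
# Hypothetical sketch of mining backspaces for ML training labels.
class CorrectionLogger:
    def __init__(self):
        self.emitted = []            # (features, recognized_char), in order
        self.training_examples = []  # (features, intended_char)
        self._pending = None         # features awaiting a corrected label

    def emit(self, features, char):
        """The recognizer emitted a character for a gesture."""
        if self._pending is not None:
            # First character after a backspace reveals the true intent
            # behind the misrecognized gesture's features.
            self.training_examples.append((self._pending, char))
            self._pending = None
        self.emitted.append((features, char))

    def backspace(self):
        """User rejected the last character; remember its raw features."""
        if self.emitted:
            self._pending, _ = self.emitted.pop()

log = CorrectionLogger()
log.emit([0.1, 0.7], "A")   # recognizer guessed "A"
log.backspace()             # user deletes it
log.emit([0.2, 0.6], "B")   # user retries; intent for the old features was "B"
print(log.training_examples)  # [([0.1, 0.7], 'B')]
```

Each such pair is exactly the (input, correct label) example a supervised model needs, gathered passively from normal use.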


Beyond just text, scrolling could be as simple as moving the pointer finger in a circle, `alt-tab` could be a flick of the wrist, and so on. The accelerometer would capture the direction of gravity, so the direction or angle of the hand (up or down) could be used to differentiate meaning.
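
As a sketch of how the gravity vector differentiates meaning: a static accelerometer reading tells you which way the hand is facing and how far it is tilted, so the same finger circle could mean one thing palm-down and another palm-up. The orientation names and the 0.7 g threshold are arbitrary assumptions for illustration.

```python
# Hypothetical sketch: classify hand orientation from the gravity vector
# reported by a static accelerometer reading (axes in g).
import math

def hand_orientation(ax, ay, az):
    """Rough orientation bucket based on where gravity falls on the z axis."""
    if az > 0.7:
        return "palm-down"
    if az < -0.7:
        return "palm-up"
    return "vertical"

def pitch_degrees(ax, ay, az):
    """Tilt of the hand relative to horizontal, from the gravity vector."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))

print(hand_orientation(0.0, 0.0, 1.0))        # resting flat: "palm-down"
print(round(pitch_degrees(-0.5, 0.0, 0.87)))  # hand tilted up roughly 30
```

The gyroscope complements this: it measures angular velocity, so it catches the flick itself, while the accelerometer supplies the orientation context.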


By toggling between different input/interface modes, the inadvertent movements of a train, car, or elevator could be prevented from producing false input.
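
Mode gating is essentially a tiny state machine: gestures only pass through to the recognizer while keyboard mode is active, and everything else is dropped as ambient motion. The gesture names here are invented; the toggle gesture stands in for the two-hand wiggle described earlier.

```python
# Hypothetical sketch of input-mode gating: ambient motion is ignored
# unless keyboard mode has been explicitly toggled on.
class InputGate:
    def __init__(self):
        self.keyboard_mode = False

    def handle(self, gesture):
        if gesture == "both-hands-wiggle":  # the toggle gesture
            self.keyboard_mode = not self.keyboard_mode
            return None
        if not self.keyboard_mode:
            return None                     # train/car/elevator motion dropped
        return gesture                      # forwarded to the recognizer

gate = InputGate()
print(gate.handle("finger-circle"))  # mode off, so this is ignored
gate.handle("both-hands-wiggle")     # enable keyboard mode
print(gate.handle("finger-circle"))  # now forwarded
```

A real version would likely add more modes (pointer, scroll, off) rather than a single boolean, but the gating principle is the same.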


To support blind entry, simple haptic feedback could be used to register the recognition of a gesture, like the little click a key makes on a keyboard.



While the ideas and simple experiments were abundant, I unfortunately lacked the CE or EE background to actually connect a number of small sensors to a microcontroller. The Arduino Uno wasn’t released until late 2010, and I didn’t begin developing against one until years later. Ten-plus years ago, I had no idea how to solder well, or even how to have a microcontroller communicate with a surface-mount MEMS sensor.


I figured the iPhone had only been out a few years at that point, so if I just gave it time, someone would bring something to market. After all, Rainbow’s End won the Hugo Award in 2007, so I figured whole rooms of CE undergrads working on wearable computing would likely be building something better than I could even imagine.


Unfortunately, it’s 2021 now, the iPhone has been out for ~14 years, and the closest thing I’ve seen to Mr. Vinge’s vision was Google Glass.


Now, sitting in a room littered with various Arduino hardware, it looks like I may have to build one myself after all. Luckily, I’ve learned a lot in the past ten years, and the RP2040 looks like the perfect platform to start with.


Wish me luck!