Last summer I was selected by Google to try Glass, their prototype system for augmenting reality with the knowledge of the Internet. You wear Glass as you would an unusual pair of sunglasses or readers. Glass layers a 15-character by 3-line display of text and graphics over your normal eyesight. I participated in the trial to verify that the Appium mobile testing framework is compatible with the Glass software operating environment. Software developers will be able to use a future version of Appium to test their Glass apps for correct function.
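Since Glass runs an Android-derived OS, an Appium test for a Glass app would presumably start from the standard Android capabilities. The sketch below is illustrative only: the device name and package names are hypothetical, and Glass support was not part of Appium's documented platforms at the time.

```python
# Hypothetical sketch of Appium desired capabilities for a Glass app.
# The specific values (deviceName, appPackage, appActivity) are
# illustrative assumptions, not documented Glass support.

def glass_capabilities(app_package, app_activity):
    """Build an Appium desired-capabilities dict for an Android-based
    Glass device, reusing the standard Android capability keys."""
    return {
        "platformName": "Android",      # Glass's OS is Android-derived
        "deviceName": "Google Glass",   # hypothetical device name
        "appPackage": app_package,      # app under test (hypothetical)
        "appActivity": app_activity,    # launch activity (hypothetical)
    }

caps = glass_capabilities("com.example.glassware", ".MainActivity")

# In a real run, these capabilities would be handed to the Appium
# Python client against a running Appium server, e.g.:
#   from appium import webdriver
#   driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
print(caps["platformName"])  # → Android
```

The point of the trial was exactly this: if Glass looks like Android to Appium, existing Android test tooling carries over with little change.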
Here are my notes on the Glass experience:
- Nice design, lightweight
- The battery lasted about 3 hours of use, or 10 hours on stand-by
- Surprised that I can see the screen clearly without my +3.25 reading glasses
- Screen can handle 15 characters wide x 3 lines max
- As an interface/control, the picture-taking button and trackpad make no sense to me. It’s called Glass, not Touch.
- The first thing I did after the “fitting” appointment at Google’s San Francisco office was ask Google when the next train was leaving for San Jose. “No network connection” was the response. Worse, I could not use my iPhone’s hotspot. I needed to be on a laptop/desktop to configure Glass for networks.
- Looking up to see the screen doesn’t do it for me. I want to look through the augmented reality. Up is for menu bars that I want to forget about. Down is where the action is happening, usually.
- I am looking for a personal assistant, more than a new screen for my Android or iPhone.
While Glass is a fine prototype, its design seems too rooted in the Android operating system. Android was built for mobile phone devices. Android does well for making phone calls, and secondarily for playing games. For example, Android introduced many of us to gestures on a touch screen device, including pinching to make something bigger or smaller.
Glass is all about augmenting the real world with helpful information. Its inputs should include:
- Hearing my voice, including noticing if I am upset, calm, or panicked
- My body position, including noticing if I am standing up, sitting down, crouching, or running, and which way I am heading. It should also know how to ignore the things behind me and anticipate what is coming in front of me.
- My attention, including where the pupils of my eyes are looking
- 3-dimensional modeling of the things I am seeing. If I see a lovely necklace, I should be able to view it in Glass, have Glass model it into a 3D file, and print it with a 3D printing service. Consider it the first commercial 3D scanner.
Glass shows the need for a new operating environment for real-time augmented reality. While Glass is a good first prototype, there is a lot more that can be invented.
The experience with Glass got me thinking about the requirements for a good watch operating environment.