Text of my ‘When Worlds Collide’ column published in Ceylon Today Sunday broadsheet newspaper on 4 August 2013
There was a memorable scene in the film Star Trek IV: The Voyage Home (1986). Chief Engineer Scotty, having time-travelled some 300 years back to late 20th century San Francisco with his crewmates, encounters an early Personal Computer (PC).
Sitting in front of it, he addresses the machine affably as “Computer!” Nothing happens. Scotty repeats himself; still no response. He doesn’t realise that voice recognition capability hasn’t arrived yet.
Exasperated, he picks up the mouse and speaks into it: “Hello, computer?” The computer’s owner offers helpful advice: “Just use the keyboard.”
Scotty looks astonished. “A keyboard?” he asks, and adds in a sarcastic tone: “How quaint!”
He then proceeds to use the keyboard with amazing dexterity — too good to be true for someone raised in a future world where human-computer interactions are mostly voice-based.
The scene highlights a long-standing debate in information technology (IT) circles: what is the best way for us to interact with computers, smartphones and other machines that are increasingly an integral part of our lives?
The answer depends on specific uses. The quest, from the early days of IT, was to make the experience as seamless as possible for as many people as possible.
Geeks who worked on early computers didn’t seem to mind interfaces using lots of text and code. But some of them realised early on that it had to be simpler and easier for the rest of us.
Mouse that Roared
By the 1960s, display screens and keyboards were standard components of computers; both were adapted from other industries. Then in 1963, American electrical engineer Doug Engelbart invented the computer ‘mouse’ in his research lab at Stanford Research Institute (now SRI International).
The basic idea had occurred to him in 1961 while sitting in a conference session on computer graphics. His first design was a wooden shell covering two metal wheels, with a long cable trailing out the back.
In 1967, SRI applied for a patent for the new device, under the name “x,y position indicator for a display system”. They first demonstrated it to the public in 1968, and by the time the patent was granted in 1970, Engelbart and colleagues had improved the design.
So who coined the quirky name? Engelbart later recalled, “Five or six of us were involved in these tests, but no one can remember who started calling it a mouse. I’m surprised the name stuck.”
This mouse certainly took its time to come out. It was first used commercially by the Xerox company in 1981, but even they didn’t realise its full potential.
Around that time, Apple co-founder and maverick innovator Steve Jobs (1955-2011) was searching for simpler ways for ordinary people – with no advanced technical skills – to relate to a computer.
In late 1979, while visiting Xerox’s PARC research lab in Palo Alto, California, Jobs saw an experimental computer with a mouse and a graphical user interface (GUI) – one that allows users to interact with electronic devices through images rather than text commands.
He later recalled, “Within 10 minutes… it was clear to me that all computers would work this way someday.”
Jobs tasked his team to build a simpler and more robust mouse that cost far less than Xerox’s original tag of US$300. After hundreds of prototypes, Apple settled on a single-button design roughly the size of a deck of cards. (In contrast, the Xerox mouse had three buttons.)
“SRI patented the mouse, but they really had no idea of its value. Some years later it was learned that they had licensed it to Apple for something like $40,000,” inventor Engelbart said in an interview.
It was the Apple Macintosh (1984) that introduced the silicon rodent to millions of users. Powered by Apple’s sleek design and forceful marketing, the humble device charmed its way into many homes and offices. The rest is computer history…
Engelbart, who died in July 2013 aged 88, expected it to “have a more dignified name” when the device escaped into the world. But that didn’t happen. He was quite clear that the plural should be mice, not ‘mouses’.
Over the years, the name has spurred many jokes and cartoons. It also shows how old words acquire totally new meanings in the digital age.
Arthur C Clarke asked in 1999: “What would your grandmother have thought, if you told her that you would spend much of your day stroking a mouse?”
For over a generation, the mouse has been our primary bridge to the virtual world. But more than a billion units later, it faces stiff competition from newer technologies such as touch-screens and gesture control.
Apple is leading the charge again: its iPhones and iPads are raising a whole new generation of digital natives who barely recognise a mouse. (Engelbart’s own grandchildren, in their early 20s, no longer use one.) Several computer companies are already building touch-sensitive monitors.
Gesture control is even more fascinating. We had a glimpse in the 2002 science fiction thriller Minority Report. In that film, directed by Steven Spielberg and based on a short story by Philip K. Dick, Tom Cruise was seen putting on his “data glove” and whooshing through video clips of future crimes.
The film highlighted computer interfaces using natural gestures without a keyboard, mouse or command line. Its chief scientific advisor John Underkoffler, at the time a researcher at MIT, soon developed the real thing — another case of science fiction inspiring tech innovation.
His technology, called the g-speak Spatial Operating Environment, is already being used in specialized fields like aerospace, bioinformatics and video editing. Underkoffler’s vision is to make it ubiquitous – in laptops, tablets, microwave ovens, TVs and elsewhere.
“Human hands and voice, if you use them in the digital world in the same way as the physical world, are incredibly expressive,” he told the Washington Post in October 2012. “If you let the plastic chunk that is a mouse drop away, you will be able to transmit information between you and machines in a very different, high-bandwidth way.”
Meanwhile, what about keyboards?
They will continue to be used until voice recognition software is perfected and users can directly converse with a computer, a la Star Trek. (Human voice recognition is hard for machines, given the many ways in which we speak even the same language.) Hopefully, we won’t have to wait till the 23rd century to get there…
But why are we still using the old-fashioned QWERTYUIOP arrangement on our Roman/English keyboards? Christopher Sholes, who patented the typewriter in 1868 and the QWERTY layout a decade later, apparently adopted this arrangement to deliberately slow down early typists, to prevent keys from jamming together. (Others dispute this claim.)
We no longer have moving keys, but old habits die hard. Several alternatives – such as the Dvorak, Colemak and Capewell layouts – arrange letters and characters for better efficiency, but they haven’t caught on. A quick comparison is at: http://mashable.com/2012/09/18/qwerty-keyboard/
Looking further ahead, the ultimate input-output device would bypass all the body’s sensory organs and pass its signals directly into our brains. In 3001: The Final Odyssey written in 1997, Arthur C Clarke described the ‘Braincap’ – a brain-computer interface technology that connects a computer directly to the human brain.
Neuroscience researchers are now tackling this challenge. One approach is to use EEG (electroencephalography) to non-invasively read brain waves and translate them into movement commands for computers and other devices.
Braincaps might arrive even before all mice go extinct. Watch this space.