I want to talk about user interfaces. This is a fancy name for things that allow you, the human, to tell the computer what you want it to do. It could be a mouse, keyboard, voice, touch and now, with Apple’s Vision Pro, a combination of eye-tracking and finger movement.
It all starts with Star Trek
But to begin with, let’s talk Star Trek. The original series (TOS) used buttons and voice to talk to the ship’s computer. In fact, the voice of the computer was played by Majel Barrett Roddenberry, Gene’s wife, from the original series until her death in 2008.
By the time Star Trek: The Next Generation (TNG) came along, the computer was always listening for the prompt “computer”, just as Siri, Alexa and Google Assistant do now. Some argue that mobile phones and personal audio assistants were inspired by Star Trek, but never forget that these were pretend interfaces to a computer system: interfaces invented to work in the medium of film and TV, where typing into a computer is simply dull.
Jump forward to the excellent Picard and you get virtual interfaces. Here, people must swipe in space and move stuff that isn’t there; a key element of augmented reality (AR).
Apple, however, has taken things one step further with the Vision Pro. Its $3,500 ski goggles will show you virtual screens or data alongside the outside world. To interact, cameras point downwards to capture your hand movements, and together with eye-tracking technology this means you need only look at something and pinch to interact with the computer.
I’m not going to rush out to buy a pair, although I can see the appeal in limited situations. But really, am I going to give up a pair of monitors, a keyboard and a mouse for it?
The ultimate UI: the keyboard
If all mice and keyboards were like the ones I had to use during a recent demo session then perhaps I would change my mind. Here, I was forced to use a horrid “natural” bent keyboard with tiny keys that didn’t offer any travel or feedback, alongside the world’s smallest mouse. A mouse I couldn’t use, so what was I to do?
Good news. Windows was originally built on top of a command-line interface (CLI). For those who have never encountered such a thing, think of a text box you can type commands into. That history is still there, and it means you don’t have to use a mouse to get things done in Windows.
Instead, keyboard shortcuts are your friend. I’m sure you all know you can highlight with the Shift key and the arrow keys, use CTRL+C to copy the highlighted text and then CTRL+V to paste it. Well, in that training session I just dropped the tiny mouse and made do with the keyboard. It turned out that the people in the training didn’t know about these shortcuts and thought I was doing some form of magic.
It’s this that got me thinking about the user interfaces (UIs) that have taught me a bagful of tricks I still use today, and what we can learn from them now.
Old-style versus Ribbon user interface
In the old days, Windows menus looked a little different to the ribbon we have today. I still use a program with the old-style menus, which take full advantage of keyboard shortcuts, as shown.
You can see that the developers even listed the keyboard shortcut next to the menu item. So CTRL+O will open the file dialog box, just like clicking on the “Open…” item shown. You can also see that the O is underlined, so pressing the O key while the menu is open will also bring up the file dialog box.
This interface was designed so you could “discover” shortcuts whilst also being able to do everything. This is a major advantage of a graphical user interface (GUI) over a CLI: you can explore all the options and find and pick what you want. When Microsoft invented the Ribbon interface, it claimed that this was so that people could find even more options.
The problem is that to get keyboard shortcuts now you must press the ALT key. The interface then highlights letters to show the options. That’s fine, but those change each time, and I have to work through the tabs to get to the right tab and then the letter. It allows for more options at the cost of time, though it is much more accessible.
Stepping up to Microsoft PowerShell
While Microsoft was doing its work on the Ribbon interface it also invented something called PowerShell. This is designed for admins like me. It’s a CLI and looks very Matrix, but what I really love is its time-saving power.
At its most basic, I can use PowerShell to control a lot of how Windows works, rather than using the GUI or the keyboard. But the real advantage of PowerShell is that I can write a load of these commands and bundle them into a single script file. Whenever I run that file, I know the same commands will be executed every time. That’s great, because lots of mistakes can happen if you’re doing the same thing repeatedly in a GUI.
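As a flavour of what that looks like, here’s a minimal sketch of such a script. The task itself (restarting the print spooler and clearing its queue) is just an illustrative example, not something from a specific admin playbook, and it assumes you run it from an elevated PowerShell prompt:

```powershell
# Restart-PrintSpooler.ps1 - a hypothetical example of bundling
# commands into one file so they run the same way every time.

# Stop the Print Spooler service.
Stop-Service -Name Spooler

# Clear any stuck jobs from the spool directory.
Remove-Item -Path "$env:SystemRoot\System32\spool\PRINTERS\*" -ErrorAction SilentlyContinue

# Start the service again.
Start-Service -Name Spooler

# Show the service status so you can confirm it worked.
Get-Service -Name Spooler | Select-Object Name, Status
```

Save those lines once, run the file whenever the printer misbehaves, and you never risk forgetting a step or mistyping a path.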
The problem comes when Microsoft decides to only offer a way to control things via PowerShell, rather than through the GUI. It’s particularly annoying if I only need to perform a task once or twice.
In this case, I have to write and run a huge multiline command that will most likely generate an obscure error. An error I must then look up to work out what has gone wrong. I can see why Microsoft has done this; it allows tasks to be automated if you want, but not everyone needs this sort of automation.
Before you think this is just a go at Microsoft, please don’t. I spoke earlier about the Star Trek voice computer interface, and in the real world plenty of people have lots of fun interactions where the system doesn’t understand them or times out. Or, apparently wilfully, does the exact opposite of what was asked. See my previous column, AI hype: Why Microsoft Copilot is going to be the new Clippy, for my view on the supposed solution to this.
You might think people would never risk looking like idiots by shouting “No not that!” in the street with AR headsets on, but people already hold loud phone conversations on public transport.
A final word on touch interfaces
Which brings me back to touch as an interface. To be precise, I mean a panel of glass that you touch, rather than a keyboard. Touch interfaces have advantages on a phone or tablet, including fewer physical buttons to let water in or go wrong.
That works fine on such devices, but now we’re getting them on cars as well. There are arguments for such an approach: a touchscreen is less expensive than a physical button and can be tweaked with software updates. But touch requires focus: the car’s user interface doesn’t remain the same in each scenario, so you can’t locate yourself on the screen without looking.
If I’m driving and I want to change the air con or the radio with physical buttons I can reach for them without looking. Tactile feedback tells me what I’m touching, and what I’m doing with each button, all without me having to look away from the main focus of what I am doing. That is, driving.
To do the same with a touch interface requires me to look and pay attention to a screen whilst I am supposed to be driving. And it doesn’t even save on time. (As a side note, a friend of mine and I were discussing cars and touch interfaces, and they said “also it is of no help to blind people” which I will just leave here.)
Enter the Apple Vision Pro?
There is one more type of interface that’s well known in cars and planes: the heads-up display (HUD). Here, the information sits in front of you without requiring you to look away from the road. What you can’t do is touch it.
Could this, then, be where the Vision Pro and its successors make an impact? By integrating what is effectively a HUD with touch, it brings something new to the world of user interfaces.
Still, I remain cynical. I like Apple devices, but the interface isn’t great at letting you discover the options. Apple doesn’t like writing manuals either. It sometimes seems to care more about the “look” of an interface than about making it simple, quick and discoverable.
In the end, I don’t care about style. I want an interface that fits the job, that allows me to tell the computer what to do without having to waste time doing it.
Perhaps the Vision Pro will eventually transform the way we interact with computers in the same way the iPhone changed the way we interact with our phones, but I can’t see it. My message? If you want to gain a skill to do your work faster, to out-perform your peers, then the keyboard shortcut remains king. On Windows and Macs.