There’s a lot of talk out there about new paradigms, such as:
- SaaS – Software as a Service.
- IaaS – Infrastructure as a Service.
- DaaS – Data as a Service.
These paradigms are happening in the enterprise realm – so in reality they directly affect the IT activities of less than 10% of computer users. While they’re interesting, they’re mainly interesting to IT workers and IT managers. They’re not the big paradigm change that’s starting to peek in from the future.
The big paradigm change is SaaA – Service as an Appliance. This is about turning the computer into an appliance at the consumer level: shifting it out of the bedrooms or utility closets or room-corners of houses and into the pockets or hands of consumers, to be picked up and put down at any point, anywhere in the house. (Some would argue that this is the intent of Cloud Computing, but I disagree. Cloud Computing is not about taking away complexity; in its simplest form, Cloud Computing is about changing where the data and/or the processing happens – or how it’s organised. Hence things like SaaS, IaaS and DaaS.)
SaaA is probably the most exciting thing that’s happened in computing in the last three decades.
Undoubtedly there is a growing percentage of the population with a good grasp of computers. However, I’d suggest to you that it’s not the vast majority of computer users – or even anywhere approaching a majority. I’d go so far as to say that outside of actual IT circles, the number of people who genuinely understand their computers would be less than 10% of computer users.
Why would I claim this? Let’s look at another complex technology: automobiles. When cars first started being manufactured, engineering knowledge was either necessary or assumed to be bootstrapped by studying the manuals. Consider for instance this manual for a 1924 Chevy. Within the manual you’ve got pages like this, which explain how the engine works, or this one about performing a front wheel alignment. These days it’s expected that the average car user doesn’t care how the motor operates, and instructions are certainly not given for self-servicing beyond, say, changing tyres, refilling the radiator or windscreen washers, etc.
This is comparable to early computing. Fast forwarding past mini computers and shared computing to the desktop paradigm, the original 8 bit computers typically came with instructions like circuit board maps, BASIC programmers reference guides and sometimes even assembly language programming guides.
These days, like the updates to the car manuals, the average computer manual focuses on where to plug in the various cables and a few statements about liability, connecting to the internet, etc. The inner workings of the computer are no longer seriously documented, unless you deliberately buy individual components.
Yet, we’re not really at the “appliance” level of computing that we are with cars. While a lot of computer users don’t necessarily understand the inner workings or even components of their computer, the computer isn’t sufficiently like an appliance. So people point at a monitor and call it “the computer”; they point at a tower or desktop and call it “the hard drive”, and those users invariably become hopelessly confused by the notion of an all-in-one system like the iMac (or other similar products from Sony, HP, etc.)
Reaching the appliance state is not about taking away the need to document around the complexity, nor is it about reducing the risk of that complexity causing an error, which is what current computing models have mostly aimed for. In order to turn a computer into an appliance, you need to take away the complexity altogether.
Consider modern appliances. A standard front loading washing machine for instance will have a few buttons or a dial. The user presses a button to open the door, puts the clothes in, closes the door, fills up the appropriate detergent hoppers, etc., then presses another button or rotates a dial to select the wash settings, before pressing a button to start the cycle.
Microwaves are similar – put food in, press a number of buttons to provide the settings for cooking food, then press a button to start the cooking process.
Bringing this back to a computer though, sending an email, or surfing the web, or starting a video chat is not just about pressing buttons. It’s about navigating menus and potentially filesystems. It’s about moving a pointer around on screen using a piece of hardware that only has conceptual connectivity to the action on screen. The entire process is as successful as the end user’s understanding of the fundamental interface components. This means that I’m a high speed user, whereas my father rings me in a panic if a folder on his desktop has moved – and I have to explain what a “folder” is each time we talk about “folders”.
To me, with my usage requirements, my computer is indeed an appliance – a professional’s appliance. It does what I need it to do, and if it doesn’t, I can usually make it do what I need it to do. To my father, it’s a complex array of options that stand between him and iTunes, or him and Solitaire.
Understanding the interface’s operational paradigm is one of those insanely complex human activities that we take for granted when it happens, and abysmally fail to understand when it doesn’t. The closest way I can describe it is when animals first encounter themselves in a mirror. There’s usually shock, fear, anger and curiosity, until finally the animal understands that the thing in the mirror is itself and it no longer registers. Children can go through similar reactions. As adults, we look at these reactions and struggle to fathom how the simplicity of a mirror could be misunderstood.
Ultimately, what I’m talking about is perception. To explain my point more fully, I’m going to refer to A. F. Chalmers, What is this thing called science? (ISBN 0-7022-1831-6, 2nd edition, 1982, University of Queensland Press.)
In Chapter 3 of this excellent work, Chalmers provides a reversible line drawing of a staircase (his Figure 3), and goes on to say:
Most of us, when looking at Figure 3 [the diagram above], see the drawing of a staircase with the upper surface of the stairs visible. But this is not the only way it can be seen. It can without difficulty also be seen as a staircase with the under surface of the stairs visible. Further, if one looks at the picture for some time, one generally finds, involuntarily, that what one sees changes frequently from a staircase viewed from above to a staircase viewed from below and back again. And yet it seems reasonable to suppose that, since it remains the same object viewed by the observer, the retinal images do not change. Whether the picture is seen as a staircase viewed from above or a staircase viewed from below seems to depend on something other than the image on the retina of the viewer. I suspect that no reader of this book has questioned my claim that Figure 3 looks like a staircase of some kind. However, the results of experiments on members of a number of African tribes whose culture does not include the custom of depicting three-dimensional objects by two-dimensional perspective drawings indicate that the members of those tribes would not have seen Figure 3 as a staircase but as a two-dimensional array of lines. I presume that the nature of the images formed on the retinas of observers is relatively independent of their culture. Again, it seems to follow that the perceptual experiences that observers have in the act of seeing is not uniquely determined by the images on their retinas.
This forms the heart of the issue experienced by people who don’t get current computer interface paradigms. They see the same thing as us, but they don’t interpret it the same way. Whether that’s because they literally can’t come to terms with the paradigm espoused by the interface, or they simply don’t want to invest the time to learn that paradigm, it doesn’t matter – the net result is the same: the computer is not an appliance for them, but a confusing array of options and doodads and physical/conceptual disconnects that they can’t relate to.
There are two ways this can be addressed: assimilation or adaptation. The first roughly (and by roughly, I mean: drags kicking and screaming) assumes that all people can be bootstrapped to a certain level of computer expertise. The implication is the same as for any cultural assimilation documented in the last 500 years: those who don’t assimilate are left behind, usually with savage impact.
Adaptation, something we’re not so good at, is about either adjusting the existing paradigm, or developing a new paradigm that can co-exist with the old one, to ensure we don’t end up in a situation where there are disparate groups of haves and have-nots.
This is where SaaA comes in – having a complementary interface paradigm that reduces the entire computer to a few buttons: a button for email, a button for web browsing, a button for music, a button for … you get the picture.
The netbook industry claims, at some level, to provide this reduction in complexity, but realistically it doesn’t. Netbooks still end up looking and acting like normal computers, requiring a similar level of understanding from the end consumer. No current desktop/laptop operating system provides SaaA – nor does any even really come close to it.
Yet it’s here – in a small form factor, and about to arrive in a much bigger form factor. That’s via the iPhone/iPod Touch, and coming in the iPad. This is where Apple has leapt ahead of the game – they haven’t developed a “dumbed down” interface; they’ve introduced a new interface paradigm that turns the service into an appliance. The net result? SaaA will see the enabling of computer access in an entirely non-confrontational way to those who see the stairs as lines.
Now here’s the really exciting bit: if you think I’m jumping up and down saying “Apple is better than anyone else”, you’re actually missing my fundamental point. Apple may be first, but what I’m really, seriously hoping is that other companies out there also understand the new paradigm and focus on SaaA too.
Wouldn’t it be beautiful if everyone could use a computer? Those who work with them and know them well can choose to use the computer in its conventional form. Those who just need an appliance can choose to use an appliance and get just the same level of satisfaction from the computer, without being forced into interface and paradigm assimilation.
(This has been an idea I’ve been gestating for a while, but the final incentive to put it all together was sparked by a comment tweeted by Simon Sharwood.)