Editor's Note: This guest post is written by Kwindla Hultman Kramer, the CEO of Oblong Industries, the company known for developing the gestural interfaces in the film Minority Report. The company's current customers and partners include Boeing, SAP, GE, and others.
Computers have been getting steadily "better" (faster, smaller, cheaper) for sixty years. But they get "smarter" (more capable and more broadly useful) in discrete leaps, the biggest of which don't happen very often. We're overdue for our next big leap.
Working with computers is intoxicating. The price/performance curve is always moving to the right. Every year one can do more: design new user experiences, write new kinds of programs, and develop new hardware. In this context of constant change, it's easy to focus on the trees rather than the forest. Good engineering is often about incremental improvement. Good business is often about finding product/market fit, while good design is often about giving users an interface that is easy to understand.
This is what's involved when building the "next thing." And it's great, fulfilling, and indispensable work. But, of course, it's also possible to build the "next next thing." And we should be doing that, too.
The fundamental driver of progress in computers is that the number of transistors that can be packed into a given-sized integrated circuit keeps rising, so the cost of computing power keeps dropping. This trend is so important that everybody knows the name for it: Moore's Law. Since about 1960, Moore's Law has increased raw computing power at a steady, incremental rate. But, viewed more generally, what we can do with computers can change not only incrementally but also in big, saltational leaps.
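To put rough numbers on that driver, here is a minimal sketch, assuming the commonly cited doubling period of about two years (an approximation, not a law of nature). It shows why sixty years of compounding adds up to roughly a billionfold improvement:

```python
# Back-of-the-envelope Moore's Law: transistor density doubling
# roughly every two years. The two-year cadence is an assumption;
# the real period has drifted between one and two years over time.
DOUBLING_PERIOD_YEARS = 2

def density_multiplier(years: int) -> int:
    """How many times transistor density multiplies over `years`."""
    return 2 ** (years // DOUBLING_PERIOD_YEARS)

for span in (10, 20, 60):
    print(f"{span} years -> ~{density_multiplier(span):,}x")
# 10 years -> ~32x
# 20 years -> ~1,024x
# 60 years -> ~1,073,741,824x  (about a billionfold)
```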
Progress isn't just a megaflops numbers game. There are other drivers and constraints in addition to Moore's Law. Some of these are technical, some economic, and some social. It takes time for us to figure out how to use, and to adopt, additional computing horsepower.
There have been three saltational leaps, three next next things, that stand out in the history of modern computing: interactive, general-purpose personal computers; graphical user interfaces; and the Web. In the 1970s, computers evolved from batch machines designed for experts to interactive machines that a much broader population could learn and use; the desktop era had begun.
By 1980, VisiCalc was turning bookkeepers into computer users. By 1983, WordStar, WordPerfect, and Microsoft Word were fighting for dominance in the exploding word-processing market. In 1984, Commodore sold more than a million C64 home computers. By the time the desktop era was in full swing, though, the next next thing was already starting.
Apple released the Macintosh in 1984, a desktop computer designed from the ground up around graphics rather than text. The Mac was influential right away, but it took ten years for the Graphical User Interface to take over the desktop market completely. And by that time the next next thing was starting again.
Netscape brought the web to the computer-using masses, went public in 1995, and pushed Microsoft and Apple to take the internet seriously as a technology and commercial opportunity. Again, it took about a decade for this new mode of computer use to become ubiquitous and fully capable as a platform. By 2005, internet adoption had reached a tipping point at well over 10 million regular users, the term "Web 2.0" was in common use, and powerful web services with APIs (like Google Maps) were beginning to appear.
This time, though, the next next thing has taken a little longer to come into focus. Or maybe it seems like that only because we're living through it now, rather than looking back on it. Either way, though, it's worth thinking about what's coming, not just incrementally but disruptively, transformationally, mind-expandingly.
Phones and tablets are part of this, but we're only just beginning to scratch the surface of what can be done on mobile devices. And mobile devices themselves are just part of the equation. So applications like Foursquare and Instagram are terrific (and important) but they're only the tip of the tip of the iceberg.
What we need is a new set of approaches that knits together the entire user experience across all our computers and screens. The GUI unified our desktop experience. The web browser standardized client-server application development for the computing environment of the early internet age. The next next thing will give us the platform for the multi-device era.
We don't have a name for this yet. "Post-PC" gets it partly right, in that we're expanding beyond the ideas that shaped the "personal computing" environment in a number of ways. But post-PC is usually thought of as a move simply to new form factors. That doesn't begin to capture the potential of what we can already build with the raw technology available to us today. And it definitely doesn't capture the qualitative shift to a whole new way of thinking about the computing experience.
The transitions to the desktop era and then the graphical user interface were driven largely by Moore's Law. But the arrival of the ubiquitous internet was a little different. "Web 2.0" and all the subsequent advances in content delivery, mobile experiences, and embedded applications were enabled by vastly cheaper bandwidth, which is partly driven by Moore's Law but is, conceptually and architecturally, a new ingredient.
In a similar way, the transition to more-than-post-PC will be driven by cheaper pixels. The fall in pixel cost has snuck up on us. We don't talk about it very much. But the fact that a forty-two-inch, two-megapixel LCD screen costs less than $300 means that there are television-sized displays everywhere now. And a smartphone in a billion pockets is only possible because three-inch, half-megapixel displays cost $20.
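A quick back-of-the-envelope calculation, using only the ballpark prices quoted above (illustrative figures, not precise market data), makes the point concrete:

```python
# Rough cost-per-pixel for the two displays mentioned above.
tv_cost, tv_pixels = 300, 2_000_000        # 42-inch, ~2 MP LCD
phone_cost, phone_pixels = 20, 500_000     # 3-inch, ~0.5 MP panel

print(f"TV:    ${tv_cost / tv_pixels * 1000:.3f} per thousand pixels")
print(f"Phone: ${phone_cost / phone_pixels * 1000:.3f} per thousand pixels")
# TV:    $0.150 per thousand pixels
# Phone: $0.040 per thousand pixels
```

At a few hundredths of a cent per pixel, plastering a wall with them stops being exotic.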
One way of thinking about the next next thing is that we have a new resource available: all these pixels. Now we have to figure out how to make the best use of them. Because pixels are getting cheaper, lots of them are showing up in public spaces of various kinds. We don't have interactivity just on our desktops or in our hands. We have it in our living rooms at home, in hallways and meeting spaces at work, on retail end-caps and every wall of trade-show booths, and plastered throughout airports and train stations.
We have to build some new top-level interaction techniques. These pixels that are suddenly everywhere aren't like the scarce, fixed pixels of yesterday. For me, the Minority Report interfaces are still the gold standard here.
I want to be able to walk up to any screen in the world and point at it. And I want, in pointing, to actually reach through that screen into the network to get at all the content, all the programs, and all the communications channels I care about. And I want all the screens in my world (mobile or fixed, small or big, owned by me or just something I'm near) to be part of this networked ecosystem.
So that dream is about the interface, but also a lot more. We're going to have to change some fundamental assumptions about what we're doing when we write applications (and "apps").
Making the most of the GUI required that we learn how to build new user experiences. We couldn't just re-skin text-mode programs. Similarly, both web and mobile applications are most interesting when they leverage the "N squared" strengths of the internet: social, crowd-sourced, service-backed, able to provide both centralized and decentralized building blocks.
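"N squared" is a nod to Metcalfe's Law, the rule of thumb that a network's value grows roughly with the square of its participants, because each of n users can connect to n - 1 others. A tiny sketch of that intuition (purely illustrative):

```python
# Metcalfe's Law intuition: with n users, the number of possible
# pairwise connections grows roughly as n squared.
def possible_connections(n: int) -> int:
    return n * (n - 1) // 2

for users in (10, 1_000, 1_000_000):
    print(f"{users:>9,} users -> {possible_connections(users):,} possible pairs")
# Ten users give 45 pairs; a million users give ~500 billion.
```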
Our next next applications have to make similar leaps. We have to make a transition from a mindset that applications are built around one person, one screen, and one task, to a new world view that's multi-user, multi-screen, and multi-device. We're starting to see little glimmers of this.
More and more applications are using the combined infrastructure of the network, the cloud and the device to synchronize interactive state and content automatically. Users are taking the "two screen" experience into their own hands, mashing up broadcast consumption and social interaction in ways that are forcing big media companies to adjust. I can push the display from (some of my) mobile devices across to (some of my) big screens. And my phone can selectively tell other people and computational agents where I am and what information I might like to see pushed from their devices and their screens.
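As a minimal sketch of that synchronize-everywhere pattern, here is a toy shared-state channel. The class and method names are hypothetical, not any particular product's API, and a real system would add networking, authentication, persistence, and conflict resolution:

```python
# Toy multi-device state sync: every device that joins a "channel"
# sees the same state, and any update fans out to all peers.
# All names here are illustrative, not a real product's API.
from typing import Any, Callable, Dict, List

class SharedStateChannel:
    """One shared state dict, mirrored to every subscribed device."""

    def __init__(self) -> None:
        self.state: Dict[str, Any] = {}
        self.subscribers: List[Callable[[Dict[str, Any]], None]] = []

    def join(self, on_change: Callable[[Dict[str, Any]], None]) -> None:
        # A new device subscribes and immediately receives current state.
        self.subscribers.append(on_change)
        on_change(self.state)

    def update(self, key: str, value: Any) -> None:
        # Any device can write; the change is pushed to every screen.
        self.state[key] = value
        for notify in self.subscribers:
            notify(self.state)

# A TV app joins the channel; a phone then pushes playback position.
channel = SharedStateChannel()
channel.join(lambda s: print("TV sees:", s))
channel.update("playback_seconds", 312)
```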
But, again, we're just starting down this road. We're going to need a lot of new stuff to get to really interesting new capabilities. We'll need new platforms and tools, new standards, new hardware, new partnerships and business models, new coopetition and new evangelism. We're going to have to try a lot of things that aren't quite right to figure out what is right.
In other words, the next next thing is the perfect thing for startups to work on.
- - -
The interfaces in the film Minority Report were designed by John Underkoffler, based on ten years of work he'd done at the MIT Media Lab. In 2006, John and I founded a company, Oblong Industries, to turn John's vision of seamlessly interactive, integrated computing into commercial reality. So I'm a true believer. Oblong's customers and partners include Boeing, GE, SAP, and other early adopters of game-changing technology.
Also, a big tip-o-the-hat to Michael Lewis' "The New New Thing," which is both a great book and a great phrase. I'm humbly borrowing and repurposing his title. Go read the book.