9/01/2010

the mobile computing paradigm is already here

James Kwak contributes to a great econ and finance blog called the Baseline Scenario headlined by former IMF economist Simon Johnson. Johnson is one of the people with whom I would most closely associate my own personal perspectives on the ongoing economic, financial, and social crises confronting us, which I like to only half-jestingly refer to as the Republican Recession. Kwak specifically is currently a law student, formerly a consultant for McKinsey. I don't know as much about Reggie Middleton, but he runs the intriguing investment website boombustblog.com with the tagline ‘micro views of macro markets’.

But this post isn’t about economics, law, and investing per se. It’s a bit closer to my own formal academic training: strategic decision-making by large organizations and the interplay of technology and business. (Warning: it’s also, like me, long-winded.) Kwak has written two posts on the Future of Personal Computing (Part 1 and Part 2), while Middleton has assembled a whole series of posts examining the market relevance of what he terms the paradigm shift to ultra mobile computing. They’re intriguing reads and certainly worth pondering if you possess any interest in these matters.

What I would like to do, however, is offer a dissent, or at least an alternative reading of the tea leaves to these two particular perspectives. First, I think Apple’s strategic choices are better than they allow, and second, I think the broader course of computing is a little different from the conceptual framework presented. In short, I argue Apple’s model delivers significant value for consumers, while I find ‘cloud’ computing to be less beneficial than Kwak describes and less disruptive than Middleton describes. I agree that we are witnessing a paradigm shift in the current phase of the computer revolution, but their analysis seems to employ frameworks applicable to the older paradigms. I argue that Apple gets the shift better than perhaps any other company on the planet.

Here’s how I would lay out the digital computing revolution, or what I'll shorten to the computer revolution (we've been computing for thousands of years, of course, and there was a time when analog computing seemed to be the wave of the future). While no model is precise, I think this broad overview helps us see the major strands and developments over time. The computer revolution, I posit, is one of the most important and disruptive transitions in human history. There have been periods where a certain paradigm has been dominant, and as computing has evolved, a shift occurs which emphasizes a different aspect of computing. However, I would suggest that within the computer revolution, the old paradigm isn’t displaced. Rather, the new builds upon and complements what has come before. While the newer paradigms feel, well, newer, each successive shift is actually less revolutionary than the one before it, analogous to the communication revolution that saw paradigm shifts in the development of language, then writing, then the printing press, and then the internet. The internet is awesome, but if someone were trying to theorize something as revolutionary as, say, language itself, that would require something much grander, like figuring out how to communicate in an entirely different way, such as by touch, or movement, or telepathy. The intertubes aren't anywhere close to replacing Real Life.

The first paradigm I would describe as mainframe computing, characterized by the initial developments in the computing revolution. This is where you input your information, go grab lunch, come back, and see what happened (okay, yes, this is way oversimplifying the matter). The second paradigm would be personal computing, characterized by getting computers into homes and offices – and getting people to use them. The third paradigm I would call network computing. Now that we have all these computers big and small, let’s get them all talking to each other. The fourth paradigm shift, I think, is mobile computing. Here the question is how to access these networks of computers anytime, anywhere. I think there are three key components: the device capabilities (hardware and software), the back-end e-commerce support, and 'cloud' computing.

The title of this post comes from my sense that what’s missing from the arguments by Kwak and Middleton (among others presenting similar thoughts) is the recognition that we’re already well into the age of mobile computing. In other words, we’re not approaching the outset of a new paradigm. We’re already well into its development and implementation. Then-Apple CEO John Sculley coined the term Personal Digital Assistant (PDA) in 1992, about the same time IBM was developing Simon, the first smartphone. By the late 1990s, we had the Newton MessagePad and the Palm Pilot and the Nokia Communicator. Apple’s concept of the ‘digital hub’ is almost a decade old now. BlackBerry has become a household name. Online gaming has been incorporated into everything from Microsoft’s Xbox Live to Blizzard Entertainment’s Battle.net. SharePoint services in Office are so widespread now that even non-profits are using the technology for sharing documents. From Dell to HP to Amazon to Apple to PayPal to eBay to a host of other companies, e-commerce is here. Netflix streams movies. OnStar brings Big Brother to life, and more generally, computers are in everything from cars to clocks. The iPhone is now three years old – the blending of the MessagePad, Pilot, BlackBerry, Simon, and other precursors in a device that is now copied worldwide. Social networking has been around for a decade, and even Facebook is now over six years old. And Google, well, it's been a long time since the early days of the search engine. When someone actually posts a comment on this Google blog, Google Mail sends my phone an email message with the comment. It's even been five years since Google bought Android.

In short, the main components of mobile computing, far from being in their infancy, are well on their way to maturation: the front-end hardware and software, the back-end e-commerce, and the nebulous concept of ‘cloud’ computing. Look at the new Android smartphones: a BlackBerry or iPhone user would instantly recognize them as smartphones. More interestingly, a Newton MessagePad or Palm Pilot user would recognize them. I predict that the phones and tablets on sale this Christmas will be recognizable as mobile computing a decade from now. Sure, like Apple’s QuickTake digital camera, or Microsoft Windows 95, or Google Search from 1999, they’ll look clunky and slow and almost embarrassingly outdated. But you’ll know what they are.

Let’s dig into commentary specific to Apple. This is from Kwak’s Part 2:

“ …But I think the important point is that they are promoting a model of personal computing where most of the developers write for the iPhone OS, and if you want to use their applications you have to buy an Apple hardware product. Yes, Apple makes great hardware, but I think consumers will do better with an open model; if you look at smartphones, it’s already the case that many phones running Android — Google’s open-source operating system — are better than the iPhone at many different things. (The iPhone may still be the best overall, but there are many good reasons why you might pick a particular Android phone over the iPhone.) And Android has already passed the iPhone as the number two smartphone (measured by new sales), behind the BlackBerry... “

Kwak’s position is that Apple wants tight control over the hardware and software, and then, significantly, that this desire conflicts with the best interests of Apple’s consumers. The former is absolutely true. It’s the latter part of the assertion that Kwak fails to substantiate. It’s stated as a premise upon which to base further analysis rather than as a claim which must be proven in its own right before any other conclusions may be drawn from it.

And here I think Kwak applies a more legalistic approach to what is really a business question, or to say it differently, he does what a lot of business analysts do. They forget to make the customer #1. They assume models and logic and processes can replace or circumvent or take precedence over the preferences of consumers. But at the end of the day, from drugs to diapers to digital devices, one ignores the actual wishes of the consumer at one’s own peril.

Apple’s model doesn’t restrict consumer choice. It embraces it. Choosing not to participate in the Apple experience is itself one of the choices Apple presents to consumers. If you prefer a BlackBerry or a Droid or a Pre or an Evo, well, go buy one (interesting side note: just in the time I’ve been drafting this piece, HP has bought Palm and the Pre isn’t even on sale any longer – a great illustration of the pace of change in mobile computing, and also of how the ‘personal computer’ makers are still quite relevant and profitable even in the age of the internet and smartphones and cloud computing). Same for a Zune MP3 player or a Dell laptop or an HP desktop. That’s competition, choice, consumer freedom, whatever you like to call it. Most citizens of our globe are not Apple customers. More philosophically, if you don’t want a fancy smartphone, or don’t want a cell phone at all, that is also your choice as a consumer (although the latter option in particular is increasingly open only to those who unwittingly or purposefully shun connecting with modern society, which presents its own potential line of philosophizing about what choice really means).

Rather, what Apple’s model does is impose restrictions within the Apple ecosystem. If you think Google’s online calendar application is superior to Apple’s iCal, well, then use it. That’s choice. That’s openness. The ability to choose different ways of interacting with technology. The fact that one of those ‘ways’ – tight restrictions upon and integration of hardware and software – is, in a limited sense, not open, doesn’t in any way restrict the consumer’s ability to choose among the various ways of interacting with technology. The closed system is itself one of the choices, and it’s precisely the Apple customers who value the simplicity of that system who make Apple profitable. Consumers like me are willing to pay a finite but economically real premium to have ‘applications’ on our ‘computers’ that mostly ‘just work’. At certain times in Apple’s history there have been fewer of us, and at other times more of us, and Apple’s effectiveness at delivering on the strategy has varied over time, but that in a nutshell is why Apple is a multi-billion dollar transnational corporation. They offer their customers something for which their customers are willing to pay dollars. And yen, euros, and other currencies, too. For Kwak to perform a more rigorous analysis of Apple’s strategic stance specifically, he’ll need to focus sharply on why Apple customers are Apple customers, not on why non-Apple customers aren’t Apple customers.

Similarly, Middleton seems very interested in why Google’s Android is growing in smartphone market share. It’s a little tricky evaluating the nuances of his position, because he doesn’t really publish his full analysis. His pieces are mainly teasers to get people to subscribe to his investing research and analysis, and you can see that tone in the headlines, like “RIM Smart Phone Market Share, RIP?” and “Empirical Evidence of Android Eating Apple!”. More than a passing resemblance to entries worthy of the Apple Death Knell, in my opinion. However, because Middleton is primarily pushing his subscription model, you have to take the tone with a grain of salt: of course he’s trying to be controversial and exaggerate trends at the margins in order to attract eyeballs. Recognizing various cognitive biases is important for both investing and strategic analysis.

In the first piece in the series, Middleton lays out his overriding thesis:

“While everybody is celebrating the Ipad and the IPhone 4, pushing Apple stock through the plasmoshere (I actually like Apple as a company, a literal marketing market – what Microsoft use to be), Google is quietly creating a technological, business model and strategic advantage wherein there will be no way in hell Apple will be able to keep up if things continue to progress at the current pace. In essence, Apple will be relying more and more on marketing prowess and less on capability and competitive technological innovation to maintain margin and revenue growth. That is a dangerous place to be. Simply observe the speed upon which the Google/Android/HTC ecosystem has developed and the power, flexibility and usability of this early product after just two years on the market.”

The title tells you where he’s going: “There Is Another Paradigm Shift Coming in Technology and Media: Apple, Microsoft and Google Know its Winner Takes All”. I would suggest, however, that this is ‘old-paradigm’ thinking. Middleton is claiming there’s a paradigm shift and then applying analysis from the old paradigm. Two key points stand out to me at this juncture. First, there are more companies involved than Apple, Microsoft, and Google. Second, this isn’t going to be winner-take-all.

Let me elaborate on this second point. The Wintel and Office ecosystems at the heart of the personal computing phase are notable precisely for their rarity; there are extremely few instances of natural monopolies (or duopolies) existing in the private sphere where government actually helps the monopoly rather than trying to create competition. This is due to the way patent and copyright law has intersected with the network effects of IT products. Microsoft’s methods for communicating in its Windows OS and Office productivity suite have been recognized by the government as intellectual property owned by Microsoft. This is a once-in-a-billion arrangement. It’s like the power company owning the electrical standard in your outlet, or the phone company owning the concept of dialing numbers, or the water company owning the idea of a copper pipe, or GAP owning the use of cotton in clothing.

The paradigm shift that is mobile computing isn’t like that. The fundamental network effect of a smartphone – can it call other phones? – isn’t owned by a private company. AT&T users can call Verizon users. BlackBerry users can call iPhone users. An iPad or Kindle can access the data parts of cellular networks without even having to offer voice capability at all. This is critical to understanding the paradigm shift: smartphones and cloud computing and tablets and e-commerce and so forth lend themselves more to oligopoly than monopoly (which itself builds upon the network computing phase that rendered the Wintel and Office dynasties less suffocating for established firms like Apple and HP and new entrants like Google and Amazon). Multiple firms will be creating the IT landscape of the future, and multiple firms will be profiting from it. In fact, I would argue, multiple firms have already created much of the landscape, and multiple firms are already profiting from it.

Now as for the cloud computing component of mobile computing specifically, there is great potential in distributed networks, in information flowing freely among different computers, or more generally, in stuff being available anywhere and everywhere. In the US, we already have many ‘cloud’ systems. The electrical wiring my laptop uses to access the power grid from a condo complex in St. Louis works just as well to access the power grid from a hotel in Minnesota. That is no small accomplishment, both of engineering prowess and manufacturing capacity and of effective government involvement in describing and enforcing the rules of the game for markets to follow. But there are still reasons that I want to take ‘my’ computer with me on the trip rather than using machines supplied by the airlines, hotels, restaurants, car rental places, and other vendors I encounter along the way. There’s also a reason that laptops, cell phones, and other mobile devices have batteries to complement the ‘cloud’ system that is the electric grid – sometimes the cloud just isn’t accessible.

Kwak suggests:

“ …The obvious alternative is Google, which has its own operating systems (Android and Chrome), but doesn’t particularly care if you use them or not — as long as you are using the Internet, where they sell their ads. I’d like to see an Android tablet with a real browser that can handle anything on the Web, and then I simply wouldn’t need most of the apps I have on my iPad (Calendar, Contacts, Notes, Maps, AccuWeather, Netflix, NPR, Bloomberg, etc.). Now, Google isn’t pursuing an open strategy because it’s nice; they’re doing it because they want everyone to go to the Internet to see their ads. But ultimately I think that’s a better model for consumers, because you avoid lock-in on the development level (developers don’t have to commit to the iPhone OS) and on the hardware level (anyone can build an Android device, which is already providing more innovation and choice when it comes to smartphones)… “

But this falls apart on several levels. Google, too, is a multi-billion dollar transnational corporation. They are this size because, like Apple, they deliver products people like. But unlike Apple, their users and their customers are two vastly different groups of people. Google’s customers are the people paying for advertising. Google sells that advertising by attracting users, from Google Search to Google Mail to Google Blogger to Google YouTube. At a fundamental level, it is far from obvious to me why a model which extracts value from users for the benefit of customers (advertisers) is inherently best from the user’s perspective. At a business level, Google is a corporate entity, overseen by a private, non-governmental Board of Directors. It is not obvious to me why Google inherently represents a different model of governance than most other transnational corporations. [Note: generally speaking, I ascribe higher value overall to the leadership and governance among technology companies than most other industries, like financial firms, energy, agribusiness, healthcare, telecommunications, and so forth.]

And most relevant for this analysis, I don’t follow Kwak’s reasoning at a technical level. The cloud only works when you can access it. You have to have some sort of standard terminal – just like a plug for an electric outlet – to ‘plug in’ or ‘dial in’ or ‘remote in’ or whichever is your favorite metaphor for what exactly we’re doing. The amount of stuff to download has expanded faster than the speeds at which it can be downloaded (just look at HD video for example). Now, Kwak has a very reasonable defense: offline access. But here’s the thing. The more computing is designed to seamlessly integrate ‘online’ and ‘offline’ information production, the more it resembles…Apple’s model! That quite literally describes the relationship between, say, an iPod, a PC, and iTunes, or for something Mac-specific, a MacBook, Mail, and Gmail.

You have to have ‘apps’ running on a ‘computer’ that is ‘local’ and in ‘your’ possession. A lot of us like that model. I for one love using Blogger and Gmail and Google Search. I use Facebook heavily, too, and Flickr. But I like (most of) my games and my contacts and my pictures and my music and so forth with me, where I am, rather than sitting on another computer somewhere else that I may or may not be able to access, that may or may not be part of a commercial advertising agreement with a third party who cares not in the least what happens to my data. Yes, some of my personal data is on Facebook. But it’s the data I want to be publicly available. Yes, some of my finances are online, from checking to retirement accounts. But I expect that those financial firms will keep that data private from third parties. Yes, Blogger lets you type directly through the browser. But a post like this I’m writing primarily in a stand-alone word processing application running on my local computer.

Is that more expensive than ad-supported cloud computing? Yep.

But it costs a heckuva lot less today than that first Apple IIe I used over a quarter century ago. That’s value creation. Mainframes will of course remain valuable, too. And for the foreseeable future, over the next couple decades, personal computers aren’t going anywhere – except more and more of the places we like to take them. The cloud isn’t displacing servers and PCs; it’s built on top of them, rendering them even more important. The Apple model, where software and hardware are tightly integrated around a core of personal computing, with a periphery of devices accessing networks and extending mobility and ease of use around that core, isn’t going anywhere. And Apple has bet the corporate farm not just on personal computing generally, but specifically on the market segments within personal computing that value tightly linked hardware and software devices that appear to the end user to simply work seamlessly together.

Now don’t get me wrong, I’m not an app-fiend on the iPhone. I bought one to use about half a dozen apps, not hundreds. My total purchases at the iTunes App Store over the almost three years I’ve had my iPhone are less than $100, or less than three months’ worth of DSL/cable data connectivity, thanks to the real market failures in our system, the telecommunications companies delivering internet access. But I don’t see the web browser replacing those other categories of applications any time soon. Rather, I’m on the ‘convergence’ bandwagon. Stand-alone applications are simply too valuable to people like me, for issues ranging from ease of use to features to data privacy to frustration with the telcos. What we want to be able to do with our computers is to continue realizing convergence with other aspects of our lives, with the computer as hub.

The losers in the shift to mobile computing aren’t going to be the likes of Microsoft, Google, or Apple. The real losers are the Sony Walkman, the newspaper classified ad, the manufacturers of VHS tapes, the entertainment companies that would like the money now going to my monthly iPhone data plan, and so forth. The computer revolution’s disruptiveness is seen particularly well in this light. The major IT firms are competing with non-IT products and services as much as they’re competing with each other. Individual firms can always go the way of Xerox, from iconic company to marginal bit player. RIM is perhaps most at risk of this fate at the moment. But as for Apple, I think they continue as a defining force in the computer revolution for years to come. As Windows 7 is basically the GUI personal computing experience known as a Mac, and as Android 2 is basically the multi-touch combined PDA/cell phone known as an iPhone, I think whatever competes with the iPad will basically, well, be an iPad. And that’s the other part that’s exciting for Apple customers. Apple continues to show a knack for popularizing what works in computing. The profits may not always accrue to Apple for this, but enough do to keep Apple at the cutting edge for the foreseeable future.

This is particularly true of mobile computing, where Apple’s iPhone is quite literally what the competition looks like, the iPad is basically the Tiger Woods of the past decade (as in, Tiger vs. the Field), and for Apple shareholders, the iPhone has garnered a simply unfathomable share of total industry profit in a sector where there were already established, major transnational corporate players. If Apple’s share of the global smartphone market settles around 10% long-term, that’s a phenomenal return on the Newton MessagePad of a decade and a half ago. Combine that with the Office-like grip of the iPod, and the continuing business of core personal computing, and Apple has all the resources it needs to stay relevant for whatever comes down the pike too far out in the future to predict today. Apple has figured out how to commoditize the underlying bits of computing, buying lots of stuff in huge quantities. The markup, the value-added, is in assembling a simple package of these components for consumers. Apple's entire core product line basically consists of three desktop computers (Mac mini, iMac, Mac Pro), three laptops (MacBook, MacBook Pro, MacBook Air), three music players (iPod Shuffle, iPod Nano, iPod Classic), and three iOS PDAs (iPod Touch, iPhone, iPad). This allows Apple to buy the underlying components by the millions while distinguishing their commoditized bits from the same commoditized bits used by other companies. There's a segment of the market that appreciates Apple doing this kind of work and giving them a manageable number of options and choices. Apple plays around at the margins, such as with Apple TV and MobileMe, to see how much it wants to incorporate things like cloud computing into its core products. Video streaming, for example, was part of that original Steve Jobs keynote on the digital lifestyle.

As regards Google, they’re actually in a more precarious state than I think Kwak, Middleton, and others allow. The shift from brilliant behemoth to blundering invader of Russia in winter is pretty subtle. Perhaps Google can compete on multiple fronts with upstarts and established players galore while maintaining a business model that extracts value from the company’s users for use by the company’s customers. Or perhaps Google comes to appreciate that companies like Microsoft and Apple and RIM and Nokia and so forth are more natural business partners than competitors. Does Google want to keep supporting Android and Chrome if it starts making firms think twice about how much data Google collects via its prime moneymaker, Google Search? Can Google actually support multiple OSes over time? That, I think, becomes a very interesting strategic decision, a ‘what if’ scenario of more risk than anything Apple (or Microsoft) currently faces. What if, say, Microsoft, Apple, RIM, Nokia, Dell, and HP got together to make Bing a true competitor to Google Search? What if Motorola and HTC and LG and Samsung didn’t like being dependent upon Google for updates and support? What if Microsoft renders Exchange synchronization and other features less workable on Android devices? What if Google’s efforts to monetize its non-search properties for advertisers simply drive users somewhere else, undermining the one product Google has successfully monetized? What if the negative customer attitudes toward the telcos rub off on Google's brand as they work more closely together?

Let me emphasize: my proposition is that all the major IT firms (Microsoft, Apple, Google, and many more) stand to benefit from the mobile computing paradigm shift in the ongoing computer revolution. I throw out the Google discussion here to push back against the suggestion I see from some people that either the ‘old guard’ just doesn’t get it, or that somehow the new players aren’t susceptible to grand business risks of their own.

Of course, the best judge is the passage of time. If Android-powered smartphones and Chrome-powered tablets/netbooks/whatever leapfrog so far beyond the iPhone and iPad and MacBook as to leave them unrecognizable, then hey, maybe I’ll help out my hometown carrier Sprint by picking up the 2015 version of the HTC Evo and whatever iPad-killer actually, you know, ships.
