Logic can be a powerful thing.
I remember first stumbling across symbolic and predicate logic as a teenager, and being astonished that I'd discovered a sure-fire blueprint for winning any argument -- construct correct premises, put them into a valid logical structure, and the conclusion must be true. It was a liberating, eye-opening experience. (One that should have told me that law, not theater or technology, was my best career destination, but I digress.)
It took me a while to realize that logic isn't actually foolproof.
It's entirely possible to take false premises, put them into a logically valid argument structure, and end up with a true conclusion anyway. One example:
- The moon is made of cheese.
- No cheese existed on earth prior to the Apollo 11 mission.
- Therefore, the Apollo 11 mission went to the moon.
This is a highly simplified version of a truly rigorous logical argument, but since the premises are nonsense, the argument doesn't need to be rigorous to make the point -- logic is a powerful tool when used properly, but it's also only as good as your facts.
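The validity-versus-truth point can even be checked mechanically. Here's a small Python sketch (mine, not part of any formal treatment) that enumerates every truth assignment for two premises and a conclusion, and shows that the material implication "premises imply conclusion" holds vacuously whenever a premise is false -- regardless of whether the conclusion happens to be true:

```python
from itertools import product

def implication_holds(p1: bool, p2: bool, c: bool) -> bool:
    """Material implication (p1 AND p2) -> c.

    False only when both premises are true and the conclusion is false.
    """
    return (not (p1 and p2)) or c

# Enumerate every truth assignment for the premises and conclusion.
for p1, p2, c in product([True, False], repeat=3):
    status = "holds" if implication_holds(p1, p2, c) else "fails"
    print(f"p1={p1!s:5} p2={p2!s:5} c={c!s:5} -> {status}")

# With any false premise, the implication holds vacuously -- so a
# 'valid' argument built on nonsense premises tells you nothing
# about whether its conclusion is actually true.
assert all(implication_holds(p1, p2, c)
           for p1, p2, c in product([True, False], repeat=3)
           if not (p1 and p2))
```

The only row where the implication fails is the one with true premises and a false conclusion -- which is exactly why true premises, not just valid structure, are what make an argument trustworthy.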
This rumination on logic was inspired by reading a CNET News.com piece by Matt Asay, who comes by his solid conclusion through some seriously messed-up premises:
- To overcome an incumbent's advantage in the marketplace, a competitor must pursue a 'disruptive' strategy; i.e., do something the leader does not do, or at least do something well that the leader does not do well.
- Nokia is attempting to upset Apple in the mobile space by ditching its open-sourced Symbian mobile OS in favor of an alliance with Microsoft, which has been struggling in the mobile space for longer than Apple has been in it.
- Motorola is attempting to upset Apple in the mobile space by adopting Google's open-source 'Android' mobile OS, but Google isn't committing the 'resources' necessary to make Android a compelling alternative to Apple's iPhone mobile OS.
- Therefore, Nokia and Motorola will fail to overcome Apple's leadership in the mobile marketplace.
I have no problem whatsoever with the conclusion, but each of the premises is quite silly, and can be corrected simply by checking the work of other, sharper technology thinkers.
The first premise looks like the most reasonable -- differentiation in the marketplace is a long-standing method to gain market- and mind-share, and in and of itself isn't necessarily a bad idea. The problem comes in when trying to apply this premise to Apple's position in the mobile marketplace. Asay writes:
Few companies or products challenge an incumbent, at least not on its own turf. Disruption is required to displace an incumbent, following Clayton Christensen's thinking in "The Innovator's Dilemma."
All of which makes me doubt Google's efforts to beat Apple in smartphones, and suggests Nokia and Motorola aren't going to fare much better. They simply aren't disruptive enough.
For starters, under some interpretations it's arguable that Apple isn't even a player in the mobile phone marketplace -- estimates put Apple's total market share for mobile phones at about 1.3%. Nokia, the 800-pound gorilla by this measure, should be 'dominating' the global marketplace, since it is estimated to ship nearly 40% of all phones sold worldwide. (Same link.)
Interestingly enough, though, Apple is a serious player by another measure -- percentage of market profits. Nokia, the big gorilla, earns nearly 60% of the global profits from the cellular marketplace, while Apple, with less than 1/25 of Nokia's unit sales, earns about 20%. (We'll get to Motorola later, but for the purpose of this comparison, we'll note that Motorola isn't even profitable.)
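To make that comparison concrete, here's a back-of-the-envelope Python sketch (mine, not Asay's) using only the estimates quoted above -- unit share versus profit share -- to compute how much profit each vendor captures per point of unit share:

```python
# Estimates quoted above (circa 2009): percent of global handset
# units shipped and percent of industry profits captured.
players = {
    "Nokia": {"unit_share": 40.0, "profit_share": 60.0},
    "Apple": {"unit_share": 1.3, "profit_share": 20.0},
}

# Profit captured per point of unit share -- a rough proxy for how
# much money a vendor extracts from each phone it ships.
for name, shares in players.items():
    ratio = shares["profit_share"] / shares["unit_share"]
    print(f"{name}: {ratio:.1f} profit points per unit-share point")
```

By this crude measure, Apple captures roughly ten times more profit per point of unit share than Nokia -- which is the whole argument in one number: unit share and market power are very different things.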
Asay seems to think that it's Nokia's job to do something 'disruptive' to cut into Apple's market position, when in fact precisely the opposite is happening -- Apple's iPhone, combined with its App Store, has sliced off 20% of the profits in the global mobile market in less than three years. Apple is the one 'disrupting' Nokia.
The reason for this error is a fundamental misunderstanding of what makes the technology business run: a leftover artifact from the first days of the personal computer.
In the late 1970s and early 1980s, the personal computer was primarily viewed as a toy: one of the most popular models, the Commodore 64, was far better known as a gaming device, competing with the Atari 2600 and Mattel Intellivision game consoles in the consumer space rather than against the Apple II in the education space. Only futurists saw a role for computers in business.
That changed in 1981, when IBM introduced the first IBM PC. (As if to prove that even then it was more than just a technology company, Apple took out a full-page ad in the Wall Street Journal to explain just what IBM's decision might mean, and found itself both vindicated and buried by its foresight.) Slowly, the PC made inroads into the business world, and people's personal computer decisions became less driven by entertainment and more driven by compatibility with their work PC. (In those days, when you needed to work on a presentation at home, you'd copy the file to a 5.25" floppy disk and take it with you -- a habit that now seems both hopelessly backward and suicidally insecure.)
Microsoft helped drive the overall strategy of personal computing in those days, working with its hardware vendors to maintain a two-tiered system: higher-end computers for businesses, which could afford the expense, and lower-end machines for consumers, who largely couldn't pay what businesses would. Though it was possible to sell a high-end machine into the consumer market, Microsoft's licensing model paid it roughly the same money regardless of where a machine running its OS ended up, so Microsoft encouraged an environment that came to view 'market share' in terms of units shipped. Whoever sold the most computers was the winner in Microsoft's eyes, and thus in the eyes of the Microsoft-adoring tech press.
This model may have worked fine in 1986, when most consumers didn't ask much of a computer except that it run the same software they ran at work -- software they had no choice over anyway. Then, during the 1990s, Microsoft's operating system dominance concealed a reality of the computer hardware market: computer hardware behaves much like any other physical product. In effect, the computer hardware market was much more like the automobile market than anyone (except Apple fans) believed.
Consider the 2008 global automotive market. The largest player, with 15% global market share, is General Motors -- which just went through a government-shepherded bankruptcy proceeding and is trying desperately to remain profitable. Meanwhile, BMW controls just 2% of the global market, yet its share of global automotive profits rivals, if not exceeds, Apple's share of mobile phone profits. (That's not to say BMW isn't being prudent in the face of a declining global economy -- it has announced that it's exiting Formula 1 racing to save money.)
Now granted, until a few years ago the global mobile phone market still slavishly followed the premise that 'more units = more good': carriers would pay to subsidize particular 'exclusive' phone models, differentiated from the exclusive models of other carriers, but all offered on the same terms as Gillette famously offered razors -- sell the holder for little or nothing, and make your profit on the blades. In the carriers' case, the profit came from the cellular service, not the hardware. In this environment, it makes sense for hardware to become commodity, lowest-common-denominator stuff. (The technical economic term for this system is vendor lock-in.)
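The razor-and-blades arithmetic is easy to sketch. The numbers below are purely hypothetical -- the article cites no actual subsidy figures -- but they show why a carrier happily loses money on commodity handsets:

```python
# All figures hypothetical, for illustration only.
handset_cost = 400.0     # what the carrier pays the manufacturer
handset_price = 100.0    # what the subscriber pays up front
subsidy = handset_cost - handset_price

service_margin_per_month = 25.0  # carrier profit on service, monthly
contract_months = 24             # typical contract term

service_profit = service_margin_per_month * contract_months
net_profit = service_profit - subsidy

print(f"Subsidy eaten on the 'razor' (handset): ${subsidy:.0f}")
print(f"Profit made on the 'blades' (service):  ${service_profit:.0f}")
print(f"Net per subscriber over the contract:   ${net_profit:.0f}")
```

Under this model, the handset is just a customer-acquisition cost: the cheaper and more interchangeable the hardware, the better, which is exactly the commodity dynamic described above.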
Yet that premise was under siege even before Apple's entry into the mobile market, as RIM copied IBM's 1981 personal computer strategy with the first BlackBerry 'smartphone'. (From the market share link above, you can see that RIM is also making huge inroads into Nokia's profits at little cost -- RIM has only about 2% of global phone shipments but about 18% of the profits, making it similar to, though not quite as efficient as, Apple.)
You'd think that Asay's very point in his first premise -- that Apple is the leader, despite having very little share of total units shipped -- would convince him that units shipped don't really matter when it comes to market position. Yet in his second premise, he falls right back into that old chestnut, talking not about Nokia's plans for mobile phones but about its plans for Windows netbooks:
Nokia, for its part, made a big gamble open-sourcing Symbian after years of nurturing it as proprietary software to run mobile devices. The company has now discarded Symbian for its foray into Netbooks by partnering with Microsoft, a move that exacerbates its weak-kneed decision to bolster its mobile strategy with Microsoft Office. Nokia's approach leaves pundits like Joel West wondering "how Nokia will have an advantage on scale, innovation, features, branding or distribution over existing netbook makers," not to mention traditional mobile and laptop makers.
The problem with looking at netbooks as a bellwether for the mobile phone market is that, as we've just described above, 'more units = more good' only works if that's the model your competitors are all using. PC makers are rushing to come up with competing netbooks because even their mid-range consumer hardware has such poor margins that the market share of a few million netbooks might mean the difference between profitability and bankruptcy. Apple, as we'll discuss in more detail below, isn't playing that game, and thus doesn't feel compelled to join the race for the bottom.
But 'more units = more good' isn't the only Microsoft-inspired market principle that isn't really true anymore, despite tech writers' insistence that the world hasn't changed since 1986. There's another gulf between what used to be and what is today, and Microsoft is on the wrong side of that gulf as well:
Microsoft compounds the error by playing up its more expensive application for Windows Marketplace for Mobile, a strategy doomed to fail. Microsoft is playing to the developers' wish to make more money per customer, but if those customers prefer the iPhone, who cares how much Microsoft lets developers charge?
Again, Asay touches the heart of the matter without really comprehending it. It's the same kind of impulse that makes folks like Joe Hewitt, developer of Facebook 3.0 for iPhone, complain about the Apple App Store process and how it's hostile to developers, without really understanding who the App Store and its policies are directed toward.
Again, flash back to 1986. Most users wanted the same software that ran on their machines at work to run on their machines at home. Most of these users didn't really understand how software worked, and wouldn't have cared if they did, because, again, they weren't controlling their own purchasing decisions.
However, as computers have become more mainstream and more applications are consumer-based rather than business-based, consumers have become choosier about the applications they use. Sure, some users will always just launch Windows Paint to edit their photos, because that's what they're used to. But more and more, consumers are demanding options in both the quality and the features of the applications they run, and many of them are willing to pay for a superior product. That's where Apple has lived since Steve Jobs's return as CEO, and it informs nearly everything Apple does, from its operating system, to its retail stores, to the iPhone.
The iPhone App Store doesn't exist for developers -- it exists for consumers. One observer who gets this is Farhad Manjoo, the last regular technology writer for Salon.com (before they decided to subscribe to Om Malik's GigaOm Network for their tech content):
The platform's [Android's] openness is certainly a boon for developers. You can submit an app to the Android store and have it appear on customers' phones more or less immediately—the same process takes weeks or months on the iPhone. The trouble is, even though it's easy to develop apps for Android, there aren't many incentives to do so. The iPhone's got all those ravenous customers; it's worth waiting weeks to have your program in the App Store. Without great phones—and thus without a lot of customers—developers see little reason to bother coding up programs for yet another mobile app store.
Microsoft was notoriously developer-centric in the 1980s and 1990s because that was the reality of the computing business in those days -- if your platform didn't have the apps customers needed, they couldn't buy your platform. In 2009, though, there are already more apps than customers can ever use, doing more things than they have time to do. Now, more than ever, it's the people who hold the purse strings who are calling the tune, and Apple seems to be the only computer company that understands this.
Speaking of Android, Asay does talk about Android and Google, but again falls into the trap of thinking that the world of computing still works by 20th century rules:
Google, for its part, has attempted to disrupt Apple's iPhone in its apparent area of weakness: its closed nature. Google open-sourced the Android platform and invited the world of third-party developers to flock to it.
They never came.
As Slate's Farhoo [sic] Manjoo writes, "Even though it's far friendlier to developers, Android has failed to attract anywhere near the number of apps now clogging the iPhone." Android may be open, but it's not cool, and "cool" is where customers and, hence, developers are.
Asay seems to think that developers aren't writing for Android because Android somehow isn't 'cool' enough, instead of making the obvious leap -- which Manjoo makes, and Asay still misses -- that developers aren't writing for Android because they can't make any money writing for Android. We've covered this point before with respect to Linux, but it's true for all 'open source' platforms: people who expect to get their OS for free will also expect to get their applications for that OS for free, and if they don't want to pay, there's not much money to be made there.
The solution, according to Asay? Do what Microsoft used to do in the face of competition in the '90s and outspend it:
Which leaves me with my original question: if a vendor finds itself playing catch up, should it even bother running the race? In response I'd suggest that unless a vendor is willing to commit significant resources to a disruptive strategy, it might as well give up.
Of the companies mentioned above, only Google has a disruptive strategy, but it isn't spending nearly enough resources to tackle Apple's iPhone. Until it does, it will lose, open source or not.
So to recap the accurate syllogism:
- Apple (and RIM) are disrupting the traditional mobile phone market and taking control of the nascent 'smartphone' market by catering to users rather than developers and focusing on the high-end, high-margin cultural pace-setters first, then bringing in the larger consumer market later.
- Nokia is responding to this assault on their market leadership by attempting to turn back the clock to 1986, ignoring significant changes in both the computing and cellular marketplaces since that time.
- Motorola is trying to stay in the game by turning to a free OS alternative, which should lower costs and create buzz by doing something Apple isn't doing, but which won't solve its primary problem of simply not bringing in enough money to cover its costs.
- Therefore, Nokia and Motorola (and Google) will fail to overcome Apple's positive inertia and eventual leadership in the mobile phone and 'smartphone' markets.
There, that wasn't so hard, was it?