Revolution vs Evolution in Innovation

One useful way of analyzing the concept and processes of innovation is to make a distinction between Revolutionary and Evolutionary Innovation.  Both are valid and both produce excellent results.  But each is better suited to a different environment and, unfortunately, it’s not clear that either consistently produces better results.

Revolutionary Innovation

Revolutionary Innovation seeks to adapt the world to new ideas.

Revolutionary innovation is the type we see and hear about most in the United States.  On the one hand, it can quickly make available wondrous new products and services.  On the other, it is disruptive and expensive, and it produces unpredictable outcomes.  Revolutionary Innovation requires large pools of highly risk-tolerant investors who are prepared to make large capital investments to try something completely new, and those investors require, in turn, very large returns from the few major successes they do achieve.

Diversity and high levels of education are essential ingredients of revolutionary innovation.  Entrepreneurs and investors are much more likely to develop wholly new approaches to business and technology if their day-to-day experiences include exposure to and stimulation by a host of differences in thinking and working, and if they have both the training and the intellectual capacity to act on new ideas.

Unsurprisingly, heroic role-models and celebration of the individual are conducive to revolutionary innovation.  A social system that admires game-changers like Bill Gates, Steve Jobs, and Jeff Bezos is likely to produce a large number of aspirants trying to achieve the same glory.

Digital photography is an excellent example of revolutionary innovation.  It changed the way people take, share, and use pictures, destroying an entire ecosystem of companies from local camera stores to giant manufacturing companies like Kodak.  It also made possible whole new business models: Facebook would not exist on the scale it does today without digital photography.

Evolutionary Innovation

Evolutionary Innovation seeks to adapt new ideas to the existing world.

Evolutionary innovation dominates in countries like Japan, but it is also broadly followed in most very large corporations, regardless of their national heritage.  Evolutionary innovation tends to be incremental in nature and less expensive to develop than revolutionary innovation.  It focuses on preserving or gradually changing existing fundamentals, including people, product, and business relationships.  Because the changes tend to be smaller, the required investment tends to be smaller, and because the destruction wrought tends to be less dramatic and spread over a longer time frame, the costs, both in terms of dollars and in terms of social and business disruption, tend to be smaller as well.

In part because the consequences of failure are smaller, evolutionary innovation also dispenses with the need for risk takers and big rewards, for diversity in thinking and practice, and for lionized role models.

The automobile companies of Japan, especially Toyota, exemplify evolutionary innovation.  Through a steady stream of small changes in manufacturing, design, distribution, support, and integrated technologies, Japanese auto makers quietly achieved the goal that all Silicon Valley and Silicon Alley entrepreneurs say they aspire to but rarely reach: World Domination.

Revolutionary vs Evolutionary: Which is Better?

There’s no easy answer to which is better, revolutionary innovation or evolutionary innovation.  Evolutionary innovation is clearly more boring, but it’s also more secure because it’s more predictable and manageable.

Companies and economies that rely on evolutionary innovation need to keep that innovation coming at a rapid rate or they will be left behind by someone else’s innovations.  Similarly, no matter how quickly they may innovate in many small ways, evolutionary innovators are vulnerable to game changers from the revolutionary innovators.  As mentioned earlier, evolution is less expensive, both on the development side and on the consequences side.  But entrenched interests, which tend to emerge when evolutionary innovation dominates, can stifle changes and improvements that are desired by, and in the best interest of, the majority in favor of smaller or entirely different changes that serve their own interests.

In the long run, businesses and economies probably benefit from having a combination of both forms of innovation, which means having the social and economic infrastructure to support both.


Practical Applications of Virtual Reality

Virtual Reality designers have their heads in the clouds.  That’s fine for designers of MMOGs (massively multiplayer online games), but what about those who are designing “practical” virtual worlds, virtual worlds intended to meet business or professional needs?  Caught up in the coolness of doing new things with technology, virtual reality developers haven’t yet stopped to think about when and how virtual reality can actually be useful.

All too often, virtual worlds generally available today are used to present information and to do things that could be done more easily and more cheaply in other ways.

Features like avatars that fly are fun, but essentially meaningless.  In fact, when it comes to non-entertainment uses of virtual worlds, almost everything built to accommodate or enhance avatars, from lecture halls and amphitheaters to lifelike animation of human movement and fancy clothes, is a waste of coding time, processing power, and bandwidth.

Displaying two-dimensional, billboard-style displays of text or graphics (including videos) inside a virtual world is like putting twistable knobs on a smartphone to control volume and brightness.  If little knobs are ok, why use a smartphone, and if you’ve got a smartphone, why use little knobs?

When determining when and how to use a virtual world in a professional environment, ask a few simple questions:

1. Is doing something in virtual reality a substantively better experience than simply doing it “in your head” or by some other less flashy means?  Do you learn more?  Do you work faster?  Can you communicate better?  If so, why and how?

2. Does the improved experience provided by virtual reality justify the cost of creating a virtual world and operating in it?  (Note that sometimes, virtual worlds are cheaper than working in the real world, especially if participants are widely dispersed geographically, but they need to come together to work on something.)

3. Does virtual reality make possible something that was not previously possible?

4. What is lost by working in a virtual world rather than an off-the-shelf computer application or even a pencil and paper?  (This is sometimes a much harder question to answer than it seems, at least until after the virtual world has been implemented.)

If you don’t have clearly affirmative answers to the above questions, the odds are very good that the virtual reality effort will be an interesting but ultimately unproductive exercise with low or negative return on investment.  Proponents of virtual worlds need to keep in mind that businesses ultimately make cost-benefit decisions, so as cool as virtual worlds may be, if you can’t produce convincing, positive answers to the questions above, you’re not going to get a serious hearing from business people.

Here are some guidelines showing who in the real world (other than gamers) is likely to benefit from virtual reality:

1. Teams that are geographically dispersed in the real world that need to examine a 3D object in real time together.  (eg: An architect, a contractor, and a group of business executives discussing the layout of a new factory or office building.)

2. Teams or individuals that need to study possible changes to the real world. (eg: A consumer wanting to try on new clothes or a different hair style.)

3. Individuals or teams that need a safe space to experiment with an object that is dangerous or inaccessible in the real world.  (eg: Engineers trying to understand how to deal with a damaged nuclear power reactor or oil rig.)

4. Individuals or teams who need a safe space to practice dealing with a scenario that is dangerous in the real world.  (eg: Soldiers practicing cultural interaction with local non-combatants in occupied territory.)

5. Individuals or teams who need to examine an object that has not yet been created.  (eg: Engineers testing parts for a new jet engine.)

6. Individuals or teams who need to examine an object that is too small or too large to be directly studied in its natural state.  (eg: Pharmaceutical researchers or students examining a molecular reaction or astrophysicists analyzing models for galaxies or galaxy clusters.)

7. Individuals or teams who need to examine a 3D process that has a real-world time scale that is too long or too short to be adequately studied in its natural state.  (eg: Astrophysicists studying the creation of planetary systems or particle physicists studying the decay of one particle into another.)

Notice that few of the examples above require an avatar or a complete recreation of reality.  Those features would be nice-to-have, but they are not necessary to a productive, cost-effective use of virtual reality.

Augmented Reality vs Virtual Reality

Augmented Reality (AR) and Virtual Reality (VR) are sometimes confused in conversations, but in fact they have only one thing in common: the word “reality”.  Otherwise, they are completely different.

  • They are used for different purposes.
  • They use different technologies.
  • And, not least of all, augmented reality is much, much harder to do.

In this post we will look primarily at differences in the way the two technologies/services are used.  In a subsequent post we will look more closely at differences in technologies and the added complexities of implementing augmented reality.

Ok.  Saying augmented reality and virtual reality are “completely different” may be a bit of an exaggeration.  Both use computers and both can be used for entertainment.  Both commonly use graphics and animations.  And both can be interactive.

The best example of augmented reality is the yellow first down line in TV broadcasts of American football games.  That line doesn’t actually exist on the field; if you’re sitting in the stadium, there’s no yellow first down line to see.  The line is added digitally in the few seconds between the time a TV camera at the stadium takes in an image of the game and the time the image is reproduced in your living room or sports bar.  The heads-up displays used by pilots are also a form of AR.  Augmented reality adds digitally supplied images or information to real-world images.

Virtual reality, on the other hand, completely removes all traces of or connections to the real world and replaces the real world with a digitally produced artificial world.  Second Life, Cloud Party, and games such as World of Warcraft are virtual worlds that can be accessed using an ordinary computer, but if you’ve ever been exhilarated by a ride on Disney’s Star Tours, you’ve experienced immersive virtual reality, which stimulates all your senses.

Not so long ago, the term “virtual reality” encompassed both VR and AR.  Before distinctions in technology and application separated them, science fiction authors such as William Gibson used the terms interchangeably.  The popular press continues the practice, but as the two services have become more commonplace in both professional and entertainment applications, the distinction has become apparent to more people.  Even the definitions of VR and AR provided by Wikipedia differentiate between them appropriately.

Today, VR is primarily a means of entertainment.  MMOGs (massively multiplayer online games), for example, take place in virtual worlds in which thousands of players (hence “massively multiplayer”) participate in the same online game at the same time.  The worlds of World of Warcraft, Final Fantasy, and Starcraft are very complete universes that replicate many features of reality, including complex histories, large geographies, and constantly shifting socio-political rivalries.  These virtual worlds include mountains, prairies, plants, animals, buildings, and “intelligent” beings (non-player characters or NPCs), many of which respond to interaction.  Players visit these worlds to entertain themselves both through interaction with the virtual worlds and through interaction with other players.

Increasingly, large government organizations such as NASA and global corporations such as IBM, BP, and SAP use virtual world technologies to facilitate communication among employees and with service partners who are separated by geography or time zones.  These programs reduce costs, increase cooperation and innovation, and improve time-to-market.

Augmented reality, on the other hand, is far more likely to be used to provide basic information.  The yellow first down line in TV broadcasts of football games facilitates the enjoyment of the game, but it is still providing information: the line itself is not a form of entertainment.  Star Walk is a smartphone app that allows you to point your camera at any part of the sky, day or night, and see what constellations, satellites, or planets are in that part of the sky.  As you move your camera, the information adjusts in real time.  Many people might find this “fun,” but they are more likely to consider it informative.
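Under the hood, what an app like Star Walk has to do is map the phone’s position and orientation at a given moment to fixed sky coordinates, then look up what is there.  Here is a minimal sketch of that core transform, assuming Python’s astropy library; the observer location and camera pointing are made-up values, and a real app would read them from the phone’s GPS and orientation sensors before querying a catalog of stars, planets, and satellites:

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Hypothetical observer (roughly northern New Jersey) at the current time.
location = EarthLocation(lat=40.7 * u.deg, lon=-74.0 * u.deg)
now = Time.now()

# Suppose the phone's sensors report the camera pointing 45 degrees above
# the horizon, due south (azimuth 180 degrees).
pointing = SkyCoord(alt=45 * u.deg, az=180 * u.deg,
                    frame=AltAz(obstime=now, location=location))

# Transform the camera direction into fixed celestial coordinates (RA/Dec),
# which is how sky catalogs are indexed.
radec = pointing.transform_to("icrs")
print(f"Camera points at RA {radec.ra.deg:.1f} deg, Dec {radec.dec.deg:.1f} deg")
```

Everything after this transform, from drawing constellation art to labeling satellites, is ordinary catalog lookup and rendering; the augmentation itself hinges on keeping the transform in sync with the camera in real time.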

In augmented reality, virtual objects may be interactive and respond to clicks or gestures, but interaction with the objects is a means to an end.

Clearly, even the shared word “reality” refers to something very different in virtual and augmented realities.  The reality of AR is the same reality that everyone experiences through their senses from the moment they become conscious.  We can modify and manipulate it, but it exists independent of human creative efforts.  A virtual reality, on the other hand, exists solely because someone or a group of people conceived of it, designed it, and created the computer programs that bring it to life.  Humans may use their senses to interact with a virtual reality, but in addition to their senses they need some sort of interface (a mouse, a keyboard, a gesture interpreter, a display) that allows them to experience and interact with the virtual world.


Augmented Reality: Towards a Better Working Definition

Common definitions of Augmented Reality (AR) are unnecessarily myopic and restrictive.  One professionally competent definition goes like this: “The ability to seamlessly and dynamically integrate graphic and other multimedia content with live camera views on PCs and mobile computing devices such as your smartphone.” (Mimi Sheller, Professor of Sociology, Drexel University and Director, Center for Mobility Research and Policy.)  Professor Sheller has done a reasonably good job of describing the state of AR as popularly perceived today, but as is often the case, her definition is limited to her area of specialization and looks only at the near term.  For this fast-changing field, what about tomorrow, and what about the broad range of fields that are looking at AR applications?

A better definition of AR would be: Augmented Reality (AR) is the artificial, seamless, and dynamic integration of new content into, or removal of existing content from, perceptions of reality.

The best example of practical augmented reality today is still the yellow first down line used in TV broadcasts of American football games, even though this implementation of AR dates from the late 1990’s and makes very little money.  If TV viewers don’t stop to think about it, they believe that the first down line is actually on the field.  It moves with the field, not the TV screen, and it disappears behind players as they walk across it, just like the white yard lines that really are painted on the field.  Similar technologies are used in TV broadcasts of other sports, sitcoms, and talk shows to add useful information or advertising to the TV video streams.
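To make the occlusion trick concrete, here is a minimal, hypothetical sketch of the chroma-key idea such systems rely on: the virtual line is painted only over pixels whose colors match a palette sampled from the field, so players, whose uniforms fall outside the palette, appear to pass in front of it.  (The real broadcast systems also track camera pan, tilt, and zoom to keep the line registered to the field; that projection step is assumed here to have already produced line_mask.)

```python
import numpy as np

YELLOW = np.array([255, 255, 0], dtype=np.uint8)  # RGB color of the virtual line

def composite_first_down_line(frame, line_mask, field_palette, tol=40.0):
    """Paint the virtual line only where the pixel looks like field turf.

    frame:         H x W x 3 RGB video frame (uint8)
    line_mask:     H x W boolean mask marking where the projected line falls
                   (produced by the camera-pose tracking, omitted here)
    field_palette: list of RGB colors sampled from the grass and dirt
    """
    # Distance from every pixel to its nearest field-palette color.
    dists = np.stack([
        np.linalg.norm(frame.astype(float) - np.array(color, dtype=float), axis=-1)
        for color in field_palette
    ])
    looks_like_field = dists.min(axis=0) < tol

    # Players and officials fall outside the field palette, so the line
    # "disappears" behind them, just like the painted yard lines do.
    out = frame.copy()
    out[line_mask & looks_like_field] = YELLOW
    return out
```

The design point is that the occlusion comes almost for free from color statistics; no 3D model of the players is needed, which is part of why this implementation of AR has held up so well since the late 1990’s.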

Examples of AR implementations that are more fashionable and better fit generally used definitions of AR are the iPhone app Star Walk and the Starbucks promotional app used around Valentine’s Day in 2012.  Or the McDonald’s app currently used in Australia, which is probably the best mobile promotion using augmented reality created to date.

AR is, however, in its infancy, so a definition of AR needs to be broad enough to include more than what is available today.

If the augmentation is not seamless and dynamic, it’s not augmented reality.  The provided information or stimulus should appear as part of reality, otherwise it’s not augmenting the reality.  That’s clear enough, so that needs to be part of the definition.

The content can, of course, be computer-generated, like the first down line or the images in the iPhone apps mentioned above, but it could also be any form of new content that does not already exist in the reality being perceived.  Disney famously produced an AR event in New York City’s Times Square in which passers-by interacted with Disney characters (people in costumes, not computer-generated images) who were actually in a studio outside Times Square but appeared, to everyone present, on giant monitors placed above the entrance to the Disney Store (no PC or mobile device required).  That’s clearly AR, so the definition needs to encompass such implementations.

Also, why restrict ourselves to PCs and mobile devices for delivery?  The heads-up displays planned for near-future generations of automobiles, like those on a modern fighter jet, are not exactly the sort of PC or mobile device envisioned in many definitions of AR, but most of us would agree that a heads-up display qualifies as augmented reality.  The TVs used to see the yellow first down line are neither PCs nor mobile (for the most part).  Any delivery mechanism is acceptable, as long as it results in an augmentation of reality.

Going a step further, why does AR have to be visual?  Auditory information, for example, can be communicated through headphones.  An app for blind people could provide street names and building numbers through a headset.  Deaf people could be given a tactile sensation when a car honks or a siren approaches.  At the other extreme, noise-canceling headsets are, arguably, augmented reality devices because they change the perception of reality, in this case by reducing the perception of extraneous real sounds that interfere with other sounds, real or artificially introduced, or simply with the perception of quiet.  Indeed, any sense could be subject to perceptual change.  One can conceive of olfactory augmentation to alter the taste of food.  A definition of augmented reality needs to encompass these ideas as well.

In the end, we’ve eliminated from common definitions of AR the restrictions imposed by requiring a) computer generated content, b) visual systems such as a camera, and c) a PC or a mobile device for delivery.   We have maintained, however, AR’s tight connection with a) reality, b) the altered perception of reality, and c) the addition or removal of content.  The better definition gives augmented reality researchers and product developers a broader and more accurate spectrum for innovation while maintaining the connection with existing products, services, and technologies.  Augmented reality is the artificial, seamless, and dynamic integration of new content into, or removal of existing content from, perceptions of reality.

The Five Obstacles to Asking Questions

Innovation requires a culture that not only accepts but embraces the questions that lead to change.  Here are the five classic obstacles to questions and change.

Tight Schedules: People are busy and asking questions requires a bit of trial and error.  Questions assume a willingness to listen to and evaluate a range of answers (your own and those from other people).  Sometimes asking questions results in answers that impose delays, and anyone working under a tight deadline or a full daily schedule will avoid anything that causes delays.

Blinders: “I’m the expert.”  This is the single most insidious obstacle to change and innovation.  I once joined the Board of an organization set up to support the local library.  I posted a question on the Board’s email list about the future of public libraries.  My objective was to stimulate a discussion because I was then (and am now) convinced that public libraries are going to change dramatically and no one can know for certain what those changes will be.  (Who at Kodak, the photography experts from the 60’s through the 90’s, understood the implications of digital photography?)  The result of my query was a lecture from the two librarians in the group.  They were the experts.  They knew the future.  They knew the answers.  My question created an opportunity for the librarians to get fresh, outside input on an issue that is absolutely critical to their professional lives, but instead they shut down discussion by being the experts and providing “the answer.”

Inflexibility: “We’ve always done it that way.”  People often have a process that is so familiar they can follow it automatically.  Working on autopilot is easier than absorbing and adjusting to change.  One of my assignments when I was sent to Japan by Beckman Instruments was to introduce a new competitive strategy.  Beckman sold scientific instrumentation that produced reams of data.  Scientists were expected to manually analyze the data, a process that took much longer than generating the data.  As competitors steadily commoditized the hardware side of the business, Beckman sought to shift the competitive field by integrating data analysis software into the instrumentation.  Around the world, but especially in Japan, sales reps had trouble with the new strategy.  They were used to selling hardware specs, not software conclusions.  Software was seen as a give-away rather than a strategic value-add worth money to the customer.  When asked why they resisted the new strategy, they simply said, “That’s not the way we’ve always done it.”  When I pointed out that Japanese companies like Sony, Matsushita, and Toyota had created their global successes by establishing traditions that were, at one time, new and highly innovative, the reps slowly, over time and many beers, came to accept the value of change.

Routine: “If it ain’t broke, don’t fix it.”  There will always be some people who just don’t like change.  My grandmother was that way.  I’ve come to correlate resistance to change with “being old,” and I have known a few people in their twenties who, in this respect, were more “old” than other people I have known in their 80’s and 90’s.  My father, at the age of 85, sold his Southern California home of 31 years and moved to Colorado.  He loved the California house.  It was full of memories, art, and souvenirs from a lifetime of global travel.  But he knew it was time for a change, so he moved.  The changes in climate, physical living facilities, people, and daily routine that the move forced him to accommodate have kept him mentally and socially young.  At 88, my father is less “old” than many people I work with on a day-to-day basis.

Disconnect: “It’s not my job.”  Have you ever had to work with someone who, every time he was asked to do something, would get a look on his face that said, “Hmmm….  Is this in my job description?”  That reaction turns up frequently when people are asked to think about ways to make changes that will improve someone else’s (or even their own) circumstances.  On the surface, the work done by an executive assistant in the finance department can seem far removed from the company’s product or service offering as the customer sees it, but every role in the company affects cost and quality, which are inherent to the customer experience.  Therefore, change, and the acceptance of the questions that lead to change, is every employee’s responsibility.

Innovation: The Bare Naked Essentials

Questions are fundamental to innovation; the two are inseparable.  Right now, out there somewhere, someone is asking questions about how to do what you do better, faster, cheaper.

Whether you work on an assembly line, in a rock-and-roll band, or in a library, changes are being planned that will affect what you do, and the people making those plans are doing something very simple: They are asking questions.  You need to ask questions, too.

The first and most important questions to get right are:

  • “Who is the customer?” and
  • “What matters to the customer?”

Sometimes the customer is hard to identify.  Take clothing for toddlers, for example.  Toddlers don’t buy the clothes they wear, and they rarely influence the decision making process by expressing a particular preference before the purchase is made.  Therefore, the purchase decision for toddler clothing is usually made by a parent, a relative of the parents, or a friend of the parents.

In the case of toddler clothes, the user is not the customer, and the user has no influence whatsoever over the purchase decision.  (The toddler may express a distaste for a clothing item, but that comes after the purchase, so it may affect future purchases but not the initial one.)  Take it from me, the most important feature of toddler clothing is durability, especially durability through repeated washing – something about which no toddler would ever be concerned.

What about a company that makes siding for houses?  A siding company may seek to improve competitive advantage by, for example, innovating in product durability or ease of installation.  The former is important to homeowners and the latter is important to contractors and builders of large developments.  Therefore, which innovations a siding manufacturer chooses to pursue depends on who they decide are their customers.

Another key question: “Why innovate?”

Sometimes the objective is pure avarice: more profit.  Of course, profit is always present in the decision making process, but most likely there are additional objectives as well.  The siding manufacturers mentioned above are trying to avoid price competition by differentiating themselves from each other.

A soft drink company that substitutes less expensive corn syrup for sugar in a soda is simply trying to lower costs so it can increase profit margins, but if the company changes the size of the soda can when introducing that same soda into the Japanese market, the change has more to do with meeting the needs of local channel partners and distribution practices.

Now ask, “What is our purpose?”  And think about that question in the context of how you will make yourself different.

Since Steve Jobs returned to Apple in the 1990’s, a major consideration in Apple’s new products, universally admired as innovative, has been style.  Apple requires its product teams to subordinate technical creativity to usability and aesthetics. The major technical innovations are only important to the extent that they serve the design objectives, especially functionality and sexiness.

Compare Apple’s Newton with its iPhone successor.  Take a look at the pictures of the two devices with the hands normalized to (roughly) the same size.  Consider the dimensions of the two instruments.  Now look at the screen layout.  Look at the styling of the cases.  Think about the way users are expected to interface with the devices (stylus vs fingers). Consider the range of uses (apps) available for each and the ease with which users can add or remove functionality (and the ways that Apple makes money from the apps).

Yes.  Of course, the technologies that make the iPhone what it is today simply were not available 15 years ago when the Newton was first introduced.  But don’t think for a moment that many consumers notice or care about the iPhone’s technology.  Consumers are not interested in the geeky side of GPS services, super-hardened glass, accelerometers, programming languages, or gesture interfaces; they care about maps that tell them where they are and how to get where they want to go, products that fit into their pockets, and getting to the information and services they need with as little time and effort as possible.  They don’t care about 3G vs 4G vs LTE; they care about how long it takes for maps or Facebook or photos to download and update.

What they care about is the look, the feel, the comfort, and the functionality of the device in their hand.

Steve Jobs and his product team played a simple thought game and asked what it was that consumers wanted from a smartphone.  They decided that “cool” for the consumer is not the same as “cool” for an engineer.  Consumers want ease of use, functionality, and sexiness.

The Newton came up short on all criteria.  So Jobs killed it.

Since Apple introduced the iPad, Amazon, Barnes & Noble, Samsung, and Google have introduced products intended to compete with it.  But are any of those products substantively different from the iPad?  They may be simpler and cheaper, but they copy the fundamental features and functionality, in one way or another, of the iPad.

The iPad’s competitors do not rethink the questions “Who is the customer and what does he want?” the way the designers of the iPhone did.  They see their purpose as making devices similar to the iPad.  Therefore, they are not likely to change the tablet market the way the iPhone changed the cell phone market.  But out there somewhere, someone – perhaps someone at Apple – is asking the questions.  And eventually, change will come.

Masayoshi Son Tweaks Some Noses Again

American mobile service providers are doing their customers a disservice.  In the end, it is the mobile service providers who will suffer.

A couple of weeks ago, a friend from Japan visited the US for the first time.  This is a friend who speaks little English and has spent a lifetime working for the Japanese government on domestic issues.  After he had spent a week traveling from Boston to New York City and Washington, DC, he spent a weekend at our house in New Jersey.  I felt compelled to ask him, “What do you notice that’s different here in the US?”  It was an open-ended question.  He could have commented on anything, but he replied, “Mobile internet access is really slow.”

At the time, I laughed.  His comment was totally unexpected.

His observation, however, fits anecdotally into a pattern set by hard data: The internet infrastructure in Japan (and many other parts of the world) far surpasses America’s.  Looking specifically at mobile access to web pages, Google found that the average web page downloads in 4 seconds in Japan but takes more than twice as long, 9.2 seconds, in the US.  Nine seconds is an eternity when accessing data.

Part of the difference is the deployment of 4G and LTE data networks versus 3G networks.  Theoretically, 4G networks can handle 100Mbit/s throughput for devices on trains or in cars and 1Gbit/s throughput for devices held by pedestrians.  3G, on the other hand, maxes out at 56Mbit/s.  Then there are the latency issues….
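To put those peak rates in perspective, here is a hypothetical back-of-the-envelope calculation (the 2 MB page weight is an assumption, and these are theoretical maximums; real networks deliver far less, and latency adds delay on top):

```python
# Transfer time at theoretical peak rates for a hypothetical page weight.
PAGE_MEGABYTES = 2.0  # assumed average page size, in megabytes

PEAK_MBIT_PER_S = {
    "3G (peak)": 56,
    "4G, vehicular (peak)": 100,
    "4G, pedestrian (peak)": 1000,
}

for network, mbits in PEAK_MBIT_PER_S.items():
    seconds = PAGE_MEGABYTES * 8 / mbits  # 8 bits per byte
    print(f"{network}: {seconds:.2f} s to move a {PAGE_MEGABYTES:.0f} MB page")
```

Even 3G’s theoretical peak would move such a page in a fraction of a second, so the 4-to-9-second page loads measured in the real world say as much about deployment and congestion as they do about the standards themselves.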

Another part of the difference is the willingness of the providers to make maximum use of the available technologies.  In other words, they’re holding out on their customers.  When was the last time you had 56Mbit/s throughput on your 3G phone?  Or even your 4G phone?

Earlier this month, Japanese mobile services provider Softbank announced plans to take a 70% stake in Sprint Nextel.  Eight billion dollars of Softbank’s planned $21B acquisition will be used to improve Sprint’s infrastructure, shoring up the weaknesses in its 4G network.

How much pressure that will put on AT&T and Verizon, the American mobile heavyweights who are also rolling out 4G networks, depends in part on how aggressively Sprint pursues customer acquisition.  Masayoshi Son, Softbank’s founder and Chairman/CEO, has a reputation for taking big risks and competing aggressively, so aggressive competition is a foregone conclusion.

A Japanese national of Korean ancestry who was educated at UC Berkeley, Son is accustomed to being an outsider and has frequently taken contrarian positions, placing big financial bets and coming off the winner through shrewd and determined business strategies.  At various times in his career Son has unabashedly annoyed Japanese business leaders and government officials by making business decisions that did not conform to the established order.

Each time he has survived, and on most occasions he has come out ahead; starting from very humble beginnings, Son is now the second-wealthiest man in Japan.

In 2006, Son directed Softbank to acquire Vodafone’s failing Japanese mobile subsidiary, which was losing tens of thousands of subscribers per month to the two top mobile providers, NTT DoCoMo and KDDI.  After rebranding, investing in infrastructure, and acquiring exclusive rights to the iPhone in Japan, Son’s newly named Softbank Mobile quickly attracted subscribers and took a solid third place in the mobile market on the strength of its data services for smartphones.

Not only did Softbank leverage high speed data services to attract the bulk of the new customers coming into the mobile market, it stole customers from DoCoMo and KDDI.  Surprised and feeling the pressure from this upstart, the market leaders have been forced to improve their service offerings as well.  4G and LTE are now ubiquitous in Japan, and the vast majority of Japanese customers use them to access maps, graphics, and other throughput-intensive applications.

Instead of whinging about how customers use their smartphones for data-intensive applications, the way America’s mobile operators have, DoCoMo, KDDI, and Softbank have moved to provide their customers with what they want, at lower rates than Americans pay.

Sound familiar?

In the 1970’s, America’s auto companies were put on the defensive by better-quality, less expensive Japanese auto manufacturers, who were simply doing what they should do: competing on quality and value.  Is it possible Masayoshi Son is poised to repeat the trick on US mobile service providers?  And will American mobile providers prove agile enough to respond to a competitor who plays by different rules?  Masayoshi Son may not be a “typical” Japanese businessman, but like other Japanese business people he believes in competing by providing the best possible services to his customers at a reasonable price and making himself, his colleagues, and his investors wealthy in the process.

The Government’s Role in Innovation

During last Tuesday’s debate between Mitt Romney and Barack Obama, both candidates touched on the topic of government investment in businesses.  The principal issue at hand was whether the government can (or should) create jobs.  However, although jobs are a very valuable aspect of investment or spending, the question of whether or not the government should invest in businesses is a much larger issue.

First, to be clear, the US federal government has long-standing programs for subsidizing businesses.  These programs, ranging from farm aid to insurance for nuclear power plants, are designed to support industries that the federal government has decided are of national strategic importance.

Second, the US federal government rarely invests in business in the traditional sense of taking an ownership stake.  The government in America has moved steadily away from ownership of any business, preferring, instead, to guarantee loans (eg: Small Business Administration (SBA) loans) or to make grants for research and development projects (eg: Small Business Innovation Research (SBIR) grants), especially for small and mid-sized businesses.

Third, because the government is not investing for direct and immediate financial gain, which is the objective of most private or corporate investors, the American government measures success in other ways than capital gain, cash flow, or IRR.  The federal government invests to improve military and food security, to keep America competitive in emerging and dynamic technologies such as renewable energy and aerospace, or to create high-wage, high value-add jobs.

Investing Objectives

The US government is not, therefore, in the business of “picking winners” among businesses.  It deliberately spreads its resources widely in a range of industries and individual companies with the objective of having, in the long run, a net positive impact on American competitiveness, prosperity, and security.

We can assume, therefore, that a major component of “success” requires innovation and that innovation is both an objective and a by-product of the subsidies.

SBIR grants directly encourage commercial innovation by seeding small businesses experimenting with new technologies or techniques, resulting in an average of 7 patent filings per day.  In addition, insurance for nuclear power plants encourages innovation in the nuclear power industry by ensuring the industry’s survival: private insurers are unwilling to cover all of the risks of nuclear plants, and without insurance, utilities would not build them.

Investing for Near and Long Term Returns

DARPA (the Defense Advanced Research Projects Agency) is a government agency that funds research with particular objectives and has produced, for example, the foundations of the internet.  One model DARPA has used successfully is to lay out a project or a problem and offer a cash subsidy for research in that area, with various companies or research centers submitting competing proposals for the subsidies.  In 2011, DARPA’s subsidies to small business research totaled just over $74M.  To qualify, those businesses must have fewer than 500 employees and be independently owned and operated.

In addition to looking for near term solutions and breakthroughs, the government also provides long term stimulus to innovation.  The government makes grants to academic institutions and to public research institutes like the National Institutes of Health (NIH) that conduct basic research, including research in healthcare, the natural sciences, and the social sciences.  The vast majority of this research has no immediate market value, so traditional market forces fail to provide incentives for businesses to invest in it.

The NIH received $30B in funding in 2011, resources that could never be matched by the private sector.

However, the results of the research paid for by the government often make their way into industry in unexpected ways.  Laser cutting tools, gene therapy, and non-stick surfaces on frying pans are commercial applications based on research originally funded by the government.

Contemporary cancer treatment is directly derived from research done at NIH 20-30 years ago, so the companies that make the instrumentation and chemical/biological products used in those treatments are direct financial beneficiaries of that research.

Some projects are simply too big for private enterprise.  When President Kennedy set the objective of going to the moon by the end of the 1960s, the government created a whole new agency, NASA, to take up the challenge.  From the outset, the implications for national defense as well as private industry were understood to be significant.  No business was large enough to invest the billions of dollars it took to eventually reach the moon, so we all paid for those visits through our taxes.  Yet private business spinoffs that use technologies developed for NASA now produce consumer and commercial products, including specialty materials and computing systems, that have generated billions of dollars in revenue over the last four decades.

Today, since the basic technologies for space launches are widely understood and can be produced economically and used (relatively) safely, private enterprise is taking over the industry, launching rockets with commercial, government, academic, and even private citizen payloads into orbit.  Earlier this month, SpaceX became the first commercial rocket launch company to resupply the International Space Station.

Rules and Regulations

Getting back to the debate: Governor Romney has consistently argued that regulation gets in the way of private enterprise, and with it innovation.  And he makes a good point.  Evidence can be seen outside the US in countries where regulation of business is more pronounced.  In France and India, for example, laying off employees requires considerable red tape and cash outlays.  These well-intentioned rules protect existing jobs and the individuals who lose them, but they also compel companies to add new employees slowly and carefully, resulting not only in prolonged periods of high unemployment but also in a reluctance to start new businesses, because the financial obligations that come with failure are high.

Without a doubt, therefore, government has to consider the unintended consequences of its regulations, and citizens have to evaluate the tradeoff between the intended benefits of regulation and the impact it has on business.  For example, when the EPA imposes rules on how toxic substances are handled, the rules may be well-intentioned efforts to protect homeowners, schools, workers, or natural resources such as lakes and streams, but the regulations may also impair the ability of the businesses using those substances to profitably conduct research or manufacture products.

There are, however, regulations that impose costs that businesses are willing to bear because business benefits from those regulations.  For example, intellectual property (IP) protection is a form of regulation that prevents one business from using the protected innovations of other businesses without permission.  Through laws that protect IP, the government ensures that innovative businesses can reap the benefits of their investments in research and development.  The cost of filing and defending patents is not trivial, but the financial consequences of not doing so make the costs worthwhile.  The government’s role in managing patents and providing a mechanism for businesses to defend those patents is indispensable, and companies and entrepreneurs will not set up businesses in countries in which the government does not offer adequate IP protection.

Similarly, an often underrated role of government in innovation lies in providing a predictable environment for the conduct of business.  Rule of law, public safety, and a stable political and economic structure are essential for businesses and entrepreneurs to take the risks that go along with innovation.  When social, political, and economic environments are unpredictable, businesses and individuals focus on survival rather than positive innovation because they do not know whether unanticipated, changing circumstances will allow them to make a return on the investments they make on improvements.

Government, therefore, plays a valuable role, directly and indirectly, in the innovation we see around us.  At the most fundamental level, government supports innovation by providing the social, political, and economic stability necessary for businesses to invest in new ideas.  In addition, government contributes to long term innovation by paying for basic research in healthcare, physical sciences, and social sciences and by taking on large projects that are beyond the scope of private industry.  Finally, the government primes the pumps of near term innovations by making grants to innovators in areas that are of national strategic interest and offering incentives in the form of prizes to student, academic, and business teams that compete to find solutions to problems that benefit the government and society as a whole.

One Size Does Not Fit All

There is no perfect model for innovation.

Don’t let anyone tell you that there is only one way to do innovation.  That’s like saying there is only one way to cook a meal.  Innovation comes from a complex mix of potential ingredients, and part of what makes it exciting is that you can pick and choose your ingredients, and their quantities, depending on the desired outcome.

Valve Software, a relatively small computer game company that always seems to be one step ahead of the curve, is famous for its lack of structure.  Internally, titles are not used; levels of authority and job assignments are worked out dynamically by the employees themselves.  Many fans of innovation would hold Valve’s approach as the ideal for creating opportunities for innovation because they see structure as antithetical to change and creative thinking.

Innovation can, however, survive in many climates.  In the early days of the internet, Hakuhodo, a large, highly creative media company in Japan, once assigned each employee on a team the task of coming up with 100 ideas for a new online service within 24 hours.  Together they culled through the ideas, and the result was a popular online service modeled on a real-world tradition that combines Japanese New Year greeting cards with sweepstakes.  A key difference: the online version included brand advertising.  The new online version generated fresh revenue for Hakuhodo and solidified its reputation as an innovator.

PepsiCo, too, has systematically sought innovative business ideas.  Two years ago, PepsiCo launched PepsiCo10, an incubator program in entertainment, mobile, retail, and sustainability.  The mix of categories fits both PepsiCo’s brand and the fast-changing industries that affect its ability to do business competitively.  Last year PepsiCo10 went to Europe looking for entries, and this year (2012) it has added Brazil and India, which shows additional systematic thinking about where and how innovation can help the business.

As it turns out, according to a survey conducted by IDG for CA Technologies, the more innovative IT departments are in companies that plan for innovation and implement programs that support the development and implementation of fresh ideas.  Keep in mind that IT is a corporate line function, not a department like marketing or R&D where creativity is expected; yet, as structured as IT may be, innovation thrives there.

Drawing from IDG’s research, CA Technologies concludes: “Innovative organizations are more likely than their less innovative peers to emphasize experimentation and exploration…. [However,] Counter to conventional thinking, they also place more emphasis on planning and structure, indicating that they take a more mature approach to how innovation projects are managed and measured.”

Innovation, therefore, thrives in many corporate climates and can contribute to growth, ROI, and both customer and employee satisfaction in large, highly structured organizations as well as in small, nimble ones.

Technology is Not Enough

Innovation is not just about technology.

Or engineering.

Anyone in the tech biz, given 5 minutes, can come up with half a dozen examples of “cool,” superbly engineered products that were commercial failures.  Just to make the point, here are a few:

  • Xerox’s GUI (stolen by Apple, and then stolen from Apple by Microsoft, before it became a commercial success).
  • Sony’s Beta video tape format.
  • Sony’s Librie ebook reader.
  • Netscape web browser.
  • Apple’s eWorld.
  • Second Life.
  • Eight-track tape players.
  • Rotary engines.

These were all interesting technology implementations.  Most of them were ground breaking innovations and most of them were superbly executed from an engineering point of view.  But they were also commercial flops.

Brain power and imagination are not the problem when it comes to “cool” products that fail.  The engineers that I have met over the years, engineers working on products as diverse as DNA sequencers and eCard web sites, have been very bright people.

So why do some innovative products become game changers while others crumble in the face of less innovative existing competition?  To answer this question, we must first answer the question: What is a product?

A product is an object or a service provided by a business to a customer to address a problem, a want, or a need at a price the customer is willing to pay.

Simply put, successful products, especially products based fundamentally on technological innovation, are made up of a well balanced mix of:

  • features and function
  • price point
  • quality
  • perception – brand
  • delivery or distribution, including advertising and promotion
  • after sale service and support

Whether the innovative new product creates a new product category, disrupts an existing product category, or simply provides the manufacturer with a competitive advantage, all of the key elements of what makes a product must come together to make the product a commercial success.

Sony, for example, made the same mistake twice.  Famously, in the war over which videotape format would rule the consumer market, Betamax or VHS, Sony had arguably the better-engineered product: Betamax tapes and machines were smaller, and the image quality they produced was better.  But Beta was also more expensive and, most importantly, Sony failed to convince content providers to make their movies available in the Beta format.  Consumers didn’t care whether or not the Beta format was better; they only cared which format had their favorite movies and TV shows.

That was in the 1970’s.  In 2004, Sony made the same mistake with the Librie, its pioneering ebook reader.  Sony had first-mover advantage in a long-anticipated product category, but it focused on the technology side of the product, especially the e-ink display, which was very cool and new at the time.  Unfortunately, Sony failed, once again, to provide content.  Consumers just won’t buy an eReader if little content is available, or the material is not easily accessed, or, worse, the material that is available is not what they want to read, and the Librie suffered from all three of these content problems.

When an established and experienced consumer products company, such as P&G or Unilever, rolls out a new product, they carefully plan the entire package, including price, distribution, branding, and customer support.  Early stage tech companies often don’t feel they have the time and resources for such luxuries as market research and distribution planning, so they simply focus on the technology behind the product.  Technology is what the founders are familiar with and it provides the IP that the investors feel they are paying for.  The result is a higher than necessary failure rate.

Innovative technologies still need to be delivered as part of a complete product package, which means all the other pieces of what makes a product need to be taken as seriously as the technology that makes the product possible.