Is SciFi Dead? Putting a Prediction to Rest

When Spaceballs, the epic comic parody of Star Wars, came out in 1987, a little-known and now-forgotten film critic announced that Spaceballs signaled the demise of science fiction film as a genre.  His logic was as follows: the appearance of a comic parody 10 years after the original film's release could only mean that science fiction had little new to offer.  In short, the genre was doomed, strangled in a dead end by a dearth of original material.

Yet here we are almost 30 years later, and last year (2015) over 30 feature-length science fiction films were released into the American film market.  True, some of them (e.g., Jupiter Ascending, Pixels, and Hot Tub Time Machine 2) were horrible – or simply silly – with little social, technical, creative, or artistic merit.  Others (e.g., Jurassic World and Terminator Genisys) could only be explained as money machines drawing on the reboot of established SciFi brands.  SciFi films for the summer of 2016 had similar problems; Star Trek Beyond, Independence Day: Resurgence, and Ghostbusters are all largely considered flops despite famous heritage.  Looking only at these films, one might agree with the critic's accusation that science fiction was out of material.

But let's take a closer look, especially at 2015, for which we have a full year of films – including the important Thanksgiving and year-end moviegoing weekends.  Several science fiction movies released in 2015 were well worth seeing, and still others were seriously good movies.

So what makes a “seriously good” science fiction movie?

+ Craftsmanship: A seriously good movie must meet high standards for story, acting, script, special effects, and other basic mechanics of a movie.

+ More than Entertainment:  A seriously good movie needs to be thought-provoking.  In particular, it should look at social, political, or scientific issues.  Alternatively, the movie can inspire us to improve the world we live in – or will live in.

+ Plausibility: Finally, viewers must walk away from a seriously good movie believing that what they have just seen is possible.

Unfortunately, the plausibility requirement automatically excludes some longtime favorite SciFi movies from consideration as “seriously good,” but movies that get all the craftsmanship elements right are well worth seeing, and we have plenty of them in our sample of 2015 movies.  Star Wars VII: The Force Awakens and Mad Max: Fury Road fall into this category.  These movies are both entertaining and engrossing; they can be as much fun as a two-hour roller coaster ride.

2015 saw several commendable efforts that went beyond entertainment.  For example, both Tomorrowland and The Hunger Games benefit from underlying themes that are more important than the plot and almost as important as the special effects.  Both movies are arguably more than raw entertainment.  Both, however, lack plausibility, both succumb to the Hollywood desire for flash, and Tomorrowland additionally suffers from weak craftsmanship – predictability and a somewhat obvious lecturing tone.  Beyond these, it doesn’t take long to find other 2015 SciFi movies that are worth seeing – depending on your tolerance for foul language (Chappie) or your patience with familiar concepts (Self/less).

Of the 2015 crop of SciFi movies, the ones that meet all the criteria and qualify as “seriously good” are Ex Machina and The Martian.

Tension builds steadily in Ex Machina, but you won’t find any explosions and little gore in this low key film.  The movie is sexy but shows no sex.  The special effects are seamless and (along with the sexiness) serve the tragic plot line and the underlying messaging.  If you haven’t seen Ex Machina, put it at the top of your list.

The Martian, unlike Ex Machina, wants very badly to be a blockbuster film, and in the end it caves to the shallow demands of Hollywood and serves up a thoroughly implausible climax.  Up to the very end, however, The Martian largely adheres to good science, good film craftsmanship, plausibility, and thoroughly entertaining storytelling.  If you’re a science fiction fan, you’ve undoubtedly already seen The Martian, but it’s worth seeing twice.  If you aren’t a science fiction fan, you’re still likely to enjoy The Martian.  Either way, just close your eyes at the end.

With movies like Ex Machina and The Martian, science fiction has shown that it is far from dead as a film genre.  If anything, SciFi is gaining strength.

As humankind becomes more confident of the future and of our own ability to shape it, good science fiction helps us imagine the science, technology, and social changes that are possible.  We can investigate both the outcomes that we want to avoid and the opportunities that we want to embrace.  Science fiction is far from dead.  More people embrace seriously good science fiction than ever before.  We just have to hope that Hollywood – or someone – will make the movies.

How Good is the Augmented Reality in Pokémon GO?

For those who have already played Pokémon GO and understand Augmented Reality (AR), feel free to skip to the next paragraph.  For those who have not or do not: Pokémon GO features real-time insertion of animated images (creatures called Pokémon) into a live video stream that comes through the camera lens on your smartphone.  Point your camera lens at the sidewalk in front of you and, if the software expects a Pokémon to be at that location, you’ll see it on your screen even though you don’t see it when you look directly at the sidewalk.

Outside of Pokémon GO, you can find a number of examples of excellent AR implementations on contemporary TV, especially in sports broadcasts.  That yellow first down line you see in American football games?  Yup, that’s great AR.  It moves with the field when the camera pans, tilts, or zooms and it disappears behind players when they walk over it.  It really does look like there’s a yellow line painted on the field.  But if you think about it, you know the line is not really there.  No one in the stadium can see it – only people watching the live TV broadcast.

You know those ads on the wall behind home plate during The World Series?  Yup.  Those, too, are artificially inserted.  In fact, depending on where you are in the world, you’ll see different ads.  And they exhibit the same characteristics as the yellow first down line: they stick with the field when the camera pans, tilts, or zooms and they disappear behind anyone who walks in front of them.

The math and the video technology behind those two examples of AR would blow your mind.  Watch for more AR this summer during the Olympics.  You’ll notice that all these AR implementations exhibit the same three characteristics, listed here in order of technical difficulty (a rough code sketch of the first one follows the list):

1. A graphic image, sometimes animated, is inserted into a live video stream.

2. The image seems to move with its “real world” surroundings, whether those surroundings consist of an American football field, a wall in a baseball stadium, or (in the Olympics) the bottom of a pool.

3. The inserted image disappears behind any object that passes between the camera lens and the place where the inserted image is supposed to be – including batters, catchers, and umpires in a baseball game or referees and players in a football game.
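To make the first characteristic concrete, here is a minimal sketch, assuming Python with OpenCV and NumPy, of compositing a graphic onto frames from a live camera feed.  The sprite file name and the screen coordinates are placeholders for illustration, not anything from a real AR product:

    import cv2
    import numpy as np

    def insert_graphic(frame, sprite_bgra, x, y):
        """Alpha-blend a BGRA sprite onto a BGR frame at (x, y); assumes it fits."""
        h, w = sprite_bgra.shape[:2]
        roi = frame[y:y+h, x:x+w].astype(np.float32)
        rgb = sprite_bgra[:, :, :3].astype(np.float32)
        alpha = sprite_bgra[:, :, 3:4].astype(np.float32) / 255.0
        frame[y:y+h, x:x+w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
        return frame

    cap = cv2.VideoCapture(0)                                  # live camera stream
    sprite = cv2.imread("creature.png", cv2.IMREAD_UNCHANGED)  # BGRA placeholder art
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("AR sketch", insert_graphic(frame, sprite, x=100, y=200))
        if cv2.waitKey(1) == 27:                               # Esc quits
            break
    cap.release()

Notice what this sketch does not do: the sprite is pinned to screen coordinates, so it moves with the camera, not with the world.  That gap is exactly what characteristics 2 and 3 are about.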

But what about Pokémon GO?

The AR software used in Pokémon GO is very cool and exciting, but it’s also pretty basic.  It fully manages only the first of the three characteristics mentioned above – the easiest one – and partially manages the second.  The Pokémon are inserted into the video stream when you point your camera in the direction of the Pokémon, but the Pokémon doesn’t move with the “real world” very smoothly.  If you tilt, pan, or zoom your camera, the Pokémon moves around fairly wildly and doesn’t give a very good illusion of actually being part of the “real world” surrounding it.

The third and most difficult characteristic of AR, of course, isn’t implemented at all in Pokémon GO.  If you find a Pokémon and, while viewing it through your camera lens, pass your hand between the lens and the Pokémon, the “real world” surrounding the Pokémon will disappear behind your hand, but the Pokémon will not (see the accompanying image).  The same thing applies if a friend, a car, or a passing dog comes between your phone and the Pokémon: the world disappears behind the new object but the Pokémon does not.

There’s no doubt about it: Pokémon GO has brought augmented reality into everyday life and made it fun for millions of people.  As video processing power on mobile devices improves and more games take advantage of AR (which they will), the AR implementations will improve and we’ll see better integration of the artificial images with the surrounding environment – and they’ll begin to disappear behind intervening objects.  For now, although the Pokémon GO implementation is far from ideal, “Progress over Perfection.”

Augmented vs Virtual Reality: Contrasting Technologies and Tools

While both augmented reality (AR) and virtual reality (VR) rely on computing power, they use very different techniques to achieve very different ends.

Ok. Time to get the boring stuff out of the way…

First, a quick note on the distinction between AR and VR.

VR is the complete replacement of the real world with an artificial world.  At this time, VR usually replaces only visual and auditory input, but VR fans and businesses are increasingly incorporating artificial stimuli for other senses, especially balance (motion detection), touch, and smell, to make ever more complete virtual realities.

AR, on the other hand, is the artificial, seamless, and dynamic integration of new content into, or removal of existing content from, perceptions of the real world.  AR is most commonly seen in sports broadcasts, such as the yellow first down line in American football games, national flags in the lanes of swimmers and runners in the Olympics, and advertisements on the wall behind home plate in baseball games.  These augmentations of reality are so convincing that most TV viewers do not realize that they are artificially inserted.

Now for a small surprise…

Considering these two descriptions, one might assume that VR is much harder to implement than AR.  After all, VR has to replace all that visual and auditory input.  In fact, however, AR requires much more sophisticated programming and consumes more computing horsepower.

It turns out that recognizing and tracking a range of objects in the real world requires powerful, very fast computing.  The vast majority of computing power in augmented reality is consumed in identifying and tracking reality.  The insertion and removal of content, on the other hand, is relatively simple (other than occlusion, which is hard and discussed later).
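As a rough illustration of where the cycles go, consider this sketch, assuming Python with OpenCV (the image files are hypothetical placeholders).  Finding a known object means scanning the whole frame; inserting content once you know the location is just writing pixels:

    import cv2

    frame = cv2.imread("frame.png")    # one captured video frame (placeholder)
    marker = cv2.imread("marker.png")  # the pre-programmed object to look for

    # Expensive step: slide the marker over every position in the frame.
    scores = cv2.matchTemplate(frame, marker, cv2.TM_CCOEFF_NORMED)
    _, confidence, _, (x, y) = cv2.minMaxLoc(scores)

    # Cheap step: once located, inserting content is just overwriting pixels.
    if confidence > 0.8:
        h, w = marker.shape[:2]
        frame[y:y+h, x:x+w] = (0, 255, 255)  # paint a yellow patch over it
    cv2.imwrite("augmented.png", frame)

Even this naive search is brute force over millions of pixel positions, every frame; real trackers are smarter, but the asymmetry between finding and drawing remains.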

Seeing something is not the same as recognizing it…

Human vision can identify all kinds of complex objects, distinguishing objects with only small apparent differences.  It’s useful, for example, to be able to tell your spouse from everyone else of the same gender with the same color hair, skin, and eyes (and let’s face it, there are a lot of people out there who are the same in those respects).

Computer vision still has a hard time even recognizing a large number of random, individual objects.  Specialized software exists for facial recognition and is used routinely by Facebook, iPhoto, and other applications, but to work reliably the camera angles and lighting have to be consistent.  Artificial intelligence (AI) that can generalize facial recognition skills into recognizing everything from sofas to automobiles – and distinguishing between different kinds of sofas and automobiles – is not yet generally available.

For now, AR must therefore settle for recognizing a few clearly defined, pre-programmed objects: the lines on a football field, or the blank green box behind home plate in a baseball stadium, for example.

As a further complication, with augmented reality, the developer has to allow for the nearly infinite array of variables that the “real world” could throw at his program, from shifting light, to varying camera angles, to falling rain, to a van partially obscuring a sign or the front of a building (making it unrecognizable to the AR application).

This is the point at which virtual reality programming becomes much easier than augmented reality.  In virtual reality, everything is known so all changes are understood.  If a van partially obscures the front of a building in a virtual world, the program knows about it because the van and the front of the building are both part of the virtual world created by the program.  If it starts to rain, or it snows, or a thick blanket of fog changes objects from clearly defined images into ghostly shadows, the program still knows what they are and how they will behave because the weather and the objects are part of the program.

Even the apparently random movements of a player-controlled object in a virtual world are completely known to the program because, although the commands that result in the random movement come from an external source (the human), the program makes the changes to the player-controlled object in the virtual world and thus knows how the object is changing or moving.

Believe it or not, it gets harder…

One of the big problems in augmented reality is determining whether a real world object is in front of or behind an augmented reality object.  For example, let’s assume you are using a smartphone to watch a digital zombie walking down a sidewalk in a crowded city, with the phone performing the digital insertion of the zombie.  Some of the (real) pedestrians will be behind the zombie and some will be in front of it.  Of course, the zombie must obscure the pedestrians behind it, and the pedestrians in front of the zombie must obscure the zombie.  This is known as occlusion: objects in front hiding (or partially hiding) objects in back.

Right now, smartphones are far too dumb to handle occlusion, and, if the AR application chooses the wrong object to put in front, the results are visually very disturbing.  Therefore, AR apps on smartphones are learning the tracking piece, but have yet to make a serious attempt at occlusion, which means all of today’s AR digital insertions on smartphones and tablets float on top of the real world instead of integrating into it.
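Broadcast systems like the first down line sidestep full occlusion with a trick: the field is a known color, so the line is drawn only over grass-colored pixels, and players occlude it automatically.  Here is a minimal sketch of that color-keying idea, assuming Python with OpenCV and NumPy; the HSV thresholds are invented values that would need careful hand tuning in any real system:

    import cv2
    import numpy as np

    def draw_keyed_line(frame, row, color=(0, 255, 255)):
        """Draw a horizontal 'first down' stripe, but only over grass-colored pixels."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        grass = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # green range, hand-tuned
        stripe = np.zeros_like(grass)
        stripe[row - 2:row + 3, :] = 255                         # a 5-pixel-tall stripe
        mask = cv2.bitwise_and(grass, stripe)                    # stripe AND grass only
        frame[mask > 0] = color                                  # players stay on top
        return frame

The trick only works because the background color is known in advance; a zombie on a gray sidewalk full of gray-coated pedestrians gets no such free lunch.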

In the case of virtual reality, again, the program knows where each object is as well as the viewing angle, so, while the graphic rendering involved in having one object disappear behind another object and then reappear on the other side may not be easy, the harder process of identifying which object is in front of which is already solved.

Another big problem for AR is camera movement.  Let’s again imagine that digital zombie walking down a crowded city street.  Assume we have solved occlusion: pedestrians behind the zombie disappear behind him, and pedestrians passing in front of him hide him as they pass.  The zombie is staggering down the sidewalk.  With each step, his foot lands on the pavement and plants itself relatively firmly in place while he lurches forward with the other foot.

Now start to move the camera with him.  Remember, he’s being inserted into the image seen through a camera.  Normally, the zombie will move with the camera, which means it looks like his feet are sliding along the pavement.  Or floating above the pavement.

So we have a new problem.  We have to be able to track changes in the camera view.  Cameras typically pan, tilt, and zoom.  They can also roll sideways, rise higher, or drop lower.  As the camera moves, the software that is inserting the digital zombie must know how to lock that zombie’s foot onto the ground so he moves with the “real world” environment, not with the camera motion.  This means precisely tracking the changes in the viewing angle, direction, and distance of the real world environment in which the digital zombie is moving.
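One common way to approximate this, sketched below assuming Python with OpenCV and NumPy, is to estimate how the background moved between consecutive frames – a homography computed from matched feature points – and then move the inserted object’s anchor point the same way.  This is a simplified illustration of the general technique, not how any particular AR product works:

    import cv2
    import numpy as np

    def update_anchor(prev_gray, curr_gray, anchor_xy):
        """Estimate background motion between frames and re-project an anchor point."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(prev_gray, None)
        k2, d2 = orb.detectAndCompute(curr_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to moving people
        pt = np.float32([[anchor_xy]])                        # shape (1, 1, 2)
        return tuple(cv2.perspectiveTransform(pt, H)[0, 0])   # new (x, y) for the foot

Run every frame, this keeps the zombie’s foot pinned to the same patch of pavement as the camera pans or tilts – and it hints at why tracking, not drawing, eats the computing budget.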

In addition, the AR application has to change the angle and lighting of the digital zombie.  It must make the zombie bigger or smaller as the camera moves closer (zooms in) or moves away (zooms out), and shift from a front view to a back view as the camera moves from in front of the zombie to behind him.  All of this must be worked out principally by analyzing the surrounding real world environment – a process that, at this juncture, even very smart computers find hard and smartphones find incomprehensible.

And then there’s lighting: Imagine an airplane or a cloud passing over the zombie and the city street.  The sidewalk, pedestrians, and litter blowing in the breeze are all briefly in shadow.  What happens to the digitally inserted zombie?  It would look very strange indeed if the AR app didn’t appropriately change the lighting on the zombie.

Keep in mind that the human eye/brain combination is trained and practiced at doing this routinely without effort.

Back to virtual worlds for a moment: If the zombie were in a virtual world, the software simply renders both the changes in the zombie viewing angle and the changes in the surrounding environment at the same time in the same way.

An interesting side note – one advantage that computers and AR have over biological vision: while it will take computers (especially mobile devices) decades to match the processing power of your average dog’s visual cortex, when it comes to augmenting reality, computers have access to information that dogs, let alone humans, will probably never access directly: GPS data.

Here’s a thought experiment: blindfold a human being, load him into a car for an hour-long drive down winding roads, followed by an airplane ride of another hour, and then another car ride of a few hours.  Now take off the blindfold.  Unless the person has previously visited his new location and has a visual memory of it, he will have no idea where he is.

Now rewind to the beginning of the experiment.  Put an iPhone in the pocket of that same blindfolded person and take him through the same confusing route.  Odds are that within a few seconds of taking the iPhone out of his pocket and turning it on, the iPhone will know where in the world the traveler has arrived, to an accuracy of a few feet, and can display the location to him on a built-in mapping service.  It can even tell which way he is pointing the device.

AR applications often take advantage of GPS to identify where they are and what buildings, streets, and businesses are nearby.  However, GPS is not yet precise enough to plant a digital zombie’s feet on a real world sidewalk and zoom, pan, tilt, or roll past him as he staggers and lurches towards his next victim.
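The precision gap is easy to quantify with the standard haversine formula, which gives the distance between two GPS fixes.  In the sketch below (plain Python), the coordinates are invented and are separated by roughly the error of a typical consumer GPS receiver:

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) points."""
        R = 6_371_000  # mean Earth radius in meters
        p1, p2 = radians(lat1), radians(lat2)
        dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
        return 2 * R * asin(sqrt(a))

    # Two made-up fixes a typical consumer-GPS error apart: roughly 6 meters.
    print(haversine_m(40.71280, -74.00600, 40.71284, -74.00605))

A few meters of uncertainty is fine for naming the street you’re on; it’s hopeless for pinning a foot to one particular paving slab.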

Even with the help of real world GPS, AR is clearly harder computationally than VR.  True AR, properly done, has to recognize objects, appropriately occlude foreground and background objects, and keep digitally inserted objects tied to the environment regardless of camera angle or movement.

Revolution vs Evolution in Innovation

One useful way of analyzing the concept and processes of innovation is to make a distinction between Revolutionary and Evolutionary Innovation.  Both are valid and both produce excellent results.  But each is better suited to a different environment and, unfortunately, it’s not clear which produces better results.

Revolutionary Innovation

Revolutionary Innovation seeks to adapt the world to new ideas.

Revolutionary innovation is the type we see and hear about most in the United States.  On the one hand, it can quickly make available wondrous new products and services.  On the other, it is disruptive and expensive, and it produces unpredictable outcomes.  Revolutionary Innovation requires large pools of highly risk-tolerant investors who are prepared to make large capital investments to try something completely new, and those investors require, in turn, very large returns from the few major successes they back.

Diversity and high levels of education are essential ingredients of revolutionary innovation.  Entrepreneurs and investors are much more likely to develop wholly new approaches to business and technology if their day-to-day experiences include exposure to and stimulation by a host of differences in thinking and working, and if they have both the training and the intellectual capacity to act on new ideas.

Unsurprisingly, heroic role-models and celebration of the individual are conducive to revolutionary innovation.  A social system that admires game-changers like Bill Gates, Steve Jobs, and Jeff Bezos is likely to produce a large number of aspirants trying to achieve the same glory.

Digital photography is an excellent example of revolutionary innovation.  It changed the way people take, share, and use pictures, destroying an entire ecosystem of companies from local camera stores to giant manufacturing companies like Kodak.  It also made possible whole new business models: Facebook would not exist on the scale it does today without digital photography.

Evolutionary Innovation

Evolutionary Innovation seeks to adapt new ideas to the existing world.

Evolutionary innovation dominates in countries like Japan, but it is also broadly followed in most very large corporations, regardless of their national heritage.  Evolutionary innovation tends to be incremental in nature and less expensive to develop than revolutionary innovation.  Evolutionary innovation focuses on preserving or gradually changing existing fundamentals, including people, product, and business relationships.  Because the changes tend to be smaller, investment in evolutionary innovation tends to be smaller, and because the destruction wrought by evolutionary innovation tends to be less dramatic and spread over a longer time frame, the costs, both in terms of dollars and in terms of social and business disruption, tend to be smaller as well.

In part because the consequences of failure are smaller, evolutionary innovation also dispenses with the need for big risk takers and big rewards, for diversity in thinking and practice, and for lionized role models.

The automobile companies of Japan, especially Toyota, exemplify evolutionary innovation.  Through a steady stream of small changes in manufacturing, design, distribution, support, and integrated technologies, Japanese auto makers quietly achieved the goal that all Silicon Valley and Silicon Alley entrepreneurs say they aspire to but rarely reach: World Domination.

Revolutionary vs Evolutionary: Which is Better?

There’s no easy answer to which is better, revolutionary innovation or evolutionary innovation.  Evolutionary innovation is clearly more boring, but it’s also more secure because it’s more predictable and manageable.

Companies and economies that rely on evolutionary innovation need to keep that innovation coming at a rapid rate or they will get left behind by someone else’s innovations.  Similarly, no matter how quickly they may innovate in many small ways, evolutionary innovators are subject to game changers from the revolutionary innovators.  As mentioned earlier, evolution is less expensive, both on the development side and on the consequences side.  But entrenched interests, which tend to emerge when evolutionary innovation dominates, can stifle changes and improvements that are desired by – and in the best interest of – the majority, in favor of smaller or entirely different changes and improvements that serve their own interests.

In the long run, businesses and economies probably benefit from having a combination of both forms of innovation, which means having the social and economic infrastructure to support both.


Practical Applications of Virtual Reality

Virtual Reality designers have their heads in the clouds.  That’s fine for designers of MMOGs (massively multiplayer online games), but what about those who are designing “practical” virtual worlds – virtual worlds intended to meet business or professional needs?  Caught up in the coolness of doing new things with technology, virtual reality developers haven’t yet stopped to think about when and how virtual reality can actually be useful.

All too often, virtual worlds generally available today are used to present information and to do things that could be done more easily and more cheaply in other ways.

Features like avatars that fly are fun, but essentially meaningless.  In fact, when it comes to non-entertainment uses of virtual worlds, almost everything built to accommodate or enhance avatars, from lecture halls and amphitheaters to lifelike animation of human movement and fancy clothes, is a waste of coding time, processing power, and bandwidth.

Displaying two dimensional, billboard-style displays of text or graphics (including videos) inside a virtual world is like putting twistable knobs on a smartphone to control volume and brightness.  If little knobs are ok, why use a smartphone, and if you’ve got a smartphone, why use little knobs?

When determining when and how to use a virtual world in a professional environment, ask a few simple questions:

1. Is doing something in virtual reality a substantively better experience than simply doing it “in your head” or by some other less flashy means?  Do you learn more?  Do you work faster?  Can you communicate better?  If so, why and how?

2. Does the improved experience provided by virtual reality justify the cost of creating a virtual world and operating in it?  (Note that sometimes, virtual worlds are cheaper than working in the real world, especially if participants are widely dispersed geographically, but they need to come together to work on something.)

3. Does virtual reality make possible something that was not previously possible?

4. What is lost by working in a virtual world rather than an off-the-shelf computer application or even a pencil and paper?  (This is sometimes a much harder question to answer than it seems, at least until after the virtual world has been implemented.)

If you don’t have clearly affirmative answers to the above questions, the odds are very good that the virtual reality effort will be interesting but ultimately unproductive, with a low or negative return on investment.  Proponents of virtual worlds need to keep in mind that businesses ultimately make cost-benefit decisions, so as cool as virtual worlds may be, if you can’t produce convincing, positive answers to the questions above, you’re not going to get a serious hearing from business people.

Here are some guidelines showing who in the real world (other than gamers) is likely to benefit from virtual reality:

1. Teams that are geographically dispersed in the real world and need to examine a 3D object together in real time.  (e.g., An architect, a contractor, and a group of business executives discussing the layout of a new factory or office building.)

2. Teams or individuals that need to study possible changes to the real world.  (e.g., A consumer wanting to try on new clothes or a different hair style.)

3. Individuals or teams that need a safe space to experiment with an object that is dangerous or inaccessible in the real world.  (e.g., Engineers trying to understand how to deal with a damaged nuclear power reactor or oil rig.)

4. Individuals or teams who need a safe space to practice dealing with a scenario that is dangerous in the real world.  (e.g., Soldiers practicing cultural interaction with local non-combatants in occupied territory.)

5. Individuals or teams who need to examine an object that has not yet been created.  (e.g., Engineers testing parts for a new jet engine.)

6. Individuals or teams who need to examine an object that is too small or too large to be directly studied in its natural state.  (e.g., Pharmaceutical researchers or students examining a molecular reaction, or astrophysicists analyzing models of galaxies or galaxy clusters.)

7. Individuals or teams who need to examine a 3D process whose real-world time scale is too long or too short to be adequately studied in its natural state.  (e.g., Astrophysicists studying the creation of planetary systems or particle physicists studying the decay of one particle into another.)

Notice that few of the examples above require an avatar or a complete recreation of reality.  Those features would be nice to have, but they are not necessary for a productive, cost-effective use of virtual reality.

Augmented Reality vs Virtual Reality

Augmented Reality (AR) and Virtual Reality (VR) are sometimes confused in conversations, but in fact they have only one thing in common: the word “reality”.  Otherwise, they are completely different.

  • They are used for different purposes.
  • They use different technologies.
  • And, not least of all, augmented reality is much, much harder to do.

In this post we will look primarily at differences in the way the two technologies/services are used.  In a subsequent post we will look more closely at differences in technologies and the added complexities in implementing augmented reality.

Ok.  Saying augmented reality and virtual reality are “completely different” may be a bit of an exaggeration.  Both use computers and both can be used for entertainment.  Both commonly use graphics and animations.  And both can be interactive.

The best example of augmented reality is the yellow first down line in TV broadcasts of American football games.  That line doesn’t actually exist on the field, and if you’re sitting in the stadium there’s no yellow first down line to see.  The line is added digitally in the few seconds between the time a TV camera at the stadium takes in an image of the game and the time the image is reproduced in your living room or sports bar.  The heads-up displays used by pilots are also forms of AR.  Augmented reality adds digitally supplied images or information to real world images.

Virtual reality, on the other hand, completely removes all traces of or connections to the real world and replaces the real world with a digitally produced artificial world.  Second Life, Cloud Party, and games such as World of Warcraft are virtual worlds that can be accessed using an ordinary computer, but if you’ve ever been exhilarated by a ride on Disney’s Star Tours, you’ve experienced immersive virtual reality, which stimulates all your senses.

Not so long ago, the term “virtual reality” encompassed both VR and AR.  Before distinctions in technology and application separated them, science fiction authors such as William Gibson used the terms interchangeably.  The popular press continues the practice, but, as the two services have become more commonplace in both professional and entertainment applications, the distinction has become apparent to more people.  Even the definitions of VR and AR provided by Wikipedia differentiate between them appropriately.

Today, VR is primarily a means of entertainment.  MMOGs (massively multiplayer online games), for example, take place in virtual worlds in which thousands of players (hence “massively multiplayer”) participate in the same online game at the same time.  The worlds of World of Warcraft, Final Fantasy, and Starcraft are very complete universes that replicate many features of reality, including complex histories, large geographies, and constantly shifting socio-political rivalries.  These virtual worlds include mountains, prairies, plants, animals, buildings, and “intelligent” beings (non-player characters or NPCs), many of which respond to interaction.  Players visit these worlds to entertain themselves both through interaction with the virtual worlds and through interaction with other players.

Increasingly, large government organizations such as NASA and global corporations such as IBM, BP, and SAP use virtual world technologies to facilitate communications among employees and with service partners who are separated by geography or time zones.  These programs reduce costs, increase cooperation and innovation, and improve time-to-market.

Augmented reality, on the other hand, is far more likely to be used to provide basic information.  The yellow first down line in TV broadcasts of football games facilitates the enjoyment of the game, but it’s still providing information: the line itself is not a form of entertainment.  Star Walk is a smartphone app that allows you to point your camera at any part of the sky, day or night, and see what constellations, satellites, or planets are in that part of the sky.  As you move your camera, the information adjusts in real time.  Many people might find this “fun,” but they are more likely to consider it informative.

In augmented reality, virtual objects may be interactive and respond to clicks or gestures, but interaction with the objects is a means to an end.

Clearly, even the shared word “reality” refers to something very different in virtual and augmented realities.  The reality of AR is the same reality that everyone experiences through their senses from the moment they have consciousness.   We can modify and manipulate it, but it exists independent of human creative efforts.  A virtual reality, on the other hand, exists solely because someone or a group of people conceived of it, designed it, and created the computer programs that bring it to life.  Humans may use their senses to interact with a virtual reality, but in addition to their senses they need some sort of interface (a mouse, a keyboard, a gesture interpreter, a display) that allows them to experience and interact with the virtual world.


Augmented Reality: Towards a Better Working Definition

Common definitions of Augmented Reality (AR) are unnecessarily myopic and restrictive.  One professionally competent definition goes like this: “The ability to seamlessly and dynamically integrate graphic and other multimedia content with live camera views on PCs and mobile computing devices such as your smartphone.”  (Mimi Sheller, Professor of Sociology, Drexel University, and Director, Center for Mobility Research and Policy.)  Professor Sheller has done a reasonably good job of describing the state of AR as popularly perceived today, but as is often the case, her definition is limited to her area of specialization and looks only at the near term.  For this fast-changing field, what about tomorrow, and what about the broad range of fields that are looking at AR applications?

A better definition of AR would be: Augmented Reality (AR) is the artificial, seamless, and dynamic integration of new content into, or removal of existing content from, perceptions of reality.

The best example of practical augmented reality today is still the yellow first down line used in TV broadcasts of American football games, even though this implementation of AR dates from the late 1990s and makes very little money.  If TV viewers don’t stop to think about it, they believe that the first down line is actually on the field.  It moves with the field, not the TV screen, and it disappears behind players as they walk across it, just like the white yard lines that really are painted on the field.  Similar technologies are used in TV broadcasts of other sports, sitcoms, and talk shows to add useful information or advertising to the TV video streams.

Examples of AR implementation that are more fashionable and better fit generally used definitions of AR are the iPhone app Star Walk and the Starbucks promotional app used around Valentine’s Day in 2012.  Or the McDonald’s app currently used in Australia, which is probably the best mobile promotion using augmented reality created to date.

AR is, however, in its infancy, so a definition of AR needs to be broad enough to include more than what is available today.

If the augmentation is not seamless and dynamic, it’s not augmented reality.  The provided information or stimulus should appear as part of reality, otherwise it’s not augmenting the reality.  That’s clear enough, so that needs to be part of the definition.

The content can, of course, be computer generated, like the first down line or the images in the iPhone apps mentioned above, but it could also be any form of new content that does not already exist in the reality being perceived.  Disney famously produced an AR event in New York City’s Times Square in which passers-by interacted with Disney characters (people in costumes, not computer-generated images) who were actually in a studio outside Times Square but appeared on giant monitors above the entrance to the Disney Store, where everyone in Times Square could see them (no PC or mobile device required).  That’s clearly AR, so the definition needs to encompass such implementations.

Also, why restrict ourselves to PCs and mobile devices for delivery?  The heads-up displays planned for near future generations of automobiles, like those on a modern fighter jet, are not exactly the sort of PC or mobile device envisioned in many definitions of AR, but most of us would agree that a heads-up display qualifies as augmented reality.  The TVs used to see the yellow first down line are neither PCs nor mobile (for the most part).  Any delivery mechanism is acceptable, as long as it results in an augmentation of reality.

Going a step further, why does AR have to be visual?  Auditory information, for example, can be communicated through headphones.  An app for blind people could provide street names and building numbers through a headset.  Deaf people could be given a tactile sensation when a car honks or a siren approaches.  At the other extreme, noise-canceling headsets are, arguably, augmented reality devices because they change the perception of reality – in this case by reducing the perception of extraneous real sounds that interfere with the perception of other sounds, real or artificially introduced, or simply with a perception of quiet.  Indeed, any sense could be subject to perceptual change.  One can conceive of olfactory augmentation to alter the taste of food.  A definition of augmented reality needs to encompass these ideas as well.

In the end, we’ve eliminated from common definitions of AR the restrictions imposed by requiring a) computer generated content, b) visual systems such as a camera, and c) a PC or a mobile device for delivery.   We have maintained, however, AR’s tight connection with a) reality, b) the altered perception of reality, and c) the addition or removal of content.  The better definition gives augmented reality researchers and product developers a broader and more accurate spectrum for innovation while maintaining the connection with existing products, services, and technologies.  Augmented reality is the artificial, seamless, and dynamic integration of new content into, or removal of existing content from, perceptions of reality.

The Five Obstacles to Asking Questions

Innovation requires a culture that not only accepts but embraces the questions that lead to change.  Here are the five classic obstacles to questions and change.

Tight Schedules: People are busy and asking questions requires a bit of trial and error.  Questions assume a willingness to listen to and evaluate a range of answers (your own and those from other people).  Sometimes asking questions results in answers that impose delays, and anyone working under a tight deadline or a full daily schedule will avoid anything that causes delays.

Blinders: “I’m the expert.”  This is the single most insidious obstacle to change and innovation.  I once joined the Board of an organization set up to support the local library.  I posted a question on the Board’s email list about the future of public libraries.  My objective was to stimulate a discussion because I was then (and am now) convinced that public libraries are going to change dramatically and no one can know for certain what those changes will be.  (Who at Kodak, the photography experts from the ’60s through the ’90s, understood the implications of digital photography?)  The result of my query was a lecture from the two librarians in the group.  They were the experts.  They knew the future.  They knew the answers.  The question created an opportunity for the librarians to get fresh, outside input on a matter that is absolutely critical to their professional lives, but instead they shut down the discussion by being the experts and providing “the answer.”

Inflexibility: “We’ve always done it that way.”  People often have a process that is familiar, so they can follow it automatically.  Working on autopilot is easier than absorbing and adjusting to change.  One of my assignments when I was sent to Japan by Beckman Instruments was to introduce a new competitive strategy.  Beckman sold scientific instrumentation that produced reams of data.  Scientists were expected to manually analyze the data, a process that took much longer than generating the data.  As competitors steadily commoditized the hardware side of the business, Beckman sought to shift the competitive field by integrating data analysis software into the instrumentation.  Around the world, but especially in Japan, sales reps had trouble with the new strategy.  They were used to selling hardware specs, not software conclusions.  Software was seen as a give-away rather than a strategic value-add worth money to the customer.  When asked why they resisted the new strategy, they simply said, “That’s not the way we’ve always done it.”  When I pointed out that Japanese companies like Sony, Matsushita, and Toyota had created their global successes by establishing traditions that were, at one time, new and highly innovative, the reps slowly – over time and many beers – came to accept the value of change.

Routine: “If it ain’t broke, don’t fix it.”  There will always be some people who just don’t like change.  My grandmother was that way.  I’ve come to correlate resistance to change with “being old,” and I have known a few people in their twenties who, in this respect, were more “old” than other people I have known in their 80s and 90s.  My father, at the age of 85, sold his Southern California home of 31 years and moved to Colorado.  He loved the California house.  It was full of memories, art, and souvenirs from a lifetime of global travel.  But he knew it was time for a change, so he moved.  The changes in climate, physical living facilities, people, and daily routine that the move forced him to accommodate have kept him mentally and socially young.  At 88, my father is less “old” than many people I work with on a day-to-day basis.

Disconnect: “It’s not my job.”  Have you ever had to work with someone who, every time he was asked to do something, would get a look on his face that said, “Hmmm….  Is this in my job description?”  That reaction turns up frequently when people are asked to think about ways to make changes that will improve someone else’s (or even their own) circumstances.  On the surface, the work done by an executive assistant in the finance department can seem removed from the company’s product or service offering as seen by the customer, but every role in the company impacts cost and quality, which are inherent to the customer experience.  Therefore change – and the acceptance of the questions that lead to change – is every employee’s responsibility.

Innovation: The Bare Naked Essentials

Questions are fundamental to innovation; the two are inseparable.  Right now, out there somewhere, someone is asking questions about how to do what you do better, faster, cheaper.

Whether you work on an assembly line, in a rock-and-roll band, or in a library, changes are being planned that will affect what you do, and the people making those plans are doing something very simple: They are asking questions.  You need to ask questions, too.

The first and most important questions to get right are:

  • “Who is the customer?” and
  • “What matters to the customer?”

Sometimes the customer is hard to identify.  Take clothing for toddlers, for example.  Toddlers don’t buy the clothes they wear, and they rarely influence the decision making process by expressing a particular preference before the purchase is made.  Therefore, the purchase decision for toddler clothing is usually made by a parent, a relative of the parents, or a friend of the parents.

In the case of toddler clothes, the user is not the customer, and the user has no influence whatsoever over the initial purchase decision.  (The toddler may express a distaste for a clothing item, but that comes after the purchase, so it may affect future purchases but not the initial one.)  Take it from me: the most important feature in toddler clothing purchases is durability, especially durability through repeat washing – something about which no toddler would ever be concerned.

What about a company that makes siding for houses?  A siding company may seek to improve competitive advantage by, for example, innovating in product durability or ease of installation.  The former is important to homeowners and the latter is important to contractors and builders of large developments.  Therefore, which innovations a siding manufacturer chooses to pursue depends on who it decides its customers are.

Another key question: “Why innovate?”

Sometimes the objective is pure avarice: more profit.  Of course, profit is always present in the decision making process, but most likely there are additional objectives as well.  The siding manufacturers mentioned above are trying to avoid price competition by differentiating themselves from each other.

A soft drink company that substitutes less expensive corn syrup for sugar in a soda is simply trying to lower costs so it can increase profit margins, but if the company changes the size of the soda can when introducing that same soda into the Japanese market, the change has more to do with meeting the needs of local channel partners and distribution practices.

Now ask, “What is our purpose?”  And think about that question in the context of how you will make yourself different.

Since Steve Jobs returned to Apple in the 1990s, a major consideration in Apple’s new products, universally admired as innovative, has been style.  Apple requires its product teams to subordinate technical creativity to usability and aesthetics.  The major technical innovations are only important to the extent that they serve the design objectives, especially functionality and sexiness.

Compare Apple’s Newton with its iPhone successor.  Take a look at the pictures of the two devices with the hands normalized to (roughly) the same size.  Consider the dimensions of the two instruments.  Now look at the screen layout.  Look at the styling of the cases.  Think about the way users are expected to interface with the devices (stylus vs fingers). Consider the range of uses (apps) available for each and the ease with which users can add or remove functionality (and the ways that Apple makes money from the apps).

Yes.  Of course, the technologies that make the iPhone what it is today simply were not available 15 years ago when the Newton was first introduced.  But don’t think for a moment that many consumers notice or care about the iPhone’s technology.  Consumers are not interested in the geeky side of GPS services, super hardened glass, accelerometers, programming languages, or gesture interfaces; they care about maps that tell them where they are and how to get where they want to go, products that fit into their pockets, and getting to the information and services they need with as little time and effort as possible.  They don’t care about 3G vs 4G LTE vs 4G; they care about how long it takes for maps or Facebook or photos to download and update.

What they care about is the look, the feel, the comfort, and the functionality of the device in their hand.

Steve Jobs and his product team played a simple thought game and asked what it was that consumers wanted from a smartphone.  They decided that “cool” for the consumer is not the same as “cool” for an engineer.  Consumers want ease of use, functionality, and sexiness.

The Newton came up short on all criteria.  So Jobs killed it.

Since Apple introduced the iPad, Amazon, Barnes & Noble, Samsung, and Google have introduced products intended to compete with it.  But are any of those products substantively different from the iPad?  They may be simpler and cheaper, but they copy the fundamental features and functionality, in one way or another, of the iPad.

The iPad’s competitors do not rethink the questions “Who is the customer and what does he want?” the way the designers of the iPhone did.  They see their purpose as making devices similar to the iPad.  Therefore, they are not likely to change the tablet market the way the iPhone changed the cell phone market.  But out there somewhere, someone – perhaps someone at Apple – is asking the questions.  And eventually, change will come.

Masayoshi Son Tweaks Some Noses Again

American mobile service providers are doing their customers a disservice.  In the end, it is the mobile service providers who will suffer.

A couple of weeks ago, a friend from Japan visited the US for the first time.  This is a friend who speaks little English and has spent a lifetime working for the Japanese government focused on domestic issues.  After he had spent a week traveling from Boston to New York City and Washington, DC, he spent a weekend at our house in New Jersey.  I felt compelled to ask him, “What do you notice that’s different here in the US?”  It was an open ended question.  He could have commented on anything, but he replied, “Mobile internet access is really slow.”

At the time, I laughed.  His comment was totally unexpected.

His observation, however, fits anecdotally into a pattern set by hard data: The internet infrastructure in Japan (and many other parts of the world) far surpasses America’s.  Looking specifically at mobile access to web pages, Google found that the average web page downloads in 4 seconds in Japan but takes more than twice as long, 9.2 seconds, in the US.  Nine seconds is an eternity when accessing data.

Part of the difference is the deployment of 4G and LTE data networks versus 3G networks.  Theoretically, 4G networks can handle 100 Mbit/s throughput for devices on trains or in cars and 1 Gbit/s throughput for devices held by pedestrians.  3G, on the other hand, maxes out at 56 Mbit/s.  Then there are the latency issues….

Another part of the difference is the willingness of the providers to make maximum use of the available technologies.  In other words, they’re holding out on their customers.  When was the last time you had 56 Mbit/s throughput on your 3G phone?  Or even your 4G phone?
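A little arithmetic shows why the gap matters.  In the sketch below (plain Python), the 56 Mbit/s figure is the theoretical 3G peak quoted above; the “real-world” throughput numbers are illustrative guesses, not measurements:

    # A 2-megabyte web page is 16 megabits.
    PAGE_MBIT = 2 * 8

    for label, mbps in [("theoretical 3G peak", 56),
                        ("real-world 3G", 2),
                        ("real-world 4G/LTE", 12)]:
        print(f"{label}: {PAGE_MBIT / mbps:.1f} s to transfer the page")

At a realistic few Mbit/s, transfer time alone lands in the same several-second range Google measured for US mobile page loads; at anything close to the theoretical peaks, it would be a fraction of a second.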

Earlier this month, Japanese mobile services provider Softbank Mobile announced plans to take a 70% stake in Sprint Nextel.  Eight billion dollars of Softbank’s planned $21B acquisition will be used to improve Sprint’s infrastructure, shoring up the weaknesses in its 4G network.

How much pressure that will put on AT&T and Verizon, the American mobile heavyweights who are also rolling out 4G networks, depends in part on how aggressively Sprint pursues customer acquisition.  Masayoshi Son, Softbank’s founder and Chairman/CEO, has a reputation for taking big risks and competing aggressively, so aggressive competition is a foregone conclusion.

A Japanese national of Korean ancestry who was educated at UC Berkeley, Son is accustomed to being an outsider and has frequently taken contrarian positions, placing big financial bets and coming off the winner through shrewd and determined business strategies.  At various times in his career Son has unabashedly annoyed Japanese business leaders and government officials by making business decisions that did not conform to the established order.

Each time he has survived, and on most occasions he has come out ahead; starting from very humble beginnings, Son is now the second wealthiest man in Japan.

In 2006, Son directed Softbank to acquire Vodafone’s failing Japanese mobile subsidiary, which was losing tens of thousands of subscribers per month to the two top mobile providers, NTT DoCoMo and KDDI.  After rebranding, investing in infrastructure, and acquiring exclusive rights to the iPhone in Japan, Son’s newly named Softbank Mobile quickly attracted subscribers and took a solid third place in the mobile market on the strength of its data services for smartphones.

Not only did Softbank leverage high speed data services to attract the bulk of the new customers coming into the mobile market, it stole customers from DoCoMo and KDDI.  Surprised and feeling the pressure from this upstart, the market leaders have been forced to improve their service offerings as well.  4G and LTE are now ubiquitous in Japan, and the vast majority of Japanese customers use them to access maps, graphics, and other throughput-intensive applications.

Instead of whinging about how customers use their smartphones for data-intensive applications, the way America’s mobile operators have, DoCoMo, KDDI, and Softbank have moved to provide their customers with what they want, at lower rates than Americans pay.

Sound familiar?

In the 1970s, America’s auto companies were put on the defensive by better-quality, less expensive Japanese auto manufacturers, who were simply doing what they should do: competing on quality and value.  Is it possible Masayoshi Son is poised to repeat the trick on US mobile service providers?  Furthermore, will American mobile providers prove agile enough to respond to a competitor who plays by different rules?  Masayoshi Son may not be a “typical” Japanese executive, but like other Japanese business people he believes in competing by providing the best possible services to his customers at a reasonable price – and making himself, his colleagues, and his investors wealthy in the process.