Human 2.0 is the Next Big Thing

We’re about to upgrade the human race. It’s more than a technology shift; it’s a cultural one, and perhaps the first step toward the singularity. This is most of what I’ve been thinking about lately. We’re sliding into it day by day, without noticing. I firmly believe it is the most significant change the human race faces, and it’s going to drive a tremendous amount of business and fuel wide-ranging ethical discussions. Most of the other technologies we cover here and elsewhere are simply building blocks for Human 2.0.

This is the first of many posts on the subject, and it’s still a bit muddy. Hopefully we can clarify things in the coming months. But if you’re willing to wade through some still-addled thinking, read on.

I’m lucky enough to be inundated by technology. Drinking from the blog/conference/client firehose, it’s easy to get caught up in all of the opportunities out there. Cloud computing, in particular, represents a tectonic shift in the capital structure of information processing. Mobility is huge, outstripping desktop computing worldwide. The switch from pull-centric discovery (search) to push-centric discovery (FriendFeed or Twitter) is also important, with far-reaching consequences for the way we learn about new things. So is contextual search.

But none of those keep me up at night. All of the aforementioned technologies are just components of a bigger whole. I’m obsessed with the human race growing new senses.

The convergence of location-based services, portable computing, haptic interfaces, cloud computing, search, and several other innovations will result in broadly adoptable consumer technologies that effectively give us new ways of perceiving the world. We’ll take them for granted.

I’ve been reading about this (less in science fiction and more in the real world) a lot lately, and I want to start covering it more. Unfortunately, right now it’s a scattered space, better explained through example and postulation than through a taxonomy. Augmented Reality (AR) has been a topic for decades, but it’s only now, with ubiquitous broadband and usable consumer portability, that it’s possible on a big enough scale to matter.

In science fiction, AR is often depicted as goggles with a heads-up display. The wearer sees additional information overlaid on the world. Consider a fireman who sees the number of occupants in each house, or a cop who’s able to see traffic violations and current vehicle speed as he drives down the highway. Or closer to home, someone looking for coffee who sees giant, Pavlovian Starbucks logos floating over their field of vision.

In that vein, consider:

Publications like Tag Magazine cover this subject, and it’s often touched on in Wired and elsewhere. The US military is exploring the technology, aided in some cases by science fiction writers like Bruce Sterling. And video games are teaching a generation to manipulate the world around them through simple interfaces: the mouse-look, WASD-move convention is pretty standard by now.

Overlaying data onto goggles is hard for two reasons. First, you need to figure out which way the user’s eyes are pointing really, really fast, or you induce nausea; that means head tracking and image recognition at very low latency. Second, the goggles themselves are clunky and hard to tolerate, causing eye strain. So today, we dispense with the goggles entirely. Google Maps and Google Earth both augment reality, overlaying knowledge and conversation. In controlled environments like museums, positionally-triggered audio gives us additional information.
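The museum case is worth making concrete, because the logic is so simple: match the visitor’s position against a list of trigger zones and play the corresponding clip. Here’s a minimal sketch in JavaScript; the exhibit names, coordinates, and radii are all invented for illustration.

```javascript
// A minimal sketch of positionally-triggered audio: pick the exhibit
// whose trigger radius contains the visitor. All exhibit names and
// coordinates below are invented for illustration.

// Haversine distance in meters between two lat/lon points.
function distanceMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // Earth radius in meters
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Return the audio clip for the first exhibit whose trigger radius
// contains the visitor's position, or null if none does.
function clipFor(position, exhibits) {
  for (const ex of exhibits) {
    if (distanceMeters(position.lat, position.lon, ex.lat, ex.lon) <= ex.radius) {
      return ex.clip;
    }
  }
  return null;
}

// Hypothetical museum floor plan.
const exhibits = [
  { clip: "dinosaurs.mp3", lat: 45.5017, lon: -73.5673, radius: 10 },
  { clip: "meteorites.mp3", lat: 45.5021, lon: -73.5669, radius: 10 },
];
```

In a browser, you’d feed updates from the standard Geolocation API (`navigator.geolocation.watchPosition`) into `clipFor` and start playback when the returned clip changes.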

And this is where, if you’re still with me, it gets messy. Because there’s a parallel AR happening right now — online.

If you’ve got the Skype plug-in installed, the web is different for you. Every time a phone number appears, it’s displayed differently so you can click it. There are lots of other tools and applications that augment your online experience. PMOG and Webwars turn websites into games. Bubblecomment lets people stick a video atop a page (admittedly, a copy of the page). There are dozens of other companies looking to extend the browser and the web, many of whose names escape me. Chrome just puts more into the mix, and with standards like Web Sockets, which allow fast two-way communication between browsers and servers, the variety of augmentations will increase dramatically.
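The phone-number trick is a good illustration of how mechanical this kind of augmentation is. A rough sketch, not Skype’s actual implementation: scan text for things that look like phone numbers and wrap them in clickable links. The regex is deliberately simplistic.

```javascript
// A rough sketch of what a Skype-style augmentation does conceptually:
// find phone numbers in text and turn them into clickable links.
// This is NOT Skype's actual code; the pattern below is deliberately
// naive and only matches one common North American format.

const PHONE = /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g;

// Rewrite a fragment of text, wrapping each phone number in a
// callto: link (the URL scheme Skype registers).
function linkifyPhones(text) {
  return text.replace(PHONE, (num) => {
    const digits = num.replace(/\D/g, "");
    return `<a href="callto:${digits}">${num}</a>`;
  });
}
```

So `linkifyPhones("Call 514-555-1234 today")` wraps the number in a `callto:` link. A real plug-in walks the DOM’s text nodes rather than regex-rewriting raw HTML, so it doesn’t mangle attributes or scripts, but the principle is the same.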

To augment the web, developers either rely on a plug-in within the browser, which gives them more development power and control but requires more work from the user, or embed JavaScript in the page. Bitcurrent, for example, uses Apture to embed certain kinds of links, via a JavaScript snippet that calls Apture’s service.

Surfing is easier to augment than the real world. You don’t have to figure out where the user is looking, or what’s in their field of vision. You can parse the page content and provide context, augmenting the data, translating it, and layering on information. Call it Augmented Surfing, the online equivalent of Augmented Reality. Because it’s easier to augment surfing, expect many AR innovations to play out online first, then find their way (through an iPhone) into the real world.
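To make “parse the page content and provide context” concrete, here’s a toy sketch of Apture-style annotation: the first mention of each glossary term gets a definition attached. The glossary is invented for illustration; a real service resolves terms against its own index.

```javascript
// A toy sketch of Augmented Surfing: scan text and layer extra context
// onto it by attaching a definition to the first mention of each
// glossary term. Terms are assumed to be plain words (no regex
// metacharacters). The glossary entries are invented for illustration.

function annotate(text, glossary) {
  let out = text;
  for (const [term, definition] of Object.entries(glossary)) {
    // Wrap only the first occurrence, preserving the original casing.
    const re = new RegExp(`\\b${term}\\b`, "i");
    out = out.replace(re, (m) => `<span title="${definition}">${m}</span>`);
  }
  return out;
}

// Hypothetical glossary.
const glossary = {
  haptic: "relating to the sense of touch",
};
```

Running `annotate("A haptic interface vibrates.", glossary)` wraps “haptic” in a span whose tooltip carries the definition. The same shape works for translation or link injection: parse, match, decorate.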

There’s a lot to consider here. How will this affect privacy? Will this benefit or hinder those with disabilities? When I run a website, will I be able to see what others are doing on it, or is a URL simply a pointer to dozens of online experiences, many of which I never see?

Augmented Reality, and its sister, Augmented Surfing, are headed for broad consumer adoption. We’ll think nostalgically of the days when we had only five senses, and couldn’t bring up overlays of the real world with a flick of the wrist or a tap of the finger. This will transform the way we perceive the world. It’s most of what I think about these days, and I’m hoping some of you will come along for the ride.
