All Video

Posts
Video

Increasingly, our everyday lives are influenced by computing algorithms that we cannot see or control.

This somewhat alarmist but nonetheless well-grounded statement comes from Kevin Slavin in his recent TED talk (shown in the embedded player to the right). It’s not just financial markets: movie scripts, book recommendations and advertising selections are all affected, as the online and media world increasingly uses software algorithms to tailor itself to what a mathematical equation thinks we want.

I find one of the most alarming examples to be Facebook’s algorithm for determining what counts as “top news”. Effectively, Facebook is deciding for you which of your many friends’ updates are most important, and the implications of that are quite scary. What if a friend thinks you are not listening because Facebook filtered out their update? Or what if you miss an opportunity for a future romantic involvement because Facebook hides a party update from someone it thinks you don’t care about?

Increasingly in the future we are going to have to think carefully about what decisions we allow software to make for us, and what things we should keep full control of ourselves.

Watch the TED video here or embedded above, or read the BBC News article for more information.

Posts
Video

The sequel to Deus Ex, one of the top-ranked games of all time and a pioneer of the cyberpunk genre, is nearing release. It paints a pretty bleak picture of human augmentation. But this live-action trailer goes way beyond promoting a game; it’s a short film in its own right on the consequences of human augmentation.

Watch the clip. Forget it’s a video game. How likely is this kind of thing in coming years?

Featured
Links
Posts
Video

You know that scene in Blade Runner where Harrison Ford uses a computer to zoom, refocus and travel in 3D space within a photograph? For years we’ve all thought that would be forever impossible, but new technology from Lytro suggests that this sort of thing may soon be within reach.

Their forthcoming light field camera captures not just one perspective of a scene, but uses a microlens array to capture the entire light field, meaning that the 3D space from which the light originated can be explored after the photo is taken – so you can change which part of the scene is in focus, generate 3D images or even peek “behind” foreground objects.
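If you’re curious how after-the-fact refocusing can work at all, the core idea is surprisingly simple: once you have the full set of sub-aperture views, you shift each one in proportion to its offset from the central view and average them, so that objects at one chosen depth line up while everything else blurs. Here’s a toy “shift-and-add” sketch in NumPy – this is a textbook illustration of the principle, not Lytro’s actual processing pipeline, and the function name and integer-shift simplification are my own:

```python
import numpy as np

def refocus(light_field, shift_per_lens):
    """Synthetically refocus a 4D light field L[u, v, y, x].

    Each (u, v) sub-aperture view is shifted in proportion to its
    offset from the central view, then all views are averaged.
    Varying `shift_per_lens` moves the plane of focus.
    """
    n_u, n_v, h, w = light_field.shape
    cu, cv = n_u // 2, n_v // 2
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            # integer pixel shifts for simplicity; real systems interpolate
            dy = int(round(shift_per_lens * (u - cu)))
            dx = int(round(shift_per_lens * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)
```

A point whose sub-aperture images drift by one pixel per lens snaps into sharp focus when `shift_per_lens` cancels that drift, and smears out at other settings – which is exactly the refocusing effect in Lytro’s demo gallery.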

The Silicon Valley startup clearly faces technical and financial challenges in turning its prototypes into an affordable consumer product – but the cat is out of the bag on the idea, and we can expect camera manufacturers to race to catch up and enter this brand new market. This is a disruptive technology with huge potential to change the way we think about photography. Soon we may have a completely new kind of camera, one that can truly capture a moment in a way we never thought possible. Some are wondering if it will take the skill out of photography, while others are already speculating about what this might do to re-ignite 3D film-making.

Read more details at AllThingsDigital and try refocussing images for yourself in Lytro’s Picture Gallery.

Links
Video

Zdenek Kalal, a PhD student at the University of Surrey, has developed an impressive real-time system that looks within a live camera feed for an identified object or person, then watches and learns to track that object as it rotates, moves, disappears and reappears. He demonstrates a prototype of the system in the video shown to the right.

The project won him the ICT Pioneer award and has attracted a great deal of attention from press and industry alike, as it could enable a plethora of image-tracking applications, from security systems to video stabilization and control systems for people with disabilities.

What is remarkable about the system is that it needs no special training (for example, learning what a face is): you can simply identify an object on screen and the system will learn to track it. It looks like the stuff of science fiction, but it’s very real. Read more on his project page.
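To get a feel for the “learn as you track” idea, here is a deliberately minimal sketch: match a template against each frame, then blend the matched patch back into the template so the model adapts as the object changes appearance. This is a crude stand-in I wrote for illustration – Kalal’s actual system (tracking, learning and detection running together) is far more sophisticated, and the function names and learning rate here are my own assumptions:

```python
import numpy as np

def match_template(frame, template):
    """Return the top-left (row, col) of the best match by sum of
    squared differences -- a crude stand-in for a learned detector."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            score = np.sum((patch - template) ** 2)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

def track(frames, template, learning_rate=0.3):
    """Follow the template through a frame sequence, blending each
    matched patch back into the model so it adapts over time."""
    template = template.astype(float)
    positions = []
    for frame in frames:
        r, c = match_template(frame.astype(float), template)
        positions.append((r, c))
        # online learning step: update the model with what we just saw
        patch = frame[r:r + template.shape[0], c:c + template.shape[1]]
        template = (1 - learning_rate) * template + learning_rate * patch
    return positions
```

Even this toy version shows why no pre-training is needed: the model is simply whatever you clicked on, updated frame by frame.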

Links
Video

The non-profit grassroots organization ahumanright.org recently launched a bold new campaign to help to bring Internet access to some of the 5 billion people who aren’t online. They hope to raise sufficient funds to buy the abandoned TerreStar-1 satellite and offer free Internet access to citizens of impoverished nations, funded by renting usage of the satellite to other communications companies.

If it succeeds, it could become a lot harder for governments to shut down the Internet in their countries during civil unrest, as the satellite coverage would span international boundaries and the organization would be managed with a human right to information at its core.

If you have a spare $1m lying around you can make a donation at http://www.buythissatellite.org/. Read more at TIME or watch the TEDx talk.

Featured
Posts
Video

Periodicity

I have a rather awkward subject to discuss. The last time I brought it up in mixed company, someone slapped me. But I’m going to do it anyway, because it’s worth discussing.

Natural language processing and semantic analysis allow us to extract sentiment from documents. Marketing organizations and community managers rely on tools from Scoutlabs, Radian6, and others that try to understand how online communities feel about their brands and products.

As we share more of our lives online, there’s more to analyze. Researchers from Northeastern University and Harvard University analyzed Twitter’s mood over the day. This kind of sentiment analysis can look at someone’s online messages and decide whether they’re angry or content, happy or sad. Given data over time, it can likely recognize patterns of mood, even cycles.

Such as those that occur every twenty-eight days.

(It’s at this point that my dinner companion launched a well-aimed palm at my somewhat scruffy chin.)
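The cycle-spotting step, at least, is easy to sketch. Given a daily mood score (however the sentiment tool computes it), you can look for the lag at which the series best correlates with itself – the dominant period. This toy example is my own illustration, not the researchers’ method, and the synthetic data and function name are assumptions:

```python
import numpy as np

def dominant_period(daily_mood, min_lag=2):
    """Estimate the strongest cycle length in a daily mood series
    by finding the lag with the highest autocorrelation."""
    x = np.asarray(daily_mood, dtype=float)
    x = x - x.mean()
    n = len(x)
    best_lag, best_corr = None, -np.inf
    for lag in range(min_lag, n // 2):
        # correlate the series against itself shifted by `lag` days
        corr = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag
```

Feed it a couple of months of mood scores that swing on a twenty-eight-day rhythm and it will hand the number right back – no slap required.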

Links
Video

Swedish design company TAT just launched this video imagining the future of screen technology. There are some great ideas in there, like stretchable screens, see-through monitors and being able to physically drag media between devices:

The ideas were the result of the OpenInnovation competition – read more at the site.

At first it seems quite useful, putting information onto surfaces like desks and mirrors. But if you take that to the extreme you end up with something like the world shown in this second concept video, which uses augmented reality to put information everywhere. To me, it looks like something of a nightmare. What do you think?

(This video was created for an architecture project by Keiichi Matsuda. Read more here.)

Featured
Posts
Video

Understanding human behaviour is vital for good product design. But you can’t just ask people what they need; you have to observe them first-hand. iPods, eBay and TiVo exist because designers watched people, noticed a problem with current products, and designed a solution for a problem people didn’t even know they had.

At OXO Foods in the UK, researchers studied how people measure liquids while cooking, and noticed that most people need to bend down repeatedly to read the markings on the side of the container. None of them reported this as a problem when interviewed. So OXO designed a measuring jug (cup) which could be read from above (shown right). This is an example of the growing science of design ethnography – product design based on direct human observation.

How do we measure human behaviour “in the wild”?

Observational studies are expensive to conduct, and sometimes distorted because you can’t always observe someone in their natural environment. Fortunately, computers now make it much easier to collect data from “real world” activities. Such data is invaluable – for product designers to better understand their users, and to help us cultivate a deeper understanding of ourselves.

Links
Video

Researchers at the University of Plymouth in the UK have made some significant steps forward in the quest to create an artificial intelligence that can learn about the world around it. Using the iCub robotic toddler, designed by a consortium of European universities, they have trained software to recognize and identify moving objects in its field of vision, based on their position relative to the robot’s body – the same way a human child learns.

To learn more, watch the video or read the New Scientist article.

Featured
Posts
Thumbnailed
Video
(Image: “color stay” by unclestabby, http://www.flickr.com/photos/unclestabby/2706034988/)

As we move online, the definition of a community changes. Our neighbors aren’t just those people physically near us, but those we hang out with. This flexible definition of a community has serious repercussions for law and social morals: when we find kindred spirits online, we start thinking that everyone is just like us. At the same time, different communities hold us to different standards, and now that those communities leak into one another we need to apply context to our judgement.

In the 1970s bestseller The Joy of Sex, we learn about a man who could only be aroused in a bathtub full of spaghetti. Back then, he probably led a lonely, normal life – albeit one in which he bought a lot of pasta and had a higher water bill than his neighbors. It’s unlikely that he had friends who shared his particular turn-on.
