Links
Video

It’s one of the great philosophical debates of all time. Given enough computing horsepower, would we know the difference between the real world and a simulated one? With the cost of bandwidth, computing, and storage dropping precipitously, it’s harder and harder to tell the real from the simulated.

Check out the Lagoa Multiphysics plugin for Softimage. It’s pretty impressive. Tools like this do for interactive visualization what still-image tools like Photofuse do for traditional photographs.

You might argue that to really trick us into believing the world around us was a simulation, each particle the software represented could only consume one particle of computing infrastructure. But isn’t that what the universe looks like? And did I just blow your mind?

Apps without Programming

The new App Inventor takes Google’s “do what you like with your gadgets” approach one step further by enabling anyone – even those who have never programmed before – to create their own apps with drag-and-drop ease.

App Inventor is a simple user interface for creating applications for the Android mobile platform, working in a similar way to Visual Basic – you drag buttons onto your screen and attach actions to them.
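App Inventor itself uses visual blocks rather than code, but the model it borrows from Visual Basic – named components with actions attached to them – is easy to sketch. Here’s a toy version in Python; the class and names are purely illustrative, not part of App Inventor:

```python
# A minimal sketch of the event-driven model behind App Inventor and
# Visual Basic: a button is just a named component with an action
# attached to it. (All names here are hypothetical.)

class Button:
    def __init__(self, label):
        self.label = label
        self._action = None

    def on_click(self, action):
        """Attach a handler, as the drag-and-drop editor does."""
        self._action = action

    def click(self):
        """Simulate a user tapping the button."""
        if self._action:
            return self._action()

greet = Button("Say hello")
greet.on_click(lambda: "Hello from my first app!")
print(greet.click())  # → Hello from my first app!
```

The drag-and-drop editor hides the `class` and the wiring; what the user sees is just the button and the behavior attached to it.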

It’s interesting because, in an age of fierce debate over whether you have the right to reprogram your device and customize it for your own use (consider Apple’s iOS vs Google’s Android), this presents a third option: equipping ordinary people to build exactly the functionality they need without ever leaving a controlled environment. Might we see Apple offer something similar for iPhones soon?

It’s also interesting to consider: if MySpace, Facebook and blogs took the idea of people creating websites and web content into the mainstream, what could happen if the capability to create software became equally mainstream? It would be sure to spark a total revolution in the way we think about computers…

Read more at CNET.

AR meets 3D modeling

Techi has a piece on Leonar3Do, a new take on 3D modeling. It looks as far from traditional modeling tools as they were from pen and paper.

Years ago, I played around with 3D modeling (and narrowly avoided a career at Matrox and Softimage in the process). Building models was tedious: manipulating 3D space with two-dimensional tools like a mouse and a screen is tough. Software relies on all sorts of controllers, UI conceits, and tricks: rotating the onscreen image, holding down Shift to move along the third dimension, and so on.

Google acquired SketchUp to help crowdsource 3D content for Google Earth, largely because it was comparatively easy to use. But so far, we can’t work with 3D content in three dimensions.

That’s about to change: using modeling tools and AR visualization, designers can manipulate models directly in three dimensions. That will go a long way toward making 3D models mainstream, unlocking all kinds of use cases.

Who knows, maybe soon, we’ll have opt-in vandalism, where taggers add 3D objects to the real world for those who want to see them.

Visualization at two extremes

I’ve been spending a lot of time on interfaces and visualization lately, as part of a new conference that melds Big Data, Ubiquitous Computing, and New Interfaces. Along the way, I was struck by these two extremes of visualization.

At one extreme, there’s the Canadian filmmaker who’s implanted a camera in his eye socket. It’s a great example of embedded, ubiquitous data collection — something we’ll likely all take for granted very soon. This is visualization in the truest sense.

At the other end of the spectrum is the Allosphere, an immersive, three-storey-tall sphere used for visualizing and interacting with data.

These two spheres are very different. One collects a single person’s perspective; the other reveals huge amounts of data. One shows things at an intimate, human scale; the other zooms out to galaxies or in to neurons.

Augmented reality is getting a lot of attention in the media, but is often misunderstood. It’s only when you see examples that it really makes sense. This video demonstrates an impressive new application of the concept at the Wimbledon tennis championships in the UK.

With IBM’s new Seer 2010 app for iPhones and Androids you can simply point your phone at a court and get live video of the match being played there – effectively you can see through walls. You can also use the app to find food and drink stands or even get a live video view of the taxi queues.

It’s a great example of how augmented reality is already here today and making itself useful. You can read more here.

One of the problems with virtual reality is navigating it: How do you make it feel like you’re walking around? Short of jacking into the spinal cord, Matrix-style, this has always stymied interface designers.

Popsci is reporting that a hotel in Vegas will soon offer immersive VR built on the Virtusphere, a human-sized hamster ball that calculates position from sphere rotation – like being inside a giant trackball. 360 Virtual Ventures is commercializing the technology for installations like the one at the Excalibur.
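The trackball analogy captures the core math: the distance you’ve “walked” is the arc length the sphere rolls under your feet, radius times rotation angle. Here’s a back-of-the-envelope sketch; the 1.3 m radius is illustrative, not the Virtusphere’s actual spec:

```python
import math

def walked_distance(radius_m, rotation_deg):
    """Arc length traced by a point on the rolling sphere's surface."""
    return radius_m * math.radians(rotation_deg)

def new_position(x, y, heading_deg, radius_m, rotation_deg):
    """Advance the avatar along its heading by the rolled arc length."""
    d = walked_distance(radius_m, rotation_deg)
    return (x + d * math.cos(math.radians(heading_deg)),
            y + d * math.sin(math.radians(heading_deg)))

# A 1.3 m-radius sphere rotating 90 degrees carries you about 2.04 m.
print(new_position(0.0, 0.0, 0.0, 1.3, 90.0))
```

In other words, the hardware only has to report rotation; turning that into movement through the virtual world is simple geometry.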

Here’s a video showing the device in action:


Techcrunch has a writeup on Windows Live Essentials’ new Photofuse technology, which lets you pick from several similar pictures and create a new, optimal one. When revisionist editing gets this simple, poidh (“pics or it didn’t happen”) doesn’t apply any more.

Complete writeup on Techcrunch.

We’ve written previously about the benefits of bringing Internet-enabled screens to different parts of the home. This video from Jesse Rosten shows how, with a couple of packets of Velcro, you can turn an iPad from a handheld Internet device into a hands-free way of putting information exactly where you need it.

In the future, perhaps iPad-like Internet touchscreens will become so cheap we can just install these permanently on our walls and into our appliances.

You may have heard of Big Dog, the four-legged robot developed by Boston Dynamics to carry equipment into battle, an electronic “pack mule” that can navigate a wide range of terrain.

But you might not have seen Little Dog, a Chihuahua to Big Dog’s Great Dane. Here’s a look at how far miniaturization and computing power have come in recent years.

Nice footsoldiers, Skynet.

This video from Carnegie Mellon University and Microsoft demonstrates the Skinput project, which uses audio and vibration sensors plus a handheld projector to turn your forearm into buttons and displays that can control anything from MP3 players to cellphones. Devices are getting smaller and smaller, and their size is often dictated by what’s needed for input and output. This is a significant step toward removing that restriction.
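The clever part of Skinput is that taps at different spots on the arm produce different vibration signatures, so “where did you tap?” becomes a classification problem. Here’s a toy nearest-centroid sketch of that idea; the locations and feature vectors are made up for illustration, not real sensor data:

```python
def nearest_location(sample, signatures):
    """Return the known tap location whose signature is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda loc: dist(sample, signatures[loc]))

signatures = {          # per-location average sensor features (fabricated)
    "wrist":   [0.9, 0.1, 0.3],
    "forearm": [0.4, 0.7, 0.2],
    "elbow":   [0.1, 0.2, 0.8],
}
print(nearest_location([0.5, 0.6, 0.25], signatures))  # → forearm
```

The real system uses a trained classifier over many acoustic features, but the principle is the same: the body itself becomes the input surface, and software sorts out where the touch landed.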
