
Archive for the ‘convergence’ Category

Just discovered – in a Greener Gadgets forum of all places – a digital tattoo interface that actually has some real potential. A flexible silicon/silicone responsive display is injected and unrolled just under the dermis. The silicon layer controls an injected array of “ink” just above it to display black and white (or peach, or whatever your complexion is). How does it receive information? It’s Bluetooth enabled so the display can interact with your cell or PMP to display video or receive calls. How is it powered? By jacking into your body’s own arterial network. How awesome is it? Pretty awesome.

Read Full Post »

Last year I took a course that went over digital compositing using chroma keying – that’s like when the weatherman stands in front of a green screen, but on TV all the green is replaced with a weather map. The same technology is used in films like Transformers and The Golden Compass to pop people in front of huge robots or little girls on a polar landscape.

The problem with color-keying technology is in its imperfection. Color is very sensitive – if there is a green glow on your character or a glimmer of green mingled in their hair you could spend hours touching up single frames to perfect the shot.
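Under the hood, a basic chroma key is just a per-pixel rule. Here's a toy sketch in Python (NumPy assumed; the function names and the green-dominance ratio are my own invention, much cruder than what real compositing packages do) that shows exactly why spill is such a pain – any pixel where green merely dominates gets keyed, hair glints and all:

```python
import numpy as np

def chroma_key(image, threshold=1.3):
    """Return a mask that is True wherever a pixel reads as 'green screen'.

    image: float RGB array of shape (H, W, 3), values in [0, 1].
    A pixel is keyed out when its green channel dominates red and blue
    by the given ratio -- a crude rule, which is why green spill and
    green glints in hair cause the touch-up headaches described above.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (g > threshold * r) & (g > threshold * b)

def composite(foreground, background, mask):
    """Replace keyed-out foreground pixels with the background plate."""
    out = foreground.copy()
    out[mask] = background[mask]
    return out
```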

I thought to myself, “there’s got to be an easier way to do this,” and promptly came up with a genius idea: what if you could use the z-axis? See, two-dimensional photos of course have an x-axis and a y-axis. But in the world of 3-D graphics you’re blessed with a scale that measures depth – the z-axis. As far as I knew there were no cameras capable of capturing that depth information at the pixel level. If only there were a way – you could key out backgrounds with the click of a button.

Well, I must have been caught up in some photographic zeitgeist because the next day I saw a video of a new prototype camera that could capture depth information, beating me to the millions I deserved for thinking up something totally innovative. Now Adobe is pimping the technology out all over the place.

The prototype camera has 19 distinct lenses – a plenoptic lens that looks like a fly’s eye – and a very very beta computer program that renders the image. See the video demo here.

The technology is wicked hard to understand – but I’ll try: each mini-lens of the plenoptic camera takes a picture that is slightly different in focus and perspective. Adobe’s super-secret computer application then combines the smaller images into one big image, interpreting the minor differences between them – the degree of focus and perspective – into metadata: depth information.

Having depth information for a photo is an astounding achievement. It means you can put things in focus that were previously out of focus. It also means you can “key” out portions of the image that are at a certain depth. Did your son come out too blurry in that family shot in front of the Empire State Building? Use a deblur brush! Want to get rid of your ex-wife but keep the shot of the Grand Canyon? Make her disappear using depth info!
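As a rough sketch of what that one-click depth key might look like once you have a per-pixel depth map (hypothetical function, assuming a NumPy-style depth array – not Adobe's actual tool):

```python
import numpy as np

def depth_key(image, depth, near, far):
    """Keep only pixels whose depth falls in [near, far].

    image: (H, W, 3) RGB array; depth: (H, W) per-pixel depth map,
    the kind the plenoptic camera's software would produce.
    Everything outside the depth band gets alpha 0 (transparent) --
    the 'make the ex-wife disappear' trick.
    """
    mask = (depth >= near) & (depth <= far)
    alpha = mask.astype(image.dtype)
    return np.dstack([image, alpha])   # RGBA: out-of-band pixels keyed out
```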

These cameras are a long way from practicality. It takes a ton of computing power to render depth data. The ability to incorporate this into video tech is even further off – meaning that if CG artists want to composite they’ll still have to rely on good ol’ fashioned green screen – or find some more efficient way of capturing depth info (email me, I have a few ideas for the right price…).

But Adobe thinks the tech will prove useful to the professional photog in the meantime. Prepare to see paparazzi taking photos of Angelina Jolie’s stomach from 19 different POVs within the decade!

Read Full Post »

Mind-controlled games by Xmas ’08? If Emotiv has anything to do with it there will be a fashionably correct $299 USB headset that will let you interact with games using your good ol’ fashioned brain. A 6-second calibration and you’ll be able to manipulate a digital cube on screen, maybe even play Pong! And so another revolution in consumer-level interactivity begins.

Read Full Post »

Much fun has been made of social networks and the proliferation of web-based interaction. OurPrerogative brought to our attention an article by social critic Cory Doctorow that suggests a downside to the evolution of digital social networks:

“For every long-lost chum who reaches out to me on Facebook, there’s a guy who beat me up on a weekly basis through the whole seventh grade but now wants to be my buddy; or the crazy person who was fun in college but is now kind of sad; or the creepy ex-co-worker who I’d cross the street to avoid but who now wants to know, “Am I your friend?” yes or no, this instant, please.”

As a recent convert to Facebook (thanks Amy) I would definitely argue for its significance as a step up from the overloaded ox wagon that has become Myspace, and the fun younger sibling to LinkedIn‘s strictly business-minded approach. Facebook’s minimalist interface mixed with user-made applications has turned it into a whimsical but structured Rolodex. You can keep track of friends easily, and enable them to keep track of you. Some might say if you’re under 30, live in the US, and DON’T belong to a social network you might not even exist.

But Doctorow suggests the ills of social networks may outweigh the benefits. Membership in Facebook increases the threat to your public image by allowing your contacts to virtually compare notes – see pictures of you, comments about you, comments you have made about others – which you would not share with everyone in the real world. You also open yourself up to a phenomenon of passive-aggressive “friending,” inviting losers, ex-flames, distant friends and others you rarely if ever talk to in person to get all up in yo’ bizness.

The risk to privacy is one thing – one might suggest that as digital social networks grow and converge with real life, privacy is less under attack and more out-of-style. A casually managed transparency is as de rigueur on the web as tactical honesty is in the real world.

Just as interesting though are the notions of the change in the rules of friendship. Christine Rosen refers to social networks as surrogates for live relationships. But what happens when the surrogate replaces the norm?

Certainly the online world isn’t without threats (e.g.). But these threats have more to do with trying to live out online relationships in the real world than with the online world itself.

Meanwhile, Rosen acknowledges that the leap to digital friendship is less a random phenomenon and more a convenient reaction to real world threats – emotional vulnerability, relationship violence, etc. If we really view this vulnerability as a threat – as early hominids might have viewed the sabertooth tiger – migration to the web could be a sensible evolution in interaction. And possibly inevitable.

Read Full Post »

Engineers love to make lists. In the spirit of some speculative meta-Marshall Plan or Apollo Project, the National Academy of Engineering has assembled a committee to create a list of Grand Challenges for Engineering. The committee, including such venerable thinkers as Raymond Kurzweil and Larry Page, came up with 14 challenges society must meet to pull itself out of the muddy trenches of the industrial age and fully into a new age of connectivity and sustainability.

Obviously, many of the challenges highlighted deal with energy and pollution issues that we’ve created for ourselves. And of course many also involve education, health, and improving society. But one stands out – reverse engineering the human brain.

Basically that means “understanding better how the brain works.” But the semantics are critical: Artificial Intelligence researchers have spent so much time trying to mimic how the brain seems to work without understanding how the brain actually works.

Being able to recreate the brain accurately would of course help us fight dysfunctions of the brain such as Alzheimer’s, Parkinson’s, and Tourette’s. But what gets AI folks excited are the cognitive implications. Right now computers can only understand information through a binary approach. A human being views and learns fluidly – we see an object and have multiple interpretations of that object based on our experience. We can see a cow and think “Cow,” naturally. A computer might be programmed to look at objects in the world and say either “Cow” or “Not Cow.” But a human being can also see a cartoon of a cow, a black-and-white pattern, a bottle of milk, or a cowbell, and all of those objects can make us think “Cow.”

This is an example of the human neuron’s gray area. According to the committee, “if engineers could replicate neurons’ ability to assume various levels of excitation,” we could build stronger machines and richer human-machine interactions.
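To illustrate the committee's point, here's a toy contrast in Python between an all-or-nothing unit and one with graded excitation – a plain step function versus a sigmoid. This is my own illustrative example, not anything from the committee's report:

```python
import math

def binary_neuron(x, threshold=0.5):
    """Hard yes/no: the 'Cow or Not Cow' classifier."""
    return 1 if x >= threshold else 0

def graded_neuron(x):
    """Sigmoid activation: excitation varies smoothly with the input,
    so a cartoon cow, a milk bottle, and a cowbell can each excite
    the 'Cow' unit to different degrees instead of all-or-nothing."""
    return 1 / (1 + math.exp(-x))
```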

Read Full Post »


Last year NBC got all proprietary about their content, taking down all their shows and clips from YouTube and iTunes, realizing that as neat as these new platforms were, they weren’t proving themselves profitable for the mediamaker. Yes, YouTube has a massive audience, but it also has an immature method of profit-sharing and (sans the embed feature) an inflexible presentation platform for those who want to control the viewing experience. iTunes, too, has a massive audience, but its one-size-fits-all pricing pissed off media producers who wanted their pound-o-flesh.

Enter Hulu. When NBC announced that they were going to make their own absurdly named version of YouTube, media critics and competitors shouted a collective you’ve-gotta-be-kidding-me. The market dominance of YouTube is unquestionable. And the image of an old media player attempting to be like “one of the kids” was laughable.

However, in the week after the Super Bowl, Hulu’s beta site got a ton of traffic when blogs used its embed codes to syndicate the best of the Super Bowl ads. We got our hands on a Hulu beta account and took it for a test run ourselves – here’s our review:
• Interface (4 out of 5 stars): Pretty cool. With pop-out, widescreen, and fullscreen options you can view content a bunch of different ways. It uses the ubiquitous Flash – content loads super quickly with few hiccups. In terms of selecting content, the interactivity is pretty cool – Netflix-style rollovers give you a snapshot of each program. You can add content to your own personal playlist to watch later. The pages are huge, though – lots of intense scrolling down the page to view comments and related content.
• User Interactivity (3 out of 5 stars): You can supposedly use Hulu’s internal application to remix content you find on Hulu, but the editing tools are weak. There’s no option for uploading, but that’s not the point of the site. Embedding for sharing is great, but embedding Hulu content seems 90% pointless since most of the pieces are full shows, or clips from shows, rather than the kind of “check out this video of a baby biting a boy’s finger” stuff you find on YouTube.

• Content (4.8 out of 5 stars): Content is where Hulu excels. Hulu hosts full seasons of many current shows like “24” and “The Office” and even old shows like “Doogie Howser, M.D.” and movies like “Sideways”. It will be great to see the libraries expand (they only have four episodes of Family Guy, for instance). 15-second ads are embedded in the programs at natural scene breaks (usually only two or three times per show) so you don’t even need to get up to go to the bathroom or get a snack. Hulu’s genius is realizing that audiences want permanent, immediate, free access to a large and diverse amount of high-quality non-commercial programming, unencumbered by the proliferation of crap found on YouTube. While many networks host their content online, Hulu functions as a handsome, centralized clearinghouse. Loses marks for not having all full seasons of “24,” though.

• Potential Future in the Market (? out of ? stars): Hulu – along with the revamped Apple TV with rentals and Netflix’s free unlimited streaming (with rumors of a similar set-top device) – will create more pressure for platform compatibility. In the near future consumers will want the flexibility to watch EVERYTHING on EVERYTHING EVERYWHERE, in high def, and immediately. Can distributors meet the challenge of allowing users to stream ALL of their content from a server, to their computer, to their TV, to their PMP/cell, to the web, and back, with similar functionality on all platforms?

With any luck, Hulu will put on the pressure.

Read Full Post »

Nano Blasters. Sounds like a video game, but researchers at the University of Missouri-Columbia have been working on nanobots meant to seek and destroy cancer cells in the body. Injected into your system, these bots can break holes into cancer cells, sending shockwaves of drug delivery to tumors at speeds approaching Mach 3. Hats off to the speed-racer ambitions, but what’s particularly significant here is the success rate – 99% in animal tissue.

Should we be surprised that the US Army is funding the study? I guess not – it could prove useful for IED and landmine detection as well. Let’s just hope that they don’t let somebody like Lockheed Martin get proprietary with the technology.

Read Full Post »


I don’t know how I missed the rise of synthetic biology, a discipline focused on the engineering of life. While programmers and roboticists sit around thinking of ways to create artificial life, biologists have slowly come to realize that we know enough about the nuts and bolts of cells, genes, and DNA to create life on our own. Going beyond mere genetic manipulation or cloning, biologists recently constructed the first synthetic genome and are on their way to creating the first full-fledged synthetic organism.

Read Full Post »

If you’ve spent more than, say, three hours freestyle Googling (as I have), you get addicted to the interface. On more than one occasion I’ve gotten up from my computer, tried to find something in the physical world – a book, keys, underwear – and wondered, “Why can’t I just Google my apartment?”

Well, we’re not THAT close, but thanks to researchers at MIT, we’re not that far off either. A project called Quickies aims to take some of Google’s artificial intelligence concepts and apply them to the world of Post-Its. The idea: write your message onto a “Quickie Note,” and the information will automatically back up to your computer/cell phone/calendar/whatever.

Each Quickie Note is enabled with an RFID tag. A Quickie Note pad digitally interprets the handwriting, parses out keywords and symbols, and decides where the note applies. You write a note that says “Dinner with Eric and Sarah on Saturday at 6pm,” and the notepad interprets the day and time as an appointment, which is sent to your cell phone as a text and entered into your calendar.

With RFID as inexpensive as it is, Quickies could become as ubiquitous as Post-Its, and certainly more useful for us forgetful folks. Researchers predict the product could make it to market in 2 to 5 years. And then, Alzheimer’s wins!
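For flavor, here's a guess at the kind of keyword parsing a Quickie pad would have to do once the handwriting is digitized – the function name and the rules are purely hypothetical, not MIT's actual system:

```python
import re

DAYS = ("monday", "tuesday", "wednesday", "thursday",
        "friday", "saturday", "sunday")

def parse_note(text):
    """Crude sketch of Quickie-style note parsing: pull a weekday and
    a time out of free-form text and, if both are present, decide the
    note is an appointment; otherwise file it as a plain memo."""
    lowered = text.lower()
    day = next((d for d in DAYS if d in lowered), None)
    time = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(am|pm)\b", lowered)
    if day and time:
        return {"kind": "appointment", "day": day, "time": time.group(0)}
    return {"kind": "memo", "text": text}
```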

Read Full Post »

You’ve got the eyeball-mounted screen (see below). But DARN! You burned out your retinas staring at the new Samsung HD screens at Best Buy. How will you ever be able to watch your brand new contact lenses/video screens? Try this little gem: the eyeball-mounted camera.

The image to the right (albeit hand drawn) is not for the ocularly sensitive. What you see there is an actual video camera that can be mounted directly onto the lens of the human eye, reported on Monday by New Scientist. The transmission projected onto the back of the eye is then interpreted by nerve cells into an actual image.

Used as an artificial eye, this would be a great tool for the blind and near blind. Alternately, signals can be transmitted to an external hard drive, much to the delight of unwashed lifebloggers everywhere.

Read Full Post »

