
Archive for the ‘computer’ Category

Brain researchers at the University of Leicester (pronounced “Lester” ’cause they’re British) have uncovered a way to identify almost exactly what a person is looking at or thinking about based on how their neurons fire.

Dr. Quian Quiroga used photos to prompt brain activity in test subjects implanted with intracranial electrodes. “In these experiments we presented a large database of pictures, and discovered that we can predict what picture the subject is seeing far above chance. For example, if the ‘Jennifer Aniston neuron’ increases its firing then we can predict that the subject is seeing Jennifer Aniston.”

“So, in simple words, we can read the human thought from the neuronal activity.”
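The decoding idea described here can be sketched as a very simple classifier: each "concept neuron" has a baseline firing rate, and you predict whichever concept's neuron fires furthest above its baseline. This is a toy illustration only, not Dr. Quiroga's actual method, and all the neurons and numbers below are made up:

```python
# Toy sketch of neuron-based decoding (NOT the actual research method):
# predict the concept whose neuron shows the biggest jump over baseline.
# Baseline rates and concept names are invented for illustration.

BASELINE_HZ = {"Jennifer Aniston": 2.0, "Halle Berry": 1.5, "Sydney Opera House": 1.8}

def decode(firing_rates_hz):
    """Return the concept whose neuron fires furthest above its baseline."""
    return max(firing_rates_hz, key=lambda c: firing_rates_hz[c] - BASELINE_HZ[c])

# If the "Jennifer Aniston neuron" jumps to 20 Hz while the others stay
# near baseline, the decoder predicts the subject is seeing Aniston.
print(decode({"Jennifer Aniston": 20.0, "Halle Berry": 1.6, "Sydney Opera House": 2.1}))
```

The real experiments work "far above chance" rather than perfectly, but the spirit is the same: strong selective firing points to what the subject is seeing.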

Dr. Quiroga and his team of altruistic researchers envision a world where this discovery could be used to help paralyzed patients – for instance, thinking about a cup of tea could prompt a bionic limb to reach for the item for you. Literal Dr. X-style brain power.

Alternately, it could help marketers know every time you are thinking about Jennifer Aniston.

Read Full Post »

Crayon Physics

I want this program, and the computer. The song’s okay, too.

Read Full Post »

Things have been busy for the last couple of weeks, but I had to come out of my self-imposed silence for this one.

According to the BBC, neuroscientists have developed software which, when combined with neural electrodes, can literally let your brain do the talking.

After a car crash locked Eric Ramsey into a conscious but paralyzed state eight years ago, scientists began researching his brain. They started looking at the areas of his brain involved in speech and attempting to translate the impulses into literal words. They think they’ve found the formula, and for the next few weeks they’re going to try to coax his brain into having conversations, which would be a huge leap forward in our understanding of how the brain functions.

This is still a ways from literally reading minds. It’s similar to Christopher Reeve’s using his muscles to create speech – only in this case, the brain is the muscle. That means Eric will be in control of what words he shares with the community and what he keeps to himself (though he’d still better not let his internal monologue get too vivid).

It is also unidirectional. That is, the words can come out, but he still has to use his ears and eyes to understand other people’s words. Keep checking back though – eventually they have to come up with the technology to replace this human shell entirely.

Read Full Post »


Are you ready for this? CNET recently previewed an MIT report claiming to have found the most universal computer algorithm to date for translating brain activity into nonhuman physical action – i.e., Brain Powered Super Robots!! While technology already exists that takes advantage of reading brain patterns via EEG or optical imaging, each form of technology has its own language for interpreting brain activity into action. MIT’s recent calculations are something like a Linux approach, uniting all of these technologies under one mathematical language.
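One way to picture the "one mathematical language" idea is as an adapter layer: each recording technology translates its own raw readings into a shared intent vocabulary that any downstream device can consume. To be clear, none of this is MIT's actual math; the classes, thresholds, and signal names below are all invented for illustration:

```python
# Purely illustrative sketch of unifying brain-reading technologies under
# one shared "intent" vocabulary. All names and thresholds are made up.

from abc import ABC, abstractmethod

class BrainSignalAdapter(ABC):
    @abstractmethod
    def to_intent(self, raw):
        """Translate device-specific raw data into a common intent label."""

class EEGAdapter(BrainSignalAdapter):
    def to_intent(self, raw):
        # Pretend high alpha-band power means "rest", low means "move".
        return "rest" if raw["alpha_power"] > 0.5 else "move"

class OpticalAdapter(BrainSignalAdapter):
    def to_intent(self, raw):
        # Pretend a strong blood-oxygen change in motor cortex means "move".
        return "move" if raw["motor_oxy_delta"] > 0.1 else "rest"

def drive_robot(adapter, raw):
    # The robot only ever sees the shared intent vocabulary, never the
    # device-specific raw data.
    return {"move": "step forward", "rest": "hold position"}[adapter.to_intent(raw)]

print(drive_robot(EEGAdapter(), {"alpha_power": 0.2}))        # low alpha -> move
print(drive_robot(OpticalAdapter(), {"motor_oxy_delta": 0.0}))
```

The payoff of this pattern is that a new recording technology only needs a new adapter; every robot that speaks the shared vocabulary works with it for free.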

If we are reading this right, this could mean a path towards standardization of a rudimentary brain/computer language. Protect your children!

Read Full Post »

Those are really awesome. Especially that pen tool…

Read Full Post »

As all five of you may have noticed, I’ve been blogging for a little while on the concept of “convergence” – the growing integration of technology and human systems. (I don’t know whether this is the correct term for the phenomenon, but I’m presumptively stealing it as a layman. Ray Kurzweil probably has a better one.) I started the thread out with just a few posts on stuff like Artificial Intelligence and computers merging with the body and mind, but I’ve decided to expand to talk a little about technological interfaces in the community. So please bear with me as I begin to write more regularly about these trends rather than writing about… well, nothing, I guess.
So I was procrastinating on the CNET gadget blog and read a post by Don Reisinger hating on the mouse. He started out just complaining about his Apple Mighty Mouse (I can’t tell what the problem is with the Mighty Mouse. The touch-sensitive right click and the 2D-scrolling trackball are pretty nice advances for their time, if you ask me.). But then he challenged the mouse-computer interface altogether. His only suggested future solution to the mouse problem was the touch screen, which Apple is already hinting at in some of their recent patent submissions. But it got me thinking: what human-computer interfaces can we expect in the future?

The touch screen has been trashed by some of the lay-ergonomically-concerned, who feel that tactile interaction (the springing back of the mouse or keyboard button) is crucial to ease of use. However, having the screen, mouse, and keyboard all-in-one eliminates the whole challenge of hand-eye coordination. (All those years of typing class, gone to waste.)

Then we’ve got a comment from logan1337, who predicts a future of eye-tracking cameras – cameras that sense where the pupil is looking and direct actions based on its movement. The camera sensor thing isn’t new; that’s how Microsoft’s multitouch coffee-table screen works. And the eye-tracking thing may be adopted by ATM companies seeking an end to shoulder surfing (sorry, drunken frat boys; you’ll have to concentrate to enter that PIN properrrrrrly). But questions arise: how do you “click” with your eye? Stanford researchers suggest either “dwelling” on the chosen key, or clicking a space bar with your hand, both of which seem more complicated than the single-interface, immediate click of the traditional hand methods. The idea is to make it easier.
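The "dwelling" approach is simple enough to sketch: if the gaze stays on the same on-screen key for long enough, register a click. A minimal version, assuming timestamped gaze samples and a made-up 0.8-second dwell threshold (the real Stanford work may tune this very differently):

```python
# Minimal sketch of dwell-to-click eye tracking. The sample format
# (timestamp_seconds, key_under_gaze) and the threshold are assumptions.

DWELL_SECONDS = 0.8

def dwell_clicks(gaze_samples):
    """gaze_samples: list of (timestamp, key) pairs in time order.
    Returns the keys 'clicked' by dwelling, one per sustained fixation."""
    clicks = []
    current, start_t, fired = None, None, False
    for t, key in gaze_samples:
        if key != current:
            current, start_t, fired = key, t, False  # new fixation begins
        elif not fired and key is not None and t - start_t >= DWELL_SECONDS:
            clicks.append(key)
            fired = True  # one click per fixation; look away to re-arm
    return clicks

samples = [(0.0, "7"), (0.3, "7"), (0.9, "7"),   # dwelt on "7" long enough
           (1.0, "3"), (1.2, "3"),               # glanced at "3" - too brief
           (1.4, "9"), (1.9, "9"), (2.3, "9")]   # dwelt on "9"
print(dwell_clicks(samples))  # ['7', '9']
```

The awkwardness the post describes falls right out of this: every glance is a potential click, so the threshold has to be long enough that looking at something doesn't mean choosing it, which makes typing slow.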

Okay, so let’s move a little bit further: we’ve got the Philip-K.-Dick-by-way-of-Tom-Cruise vision of a multitouch glove paired with a massive transparent display, as seen in Minority Report (and pointed out by Felipe M. G. Ceotto). This is good, and certainly visually stunning if you’ve seen the movie: the way Tom Cruise picks up full windows and tosses them back and forth. Also, it uses four whole buttons, two on each glove; if you’ve got all your fingers, you could feasibly have ten different functions or alts literally in your hands – forget about all those Apple hot keys! This vision may just be close to perfection. But what’s missing? You still can’t bring the window to you. Look at Tom Cruise. He’s standing. Can you imagine standing all day to do an Excel-based budget? Sure, you could make a pretty big spreadsheet, but you’d be on your feet all day! That may be okay for your average Heaven’s Gate member or Scientologist, but we lesser beings are a little bit more sensitive.

Enter another concept: the holographic touch screen. This technology already exists, even though it’s probably got more bugs than the first iPhone betas. But certainly one could imagine a future in which you could set up your holographic projector in one corner of the room and “grab” and “drag” the ethereal screen anywhere in the room. Tired of sitting at your desk? Take it to the couch. Don’t like the right-in-front-of-your-eyes interaction? Keystone the screen so it appears above your lap. Screen too small? Stretch it so it’s just as big as that Tom Cruise one. Okay, so this still doesn’t solve the problem of tactile feedback. But it may be cool enough that our bodies will be willing to sacrifice the tactility. And you’re not limited to your fingers, either.

The ultimate? If you’ve read any of my previous posts on convergence, you can guess that I would predict a day when the computer and the mind are seamless. The neuron and the computer chip will be able to share data directly. All your work can be done within the brain, manipulating data, images, and sounds within a mental interface (hopefully less feeble than it is currently). And when you want to share work with others, just, I don’t know, mindbeam it to them, I guess, if your brain has been enabled with WiFi 802.11n capabilities. Or is on the EDGE network. Hopefully our robot masters at VerizoSprinAT&T will have figured it out by then.

The possibilities are endless: having a mental iPod; downloading textbooks worth of information into your brain; sharing ideas without speaking; backing up your entire brain to a 500 petabyte hard drive. I mean sure, it will require a huge social change. Forget wire tapping. Heck, forget the Matrix. In a wireless mental computer world privacy will mean nothing – what kind of firewalls can you put around your memories? Everything in your mind would have to be just as accessible as everybody else’s brains are to you. Can you imagine the chaos? Remember Mel Gibson in “What Women Want”? Picture that going both ways. The upside: we might be permitted to understand what goes on in George W. Bush’s head (my guess: a constant double feature of Dennis Madalone’s “America, We Stand As One” video back to back with “Red Dawn”).

But seriously, the above suggestion of a mentally interfacing, biopowered, ethereally networked world might not be completely off the wall. Some scientists suggest that many animals have a kind of ESP that negates the need for physical communication. In early evolution, prior to the creation of language, humans might have had this capability. Should technology step up to the plate, we’d essentially be creating a sophisticated version of what Mother Nature might have originally given us.

And we wouldn’t need that stupid trackball.

Did I hit all the potential interface methods out there? Tell me.

Read Full Post »
