Drudge Retort: The Other Side of the News
Tuesday, January 23, 2018

The first time Abe Davis coaxed intelligible speech from a silent video of a bag of crab chips (an impassioned recitation of "Mary Had a Little Lamb"), he could hardly believe it was possible. Davis is a Ph.D. candidate at MIT, and his group's image-processing algorithm can turn everyday objects into visual microphones -- deciphering the tiny vibrations they undergo as captured on video. The research, which will be presented at the computer graphics conference SIGGRAPH 2014 next week, builds on work from MIT's Computer Science and Artificial Intelligence Laboratory on capturing movements far smaller than a single pixel from video.
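
As a rough illustration of the general idea (not the CSAIL group's actual algorithm, which uses phase-based motion analysis over complex steerable pyramids), the Python sketch below turns a video into a one-dimensional vibration signal by estimating the sub-pixel shift of each frame against a reference frame with OpenCV's phase correlation. The filename and variable names here are made up for the example.

# Rough sketch, not the MIT method: measure the tiny global shift of
# each frame relative to a reference frame and treat that shift as a
# time series sampled at the camera's frame rate.
# Assumes OpenCV (cv2) and NumPy; "silent_clip.mp4" is a hypothetical file.

import cv2
import numpy as np

cap = cv2.VideoCapture("silent_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

ok, first = cap.read()
ref = np.float32(cv2.cvtColor(first, cv2.COLOR_BGR2GRAY))

samples = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # phaseCorrelate returns a sub-pixel (dx, dy) shift estimate
    (dx, dy), _ = cv2.phaseCorrelate(ref, gray)
    samples.append(dy)        # vertical displacement as the "audio" sample
cap.release()

audio = np.asarray(samples)
audio -= audio.mean()         # remove the DC offset
# 'audio' is now a crude waveform sampled at 'fps' Hz -- only useful if
# the camera runs far faster than the sound frequencies of interest.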

Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

Anyone else remember how the show Fringe predicted that law enforcement would someday use this method?

#1 | Posted by Tor at 2018-01-23 11:21 PM | Reply | Newsworthy 1

Judging by the last experiment, a low bass hum could disrupt the playback.

#2 | Posted by boaz at 2018-01-24 08:17 AM | Reply

I have heard that technology allows conversations to be recovered from the window panes of the room where the conversations took place, weeks after the fact. Can anyone confirm?

#3 | Posted by Zed at 2018-01-24 10:36 AM | Reply

Very cool.

Re #1: Saw that. Fringe was awesome. Didn't he have a lab in the basement of MIT?

Not sure about that, Zed.

But someday we may be able to detect sounds from the vibrations left behind long afterwards.

#4 | Posted by donnerboy at 2018-01-24 10:55 AM | Reply

Now maybe we can find out who these Time Travelers were talking to on their "cell phones" in 1928 and 1938.

www.youtube.com

#5 | Posted by donnerboy at 2018-01-24 03:48 PM | Reply

I read that as MTV Finds Sound in Silent Music Video.

It makes a lot more sense that MIT would figure it out.

#6 | Posted by IndianaJones at 2018-01-24 05:47 PM | Reply

At first I thought this was silly, since you couldn't get around the Nyquist sampling rate, so you'd need an incredibly fast frame rate to get any reasonably audible signal. But then they mention rolling shutter: with rather clever processing, you could maybe see the sound waves changing a membrane's edge position as the camera scans down the image. So you could potentially get video line-rate sound sampling, which could give you a good enough frequency range to hear a human voice (see the sketch below this comment).
So I wonder if any older video could be processed to pick up secret audio conversations and give us some good, juicy, accidental 'hot mic'-type embarrassing situations.

#7 | Posted by Snowfake at 2018-01-24 09:47 PM | Reply | Newsworthy 1
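
To make #7's rolling-shutter point concrete, here is a toy Python sketch. It is an illustration under stated assumptions, not the paper's algorithm: it treats every scanline of every frame, in capture order, as one time sample, so the effective sampling rate becomes the frame rate times the number of rows (60 fps x 1080 rows gives roughly 64.8 kHz worth of row samples). The mean-intensity proxy and the synthetic data are assumptions for the example; real footage would need per-row motion estimation.

import numpy as np

frame_rate = 60          # frames per second
rows = 1080              # scanlines per frame
effective_rate = frame_rate * rows   # effective row-sample rate in Hz

def rows_to_signal(frames):
    """frames: array of shape (T, H, W). Treat each row, in scan order,
    as one time sample by collapsing it to its mean intensity."""
    per_row = frames.mean(axis=2)    # (T, H): one value per scanline
    signal = per_row.reshape(-1)     # rows concatenated in capture order
    return signal - signal.mean()    # drop the DC component

# Tiny synthetic example: 10 frames of 1080 x 64 noise.
frames = np.random.rand(10, rows, 64).astype(np.float32)
sig = rows_to_signal(frames)
print(len(sig), "samples at an effective", effective_rate, "Hz")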

I think the Fringe lab was supposed to be at Harvard but was actually filmed at Yale.

I distinctly remember finding the sonic recording device awesome and terrifying.

#8 | Posted by Tor at 2018-01-25 02:05 AM | Reply

#8: Season one of that show was so great.

#9 | Posted by IndianaJones at 2018-01-25 12:20 PM | Reply

I found season 1 to be kind of clunky.

I remain a fan.

#10 | Posted by Tor at 2018-01-26 12:37 AM | Reply

Comments are closed for this entry.
