Human-Computer Interaction: Recognizing Gestures with Sound Waves (Gesture Control System Uses Sound Alone)
When Alipay's face-to-face payment (當(dāng)面付) started using ultrasound for short-range, low-volume data transfer, I remember thinking what a great and admirable invention it was, because it needs no extra hardware at all.
Later, WeChat applied ultrasonic communication to its "radar friend finder" (雷達(dá)找朋友) feature; Tencent is always quick to learn and to spot these opportunities, which is admirable too.
And now Microsoft has even applied ultrasound to gesture recognition. The link below describes it in detail:
http://article.yeeyan.org/compare/286069
Gesture Recognition Using Sound Waves
From the Doppler effect in high-school physics we know that when a wave source moves, the frequency an observer perceives changes; the siren of a passing ambulance is a familiar example. But you probably never thought of using the Doppler effect to control a computer.
Control a computer with the Doppler effect? You heard right: that is exactly what Microsoft Research, the research arm of the software giant in Redmond, Washington, is doing. Gesture control is becoming increasingly common and has already made its way into some TVs. While other motion-sensing technologies, such as Microsoft's Kinect, rely on cameras to perceive movement, SoundWave senses gestures using only sound, thanks to the Doppler effect, some clever software, and the built-in speakers and microphone on a laptop.
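To get a feel for the magnitude of the effect, here is a small illustrative calculation (my own sketch, not taken from the paper; the 20 kHz tone and 0.5 m/s hand speed are assumed values) of the round-trip Doppler shift a laptop would observe:

```python
# Doppler shift for a tone reflected off a moving hand (illustrative values).
# For a stationary speaker/microphone and a reflector moving at velocity v,
# the round-trip shift is approximately delta_f = 2 * v * f0 / c.

SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 C

def reflected_doppler_shift(f0_hz: float, v_mps: float) -> float:
    """Approximate frequency shift of a tone bounced off a reflector moving at v."""
    return 2.0 * v_mps * f0_hz / SPEED_OF_SOUND

if __name__ == "__main__":
    # A hand moving toward the laptop at 0.5 m/s while a 20 kHz pilot tone plays
    # shifts the echo by roughly 58 Hz; small, but easy to see in a spectrum.
    print(round(reflected_doppler_shift(20_000, 0.5), 1), "Hz")
```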
Desney Tan, a Microsoft Research principal researcher and member of the SoundWave team, says the technology can already be used to sense a number of simple gestures, and with smart phones and laptops starting to include multiple speakers and microphones, the technology could become even more sensitive. SoundWave—a collaboration between Microsoft Research and the University of Washington—will be presented this week in a paper at the 2012 ACM SIGCHI Conference on Human Factors in Computing in Austin, Texas.
The idea for SoundWave emerged last summer, when Desney and others were working on a project involving using ultrasonic transducers to create haptic effects, and one researcher noticed a sound wave changing in a surprising way as he moved his body around. The transducers were emitting an ultrasonic sound wave that was bouncing off researchers’ bodies, and their movements changed the tone of the sound that was picked up, and the sound wave they viewed on the back end.
The researchers quickly determined that this could be useful for gesture sensing. And since many devices already have microphones and speakers embedded, they experimented to see if they could use those existing sensors to detect movements. Tan says standard computer speakers and microphones can operate in the ultrasonic band—beyond what humans can hear—which means all SoundWave has to do to make its technology work on your laptop or smart phone is load it up with SoundWave software.
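That claim is easy to try on ordinary hardware. The sketch below is a minimal illustration of my own, assuming the third-party `sounddevice` and `numpy` Python packages and a 44.1 kHz sound card; it simply plays an inaudible 20 kHz tone through the default speaker while recording from the default microphone.

```python
# Play an ultrasonic pilot tone and record the microphone at the same time.
# Assumes the `sounddevice` and `numpy` packages are installed.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44_100      # Hz; must exceed 2 * PILOT_HZ to represent the tone
PILOT_HZ = 20_000         # inaudible to most adults, still within speaker range
DURATION_S = 2.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
pilot = 0.2 * np.sin(2 * np.pi * PILOT_HZ * t).astype(np.float32)

# playrec() plays `pilot` and returns what the microphone heard in the meantime.
recorded = sd.playrec(pilot, samplerate=SAMPLE_RATE, channels=1)
sd.wait()
print("captured", recorded.shape[0], "samples")
```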
Chris Harrison, a graduate student at Carnegie Mellon University who studies sensing for user interfaces, calls SoundWave’s ability to operate with existing hardware and a software update “a huge win.”
“I think it has some interesting potential,” he says.
The speakers on a computer equipped with SoundWave software emit a constant ultrasonic tone of between 20 and 22 kilohertz. If nothing in the immediate environment is moving, the tone the computer’s microphone hears should also be constant. But if something is moving toward the computer, that tone will shift to a higher frequency. If it’s moving away, the tone will shift to a lower frequency.
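A toy version of that measurement might look like the sketch below, which assumes the microphone samples are already in a NumPy array: take an FFT of a short window, check how far significant energy spreads above and below the 20 kHz pilot, and read the sign of that spread as direction. The search band and threshold are arbitrary assumptions, and the actual SoundWave pipeline is more sophisticated.

```python
# Rough motion detector: look at how far significant energy spreads around the pilot tone.
# Movement toward the laptop pushes reflected energy above 20 kHz; movement away, below it.
import numpy as np

SAMPLE_RATE = 44_100
PILOT_HZ = 20_000
SEARCH_HZ = 500            # examine +/- 500 Hz around the pilot
THRESHOLD = 0.1            # fraction of the peak counted as "significant" energy

def shift_estimate_hz(window: np.ndarray) -> float:
    """Positive result: energy spread above the pilot (approaching); negative: below (receding)."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    band = (freqs > PILOT_HZ - SEARCH_HZ) & (freqs < PILOT_HZ + SEARCH_HZ)
    f, s = freqs[band], spectrum[band]
    significant = f[s > THRESHOLD * s.max()]   # bins carrying meaningful energy
    # Compare how far the significant energy reaches above vs. below the pilot.
    return (significant.max() - PILOT_HZ) + (significant.min() - PILOT_HZ)
```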
This happens in predictable patterns, Tan says, so the frequencies can be analyzed to determine how big the moving object is, how fast it’s moving, and the direction it’s going. Based on all that, SoundWave can infer gestures.
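Continuing the toy sketch above, the sign of the measured spread gives direction and its magnitude maps back to an approximate radial velocity through the same Doppler relation; the thresholds and labels below are illustrative assumptions only, not SoundWave’s actual classifier.

```python
# Turn a per-window shift estimate into crude motion features and a label.
SPEED_OF_SOUND = 343.0
PILOT_HZ = 20_000

def describe_motion(shift_hz: float) -> dict:
    # Invert delta_f = 2 * v * f0 / c to recover an approximate radial velocity.
    velocity = shift_hz * SPEED_OF_SOUND / (2.0 * PILOT_HZ)
    if shift_hz > 20:
        label = "toward"
    elif shift_hz < -20:
        label = "away"
    else:
        label = "still"
    return {"velocity_mps": round(velocity, 2), "direction": label}

print(describe_motion(58.0))   # e.g. {'velocity_mps': 0.5, 'direction': 'toward'}
```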
The software’s accuracy hovers in the 90 percent range, Tan says, and there isn’t a noticeable delay between when a user makes a gesture and the computer’s response. And SoundWave can operate while you’re using the speakers for other things, too.
So far, the SoundWave team has come up with a range of movements that its software can understand, including swiping your hand up or down, moving it toward or away from your body, flexing your limbs, or moving your entire body closer to or farther away from the computer. With these gestures, researchers are able to scroll through pages on a computer screen and control simple Web navigation. Sensing when a user approaches a computer or walks away from it could be used to automatically wake it up or put it to sleep, Tan says.
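In an application, a small gesture vocabulary like this usually ends up wired to actions through a simple dispatch table; the snippet below is a hypothetical illustration of that idea, not code from the project.

```python
# Hypothetical mapping from recognized gesture labels to UI actions.
def scroll(direction: str) -> None:
    print(f"scrolling {direction}")

def set_power(state: str) -> None:
    print(f"machine going to {state}")

ACTIONS = {
    "swipe_up":    lambda: scroll("up"),
    "swipe_down":  lambda: scroll("down"),
    "body_toward": lambda: set_power("wake"),
    "body_away":   lambda: set_power("sleep"),
}

def on_gesture(label: str) -> None:
    ACTIONS.get(label, lambda: None)()   # ignore gestures we don't handle

on_gesture("swipe_down")   # -> scrolling down
```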
Harrison thinks that having a limited number of gestures is fine, especially since users will have to memorize them. The SoundWave team has also used its technology to control a game of Tetris, which, aside from being fun, provided a good test of the system’s accuracy and speed.
Tan envisions SoundWave working alongside other gesture-sensing technologies, saying that while it doesn’t face the lighting issues that vision-based technologies do, it’s not as good at sensing small gestures like a pinch of the fingers. “Ideally there are lots of sensors around the world, and the user doesn’t know or care what the sensors are, they’re just interacting with their tasks,” he says.
總結(jié)
以上是生活随笔為你收集整理的人机交互技术:利用声波识别手势 Gesture Control System Uses Sound Alone的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: 时序图 分支_UML用例图
- 下一篇: 如何重做计算机系统软件,电脑卡如何一键重