Deep Learning: Once Again Advancing the AI Dream (Marr's Theory, the Semantic Gap, Visual Neural Networks, Neuromorphic Engineering)
Almost every time neural networks come back into fashion, the same claim appears: that they will advance the dream of artificial intelligence.
Preface:
Marr's Hierarchical Theory of Vision
Marr's hierarchical theory of vision (Baidu Baike): the framework consists of a three-level representational structure that vision builds, maintains, and interprets, namely:
a. The primal sketch: because changes in image intensity may correspond to concrete physical properties such as object boundaries, this level mainly describes the intensity changes in the image and their local geometric relations.
b. The 2.5D sketch (2.5-dimensional sketch): viewer-centered; it describes the orientation, contours, depth, and other properties of the visible surfaces.
c. The 3D model: object-centered; it is the representation of three-dimensional shape used for processing and recognizing objects.
The Semantic Gap
Semantic gap (Wikipedia): in CBIR, the "semantic gap" is the distance between low-level and high-level retrieval needs, caused by the mismatch between the visual information a computer extracts from an image and the semantic information in the user's understanding of that image. The sensory gap is the gap between an object in the real world and the (computational) description recorded from the scene. The semantic gap is the lack of coincidence between the information that can be extracted from the visual data and the interpretation that the same data have for a user in a given situation.
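To make the three levels a little more concrete, here is a minimal, purely illustrative Python sketch of a Marr-style pipeline. Only the first stage (an edge detector standing in for the primal sketch) is a real operation; the 2.5D and 3D steps are crude placeholders (a distance transform and a coarse height field), not actual surface or shape reconstruction, and the synthetic test image is an assumption of this sketch.

```python
# Illustrative Marr-style pipeline: primal sketch -> 2.5D sketch -> 3D model.
# The last two stages are placeholders, not real reconstruction algorithms.
import numpy as np
import cv2

def primal_sketch(image_gray):
    """Level 1: describe local intensity changes (edges)."""
    return cv2.Canny(image_gray, 100, 200)

def two_and_a_half_d_sketch(edges):
    """Level 2: viewer-centered surface properties; here a crude stand-in,
    the distance from each pixel to the nearest edge."""
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

def three_d_model(surface_map):
    """Level 3: object-centered 3D shape; placeholder that treats the
    surface map as a height field sampled on a coarse grid."""
    h, w = surface_map.shape
    ys, xs = np.mgrid[0:h:16, 0:w:16]
    return np.stack([xs, ys, surface_map[::16, ::16]], axis=-1)

# Synthetic test image (a filled circle) so the sketch runs without files.
img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (128, 128), 60, 255, -1)

edges = primal_sketch(img)
surfaces = two_and_a_half_d_sketch(edges)
model = three_d_model(surfaces)
print(model.shape)   # coarse grid of (x, y, height) samples
```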
A Hierarchical Model of the Visual Nervous System
Human physiology has been studied for several hundred years, yet research on the visual nervous system is still at the stage of experiment and simulation; genuine lesion (blocking) experiments cannot be carried out. What current physiological research does show is that the visual nervous system exhibits layered and sparse characteristics, and from this a mapping can be drawn from the visual nervous system to a semantic description system (across the semantic gap).
From here, deep networks point out a direction for bridging the semantic gap: a CNN can, at an intuitive level, mimic the human visual system, and the "depth" of deep learning takes on real meaning.
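As a purely illustrative sketch of that intuition (not a model discussed in the article), the tiny PyTorch network below stacks convolutional layers so that each level re-describes the output of the previous one, loosely echoing the layered structure of the visual pathway; all layer sizes are arbitrary choices.

```python
# A minimal layered convolutional network in PyTorch; sizes are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low level: edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: compositions of edges
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # high level: class "semantics"

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

x = torch.randn(8, 1, 28, 28)      # a batch of fake 28x28 grayscale images
logits = TinyCNN()(x)
print(logits.shape)                # torch.Size([8, 10])
```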
(1): Deep Learning: Advancing the Dream of Artificial Intelligence
Original article: http://www.csdn.net/article/2013-05-29/2815479
Key words: shallow learning, deep learning
Shallow learning: an important characteristic of shallow models is the assumption that sample features are extracted by hand, from human experience, while the model itself is mainly responsible for classification or prediction. A shallow model is, roughly, a neural network with only one hidden layer. Provided the model is applied without error (assuming, say, that the internet company has hired machine learning experts), the quality of the features becomes the bottleneck of the whole system's performance, so experience plays a very important role.
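A minimal sketch of that shallow pipeline, assuming scikit-learn and a deliberately crude hand-designed feature (row and column means of the 8x8 digit images): the classifier is a single linear model, so whatever accuracy you get is dominated by how good the hand-crafted features are.

```python
# Shallow-learning pipeline: hand-crafted features + a simple classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def handcrafted_features(images):
    """A deliberately crude feature extractor: the mean intensity of each
    row and each column of the 8x8 digit image (16 numbers per image)."""
    row_means = images.mean(axis=2)
    col_means = images.mean(axis=1)
    return np.concatenate([row_means, col_means], axis=1)

digits = load_digits()
X = handcrafted_features(digits.images)
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)   # the "shallow" model: one linear layer
clf.fit(X_train, y_train)
print("accuracy with hand-crafted features:", clf.score(X_test, y_test))
```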
Deep learning: the case of Baidu's online learning.
DNNs and the story behind Microsoft's simultaneous interpretation: http://www.csdn.net/article/2013-06-09/2815737
"When we talk about AI, we mean a high level of abstraction. Deep Learning is one way of abstracting, but it is far from the whole story. The fact that a neural network can recognize animals does not mean it understands the world; I would even call it 'pattern recognition' rather than 'intelligence'," says Seide. "'Depth' matters for an intelligent system, but it is not all of intelligence. Speech recognition can be seen as a microcosm of the AI field, and DNNs are only one part of speech recognition technology; measured by lines of code, they are in fact only a very small part of the whole."
PS: this again reminds me of the philosophical debate over the "Chinese Room".
(2): A Hot Frontier in Machine Learning: Deep Learning
Machine learning frontier: http://elevencitys.com/?p=1854
Original link: http://blog.sina.com.cn/s/blog_46d0a3930101fswl.html
Since 2006, the field of machine learning has made breakthrough progress.
The Turing test, at least, no longer seems quite so out of reach. As for the technical means, this depends not only on cloud computing's ability to process big data in parallel, but also on an algorithm. That algorithm is Deep Learning. With the help of Deep Learning, humanity has at last found a way to tackle the age-old problem of handling "abstract concepts".
So academia is busy recruiting the masters of the field. Alex Smola joining CMU is one small episode against this backdrop. The remaining suspense is which universities the two heavyweights, Geoffrey Hinton and Yoshua Bengio, will ultimately join.
Geoffrey Hinton has moved through Cambridge and CMU and currently teaches at the University of Toronto; no doubt plenty of elite schools are trying to recruit him.
Yoshua Bengio's path is simpler: after receiving his doctorate from McGill University, he went to MIT for a postdoc with Mike Jordan, and he currently teaches at the University of Montreal.
The revolution ignited by Deep Learning is not only of great academic significance; it is also very close to money, extremely close. If the remaining technical problems are a mountain, then beyond that mountain lies a huge open-pit gold mine. Once the technical problems are solved, all that remains is to bring in the heavy forces of capital and commerce and stake out the territory.
So the major companies are massing their troops and watching intently. Google has split into two columns: the left column, led by Jeff Dean and Andrew Ng, focuses on breakthroughs in Deep Learning algorithms and applications (Introduction to Deep Learning: http://en.wikipedia.org/wiki/Deep_learning).
(3): Neuromorphic Engineering: A Stepping Stone for Artificial Intelligence
The goal of neuromorphic engineers: http://elevencitys.com/?p=6265
(This one is pasted here in full.)
Building computers with the three key characteristics of the human brain is the goal of neuromorphic engineers: low power, fault tolerance, and self-learning. The human brain draws about 20 W, and even that is only its "TDP"; everyday consumption is lower still. Fault tolerance: processing is parallel, which also means it is not exact but probabilistic. Self-learning: this is a system-level property spanning the entire perception-feedback-decision loop, and its complexity cannot be analyzed for now.
Here I would like to introduce the progress of neuromorphic engineering, a branch of engineering built on electronic devices. The main goal of this field is to emulate complex neuron networks and ion-channel dynamics in real time, using highly compact and power-efficient CMOS analog VLSI technology. Compared with traditional software-based computer modeling and simulation, this approach can be implemented at an extremely small size with low power requirements when used for large-scale, high-speed simulation of neurons. This special feature makes real computing applications possible, such as neuroprostheses, brain-machine interfaces, neurorobotics, machine learning, and so on. [1]
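As a toy illustration of what "emulating neuron dynamics" means, here is a leaky integrate-and-fire neuron simulated in plain NumPy; neuromorphic hardware does this with analog circuits rather than software, and every constant below is an arbitrary illustrative choice, not a value taken from the article or from any particular chip.

```python
# A minimal leaky integrate-and-fire (LIF) neuron in discrete time (Euler steps).
# All parameter values are arbitrary illustrative choices.
import numpy as np

dt = 1e-4          # time step (s)
tau = 20e-3        # membrane time constant (s)
v_rest = -65e-3    # resting potential (V)
v_thresh = -50e-3  # spike threshold (V)
v_reset = -70e-3   # reset potential (V)
r_m = 1e7          # membrane resistance (ohm)

t = np.arange(0.0, 0.5, dt)
i_in = 2e-9 * (t > 0.1)          # step input current of 2 nA after 100 ms
v = np.full_like(t, v_rest)
spikes = []

for k in range(1, len(t)):
    # Leaky integration of the input current toward the membrane potential.
    dv = (-(v[k - 1] - v_rest) + r_m * i_in[k - 1]) * (dt / tau)
    v[k] = v[k - 1] + dv
    if v[k] >= v_thresh:         # threshold crossing: emit a spike and reset
        spikes.append(t[k])
        v[k] = v_reset

print(f"{len(spikes)} spikes in 0.5 s of simulated time")
```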
A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change. Neuromorphic engineering is a new interdisciplinary discipline that takes inspiration from biology, physics, mathematics, computer science, and engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems. [2]
Our human brain has three distinct features: highly parallel processing, quick adaptability, and self-configuration. We now have a deep understanding of digital computers from top to bottom, from the operating system down to the hardware design. However, some "analog" computations, such as voice recognition and learning, are still not easy to implement on digital computers. In terms of accuracy and power efficiency, the mammalian brain is remarkably capable and hard to figure out. Since artificial intelligence was proposed in the last century, we have invested a great deal of research effort across many areas, such as computer science, physiology, and chemistry, to explain the brain. Yet it seems we know far more about the universe than about the brain; is that sad, or promising? The only thing we are sure of is that the brain does more than just information processing.
Thus engineers began to ask biology for help. But it is not easy to emulate such a large-scale computing machine, one with about 85 billion neurons. Neuromorphic engineering is an important and promising branch that may help us uncover the mysteries of the brain. Neural computing is characterized by high parallelism and adaptive learning, while being bad at exact arithmetic. As in conventional CMOS technology, interconnect placement is a tricky job in neuromorphic engineering. This field offers the potential to build machines whose very nature is learning.
The DARPA SyNAPSE program is an ongoing project to build electronic neuromorphic machine technology that scales to biological levels. It has reached several milestones since it was initiated in 2008. Its ultimate target is to recreate 10 billion neurons and 100 trillion synapses while consuming one kilowatt (about the same as a small electric heater) and occupying less than two liters of space. [3]
The initial phase of the SyNAPSE program developed nanometer-scale electronic synaptic components capable of adapting the connection strength between two neurons in a manner analogous to that seen in biological systems (Hebbian learning), and simulated the utility of these synaptic components in core microcircuits that support the overall system architecture.
Continuing efforts will focus on hardware development through the stages of microcircuit development, fabrication process development, single-chip system development, and multi-chip system development. In support of these hardware developments, the program seeks to develop increasingly capable architecture and design tools, very large-scale computer simulations of the neuromorphic electronic systems to inform the designers and validate the hardware prior to fabrication, and virtual environments for training and testing the simulated and hardware neuromorphic systems. [4]
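To make the "adapting the connection strength between two neurons ... (Hebbian learning)" idea concrete, the following NumPy snippet applies a generic rate-based Hebbian update with Oja-style normalization; it is a textbook toy for illustration, not the plasticity mechanism implemented in SyNAPSE hardware.

```python
# A tiny rate-based Hebbian weight update with Oja-style decay so the
# weights stay bounded; a generic textbook rule, not the SyNAPSE mechanism.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4,))   # synaptic weights onto one neuron
eta = 0.01                             # learning rate

for _ in range(1000):
    x = rng.random(4)                  # presynaptic firing rates in [0, 1)
    y = float(w @ x)                   # postsynaptic rate (linear neuron)
    # "Cells that fire together wire together", plus a decay term that
    # keeps the weight vector from growing without limit (Oja's rule).
    w += eta * y * (x - y * w)

print("learned weights:", np.round(w, 3))
```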
For more background, see: http://homes.cs.washington.edu/~diorio/Talks/InvitedTalks/Telluride99/
References:
[1] Rachmuth, Guy, et al. "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity." Proceedings of the National Academy of Sciences 108.49 (2011): E1266-E1274.
[2] http://en.wikipedia.org/wiki/Neuromorphic_engineering
[3] http://www.artificialbrains.com/darpa-synapse-program
[4] http://en.wikipedia.org/wiki/SyNAPSE
(4): A Final Appeal:
However things turn out, if one day AI truly finds the right model to build into programs, many of us still hope that our understanding of that model will exceed our understanding of ourselves. A black box means a lack of control, which inevitably leads to unforeseeable consequences, and that is something no one working in science wants to see.
You get back only as much as you put in, and you must put in that much to get it back; hoping to settle everything once and for all is a recipe for extinction.