All You Want to Know About Deep Learning

FAU Lecture Notes on Deep Learning
Corona was a huge challenge for many of us and affected our lives in a variety of ways. I have been teaching a class on Deep Learning at Friedrich-Alexander-University Erlangen-Nuremberg, Germany for several years now. This summer, our university decided to go “virtual” completely. Therefore, I started recording my lecture in short clips of 15 minutes each.
X-ray projections. Image by courtesy of Bastian Bier.

For a topic such as “Deep Learning”, you have to update the content of a lecture every semester. Therefore, I was not able to provide lecture notes so far. However, with video recordings and the help of automatic speech recognition, I was able to transcribe the entire lecture. This is why I decided to post a corresponding, manually corrected transcript for every video here on Medium. I was very glad that “Towards Data Science” published all of them in their esteemed publication. They even asked me to create a column “FAU Lecture Notes”. So, I’d like to take this opportunity to thank Towards Data Science for their great support of this project!
AlphaStar playing Serral. The full video can be found here. Image generated using gifify.

To streamline the creation of the blog posts, I created a small tool chain “autoblog” that I also make available free of charge. All content here is also released under CC BY 4.0 unless stated otherwise. So, you are also free to reuse this content.
In the following, I list the individual posts grouped by chapter with a link to the respective videos. In case you prefer video, you can also watch the entire lecture as a playlist. Note that I upgraded my recording equipment twice this semester. You should see the video quality improve from Chapter 7 — Architectures and again from Chapter 9 — Visualization & Attention.
So, I hope you find these posts and videos useful. In case you like them, please leave a comment, or recommend this project to your friends.
Chapter 1 — Introduction

Example sequence showing Yolo’s capabilities. The full sequence can be found here. Image generated using gifify.

In these videos, we introduce the topic of Deep Learning and show some highlights in terms of literature and applications.

Part 1: Motivation & High Profile Applications (Video)
Part 2: Highlights at FAU (Video)
Part 3: Limitations of Deep Learning and Future Directions (Video)
Part 4: A short course in Pattern Recognition (Video)
Part 5: Exercises & Outlook (Video)
Chapter 2 — Feedforward Networks

Image under CC BY 4.0 from the Deep Learning Lecture.

Here, we present the basics of pattern recognition, and simple feedforward networks including the concept of layer abstraction.

Part 1: Why do we need Deep Learning? (Video)
Part 2: How can Networks actually be trained? (Video)
Part 3: The Backpropagation Algorithm (Video)
Part 4: Layer Abstraction (Video)
Chapter 3 — Loss & Optimization

Image under CC BY 4.0 from the Deep Learning Lecture.

Some background on loss functions and the relation of deep learning to classical methods such as the Support Vector Machine (SVM).

Part 1: Classification and Regression Losses (Video)
Part 2: Do SVMs beat Deep Learning? (Video)
Part 3: Optimization with ADAM and beyond… (Video)
Chapter 4 — Activations, Convolution & Pooling

Image under CC BY 4.0 from the Deep Learning Lecture.

In this chapter, we discuss classical activation functions, modern versions, the concept of convolutional layers, as well as pooling mechanisms.

Part 1: Classical Activations (Video)
Part 2: Modern Activations (Video)
Part 3: Convolutional Layers (Video)
Part 4: Pooling Mechanisms (Video)
Chapter 5 — Regularization

Image under CC BY 4.0 from the Deep Learning Lecture.

This chapter looks into the problem of overfitting and discusses several common methods to avoid it.

Part 1: The Bias-Variance Trade-off (Video)
Part 2: Classical Techniques (Video)
Part 3: Normalization & Dropout (Video)
Part 4: Initialization & Transfer Learning (Video)
Part 5: Multi-task Learning (Video)
Chapter 6 — Common Practices

Image under CC BY 4.0 from the Deep Learning Lecture.

This chapter is dedicated to common problems that you will face in practice, ranging from hyperparameters to performance evaluation and significance testing.

Part 1: Optimizers & Learning Rates (Video)
Part 2: Hyperparameters and Ensembling (Video)
Part 3: Class Imbalance (Video)
Part 4: Performance Evaluation (Video)
Chapter 7 — Architectures

Image under CC BY 4.0 from the Deep Learning Lecture.

In this chapter, we present the most common and popular architectures in deep learning.

Part 1: From LeNet to GoogLeNet (Video)
Part 2: Deeper Architectures (Video)
Part 3: Residual Networks (Video)
Part 4: The Rise of the Residual Connections (Video)
Part 5: Learning Architectures (Video)
Chapter 8 — Recurrent Neural Networks

Image under CC BY 4.0 from the Deep Learning Lecture.

Recurrent neural networks allow the processing and generation of time-dependent data.

Part 1: The Elman Cell (Video)
Part 2: Backpropagation through Time (Video)
Part 3: A Tribute to Schmidhuber — LSTMs (Video)
Part 4: Gated Recurrent Units (Video)
Part 5: Sequence Generation (Video)
Chapter 9 — Visualization & Attention

Image under CC BY 4.0 from the Deep Learning Lecture.

Visualization methods are used to explore weaknesses of deep nets and to provide better ways of understanding them.

Part 1: Architecture & Training Visualization (Video)
Part 2: Confounders & Adversarial Attacks (Video)
Part 3: Direct Visualization Methods (Video)
Part 4: Gradient and Optimisation-based Methods (Video)
Part 5: Attention Mechanisms (Video)
Chapter 10 — Reinforcement Learning

Agent-based organ segmentation. Image by courtesy of Xia Zhong.

Reinforcement learning allows training of agent systems that can act on their own and control games and processes.

Part 1: Sequential Decision Making (Video)
Part 2: Markov Decision Processes (Video)
Part 3: Policy Iteration (Video)
Part 4: Alternative Approaches (Video)
Part 5: Deep Q-Learning (Video)
Chapter 11 — Unsupervised Learning

Image under CC BY 4.0 from the Deep Learning Lecture.

Unsupervised learning does not require training data and can be used to generate new observations.

Part 1: Motivation & Restricted Boltzmann Machines (Video)
Part 2: Autoencoders (Video)
Part 3: Generative Adversarial Networks — The Basics (Video)
Part 4: Conditional & Cycle GANs (Video)
Part 5: Advanced GAN Methods (Video)
Chapter 12 — Segmentation & Object Detection

Detecting mitoses on a histological slice image is a classical detection task. Image courtesy of Marc Aubreville. Access full video here.

Segmentation and detection are common problems in which deep learning is used.

Part 1: Segmentation Basics (Video)
Part 2: Skip Connections & More (Video)
Part 3: A Family of Regional CNNs (Video)
Part 4: Single Shot Detectors (Video)
Part 5: Instance Segmentation (Video)
Chapter 13 — Weakly and Self-supervised Learning

Image under CC BY 4.0 from the Deep Learning Lecture.

Weak supervision tries to minimize the required label effort while self-supervision tries to get rid of labels completely.

Part 1: From Class to Pixels (Video)
Part 2: From 2-D to 3-D Annotations (Video)
Part 3: Self-Supervised Labels (Video)
Part 4: Contrastive Losses (Video)
Chapter 14 — Graph Deep Learning

Image under CC BY 4.0 from the Deep Learning Lecture.

Graph deep learning is used to process data available in graphs and meshes.

Part 1: Spectral Convolutions (Video)
Part 2: From Spectral to Spatial Domain (Video)
Chapter 15 — Known Operator Learning

Image under CC BY 4.0 from Twitter.

Known operators allow the insertion of prior knowledge into deep networks, reducing the number of unknown parameters and improving generalization properties of deep networks.

Part 1: Don’t re-invent the Wheel (Video)
Part 2: Boundaries on Learning (Video)
Part 3: CT Reconstruction Revisited (Video)
Part 4: Deep Design Patterns (Video)
Acknowledgements
Many thanks to Katharina Breininger, Weilin Fu, Tobias Würfl, Vincent Christlein, Florian Thamm, Felix Denzinger, Florin Ghesu, Yan Xia, Yixing Huang, Christopher Syben, Marc Aubreville, and all our student tutors for their support in this and the last semesters, for creating these slides and corresponding exercises, teaching the class in presence and virtually, and the great team work over the last few years!
In case you are not a subscriber to Medium and have trouble accessing the material, we also host all of the blog posts on the Pattern Recognition Lab’s website.
If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures try AutoBlog.
Translated from: https://towardsdatascience.com/all-you-want-to-know-about-deep-learning-8d68dcffc258