You Should Be Aware of These (Common) Deep Learning Terms and Terminologies
Introduction
I’ve recently gone through a set of machine learning-based projects presented in Jupyter notebooks and have noticed that a set of terms and terminologies recurs in all the notebooks and machine learning-based projects I’ve worked on or reviewed.
You can see this article as a way of cutting through some of the noise within machine learning and deep learning. Expect to find descriptions and explanations of terms and terminologies that you are bound to come across in the majority of deep learning-based projects.
I cover the definitions of terms and terminologies associated with the following subject areas in a machine learning project:
Datasets
Convolutional Neural Network Architecture
Techniques
Hyperparameters
1. Datasets
Training Dataset: This is the portion of our dataset used to train the neural network directly. Training data refers to the dataset partition exposed to the neural network during training.
Validation Dataset: This partition of the dataset is utilized during training to assess the performance of the network at various iterations.
Test Dataset: This partition of the dataset evaluates the performance of our network after the completion of the training phase.
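As a rough illustration, a dataset can be carved into these three groups with two successive splits. Below is a minimal sketch assuming scikit-learn is available; the split ratios and the `features`/`labels` arrays are hypothetical placeholders, not values from any particular project.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: 1000 samples, 20 features each.
features = np.random.rand(1000, 20)
labels = np.random.randint(0, 2, size=1000)

# First split: hold out 20% of the data as the test set.
x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42)

# Second split: carve a validation set out of the remaining training data.
# 0.25 of the remaining 80% is 20% of the original dataset.
x_train, x_val, y_train, y_val = train_test_split(
    x_train, y_train, test_size=0.25, random_state=42)
```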
2. Convolutional Neural Networks
Convolutional layer: A convolution is a mathematical term that describes a dot product between two sets of elements. Within deep learning, the convolution operation acts on the filters/kernels and the image data array within the convolutional layer. A convolutional layer therefore simply houses the convolution operation that occurs between the filters and the images passed through a convolutional neural network.
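To make the dot product concrete, here is a minimal NumPy sketch of a single convolution step: a 3x3 filter is multiplied element-wise with a 3x3 patch of the image, and the products are summed into one value of the output feature map. The array values are made up purely for illustration.

```python
import numpy as np

# A 3x3 patch of image data and a 3x3 filter/kernel (arbitrary values).
patch = np.array([[1, 0, 2],
                  [3, 1, 0],
                  [0, 2, 1]])
kernel = np.array([[ 1, 0, -1],
                   [ 1, 0, -1],
                   [ 1, 0, -1]])

# The convolution at this position: element-wise multiply, then sum.
output_value = np.sum(patch * kernel)
print(output_value)  # one value in the output feature map
```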
Batch Normalization layer: Batch normalization is a technique that mitigates the effect of unstable gradients within a neural network by introducing an additional layer that performs operations on the inputs from the previous layer. These operations standardize and normalize the input values; the normalized values are then transformed through scaling and shifting operations.
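Those two steps, normalization followed by scaling and shifting, can be sketched in NumPy as follows. Here `gamma` and `beta` stand in for the layer's scale and shift parameters; in a real network they are learned during training rather than fixed constants.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of inputs, then scale and shift them."""
    mean = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                       # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # standardized inputs
    return gamma * x_hat + beta               # scaling and shifting

batch = np.random.rand(32, 10)                # 32 samples, 10 features
normalized = batch_norm(batch)
```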
MaxPooling layer: Max pooling is a variant of sub-sampling in which the maximum pixel value of the pixels that fall within the receptive field of a unit in the sub-sampling layer is taken as the output. A typical max-pooling operation has a 2x2 window that slides across the input data, outputting the maximum of the pixels within the receptive field of the kernel.
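A minimal NumPy sketch of that 2x2 max-pooling operation, using a made-up 4x4 input:

```python
import numpy as np

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [3, 4, 0, 8]])

# Slide a 2x2 window with stride 2 and keep the maximum in each window.
pooled = np.array([[x[i:i+2, j:j+2].max()
                    for j in range(0, 4, 2)]
                   for i in range(0, 4, 2)])
print(pooled)
# [[6 4]
#  [7 9]]
```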
Flatten layer: Takes an input shape and flattens the input image data into a one-dimensional array.
Dense Layer: A dense layer contains an arbitrary number of embedded units/neurons. Each neuron is a perceptron.
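To show how these layers fit together, here is a minimal sketch of a small convolutional network, assuming the Keras API from TensorFlow; the layer sizes and the 28x28 single-channel input shape are arbitrary choices for illustration.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolutional layer: 32 filters of size 3x3 convolved over the input image.
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    # Batch normalization: standardize, then scale and shift the activations.
    layers.BatchNormalization(),
    # Max pooling: keep the maximum value in each 2x2 receptive field.
    layers.MaxPooling2D(pool_size=(2, 2)),
    # Flatten: collapse the feature maps into a one-dimensional array.
    layers.Flatten(),
    # Dense layer: fully connected units, each acting as a perceptron.
    layers.Dense(10, activation='softmax'),
])
model.summary()
```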
3. Techniques
Activation Function: A mathematical operation that transforms the result or signals of neurons into a normalized output. The purpose of an activation function as a component of a neural network is to introduce non-linearity within the network. The inclusion of an activation function gives the neural network greater representational power and the ability to model complex functions.
Rectified Linear Unit Activation Function (ReLU): A type of activation function that transforms the values of a neuron. The transformation imposed by ReLU on values from a neuron is represented by the formula y = max(0, x). The ReLU activation function clamps any negative values from the neuron to 0, while positive values remain unchanged. The result of this mathematical transformation is utilized as the output of the current layer and used as input to a consecutive layer within the neural network.
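The formula y = max(0, x) translates directly to one line of NumPy:

```python
import numpy as np

def relu(x):
    # Negative values are clamped to 0; positive values pass through unchanged.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # [0.  0.  0.  1.5 3. ]
```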
Softmax Activation Function: A type of activation function that is utilized to derive the probability distribution of a set of numbers within an input vector. The output of a softmax activation function is a vector whose values represent the probability of an occurrence of each class or event. The values within the vector all add up to 1.
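A minimal NumPy sketch of softmax; subtracting the maximum before exponentiating is a standard numerical-stability trick and does not change the result.

```python
import numpy as np

def softmax(x):
    # Subtracting the max improves numerical stability; the output is unchanged.
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)        # approximately [0.659 0.242 0.099]
print(probs.sum())  # 1.0
```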
Dropout: The dropout technique works by randomly reducing the number of interconnecting neurons within a neural network. At every training step, each neuron has a chance of being left out, or rather, dropped out of the collated contributions from connected neurons.
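As a sketch of one common formulation (inverted dropout, applied only at training time), each activation is zeroed with probability `p`, and the survivors are rescaled so the expected contribution to the next layer stays the same. The drop probability and array shapes here are hypothetical.

```python
import numpy as np

def dropout(activations, p=0.5):
    """Inverted dropout: randomly zero activations with probability p."""
    # Each unit survives with probability (1 - p).
    mask = np.random.rand(*activations.shape) > p
    # Scale the survivors so the expected activation magnitude is unchanged.
    return activations * mask / (1 - p)

layer_output = np.random.rand(4, 8)   # hypothetical activations from a layer
dropped = dropout(layer_output, p=0.5)
```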
4. Hyperparameters
Loss function: A method that quantifies ‘how well’ a machine learning model performs. The quantification is an output (cost) based on a set of inputs, which are referred to as parameter values. The parameter values are used to estimate a prediction, and the ‘loss’ is the difference between the predictions and the actual values.
Optimization Algorithm: An optimizer within a neural network is an algorithmic implementation that facilitates gradient descent within the network by minimizing the loss values provided via the loss function. To reduce the loss, it is paramount that the values of the weights within the network are selected appropriately.
Learning Rate: An integral component of a neural network's implementation, as it is a scalar factor that determines the magnitude of the updates made to the values of the network's weights. The learning rate is a type of hyperparameter.
Epoch: This is a numeric value that indicates the number of times a network has been exposed to all the data points within the training dataset.
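These four terms (loss function, optimizer, learning rate, and epochs) all appear together when a Keras model is compiled and trained. A minimal sketch, assuming `model`, `x_train`, `y_train`, `x_val`, and `y_val` are defined as in the earlier hypothetical examples, with values appropriate for your data:

```python
from tensorflow.keras import optimizers

model.compile(
    optimizer=optimizers.Adam(learning_rate=0.001),  # learning rate: magnitude of weight updates
    loss='sparse_categorical_crossentropy',          # loss function: quantifies prediction error
    metrics=['accuracy'],
)

# epochs=10: the network is exposed to every training data point ten times;
# the validation set monitors performance during training.
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=10)
```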
Conclusion
There are obviously tons more terms and terminologies that you are bound to come across as you undertake and complete machine learning projects.
In future articles, I’ll probably expand on more complex concepts within machine learning that appear frequently.
Feel free to save the article or share it with machine learning practitioners who are at the start of their learning journey or career.
I hope you found the article useful.
To connect with me or find more content similar to this article, do the following:
Subscribe to my email list for weekly newsletters
Follow me on Medium
Connect and reach me on LinkedIn
Source: https://towardsdatascience.com/you-should-be-aware-of-these-common-deep-learning-terms-and-terminologies-26e0522fb88b