Deep Learning: Artificial Neural Network (ANN)
Building your first neural network in less than 30 lines of code.
1. What is Deep Learning?
Deep learning is the branch of AI that learns features directly from data without any human intervention, and that data can be unstructured and unlabeled.
1.1 Why deep learning?
Traditional ML techniques become insufficient as the amount of data increases. Until the last decade, a model's success relied heavily on feature engineering, and such models fell under the category of machine learning. Deep learning models, in contrast, find these features automatically from the raw data.
1.2 Machine learning vs Deep learning
ML vs DL (Source: https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners)
2. What is an Artificial Neural Network?
2.1 Structure of a neural network:
In a neural network there is at least one hidden layer between the input and output layers. The hidden layers do not see the inputs directly. The word "deep" is a relative term that refers to how many hidden layers a neural network has.
When counting the layers of a network, the input layer is ignored. For example, the network in the picture below is a 3-layer neural network because, as mentioned, the input layer is not counted.
Layers in an ANN:
1. Dense or fully connected layers
2. Convolution layers
3. Pooling layers
4. Recurrent layers
5. Normalization layers
6. Many others
Different layers perform different types of transformations on the input. A convolution layer is mainly used to perform convolution operations when working with image data. A recurrent layer is used when working with time-series data. A dense layer is a fully connected layer. In a nutshell, each layer has its own characteristics and is used to perform a specific task.
Structure of a neural network (Source: https://www.gabormelli.com/RKB/Neural_Network_Hidden_Layer)
2.2 Structure of a 2-layer neural network:
Structure of a 2-layer neural network (Source: https://ibb.co/rQmCkqG)
Input layer: Each node in the input layer represents an individual feature from each sample in our data set that will be passed to the model.
Hidden layer: Consider the connections between the input layer and the hidden layer. Each of these connections transfers the output of the previous unit as input to the receiving unit, and each connection has its own assigned weight. Each input is multiplied by its weight, and the output is an activation function applied to the weighted sum of the inputs.
To recap, we have a weight assigned to each connection, and we compute the weighted sum of the connections that point to the same neuron (node) in the next layer. That sum is passed through an activation function that transforms the output to a number, for example between 0 and 1, which is then passed on to the neurons of the next layer. This process repeats until the output layer is reached.
Let's consider part 1, the connections between the input layer and the hidden layer, as in the figure above. Here the activation function we are using is the tanh function.
Z1 = W1 X + b1
A1 = tanh(Z1)
Let's consider part 2, the connections between the hidden layer and the output layer, as in the figure above. Here the activation function we are using is the sigmoid function.
Z2 = W2 A1 + b2
A2 = σ(Z2)
During this process the weights change continuously, moving toward optimized weights for each connection as the model keeps learning from the data.
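To make the forward pass above concrete, here is a minimal NumPy sketch of the two steps. The layer sizes and random weights are illustrative assumptions, not values from this article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes chosen only for illustration: 3 input features,
# 4 hidden units, 1 output unit.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1))                      # one sample with 3 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))

# Part 1: input layer -> hidden layer
Z1 = W1 @ X + b1
A1 = np.tanh(Z1)

# Part 2: hidden layer -> output layer
Z2 = W2 @ A1 + b2
A2 = sigmoid(Z2)                                 # a number between 0 and 1
print(A2)
```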
Output layer: If it is a binary classification problem, such as classifying cats versus dogs, the output layer has 2 neurons. In general, the output layer has one neuron for each possible outcome or category of outcomes.
Please note that the number of neurons in a hidden layer is a hyperparameter, like the learning rate.
3. Building your first neural network with Keras in less than 30 lines of code
3.1 What is Keras?
There are a lot of deep learning frameworks. Keras is a high-level API written in Python that runs on top of popular frameworks such as TensorFlow and Theano, giving the machine learning practitioner a layer of abstraction that reduces the inherent complexity of writing neural networks.
3.2 Time to work on GPU:
In this tutorial we will be using Keras with the TensorFlow backend. We will use pip commands to install both in an Anaconda environment.
· pip3 install Keras
· pip3 install Tensorflow
Make sure that you set up the GPU runtime if you are using Google Colab.
Google Colab GPU activation
We are using the MNIST data set in this tutorial. The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
We are importing the necessary modules.
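A minimal sketch of the imports this step refers to, assuming the standalone keras package with a TensorFlow backend (with tensorflow.keras the import paths would differ slightly):

```python
# Keras with the TensorFlow backend, plus the built-in MNIST loader
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
```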
Loading the data set as training and test sets.
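A sketch of the loading step, using the Keras helper for MNIST:

```python
# Returns NumPy arrays: 60,000 training images and 10,000 test images,
# each 28 x 28 pixels, together with their integer labels (0-9).
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape)   # (60000, 28, 28)
print(test_images.shape)    # (10000, 28, 28)
```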
Now, with our training and test data, we are ready to build our neural network.
In this example we will be using dense layers. A dense layer is nothing but a fully connected layer, which means each neuron receives input from all the neurons in the previous layer. The shape of our input is [60000, 28, 28], that is, 60,000 images with a pixel height and width of 28 x 28.
784 and 10 refer to the dimension of each layer's output space, which becomes the number of inputs to the subsequent layer. We are solving a classification problem with 10 possible categories (the digits 0 to 9), hence the final layer has an output of 10 units.
The activation function can be of different types; relu is the most widely used. In the output layer we are using softmax here.
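Putting the last two paragraphs together, here is a minimal sketch of the model described above. The layer sizes (784 hidden units, 10 output units) follow the text, and the input shape of 28*28 assumes the images are flattened as in the reshaping step further below.

```python
# A simple fully connected network: one hidden dense layer with 784 units
# and relu, and a 10-unit softmax output layer for the 10 digit classes.
model = Sequential()
model.add(Dense(784, activation='relu', input_shape=(28 * 28,)))
model.add(Dense(10, activation='softmax'))
model.summary()
```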
As our neural network is now defined, we compile it with the adam optimizer, the categorical_crossentropy loss function, and accuracy as the metric. These can be changed based on your needs.
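A sketch of the compile step as described:

```python
# Configure the learning process: optimizer, loss function and reported metric
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```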
AIWA!!! You have just built your first neural network.
You may have questions about the terms we used while building the model, such as relu, softmax, and adam. These require in-depth explanations, so I suggest you read the book Deep Learning with Python by Francois Chollet, which inspired this tutorial.
We reshape our data set, keeping the split of 60,000 training images and 10,000 test images.
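A sketch of the reshaping step. Flattening each 28 x 28 image into a 784-value vector and scaling pixel values to [0, 1] is the usual preparation for a dense network; the scaling is a common assumption rather than something spelled out in the text.

```python
# Flatten each 28 x 28 image into a 784-dimensional float vector in [0, 1]
train_images = train_images.reshape((60000, 28 * 28)).astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28)).astype('float32') / 255
```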
We will use categorical encoding on the labels so that they match the 10 output categories of the network and can be used in the numerical operations of training.
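A sketch of the encoding step, using the Keras helper:

```python
# One-hot encode the integer labels, e.g. 5 -> [0,0,0,0,0,1,0,0,0,0]
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```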
Our data set is split into train and test, our model is compiled, and the data is reshaped and encoded. The next step is to train our neural network (NN).
Here we pass the training images and training labels as well as the number of epochs. One epoch is one forward and backward pass of the entire data set through the neural network. The batch size is the number of samples that propagate through the neural network at a time.
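A sketch of the training call; 5 epochs and a batch size of 128 are the values used in the book that inspired this tutorial, and should be treated as illustrative assumptions:

```python
# Train for a few passes over the data, 128 samples per gradient update
model.fit(train_images, train_labels, epochs=5, batch_size=128)
```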
Finally, we measure the performance of our model on the test set. You should get a test accuracy of around 98%, which means the model predicted the correct digit about 98 percent of the time on the test data.
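A sketch of the evaluation step:

```python
# Evaluate on the held-out test set
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('test accuracy:', test_acc)   # around 0.98
```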
This is a first look at a neural network. It is not the end, just a beginning, before we take a deep dive into the different aspects of neural networks. You have just taken the first step of a long and exciting journey.
Stay focused, keep learning, stay curious.
“Don’t take rest after your first victory because if you fail in second, more lips are waiting to say that your first victory was just luck.” — Dr APJ Abdul Kalam
Reference: Deep Learning with Python, François Chollet, ISBN 9781617294433
Stay connected — https://www.linkedin.com/in/arun-purakkatt-mba-m-tech-31429367/
Translated from: https://medium.com/analytics-vidhya/deep-learning-artificial-neural-network-ann-13b54c3f370f