Translation of the original: Deep Learning Quiz (Course 1, Week 4 Quiz)
Introduction
This article is a translation of the quiz assignments from the deeplearning.ai Deep Learning Specialization; all five courses will be translated over time.
Translator: Huang Haiguang (黄海广)
This installment covers Course 1, Week 4:
Course 1: Neural Networks and Deep Learning
Week 4 Quiz - Key concepts on Deep Neural Networks
1. What is the "cache" used for in our implementation of forward propagation and backward propagation?
【 】It is used to cache the intermediate values of the cost function during training.
【★】We use it to pass variables computed during forward propagation to the corresponding backward propagation step. It contains useful values for backward propagation to compute derivatives.
【 】It is used to keep track of the hyperparameters that we are searching over, to speed up computation.
【 】We use it to pass variables computed during backward propagation to the corresponding forward propagation step. It contains useful values for forward propagation to compute activations.
Note: The "cache" records values from the forward propagation units and sends them to the backward propagation units, where they are needed to compute the chain-rule derivatives.
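A minimal sketch of this idea, in the spirit of the course's programming assignment (the function names and the exact cache contents are illustrative, not the official solution):

```python
import numpy as np

def linear_forward(A_prev, W, b):
    # The forward step stores the values it will need again during backprop.
    Z = np.dot(W, A_prev) + b
    cache = (A_prev, W, b)          # the "cache" handed to the backward step
    return Z, cache

def linear_backward(dZ, cache):
    # The backward step reuses the cached forward values to apply the chain rule.
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db
```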
2. Among the following, which ones are "hyperparameters"? (Check all that apply; only the correct options are listed.)
【★】size of the hidden layers $n^{[l]}$
【★】learning rate $\alpha$
【★】number of iterations
【★】number of layers $L$ in the neural network
Note: You can check this Quora post or this blog post.
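For concreteness, hyperparameters are the knobs set before training (or tuned by search), as opposed to the parameters $W^{[l]}$, $b^{[l]}$ that are learned by gradient descent. A purely illustrative sketch of such a configuration (all names and values here are assumptions, not from the quiz):

```python
# Illustrative values only; these are choices we make, not quantities the network learns.
hyperparameters = {
    "learning_rate": 0.0075,             # alpha
    "num_iterations": 2500,
    "layer_dims": [12288, 20, 7, 5, 1],  # number of layers L and the hidden-layer sizes n[l]
}
```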
3. Which of the following statements is true?
【★】The deeper layers of a neural network are typically computing more complex features of the input than the earlier layers.
【 】The earlier layers of a neural network are typically computing more complex features of the input than the deeper layers.
Note: You can check the lecture videos. I think Andrew used a CNN example to explain this.
4. Vectorization allows you to compute forward propagation in an $L$-layer neural network without an explicit for-loop (or any other explicit iterative loop) over the layers l = 1, 2, …, L. True/False?
【 】True
【★】False
Note: We cannot avoid the for-loop iteration over the computations across the layers, as sketched below.
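A minimal sketch of why (assuming sigmoid activations throughout just for brevity; the function and parameter names are illustrative): vectorization removes the loop over the m training examples, but layer l needs the activations of layer l-1, so the loop over layers remains.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def L_layer_forward(X, parameters, L):
    A = X                                      # A^[0] = X, shape (n_x, m)
    for l in range(1, L + 1):                  # this loop over the layers cannot be avoided
        W = parameters["W" + str(l)]
        b = parameters["b" + str(l)]
        Z = np.dot(W, A) + b                   # vectorized over all m examples at once
        A = sigmoid(Z)
    return A
```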
5. Assume we store the values for $n^{[l]}$ in an array called layer_dims, as follows: layer_dims = [$n_x$, 4, 3, 2, 1]. So layer 1 has four hidden units, layer 2 has 3 hidden units, and so on. Which of the following for-loops will allow you to initialize the parameters for the model? (Only the correct option is listed.)

    for i in range(1, len(layer_dims)):
        parameter['W' + str(i)] = np.random.randn(layer_dims[i], layer_dims[i - 1]) * 0.01
        parameter['b' + str(i)] = np.random.randn(layer_dims[i], 1) * 0.01

6. Consider the following neural network. Which of the following statements is true? (Only the correct option is listed.)
【★】The number of layers $L$ is 4. The number of hidden layers is 3.
Note: The input layer ($a^{[0]}$) does not count.
As seen in lecture, the number of layers is counted as the number of hidden layers + 1. The input and output layers are not counted as hidden layers.
7. During forward propagation, in the forward function for layer $l$ you need to know what the activation function in that layer is (sigmoid, tanh, ReLU, etc.). During backpropagation, the corresponding backward function also needs to know what the activation function for layer $l$ is, since the gradient depends on it. True/False?
【★】True
【 】False
Note: During backpropagation you need to know which activation was used in the forward propagation to be able to compute the correct derivative.
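A minimal sketch (the function name and signature are illustrative): the backward step computes dZ = dA * g'(Z), and the derivative g' differs per activation, so the backward function must know which g the forward pass used.

```python
import numpy as np

def activation_backward(dA, Z, activation):
    # dZ = dA * g'(Z); g' depends on which activation g was used in the forward pass.
    if activation == "sigmoid":
        s = 1.0 / (1.0 + np.exp(-Z))
        return dA * s * (1 - s)               # g'(Z) = s * (1 - s)
    if activation == "tanh":
        return dA * (1 - np.tanh(Z) ** 2)     # g'(Z) = 1 - tanh(Z)^2
    if activation == "relu":
        dZ = np.array(dA, copy=True)
        dZ[Z <= 0] = 0                        # g'(Z) = 0 for Z <= 0, 1 otherwise
        return dZ
    raise ValueError("unknown activation: " + activation)
```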
8. There are certain functions with the following properties:
(i) To compute the function using a shallow network circuit, you will need a large network (where we measure size by the number of logic gates in the network), but (ii) to compute it using a deep network circuit, you need only an exponentially smaller network. True/False?
【★】True
【 】False
Note: See the lectures; exactly the same idea is explained there.
9. Consider the following neural network with 2 hidden layers. Which of the following statements are true? (Check all that apply; only the correct options are listed.)
【★】$W^{[1]}$ will have shape (4, 4)
【★】$b^{[1]}$ will have shape (4, 1)
【★】$W^{[2]}$ will have shape (3, 4)
【★】$b^{[2]}$ will have shape (3, 1)
【★】$b^{[3]}$ will have shape (1, 1)
【★】$W^{[3]}$ will have shape (1, 3)
Note: See the general shape formulas after question 10 below; a quick shape check is also sketched here.
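A quick way to verify these shapes, assuming (as in the quiz figure) 4 input features, hidden layers of 4 and 3 units, and a single output unit:

```python
import numpy as np

layer_dims = [4, 4, 3, 1]   # n^[0] = 4 inputs, n^[1] = 4, n^[2] = 3, n^[3] = 1
parameters = {}
for l in range(1, len(layer_dims)):
    parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
    parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))

for name in sorted(parameters):
    print(name, parameters[name].shape)
# W1 (4, 4), W2 (3, 4), W3 (1, 3), b1 (4, 1), b2 (3, 1), b3 (1, 1)
```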
10. Whereas the previous question used a specific network, in the general case what is the dimension of $W^{[l]}$, the weight matrix associated with layer $l$? (Only the correct option is listed.)
【★】$W^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$
Note: The general shape formulas are summarized below.
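For reference, the general shape formulas from the lecture, for layer $l$ with $n^{[l]}$ units and $m$ training examples, are:

$$W^{[l]},\, dW^{[l]}: (n^{[l]}, n^{[l-1]}) \qquad b^{[l]},\, db^{[l]}: (n^{[l]}, 1) \qquad Z^{[l]},\, A^{[l]},\, dZ^{[l]},\, dA^{[l]}: (n^{[l]}, m)$$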