Analysis of the Net Header in the Caffe Source Code
The Caffe source code (commit 09868ac, dated 2015-08-15) contains several important header files. This article walks through the contents of include/caffe/net.hpp:
1. Included headers:
(1) <caffe/blob.hpp>: an introduction to this file is available at: http://blog.csdn.net/fengbingchun/article/details/59106613
(2) <caffe/common.hpp>: an introduction to this file is available at: http://blog.csdn.net/fengbingchun/article/details/54955236
(3) <caffe/layer.hpp>: an introduction to this file is available at: http://blog.csdn.net/fengbingchun/article/details/60871052
(4) <caffe/proto/caffe.pb.h>: an introduction to this file is available at: http://blog.csdn.net/fengbingchun/article/details/55267162
(5) <caffe/layer_factory.hpp>: an introduction to this file is available at: http://blog.csdn.net/fengbingchun/article/details/54310956
2. The Net class:
Through the composition of functions and auto-differentiation, the net defines both a function and its gradient: the function is computed by composing the outputs of each layer to carry out a given task, and the gradient flowing from the loss is computed by composing the backward pass of each layer, which is how the task is learned. Caffe models are end-to-end machine learning engines.
A Net is a directed acyclic graph (DAG) of layers. Caffe keeps all of the intermediate values in the graph to guarantee the correctness of the forward and backward passes. A typical Net begins with a data layer, which loads data from disk, and ends with a loss layer, which computes the objective for tasks such as classification or reconstruction.
A Net is a set of layers and the connections between them, described in a plain-text modeling language (protobuf); the protobuf file specifies how the whole Net is composed of layers.
Building a network in Caffe is device-independent. Once the network has been built, a single call to Caffe::set_mode() selects whether it runs on the CPU or on the GPU; switching between CPU and GPU is seamless and independent of the model definition.
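As a minimal sketch of this switch (the model file name deploy.prototxt is hypothetical and assumed to be a deploy-style definition with declared inputs), the same net can be run in both modes:

#include <caffe/caffe.hpp>

// Sketch only: the same model definition runs first on the CPU and then on the
// GPU; nothing in the prototxt changes, only the mode. "deploy.prototxt" is a
// placeholder path.
void run_on_cpu_then_gpu()
{
    caffe::Caffe::set_mode(caffe::Caffe::CPU);                   // CPU mode
    caffe::Net<float> cpu_net("deploy.prototxt", caffe::TEST);
    cpu_net.ForwardPrefilled();                                  // forward pass on the CPU

    caffe::Caffe::set_mode(caffe::Caffe::GPU);                   // switch to GPU mode
    caffe::Caffe::SetDevice(0);                                  // use GPU 0
    caffe::Net<float> gpu_net("deploy.prototxt", caffe::TEST);
    gpu_net.ForwardPrefilled();                                  // same forward pass on the GPU
}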
The forward pass computes the output for a given input to be inferred. In the forward pass, Caffe composes the computation of every layer to obtain the "function" computed by the whole model. This pass proceeds from bottom to top.
The backward pass computes the gradient from the loss in order to learn. In the backward pass, Caffe composes the gradient of every layer, via automatic differentiation and in reverse order, to compute the gradient of the whole network; this is the essence of back-propagation. This pass proceeds from top to bottom.
The backward pass begins at the loss and computes the gradient with respect to the output. The gradients of the rest of the model are then computed layer by layer through the chain rule. Layers with parameters also compute the gradient with respect to their parameters during the backward pass.
As soon as the model has been defined, the forward and backward computations are immediately available: Caffe already provides the implementations of both passes.
Implementation:
(1) Net::Forward() and Net::Backward() carry out the forward and backward passes of the whole network, while Layer::Forward() and Layer::Backward() compute each individual layer's passes.
(2) Every layer has forward_{cpu,gpu}() and backward_{cpu,gpu}() methods to support the different computation modes. A layer may implement only the CPU or only the GPU mode, due to implementation constraints or for convenience.
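As a rough illustration of how these Net-level calls compose, a hand-rolled training step might look like the sketch below. This is an assumption-laden illustration, not how the Solver actually trains: a real Solver scales and regularizes the diffs (learning rate, momentum, weight decay) before they are applied, and the sketch assumes a net whose data layer supplies its own input, so the bottom vector can stay empty (as in the test code further below).

#include <vector>
#include <caffe/caffe.hpp>

// Sketch of a single naive training step built from the Net-level calls.
template <typename Dtype>
Dtype NaiveTrainStep(caffe::Net<Dtype>& net)
{
    net.ClearParamDiffs();                    // zero all parameter diffs first
    std::vector<caffe::Blob<Dtype>*> bottom;  // empty: input comes from the data layer
    Dtype loss;
    net.Forward(bottom, &loss);               // forward pass, computes the loss
    net.Backward();                           // backward pass, fills the parameter diffs
    net.Update();                             // apply the (unscaled) diffs to the weights
    return loss;
}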
As with most machine learning models, learning in Caffe is driven by a loss function (also called an error, cost, or objective function). A loss function specifies the goal of learning by mapping a parameter setting (i.e., the current network weights) to a scalar value measuring how "bad" those parameters are; the goal of learning is therefore to find a setting of the weights that minimizes the loss function.
In Caffe, the loss is computed by the network's forward pass. Each layer takes a set of input blobs (bottom) and produces a set of output blobs (top); some of these outputs may be used in the loss function. A typical loss function for one-versus-all classification is the SoftmaxWithLoss function.
Loss weights: for nets with multiple loss layers (for example, a network that uses a SoftmaxWithLoss layer for classification and a EuclideanLoss layer for reconstruction), loss weights can be used to specify their relative importance.
By convention, Caffe layer types with the suffix Loss contribute to the loss function; all other layers are assumed to be used purely for intermediate computation. However, any layer can act as a loss by adding a loss_weight: <float> field to the layer definition for each top blob produced by the layer. Layers with the suffix Loss have an implicit loss_weight: 1 for their first top blob; all other layers have an implicit loss_weight: 0 for all of their top blobs.
Nevertheless, any layer that can back-propagate may be given a non-zero loss_weight, which makes it possible, for example, to regularize the activations produced by some intermediate layer of the network if desired. For non-singleton outputs with an associated non-zero loss, the loss is computed simply by summing over all entries of the blob.
The final loss in Caffe is then computed by summing the weighted loss over all such blobs in the whole network.
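For instance, a hypothetical prototxt fragment (the layer and blob names here are made up) combining a classification loss and a reconstruction loss with different relative weights could look like:

layer {
  name: "class_loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "class_loss"
  loss_weight: 1       # implicit for *Loss layers; written out here for clarity
}
layer {
  name: "recon_loss"
  type: "EuclideanLoss"
  bottom: "decoder"
  bottom: "data"
  top: "recon_loss"
  loss_weight: 0.1     # the reconstruction term counts 10% as much
}

With these settings the total loss is 1 * class_loss + 0.1 * recon_loss.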
To create a Caffe model, the model architecture is defined in a protobuf (.prototxt) file. In Caffe, the layers and their parameters are defined in the caffe.proto file.
Note: the introduction to Net above is taken mostly from the Chinese translation of the official Caffe tutorial produced by the CaffeCN community.
A detailed, annotated version of <caffe/net.hpp> follows:
#ifndef CAFFE_NET_HPP_
#define CAFFE_NET_HPP_

#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/layer_factory.hpp"

namespace caffe {

// In graph theory, if a directed graph contains no path that starts at a vertex
// and returns to that same vertex, the graph is a directed acyclic graph (DAG)
/**
 * @brief Connects Layer%s together into a directed acyclic graph (DAG)
 *        specified by a NetParameter.
 *
 * TODO(dox): more thorough description.
 */
template <typename Dtype>
class Net {
 public:
  // explicit constructors; both call Init internally
  explicit Net(const NetParameter& param, const Net* root_net = NULL);
  explicit Net(const string& param_file, Phase phase, const Net* root_net = NULL);
  // virtual destructor
  virtual ~Net() {}

  /// @brief Initialize a network with a NetParameter.
  // Net initialization: creates the blobs and layers that make up the whole
  // network DAG and calls each layer's SetUp function. Initialization also does
  // some bookkeeping, e.g. checking that the overall network architecture is
  // correct. In addition, the Net prints its initialization log to INFO.
  void Init(const NetParameter& param);

  /**
   * @brief Run Forward with the input Blob%s already fed separately.
   *
   * You can get the input blobs using input_blobs().
   */
  // Forward propagation; the forward-related functions below all end up calling ForwardFromTo internally
  const vector<Blob<Dtype>*>& ForwardPrefilled(Dtype* loss = NULL);
  /**
   * The From and To variants of Forward and Backward operate on the
   * (topological) ordering by which the net is specified. For general DAG
   * networks, note that (1) computing from one layer to another might entail
   * extra computation on unrelated branches, and (2) computation starting in
   * the middle may be incorrect if all of the layers of a fan-in are not
   * included.
   */
  Dtype ForwardFromTo(int start, int end);
  Dtype ForwardFrom(int start);
  Dtype ForwardTo(int end);
  /// @brief Run forward using a set of bottom blobs, and return the result.
  const vector<Blob<Dtype>*>& Forward(const vector<Blob<Dtype>* > & bottom, Dtype* loss = NULL);
  /**
   * @brief Run forward using a serialized BlobProtoVector and return the
   *        result as a serialized BlobProtoVector
   */
  string Forward(const string& input_blob_protos, Dtype* loss = NULL);

  /**
   * @brief Zeroes out the diffs of all net parameters.
   *        Should be run before Backward.
   */
  // zero out all diff_ data in the Net
  void ClearParamDiffs();

  /**
   * The network backward should take no input and output, since it solely
   * computes the gradient w.r.t the parameters, and the data has already been
   * provided during the forward pass.
   */
  // Backward propagation; the backward-related functions below all end up calling BackwardFromTo internally
  void Backward();
  void BackwardFromTo(int start, int end);
  void BackwardFrom(int start);
  void BackwardTo(int end);

  /**
   * @brief Reshape all layers from bottom to top.
   *
   * This is useful to propagate changes to layer sizes without running
   * a forward pass, e.g. to compute output feature size.
   */
  // reshape the layers
  void Reshape();

  // forward followed by backward
  Dtype ForwardBackward(const vector<Blob<Dtype>* > & bottom) {
    Dtype loss;
    Forward(bottom, &loss);
    Backward();
    return loss;
  }

  /// @brief Updates the network weights based on the diff values computed.
  // update the Net's weights and biases
  void Update();
  /**
   * @brief Shares weight data of owner blobs with shared blobs.
   *
   * Note: this is called by Net::Init, and thus should normally not be
   * called manually.
   */
  // share weight and bias data
  void ShareWeights();

  /**
   * @brief For an already initialized net, implicitly copies (i.e., using no
   *        additional memory) the pre-trained layers from another Net.
   */
  // share trained layers from another Net
  void ShareTrainedLayersWith(const Net* other);
  // For an already initialized net, CopyTrainedLayersFrom() copies the already
  // trained layers from another net parameter instance.
  /**
   * @brief For an already initialized net, copies the pre-trained layers from
   *        another Net.
   */
  // copy trained layers from another Net, i.e. load an already trained model
  void CopyTrainedLayersFrom(const NetParameter& param);
  void CopyTrainedLayersFrom(const string trained_filename);
  void CopyTrainedLayersFromBinaryProto(const string trained_filename);
  void CopyTrainedLayersFromHDF5(const string trained_filename);
  /// @brief Writes the net to a proto.
  // write the Net to a NetParameter
  void ToProto(NetParameter* param, bool write_diff = false) const;
  /// @brief Writes the net to an HDF5 file.
  // write the Net weights to an HDF5 file
  void ToHDF5(const string& filename, bool write_diff = false) const;

  /// @brief returns the network name.
  // get the Net name
  inline const string& name() const { return name_; }
  /// @brief returns the layer names
  // get all layer names
  inline const vector<string>& layer_names() const { return layer_names_; }
  /// @brief returns the blob names
  // get the blob names
  inline const vector<string>& blob_names() const { return blob_names_; }
  /// @brief returns the blobs
  // get the blobs
  inline const vector<shared_ptr<Blob<Dtype> > >& blobs() const { return blobs_; }
  /// @brief returns the layers
  // get the layers
  inline const vector<shared_ptr<Layer<Dtype> > >& layers() const { return layers_; }
  /// @brief returns the phase: TRAIN or TEST
  // get the Net phase: TRAIN or TEST
  inline Phase phase() const { return phase_; }
  /**
   * @brief returns the bottom vecs for each layer -- usually you won't
   *        need this unless you do per-layer checks such as gradients.
   */
  // get each layer's bottom vector
  inline const vector<vector<Blob<Dtype>*> >& bottom_vecs() const { return bottom_vecs_; }
  /**
   * @brief returns the top vecs for each layer -- usually you won't
   *        need this unless you do per-layer checks such as gradients.
   */
  // get each layer's top vector
  inline const vector<vector<Blob<Dtype>*> >& top_vecs() const { return top_vecs_; }
  inline const vector<vector<bool> >& bottom_need_backward() const { return bottom_need_backward_; }
  inline const vector<Dtype>& blob_loss_weights() const { return blob_loss_weights_; }
  inline const vector<bool>& layer_need_backward() const { return layer_need_backward_; }
  /// @brief returns the parameters
  // get the various parameter values
  inline const vector<shared_ptr<Blob<Dtype> > >& params() const { return params_; }
  inline const vector<Blob<Dtype>*>& learnable_params() const { return learnable_params_; }
  /// @brief returns the learnable parameter learning rate multipliers
  inline const vector<float>& params_lr() const { return params_lr_; }
  inline const vector<bool>& has_params_lr() const { return has_params_lr_; }
  /// @brief returns the learnable parameter decay multipliers
  inline const vector<float>& params_weight_decay() const { return params_weight_decay_; }
  inline const vector<bool>& has_params_decay() const { return has_params_decay_; }
  const map<string, int>& param_names_index() const { return param_names_index_; }
  inline const vector<int>& param_owners() const { return param_owners_; }
  /// @brief Input and output blob numbers
  // number of input blobs
  inline int num_inputs() const { return net_input_blobs_.size(); }
  // number of output blobs
  inline int num_outputs() const { return net_output_blobs_.size(); }
  inline const vector<Blob<Dtype>*>& input_blobs() const { return net_input_blobs_; }
  inline const vector<Blob<Dtype>*>& output_blobs() const { return net_output_blobs_; }
  inline const vector<int>& input_blob_indices() const { return net_input_blob_indices_; }
  inline const vector<int>& output_blob_indices() const { return net_output_blob_indices_; }
  bool has_blob(const string& blob_name) const;
  const shared_ptr<Blob<Dtype> > blob_by_name(const string& blob_name) const;
  bool has_layer(const string& layer_name) const;
  const shared_ptr<Layer<Dtype> > layer_by_name(const string& layer_name) const;
  // set whether to display debug info
  void set_debug_info(const bool value) { debug_info_ = value; }

  // Helpers for Init.
  /**
   * @brief Remove layers that the user specified should be excluded given the current
   *        phase, level, and stage.
   */
  // remove the specified layers
  static void FilterNet(const NetParameter& param, NetParameter* param_filtered);
  /// @brief return whether NetState state meets NetStateRule rule
  static bool StateMeetsRule(const NetState& state, const NetStateRule& rule, const string& layer_name);

 protected:
  // Helpers for Init.
  /// @brief Append a new input or top blob to the net.
  // append a top blob
  void AppendTop(const NetParameter& param, const int layer_id,
                 const int top_id, set<string>* available_blobs,
                 map<string, int>* blob_name_to_idx);
  /// @brief Append a new bottom blob to the net.
  // append a bottom blob
  int AppendBottom(const NetParameter& param, const int layer_id,
                   const int bottom_id, set<string>* available_blobs,
                   map<string, int>* blob_name_to_idx);
  /// @brief Append a new parameter blob to the net.
  // append a parameter blob
  void AppendParam(const NetParameter& param, const int layer_id, const int param_id);

  // display debug info
  /// @brief Helper for displaying debug info in Forward about input Blobs.
  void InputDebugInfo(const int layer_id);
  /// @brief Helper for displaying debug info in Forward.
  void ForwardDebugInfo(const int layer_id);
  /// @brief Helper for displaying debug info in Backward.
  void BackwardDebugInfo(const int layer_id);
  /// @brief Helper for displaying debug info in Update.
  void UpdateDebugInfo(const int param_id);

  // In Caffe, class member variable names carry the suffix "_", which makes it
  // easy to tell local variables and class members apart
  /// @brief The network name
  string name_; // the Net's name
  /// @brief The phase: TRAIN or TEST
  Phase phase_; // Net phase: TRAIN or TEST
  /// @brief Individual layers in the net
  vector<shared_ptr<Layer<Dtype> > > layers_; // the layers
  vector<string> layer_names_; // layer names
  map<string, int> layer_names_index_; // layer name -> index
  vector<bool> layer_need_backward_; // whether each layer needs backward
  /// @brief the blobs storing intermediate results between the layer.
  vector<shared_ptr<Blob<Dtype> > > blobs_; // intermediate results produced by each layer
  vector<string> blob_names_; // blob names
  map<string, int> blob_names_index_; // blob name -> index
  vector<bool> blob_need_backward_; // whether each blob needs backward
  /// bottom_vecs stores the vectors containing the input for each layer.
  /// They don't actually host the blobs (blobs_ does), so we simply store pointers.
  vector<vector<Blob<Dtype>*> > bottom_vecs_; // pointers to each layer's input (bottom) blobs
  vector<vector<int> > bottom_id_vecs_; // ids of each layer's bottom blobs
  vector<vector<bool> > bottom_need_backward_; // whether each bottom blob needs backward
  /// top_vecs stores the vectors containing the output for each layer
  vector<vector<Blob<Dtype>*> > top_vecs_; // pointers to each layer's output (top) blobs
  vector<vector<int> > top_id_vecs_; // ids of each layer's top blobs
  /// Vector of weight in the loss (or objective) function of each net blob,
  /// indexed by blob_id.
  vector<Dtype> blob_loss_weights_; // loss weight of each blob
  vector<vector<int> > param_id_vecs_;
  vector<int> param_owners_;
  vector<string> param_display_names_;
  vector<pair<int, int> > param_layer_indices_;
  map<string, int> param_names_index_;
  /// blob indices for the input and the output of the net
  vector<int> net_input_blob_indices_;
  vector<int> net_output_blob_indices_;
  vector<Blob<Dtype>*> net_input_blobs_;
  vector<Blob<Dtype>*> net_output_blobs_;
  /// The parameters in the network.
  vector<shared_ptr<Blob<Dtype> > > params_;
  vector<Blob<Dtype>*> learnable_params_;
  /**
   * The mapping from params_ -> learnable_params_: we have
   * learnable_param_ids_.size() == params_.size(),
   * and learnable_params_[learnable_param_ids_[i]] == params_[i].get()
   * if and only if params_[i] is an "owner"; otherwise, params_[i] is a sharer
   * and learnable_params_[learnable_param_ids_[i]] gives its owner.
   */
  vector<int> learnable_param_ids_;
  /// the learning rate multipliers for learnable_params_
  vector<float> params_lr_;
  vector<bool> has_params_lr_;
  /// the weight decay multipliers for learnable_params_
  vector<float> params_weight_decay_;
  vector<bool> has_params_decay_;
  /// The bytes of memory used by this net
  size_t memory_used_;
  /// Whether to compute and display debug info for the net.
  bool debug_info_; // whether to display debug info
  /// The root net that actually holds the shared layers in data parallelism
  const Net* const root_net_;
  // disallow copy and assignment of the Net class
  DISABLE_COPY_AND_ASSIGN(Net);
};

}  // namespace caffe

#endif  // CAFFE_NET_HPP_
In the caffe.proto file, there is one main message related to Net, shown below:
message NetParameter { // Net parameters
  optional string name = 1; // consider giving the network a name; the Net's name
  // The input blobs to the network.
  repeated string input = 3; // the Net's input blobs
  // The shape of the input blobs.
  repeated BlobShape input_shape = 8; // shape of the input blobs
  // 4D input dimensions -- deprecated. Use "shape" instead.
  // If specified, for each input blob there should be four
  // values specifying the num, channels, height and width of the input blob.
  // Thus, there should be a total of (4 * #input) numbers.
  repeated int32 input_dim = 4; // dimensions of the input blobs; deprecated, use BlobShape instead
  // Whether the network will force every layer to carry out backward operation.
  // If set False, then whether to carry out backward is determined
  // automatically according to the net structure and learning rates.
  optional bool force_backward = 5 [default = false]; // whether every layer is forced to run backward
  // The current "state" of the network, including the phase, level, and stage.
  // Some layers may be included/excluded depending on this state and the states
  // specified in the layers' include and exclude fields.
  optional NetState state = 6; // the Net's state: phase, level, and stage
  // Print debugging information about results while running Net::Forward,
  // Net::Backward, and Net::Update.
  optional bool debug_info = 7 [default = false]; // whether to print results of the Net's forward, backward, and update
  // The layers that make up the net. Each of their configurations, including
  // connectivity and behavior, is specified as a LayerParameter.
  repeated LayerParameter layer = 100; // ID 100 so layers are printed last. The layer parameters
  // DEPRECATED: use 'layer' instead.
  repeated V1LayerParameter layers = 2; // deprecated; use LayerParameter instead
}
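As a concrete illustration (a made-up minimal model, not one of the test prototxt files used below), a NetParameter written in prototxt text format that exercises the fields above could be:

name: "TinyNet"                                    # NetParameter.name
input: "data"                                      # NetParameter.input
input_shape { dim: 1 dim: 3 dim: 28 dim: 28 }      # NetParameter.input_shape
force_backward: false
debug_info: false
layer {                                            # repeated LayerParameter layer
  name: "ip"
  type: "InnerProduct"
  bottom: "data"
  top: "ip"
  inner_product_param { num_output: 10 }
}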
Test code for Net is shown below:
#include "funset.hpp"
#include <string>
#include <vector>
#include <map>
#include "common.hpp"

int test_caffe_net2()
{
    caffe::Caffe::set_mode(caffe::Caffe::CPU); // set run caffe mode

    // reference: caffe/src/caffe/test/test_net.cpp
    std::string prototxt{ "E:/GitCode/Caffe_Test/test_data/model/test_net_8.prototxt" };
    caffe::Phase phase = caffe::Phase::TRAIN;

    // 1. Net(const string& param_file, Phase phase, const Net* root_net = NULL)
    boost::shared_ptr<caffe::Net<float>> net(new caffe::Net<float>(prototxt, phase, nullptr));
    //caffe::Caffe::set_random_seed(1701);

    {
        std::vector<caffe::Blob<float>*> bottom;
        // 2. Dtype ForwardBackward(const vector<Blob<Dtype>* > & bottom)
        float loss = net->ForwardBackward(bottom);
        fprintf(stderr, "loss: %f\n", loss);
    }

    {
        // 3. Dtype ForwardFromTo(int start, int end)
        float loss = net->ForwardFromTo(0, net->layers().size() - 1);
        // 4. void BackwardFromTo(int start, int end)
        net->BackwardFromTo(net->layers().size() - 1, 0);
        fprintf(stderr, "loss: %f\n", loss);
    }

    {
        // 5. Dtype ForwardTo(int end)
        float loss = net->ForwardTo(net->layers().size() - 2);
        // 6. void BackwardFrom(int start)
        net->BackwardFrom(net->layers().size() - 2);
        fprintf(stderr, "loss: %f\n", loss);
    }

    {
        // 7. Dtype ForwardFrom(int start)
        float loss = net->ForwardFrom(1);
        // 8. void BackwardTo(int end)
        net->BackwardTo(1);
        fprintf(stderr, "loss: %f\n", loss);
    }

    {
        // 9. vector<Blob<Dtype>*>& ForwardPrefilled(Dtype* loss = NULL)
        float loss;
        std::vector<caffe::Blob<float>*> net_output_blobs = net->ForwardPrefilled(&loss);
        // 10. void Backward()
        net->Backward();
        fprintf(stderr, "net output blobs size: %d; loss: %f\n", net_output_blobs.size(), loss);
    }

    {
        // 11. string Forward(const string& input_blob_protos, Dtype* loss = NULL)
        std::string input_blob_protos{ " " };
        float loss;
        std::string output = net->Forward(input_blob_protos, &loss);
        net->Backward();
        fprintf(stderr, "output string: %s; loss: %f\n", output.c_str(), loss);
    }

    {
        // 12. vector<Blob<Dtype>*>& Forward(const vector<Blob<Dtype>* > & bottom, Dtype* loss = NULL)
        std::vector<caffe::Blob<float>*> bottom;
        float loss;
        std::vector<caffe::Blob<float>*> net_output_blobs = net->Forward(bottom, &loss);
        net->Backward();
        fprintf(stderr, "net output blobs size: %d; loss: %f\n", net_output_blobs.size(), loss);
    }

    // 13. void ShareWeights()
    net->ShareWeights();
    // 14. void Update()
    net->Update();
    // 15. void Reshape()
    net->Reshape();
    // 16. void ClearParamDiffs()
    net->ClearParamDiffs();

    // 17. void CopyTrainedLayersFrom(const NetParameter& param)
    caffe::NetParameter net_param;
    net->ToProto(&net_param, false);
    net->CopyTrainedLayersFrom(net_param);

    // load an already trained model
    // 18. void CopyTrainedLayersFrom(const string trained_filename)
    std::string trained_filename{ " " };
    //net->CopyTrainedLayersFrom(trained_filename);
    // 19. void CopyTrainedLayersFromBinaryProto(const string trained_filename)
    //net->CopyTrainedLayersFromBinaryProto(trained_filename);
    // 20. void CopyTrainedLayersFromHDF5(const string trained_filename)
    //net->CopyTrainedLayersFromHDF5(trained_filename);

    // 21. void ShareTrainedLayersWith(const Net* other)
    caffe::Net<float> net1(prototxt, phase, nullptr);
    net->ShareTrainedLayersWith(&net1);

    // 22. static void FilterNet(const NetParameter& param, NetParameter* param_filtered)
    caffe::NetParameter param1, param2;
    net->FilterNet(param1, &param2);

    // 23. static bool StateMeetsRule(const NetState& state, const NetStateRule& rule, const string& layer_name)
    const caffe::NetState state;
    const caffe::NetStateRule rule;
    const std::string layer_name;
    bool ret = net->StateMeetsRule(state, rule, layer_name);
    fprintf(stderr, "state meet rule: %d\n", ret);

    return 0;
}

int test_caffe_net1()
{
    caffe::Caffe::set_mode(caffe::Caffe::CPU); // set run caffe mode

    // reference: caffe/src/caffe/test/test_net.cpp
    std::string prototxt{"E:/GitCode/Caffe_Test/test_data/model/test_net_8.prototxt"}; // 1~8
    caffe::NetParameter param;
    caffe::ReadNetParamsFromTextFileOrDie(prototxt, &param);

    // 1. Net(const NetParameter& param, const Net* root_net = NULL)
    boost::shared_ptr<caffe::Net<float>> net(new caffe::Net<float>(param, nullptr));

    // 2. const string& name()
    std::string name = net->name();
    fprintf(stderr, "Net name: %s\n", name.c_str());

    // 3. const vector<string>& layer_names()
    std::vector<std::string> layer_names = net->layer_names();
    fprintf(stderr, "print all layer names: layer size: %d\n", layer_names.size());
    for (auto layer_name : layer_names) {
        fprintf(stderr, " %s\n", layer_name.c_str());
    }

    // 4. const vector<string>& blob_names()
    std::vector<std::string> blob_names = net->blob_names();
    fprintf(stderr, "print all blob names: blob size: %d\n", blob_names.size());
    for (auto blob_name : blob_names) {
        fprintf(stderr, " %s\n", blob_name.c_str());
    }

    // 5. const vector<shared_ptr<Blob<Dtype> > >& blobs()
    std::vector<boost::shared_ptr<caffe::Blob<float>>> blobs = net->blobs();
    fprintf(stderr, "print all blobs dim: blob size: %d\n", blobs.size());
    for (auto blob : blobs) {
        std::vector<int> shape = blob->shape();
        fprintf(stderr, "blob dim: %d, ", shape.size());
        for (auto value : shape) {
            fprintf(stderr, " %d ", value);
        }
        fprintf(stderr, "\n");
    }

    // 6. const vector<shared_ptr<Layer<Dtype> > >& layers()
    std::vector<boost::shared_ptr<caffe::Layer<float>>> layers = net->layers();
    fprintf(stderr, "print all layers bottom and top blobs num: layer size: %d\n", layers.size());
    for (const auto layer : layers) {
        fprintf(stderr, "layer type: %s, bottom blob num: %d, top blob num: %d\n",
            layer->type(), layer->ExactNumBottomBlobs(), layer->ExactNumTopBlobs());
    }

    // 7. Phase phase()
    caffe::Phase phase = net->phase();
    fprintf(stderr, "net phase: %d\n", phase);

    // 8. const vector<vector<Blob<Dtype>*> >& bottom_vecs()
    std::vector<std::vector<caffe::Blob<float>*>> bottom_vecs = net->bottom_vecs();
    fprintf(stderr, "print layer bottom blob: layer size: %d\n", bottom_vecs.size());
    for (auto layer : bottom_vecs) {
        for (auto blob : layer) {
            fprintf(stderr, "layer blob shape: %s\n", (blob->shape_string()).c_str());
        }
    }

    // 9. const vector<vector<Blob<Dtype>*> >& top_vecs()
    std::vector<std::vector<caffe::Blob<float>*>> top_vecs = net->top_vecs();
    fprintf(stderr, "print layer top blob: layer size: %d\n", top_vecs.size());
    for (auto layer : top_vecs) {
        for (const auto blob : layer) {
            fprintf(stderr, "layer top shape: %s\n", (blob->shape_string()).c_str());
        }
    }

    // 10. const vector<vector<bool> >& bottom_need_backward()
    std::vector<std::vector<bool>> bottom_need_backward = net->bottom_need_backward();
    fprintf(stderr, "print bottom need backward info: layer size: %d\n", bottom_need_backward.size());
    for (auto layer : bottom_need_backward) {
        for (auto flag : layer) {
            fprintf(stderr, " %s ", flag ? "true" : "false");
        }
        fprintf(stderr, "\n");
    }
    fprintf(stderr, "\n");

    // 11. const vector<Dtype>& blob_loss_weights()
    std::vector<float> blob_loss_weights = net->blob_loss_weights();
    fprintf(stderr, "print blob loss weights: blob size: %d\n", blob_loss_weights.size());
    for (auto weight : blob_loss_weights) {
        fprintf(stderr, "weight: %f\n", weight);
    }

    // 12. const vector<bool>& layer_need_backward()
    std::vector<bool> layer_need_backward = net->layer_need_backward();
    fprintf(stderr, "print layer need backward: layer size: %d\n", layer_need_backward.size());
    for (auto flag : layer_need_backward) {
        fprintf(stderr, "layer need backward: %s\n", flag ? "true" : "false");
    }

    // 13. const vector<shared_ptr<Blob<Dtype> > >& params()
    std::vector<boost::shared_ptr<caffe::Blob<float>>> params = net->params();
    fprintf(stderr, "print net params info: blob size: %d\n", params.size());
    for (auto blob : params) {
        fprintf(stderr, "blob shape: %s\n", blob->shape_string().c_str());
    }

    // 14. const vector<Blob<Dtype>*>& learnable_params()
    std::vector<caffe::Blob<float>*> learnable_params = net->learnable_params();
    fprintf(stderr, "print learnable params info: blob size: %d\n", learnable_params.size());
    for (const auto blob : learnable_params) {
        fprintf(stderr, "blob shape: %s\n", blob->shape_string().c_str());
    }

    // 15. const vector<float>& params_lr()
    std::vector<float> params_lr = net->params_lr();
    fprintf(stderr, "print learnable rate info: size: %d\n", params_lr.size());
    for (auto value : params_lr) {
        fprintf(stderr, "learnable rate: %f\n", value);
    }

    // 16. const vector<bool>& has_params_lr()
    std::vector<bool> has_params_lr = net->has_params_lr();
    fprintf(stderr, "print has learnable rate info: size: %d\n", has_params_lr.size());
    for (auto flag : has_params_lr) {
        fprintf(stderr, "has learnable rate: %s\n", flag ? "true" : "false");
    }

    // 17. const vector<float>& params_weight_decay()
    std::vector<float> params_weight_decay = net->params_weight_decay();
    fprintf(stderr, "print weight decay info: size: %d\n", params_weight_decay.size());
    for (auto value : params_weight_decay) {
        fprintf(stderr, "weight decay: %f\n", value);
    }

    // 18. const vector<bool>& has_params_decay()
    std::vector<bool> has_params_decay = net->has_params_decay();
    fprintf(stderr, "print has decay info: size: %d\n", has_params_decay.size());
    for (auto value : has_params_decay) {
        fprintf(stderr, "has decay: %s\n", value ? "true" : "false");
    }

    // 19. const map<string, int>& param_names_index()
    const std::map<std::string, int> param_names_index = net->param_names_index();
    fprintf(stderr, "print param names index info: size: %d\n", param_names_index.size());
    auto it = param_names_index.begin();
    while (it != param_names_index.end()) {
        fprintf(stderr, "param names index: %s : %d\n", it->first.c_str(), it->second);
        ++it;
    }

    // 20. const vector<int>& param_owners()
    std::vector<int> param_owners = net->param_owners();
    fprintf(stderr, "print param owners info: size: %d\n", param_owners.size());
    for (auto value : param_owners) {
        fprintf(stderr, "param owners: %d\n", value);
    }

    // 21. int num_inputs() const
    int num_inputs = net->num_inputs();
    fprintf(stderr, "num inputs: %d\n", num_inputs);

    // 22. int num_outputs() const
    int num_outputs = net->num_outputs();
    fprintf(stderr, "num outputs: %d\n", num_outputs);

    // 23. const vector<Blob<Dtype>*>& input_blobs()
    const std::vector<caffe::Blob<float>*> input_blobs = net->input_blobs();
    fprintf(stderr, "print input blobs info: %d\n", input_blobs.size());
    for (auto blob : input_blobs) {
        fprintf(stderr, "input blob shape: %s\n", blob->shape_string().c_str());
    }

    // 24. const vector<Blob<Dtype>*>& output_blobs()
    const std::vector<caffe::Blob<float>*> output_blobs = net->output_blobs();
    fprintf(stderr, "print output blobs info: %d\n", output_blobs.size());
    for (auto blob : output_blobs) {
        fprintf(stderr, "output blob shape: %s\n", blob->shape_string().c_str());
    }

    // 25. const vector<int>& input_blob_indices()
    std::vector<int> input_blob_indices = net->input_blob_indices();
    fprintf(stderr, "print input blob indices info: size: %d\n", input_blob_indices.size());
    for (auto value : input_blob_indices) {
        fprintf(stderr, "input blob indices: %d\n", value);
    }

    // 26. const vector<int>& output_blob_indices()
    std::vector<int> output_blob_indices = net->output_blob_indices();
    fprintf(stderr, "print output blob indices info: size: %d\n", output_blob_indices.size());
    for (auto value : output_blob_indices) {
        fprintf(stderr, "output blob indices: %d\n", value);
    }

    // 27. bool has_blob(const string& blob_name)
    bool has_blob1 = net->has_blob("data");
    bool has_blob2 = net->has_blob("loss");
    fprintf(stderr, "net has blob data: %d, has blob loss: %d\n", has_blob1, has_blob2);

    // 28. const shared_ptr<Blob<Dtype> > blob_by_name
    const std::vector<std::string> blob_by_names{ "innerproduct", "loss" };
    for (auto name : blob_by_names) {
        const boost::shared_ptr<caffe::Blob<float>> blob = net->blob_by_name(name);
        if (blob != nullptr)
            fprintf(stderr, "blob shape: %s\n", blob->shape_string().c_str());
        else
            fprintf(stderr, "unknown blob name: %s\n", name.c_str());
    }

    // 29. bool has_layer(const string& layer_name)
    const std::vector<std::string> has_layers{"innerproduct", "top_loss"};
    for (auto name : has_layers) {
        bool has_layer = net->has_layer(name);
        fprintf(stderr, "has layer %s: %d\n", name.c_str(), has_layer);
    }

    // 30. const shared_ptr<Layer<Dtype> > layer_by_name
    const std::vector<std::string> layer_by_names{ "data", "top_loss" };
    for (auto name : layer_by_names) {
        const boost::shared_ptr<caffe::Layer<float>> layer = net->layer_by_name(name);
        if (layer != nullptr)
            fprintf(stderr, "layer type: %s\n", layer->type());
        else
            fprintf(stderr, "unknown layer name: %s\n", name.c_str());
    }

    // 31. void set_debug_info(const bool value)
    net->set_debug_info(true);

    // 32. void ToHDF5(const string& filename, bool write_diff = false)
    // std::string hdf5_name{"E:/GitCode/Caffe_Test/test_data/hdf5.h5"};
    // net->ToHDF5(hdf5_name, false); // Note: some .prototxt will crash

    // 33. void ToProto(NetParameter* param, bool write_diff = false)
    caffe::NetParameter param2;
    net->ToProto(&param2, false);
    fprintf(stderr, "new net name: %s\n", param2.name().c_str());

    return 0;
}
Part of the output can be reproduced by running the two test functions above.
GitHub: https://github.com/fengbingchun/Caffe_Test