Key Processes of WebRTC Audio Receiving and Sending
This article analyzes the key processes of WebRTC audio receiving and sending, based on the peerconnection_client example application shipped with WebRTC. The receiving path is examined first, followed by the sending path.
Creating webrtc::AudioState
The application initializes the PeerConnectionFactory at a time of its choosing:
```
#0  Init () at webrtc/src/pc/channel_manager.cc:121
#1  Initialize () at webrtc/src/pc/peer_connection_factory.cc:139
#6  webrtc::CreateModularPeerConnectionFactory(webrtc::PeerConnectionFactoryDependencies) () at webrtc/src/pc/peer_connection_factory.cc:55
#7  webrtc::CreatePeerConnectionFactory(rtc::Thread*, rtc::Thread*, rtc::Thread*, rtc::scoped_refptr<webrtc::AudioDeviceModule>, rtc::scoped_refptr<webrtc::AudioEncoderFactory>, rtc::scoped_refptr<webrtc::AudioDecoderFactory>, std::__1::unique_ptr<webrtc::VideoEncoderFactory, std::__1::default_delete<webrtc::VideoEncoderFactory> >, std::__1::unique_ptr<webrtc::VideoDecoderFactory, std::__1::default_delete<webrtc::VideoDecoderFactory> >, rtc::scoped_refptr<webrtc::AudioMixer>, rtc::scoped_refptr<webrtc::AudioProcessing>) () at webrtc/src/api/create_peerconnection_factory.cc:65
#8  InitializePeerConnection () at webrtc/src/examples/peerconnection/client/conductor.cc:132
#9  ConnectToPeer () at webrtc/src/examples/peerconnection/client/conductor.cc:422
#10 OnRowActivated () at webrtc/src/examples/peerconnection/client/linux/main_wnd.cc:433
#11 (anonymous namespace)::OnRowActivatedCallback(_GtkTreeView*, _GtkTreePath*, _GtkTreeViewColumn*, void*) () at webrtc/src/examples/peerconnection/client/linux/main_wnd.cc:70
```

As the code of Conductor::InitializePeerConnection() shows, the PeerConnectionFactory is, somewhat surprisingly, created together with the peer connection.
In ChannelManager::Init(), defined in webrtc/src/pc/channel_manager.cc, a task is run on another thread to call media_engine_->Init(), which completes the initialization of the media engine:
```cpp
bool ChannelManager::Init() {
  RTC_DCHECK(!initialized_);
  if (initialized_) {
    return false;
  }
  RTC_DCHECK(network_thread_);
  RTC_DCHECK(worker_thread_);
  if (!network_thread_->IsCurrent()) {
    // Do not allow invoking calls to other threads on the network thread.
    network_thread_->Invoke<void>(
        RTC_FROM_HERE, [&] { network_thread_->DisallowBlockingCalls(); });
  }

  if (media_engine_) {
    initialized_ = worker_thread_->Invoke<bool>(
        RTC_FROM_HERE, [&] { return media_engine_->Init(); });
    RTC_DCHECK(initialized_);
  } else {
    initialized_ = true;
  }
  return initialized_;
}
```

During the initialization of the media engine, the AudioState is created:
```
#0  webrtc::AudioState::Create(webrtc::AudioState::Config const&) () at webrtc/src/audio/audio_state.cc:188
#1  Init () at webrtc/src/media/engine/webrtc_voice_engine.cc:260
#2  cricket::CompositeMediaEngine::Init() () at webrtc/src/media/base/media_engine.cc:155
#3  cricket::ChannelManager::Init()::$_3::operator()() const () at webrtc/src/pc/channel_manager.cc:135
```

The AudioState is thus created along with the creation and initialization of the PeerConnectionFactory, the ChannelManager, and the MediaEngine.
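The blocking cross-thread Invoke pattern that ChannelManager::Init() relies on can be sketched outside WebRTC as a worker thread that runs a closure synchronously and hands the result back to the caller. This is an illustrative stand-in, not WebRTC's actual rtc::Thread implementation:

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

// Minimal stand-in for rtc::Thread::Invoke<R>(): post a closure to a worker
// thread and block the caller until the closure has run and returned a value.
class WorkerThread {
 public:
  WorkerThread() : thread_([this] { Run(); }) {}
  ~WorkerThread() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      done_ = true;
    }
    cv_.notify_one();
    thread_.join();
  }

  template <typename R>
  R Invoke(std::function<R()> task) {
    std::packaged_task<R()> packaged(std::move(task));
    std::future<R> result = packaged.get_future();
    {
      std::lock_guard<std::mutex> lock(mutex_);
      queue_.emplace([&packaged] { packaged(); });
    }
    cv_.notify_one();
    return result.get();  // Block, like rtc::Thread::Invoke() does.
  }

 private:
  void Run() {
    for (;;) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
        if (queue_.empty()) return;  // done_ was set and nothing is pending.
        task = std::move(queue_.front());
        queue_.pop();
      }
      task();
    }
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> queue_;
  bool done_ = false;
  std::thread thread_;
};
```

Blocking on `result.get()` is what makes calls like `worker_thread_->Invoke<bool>(...)` in ChannelManager::Init() behave like synchronous function calls even though the work runs on another thread.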
Creating the WebRTC Call
The application creates a peer connection as needed:
```
#0  CreatePeerConnection () at webrtc/src/pc/peer_connection_factory.cc:240
#1  webrtc::PeerConnectionFactory::CreatePeerConnection(webrtc::PeerConnectionInterface::RTCConfiguration const&, std::__1::unique_ptr<cricket::PortAllocator, std::__1::default_delete<cricket::PortAllocator> >, std::__1::unique_ptr<rtc::RTCCertificateGeneratorInterface, std::__1::default_delete<rtc::RTCCertificateGeneratorInterface> >, webrtc::PeerConnectionObserver*) () at webrtc/src/pc/peer_connection_factory.cc:233
#7  CreatePeerConnection () at webrtc/src/examples/peerconnection/client/conductor.cc:184
#8  InitializePeerConnection () at webrtc/src/examples/peerconnection/client/conductor.cc:148
#9  ConnectToPeer () at webrtc/src/examples/peerconnection/client/conductor.cc:422
#10 OnRowActivated () at webrtc/src/examples/peerconnection/client/linux/main_wnd.cc:433
#11 (anonymous namespace)::OnRowActivatedCallback(_GtkTreeView*, _GtkTreePath*, _GtkTreeViewColumn*, void*) () at webrtc/src/examples/peerconnection/client/linux/main_wnd.cc:70
```

When PeerConnectionFactory::CreatePeerConnection() creates the connection, it runs a task on another thread to create the Call:
```cpp
rtc::scoped_refptr<PeerConnectionInterface>
PeerConnectionFactory::CreatePeerConnection(
    const PeerConnectionInterface::RTCConfiguration& configuration,
    PeerConnectionDependencies dependencies) {
  RTC_DCHECK(signaling_thread_->IsCurrent());

  // Set internal defaults if optional dependencies are not set.
  if (!dependencies.cert_generator) {
    dependencies.cert_generator =
        absl::make_unique<rtc::RTCCertificateGenerator>(signaling_thread_,
                                                        network_thread_);
  }
  if (!dependencies.allocator) {
    network_thread_->Invoke<void>(RTC_FROM_HERE, [this, &configuration,
                                                  &dependencies]() {
      dependencies.allocator = absl::make_unique<cricket::BasicPortAllocator>(
          default_network_manager_.get(), default_socket_factory_.get(),
          configuration.turn_customizer);
    });
  }

  // TODO(zstein): Once chromium injects its own AsyncResolverFactory, set
  // |dependencies.async_resolver_factory| to a new
  // |rtc::BasicAsyncResolverFactory| if no factory is provided.

  network_thread_->Invoke<void>(
      RTC_FROM_HERE,
      rtc::Bind(&cricket::PortAllocator::SetNetworkIgnoreMask,
                dependencies.allocator.get(), options_.network_ignore_mask));

  std::unique_ptr<RtcEventLog> event_log =
      worker_thread_->Invoke<std::unique_ptr<RtcEventLog>>(
          RTC_FROM_HERE,
          rtc::Bind(&PeerConnectionFactory::CreateRtcEventLog_w, this));

  std::unique_ptr<Call> call = worker_thread_->Invoke<std::unique_ptr<Call>>(
      RTC_FROM_HERE,
      rtc::Bind(&PeerConnectionFactory::CreateCall_w, this, event_log.get()));

  rtc::scoped_refptr<PeerConnection> pc(
      new rtc::RefCountedObject<PeerConnection>(this, std::move(event_log),
                                                std::move(call)));
  ActionsBeforeInitializeForTesting(pc);
  if (!pc->Initialize(configuration, std::move(dependencies))) {
    return nullptr;
  }
  return PeerConnectionProxy::Create(signaling_thread(), pc);
}
```

The WebRTC Call is created as follows:
```
#0  webrtc::Call::Create(webrtc::CallConfig const&) () at webrtc/src/call/call.cc:424
#1  webrtc::CallFactory::CreateCall(webrtc::CallConfig const&) () at webrtc/src/call/call_factory.cc:84
#2  CreateCall_w () at webrtc/src/pc/peer_connection_factory.cc:364
```

It is not hard to see that in WebRTC, a Call is per peer connection.
The AudioState injected into the WebRTC Call comes from the VoiceEngine of the global MediaEngine. The AudioState is global, while the Call is local to a connection.
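The ownership relation described above can be pictured with plain shared ownership: one AudioState held by the (global) voice engine, referenced by every per-connection Call. This is a hypothetical miniature using std::shared_ptr rather than WebRTC's rtc::scoped_refptr:

```cpp
#include <memory>

// Hypothetical miniature of the relation described above: one AudioState
// owned by the global voice engine, shared by every per-connection Call.
struct AudioState {};

struct Call {
  explicit Call(std::shared_ptr<AudioState> state)
      : audio_state(std::move(state)) {}
  std::shared_ptr<AudioState> audio_state;  // Shared, not exclusively owned.
};

struct VoiceEngine {
  std::shared_ptr<AudioState> audio_state = std::make_shared<AudioState>();
};
```

Every Call a factory creates holds a reference to the same AudioState instance, which is why the mixer and audio device state are shared across connections.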
Creating WebRtcAudioReceiveStream
A WebRTC application needs a dedicated connection for receiving media negotiation information. Once negotiation information is received, it is passed down and processed layer by layer:
```
#0  cricket::BaseChannel::SetRemoteContent(cricket::MediaContentDescription const*, webrtc::SdpType, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) () at webrtc/src/pc/channel.cc:299
#1  PushdownMediaDescription () at webrtc/src/pc/peer_connection.cc:5700
#2  UpdateSessionState () at webrtc/src/pc/peer_connection.cc:5668
#3  ApplyRemoteDescription () at webrtc/src/pc/peer_connection.cc:2668
#4  SetRemoteDescription () at webrtc/src/pc/peer_connection.cc:2562
#5  webrtc::PeerConnection::SetRemoteDescription(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*) () at webrtc/src/pc/peer_connection.cc:2506
#6  void webrtc::ReturnType<void>::Invoke<webrtc::PeerConnectionInterface, void (webrtc::PeerConnectionInterface::*)(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*), webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*>(webrtc::PeerConnectionInterface*, void (webrtc::PeerConnectionInterface::*)(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*), webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*) () at webrtc/src/api/proxy.h:131
#7  webrtc::MethodCall2<webrtc::PeerConnectionInterface, void, webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*>::OnMessage(rtc::Message*) () at webrtc/src/api/proxy.h:252
#8  webrtc::internal::SynchronousMethodCall::Invoke(rtc::Location const&, rtc::Thread*) () at webrtc/src/api/proxy.cc:24
#9  webrtc::MethodCall2<webrtc::PeerConnectionInterface, void, webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*>::Marshal(rtc::Location const&, rtc::Thread*) () at webrtc/src/api/proxy.h:246
#10 webrtc::PeerConnectionProxyWithInternal<webrtc::PeerConnectionInterface>::SetRemoteDescription(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*) () at webrtc/src/api/peer_connection_proxy.h:101
#11 OnMessageFromPeer () at webrtc/src/examples/peerconnection/client/conductor.cc:351
#12 PeerConnectionClient::OnMessageFromPeer(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) () at webrtc/src/examples/peerconnection/client/peer_connection_client.cc:250
#13 OnHangingGetRead () at webrtc/src/examples/peerconnection/client/peer_connection_client.cc:403
#18 rtc::SocketDispatcher::OnEvent(unsigned int, int) () at webrtc/src/rtc_base/physical_socket_server.cc:790
#19 rtc::ProcessEvents(rtc::Dispatcher*, bool, bool, bool) () at webrtc/src/rtc_base/physical_socket_server.cc:1379
#20 WaitEpoll () at webrtc/src/rtc_base/physical_socket_server.cc:1620
#21 rtc::PhysicalSocketServer::Wait(int, bool) () at webrtc/src/rtc_base/physical_socket_server.cc:1328
#22 CustomSocketServer::Wait(int, bool) () at webrtc/src/examples/peerconnection/client/linux/main.cc:56
#23 Get () at webrtc/src/rtc_base/message_queue.cc:329
#24 ProcessMessages () at webrtc/src/rtc_base/thread.cc:525
#25 rtc::Thread::Run() () at webrtc/src/rtc_base/thread.cc:351
#26 main () at webrtc/src/examples/peerconnection/client/linux/main.cc:111
```

In BaseChannel::SetRemoteContent(), defined in webrtc/src/pc/channel.cc, the received media negotiation information is handed off to another thread for processing:
```cpp
bool BaseChannel::SetRemoteContent(const MediaContentDescription* content,
                                   SdpType type,
                                   std::string* error_desc) {
  TRACE_EVENT0("webrtc", "BaseChannel::SetRemoteContent");
  return InvokeOnWorker<bool>(
      RTC_FROM_HERE,
      Bind(&BaseChannel::SetRemoteContent_w, this, content, type, error_desc));
}
```

The WebRtcAudioReceiveStream is ultimately created by WebRtcVoiceMediaChannel::AddRecvStream(), defined in webrtc/src/media/engine/webrtc_voice_engine.cc:
```
#0  AddRecvStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:1854
#1  AddRecvStream_w () at webrtc/src/pc/channel.cc:599
#2  UpdateRemoteStreams_w () at webrtc/src/pc/channel.cc:714
#3  SetRemoteContent_w () at webrtc/src/pc/channel.cc:951
```

When the WebRtcAudioReceiveStream is created, the Call's AudioReceiveStream is created along with it:
```
#0  CreateAudioReceiveStream () at webrtc/src/call/call.cc:779
#1  RecreateAudioReceiveStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:1224
#2  WebRtcAudioReceiveStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:1090
#3  AddRecvStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:1889
#4  AddRecvStream_w () at webrtc/src/pc/channel.cc:599
#5  UpdateRemoteStreams_w () at webrtc/src/pc/channel.cc:714
```

Once the WebRtcAudioReceiveStream has been created, it is immediately added to the mixer as one of the mixer's audio sources:
```
#0  AddSource () at webrtc/src/modules/audio_mixer/audio_mixer_impl.cc:160
#1  AddReceivingStream () at webrtc/src/audio/audio_state.cc:60
#2  Start () at webrtc/src/audio/audio_receive_stream.cc:161
#3  SetPlayout () at webrtc/src/media/engine/webrtc_voice_engine.cc:1173
#4  AddRecvStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:1899
#5  AddRecvStream_w () at webrtc/src/pc/channel.cc:599
```

Key Flow of Audio Receive Processing in WebRTC
1. Receiving UDP packets from the network
```
#0  OnPacketReceived () at webrtc/src/pc/channel.cc:507
#1  cricket::BaseChannel::OnRtpPacket(webrtc::RtpPacketReceived const&) () at webrtc/src/pc/channel.cc:468
#2  webrtc::RtpDemuxer::OnRtpPacket(webrtc::RtpPacketReceived const&) () at webrtc/src/call/rtp_demuxer.cc:177
#3  DemuxPacket () at webrtc/src/pc/rtp_transport.cc:194
#4  OnRtpPacketReceived () at webrtc/src/pc/srtp_transport.cc:230
#5  OnReadPacket () at webrtc/src/pc/rtp_transport.cc:268
#10 OnReadPacket () at webrtc/src/p2p/base/dtls_transport.cc:600
#15 OnReadPacket () at webrtc/src/p2p/base/p2p_transport_channel.cc:2499
#20 OnReadPacket () at webrtc/src/p2p/base/connection.cc:415
#21 OnReadPacket () at webrtc/src/p2p/base/stun_port.cc:407
#22 cricket::UDPPort::HandleIncomingPacket(rtc::AsyncPacketSocket*, char const*, unsigned long, rtc::SocketAddress const&, long) () at webrtc/src/p2p/base/stun_port.cc:348
#23 OnReadPacket () at webrtc/src/p2p/client/basic_port_allocator.cc:1673
#28 OnReadEvent () at webrtc/src/rtc_base/async_udp_socket.cc:132
#33 rtc::SocketDispatcher::OnEvent(unsigned int, int) () at webrtc/src/rtc_base/physical_socket_server.cc:790
#34 rtc::ProcessEvents(rtc::Dispatcher*, bool, bool, bool) () at webrtc/src/rtc_base/physical_socket_server.cc:1379
#35 WaitEpoll () at webrtc/src/rtc_base/physical_socket_server.cc:1620
#36 rtc::PhysicalSocketServer::Wait(int, bool) () at webrtc/src/rtc_base/physical_socket_server.cc:1328
```

In the Wait() function of the PhysicalSocketServer class, in webrtc/src/rtc_base/physical_socket_server.cc, the epoll mechanism is used to wait for incoming network packets. When a packet arrives, it is processed and passed up layer by layer until it reaches the OnPacketReceived() function of the BaseChannel class in webrtc/src/pc/channel.cc, which in turn hands the packet off to another thread for processing.
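The dispatch step in this flow, from socket server to per-socket handler, can be sketched as a registry that maps file descriptors to dispatchers and forwards each ready event. The names below are illustrative stand-ins for PhysicalSocketServer and its Dispatcher interface, not WebRTC's actual classes; the real Wait() sits in an epoll_wait() loop, which is simulated here by feeding events in directly:

```cpp
#include <functional>
#include <map>
#include <string>

// Per-socket handler, in the spirit of rtc::Dispatcher::OnEvent().
class Dispatcher {
 public:
  explicit Dispatcher(std::function<void(const std::string&)> on_packet)
      : on_packet_(std::move(on_packet)) {}
  void OnEvent(const std::string& packet) { on_packet_(packet); }

 private:
  std::function<void(const std::string&)> on_packet_;
};

// Stand-in for the socket server: maps fds to dispatchers and forwards
// each "readable" event to the dispatcher registered for that fd.
class SocketServer {
 public:
  void Register(int fd, Dispatcher* d) { dispatchers_[fd] = d; }
  bool SignalReadEvent(int fd, const std::string& packet) {
    auto it = dispatchers_.find(fd);
    if (it == dispatchers_.end()) return false;
    it->second->OnEvent(packet);
    return true;
  }

 private:
  std::map<int, Dispatcher*> dispatchers_;
};
```

In WebRTC the same shape appears in ProcessEvents(): each readiness notification from epoll is routed to the SocketDispatcher owning that descriptor, which starts the layer-by-layer OnReadPacket() chain shown in the trace.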
2. Processing of received audio packets by the media engine
```
#0  InsertPacketInternal () at webrtc/src/modules/audio_coding/neteq/neteq_impl.cc:467
#1  webrtc::NetEqImpl::InsertPacket(webrtc::RTPHeader const&, rtc::ArrayView<unsigned char const, -4711l>, unsigned int) () at webrtc/src/modules/audio_coding/neteq/neteq_impl.cc:153
#2  InsertPacket () at webrtc/src/modules/audio_coding/acm2/acm_receiver.cc:117
#3  IncomingPacket () at webrtc/src/modules/audio_coding/acm2/audio_coding_module.cc:667
#4  OnReceivedPayloadData () at webrtc/src/audio/channel_receive.cc:283
#5  ReceivePacket () at webrtc/src/audio/channel_receive.cc:669
#6  webrtc::voe::(anonymous namespace)::ChannelReceive::OnRtpPacket(webrtc::RtpPacketReceived const&) () at webrtc/src/audio/channel_receive.cc:622
#7  webrtc::RtpDemuxer::OnRtpPacket(webrtc::RtpPacketReceived const&) () at webrtc/src/call/rtp_demuxer.cc:177
#8  webrtc::RtpStreamReceiverController::OnRtpPacket(webrtc::RtpPacketReceived const&) () at webrtc/src/call/rtp_stream_receiver_controller.cc:54
#9  DeliverRtp () at webrtc/src/call/call.cc:1423
#10 DeliverPacket () at webrtc/src/call/call.cc:1461
#11 OnPacketReceived () at webrtc/src/media/engine/webrtc_voice_engine.cc:2070
#12 ProcessPacket () at webrtc/src/pc/channel.cc:540
```

The ProcessPacket() function of the BaseChannel class in webrtc/src/pc/channel.cc feeds the received audio packet into the media engine for processing. This includes dispatching the packet to the proper channel according to the SSRC of the RTP packet, processing by the ACM receiver, and finally insertion into NetEq's buffer. Inside NetEq, packet reordering, countermeasures against network impairments, audio decoding, and other processing are performed.
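The SSRC-based dispatch performed by RtpDemuxer in frames #7-#8 above can be sketched as a map from SSRC to packet sink: each receive channel registers for one SSRC, and every incoming packet is routed to the sink whose SSRC matches. This is an illustrative simplification, not WebRTC's actual RtpDemuxer API (which also matches on MID, payload type, and RSID):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// A bare-bones RTP packet: only the fields needed for demultiplexing.
struct RtpPacket {
  uint32_t ssrc;
  std::vector<uint8_t> payload;
};

// Interface implemented by each receive channel (cf. ChannelReceive).
class RtpPacketSink {
 public:
  virtual ~RtpPacketSink() = default;
  virtual void OnRtpPacket(const RtpPacket& packet) = 0;
};

// Simplified demuxer: route each packet to the sink registered for its SSRC.
class RtpDemuxer {
 public:
  void AddSink(uint32_t ssrc, RtpPacketSink* sink) { sinks_[ssrc] = sink; }
  // Returns true if a sink consumed the packet.
  bool OnRtpPacket(const RtpPacket& packet) {
    auto it = sinks_.find(packet.ssrc);
    if (it == sinks_.end()) return false;
    it->second->OnRtpPacket(packet);
    return true;
  }

 private:
  std::map<uint32_t, RtpPacketSink*> sinks_;
};
```

A packet whose SSRC has no registered sink is dropped (the function returns false), which mirrors how unroutable packets are reported back up the delivery chain.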
3. Decoding and playout of the audio data
When the AudioDevice component is initialized, it starts a playout thread, as in webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:
```cpp
AudioDeviceGeneric::InitStatus AudioDeviceLinuxPulse::Init() {
  RTC_DCHECK(thread_checker_.IsCurrent());
  if (_initialized) {
    return InitStatus::OK;
  }

  // Initialize PulseAudio
  if (InitPulseAudio() < 0) {
    RTC_LOG(LS_ERROR) << "failed to initialize PulseAudio";
    if (TerminatePulseAudio() < 0) {
      RTC_LOG(LS_ERROR) << "failed to terminate PulseAudio";
    }
    return InitStatus::OTHER_ERROR;
  }

#if defined(WEBRTC_USE_X11)
  // Get X display handle for typing detection
  _XDisplay = XOpenDisplay(NULL);
  if (!_XDisplay) {
    RTC_LOG(LS_WARNING)
        << "failed to open X display, typing detection will not work";
  }
#endif

  // RECORDING
  _ptrThreadRec.reset(new rtc::PlatformThread(RecThreadFunc, this,
                                              "webrtc_audio_module_rec_thread",
                                              rtc::kRealtimePriority));
  _ptrThreadRec->Start();

  // PLAYOUT
  _ptrThreadPlay.reset(new rtc::PlatformThread(
      PlayThreadFunc, this, "webrtc_audio_module_play_thread",
      rtc::kRealtimePriority));
  _ptrThreadPlay->Start();

  _initialized = true;
  return InitStatus::OK;
}
```

This thread continuously pulls data from the media engine for playout, as follows:
```
#0  GetNextAudioInterleaved () at webrtc/src/modules/audio_coding/neteq/sync_buffer.cc:85
#1  GetAudioInternal () at webrtc/src/modules/audio_coding/neteq/neteq_impl.cc:897
#2  GetAudio () at webrtc/src/modules/audio_coding/neteq/neteq_impl.cc:215
#3  GetAudio () at webrtc/src/modules/audio_coding/acm2/acm_receiver.cc:134
#4  PlayoutData10Ms () at webrtc/src/modules/audio_coding/acm2/audio_coding_module.cc:704
#5  GetAudioFrameWithInfo () at webrtc/src/audio/channel_receive.cc:332
#6  webrtc::internal::AudioReceiveStream::GetAudioFrameWithInfo(int, webrtc::AudioFrame*) () at webrtc/src/audio/audio_receive_stream.cc:276
#7  GetAudioFromSources () at webrtc/src/modules/audio_mixer/audio_mixer_impl.cc:186
#8  Mix () at webrtc/src/modules/audio_mixer/audio_mixer_impl.cc:130
#9  NeedMorePlayData () at webrtc/src/audio/audio_transport_impl.cc:193
#10 RequestPlayoutData () at webrtc/src/modules/audio_device/audio_device_buffer.cc:301
#11 PlayThreadProcess () at webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:2121
#12 webrtc::AudioDeviceLinuxPulse::PlayThreadFunc(void*) () at webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:1984
```

For audio, WebRTC decodes and plays as it goes: playout and decoding are done on the same thread. In addition, on the way from decoding to playout, processing such as echo cancellation and mixing is performed.
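The pull-model mixing in frames #7-#8 above can be sketched as follows: on each playout tick the mixer asks every registered source for its next frame and sums the samples with saturation to the 16-bit range. This is an illustrative sketch in the spirit of AudioMixerImpl, not its actual interface:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A source that can be polled for its next audio frame (cf. the mixer's
// Source interface backed by AudioReceiveStream::GetAudioFrameWithInfo()).
class AudioSource {
 public:
  virtual ~AudioSource() = default;
  virtual std::vector<int16_t> GetAudioFrame(size_t samples) = 0;
};

// Simplified pull-model mixer: sum all sources, saturating to int16 range.
class AudioMixer {
 public:
  void AddSource(AudioSource* source) { sources_.push_back(source); }

  std::vector<int16_t> Mix(size_t samples) {
    std::vector<int32_t> acc(samples, 0);  // Wide accumulator avoids overflow.
    for (AudioSource* s : sources_) {
      std::vector<int16_t> frame = s->GetAudioFrame(samples);
      for (size_t i = 0; i < samples && i < frame.size(); ++i)
        acc[i] += frame[i];
    }
    std::vector<int16_t> out(samples);
    for (size_t i = 0; i < samples; ++i)  // Saturate to [-32768, 32767].
      out[i] = static_cast<int16_t>(
          std::min<int32_t>(32767, std::max<int32_t>(-32768, acc[i])));
    return out;
  }

 private:
  std::vector<AudioSource*> sources_;
};
```

The key property of the pull model is that nothing is mixed until the playout thread asks for data, which is why a receive stream must be registered as a mixer source (as shown earlier) before its audio can be heard.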
The creation of the WebRtcAudioReceiveStream above, together with the reception of audio packets, is what makes it possible for the mixer to obtain data here and play it out.
Creating webrtc::AudioSendStream
The creation of webrtc::AudioSendStream is initiated by the application:
```
#0  cricket::BaseChannel::SetLocalContent(cricket::MediaContentDescription const*, webrtc::SdpType, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) () at webrtc/src/pc/channel.cc:290
#1  PushdownMediaDescription () at webrtc/src/pc/peer_connection.cc:5699
#2  UpdateSessionState () at webrtc/src/pc/peer_connection.cc:5668
#3  ApplyLocalDescription () at webrtc/src/pc/peer_connection.cc:2356
#4  SetLocalDescription () at webrtc/src/pc/peer_connection.cc:2187
#10 Conductor::OnSuccess(webrtc::SessionDescriptionInterface*) () at webrtc/src/examples/peerconnection/client/conductor.cc:544
#11 OnMessage () at webrtc/src/pc/webrtc_session_description_factory.cc:299
#12 Dispatch () at webrtc/src/rtc_base/message_queue.cc:513
#13 ProcessMessages () at webrtc/src/rtc_base/thread.cc:527
#14 rtc::Thread::Run() () at webrtc/src/rtc_base/thread.cc:351
#15 main () at webrtc/src/examples/peerconnection/client/linux/main.cc:111
```

BaseChannel::SetLocalContent() in webrtc/src/pc/channel.cc throws the MediaContentDescription onto the worker_thread for further processing:
```cpp
bool BaseChannel::SetLocalContent(const MediaContentDescription* content,
                                  SdpType type,
                                  std::string* error_desc) {
  TRACE_EVENT0("webrtc", "BaseChannel::SetLocalContent");
  return InvokeOnWorker<bool>(
      RTC_FROM_HERE,
      Bind(&BaseChannel::SetLocalContent_w, this, content, type, error_desc));
}
```

The webrtc::AudioSendStream is ultimately created in the Call:
```
#0  CreateAudioSendStream () at webrtc/src/call/call.cc:707
#1  WebRtcAudioSendStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:735
#2  AddSendStream () at webrtc/src/media/engine/webrtc_voice_engine.cc:1803
#3  UpdateLocalStreams_w () at webrtc/src/pc/channel.cc:671
#4  SetLocalContent_w () at webrtc/src/pc/channel.cc:906
```

Sending Audio Data
As seen earlier, when the AudioDevice component is initialized, it starts a recording thread alongside the playout thread. The recording thread captures recorded audio data and passes it along for processing:
```
#0  ProcessAndEncodeAudio () at webrtc/src/audio/channel_send.cc:1101
#1  SendAudioData () at webrtc/src/audio/audio_send_stream.cc:365
#2  RecordedDataIsAvailable () at webrtc/src/audio/audio_transport_impl.cc:164
#3  DeliverRecordedData () at webrtc/src/modules/audio_device/audio_device_buffer.cc:269
#4  webrtc::AudioDeviceLinuxPulse::ProcessRecordedData(signed char*, unsigned int, unsigned int) () at webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:1971
#5  webrtc::AudioDeviceLinuxPulse::ReadRecordedData(void const*, unsigned long) () at webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:1918
#6  RecThreadProcess () at webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:2229
#7  webrtc::AudioDeviceLinuxPulse::RecThreadFunc(void*) () at webrtc/src/modules/audio_device/linux/audio_device_pulse_linux.cc:1990
```

The AudioDevice passes the recorded audio data all the way to ChannelSend::ProcessAndEncodeAudio() in webrtc/src/audio/channel_send.cc:
```cpp
void ChannelSend::ProcessAndEncodeAudio(
    std::unique_ptr<AudioFrame> audio_frame) {
  RTC_DCHECK_RUNS_SERIALIZED(&audio_thread_race_checker_);
  struct ProcessAndEncodeAudio {
    void operator()() {
      RTC_DCHECK_RUN_ON(&channel->encoder_queue_);
      if (!channel->encoder_queue_is_active_) {
        return;
      }
      channel->ProcessAndEncodeAudioOnTaskQueue(audio_frame.get());
    }
    std::unique_ptr<AudioFrame> audio_frame;
    ChannelSend* const channel;
  };
  // Profile time between when the audio frame is added to the task queue and
  // when the task is actually executed.
  audio_frame->UpdateProfileTimeStamp();
  encoder_queue_.PostTask(ProcessAndEncodeAudio{std::move(audio_frame), this});
}
```

In this function, the captured AudioFrame is thrown into a task on the encoder thread, where it is encoded, packetized into RTP packets, and handed to the PacedSender, which puts the RTP packets into its queue:
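The functor pattern above, moving ownership of the frame into a callable that is posted to a task queue, can be sketched in isolation. The queue below runs its tasks when explicitly drained; the real encoder_queue_ is a dedicated TaskQueue thread, and the names here are illustrative:

```cpp
#include <cstdint>
#include <memory>
#include <queue>
#include <vector>

// Minimal audio frame: just a buffer of samples.
struct AudioFrame {
  std::vector<int16_t> samples;
};

// A queue of owning tasks, mirroring the shape of PostTask() in
// ChannelSend::ProcessAndEncodeAudio(). Tasks own their data (here, the
// frame moved into them), so no synchronization of the frame is needed.
class EncoderQueue {
 public:
  struct Task {
    virtual ~Task() = default;
    virtual void Run() = 0;
  };

  void PostTask(std::unique_ptr<Task> task) { tasks_.push(std::move(task)); }

  // Run all pending tasks; returns how many ran. (The real queue drains
  // continuously on its own thread.)
  size_t Drain() {
    size_t n = 0;
    while (!tasks_.empty()) {
      tasks_.front()->Run();
      tasks_.pop();
      ++n;
    }
    return n;
  }

 private:
  std::queue<std::unique_ptr<Task>> tasks_;
};
```

Transferring ownership of the AudioFrame into the task is the point of the pattern: the capture thread never touches the frame again, so the encoder thread can process it without locks.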
```
#0  InsertPacket () at webrtc/src/modules/pacing/paced_sender.cc:200
#1  non-virtual thunk to webrtc::PacedSender::InsertPacket(webrtc::RtpPacketSender::Priority, unsigned int, unsigned short, long, unsigned long, bool) ()
#2  webrtc::voe::(anonymous namespace)::RtpPacketSenderProxy::InsertPacket(webrtc::RtpPacketSender::Priority, unsigned int, unsigned short, long, unsigned long, bool) () at webrtc/src/audio/channel_send.cc:391
#3  SendToNetwork () at webrtc/src/modules/rtp_rtcp/source/rtp_sender.cc:963
#4  webrtc::RTPSenderAudio::LogAndSendToNetwork(std::__1::unique_ptr<webrtc::RtpPacketToSend, std::__1::default_delete<webrtc::RtpPacketToSend> >, webrtc::StorageType) () at webrtc/src/modules/rtp_rtcp/source/rtp_sender_audio.cc:363
#5  SendAudio () at webrtc/src/modules/rtp_rtcp/source/rtp_sender_audio.cc:260
#6  SendRtpAudio () at webrtc/src/audio/channel_send.cc:568
#7  SendData () at webrtc/src/audio/channel_send.cc:497
#8  Encode () at webrtc/src/modules/audio_coding/acm2/audio_coding_module.cc:385
#9  webrtc::(anonymous namespace)::AudioCodingModuleImpl::Add10MsData(webrtc::AudioFrame const&) () at webrtc/src/modules/audio_coding/acm2/audio_coding_module.cc:430
#10 ProcessAndEncodeAudioOnTaskQueue () at webrtc/src/audio/channel_send.cc:1152
```

Another thread in the PacedSender takes RTP packets from the queue and sends them:
```
#0  SendPacket () at webrtc/src/pc/channel.cc:397
#1  cricket::BaseChannel::SendPacket(rtc::CopyOnWriteBuffer*, rtc::PacketOptions const&) () at webrtc/src/pc/channel.cc:328
#2  cricket::MediaChannel::DoSendPacket(rtc::CopyOnWriteBuffer*, bool, rtc::PacketOptions const&) () at webrtc/src/media/base/media_channel.h:328
#3  cricket::MediaChannel::SendPacket(rtc::CopyOnWriteBuffer*, rtc::PacketOptions const&) () at webrtc/src/media/base/media_channel.h:249
#4  cricket::WebRtcVideoChannel::SendRtp(unsigned char const*, unsigned long, webrtc::PacketOptions const&) () at webrtc/src/media/engine/webrtc_video_engine.cc:1690
#5  SendPacketToNetwork () at webrtc/src/modules/rtp_rtcp/source/rtp_sender.cc:550
#6  PrepareAndSendPacket () at webrtc/src/modules/rtp_rtcp/source/rtp_sender.cc:791
#7  webrtc::RTPSender::TimeToSendPacket(unsigned int, unsigned short, long, bool, webrtc::PacedPacketInfo const&) () at webrtc/src/modules/rtp_rtcp/source/rtp_sender.cc:604
#8  webrtc::ModuleRtpRtcpImpl::TimeToSendPacket(unsigned int, unsigned short, long, bool, webrtc::PacedPacketInfo const&) () at webrtc/src/modules/rtp_rtcp/source/rtp_rtcp_impl.cc:415
#9  webrtc::PacketRouter::TimeToSendPacket(unsigned int, unsigned short, long, bool, webrtc::PacedPacketInfo const&) () at webrtc/src/modules/pacing/packet_router.cc:123
#10 Process () at webrtc/src/modules/pacing/paced_sender.cc:390
```

The RTP packet is passed along by the PacedSender until it reaches BaseChannel::SendPacket(), defined in webrtc/src/pc/channel.cc. BaseChannel::SendPacket() throws the packet onto the network sending thread for transmission:
```cpp
bool BaseChannel::SendPacket(bool rtcp,
                             rtc::CopyOnWriteBuffer* packet,
                             const rtc::PacketOptions& options) {
  // Until all the code is migrated to use RtpPacketType instead of bool.
  RtpPacketType packet_type = rtcp ? RtpPacketType::kRtcp : RtpPacketType::kRtp;
  // SendPacket gets called from MediaEngine, on a pacer or an encoder thread.
  // If the thread is not our network thread, we will post to our network
  // so that the real work happens on our network. This avoids us having to
  // synchronize access to all the pieces of the send path, including
  // SRTP and the inner workings of the transport channels.
  // The only downside is that we can't return a proper failure code if
  // needed. Since UDP is unreliable anyway, this should be a non-issue.
  if (!network_thread_->IsCurrent()) {
    // Avoid a copy by transferring the ownership of the packet data.
    int message_id = rtcp ? MSG_SEND_RTCP_PACKET : MSG_SEND_RTP_PACKET;
    SendPacketMessageData* data = new SendPacketMessageData;
    data->packet = std::move(*packet);
    data->options = options;
    network_thread_->Post(RTC_FROM_HERE, this, message_id, data);
    return true;
  }
  // ...
```

The network sending thread sends the packet to the network via the system socket:
```
#0  rtc::PhysicalSocket::DoSendTo(int, char const*, int, int, sockaddr const*, unsigned int) () at webrtc/src/rtc_base/physical_socket_server.cc:479
#1  SendTo () at webrtc/src/rtc_base/physical_socket_server.cc:344
#2  rtc::AsyncUDPSocket::SendTo(void const*, unsigned long, rtc::SocketAddress const&, rtc::PacketOptions const&) () at webrtc/src/rtc_base/async_udp_socket.cc:84
#3  SendTo () at webrtc/src/p2p/base/stun_port.cc:301
#4  Send () at webrtc/src/p2p/base/connection.cc:1162
#5  SendPacket () at webrtc/src/p2p/base/p2p_transport_channel.cc:1473
#6  SendPacket () at webrtc/src/p2p/base/dtls_transport.cc:409
#7  SendPacket () at webrtc/src/pc/rtp_transport.cc:147
#8  SendRtpPacket () at webrtc/src/pc/srtp_transport.cc:173
#9  SendPacket () at webrtc/src/pc/channel.cc:457
#10 OnMessage () at webrtc/src/pc/channel.cc:757
```

The PhysicalSocket::DoSendTo() function defined in webrtc/src/rtc_base/physical_socket_server.cc sends the packet out onto the network through the system socket:
```cpp
int PhysicalSocket::DoSendTo(SOCKET socket,
                             const char* buf,
                             int len,
                             int flags,
                             const struct sockaddr* dest_addr,
                             socklen_t addrlen) {
  return ::sendto(socket, buf, len, flags, dest_addr, addrlen);
}
```
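DoSendTo() is a thin wrapper over the POSIX sendto(2) call. The sketch below (an illustration, not WebRTC code) sends a datagram over the loopback interface and reads it back, mirroring what the wrapper ultimately does at the OS boundary:

```cpp
#include <arpa/inet.h>
#include <string>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

// Send |payload| as a UDP datagram to a locally bound socket and read it
// back. Demonstrates the sendto() call that DoSendTo() wraps.
std::string LoopbackUdpEcho(const std::string& payload) {
  // Receiving socket, bound to an ephemeral port on the loopback interface.
  int rx = socket(AF_INET, SOCK_DGRAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = 0;  // Let the kernel pick a free port.
  bind(rx, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  socklen_t len = sizeof(addr);
  getsockname(rx, reinterpret_cast<sockaddr*>(&addr), &len);

  // Sending socket: this sendto() is the same system call DoSendTo() makes.
  int tx = socket(AF_INET, SOCK_DGRAM, 0);
  sendto(tx, payload.data(), payload.size(), 0,
         reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

  char buf[1500];
  ssize_t n = recvfrom(rx, buf, sizeof(buf), 0, nullptr, nullptr);
  close(tx);
  close(rx);
  return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

Everything above this point in the send path, pacing, SRTP protection, ICE candidate selection, exists to decide which socket and destination address this one system call is finally made with.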