Analysis of LIVE555 RTSP Client Media Stream Reception, with Test Code
The testRTSPClient.cpp program in LIVE555's testProgs directory demonstrates how to receive the media stream specified by an RTSP URL. The commands it sends to the server are DESCRIBE, SETUP, PLAY, and TEARDOWN. The overall flow is:
1. Set up the usage environment: create a BasicTaskScheduler object and a BasicUsageEnvironment object.
2. Create an RTSPClient object.
3. Send an RTSP DESCRIBE command to the server to obtain the stream's SDP description. This is an asynchronous operation: RTSPClient::sendDescribeCommand.
4. Handle the DESCRIBE response (the SDP description) returned by the server: create a MediaSession from the SDP description, then create a MediaSubsessionIterator, which is used to iterate over the session's subsessions and set up a data source for each one. continueAfterDESCRIBE is the response handler.
5. Send an RTSP SETUP command to the server for each subsession. Asynchronous, done in the setupNextSubsession function: RTSPClient::sendSetupCommand.
6. Handle the SETUP response returned by the server: create a MediaSink object (here, a DummySink) to receive the stream data. continueAfterSETUP is the response handler.
7. Once every subsession has been set up, send an RTSP PLAY command. Asynchronous, also done in setupNextSubsession: RTSPClient::sendPlayCommand.
8. Handle the PLAY response returned by the server: set a timer to end reception when the stream's expected duration elapses. continueAfterPLAY is the response handler.
9. Receive the stream data (both audio and video). MediaSubsession::mediumName() tells you whether a stream is video or audio, and MediaSubsession::codecName() tells you which codec it uses. The amount of data received varies from frame to frame: frameSize gives the number of bytes received this time, and the data itself is in fReceiveBuffer. For video, the start code 0x00000001 must be prepended to each received frame before FFmpeg can decode it (see the sketch after this list).
10. When an error occurs, or when reception ends normally, an RTSP TEARDOWN command is sent to the server. Asynchronous, done in shutdownStream: RTSPClient::sendTeardownCommand.
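Regarding step 9: RTP carries H.264 as raw NAL units, without the Annex B start codes that FFmpeg's decoder expects, which is why 0x00000001 has to be restored in front of each received frame. The following is a minimal sketch of that step; deliverFrameToDecoder and decodeNalUnit are hypothetical names standing in for your own decode path (e.g., a wrapper around avcodec_send_packet), not part of testRTSPClient.cpp or FFmpeg:

#include <cstddef>
#include <cstdint>
#include <vector>

// Prepend the 4-byte Annex B start code to one frame received in fReceiveBuffer,
// then hand it to a decode callback. "decodeNalUnit" is a hypothetical stand-in
// for your own FFmpeg decode path; it is not a LIVE555 or FFmpeg name.
void deliverFrameToDecoder(const uint8_t* receiveBuffer, unsigned frameSize,
	void (*decodeNalUnit)(const uint8_t* data, size_t size))
{
	static const uint8_t startCode[4] = { 0x00, 0x00, 0x00, 0x01 };

	// RTP strips the Annex B framing, so restore the start code here:
	std::vector<uint8_t> annexBFrame;
	annexBFrame.reserve(sizeof(startCode) + frameSize);
	annexBFrame.insert(annexBFrame.end(), startCode, startCode + sizeof(startCode));
	annexBFrame.insert(annexBFrame.end(), receiveBuffer, receiveBuffer + frameSize);

	decodeNalUnit(annexBFrame.data(), annexBFrame.size());
}

Note that a decoder typically also needs the H.264 SPS/PPS parameter sets; these can be recovered from the SDP via MediaSubsession::fmtp_spropparametersets() and LIVE555's parseSPropParameterSets().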
Notes:
1. Each stream needs its own separate RTSPClient object.
2. To end the program cleanly, set eventLoopWatchVariable to a non-zero value at some point; this makes doEventLoop() return.
3. MediaSource is the base class of all Sources and MediaSink is the base class of all Sinks; both in turn derive from the Medium class. A MediaSession represents one RTP session, and one MediaSession may contain several subsessions (MediaSubsession). A Source and a Sink are tied together through an RTP subsession. The Source is the sending end, the origin of the stream; the Sink is the receiving end, the terminus of the stream, where the data is saved to a file or displayed. Data flows from the Source to the Sink. Each MediaSubsession internally maintains one Source and one Sink, connected as sketched below.
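As a concrete illustration of note 3, here is a minimal sketch, assuming env is the UsageEnvironment and subsession has already been initiated and SETUP. FileSink is a real LIVE555 sink that writes each received frame to a file (this is what the openRTSP tool does); attachFileSink and the afterPlaying parameter are names introduced just for this sketch:

#include <liveMedia.hh>

// Tie the subsession's Source to a Sink. "afterPlaying" is whatever completion
// handler you use (in the test code below it is subsessionAfterPlaying):
void attachFileSink(UsageEnvironment& env, MediaSubsession& subsession,
	void (*afterPlaying)(void* clientData))
{
	// FileSink is a real LIVE555 MediaSink; it writes the received data to a file:
	subsession.sink = FileSink::createNew(env, "received.dump");

	// startPlaying() connects the subsession's Source (the stream origin, returned by
	// readSource()) to the Sink (the stream terminus); the sink then keeps pulling
	// frames from the source until the stream ends:
	subsession.sink->startPlaying(*subsession.readSource(), afterPlaying, &subsession);
}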
The following test code is based on testRTSPClient.cpp:
#include "funset.hpp"
#include <iostream>
#include <thread>
#include <chrono>
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

namespace {

// Forward function definitions:
// RTSP 'response handlers':
void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode, char* resultString);
void continueAfterSETUP(RTSPClient* rtspClient, int resultCode, char* resultString);
void continueAfterPLAY(RTSPClient* rtspClient, int resultCode, char* resultString);

// Other event handler functions:
// called when a stream's subsession (e.g., audio or video substream) ends
void subsessionAfterPlaying(void* clientData);
// called when a RTCP "BYE" is received for a subsession
void subsessionByeHandler(void* clientData, char const* reason);
// called at the end of a stream's expected duration (if the stream has not already signaled its end using a RTCP "BYE")
void streamTimerHandler(void* clientData);

// The main streaming routine (for each "rtsp://" URL):
void openURL(UsageEnvironment& env, char const* progName, char const* rtspURL);

// Used to iterate through each stream's 'subsessions', setting up each one:
void setupNextSubsession(RTSPClient* rtspClient);

// Used to shut down and close a stream (including its "RTSPClient" object):
void shutdownStream(RTSPClient* rtspClient, int exitCode = 1);

// A function that outputs a string that identifies each stream (for debugging output). Modify this if you wish:
UsageEnvironment& operator<<(UsageEnvironment& env, const RTSPClient& rtspClient)
{
	return env << "[URL:\"" << rtspClient.url() << "\"]: ";
}

// A function that outputs a string that identifies each subsession (for debugging output). Modify this if you wish:
UsageEnvironment& operator<<(UsageEnvironment& env, const MediaSubsession& subsession)
{
	return env << subsession.mediumName() << "/" << subsession.codecName();
}

volatile char eventLoopWatchVariable = 0; // volatile: in this test it is also written from another thread

// Define a class to hold per-stream state that we maintain throughout each stream's lifetime:
class StreamClientState {
public:
	StreamClientState();
	virtual ~StreamClientState();

public:
	MediaSubsessionIterator* iter;
	MediaSession* session;
	MediaSubsession* subsession;
	TaskToken streamTimerTask;
	double duration;
};

// If you're streaming just a single stream (i.e., just from a single URL, once), then you can define and use just a single
// "StreamClientState" structure, as a global variable in your application. However, because - in this demo application - we're
// showing how to play multiple streams, concurrently, we can't do that. Instead, we have to have a separate "StreamClientState"
// structure for each "RTSPClient". To do this, we subclass "RTSPClient", and add a "StreamClientState" field to the subclass:
class ourRTSPClient : public RTSPClient {
public:
	static ourRTSPClient* createNew(UsageEnvironment& env, char const* rtspURL,
		int verbosityLevel = 0, char const* applicationName = NULL, portNumBits tunnelOverHTTPPortNum = 0);

protected:
	ourRTSPClient(UsageEnvironment& env, char const* rtspURL, int verbosityLevel,
		char const* applicationName, portNumBits tunnelOverHTTPPortNum); // called only by createNew()
	virtual ~ourRTSPClient();

public:
	StreamClientState scs;
};

// Define a data sink (a subclass of "MediaSink") to receive the data for each subsession (i.e., each audio or video 'substream').
// In practice, this might be a class (or a chain of classes) that decodes and then renders the incoming audio or video.
// Or it might be a "FileSink", for outputting the received data into a file (as is done by the "openRTSP" application).
// In this example code, however, we define a simple 'dummy' sink that receives incoming data, but does nothing with it.
class DummySink : public MediaSink {
public:
	static DummySink* createNew(UsageEnvironment& env,
		MediaSubsession& subsession, // identifies the kind of data that's being received
		char const* streamId = NULL); // identifies the stream itself (optional)

private:
	DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId); // called only by "createNew()"
	virtual ~DummySink();

	static void afterGettingFrame(void* clientData, unsigned frameSize, unsigned numTruncatedBytes,
		struct timeval presentationTime, unsigned durationInMicroseconds);
	void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
		struct timeval presentationTime, unsigned durationInMicroseconds);

private:
	// redefined virtual functions:
	virtual Boolean continuePlaying();

private:
	u_int8_t* fReceiveBuffer;
	MediaSubsession& fSubsession;
	char* fStreamId;
};

#define RTSP_CLIENT_VERBOSITY_LEVEL 1 // by default, print verbose output from each "RTSPClient"

static unsigned rtspClientCount = 0; // Counts how many streams (i.e., "RTSPClient"s) are currently in use.

void openURL(UsageEnvironment& env, char const* progName, char const* rtspURL)
{
	// Begin by creating a "RTSPClient" object. Note that there is a separate "RTSPClient" object for each stream that we wish
	// to receive (even if more than one stream uses the same "rtsp://" URL).
	RTSPClient* rtspClient = ourRTSPClient::createNew(env, rtspURL, RTSP_CLIENT_VERBOSITY_LEVEL, progName);
	if (rtspClient == NULL) {
		env << "Failed to create a RTSP client for URL \"" << rtspURL << "\": " << env.getResultMsg() << "\n";
		return;
	}

	++rtspClientCount;

	// Next, send a RTSP "DESCRIBE" command, to get a SDP description for the stream.
	// Note that this command - like all RTSP commands - is sent asynchronously; we do not block, waiting for a response.
	// Instead, the following function call returns immediately, and we handle the RTSP response later, from within the event loop:
	rtspClient->sendDescribeCommand(continueAfterDESCRIBE);
}

// Implementation of the RTSP 'response handlers':
void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode, char* resultString)
{
	do {
		UsageEnvironment& env = rtspClient->envir(); // alias
		StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

		if (resultCode != 0) {
			env << *rtspClient << "Failed to get a SDP description: " << resultString << ", resultCode: " << resultCode << "\n";
			delete[] resultString;
			break;
		}

		char* const sdpDescription = resultString;
		env << *rtspClient << "Got a SDP description:\n" << sdpDescription << "\n";

		// Create a media session object from this SDP description:
		scs.session = MediaSession::createNew(env, sdpDescription);
		delete[] sdpDescription; // because we don't need it anymore
		if (scs.session == NULL) {
			env << *rtspClient << "Failed to create a MediaSession object from the SDP description: " << env.getResultMsg() << "\n";
			break;
		} else if (!scs.session->hasSubsessions()) {
			env << *rtspClient << "This session has no media subsessions (i.e., no \"m=\" lines)\n";
			break;
		}

		// Then, create and set up our data source objects for the session. We do this by iterating over the session's 'subsessions',
		// calling "MediaSubsession::initiate()", and then sending a RTSP "SETUP" command, on each one.
		// (Each 'subsession' will have its own data source.)
		scs.iter = new MediaSubsessionIterator(*scs.session);
		setupNextSubsession(rtspClient);
		return;
	} while (0);

	// An unrecoverable error occurred with this stream.
	shutdownStream(rtspClient);
}

// By default, we request that the server stream its data using RTP/UDP.
// If, instead, you want to request that the server stream via RTP-over-TCP, change the following to True:
#define REQUEST_STREAMING_OVER_TCP False

void setupNextSubsession(RTSPClient* rtspClient)
{
	UsageEnvironment& env = rtspClient->envir(); // alias
	StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

	scs.subsession = scs.iter->next();
	if (scs.subsession != NULL) {
		if (!scs.subsession->initiate()) {
			env << *rtspClient << "Failed to initiate the \"" << *scs.subsession << "\" subsession: " << env.getResultMsg() << "\n";
			setupNextSubsession(rtspClient); // give up on this subsession; go to the next one
		} else {
			env << *rtspClient << "Initiated the \"" << *scs.subsession << "\" subsession (";
			if (scs.subsession->rtcpIsMuxed()) {
				env << "client port " << scs.subsession->clientPortNum();
			} else {
				env << "client ports " << scs.subsession->clientPortNum() << "-" << scs.subsession->clientPortNum() + 1;
			}
			env << ")\n";

			// Continue setting up this subsession, by sending a RTSP "SETUP" command:
			rtspClient->sendSetupCommand(*scs.subsession, continueAfterSETUP, False, REQUEST_STREAMING_OVER_TCP);
		}
		return;
	}

	// We've finished setting up all of the subsessions. Now, send a RTSP "PLAY" command to start the streaming:
	if (scs.session->absStartTime() != NULL) {
		// Special case: The stream is indexed by 'absolute' time, so send an appropriate "PLAY" command:
		rtspClient->sendPlayCommand(*scs.session, continueAfterPLAY, scs.session->absStartTime(), scs.session->absEndTime());
	} else {
		scs.duration = scs.session->playEndTime() - scs.session->playStartTime();
		rtspClient->sendPlayCommand(*scs.session, continueAfterPLAY);
	}
}

void continueAfterSETUP(RTSPClient* rtspClient, int resultCode, char* resultString)
{
	do {
		UsageEnvironment& env = rtspClient->envir(); // alias
		StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

		if (resultCode != 0) {
			env << *rtspClient << "Failed to set up the \"" << *scs.subsession << "\" subsession: " << resultString << ", resultCode: " << resultCode << "\n";
			break;
		}

		env << *rtspClient << "Set up the \"" << *scs.subsession << "\" subsession (";
		if (scs.subsession->rtcpIsMuxed()) {
			env << "client port " << scs.subsession->clientPortNum();
		} else {
			env << "client ports " << scs.subsession->clientPortNum() << "-" << scs.subsession->clientPortNum() + 1;
		}
		env << ")\n";

		// Having successfully setup the subsession, create a data sink for it, and call "startPlaying()" on it.
		// (This will prepare the data sink to receive data; the actual flow of data from the client won't start happening until later,
		// after we've sent a RTSP "PLAY" command.)
		scs.subsession->sink = DummySink::createNew(env, *scs.subsession, rtspClient->url()); // perhaps use your own custom "MediaSink" subclass instead
		if (scs.subsession->sink == NULL) {
			env << *rtspClient << "Failed to create a data sink for the \"" << *scs.subsession << "\" subsession: " << env.getResultMsg() << "\n";
			break;
		}

		env << *rtspClient << "Created a data sink for the \"" << *scs.subsession << "\" subsession\n";
		scs.subsession->miscPtr = rtspClient; // a hack to let subsession handler functions get the "RTSPClient" from the subsession
		scs.subsession->sink->startPlaying(*(scs.subsession->readSource()), subsessionAfterPlaying, scs.subsession);
		// Also set a handler to be called if a RTCP "BYE" arrives for this subsession:
		if (scs.subsession->rtcpInstance() != NULL) {
			scs.subsession->rtcpInstance()->setByeWithReasonHandler(subsessionByeHandler, scs.subsession);
		}
	} while (0);
	delete[] resultString;

	// Set up the next subsession, if any:
	setupNextSubsession(rtspClient);
}

void continueAfterPLAY(RTSPClient* rtspClient, int resultCode, char* resultString)
{
	Boolean success = False;

	do {
		UsageEnvironment& env = rtspClient->envir(); // alias
		StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

		if (resultCode != 0) {
			env << *rtspClient << "Failed to start playing session: " << resultString << ", resultCode: " << resultCode << "\n";
			break;
		}

		// Set a timer to be handled at the end of the stream's expected duration (if the stream does not already signal its end
		// using a RTCP "BYE"). This is optional. If, instead, you want to keep the stream active - e.g., so you can later
		// 'seek' back within it and do another RTSP "PLAY" - then you can omit this code.
		// (Alternatively, if you don't want to receive the entire stream, you could set this timer for some shorter value.)
		if (scs.duration > 0) {
			unsigned const delaySlop = 2; // number of seconds extra to delay, after the stream's expected duration. (This is optional.)
			scs.duration += delaySlop;
			unsigned uSecsToDelay = (unsigned)(scs.duration * 1000000);
			scs.streamTimerTask = env.taskScheduler().scheduleDelayedTask(uSecsToDelay, (TaskFunc*)streamTimerHandler, rtspClient);
		}

		env << *rtspClient << "Started playing session";
		if (scs.duration > 0) {
			env << " (for up to " << scs.duration << " seconds)";
		}
		env << "...\n";

		success = True;
	} while (0);
	delete[] resultString;

	if (!success) {
		// An unrecoverable error occurred with this stream.
		shutdownStream(rtspClient);
	}
}

// Implementation of the other event handlers:
void subsessionAfterPlaying(void* clientData)
{
	MediaSubsession* subsession = (MediaSubsession*)clientData;
	RTSPClient* rtspClient = (RTSPClient*)(subsession->miscPtr);

	// Begin by closing this subsession's stream:
	Medium::close(subsession->sink);
	subsession->sink = NULL;

	// Next, check whether *all* subsessions' streams have now been closed:
	MediaSession& session = subsession->parentSession();
	MediaSubsessionIterator iter(session);
	while ((subsession = iter.next()) != NULL) {
		if (subsession->sink != NULL) return; // this subsession is still active
	}

	// All subsessions' streams have now been closed, so shutdown the client:
	shutdownStream(rtspClient);
}

void subsessionByeHandler(void* clientData, char const* reason)
{
	MediaSubsession* subsession = (MediaSubsession*)clientData;
	RTSPClient* rtspClient = (RTSPClient*)subsession->miscPtr;
	UsageEnvironment& env = rtspClient->envir(); // alias

	env << *rtspClient << "Received RTCP \"BYE\"";
	if (reason != NULL) {
		env << " (reason:\"" << reason << "\")";
		delete[] reason;
	}
	env << " on \"" << *subsession << "\" subsession\n";

	// Now act as if the subsession had closed:
	subsessionAfterPlaying(subsession);
}

void streamTimerHandler(void* clientData)
{
	ourRTSPClient* rtspClient = (ourRTSPClient*)clientData;
	StreamClientState& scs = rtspClient->scs; // alias

	scs.streamTimerTask = NULL;

	// Shut down the stream:
	shutdownStream(rtspClient);
}

void shutdownStream(RTSPClient* rtspClient, int exitCode)
{
	UsageEnvironment& env = rtspClient->envir(); // alias
	StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

	// First, check whether any subsessions have still to be closed:
	if (scs.session != NULL) {
		Boolean someSubsessionsWereActive = False;
		MediaSubsessionIterator iter(*scs.session);
		MediaSubsession* subsession;

		while ((subsession = iter.next()) != NULL) {
			if (subsession->sink != NULL) {
				Medium::close(subsession->sink);
				subsession->sink = NULL;

				if (subsession->rtcpInstance() != NULL) {
					subsession->rtcpInstance()->setByeHandler(NULL, NULL); // in case the server sends a RTCP "BYE" while handling "TEARDOWN"
				}

				someSubsessionsWereActive = True;
			}
		}

		if (someSubsessionsWereActive) {
			// Send a RTSP "TEARDOWN" command, to tell the server to shutdown the stream.
			// Don't bother handling the response to the "TEARDOWN".
			rtspClient->sendTeardownCommand(*scs.session, NULL);
		}
	}

	env << *rtspClient << "Closing the stream.\n";
	Medium::close(rtspClient);
	// Note that this will also cause this stream's "StreamClientState" structure to get reclaimed.

	if (--rtspClientCount == 0) {
		// The final stream has ended, so exit the application now.
		// (Of course, if you're embedding this code into your own application, you might want to comment this out,
		// and replace it with "eventLoopWatchVariable = 1;", so that we leave the LIVE555 event loop, and continue running "main()".)
		//exit(exitCode);
		eventLoopWatchVariable = 1;
	}
}

// Implementation of "ourRTSPClient":
ourRTSPClient* ourRTSPClient::createNew(UsageEnvironment& env, char const* rtspURL,
	int verbosityLevel, char const* applicationName, portNumBits tunnelOverHTTPPortNum)
{
	return new ourRTSPClient(env, rtspURL, verbosityLevel, applicationName, tunnelOverHTTPPortNum);
}

ourRTSPClient::ourRTSPClient(UsageEnvironment& env, char const* rtspURL,
	int verbosityLevel, char const* applicationName, portNumBits tunnelOverHTTPPortNum)
	: RTSPClient(env, rtspURL, verbosityLevel, applicationName, tunnelOverHTTPPortNum, -1)
{
}

ourRTSPClient::~ourRTSPClient()
{
}

// Implementation of "StreamClientState":
StreamClientState::StreamClientState()
	: iter(NULL), session(NULL), subsession(NULL), streamTimerTask(NULL), duration(0.0)
{
}

StreamClientState::~StreamClientState()
{
	delete iter;
	if (session != NULL) {
		// We also need to delete "session", and unschedule "streamTimerTask" (if set)
		UsageEnvironment& env = session->envir(); // alias

		env.taskScheduler().unscheduleDelayedTask(streamTimerTask);
		Medium::close(session);
	}
}

// Implementation of "DummySink":
// Even though we're not going to be doing anything with the incoming data, we still need to receive it.
// Define the size of the buffer that we'll use:
#define DUMMY_SINK_RECEIVE_BUFFER_SIZE 100000

DummySink* DummySink::createNew(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId)
{
	return new DummySink(env, subsession, streamId);
}

DummySink::DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId)
	: MediaSink(env), fSubsession(subsession)
{
	fStreamId = strDup(streamId);
	fReceiveBuffer = new u_int8_t[DUMMY_SINK_RECEIVE_BUFFER_SIZE];
}

DummySink::~DummySink()
{
	delete[] fReceiveBuffer;
	delete[] fStreamId;
}

void DummySink::afterGettingFrame(void* clientData, unsigned frameSize, unsigned numTruncatedBytes,
	struct timeval presentationTime, unsigned durationInMicroseconds)
{
	DummySink* sink = (DummySink*)clientData;
	sink->afterGettingFrame(frameSize, numTruncatedBytes, presentationTime, durationInMicroseconds);
}

// If you don't want to see debugging output for each received frame, then comment out the following line:
#define DEBUG_PRINT_EACH_RECEIVED_FRAME 1

void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
	struct timeval presentationTime, unsigned /*durationInMicroseconds*/)
{
	// We've just received a frame of data. (Optionally) print out information about it:
#ifdef DEBUG_PRINT_EACH_RECEIVED_FRAME
	if (fStreamId != NULL) envir() << "Stream \"" << fStreamId << "\"; ";
	envir() << fSubsession.mediumName() << "/" << fSubsession.codecName() << ":\tReceived " << frameSize << " bytes";
	if (numTruncatedBytes > 0) envir() << " (with " << numTruncatedBytes << " bytes truncated)";
	char uSecsStr[6 + 1]; // used to output the 'microseconds' part of the presentation time
	sprintf(uSecsStr, "%06u", (unsigned)presentationTime.tv_usec);
	envir() << ".\tPresentation time: " << (int)presentationTime.tv_sec << "." << uSecsStr;
	if (fSubsession.rtpSource() != NULL && !fSubsession.rtpSource()->hasBeenSynchronizedUsingRTCP()) {
		envir() << "!"; // mark the debugging output to indicate that this presentation time is not RTCP-synchronized
	}
#ifdef DEBUG_PRINT_NPT
	envir() << "\tNPT: " << fSubsession.getNormalPlayTime(presentationTime);
#endif
	envir() << "\n";
#endif

	// Then continue, to request the next frame of data:
	continuePlaying();
}

Boolean DummySink::continuePlaying()
{
	if (fSource == NULL) return False; // sanity check (should not happen)

	// Request the next frame of data from our input source. "afterGettingFrame()" will get called later, when it arrives:
	fSource->getNextFrame(fReceiveBuffer, DUMMY_SINK_RECEIVE_BUFFER_SIZE, afterGettingFrame, this, onSourceClosure, this);
	return True;
}

void shutdown_rtsp_client_stream()
{
	// Let the stream run for two seconds, then ask the event loop to exit:
	std::this_thread::sleep_for(std::chrono::seconds(2));
	eventLoopWatchVariable = 1;
}

} // namespace

int test_live555_rtsp_client()
{
	// reference: live/testProgs/testRTSPClient.cpp
	// Begin by setting up our usage environment:
	TaskScheduler* scheduler = BasicTaskScheduler::createNew();
	UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

	// We need at least one "rtsp://" URL argument:
	int argc = 2;
	const char* argv[2] = { "test_live555_rtsp_client", "rtsp://184.72.239.149/vod/mp4://BigBuckBunny_175k.mov" };

	// There are argc-1 URLs: argv[1] through argv[argc-1]. Open and start streaming each one:
	for (int i = 1; i <= argc - 1; ++i) {
		openURL(*env, argv[0], argv[i]);
	}

	std::thread th(shutdown_rtsp_client_stream);

	// All subsequent activity takes place within the event loop:
	// This function call does not return, unless, at some point in time, "eventLoopWatchVariable" gets set to something non-zero.
	env->taskScheduler().doEventLoop(&eventLoopWatchVariable);

	th.join();

	// If you don't intend to do anything more with the "TaskScheduler" and "UsageEnvironment" objects,
	// you can reclaim the (small) memory used by them:
	env->reclaim(); env = NULL;
	delete scheduler; scheduler = NULL;

	return 0;
}
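One point to watch when adapting this code: DUMMY_SINK_RECEIVE_BUFFER_SIZE is only 100000 bytes. If a received frame is larger than the buffer passed to getNextFrame() (not unusual for H.264 keyframes at higher resolutions), the excess is discarded and reported through numTruncatedBytes, so a sink that feeds a real decoder should use a larger buffer or at least check numTruncatedBytes.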
When run with DEBUG_PRINT_EACH_RECEIVED_FRAME defined, the program prints one debug line per received frame: the stream URL, the medium/codec name, the number of bytes received, and the presentation time. (The screenshot of this output is omitted here.)
GitHub:https://github.com/fengbingchun/OpenCV_Test