We already covered part of this in the previous article:
testH264VideoStreamer repeatedly reads from an H.264 elementary-stream video file (named "test.264") and streams it using RTP multicast. The program also has a built-in RTSP server.
Apple's QuickTime Player can be used to receive and play this video stream. To use it, have the player open the session's "rtsp://" URL (which the program prints out when it starts streaming).
The open-source VLC and MPlayer media players can also be used.
Since the code carries English comments throughout, analyzing the source is straightforward.
1. Source Code Analysis
First, a note: I ran testH264VideoStreamer to completion, but no video ever appeared. I tried several versions of live555 without success, which was quite discouraging. Then I came across an article.
See: live555 build and playback example
I tried the source code from that article and it works. Why? Let's look at its source first; it is explained quite clearly.
/**
 * This program provides both unicast and multicast streaming. It is based on
 * testH264VideoStreamer, with reference to testOnDemandRTSPServer.
 * Notes:
 *   Unicast: reconnecting with VLC restarts reading from the beginning of the
 *   file. No macroblock artifacts.
 *   Multicast: reconnecting with VLC continues from where the previous read
 *   left off. Artifacts appear on each connection, and VLC prints:
 *     main error: pictures leaked, trying to workaround
 */
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>

UsageEnvironment* env;
char inputFileName[128] = {0}; // input video file
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;
Boolean reuseFirstSource = False;

void play(); // forward
void announceStream(RTSPServer* rtspServer, ServerMediaSession* sms,
                    char const* streamName, char const* inputFileName);

int main(int argc, char** argv) {
  strcpy(inputFileName, "test.264"); // default value
  if (argc == 2) {
    strcpy(inputFileName, argv[1]);
  }
  printf("Using file: %s\n", inputFileName);

  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Session description
  char const* descriptionString
    = "Session streamed by \"testH264VideoStreamer\"";

  // RTSP server on port 8554
  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }

  // Multicast
  // Create 'groupsocks' for RTP and RTCP:
  struct in_addr destinationAddress;
  destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);

  const unsigned short rtpPortNum = 18888;
  const unsigned short rtcpPortNum = rtpPortNum+1;
  const unsigned char ttl = 255;

  const Port rtpPort(rtpPortNum);
  const Port rtcpPort(rtcpPortNum);

  Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
  rtpGroupsock.multicastSendOnly(); // we're a SSM source
  Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
  rtcpGroupsock.multicastSendOnly(); // we're a SSM source

  // Create a 'H264 Video RTP' sink from the RTP 'groupsock':
  OutPacketBuffer::maxSize = 200000;
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

  // Create (and start) a 'RTCP instance' for this RTP sink:
  const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
  const unsigned maxCNAMElen = 100;
  unsigned char CNAME[maxCNAMElen+1];
  gethostname((char*)CNAME, maxCNAMElen);
  CNAME[maxCNAMElen] = '\0'; // just in case
  RTCPInstance* rtcp
    = RTCPInstance::createNew(*env, &rtcpGroupsock,
                              estimatedSessionBandwidth, CNAME,
                              videoSink, NULL /* we're a server */,
                              True /* we're a SSM source */);
  // Note: This starts RTCP running automatically

  char const* streamName = "h264ESVideoMulticast";
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, streamName, inputFileName,
                                    descriptionString, True /*SSM*/);
  sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
  rtspServer->addServerMediaSession(sms);
  announceStream(rtspServer, sms, streamName, inputFileName);

  // Start the streaming:
  *env << "Beginning streaming...\n";
  play();

  // Unicast
  {
    char const* streamName = "h264ESVideo";
    ServerMediaSession* sms
      = ServerMediaSession::createNew(*env, streamName, streamName,
                                      descriptionString);
    sms->addSubsession(H264VideoFileServerMediaSubsession
                       ::createNew(*env, inputFileName, reuseFirstSource));
    rtspServer->addServerMediaSession(sms);
    announceStream(rtspServer, sms, streamName, inputFileName);
  }

  env->taskScheduler().doEventLoop(); // does not return
  return 0; // only to prevent compiler warning
}

// Called when the file has been read to the end; restarts reading
void afterPlaying(void* /*clientData*/) {
  *env << "...done reading from file\n";
  videoSink->stopPlaying();
  Medium::close(videoSource);
  // Note that this also closes the input file that this source read from.

  // Start playing once again:
  play();
}

void play() {
  // Open the input file as a 'byte-stream file source':
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(*env, inputFileName);
  if (fileSource == NULL) {
    *env << "Unable to open file \"" << inputFileName
         << "\" as a byte-stream file source\n";
    exit(1);
  }
  FramedSource* videoES = fileSource;

  // Create a framer for the Video Elementary Stream:
  videoSource = H264VideoStreamFramer::createNew(*env, videoES);

  // Finally, start playing:
  *env << "Beginning to read from file...\n";
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}

void announceStream(RTSPServer* rtspServer, ServerMediaSession* sms,
                    char const* streamName, char const* inputFileName) {
  char* url = rtspServer->rtspURL(sms);
  UsageEnvironment& env = rtspServer->envir();
  env << "\n\"" << streamName << "\" stream, from the file \""
      << inputFileName << "\"\n";
  env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;
}
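In play(), the sink's startPlaying() is handed afterPlaying() as a completion callback; afterPlaying() closes the exhausted source and calls play() again, so the file streams in an endless loop. Stripped of the live555 types, that callback loop can be sketched as follows. FakeSink and the frame counter are stand-ins invented for illustration, not live555 APIs:

```cpp
#include <functional>

// Stand-in for live555's contract: a sink "plays" a source until it is
// exhausted, then invokes the afterPlaying callback it was given.
struct FakeSink {
  void startPlaying(int& framesLeft, std::function<void()> afterPlaying) {
    while (framesLeft > 0) --framesLeft; // "stream" the file to the end
    afterPlaying();                      // source exhausted -> notify
  }
};

static int loops = 0;

void play(FakeSink& sink); // forward, as in the listing

// Mirrors afterPlaying() above: tear down the old source, then start over.
void afterPlaying(FakeSink& sink) {
  if (++loops < 3) play(sink); // the real program restarts unconditionally
}

// Mirrors play() above: reopen the file and hand the sink a fresh source.
void play(FakeSink& sink) {
  int frames = 5; // stands in for reopening "test.264"
  sink.startPlaying(frames, [&sink] { afterPlaying(sink); });
}
```

In the real program the loop never terminates; the cap of three iterations here only keeps the sketch finite.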
2. Concepts
The source also contains this note:
// Note: This is a multicast address. If you wish instead to stream
// using unicast, then you should use the "testOnDemandRTSPServer"
// test program - not this test program - as a model.
So here we meet the concepts of unicast and multicast. I had previously also looked into live streaming and on-demand streaming; how do all of these differ?
See: LIVE555 Revisited -- Unicast, Multicast, Broadcast, Live, On-Demand: What Are They?
3. Example
See: A network live-streaming system implemented with the Live555 library
In that code, the multicast IP address and port number are specified explicitly.
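For reference, the multicast listing above calls chooseRandomIPv4SSMAddress(), which picks a random address from the source-specific multicast (SSM) range 232.0.0.0/8 (RFC 4607). A system that hard-codes its destination would fill in the address itself. The sketch below uses only the plain sockets API, not live555, and the address 232.255.42.42 is a hypothetical choice for illustration:

```cpp
#include <cstdint>
#include <arpa/inet.h>

// True if the dotted-quad IPv4 address lies in the source-specific
// multicast (SSM) range 232.0.0.0/8.
bool isSSMAddress(const char* addr) {
  uint32_t host = ntohl(inet_addr(addr)); // to host byte order
  return ((host >> 24) & 0xFF) == 232;    // first octet must be 232
}

// Fill in a fixed multicast destination instead of calling
// chooseRandomIPv4SSMAddress():
struct in_addr makeDestination(const char* addr) {
  struct in_addr destinationAddress;
  destinationAddress.s_addr = inet_addr(addr); // stays in network byte order
  return destinationAddress;
}
```

With live555, the resulting in_addr would be passed to the Groupsock constructors exactly as destinationAddress is in the listing; live555's our_inet_addr() portable wrapper plays the same role as inet_addr() here.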