http://www.live555.com/
The three foundation classes and their roles:
UsageEnvironment  the operating-system environment
TaskScheduler  multiplexing of the individual streaming tasks
Medium  the source of the audio/video stream
All three are abstract base classes, so we never use them directly; we use their derived classes instead.
A closer look at each base class:
UsageEnvironment
BasicUsageEnvironment0
BasicUsageEnvironment
UsageEnvironment overloads the << operator to control how its output strings are printed.
TaskScheduler -> abstract class, only interfaces
BasicTaskScheduler0 -> override doEventLoop()
BasicTaskScheduler -> override SingleStep()
TaskScheduler is actually defined in UsageEnvironment.hh. Its main job is to multiplex the tasks of the various audio/video streams, and its member function doEventLoop() is the last call every live555 program makes. live555 ships with a ready-made bare-bones implementation, BasicTaskScheduler, but doEventLoop() is actually overridden in BasicTaskScheduler0: inside doEventLoop() an infinite loop keeps calling SingleStep(). SingleStep() in turn is overridden in BasicTaskScheduler, where it uses the socket select() function to poll.
* select() is another way, besides fork(), to achieve multitasking. Unlike fork(), it spawns no child process; instead, resources are shared by polling the descriptors.
The two main uses of select() are:
wait for input to arrive
wait for output to complete
Medium
Medium is more involved: all of the streaming-control classes are derived from Medium.
The roles of several of these classes:
Medium
  MediaSink  streaming playback control
  ServerMediaSubsession  handles each connected client's session
    OnDemandServerMediaSubsession
      FileServerMediaSubsession
      MPEG1or2DemuxedServerMediaSubsession
  MediaSource  the source of the stream
    FramedSource  the source of each frame
      FramedFileSource
      FramedFilter
      RTPSource
        MultiFramedRTPSource
          MPEG4ESVideoRTPSource
If you want the RTP server's streaming data to come from a live source instead, such as a webcam or a mic, you can use testOnDemandRTSPServer.cpp in the testProgs directory as a reference:
Programs to study for the video part:
1. MPEG4VideoFileServerMediaSubsession.cpp
   in particular the constructor and the createNewStreamSource() member function
2. ByteStreamFileSource.cpp
3. FFMPEG
Programs to study for the audio part:
1. ADTSAudioFileServerMediaSubsession.cpp
2. ADTSAudioFileSource.cpp
3. FAAC
For both video and audio, the most important pieces are getNextFrame() and doGetNextFrame() in FramedSource; these member functions determine where the source data comes from.
Below is the relevant explanation, quoted from the official FAQ:
FAQ:
But what about the "testOnDemandRTSPServer" test program (for streaming via unicast)? How can I modify it so that it takes input from a live source instead of from a file?
ANS:
First, you will need to modify "testProgs/testOnDemandRTSPServer.cpp" to set the variable "reuseFirstSource" to "True". This tells the server to use the same input source object, even if more than one client is streaming from the server concurrently.
Then, as before, if your input device is accessible by a file name (including "stdin" for standard input), then simply replace the appropriate "test.*" file name with the file name of your input device.
If, however, you have written your own "FramedSource" subclass (e.g., based on "DeviceSource") to encapsulate your input source, then the solution is a little more complicated. In this case, you will also need to define and implement your own new subclass of "OnDemandServerMediaSubsession" that gets its input from your live source, rather than from a file. In particular, you will need to provide your own implementation of the two pure virtual functions "createNewStreamSource()" and "createNewRTPSink()". For a model of how to do this, see the existing "FileServerMediaSubsession" subclass that is used to stream your desired type of data from an input file. Note that:
Your "createNewStreamSource()" implementation will create (and return) an instance of your input source object. (You should also set the "estBitrate" result parameter to be the estimated bit rate (in kbps) of the stream. This estimate is used to determine the frequency of RTCP packets; it is not essential that it be accurate.)
Your "createNewRTPSink()" implementation will create (and return) an appropriate new "RTPSink" (subclass) object. (The code for this will usually be the same as the code for "createNewRTPSink()" in the corresponding "FileServerMediaSubsession" subclass.)