Named pipe consumer #1335
Conversation
Audio can be sent down a separate pipe or the same pipe as video. Audio will precede or follow video depending on whether the pipe is specified as an audio pipe or a video pipe (audio follows video in the latter). Error logging has been improved, and an example pipe consumer configuration has been added to the casparcg.config file.
Is there a build available to test?
While I've obviously built the code in this PR to test locally, it is missing a bunch of files (like CEF locales, the media scanner, etc.) that would normally be included in a release, and I don't know what steps I should take to add them. I also don't have a convenient place to upload the build (it's >100 MB, which is usually too large for free services). Is there a usual protocol for this?
For the previous CasparCG Server 2.1 branch there were some issues under Linux when using named pipes: instead of filling the pipe, it would overwrite it as a new file with the same name on the filesystem. Maybe this older issue #596 is useful.
@vimlesh1975 I've now put the partial build here, which should be sufficient for testing. @walterav1984 Thanks! That should be useful when I get around to making named pipes work on Linux in the future!
When I need to produce a build like this, I download a recent autobuild (http://builds.casparcg.com) and replace the casparcg.* files inside with the freshly built copies. As you haven't updated any dependencies, that is safe to do. Fun fact: all of the 2.3.0 beta builds were produced this way ;) As for hosting, I either use Google Drive (as that has free storage) or a personal server, but what you have done as a tag on your fork works too.
@philipstarkey I tested it now. It's working fine. This is a very good feature.
@Julusian Cool, thanks for the info! I'll do that next time I generate a build. I've identified a bug in my logic on line 435 of pipe_consumer.cpp (it is blatantly wrong if you only use video or audio, as it spams the log with incorrect messages). I've also realised I should add: a flag to automatically reconnect when the pipe closes, a flag controlling whether the internal buffers are cleared during reconnection, and a way to connect to a pipe created by an external program.
I'll hopefully update this PR with those changes within the next few days.
Fixed:
* `openPipe` loop did not reset `lastError` or call `closeHandle` when retrying
* Corrected log messages when internal buffers overflowed
* Fixed handling of invalid/missing parameters, which now raise an exception so that ADD commands return a fail state rather than succeeding with the pipe consumer immediately ending. The exception message now displays the correct number of slashes required (which differs depending on whether the pipe name is specified in the config file or via terminal/AMCP command)
* Improved graph usage. Dropped frames are now logged in the graph no matter where they are dropped. Other metrics (such as input buffer size) are also updated even when frames are not being transmitted.

New features:
* New flag (AUTO_RECONNECT) for automatically reconnecting on pipe close
* New flag (REALTIME TRUE|FALSE [default: TRUE]) to set whether the internal buffers should be cleared during pipe reconnections
* New flag (EXISTING_PIPE) to connect to an existing pipe (created by an external program) rather than creating the pipe
* Internal audio/video sync drift detection (due to dropped frames) that will drop audio/video frames to restore sync
I have fixed the bugs and implemented the features mentioned in the last comment. The latest build is here. Several bugs were also fixed (see the commit message) and the diagnostics graphs were improved. I also added internal tracking of the audio/video sync when using separate pipes, which is used to drop video/audio frames to ensure they stay in sync (even across pipe reconnections, should the new AUTO_RECONNECT flag be used).

Further examples

To connect to a pipe already created by an external program (not needed for external FFmpeg):
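(a sketch reusing the video-only command from the PR description; the exact flag placement is illustrative)
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo EXISTING_PIPE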
To automatically reconnect to a pipe if a write fails or the pipe is closed:
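(sketch; flag placement illustrative)
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo AUTO_RECONNECT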
Note that this is equivalent to:
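(since REALTIME defaults to TRUE)
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo AUTO_RECONNECT REALTIME TRUE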
The REALTIME flag defaults to TRUE, meaning the internal buffers are cleared during pipe reconnections. If frames should instead be buffered internally (up to the maximum set by the buffer size parameter) while the pipe is disconnected:
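(sketch, as above; flag placement illustrative)
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo AUTO_RECONNECT REALTIME FALSE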
Obviously these new flags can be combined with each other and with the previous parameters.
…ve an uneven audio cadence. This is necessary as it is impossible to ensure continued byte synchronisation between the pipe writer and reader under those circumstances without dropping multiple frames every time a single frame is dropped and/or reconnecting the pipe every time a single frame is dropped. So we just prevent it instead. Video formats with uneven audio cadences can be used with separate video/audio pipes with no issue (since the number of bytes written in 1 second is equal regardless of the cadence synchronisation between writer and reader). Also renamed two variables for consistency.
Realised there was a potential issue with uneven audio cadences (59.94 Hz and 29.97 Hz frame rates) when using the single pipe option. This is because you need to synchronise the audio cadence between the CasparCG pipe consumer and whatever reads the pipe, so that you don't end up reading video bytes as audio bytes (and the reverse). Such a synchronisation could be implemented (synchronised to the first write to the pipe), however it becomes impossible to handle nicely if frames are dropped at any point, as it requires resynchronisation either by dropping more frames than are strictly necessary or by reconnecting the pipe. Either way you risk getting choppy output down the single pipe. So I've blocked anything with an uneven audio cadence from working with single pipe mode. Those frame rates will work best in multiple pipe mode (which is what external FFmpeg wants anyway!).
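To put numbers on it: at 48 kHz and 59.94 fps there are 48000 × 1001/60000 ≈ 800.8 audio samples per video frame, so the per-frame audio blocks are a mix of 800- and 801-sample chunks (4004 samples every 5 frames). With a single pipe the reader can only split video bytes from audio bytes if it tracks that exact pattern, and a single dropped frame throws that tracking off.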
Would it be possible to do 16ch audio output with this? I'd love to be able to feed them into FFmpeg and map audio channels to language tracks.
@BlakeB415 It's been a while since I wrote this but I'm pretty sure my consumer is agnostic to the number of channels. It just passes the entire byte array down the pipe, so it should contain whatever number of bytes is needed for the number of audio channels configured in Caspar (unless Caspar is doing something weird where it stores audio data in blocks of 8 channels). Have you tried it and it didn't work, or was this a theoretical question?
I tried and it didn't work. The audio output was skipping and was pitched up. Not sure if that was ffmpeg or the output itself. I set -ac to 16 on your example command.
@BlakeB415 Thanks for the info! I'll see if I can take a look on the weekend and reproduce. Could you post your casparcg config, the command you used to configure the pipe consumer, and the ffmpeg command you used to accept the pipe data?
Thanks! ffmpeg version 4.3.1-2020-11-19-essentials_build-www.gyan.dev
IIRC CasparCG 2.2 and later have a fixed channel layout of 8 channels; the channel layout in @BlakeB415's config file is only compatible with 2.0 and 2.1 and is simply ignored in newer versions.
Ah, that explains it. Well, I hope support for it comes at some point in later versions. |
Very interesting work.
This PR adds a new consumer to CasparCG that sends raw video and/or audio down a named pipe.
Motivation
While CasparCG has an FFmpeg consumer, it is sometimes inefficient, and accessing the latest FFmpeg features requires waiting for a new build of CasparCG (against the new FFmpeg) or building it yourself. There are also instances where integration with a 3rd party tool is desired. Both of these requirements can be met if there is an easy way to get raw video/audio data out of CasparCG.
The current ways to get raw video/audio data out of CasparCG are:
The NDI consumer, which, while supporting BGRA, is obviously only compatible with other NDI-supporting hardware/software that has agreed to the proprietary (but royalty-free) license in order to use the NDI SDK. A notable example of a rejection of this is FFmpeg, which removed support for NDI after NDI violated the FFmpeg license.
The FFmpeg consumer via a stream. However, the FFmpeg consumer appears to transcode to YUVA422p (which fortunately does not appear to be CPU intensive), and the act of sending 1080p60 down a socket appears to require a large amount of CPU (~15% on an i7-8700K). This CPU usage was consistent between UDP and TCP sockets, and even with the YUVA422p transcoding removed from the CasparCG source code. In addition, it is impossible to get synchronised raw video+audio down different sockets with the FFmpeg consumer (which is required if you want to ingest it into a standalone FFmpeg process).
Neither of these seemed ideal to me, hence the creation of the (named) pipe consumer.
Implementation
This consumer uses Windows named pipes in blocking mode (frame data must be read by the receiver before the next frame is sent). Raw video and audio data can be sent down separate pipes, or the same pipe (end-user configurable). Frame data from CasparCG is buffered (the size of the buffer is configurable by end users) starting from the moment the pipe is opened by the external receiver. If separate pipes for audio and video are used, buffering begins from the moment the first of the pipes is connected (to ensure audio/video synchronisation). If two pipes are used, they run in separate threads so that the receiver is not required to read one video frame, one audio frame, one video frame, etc. (necessary when receiving in FFmpeg - possibly FFmpeg codec dependent).
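For anyone curious, below is a minimal standalone sketch of this blocking write pattern using the Win32 named pipe API. It is not the code from this PR; the pipe name, buffer sizes and error handling are illustrative only.

```cpp
#include <windows.h>

#include <cstdint>
#include <vector>

int main()
{
    const DWORD frame_size = 1920 * 1080 * 4; // one 1080p BGRA frame
    std::vector<uint8_t> frame(frame_size, 0);

    // Byte mode + PIPE_WAIT means WriteFile blocks whenever the pipe buffer is
    // full, i.e. whenever the receiver is not keeping up.
    HANDLE pipe = CreateNamedPipeA("\\\\.\\pipe\\CasparCGVideo",
                                   PIPE_ACCESS_OUTBOUND,
                                   PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                                   1,          // single instance
                                   frame_size, // outbound buffer hint
                                   0, 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    // Block until an external reader (e.g. FFmpeg) opens the pipe.
    if (!ConnectNamedPipe(pipe, nullptr) && GetLastError() != ERROR_PIPE_CONNECTED) {
        CloseHandle(pipe);
        return 1;
    }

    // Send one frame; a failed write typically means the reader closed the pipe.
    DWORD written = 0;
    const BOOL ok = WriteFile(pipe, frame.data(), frame_size, &written, nullptr);

    CloseHandle(pipe);
    return ok ? 0 : 1;
}
```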
Results
Initial tests outside of CasparCG showed that Windows named pipes were easily capable of streaming 1080p60. This has also proven the case when integrated with CasparCG. Streaming 1080p60 down a named pipe (using the code in this PR) to a standalone copy of FFmpeg appears to use 3% of my CPU (i7-8700K) on the CasparCG side.
Receiving the raw stream from CasparCG via the named pipe and then using standalone FFmpeg to encode and restream over TCP (equivalent to ADD 1 STREAM tcp://...) resulted in a marginal CPU benefit: cumulative "CasparCG pipe consumer + standalone FFmpeg" CPU usage dropped from about 15% to 11% compared with streaming via the CasparCG FFmpeg consumer alone.
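For reference, a sketch of the standalone-FFmpeg side of such a test (the encoder, container and port below are illustrative; use whatever output options your FFmpeg build supports):
ffmpeg -r 60 -s 1920x1080 -f rawvideo -pix_fmt bgra -i \\.\pipe\CasparCGVideo -c:v libx264 -preset veryfast -pix_fmt yuv420p -f mpegts tcp://127.0.0.1:9000?listen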
Example usage
Here are some basic examples. The FFmpeg examples can of course be modified to use any input frame rate/resolution and any output options supported by the FFmpeg binary you use.
Streaming just video to FFmpeg
CasparCG command:
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo
FFmpeg launched from terminal with:
ffmpeg -r 60 -s 1920x1080 -f rawvideo -pix_fmt bgra -i \\.\pipe\CasparCGVideo -r 60 -c:v libx264 -crf 23 -pix_fmt yuv420p caspar_pipe_test_1.mp4
FFmpeg will ingest video from CasparCG and encode it as an H.264 MP4.
Streaming video + audio to FFmpeg
CasparCG command:
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo AUDIO_PIPE \\\\.\\pipe\\CasparCGAudio
FFmpeg launched from terminal with:
ffmpeg -r 60 -s 1920x1080 -f rawvideo -pix_fmt bgra -i \\.\pipe\CasparCGVideo -probesize 128000 -r 60 -sample_fmt s32 -acodec pcm_s32le -f s32le -ar 48000.0 -ac 8 -i \\.\pipe\CasparCGAudio -r 60 -c:v libx264 -crf 23 -pix_fmt yuv420p caspar_pipe_test_2.mp4
FFmpeg will ingest video + audio from CasparCG and encode it as an H.264 MP4.
Other options
To send video+audio down a single pipe (cannot be ingested by FFmpeg but may be useful for custom software):
Note: This sends 1 video frame followed by the audio for that frame down the pipe.
Send audio+video down a single pipe (cannot be ingested by FFmpeg but may be useful for custom software):
Note: This sends 1 audio frame followed by the video for that frame down the pipe.
Change size of internal buffer to 4.5 seconds of data (buffer size=4.5*fps):
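(the BUFFER_SIZE name below is a placeholder for the actual buffer-size parameter, which isn't shown in this excerpt; at 60 fps, 4.5 seconds is 270 frames)
ADD 1 PIPE 1 VIDEO_PIPE \\\\.\\pipe\\CasparCGVideo BUFFER_SIZE 270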
Example casparcg.config consumer syntax is included in the casparcg.config file of this PR.
Future enhancements
If this pull request is accepted, I hope to slowly add support for Linux named pipes and also a named pipe producer. I didn't want to put in the extra work for that initially though, as I wanted to gauge support for the general concept first (also I don't have a Linux OS set up at the moment!).
I'm also happy to create documentation for using this new consumer. That said, given this is obviously not currently part of 2.3.0 LTS, I'm a little wary of polluting the current documentation with information for development versions. I see from your help wanted page that you're looking to move documentation to gitbook. I'm more familiar with sphinx+readthedocs, but I'd be interested in helping with the documentation migration to gitbook if that solves the problem of polluting docs with information on development features!
Please let me know what you think of this PR. It's my first time working on the CasparCG source code and the first time I've done any meaningful work with C++ in the last decade, so I might be a little rusty!
I hope the file headers are also OK. I couldn't find any files with a different copyright owner so I assume you want to have copyright over contributions? I'm happy to accept transfer of copyright to you on acceptance of this PR.