I didn’t think this would be so difficult.
All I want to do is play live video on my netbook (from /dev/video1:input=1:norm=NTSC) and record it at the same time. Without introducing lag.
mplayer plays the video fine (no noticeable lag).
mencoder records it fine.
The mplayer FAQ says you can do it this way:
mencoder tv:// -tv driver=v4l:width=324:height=248:outfmt=rgb24:device=/dev/video0:adevice=hw.1,0 -oac mp3lame -lameopts cbr:br=128 -flip -ovc lavc -lavcopts threads=2 -o >( tee filename.avi | mplayer -)
But that doesn’t work.
You can’t record and play at the same time because there is only one /dev/video1 device, and once either mencoder or mplayer is using it, the device is “busy” to any other program that wants to read the video stream.
I spent lots of time with mplayer, mencoder, ffmpeg, avconv, and vlc; as far as I can tell none of them can do it, directly or indirectly. There are ways that work if you don't mind 200 or 300 ms of extra latency over mplayer alone. But I'm doing an FPV teleoperation thing, and that's too much latency for remote control.
I found a way that sort of works. Here’s a bash script (works in Linux Mint 15, which is like Ubuntu):
#!/bin/bash
# play the live video fullscreen in the background
mplayer tv:// -tv device=/dev/video1:input=1:norm=NTSC -fs &
# timestamped output file name; an optional first argument becomes a label suffix
outfile=$(date +%Y-%m-%d-%H%M%S)$1.mp4
# grab the 800x600 video back off the X display and encode it
avconv -f x11grab -s 800x600 -i :0.0+112,0 -b 10M -vcodec mpeg4 $outfile
This works by running mplayer to send the live video to the screen (full screen), then running avconv at the same time to grab the video back off the display (-f x11grab) and encode it. It doesn’t add latency, but grabbing video off the display is slow – I end up with around 10 fps instead of 30.
There must be some straightforward way to “tee” /dev/video1 into two virtual devices, so both mplayer and mencoder can read them at the same time (without one of them complaining that the device is “busy”). But I haven’t found anybody who knows how. I even asked on Stack Overflow and have exactly zero responses after a day.
(If you know how, please post a comment!)
Addendum for Linux newbies (like me):
After you put the script in file “video.sh”, you have to:
chmod +x video.sh # to make it executable (just the first time), then
./video.sh # to run the script (each time you want to run it)
You’ll probably want to tweak the script, so you should know that I’m using a KWorld USB2800D USB video capture device, which puts the composite video on input=1 (the default input=0 is for S-Video) and requires you to do norm=NTSC or it’ll assume the source is PAL.
-fs makes mplayer show the video fullscreen. Since I'm doing this on my Samsung N130 netbook with a 1024×600 screen, the 4:3 video occupies the 800×600 pixels in the middle of the screen (starting at (1024-800)/2 = 112 pixels from the left).
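If your screen is a different width, the x11grab offset is just (screen width - 800) / 2; for example, in the script above you could compute it instead of hard-coding 112:
xoff=$(( (1024 - 800) / 2 ))   # = 112 here; substitute your own screen width
avconv -f x11grab -s 800x600 -i :0.0+${xoff},0 -b 10M -vcodec mpeg4 $outfile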
Also, many thanks to Compn on the #mplayer IRC for trying really hard to help with this.
Update 2013-11-02:
I haven’t given up on this, so I’ll use this space to record progress (or non-progress).
I started a Stack Exchange thread on this.
On IRC I was told that VLC can do this. I got as far as getting it to display the video at 720×96 (yes ninety-six) resolution, with a lot of lag (the source is VGA, 640×480). Googling about it, it seems the resolution problem is probably fixable with VLC, but the lag isn’t. So I gave up on that.
The most promising approaches at the moment seem to be:
- This page about ffmpeg, which gives ways to create multiple outputs from a single video input device – exactly what I need. But I haven't found any way to get ffmpeg to read from input=1:norm=NTSC (as mplayer can).
- This thread on Stack Exchange seems to describe ways to "tee" the video from one device into 2 (or more) other devices, one way using V4L2VD, the other using v4l2loopback. I haven't figured out how to get either one working (a rough sketch of the v4l2loopback idea is below).
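For what it's worth, my understanding is that the v4l2loopback route would look roughly like the lines below. This is untested; the loopback device numbers are just examples of what the module might create, and it still leaves the input=1:norm=NTSC problem open:
sudo modprobe v4l2loopback devices=2   # should create two loopback devices, e.g. /dev/video2 and /dev/video3
ffmpeg -f v4l2 -i /dev/video1 \
  -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video2 \
  -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video3 &
mplayer tv:// -tv device=/dev/video2 -fs &   # play one copy live
mencoder tv:// -tv device=/dev/video3 -nosound -ovc lavc -o capture.avi   # record the other copy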
Update 2013-11-03:
Pygame has the ability to read and display video streams, but ‘nrp’ (one of the developers of pygame) told me on IRC that he never implemented reading from anything other than the default input 0 (zero). He suggested that the info needed to update the pygame code to do that is here, and the source code is here. I’m not really up for doing that myself, but maybe somebody else will (I posted this bug on it, per nrp’s suggestion).
Another idea I had was to just buy a different USB video capture device, one that works with the default input 0 and norm. So far I haven't found one that does.
But I’ve got two new leads:
- This thread seems to suggest that the v4l2 device can be pre-configured for the needed input and norm.
- There seems to be more than one Python extension that supports v4l2. Maybe one of them can do it…
Update 2013-11-03 #2:
I think I made a sort of breakthrough.
v4l2-ctl can be used to control the video4linux2 driver after the app that reads the video stream has started. So even if the app mis-configures /dev/video1, once the app is running you can configure it properly.
The magic word for me is:
v4l2-ctl -d 1 -i 1 -s ntsc
That sets /dev/video1 (-d 1) to input 1 (-i 1) and NTSC (-s ntsc).
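So the whole sequence looks roughly like this (the sleep is just an arbitrary pause to let mplayer open the device before v4l2-ctl pokes it, and I believe --get-input / --get-standard will confirm the change, but check your version of v4l2-ctl):
mplayer tv:// -tv device=/dev/video1 -fs &
sleep 2                                    # give mplayer time to open /dev/video1
v4l2-ctl -d 1 -i 1 -s ntsc                 # then force input 1 and NTSC
v4l2-ctl -d 1 --get-input --get-standard   # optional: check that the settings took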
Not only that, but I (finally) found out how to get avconv (and maybe ffmpeg too) to configure video4linux2 correctly.
For avconv, “-channel n” sets the input channel, and “-standard NTSC” sets NTSC mode. I think the equivalents in ffmpeg are “-vc n” and “-tvstd ntsc” respectively, but I haven’t tried those yet.
But this works:
avplay -f video4linux2 -standard NTSC -channel 1 /dev/video1
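Presumably the matching avconv command for recording (untested, reusing the same encoder settings as my script above) would be something like:
avconv -f video4linux2 -standard NTSC -channel 1 -i /dev/video1 -vcodec mpeg4 -b 10M capture.mp4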
Now I can try to ‘tee’ the output from /dev/video1….
Update 2014-06-01:
I gave up on this, but eventually got it working in Python on Windows (see this post); maybe that method will also work in Linux (I haven't tried it).
For what it’s worth, this guy claims he has it working this way:
vlc -vvv v4l2:///dev/video1:input=1:norm=PAL-I:width=720:height=576 --input-slave=alsa://plughw:1,0 --v4l2-standard=PAL_I --sout '#duplicate{dst=display,dst="transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:std{access=file,mux=mp4,dst=test.mp4}"}'
I’m doubtful (esp. re latency), but you could try it.
I found an insane but workable solution.
(1) A lot of places (e.g. http://superuser.com/questions/440761/v4l-capture-and-watch-at-the-same-time ) suggested the unbelievably terrible solution of using x11grab to capture the preview window. What are we, barbarians?
(2) A thread I can no longer find ( 🙁 ) suggested using ffmpeg's ability to produce multiple simultaneous output streams, with one of them encoding to udp://. I couldn't find a way of making this work that didn't have hideous latency.
What I found, however, was that I could use (2) to encode to a fifo! Here are the commands that worked for me:
mkfifo /tmp/livevideo.fifo &&
mplayer -cache 2048 -fps 300 -really-quiet /tmp/livevideo.fifo < /dev/null &
ffmpeg \
-f v4l2 -standard NTSC -channel 4 -i /dev/video0 \
… out.mkv \
-map 0 -f avi -vcodec rawvideo \
-y /tmp/livevideo.fifo
This creates out.mkv at the same time as it plays, and has low enough latency that it doesn't bother me. This works fine for arbitrarily complicated ffmpeg lines; I actually have one video and three audio inputs, but only care about the video for the fifo. Presumably using multiple -map options would be sufficient if audio is necessary in the preview too.
I set -fps to 300 on the mplayer line just so that if there is any latency, mplayer will catch up. It actually plays at the live fps since, of course, it can only play as many frames as it has received. I'm not sure what one could do for a similar audio issue. Typically I just do some pulseaudio mapping with loopback audio to get audio to my headphones as well as record it.
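That pulseaudio mapping is roughly the line below; the source name is just a placeholder for whatever your capture device shows up as (pactl list short sources will tell you):
pactl load-module module-loopback source=<your-capture-source> latency_msec=1   # placeholder source name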
Posting this comment here since this is a post I found while trying to solve the same problem 🙂
Wow – thanks for the good info! I’ll give it a try.
I also made a little more progress since my last update of this post. I discovered that tvtime is a better real-time player than mplayer, ffplay, or avplay – it has lower latency and much better handling of interlaced video.
More importantly, I got ffmpeg to "tee" output to a player and an encoder at the same time. Unfortunately, it seems ffmpeg adds ~100 ms of latency even when just "copying" data without encoding. So now I'm looking for a solution without ffmpeg.
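The general shape of that kind of command, for anyone who wants to try it, is one v4l2 input with two outputs: one going to a file, one piped into a player. Roughly (not necessarily exactly what I ran):
ffmpeg -f v4l2 -standard NTSC -channel 1 -i /dev/video1 \
  -vcodec mpeg4 capture.mp4 \
  -vcodec rawvideo -f nut - | mplayer -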
I agree – grabbing the preview window is a barbaric solution (yet, so far I don’t have a better solution that doesn’t add latency).
tvtime is a nonstarter for me since I need several audio streams at once. I basically /need/ ffmpeg (or, equivalently, avconv) since nothing else seems to be able to keep a video stream and three audio streams all in sync.
mplayer has fantastic interlace handling, btw; you just have to ask for it. -vf pp=fd is very fast, -vf kerndeint is fast enough for realtime and a bit better than pp=fd, or for very good deinterlacing (but slower than 1:1) you can use mcdeint, e.g. -vf yadif=1:1,mcdeint=0:1:10,framestep=2 , with mcdeint=1,2,3 for even better deinterlacing.
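For example, combined with the capture options from this post, that would be something like:
mplayer tv:// -tv device=/dev/video1:input=1:norm=NTSC -vf pp=fd -fs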
As for latency, yeah, any program that can actually preview and record at the same time is going to do better than some wonky ffmpeg->fifo->mplayer nonsense 🙂