
ffmpeg - audio and video length difference issues

Category: Linux, multimedia — Published by tengo on October 29, 2009 at 5:43 am

When you use ffmpeg to, for example, extract audio and video as separate streams, or when you import a video into Blender's fabulous NLE, the Sequence Editor, you might encounter a difference in length between the audio and the video part. Call this audio video offset, audio video mismatch or simply audio video runtime/length difference (this sentence is here to help users find this post...).

Having a look at ffmpeg's command-line output when identifying (-i) or converting such a video, you are likely to see something like: "Seems stream 1 codec frame rate differs from container frame rate: 59.99 (11998/200) -> 30.00 (30/1)". This is a good hint that the video is somehow screwed up on the encoder side. And this is not uncommon; I saw it on .mp4 videos downloaded from YouTube, etc. Also, it seems as if some containers are prone to "forgetting" the fps rates of their elements...
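
For reference, you can provoke exactly this output without converting anything by letting ffmpeg simply identify the file; input.mp4 below is only a placeholder name:

ffmpeg -i input.mp4

ffmpeg then prints the container and stream info (including the frame rate warning, if there is one) and exits complaining about a missing output file, which is fine for a quick check.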

A look at ffmpeg's docs tells us that there is (or at least might be) a cure for this:

`-vsync parameter'
Video sync method. Video will be stretched/squeezed to match the timestamps, it is done by duplicating and dropping frames. With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one.
`-async samples_per_second'
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps, the parameter is the maximum samples per second by which the audio is changed. -async 1 is a special case where only the start of the audio stream is corrected without any later correction
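
So, in theory, if you do everything in one ffmpeg run, a command along these lines should let ffmpeg correct the audio against the video timestamps (just a sketch with placeholder filenames; -async 1 only corrects the start of the audio stream, a higher value allows ongoing correction):

ffmpeg -i input.mp4 -async 1 output.avi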

A scenario where these options are useless is, let's say, where you extract the audio in one step and the video in another, so ffmpeg can't adjust the two against each other. In this case, I found that relying on the extracted audio length is a good solution: ffmpeg seldom (at least it never happened to me) speeds up or slows down audio - it is always played at the right rate, whereas the video often receives a speedup/slowdown. So you need to find a way to adjust the video's length to the audio length as the reference. In my case I did this by adding a speed control effect in Blender's Sequence Editor and setting the length of the video strip to be equal to the length of the audio strip.
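
Just so it's clear what I mean by extracting in two steps, the commands looked roughly like this (the filenames and the pcm_s16le/WAV choice are only my example, use whatever Blender accepts for you):

ffmpeg -i input.mp4 -vn -acodec pcm_s16le audio.wav
ffmpeg -i input.mp4 -an -vcodec copy video.mp4

Since the two runs know nothing about each other, -async/-vsync can't help here; the length correction has to happen later, in Blender.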

A look around
(other users wrangling with this problem and their ideas/solutions):

Update
Today I finally needed to solve this issue and entered another round of tackling this problem. Doing video editing in Blender, I just could not get rid of these audio-video offsets, and strips differed in length. A fresh search around the web put me on another track: fps (framerate/frames per second)!

Some threads (here and here) described exactly what I was experiencing. So I used mplayer to tell me the specs of a few videos: videos where I ran into the problem and videos which Blender imported okay. And voila: all videos that resulted in out-of-sync audio/video had NTSC framerates of 29.xxx. As my Blender project had an fps base of 25 fps, this naturally resulted in offset video. The audio never differs in length, as an audio track doesn't know about framerates.
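
If you want to check the framerate from the command line instead of opening the files one by one, mplayer's identify mode works well; something like this should print an ID_VIDEO_FPS line (input.mp4 again being a placeholder):

mplayer -identify -frames 0 -vo null -ao null input.mp4 2>/dev/null | grep ID_VIDEO_FPS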

So solutions are:

  • Setting Blender's playback rate to 30 fps with a timebase of 1.001, which gives the NTSC rate of 29.97 fps (30/1.001).
  • Or, outside of Blender, change the fps rate with, for example, ffmpeg. A suitable command would be: ffmpeg -i <inputvideo> -r 25 -sameq <outputvideo>. Sadly, you can't use the -acodec copy/-vcodec copy switches when changing the fps; you need to re-encode (see the sketch below)...
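
For completeness, a re-encode that forces 25 fps and also lets ffmpeg correct the audio start could look roughly like this - only a sketch, as codec and quality options will depend on your material:

ffmpeg -i input.mp4 -r 25 -async 1 -sameq output.avi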