If you are using Ubuntu 18.04 or 20.04, you might want to use ffmpeg to cut a video that is too long when you are only interested in part of it, whether for a presentation, to send to a friend, or to post on Reddit. ffmpeg is a versatile tool that can do virtually anything with video files in any codec, but that versatility also makes it hard to master, even for small tasks such as cutting videos.
There are a lot of StackOverflow questions and answers about how to cut videos with ffmpeg. However, after trying a few of them, you may find that the output has a black screen at the beginning, or that you can seek past the point where the cut should end. You might also notice the length is not quite right: the video should be shorter, but the cut ends up with an undesired length.
In this FAQ, I will answer the two questions raised above in an easy but thorough way: how to get the correct length in the output video, and how to avoid a black screen at the beginning so you have a better watching experience. Long story short, you can try the following two methods:
How to get the correct length in the output?
You can put the -ss (start time) and -to (end time) options after -i. -i specifies the input video file, and the position of the other options relative to it actually matters: options placed after the input are applied to the output, so -to is treated as an absolute end timestamp and the cut gets the length you expect.
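A minimal sketch of this option placement, with hypothetical filenames in.mp4 and out.mp4 (the guard simply skips the command on machines without ffmpeg or the input file):

```shell
# Hypothetical filenames; skip gracefully when ffmpeg or in.mp4 is missing.
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ]; then
  # -ss and -to placed after -i act on the output: -to is an absolute
  # end timestamp, so this cut runs from second 8 to second 37.
  ffmpeg -i in.mp4 -ss 00:00:08 -to 00:00:37 -c copy out.mp4
fi
```

Placing -ss before -i instead makes ffmpeg seek in the input, which is faster but can change how -to is interpreted, which is one common source of the wrong-length problem.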
How to avoid the black screen at the beginning?
You can use the -avoid_negative_ts make_zero output option when cutting in copy mode. Here we pass make_zero to -avoid_negative_ts, which shifts the timestamps of the output video so that it starts at 0. The deeper reason is that most video codecs use temporal compression, so the frame at the time you specified may depend on frames before or after that timepoint to be decoded correctly. When you use ffmpeg to cut videos in copy mode, ffmpeg keeps all the frames before and after the cut point that are needed to decode the file correctly, and without the timestamp shift a player may render that lead-in as a black screen.
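If a black lead-in still appears in copy mode, a common alternative (not mentioned above, so treat it as an assumption) is to re-encode instead of stream-copying, which lets ffmpeg cut on exact frames at the cost of speed and some quality; filenames and codecs here are hypothetical:

```shell
# Hypothetical filenames; skip gracefully when ffmpeg or in.mp4 is missing.
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ]; then
  # Re-encoding with libx264/aac decodes the stream, so the cut starts
  # on an exact frame instead of the nearest keyframe.
  ffmpeg -i in.mp4 -ss 00:00:08 -to 00:00:37 -c:v libx264 -c:a aac out.mp4
fi
```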
Is there any example of using ffmpeg to cut a video?
One example would be ffmpeg -i "in.mp4" -ss 00:00:08 -to 00:00:37 -c copy -avoid_negative_ts make_zero "out.mp4". Here we use ffmpeg to cut the video in.mp4 from second 8 to second 37 in copy mode, with the -avoid_negative_ts make_zero parameter.
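To confirm the cut actually has the expected length, ffprobe (which ships with ffmpeg) can report the container duration; a sketch, assuming the out.mp4 produced by the command above:

```shell
# Skip gracefully when ffprobe or out.mp4 is missing.
if command -v ffprobe >/dev/null 2>&1 && [ -f out.mp4 ]; then
  # Prints only the duration in seconds (an 8s-to-37s cut should be ~29).
  ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 out.mp4
fi
```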
BTW, what is FFmpeg?
FFmpeg is free, open-source software composed of a large set of libraries and programs for handling video, audio, and other multimedia files and streams. At its heart is the ffmpeg command-line tool, which processes video and audio files and is commonly used for transcoding between formats, simple editing such as trimming and concatenation (in our case, cutting videos), picture scaling, and post-production.
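As one concrete instance of the simple editing mentioned above, the concat demuxer can join cuts back together without re-encoding; a sketch with hypothetical part files part1.mp4 and part2.mp4:

```shell
# Hypothetical filenames; skip gracefully when prerequisites are missing.
if command -v ffmpeg >/dev/null 2>&1 && [ -f part1.mp4 ] && [ -f part2.mp4 ]; then
  # The concat demuxer reads a list file; paths are relative to that list.
  printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > list.txt
  ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
fi
```

Joining in copy mode only works cleanly when the parts share the same codecs and parameters, which is the case for cuts taken from one source video.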
FFmpeg is part of the infrastructure of hundreds of other software projects, and its libraries are a core component of media players such as VLC; it has also been used in core video processing at YouTube and for the iTunes video inventory.
The project name is inspired by the MPEG video standards group, combined with “FF” for “fast forward.” The logo uses a zigzag design that illustrates how MPEG video codecs perform entropy encoding.
What does the ffmpeg command do?
The ffmpeg command reads from an arbitrary number of inputs (which can be regular files, pipes, network streams, grabbing devices, etc.), specified by the -i option, and writes to an arbitrary number of outputs, specified by a plain output URL. Anything on the command line that cannot be interpreted as an option is considered an output URL.
In general, an input or output URL can contain any number of streams of different types (video / audio / subtitle / attachment / data). The number and/or types of streams allowed may be limited by the container format. Selecting which streams from which inputs go into which output is done either automatically or with the -map option.
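A sketch of explicit stream selection with -map, assuming hypothetical inputs video.mp4 and audio.m4a: take the video stream from the first input and the audio stream from the second.

```shell
# Hypothetical filenames; skip gracefully when prerequisites are missing.
if command -v ffmpeg >/dev/null 2>&1 && [ -f video.mp4 ] && [ -f audio.m4a ]; then
  # 0:v = video stream(s) of input 0; 1:a = audio stream(s) of input 1.
  ffmpeg -i video.mp4 -i audio.m4a -map 0:v -map 1:a -c copy muxed.mp4
fi
```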
Under the hood, ffmpeg calls the libavformat library (which contains demuxers) to read input files and get packets containing encoded data. When there are multiple input files, ffmpeg tries to keep them synchronized by tracking the lowest timestamp on any active input stream.
Encoded packets are then sent to the decoder (unless streamcopy is selected for the stream). The decoder produces uncompressed frames (raw video / PCM audio / ...), which can be processed further by filtering (see the next section). After filtering, the frames are passed to the encoder, which encodes them into packets and outputs them. Those are finally passed on to the muxer, which writes them to the output file.
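The decode, filter, encode path described above is exactly what a filtering option such as -vf exercises; a sketch that scales the decoded frames before re-encoding, with hypothetical filenames:

```shell
# Hypothetical filenames; skip gracefully when ffmpeg or in.mp4 is missing.
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ]; then
  # Frames are decoded, scaled to 1280px wide (height auto, kept even),
  # re-encoded with libx264; the audio stream is copied untouched.
  ffmpeg -i in.mp4 -vf "scale=1280:-2" -c:v libx264 -c:a copy scaled.mp4
fi
```

Note that filtering forces re-encoding of the filtered stream: a stream that goes through a filter cannot use -c copy, which is why only the audio is stream-copied here.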