Video Export Settings Explained

SalvageSV

Hopes he doesn't crash this month
Based on my professional experience in video over the last 10 years, I've noticed that many people don't fully understand video export settings and how they affect the final output quality of a video. So, I thought I'd demystify these settings for those who may still be learning.

In this example I'm using Adobe Premiere Pro and a video set to export as an MP4 file with the H.264 codec. This is my recommended format for most applications. The values shown below should be applicable across multiple editing platforms.

When exporting in Premiere, this is the window you get:

video-output-settings.png


I'll just go down the list ...

Match Source - This option automatically adjusts the basic settings to match the values selected on your timeline. Note that this doesn't mean the output will match your source footage ... only the sequence settings. You can pick and choose which settings to match to source with the check boxes to the right of each setting.

* To make sure your sequence matches your footage you can select the option to "create sequence from clip" when beginning your edit. This also prevents the need to transcode footage on-the-fly when editing.

Width and height - Self-explanatory, really. If you want to make the video dimensions smaller or larger, adjust these numbers. The link icon to the right locks the aspect ratio so you can easily drop something like a 1080 render to 960 or 720 if needed.

Frame rate - This is the number of frames in a second of video. Some common values are 60, 30 and 24. Setting this lower than your sequence or source footage lowers output file sizes and speeds up export, at the cost of some motion smoothness. Setting it higher than your source forces frame interpolation, which can soften footage and slow export times as the encoder processes the extra frames.

Field order - You can select progressive or interlaced. This is what the "p" and "i" at the end of 1080p or similar mean. Progressive scanning means that all 1,080 horizontal lines of the image are drawn at once for each frame of video. Interlaced means that only half are displayed at once, then the other half, in the order of odd lines, then even.

So, if you render 1080 progressive at 60 FPS, you get 60 full frames of video per second. If you render 1080 interlaced at 60 FPS, you get 120 half-frame fields per second.
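To put numbers on that, here's a tiny sketch (plain Python, purely for illustration) of how frame rate and scan type relate to the number of pictures delivered per second:

```python
def pictures_per_second(fps, interlaced):
    # Progressive video delivers fps complete frames per second;
    # interlaced video splits each frame into two half-height fields.
    return fps * 2 if interlaced else fps

print(pictures_per_second(60, interlaced=False))  # 60 full frames
print(pictures_per_second(60, interlaced=True))   # 120 half-frame fields
```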

Interlaced scanning has its uses, but the majority of digital video distribution is presented in progressive scan, such as on your computer or cell phone. If you render progressive footage as interlaced, or vice versa, you can get "jaggies" where you can see the alternating lines in fast-moving images.

interlace-01.jpg


Aspect - This changes the actual shape of the pixels rendered. Some formats, such as DVD or DV video, don't use square pixels; they are slightly rectangular. The image is stretched one way or the other so it displays properly on those formats. You would change this value if your footage were being displayed in those formats. For our standard YouTube or other PC-based distribution, square pixels are what you want.

TV Standard - You can choose NTSC or PAL. In the US, Japan and a few other countries, NTSC is the standard for SD TV broadcast. In Europe and many other areas, it's PAL. These are just broadcast standards, where NTSC is usually 480i at 30fps (60 Hz) and PAL is 576i at 25fps (50 Hz). Since you aren't broadcasting, this doesn't really come into play for what we normally do.

Profile - This lets you select from the different available "profiles" for the selected export codec. With H.264 in this example, you can select from Baseline, Main or High. Each profile is tweaked for an intended purpose and can change the scale and availability of different options when exporting. You can look up the profile definitions online.

In this example, Baseline is what would be used for quick compression for live streams, etc. Main is the standard profile that covers many different uses, and High would be used for HD broadcast footage. For what we do, Main or High would suffice.

Level - The level setting is similar to the profile, and they work hand in hand. The level is a codec-specific setting which defines constraints like the maximum frame size, maximum bitrate, maximum decoding speed, etc.

Generally, selecting the highest profile and level will increase encoding times but not show a drastic improvement in video quality. The best option here is usually to match your source data, as you'll never really have a need to render in higher quality than your source because the extra data isn't there to take advantage of.

Render at maximum depth - This option forces 32-bit color depth output. You would check this if you are using advanced compositing with 32-bit effects or if you are transcoding footage to be used for advanced compositing or similar. For a standard video output, you don't really need 32-bit color as it's not highly supported at the consumer level.

Bitrate Encoding - To me, this is one of the most important settings you can tweak. The default is usually single-pass CBR, or constant bitrate. This means that you select a single bitrate (the amount of data per second of video) for your entire video, and the video is analyzed only once before output.

My recommendation is to always use 2-pass VBR (variable bitrate) for the best quality. This means the video is analyzed twice to ensure the best compression is used, and the bitrate can dynamically adjust as needed for transitions and more data-heavy sections of video. This does effectively double encoding times, though.

Target bitrate - This is the average bitrate of your video when compressed. For the absolute best quality, this should be no lower than your source footage bitrate. As an example, a standard GoPro will record at 25 Mbps. A GoPro in Protune mode will record at 45 to 60 Mbps depending on the model.

I don't actually recommend setting this to the max you'd need, as the extra few bits just bloat your file size and give little in terms of quality. I usually set this to 20 or 22 Mbps for my Protune footage.
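A quick way to sanity-check a target bitrate is to estimate the file size it produces. This is a back-of-the-envelope sketch; real files add a bit of container and audio overhead on top:

```python
def estimated_size_mb(target_mbps, duration_seconds):
    # bitrate (megabits/s) * duration (s) = megabits; divide by 8 for megabytes
    return target_mbps * duration_seconds / 8

# A 10-minute video at my usual 20 Mbps target:
print(estimated_size_mb(20, 10 * 60))  # 1500.0 MB, i.e. about 1.5 GB
```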

One thing to remember about bitrate settings is that the more data you try to process, the higher the required bitrate. A 720 video at 30 fps doesn't require the same bitrate as a 4K video at 60 fps, so you can run a much lower bitrate for the 720 video and get the same relative "quality" in terms of compression.
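One way to compare bitrates across resolutions and frame rates is bits per pixel, a rough measure of how hard the encoder has to squeeze each frame. This is my rule-of-thumb sketch, not an official formula, but it shows why 720p30 needs far less bitrate than 4K60 for a similar level of compression:

```python
def bits_per_pixel(bitrate_mbps, width, height, fps):
    # Bits available per pixel per frame: higher = lighter compression.
    return bitrate_mbps * 1_000_000 / (width * height * fps)

bpp_4k60 = bits_per_pixel(45, 3840, 2160, 60)

# Bitrate that gives 720p30 the same bits per pixel as 4K60 at 45 Mbps:
equal_quality = bpp_4k60 * 1280 * 720 * 30 / 1_000_000
print(round(equal_quality, 1))  # 2.5 (Mbps)
```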

Maximum bitrate - This is the highest bitrate that will be used for a VBR encode. When the footage is evaluated, the encoder will use up to this value to give the best possible quality. You need higher bitrates for things like fast motion, transitions, etc. Increasing this value allows the encoder to ramp up the bitrate only when needed, retaining quality where it matters while keeping the output file size in check.
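Conceptually, a 2-pass VBR encode with a target and a maximum works something like this toy sketch. It's heavily simplified and the complexity model is made up for illustration (real encoders also redistribute bits that get clipped by the cap):

```python
def allocate_vbr(complexity_per_second, target_mbps, max_mbps):
    # Pass 1 measured a "complexity" score for each second of video.
    # Pass 2 hands out bits in proportion to complexity, so busy
    # sections get more, but never exceeds the maximum bitrate.
    total = sum(complexity_per_second)
    budget = target_mbps * len(complexity_per_second)
    raw = [budget * c / total for c in complexity_per_second]
    return [min(r, max_mbps) for r in raw]

# Two quiet seconds, then a fast transition:
print(allocate_vbr([1, 1, 4], target_mbps=10, max_mbps=15))  # [5.0, 5.0, 15.0]
```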

Keyframe distance - This value defines the spacing between keyframes used for compression. A keyframe is a full frame of video, encoded without reference to any other frame, that the frames in between are compressed against. The encoder compares frames to the keyframes to basically "guess" at the motion happening and decide where it can lose data without sacrificing detail.

For the best quality you want a lower number here. In general, I like to use a keyframe every second for 24p or 30p framerates. For footage that doesn't have a lot of motion, like an interview, a keyframe every 2 seconds is usually fine. If you have a lot of motion, consider having a keyframe every half second.

The more keyframes you have (lower number here), the larger your file will be.
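If your encoder asks for the keyframe distance in frames rather than seconds, the conversion is just frame rate times the interval you want. A trivial helper, shown only to make the rule concrete:

```python
def keyframe_distance_frames(fps, seconds_between_keyframes):
    # "A keyframe every second" at 30 fps means a distance of 30 frames.
    return round(fps * seconds_between_keyframes)

print(keyframe_distance_frames(30, 1))    # 30  (my usual for 30p)
print(keyframe_distance_frames(24, 2))    # 48  (low-motion interview at 24p)
print(keyframe_distance_frames(60, 0.5))  # 30  (fast motion at 60p)
```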

Use Maximum Render Quality - This setting increases quality when scaling footage, for instance rendering 1080 footage at 720. The encoder takes more time to interpolate the footage and gives a higher-quality output when scaled. If you are rendering at native resolution, this makes no difference in quality.

Use Previews - If you have rendered previews of your timeline while editing, checking this box uses those previews for the export and can speed up render times.

Time interpolation - This setting changes how the encoder interpolates (guesses at changes) when converting between frame rates. So, if your footage is 30fps and your timeline is 60fps, those missing 30 frames per second have to come from somewhere. This setting decides how they will be created.
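As a rough illustration of one interpolation method (frame blending), the in-between frame is just a weighted average of its neighbors. This is a toy sketch on grayscale pixel values, not Premiere's actual implementation:

```python
def blend_frames(frame_a, frame_b, t):
    # t = 0 gives frame_a, t = 1 gives frame_b; t = 0.5 is the midpoint
    # frame you'd insert when doubling 30fps footage to 60fps.
    return [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

# Two tiny 4-pixel "frames" of grayscale values:
print(blend_frames([0, 100, 200, 50], [100, 100, 0, 50], 0.5))  # [50, 100, 100, 50]
```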

And that's it. Sorry for the wall of text. I hope this helps explain some of the advanced settings and please let me know if you have questions. I'll try my best to answer or provide direction.
 
Dude, great post! I never used to know what any of that stuff means. Now I can actually make better use of these features and probably see a bit of an increase in quality
 
Dude, great post! I never used to know what any of that stuff means. Now I can actually make better use of these features and probably see a bit of an increase in quality
Happy to help! Let me know how it works out.
 

Great post man! Thank you so much for sharing this info. :D
 
I only just registered on this forum to thank you very much for your great explanation of how to export video in Adobe Premiere Pro; it's a great help to me.
 
Good post, but for YouTube I would recommend setting the keyframe distance to FPS/2. For example, for an FPS of 60, set the keyframe distance to 30.
Also, YouTube will mess with the video file a lot if the bitrate is too high. For example, a bitrate of 12 Mbps for 1080p @ 60fps seems to work better than a higher one. I.e. set a nice high bitrate and yes, it looks good locally, but once uploaded, YouTube re-encodes it to buggery and destroys your keyframe settings from above, too.

https://support.google.com/youtube/answer/1722171?hl=en-GB
 
Hi, this is great info for everyone. Nevertheless, I have to ask something. I am using 1080i/60fps, CBR @ 14 Mbps for exporting my videos that I am going to upload to YouTube. When they are uploaded, as far as I know, YouTube drops the video bitrate to around 10 Mbps for HD vids and the quality sometimes really suffers. Is there any way around this, or maybe other settings more appropriate for online uploading?

PS. I have used VBR in the past (min 10 Mbps) but the result remains the same.
 
Good post, but for YouTube I would recommend setting the keyframe distance to FPS/2. For example, for an FPS of 60, set the keyframe distance to 30.
Also, YouTube will mess with the video file a lot if the bitrate is too high. For example, a bitrate of 12 Mbps for 1080p @ 60fps seems to work better than a higher one. I.e. set a nice high bitrate and yes, it looks good locally, but once uploaded, YouTube re-encodes it to buggery and destroys your keyframe settings from above, too.

Was going to add this.

YouTube will compress anything that exceeds their bitrate limits, and their compression is horrid.

I almost think of it as punishment for making them store a file that's bigger than they like.

(Also just found out that quoting a URL won't allow me to post )
 
