HDR and 4K are both features of video production that improve image quality, but they don’t work in the same way or on the same metrics.
Keep reading to learn everything you need to know about HDR and 4K video including what they are and how they’re used.
HDR vs. 4K: A Side-By-Side Comparison
| |HDR|4K|
|---|---|---|
|Function|Improves image color contrast|Improves image resolution|
|Used by|Dolby Vision, IMAX, Cinemark XD|Dolby Vision, IMAX, Cinemark XD|
|Similar Technologies|Multi-Exposure HDR|Ultra-High-Definition (UHD), DCI Digital Cinema System Specification, SMPTE UHDTV standard, ITU-R UHDTV standard, CEA Ultra HD, CinemaWide 4K, 2160p resolution|
|Storage Formats|RAW, OpenEXR, ACES, HDR10, HDR10+, Dolby Vision, HLG|Any container and codec that support 2160p footage (e.g., MP4 or MOV with HEVC)|
|Possible Resolutions|Varies|3840 × 2160, 4096 × 2160|
HDR vs. 4K: What’s the Difference?
Since HDR and 4K aren’t similar technologies, a head-to-head comparison isn’t especially useful: neither one does what the other does. Furthermore, most modern HDR cameras also record 4K video, and many 4K cameras also record HDR footage. So, realistically, you want to look for a device or viewing format that offers both HDR and 4K.
HDR: How Does it Work?
High Dynamic Range (HDR) refers to an image or video with a higher level of contrast between the darkest and lightest parts of the image, without over- or underexposing the image to achieve a darker or lighter color.
An HDR image is typically achieved using “multi-exposure HDR.” When using multi-exposure HDR, each frame is captured three times at different shutter speeds. The result is three images: one normally exposed, one bright, and one dark. The camera’s image processor then stitches the images together, creating one composite image with darker darks and brighter brights without sacrificing the mid-tones.
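The merge step above can be sketched in a few lines. This is a deliberately simplified illustration, not any camera’s actual pipeline: it weights each exposure by how “well-exposed” each pixel is (mid-range values get high weight, clipped shadows and highlights get low weight) and averages them. Real cameras also align the frames and tone-map the result.

```python
import numpy as np

def merge_exposures(dark, normal, bright):
    """Merge three exposures of the same frame (pixel values in 0.0-1.0).

    Each output pixel is a weighted average of the three inputs, with
    weights that favor mid-range values and penalize clipped ones.
    """
    stack = np.stack([dark, normal, bright])            # shape (3, H, W)
    # Gaussian weight centered on mid-grey (0.5): clipped pixels count less.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)       # normalize per pixel
    return (weights * stack).sum(axis=0)

# A highlight blown out in the bright frame still gets most of its detail
# from the better-exposed frames.
dark   = np.array([[0.10, 0.02]])
normal = np.array([[0.55, 0.30]])
bright = np.array([[0.98, 0.70]])
merged = merge_exposures(dark, normal, bright)
```

The weighting function and its width are illustrative assumptions; production exposure-fusion algorithms (e.g., Mertens-style fusion) also factor in local contrast and saturation.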
4K: How Does it Work?
Compared to how HDR video is captured, 4K is elementary: it simply refers to footage captured and played back at a resolution with a horizontal measurement of around 4,000 pixels. Most 4K cameras use the same technology as their non-4K variants; however, recording 4K typically requires a camera and SD card with a higher write speed.
Write speed is how fast the camera and SD card can interface to store image information. Since 4K cameras store more information, they’ll need a faster write speed to keep up with the processing volume.
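A quick back-of-the-envelope calculation shows why. The figures below are illustrative assumptions (30 fps, 24 bits per pixel, uncompressed), not specs for any particular camera; real cameras compress heavily, but the 4x jump in raw data from 1080p to 4K carries through either way.

```python
def uncompressed_rate_mb_s(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) video data rate in megabytes per second."""
    return width * height * fps * bits_per_pixel / 8 / 1_000_000

rate_1080p = uncompressed_rate_mb_s(1920, 1080, 30)
rate_4k    = uncompressed_rate_mb_s(3840, 2160, 30)
# 4K carries exactly four times the pixel data of 1080p at the same
# frame rate, so the card must sustain four times the write speed.
```

Even with compression (a hypothetical 100 Mbps 4K codec works out to only about 12.5 MB/s), the camera’s internal processing still has to keep up with the full sensor readout.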
As we mentioned, HDR and 4K are different technologies designed to improve image qualities in various metrics. For example, HDR improves the color quality of a video, while 4K enhances the resolution.
When shooting HDR videos, you’ll be able to see and differentiate colors more clearly. Additionally, since HDR requires shooting the same video in three different forms, you’ll have more ability to shoot in harsh lighting or dark situations; the camera will help correct areas that are over- or underexposed by combining several images with different exposures.
When shooting 4K video, the image resolution is increased. The images appear sharper and more defined when viewed. Native 4K video—a video that was initially shot in 4K, rather than being upscaled later—is commonly used in cinemas.
Displaying an image on such a large screen would typically require upscaling, resulting in the picture appearing fuzzy or blurry. Native 4K video doesn’t need upscaling and minimizes the blurriness when viewing video on a large screen.
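Why upscaling looks soft is easy to see with the simplest possible upscaler, sketched below. Nearest-neighbour upscaling just repeats each source pixel as a block; no new detail is created, which is why upscaled footage can look blocky or blurry next to native 4K. Real players use bilinear/bicubic filtering or machine-learning upscalers, but those can only estimate missing detail, not recover it.

```python
import numpy as np

def upscale_nearest(frame, factor):
    """Nearest-neighbour upscaling: each source pixel becomes a
    factor x factor block of identical pixels."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

hd = np.arange(6).reshape(2, 3)   # tiny stand-in for a lower-res frame
uhd = upscale_nearest(hd, 2)      # doubles both dimensions, adds no detail
```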
Since the improvements to the image quality come in different forms, it’s not uncommon for modern cameras to feature HDR and 4K video, especially higher-end cameras designed for professional video production.
The most prominent application of HDR and 4K video is in cinema films. There was a time when the only place you could reasonably expect to see HDR or 4K video was a cinema; the technology needed to produce these videos was not readily available to the public because producing such massive video files cost so much.
Nowadays, you can get consumer-level cameras that shoot in HDR and 4K, even if their features are a tad less robust than the cinema-level cameras. Moreover, it’s even possible to purchase cinema-standard cameras from companies like Blackmagic, which provide reasonably affordable cameras (by cinema camera standards) to the general public.
Aside from the native use of 4K and HDR video, videos can be remastered and upscaled to provide 4K and HDR versions of footage recorded before these technologies were commonplace. However, upscaling and HDR conversion typically require additional knowledge and manual intervention to ensure quality control and a faithful recreation of the original footage.
Standards and Storage Formats
HDR has multiple standards by which footage is recorded and handled, and several formats for storing and replaying it. Seven primary formats are in common use: RAW, OpenEXR, and ACES are used to store and edit HDR footage, while HDR10, HDR10+, Dolby Vision, and HLG are used to store and play it back. Beyond these, HDR footage can also be stored in any format with a high-bit-depth linear transfer function, or in formats that use logarithmic transfer functions.
These formats allow video to be recorded in multiple aspect ratios and resolutions, so any of them can hold 4K video. They are simply formats for storing HDR video; that HDR video can also be 4K if the camera and SD card are fast enough to record at 4K resolution.
RAW is a minimally processed image format: the sensor data is stored with little or no compression, which preserves maximum quality for editing. This also makes the resulting files absolutely massive, and it’s not uncommon for RAW video files to run to many gigabytes.
On the consumer level, shooting footage in RAW format is rarely necessary. The majority of consumer-use video doesn’t need that kind of quality. While it might be nice to think of a lossless video of your toddler, it’s unlikely that the average person actually has the necessary skills and programs to do anything with the footage.
Additionally, RAW footage can’t be played back directly on an output device; it’s designed to give video editors the highest-quality source material, which they then export to a playable file type. You wouldn’t even be able to play back your video without first converting it to a different file type.
OpenEXR is an open-source file format with various tools for capturing, editing, and replaying high-dynamic range video. The most prominent feature that draws people to the EXR format is its support of multiple channels with different pixel sizes.
Industrial Light & Magic began developing EXR in 1999 and released it publicly as open source in 2003. It features a multi-resolution, arbitrary-channel format, making it exceptionally powerful for video compositing, which may involve stitching together footage with different resolutions and channel types.
It also has an open-source library of tools, APIs, and documentation, making the switch to EXR easy since you don’t have to pay for any of the materials. The library is distributed as C++ source and can be compiled on Windows, macOS, and Linux, making it accessible to everyone.
Academy Color Encoding System
“Academy Color Encoding System,” or ACES, is a color encoding system developed by the Academy of Motion Picture Arts & Sciences. It aims to create a seamless workflow where high-quality cinema footage can be edited together with accurate colors, regardless of the source.
Since it debuted in December 2014, ACES has been implemented by several upper-echelon vendors and used in several major motion pictures. In addition, ACES received a Primetime Engineering Emmy Award in 2012, when it was still in its beta and development stage.
ACES uses several concurrent systems to create a perfect color-accurate workflow space. This workspace includes the following components:
- Academy Color Encoding Specification (ACES): This defines the ACES color workspace, allowing for half-float high-precision encoding in scene linear light.
- Input Transform: This takes video content recorded in a non-ACES color space and transforms it into the ACES color space. Several types of Input Transforms within the ACES specifications allow video from almost any source to be brought into ACES. Before ACES 1.0’s release, this was referred to as an Input Device Transform (IDT).
- Look Modification Transform (LMT): This is a specific change in a look that is systematically applied to the footage as part of the ACES viewing pipeline.
- Output Transform: This encodes the video from the input color encoding to a single, continuous color profile that can be read and played by playback devices of your choice. In more readable terms, this is the final render function for ACES. It contains two subsections: the Reference Rendering Transform (RRT), which changes the color profile to a uniform ACES color profile, and the Output Device Transform (ODT), which outputs the video to a playable video file.
- Academy Printing Density (APD): This is a reference that calibrates film scanners and film recorders.
- Academy Density Exchange (ADX): This is an encoder that helps the program capture information from film scanners.
- ACES Color Space SMPTE Standard 2065-1 (ACES2065-1): This is the primary scene-referred color space used by ACES (scene-referred means pixel values are proportional to the light in the original scene, rather than to what a particular display outputs).
- ACEScc (ACES Color Correction Space): This is the color space typically used by video editors and compositors when correcting the color of videos to be uniform.
- ACEScct (ACES Color Correction Space with Toe): This is a color correction space that allows for use with a recreation of toe behavior seen in Cineon files.
- ACEScg (ACES Computer Graphics Space): This is a linear color space designed for rendering computer-generated graphics.
- ACESproxy (ACES Proxy Color Space): This integer-friendly space allows ACES data to travel over transports that lack floating-point support, such as SDI links, monitors, and general video infrastructure.
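To make the grading spaces above concrete, here is a small sketch of the ACEScc log encoding (as published in the ACES S-2014-003 specification), which maps scene-linear light into a log space that behaves predictably under color-correction controls. Treat the exact constants as quoted-from-spec rather than derived here.

```python
import math

def lin_to_acescc(lin):
    """Scene-linear value -> ACEScc log value (per ACES S-2014-003)."""
    if lin <= 0:
        # Floor for non-positive input.
        return (math.log2(2 ** -16) + 9.72) / 17.52
    if lin < 2 ** -15:
        # Blended segment near black to avoid log(0).
        return (math.log2(2 ** -16 + lin * 0.5) + 9.72) / 17.52
    return (math.log2(lin) + 9.72) / 17.52

# 18% mid-grey lands near 0.4135 in ACEScc, a common grading reference.
mid_grey = lin_to_acescc(0.18)
```

The point of the log encoding is that equal increments of the encoded value correspond roughly to equal exposure stops, which is what colorists’ tools expect.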
HDR10 is another open-source video format. However, unlike the formats above, which are used to store and edit video, HDR10 is used to store and play back video. It is one of the earlier HDR formats and, as such, is not backward compatible with SDR. It includes static HDR metadata but no support for dynamic metadata.
If you have an HDR10 video without any metadata, you have a PQ10 video, which is the same as HDR10 but without the metadata.
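HDR10’s static metadata includes two values that describe the whole program at once: MaxCLL (the brightest single pixel anywhere in the content) and MaxFALL (the highest frame-average brightness), both in nits. A simplified sketch of computing them from per-pixel luminance frames is below; real implementations follow CTA-861.3, which derives luminance from the per-pixel maximum of R, G, and B.

```python
import numpy as np

def static_metadata(frames_nits):
    """Compute (MaxCLL, MaxFALL) from a list of luminance frames in nits.

    MaxCLL: brightest single pixel across the whole program.
    MaxFALL: highest per-frame average light level.
    """
    max_cll = max(float(frame.max()) for frame in frames_nits)
    max_fall = max(float(frame.mean()) for frame in frames_nits)
    return max_cll, max_fall

frames = [np.array([[100.0, 400.0], [50.0, 80.0]]),
          np.array([[900.0, 120.0], [60.0, 60.0]])]
max_cll, max_fall = static_metadata(frames)
```

Because these values are fixed for the entire program, a display must pick one tone-mapping strategy up front; that limitation is exactly what the dynamic metadata in HDR10+ and Dolby Vision addresses.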
HDR10+ is HDR10 video that includes dynamic metadata. This format uses dynamic metadata to adjust and optimize each video frame to provide a higher-quality viewing experience. HDR10+ competes with Dolby Vision as an output format that features HDR capabilities with dynamic metadata.
Dolby Vision is a proprietary video format developed by Dolby Laboratories to encompass every angle of HDR video, from shooting to commercial playback. Dolby Vision’s HDR has a deep color depth that sometimes produces colors that a specific output display cannot show. However, its dynamic metadata allows the creator to optimize and edit the video to be more true-to-life on all displays.
Dolby Vision also includes Dolby Vision IQ, designed to capture video while retaining the real-to-life look of ambient lighting. Dolby Vision is primarily used in Dolby Cinema productions as this is Dolby’s proprietary movie-viewing program.
Hybrid Log-Gamma (HLG) is a royalty-free HDR format that is most useful when a video must remain backward compatible with SDR playback devices. This significantly reduces transmission costs: a broadcaster can send one signal that is compatible with all playback devices rather than multiple transmissions, and it avoids the cost of producing both an HDR and an SDR file, which may require additional resources if the available rendering devices do not include SDR compatibility.
However, the goal of HLG is not complete backward compatibility but a bridge between old and new, letting viewers who have yet to upgrade to an HDR device continue watching new transmissions. Still, HLG isn’t natively compatible with any device that doesn’t support BT.2020 color, so very old sets won’t display the transmission correctly even if it’s sent as an HLG file.
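The backward compatibility comes from the shape of the HLG transfer curve (defined in ITU-R BT.2100), sketched below: the lower half of the signal range is approximately the square-root gamma that SDR displays already expect, while the upper half switches to a logarithmic curve that carries the extra highlight range. The constants are quoted from the standard.

```python
import math

A = 0.17883277
B = 1 - 4 * A                     # ≈ 0.28466892
C = 0.5 - A * math.log(4 * A)     # ≈ 0.55991073

def hlg_oetf(e):
    """Scene-linear signal e in [0, 1] -> HLG-encoded value in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)   # gamma-like segment (SDR-friendly)
    return A * math.log(12 * e - B) + C  # log segment (HDR highlights)
```

An SDR display simply interprets the encoded signal as a conventional gamma curve; the result is slightly wrong in the highlights but watchable, which is exactly the compromise HLG is designed around.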
4K also has several standards that 4K videos are held to. Unlike HDR standards, which can encompass 4K video, 4K formats only indicate that a video was shot in 4K; they do not imply the presence of any other video features like HDR.
The most well-known commercial formats that use HDR video are IMAX and Dolby Cinema, the latter being the commercial property for which the Dolby Vision format was named.
Most films nowadays use HDR footage. However, you may still see an SDR version of the original film depending on where and when you see the movie. This situation will typically only occur when a theater doesn’t have an HDR projector.
If you want to be sure you’re getting the whole HDR experience, it’s best to spend a little extra money on an IMAX (true IMAX only!) or Dolby Cinema ticket.
HDR vs. 4K: 5 Must-Know Facts
- HDR is a video format that focuses on more realistic color depth.
- 4K is a video resolution that determines the sharpness and detail depth of the image.
- HDR video can be recorded in 4K, but just because a camera records in 4K does not mean the video is in HDR.
- Most HDR formats are not compatible with SDR playback devices.
- IMAX and Dolby Cinema are the most well-known HDR movie playback platforms.
Learning as much as possible about the media you interact with is always good. Knowing what goes on behind the scenes is an excellent way to determine where you should spend your money.
Unfortunately, 4K and HDR video are still relatively new to the scene. Still, they’re already starting to trickle down to the consumer level of filmmaking. Further interest will only bolster the innovation behind each camera, and we’ll see the fruits of that labor reach general public use, too!