Working with ACES

  • Background
  • Recording
  • Dailies
  • Post-Production
  • FAQs

ACES is a framework for motion picture color management, and is the result of a 12-year development effort led by the Academy of Motion Picture Arts and Sciences’ Science and Technology Council. The Academy recruited volunteers from academia, from film and digital camera vendors, from film manufacturers, and from post-production facilities. Digital camera companies vigorously competing with each other in the marketplace (ARRI, SONY and Canon) cooperated wholeheartedly in ACES projects; similarly, Kodak and FUJIFILM worked together to ensure ACES would support photochemical workflows as well as it would purely digital ones.


Key ideas

There are several key concepts in ACES (and modern color management in general) that were novel to cinema professionals when they were first introduced in the late 2000s. They remain key in understanding ACES, so we review them here, and then use them frequently in the other tabs of these ACES pages.

Image state is a property of an image that describes whether its primary purpose is the representation of the scene being captured, or the depiction of the scene to some viewer. In a way, it describes the purpose of the image.

Scene colorimetry describes the luminances and hues of the real-world or synthetic objects that were photographed with a real or virtual camera. Display colorimetry describes the luminances and hues produced by the device that presents the reproduction of that scene to the viewer.

It may help to think of traditional photochemical filmmaking for a moment. The scene might be the set as lit by a cinematographer. The colors present on that set to a human observer are the scene colorimetry. The image state of the developed negative is (loosely speaking) said to be scene-referred. The image state of the film print is said to be display-referred or output-referred, and the image as it appears on the theater screen is presenting display colorimetry to the moviegoer.

In ACES there is considerable emphasis on ensuring that the scene-referred image is as colorimetrically accurate as possible. If this were always done perfectly, then the ACES scene-referred images of a scene captured simultaneously by ARRI, SONY and Canon cameras would be identical. In practice, the images are not identical, due to differences in dynamic range, in noise, and in the fine points of sensor design. That said, for the many users who have tried to match the output of multiple types of cameras without tools or theory to help them, ACES is something of a revelation: it brings such disparately-sourced images “within range” of each other quickly, easily and consistently.

When the brightness of a display-referred image is much less than the brightness of the scene-referred image (think of the bright light on-set for beach scenes in a surfing movie, and then the relatively dim light of that movie when projected), just scaling the brightness of the colors doesn’t produce a very appealing image. To make the displayed image compelling, a rendering transform is used to convert scene colorimetry to display colorimetry.

This idea of rendering probably predates ACES by about five centuries; you can see a non-realistic but visually pleasing darkening of shadows and brightening of highlights in Rembrandt’s The Night Watch (to take just one example).

But for cinematographers whose background was in classic, pre-digital electronic acquisition, where display-referred images were recorded, ACES’s providing for an explicit rendering step was something new. Such cinematographers were accustomed to obtaining artistic effects not in a separate step, but by ‘painting the camera’ as a form of rendering.

ACES provides for a two-step explicit rendering process. The overall rendering of the scene-referred to the display-referred image is done with the viewing transform. The viewing transform is itself composed of two concatenated transforms: the Reference Rendering Transform, often abbreviated RRT, and an Output Device Transform (ODT) that is specific to the particular imaging device. The RRT is responsible for imparting aesthetically pleasing attributes such as a film-like “toe” and “shoulder” for an imaginary, idealized output device without gamut or luminance limitation; the ODT adapts such idealized images for a particular type of display, such as the SMPTE Standard Projector, or a professional reference monitor such as those used on-set.
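
The actual RRT and ODTs are published by the Academy as CTL code; purely to illustrate the idea of concatenating an idealized rendering with a device-specific adaptation, here is a toy sketch. The curves are invented stand-ins, not the real transforms.

```python
# Illustrative only: a "viewing transform" built, like ACES's, from two
# concatenated stages.  These toy curves are invented stand-ins; the real
# RRT and ODTs are published by the Academy as CTL code.

def toy_rrt(x):
    """Stand-in RRT: a film-like tone curve with a gentle toe and shoulder."""
    x = max(0.0, x)
    return x / (x + 0.6)

def toy_odt(y):
    """Stand-in ODT: clamp to the display's range, then gamma-encode."""
    y = min(max(y, 0.0), 1.0)
    return y ** (1.0 / 2.4)

def toy_viewing_transform(scene_linear):
    """Scene-referred in, display-referred out: the two stages concatenated."""
    return toy_odt(toy_rrt(scene_linear))
```

The point is structural: the first stage knows nothing about any particular display, and only the second stage is swapped out when the target device changes.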

ACES was originally known as the Image Interchange Framework, and to that end, a special form of OpenEXR file was designed to hold ACES images. This constrained form of OpenEXR is known as the ACES Container File and is specified in SMPTE ST 2065-4:2013. ACES container files can be produced very early, e.g. at the time a Codex Capture Drive is downloaded from an ALEXA XT; they can also be produced considerably later by transcoding the image data and metadata from ARRIRAW files, from Log C ProRes QuickTime or Log C DNxHD MXF clips, or from ARRIRAW-containing MXF clips. Note that all the traditional metadata available to ARRI camera users, including dynamic lens metadata, is likewise available in ARRI-generated OpenEXR files.

A second container is defined by ACES for holding densitometric data from film scans. Just as ACES defines a constrained form of OpenEXR for holding colorimetric data, ACES also defines a constrained form of DPX for holding densitometric data, known as ADX. 

ARRI’s ARRISCAN film scanners can be configured to produce these specially-marked ACES-compatible ADX-containing DPX files via an augmented scanner calibration process. Digital Intermediate facilities or their clients who wish such a calibration should contact ARRI's service group. 

No special provision for ACES need be taken during film exposure, and none need be taken at the lab. Note, however, that special film lab processing (such as negative bleach bypass) should be avoided, as it can make it nearly impossible for the ADX-to-ACES transform to convert densitometry to scene colorimetry.

Strengths of ACES

ACES has a strong theoretical foundation, with years of camera, film and display manufacturer expertise contributed to the project, as well as the considered judgements of many of the “golden eyes” of Hollywood.

ACES is very well documented, with at least a half-dozen ACES-specific SMPTE standards, and perhaps twice that many white papers on ACES with the Academy imprint.

ACES is vendor-agnostic, and does not preferentially render for any particular vendor’s camera. (It should be noted that though not all significant camera vendors participated in the ACES design efforts, almost every one contributed test images for “golden eyes” evaluation.)

Organization of these pages

These pages are divided into five sections:

  • Background — what you have been reading.
  • Recording — what you need to know in order to operate effectively on-set for an ACES show
  • Dailies — detailed instructions on making dailies from ARRI camera output in an ACES workflow
  • Post-production — how to use ARRI camera output in an ACES workflow with the most common post-production tools
  • FAQs — commonly asked questions (differences from ARRI standard workflow; notes and caveats)

Digital capture

ARRI digital cameras do not directly produce or record ACES imagery. Instead, they produce either ARRIRAW data, QuickTime-packaged ProRes clips or MXF-packaged DNxHD clips. The data or clips can be converted to OpenEXR files full of ACES image data and metadata, fully conforming with the ACES image container standard (SMPTE ST 2065-4:2013, ACES Image Container File Layout). Note that only ProRes or DNxHD clips recorded with a Log C "REC processing" can be converted to ACES.

As the frames are not being recorded as native OpenEXR frames, but instead are being recorded as ARRIRAW data or ProRes or DNxHD clips, all the image-format-specific information regarding frame rate ranges, resolution, frame size, data volume, etc. that is found in non-ACES sections of these Workflow pages is equally appropriate for ACES production.

Signal monitoring

ACES defines 10- and 12-bit encodings, used on-set only, to pass gradable imagery around the set. These are not 16-bit float encodings of the ACES image, but are lower-bit-depth "proxy" images, termed "ACESproxy" images. The 10-bit on-set encoding is called ACESproxy10; the 12-bit encoding is called ACESproxy12.
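
The ACESproxy encodings are simple pure-log integer mappings. The sketch below assumes the constants given in the Academy's ACESproxy specification (S-2013-001); verify against the spec before relying on the exact numbers.

```python
import math

# ACESproxy encode/decode, assuming the constants in the Academy's ACESproxy
# specification (S-2013-001): 10-bit uses 50 code values per stop, mid-scale
# offset 425 and legal range 64-940; 12-bit uses 200 per stop, offset 1700
# and range 256-3760.

PROXY_PARAMS = {10: (50.0, 425.0, 64, 940), 12: (200.0, 1700.0, 256, 3760)}

def lin_to_acesproxy(lin, bits=10):
    """Encode a linear (AP1) value as an integer ACESproxy code value."""
    steps, mid, cv_min, cv_max = PROXY_PARAMS[bits]
    if lin <= 2.0 ** -9.72:              # at or below the floor: clip to CVmin
        return cv_min
    cv = (math.log2(lin) + 2.5) * steps + mid
    return int(max(cv_min, min(cv_max, round(cv))))

def acesproxy_to_lin(cv, bits=10):
    """Decode an ACESproxy code value back to a linear (AP1) value."""
    steps, mid, _, _ = PROXY_PARAMS[bits]
    return 2.0 ** ((cv - mid) / steps - 2.5)
```

Mid gray (0.18) lands near code value 426 in the 10-bit encoding; the quantization visible in the round trip is one reason ACESproxy is a monitoring and grading signal, not a storage format.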

Current ARRI cameras do not have a way of directly producing an ACESproxy monitoring signal, but it is still possible to monitor in ACESproxy using downstream hardware: an external device converts Log C to ACESproxy and applies the ACES transforms appropriate for an on-set display. Mechanisms to do this type of monitoring (possibly including grading of ACESproxy data with ASC CDL) include:

  • Using a node-based on-set live color grading system that is powerful enough to allow concatenation of multiple 3D LUT and ASC CDL operators
  • Using a monitor with 3D LUT support in conjunction with an ACES-savvy application that performs ASC CDL grading
  • Using an external 3D LUT box in conjunction with a monitor that accepts ACESproxy input and that internally performs ASC CDL grading
  • Using an external 3D LUT box in conjunction with a standard monitor (foregoing any ASC CDL grading of the ACESproxy signal)

Grading with on-set live color grading systems
Highly configurable on-set live grading systems such as OSD Live! from Colorfront or Daylight from FilmLight can be set up to take a live Log C input signal, convert it to an ACESproxy signal, optionally apply ASC CDL grading operations to that signal, convert that (possibly modified) ACESproxy signal back to an ACES signal and finally apply the ACES output transform. Diagrammatically this is shown in the figure below:


Note that the diagram shows ASC CDL being applied to ACESproxy10 data. If the software package (here Colorfront OSD Live!) chooses to do so, it can convert Log C to the logarithmic full-precision floating-point encoding of ACES, termed ACEScc, in place of ACESproxy10. ACESproxy (both 10- and 12-bit versions) and ACEScc are designed so that ASC CDL grades applied to ACESproxy and to ACEScc produce identical results across the luminance ranges one encounters on-set.

Grading with "smart monitors" and supporting apps
Some recent monitors, like the Flanders Scientific DM250, have enough internal processing power so that in conjunction with a grading application like Pomfort's LiveGrade Pro, the monitor can itself convert the Log C signal to ACESproxy10, apply ASC CDL, apply the ACES viewing transform and then display the result. This is shown below:


For information on this approach, see the Pomfort LiveGrade Pro support page explaining the use of this workflow. The Pomfort LiveGrade ACES CDL panel is shown below.

Grading with 3D LUT boxes and "smart monitors"
Other monitors, though they cannot internally convert Log C to ACESproxy10, can apply ASC CDL to ACESproxy data and then apply the ACES viewing transform. When these monitors are fed a signal that has been converted upstream from Log C to ACESproxy10, they can be used with ARRI cameras in an ACES-based on-set workflow.

In the illustration below, a Fujifilm IS-mini 3D LUT box is using a Log C-to-ACESproxy10 3D LUT to perform such an upstream conversion. The Canon DP-V3010 provides a display controller with knobs that can adjust ASC CDL values applied to the incoming ACESproxy10 signal, and then render the result with the ACES viewing transform. ASC CDL values are exportable on a USB stick from the monitor's display controller.


Users of such outboard 3D LUT boxes should be aware that the conversion between Log C and ACESproxy10 is exposure-index specific, and that to maintain a strictly accurate conversion from Log C to ACESproxy10, the appropriate EI-specific 3D LUT should be loaded each time the ALEXA or AMIRA's exposure index is changed. That said, if a Log C to ACESproxy10 3D LUT for a mid-range exposure index such as EI 800 were used, the differences from nominal Log C to ACESproxy10 conversion would almost certainly not be visible in the context of a working set.
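
The EI dependence comes from the Log C curve itself: its parameters change with exposure index. The sketch below uses the commonly published EI 800 values (verify against ARRI's current Log C white paper before relying on them), which is the mid-range curve a compromise LUT would assume.

```python
import math

# ARRI Log C (V3) encode/decode at EI 800.  The parameters below change with
# exposure index, which is exactly why a Log C-to-ACESproxy 3D LUT is
# EI-specific.  Values are as commonly published in ARRI's Log C
# documentation; verify against the current ARRI white paper before use.

CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def lin_to_logc_ei800(x):
    """Relative scene exposure (0.18 = mid gray) -> Log C signal [0..1]."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def logc_ei800_to_lin(t):
    """Log C signal -> relative scene exposure."""
    return (10.0 ** ((t - D) / C) - B) / A if t > E * CUT + F else (t - F) / E
```

At EI 800, mid gray encodes to roughly 0.391; at another EI the same scene exposure lands at a different signal level, so a strictly accurate conversion LUT must follow the EI.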

For the particular case of the Canon DP-V3010, users should verify that the display’s firmware is up-to-date, as the ACESproxy encoding changed between the time the display was first shipped and when ACES 1.0 was finalized in December of 2014.

Monitoring with 3D LUT boxes
If on-set monitoring is all that is required, the same on-set 3D LUT box used above could be loaded with a Log C to Rec. 709 3D LUT that performs, as one concatenated set of conversions, a Log C to ACES conversion and an ACES viewing transform for a Rec. 709 device. Such a setup is shown below.



File- and clip-based workflows
Managing near-set work can be an exacting task, encompassing verification, backup, sound sync, color correction, creation of dailies, archival, and many other critical production functions. Fortunately, the production of dailies from ARRI cameras is no more complicated in an ACES workflow than it would be in a traditional ARRI workflow. These pages will limit themselves to a discussion of dailies for delivery to editorial (or other distribution as per the needs of the production).

File-based ACES dailies workflows
ACES can be used in file-based workflows with ARRIRAW files arriving from the set, accompanied by any on-set color correction information (note that ARRI Look Files are not supported for use in ACES workflows). The illustration below shows a Codex Vault that processes Capture Drive cartridges arriving from the set, producing ARRIRAW files (.ari files). These .ari files would be converted to ACES by a near-set system (here, a Colorfront OSD system), the on-set color correction information — probably conveyed as an ASC CDL file — would be applied, and the result would be combined first with any show look, then with the standard ACES output transform. In this manner, the OSD system can create dailies for Editorial with the look and any on-set color corrections "baked in".

The OSD system also creates ACES images packaged in OpenEXR files for Visual Effects. These files are accompanied by "sidecar" files carrying any show look or on-set color correction, so that Visual Effects can preview their composites using the same transforms used by on-set or near-set creative staff.


Though this illustration has shown ARRIRAW being delivered as individual files to the OSD system, the workflow pertains equally well to ARRIRAW wrapped in MXF as produced by an ALEXA Mini.

The configuration given is illustrative; many others are possible, including direct production of deliverables from the Codex Vault, if that is the production’s preferred workflow.

Clip-based ACES dailies workflow
ACES can also be used in clip-based workflows where the scene was captured to a memory card in clip format, either as a ProRes or DNxHD clip from an ALEXA, or a ProRes clip from an AMIRA. The only constraint is that the content must have been captured using a Log C "gamma".

In this type of workflow, typically a single station is used to convert the captured Log C to ACES imagery, to apply any desired on-set grade and/or show look, and to apply that color correction and the ACES output transform to produce dailies for Editorial. For shots with visual effects, the station also makes ACES OpenEXR files for the Visual Effects artists. The artists are also provided with any on-set grading information and show look, so that any review of their material preserves the creative decisions made upstream.


Specific deliverable file types

Camera original ARRIRAW, ProRes and DNxHD
These should be cataloged and archived as would be done for a non-ACES show. Just as ARRIRAW processing has improved through the years, so similarly may the process of converting camera original content into ACES imagery and metadata; careful saving of the original files or clips can allow improved ACES processing in the future.

And if for any reason the production reverts to a more traditional ARRI workflow, it is the saved camera originals that will make that possible.

ACES container files (constrained OpenEXR)
ACES container files should be the primary deliverable for VFX work. An ACES container file is a 16-bit OpenEXR file containing not just the image but also the metadata that distinguishes an ACES-containing OpenEXR file from some other type of OpenEXR file.

ACES container files contain scene-referred data and should not incorporate any Look file effects.

Note that at the present time the ACES container specification only allows for uncompressed OpenEXR files to hold image data and metadata.

ProRes and DNxHD with "baked-in" color for Editorial
As few editorial departments can process the 16-bit OpenEXR images that will be used in the finishing suite, they will need to be provided with the same type of deliverables that they are accustomed to handling today — with the ACES output transform "baked-in".

If ASC CDL was used on-set or near-set on ACESproxy data as part of creative direction, then the effects of the ASC CDL should also be "baked-in" to the delivered dailies. Note that the ASC CDL is intended to work on ACESproxy or ACEScc data, not ACES data, so ACES dailies systems will briefly transform ACES data into ACESproxy or ACEScc, apply the CDL, and transform the result back to ACES.
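
That brief round trip can be sketched in a few lines: lift the linear ACES value into ACEScc log space, apply slope/offset/power there, and drop back to linear. The ACEScc formulas assume the Academy's ACEScc specification (S-2014-003); the grade numbers in the example are invented.

```python
import math

# Sketch of what an ACES dailies system does with an on-set CDL: transform
# linear ACES data to ACEScc, apply the CDL there, transform back to ACES.
# ACEScc formulas assume the Academy's ACEScc specification (S-2014-003).

def lin_to_acescc(x):
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def acescc_to_lin(v):
    # Inverse of the main (x >= 2^-15) branch; fine for normal image values.
    return 2.0 ** (v * 17.52 - 9.72)

def cdl_channel(v, slope, offset, power):
    """Per-channel ASC CDL: out = (in * slope + offset) ** power."""
    return max(0.0, v * slope + offset) ** power

def grade_linear_sample(lin, slope, offset, power):
    """ACES linear -> ACEScc -> CDL -> ACES linear."""
    return acescc_to_lin(cdl_channel(lin_to_acescc(lin), slope, offset, power))
```

With an identity grade (slope 1, offset 0, power 1) the round trip returns the input value unchanged, which is exactly the behavior a dailies system relies on.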

It is also worth noting that much pain can be avoided at the end of the production if editorial monitors are calibrated according to the same standards used in visual effects and in digital finishing. By default, ARRI tools assume images they produce will be shown on monitors calibrated for Rec. 709 primaries, a Rec. 709 [D65] white point, and a 2.4 gamma response.

The variety of grades and looks

ACES does not mandate a particular workflow, but there is a fairly common ACES-based workflow that is flexible enough to meet most production needs. In this workflow, there are four points at which the color values in the captured scene can be changed for creative or for technical reasons: in a show look; in an on-set grade; in a VFX pre-grade; and in the final DI grade. Again, this is not a formal recommendation from the ACES development team, and even when adopted, each step is optional (though only live broadcast is likely to omit the final DI grade). If the workflow sounds familiar, it is because this type of workflow was established for high-end VFX-heavy productions long before ACES was developed.

Looks are color changes that are applied uniformly across the captured frame, and commonly are applied to many if not all shots in a production. An example look might be one emulating a particular film stock as developed and printed by a particular lab. Looks are typically developed during pre-production by the cinematographer and DIT, with input from other creative staff -- for example, the final DI colorist is often involved in look development.

Technically speaking, looks are applied to data in the linear ACES color space. For looks analytically derived from physical measurements (as would be the case for the print film emulation mentioned above) this causes no difficulty, but a linear space is not particularly colorist-friendly. So looks developed interactively by colorists are often developed in a colorist-friendly color space like ACEScc, and wrapped transparently in ACES-to-ACEScc and ACEScc-to-ACES transformations.

On-set grades
On-set grading is, typically, the application of ASC CDL operations — slope, offset, power and saturation changes — uniformly across the frame to log-encoded ACESproxy data, either on-set during the composition of a shot, or shortly after the shot is captured in near-set dailies production. (In this sense, on-set grading is something of a misnomer; it happens as much near-set as on-set.) On-set grades are applied before any look is applied.
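
The four ASC CDL operations are fully specified math; applied to an RGB triple of log-encoded values they look like this, per the ASC CDL definition (slope, offset and power act per channel; saturation then mixes toward a Rec. 709-weighted luma):

```python
# ASC CDL on an RGB triple of log-encoded (e.g. ACESproxy or ACEScc) values.
# Slope, offset and power act per channel; saturation then mixes each
# channel toward a Rec. 709-weighted luma, per the ASC CDL definition.

REC709_LUMA = (0.2126, 0.7152, 0.0722)

def asc_cdl(rgb, slope, offset, power, saturation):
    sop = [max(0.0, v * s + o) ** p
           for v, s, o, p in zip(rgb, slope, offset, power)]          # SOP
    luma = sum(w * v for w, v in zip(REC709_LUMA, sop))
    return tuple(luma + saturation * (v - luma) for v in sop)         # Sat
```

Identity parameters (slope 1, offset 0, power 1, saturation 1) pass the image through untouched; saturation 0 collapses each pixel to its luma.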

VFX pre-grades
Pre-grades are for shot-specific overall changes, typically to deal with uncompensated changes in illuminant over long periods of time. More detail on pre-grading is available in the post-production section, though it is worth noting here that the artists using the pre-graded plates should be provided with the on-set grades as well.

DI grade
The final DI grade is the last opportunity for creative change to the look of the image prior to distribution. More detail on the final DI grade is available in the post-production section, though it is worth noting here that the final DI colorist should be provided with the on-set grade, where it may be used as a starting-point grade, or at least as a historical reference.


ARRI camera output is metadata-rich. When converting ARRI camera output to ACES there are several options for preserving metadata.

ARRI ALEXA cameras
The ALEXA can produce three types of output: ARRIRAW files (or from the ALEXA Mini, ARRIRAW clips wrapped in MXF), ProRes or DNxHD clips, and/or an HD-SDI video signal.

For ARRIRAW-based workflows, the maximum amount of metadata is carried over to the ACES container file when ARRI’s two ARRIRAW conversion programs are used — either the ARRIRAW Converter (ARC) application, or the ARC_CMD command-line tool. These tools fill in from the ARRIRAW file all the required metadata to make the OpenEXR file a compliant ACES container, and embed much more optional ACES and ARRI-specific metadata as well. For details on what metadata is embedded, see the ARRI metadata documentation here. For details on how to use the ARRIRAW Converter application, see the Tools section below.

Alternatively, for Codex-based ARRIRAW workflows, ACES-containing OpenEXR files can be generated from the downloaded Capture Drive at the same time as the ARRIRAW files themselves. The Tools tab of this section shows how to use the Codex Vault UI to generate OpenEXR files as well as ARRIRAW files. (The ARRIRAW files should always be produced and saved even in an ACES-based workflow, as they offer the highest-quality archiving of the captured imagery.)

Log C HD-SDI output can be captured and later converted to ACES OpenEXR files as desired, using a third-party tool such as Blackmagic Design’s Resolve or Colorfront’s OSD. ACESproxy HD-SDI output can not be converted to ACES OpenEXR files, however, nor can Rec. 709 HD-SDI output.

For the AMIRA, only ProRes Log C clips can be brought into ACES, as the AMIRA has no ability to record ARRIRAW files or DNxHD files. For on-set or near-set dailies creation, ProRes Log C clips can be converted to ProRes Rec709 or DNxHD Rec709 clips using tools such as those described above.


ARRIRAW Converter
ARRIRAW files are the highest-quality path to ACES imagery when capturing a scene with ARRI cameras. The reference conversion tool is ARRI’s freely-downloadable ARRIRAW Converter. (Users not already having the ARRIRAW Converter can download it here; registration is required.) Other portions of ARRI’s website describe its operation, but there are four key settings in the Render Settings "Format & Color Space" tab that must be set correctly to produce ACES files.

  • "OpenEXR" should be selected in the pull-down "File Format" menu
  • "Scene Lin" should be selected in the pull-down "Encoding" menu
  • "Uncompressed" should be selected in the pull-down "Compression" menu
  • "SceneLin. - ACES" should be selected in the pull-down "Color Space" menu

A Render Settings panel correctly configured for producing OpenEXR is shown below.

For scripted batch use, the reference conversion from ARRIRAW to ACES-containing OpenEXR is ARRI’s freely-downloadable ARC_CMD command-line tool, documented here (registration required). This tool is controlled by an XML file whose name is indicated in the command invocation. The key parameters and their values are:

  • colorspace — this parameter should have the value "ACES"
  • format — this parameter should have the value "exr"

Codex Production Suite

The Codex Production Suite allows for the creation of OpenEXR files that conform to the ACES Container Specification, and (given appropriate 3D LUTs) of dailies embodying the default ACES rendering. It also provides for taking upstream ASC CDL information and applying those ASC CDL operations in the ACES processing pipeline in the proper context, that is, in the ACEScc color space.

Producing ACES-containing OpenEXR files
First, make sure that there is an ACES OpenEXR target deliverable defined in the Project > Deliverables section of the Codex Production Suite UI. It should be defined something like this:

although the contents of "Roll transform" may be set to some production-specific value. Next, make sure that this deliverable is in the list of deliverables being produced by the VFS, as defined in Project > VFS:

with the various pathnames and filters set appropriately for the production.

Producing dailies embodying the ACES output transform
Applying the ACES output transform to ARRI Log C data means transforming the image from Log C to ACES, then rendering the ACES image data for a particular display with the RRT and an ODT. This set of three transforms can be approximated with a 3D LUT.

The example below shows a deliverable configured to produce high-quality ProRes dailies by using a hypothetical 3D LUT that approximates taking V3 Log C data, turning it into ACES data, and running it through the Reference Rendering Transform (RRT) and an Output Device Transform (ODT) suitable for a "Rec 709" monitor.

As of the 4.0 release of the Codex Production Suite, the Add button next to the Lut field allows one to add ASC CDL processing, with that processing applied (correctly) within the ACEScc color space. That button also allows for the application of a Look Modification Transform (LMT) immediately after the ASC CDL application.

Resolve 12
Resolve 12 can be used to quickly process ACES OpenEXR sequences, ARRIRAW sequences or ProRes or DNxHD clips (assuming the clips were created with Log C "REC processing") into dailies deliverables, as follows:

  • Create a new project in Resolve. In the Project Settings’ "Master Project Settings" section, after setting the appropriate values for image size, frame rate and so on, set "Color science" to "ACEScc" and set the ACES version to "ACES 1.0."
  • In the Project Settings’ "Color Management" section, set "ACES IDT" to "Alexa". Be sure the ACES ODT is set to "Rec.709".
  • Browse and bring in ARRIRAW, or ProRes Log C, or MXF Log C media into your Resolve project
  • Create a timeline and drag the media onto it.
  • Skip the Color correction portion of Resolve’s UI entirely.
  • In the "Deliver" portion of the UI, set the "Video Format" to your desired output format (QuickTime, DNxHD, etc.) and set the "Codec" similarly. In the example below the Format is set to produce QuickTime dailies encoded with the Apple ProRes 422 HQ codec.
  • Browse to or create a location into which the OpenEXR files would be rendered.
  • Embody the current settings as a render job with "Add to Render Queue", and render the clips to produce dailies with "Start Render".

Resolve 12 can also be used to produce OpenEXR deliverables from captured ARRIRAW sequences, or from ProRes Log C or DNxHD Log C clips. To do so, one would proceed as follows:

  • Create a new project in Resolve. In the Project Settings’ "Master Project Settings" section, after setting the appropriate values for image size, frame rate and so on, set "Color science" to "ACEScc" and set the ACES version to "ACES 1.0."

    Note that these settings are identical to those used for dailies deliverables.
  • In the Project Settings’ "Color Management" section, set "ACES IDT" to "Alexa". Be sure the ACES ODT is set to "No ODT". 
  • Browse and bring in ARRIRAW, or ProRes Log C, or DNxHD Log C media into your Resolve project
  • Create a timeline and drag the media onto it.
  • Skip the Color correction portion of Resolve’s UI entirely.
  • In the "Deliver" portion of the UI, set the "Video Format" to "EXR", the "Codec" to "RGB half (no compression)"
  • Browse to or create a location into which the OpenEXR files would be rendered.
  • Embody the current settings as a render job with "Add to Render Queue", and render the clips to produce OpenEXR frames with "Start Render".

Colorfront OSD
Modern versions of Colorfront OSD provide templates that allow straightforward production of both dailies for Editorial and OpenEXR plates for visual effects. When an ACES-based project is created, ACES support should be specified as a creation parameter, as shown below.

The parameters of the resulting chain of processing nodes are preset for production of ACES dailies, as shown for the first "result" in the processing chain below (including ASC CDL application in the ACEScc space).

The second result, producing OpenEXR-containing ACES files, was manually added.

Color image encodings and metadata

The post-production workflow
ACES post-production emphasizes consistency of color in four contexts:

  • on-set monitoring
  • dailies viewing, whether in a director’s screening room or on a producer’s tablet
  • visual effects development and review
  • final DI grading

Providing consistent color across a variety of device types, file formats, and viewing environments requires considerable behind-the-scenes machinery to make things "just work". This is true whether or not one is using ACES. The goal of ACES-based workflows is to enable flexibility while keeping the fundamental building blocks and pipeline consistent.

Color spaces and color encodings
Post-production in ACES introduced an additional color space for visual effects work named ACEScg. Like ACES itself, this is a linear color space. But where ACES uses a set of color primaries (termed "AP0") that encompass all visible colors, ACEScg uses the same set of primaries used by ACESproxy and ACEScc — a set of primaries named "AP1". AP1 primaries are nearer the spectral locus, and are closer to traditional grading primaries than are the AP0 primaries. The relationship between the color primaries of ALEXA wide-gamut, of ACES, and of ACESproxy/ACEScc/ACEScg is shown in the figure below.
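
Because both spaces are linear and share the ACES white point, moving between them is a single 3×3 matrix multiply. The sketch below assumes the matrix values published by the Academy for AP0 → AP1; verify against the ACEScg specification (S-2014-004) before production use.

```python
# Converting ACES 2065-1 (AP0) to ACEScg (AP1): a single 3x3 matrix multiply,
# since both spaces are linear and share the same white point.  Matrix values
# assume those published by the Academy for AP0 -> AP1; verify against the
# ACEScg specification (S-2014-004) before production use.

AP0_TO_AP1 = (
    ( 1.4514393161, -0.2365107469, -0.2149285693),
    (-0.0765537734,  1.1762296998, -0.0996759264),
    ( 0.0083161484, -0.0060324498,  0.9977163014),
)

def aces2065_to_acescg(rgb):
    return tuple(sum(m * v for m, v in zip(row, rgb)) for row in AP0_TO_AP1)
```

Each row sums to 1.0, so neutral (gray) values pass through unchanged, as a chromatic-adaptation-free primary conversion should behave.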

Recapitulating all the color spaces involved:

  • ARRI V3 Log C: uses the ALEXA wide-gamut [AWG] primaries, and a log floating-point or log 10-bit or 12-bit integer encoding. An image storage space and a grading space.
  • ACES: AP0 primaries; linear floating-point encoding. An image storage and interchange space.
  • ACESproxy: AP1 primaries, log 10-bit or 12-bit integer encoding. A grading space. Not an image storage space because of insufficient bit depth to guarantee lack of banding.
  • ACEScc: AP1 primaries, log floating-point encoding. A grading space. Not an image storage space.
  • ACEScg: AP1 primaries, linear floating-point encoding. A working space for rendering and compositing. Potentially an image storage space, but not a recommended image interchange space.

Note that the log encoding for ACESproxy and the log encoding for ACEScc were designed to work together. If ASC CDL values were created using an ACES-supporting on-set tool that had applied those values to ACESproxy images, those same ASC CDL values will produce the same visual result when applied in downstream ACES-supporting tools that manipulate ACEScc images.
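
That "designed to work together" claim is concrete: above their encoding floors, the two log curves are affine relatives of each other, so (using the constants assumed from the published specifications) an unquantized ACESproxy10 code value is simply 876 × ACEScc + 64.

```python
import math

# Why CDL values "travel" between ACESproxy and ACEScc: above their floors
# the two encodings are affine relatives.  From the published constants,
#   proxy10 = (log2(lin) + 2.5) * 50 + 425
#   acescc  = (log2(lin) + 9.72) / 17.52
# and eliminating log2(lin) gives proxy10 = 876 * acescc + 64.

def lin_to_acescc(x):
    return (math.log2(x) + 9.72) / 17.52          # x well above the floor

def lin_to_acesproxy10(x):
    return (math.log2(x) + 2.5) * 50.0 + 425.0    # unquantized code value

for lin in (0.02, 0.18, 1.0, 8.0):
    assert abs(lin_to_acesproxy10(lin) - (876.0 * lin_to_acescc(lin) + 64.0)) < 1e-9
```

An affine relationship preserves the shape of any slope/offset/power adjustment, which is why the same ASC CDL numbers grade identically in either space.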

Flow of color image data and metadata across contexts
There are two key principles that provide consistent color across the four contexts mentioned above:

  • The same core set of transforms is used at every stage
  • If a transform is parameterized (in the way that an ASC CDL transform is parameterized by slope, offset, power and saturation values, or a 3D LUT transform is parameterized by the contents of a particular 3D LUT) then the same parameters are passed from one context to another throughout production and post-production

On-set production optionally produces an on-set grade. If it does produce such a grade, then the grading parameters, expressed as an ASC CDL file, are passed from on-set to near-set or Editorial, and thence to VFX, and finally (as a starting point or hint) to the final DI grade.

Similarly, any 3D LUT that might be implementing an overall show look is applied the same way, at the same place in the pipeline, in each context.

The diagram below shows the flow of image data (and, with dashed lines, of optional image metadata) between production and post-production contexts.


On-set has been covered in the "Recording" tab, and near-set or Editorial has been covered in the "Dailies" tab. This tab therefore focuses on the two contexts most tied to post-production: Visual Effects and final DI grading.

Visual effects

Using current software
ACES imagery requires a different set of image colorspace conversion and image viewing transforms than those to which users of vendor-specific workflows may have become accustomed. Fortunately, many visual effects tools today integrate OpenColorIO, a flexible open-source package for color management. Versions of OpenColorIO supporting ACES Release 1.0 became available in early 2015.

VFX tools like Nuke or Fusion rely on OpenColorIO (OCIO) for color management. When the latest versions of OCIO and its configuration files (ACES 1.0.1 or later) are used, these tools are ACES 1.0-compliant.

Nuke 10 has enhanced its support for OCIO, including incorporation of ACES 1.0.1-compliant OCIO configuration files. The related Project Settings panel choices have changed slightly, as shown below:

Note the use of the Nuke_1.0.1 OCIO configuration, bundled with Nuke 10.0 and later versions. Also note that for this project the default for “float files” has been changed to ACES 2065-1; the default in current distributions shipped by The Foundry (somewhat controversially) does not reflect the Academy’s original intent that images written to disk be expressed with AP0 primaries, not AP1 primaries.

Obtaining ACES plates
Background plates may be delivered to VFX as ACES Container files (i.e. OpenEXR files containing ACES data), with the conversion of ARRI camera output to ACES happening upstream of the VFX facility as part of the "pull". Alternatively, the conversion can be done at the VFX facility, either as a one-time batch process, or as part of the composite itself.

Batch conversion to OpenEXR has been covered in the "Dailies" tab. In the image below, we give an example of the Nuke Read node settings required to convert an ARRIRAW file to ACES.

Here the salient points are that the "color space" pop-up in the lower "ari Options" part of the Read node’s properties panel has been set to "Scene Lin. - ACES" and that the colorspace used for linearization in the upper part of the panel has been set to match, with a value of "ACES - ACES2065-1".

Note that the version of the ARRIRAW SDK used by Nuke does not offer the newer ACEScc or ACEScg color spaces as ARRIRAW decoder output options, so the label it provides to the Nuke UI code ("Scene Lin. - ACES") is effectively a shorthand for what Nuke more precisely identifies as "ACES - ACES2065-1".

In Nuke 10, reading DPX files, TIFF files, ProRes clips or DNxHD clips containing Log C imagery into an ACES project is straightforward. The color space of the image data read from the file or clip is indicated in the Read node (including the Log C image’s exposure index), and Nuke converts the newly-read imagery from the indicated color space to the working space established in the project settings.

Note that the situation is slightly more complicated in prior versions of Nuke, where two nodes were required: a Read node with "colorspace" set to "linear", followed by an OCIOColorSpace node converting from the EI-specific ARRI V3 Log C version to the production’s working color space.

The VFX "neutral grade"
When a sequence comprises many shots, and each shot has multiple takes, it is common for the scene illumination of on-location shots to drift. The color temperature and intensity of natural light change over time, and the plates for a complicated VFX sequence may take many hours (if not days) to shoot.

Especially when physically-based modeling and rendering are used, it is more efficient to have a compositing supervisor normalize all the shots in the sequence to some "neutral grade" before distributing the work amongst individual artists. This type of grade is always performed using only linear operations, such as per-channel gain changes or 3x3 matrix multiplications.
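A minimal sketch of such a neutral grade, assuming a gray-card (or other known-neutral) sample is available from each plate. The helper names are hypothetical, and a real normalization might use a full 3x3 matrix rather than per-channel gains:

```python
import numpy as np

def neutral_gains(gray_sample, target=0.18):
    """Per-channel gains mapping a sampled gray-card RGB (linear,
    e.g. ACEScg) onto a neutral target value -- a purely linear operation."""
    return target / np.asarray(gray_sample, dtype=float)

def apply_neutral_grade(image, gains):
    """Apply the per-channel gains to a linear image of shape (H, W, 3)."""
    return image * gains
```

Because the operation is linear, it commutes with other linear steps (exposure changes, matrix conversions) and can be removed or revised later without loss.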

Viewing transformations
OCIO allows the ACES output transform (the Reference Rendering Transform [RRT] followed by some Output Device Transform [ODT]) to be applied as a viewer process. In OCIO-color-managed Nuke, this is controlled by a drop-down menu in the main menu bar. In the illustration below, a Rec. 709 ODT has been selected from OCIO’s pre-packaged library of RRT-plus-ODT combinations.

Matching on-set graded color at the artist’s desktop
Changes to the default rendering that were effected on-set by applying ASC CDL operations to ACESproxy data can be mimicked at the artist’s desktop by applying the same ASC CDL operations to ACEScc data. Because most compositing programs work in linear space, the image must be converted from the linear working space to ACEScc before the ASC CDL is applied, and converted from ACEScc back to the linear working space afterwards.

The following diagram shows the flow of image data and look metadata from the set, through editorial, into VFX. It does not show flow of image data and look metadata out of VFX and into final DI grading, other than to have solid or dashed arrows showing that the flow of data continues past this step.


In Nuke, with the ACES OCIO configuration active, the working space is ACEScg, and the viewer process expects ACEScg input. So at the point in the VFX section of the diagram just prior to the application of the On-set grade, where a two-step conversion takes place from ACEScg to ACES and then from ACES to ACEScc, an actual compositing script can collapse these two nodes into a single ACEScg to ACEScc OCIO Colorspace node. In addition, as the working space of the ACES OCIO configuration is ACEScg, the ACEScc to ACES transition immediately following in the diagram would instead be an ACEScc to ACEScg transition.

In fact, it is possible to collapse these three nodes even further, by noting that we are transforming from ACEScg to ACEScc, then applying the CDL, then transforming from ACEScc to ACEScg. This sort of ‘wrapping’ transform is exactly what the ‘working space’ colorspace selection in the OCIO CDLTransform node does:
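This log-domain wrap can be sketched in Python. The encoding below follows the piecewise ACEScc formulas published in the Academy’s S-2014-003 specification; `grade_in_acescc` is a hypothetical helper standing in for the OCIO CDLTransform node’s working-space behavior, and its input is assumed to already be in AP1 primaries (e.g. ACEScg):

```python
import math

A = 9.72   # ACEScc encoding offset (S-2014-003)
B = 17.52  # ACEScc encoding scale

def lin_to_acescc(x):
    """ACEScc log encoding of a linear AP1 value."""
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + A) / B
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + A) / B
    return (math.log2(x) + A) / B

def acescc_to_lin(y):
    """Inverse ACEScc encoding, back to linear AP1."""
    if y <= (A - 15.0) / B:
        return (2.0 ** (y * B - A) - 2.0 ** -16) * 2.0
    if y < (math.log2(65504.0) + A) / B:
        return 2.0 ** (y * B - A)
    return 65504.0

def grade_in_acescc(linear_rgb, cdl):
    """Wrap a log-domain grade: ACEScg -> ACEScc, apply CDL, -> ACEScg."""
    logged = [lin_to_acescc(c) for c in linear_rgb]
    graded = cdl(logged)  # cdl is any callable taking a log-encoded pixel
    return [acescc_to_lin(c) for c in graded]
```

With this formulation, 18% gray encodes to roughly 0.414 in ACEScc, and an identity CDL leaves the linear image untouched, which is exactly the round-trip behavior the collapsed node structure relies on.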

VFX deliverables
The deliverables from a VFX workflow should be OpenEXR files (specifically, ACES Container files), with any LMTs used in viewing the composite accompanying the delivered frames (or being incorporated by an included reference). The effects of the LMT(s) should not be "baked in" to the resulting OpenEXR frames; this includes both the LMT effecting the on-set grade as well as the LMT(s) effecting any show look. Keeping the on-set grade and the look out of the image data, while passing on the on-set grade and the look as image metadata, allows the production’s creative staff to make straightforward and consistent global changes, if need be, in the final DI grade.

Final DI grading

Image and color correction color spaces
All major color grading systems today can read ACES Container files, or can read files created directly by a digital camera and apply the appropriate IDT to produce ACES image data. Alternatively, ACES provides for a special type of film scan (an "ADX" scan), with the scanned data transformed into ACES imagery by a special film-to-digital transform termed the "unbuild".

Color-correcting an ACES image, whether it was born of a digital or a film camera, is not very pleasant for the colorist, because ACES images are linear while the human visual system’s response is roughly logarithmic. So, just as when applying ASC CDL values, final DI grading is done in a logarithmic space. This is the ACEScc space, designed so that ASC CDL color manipulations made on-set to ACESproxy values will have the same effect when applied in the final DI grading session to ACEScc values.

This is shown in the diagram below, which is an extended version of the previous diagram that showed flow of image data and metadata through VFX. In this diagram, the flow is extended to cover final grading and creation of deliverables as well.


Note that though on-set grading information continues to accompany the image data, at this point it is more in the nature of a reference, a hint, or a starting point. The entire point of final DI grading is to refine the look of the production, so the match of displayed images that held between on-set, editorial and VFX will probably not extend to the final DI grade.

Frequently asked questions

Q: Do ARRI cameras have their usual dynamic range when used with ACES?

A: Yes.

Q: Does ACES have different tonescale response than ARRI cameras’ default processing?

A: Yes. It’s a bit more contrasty, which can bring out more detail.

In the ARRI color processing architecture, the basic system tonal response comes from a sigmoidal curve influenced by camera 'black gamma', 'gamma' and 'knee' controls. In the ACES color processing architecture, the basic system tonal response comes from the Reference Rendering Transform [RRT]. 

The tonal response curves are somewhat different, because of a philosophical difference between the systems' authors: ARRI delivers by default a flatter image in midtones and highlights, as shown in the graph below: 

In the images below, a cropped region of an ALEXA-captured frame is shown rendered with the traditional ALEXA rendering on the left and an ACES rendering on the right. The ACES rendering shows increased contrast at the edges of the paving stones, and the separation of the curb from the sidewalk is much more evident. The contrast between lit and shaded blades of grass is also increased in the ACES rendering. 

The lower-contrast ARRI rendering is by design, not by accident: most ARRI camera users will be doing a later grade, and the ARRI rendering transform emphasizes showing latitude to a cinematographer over showing a near-final image. In another crop from an ALEXA-captured frame, a close-up of a rim-lit sweater sleeve shows that the ACES viewing transform renders the fabric "higher" on the tone reproduction curve. 

This can be seen more clearly in an exposure wedge, where it is evident that all the image detail shown in the ARRI rendering is still present in the ACES image, and can be revealed with a simple color grade that moves that content down the exposure curve. Once again the ARRI rendering prioritizes the cinematographer’s decisions about capture ("Am I done shooting this?") over decisions about finishing ("Is this the look I want to deliver?").

Q: Are ACES fleshtones better or worse than ARRI’s fleshtones?

A: They are different. The ACES skin tone rendering, when applied to ARRI camera output, tends to be "tawny", that is to say, for ARRI camera output ACES will render skin a bit more yellow. Whether this is a welcome or unwelcome change will depend on the subject and the context. 

In the crop from a winter scene below, the ARRI rendering on the left is both less contrasty and less saturated. In the ACES rendering on the right, contrast and saturation are increased, and the skin tone shifts to be slightly more tawny. The right rendering looks, plausibly, like a colder day than the left, bringing out more color to the cheeks. 

In some cases the ACES rendering will render a "cold" subject somewhat warmer, as seen below. 

As one would expect, if someone has a naturally florid complexion, or is sunburned, such that their ruddiness in the ARRI rendering is at the limit of what a cinematographer would desire, then the ACES rendering could require some "dialing back" in the grade. This can be true for both the skin itself, and the highlights on the skin. In the tungsten-lit scene below, the ACES rendering shifts the skin highlights so that they are more yellow. 

To sum up: the ARRI rendering and the ACES rendering process skin tones slightly differently, and which is preferable will depend on the cinematographer’s taste for contrast and saturation in their subject’s default rendering.

Q: How similar are the underlying color processing models?

A: Very similar on the front end; mostly similar on the back end. Both systems are flexible but achieve that flexibility differently.

The diagram below compares the high-level ARRI and ACES processing pipelines. Starting from a common buffer of image data, each pipeline transforms the raw data into its own encoding space – ALEXA Wide-Gamut for the former, ACES for the latter – then applies any creatively-determined color adjustment, and then transforms the possibly-adjusted image for viewing on some specific display device (e.g. a monitor or projector). 

In the ARRI pipeline, systematic changes to the look of the image, such as a sepia-tone look for older material, are blended in to the 3D LUT that implements the default "look". The ACES pipeline explicitly separates creative color grading from more constant and uniform-across-the-frame color grading, by formalizing the latter as Look Modification Transforms (LMTs). Nevertheless, it is easy to see that the pipelines are very similar. 

The ACES pipeline in the lower part of the diagram is actually more general, in that in theory there is no limit to the number of LMTs that could be inserted into the pipeline. This contrasts with the ARRI pipeline in the upper part of the diagram, which is less flexible by design – since it has to run in real-time inside the camera. 
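Conceptually, a chain of LMTs is just function composition over scene-referred pixels. A toy sketch of that idea (not the Academy’s CTL implementation; the names and the example looks below are invented for illustration):

```python
def compose_lmts(*lmts):
    """Compose any number of Look Modification Transforms, each modeled
    here as a function from one ACES2065-1 pixel to another, into a
    single look transform applied ahead of the RRT/ODT."""
    def look(pixel):
        for lmt in lmts:
            pixel = lmt(pixel)
        return pixel
    return look

# Two invented example looks: a slight warming and a small lift.
warm = lambda rgb: [rgb[0] * 1.05, rgb[1], rgb[2] * 0.95]
lift = lambda rgb: [c + 0.01 for c in rgb]
show_look = compose_lmts(warm, lift)
```

Because each LMT is a transform in the same space, any number of them can be chained without changing the rest of the pipeline, which is the flexibility the diagram illustrates.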

Q: What kinds of productions or facilities benefit most from using ACES with ARRI cameras?

A: Productions that are willing to take the time to test ARRI’s native workflow with content appropriate for their needs, and to test ACES-based workflows with that same content. Since ARRI cameras do not record native ACES container files (OpenEXR files with ACES imagery and metadata), the same recorded material can be processed through both workflows, so an initial test can be done without having to shoot the content twice.

ACES is the best vendor-agnostic solution to the problem of how to mix the output of sometimes very different cameras. Productions trying to do this without ACES are either in for an enormous amount of trial-and-error color matching, or an ambitious exercise in reverse engineering, some of which may be proscribed by the camera manufacturer.

ACES may also be the best solution if production is dealing with VFX vendors who don’t have staff that understand concepts such as scene-referred compositing and consistent color spaces for camera output. While it is true that all the large VFX vendors have understood these ideas for years if not decades, it is common for ‘garage shop’ small vendors to be at a loss when confronted with challenging composites mixing the output of disparate devices. ACES can help these facilities ‘punch above their weight’.

Q: Are there any ‘gotchas’ when using ARRI cameras in an ACES workflow? Do they have workarounds?

This question has two answers.

A1: Chromatic aberration in some wide-open lenses can produce very saturated blue or purple fringes around strong light sources adjacent to very dark areas, such as the headlights of the cars in the nighttime scene below. When these colors are mapped into the ACES color space, they can fall outside the gamut of colors that the ACES rendering transform is designed to handle. The resulting color clipping manifests itself as an unpleasant artifact.

The ARRI rendering (again, the ARRI crop is on the left) was specifically designed to be more forgiving in this circumstance, in that it provides a "soft falloff" to the color boundary; the ACES rendering on the right has a very sharp edge to the purple region surrounding the bright white of the headlight. 

Going from about a 5X blowup to a 15X blowup, in extreme close-up the difference is clear. 

It should be noted that, in a subsequent take with the lens closed down one stop, the purple fringing was much reduced and the problem virtually eliminated. The point here is not to show the better handling of the edge case in the ARRI rendering, but rather to demonstrate the edge case so that it may easily be avoided.

A2: ACES isn’t a ‘universal cure’. The difficult problems that come with near-monochromatic stimuli, mixed lighting, etc., are still present in ACES; there is no ‘free lunch’. If you’re aware of this, and avoid the ‘gotcha’ of being buzzword-driven rather than needs-driven, then do your standard pre-production tests, use our documentation, and you will get predictable, solid results.

Q: Are there reference images available to verify that an ACES pipeline is producing correct results?  

A: Yes. ARRI provides the following downloadable zip file containing a sample ARRIRAW file and derivative images, including a Log C DPX frame, an ACES OpenEXR frame and display-ready frames. These can be used to verify that your ACES pipeline is producing results that match ARRI’s reference results.

Q: So is there an overall message about ACES from ARRI?

A: ACES is an alternative to ARRI’s standard workflow, and almost all the standard tools now support it. We think that if, after testing, you do choose to go with an ACES workflow, ARRI provides the best and most comprehensive ACES support of any camera vendor.