ACES Gamut Mapping Architecture VWG - Technical Documentation Deliverable
February 1, 2021
Authors: Carol Payne, Matthias Scharfenberg, Nick Shaw

Introduction

This document details the specification of the final CTL deliverable to be added to the ACES repository for release in ACES version 1.3. The original scope of the Gamut Mapping Architecture group was broad, so this document also includes a summary of the research done to arrive at this deliverable, along with workflow requirements and implementation considerations for the future Implementation VWG.

History / Research

When the proposal for this group was written, the main issue was outlined as:
  • Users of ACES are experiencing problems with out-of-gamut colors and the resulting artifacts (loss of texture, intensification of color fringes). This issue occurs at two stages in the pipeline. 

  1. Conversion from camera raw RGB or from the manufacturer’s encoding space into ACES AP0 
  2. Conversion from ACES AP0 into the working color space ACES AP1

It was acknowledged early on in the group that this artifacting can also occur at the VFX/color grading and Output Transform stages of the pipeline. 

The working group chairs set the scope:
  • Propose transforms between color spaces that avoid or reduce color clipping. Solutions for this may include: 
    • Proposing a suitable color encoding space for digital motion-picture cameras.
    • Proposing a suitable working color space.
    • Proposing a suitable gamut mapping/compression algorithm that is robust and invertible and performs well with wide gamut, high dynamic range, scene-referred content.

The group started out investigating the working and encoding spaces (ACES 2065-1 and ACEScg). However, it was agreed early on that, although the possibility of creating a new ACES working space which mitigates common gamut issues should not be discounted, it would require a very strong case as to the benefits. Changing a core component of ACES could introduce backwards compatibility issues, and any new space would be based only on the situation at the current time, raising the possibility of having to change the working space repeatedly in the future. The focus therefore moved to the third option: a suitable algorithm that solves the artifacting while maintaining as much of the current ACES standards and structure as possible. The approach chosen is one of gamut compression, and will be referred to as such from here on. It deals with the ACES image data “as is”, and simply strives to convert it into less problematic image data. 

Based on the history above, the general working assumptions were:
  • Samples are relative scene exposure values (i.e. scene-referred linear data) with no assumed min/max value range boundaries
  • The gamut mapping operator is per-pixel only (i.e. not spatial or temporal)

The stated ideals for a gamut compression algorithm were:
  • Exposure invariance: f(a·RGB) = a·f(RGB)
  • Source gamut agnosticism
  • Monotonicity
  • Simplicity – suited to a fast shader implementation
  • Invertibility (see caveats later in document)
  • Colors in a “zone of trust” will be left unaltered

While a suitable algorithm should be able to map arbitrary gamut A into arbitrary gamut B, it should not be a requirement that all source data must be contained within gamut A. Nor is it necessarily a requirement that the output should be entirely bounded by gamut B. Indeed, allowing extreme source values to be mapped to output values close to, but not within, the target gamut means that the compression function does not need to tend to the horizontal at the boundary. This means that its inverse will not tend to the vertical, which is beneficial for invertibility.
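To make these properties concrete, the following Python sketch implements a hypothetical per-pixel, distance-based compression operator. The Reinhard-style rational curve and the threshold and limit values are illustrative assumptions, not the curve or parameters adopted by the working group; the sketch only demonstrates exposure invariance, monotonicity, a pass-through “zone of trust”, and exact invertibility.

```python
# Illustrative sketch only: the curve and parameter values below are
# assumptions for demonstration, not the working group's chosen algorithm.

def compress_dist(d, threshold=0.8, limit=1.2):
    """Monotonically compress a normalised distance.

    Distances at or below `threshold` (the "zone of trust") pass through
    unchanged. Larger distances are compressed toward an asymptote at
    `limit`, so the curve never goes horizontal at the gamut boundary
    (d = 1) and its inverse never goes vertical.
    """
    if d <= threshold:
        return d
    span = limit - threshold
    x = d - threshold
    return threshold + x / (1.0 + x / span)

def uncompress_dist(c, threshold=0.8, limit=1.2):
    """Exact inverse of compress_dist (valid for c < limit)."""
    if c <= threshold:
        return c
    span = limit - threshold
    y = c - threshold
    return threshold + y / (1.0 - y / span)

def compress_pixel(rgb, threshold=0.8, limit=1.2):
    """Per-pixel gamut compression of an (R, G, B) tuple.

    Distances are ratios of each component's deviation from the
    achromatic axis (here, max(R, G, B)), so for any a > 0:
        compress_pixel(a * rgb) == a * compress_pixel(rgb)
    i.e. the operator is exposure invariant.
    """
    ach = max(rgb)
    if ach == 0.0:
        return tuple(rgb)
    dist = [(ach - c) / abs(ach) for c in rgb]
    comp = [compress_dist(d, threshold, limit) for d in dist]
    return tuple(ach - d * abs(ach) for d in comp)
```

With these illustrative parameters, an out-of-gamut pixel such as (1.0, -0.25, 0.5) is pulled toward the gamut (the negative green sample becomes only slightly negative), while extreme source distances asymptotically approach the limit of 1.2 and so land near, but not necessarily within, the target gamut, matching the behaviour described above.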

Because the unreal colors that occur result from the mismatch between a camera and a human observer (among other causes), and are outliers in the residual error of a transform optimized for a subset of important “memory” colors, what they “should” look like is somewhat undefined. The important thing is to remap them into values which are plausible rather than “accurate”.

What was determined to be outside the scope:
  • Colorimetric accuracy or spectral plausibility of input device transforms (IDTs) 
  • Display gamut mapping. (Required modifications to the RRT/ODT will need to be addressed by a subsequent group.)
  • Customizing for specific input/output gamuts
  • Working in bounded or volume-based gamuts
  • Actions which could limit creative choices further down the line (e.g. excessive desaturation)

User Testing

Once the working group settled on the baseline algorithm and its properties (discussed in the technical specification below), a set of targeted, small-scale user tests was conducted to ensure the foundations of the work were solid. The testing was composed of two groups: VFX compositors and colorists. Between these two disciplines, every major use case for the gamut compression algorithm could be tested and measured. The group gathered an open repository of test images that clearly exhibited the problem to be solved, and then derived a set of test scenarios for each group covering keying, blur, grain matching, hue adjustment, and more. The tests were conducted in Nuke and Resolve, on both SDR and HDR monitors. 

Overall, the results of the user testing were positive and uncovered no major issues in the algorithm functionality. 75% of compositors and 96% of colorists stated that using the algorithm helped them complete their work and achieve their creative goals. For full user testing results, please refer to the working group historical repository.