This document is meant to serve as a source of truth and a summary of the work done over the first ten meetings of the ACES Gamut Mapping Virtual Working Group (VWG). It will serve as a platform for group consensus and as a reference as we move forward with gamut mapping algorithm development and testing.
HISTORY
A great deal of discussion has focused on the spectral sensitivities of cameras, and the possibility of taking these into account, or using spectral reconstruction techniques, as part of the gamut mapping approach. Any algorithm proposed in this group should be generalized and therefore should not presume to know the origin of every piece of image data being processed, or the spectral characteristics of the system which generated it. The gamut mapping approach we take should deal with the ACES image data “as is”, and simply strive to convert it into less problematic image data – what Daniele Siragusano refers to in his document as “gamut healing”. Any discussion of modifying the source data as it is transformed to ACES is the domain of a future IDT VWG, and outside the scope of this VWG. Therefore, to keep things succinct and on-topic, any such discussion which happened in the meetings will not be included in this document (unless a brief inclusion is necessary to place a relevant point in context).
It was agreed fairly early on that, although the possibility of creating a new ACES working space which mitigates common gamut issues should not be discounted, this would require a very strong case as to the benefits. Changing a core component of ACES would potentially introduce backwards compatibility issues, and would also be a decision based only on the situation at the current time, raising the possibility of having to change the working space repeatedly in the future.
Likewise, the option of using the 2006 CMFs instead of the 1931 set was discounted. Harald Brendel’s paper shows that there is no obvious benefit. The ACES framework is based around the 1931 2° standard observer, and other common color spaces (e.g. sRGB) have no spectral definition, so there would be backwards compatibility issues to deal with. A conversion methodology would also have to be defined, which is not something appropriate for this group.
This group is focused on gamut management of scene-referred data. While it is clear that some degree of gamut compression is beneficial as part of a display transform, that will fall under a different group. It has been noted that if the scene-referred data has already been compressed into a more reasonable domain, a “lighter touch” may be required of any display-referred mapping. It was also noted that while color appearance models may be useful for display-referred gamut mapping, they are not relevant to our scene-referred approach. Perceptual concepts such as hue, saturation and lightness do not have any real meaning for scene-referred data, as they are dependent upon the observer and their adaptation state. Daniele noted that HSV is a problematic projection in which to modify colors, and instead suggested investigating opponent spaces. HSV might still be useful to provide the saturation component as a multiplication factor in the opponent space. Opponent color spaces that use a 0-1 domain could still be used by normalising the RGB components to the max value of the triplet, which could help in maintaining exposure invariance.
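As a rough illustration of that normalisation idea, the sketch below (Python, purely illustrative – none of these names or parameter choices are group proposals) expresses a scene-linear triplet relative to its maximum component; because every component is divided by the same per-pixel value, a uniform exposure scale cancels out:

```python
import numpy as np

def max_normalised(rgb):
    # Express a scene-linear RGB triplet relative to its largest component.
    # Dividing every component by the same per-pixel value means a uniform
    # exposure scale cancels out, so the result is exposure invariant.
    rgb = np.asarray(rgb, dtype=np.float64)
    peak = np.max(rgb)
    if peak <= 0.0:
        # Degenerate case (black or all-negative triplet); handling of such
        # values is glossed over in this sketch.
        return np.zeros_like(rgb)
    return rgb / peak

# Both exposures of the same colour give identical normalised components.
print(max_normalised([0.18, 0.04, 0.90]))   # ~[0.2, 0.044, 1.0]
print(max_normalised([0.36, 0.08, 1.80]))   # same values
```

Note that negative components, which are common in the problematic ACES data under discussion, pass through this normalisation untouched; how they should be treated is exactly the question the group is exploring.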
SETTING THE SCOPE
The aim is to change only as much as is necessary to address the current issues, and to avoid the temptation to go back and tinker with prior decisions about how ACES functions. If future work (such as improvements to IDTs) affects work we do in this group, we will revise at that time.
Based on the history above, our general working assumptions are:
Samples are relative scene exposure values (i.e. scene-referred linear data) with no assumed min/max value range boundaries
The gamut mapping operator is per-pixel only (i.e. not spatial or temporal)
Some stated ideals for a gamut mapping algorithm are:
Exposure invariance – f(a·RGB) = a·f(RGB) (a numerical check is sketched after this list)
Source gamut agnosticism
Monotonicity
Simplicity – ideally suited to a fast shader implementation
Invertibility (see caveats later in the document)
Colors in a “zone of trust” should be left unaltered
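For the exposure invariance ideal above, a simple numerical check might look like the following sketch (the operator f, the test scales and the tolerance are placeholders, not a reference implementation):

```python
import numpy as np

def check_exposure_invariance(f, rgb, scales=(0.5, 2.0, 16.0), tol=1e-6):
    # Verify f(a * RGB) == a * f(RGB) for a few exposure scale factors a.
    # f is any candidate per-pixel gamut mapping operator that takes and
    # returns an RGB triplet as an array-like of three floats.
    rgb = np.asarray(rgb, dtype=np.float64)
    return all(
        np.allclose(f(a * rgb), a * np.asarray(f(rgb), dtype=np.float64), atol=tol)
        for a in scales
    )
```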
While a suitable algorithm should be able to map arbitrary gamut A into arbitrary gamut B, it should not be a requirement that all source data must be contained within gamut A. Nor is it necessarily a requirement that the output should be entirely bounded by gamut B. Indeed, allowing extreme source values to be mapped to output values close to, but not within, the target gamut means that the compression function does not need to tend to the horizontal at the boundary. This means that its inverse will not tend to the vertical, which is beneficial for invertibility.
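To make that curve shape concrete, here is a minimal sketch of one possible compression function (a simple Reinhard-style roll-off; the threshold and limit values are purely illustrative, not a group proposal). Distances are assumed to be normalised so that 1.0 lies on the target gamut boundary, and the limit is deliberately placed just outside it:

```python
def compress(d, threshold=0.8, limit=1.2):
    # d is a per-pixel "distance" measure, normalised so that d = 1.0 lies
    # on the target gamut boundary. Values below `threshold` (the zone of
    # trust) are untouched; values above it roll off smoothly and asymptote
    # toward `limit`, which sits just outside the boundary.
    if d <= threshold:
        return d
    span = limit - threshold
    x = (d - threshold) / span
    return threshold + span * x / (1.0 + x)   # Reinhard-style roll-off

def uncompress(cd, threshold=0.8, limit=1.2):
    # Exact inverse of compress() for compressed distances below `limit`.
    # Its slope only becomes infinite as cd approaches `limit`, which is
    # outside the target gamut, so inversion at the boundary itself is tame.
    if cd <= threshold:
        return cd
    span = limit - threshold
    y = (cd - threshold) / span
    return threshold + span * y / (1.0 - y)
```

With these illustrative values the gamut boundary itself (d = 1.0) maps to roughly 0.93, and the forward slope there is about 0.44, so the slope of the inverse at the boundary remains finite.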
Because the unreal colors which occur are a result of the mismatch between a camera and a human observer (among other causes), and are outliers in the residual error of a transform optimized for a subset of important colors (memory colors / Pointer’s gamut?), what they “should” look like is somewhat undefined. The important thing is to remap them into values which are plausible rather than “accurate”.
What is outside the scope:
Colorimetric accuracy or spectral plausibility of input device transforms (IDTs)
Full capture-to-display gamut mapping (required modifications to the RRT/ODT will need to be addressed by a subsequent group)
Customizing for specific input/output gamuts
Working in bounded or volume-based gamuts
Anything which could limit creative choices further down the line (e.g. excessive desaturation)