Meeting Summaries 

ACES Output Transforms VWG 


Meeting #153, May 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas

Meeting Notes

  • Scott Dyer: For documentation I want a list of key changes from ACES 1. My list so far is:
      - fix for blue LED and similar artifacts
      - gamut mapping instead of clipping
      - hue preservation with the tone scale
      - better HDR / SDR match
      - lower mid-slope contrast and softer highlight roll-off
      - tone scale automatically adapts with peak luminance
      - improved invertibility
  • Nick Shaw: SDR to HDR is now a continuum, with continuously varying mid grey level.
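A loose sketch of that continuum idea, in Python: the transform is parameterised by peak luminance, so the output level of an 0.18 scene value rises smoothly rather than jumping between discrete SDR and HDR renderings. The power-law relation and constants below are illustrative assumptions only, not the actual ACES 2.0 tone scale.

```python
# Illustrative only: a hypothetical smoothly varying mid-grey level.
# The sdr_mid value and exponent are placeholders, not ACES 2.0 constants.
def mid_grey_nits(peak_nits, sdr_mid=10.0, exponent=0.85):
    """Hypothetical output level (nits) of an 0.18 scene value at a given peak."""
    return sdr_mid * (peak_nits / 100.0) ** exponent

for peak in (100, 300, 600, 1000, 2000, 4000):
    print(peak, round(mid_grey_nits(peak), 2))
```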
  • Kevin Wheatley: People may ask why it took so long. The pandemic is one reason. But also some of our requirements are in tension with each other. Nice look out of the box, but also reaching the corners, for example. We took a while to find the balance.
  • Scott Dyer: Each requirement is reasonably simple on its own, but making them all work together is hard. We need to explain our decisions, and the things we had to consider that other renderings don't.
  • Kevin Wheatley: People may say we ultimately have a per channel adjustment like the original, so why is it better? But it's more complex, and the tone scale is only applied to lightness. We tried more complex models, but they weren't as controllable, or were too complex. That was part of the journey. We spent a long time noodling edge cases.
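A toy illustration of the distinction Kevin draws between a per-channel adjustment and a tone scale applied only to lightness: compressing each channel separately alters the channel ratios (and so the hue), while compressing a single lightness-like value and rescaling all channels by the same factor preserves them. The Reinhard-style curve and the use of the channel maximum as the "lightness" are stand-ins for illustration, not the ACES 2.0 rendering.

```python
# Illustration only: not the ACES 2.0 tone scale or rendering.
def tonescale(x):
    return x / (x + 1.0)          # simple soft roll-off for demonstration

rgb = (4.0, 1.0, 0.25)            # a saturated value above 1.0

# Per-channel: each channel is compressed by a different amount, so ratios (hue) change.
per_channel = tuple(tonescale(c) for c in rgb)

# Lightness-only: compress one intensity value, then scale every channel by the
# same ratio, so the channel ratios are unchanged.
intensity = max(rgb)              # crude stand-in for a lightness correlate
gain = tonescale(intensity) / intensity
lightness_only = tuple(c * gain for c in rgb)

print(per_channel)     # ratios altered
print(lightness_only)  # ratios preserved
```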
  • Nick Shaw: Those are the hard ones. The colors in the middle that are inside all the gamuts are the easy bit.
  • Alex Fry: We spent a long time trying to fit values from cameras that produce data outside AP1, but ultimately gave that up as impossible with our constraints.
  • Scott Dyer: I want to have an overview for a general audience, and then have all the detail for those who really want or need to know.
  • Kevin Wheatley: Jeffrey asked if the final version was v59 or v60. Definitely not v59. The CTL is the reference which should match v60. But there may be bugs. We know of one in v60 that Scott found, where a value for the hull gamma is reciprocated twice.
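For readers unfamiliar with the class of bug being described, a tiny hypothetical illustration of a value being reciprocated twice (this is not the actual CTL or Blink code): a quantity already stored as a reciprocal gets inverted again at the point of use, silently cancelling the intended operation.

```python
# Hypothetical illustration of the bug class, not the v60 code.
hull_gamma = 1.14                          # illustrative value only

inv_hull_gamma = 1.0 / hull_gamma          # stored as a reciprocal up front
exponent_used = 1.0 / inv_hull_gamma       # reciprocated a second time by mistake

# The value used ends up back at hull_gamma rather than 1/hull_gamma.
assert abs(exponent_used - hull_gamma) < 1e-12
```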
  • Scott Dyer: We'll announce when we have matching CTL and Blink.
  • Kevin Wheatley: Coming at it from scratch for my code, I got confused by how many times focusJ, slope and other things are recalculated.
[Kevin showed his sketch of the limit (actual and smoothed approximation), reach, and compression line]
  • Kevin Wheatley: The intersections we find for the limit and reach boundaries should be on the same line with the same slope.
  • Nick Shaw: The slope of the line at the top is modified by the focus distance gain.
  • Kevin Wheatley: Whatever the slope is, it should be the same slope used everywhere. That doesn't seem to be quite what happens. In the reach boundary search we use the previously found intersection to recalculate the slope and focusJ. I wouldn't be confident that would give the same result.
  • Nick Shaw: I think that's because when we had multiple ways of finding intersections, in that sub-function it didn't have access to the original values, so re-solved for them from what it did have. The theory is that any point on the line solves for the same values, which is what makes it invertible. But it would be better with the "flattened" code to use the original values.
  • Kevin Wheatley: I can do some tests to confirm that passing the values in rather than re-computing them gives the same result. The other thing I noticed was that the reach uses the model gamma, whereas the limit uses a constant passed in. Also, that gamma is constant as the dynamic range changes. Is that intentional?
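A minimal sketch of the consistency test Kevin proposes: define the projection line once from its original focusJ and slope, then check that re-deriving the slope from a point found later on that line gives back the same value. The line parameterisation and variable names are assumptions for illustration, not the reference CTL/Blink code.

```python
# Illustrative line parameterisation in (M, J); names are assumptions, not the reference code.
def line_point(focus_J, slope, M):
    """J value of the projection line at a given M (line passes through M=0, J=focus_J)."""
    return focus_J + slope * M

def solve_slope(focus_J, J, M):
    """Re-derive the slope from focus_J and a point (J, M) assumed to lie on the line."""
    return (J - focus_J) / M

# Original parameters, computed once for the sample being compressed.
focus_J = 42.0
slope = 1.7

# An intersection found later (e.g. with the reach boundary) on the same line.
M_hit = 25.0
J_hit = line_point(focus_J, slope, M_hit)

# In exact arithmetic the re-derived slope equals the original; in floating point
# it may drift slightly, which is why passing the original values through is safer.
slope_rederived = solve_slope(focus_J, J_hit, M_hit)
print(slope, slope_rederived, abs(slope - slope_rederived))
```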
  • Alex Fry: I tuned the reach against Rec.709 initially.
  • Kevin Wheatley: In the documentation we should preempt questions people may ask. Implementers may want to take shortcuts, so they need to understand what the code is doing. Also, at any given hue sample only one cusp can be exact, and the others will be interpolated. Is it legitimate to use one set of hue samples for all the values? I think the logical one to be exact for is the limiting gamut, as we're trying to hit that corner. An implementer might want to put everything in one table, and if they do, which hue samples should they use? As they are all approximations anyway, people could legitimately ask if it's OK to combine the tables. Thinking as an implementer, I would want to minimize the pre-computation and caching.
  • Nick Shaw: Because we are puffing out and smoothing the limit, but we don't smooth the reach, is it more important that the reach is accurate?
  • Kevin Wheatley: I think the opposite. I think it's more important that the cusp value of the actual target is accurate. If you start at the actual corner, puff out then clip back, you will definitely hit it. If your samples cut off the corner, you can't be certain puffing out and clipping will hit it. The obvious test is to try putting images through versions using each set of hue samples for everything, and see if the results are noticeably different. Everything makes some difference, even just whether you pre-calculate a value once or do it within a sub-function each time. We don't have a good metric for what is a good enough implementation. CLF had a metric, but is it appropriate for us?
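A rough sketch of the kind of hue-sampled cusp table under discussion: cusp (J, M) values are precomputed at a fixed set of hue samples and linearly interpolated in between, so only the sampled hues are exact and corners falling between samples get cut off. The table size, the placeholder cusp function and the wrap-around interpolation below are assumptions for illustration, not the reference implementation.

```python
# Illustrative hue-indexed cusp table; not the reference CTL/Blink tables.
import numpy as np

N = 360                                     # hypothetical number of hue samples
hues = np.linspace(0.0, 360.0, N, endpoint=False)

def cusp_JM(hue_deg):
    """Placeholder for the real per-hue gamut cusp solve (returns J, M)."""
    return (50.0 + 10.0 * np.cos(np.radians(hue_deg)),
            30.0 + 5.0 * np.sin(np.radians(hue_deg)))

table = np.array([cusp_JM(h) for h in hues])    # shape (N, 2): J and M per sample

def lookup_cusp(hue_deg):
    """Linear interpolation between the two nearest hue samples, wrapping at 360."""
    x = (hue_deg % 360.0) / 360.0 * N
    i0 = int(np.floor(x)) % N
    i1 = (i0 + 1) % N
    t = x - np.floor(x)
    return (1.0 - t) * table[i0] + t * table[i1]

print(lookup_cusp(123.4))
```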
  • Remi Achard: I had a related question. We currently calculate the tables for every peak value. Could we pre-calculate them for a couple of gamuts and then use those to derive values for every peak? I haven't checked.
  • Nick Shaw: I don't think that would work because the 'seams' at the primaries and secondaries of a gamut are not the same hue for all J values.
  • Kevin Wheatley: And the hue of the corners shift with the primaries. I wondered about that for the reach, because it's only a rough boundary to reach to.