Meeting Summaries 

ACES Output Transforms VWG 

Meeting #159, July 3rd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Jeffrey D Mathias
Carol Payne
Joshua Pines
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: Nick and I have looked into reducing the number of lookup table entries, and we have ongoing transform ID discussions. My code only stores some attributes in the lookups: I calculate everything temporarily and then throw away what's not used by the final algorithm. For the reach gamut (AP1) we calculate a cusp and a max M value at the peak. Max M is used by both the chroma compression and the gamut mapper. The cusp is only used by chroma compression, which only uses the M value. The CTL stores a J value as well, which is unused. My code also resamples the lookup to store values at the same hues as the limiting gamut lookup, so it doesn't need to store the reach cusp h either. Nick wondered if we could drop the cusp M as well and calculate something from the max M, but that does change the behavior slightly. [A sketch of the reduced table appears after these notes.]
  • Nick Shaw: Pekka may have tried this already. The cusp M is used for normalization in the chroma compression. That cusp is somewhere on the slope down from the M at limitJmax. Although I said before that we couldn't derive the cusp from that, because they aren't at the same hue, I realized the hue is only correct at the cusp and max M; at every other J value we're interpolating and approximating. Those are really two arbitrary sample points on the curve, so if you only have one you can approximate the other. I made a DCTL version where I normalize to the M value of the boundary at the source J. Because the limit calculation for the toe function also uses that boundary, the limit now becomes 1.0 as it cancels out. I didn't test round trips, and it does change the look, but only in HDR. I could recreate the match by tweaking the chroma compression parameters. Doing this means I don't need to store the cusp at all. [A sketch of this normalization appears after these notes.]
  • Pekka Riikonen: That is the one thing I didn't try. I tried deriving the max M from the cusp. But it seems you didn't derive the same M value.
  • Kevin Wheatley: No. It's slightly different, and varies because of how the cusp changes with hue.
[Nick showed a plot of how reach cusp M and reach max M vary with hue]
  • Kevin Wheatley: We realized the max M curve only has three corners, because without a gamut peak there are no secondary cusps. The biggest difference I saw was in the blues.
  • Nick Shaw: To be clear, I wasn't deriving the cusp M, because to do that we would still need to store the cusp J, which cancels out the benefit. It seemed logical to me to normalize to the boundary at the source J, as chroma compression only changes M, so everything moves horizontally.
  • Doug Walker: If you remove the luminance dependency, would that help make different outputs match better?
  • Pekka Riikonen: I got the best match by normalizing to cusp M.
  • Nick Shaw: This approach decouples it slightly from the target.
  • Kevin Wheatley: But the tone scale has already shifted things from where they would be. The bottom part shifts less, so the biggest change is in the highlight rendering.
  • Nick Shaw: Because it does change the look, it's probably not something we should rush into the release; we don't have enough time to test it. Maybe for 2.1.
  • Kevin Wheatley: The outcome is that Rémi's code and the CTL store JMh for the reach cusp, but at least J is not needed. If we use one shared set of hue samples we potentially lose the exact corners, so I was looking into adding extra samples just at the corners for the reach cusp. You need to sample the display corners and the reach corners, with enough samples in between. I didn't get time to try that, or to try fewer than 360 samples.
  • Doug Walker: If you definitely have the key points you could probably lower the number of samples elsewhere.
  • Kevin Wheatley: Maybe we could have uniform hue sampling, plus the extra six for the corners. [A sketch of this sampling appears after these notes.]
  • Scott Dyer: I posted an updated spreadsheet with a proposed alternate ID format. I haven't yet heard back from those who were concerned about parsing them.
  • Carol Payne: Is it OCIO that you need feedback from?
  • Scott Dyer: Thomas was one person who expressed concern about the previous IDs. Thomas and Daniele suggested they could be better but haven't given me examples of how. I want to get it right once.
  • Nick Shaw: I think Thomas was concerned about procedural config generation, which is easier with explicit structured IDs. Less looking up special cases in a spreadsheet.
  • Scott Dyer: I put the rendering space and white first to make it easy to identify transforms which use the same rendering.
  • Kevin Wheatley: I prefer the word "in" to "as".
  • Scott Dyer: At some point there needs to be logic to parse the IDs into parameters for a transform.
  • Kevin Wheatley: In the config generation we assume a correlation between the ID and CTL file path. It would be nice to maintain that.
  • Scott Dyer: I based my list on the OCIO spreadsheet. One difficulty is that the cinema and SDR 100 nit transforms are essentially the same, whereas HDR is defined in absolute terms.
  • Joshua Pines: That's been working for the last 20 years. Changing it would confuse people. In theatrical 1.0 corresponds to 48 nits and in SDR it's 100 nits.
  • Scott Dyer: The IDs say 1000 or whatever for HDR, but the SDR ones don't say because they could be 100 or 48.
  • Kevin Wheatley: I would prefer duplication, where two CTLs may have the same parameters, but the user name differentiates them.
  • Nick Shaw: The CTLs are different, because they include the encoding, so have different EOTFs.
  • Joshua Pines: I would agree with labelling all of them with nit levels, which conveys the intended viewing level.
  • Carol Payne: We would do that in OCIO anyway.
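
Illustrative Sketches

The sketches below are rough Python illustrations of the ideas discussed above, not the actual CTL/DCTL implementations; all function and parameter names are placeholders.

First, a minimal sketch of the reduced per-hue table Kevin describes, assuming hypothetical solver functions (solve_limit_cusp, solve_reach_cusp_M, solve_reach_max_M). Everything is computed per hue, then only the values the run-time algorithm needs are kept: the limiting-gamut cusp, the reach cusp M (its J and h are dropped), and the reach max M, all sampled at the same hues as the limiting-gamut table.

    def build_tables(hues, solve_limit_cusp, solve_reach_cusp_M, solve_reach_max_M):
        # hues: one shared set of hue samples for both the limiting-gamut and reach data.
        table = []
        for h in hues:
            limit_J, limit_M = solve_limit_cusp(h)   # limiting-gamut cusp: keep J and M
            reach_cusp_M = solve_reach_cusp_M(h)     # reach cusp: keep only M (J, h discarded)
            reach_max_M = solve_reach_max_M(h)       # reach M at the peak (limitJmax)
            table.append((h, limit_J, limit_M, reach_cusp_M, reach_max_M))
        return table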
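
Second, a minimal sketch of the normalization Nick describes, assuming a stand-in toe() curve and a reach_boundary_M(J, h) helper; the k1/k2 values are arbitrary illustration numbers, not the production parameters. M is normalized to the reach boundary M at the source J rather than to the reach cusp M, so the toe limit, which previously also came from that boundary, cancels to 1.0.

    import math

    def toe(x, limit, k1, k2):
        # Stand-in toe-style compression curve (an assumption, not the shipped one).
        k2 = max(k2, 1e-3)
        k1 = math.sqrt(k1 * k1 + k2 * k2)
        k3 = (limit + k1) / (limit + k2)
        return 0.5 * (k3 * x - k1 + math.sqrt((k3 * x - k1) ** 2 + 4.0 * k2 * k3 * x))

    def compress_M(J, M, h, reach_boundary_M, k1=0.25, k2=0.3):
        norm = reach_boundary_M(J, h)   # reach boundary M at the source J and hue
        if norm <= 0.0 or M <= 0.0:
            return M
        # Because the normalization and the toe limit use the same boundary,
        # the limit reduces to 1.0 here.
        return toe(M / norm, limit=1.0, k1=k1, k2=k2) * norm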
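
Third, Kevin's suggestion of uniform hue sampling plus extra samples at the corners might look like this; the corner hues themselves (from the display and reach primaries/secondaries) would be computed elsewhere and passed in.

    def build_hue_samples(corner_hues, n_uniform=360):
        # Uniform samples around the hue circle...
        uniform = [i * 360.0 / n_uniform for i in range(n_uniform)]
        # ...plus explicit samples at the corner hues, so the tables hit the
        # cusps exactly instead of interpolating across them.
        hues = sorted(set(uniform) | {h % 360.0 for h in corner_hues})
        return hues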