Meeting Summaries 

ACES Output Transforms VWG 


Meeting #95, March 29th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Christopher Jerome
Zach Lewis
Thomas Mansencal
Jeffrey D Mathias
Carol Payne
Pekka Riikonen

Meeting Notes

  • Nick Shaw: My DCTL and GLSL implementations are a couple of steps behind the Blink. They are still at v31. When I originally posted my shader implementation of the DRT for Baselight it had a placeholder inverse, which was only the curve. I've now added a full inverse, but it has some NaN-type artifacts. The code is derived from the DCTL, and should be functionally identical, so I haven't yet found what's happening. The DCTL has a NaN check at the end that just sets NaNs to black, but we should really investigate what might cause them.
  • Kevin Wheatley: Daniele raised concerns over performance. Have you seen that?
  • Nick Shaw: It maintains 1080p25 on my 2019 16" Intel MacBook Pro, so I haven't noticed performance issues, and it's not optimized at all yet. It would be good if others could test different Baselight systems. The other thing I realized while working on the GLSL was that our current approximation of the intersection of the compression vector with the gamut boundary is calculated using the J-axis intersection and the pixel's JM value, and that gives a slightly different result for the compressed and uncompressed pixel values. So the inverse gamut compression doesn't use quite the same value. Finding the exact intersection of a straight line and a gamma curve is complex, but we don't need an exact value, as the gamma curve is only an approximation of the gamut shape. I came up with a new version that calculates the intersection only from the J-axis intersect and the slope, so it is identical in both directions.
[Nick showed his Desmos plot of the alternative approach]
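
A minimal Python sketch of the approach Nick describes. The gamma-curve boundary, its constants and the bisection solver below are hypothetical stand-ins rather than the actual DRT code; the point is only that a compression line parametrised purely by its J-axis intercept and slope gives the same boundary intersection whether it is evaluated from the original or the compressed pixel, since both lie on the same line.

    # Hypothetical lower gamut boundary in a J-M hue slice, approximated as a
    # power (gamma) curve up to the cusp. Constants are illustrative only.
    def boundary_M(J, J_cusp=60.0, M_cusp=40.0, gamma=1.15):
        return M_cusp * (max(J, 0.0) / J_cusp) ** (1.0 / gamma)

    # Compression line through (J_intercept, 0) with the given slope.
    def line_M(J, J_intercept, slope):
        return slope * (J - J_intercept)

    # Bisection on the difference of the two curves. An iterative solve is
    # acceptable here because the gamma curve is itself only an approximation
    # of the real gamut shape.
    def intersect_line_boundary(J_intercept, slope, lo=0.0, hi=100.0, iterations=30):
        for _ in range(iterations):
            mid = 0.5 * (lo + hi)
            if line_M(mid, J_intercept, slope) > boundary_M(mid):
                lo = mid   # line still above the boundary; crossing is at higher J
            else:
                hi = mid
        J = 0.5 * (lo + hi)
        return J, line_M(J, J_intercept, slope)

    # The original pixel and its gamut-compressed version lie on the same line,
    # so they share (J_intercept, slope) and therefore the same intersection.
    print(intersect_line_boundary(J_intercept=80.0, slope=-1.2))
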
  • Alex Fry: It would be interesting to see how good a match it is to the real boundary.
  • Nick Shaw: I think the difference is negligible. It is just exactly the same in both directions rather than almost the same. The real boundary is close to the gamma approximation for most hues, but I think around h=0 it bends a bit more, and we are still using the same gamma. Does that have any effect on the rendering being better for some hues than others?
  • Kevin Wheatley: Hue of zero has a b value of zero. We need to be sure our atan2 function handles that properly and consistently on all systems. Also, atan2 normally gives you radians, which we then convert to degrees; it's then converted back to radians when we use it later, and that can lose precision. Degrees are only useful for people, so removing the two conversions would help. Also, radians go from -pi to +pi, not 0 to 360. We need to be sure that's handled correctly.
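
A small sketch of the hue handling Kevin raises, assuming hue is derived from CAM-style opponent coordinates a and b (the input values below are hypothetical). It keeps hue in radians throughout, avoiding the radians-to-degrees-to-radians round trip, and wraps atan2's (-pi, +pi] output into [0, 2*pi).

    import math

    def hue_radians(a, b):
        # atan2 returns a value in (-pi, +pi]; for b == 0 and a > 0 it is exactly 0,
        # which is the h = 0 case mentioned above. Wrap negatives into [0, 2*pi)
        # so every system sees the same range.
        h = math.atan2(b, a)
        return h if h >= 0.0 else h + 2.0 * math.pi

    # Degrees are only needed when presenting values to people.
    h = hue_radians(0.25, -0.1)
    print(h, math.degrees(h))
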
[Pekka then showed his updated rev33 implementation with the improved curve match, as described in his ACES Central post]
  • Pekka Riikonen: In v32, the tone-scaled lightness divided by the original lightness at mid grey was different for different peak luminances. In v33 I modified the curves so they match at and below mid grey, as we had before.
  • Nick Shaw: The most important thing is creating a perceptual match between different peak luminances. But should we be careful we're not nulling out differences that should be there due to the model?
  • Pekka Riikonen: The differences aren't from the model. We're doing it with the tone-scale.
  • Kevin Wheatley: I was thinking about something similar. The tone-scale means we're feeding a different image into the model for each target, rather than rendering one image and asking the model to adapt it to different targets.
  • Nick Shaw: Is our entry point into the model wrong? Are we tone mapping in the wrong place and then needing to cancel out the effect of that?
  • Kevin Wheatley: I don't think so. Our tone mapping knows about the final target. It's an all-in-one transform, rather than the two-step process of making a rendered image and then mapping it to the target.
  • Pekka Riikonen: Another thing I looked at is different levels of exposure lift. Going from 10 nit grey at 100 nits to 15 nits at 1000 nits seems too much to me. Using 0.12 as w_g in the Daniele curve feels like a better match to me.
  • Nick Shaw: We said we wouldn't change the value of 15 nits at 1000, because nobody had objected to that.
  • Pekka Riikonen: ACES 1.1 gives a more finished image, so for our lower contrast image maybe a lower mid grey is more suitable, and people can add contrast and move grey if they want.
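
A rough Python illustration of the lift Pekka describes, assuming the mid-grey term of the Daniele curve has the form c_d * (1 + w_g * log2(n / n_r)), with c_d around 10 nits of grey at the n_r = 100 nit reference. The constants, and the 0.15 used for comparison, are assumptions chosen only because they reproduce the roughly "15 nits at 1000 nits" figure mentioned above; they are not asserted to be the current DRT values.

    import math

    # Approximate target mid-grey luminance in nits for a display peak of n nits.
    def mid_grey_nits(n, w_g, c_d=10.0, n_r=100.0):
        return c_d * (1.0 + w_g * math.log2(n / n_r))

    for w_g in (0.15, 0.12):
        print(w_g, [round(mid_grey_nits(n, w_g), 1) for n in (100, 600, 1000, 4000)])
    # w_g = 0.15 gives roughly 10 -> 15 nits going from a 100 to a 1000 nit peak;
    # w_g = 0.12 keeps the 1000 nit mid grey nearer 14 nits, i.e. less lift.
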
  • Alex Fry: Should we revisit our assumption that we do want to raise the mids with peak luminance?
  • Nick Shaw: Didn't people say they didn't like them being the same in 1.0, which is why 1.1 raised the HDR mids?
  • Alex Fry: There is some disagreement. What do people here think?
  • Nick Shaw: Consumer TVs don't do SDR at 100 nits, so mid grey isn't 10 nits. But PQ, being absolute, may be followed more accurately, so you may well get HDR greys where you specify, meaning HDR could look darker than SDR to home viewers.
  • Kevin Wheatley: We could look at what SDR mids are for consumer TVs.
  • Nick Shaw: Won't a 200 nit BT.1886 display just put mid grey at 20 nits instead of 10?
  • Kevin Wheatley: Depends on flare and display contrast.
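
A quick check of Kevin's point, using the BT.1886 EOTF (L = a * max(V + b, 0)^2.4, with a and b derived from the display white and black levels). V_grey below is the signal that gives 10 nits on an ideal 100 nit, zero-black display; the display parameters are illustrative.

    GAMMA = 2.4

    # ITU-R BT.1886 EOTF for a display with white level L_W and black level L_B.
    def bt1886(V, L_W, L_B):
        d = L_W ** (1 / GAMMA) - L_B ** (1 / GAMMA)
        a = d ** GAMMA
        b = L_B ** (1 / GAMMA) / d
        return a * max(V + b, 0.0) ** GAMMA

    V_grey = 0.1 ** (1 / GAMMA)   # ~0.383, i.e. 10 nits on a 100 nit, zero-black display
    for L_W, L_B in ((100, 0.0), (200, 0.0), (200, 0.2), (200, 1.0)):
        print(L_W, L_B, round(bt1886(V_grey, L_W, L_B), 1))
    # A 200 nit display with zero black does simply double mid grey to 20 nits;
    # raising the black level (flare, lower native contrast) pushes it higher still.
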
  • Pekka Riikonen: 0.12 gives a match to Jed's curve, if that's important.
  • Alex Fry: The SDR match to the average data is the only important match, I think.