Meeting Summaries 

ACES Output Transforms VWG 


Meeting #155, June 5th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Chris Brejon
Daniel Brylka
Alex Forsythe
Jeffrey D Mathias
Willem Nagtglas
Doug Walker

Meeting Notes

  • Kevin Wheatley: Only a couple of items to discuss, logged by Nick. There was also a v60 parameter that was not correct.
  • Nick Shaw: Comparing the CTL and the v60 Blink, I wasn't quite seeing a match, and I realized the gamut compression exponent was still set to 1.2 in the Nuke script when it is 1.0 in the CTL, to make it like Reinhard. I opened a PR.
  • Kevin Wheatley: Not a big deal, but worth merging and re-baking to bring things into line. Nick also found a case where a crash is caused by the CTL indexing -1 into the array.
  • Nick Shaw: I batch rendered all the images in our test set through the Rec.709 CTL, and all except three render fine: the dominant wavelength image, The Lego Movie and Red Xmas. The dominant wavelength image has some inf pixels, and I suspect that is the cause. The comments in the code talk about one extra entry for wrap-around, but that isn't actually implemented (see the indexing sketch after these notes).
  • Scott Dyer: I put it in at one point, but I had problems and took it out again. We need to write tests for hue wrapping, and other things where we know what it should be doing. I am doing manual tests, but we need automated ones.
  • Kevin Wheatley: The code currently initializes the high value beyond the end of the table. The comment says it's ok because of the extra value, but that's not there so we need to subtract one.
  • Nick Shaw: The error I saw was off the start, not the end of the table.
  • Kevin Wheatley: Nick also commented, as others have, that a bunch of things could be initialized once, but aren't. He also found part of the computation that has a negligible effect.
  • Nick Shaw: At single-precision float, with an L_A of 100, it has no effect at all, so it's extra code adding nothing. We also have a lot of extra lines that completely cancel out because D=1.0 when we discount the illuminant (see the adaptation sketch after these notes).
  • Kevin Wheatley: If it only happens once at initialization it doesn't matter if it's over-complex, and it is at least matching the model. So the crashing bug should be fixed. The other is up for debate.
  • Remi Achard: I have done a fairly straight port of the CTL as a PR in OCIO. I moved the achromatic first part of the model out of the per-pixel path, but there are still a lot of optimizations to do. Doug had some concerns about the amount of work needed. My code exposes the transform parameters – peak luminance, limiting gamut, and AP1 clip. I need to expose the encoding primaries for the creative white. I plan to have the tone mapper as a separate OCIO operator.
  • Kevin Wheatley: I think Doug was commenting on the multiple scales back and forth to 100, and also whether the tone curve could be approximated with a spline.
  • Nick Shaw: How easy is it to make a spline to match arbitrary peak luminances with our parameterized curves? Ideally we would have modified the maths to work on J directly, but we have to go back to luminance because the tone curve is defined in that domain.
  • Kevin Wheatley: I've been looking at the gamut mapper, as I think some things are calculated multiple times which could be done once per hue and included in the lookup. If we stored multiple values at one set of sampling intervals we could do fewer lookups (see the lookup sketch after these notes). Also the chroma compression and the gamut mapper both use the reach limit, but one is nominally related to source parameters and the other to output parameters. At the moment the model parameters are the same for input and output, but if somebody wanted to change that, each reach gamut should use the appropriate parameters.
  • Doug Walker: I have some questions as an implementer. I haven't had time to follow the development, but as an R&D project it's been an amazing accomplishment. Looking at the CTL and Remi's port of it, it looks like a research project. The conversions back and forth between spaces make the algorithm more understandable, but as an implementer I'm thinking about how to productize this. The conversions may not be desirable in a production context where speed is a priority. In ACES 1, some implementers baked a 3D LUT, others built a functional implementation. OCIO went from LUTs to functional, which is preferable for VFX where physically accurate values matter. I was hoping we wouldn't need LUTs for v2. There is a simplification pass needed. For example, the tonescale converts J to luminance with a bunch of power functions, runs Daniele's parametric tone curve and then converts back to J with more power functions (a structural sketch of this path follows these notes).
  • Nick Shaw: That's the simplified version! Originally it used the full model to go back and forth.
  • Kevin Wheatley: The Blink was built with many modules all working in the spaces they were defined in, and the Daniele model was defined in terms of display luminance.
  • Doug Walker: OCIO won't be the only ones looking at this. In the game industry they often have an 'ACES' option which is an approximation of ACES. I'm thinking what do I do with this to simplify it into product-ready code. Am I willing to convert a whole series of operations into a rational polynomial approximation, and change the algorithm, which is a much bigger project? I'm wondering if we have time to do all that and hit the VFX platform deadline. I'm looking for guidance on how to approach this as someone who wants a shader that will run on a GPU.
  • Nick Shaw: My DCTL is a shader implementation that can sustain 24fps ALEXA 65 ARRIRAW at UHD.
  • Kevin Wheatley: We always had an eye to not going too crazy, which is one reason we picked a simpler model. We never had a budget for how long a frame should take to render on a given GPU.
  • Nick Shaw: And we always knew it would be computationally more expensive than ACES 1 because we're doing a lot more.
  • Doug Walker: It feels that, given enough time, there are a lot of things that could be simplified, such as an approximation of the tone curve applied in J. We could do that for each block, but the result might not match exactly and it would take time. Ideally that's something implementers would like to have. Can the ACES group take that on? Or do you leave it to individual implementers, which means there will be a bunch of different implementations that make different trade-offs.
  • Kevin Wheatley: That would have been a question for the TAC. I've not done too much optimization so as not to paint implementers into corners.
  • Alex Fry: Even if the tone curve gets baked to a spline approximation, some version of the code has to relate that back to the values things are defined in. The gamut hull approximation was always intended to be implementer friendly. Ideally we'd have an iterative exact boundary finder (a bisection sketch follows these notes).
  • Doug Walker: Are implementation related considerations what you're focusing on right now? Looking for things that make it more performant?
  • Kevin Wheatley: Not the group. I am doing experimentation for my own purposes, but it's not part of the CTL. We don't want to keep changing the CTL because people may be tracking it. Nick's J to Y and back to J was already an optimization but we didn't try to capture the end to end curve as a spline or single function. That could be done if it was a particular pain point?
  • Doug Walker: Has any profiling been done to see what aspects take longest?
  • Kevin Wheatley: No profiling, but I changed the lookups in the Blink from linear to binary searches, which sped things up (see the search sketch after these notes).
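
Implementation Sketches

The table-indexing crash discussed above (indexing -1 off the start of the table, and initializing a value beyond its end) comes down to hue wrap-around. Below is a minimal sketch, assuming a hypothetical uniformly sampled hue table; neither the table contents nor the function names come from the CTL.

```python
# Hypothetical hue table with explicit wrap-around handling (not the CTL code).
import numpy as np

N = 360                                    # hypothetical number of hue samples
hues = np.arange(N) * 360.0 / N            # sample positions in degrees
table = np.cos(np.radians(hues))           # placeholder per-hue data

# Option A: one duplicated entry at the end, so an index of N is valid
# ("one extra entry for wrap around", as the CTL comments describe).
table_wrapped = np.append(table, table[0])

def lookup(h):
    """Linearly interpolate the table at hue h (degrees), wrapping safely."""
    h = h % 360.0                          # guards against h < 0 or h >= 360,
                                           # which is what produces a -1 index
    x = h / 360.0 * N
    lo = int(np.floor(x))                  # 0 .. N-1 after the modulo
    hi = lo + 1                            # N is valid thanks to the extra entry
    t = x - lo
    return (1.0 - t) * table_wrapped[lo] + t * table_wrapped[hi]

# Option B: no extra entry; wrap both neighbour indices instead.
def lookup_no_extra(h):
    x = (h % 360.0) / 360.0 * N
    lo = int(np.floor(x)) % N
    hi = (lo + 1) % N
    t = x - np.floor(x)
    return (1.0 - t) * table[lo] + t * table[hi]
```

Option A matches the "one extra entry for wrap around" that the code comments promise; Option B wraps both neighbour indices instead. Non-finite pixel values (the inf pixels Nick suspects) would still need their own guard before any table lookup.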
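On the lines Nick says cancel out: the model inherits its adaptation maths from CAM16 (via Hellwig 2022), where discounting the illuminant forces the degree of adaptation D to 1.0. The sketch below assumes the CTL follows the standard CAM16-style formulas; the white values are placeholders, and the F_L term is only one candidate for the "negligible effect" computation, not something confirmed in the meeting.

```python
# CAM16-style adaptation terms, shown to illustrate why parts of the maths
# drop out.  Assumed, not copied from the CTL; values are placeholders.
import numpy as np

L_A = 100.0          # adapting luminance used by the transform
F   = 1.0            # surround factor

# Degree of adaptation.  When the illuminant is discounted the model simply
# forces D = 1.0, so this expression becomes dead code.
D = np.clip(F * (1.0 - (1.0 / 3.6) * np.exp((-L_A - 42.0) / 92.0)), 0.0, 1.0)
D = 1.0              # discount-the-illuminant

# Per-channel adaptation factors.  With D = 1 the "+ 1 - D" terms vanish and
# D_RGB reduces to Y_W / RGB_W, i.e. several lines cancel completely.
Y_W = 100.0
RGB_W = np.array([95.0, 100.0, 105.0])     # hypothetical white in sharpened RGB
D_RGB = D * Y_W / RGB_W + 1.0 - D          # == Y_W / RGB_W when D == 1

# Luminance-level adaptation factor F_L.  With L_A = 100 the first term is
# about 1.6e-9 next to about 0.79, below single-precision resolution -- one
# candidate for a computation with "no effect at all" at float32.
k = 1.0 / (5.0 * L_A + 1.0)
F_L = 0.2 * k**4 * (5.0 * L_A) + 0.1 * (1.0 - k**4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0)
```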
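Kevin's "store multiple values at one set of sampling intervals" amounts to widening the per-hue table so that one index and one interpolation weight serve several quantities at once. A sketch under that assumption; the column names and values are hypothetical placeholders, not the CTL's variables.

```python
# One table row per hue sample, one column per precomputed per-hue quantity,
# so the per-pixel cost is a single index + weight.  Entirely illustrative.
import numpy as np

N = 360
hues = np.arange(N) * 360.0 / N

table = np.stack([
    50.0 + 10.0 * np.cos(np.radians(hues)),      # e.g. gamut cusp J  (placeholder)
    30.0 + 5.0  * np.sin(np.radians(hues)),      # e.g. gamut cusp M  (placeholder)
    60.0 + 8.0  * np.cos(np.radians(2 * hues)),  # e.g. reach-limit M (placeholder)
], axis=1)

def lookup_all(h):
    """Compute the index and weight once, interpolating every column at once."""
    x = (h % 360.0) / 360.0 * N
    lo = int(np.floor(x)) % N
    hi = (lo + 1) % N
    t = x - np.floor(x)
    return (1.0 - t) * table[lo] + t * table[hi]   # [cusp_J, cusp_M, reach_M]
```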
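The tonescale path Doug describes, J converted to luminance with power functions, the Daniele parametric curve applied in luminance, then the result converted back to J, has roughly this shape. The exponent and curve parameters below are simplified stand-ins, not the ACES 2.0 constants.

```python
# Structural sketch of the J -> Y -> tone curve -> Y -> J round trip.
# All constants are made-up placeholders for illustration.
import numpy as np

P_EXP   = 1.0 / 0.59     # hypothetical J <-> Y exponent
Y_WHITE = 100.0

def J_to_Y(J):
    return Y_WHITE * (J / 100.0) ** P_EXP

def Y_to_J(Y):
    return 100.0 * (Y / Y_WHITE) ** (1.0 / P_EXP)

def tonescale_Y(Y, peak=1000.0):
    """Placeholder Michaelis-Menten-style curve standing in for the Daniele
    parametric tonescale, which is defined in the luminance domain."""
    g, s = 1.15, 100.0                      # made-up shape parameters
    return peak * (Y / (Y + s)) ** g

def tonescale_J(J, peak=1000.0):
    # The back-and-forth Doug refers to: power functions in, curve applied,
    # power functions out.
    return Y_to_J(tonescale_Y(J_to_Y(J), peak))
```

Fitting a spline or rational approximation to the composite tonescale_J would remove the round trip, but as Nick notes the fit would have to be re-derived for arbitrary peak luminances.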
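Alex's "iterative exact boundary finder" would replace the approximated gamut hull with a search like the bisection below: at fixed J and hue, find the largest colorfulness M whose display RGB stays inside [0, 1]. JMh_to_display_RGB is a hypothetical stand-in, not the model's actual inverse path.

```python
# Bisection search for the display gamut boundary at a given (J, h).
# Illustrative only; the conversion to display RGB is a placeholder.
import numpy as np

def JMh_to_display_RGB(J, M, h):
    # Placeholder conversion; a real implementation would run the model's
    # inverse path through to the limiting display primaries.
    a = M * np.cos(np.radians(h))
    b = M * np.sin(np.radians(h))
    return np.array([J + a, J - 0.5 * a + b, J - 0.5 * a - b]) / 100.0

def in_gamut(rgb, eps=1e-6):
    return bool(np.all(rgb >= -eps) and np.all(rgb <= 1.0 + eps))

def boundary_M(J, h, M_max=100.0, iters=30):
    """Largest M at (J, h) whose display RGB stays inside [0, 1]."""
    lo, hi = 0.0, M_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_gamut(JMh_to_display_RGB(J, mid, h)):
            lo = mid
        else:
            hi = mid
    return lo
```

The trade-off Alex raises: this is exact to within the iteration tolerance but costs many model inversions per pixel (or per table entry), whereas the approximated hull was chosen to stay implementer friendly.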
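The lookup change Kevin mentions, replacing linear scans with binary searches, looks roughly like this. The sample table is made up; only the search strategy is the point.

```python
# Finding the bracketing interval in a sorted table: linear scan vs. binary
# search.  Both return the same index; the binary version is O(log N).
from bisect import bisect_right

samples = [0.0, 10.0, 25.0, 45.0, 90.0, 180.0, 360.0]   # sorted sample positions

def find_interval_linear(x):
    i = 0
    while i < len(samples) - 2 and samples[i + 1] <= x:
        i += 1
    return i

def find_interval_binary(x):
    i = bisect_right(samples, x) - 1
    return max(0, min(i, len(samples) - 2))
```

For uniformly spaced samples (such as a regular hue grid) the interval can instead be computed directly from the value, with no search at all.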