Meeting Summaries 

ACES Output Transforms VWG 


Meeting #180, February 12th, 12pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Chris Clark
Jeffrey D Mathias
Willem Nagtglas
Doug Walker

Meeting Notes

  • Kevin Wheatley: I didn't manage to make my PR due to work, as my GPU code is not yet fully in sync with the CPU. For example, the CPU uses achromatic A instead of going to J and back for the tone scale, and takes a and b from the Aab calculation together with M, rather than using sin and cos to recalculate them. I also tried Pekka's focus gain change using 0.5 instead of 0.55, which makes the power of the reciprocal a square, so it can be done with a multiply. I would like others with HDR screens to look at the result. I hope to have time for the PR this week. It's mostly in my public branch for people to look at. We also got feedback from Apple on the Metal issue we discussed last week. They agreed with our guess about the problem. That's really an OCIO issue, not ACES, but it could be a useful pointer for other implementers.
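[Illustration: a minimal C++ sketch of the focus gain change Kevin describes. With the parameter at 0.5, the reciprocal power pow(x, 1/0.5) collapses to a single multiply. Names and values here are illustrative, not the actual OCIO code.]

    #include <cmath>

    // Current parameter: a full pow call per pixel.
    float reciprocal_power_055(float x) { return std::pow(x, 1.0f / 0.55f); }

    // Pekka's proposal: with 0.5 the exponent becomes exactly 2.
    float reciprocal_power_05(float x) { return x * x; }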
  • Doug Walker: Morteza from Apple opened a fix PR, which we've merged. So the Metal and GLSL algorithms should be the same.
  • Alex Fry: I haven't yet gotten anywhere meaningful with my ICtCp diff tool. Should it have a GUI? Just dump out images? Should it generate its own references from CTL? Right now it just compares images A and B.
  • Kevin Wheatley: An image is fine, but we need statistics from it: max and min error, top 2% or whatever.
  • Alex Fry: It seems to match reality, except it reports higher than expected differences in the shadow region, where I see very subtle differences. I'm promoting SDR to 100 nits in PQ for comparing.
  • Doug Walker: In the shadows, PQ has much more precision than is visually perceptible, unless the image is all dark and you are adapted to that.
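[Illustration: a sketch of the statistics Kevin asks for above, assuming both images have already been converted to ICtCp. The per-pixel difference uses the Delta E ITP formula of ITU-R BT.2124, 720 * sqrt(dI^2 + dT^2 + dP^2) with dT = 0.5 * dCt, and the maximum and 98th-percentile ("top 2%") errors are reported. This is illustrative, not Alex's actual tool.]

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct ICtCp { float I, Ct, Cp; };

    // Per-pixel colour difference per ITU-R BT.2124 (T = 0.5 * Ct).
    float deltaE_ITP(const ICtCp& a, const ICtCp& b) {
        float dI = a.I - b.I;
        float dT = 0.5f * (a.Ct - b.Ct);
        float dP = a.Cp - b.Cp;
        return 720.0f * std::sqrt(dI * dI + dT * dT + dP * dP);
    }

    // Max and 98th-percentile error over two equal-sized ICtCp images.
    void diff_stats(const std::vector<ICtCp>& A, const std::vector<ICtCp>& B,
                    float& maxErr, float& pct98) {
        std::vector<float> err(A.size());
        for (size_t i = 0; i < A.size(); ++i) err[i] = deltaE_ITP(A[i], B[i]);
        size_t k = static_cast<size_t>(0.98 * (err.size() - 1));
        std::nth_element(err.begin(), err.begin() + k, err.end());
        pct98 = err[k];
        maxErr = *std::max_element(err.begin(), err.end());
    }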
  • Kevin Wheatley: The current CTL renders in the Dropbox aren't fully up to date. We need to rerun those to create a baseline. Then we can diff those with OCIO pre and post my changes. The comparison only applies to the forward direction renders, because changes to the inverse should match the forward changes.
  • Nick Shaw: That is for assessing the differences your changes have made?
  • Kevin Wheatley: Yes. We can see if my changes are of similar size to the OCIO vs CTL diff. That will help us decide thresholds for implementations.
  • Nick Shaw: We expect the changed ones to be different. But the tricky question is how to assess whether they can be considered the same if you are not A/B-ing them.
  • Doug Walker: How did you assess results during development?
  • Alex Fry: Just people looking at images. No metrics.
  • Kevin Wheatley: It is hard to ask the same people whether the new version is better or worse, but not in terms of preference. Does it make it harder to work with?
  • Alex Fry: Is there a Blink version with Pekka's change?
  • Nick Shaw: You can just change one parameter (focus gain) from 0.55 to 0.5.
  • Kevin Wheatley: It only affects the very top end of pixel values. I see something in SDR, but suspect it affects HDR more.
  • Nick Shaw: Looking at the XDR Display of my MacBook Pro, the difference in HDR is tiny, looking at the ARRI bar image. And no obvious change to the dominant wavelength image.
  • Doug Walker: There is already CTL for that.
  • Kevin Wheatley: But nobody's looked at the result.
  • Alex Fry: Is it scaled like the 48 nit cinema one?
  • Nick Shaw: Yes, it's a 625 nit (300 * 100 / 48) transform, fitted to a 300 nit display.
  • Doug Walker: OCIO were considering what should go into the built-in configs from Thomas's config generator. Are the DCDM displays relevant for OCIO users? Do they need everything there is CTL for?
  • Kevin Wheatley: The ACES Reference config should have everything that's in the CTL. DCDM doesn't make sense for the CG config. But what about the Studio config? I think no.
  • Nick Shaw: There was mention that some apps may use OCIO which aren't the obvious compositing apps you think of.
[After discussion it was agreed that as long as everything was in the ACES Reference config, people with edge case workflows would likely have somebody who can make a custom config]
  • Doug Walker: OCIO TSC will make the final decision on Monday.
  • Kevin Wheatley: I'll use it to run some tests after my PR.

Meeting #179, February 5th, 12pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Willem Nagtglas
Cuneyt Ozdas
Carol Payne
Ake Sutthichat
Doug Walker

Meeting Notes

  • Kevin Wheatley: There have been various discussions on the OCIO Slack.
  • Alex Fry: I've been working on my ICtCp comparison tool, but nothing to show yet.
  • Carol Payne: I realized Alex is not on the OCIO Slack. I'll invite him.
  • Cuneyt Ozdas: I work with Doug at Autodesk, and he asked me to look into the Metal issue. I found that in Metal we have a wrapper class, absent in OpenGL, in which we define a lookup table. This is populated at runtime, probably for every thread, which hits the L1 cache and affects performance. Rémi had a PR which replaced the constant float buffer with a texture lookup, which seems to fix the issue. I also tried pulling the array outside the struct, making it a constant global array. That fixes it too. So it seems the Metal shader compiler is doing something wrong.
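[Illustration: a rough C++ analogue of the two patterns Cuneyt compares. The real issue is in the generated Metal shader, where the fix is a program-scope array in the constant address space, so this is only a sketch of the principle.]

    // Slow pattern: the table lives in a wrapper struct and is populated at
    // runtime, so every invocation pays for the initialization.
    struct LookupSlow {
        float table[360];
        LookupSlow() { for (int i = 0; i < 360; ++i) table[i] = i / 360.0f; }
    };

    // Fix: a global constant array, initialized once and shared by all
    // invocations (in Metal, the `constant` address space).
    constexpr float kTable[360] = {};  // real data would be baked in here
    inline float sampleFast(int i) { return kTable[i]; }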
  • Rémi Achard: I guess we should share your findings with the Metal guys at Apple.
  • Cuneyt Ozdas: I guess this shows obviously in Apple Silicon, but the fix will probably improve other GPUs too.
  • Rémi Achard: I have the same issue on my non-M-series Mac with an AMD GPU.
  • Carol Payne: Eric's reply suggests that's also happening in HLSL.
  • Rémi Achard: I think it's another reason in HLSL, where we were using a texture lookup inside the loop. That's why I switched to the constant array in the first place. It only affects some older DirectX versions.
  • Cuneyt Ozdas: I wondered about using uniform buffers instead of arrays.
  • Rémi Achard: I did try that. It doesn't change the speed. But it won't work with OpenGL before v2. We could do it for Metal.
  • Doug Walker: There are limits to how many uniforms you can have.
  • Rémi Achard: There may be compatibility and limit issues.
  • Kevin Wheatley: So we certainly need to modify things for Metal and ask Apple if it's a bug or a feature. I would think using an extra texture would be less preferable.
  • Rémi Achard: That would be better for implementers. Although we already have 2 or 3 textures.
  • Kevin Wheatley: My branch has 2 textures, but could be reduced to 1 if we sample everything on the same hue. It might make the chroma compressor slower if used independently, but we may not care about that.
  • Rémi Achard: On my laptop, if I had too many instances of the transform, the GPU locked up. It doesn't happen if I use a texture.
  • Kevin Wheatley: If people report problems with that, we could look at ways to reuse the same table if there are multiple instances with the same target gamut. That could be the most likely use case, going backward and forward through the same transform. I haven't opened a PR yet. I want to test more first. I made the tone scale go back from achromatic rather than J, as Nick suggested. It makes a difference, at least on the CPU, by eliminating a pow call. I made the CPU chroma compression use dot products, as the GPU already did. Other than those it's just small scaling factor changes and a rework of the J intersect solve. I didn't try moving the norm calculation to a higher scope, which was discussed. Nor did I look at whether we store hues in degrees, radians or something else, to avoid some trigonometry. Nick mentioned that Doug had suggested we could maybe avoid the polar representation. We don't need the angle. We just need an index for the lookups.
  • Nick Shaw: Since the angle came from the rectangular form in the first place, it makes sense to reuse that rather than recalculate it from the angle. The a and b used in Pekka's fitted curve are just scaled versions of the a and b from the Aab you have calculated on the way to JMh. Sin and cos of h are also used later to go back from JMh to XYZ, and if you had held on to the original a and b you could maybe reuse them there too.
  • Kevin Wheatley: That's another reason to move the norm calculation to a higher level where a and b are still available.
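[Illustration: a small C++ sketch of the reuse Nick describes. Once h = atan2(b, a) has been computed from the rectangular a and b, the sin and cos of h needed later are just b and a divided by the norm, so no trigonometry is needed to recover them. Variable names are illustrative.]

    #include <cmath>

    // a, b: rectangular opponent components already computed on the way to JMh.
    void polar_without_trig(float a, float b,
                            float& h, float& cos_h, float& sin_h) {
        h = std::atan2(b, a);                      // still useful as a table index
        float norm = std::sqrt(a * a + b * b);
        cos_h = (norm > 0.0f) ? a / norm : 1.0f;   // == cos(h), without cos()
        sin_h = (norm > 0.0f) ? b / norm : 0.0f;   // == sin(h), without sin()
    }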
  • Carol Payne: So if you need another couple of days before opening a PR, that's OK, particularly if you can get that new stuff in and have GPU/CPU parity, so people can test and profile. You won't have the Metal fix in it, correct? So we'll send that to the Metal guys together with our suggested fix. Then we have to decide if this is good enough.
  • Kevin Wheatley: I'm hoping for some code feedback from people.
  • Carol Payne: Then hopefully we can put something out, together with some release notes on what we've done. Then later the CTL can be updated.
  • Nick Shaw: GPU profiling is not something a general user can do, is it?
  • Carol Payne: No. But the people who tested before can retest. Something others can do is look at the configs. Those will become part of the next release.
  • Alex Fry: For my delta tool I wondered where I could get a build of OCIO.
  • Doug Walker: You can just pip install 2.4.1. That includes ociodisplay, which uses the GPU, and ocioconvert. I also made a combined ACES 1 and 2 config for testing, which is linked from the optimization Wiki.
  • Alex Fry: Do we have an HLG output?
  • Doug Walker: Yes, and it's in the CTL.
  • Nick Shaw: It's just a display light cross-conversion of the 1000 nit PQ.
  • Carol Payne: Kevin, we can wait until Friday for your PR. As long as we have it by Monday.
  • Kevin Wheatley: I could keep tweaking forever. But the cut-off is the algorithm changes. Optimizations which don't change output can come later.
  • Carol Payne: That can be for 2.5.

Meeting #178, January 29th, 12pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw

Rémi Achard
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Giardiello
Jeffrey D Mathias
Carol Payne
Ake Sutthichat
Doug Walker

Meeting Notes

  • Alex Fry: I've rebaked the LUTs from Pekka's PR and made the names less testing-centric.
  • Kevin Wheatley: We have some results of testing OCIO builds on various systems. I've been merging my CPU updates into the GPU implementation, but it's not finished. Nick noticed CPU and GPU don't match. The GPU path was double-smoothing the cusps, because smoothing was already baked into the tables. And the GPU is still recomputing the slope where the CPU uses one value. I experimented with some other optimizations, like more aggressive compiler optimizations. I didn't enable fast math. The biggest change was narrowing the search parameters for hue. Rémi did some GPU profiling with Nsight. I noticed we are calling some lookups multiple times, so I changed that, as it seemed a lot of the time was waiting for lookups.
  • Rémi Achard: My profiling used your code from the 27th, not your latest.
  • Kevin Wheatley: It was good to see the GPU and CPU hotspots were in similar places. But some things seemed more expensive on the GPU than you might expect.
  • Rémi Achard: We found the current code is quite slow on Metal, even the updates that are faster on NVIDIA. We need to investigate. It may be the constant array usage. My MacBook is old, but the issue still exists on Apple Silicon.
  • Doug Walker: Your branch moves the constant array into a texture, Rémi?
  • Rémi Achard: Yes, and it removes the while loop, although I don't think that makes a big difference.
  • Kevin Wheatley: That depends on how the samples are rearranged. In my version you don't need many loops.
  • Rémi Achard: I'll test taking your branch and swapping the array for a texture.
  • Doug Walker: Our theory is there is a Metal specific issue.
  • Rémi Achard: You can't profile Metal shaders line by line.
  • Carol Payne: I can ask a Metal expert if needed.
  • Kevin Wheatley: I'm indirectly measuring a speedup on the GPU, but it would be useful if OCIO had a standard way of measuring GPU performance. For CPU we see improvements on all hardware, but on some it's a huge speedup, nearly 2x. Vectorizing would be a good way to make the CPU faster. Enabling auto-vectorization kind of speeds it up, but adds temporary buffers which may outweigh the win.
  • Doug Walker: For OCIO we don't want to multi-thread the implementation because that should happen in the caller. We could defer vectorization to later if it doesn't change the algorithm.
  • Kevin Wheatley: We're not using some vectorized code we already have for things like matrix multiplication. Those aren't the most complex parts, but they are easy small wins. Rémi's profiling shows the GPU is being used quite efficiently. Texture loading and pow functions seem to be the bottlenecks.
  • Rémi Achard: Looking at the intermediate DirectX in Nsight can be helpful. But it's already only about 2x slower than ACES 1, except on Metal, which we need to investigate. And we need to test more platforms.
  • Kevin Wheatley: My next step is to finish making the GPU match the CPU. I think that will reduce the size of the code that's running. Then it would be good to have more eyes on the renders we now have, to confirm we haven't drifted too far.
  • Doug Walker: You've done most of your optimizations in the GPU?
  • Kevin Wheatley: Yes, except I need to make the CPU only calculate slope once. Nothing I've done changes the middle part of the image. It's only above the gamut mapping threshold.
  • Carol Payne: Then we need to add in Rémi's optimization.
  • Kevin Wheatley: Although that may only help Metal, as it adds another texture, which may slow NVIDIA, so it's better not to add it there.
  • Doug Walker: We should probably make an optimization branch on OCIO and merge Kevin's changes then Rémi's into that. Perhaps with the ability to use it with or without the extra texture.
  • Kevin Wheatley: I would prefer to finish matching my GPU and CPU first. Hopefully by the end of the week. But maybe we should just do it wherever it's at by the start of next week.
  • Carol Payne: Then it will be easier to make unit tests against it.
  • Rémi Achard: As the implementation has changed slightly maybe we should check how that affects the LUT implementation.
  • Alex Fry: Shader based is obviously the way forward, but in many cases LUTs will still be needed.
  • Carol Payne: Some applications may always do it that way. Pomfort is doing so now. It's important to say what the differences are.
  • Alex Fry: My bakes are done in Blink, but it may be better to use the CTL now.
  • Nick Shaw: Would it make more sense to use OCIO as that's the latest?
  • Carol Payne: It's useful to have both to compare and have a marker of how it was.
  • Doug Walker: To reiterate, we're making these changes in OCIO and when that's done it will be ported back to CTL.
  • Nick Shaw: It's still probably useful to post LUTs people can try built for OCIO.
  • Doug Walker: As with OCIO v1, the choice of shaper makes a big difference. Once you have a shaper, it's trivial to write a Python script to bake LUTs.
  • Nick Shaw: So far we've always used the ACEScct curve, first with AP0 then AP1 primaries.
  • Alex Fry: And we know that doesn't really cover the whole range.
  • Kevin Wheatley: I wouldn't want to bake LUTs until I've finished my work and merged.
  • Alex Fry: Nick and I made code to do ICtCp comparisons. We should probably continue with that to have a standard comparison.
  • Carol Payne: Since Frankie and Chris are here today, they can pull stuff down and test on a range of machines.
  • Kevin Wheatley: Because we've seen weird behavior it would be good to test more platforms.
  • Chris Clark: We would be happy to run tests.
  • Kevin Wheatley: Wait for my PR, early next week.
  • Chris Clark: I've been talking to Alex because we are testing AMF and other things ahead of NAB.
  • Carol Payne: We have a deadline of early March for OCIO 2.4.2.
  • Chris Clark: We are already using the Pomfort implementation on multiple shows.

Meeting #177, January 22nd, 12pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw

Rémi Achard
Daniel Brylka
Carol Payne
J. Schulte
Ake Sutthichat
Doug Walker

Meeting Notes

  • Nick Shaw: Pekka has opened a PR for the Blink.
  • Kevin Wheatley: It makes three small changes to bring it up to date with the CTL.
  • Nick Shaw: The CTL commits are noted in the PR. It's now called ACES2 DRT v001.
  • Alex Fry: I've merged it. I should generate new LUTs.
  • Kevin Wheatley: My latest update goes quicker but changes some pixel values slightly. This may alter the round tripping we'll talk about later. I've replicated my previous redistribution of the hue tables, and a couple of minor changes. I'll look tomorrow at adding that to the GPU path. One thing I haven't looked at yet is what Nick mentioned on Slack about using A from the middle of the XYZ to JMh calculation when going back to Y, to cut out the J to A step. The achromatic conversion to J is the highest on the list of slow-downs.
  • Nick Shaw: It's unfortunate that the chroma compression needs both uncompressed and compressed J, or you wouldn't need to go all the way to J first time round.
  • Kevin Wheatley: I wonder if it's possible to do the chroma compression without uncompressed J? The gamut compression is now not that different from the tone curve in terms of time, though it may be the profiler.
  • Alex Fry: How different are the pixels?
  • Nick Shaw: It's just a different approximation of the same curves, just sampling them in different places.
  • Kevin Wheatley: Nick and Pekka had a discussion about pre-calculating cusp intersections.
  • Nick Shaw: The cusp intersection would be constant for a given hue if it weren't for the focus_gain above a threshold. So if you ignored that it could be pre-calculated in a table. Is it worth an extra table and interpolation errors?
  • Kevin Wheatley: I'll investigate when I look at Pekka's other suggestion of replacing pow(x, 1/0.55) with x*x (by changing 0.55 to 0.5). That's it for me. There was discussion on Slack of round trips. Should we be able to invert everything?
  • Nick Shaw: Doug found that 1000 nit P3-D65 has missing bits of the cube after a round trip. Looking back, we agreed that we needed to round trip SDR Rec.709 and P3, but knew we weren't able to round trip HDR. Is that a problem, if people can't hit those corners, which are quite extreme values in HDR?
  • Alex Fry: Certainly not Rec.2020, or anything above 1000 nits.
  • Nick Shaw: Even P3 at 1000 nits doesn't quite round-trip. Do you need 1000 nit cyan in a logo? Looking at the Reveal P3 PQ LUT, it doesn't go near those corners, so I don't think it's a problem for LMTs. Might graphic or animation people want to hit those corners?
  • Alex Fry: I don't think so.
  • Carol Payne: It just needs to be clearly documented what the expectations are. That will need to go into OCIO too.
  • Nick Shaw: What was the ACES 2.0 document you mentioned that talked about a two 12-bit code value tolerance?
  • Kevin Wheatley: The CTL doesn't achieve that, even in SDR. I think it means implementations need to be within two 12-bit code values of the CTL output of the round-trip. Except OCIO is now ahead of the CTL.
  • Kevin Wheatley: The CTL update will then need to match OCIO to that tolerance.
  • Doug Walker: I had thought that after a round trip it should be that close to the original image. I found it was ~400 12-bit code values off. The document needs clarifying. I said I would write unit tests, and started from those cube images as the most sensitive test. I need to come inside the gamut to find values which will round trip.
  • Kevin Wheatley: My code is within 8 or 9 bit precision for a Rec.709 round trip. Most errors are on the three cube faces nearest black.
  • Nick Shaw: Might that be due to the fixed lower hull gamma?
  • Doug Walker: 8-bit is better than I was seeing from OCIO 2.4.1 or the CTL.
  • Kevin Wheatley: I didn't search for the largest error. I was looking at patterns to help pin down the source. It's noise-like, so could be because one component starts at zero.
  • Nick Shaw: In my DCTL I was seeing some zero channels round tripping to ~1% in SDR. HDR is less good, although most of the inner values do round trip.
  • Doug Walker: Is it useful if I write SDR round trip unit tests with a 2% tolerance?
  • Alex Fry: We need to look at the full cube as well as the faces.
  • Nick Shaw: That's why we have the CMS pattern and cube faces images.
  • Rémi Achard: There are already some round trip tests for SDR Rec.709 and P3, and 1000 nit P3. They just use a small 3D LUT lattice, 8^3. They only test the whole transform.
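[Illustration: a sketch of the kind of lattice round-trip test Rémi describes, over an 8^3 grid. The forward/inverse pair here is a trivial stand-in, not the actual ACES 2 transform.]

    #include <cmath>
    #include <cstdio>

    // Stand-in transform pair; a real test would call the ACES 2 forward and
    // inverse output transforms.
    void forward(float rgb[3]) { for (int c = 0; c < 3; ++c) rgb[c] = std::pow(rgb[c], 1.0f / 2.4f); }
    void inverse(float rgb[3]) { for (int c = 0; c < 3; ++c) rgb[c] = std::pow(rgb[c], 2.4f); }

    int main() {
        const int N = 8;  // 8^3 lattice, as in the existing tests
        float maxErr = 0.0f;
        for (int r = 0; r < N; ++r)
        for (int g = 0; g < N; ++g)
        for (int b = 0; b < N; ++b) {
            float rgb[3] = { r / (N - 1.0f), g / (N - 1.0f), b / (N - 1.0f) };
            float out[3] = { rgb[0], rgb[1], rgb[2] };
            forward(out); inverse(out);
            for (int c = 0; c < 3; ++c)
                maxErr = std::fmax(maxErr, std::fabs(out[c] - rgb[c]));
        }
        std::printf("max round-trip error: %g\n", maxErr);
    }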
  • Kevin Wheatley: It would be good to have tests of individual functions. Next steps are to port my changes to GPU. I need to see if the same things speed up the GPU.
  • Doug Walker: We can help with that.
  • Kevin Wheatley: I can compare my improved binary search with Rémi's reverse LUT method. We also talked about making a Wiki to compile information on whether the improvements translate across all platforms.
  • Doug Walker: I started on a Wiki. I'll get back to it. And I'll keep thinking about unit tests.
  • J. Schulte: I can contribute some round trip data with a pipeline that implements CTL and OCIO natively.
  • Kevin Wheatley: My next steps are to port the new improvements to the GPU, then run tests on a Linux machine. I'll also try to test with an NVIDIA card or two.
  • Alex Fry: I'll update the LUT bakes, and make the names line up with everything else.

Meeting #176, January 15th, 12pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Carol Payne
Pekka Riikonen
Doug Walker

Others TBC

Meeting Notes

  • Kevin Wheatley: I've been working on my optimizations with input from Rémi, Nick and Pekka. Pekka's suggestion I've just added as a comment in my code. He pointed out that we use a power of 1/0.55, and if we changed that to 1/0.5 it would be a simple square, which is more efficient. It would change the look slightly. Nick and I were discussing another optimization.
[Nick showed a Desmos plot]
  • Nick Shaw: As a reminder, the idea behind the invertible gamut compression is to base the compression slope not on the source JM values, but only on the intersection of the compression vector with the J axis, and to solve for the intersection producing a line which passes through the source (J, M). This means any point on the line will solve to the same intersection, and so produce the same slope. So, compressing along the line, you can find the same line to invert back along. That got changed slightly at the top, to smooth the path to white for bright saturated colors, so the slope is altered depending on the source J value. The way the Blink code was structured, with all the options, the source JM was not available in the solve for the reach boundary. Rather than pass that value along, I simply re-ran the J intersection solve using the gamut boundary intersection, which was available. Because I assumed that was on the same line, it should produce the same J intersection and thus slope. But this assumption is broken above the threshold where the focus gain comes into play. There, solving from source JM and boundary JM doesn't give quite the same intersection and slope, because their J values are different. In Kevin's code everything is inlined, so the original intersection and slope are available. Running the solve_J_intersect function once instead of twice will improve speed, but slightly change the result for some pixels. Using a different solve was an accident, but I don't know if it contributes to the smoothing of the path to white there.
  • Pekka Riikonen: I don't think it matters, because for the inverse we have to use an approximation above the threshold. But in the forward direction it will affect the look.
  • Nick Shaw: Because it only happens right at the top where the slope is pretty horizontal already, hopefully the effect is small because the J values are similar. Kevin found the largest effect on extreme magentas.
  • Kevin Wheatley: It does help the performance measurably.
  • Nick Shaw: I believe it is also the correct thing to do.
  • Kevin Wheatley: Yes because otherwise the ratios aren't quite what you assume they are, and it all adds up. I propose we include it, because other changes will have a larger effect. I can't make the binary searching any more efficient without redistributing the hues.
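[Illustration: a simplified C++ demonstration of the invariance Nick's explanation relies on. Below the focus gain threshold, every (J, M) point on a given compression line recovers the same J-axis intercept, so the solve only needs to run once; the straight-line model here is a stand-in for the real solve_J_intersect.]

    #include <cstdio>

    // Simplified model: a compression line through the J axis at J0 with
    // slope s. Any point (J, M) on that line recovers the same intercept.
    float intercept(float J, float M, float s) { return J - s * M; }

    int main() {
        const float J0 = 80.0f, s = 0.3f;
        float Ja = J0 + s * 10.0f;   // point on the line at M = 10
        float Jb = J0 + s * 55.0f;   // point on the line at M = 55
        std::printf("%f %f\n", intercept(Ja, 10.0f, s),
                               intercept(Jb, 55.0f, s));  // both print 80
        // Above the threshold the slope itself varies with source J, so the
        // invariance breaks down, which is the discrepancy described above.
    }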
  • Nick Shaw: Because some parameters were fine tuned to only just make the round trip, we may have to tune them again after changes.
  • Kevin Wheatley: We should re-tune after all optimizations.
  • Pekka Riikonen: Do you have this in a Blink version?
  • Kevin Wheatley: Not this exact thing, but the changes I made were pretty much the same as what I did in my previous Blink version. The next thing for me to do is redistribute the hues. I've already made sub-functions where parts of different equations were actually doing the same thing. The gamut compressor is still a large part of the time taken, but it's now broken down so we can see which parts contribute most. The cusp lookups are significant, and if I do the hue lookup separately, the other lookups become more efficient because I already know which interval to lerp in. Chroma compression is still heavy, but I don't yet know why. I need to test other architectures. Matrix ops could be SSE optimized. There are some branches, but a lot of code could use SIMD ops. We may also be able to remove some checks, e.g. for division by zero, if we know that can't happen. My current code is CPU, but I'll also have a look at the GPU shaders next week.
  • Nick Shaw: Some traps might have been needed when we had many options and adjustable parameters. But not all may be needed with the chosen options, and clamp to AP1.
  • Kevin Wheatley: Currently the AP1 clamp is external to the fixed functions, so if you use them in isolation, perhaps for tests, you could feed in data they were not designed to handle. That's an OCIO question.
  • Doug Walker: The clamp is so small it may be worth adding to the fixed functions.
  • Kevin Wheatley: I also wanted to remove the radians-to-degrees conversions. Degrees are useful for humans, but in code radians are simpler. There is currently an implicit assumption of degrees somewhere, which I haven't found yet. Changing the table size from 360 may help find it. We probably need some tests in OCIO of the individual components.
  • Doug Walker: Maybe we could help with that.
  • Kevin Wheatley: Testing forward and inverse of individual components, rather than the whole transform, would help. The current tests sometimes still passed when I introduced an error producing bad-looking images. I've added more TODOs in the code, but most are minor. I also tested my own power function using log and exp, and it was faster than powf. I don't know why. It may be architecture specific.
  • Rémi Achard: Before looking at that we should vectorize the code.
  • Doug Walker: That would be CPU specific. We should prioritize things that would help CPU and GPU.
  • Kevin Wheatley: So far I have a ~30% speed up on the CPU.
  • Doug Walker: What can we help with?
  • Kevin Wheatley: I could do with help with GPU code. And adding finer grained testing. I was testing with the gamut cube, and I suspect some of Rémi's test values aren't on the edge of the gamut where issues might show up. It would be good to plot the speed ups for each of my commits on different machines, and check the trend was always in the right direction.
  • Doug Walker: We'll create some tests and send them to you to look at. We can also help you port your CPU commits to the GPU.
  • Kevin Wheatley: Although having the GPU run the previous version is a useful comparison.
  • Doug Walker: Eric posted on the Slack about the GPU profiling results he got. We also want to try profiling Metal using Xcode.
  • Rémi Achard: Something odd happened when I tried that. I got exactly the same result every time when I expected variation.
  • Doug Walker: When we've ported Kevin's finished optimizations to the GPU we can engage with Eric again. And also check he's testing what we think he's testing.

Meeting #175, January 8th 2025, 12pm PT

Attendees

Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard

Others TBC

Meeting Notes

  • Kevin Wheatley: This week is mainly an update from me on what I've done to optimize OCIO CPU performance in my fork. It's definitely not a final set of commits. I was initially rearranging things to put anything that changed output values into later commits, but gave up on that to prioritize progress. I'm still working on it and have more I'll push tomorrow. I haven't yet worked on the table lookups and gamut mapper performance. I did some investigation into whether there were possible lookup optimizations that could be done without changing the hue distribution.
[Kevin showed a graph of the actual hue distribution and the error from assuming the distribution was uniform]
  • Kevin Wheatley: It's always off in the same direction, so we could narrow the search window without changing the distribution. But we would need to check if that assumption holds for a range of gamuts. The alternative is to change the distribution to be linear, as I did, which removes iteration from the search. We need to look at the performance difference vs rendering difference. If we decide to redistribute the hues we should do it for all the lookups and combine them.
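[Illustration: a C++ sketch contrasting the two lookup strategies under discussion: a binary search over non-uniformly spaced hue samples versus direct indexing once the table is resampled at uniform hues. Illustrative only.]

    #include <algorithm>
    #include <vector>

    // Non-uniform samples: find the interval containing h by binary search
    // (assumes hues is sorted and h lies within its range).
    size_t interval_by_search(const std::vector<float>& hues, float h) {
        return std::upper_bound(hues.begin(), hues.end(), h) - hues.begin() - 1;
    }

    // Uniform samples over [0, 360): the interval is a single multiply,
    // with no iteration at all.
    size_t interval_uniform(float h, size_t tableSize) {
        return static_cast<size_t>(h * (tableSize / 360.0f));
    }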
  • Scott Dyer: If we do change the pixels, it does make sense to make all those changes in one go. What effect on picture and speed do the changes you've already made have?
  • Kevin Wheatley: I tried combining the RGB to LMS matrix with the white balancing weightings, and was surprised to see it made a difference to the result. I need to look into that. Also I notice that in the JMh conversion there are equations in one direction, and those equations are expressed as a matrix in the inverse direction. Making both a matrix seems to change things. I need to have a way of comparing the effects of those against color shifts from other optimizations. I've also looked at eliminating some scaling factors in the J <> Y in the tone scale. Every little helps.
[Kevin showed the output of a performance analysis tool]
  • Kevin Wheatley: Why things are slow is not always obvious from the C++. Looking at the compiled assembly helps. The analysis confirms what Rémi saw, that pow is called a lot of times, and each call is relatively costly, so it is a significant bottleneck. I looked into alternate pow functions. We know a bit about the values going in. They are not negative and there are no infinities. The exponent or base is known. The C implementations are optimized for the general case, not necessarily our specific one. I've done some rearrangements to reduce the number of pow calls. Nick had noted on the OCIO Slack that if you grab achromatic A before going to J you can use that in the J_to_Y and remove a pow, and that would also apply to the GPU. When I really look at the GPU code, I'll need help from Rémi or somebody else from OCIO. My CPU improvements have gone from 9000ms to 6000ms. I've tried to avoid repeated lookups by looking up once and passing values down. We can simplify when we don't need the back and forth to reference state that was needed in v60, which still had a lot of options. The CTL may not suffer from all the slow-downs, because it has a static initialization stage and memoizes the results. In C++ you have to hand build that. I'm continuing the optimizations that Rémi started.
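[Illustration: an abstract C++ sketch of the shortcut Nick noted, simplifying A-to-J to a pure power with stand-in constants k and p. Since J_to_Y begins by undoing that same power, holding on to A removes a matched pair of pow calls.]

    #include <cmath>
    #include <cstdio>

    static const float k = 100.0f, p = 0.59f;  // stand-in constants only

    float A_to_J(float A) { return k * std::pow(A, p); }
    float J_to_A(float J) { return std::pow(J / k, 1.0f / p); }  // start of J_to_Y

    int main() {
        float A = 0.42f;
        float viaJ = J_to_A(A_to_J(A));      // two pow calls that cancel out
        std::printf("%f == %f\n", viaJ, A);  // keeping A around skips both
        return 0;
    }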
  • Scott Dyer: I can see from your commits generally what you're doing. But I can't really help.
  • Kevin Wheatley: If a company or person could investigate e.g. an optimized pow(x, 0.42), or compacting the J <> Y and tone scale, that would help. There are hacky ways of doing pow functions with the binary representations of floats directly. My laptop is old, so it's a worst-case scenario. I'll try newer, faster CPUs on Linux next week.
  • Rémi Achard: Kevin's stuff looks really good. In my previous tests I got 3x slower than ACES 1 on the CPU. Before, I was testing on an old libc, and recent updates make the math library much faster. Eric found about 2x slower than ACES 1 on the GPU. This is the ACES main branch, and it can be improved.
  • Scott Dyer: We'll have these meetings at the same time each Wednesday to meet OCIO's end of February deadline.
  • Nick Shaw: I responded on the OCIO Slack to Gary Demos's comments.
  • Kevin Wheatley: I looked quickly at that. A lot of his points we had discussed before. It would have been good if we'd had input six months ago.
  • Nick Shaw: Things to consider for ACES 3.0!
  • Scott Dyer: Next week hopefully we'll have Doug and Carol. It will be my last week for a bit, but you can do what's needed to get the next OCIO release. I'll work that back into the CTL later.

Meeting #174, December 11th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Michael Chenery
Jeffrey D Mathias
Willem Nagtglas
Carol Payne
Patrick Renner
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Mark Titchener
Doug Walker

Meeting Notes

  • Scott Dyer: It's great to see many people who don't usually attend. We want to plan our work towards an implementation of the group's algorithm that works for everybody's performance needs, in a way that coordinates with what people may already have implemented, so it plays nicely moving forward. What are the needs? And what are the options to fulfill those with minimal impact?
  • Nick Shaw: Before we start that I'll just mention, I found why my DCTL is not quite in sync with the CTL. I hadn't kept up with the change that was made to the CTL to vary lower hull gamma with peak luminance. I don't think the v60 Blink has that either, as my DCTL matches that. When I make the change it matches the CTL. I'll test more tomorrow then push the change.
  • Kevin Wheatley: That change is already in OCIO. We should fix the v60 Blink. What initial thoughts do implementers have? Shall we start with OCIO?
  • Carol Payne: We released an OCIO implementers' preview in OCIO 2.4.0 in October. There is still optimization to be done, but we wanted to get it out there. We had planned to do a full release in 2.4.1 this week, but there is still work to do. The ACES 2 configs won't be built in, but are available for download. We now plan a 2.4.2 release in early March. We would like to talk to other implementers about the optimizations we are considering, as we want the optimizations to be agreed on by this group.
  • Doug Walker: ACES 1 was a LUT implementation in OCIO. This caused issues, especially for bright colors. In OCIO v2 we moved to a functional, shader-based implementation. We'd like to have the same for ACES 2.
  • Kevin Wheatley: Worth noting that OCIO has a GPU and CPU path.
  • Doug Walker: So ACES 2 is in 2.4.0 now, and we've done some speed tests on the CPU path and it's 3 to 8x slower than ACES 1, depending on hardware. GPU profiling is harder. The algorithm is more compute intensive than ACES 1, and there have been suggestions here about possible optimizations, and we've been exploring some in OCIO.
  • Scott Dyer: We know some optimizations may change pixel output. If we decide those are desirable, we want to do it once and in a way everybody agrees on. Those wouldn't change the look, but we would need to update the test images.
  • Kevin Wheatley: I was just clarifying in the chat that OCIO has split the configs into a D65 and ACES white 'D60' config. We felt users would need one or the other. But configs with both can be made.
  • Daniele Siragusano: You want to avoid suggesting D65 and D60 in a mixed environment, and have one or the other for both cinema and TV?
  • Kevin Wheatley: That was the feedback we had. In VFX you are using one or the other.
  • Carol Payne: This is OCIO specific. In finishing you might need both.
  • Doug Walker: The generator produces 3 configs – D65, D60 and one with everything. People can customize as they want.
  • Carol Payne: We want to make it clear so people don't get confused. So we want sensible defaults, and to encourage people to make a conscious choice.
  • Alex Fry: J asks: should we refer to it as ACES white rather than D60?
  • Nick Shaw: Might that make it sound as if it is the ACES default that people should use? "D60 sim" already confuses people.
  • J. Schulte: I'm calling a spade a spade. We here know the difference, but as we broaden the audience it's a good thing for people to call it the right thing.
  • Alex Fry: Does it need a better name?
  • Scott Dyer: In the documentation we refer to it as the ACES white point, and make clear that calling it D60 is shorthand. But it's a longer discussion, not for this meeting.
  • Daniele Siragusano: There are implications if you want to support it more generally.
  • Kevin Wheatley: Any other implementers have experiences they want to share?
  • Scott Dyer: Has anybody else experienced the speed issues OCIO mention?
  • Patrick Renner: We are currently baking LUTs, but we've noticed LUTs don't hold up so well for ACES 2, so we need to look at other options, one of which is OCIO. It feels like the route we took was the right one, because otherwise we would be seeing the same issues OCIO is. So we are just observing for now.
  • Daniele Siragusano: We haven't started an implementation either, because it is clear things will change. Rather than everyone doing different optimizations, it seems better if one party goes ahead and makes the obvious optimizations, and we follow. Or it will take another 100 meetings! OCIO is kind of the reference that's used in VFX, so it's more important for us to match that than the CTL.
  • Carol Payne: As an open source project we are a suitable test bed. People can see what we are doing and make contributions where they can help. We don't want to be making decisions other implementers would be unhappy with. So we want feedback. We need to get this done by March.
  • Daniele Siragusano: Are these discussions on ACES Central or OCIO channels?
  • Doug Walker: The discussions have been happening mostly in this Wednesday meeting. We hope it will transition into an implementation group, with input from multiple vendors.
  • Scott Dyer: We need to decide what actions we can take. We have Kevin's changes, and the 1D LUT tone scale. Can we measure the performance gains? How do we decide what's enough? Does it have to match the speed of ACES 1?
  • DQ: It doesn't have to match ACES 1. It's obviously a lot heavier. OCIO is tied to the VFX Reference Platform, as is ACES, and for calendar year 2025 it should be ACES 2. We spoke to VFX platform vendors and it was OK to extend, to deliver in March. We don't have a benchmark, but we do have a deadline.
  • Scott Dyer: Should I make an experimentation branch of the CTL, so it's still a reference? We need feedback from OCIO about what changes are worth putting in.
  • Daniele Siragusano: It would be good to have a reference benchmark, so we could see the speed gains for a particular commit. I think some trade-offs being considered may not be valuable.
  • Scott Dyer: There is no benchmarking framework for CTL. We need to be able to objectively see gains.
  • Carol Payne: We have CPU benchmarks we can share. The GPU benchmarks vary more. Hopefully Epic can help us with that in the New Year.
  • J. Schulte: Would it be useful to look at the industries represented by the vendors and ask what is the baseline hardware or GPU target? And ask what frame time they can afford.
  • Kevin Wheatley: OCIO aimed to be textureless. It would be good to know constraints. Do people prefer we don't use textures? What shader model version should we follow?
  • J. Schulte: Are we adding a compute shader path for people without a fragment shader path. And I do think whatever we do needs to roll back into the CTL reference.
  • Carol Payne: Implementation groups need a reference and thresholds to validate implementations. That could be done in this group.
  • Nick Shaw: Do we need different levels of tolerance? On set might accept a lower level of precision than finishing.
  • J. Schulte: On set you can easily reach corner cases, and make expensive decisions if the image isn't true to final pixels. I don't want to have to re-run dailies when it leaves set.
  • Nick Shaw: The on set live view is always from a LUT box.
  • J. Schulte: Currently. We do have non LUT based on set solutions.
  • Alex Fry: Nick and I made some tests using Delta E ITP. Once we see real implementations we can decide thresholds for those. It would make sense to have standard test code to run on the test images.
  • Daniele Siragusano: Do OCIO have tools to test numerical stability, NaNs etc.?
  • Doug Walker: Scott gave us test images; we found NaNs from those, fed back, and made changes. We have the ocioperf command line tool, which can measure the CPU path, not the GPU unfortunately. ociodisplay can use some OpenGL instrumentation.
  • Patrick Renner: What speed are you seeing now? How many fps can you process? And what shader language do you use?
  • Doug Walker: Our code is shader language agnostic, and outputs different shader languages. Most of our testing has been of GLSL and Metal. We get real-time 4K playback on a range of GPUs, but there are things that could be sped up. But which should be done?
  • Kevin Wheatley: The challenge is the available budget, because there is more going on than OCIO.
  • Carol Payne: People have tried the preview implementation and commented that it looks good but is slow.
  • Daniele Siragusano: If it's slower than a 3D LUT implementation, people may just use a LUT. It would be a noble aim to be comparable to a 65^3 tetrahedral LUT.
  • Kevin Wheatley: With simple cube LUTs HDR output suffers.
  • Carol Payne: The implementation group should document how to bake LUTs.
  • Nick Shaw: And the limitations of a LUT implementation. Covering the input range needed for HDR stretches things thin.
  • Kevin Wheatley: We tried naive LUTs, but a hybrid approach could do better.
  • Scott Dyer: What can we work on going forward? Do we need the CTL updated with Kevin's changes?
  • Doug Walker: Rémi has tested with 1D LUTs based on Nick's work. So the next step is to get Kevin's changes into the CTL.
  • Scott Dyer: To optimize the tables and eliminate the binary searches.
  • Carol Payne: But we need to confirm if those are acceptable for a reference implementation.
  • Kevin Wheatley: I took the v60 Blink and optimized it for a production. Blink optimizations may not be applicable elsewhere for GPUs. Other changes related to NaNs are already in the CTL. I also simplified the code because it was originally modular.
  • Doug Walker: That would be useful in a PR to the CTL.
  • Patrick Renner: Is the end of February a cutoff?
  • Doug Walker: Yes, unless we change the VFX platform to stick with ACES 1.3 for 2025. But it takes vendors a while to update, and their customers longer still, so it would then be a long time before ACES 2 could be used.
  • Carol Payne: We want to get it out so people use it, and then give feedback. OCIO 2.4.1 is on GitHub now, so you can build it or pip install it. But it's not in shipping apps.
  • Doug Walker: What's there now matches the CTL. Rémi has work in progress that he hasn't pushed.
  • Nick Shaw: I think Rémi did some optimization of things that happen per pixel in the CTL, and only once in OCIO.
  • Carol Payne: Where should we discuss this all?
  • Scott Dyer: ACES Central or Slack?
  • Alex Fry: It should really be ACES Central.
  • Scott Dyer: I'll make a new ACES Central thread for the record.
  • J. Schulte: Is there interest in having a way to run CTL in Nuke as a reference?
  • Scott Dyer: If it was possible it would be great!
  • Kevin Wheatley: There will be no meeting in two weeks time!
  • Carol Payne: For January and February it would be good to have weekly meetings, and we could alternate the time so everybody could attend. For the gamut compression we did one week 9:30 PT and one 5:30.
  • Scott Dyer: I'll put in the next meeting for the 8th of January, 1pm PT for now.
  • Carol Payne: This time does clash with ASWF meetings.

Meeting #173, November 27th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Chris Brejon
Daniel Brylka
Willem Nagtglas
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: I have an update from the OCIO TSC. Nick has a minor Blink change.
  • Nick Shaw: It's been noticed that our Blink didn't work in Nuke 15.1. Foundry changed matrix handling, so inversion happens in place. So now we copy the matrix before inverting. There is also a bug in 15.1 with float3 arrays, so they've been changed to float4, even though only three channels are used. Thanks to Mark Titchener at Foundry, who worked with me on these issues. Alex has merged my PR with the changes, so v60 is now compatible with Nuke 15.0, 15.1 and the 16.0 beta.
  • Alex Fry: Should we rename it from v60 to match the CTL?
  • Kevin Wheatley: Maybe when we make the next release candidate, as long as it's updated to match that. At the OCIO TSC meeting this week we felt getting substantial changes into the next point release was too tight, so we'll do another point release in January, when we'll make more changes. The December release won't change appearance or include embedded configs, and will still be a preview. The configs will be available for download. OCIO would prefer it if other implementers commented on changes and optimizations. We are only an architecture group, without implementer involvement, except OCIO. We would like a single optimal rendering.
  • Alex Fry: That's 2.4.1 in December?
  • Kevin Wheatley: Yes, and 2.4.2 in mid January.
  • Nick Shaw: Are OCIO wanting ACES to organize liaising between implementers? Are we talking about setting up an implementation group?
  • Kevin Wheatley: Carol expressed that desire in the past. OCIO felt uncomfortable being the only ones choosing the direction of travel.
  • Nick Shaw: It's hard to get the Resolve team involved due to time zones. But it would be good if they could join. Pomfort might join as they have an implementation.
  • Kevin Wheatley: We don't necessarily need a meeting. Just a forum of some kind.
  • Scott Dyer: We need to discuss how we roll out changes if they alter pixel values. Others have implemented ACES 2 without performance problems. We don't want them to have to redo things unless we can clearly communicate why it's necessary.
  • Nick Shaw: Are there real time implementations already? Pomfort bake LUTs, so they don't need real-time video processing.
  • Scott Dyer: People we spoke to have made pre-baked tables for each target from the init(), and then the processing is GPU shaders.
  • Nick Shaw: That's what my DCTL does. The lookups are pre-baked in Python. Is it worth setting up a Slack channel for implementers? That avoids time zone issues.
  • Kevin Wheatley: Doug said some of OCIO's customers had said the GPU render was "over budget". We don't know exactly what that means. What hardware?
  • Nick Shaw: Would these people be prepared to talk about their issues publicly?
  • Kevin Wheatley: OCIO code is public, so we are happy to get involved.
  • Rémi Achard: The OCIO GPU code can play 4K or 8K without too many issues. It depends on the hardware and what other processing you need to do. Game engines may have tighter budgets.
  • Nick Shaw: As Doug said, game engines already take short cuts with ACES 1.
  • Kevin Wheatley: Not all game engines are used for games, so in some scenarios it may need to be more accurate. With my changes you probably don't notice the difference in SDR, but in HDR you probably would. If VFX and DI don't match, it's a problem.
  • Alex Fry: That was the case with early OCIO ACES.
  • Kevin Wheatley: LUTs vs functional is one thing, but two functional implementations that differ is more of a problem.
  • Alex Fry: The other discussion was collapsing the tone scale into a 1D LUT.
  • Kevin Wheatley: That shouldn't affect the look, but needs code to find a fit.
  • Nick Shaw: I was supposed to look into the domains for those, but didn't have time.
  • Kevin Wheatley: Maybe an AI could find optimizations there. It's odd a couple of power calls are causing such a problem.
  • Rémi Achard: That was mostly on the CPU. I upgraded my system and now it's 3x slower than ACES 1. Eric from Epic did some GPU profiling.
  • Nick Shaw: Simple refactoring of powers to exp and natural log is something a compiler may do anyway.
  • Kevin Wheatley: Sort of. It's possible to do things a compiler won't because you know what the data it needs to handle is.
  • Rémi Achard: I also created a lookup to remove the binary search, with an inverse hue-to-index lookup. Worst case it's one off on the index. It's about 20% faster.
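[Illustration: a sketch of the inverse-lookup idea Rémi describes, assuming an inverse table built at initialization whose guess is at most one interval off, so a single comparison corrects it. Names are illustrative.]

    #include <vector>

    // hues: sorted, non-uniform hue samples. invIndex: built at init, mapping
    // uniform hue buckets to a guessed interval index (assumed within one).
    size_t interval_by_inverse_lut(const std::vector<float>& hues,
                                   const std::vector<size_t>& invIndex, float h) {
        size_t i = invIndex[static_cast<size_t>(h * invIndex.size() / 360.0f)];
        if (h < hues[i]) --i;                                   // one step down
        else if (i + 1 < hues.size() && h >= hues[i + 1]) ++i;  // one step up
        return i;
    }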
  • Kevin Wheatley: The other discussion is the meeting schedule, with Christmas coming up. Any other items? Documentation?
  • Nick Shaw: Should we write to implementers to see if they could make one of our meeting slots?
  • Scott Dyer: I can reach out to them. Do we want to use this meeting slot?
  • Alex Fry: The 11th of December is good. I can't do next week.
  • Scott Dyer: I'll reach out. And we can summarize the meeting for those who can't attend.
  • Kevin Wheatley: We could offer not to record the meeting if there are privacy concerns.
  • Nick Shaw: Although the meetings are public and anyone can join.
  • Kevin Wheatley: We can have separate discussions if we need to. There may be issues for publicly listed companies. We also need to look at documentation, to make sure it's up to date with the code.
  • Alex Fry: Where can we see the current docs?
  • Scott Dyer: They are on draftdocs.acescentral.com. It's very much a work in progress. It has Pekka's chroma compression text, and Nick's on gamut compression.
  • Nick Shaw: The source is all in a GitHub repo. And there's a Docker container for running the server locally.
  • Scott Dyer: It's all in Markdown with plugins like Mermaid for the flow charts. If anybody would like to contribute any content feel free. We'll meet next on the 11th. I'll post on ACES Central, and liaise with Steve to contact implementers.
  • Alex Fry: I'm doing a talk on ACES 2 at SIGGRAPH Asia next week. If anybody has any useful material let me know.
  • Scott Dyer: I've put together a package on making LMTs, and Nick helped me work out how to capture the grades as a CLF. We'll contact some selected colorists directly, but also post on ACES Central. We hope to publish a number of them in an aces-looks repo as examples. We'll break them down as much as possible into separate process nodes in CLF.
  • Alex Fry: What's the ideal log space to represent that? We know ACEScct doesn't cover enough range. What about the original ACES log?
  • Nick Shaw: We looked at that before, but I forget why we thought it was unsuitable.
  • Scott Dyer: That goes up to 32768.
  • Alex Fry: What happened with the new ACES log discussions?
  • Scott Dyer: We decided we didn't need a new ACES log. We decided to deprecate ACEScc and ACESproxy, as they confused people.
  • Kevin Wheatley: So next meeting in two weeks, hopefully with some implementers.

Meeting #172, November 13th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Christopher Jerome
Willem Nagtglas
Doug Walker

Meeting Notes

  • Scott Dyer: We are well into implementation. I have heard from some vendors about their status, but some are waiting for OCIO. I know there are issues with speed.
  • Doug Walker: OCIO 2.4 shipped at the end of September with preview ACES 2 support by Rémi. It is at least 3x slower than ACES 1. People want it to be comparable as they need some processing for other operations. We are on hold waiting on Kevin's optimizations. Our goal is to ship a more performant OCIO 2.4.1 in December.
  • Nick Shaw: Is comparable performance realistic, given 2.0 is much more complex?
  • Doug Walker: Maybe with the current algorithm. We want to avoid 3D LUTs. 2x or 1.5x would be ok.
  • Scott Dyer: Do Kevin's changes make it that much faster?
  • Rémi Achard: He was optimizing the cusp table sampling and reducing the search to two iterations.
  • Nick Shaw: He now doesn't iterate, because it is hard-coded to two fixed iterations.
  • Rémi Achard: This is mainly for the GPU. For CPU and GPU the number of power functions are a concern.
  • Alex Fry: Is even 1.5x optimistic? Or could the original RRT/ODT have been more optimized?
  • Doug Walker: The red modifier and glow added complexity for minimal effect.
  • Rémi Achard: Real-time issues are for the GPU, correct? Game engines etc?
  • Nick Shaw: Don't game engines already take short cuts and not exactly match ACES?
  • Doug Walker: And that's an issue if something is called ACES but doesn't match it. They drop the red modifier and glow, and maybe the surround compensation. That could get worse with ACES 2.
  • Nick Shaw: If your source is known and bounded you could even drop the whole gamut compression.
  • Scott Dyer: Kevin is traveling for work in a different time zone. I'm not sure when he is back.
  • Doug Walker: We need him for the table optimizations, but we could look at the tone curve. We are doing unnecessary power functions to convert back and forth. We could look at that and maybe do a 1D LUT or spline.
  • Nick Shaw: It is a purely 1D function in J. We only convert back and forth because the tone scale is defined in luminance.
  • Alex Fry: The tone scale was always independent of the CAM back when we had ZCAM. Is a 1D LUT an option?
  • Doug Walker: In OCIO we are trying to avoid LUTs, for various reasons, but it's an option in the short term.
  • Nick Shaw: We already use lookups for the cusps. It's not like one 3D LUT but we use LUTs. 1D LUTs are pretty accurate.
  • Doug Walker: What range do we need to cover and in what space?
  • Nick Shaw: J is already perceptual, so a reasonable space to apply it in. Does the tone curve go below zero? I think it clamps at zero. It goes at a shallow slope through limitJmax.
  • Doug Walker: So maybe zero to 10% above limitJmax?
  • Nick Shaw: It would be whatever the input J is that maps to limitJmax. That would be easy to calculate. Maybe add a bit and then invert the tone curve. I can investigate that using my Y_to_J and J_to_Y.
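[Illustration: a sketch of the 1D LUT option under discussion: tabulate the tone scale as a function of J over [0, Jmax] at initialization, then evaluate with linear interpolation per pixel. The tone scale function here is a stand-in, not the ACES 2 curve.]

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Stand-in for the real J-domain tone scale.
    static float tonescale_J(float J) { return 100.0f * J / (J + 12.0f); }

    struct ToneScaleLut {
        std::vector<float> v;
        float Jmax;
        ToneScaleLut(int n, float jmax) : v(n), Jmax(jmax) {
            for (int i = 0; i < n; ++i)
                v[i] = tonescale_J(Jmax * i / (n - 1.0f));  // sample at init
        }
        float eval(float J) const {                         // lerp per pixel
            float x = std::clamp(J / Jmax, 0.0f, 1.0f) * (v.size() - 1);
            size_t i = static_cast<size_t>(x);
            size_t j = std::min(i + 1, v.size() - 1);
            return v[i] + (x - i) * (v[j] - v[i]);
        }
    };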
  • Rémi Achard: I was also wondering if we could use a 1D LUT in the cusp lookup, making an inverse 1D LUT at initialization. Maybe wait for Kevin.
  • Nick Shaw: His samples are evenly spaced in each sextant between each primary and secondary, so the initial guess is more accurate, and one more step can find the right interval to lerp. And he only needs one lerp t value because all the values are sampled at the same hues. It should be a significant speed improvement.
  • Rémi Achard: There is also the possibility of using the GPU's built in interpolator. It reduces precision, so we need a metric for acceptability.
  • Nick Shaw: If it's not visible according to e.g. Delta E ITP, it's probably ok.
  • Alex Fry: Did you find out what an acceptable tolerance is?
  • Nick Shaw: According to the Dolby documentation, a value of 1.0 means a difference **may** be noticeable, not that it will be. So I think we need a threshold higher than 1.0, particularly for HDR.
  • Scott Dyer: We put a lot of effort into making it not do things ACES 1 did, but now some people want those things! Especially fire going yellow in ACES 1. So we need an LMT which undoes hue preservation. I know people made black body images for testing. If anybody has those samples please send them.
  • Nick Shaw: Don't people still have the option of using ACES 1?
  • Scott Dyer: They want the advantages of ACES 2 (the HDR/SDR match) but the look of ACES 1.
  • Nick Shaw: Isn't fire going yellow in ACES 1 specific to SDR? HDR is different.
  • Alex Fry: For LEGO Movie 2 and Ultraman we made an LMT which mapped the hue skews of SDR onto the HDR output, by using SDR chromaticity coordinates in HDR. I can see why people want that. Sunsets and fire come up in conversations I have about ACES 2, and I tell people they need that to be in the data, because the rendering doesn't do it. It's obviously easier for full CG. Could you do a luminance constrained slice where you bend the hue a bit?
  • Scott Dyer: Probably. But I need more examples.
  • Alex Fry: We've discussed this before. If you look at fire it does seem initially to go yellow, and tail lights go white at first then eventually back to red. How much is real?
  • Doug Walker: Another thing people have brought up is the lower contrast, so it looks less punchy out of the box.
  • Alex Fry: That's tough, because the feedback from the RICD was that ACES 1 was too contrasty.
  • Doug Walker: There was talk of a look library. Is that still planned? And what is the timeframe?
  • Scott Dyer: The question is how to do that. How do we capture that if we just let colorists make looks? A 3D LUT is simple, but limited.
  • Alex Fry: ACEScct is a reasonable range.
  • Nick Shaw: But it doesn't cover the HDR range of our transform.
  • Alex Fry: That's what people are using for now.
  • Scott Dyer: The bulk of their image data should be in that range. And color correction tools don't line up with CLF operators. Ideally we would make a CLF constrained color corrector.
  • Nick Shaw: I looked at making DCTLs for each CLF operator.
  • Scott Dyer: I've done some. But it's not a toolset that colorists will like. I already have the 1D LMT to match the contrast of ACES 1. We could use inversion to make LMTs that exactly match other renderings. But it would be better to have analytic LMTs that match their general appearance. I need to work out the best way to get something from colorists to make best use of their time if they volunteer it. Maybe we need an LMT working group. I know Pekka had some LMTs.
  • Alex Fry: A Resolve setup is still useful to get from somebody, and we can unpick it. Or make a 3D LUT.
  • Nick Shaw: You could break it down into a series of 1D and 3D LUTs.
  • Scott Dyer: It's on the road map, but we couldn't do it until the transforms were locked.
  • Doug Walker: Another thing I noticed is that the very negative values on the synthetic chart become positive in the rendering. In ACES 1 we added a clamp to make those black.
  • Nick Shaw: We clamp at the start to AP1, which should remove negatives.
  • Doug Walker: Large negatives in AP0 can become positive in some channels in AP1, so wouldn't be clamped.
  • Scott Dyer: It only seems to be happening with the top ramps which I think are P3.
  • Nick Shaw: Those kind of values are unlikely to occur in a real image. We could add an AP0 clamp, but it would probably be just for that test image. Clamping negatives in AP0 does make that bit of the chart black.
  • Scott Dyer: If we want changes like that I could include it in my next update with a few fixes.
  • Scott Dyer: It would probably have the largest effect on things like blue LEDs in AWG3 that are outside AP0.
  • Alex Fry: I'm going to be talking to people about ACES 2, so what are the VFX platform dates?
  • Doug Walker: It's on a calendar year basis, so VFX platform 2025 was finalized last summer. The libraries usually release just after SIGGRAPH. For us that was v2.4. VFX platform says 2.4.x for OCIO, and we plan to release 2.4.1 in December, so it's available for products shipping in early 2025. Then there will be a 2.4.2 and patch releases later in the year. 2.4 already has a preview release of ACES 2, as VFX platform 2025 says ACES 2. We would like to get any fixes in we can for 2.4.1 and also include built in configs for ACES 2.
  • Scott Dyer: We will provisionally make the next meeting in two weeks, unless Kevin is back and there is a need next week.
  • Doug Walker: In the meantime feel free to post in the OCIO Slack.

Meeting #171, October 30th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Meeting Notes

  • Alex Fry: Nick and I have both made tools to make Delta E ITP heat-maps to compare implementations.
  • Nick Shaw: Alex's is better commented and probably more robust than mine. We were just discussing whether Colour's RGB_to_ICtCp function expects linear or PQ input. Looking at ITU-R BT.2100, the input to ICtCp encoding has RGB without prime symbols.
  • Alex Fry: Neither BT.2100 nor the Colour documentation are completely clear.
  • Nick Shaw: The function includes applying a PQ curve, so I think the input must be linear.
  • Alex Fry: So I should be passing in absolute nits.
  • Nick Shaw: My code takes non-linear input and linearizes it.
  • Alex Fry: Now I've changed my code to use absolute nits, I'm seeing a lot of Delta E values around 1.2 which seems a bit high.
  • Nick Shaw: I'm seeing similar values. What are you comparing?
  • Alex Fry: CTL vs my LUT implementation. I can't see differences visually. Over 1 I feel I should see something.
  • Nick Shaw: Between Blink v60 and CTL I got values of around 1.3 maximum. But comparing Kevin's version with the CTL they were under 1 for SDR but a maximum of ~9 for HDR. We should ask Thomas what the intent of his code is. But also we could do a test where we grade an image really subtly until we can just see differences, and then see what size Delta E ITP we get.
[Nick showed BT.2100 and his code running on SDR and HDR images]
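For reference, a minimal sketch of this kind of heat-map (not either of the actual tools), assuming linear Rec.2020 input in absolute nits as discussed above, colour-science's RGB_to_ICtCp, and the Rec. ITU-R BT.2124 formula:

```python
import numpy as np
import colour

def delta_E_ITP_map(rgb_a, rgb_b):
    # rgb_a, rgb_b: linear Rec.2020 images in absolute nits (assumed domain).
    itp_a = colour.RGB_to_ICtCp(rgb_a)
    itp_b = colour.RGB_to_ICtCp(rgb_b)
    # BT.2124: T = 0.5 * Ct, dEITP = 720 * sqrt(dI^2 + dT^2 + dP^2)
    d = (itp_a - itp_b) * np.array([1.0, 0.5, 1.0])
    return 720.0 * np.sqrt(np.sum(d * d, axis=-1))
```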
  • Scott Dyer: OCIO have been working on making the algorithm faster. They will have questions for us about the intent of the algorithm. Rémi also found some NaN values from inverse compression and when he changed a > to >= it removed them. I think we should do the same in the CTL. Call it 2.0.1 or something.
  • Nick Shaw: Because I see Delta E values over 1, using 1 is too tight a tolerance. Maybe 2 is more reasonable.
  • Alex Fry: I see 10 with HDR.
  • Nick Shaw: That isn't unreasonable with a LUT implementation of HDR. Can you see differences?
  • Alex Fry: Yes, even in the raw PQ.
  • Nick Shaw: Kevin's optimizations that OCIO are looking at are producing similar values of ~10, which doesn't seem acceptable. So if those optimizations get used, they should be rolled back into the CTL as a change to the render.
  • Alex Fry: I should compare the Blink too.
  • Nick Shaw: I am comparing v60 Blink, Kevin's Blink and the CTL. Between v60 and the CTL the Delta E is less than 1. With Kevin's Blink you can see differences, particularly in HDR. It's not a different looking image, but you can see it when you A/B them.
  • Alex Fry: If OCIO are the only implementers, it feels what they do should get rolled in.
  • Nick Shaw: I believe FilmLight are waiting to see what OCIO do. If people get forced to use LUTs, because it's not performant enough, we should change the reference to be more performant, because the difference will probably still be less than the difference with a LUT.
  • Alex Fry: This is a natural back and forth.
  • Nick Shaw: I haven't done any testing with Rémi's OCIO implementation. I'll email Thomas and check what the intended input of RGB_to_ICtCp is.
  • Alex Fry: Presumably 1 is a JND in Delta E ITP.
  • Nick Shaw: I can ask Dolby. Jaclyn Pytlarz is behind Delta E ITP, I think.

Meeting #170, October 16th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Daniel Brylka
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: This week we have a post on ACES Central about a blue image, and we'll have some discussion about OCIO and possible optimizations.
  • Nick Shaw: It was an image multiplied by [0, 0, 1] in Rec.709 so entirely on the Rec.709 blue primary.
  • Kevin Wheatley: And through the DRT [with input appropriately interpreted] it produced artifacts.
  • Nick Shaw: A harsh edge into the path to white. In order to hit the blue corner we balloon a bit outside it and then clamp. It was mentioned in the thread that ARRI Reveal doesn't get near that corner, but that means it can be smoother. With ours you need to do something to smooth it.
  • Kevin Wheatley: Pure blue in Rec.709 is not pure blue in AP0 or AP1, but still produces an artifact. Disappointing, but not unexpected.
  • Nick Shaw: Is that an edge case? I don't see it happening with a photographic image, but maybe with CGI.
  • Kevin Wheatley: Maybe blue LEDs in an enclosed space.
  • Pekka Riikonen: You can color grade to it, as Chris did.
  • Nick Shaw: Or back off the grade to desaturate.
  • Pekka Riikonen: The transform is not the final rendering. It's a start point.
  • Kevin Wheatley: The RGC or similar reduces the artifacts.
  • Alex Fry: As Jed said in the thread, you can't be smooth and hit the corners.
  • Nick Shaw: The RGC makes it a little cyan.
  • Kevin Wheatley: Now OCIO 2.4.0 is released with an ACES 2.0 "technology preview" we can look at my optimizations, and see if those might apply to other implementations, and decide whether that means they should go in the CTL, or just be suggested optimizations. We didn't settle on tolerances for a satisfactory rendering. Might the optimizations be an acceptable implementation of the current CTL reference?
  • Remi Achard: We need to optimize the CPU path and also reduce the GPU path reliance on textures and multiple sampling of lookups.
  • Kevin Wheatley: My version has fewer iterations on the lookup, and pulls all the values from one location by combining tables. I even took one of the uniformly distributed tables and made it unevenly distributed to match the sample locations. So if the OCIO experiments with that make it more performant, but not quite within tolerances, do we roll the optimizations into the CTL?
  • Remi Achard: Just adding filtering on the GPU texture lookups would mean increasing the tolerance. I don't think ACES 1 OCIO diverged from the CTL.
  • Kevin Wheatley: I couldn't compare with the CTL because I had issues with the submodules cloning the repos.
  • Alex Fry: Same here.
  • Kevin Wheatley: What should tolerances be? Visually my version still matches. Some things are different, maybe even smoother, if you look closely.
  • Nick Shaw: Could it be your samples being more even in JMh hue?
  • Kevin Wheatley: My smoothness was near the peak white.
  • Nick Shaw: Do OCIO have tolerances they use?
  • Kevin Wheatley: Those are for comparing platforms or CPU/GPU.
  • Nick Shaw: We are rendering to display-referred outputs, so should we use a JND metric?
  • Kevin Wheatley: OCIO comparisons are just based on numerical differences.
  • Nick Shaw: If we just used a tolerance of x code values, that would mean something different for BT.1886 than for PQ. We could consider relative difference in linear light, like CLF.
  • Kevin Wheatley: Only delta E ICtCp works for HDR.
  • Alex Fry: If it exists in Colour that would be easiest.
  • Kevin Wheatley: There was a fancy one presented at SIGGRAPH.
  • Nick Shaw: That looks like more of a metric for lossy compression. We could try delta E ICtCp and see if it agrees with our judgment that there is no visible difference.
  • Kevin Wheatley: We can run it on the reference CTL, OCIO and my implementation.
  • Remi Achard: I couldn't compare my version with the CTL because it produces some NaNs, which I added fixes for.
  • Nick Shaw: Can you open issues for your fixes?
  • Remi Achard: One optimization Doug wanted to try was fitting a spline for the tone scale so it could be applied in J without going back and forth to Y.
  • Nick Shaw: Could that be parameterized to work at any peak luminance? Ideally you would simplify the maths so the tone scale just worked in J.
  • Kevin Wheatley: If there is nothing else, when should we meet next?
  • Alex Fry: Two weeks' time?
  • Kevin Wheatley: The US won't have switched time by then, but the UK will.

Meeting #169, September 11th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Joshua Pines
Pekka Riikonen
Cesario Tio
Doug Walker

Meeting Notes

  • Kevin Wheatley: This week we’re mostly focusing on any OCIO implementation issues that have come up. There’ve been OCIO Slack discussions about the test images. Scott confirmed they are all up to date. Is everything as you expect, Rémi?
  • Rémi Achard: Thanks Scott. It all seems good. I've been comparing the still life and synthetic chart renders. Now I need to move to the cube and CMS images for round trip testing. My comparisons were thrown off because I was using half instead of float. But I think it's good now.
  • Kevin Wheatley: The round trip is cleaner in 32-bit than if you truncate the result of the inverse to half-float. But that’s what you have to do for comparison with the reference images. But there is noise then.
  • Nick Shaw: Do you mean visible noise, or noisy variations relative to a perfect round trip?
  • Kevin Wheatley: Precision issues in difference comparisons to the original, not necessarily visible. But that ‘noise’ is in the reference renders due to 16-bit quantization of the intermediate image, and shows as patterns. It’s useful to do a 32-bit round trip as well to check your implementation.
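A sketch of the round-trip procedure Kevin describes, with forward and inverse as stand-ins for an implementation's transform pair:

```python
import numpy as np

def round_trip(forward, inverse, display_img, use_half=True):
    aces = inverse(display_img)
    if use_half:
        # Match the 16-bit quantization of the reference intermediate images;
        # set use_half=False for the cleaner 32-bit sanity check.
        aces = aces.astype(np.float16).astype(np.float32)
    return forward(aces)
```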
  • Joshua Pines: We have tested the transform on a real show, because a colorist looked at it and thought it was great. We needed to make an LMT by inversion to match a show LUT. If we invert mathematically it’s great, but when we bake a LUT we found artifacts.
[Josh showed an image illustrating the artifact.]
  • Joshua Pines: You can see a blue artifact in the highlights. It’s better with a 65^3 LUT, but still there. Scott tested doing it mathematically and the artifact wasn’t there. Scott had made me a D60 transform, and maybe that’s part of the issue. We’re looking into it.
  • Kevin Wheatley: Is that a Rec.709 BT.1886 rendering?
  • Joshua Pines: It’s P3-D60 gamma 2.6. Our LUT goes to that. So we went through the P3-D60 inverse and back through the forward transform.
  • Alex Fry: Was the inverse you used a LUT?
  • Joshua Pines: I built an LMT as a concatenation of the inverse and my LUT. I had the inverse as a LUT. I think I can share the image.
  • Alex Fry: It looks like what you might expect using LUTs. We may have to advise people they need a mathematical inverse for this kind of thing. My gut says if you made your LMT LUT by concatenating a procedural inverse with your LUT it would behave better.
  • Joshua Pines: I would be very happy if all color corrector manufacturers implemented these inverses mathematically.
[Kevin showed an image of a diff with a 32-bit and 16-bit round trip]
  • Kevin Wheatley: This is with my own implementation, not the official one.
  • Nick Shaw: I imagine this is caused by the inverse taking extreme values in the display space to very extreme values in scene space, which are then quantized.
  • Kevin Wheatley: That’s possible. I could investigate that. Certainly these things are more likely to occur nearer the gamut boundary.
  • Doug Walker: Thanks Scott for preparing those test images.
  • Kevin Wheatley: I’m going to be away for a few weeks due to holiday then work.
  • Scott Dyer: I may actually be back by next Wednesday, unexpectedly.
  • Kevin Wheatley: To meet VFX Reference Platform requirements we have to release OCIO v2.4.0 at the end of this month. I assume Rémi’s work will be in that. It will be labelled a preview/beta. Hopefully there will be OCIO 2.4.1 at the end of the year, which will be the real release that applications will ship with.
  • Kevin Wheatley: So what should the meeting schedule be from now? Do OCIO need particular input from us, or can we reduce the meeting frequency or make them ad hoc?
  • Doug Walker: We can meet as needed, but we do need to finalize by the end of the year. So we don’t want to stall progress.
  • Nick Shaw: This meeting slot will still be available, and I assume most of the regular attendees are free at this time. OCIO can let us know before each Wednesday if a meeting is needed.
  • Kevin Wheatley: And if any other vendors want a meeting we can have one.
  • Nick Shaw: I know Alex and Steve are meeting people at IBC. I can also ask any developers I speak to what their plans are.
  • Alex Fry: I need to expand the test config and bake it using CTL instead of Blink.
  • Kevin Wheatley: It would be interesting to see if that affects Josh’s issue.
  • Alex Fry: I expect it to remain if you use LUTs.
  • Joshua Pines: In my experience it’s cleaner if you keep the LUTs separate rather than baking a concatenated LUT. It would be useful to have a note saying “If building LMTs do it like this not that.”
  • Nick Shaw: If Kevin is away for a couple of weeks, it makes sense for us to just let OCIO get on with their work, unless they need specific help.
  • Rémi Achard: We have a TSC meeting on Monday, and can decide if we need to have a chat.

Meeting #168, September 4th, 1pm PT

Attendees

Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: Alex can't be here today. We will discuss the test images, an artifact Scott found, and the future meeting cadence. Scott will discuss another release on GitHub. Doug asked if we have heard from any implementers other than OCIO. I haven't.
  • Scott Dyer: We've had people who have baked LUTs and demoed the transforms that way, but nobody has talked about a native implementation yet.
  • Doug Walker: Anything planned for IBC?
  • Scott Dyer: Steve and Alex will be there, and have meetings with vendors. At IBC we'll know more about people's roadmap, and maybe we can plan a plugfest in future.
  • Nick Shaw: I'll be at IBC and can join meetings if it's useful, subject to my availability.
  • Kevin Wheatley: I'm keen to know what people have done with LUTs for HDR.
  • Nick Shaw: An ACEScct shaper works ok for normal images, but it won't hit the HDR peak.
  • Doug Walker: It would be good to know more about people's plans.
  • Kevin Wheatley: If everybody is having to make similar compromises, maybe those could be folded back into the reference.
  • Doug Walker: Interesting to know if others are struggling with the same issues OCIO are.
  • Scott Dyer: The artifact I saw came from the lower hull gamma for HDR peaks. Pekka came up with a one line fix.
[Kevin showed the cube face image through various inverse round trips]
  • Kevin Wheatley: It works fine for SDR, but through the P3-D65 1000 nit inverse it shows artifacts after a round trip.
  • Doug Walker: The input values are the surface of the P3 gamut at 1000 nits?
  • Kevin Wheatley: Yes. I recreated it in my implementation and Pekka and I agreed it was the lower hull gamma. We could use a lookup for the lower hull gamma as I do, but that adds another lookup. Pekka found a scale for the gamma with peak luminance which fixed it. It's calculated once at init.
  • Pekka Riikonen: I don't know if it's really the lower hull gamma or the precision of the intersection approximation. But changing the gamma is the simplest fix. I found the gamma needed for a fairly clean inversion at 1000 nits. It never perfectly inverts at high luminance. I then increase the gamma with luminance in proportion to the log of peak luminance, to be 1.21 at 1000 and 1.28 at 10,000. I found that solving for lower hull gamma increases noise.
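Pekka's two anchor values imply a gamma that is linear in the log of peak luminance; a sketch of one scaling consistent with them (the exact form in the commit may differ), computed once at init:

```python
import math

def lower_hull_gamma(peak_nits):
    # Linear in log10(peak): 1.21 at 1,000 nits, 1.28 at 10,000 nits.
    return 1.21 + 0.07 * (math.log10(peak_nits) - 3.0)
```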
[Scott showed the commit]
  • Scott Dyer: I re-rendered all the inverse D65 transforms after the changes. And then I rendered the results through the forward transforms. I scaled the cube to a PQ value of 100, 500, 1000 etc. The source, transform and destination are listed in this spreadsheet. They are visually indistinguishable from the source, but the code values are a little more different than you might think.
  • Kevin Wheatley: If you look at the values in 3D there's a cluster of errors along each plane where one channel is zero, which suggests precision issues. Issues near peak white, and in the cyan and red corners. That's where you might expect errors.
  • Nick Shaw: And those errors are baked into the reference renders, which is what implementers will compare to.
  • Scott Dyer: Yes. I was comparing to the original, which shows the imperfection of the algorithm.
  • Kevin Wheatley: If you truncate the inverted ACES values to half-float, the round trip errors are exaggerated.
  • Nick Shaw: We need to be clear in the test documentation that's what you have to do to match the CTL reference images.
  • Scott Dyer: Are these reference files what you need Doug?
  • Doug Walker: They will be extremely helpful. So you've found suitable scalings of the test images to explore relevant areas of the color space?
  • Scott Dyer: Yes. For a PQ 1000 transform, anything with a PQ value over 0.75 won't be produced by the transform, so can't be inverted. I've added a clamp to the inverse to limit it to values that will invert. I'm working on documentation listing and describing the tests.
  • Doug Walker: The e.g. 1000 nit scaled cube images are suitable to test all 1000 nit renders, with any encoding primaries. If there are other images people need we can run the CTL on those, and also provide expected output values for particular input values. I posted about what I've made on ACES Central. To recap, we've re-organized the CTL in the repos. It's under github.com/ampas/aces. aces-dev has become aces-core. There are links to aces-look and aces-output. There will be tagged releases which are snapshots of ACES at a particular time. But adding e.g. a new IDT does not mean it is a new version of ACES. There is a look which is just the contrast curve of ACES 1 as a 1D LUT. I will add a LUT based LMT to match the whole ACES 1 look. Then I want to call it done, and any further changes will be point updates.
  • Doug Walker: What about the NaNs that Rémi found in the Marcie image?
  • Scott Dyer: I also found it rendering the synthetic chart. The precision of a pair of forward and inverse matrices means they don't quite produce an identity and you get small negatives. I added the clamp back in for PQ encoding which we previously removed.
  • Nick Shaw: Shouldn't we put it in all EOTFs? The error could manifest differently on different systems. Just clamping off negatives before the inverse EOTF.
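An illustration of the failure mode and the fix: a tiny negative left over from matrix round-off becomes NaN through a pure 2.6 power encoding, so negatives are clamped off just before the inverse EOTF:

```python
import numpy as np

rgb = np.array([-1e-7, 0.18, 1.0])          # round-off negative in channel 0
print(rgb ** (1.0 / 2.6))                   # -> [nan, ...]
print(np.maximum(rgb, 0.0) ** (1.0 / 2.6))  # clamp first -> [0.0, ...]
```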
  • Scott Dyer: I'll make those final changes and then update the repo and make a post. Then I will be out for a few weeks, so I want to leave it usable for people. Please report any issues you find.
  • Kevin Wheatley: So this could be the final 2.0 release, which means this meeting becomes triaging issues that arise. How often do we need to meet? Documentation can be done in the background.
  • Nick Shaw: Do we transition to an implementation group for triaging and documentation?
  • Scott Dyer: I'll reach out to individual people for documentation as needed. We can meet less often.
  • Doug Walker: OCIO need to ship our preview release of ACES 2.0 by the end of this month. So we probably need a weekly OCIO specific meeting.
  • Nick Shaw: We could use this slot if it works for the others from OCIO. Let's provisionally say we'll meet this time next week and take it from there for future meetings.

Meeting #167, August 28th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: Working on my version I realized some implications of my optimizations. I’ve been able to eliminate the while loop in the hue lookup. Two iterations is always enough, so I’ve unrolled the loop. I’ve also eliminated if statements which helps optimization with some compilers, and should make things faster. I also had a discussion with Rémi about my changes and the OCIO implementation.
  • Rémi Achard: For OCIO we need to decide how we split the implementation into different ops. That means duplicating some code and textures, because for example the chroma compression and gamut compression use the reach M lookup. We also discussed Kevin’s strategy to efficiently sample the hue.
  • Nick Shaw: Kevin, I assume your new efficiency only works with your more evenly spaced hue samples.
  • Kevin Wheatley: Yes. Only then can I be sure how many iterations need to be unrolled. For the full version you would have to unroll it nine times.
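A sketch of why the unroll works (assumed code, not Kevin's actual branch): with near-even hue spacing the initial guess lands within a couple of intervals of the target, so a fixed two refinement steps can replace the while loop:

```python
def find_hue_interval(h, hues):
    # hues: sorted samples covering [0, 360), near-evenly spaced (assumed).
    n = len(hues)
    i = min(int(h / 360.0 * (n - 1)), n - 2)  # guess assuming even spacing
    for _ in range(2):  # "two iterations is always enough"; unrolled in practice
        if i > 0 and hues[i] > h:
            i -= 1
        elif i < n - 2 and hues[i + 1] <= h:
            i += 1
    return i            # afterwards hues[i] <= h < hues[i + 1]
```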
  • Rémi Achard: Eric from Epic shared some profiling from NVIDIA. The gamut mapping was taking 50% of GPU time. Also the cusp table lookup took a lot of the time.
  • Alex Fry: A few people wanted to try the latest builds so I pushed out new bakes using ACEScct as the shaper. It improves some wiggles, but doesn’t help the top end issue where ACEScct doesn’t go high enough.
  • Nick Shaw: Because ACEScct covers a small negative range, and we clamp to AP1, the interval between two vertices in the cube will cover a range from negative to positive, but there won’t be a vertex at zero where the clamp happens. Ideally we would have a shaper where zero was zero, so the shaper did the clamp.
  • Doug Walker: As long as you have a grid point at zero.
  • Nick Shaw: True, but the clamp means grid points which represent negative input are wasted, so isn’t it better that the first sample is at zero?
  • Doug Walker: Sometimes it’s better to have samples outside the domain to prevent clamping.
  • Kevin Wheatley: We clamp right at the start, so a first sample at zero does the clamp in the shaper. We don’t have a LUT solution for higher nit values. You need a different shaper completely.
  • Alex Fry: It depends how much effort it’s worth putting into those configs.
  • Kevin Wheatley: Even if OCIO 2.0 came out next week it wouldn’t be in DCCs for some time, so the baked solutions will be needed for a while.
  • Alex Fry: There’s also a case to expand it to include things like DCI and others. Should the config reflect the whole set?
  • Kevin Wheatley: It’s a bit beyond our remit. The next discussion is about test images and the test process. Scott did you get a chance to implement what we discussed after the last meeting.
  • Scott Dyer: I’ve been tinkering but it’s not finished. I’m not sure what the expectations are for some cases. I’ve scaled and re-encoded the charts to ACES 2065-1. We talked about scaling the charts by 8x r_hit for the peak luminances. What output do we expect? I see what I expect on a chromaticity diagram, but most pixels in the output are white.
  • Kevin Wheatley: The cube faces should trace the output of the limiting gamut. But I started with no scale and as you scale it up it walks up the hull. Treating it like ACEScct would distribute it more uniformly and give you more samples at the bottom.
  • Nick Shaw: Could you treat it as ACEScct and then only multiply by 8x r_hit over 220 [actually 222.86 = ACEScct_to_lin(1.0)] so you get a log distribution with a max of 1024 or 4096?
  • Scott Dyer: I could try that.
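A sketch of that scaling, using the ACEScct decode constants from S-2016-001; the r_hit value is a placeholder:

```python
import numpy as np

def acescct_to_lin(cct):
    cct = np.asarray(cct, dtype=np.float64)
    return np.where(cct > 0.155251141552511,
                    2.0 ** (cct * 17.52 - 9.72),
                    (cct - 0.0729055341958355) / 10.5402377416545)

def scaled_test_input(img_cct, r_hit=128.0):  # r_hit value is illustrative
    # Code value 1.0 decodes to 222.86; rescale so it lands at 8 x r_hit.
    return acescct_to_lin(img_cct) * (8.0 * r_hit) / 222.86
```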
  • Kevin Wheatley: Without a suitable distribution of samples you would end up with a lot of white. Last week we discussed the negatives in the P3-D60 transform. Nick looked into it and wondered if it was due to precision issues in forward and inverse matrices. Slightly negative values could generate NaNs when fed into a gamma function.
  • Nick Shaw: I was testing in Nuke, where negatives are passed through by a gamma op. But different things have varying behavior.
  • Doug Walker: What size of negatives are we looking at?
  • Nick Shaw: 10^-7 or so.
  • Doug Walker: That could be expected with single precision matrix calculations.
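An illustration of the scale of the effect (an arbitrary matrix, not the actual AP0/AP1 pair): a forward and inverse matrix in single precision don't compose to an exact identity, and the ~1e-7 residuals can push channels that should be zero slightly negative:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.5, (3, 3)).astype(np.float32)      # stand-in matrix
M_inv = np.linalg.inv(M.astype(np.float64)).astype(np.float32)

print(np.abs(M_inv @ M - np.eye(3, dtype=np.float32)).max())  # around 1e-7
print(M_inv @ (M @ np.float32([0.0, 0.0, 1.0])))  # zeros may come back ~ -1e-7
```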
  • Rémi Achard: We removed a clamp that was there before the gamma. Maybe we should put it back.
  • Nick Shaw: In some implementations the encoding may be external. Is it a problem if we pass negatives to those?
  • Kevin Wheatley: It’s a note to implementers, even if we clamp.
  • Rémi Achard: The NaNs only happened in the CTL. The OCIO was ok.
  • Doug Walker: The OCIO gamma has options for negatives to clamp, pass through or mirror.
  • Kevin Wheatley: What else do we need to do? We need to get the test images and procedures ready. Anything else for you, Rémi?
  • Rémi Achard: Just getting the parametric chroma compression norm in the CTL to make it official.
  • Kevin Wheatley: Texture entries are limited, and you need more textures. My approach of putting all the lookups on the same hues reduces that, and so does using the function instead of a table, limiting things to one texture.
  • Rémi Achard: It doesn’t change it visually, but does change it. The OCIO should match the CTL.
  • Kevin Wheatley: Any strong opinions?
  • Doug Walker: I think it should be part of this release.
  • Scott Dyer: Is it a drop in replacement for the function?
  • Kevin Wheatley: Yes, and then you could clean up the code that is no longer used.
  • Scott Dyer: I need to lock any changes before September 1st, when I will not be available for a while.
  • Pekka Riikonen: And the clamp?
  • Kevin Wheatley: That’s a bug fix.
  • Doug Walker: When do we think we’ll have the test images?
  • Scott Dyer: If we know the list of what we want I can make the images.
  • Nick Shaw: I can make any final tweaks after 1st September if needed. Can the images DropBox be shared with me?
  • Kevin Wheatley: The inverse/forward test is fairly obvious. You round trip and check you get approximately the same thing. The forward test is less clear. How many variations do we need?
  • Scott Dyer: We don’t have CTL that tests the various steps. Just the whole transform.
  • Doug Walker: I think we need a list of what tests we need to do, ignoring the images. Then we can work out what images we need for them.
[Scott showed a spreadsheet of the tests]
  • Kevin Wheatley: I think the inverse is in two parts. First whether the inverse produces values close to the AP1 boundary. Second does the forward transform map those back near where they started. We can work out how close that should be once we have built them.
  • Doug Walker: Some of these are design goals for the transform rather than implementation tests. Tests should just be “run this image through this transform and compare to this reference image”.
  • Scott Dyer: Those are just there from the original design goals.
  • Kevin Wheatley: They also help implementers work out where an implementation might be going wrong. E.g. if their tone scale is not working as intended. You can draw more information from the results than just if the numbers are wrong.
  • Doug Walker: Are the three things to run images through the forward transform, the inverse transform, and then the forward followed by the inverse?
  • Kevin Wheatley: That’s what the code lets you do currently. We can’t be sure implementers use the same blocks we do, so we can’t have tests that require people to e.g. evaluate the conversion to JMh. So it is those three.
  • Scott Dyer: I have tests which run 0.18, 1.0 and 2.0 through the forward transform, and I know what those should come out as. And they should invert back. We also need descriptions of what things represent so we can say “interpret this image as P3 code values, and if you plot it on a chromaticity plot this is what you should see.” The boundary of P3 or whatever.
  • Kevin Wheatley: I tried taking the cube faces image, interpreting it as ACEScct and running through the Rec.709 transform. All the upper values are near white, but not quite there. The bottom half of values fade to white as they get higher, but it's not an obvious image. Scaled to 8x r_hit it becomes solid white, so it's not really a useful test. We need to be clear if it's intended to produce a picture or a bunch of numbers for analysis.
  • Doug Walker: What can we get done before Scott leaves next week?
  • Kevin Wheatley: The easy one is treating the images as display referred and going backwards then forwards. To compare with somebody else’s implementation we just need to come up with an acceptable delta. Maybe one 10-bit code value. We can do it for the main renderings, but we don’t need to do it for all encodings. I think the still life and synthetic chart are enough for the forward transforms. And then the CMS and cube face images test the inverse and round trip.

Meeting #166, August 21st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Daniel Brylka
Willem Nagtglas
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: This week we want to lock down testing requirements for implementation validation.
  • Scott Dyer: The main branch of aces-output is updated to reflect our recent changes. Separated D60 and D65 directories. Last week I was confused about where the reference images were. There is now a directory in aces-output with a ReadMe with a DropBox link to the images and lists them all. It's a sub-directory of "tests" where we can add scripts and other things. Currently it's just the synthetic chart and still life. We need more automated tests for doing diffs. I have cube images Nick has generated, but they aren't uploaded yet while we work out exactly what tests are needed. So v2 dev release 2 is finalized.
  • Kevin Wheatley: Nick's images are one which samples the cube faces and another which is a dense mesh of the whole cube plus a denser tone scale. Those should be run through the inverse transforms, and be validated not to produce any negative AP1 values. And the result of that could be mapped through the forward transform to test the display limits. What we are unclear on is what those images should be scaled to as sources for the forward transform. Maybe our 8x r_hit scaling.
  • Scott Dyer: I've tested those for Rec.709 and 2.6 gamma P3. I got no NaNs.
  • Kevin Wheatley: It would be nice to have a way of knowing how close to the AP1 boundary those images get on inverse. We could look at a chromaticity plot. If we're happy we could bless those results as golden images. We need a script which scales the images based on peak luminance. Would it be useful to have images that were broken out as stages in the chain? Not every possible output.
  • Rémi Achard: I've been doing something like that. It could be good to have intermediate values for a set of interesting input values.
  • Kevin Wheatley: We could probably do that for Rec.709 100 nits and P3 1000 nits.
  • Nick Shaw: Would we be interpreting my cube image as ACEScct before scaling?
  • Kevin Wheatley: Just ACEScct isn't enough range. But it's a good second interpretation.
  • Nick Shaw: If you interpreted it as linear and then scaled it enough to hit the maximum you miss a lot of the important values. Maybe you could treat it as ACEScct and then scale it.
  • Kevin Wheatley: Interpreting as ACEScct would be good for testing reasonable input values, but we have real images too for that. We want to test the limits of input range.
  • Alex Fry: Does the old school ACES log cover the full range?
  • Kevin Wheatley: No camera makes an image that naturally hits max for HDR. You need to push it. Most log encodings don't go that high. It may be worth interpreting as AP0 to confirm clamping. These are unambiguous tests because nobody has a preconception of what these images should look like. I think we need to treat Nick's images as AP0, scaled AP1 and ACEScct, and do that for maybe Rec.709, P3 1000 and maybe Rec.2020.
  • Doug Walker: So we convert Nick's images to ACES2065-1 versions and add them to the test set, and Scott will process them through the CTLs?
  • Nick Shaw: That would let us pre-scale the HDR inputs.
  • Kevin Wheatley: Is there anything in an image the algorithm would be particularly prone to getting wrong? Is it worth using something like the dominant wavelength image, but adding the line of purples?
  • Nick Shaw: I have a Nuke script for that which includes the line of purples.
  • Alex Fry: Might be nice if it was spaced in JMh hue.
  • Rémi Achard: What about something with very small M values, because the code has some edge case checks? In my GPU implementation I found an issue where calling atan2() on very small values generates NaN instead of zero.
  • Pekka Riikonen: Does the gamut mapper still have the check for very small M values?
  • Nick Shaw: The atan2() is in the XYZ to JMh, not the gamut mapper.
  • Pekka Riikonen: It's true that even neutral values don't produce zero M.
  • Kevin Wheatley: In my version I tried to remove thresholded values. All I still have is in the gamut mapper: a check for the boundary M being less than zero when you find the intersection, which I never pinned down, and setting M to zero if J is greater than limitJmax. I have no near zero thresholds.
  • Pekka Riikonen: v60 had those, so the CTL does too. I also added setting M to zero if J is zero. But it may not be needed. It's not in the original model.
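A sketch of the kind of guard being discussed for the a, b to hue conversion; the threshold is an assumption:

```python
import math

EPS = 1e-12  # assumed threshold

def ab_to_hue(a, b):
    # Some GPU atan2 implementations can return NaN near (0, 0), so treat
    # effectively achromatic values as hue 0 instead.
    if abs(a) < EPS and abs(b) < EPS:
        return 0.0
    return math.degrees(math.atan2(b, a)) % 360.0
```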
  • Rémi Achard: Also in the latest CTL, which no longer has the clamp in the display encoding, I got NaNs for some values. The saturated patches on the DLAD (Marcie) produce very small negative values for P3-D60, for example, which then produce NaN when you apply the 2.6 gamma.
  • Kevin Wheatley: I don't see that with my code.
  • Scott Dyer: I can reproduce it in CTL. I'll trace it and see where NaNs get produced.
  • Doug Walker: With the inverse do we need to test what would happen to wider values that might be created in grading?

Meeting #165, August 14th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Jeffrey D Mathias
Eric Renaud-Houde
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: We have a report from Rémi on OCIO.
  • Rémi Achard: I've broken down the different view transforms needed for all the CTLs in the repo, and grouped the ones that use the same transform together in a spreadsheet. For the D60 one there are more. Mostly it's one view transform per CTL.
  • Kevin Wheatley: It's similar to Scott's spreadsheet.
  • Scott Dyer: That needs updating.
[Doug showed Rémi and Scott's spreadsheets]
  • Scott Dyer: Yes.
  • Rémi Achard: The linear scale parameter is used for two different purposes, DCDM normalization and the peak luminance scaling for cinema. I just wanted to check that's the case.
  • Kevin Wheatley: It's an explicit version of something that was buried in ACES 1. A question for the OCIO TSC is whether OCIO needs DCDM.
  • Rémi Achard: I see the 300 nit cinema version has no 48/52.37 scaling.
  • Nick Shaw: That's because it uses ST.2084, which already has headroom. But I see in the D60 tab scale white is on for all of them. Shouldn't it be off for DCDM, because the headroom means white fitting isn't needed?
  • Scott Dyer: That may be a typo in the CTL.
  • Nick Shaw: Do we need white fitting for D60 in D65 ST.2084 300 nits? I guess so, as we do in the other HDR ones, so no code value goes above the PQ encoding of 300 nits.
  • Kevin Wheatley: We should have a note in the documentation about that.
  • Doug Walker: There seem to be more view transforms needed in the D60 tab.
  • Nick Shaw: That's because a different white scaling is needed depending on the encoding primaries.
  • Rémi Achard: Even if they have the same white point.
  • Doug Walker: Is that everything that's in aces-output?
  • Rémi Achard: Yes. I just parsed the repo to build the spreadsheet. And these are all in my PR. I noticed there is no clear separation between cinema and video.
  • Doug Walker: I'm surprised there is no surround or brightness compensation.
  • Kevin Wheatley: That came from feedback that accepted practice is not to use any.
  • Nick Shaw: The dark to dim confused more people than it helped.
  • Doug Walker: OCIO need a conversation about the user facing names.
  • Kevin Wheatley: Rémi, how hard did you find the repo to parse? We hoped it would be simple.
  • Rémi Achard: Pretty simple.
  • Kevin Wheatley: Good. The intent was for the directory structure to be user facing, and developers would ignore it when parsing. But I wondered if we should do more to help extract the parameters.
  • Doug Walker: That's a question for Thomas Mansencal, who's making the config generator.
  • Rémi Achard: You can get most parameters from the transform ID, but not all. It was pretty easy to see from the CTL which ones were the same view transform. I ran an RGB triple through to see which matched.
  • Kevin Wheatley: A single value or a set?
  • Rémi Achard: I only used one, but I need to improve it.
  • Kevin Wheatley: A single value needs to be beyond the gamut mapper threshold.
  • Nick Shaw: True. Rec.709 limited 2020 and P3 limited 2020 would give the same result below gamut compression.
  • Kevin Wheatley: A table of numbers for testing might be useful. Do OCIO users need DCDM? And if so do they need it in the stock config?
  • Nick Shaw: Artists sat at a VFX workstation probably don't need it. But you don't know what people might use OCIO for elsewhere.
  • Kevin Wheatley: The studio config targets artists.
  • Alex Fry: At my previous job we used the DCDM.
  • Nick Shaw: What about 2000 and 4000 nits?
  • Kevin Wheatley: 25 choices in a drop-down is a lot.
  • Doug Walker: It's less once you pick your display.
  • Nick Shaw: That's not how Nuke exposes it in the viewer, though. It's a flat list of 39 items for the current studio config.
  • Alex Fry: That UI needs to be improved.
  • Doug Walker: Is this the list that everyone needs to include to say they support ACES 2? Or is it a representative selection, and you choose what to support?
  • Scott Dyer: I think the latter. I'm trying to explain that in the documentation. An ideal implementation provides a subset they think their users need, but makes it easy to add custom ones.
  • Kevin Wheatley: How does it work with AMF with these transform IDs?
  • Doug Walker: It would be useful for AMF implementers to know which are recommended.
  • Kevin Wheatley: Last week we discussed what test images are needed to verify an implementation.
  • Nick Shaw: CML responded to my email saying they don't have a record of releases for their image, so we can't use it.
  • Kevin Wheatley: We need to find something similar.
  • Pekka Riikonen: The other similar image in our set is from the Stuttgart HDR set.
  • Nick Shaw: I'll enquire of them.
  • Pekka Riikonen: There is also the STEM 2 material.
  • Nick Shaw: Although that is a graded film.
  • Kevin Wheatley: Nick and I made gizmos to make cube surface values. That can be used as display values to go inverse then forwards, and also as AP1 boundary values to feed into the forward transform.
  • Nick Shaw: What scale do you use for that?
  • Kevin Wheatley: You do need to come up with a scaling factor for each peak luminance. You could use the 8 x r_hit clip limit.
  • Nick Shaw: Is going display inverse and back enough to test an inverse? I suppose an error could cancel out.
  • Kevin Wheatley: That's a valid test, but we also need other things like a tone scale, a CMS pattern, as well as some photographic images.
  • Nick Shaw: I made a CMS pattern that wasn't limited to a square, and found I could fit a 129^3 cube in a 2K DCI image.
  • Kevin Wheatley: To use those in the forward direction we need to assume log spacing. Nick, when you did LUT tests did you try anything smarter than simple log?
  • Nick Shaw: No. Just log2 with an offset for zero, ranged to cover 8 x r_hit. Mostly they worked, but there was an area around magenta in HDR where I guess two adjacent vertices were on either side of a dramatic change of direction, and the interpolated output was visibly wrong. It doesn't happen with ACEScct, but that doesn't cover the necessary range for HDR.
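A sketch of that shaper, with the zero offset and r_hit values as placeholders: a log2 curve offset so linear 0.0 lands exactly on the first grid point, normalized to reach 8 x r_hit at 1.0:

```python
import numpy as np

OFFSET = 2.0 ** -16  # assumed; lifts zero onto the log curve

def log2_shaper(x, r_hit=128.0):  # r_hit value is illustrative
    peak = 8.0 * r_hit
    lo, hi = np.log2(OFFSET), np.log2(peak + OFFSET)
    # 0.0 at x = 0 (so the shaper itself does the clamp), 1.0 at x = peak.
    return (np.log2(np.maximum(x, 0.0) + OFFSET) - lo) / (hi - lo)
```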
  • Kevin Wheatley: We need to list all the images we need. We don't have an image with nice foliage.
  • Alex Fry: Thomas' Cornell box is good, but is substantially outside AP1.
  • Kevin Wheatley: Do we want images that stress the system, or ones that look good to demo it? There's a PR aspect.
  • Doug Walker: We have to ship 2.4.0 at the end of September, so we need images and a test procedure as soon as possible.
  • Kevin Wheatley: In the documentation we should make clear the transforms are made of blocks, so you only need to implement forward and inverse of each unique rendering once, and have separated encoding and decoding.
  • Nick Shaw: Should we make a list for implementers of which ones are the same rendering?
  • Kevin Wheatley: As an informative set of examples. Once we have our test image set we can render and diff them and work out a suitable tolerance. And make a list of tests. I've been continuing with my implementation, and have a version that samples the hues of the cusps of the display cube and those for chroma compression, and then I fill in an evenly distributed set of hues, weighted by the distances between the fixed ones. I then use those hues for all the tables and also use Pekka's fitted normalization constant. The images are different, but hopefully within tolerance. OCIO implementations may adopt some of these in future. The init is slower though, but I hope the rendering is faster.
  • Doug Walker: Does this mean tables can be smaller?
  • Kevin Wheatley: Theoretically yes. I think I tried 90 samples instead of 360. It should still be better, particularly for the inverse, than a 3D cube.
  • Nick Shaw: You have fewer iterations and lerps too, because everything is sampled at the same hues.
  • Kevin Wheatley: And the binary search covers a more optimized range, so needs less iterations.

Meeting #164, August 7th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: We have SIGGRAPH updates. I spoke to Doug and Carol. It was accepted that separate ACES v1 and v2 config files were the way to go. Also separate D60 and D65 configs. This is for the built-in configs. People could make their own combinations. We didn't talk about if OCIO needs all the options. Things like DCDM may not be relevant for most artists in stock configs.
  • Doug Walker: We discussed this at the OCIO birds of a feather and Alex Forsythe was there. There were no big objections.
  • Scott Dyer: What Doug wrote in TB-2014-013 is more like how OCIO breaks things down. That's kind of how we do it now, except for the white scaling part. I want to show how it has been rearchitected in the documentation for implementers. And examples of how it might be presented in a UI, separating rendering and display encoding.
  • Kevin Wheatley: By splitting ACES 1, ACES 2 D60 and D65 into three separate files we are saying there is one creative white and one ACES version per config. That way you need fewer view transforms, and the display transforms are just encodings. We combine the rendering and display limiter scaling into one block. If one config has one white it should be the same viewing limiter transform. We could propose the same to other implementers.
  • Nick Shaw: In the CTL there has to be one CTL file for every permutation. OCIO can have two drop-downs to make the permutations.
  • Scott Dyer: There could be separate CTL files for rendering and encoding. But we combine them for convenience.
  • Nick Shaw: An ACES transform ID needs to define a complete combination.
  • Scott Dyer: It could just define the rendering. And a separate ID for the encoding.
  • Kevin Wheatley: Wouldn't that cause a problem for AMF, which needs one transform ID?
  • Scott Dyer: I'm just trying to figure out how we communicate to implementers "these are the renderings you need". Other implementers won't understand how we envision it as well as the OCIO team does. Is it just a documentation issue?
  • Kevin Wheatley: I think so. It's not obvious from just the code.
  • Nick Shaw: We need to point out that multiple CTLs have a rendering block with the same parameters.
  • Kevin Wheatley: Like OCIO we should probably combine the rendering and scaling into one group, and the rest is encoding. The rendering is what you have to invert in an inverse.
  • Scott Dyer: Currently the white scaling is in the encoding block. In the transform IDs we have the rendering "in" the encoding, but the directories are grouped by encoding. Should there just be inverses for the base renderings?
  • Kevin Wheatley: We haven't separated out the decoding to the base rendering in the inverses.
  • Nick Shaw: It depends what people want their inverse to do. If they have Rec.709 graphics, do they want them to appear in a D60 rendering exactly as they started or with white shifted to D60? Users don't want to have to understand converting to the "real rendering" and inverting that. They just want an inverse. So there probably has to be an inverse provided for everything.
  • Kevin Wheatley: It's a developer note that they only need to build inverses for the "real rendering".
  • Doug Walker: That raises an interesting point about whether we need the D65 inverses in the D60 config.
  • Nick Shaw: Do people want the whites of their graphics shifted or should they have baked creative white in?
  • Kevin Wheatley: It depends! You may for example want white titles to be D60. Or not. There's no one right answer. The default should be round trip to the same values for me. If somebody wants something different they need a custom setup. The simple answer is "if you don't know better, pick the D65 one". In some ways the D60 is for backwards compatibility for people who want their monitor and projector to look the same when they are next to each other.
  • Doug Walker: In some ways it's a shame we don't make the D60 choice easier for people. For some it's the better or preferred choice.
  • Scott Dyer: We could move the white scaling in the CTL into the rendering block to align with OCIO.
  • Kevin Wheatley: I had it in a separate block in my implementation because you need the values from both the other blocks. The other thing to discuss is Doug's point about unit tests and tolerances.
  • Doug Walker: I was more saying we should have a tool which measures the difference between an implementation and a set of references. We need some test images and CTL renders of those through all the renderings, and a way of comparing another implementation. We could use oiiotool as we did for CLF.
  • Scott Dyer: In the aces-output repo we have renders of images. We just need to make a Python script to compare to another folder.
  • Doug Walker: What images are there?
  • Scott Dyer: Currently it's the synthetic chart and the still life. We could make others if they are better.
  • Kevin Wheatley: I made an image of the AP1 gamut surface to track what happens at the extremities of valid data. It was useful for testing.
  • Nick Shaw: The CLF test metric was designed for linear data, which might not be appropriate for display encoded images. We could measure a JMh difference as that is meant to be perceptual.
  • Kevin Wheatley: Then you might cancel out your own errors.
  • Doug Walker: The CLF one was just a relative error, so is pretty generic. It could be adjusted to be used with something more perceptually uniform. Could we come up with a list of test images people agree are sufficient?
  • Scott Dyer: Does anybody have images they think are useful, or could make a test image in Nuke?
  • Kevin Wheatley: For inverses we want display encoded images, e.g. a BT.1886 cube to check it round-trips and gets to the corners.
  • Nick Shaw: We discussed display code values as a difference metric.
  • Kevin Wheatley: Jeffrey's suggesting combined synthetic and natural imagery in one image. We could, although I don't know if it makes much difference if they are separate. Such a single image could be an initial test.
  • Nick Shaw: Could we remove the huge negatives from the synthetic chart? Although do we need to test they don't result in infs or NaNs in error?
  • Kevin Wheatley: In the forward transform those shouldn't get past the clamp. The inverses don't have clamps. We assume the input is limited to display values. Although we have no tests for what happens to e.g. super-white input. We could do some mockups to see what happens to those.
  • Doug Walker: It's worth testing that the inverse doesn't do anything undesirable with input outside the expected range. The negatives are in the ACES 1 charts in case they came out fluorescent instead of black.
  • Kevin Wheatley: Some parts of the inverse have asymptotic behavior, so you need predictable values there.
  • Scott Dyer: And e.g. P3 in Rec.2020, if you start with a P3 cube and convert it to Rec.2020 you end up with a negative in the red corner. If you don't clamp you get crazy values through the inverse. When I tried to make LUTs for somebody, a lot of the input values were invalid for some.
  • Nick Shaw: In a LUT you will have one vertex being a valid [e.g a 2020 value inside P3] input and the adjacent vertex being invalid [e.g. a 2020 value outside P3] and interpolating between those will be meaningless [so you can't get output for the 2020 value on the P3 boundary].
  • Kevin Wheatley: We probably limit display cubes to 8-bit or they are very big.
  • Doug Walker: Do we need every 8-bit value?
  • Kevin Wheatley: Good question. That's why I used the cube surface. Is it enough to do the six cube surface planes and coarser interior sampling?
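A sketch of that sampling (grid size illustrative): the six faces of the display RGB cube as test input for the inverse, to which a coarser interior lattice could be appended:

```python
import numpy as np

def cube_faces(n=64):
    u = np.linspace(0.0, 1.0, n)
    uv = np.stack(np.meshgrid(u, u, indexing="ij"), -1).reshape(-1, 2)
    faces = [np.insert(uv, axis, value, axis=1)  # hold one channel at 0 or 1
             for axis in range(3) for value in (0.0, 1.0)]
    return np.unique(np.concatenate(faces), axis=0)  # drop shared edges
```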
  • Doug Walker: We need two sets of images – ACES images to test forward and display images for the inverse.
  • Kevin Wheatley: We have a spreadsheet about tests. We should add to that. Are there any usage issues with the test image set? Most of them are Kodak test images.
  • Nick Shaw: There's a CML ALEXA image in there. I'll contact CML and ask about the rights.
  • Scott Dyer: We said we'd make a roadmap for winding this group down or changing the cadence. But we're assigning more work now. I won't be around for a few weeks at the start of September.
  • Kevin Wheatley: We need to make a script to compare two directories of images with matching names.
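A sketch of such a script using OpenImageIO's Python bindings (paths, thresholds and the exact CompareResults fields should be checked against the installed version):

```python
import os
import OpenImageIO as oiio

def compare_dirs(ref_dir, test_dir, fail=1e-3, warn=1e-5):
    for name in sorted(os.listdir(ref_dir)):
        ref = oiio.ImageBuf(os.path.join(ref_dir, name))
        test = oiio.ImageBuf(os.path.join(test_dir, name))
        r = oiio.ImageBufAlgo.compare(ref, test, fail, warn)
        print(f"{name}: max err {r.maxerror:.3g} at ({r.maxx}, {r.maxy}), "
              f"RMS {r.rms_error:.3g}")
```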
  • Doug Walker: There is a script we made for CLF which could be modified as needed. We need to decide on the tests and the error metric, with the understanding the results may be in ACES or display RGB.
  • Kevin Wheatley: Round tripping from ACES values there's no expectation everything will end up back in the same place. We just need to test you don't get garbage out. But round tripping from display there is.
  • Doug Walker: We are talking about testing an implementation rather than the algorithm itself.
  • Nick Shaw: We can't have something that iterates over every CTL and goes inverse then forward and expects a full cube, due to ones like P3 in Rec.2020. Do we only test the inverse of the underlying transform?
  • Kevin Wheatley: You can still test it matches a reference result, which may not be the original source.

Meeting #163, July 31st, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Alex Fry: We don't have Kevin or a few others this week due to SIGGRAPH. There's been some talk about the default state of the RGC.
  • Scott Dyer: There's been discussion about the fact that we don't need the RGC enabled by default. The issues it mitigates are mostly handled by the stock transforms. It's still available if people need it as an LMT.
  • Nick Shaw: Presumably with greater adoption of AMF it will be easier to track if it was used on a shot. One reason for having it default to on in ACES 1.3 was that it was hard to track if it was used.
  • Pekka Riikonen: It should be in the implementer documentation that it should default to off, rather than on as it does in 1.3 in Resolve.
  • Alex Fry: It still has an effect on some images. It changes the effect of our AP1 clamp.
  • Pekka Riikonen: We could change the threshold values.
  • Nick Shaw: Resolve and Baselight 6 have parametric and Reference versions available if people want to tweak it. So maybe best not to create a different 'reference' in case it confuses people.
  • Alex Fry: I can imagine making a gamut compression tool based on JMh. Maybe reaching specifically to various camera gamuts.
  • Pekka Riikonen: How many of these meetings do we still have? When are we wrapping up?
  • Scott Dyer: We can discuss strategy when Kevin is back. We are moving into implementation, and we don't need this meeting every week. I won't be around for a while from the beginning of September, so would like my part to be done by then.
  • Alex Fry: If there is nothing else we need to discuss we can finish early!

Meeting #162, July 24th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Rémi Achard
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We need to end this meeting on the hour so Scott and I can attend the ASWF OCIO town hall. Mainly this week we need to go through the changes Scott has for release candidate 2.
  • Nick Shaw: As I mentioned last meeting, I tested making LUTs with a log2 shaper covering the full range required. They seem to work fine for SDR, but for HDR they don't quite work. For most images they look fine, as an ACEScct shaper does, but stretching the range to cover what we need for HDR there is a visible difference in strong magentas like the ARRI bar and Rec.2020 spheres. I need to investigate why that is.
  • Scott Dyer: For the last few weeks we've discussed things which are bugs or that we want to change for rc2. I have a list of commits which address those, and have written up a change log. If everyone here agrees, I think we are ready for rc2. I've compared the results with rc1, and apart from a couple of bugs I found and fixed, they match.
[Scott showed the commit list and corresponding issues, and went through them one at a time]
  • Remove primaries from namespace (housekeeping)
  • White scaling changed to a bool control, giving direct control rather than a condition
  • Simplify gamut compression compress function, removing power functions and unused parameters
  • Reorder EOTF enum
  • Remove unused smoothJ
  • Update transform IDs
  • Housekeeping and change log
  • Scott Dyer: In aces-output I added two top level directories for D60 and D65. The file names are now fully descriptive. The D60 folder duplicates D65 transforms but with a different limiting white and white scaling enabled.
  • Kevin Wheatley: The two trees make sense to me. The question is whether we tell implementers both trees are required. The organization of the mid level directories is up for debate. Those are organized by user-focused names. It depends on who the directory organization is intended to help. Somebody parsing it in code could ignore the mid level, because the names fully define the transforms. So the user names may be easier for more casual end users looking in GitHub.
  • Nick Shaw: This organization means that transforms which are the same rendering but differently encoded may be in different directories.
  • Kevin Wheatley: Whether we need to provide D65 and D60 for everything is a high level TAC decision. If we want to be backwards compatible we need both. Although we've removed some, so we're not 100% backwards compatible.
  • Alex Fry: What's the official ACES line on D60? Arguably only D60 is 'the correct way'.
  • Nick Shaw: D60 was only the default for theatrical. For SDR and HDR video it was always native D65 white.
  • Kevin Wheatley: I think we have to provide both, and recommend people don't make it a giant flat list.
  • Alex Fry: It makes sense that people make a top level decision on their white point, and everything below that only uses that white.
  • Scott Dyer: And it keeps "D60 sim" away from people who don't use or understand it.
  • Kevin Wheatley: We should snapshot the spreadsheet as a CSV or something to go with this release.
  • Scott Dyer: One other change I made was Nick's suggestion of making the linear scale factor 48/100 instead of 0.5 for all the cinema outputs. That makes it consistent. The only issue I haven't addressed is handling of Inf and NaN.
  • Kevin Wheatley: The clamp to 8x r_hit may deal with Inf, and possibly NaN, depending on how an implementation of the clamp works.
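[A minimal Python sketch of why the clamp's handling of Inf and NaN depends on how it is written: IEEE comparisons involving NaN are always false, so two mathematically equivalent clamps disagree on NaN input, while Inf is caught either way. The bounds are illustrative.]

    nan, inf = float("nan"), float("inf")

    def clamp_keep_x(x, lo, hi):
        # min(x, hi) keeps x when the NaN comparison is false,
        # so NaN passes straight through this clamp
        return max(min(x, hi), lo)

    def clamp_keep_bound(x, lo, hi):
        # min(hi, x) keeps hi when the NaN comparison is false,
        # so this clamp silently replaces NaN with the upper bound
        return max(lo, min(hi, x))

    print(clamp_keep_x(nan, 0.0, 16384.0))      # nan
    print(clamp_keep_bound(nan, 0.0, 16384.0))  # 16384.0
    print(clamp_keep_x(inf, 0.0, 16384.0))      # 16384.0, Inf is clamped either way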
  • Scott Dyer: I'll test what CTL does. Are people happy with the other changes?
  • Kevin Wheatley: I think all those can be closed except the NaN issue.
  • Rémi Achard: I noticed a duplication of a table lookup which could be removed. Once in compressGamut() and once in getReachBoundary().
  • Kevin Wheatley: That's removed in my implementation, because you can get the value once and then pass it in.
  • Rémi Achard: The chroma compression curve approximation is not in rc2, correct?
  • Kevin Wheatley: No. We felt we needed more testing first. And to decide if it is an implementation optimization recommendation or actually becomes the reference.
  • Rémi Achard: I wondered which to use for OCIO. I can do either.
  • Scott Dyer: I just wanted to draw a line under these issues, but then I'm fine with adding it as a later update.
  • Kevin Wheatley: I have found a similar issue to Rémi's with table size on a laptop GPU.
  • Rémi Achard: I did make it work on my laptop with constant arrays, but I'm not confident it will work on any system.
  • Nick Shaw: Pekka has discovered that the Blink does not work in Nuke 15.1 due to a Blink update. It needs investigating, but is not a high priority for this group, because we are not claiming to provide a Nuke implementation for all Nuke versions. Anyone who wants to investigate is welcome to.
  • Kevin Wheatley: If there is nothing else, we can end the meeting in time for the OCIO town hall.
  • Scott Dyer: I will close the issues and make a post on ACES Central.

Meeting #161, July 17th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: Hopefully a straightforward discussion around wrapping up. Pekka posted a follow up to last week.
  • Pekka Riikonen: I made a pex2 version which fits tighter, and a Python script to generate the parameters and plots. I based my scaling fit on AP1 green, which works best. We talked last week about comparing cube size differences, so I compared a 65 cube, a 33 cube and pex2. You can see the approximation has less difference than a 33 cube, except maybe skin tones, because chroma compression affects less saturated colors more. But you can still only see the differences when you gamma up a diff. It's visually identical, so could be a good optimization. I used ACEScct as the shaper.
  • Alex Fry: ACEScct is a commonly used shaper.
  • Kevin Wheatley: Nick and I looked into optimized shapers, and how much range it should cover. We already clamp AP1 at zero, but where could the upper clamp be? It's related to the peak value, and we found 8x r_hit worked well.
  • Pekka Riikonen: Isn't r_hit sufficient?
  • Nick Shaw: No. That would be for achromatic. The 8x makes sure you cover everything up to clipping at 10,000 nits even for the worst case scenario, which is pure AP1 blue. Blue's contribution to J is low.
  • Kevin Wheatley: The shaper could be a straight log with offset to pin zero. I have tested the varying clamp in my code, but not tested LUTs. We could go a bit lower and find the exact value for blue, but 8 is a nice easy number, and a power of 2.
  • Nick Shaw: These would cover a much higher range than ACEScct. The LUTs in OCIOv1 were problematic for high values from CG. Our r_hit values are quite high. We need to test how precision is affected by such a wide range.
  • Kevin Wheatley: You have to ask whether a LUT is even suitable for a 10,000 nit display.
  • Alex Fry: Even if we had a 10,000 nit display, the issues would probably be more visible in the first 1000 nits, so we can look at that.
  • Kevin Wheatley: These LUT shapers would only be implementation guidance. But we might as well use it to set clipping.
  • Pekka Riikonen: When I fitted the gain for my approximation I only used high nit values. Trying to fit a range including smaller ones didn't work well.
  • Kevin Wheatley: Red and green are the points that fit worst, which may be the reason for the skin tones.
  • Pekka Riikonen: Hand tuning for skin tones means you don't hit the red.
  • Nick Shaw: But the skin tones aren't "wrong", are they? They are just slightly different if you A/B them.
  • Pekka Riikonen: And not even visually for images. Are we considering using this?
  • Doug Walker: Anything that simplifies the implementation with minimal impact on look is very interesting to OCIO.
  • Kevin Wheatley: We can't change the reference, but can suggest it as an optimization.
  • Nick Shaw: I thought we were considering making this the new ground truth.
  • Kevin Wheatley: It's up to the TAC if we can change the official version.
  • Nick Shaw: I thought we talked last week about making RC2 the bug fixes we have now, and maybe putting this and your hue table optimizations in an RC3.
  • Kevin Wheatley: What do you think, Scott?
  • Scott Dyer: If it sets us up better in the long term, I think we should do it.
  • Nick Shaw: Ideally it would already be locked, but it isn't. If this becomes the new ground truth it's what everybody will use.
  • Kevin Wheatley: Pekka's code is a simple swap out.
  • Rémi Achard: I tested implementing this today, and it was very quick to do. It's maybe 10% faster on the GPU, but the CPU is slower. I'm still having to use textures for the lookups, to maintain compatibility with a range of systems. Reducing the number of lookups is a good step.
  • Kevin Wheatley: My changes are more complex. But would reduce it to 5 columns with this.
  • Pekka Riikonen: We could maybe fit a function to the upper gamma too. It has a similar shape.
  • Kevin Wheatley: It may be simpler, as it has no secondary cusps.
  • Nick Shaw: But there are sharp turns.
  • Pekka Riikonen: If the fit is not accurate it affects the inverse.
  • Alex Fry: It needs to fit the waviness of the top half, or values end up too far inside or outside.
  • Nick Shaw: We'd need to make the fit very loose, so then we'd have more clipping.
  • Kevin Wheatley: And then we could end up with inverse values which are negative AP1. So we'd need a very good approximation. I've been working on fitting all the cusp values into the hue samples of the display, but I haven't finished everything.
  • Pekka Riikonen: I see a small difference between your version and the ACES 2 DRT.
[Pekka showed an image where the bright reds differed slightly]
  • Kevin Wheatley: I'm not sure why that would happen. I'll have to investigate. It should be essentially identical. I've also made my version compile on Nuke 13. I pre-calculate a lot of things which makes the code less clear. When I'm happy with a version I have, I'll post on ACES Central. I've internally had to make a version for a display with primaries outside Rec.2020, so also outside AP1. That may break some assumptions in the gamut mapper, so I'll have to see how that goes. I'll report back, and we may need to have a statement in the documentation. I'm not sure if my version is suitable as a next release candidate, or if it should be informational, as a different way of doing things.
  • Nick Shaw: Do you have an idea of the order of magnitude of the differences with your version?
  • Kevin Wheatley: Not yet. I was surprised by the difference Pekka showed. Because I put everything in the hue samples of the display, the AP1 cusp M value is where the largest differences will be. And that's what Pekka's approximation is approximating.
  • Nick Shaw: Is the difference in red possibly because that is near the wrap-around point?
  • Kevin Wheatley: That red should be quite a bit above zero.
  • Alex Fry: I'm looking at the sRGB piecewise debate.
  • Kevin Wheatley: It's not in our remit to fix that! There are displays out there that do both, so we need to support both 2.2 gamma and piecewise. I'm going to experiment with adding Pekka's approximation to my code.
  • Nick Shaw: And I may try building some LUTs with the ranges we discussed.

Meeting #160, July 10th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Deke Kincaid
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: Doug has some feedback from OCIO, we have the transform ID discussion, and then Pekka's thread on an alternative to the AP1 cusp table.
  • Doug Walker: OCIO had an action item on transform IDs. We checked with Thomas, and he spoke to Scott, and all our concerns are addressed. Another issue was around recommended names, and we wanted to point people to the Color Interop Forum and its recommended list of names. We use transfer function then gamut, which is the opposite of ACES. We don't object to your names though. Regarding the list of transforms, our only strong feeling is about gamma 2.2 Display P3. For sRGB people implement it both ways, so support for both is needed. Display P3 is recent, and we can speak to Apple; when we did, they said unambiguously it's the piecewise function. If there is a 2.2 power in their products we should log bugs.
  • Nick Shaw: Definitely my MacBook Pro measures as piecewise in P3 D65 ST.2084 reference mode, and 2.2 gamma in P3 D65 500 or 1600 nit mode.
  • Doug Walker: That should be reported to Apple rather than keeping going with two transforms.
  • Alex Fry: I've only seen piecewise, but I only use reference mode.
  • Kevin Wheatley: There was a question of what to call the gamut for sRGB. Rec.709 or sRGB?
  • Doug Walker: In Color Interop we use Rec.709 as the gamut. It's never a transfer function. sRGB could be either.
  • Kevin Wheatley: Thomas noted that the sRGB spec gives rounded matrices, and we calculate from primaries, as Rec.709 doesn't specify the matrix. Which means calling it Rec.709 makes sense.
  • Scott Dyer: That white paper is similar to what Alex Forsythe and I had discussed. We will also write ACES documentation.
  • Kevin Wheatley: Josh and Scott have been discussing D60.
  • Joshua Pines: Scott made me a set of D60 LUTs, including HDR. We are testing them and are very happy so far, and plan to use them on a show. We just need to check the inverses for LMTs.
  • Kevin Wheatley: There was a question of whether scaling should be used in HDR to keep all channels below the e.g. 1000 nit code value.
  • Joshua Pines: Our default is to scale because some QC platforms will fail it otherwise.
  • Nick Shaw: So we clamp to peak white in D60, because our tone curve actually slightly overshoots peak white, and then we scale the result during the D65 encoding. I did wonder if the available extra headroom should be treated like the headroom in DCDM, and the answer is no. That is what happens now. Pekka, you commented you would be in favor of dropping D60 sim as long as creative white was exposed as a parameter in implementations. I'm not sure that's viable.
  • Doug Walker: Certainly for the OCIO configs it's a fixed set of transforms.
  • Pekka Riikonen: But if it's a parameter…
  • Kevin Wheatley: We need to provide a list of the minimum set to be available, and then a way for people to make others.
  • Doug Walker: We want a list of the out of the box transforms, and we will provide a parameterized version to make others.
  • Joshua Pines: I need D60, but if others want to get rid of it and make everything D65 you need to get rid of it completely and people like me have to make D60 LMTs.
  • Pekka Riikonen: A parameter would make that simpler.
  • Scott Dyer: It is parameterized. The issue is how implementers would expose that in their software. I don't think D60 should be there by default because it confuses people. Those who need it know how to make it.
  • Nick Shaw: What about the native P3-D60 for a P3-D60 projector?
  • Scott Dyer: That's identical to the P3-D65. If your projector is D60 equal code values are D60. If it's D65 they are D65.
  • Joshua Pines: That's true for neutrals, but is it for everything?
  • Kevin Wheatley: The limiting uses the different white so the limiting cusp is different.
  • Nick Shaw: There's a chromatic adaptation built into the Hellwig model. Equal maps to equal, but the other colors should differ subtly.
  • Joshua Pines: I don't mind, but it's cleaner to have 12 LUTs not 24 in the standard distribution. But how easy is it for implementers to expose the parameter?
  • Nick Shaw: I think implementers are expecting a drop in like for like new set of transforms.
  • Joshua Pines: It would be nice if they were at least grouped under D60 and D65.
  • Pekka Riikonen: I did an experiment of using a curve approximation to the AP1 cusp, instead of the lookup. I always planned to do that. I made an approximation, just one curve, and then fitted a scale function to match it at different peak whites. There's no visual difference between it and the lookup version. So we could swap it out without changing the rendering. When might we do that, if we chose to?
  • Nick Shaw: Doug, you've said that trig functions are expensive. Would this be better than a lookup?
  • Doug Walker: I couldn't say for sure.
  • Pekka Riikonen: There is only one sin and one cos. The rest are multiplications, additions and subtractions of those.
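[A hedged Python sketch of the shape of approximation Pekka describes: a single sin/cos pair of hue combined only by multiplies, adds and subtracts, standing in for the AP1 cusp lookup, with a separately fitted scale for peak white. The functional form and coefficients below are illustrative placeholders, not the actual fitted values from his script.]

    import math

    # Placeholder coefficients; the real fit lives in Pekka's posted script
    C = (0.3, 0.1, -0.05, 0.02, 0.01)

    def cusp_M_base(h_rad):
        s, c = math.sin(h_rad), math.cos(h_rad)  # the only trig calls
        return C[0] + C[1] * s + C[2] * c + C[3] * s * c + C[4] * s * s

    def cusp_M_approx(h_rad, peak_scale):
        # peak_scale comes from a separate fit against peak white
        return peak_scale * cusp_M_base(h_rad)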
  • Doug Walker: We want not to use textures, so anything that reduces the table sizes is beneficial. Since nothing has shipped yet, it should be on the table.
  • Kevin Wheatley: I was looking at unifying the hues, so we only store one set of hues.
  • Pekka Riikonen: Reach needs to be precise if you want to invert to AP1.
  • Kevin Wheatley: I have a way of calculating hues efficiently, but I haven't implemented it. Putting things in a different non-uniform distribution does cut off corners, so I want to insert extra entries at those points.
  • Nick Shaw: Is the inverse not filling AP1 a problem? It means AP1 corner input would get clipped.
  • Kevin Wheatley: How did you work out the scaling of the curve, Pekka?
  • Pekka Riikonen: I hand fitted values at different peak whites, and then fitted a curve by linear regression. It's not a simple power curve.
  • Nick Shaw: So the relative size of the real curve and the approximation stays constant.
  • Pekka Riikonen: The real shape varies a bit. In particular the blue hue changes.
  • Nick Shaw: So how much testing is needed before we consider using this in 2.0?
  • Scott Dyer: The CTL has mostly only been tested by me. So I don't have a problem.
  • Pekka Riikonen: Then I'll spend a couple more days finalizing it.
  • Nick Shaw: Colorists who've tested the current version won't notice a difference.
[Kevin showed a plot of how sampling on display hues cuts off the AP1 cusp corners]
  • Kevin Wheatley: I need to figure out how to cleanly insert the missing hues. I could even make the whole lookup uniform, and then add in all the corners. Finding the hues for the lookups in the init still requires a binary search. But more iterations in the init isn't such a big deal. So when might the next RC go out?
  • Scott Dyer: I was hoping soon, but does what we just discussed change that? I've been compiling PRs and issues and writing release notes. I'll post those for comment before I finally push it. Do we wait for Pekka's new stuff?
  • Kevin Wheatley: I feel we should do it in stages. Push what we know we have now, and then work on the new stuff for a possible RC3. We could look at the rendering differences between a 33 and 65 cube LUT, and if Pekka's differences are smaller than that it's a legitimate change. We haven't looked at an optimal cube shaper. Something AP1, now we are clamping to that, like ACEScct.
  • Nick Shaw: Might something like ACEScc with an offset to hit zero be better? With the clamp we don't need negative AP1 values, and having LUT vertices at exactly zero is better. ACEScc - 2^-9.72, or something like that.
  • Kevin Wheatley: What range do we want the LUT to cover?
  • Nick Shaw: Our upper clamp is much higher than ACEScc covers [it is 16384]
  • Joshua Pines: I did some work for the possible ACESlog. I could come up with something if I know the range that needs covering. ACEScc only goes to 222 linear.
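[A Python sketch of one reading of Nick's suggestion, assuming the 2^-9.72 offset is added to the linear input so that linear zero lands exactly on shaper zero, reusing the standard ACEScc log-segment offset of 9.72 and renormalizing so a chosen upper clamp, such as the 16384 noted above, maps to 1.0.]

    import math

    OFFSET = 2.0 ** -9.72  # log2(OFFSET) = -9.72, so shaper(0) == 0

    def shaper(x, upper=16384.0):
        top = math.log2(upper + OFFSET) + 9.72   # renormalization constant
        x = min(max(x, 0.0), upper)              # clamped AP1 input
        return (math.log2(x + OFFSET) + 9.72) / top

    print(shaper(0.0))      # ~0.0: a LUT vertex lands at zero
    print(shaper(16384.0))  # 1.0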
  • Kevin Wheatley: Pekka, it would be good if you had some Python we could show for your optimizations. The only ID and list decision is about creative white. We need implementer input on that. For LUT boxes on set people may need a bunch of LUTs that cover all possibilities.
  • Joshua Pines: Could the creative white be provided as an LMT?
  • Kevin Wheatley: That might make things more confusing for users.
  • Joshua Pines: I don't know how many people use D60 except me.
  • Scott Dyer: We've said D60 or D65, but the creative white could be anything. People will need to know how to make custom transforms.
  • Kevin Wheatley: So what is common? We are biased, but we get D60 and D65. I'm not aware of anything else.
  • Joshua Pines: Somebody will complain whatever choice you make.
  • Doug Walker: Chromatic adaptation is sometimes valid, but for delivering theatrical projects to video it isn't necessarily.
  • Joshua Pines: People pick the Rec.709 when sometimes they should use the D60 sim because they didn't understand what it was.
  • Scott Dyer: Let's put them all up there and try to group them by D60 and D65, and suggest that to implementers.
  • Doug Walker: We're just adding D60 sim options to the few that don't have it. We're not doubling up the transforms.
  • Joshua Pines: I don't like the term "sim".
  • Scott Dyer: People get confused that with the D60 sims the RGB aren't equal for white.
  • Kevin Wheatley: We should encourage implementers to do something that doesn't treat it as one giant list. It's a choice.
  • Joshua Pines: People make a creative choice from the options we show them and it's 50/50 D60/D65.

Meeting #159, July 3rd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Jeffrey D Mathias
Carol Payne
Joshua Pines
Pekka Riikonen
Doug Walker

Meeting Notes

  • Kevin Wheatley: Nick and I have looked into reducing the number of lookup table entries. And we have ongoing transform ID discussions. My code only stores some attributes in the lookups. I calculate everything temporarily and then throw away what's not used by the final algorithm. For the reach gamut (AP1) we calculate a cusp and a max M value at the peak. Max M is used by both the chroma compression and the gamut mapper. The cusp is only used by chroma compression, which only uses the M value. The CTL stores a J value as well, which is unused. My code also resamples the lookup to store values at the same hues as the limiting gamut lookup, so it doesn't need to store the reach cusp h either. Nick wondered if we could drop the cusp M as well and calculate something from the max M. But it does change the behavior slightly.
  • Nick Shaw: Pekka may have tried this already. The cusp M is used for normalization in the chroma compression. That cusp is somewhere on the slope down from the M at limitJmax. Although before I said we couldn't derive the cusp from that, because they aren't at the same hue, I realized the hue is only correct at the cusp and max M. At every other J value we're interpolating and approximating. Those are kind of two arbitrary sample points on the curve, so if you only have one you can approximate the other. I made a DCTL version where I normalize to the M value of the boundary at source J. Because the limit calculation for the toe function also uses that boundary, the limit now becomes 1.0 as it cancels out. I didn't test round trips, and it does change the look, only in HDR. I could recreate the match by tweaking the chroma compress parameters. Doing this means I don't need to store the cusp at all.
  • Pekka Riikonen: That is the one thing I didn't try. I tried deriving the max M from the cusp. But it seems you didn't derive the same M value.
  • Kevin Wheatley: No. It's slightly different, and varies because of how the cusp changes with hue.
[Nick showed a plot of how reach cusp M and reach max M vary with hue]
  • Kevin Wheatley: We realized the max M only has three corners, because without a gamut peak there are no secondary cusps. The biggest difference I saw was in blues.
  • Nick Shaw: To be clear I wasn't deriving the cusp M, because to do that we would still need to store cusp J, which cancels out the benefit. It seemed logical to me to normalize to the boundary at source J, as chroma compression only changes M so it moves horizontally.
  • Doug Walker: If you remove the luminance dependency, would that help make different outputs match better?
  • Pekka Riikonen: I got the best match by normalizing to cusp M.
  • Nick Shaw: This approach decouples it slightly from the target.
  • Kevin Wheatley: But the tone scale has already shifted things from where they would be. The bottom part shifts less, so the biggest change is in the highlight rendering.
  • Nick Shaw: Because it does change the look, it's probably not something we should rush into the release, because we don't have enough time to test. Maybe for 2.1.
  • Kevin Wheatley: But the outcome is that Rémi's code and the CTL store JMh for the reach cusp, but at least J is not needed. If we use one set of J samples we potentially lose the exact corners, so I was looking into adding extra samples just at the corners for the reach cusp. You need to sample the display corners and reach corners, and enough samples in between. I didn't get time to try that, or lower than 360 samples.
  • Doug Walker: If you definitely have the key points you could probably lower the number of samples elsewhere.
  • Kevin Wheatley: Maybe we could have uniform hue sampling, plus the extra six for the corners.
  • Scott Dyer: I posted an updated spreadsheet with a proposed alternate ID format. I haven't yet heard back from those who were concerned about parsing them.
  • Carol Payne: Is it OCIO that you need feedback from?
  • Scott Dyer: Thomas was one person who expressed concern about the previous IDs. Thomas and Daniele suggested they could be better but haven't given me examples of how. I want to get it right once.
  • Nick Shaw: I think Thomas was concerned about procedural config generation, which is easier with explicit structured IDs. Less looking up special cases in a spreadsheet.
  • Scott Dyer: I put the rendering space and white first to make it easy to identify transforms which use the same rendering.
  • Kevin Wheatley: I prefer the word "in" to "as".
  • Scott Dyer: At some point there needs to be logic to parse the IDs into parameters for a transform.
  • Kevin Wheatley: In the config generation we assume a correlation between the ID and CTL file path. It would be nice to maintain that.
  • Scott Dyer: I based my list off the OCIO spreadsheet. One difficulty is that the cinema and SDR 100 nit transforms are essentially the same. HDR is absolutely defined.
  • Joshua Pines: That's been working for the last 20 years. Changing it would confuse people. In theatrical 1.0 corresponds to 48 nits and in SDR it's 100 nits.
  • Scott Dyer: The IDs say 1000 or whatever for HDR, but the SDR ones don't say because they could be 100 or 48.
  • Kevin Wheatley: I would prefer duplication, where two CTLs may have the same parameters, but the user name differentiates them.
  • Nick Shaw: The CTLs are different, because they include the encoding, so have different EOTFs.
  • Joshua Pines: I would agree with labelling all of them with nit levels, which conveys the intended viewing level.
  • Carol Payne: We would do that in OCIO anyway.
  • Nick Shaw: Because it would be the same built in transform, but a different encoding.
  • Carol Payne: And the context matters because you will use these in AMF to delineate.
  • Nick Shaw: Column D indicates which are the same rendering.
  • Scott Dyer: Although as you said, it's more complex due to white fitting.
  • Doug Walker: Should column D be part of the ID?
  • Scott Dyer: Apart from where the white points don't match, the first part of the name [counting 48nits and 100nits as a match] will tell you if it's the same rendering. Everything after that is the encoding.
  • Kevin Wheatley: When are we expecting someone to do something with that information?
  • Nick Shaw: Presumably in OCIO you want to only create one built in transform for the 48 and 100 that are the same.
  • Carol Payne: We'll bring this up at the TSC on Monday, and get people's last looks.
  • Scott Dyer: The User Name is then the familiar name. But what order should the parts be in?
  • Nick Shaw: The User Name follows what ACES 1 does, which people are familiar with. Unless they find ACES 1 confusing.
  • Carol Payne: But now is the time to change it if it makes it better. It would be good if we could align the names with what we use in the Color Interop Forum.
  • Nick Shaw: There's a case for both ways – What's my monitor, then what do I want to show on it? Or what do I want to show and then what monitor do I have to show it on?
  • Kevin Wheatley: For the user you care first what your monitor is.
  • Joshua Pines: I personally hate the term "D60 sim" because people don't understand it. 60 or 70% of the LMTs I make have D60 white points. What I see missing is if I have an LMT I'm using with a D60 or D60 sim, and then I go to HDR my white point changes. Some think when you have a display with a different white point you have to chromatically adapt to that. Not in our world. If a creative wants D60 white they want it everywhere. I think there should be D60 versions of all of them.
  • Nick Shaw: Our algorithm can do HDR with D60 white.
  • Joshua Pines: Sure, but what people have access to now does not have that. I have to make D60 LMTs.
  • Nick Shaw: Our hope is that implementers will make it easy to add custom ones, as Resolve does now with parameterized DCTL custom ODTs.
  • Alex Fry: It's application specific, because if you have too many in a flat list it's unmanageable. Baselight handles it differently.
  • Joshua Pines: But that's the problem. They do it differently.
  • Kevin Wheatley: The HDR ones are currently the exceptions that don't have a D60 option. To reduce edge cases it makes sense to have them.
  • Nick Shaw: My DCTL implementations can do D60 HDR.
  • Alex Fry: Rather than make LMTs I would suggest you bake D60 versions.
  • Kevin Wheatley: How much influence does ACES have on how things are presented in UIs?
  • Doug Walker: OCIO can use families to group things.
  • Nick Shaw: Resolve is a big flat list, but they improved how LUTs worked, so they could do the same for ODTs.
  • Kevin Wheatley: I would rather the user names were more explicit, with the two sRGB ones identified explicitly, and Rec.709 also saying BT.1886.
  • Nick Shaw: I'd like gamma 2.6 on the end of e.g. P3-D60.
  • Scott Dyer: Most of our users don't have a Josh Pines. They open the menu and go "OMG, there's 400 options. Which do I pick?"
  • Kevin Wheatley: Too many people don't understand Rec.709 is not the Rec.709 camera curve.
  • Carol Payne: In the Color Interop Forum we decided to stick to published standards. A string alone will never be enough to be completely clear to a user. There has to be documentation they can reference.
  • Scott Dyer: In ACES 1 the user names all had ACES 1.0 at the start.
  • Doug Walker: For some time people will still need both ACES 1 and 2.
  • Nick Shaw: Shouldn't that be higher up the preferences where you select an ACES version? Do people need both side by side in a long flat list?
  • Joshua Pines: Only at the start to compare the old and new and choose. Where different seasons of a show use different ACES versions you will need LMTs to emulate the old in the new.
  • Scott Dyer: Do we have a drop dead date for RC2?
  • Doug Walker: There are really two dates. One for Rémi's work on the implementation and one for Thomas's work on the config generator.
  • Carol Payne: Both need to be ready to go into VFX platform, because the built in configs ship as part of OCIO.
  • Kevin Wheatley: Bug fixes can come after the configs, because that can be a patch that doesn't change the API.
  • Doug Walker: We need that list locked ASAP. Fixes to the algorithm could be added in 2.4.1 in December/January. Changes that affect the picture we need sooner rather than later.
  • Scott Dyer: We're already past our original June 30th date. I need to put together a change list for the pull requests and merge them all in, and make RC2.
  • Carol Payne: We'll get our feedback on names to you Monday or Tuesday next week.
  • Doug Walker: Will the changed IDs and names from the spreadsheet be in RC2?
  • Scott Dyer: I hope so. I'd like to lock the algorithm too. I don't know how long it would take to integrate Kevin's changes to make the lists smaller or more optimized.
  • Nick Shaw: Kevin's changes are Blink, so it's not a simple PR to the CTL.
  • Doug Walker: We don't need CTL. Rémi is already looking at the Blink.
  • Carol Payne: Some changes are coming from what OCIO are finding as Rémi implements it. The time to give a date is past, and we just need to do everything we can with the time we have to make the implementation the best we can.
  • Nick Shaw: Hopefully what makes it more performant for OCIO will also help other implementers.
  • Doug Walker: I suggest you put cut-off dates for comments on the ACES Central threads.
  • Carol Payne: The date is the VFX platform deadline for ACES, which is August 31st or September 1st. OCIO has an extension, which ACES doesn't.
  • Doug Walker: It would be good to get as much locked as possible before SIGGRAPH because people will ask questions there.
  • Kevin Wheatley: Ideally the name string column in the spreadsheet becomes the file name.
  • Nick Shaw: I feel we should make a statement for the record that we will ship piecewise and 2.2 gamma sRGB and Display P3 transforms, despite the poll result. Whatever people think is "correct", monitors exist with both, so we need to ship both.
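[For the record, the two encodings being shipped, sketched in Python from the published formulas: they agree at 0 and 1 but diverge most in the shadows, which is why monitors implementing one or the other visibly differ.]

    def srgb_piecewise(L):
        # IEC 61966-2-1 piecewise encoding
        return 12.92 * L if L <= 0.0031308 else 1.055 * L ** (1 / 2.4) - 0.055

    def gamma_2_2(L):
        # Pure 2.2 power function, as some displays actually implement
        return L ** (1 / 2.2)

    for L in (0.001, 0.01, 0.1, 0.5):
        print(L, srgb_piecewise(L), gamma_2_2(L))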

Meeting #158, June 26th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Alex Forsythe
Jeffrey D Mathias
Doug Walker

Meeting Notes

  • Kevin Wheatley: We have some admin updates and an update from Rémi.
  • Rémi Achard: With the CPU implementation I saw an 8x slowdown compared to ACES 1. I've made a hard coded GLSL shader version, with pre-calculated arrays, and that is more like 3x slower. It's less than a millisecond to process a 2K image. This was on a high end GPU, so I need to test others. I found in OpenGL I couldn't fit all the tables into the constant buffer, so had to use textures. I know Nick's DCTL does fit everything into constant arrays, so I'll look into that more.
  • Nick Shaw: My code declares constants, but of course I don't know what Resolve does with that.
  • Doug Walker: Could the tables be made smaller than 360? I suppose a smaller table risks missing the corners.
  • Kevin Wheatley: The tables are non-uniform in order to hit the corners. The display and source tables each come from their respective RGB values, converted to JMh. That means the hue spacings are different. In some cases we tried uniform sampling, which makes the search easier, but you can't guarantee to hit the corners. Maybe we could sample the source space in the same places as we do the display space. That would save some entries, and mean fewer binary searches. If you use the display space, you will do chroma compression on different hue samples for each target. That may or may not make a noticeable difference.
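[A minimal Python sketch of the lookup pattern described: non-uniform hue samples placed so the gamut corners land exactly on table entries, located by one binary search, with the same interval and fraction reused for every stored column. All table values are illustrative.]

    import bisect

    # Ascending, non-uniform hue samples (degrees), corners included
    hues   = [0.0, 25.0, 110.0, 140.0, 220.0, 290.0, 360.0]
    cusp_J = [60.0, 62.0, 85.0, 80.0, 45.0, 30.0, 60.0]
    cusp_M = [70.0, 90.0, 95.0, 60.0, 55.0, 85.0, 70.0]

    def lookup(h):
        i = bisect.bisect_right(hues, h) - 1   # one binary search per table
        i = min(max(i, 0), len(hues) - 2)      # keep the interval on-table
        t = (h - hues[i]) / (hues[i + 1] - hues[i])
        # the same (i, t) pair interpolates every column stored at these hues
        J = cusp_J[i] + t * (cusp_J[i + 1] - cusp_J[i])
        M = cusp_M[i] + t * (cusp_M[i + 1] - cusp_M[i])
        return J, M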
  • Doug Walker: If they definitely hit the corners, 360 seems a lot of samples.
  • Kevin Wheatley: In some areas they are smooth, but near the corners the curvature can change a lot.
  • Nick Shaw: Our samples are even in display space HSV hue, so uneven in JMh hue. I wonder if the spacing could be redistributed to make it denser near the corners.
  • Kevin Wheatley: You could do a pass to predict the required spacing from the surrounding pair.
  • Nick Shaw: What is the distribution we have just from them starting evenly spaced in HSV hue?
  • Kevin Wheatley: If other implementers are going to run into the same problems, it makes a strong case for us finding a solution.
  • Doug Walker: OCIO clients have said they don't want us to use up textures, as they need those for other things.
[Kevin showed a plot of the spacing of hue samples in JMh hue]
  • Kevin Wheatley: This would be a straight line if they were evenly spaced. And some of the JM values we store at those samples are quite spiky.
[Kevin showed a plot of cusp J and M against h]
  • Kevin Wheatley: And some have discontinuities. And they change shape for the same gamut at different brightnesses.
  • Alex Fry: Could these be approximated by splines?
  • Kevin Wheatley: For the smoother ones you could probably find a polynomial fit.
  • Nick Shaw: The spikes are the primaries and secondaries, so you would fit a curve across each interval.
  • Alex Fry: Somebody could hand draw them in Illustrator.
  • Kevin Wheatley: That won't work for custom gamuts.
  • Alex Fry: Just as an implementation optimization.
  • Nick Shaw: OCIO could make curves for all the standard set that they have to supply.
  • Kevin Wheatley: They still need to support custom gamuts.
  • Doug Walker: Would the input table be fixed, because that is always AP1?
  • Nick Shaw: There isn't an input lookup as such. There is a reach lookup, which is always AP1, but changes depending on peak brightness. Nothing is the same for all targets.
  • Kevin Wheatley: You would need a way to generate a curve fit for any gamut by sampling some points. It wouldn't exactly match the current version, but could be close.
  • Nick Shaw: Other parameters would need tweaking, because we have made those the minimum to just work for a round trip currently. If we add another layer of approximation, we would need to slacken tolerances elsewhere to ensure the gamut is contained.
  • Doug Walker: How many LUTs are there for each transform in Scott's spreadsheet?
  • Rémi Achard: I think six tables.
  • Doug Walker: So we could make fits for all of those and check how close to the reference they come out.
  • Nick Shaw: And some spreadsheet entries are just different encodings of the same rendering, so they wouldn't need different tables.
  • Kevin Wheatley: The variation is the limiting gamut and peak white.
  • Nick Shaw: My DCTL has two 360 entry tables of float3 and two of float.
  • Kevin Wheatley: I use float4 arrays and pack extra values into those. I store extra things like focusJ.
  • Scott Dyer: It includes options for ID variations which we can ignore for now. I've added some possible new transforms like HLG and DCDM. And I've added gamma 2.2 as well as piecewise sRGB for Display P3.
  • Doug Walker: When we implemented DisplayP3 for OCIO we reached out to Apple and started a thread with a bunch of different people, and they were unambiguous that it's the piecewise curve.
  • Scott Dyer: That's what I thought.
  • Alex Fry: Me too.
  • Scott Dyer: So maybe we don't need DisplayP3 gamma 2.2.
  • Rémi Achard: I kind of agree with Daniele. When I measured my MacBook Pro it was gamma 2.2.
  • Nick Shaw: When I measured my MacBook Pro it was either gamma 2.2 or piecewise, depending on the display mode.
  • Rémi Achard: I think HDR uses an extended sRGB curve and SDR uses pure gamma.
  • Kevin Wheatley: These are all encoding variations that don't affect the table count. That includes D60 sim, because we could add that to a display encoding.
  • Nick Shaw: That's just the white scaling to not clip. D60 sims use a D60 white for the output JMh to XYZ conversion, so it would be a different limiting cusp table. I wonder about the non P3 limited Rec.2020 transforms. Could they be removed from the standard list? Most people don't have Rec.2020 displays. And those who do are probably capable of making their own custom transforms.
  • Alex Fry: SDR laser projectors are probably the most common Rec.2020 displays, so not HDR.
  • Doug Walker: The fewer there are in the standard set, the easier it is for people to choose the right one.
  • Scott Dyer: It would be nice to only expose the most common ones and have an advanced mode that could do all the others.
  • Nick Shaw: Baselight does it nicely where you can filter which ones it normally shows, but when you hold shift down it shows them all.
  • Scott Dyer: Lots of people want all the options by default, but we're trying to get away from the idea that you can only use something that's provided by The Academy and is in the repo. We've made it easier to make custom transforms now.
  • Kevin Wheatley: I'm looking at how many of these are just different encodings. Daniele's case for DCDM was that it was for convenience, and also we previously supplied it.
  • Nick Shaw: There's also the headroom DCDM has. If you grade D60 on a D60 projector then encode as DCDM, that's the same rendering. But if you grade D60 sim on a DCI projector, then encode a DCDM from that, it's not the same as direct DCDM, because of the fit white scaling that's been applied, which DCDM doesn't need.
  • Alex Forsythe: The one that just renders to XYZ with no limiting primaries is an odd one out.
  • Nick Shaw: Our transform needs to be supplied with limiting primaries, doesn't it?
  • Kevin Wheatley: You could put XYZ as the limiting primaries, but it will probably break things.
  • Nick Shaw: We assume our limit gamut is inside the reach gamut, which wouldn't be the case for XYZ.
  • Alex Fry: I think we should limit it to Rec.2020 so it is physically realizable.
  • Scott Dyer: Does anybody have strong opinions about transform IDs? I want to know what needs changing to make it better. Should every token be in every one?
  • Kevin Wheatley: The default assumption is all the tokens are in all the IDs. If there is no length limit the IDs can be fully specified, and the user name is the shorter version with assumptions about what "everybody knows".
  • Scott Dyer: That's the answer if the user name is the familiar term, and the transform ID can be explicit and programmatically generated and parsed.
  • Nick Shaw: Would that include things like saying D65 for Rec.709?
  • Kevin Wheatley: Yes everything. Arguably if it's encoded in the same way as the limit you only need to say it once.
  • Nick Shaw: There is the complication of what's a sim and what's a limit.
  • Kevin Wheatley: You don't put sim in the ID. It's just primaries, white point and transfer function for the encoding, and primaries, white point and peak for the limit.
  • Scott Dyer: Sim would only exist in the user name. An intern here has prototyped something similar to what Nick did. Once we pin down the spec he will update that accordingly.
  • Kevin Wheatley: Doug, with the texture spaces you were working on, they were named transfer function, primaries, white point. Was there a reason for that order?
  • Doug Walker: That's how digital cinema camera vendors do it, so that's how OCIO is structured.
  • Kevin Wheatley: We currently have primaries then white point then EOTF. How are the IDTs done?
  • Doug Walker: The OCIO convention came from the first configs, which were influenced by the IDTs.
  • Kevin Wheatley: Does it make sense for IDTs and ODTs to have the same order?
  • Nick Shaw: As Scott has commented, for IDTs you linearize first, then apply a matrix. For ODTs you matrix first then apply an inverse EOTF. The order is inverted.
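[The asymmetry Nick describes, sketched in Python with placeholder function names: the input side decodes the transfer function first and then applies the matrix, while the output side applies the matrix first and encodes last, so the natural reading order of the ID tokens arguably inverts too.]

    def idt_decode(code_value, linearize, matrix_to_aces):
        # Input: transfer function first, then primaries
        return matrix_to_aces(linearize(code_value))

    def odt_encode(display_linear, matrix_to_display, inverse_eotf):
        # Output: primaries first, then the on-the-wire encoding last
        return inverse_eotf(matrix_to_display(display_linear))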
  • Scott Dyer: They don't have to be the same. It seems primaries come first, and the on the wire encoding is the last thing you think about.
  • Kevin Wheatley: The brightness is the equivalent of the EOTF for the limit.
  • Scott Dyer: The luminance being at the end doesn't make sense.
  • Doug Walker: I think it would be useful to have all the parameters that affect the computation grouped together, and the encoding on the wire separate.
  • Nick Shaw: Then if you ordered the IDs alphabetically you would see them grouped by the ones that were the same rendering. Oh, except the white fitting may change that.
  • Kevin Wheatley: The fitting is still part of the encoding.
  • Nick Shaw: But it's not the OCIO break point.
  • Rémi Achard: Yes, the white point scaling depends on the encoding primaries, so in OCIO they still need to be different renderings.
  • Nick Shaw: Even if the difference is only a mult at the end.
  • Rémi Achard: It's already the case in ACES 1, where there is the highlight roll off in P3 DCI.
  • Nick Shaw: That's the edge case, where it isn't just a scale, because the scale would be too big.
  • Scott Dyer: It's a kludge right now, because we made up a number to split it between the scale and roll off. In v2 we just scale all of them.
  • Nick Shaw: We need to document "Don't use a DCI projector. Calibrate it to your limiting white point."
  • Scott Dyer: We tell people not to use DCI.
  • Doug Walker: Maybe DCI isn't a required one. It's roll your own if you want that.
  • Scott Dyer: Not having to provide P3 DCI would be nice.
  • Doug Walker: Is SDR Rec.2020 video needed?
  • Alex Fry: I've never seen it used in reality.
  • Nick Shaw: You can set monitors to that, but nobody does.
  • Kevin Wheatley: The use case would be using an LED wall as a monitor, because they tend to take Rec.709 or Rec.2020.
  • Nick Shaw: Isn't that again the kind of specialist use case where you make your own? Everyone who downloads the free Resolve won't worry about that being missing from the list. As long as it's well documented and implementers make it possible for people to make their own custom ones.
  • Scott Dyer: The SSTS already makes that easier, but we are now making it easier still, with only a few parameters. I assume they will do the same as they do now.
  • Kevin Wheatley: Does that mean if somebody makes a custom one it wouldn't have an official ID label, so people know it's non-standard and not necessarily supported in all systems?
  • Scott Dyer: Yes. That's what the namespace is for. People can create facility namespaces for custom transforms.
  • Nick Shaw: Because Resolve has a folder for custom ODTs, maybe they could supply some of the more obscure ones in that folder so people could use them as templates for their own, but also they could just remove them from that folder to simplify the list in the UI.
  • Kevin Wheatley: If they are in that folder and appear by default, that raises the issue of whether they are expected to be available. Maybe they could be provided, but in a different folder, as part of a ReadMe.
  • Scott Dyer: They don't need to validate the namespace to be able to add the IDs to an AMF. It's just for tracking so somebody else can recreate their pipeline.
  • Kevin Wheatley: But it's a weak spot if people think they can just put an ID in an AMF and it will just work by magic. It's about education.
  • Nick Shaw: If IDs are fully specified, if they put an ID in there that is a permutation of the standard primaries and EOTFs, the information is there to build the transform. We're not suggesting the software should build it automatically from the ID, but a knowledgeable person in a facility could.
  • Kevin Wheatley: Nobody has committed on the thread to saying which transforms should be included by default. Everyone has their own idea what they need, but there could be edge cases where others are useful.
  • Nick Shaw: Where are we at on bugs that need to be fixed?
  • Scott Dyer: I have a few that we can put in RC2, but I'd like to know if any of Kevin's work should be included.
  • Kevin Wheatley: The only bug I saw was the off by one error in the searches. If we had a hard requirement to reduce the number of tables we could work on that. The rest of what I've done actually obscures what the code does.
  • Nick Shaw: Maybe we could put comments in the CTL for implementers pointing out optimizations and pre-calculations that could be done.
  • Doug Walker: For things that don't change the result it's better to keep it easy to read. If we change how tables work, that will affect the result, so that should go in the CTL.
  • Kevin Wheatley: My version is similar, but not the same. And if we agree the pictures look the same, it might help us pick a tolerance for implementations. But my code isn't fully robust yet.

Meeting #157, June 19th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas
Doug Walker

Meeting Notes

  • Kevin Wheatley: We need to talk about what transforms we supply and how they are identified. Nick has been experimenting with parameterizing ID generation using the transform parameters.
  • Nick Shaw: There are a lot of permutations and we can't provide them all, but we want people to be able to make their own permutations and have a unique ID for them which they can put in an AMF, so two people who made the same combination would use the same ID. I made something where you can type in the parameters that are sent to the super-transform macro and it generates an ID. I used the chromaticity coordinates, which are the parameters we use, but I realized that's too complex. We can't generate an ID containing all those numbers, so my code identifies standard colour spaces, and just says "custom" for anything else. That isn't helpful to somebody who wants to generate the transform. So probably it should just be drop-downs to let you make any combination of the standard primaries and white points for limiting and encoding, and if you do something non standard it's up to you to communicate what that is, and it doesn't get a standard ACES ID.
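[A Python sketch of the drop-down approach Nick lands on: standard primaries and white points combine into an ID, and any non-standard combination is refused rather than given a fabricated ACES ID. The token layout here is hypothetical, for illustration only; the real format is whatever the transform ID spec defines.]

    STANDARD_PRIMARIES = {"Rec709", "P3", "Rec2020", "XYZ"}
    STANDARD_WHITES = {"D60", "D65", "DCI"}

    def make_id(limit_prim, limit_white, peak_nits, enc_prim, enc_white, eotf):
        for p in (limit_prim, enc_prim):
            if p not in STANDARD_PRIMARIES:
                raise ValueError("non-standard primaries: no standard ACES ID")
        for w in (limit_white, enc_white):
            if w not in STANDARD_WHITES:
                raise ValueError("non-standard white point: no standard ACES ID")
        # Hypothetical layout: limit first, on-the-wire encoding last
        return (f"Output.{limit_prim}-{limit_white}.{peak_nits}nit."
                f"{enc_prim}-{enc_white}.{eotf}")

    print(make_id("P3", "D65", 1000, "Rec2020", "D65", "ST2084"))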
  • Scott Dyer: Alex and I talked about this and thought of the same issues. Some are more simply named, because you can just say Rec.709, without saying 100 nit dim surround and BT.1886 EOTF. People know what Rec.709 is. The important thing is a unique ID that can be tracked in an AMF. We talked about having a registry for transform IDs where people can look up exactly what they mean. You can get that information by looking at the CTL of the transform. But a tool where you drop in a transform ID and it gives you all the details could be useful. Our base set deviates from including all the detail for things where people know what they are.
  • Nick Shaw: It's come up that people asked whether the P3-D65 transforms in our test set were gamma 2.6 or not, because it's not made clear. I feel the EOTF should be included in all IDs.
  • Kevin Wheatley: Do we need to distinguish official IDs that live in the AMPAS namespace from custom ones? OCIO has been having similar discussions around the names of color spaces. What is the ID for? Maybe there's a requirement that even if the file name is not fully descriptive, at the top of the file there is a full description.
  • Nick Shaw: That is the ID, which is in the top line.
  • Scott Dyer: And we have the user name, which is what should appear in the UI menu.
  • Nick Shaw: OCIO has to have a spreadsheet for the automatic config generation to map IDs to transforms, and part of the reason that's needed is because the IDs aren't fully descriptive, so the spreadsheet defines the mappings for the implicit aspects. My code doesn't yet have special cases for "known" aspects of a transform. But it does shorten names if the encoding and limiting space are the same, or the white points match, so it doesn't have to say "limited", or "sim".
  • Kevin Wheatley: It's harder to parse if it doesn't include everything.
  • Nick Shaw: We should probably open threads on ACES Central to have these discussions offline.
  • Doug Walker: I have looked at Rémi's notes about the profiling, but have no specific comments.
  • Rémi Achard: I've played with simplifying the code a bit. The main bottleneck is still all the power functions. I want to implement a GPU version to compare performance.
  • Kevin Wheatley: I've kept on with my Blink version. I've eliminated a number of the back and forth scalings, and incorporated others into the matrices. I've also looked at simplifying the powerP function for the exponent of 1. I've moved as much as I could into pre-computations. But if pow is the limiting factor, I don't know how much my optimizations help. I noticed there is a lot of protection against negative values in the code, and if you protect at the beginning, you don't need to make every operation tolerant to them, and you can add the sign back in at the end. I haven't timed it recently, but also you need to decide what is the baseline to compare against. The original Blink has a lot of branching. My code only uses two hue spacings, one for the source and one for the display, and puts all the lookups at one or other of those, so you do one binary search for each and use the same lerp interval to get all the values needed. I also use radians to eliminate the scales to degrees and back. There are now slightly different rounding errors.
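[A Python sketch of the negative-value protection Kevin describes: strip the sign once at the input, run the core math unguarded on non-negative values, and reapply the sign at the end, instead of making every individual operation "safe". Function names are illustrative.]

    import math

    def safe_pow(x, p):
        # The per-operation guard being removed
        return math.copysign(abs(x) ** p, x)

    def core(v, p, q):
        # Core math written for non-negative input only, no guards needed
        return (v ** p) ** q

    def protected(x, p, q):
        s = math.copysign(1.0, x)      # record the sign once at the start
        return s * core(abs(x), p, q)  # restore it at the end

    # Equivalent results, one copysign instead of one per operation:
    print(safe_pow(safe_pow(-0.5, 2.4), 0.5), protected(-0.5, 2.4, 0.5))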
  • Doug Walker: Changing the hue tables will affect the numerical results, but probably not in a perceptually significant way. But it would affect conformance testing. If we do these kind of optimizations, can we re-declare the optimized version as the reference?
  • Kevin Wheatley: That goes against the current CTL being the deliverable.
  • Nick Shaw: But if OCIO are the only people really implementing now, if they make optimizations those can be folded back in before anybody really looks at it, and that will benefit everyone.
  • Scott Dyer: We still have a window where we can roll in optimizations into the CTL.
  • Kevin Wheatley: Optimizations obfuscate it a bit for somebody trying to read it and compare it with Luke's paper.
  • Scott Dyer: Should we keep the original maths in the comments, and just say this is what it simplifies to in our case? Simplifying it to something that only works with our values breaks the functionality if anybody wanted to use it as Luke's full model. Some things are useful to keep, but others it may be better just to explain in documentation how magic numbers were derived.
  • Doug Walker: You may need two versions. A research version that is the springboard for ACES 3, and a simplified performant version. Maybe the CTL is the research version and OCIO is more performant. Combining things that don't change the result are fine to be verbose in the CTL. The challenge is if hue tables are interpolated differently that changes the numerical result, and if people refer to the CTL as the correct result, that needs to be in the CTL.
  • Kevin Wheatley: I haven't found a way to eliminate the binary search, because the reason for the uneven spacing is to hit the corners. Alex experimented with a way to predict the corner locations, but that's filtering the table, and may be more expensive than the search.
  • Nick Shaw: The tone scale itself isn't really the big overhead, is it? It's the chroma compression that's embedded in the tone scale step.
  • Kevin Wheatley: I haven't found a way to simplify that, but I assume that's where the overhead is.
  • Rémi Achard: I removed the safe power function, but it didn't have a big impact. Maybe on the GPU it would.
  • Kevin Wheatley: The other option is to approximate power functions by something faster. But I don't think we have time to look into that.
  • Nick Shaw: So Kevin, do you pass your optimizations to Rémi and see if they help?
  • Kevin Wheatley: I guess, although they are a bunch of small incremental changes that may add up to a noticeable improvement. There's no one big thing.
  • Nick Shaw: Have you checked whether your optimizations impact the inverse round trip?
  • Kevin Wheatley: Not exhaustively. But I've tested periodically. My biggest concern is that the reach gamut is sampled at two different spacings, and I wonder if the mismatch has any effect. It depends if the effect is to push things inside or outside the corner. I didn't look at whether P3 red is outside AP1.
  • Nick Shaw: I thought it was only Rec.2020 that it was outside, and that's why AP1 was made slightly bigger than Rec.2020 [P3 red is indeed inside AP1]
  • Kevin Wheatley: Jeffrey is commenting that the DCTL runs real-time on 8K 60p footage. But on what GPU? And how many spare GPU cycles are left for grading? And how does that compare to ACES 1? We shouldn't be orders of magnitude worse.
  • Nick Shaw: Rémi's 8x slower was with an unoptimized direct CPU port of the CTL.
  • Kevin Wheatley: Doug mentioned using HSV hue angles instead of trig based ones.
  • Doug Walker: Using those the hue is not so different, but the chroma is. As long as you still use a Pythagoras based M value you could use HSV hue instead of h.
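[A Python sketch contrasting the two hue measures under discussion: the trig hue h = atan2(b, a) on the opponent components, versus the piecewise-linear HSV hue computed straight from RGB with no trig calls, while chroma stays the Pythagorean M either way.]

    import math

    def trig_hue(a, b):
        # JMh-style hue: one arctangent, degrees in [0, 360)
        return math.degrees(math.atan2(b, a)) % 360.0

    def hsv_hue(r, g, b):
        # HSV hue: max/min and one divide, no trig
        mx, mn = max(r, g, b), min(r, g, b)
        c = mx - mn
        if c == 0.0:
            return 0.0
        if mx == r:
            h = ((g - b) / c) % 6.0
        elif mx == g:
            h = (b - r) / c + 2.0
        else:
            h = (r - g) / c + 4.0
        return 60.0 * h

    def chroma_M(a, b):
        # M stays the Pythagoras-based magnitude regardless of hue measure
        return math.hypot(a, b)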
  • Nick Shaw: It depends on its effect on the spacing for the lookups. But how much time do we have to build different versions to experiment?
  • Doug Walker: Rémi's profiling showed the conversions to and from JMh are roughly 20% each of the time. Looking at the diagrams, the power function is 66% of the time and the arctangent is 1.3%. I don't see sin and cos at all.
  • Kevin Wheatley: So maybe it's not worth looking at. I didn't look at removing the scaling of J and M, because that would change other things.
  • Rémi Achard: This is all just CPU, although I don't know how we could profile the GPU to this level of accuracy.
  • Kevin Wheatley: I will try to compile the OCIO code and do some profiling, but I can't commit to having the time to do that. I will create two threads on ACES Central about transform IDs and the base list of transforms that we recommend implementers include.
  • Scott Dyer: There's a spreadsheet of what's currently included, with a gold highlighted list of others that have been suggested. There's a tab in there with a list of the v1 transforms, although it may not be up to date. We intentionally omitted DCDM, but there's been a request to have those back. And what peak luminances do we need for HDR Display P3? Just 1000?
  • Nick Shaw: 1000 nits is the common reference.
  • Doug Walker: OCIO needs the list. When will it be finalized?
  • Scott Dyer: It needs to be soon. We need to get a consensus from the ACES Central discussion.
  • Doug Walker: The IDs are used in AMF. Are they used anywhere else?
  • Scott Dyer: I'm not sure.
  • Nick Shaw: Another question is whether 500 nits is the right level for the lower HDR value. Baselight use 600, and I heard Company 3 use 600 nits as their target for HDR.
  • Alex Fry: The JOLED panels peak at 520 nits. I think 600 is for PRMs.
  • Kevin Wheatley: I don't think there are many PRMs left.
  • Doug Walker: Are you saying if people roll their own combinations and generate an ID, an implementation needs to parse the ID and make a matching transform?
  • Nick Shaw: That's the question. Are implementations expected to make custom transforms on the fly from an ID in an AMF? Or is it just information for somebody who wants to make a matching custom transform?
  • Doug Walker: I strongly recommend a fixed list for implementers.
  • Scott Dyer: That's the intent. It's just for diagnostic information so people can figure out from the AMF what transform was used.
  • Kevin Wheatley: OCIO has to generate the fixed list via a spreadsheet, so maybe others do the same. Perhaps it would be good if the list could be built programmatically solely from the IDs with no special knowledge. My preference would be not to simplify the transform IDs by removing from the descriptor the parts that everybody already knows.

Meeting #156, June 12th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Daniel Brylka
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Daniele Siragusano

Meeting Notes

  • Kevin Wheatley: We only have a few issues to discuss. We should wait for Daniele to discuss what he brought up on Slack, so let's start with the NaNs.
  • Nick Shaw: I did some tests running the CTL on our set of sample frames. And it worked on all except 3, where it bombed out. The dominant wavelength image contains infs, which become NaNs in JMh, and then the hue lookup fails, and we end up with the negative index I mentioned in my issue. So we need to prevent NaNs going in and just immediately return NaN, because there is no correct rendering of a NaN, and we need to clamp infs down low enough that they don't overflow, but high enough that they will always blow out to white, even at 10k nits. Kevin and I discussed this and found that clamping AP1 to 16384 at the same time as the main AP1 clamp means a worst case scenario of zero in red and green will still leave a high enough blue value for a J which hits the ceiling, plus a bit more to push through if unclamped. Kevin suggested a power of two.
  • Kevin Wheatley: We can justify this a bit better. Previously we clamped to half max, which was arbitrary, as we don't use halves anywhere. I suggested a power of two for float encoding accuracy and also as a potential good range for a LUT implementation.
  • Nick Shaw: 16384 is about 1.35 in ACEScct, so a range you could grade to.
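[A minimal sketch of the sanitization being described, assuming a float image of shape (..., 3) in AP1; names and the low-end clamp are illustrative, not the CTL's.]
```python
import numpy as np

AP1_CLAMP_MAX = 2.0 ** 14  # 16384, a power of two; about 1.35 in ACEScct

def sanitize_input(rgb):
    """rgb: float array of shape (..., 3) in AP1."""
    rgb = np.asarray(rgb, dtype=np.float32)
    nan_mask = np.isnan(rgb).any(axis=-1)
    # Clamp infs (and anything huge) low enough not to overflow the later maths,
    # but high enough that a worst case like (0, 0, 16384) still blows out to
    # white even at 10,000 nits. The clamp to zero at the low end is assumed here.
    out = np.clip(rgb, 0.0, AP1_CLAMP_MAX)
    # There is no correct rendering of a NaN, so propagate it straight to the output.
    out[nan_mask] = np.nan
    return out
```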
  • Kevin Wheatley: There is still a cusp wrapping bug. For the binary search, high_i should be initialized as gamutTableSize - 1, not gamutTableSize. Otherwise the search result could run off the end of the table. Daniele asked about which transforms we provide, HLG etc.
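[Sketch of the corrected binary search initialization Kevin describes; illustrative Python rather than the CTL.]
```python
def find_hue_interval(h, hues):
    """Locate i such that hues[i] <= h < hues[i + 1] by binary search."""
    low_i, high_i = 0, len(hues) - 1   # was len(hues): could run off the table
    while low_i + 1 < high_i:
        mid = (low_i + high_i) // 2
        if h < hues[mid]:
            high_i = mid
        else:
            low_i = mid
    return low_i
```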
  • Nick Shaw: That opens up a broader discussion about what we provide, and how easy it is for people to make their own for others.
  • Scott Dyer: Nick already has a PR in for HDR Display P3. But the question is how many permutations do we provide? My intent was we provide a basic subset that OCIO might build into a config. But 'standard' for different people varies. Where do we stop?
  • Alex Fry: We've shipped HDR Display P3 in all our test configs, because it's the only way to test HDR in Nuke on a Mac with an XDR display.
  • Nick Shaw: The question is how easy will it be for people to add non-standard ones to an OCIO config?
  • Kevin Wheatley: That's currently unknown. We haven't formalized what we want to expose. This informs that. Maybe we provide an example for making a custom target.
  • Nick Shaw: Are OCIO planning to have a fixed function that could be exposed at a config level, with parameters which are our 'macro' parameters?
  • Kevin Wheatley: In principle, yes. The idea is to be able to make custom targets without a LUT bake. LUTs may be used internally, but no external LUTs needed for a config.
  • Scott Dyer: Right now somebody could take a CTL 'macro' and just change the numbers, and make something custom. But they can't use the CTL in a tool. And it may not be efficient for implementers to implement it as we do with macro calls to a 'super-transform'. Some of the complications are just in encoding, which isn't an issue for OCIO or Baselight. People are already asking "what about a linear Rec.709 CSC?" Their tools can already do this. What they really want is a transform ID they can reference in an AMF if they've done that.
  • Kevin Wheatley: Strictly that's outside our remit. What's missing is an ACES ID generator that takes our parameters and produces an ACES ID.
  • Nick Shaw: As a test I made the HLG and DCDM transforms Daniele asked about. A couple of things came up. The 1000 nit Rec.2100 PQ transform ACES ID doesn't include ST2084 in it, so you can't differentiate it from an HLG variant.
  • Scott Dyer: That's a typo that needs fixing.
  • Kevin Wheatley: That's where the ID generator would be useful.
  • Nick Shaw: The other thing I found was that you can make a DCDM macro by putting [[1, 0], [0, 1], [0, 0]] as the encoding primaries, but you need to use [1/3, 1/3] as the white, and the current white fit function then scales because the whites don't match. DCDM already has that headroom built in, so no scaling at all is necessary. So we need a way to switch off white fitting. I have a PR which bypasses white scaling if linear_scale_factor isn't 1.0, because 1.0 means it's already hard up against the ceiling.
  • Kevin Wheatley: Could you calculate the necessary scaling based on the headroom?
  • Nick Shaw: The DCDM scale factor is fixed and designed to be more than enough for any white. But if you calculated a scale based on headroom you would have a varying scale with DCDM when you should have none. But I suppose somebody could use a non 1.0 linear scale factor for some other reason, where white point fitting was still needed, so the alternative is to just have a parameter which is a switch for that.
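[A hedged sketch of the bypass Nick's PR describes; parameter and function names are assumed from the discussion, and 48/52.37 is the conventional DCDM headroom factor.]
```python
def scale_to_encoding(rgb, linear_scale_factor, fit_white):
    # A non-1.0 factor (e.g. DCDM's conventional 48/52.37) already leaves
    # headroom for any creative white, so white fitting is skipped entirely.
    if linear_scale_factor != 1.0:
        return rgb * linear_scale_factor
    # 1.0 means white sits hard against the encoding ceiling, so the creative
    # white still has to be fitted in. fit_white is a stand-in for that step.
    return fit_white(rgb)
```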
  • Scott Dyer: I intentionally omitted DCDM because in ACES 1 all we had was people who had problems through misuse or misunderstanding of that transform. I think people should just set their projector to D60 or whatever their white is, and then use a separate DCDM encoding tool.
  • Nick Shaw: I said the same to Daniele about HLG. People could make a PQ master, which they probably need for one of their deliverables, and make a cross-converted HLG from that.
  • Daniele Siragusano: People hate two step renderings. And they need disk space for an uncompressed intermediate. People will expect at least the same transforms they had in ACES 1. Isn't DCDM one of the most fundamental outputs?
  • Scott Dyer: In v1 it seemed more trouble than it was worth. I'm open to adding it back if needed. People were confused about DCDM white points.
  • Alex Fry: You didn't hear from people using it without problems?
  • Nick Shaw: We at least need some mechanism to bypass the white point scaling, or it's not even possible to make a custom DCDM transform with the current super-transform macro. If we don't include them, we could put my DCDM and HLG transforms in the user contributed folder as examples.
  • Daniele Siragusano: Isn't this a fundamental thing that needs to be supported in AMF, not a user supported thing?
  • Kevin Wheatley: That would be what the ID generator would help with.
  • Nick Shaw: Or we add an entry to AMF which could literally be a list of parameters for a custom Output Transform.
  • Kevin Wheatley: Daniele had comments about CSC and CATs, but those are not Output Transform related.
  • Daniele Siragusano: I just posted on Slack everything I saw an issue with when I went through the repo. The only Output Transform things were HLG and X'Y'Z at 48 and 300 nits.
  • Nick Shaw: We haven't tested our Dolby Cinema transform, but we just took the principle of using the curve for double the peak luminance and 'fit to fill'. Would that work for 300 nit cinema, just using the 600 nit curve?
  • Daniele Siragusano: That's what we did in Baselight. The 300 nit projection looks like a 600 nit monitor. Of course it's a different size and different flare etc, but the same ball-park. It's just a linear light multiply.
  • Nick Shaw: Should 300 nit projection be part of the standard set? Or do we just document what we do for projection?
  • Alex Fry: It's a real DCI spec now.
  • Kevin Wheatley: We need to fix what Nick discovered, to make it possible to make a DCDM transform, even if we don't deliver one.
  • Alex Fry: Display P3 HDR seems sensible to me. During all the testing that was how I could look at HDR. It's not really a standard, but in the wild it's probably 2-3 orders of magnitude more common than some of the others.
  • Kevin Wheatley: So are we saying that for cinema we need 48, 108 and 300 nits? What Display P3 levels?
  • Alex Fry: Maybe just 100 and 1000.
  • Kevin Wheatley: We have Rec.709 BT.1886 at 100 nits, and various PQ levels. What about HLG?
  • Daniele Siragusano: 1000 nits is the common bridge between PQ and HLG, and from there HLG does its own scaling.
  • Nick Shaw: ACES 1 just uses the 1000 nit PQ transform and cross converts to HLG.
  • Daniele Siragusano: Why go via PQ, not linear light directly to HLG? Back and forth with large numbers can introduce issues.
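[For reference, a sketch of the direct linear-light route Daniele suggests, per the BT.2100/BT.2390 formulation: display-linear light through the inverse HLG OOTF, then the HLG OETF, with no PQ round trip. A sketch, not the ACES transform.]
```python
import numpy as np

def hlg_oetf(E):
    """BT.2100 HLG OETF; E is normalized scene-linear light in [0, 1]."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    E = np.maximum(np.asarray(E, dtype=np.float64), 0.0)
    return np.where(E <= 1.0 / 12.0,
                    np.sqrt(3.0 * E),
                    a * np.log(np.maximum(12.0 * E - b, 1e-12)) + c)

def display_linear_to_hlg(rgb_d, L_W=1000.0, gamma=1.2):
    """Display-linear nits -> HLG signal via the inverse BT.2100 OOTF."""
    rgb_d = np.asarray(rgb_d, dtype=np.float64)
    Y_d = 0.2627 * rgb_d[..., 0] + 0.6780 * rgb_d[..., 1] + 0.0593 * rgb_d[..., 2]
    scale = np.zeros_like(Y_d)
    pos = Y_d > 0.0
    scale[pos] = (Y_d[pos] / L_W) ** ((1.0 - gamma) / gamma) / L_W
    return hlg_oetf(rgb_d * scale[..., None])
```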
  • Kevin Wheatley: And we need sRGB with the sRGB transfer function and 2.2 gamma. The only question is what we call them.
  • Daniele Siragusano: The original spec calls them sRGB encoding and sRGB display.
  • Nick Shaw: I think we need to explicitly say 2.2 gamma. Just "sRGB Display" is where the ambiguity comes from.
  • Kevin Wheatley: What about being clear about which environmental conditions the reach primaries are defined in. In the chroma compressor it's obviously related to the source. In the gamut mapper you're bridging between source and display. At one point we accidentally used the display white for the reach cusp calculation. If it's a mistake we could make it's a mistake any implementer could make, so we need to be very clear in documentation that the reach is generated from source parameters, even though it includes peak luminance, which is a display condition.
  • Nick Shaw: Do we still have separate input and output viewing condition parameters in the code? We only separated them for testing, and found they needed to be the same. So if we only use one set of parameters, people can't make them different.
  • Kevin Wheatley: If OCIO decide to implement the model as a generic function, we need to clearly document that the same parameters must be used.
  • Nick Shaw: Daniele, did you have questions as an implementer?
  • Daniele Siragusano: I just wanted to understand where you are, and what is still in flux.
  • Kevin Wheatley: The CTL is the reference, and anything from now should just be code bug fixes.
  • Nick Shaw: But we have discussed that if OCIO as the first implementer find simplifications, those could be folded back in for others.
  • Kevin Wheatley: The only thing raised so far has been looking into a spline fit for the tone scale, to be applied directly in J.
  • Daniele Siragusano: I thought we wanted to avoid splines.
  • Scott Dyer: We need to benchmark to find the source of the slow-down OCIO are finding.
  • Kevin Wheatley: Rémi's profiling suggests the tone curve is taking 40% of the time, which seems odd.
[Note: it is in fact the combined tone map and chroma compression step which takes that time]
  • Nick Shaw: The tone scale takes more time than the gamut compression? Rémi’s implementation is currently CPU only, yes?
  • Kevin Wheatley: The gamut mapper could be simplified by doing things like passing in values that remain constant, rather than computing them every time. Scott, when do you think we might get to the next release?
  • Scott Dyer: I only got to look at all this a few hours ago. I'm moving all the issues to the relevant repos. I'm closing old aces-dev issues, and making commits to fix the bugs we've found, so at some point we can make a new release candidate with detailed notes on what's changed. I need to make sure we include the list of outputs people need without overwhelming them with too many. I don't want people to think they can't use a particular output just because it isn't in the repo. The point is you can make your own. I don't know when the next release will be. We don't have a hard drop-dead date from OCIO. I need to incorporate bug fixes and do profiling of the CTL, and keep looking for bugs and issues. I don't know how long that will take.

Meeting #155, June 5th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Chris Brejon
Daniel Brylka
Alex Forsythe
Jeffrey D Mathias
Willem Nagtglas
Doug Walker

Meeting Notes

  • Kevin Wheatley: Only a couple of items to discuss, logged by Nick. There was also a v60 parameter that was not correct.
  • Nick Shaw: Comparing the CTL and the v60 Blink I wasn't quite seeing a match and I realized the gamut compression exponent was still set to 1.2 in the Nuke script when it is 1.0 in the CTL, to make it like Reinhard. I opened a PR.
  • Kevin Wheatley: Not a big deal but worth merging and re-baking to bring things into line. Nick also found a case where a crash is caused by the CTL indexing -1 into the array.
  • Nick Shaw: I batch rendered all the images in our test set through the Rec.709 CTL, and all except three rendered fine: the dominant wavelength image, The Lego Movie and Red Xmas. The dominant wavelength image has some inf pixels, and I suspect that is the cause. The comments in the code talk about one extra entry for wrap around, but that isn't actually implemented.
  • Scott Dyer: I put it in at one point, but I had problems and took it out again. We need to write tests for hue wrapping, and other things where we know what it should be doing. I am doing manual tests, but we need automated ones.
  • Kevin Wheatley: The code currently initializes the high value beyond the end of the table. The comment says it's ok because of the extra value, but that's not there so we need to subtract one.
  • Nick Shaw: The error I saw was off the start, not the end of the table.
  • Kevin Wheatley: Nick also commented, as others have, that a bunch of things could be initialized once, but aren't. He also found part of the computation that has a negligible effect.
  • Nick Shaw: At single precision float, with L_A of 100, it has no effect at all. So it's extra code lines adding nothing. We also have a lot of extra lines that completely cancel out because D=1.0 when we discount the illuminant.
  • Kevin Wheatley: If it only happens once at initialization it doesn't matter if it's over-complex, and it is at least matching the model. So the crashing bug should be fixed. The other is up for debate.
  • Remi Achard: I have done a fairly straight port of the CTL as a PR in OCIO. I moved the achromatic first part of the model out of the per-pixel path, but there are still a lot of optimizations to do. Doug had some concerns about the amount of work needed. My code exposes the transform parameters – peak luminance, limiting gamut, and AP1 clip. I need to expose the encoding primaries for the creative white. I plan to have the tone mapper as a separate OCIO operator.
  • Kevin Wheatley: I think Doug was commenting on the multiple scales back and forth to 100, and also whether the tone curve could be approximated with a spline.
  • Nick Shaw: How easy is it to make a spline to match arbitrary peak luminances with our parameterized curves? Ideally we would have modified the maths to work on J directly, but we have to go back to luminance because the tone curve is defined in that domain.
  • Kevin Wheatley: I've been looking at the gamut mapper as I think some things are calculated multiple times, which could be done once per hue, and included in the lookup. If we stored multiple values at one set of sampling intervals we could do less lookups. Also the chroma compression and gamut mapper both use the reach limit, but one is nominally source parameters related and the other is nominally output parameters related. At the moment the model parameters are the same for input and output. But if somebody wanted to change that, each reach gamut should use the appropriate parameters.
  • Doug Walker: I have some questions as an implementer. I haven't had time to follow the development, but as an R&D project it's been an amazing accomplishment. Looking at the CTL and Remi's port of that, it looks like a research project. The conversions back and forth between spaces make the algorithm more understandable, but as an implementer I'm thinking how to productize this. The conversions may not be desirable in a production context where speed is a priority. In ACES 1, some implementers baked 3D LUTs, others built a functional implementation. OCIO went from LUTs to functional, which is preferable for VFX with physically accurate values. I was hoping we wouldn't need LUTs for v2. There is a simplification pass needed. For example, the tonescale converts J to luminance with a bunch of power functions, runs Daniele's parametric tone curve, and then converts back to J with more power functions.
  • Nick Shaw: That's the simplified version! Originally it used the full model to go back and forth.
  • Kevin Wheatley: The Blink was built with many modules all working in the spaces they were defined in. And the Daniele model was defined in terms of display luminance.
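[A structural sketch of the step Doug is describing, with stand-in exponents and curve parameters (the real model constants differ): J is taken back to luminance, tone-scaled by the Daniele curve, then converted back to J, nesting several power functions per pixel.]
```python
import numpy as np

def J_to_Y(J, c=0.59, Y_ref=100.0):            # stand-in lightness inverse
    return Y_ref * (J / 100.0) ** (1.0 / c)

def Y_to_J(Y, c=0.59, Y_ref=100.0):            # stand-in lightness forward
    return 100.0 * (Y / Y_ref) ** c

def daniele_curve(Y, m=1.0, s=0.5, g=1.15):    # placeholder parameters
    x = Y / 100.0
    return 100.0 * m * (x / (x + s)) ** g

def tonescale_in_J(J):
    # Three nests of power functions per pixel: the cost Doug suggests
    # collapsing into a single spline or fit applied directly in J.
    return Y_to_J(daniele_curve(J_to_Y(np.asarray(J))))
```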
  • Doug Walker: OCIO won't be the only ones looking at this. In the game industry they often have an 'ACES' option which is an approximation of ACES. I'm thinking what do I do with this to simplify it into product-ready code. Am I willing to convert a whole series of operations into a rational polynomial approximation, and change the algorithm, which is a much bigger project? I'm wondering if we have time to do all that and hit the VFX platform deadline. I'm looking for guidance on how to approach this as someone who wants a shader that will run on a GPU.
  • Nick Shaw: My DCTL is a shader implementation that can sustain 24fps ALEXA 65 ARRIRAW at UHD.
  • Kevin Wheatley: We always had an eye to not going too crazy, which is one reason we picked a simpler model. We never had a budget to how long a frame should take to render on a given GPU.
  • Nick Shaw: And we always knew it would be computationally more expensive than ACES 1 because we're doing a lot more.
  • Doug Walker: It feels that given enough time there are a lot of things that could be simplified. An approximation of the tone curve applied in J. We could do that for each block, but the result might not match exactly and it would take time. Ideally that's something implementers would like to have. Can the ACES group take that on? Or do you leave it to individual implementers, which means there will be a bunch of different implementations that make different trade offs.
  • Kevin Wheatley: That would have been a question for the TAC. I've not done too much optimization so as not to paint implementers into corners.
  • Alex Fry: Even if the tone curve gets baked to a spline approximation, some version of code has to relate that back to the values things are defined in. The gamut hull approximation was always intended to be implementer friendly. Ideally we'd have an iterative exact boundary finder.
  • Doug Walker: Are implementation related considerations what you're focusing on right now? Looking for things that make it more performant?
  • Kevin Wheatley: Not the group. I am doing experimentation for my own purposes, but it's not part of the CTL. We don't want to keep changing the CTL because people may be tracking it. Nick's J to Y and back to J was already an optimization but we didn't try to capture the end to end curve as a spline or single function. That could be done if it was a particular pain point?
  • Doug Walker: Has any profiling been done to see what aspects take longest?
  • Kevin Wheatley: No profiling, but I changed the lookups in the Blink from linear to binary searches, which sped things up.
  • Remi Achard: In the OCIO CPU port I didn't do any profiling, but I think the tone-scale is the largest step, and a lot of time is spent in power functions. We need to spend time profiling.
  • Nick Shaw: Is Remi's C++ the best thing to use for profiling?
  • Doug Walker: Yes, but we'd have to do it again once we get a GLSL version.
  • Remi Achard: The profiling I did was for the transform, not the table building, which is done at compile time.
  • Kevin Wheatley: If power functions are high on the list, one of the exponents being set to 1.0 will lead to optimization. That's the gamut compression curve.
  • Nick Shaw: PowerP with the exponent at 1.0 is just Reinhard, which is much simpler. I'll send you a link to the code lines.
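[The simplification Nick mentions, sketched: with the exponent at 1.0 the 'PowerP' compression collapses to Reinhard and the power calls vanish.]
```python
def powerp(d, p):
    return d / (1.0 + d ** p) ** (1.0 / p)   # general case: two pow calls

def reinhard(d):
    return d / (1.0 + d)                     # p = 1.0: no pow calls at all
```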
  • Remi Achard: I can use my code to see which module is taking time, and I can try to move from research code to production code, to see if we could optimize the tone scale to operate in J.
  • Kevin Wheatley: There are a few multiply and divide by 100 that could be eliminated.
  • Nick Shaw: Everything is taken back to unity to apply power functions, then goes back to 100 scale. But it doesn't need to be 100 scaled.
  • Kevin Wheatley: The whole model could be rescaled to 1.0.
  • Doug Walker: Some matrices could be combined. It would be great if someone had reduced to the essentials. Maybe we could all collaborate on that, which could be the OCIO implementation.
  • Kevin Wheatley: I'm not sure there are many matrix pairs without something happening in the space between.
  • Doug Walker: What feedback have you got from other implementers? Are people looking at LUTs or functions?
  • Kevin Wheatley: I don't think anybody outside this call has implemented anything.
  • Alex Fry: My LUT bakes could be improved. Baselight with ACES 1 got a better result than OCIO due to different shapers. My current shapers cover more than is now needed with the AP1 clamp.
  • Kevin Wheatley: Next steps are to see what info we can get from Remi's code to give us performance hints. 
  • Scott Dyer: Do we want to talk about precision? Some of our parameter values are not representable as floats. I'd like to know if the differences I saw were down to half-float precision and what might be math errors.
  • Nick Shaw: I believe the differences you showed last week were just down to using half-float EXRs. When I used 32-bit EXRs the CTL matched the Blink much better. Looking at JMh values, which are 100 scaled, saved to a half-float EXR, there is significant quantization.
  • Kevin Wheatley: If your output is only going to be half-float, how much precision is enough?
  • Nick Shaw: We're rendering display referred output, so it will probably be integer, not half float.
  • Kevin Wheatley: So you could set a tolerance in 12-bit code values.
  • Doug Walker: Once people like OCIO add optimizations you won't get 12-bit precision. We use things like an optimized power function approximation.
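[One way to express Kevin's 12-bit tolerance idea, as a sketch; assumes normalized full-range display-referred outputs.]
```python
import numpy as np

def max_code_value_error(a, b, bits=12):
    """a, b: normalized [0, 1] display-referred outputs of two implementations."""
    return np.max(np.abs(np.asarray(a) - np.asarray(b))) * (2 ** bits - 1)

# e.g. "agree to within half a 12-bit code value" is max_code_value_error(a, b) < 0.5
```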
  • Kevin Wheatley: Do you have a view on the precision of the parameters we pass in?
  • Doug Walker: I like what Dolby did for PQ, where the constants are fractions with a power of 2 denominator, so exactly representable as float. You could tweak numbers to the nearest 32-bit float.
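[The ST 2084 constants Doug refers to: each is a fraction with a power-of-two denominator, so each is exactly representable as a binary float.]
```python
m1 = 2610.0 / 16384.0          # = 0.1593017578125, exact
m2 = 2523.0 / 4096.0 * 128.0   # = 78.84375, exact
c1 = 3424.0 / 4096.0           # = 0.8359375, exact
c2 = 2413.0 / 4096.0 * 32.0    # = 18.8515625, exact
c3 = 2392.0 / 4096.0 * 32.0    # = 18.6875, exact

def pq_encode(Y):              # Y in cd/m^2
    y = (Y / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2
```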
  • Chris Brejon: I remember Daniele commenting that the flare parameter is higher than his original suggestion. I compared ACES 1 and 2, and also Jed's Open DRT and I feel ACES 2 is more contrasty in the shadows. I wondered if this was due to the flare value.
  • Kevin Wheatley: When I compared to another rendering we use I felt the opposite.
  • Nick Shaw: Our intent was certainly to have lower contrast than ACES 1.
  • Alex Fry: I'm not seeing higher contrast when I test either.
  • Kevin Wheatley: Of the colorist feedback we had, some said too much contrast and some not enough, so we sit in the middle. The tone scale hasn't changed since that evaluation.

Meeting #154, May 29th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Alex Forsythe
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We have some follow up from the TAC Meeting, some bugs to look at, and finalizing the date to draw a line under development.
  • Alex Fry: The v60 LUT bakes I made in last week's call had an error, so if you downloaded them before Saturday, please re-download.
  • Scott Dyer: We need confirmation of dates from OCIO. Ideally we would be locked already, with a release candidate reviewed previously. We are still fixing bugs, and OCIO are being flexible about dates, but we want to be as accommodating as possible for them. I reported to the TAC about our discussion with OCIO, that we would track issues and fixes with clear commits and change lists. Then there would be a release candidate 2. After a certain point future fixes will have to go to a future point release. When OCIO give us a deadline we should aim for a week or two before that. We’re going to speak to other developers and get their feedback. Any suggestions of changing how we do things need to be in before the OCIO cutoff.
  • Kevin Wheatley: OCIO subscribes to the VFX Reference Platform deadlines. So it needs to be “kind of ready” in July by SIGGRAPH. Ideally end of June. Rémi may have thoughts.
  • Rémi Achard: I don’t expect the code to change much. Maybe a few bug fixes. So I don’t think it’s a big issue for OCIO, as long as there is a clear change log. There are still small differences between the CTL and Blink, so I suspect there are more things to be found. Most of the work for OCIO will be the GPU implementation.
  • Kevin Wheatley: Even refactoring things to be quicker won’t fundamentally change the algorithm. Some optimizations may only apply to certain implementations.
  • Rémi Achard: There are a few things in the RGB <> JMh that can be precomputed.
  • Kevin Wheatley: I’ve been working on a Blink version that follows what Scott did in the CTL. I’ve looked at what can be precomputed or optimized. Looking at different implementations will also help set tolerances for tests.
  • Scott Dyer: Looking at my own code again I see things I could have done better. Some, but not all, may be worth fixing. I’ve also been making a stripped down Blink version to match the CTL. The parameters are all constants in the code. I want to be sure the CTL is close to the Blink. What degree of match do we expect to see? I've been looking at individual functions, like ACES to JMh. Even for neutrals, I'm seeing noticeably different J values. Is this a CTL vs Blink precision difference or an error?
  • Kevin Wheatley: The h value flipping between zero and 180 is expected from small differences in a and b. But I'm surprised by J. The Blink closely matches Thomas' original Python and also my C++.
  • Scott Dyer: When I implement a camera log function from a white paper they provide a table of some inputs and outputs to compare. I don't have that here. What's the best way to get ground truth?
  • Kevin Wheatley: With the ColorChecker, the ACES values are sampled from a real chart, and the neutrals aren't completely neutral. To test my C++ I used values that were a multiple of the reference white. We could do that sort of thing in unit tests.
  • Nick Shaw: And we have the Y_to_J function as a comparison for neutrals.
  • Kevin Wheatley: We've diverged from Luke's original, but those changes shouldn't affect neutrals. If Thomas's implementation of that differs from the CTL but not the Blink, we should investigate why.
  • Scott Dyer: The CTL is defaulting to writing 16-bit files, so I guess I should force it to 32.
  • Kevin Wheatley: You should also probably feed in XYZ values, to eliminate the RGB to XYZ.
  • Scott Dyer: My CTL does the same as the Blink does for RGB to XYZ.
  • Kevin Wheatley: The other thing we have to look at is two bug reports. We need to decide: are they bugs we need to fix? The first was found by Rémi. It's a difference in the tone scale parameter calculations. Nick pointed out it makes sense if you look at the values the curve hits at minimum and maximum. That's what the Blink does. Is the CTL wrong?
  • Nick Shaw: If it's called r_hit_max, it makes sense it should be the value of r_hit at 10,000 nits.
  • Scott Dyer: I think it should be fixed, as it's an error I made copying from an older version of code.
  • Kevin Wheatley: I found something in the gamut mapper which sets M to zero if it is below a threshold. I feel that's incorrect. We could remove that and compress for all M values. The trap was there for NaNs, but I can't reproduce that. The other option is to pass small M values unchanged. I think this is a bug.
  • Nick Shaw: I suspect the NaNs could occur near the bottom where J is also near zero. Did you test inverses? The comment says that's where NaNs occurred.
  • Kevin Wheatley: I did, but didn't work out what values could trigger a problem.
  • Pekka Riikonen: Was this fix added when Alex saw black pixels in the Marvel trailer?
  • Alex Fry: It might well have been. How big is an M of 0.0001?
  • Kevin Wheatley: That's the question. It may not be visible, but is the trap needed? I was looking at plots, not images.
  • Nick Shaw: I'm sure it's not visible to make almost neutral colors neutral, but if it's not needed it's a bug.
  • Alex Fry: It was probably a hack we put in to fix something at the time.
  • Nick Shaw: And we meant to come back to it but never did. And it may not even be a problem with the current algorithm.
  • Kevin Wheatley: Removing it is the cleanest solution if it doesn't cause problems. So two bugs which we think we have fixes for.
  • Nick Shaw: We aren't sending fixes through to implementers one at a time, are we? So we can test further and send a batch of bug fixes after testing for a while.
  • Alex Fry: For testing is an ACEScct cube a good test image?
  • Kevin Wheatley: We need to test each stage with suitable input and expected output. For this we should just test the gamut mapper forwards and backwards. Pekka, you used color wheels when testing parameter tweaks.
  • Pekka Riikonen: Yes. I was really just focussing on the inverse.
  • Kevin Wheatley: In the coming week we should focus on test images for these bugs.

Meeting #153, May 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Daniel Brylka
Jeffrey D Mathias
Willem Nagtglas

Meeting Notes

  • Scott Dyer: For documentation I want a list of key changes from ACES 1. My list so far is:
  • - fix blue LED etc artifacts
  • - gamut mapping instead of clipping
  • - hue preservation with tone scale
  • - better HDR / SDR match
  • - lower mid slope contrast and softer highlight roll-off
  • - tone scale automatically adapts with peak luminance
  • - improved invertibility
  • Nick Shaw: SDR to HDR is now a continuum, with continuously varying mid grey level.
  • Kevin Wheatley: People may ask why it took so long. The pandemic is one reason. But also some of our requirements are in tension with each other. Nice look out of the box, but also reaching the corners, for example. We took a while to find the balance.
  • Scott Dyer: Each requirement is reasonably simple on its own, but making them all work is hard. We need to explain our decisions, and things we had to consider that other renderings don't.
  • Kevin Wheatley: People may say we ultimately have a per channel adjustment like the original, so why is it better? But it's more complex, and the tone scale is only applied to lightness. We tried more complex models, but they weren't as controllable, or were too complex. That was part of the journey. We spent a long time noodling edge cases.
  • Nick Shaw: Those are the hard ones. The colors in the middle that are inside all the gamuts are the easy bit.
  • Alex Fry: We spent a long time trying to fit values from cameras that produce data outside AP1, but ultimately gave that up as impossible with our constraints.
  • Scott Dyer: I want to have an overview for a general audience, and then have all the detail for those who really want or need to know.
  • Kevin Wheatley: Jeffrey asked if the final version was v59 or v60. Definitely not v59. The CTL is the reference which should match v60. But there may be bugs. We know of one in v60 that Scott found, where a value for the hull gamma is reciprocated twice.
  • Scott Dyer: We'll announce when we have matching CTL and Blink.
  • Kevin Wheatley: Coming at it from scratch for my code, I got confused by how many times focusJ, slope and other things are recalculated.
[Kevin showed his sketch of the limit (actual and smoothed approximation) reach and compression line]
  • Kevin Wheatley: The intersections we find for the limit and reach boundaries should be on the same line with the same slope.
  • Nick Shaw: The slope of the line at the top is modified by the focus distance gain.
  • Kevin Wheatley: Whatever the slope is it should be the same slope used everywhere. That doesn't seem to be quite what happens. In the reach boundary search we use the previously found intersection to recalculate the slope and focusJ. I wouldn't be confident that would give the same result.
  • Nick Shaw: I think that's because when we had multiple ways of finding intersections, in that sub-function it didn't have access to the original values, so re-solved for them from what it did have. The theory is that any point on the line solves for the same values, which is what makes it invertible. But it would be better with the "flattened" code to use the original values.
  • Kevin Wheatley: I can do some tests to confirm that passing the values in rather than re-computing them gives the same result. The other thing I noticed was that the reach uses the model gamma, whereas the limit uses a constant passed in. Also, that gamma is constant as the dynamic range changes. Is that intentional?
  • Alex Fry: I tuned the reach against Rec.709 initially.
  • Kevin Wheatley: In the documentation we should preempt questions people may ask. Implementers may want to take shortcuts, so they need to understand what the code is doing. Also because at any hue, only one cusp can be accurate, and others will be interpolated. Is it legitimate to use one hue for all the values? I think the logical one to be correct for is the limiting gamut, as we're trying to hit that corner. An implementer might want to put everything in one table, and if they do, which hue samples should they use? As they are all approximations anyway, people could legitimately ask if it's ok to combine the tables. Thinking as an implementer, I would want to minimize the pre-computation and caching.
  • Nick Shaw: Because we are puffing out and smoothing the limit, but we don't smooth the reach, is it more important that the reach is accurate?
  • Kevin Wheatley: I think the opposite. I think it's more important that the cusp value of the actual target is accurate. If you start at the actual corner, puff out then clip back, you will definitely hit it. If your samples cut off the corner, you can't be certain puffing out and clipping will hit it. The obvious test is to try putting images through versions using each set of hue samples for everything, and see if the results are noticeably different. Everything makes some difference. Even just whether you pre-calculate a value once or do it within a sub-function each time. We don't have a good metric for what is a good enough implementation. CLF had a metric, but is it appropriate for us?
  • Remi Achard: I had a related question. We currently calculate the tables for every peak value. Could we pre-calculate them for a couple of gamuts and then use those to derive values for every peak? I haven't checked.
  • Nick Shaw: I don't think that would work because the 'seams' at the primaries and secondaries of a gamut are not the same hue for all J values.
  • Kevin Wheatley: And the hue of the corners shift with the primaries. I wondered about that for the reach, because it's only a rough boundary to reach to.
  • Nick Shaw: I investigated before whether one table would work for the reach and cGamutReach, as they are both AP1, but the corners aren't at the same hue.
  • Kevin Wheatley: Why is the reach gamma a calculated value from the model, when the limit gamma is a tuned arbitrary value?
  • PR: I think Pekka had a reason why he thought that was correct.
  • Kevin Wheatley: I've found one bug in my code, so I'm getting closer to matching the Blink. Another thing that occurred to me is whether the reach gamut is scene or display referred. We use the same model parameters for input and output at the moment. But if we didn't, which would we use for the reach? This is chroma and gamut compression reach.
  • Nick Shaw: At what point in the transform do we move from scene to display referred? The chroma compression is the picture rendering, so I suppose after that things are display referred.
  • Alex Fry: I am currently putting together a set of v60 LUT bakes. We need to make sure the v60 Blink is in line with the CTL.
  • Scott Dyer: Pekka gave me some slightly tweaked parameters, which aren't in the Blink.
  • Alex Fry: Pekka's v60 PR was supposed to update the Blink to match. But I think his Blink kernel doesn't match the previous version, and it is supposed to be only the parameters that change.
  • Kevin Wheatley: I also need to check round tripping the Blink because my code moves the yellow corner noticeably on a round trip.
  • Nick Shaw: If I test the v58 Blink, yellow moves fractionally on a round trip, but only one or two 10-bit code values.
  • Kevin Wheatley: I saw a bigger shift than that. I need to check my code. Something I noticed was that the Desmos plots referenced in the code, and also Nick's documentation of the gamut compressor don't quite match what the code is now actually doing.
  • Nick Shaw: I need to review my documentation and update it with what is actually happening. I notice that I have some comments in my DCTL about whether some recalculations of values were necessary. I couldn't spot a difference if I didn't recalculate.
  • Kevin Wheatley: With the still image it looks the same pretty quickly, but it's the extreme values where you see variation.
  • Nick Shaw: Most pixels in the still life aren't touched by the gamut mapper.
[The remainder of the meeting was spent debugging Blink code]
  • Alex Fry: I will push LUT bakes of the bug fixed v60.
[Note: the v60 LUT bakes pushed straight after the meeting may still have errors in them, and so should not be used for now]

Meeting #152, May 15th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Remi Achard: What is the difference between v58 and the published CTL?
  • Nick Shaw: I think Pekka gave Scott some updated parameters that probably make no visible difference to the rendering.
  • Kevin Wheatley: The key difference is the CTL is a minimal implementation, with only the necessary functions.
  • Scott Dyer: I used Rémi’s NumPy version to compare against, with the parameters updated. I’ll give you the final parameter values. I refactored a little and renamed some functions. And Alex wanted me to structure the CTL in a particular way. And CTL has some limitations.
  • Remi Achard: The Python was updated yesterday.
  • Nick Shaw: That was v58. But for clarity, the CTL was derived from v58, with small parameter changes. V59 is Alex’s experimental version with different lookup interpolation.
  • Kevin Wheatley: I’ve been not copying the CTL, but working on a Blink version based on the same principles. I’m still debugging. I put some default values in the Blink, so it doesn’t rely on the Nuke node. I noticed some values get rounded by float conversions. Do those changes make a difference? LMS matrix rounding can make a visible difference. We should look into that. Should we specify LMS primaries or specify the matrix? This is really an implementation issue. We may be better specifying inverses. I also found an issue with the tone scale where after a certain point it drops back to zero.
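[An illustration of the rounding question, using the CAM16 XYZ-to-LMS matrix at its published precision; the experiment (rounding to four decimals) is ours, purely to show the size of the effect.]
```python
import numpy as np

# CAM16 XYZ -> LMS matrix as published
M16 = np.array([[ 0.401288, 0.650173, -0.051461],
                [-0.250268, 1.204414,  0.045854],
                [-0.002079, 0.048952,  0.953127]])

M16_rounded = np.round(M16, 4)   # a coarser rounding, for comparison

xyz = np.random.default_rng(0).uniform(0.0, 1.0, (100000, 3))
err = np.abs(xyz @ M16.T - xyz @ M16_rounded.T).max()
print(err)   # worst-case LMS difference attributable to rounding alone
```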
  • Nick Shaw: I remember the Daniele curve folds back beyond a certain peak value.
  • Kevin Wheatley: I added to the test list a check of a ramp over a certain range.
  • Remi Achard: I noticed a difference in the threshold for the binary search.
  • Kevin Wheatley: I think it was 10e-2 at one point, but then I increased the precision. We should add a test with a hue sweep and define the expected results.
  • Scott Dyer: After the discussion last week I added inverses for all the transforms, and updated reference images. I still expect the CTL may change after developer feedback.
  • Kevin Wheatley: Mostly what we need to look at today is tests.
  • Scott Dyer: I’ve not done as much as I hoped on the docs. draftdocs.acescentral.com should reflect the dev branch of the aces-docs repo. I want to get Nick’s documentation live on draftdocs, so people don’t need to run the Docker container locally to view it. This is technical documentation for those who want the detail of the maths. I’m working on adding Pekka’s chroma compression document.
[Scott showed draftdocs running locally on his machine]
  • Scott Dyer: I’m editing it a bit for a consistent voice. We’re updating the whole ACES Docs pages to make better use of MKDocs.
  • Nick Shaw: Should we ask Daniele if he can contribute something on the tone scale maths? The main rendering code is simple, but the steps to derive the parameters are not obvious.
  • Alex Fry: I should get my repo in line with the CTL and bake LUTs.
  • Pekka Riikonen: I’ll open a PR for my tweaks and call it v60.
  • Kevin Wheatley: I’ve added some tests as they occurred to me as a programmer, when writing my minimal Blink. But it would be good to get some tests for the things we have been trying to solve with our recent tweaks.
  • Nick Shaw: Most of those were about round tripping.
  • Pekka Riikonen: The latest one was for the P3 round trip. We need to make sure we include the EOTF in the round trip test.
  • Nick Shaw: Scott, what did you do when testing your inverses for limited gamut?
  • Scott Dyer: They round trip to within the limiting gamut. The white point sim and non-10,000-nit PQ transforms won’t fully round trip either. My tests weren’t super scientific.
  • Kevin Wheatley: We could do the test and get a metric for how good it is.
  • Nick Shaw: The easiest ones to test are just the Rec.709 and P3 ones.
  • Kevin Wheatley: Going to AP0 and back. How big should the cube be? Maybe a sequence of 256 images to test an 8-bit cube.
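[Sketch of the exhaustive 8-bit cube Kevin suggests, as 256 frames of 256x256, one blue slice per frame; the file-writing step is left to whatever image library is at hand.]
```python
import numpy as np

r, g = np.meshgrid(np.arange(256), np.arange(256), indexing="xy")
for b in range(256):
    frame = np.stack([r, g, np.full_like(r, b)], axis=-1).astype(np.uint8)
    # write `frame` as image b of the 256-frame sequence with any image library
```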
  • Alex Fry: Do we provide tests for each step through the transforms?
  • Kevin Wheatley: Ideally. Things like the gamut mapper are the hard ones to create tests for. In my tone scale test, an input value above a threshold drops to zero. I don’t know why. We should note that the P3 primary is outside the spectral locus, so don’t be surprised by that. We should say neutral input should have near zero M.
  • Pekka Riikonen: It’s interesting it’s not exactly zero.
  • Kevin Wheatley: We need to define what to expect. Visualizations of what to expect could be useful.
  • Alex Fry: It would be good to have a way to embed interactive 3D visualizations in the documentation.
  • Nick Shaw: I just implemented the Daniele curve in Python, and it doesn’t seem to collapse even with input over a billion.
  • Kevin Wheatley: Maybe it’s a Blink limitation where it wraps around. The Python is double precision. 

Meeting #151, May 8th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas

Meeting Notes

  • Kevin Wheatley: Scott wants to bring something up he and Thomas discussed on Slack, and I came across some things reimplementing the transform in Blink.
  • Scott Dyer: The list of transforms in the v2 developer release includes a number of forward transforms with a limited gamut in a larger encoding one. In v1 we did not include separate inverses for those. We just had e.g. a Rec.2020 inverse, so anything within P3 in that would round trip back to within P3. Now with our chroma and gamut compressions I would expect there to be a difference between inverting a whole container and a limited gamut within that. I tried to do some modeling, and I ended up with NaNs from the inverse when I ended up with negative J going into inverse chroma compression. It's either a bug, or something we need to trap for.
  • Nick Shaw: Is this what happens when you try to invert a value outside P3 through a P3 limited transform?
  • Scott Dyer: I used the ACES Macbeth chart values through a forward transform, which I would expect to be valid. Adding explicit inverses for the limiting gamut seems like overkill. But maybe not.
  • Kevin Wheatley: The way I see it, a Rec.709 limited transform inside Rec.2020 should produce the same result as the Rec.709 transform. So the inverse would be to de-encapsulate it from the Rec.2020 and use the Rec.709 inverse. There are two aspects. Have we found a bug? And what is the expected behavior?
  • Nick Shaw: Using a Rec.709 inverse do you need to clamp the Rec.2020 to Rec.709 to limit it to meaningful values before inverting?
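[Kevin's de-encapsulation route plus Nick's clamp, sketched with hypothetical function and matrix names.]
```python
import numpy as np

def invert_709_limited_in_2020(rgb_2020, M_2020_to_709, inverse_rec709_transform):
    rgb_709 = rgb_2020 @ M_2020_to_709.T      # de-encapsulate from the container
    rgb_709 = np.clip(rgb_709, 0.0, 1.0)      # discard values outside the limit
    return inverse_rec709_transform(rgb_709)  # then the plain Rec.709 inverse
```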
[Alex showed in Nuke various combined forward and inverse pairs]
  • Alex Fry: I'm only seeing small round trip differences in the fifth decimal place.
  • Scott Dyer: Maybe I made a mistake.
  • Alex Fry: When I do a forward and inverse of a full 10000 nit Rec.2020 cube I get a weird shape outside a 'cone of reasonableness'. We never tried to fill that cube with plausible AP1 values.
  • Thomas Mansencal: Why can't we do that?
  • Alex Fry: Our path to white will mean we can't hit some values.
  • Kevin Wheatley: So we can't invert that. We've only got to P3 and Rec.709 filling the cube. I would expect to be able take a full display P3 cube, embed that in a Rec.2020 container, and invert that and then when I go forward again get back something close to where I started. What's the difference between inverting full Rec.2020 and the P3 limited version? You'll get back different numbers, but it should work.
[Alex created a P3 in Rec.2020 round trip test]
  • Alex Fry: Most of it round trips, but we lose a little bit at one edge.
  • Kevin Wheatley: Those look like negatives, which we would expect the transform to force positive.
  • Nick Shaw: Why are we starting with any negative values?
  • Alex Fry: The P3 primaries are not all completely inside Rec.2020, so full P3 ends up with a few negative Rec.2020 values.
  • Kevin Wheatley: We're assuming somebody knows the limiting gamut of the container they are inverting. We should look at what happens if they don't and there are values outside the limit. We can use that as an example to show people why they shouldn't do that, and are better just using the right inverse from the limiting gamut. It may be you can just go backwards through full Rec.2020 as long as you go forwards through the transform afterwards. The inverse is not designed to create the best source pixels. It's just to create values that map back close to where you started.
  • Thomas Mansencal: From a UX standpoint we should pick a set of inverses and clearly document the reason only those are provided, and which should be used.
  • Nick Shaw: I feel we should only provide inverses for e.g. ST.2084 P3-D65, and then you can go forward either through the same or P3-D65 limited Rec.2020. Don't start with Rec.2020, because the inverse of values outside P3 is undefined.
  • Alex Fry: We clamp the forward transform, so we could clamp the inverse to the limit.
  • Kevin Wheatley: I'm against hard clamps unless there is a good reason. What do those values invert to? And where will they round trip to? If they break we may need a clamp. Or clamping is a parameter.
  • Nick Shaw: Because the forward transform clamps the input to AP1, something that inverts to outside AP1 will get skewed onto the boundary.
  • Alex Fry: The P3 red primary is non-physical, and outside Rec.2020.
  • Kevin Wheatley: We should note that embedding P3 into Rec.2020 produces negatives.
[Alex showed round-tripping the ARRI bar image]
  • Kevin Wheatley: We haven't tested, but we can't round trip full Rec.2020. We need to list out what ranges we can round trip from display referred. If there isn't enough room between e.g. Rec.2020 and AP1 for the compression it won't round trip.
  • Nick Shaw: In my fork of aces-docs I've converted the PDF I wrote on the gamut compression into Markdown, as a step towards final documentation. You can preview it in GitHub, but to see it properly with all the images you need to run the Docker container locally.
  • Kevin Wheatley: I started writing a Blink version with only the options in the CTL version. I didn't complete that. I hope to have that done by next week. It's almost half the number of lines of code of v59. Going through the code, I noticed a few things, such as the lower hull calculation being used more times than the upper one.
  • Nick Shaw: I think that makes sense, because the reach boundary is like the lower hull, but has no upper part, so that code wouldn't be used for reach.
  • Kevin Wheatley: We also use the offset and smoothness for the limit cusp but not the reach cusp.
  • Nick Shaw: Pekka needs to answer that, but they are used for different things. We are compressing from the reach boundary, which we want as exact as we can, to just outside the target gamut, which we then clip. So I think it might be ok.
  • Kevin Wheatley: That might relate to the issue with using Rec.2020 as the limit. When you expand that it may be outside the reach boundary. I added one thing to Scott's spreadsheet. And Scott has added one.
  • Scott Dyer: I thought we should have a list of where specific values should be mapped to.
  • Nick Shaw: I made a spreadsheet of the Daniele curve which shows where some values map to at different nit levels.
  • Kevin Wheatley: I suggested we should test that a neutral ramp maps to near zero M. The next steps are to fill out these test cases.
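[The neutral-ramp test Kevin proposes, sketched; rgb_to_JMh is a stand-in for the model's forward conversion, assumed to return (J, M, h) arrays.]
```python
import numpy as np

def test_neutral_ramp_maps_to_near_zero_M(rgb_to_JMh, tol=1e-4):
    ramp = np.linspace(0.0, 1.0, 1024)
    J, M, h = rgb_to_JMh(np.stack([ramp, ramp, ramp], axis=-1))
    assert np.max(M) < tol, "neutral input should carry near-zero colorfulness"
```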

Meeting #150, May 1st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas

Meeting Notes

  • Kevin Wheatley: We'll mostly be discussing tidying up, matching CTL and Blink. We talked about having a Blink version of the release in a separate directory.
  • Scott Dyer: I pushed a bug fix to the CTL, but it doesn't change anything.
  • Kevin Wheatley: The CTL comments mention possibly adding the extra LUT entry, but you haven't done it.
  • Scott Dyer: I tried at one point because other math needed updating to make it work. I was aiming to match Rémi's Python (with updated parameters) as a reference. I plan to add it at some point.
  • Kevin Wheatley: We were discussing bug fixes vs changes.
  • Scott Dyer: We don't want to do more parameter tuning. Anything we add needs to be justified and explained to the developers. Steve and Alex talked to developers at NAB and plan to continue to assist them.
  • Kevin Wheatley: It would be good to have a target date for creating documentation.
  • Scott Dyer: I have been writing an explanation of the changes from ACES 1. What we changed and why. Then there needs to be documentation of what the transform does and how, leveraging what Nick and Pekka have already written, for those who want to go deep. Also some user centric guides, showing the ODTs we've created and how to make custom ones. What the parameters do.
  • Kevin Wheatley: Yesterday [in the OCIO TSC meeting] Carol asked if there would be additional LMTs that OCIO would need to implement.
  • Scott Dyer: There is a plan to have some creatives create some LUTs. I am not yet sure how those will be converted to CTL or CLF. Ideally they would be broken down into mathematical operations, rather than being a single 3D LUT. That's hard to do from proprietary grading tools.
  • Nick Shaw: If they are CLFs, there is no implementation for OCIO to do. They can just add them to the repo.
  • Kevin Wheatley: Exactly. Are they part of the ACES release, or can we point to where they are without caring what's in them?
  • Alex Fry: Mathematical looks would be nice.
  • Scott Dyer: But then what tools do we give people to create them?
  • Nick Shaw: There are no tools for building CLFs. I thought about building some DCTLs equivalent to the types of CLF process nodes, and then if a look was built only with those, it would transfer one to one to CLF. But that doesn't give colorists a nice UI connected to a control surface. The current ACES 1 contrast LMT is CTL?
  • Scott Dyer: Currently CTL but easy to make a CLF.
  • Kevin Wheatley: So the priorities are matching the CTL in Blink, and documentation. The implementers need to know if the looks are required for ACES compliance.
  • Scott Dyer: That is a point of confusion with the logo program. Not everything that supports ACES has to support CLF if LUTs aren't a function of their tool. Only what is applicable. That's better than forcing everyone to implement everything. We want to make it easy for people to add more things later.
  • Kevin Wheatley: The only complex transform we should consider is a full 1.2 emulation. We can show the improvements, but for those who prefer the previous version we have an LMT. In CTL you could make it out of forward and reverse old and new transforms. But implementers could simplify that. OCIO could optimize it down to the old transform. Going back to the Blink, how do we make it match the CTL? Do we just match Scott's parameter name changes? Or do we start from scratch and back port?
  • Alex Fry: A full match would be nice but is a lot of work.
  • Nick Shaw: Does the CTL do any extra pre-computing that the Blink doesn't, such as the surround stuff?
  • Scott Dyer: I left that in because I was using the surround parameters in the dark to dim. But we're not doing it that way any more. I am sure the CTL could be optimized more, but some I kept for readability. I created a params structure like Rémi did in his Python.
  • Kevin Wheatley: In Blink it's not practical to put sub-functions in separate files as the CTL does. I did put encoding and decoding in separate nodes, so the DRT is XYZ to XYZ.
  • Scott Dyer: I wrote the CTL to help with updating the Blink, but I am not great with Blink.
  • Kevin Wheatley: Blink is not always easy. Has anyone verified Rémi's Python matches the Blink?
  • Scott Dyer: I tested to a certain extent when writing the CTL.
  • Nick Shaw: Rémi's Python is still at v55.
  • Kevin Wheatley: Things may match with normal images, but our recent changes have been about edge cases. We need to create test cases. They would be useful for implementers anyway.
  • Nick Shaw: Input being clamped to AP1 means we don't need to test such a wide range of input.
  • Kevin Wheatley: Except the inverse isn't clamped.
  • Nick Shaw: Should an inverse clamp its input to the display space it's inverting from? That is what it's for.
  • Kevin Wheatley: I don't like arbitrary clamps.
  • Alex Fry: If you invert 2000 nit PQ through a 1000 nit transform you will get weird values.
  • Nick Shaw: And inverting the whole Rec.2020 gamut through a P3 limited transform.
  • Kevin Wheatley: We can use the diagnostic modes in Blink to render out float values at different stages. In CTL we could write functions to do the same. We can create 'golden results' for the various functions. How exhaustive should the tests be? We can make tests as images. Integration tests are simpler than unit tests.
  • Alex Fry: What's the difference?
  • Kevin Wheatley: Unit tests test very small parts of the code. Integration tests are larger blocks, so the results are less specific about where an issue is.
  • Alex Fry: The 100 and 200 series diagnostic modes are the blocks of the DRT.
  • Kevin Wheatley: There should be tests to match those, and others. Checking generated tables, etc. We create a test image and run it through each diagnostic mode and compare with the same in CTL. Is the CLF test image enough?
  • Alex Fry: There are a lot of negatives in that which would go away with the AP1 clamp. I would suggest more precision in the area of interest. Maybe a 64^3 ACEScct image.
  • Kevin Wheatley: And some of the test images with ramps we already have.
  • Nick Shaw: We could take a test image and extract it as an image at each step through the DRT, then each process should take one of those images to the next.
  • Scott Dyer: Writing the DCTL I used the four major steps of the DRT to test – RGB to JMh, tone mapping and chroma compression, gamut compression, and final output.
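A hedged sketch of what such stage-wise "golden result" tests could look like, following Scott's four steps; the stage functions and golden file names are assumptions for illustration:

```python
import numpy as np

# Stage functions follow Scott's four major steps; all names are
# hypothetical. Feeding each stage's golden output into the next stage
# means a failure localizes to a single block, which is the advantage
# over one end-to-end comparison.
STAGES = [rgb_to_jmh, tonemap_and_chroma_compress, gamut_compress, final_output]

def check_stages(test_image, tolerance=1e-5):
    x = test_image
    for i, stage in enumerate(STAGES):
        x = stage(x)
        golden = np.load(f"golden_stage_{i}.npy")  # hypothetical CTL-generated reference
        assert np.max(np.abs(x - golden)) < tolerance, f"stage {i} mismatch"
        x = golden  # continue from the reference to isolate each stage
```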
  • Nick Shaw: For CLF we developed a relative metric. We talked about making that the metric used for all ACES implementations.
  • Kevin Wheatley: That could work for full rendered images. For other things we need to make definitions of what an algorithm should do. "The output of this should always be positive".
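A minimal sketch of both ideas, assuming a relative metric of the kind developed for CLF testing; the floor constant here is illustrative, not the agreed CLF value:

```python
import numpy as np

# Relative error with a floor, so near-black pixels don't blow up the
# metric; suitable for comparing full rendered images.
def relative_error(test, reference, floor=0.01):  # floor value is an assumption
    return np.abs(test - reference) / np.maximum(np.abs(reference), floor)

# Kevin's style of algorithm-level definition, as an executable assertion.
def assert_positive(output):
    assert np.all(output >= 0.0), "the output of this stage should always be positive"
```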
  • Scott Dyer: Let's start compiling a list of these ideas.
  • Kevin Wheatley: And an input and output image name.
  • Scott Dyer: We could make lists of where 18% grey and 100% white end up through various transforms.
  • Nick Shaw: Should I try to make my DCTL match the CTL structure?
  • Alex Fry: Param names rather than structure.
  • Kevin Wheatley: I'll try and look at a CTL to Blink port.
  • Nick Shaw: I'll look at my DCTL.

Meeting #149, April 24th, 1pm PT

Attendees

Kevin Wheatley
Alex Fry
Alex Forsythe
Nick Shaw

Lars Borg
Daniel Brylka
Sean Cooper
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen
Juan Pablo Zambrano

Meeting Notes

  • Kevin Wheatley: I have a fix for the table wrap issue from last week, adding an extra entry to the end of the tables. I opened a PR for that. I also tried to add support for different sized tables, but it doesn't work.
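A hedged sketch of the wrap fix Kevin describes, assuming hue-indexed tables sampled by linear interpolation; the exact table layout is an assumption:

```python
import numpy as np

# Duplicating the first entry after the last gives interpolation a valid
# upper bracket between the final sample and 360 degrees, instead of
# reading past the end of the table at the hue wrap-around.
def add_wrap_entry(table):
    return np.concatenate([table, table[:1]], axis=0)

def sample_hue_table(table, h, n=360):
    # table has n + 1 rows after add_wrap_entry; h is in degrees
    pos = (h % 360.0) / 360.0 * n
    i = int(pos)
    t = pos - i
    return (1.0 - t) * table[i] + t * table[i + 1]
```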
  • Alex Fry: The approach of inferring where the corner is seems better than using huge tables.
  • Kevin Wheatley: The corner is missed due to the uniform distribution. What if we used the same non-uniform distribution as for the cusp?
  • Nick Shaw: That uneven spacing is derived from the actual RGB values going round the edge of the cube. But the corner at limitJmax is not at exactly the same hue. Primaries and secondaries at different J values have slightly different h.
  • Alex Fry: I've been looking at an interpolation method which infers the position of the corner by extrapolating the intervals either side. But it seems to be moving everything a bit.
[Alex showed a plot of his corner interpolator, toggling it on and off, and showing the small shift of the whole boundary]
  • Alex Fry: It's not actually showing the output of the lookup. It is showing the boundary intersection using that reach.
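A hedged sketch of the corner inference Alex describes, treating the boundary samples either side of the suspected corner as straight lines and intersecting them; the two-points-per-side form is an assumption:

```python
import numpy as np

# p0 -> p1 is the sampled segment approaching the corner, p2 -> p3 the
# segment leaving it (each point an array like [h, M]). The corner is
# taken as the intersection of the two extrapolated lines, rather than
# hoping a table sample lands exactly on it.
def infer_corner(p0, p1, p2, p3):
    d1, d2 = p1 - p0, p3 - p2
    A = np.array([d1, -d2]).T          # solve p0 + t*d1 == p2 + s*d2
    t, _ = np.linalg.solve(A, p2 - p0)
    return p0 + t * d1
```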
  • Alex Forsythe: Scott has created CTL versions of v58 to provide to manufacturers as a developer release. Any changes we make from now will go into a future update. aces-dev now has a branch called v2-dev-release. The core code is a library, and there are separate repos for IDTs/CSCs, LMTs and ODTs. This is better for version control, as a new IDT or ODT doesn't necessitate a new ACES version. aces-core (the new name for aces-dev) will only change with version updates. We hope to announce the actual ACES 2.0 public release later this year.
  • Nick Shaw: Presumably we can push bug fixes to the CTL.
  • Alex Forsythe: Only critical bugs.
  • Kevin Wheatley: We need to make unit tests for the CTL.
  • Pekka Riikonen: We should port what Scott has done in the CTL back into Blink.
  • Alex Fry: I need to move the Blink into an AMPAS repo.
  • Nick Shaw: Is a special CTLRENDER build needed to run the CTL?
  • Alex Forsythe: You need to build from the CTL master branch using e.g. brew install --HEAD CTL. This is an important moment and a culmination of a lot of work.
  • Alex Fry: The Transformers One trailer came out last week, and that was done with v28 of the DRT.
  • Kevin Wheatley: If we back-port Scott's CTL to Blink we will have much less code, as he only took the path that is actually used. That should probably be a separate branch or directory, and what we have now becomes a dead end. But it's still there in the history.
  • Alex Forsythe: Documentation is very important describing the methodology.
  • Kevin Wheatley: If we make a new Blink from the CTL, that's all we need to document. And we can make use of the documents Nick and Pekka have already written, updating them as needed.
  • Alex Forsythe: We will build the documentation in markup on https://docs.acescentral.com/. That's what we've done for e.g. the RGC. It's better than people downloading PDFs that go out of date.
[The remainder of the meeting was looking at merging Kevin's bug fix PR]

Meeting #148, April 17th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Alex Fry: The main thing that's come up recently is invertibility of SDR P3. We're getting a tiny bit of clipping on a round-trip.
  • Pekka Riikonen: I made a v58 that does round-trip in P3. It means Rec.709 is a bit more clipped because the intersection is puffed out a bit more. It's not a huge amount. It's because our gamut intersection is only an approximation so we always need to expand a bit. The approximation seems to get worse for larger gamuts.
  • Nick Shaw: And is 100% P3 round tripping essential to people?
  • Alex Fry: I think these days we need to be able to say we can hit the corners of P3, given we're clipping the input to AP1.
  • Nick Shaw: Could we expand the input clip just outside AP1?
  • Pekka Riikonen: I did experiment with clipping to the chroma compression gamut, and that worked.
  • Nick Shaw: I suppose people expect to hit the boundaries while working in positive AP1.
  • Pekka Riikonen: This post shows the changes in v58.
  • Alex Fry: If we need to change the lower hull gamma for P3, should we just use a solve for it?
  • Pekka Riikonen: Our solve gave gamma values that were unnecessarily large.
  • Alex Fry: It feels wrong to compromise Rec.709 just for a P3 round trip. Could we store different parameters for different gamuts?
  • Nick Shaw: Then it doesn't generalize to any gamut.
  • Alex Fry: We could document how we derived the values.
  • Nick Shaw: How visible is the extra Rec.709 clipping?
  • Pekka Riikonen: It's not visible until you gamma up and look at clipping.
  • Nick Shaw: And the core rendering obviously doesn't change at all.
[Pekka showed the difference between v57 and v58 with gamma raised]
  • Pekka Riikonen: Red actually clips less, but blue clips more. There's maybe a little more noise. v56 had more clipping.
  • Alex Fry: If we have to increase the gamma for P3, we'd have to increase it even more for Rec.2020 if people could see that.
  • Scott Dyer: Ideally we would round trip any gamut. But we need to ship something. My CTL now matches the Blink. That's when I plotted round trips and noticed small P3 differences. If v58 solves that, we have to admit it's not perfect but lock it down. We've improved on the things we set out to. I have made an LMT which matches the v1 tone scale. I've updated the surround adjustment code, so it is just a gamma in luminance, the same as in ACES 1. It's off by default, but the code is in there.
[Scott showed the code where the surround adjustment is applied]
  • Scott Dyer: I put it after the clamp to peak luminance, but before the white point scaling for any "sim" encoding.
  • Alex Fry: Do we need it for HDR? We didn't have it before.
  • Scott Dyer: Maybe not.
  • Nick Shaw: It's off for now, but gives people the option if they think it's necessary.
  • Pekka Riikonen: Should vendors expose it?
  • Scott Dyer: Probably not to end users. We should probably only expose primaries and white point of the limit and the display, and the display EOTF. Other things are for advanced users.
  • Nick Shaw: Resolve has a built-in Output Transform function you just pass parameters to for a custom DCTL ODT, if you are not writing the processing from scratch. We probably only want to expose a similar set of parameters in there to what is there currently.
  • Alex Fry: There's a question in the chat, about "would a Rec.709 image look the same on a P3 display?" If the source fills a lot of AP1, no. Values on the edge will extend more into P3. If you have a display referred Rec.709 image and put that through the inverse Rec.709 and then forward P3 transforms, again it will be expanded at the edges into P3, along the same hue line.
  • Nick Shaw: But a Rec.709 red primary won't end up as a P3 red primary. It will be on the edge of the P3 gamut, but along from the primary at a more orange hue.
  • Alex Fry: In the code we have some cryptic variable names. We could rename those to more descriptive names.
  • Scott Dyer: I removed Nick and Daniele's names in the CTL.
[The remainder of the meeting was mostly looking at detail of the code, and tracking down a bug]
  • Alex Fry: I'll merge v58 and bake LUTs. When the CTL settles, I can back-port stuff from there to the Blink.

Meeting #147, April 10, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Chris Clark
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
JP Zambrano

Meeting Notes

  • Alex Fry: We talked last week about using the surround condition only at the output stage.
  • Nick Shaw: Scott, you exposed it in your CTL, but is it more complex than that?
  • Scott Dyer: It's easy because it's there, but is it a good way to do it? Or should we just do what we do in v1, a gamma in luminance? The effect of changing the surround in the model is much larger than that. I only exposed the surround value used for the final JMh to XYZ.
  • Nick Shaw: I think it does the right thing inside the gamut, but the curvature of the display gamut hull in JMh is affected by the surround parameter; its shape changes. Just using a different surround in the final JMh to XYZ, when we've gamut mapped to a hull built with the original surround value, will cause either more clipping or not being able to reach the boundary. I've not worked out which.
  • Alex Fry: The output surround parameter in the Blink affects the hull shape and the final conversion.
  • Nick Shaw: We'd need to test if our gamut approximation still works when we do that.
  • Pekka Riikonen: I tested with adjusted numbers to better match ACES 1, and going to dark increases clipping a bit, and going to average reduces clipping. Does it matter if it clips a bit or doesn't quite reach the boundary?
  • Scott Dyer: It would need a lot more testing.
  • Nick Shaw: Given we are in XYZ on the way to output, it's easy to bounce to Yxy and apply gamma, which seems safer.
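A minimal sketch of that route, assuming the ACES 1 dim-surround gamma value of 0.9811 and Y normalized to the display range:

```python
import numpy as np

# Bounce XYZ to Yxy, apply the surround gamma to Y only (leaving
# chromaticity untouched), and return to XYZ. 0.9811 is the value
# ACES 1 uses for dark-to-dim; Y is assumed normalized to [0, 1].
def dim_surround_gamma(XYZ, gamma=0.9811):
    X, Y, Z = XYZ
    s = X + Y + Z
    x, y = X / s, Y / s
    Y_adj = Y ** gamma
    return np.array([Y_adj * x / y, Y_adj, Y_adj * (1.0 - x - y) / y])
```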
  • AD: We can ship something with defaults now and add a function later.
  • Alex Fry: Dim vs dark was not a complaint area for the original.
  • Nick Shaw: If anything, some people complained the dark to dim messed up their expectations. I suspect changes people might make in a trim pass would be larger anyway, so it just adds confusion. If we leave it in, I think it should be hooked up to nothing, with a "to do" comment, rather than hooking it up to the output surround condition where people could use it.
  • Christopher Jerome: Was there a plan to provide an LMT to match the ACES 1 tone curve? Could that work for this.
  • Scott Dyer: When we have a final version we will definitely make LMTs like that which will match ACES 1 contrast.
  • Nick Shaw: But an LMT affects all outputs, and the dark to dim creates a difference between targets.
  • Pekka Riikonen: Scott, have you found the inversion issues you had in your CTL?
  • Scott Dyer: I'm making progress. I'm using Rémi's NumPy version as a reference. But that is v55, so I'm updating it. I think my issue is in my gamut compression code. I obviously need to make it match the Blink before we can release.
  • Alex Fry: I need to bake some v57 LUTs.
  • Nick Shaw: Pekka, you mentioned a v58.
  • Pekka Riikonen: That would just be parameter changes.
  • Scott Dyer: Are we switching to Reinhard from powerP?
  • Pekka Riikonen: You can simulate that by changing the exponent in powerP to 1.0, which I could do in v58 so people can see the difference.
  • Nick Shaw: I don't know if powerP is a published curve, or something Jed invented for the RGC. It's an extra layer on top of Reinhard, and without that layer the maths is simpler.
  • Pekka Riikonen: It makes the compression a little more aggressive. It doesn't help the blue. The change is very small. If I change the powerP exponent to 1.0 in v58 and people are happy with it we can simplify the final code. I'll push v58.
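A small illustration of the equivalence Nick mentions; this uses the core of the powerP curve without the threshold and limit scaling around it, which is an assumption about the full function:

```python
# The core of powerP compression; with p = 1 the denominator becomes
# (1 + x), i.e. plain Reinhard, so setting the exponent to 1.0 in v58
# simulates the Reinhard variant with no new code.
def powerp_core(x, p):
    return x / (1.0 + x ** p) ** (1.0 / p)

def reinhard(x):
    return x / (1.0 + x)

assert abs(powerp_core(2.0, 1.0) - reinhard(2.0)) < 1e-12
```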
  • Nick Shaw: Then I'll aim to make a DCTL v58.
  • Scott Dyer: Most of this week's forum discussion has been about the naming of sRGB and gamma.
  • Alex Fry: I have made a visualization with constant M rings to look at the kinks we were discussing last week.
[Alex showed his Nuke 3D JMh visualization with different chroma compression stages on and off]
  • Pekka Riikonen: The effect comes from the reach compression in the chroma compression – the in-gamut compression. The limit is AP1, so there is a discontinuity there.
  • Nick Shaw: But we clamp to AP1, so there shouldn't be anything outside AP1. Or at least anything that was outside there is already skewed into AP1. Is there something that will occur in real images that will create the hue sweeps that show this effect?
  • Alex Fry: I think this is what creates the artifact in the color wheels.
  • Nick Shaw: The sweeps in the blurred star images are different because they are straight lines in CIExy. Does anything real create a hue sweep?
  • Christopher Jerome: Since primaries are most chromatic, intuitively they should get the least compression, and they seem to get the most. Are primaries high luminance?
  • Nick Shaw: Secondaries are higher luminance, because there you get a second primary added in.
  • Christopher Jerome: So maybe primaries get more compression because they have lowest luminance and highest chromaticity.
  • Nick Shaw: That compression is what creates the path to white, isn't it, and it's doing it differently at different hues.
[Alex showed the code of the chroma compression]
  • Pekka Riikonen: The shape of the path to white is driven by the tone scaled J. The earlier versions had a different shape, because they weren't driven by that at all.
  • Christopher Jerome: Is the compression related to the output space? Otherwise it's odd that it's related to RGB primaries.
  • Nick Shaw: It's related to the dynamic range of the output because it's driven by the tone scale. But it's not related to the primaries.
  • Pekka Riikonen: We first rescale M to match the scaling we did to J, which keeps the source J to M ratio. The rest is look and path to white.
  • Christopher Jerome: So why is it related to any RGB primary?
  • Pekka Riikonen: The reach is still determined by AP1, so it's related to those primaries because of reach mode.
  • Christopher Jerome: Could you use a round shape soft clipped to AP1?
  • Pekka Riikonen: I tried something like that by smoothing the AP1 tables, but couldn't get it to work.
  • Nick Shaw: Will hue sweeps ever occur in real images? As Daniele showed in his video, additive light means straight lines in CIExy.
  • Christopher Jerome: The ICC created a reference gamut which was a smooth connection space. I'll post a link.
  • Alex Fry: It's hard to know if we're chasing our tail with unrealistic test images.
  • Nick Shaw: Is the AP1 used in chroma compression a display referred version of AP1 with a cusp?
  • Pekka Riikonen: The cusp is just used for normalization, using only the M value. I did try a constant normalization, but the constant needed to be varied for HDR, maybe for different gamuts too.
  • Nick Shaw: So the amount of compression is driven by the ratio of the M at limitJmax and at the cusp. Are the primaries with lower cusps compressed more?
  • Alex Fry: Is this tunable in any way?
  • Pekka Riikonen: You could change parameter values, or use a different value for normalization instead of cusp M.
  • Nick Shaw: Could you use the M value at mid J, so it didn't ride up and down with the cusp? Or would the ratio to M at limitJmax then be constant?
  • Pekka Riikonen: Using the cusp was an easy way to make it hue dependent but not target gamut dependent.
[Alex showed a visualization with an AP1 cone as the source]
  • Alex Fry: This is a data sweep, rather than a realistic image.
  • Pekka Riikonen: You could add an M gradient.
  • Alex Fry: There seem to be two kinks. One coming from a straight line from the primary in CIExy and one from a straight line in JMh.
  • Nick Shaw: Why does the red primary poke out more?
  • Alex Fry: Actually all the primaries do it. Could something be misaligned?
  • Pekka Riikonen: We fixed the white mismatch in v57.
  • Nick Shaw: We could compare the cusp tables to see if the reach and cusp table primaries line up in terms of hue.
  • Alex Fry: We have an extra glitch that seems to be an error in the table.
  • Nick Shaw: Is that at the wrap-around point?

Meeting #146, April 3, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Remi Achard
Lars Borg
Daniel Brylka
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Alex Fry: Scott has a report on his CTL, and Pekka has some updates.
  • Scott Dyer: I’ve been working on all the Output Transform CTLs in a branch of my fork. I’ve duplicated the structure of ACES, removed unused ones and added some new. Each file is a top level macro with the parameters. I’ve added a scale factor parameter which is only needed for the 0.5 scale of PQ for Dolby Cinema, 216 to 108 nits. I’m making presets from these to test. It does run the init for every frame. I’ve been testing and making plots, which led to some questions.
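A minimal sketch of how that scale factor could be applied, with output_transform and pq_encode as assumed wrapper names:

```python
# Render with a 216 nit tone scale, then halve the linear output before
# PQ encoding so the encoded signal peaks at 108 nits, mirroring the
# 48 nit cinema scaling approach.
def dolby_cinema_odt(aces):
    linear = output_transform(aces, peak_luminance=216.0)  # hypothetical wrapper
    return pq_encode(0.5 * linear)                         # hypothetical wrapper
```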
  • Pekka Riikonen: You have a parameter for surround?
  • Scott Dyer: Yes, I exposed the surround conditions for output only. The values in the model change things too much, so this uses a different gamma on the output conversion. After testing we could update the gamma values and use this mechanism.
  • Pekka Riikonen: If you change model parameters it changes the shape of the gamut. 
  • Scott Dyer: Even if you just change it for the final output conversion? It seems to do what I expect. I don’t suggest anybody use it right now.
[Scott showed a plot of the tone curve for dark, dim and average]
  • Scott Dyer: With D60 sim, the channels are scaled to fit the highest channel to the display maximum, so the other two end up lower. The max channel is clipped at peak, but the other two keep increasing because the tone scale still has slope. For PQ the tone scale can go above 1000. Is that a problem?
  • Nick Shaw: Although the tone scale can keep going, the Blink and DCTL have a clip at peakLuminance. 
  • Scott Dyer: I can enable that for all outputs, but it doesn’t help with the white point scaling. Should we clip the green and blue channels before scaling them?
  • Nick Shaw: I asked about that in the issues document. What should fit white do in PQ? Do we clip to the virtual D60 monitor, then encode as D65 and let the channels go where they may? What are the criteria for QC failure? Is it just a code value threshold?
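A hedged sketch of the "fit white" scaling being discussed; the matrix and white point inputs are placeholders, not the real constants:

```python
import numpy as np

# The creative (D60) white expressed in the display's native (D65) RGB
# has one channel above 1.0. Scaling everything by the reciprocal of the
# maximum channel fits that channel to the display peak, which is why
# the other two channels end up below maximum.
def fit_white_scale(d60_white_XYZ, XYZ_to_display_RGB):
    white_rgb = XYZ_to_display_RGB @ d60_white_XYZ
    return 1.0 / np.max(white_rgb)  # applied to every output pixel
```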
  • Scott Dyer: What about naming and directory structure? I currently have them sorted by primaries.
  • Alex Fry: I feel sorting by dynamic range is logical.
  • Nick Shaw: Do we still need Rec.2020 SDR?
  • Scott Dyer: I eliminated 2000 and 4000 nits, as people could easily make them. 
  • Alex Fry: 4000 is the official Dolby Vision target. Does anybody use that? 
  • Thomas Mansencal: There are LG TVs now that peak at 4000.
  • Scott Dyer: The big one is sRGB. What do we call the gamma 2.2 version?
  • Nick Shaw: I think FilmLight now default to the 2.2 gamma one.
  • Lars Borg: You need to clearly distinguish the two in the name.
  • Alex Fry: 2.2 gamma needs to be in the name, but with what in front? My gut says “sRGB, gamma 2.2”. 
  • Lars Borg: If you have “sRGB” and “sRGB, gamma 2.2” that’s still confusing.
  • Thomas Mansencal: It may be worth looking at OCIO naming conventions.
  • Alex Fry: If they are sorted by primaries is that limiting or encoding primaries?
  • Scott Dyer: Encoding is what you set the monitor to. So encoding then limiting as the hierarchy.
  • Thomas Mansencal: A hierarchy is good for OCIO because it’s based off aces-dev.
  • Nick Shaw: The organisation is changing, isn’t it, so CSCs are no longer separate from IDTs?
  • Scott Dyer: Alex Forsythe is rearranging things, so the core code is in aces-dev, and the inputs and outputs are each in a separate repo. So people don’t think adding a new IDT or ODT has to be a new ACES version. The system version is the core code. I am close to having CTL implementations of all the new transforms. I’ll post when they are ready to test.
  • Pekka Riikonen: I pushed a v57 which only changes some parameter values, and fixes a wrong white point Scott noticed we were using. This improves the inverse to AP1. I’ve continued to experiment, and simplified cusp smoothing. I also tried using Reinhard instead of powerP. It’s simpler code, and doesn’t seem to affect the rendering much. Nick noted powerP is the same as Reinhard if the exponent is 1.0, so people can try it to compare without new code.
  • Nick Shaw: For the RGC we started with Reinhard, then changed to powerP, but I forget why.
  • Pekka Riikonen: Nick already has a v57 DCTL. 
  • Nick Shaw: I do, although I pushed a couple of bug fixes since the initial commit. 
  • Alex Fry: Should we look more into the “3 leaf clover” shape we saw last week. Maybe we should test with even rings in JMh, to remove the effect of rings outside the locus.
  • Pekka Riikonen: I think it’s an effect of reach mode. The primaries are pinched in. 
  • Nick Shaw: When it looked really bad, wasn’t that with everything enabled?
  • Alex Fry: It was without gamut compression. 
  • Pekka Riikonen: Pinching is expected, but why not at the secondaries?
[Alex set up a test with even JMh rings]
  • Pekka Riikonen: How long should I keep iterating?
  • Scott Dyer: We should be done already! It’s a difficult task. Nobody could deny that where we are now is an improvement on v1.
  • Pekka Riikonen: I could do one more iteration and then stop.
  • Alex Fry: Small parameter changes won’t hold up implementers. 
  • Alex Fry: I’m testing rings with M of 50 and J going 0 to 500. 
  • Pekka Riikonen: That doesn’t look as bad as what we saw last week. 
  • Nick Shaw: It makes sense, since with reach mode primaries need more compression. 
  • Alex Fry: It’s more obvious at the top.
  • Pekka Riikonen: That’s expected because the compression is driven by the tone scale. 

Meeting #145, March 27th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Remi Achard
Daniel Brylka
Alex Forsythe
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
JP Zambrano

Meeting Notes

  • Alex Fry: No Kevin this week. We're going to discuss the blue issues and Scott's CTL work.
  • Pekka Riikonen: I pushed v56. It only changes the LMS primaries. The Blink is identical to v55. I posted comparisons with ARRI Reveal. I think both are reasonable renderings of these images. With the hue wheel the blue issue is still there but less. I plotted the result of a blue to magenta ramp. It's in the corner so sudden changes are expected. And I showed that ARRI Reveal doesn't reach the corners, so it's easier for it to be smoother. You can see they compress blue a lot.
  • Alex Fry: They have different requirements. We want to hit all screen colors. Some people said smoothness should be prioritized, but others pointed out different people have different priorities.
  • Nick Shaw: And an LMT can add smoothness.
  • Pekka Riikonen: Plotting all the stars there is clipping at every edge, which is the sharp kinks. Our gamut mapping isn't precise, so there is always some clipping.
  • Alex Fry: Cusp smoothing puffing out adds to that.
  • Pekka Riikonen: JP Zambrano posted an interesting experiment with a ring input. But you have to remember the ring is outside AP1, so it is clamped on input, which produces skews. So interpreting the rings as ACEScg, I could still see distortions due to the hue dependent chroma compression.
  • Alex Fry: It's interesting but it's not clear what a circle in chromaticity space means.
  • Nick Shaw: I did an experiment which was just a proof of principle that you can improve smoothness with an LMT that's a hue qualified M compression in JMh space.
  • Alex Fry: That means you can't reach the corner any more, but it's smoother. Are you changing hues as well?
  • Nick Shaw: I am just compressing M, but I think it pulls stuff out of the region that would have got skewed, so it changes hue as well. It defaults to compressing hue of 250 which is the hue of the AP1 primary, but there are hue center and width controls. It doesn't seem to break any of our sample images.
  • Alex Fry: It presumably isn't specific to our transform.
  • Nick Shaw: It uses the same JMh space, so may be slightly better at preconditioning the data for that, but it could be used with any DRT.
[Nick showed the Fabian Matas nightclub image from the gamut mapping set with the K1S1 and DRT v55]
  • Nick Shaw: That image is so extreme there is blue fringing even with K1S1, and my LMT improves that.
  • Pekka Riikonen: Do you have compress mode enabled?
  • Nick Shaw: I don't currently. But maybe I should, because it doesn't need to match the exact JMh space in the DRT. And I don't have an AP1 clamp.
[Pekka showed a 3D plot of his variation of JP's ring image turning on each stage of the DRT]
  • Pekka Riikonen: The path to white from the chroma compression creates a strange shape. When we add the gamut mapper you can see the shape of the Rec.709 gamut. I assume the discontinuity comes from the fact that we don't touch anything outside the limit. We just clip.
  • Alex Fry: JP, you showed a rendering of your own that was much smoother with the rings.
  • JP Zambrano: It's a mix of a bunch of things to get smooth results, but it doesn't reach the corners, and it isn't invertible. It's based on AGX and Jed's OpenDRT plus some of my own stuff.
  • Christopher Jerome: Pekka, how much control do you have over the direction of the 'hook' with the star image? Blue skewing towards green is the opposite direction from what I would expect. All hue lines I've seen curve in the opposite direction near blue.
  • Alex Fry: Straight lines in xy are curved in JMh and vice versa. The constant hue line we are compressing along curves one way in xy, so compressing back along that bends a straight xy line the other way. And clamping always skews towards the secondary, unless it's perfectly on the primary axis.
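A small numeric illustration of that clamp skew; the values are arbitrary:

```python
import numpy as np

# A value just outside the gamut on the blue side has a negative red
# component. Clamping zeroes red but leaves green alone, so the
# green-to-blue ratio rises and the result lands toward cyan (the
# secondary) rather than on the blue primary.
rgb = np.array([-0.10, 0.05, 1.00])
clamped = np.maximum(rgb, 0.0)  # [0.0, 0.05, 1.0] - skewed toward cyan
```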
  • Christopher Jerome: Could something line a matrix in Jab control that curve in blue?
  • JP Zambrano: I think something may be doubling up. My guess is that because a straight line in xy is not straight in JMh, when it goes out to display it keeps that color. If it's too purple in JMh, which would be blue in xy it stays purple. In my DRT I engineer it to remove that, where here it's keeping it or even making it stronger.
  • Christopher Jerome: I've been experimenting with known hue-consistent gradients from primaries, and then inverting them in the model to see where they land.
  • Alex Fry: The only way to eliminate the kink at the end is to perfectly compress to the boundary.
  • Nick Shaw: I see very little difference from the final clip. It seems to be the gamut compression that is bending the line at the end.
[Nick showed a 3D JMh plot of the blue and yellow star image turning each stage on]
  • Nick Shaw: The chroma compression is putting a slight bend in the line, and then gamut compression bends it right back in. No correction could remove the kink, because anything one side of the line will bend one way, and anything the other side will bend the other.
  • Christopher Jerome: I'll post some examples of what I'm thinking.
  • Alex Fry: Invertibility is not an ideal solution, but it's a practical production reality for things we need to do.
  • Scott Dyer: My CTL implementation is not production ready, but I have something I can let people try soon. I'll make a post. One thing we discussed is the tables are all being generated with D65 white, and some of them should use the input white.
  • Nick Shaw: I think we should remove the variable called refWhite because it's not a good name, as it doesn't tell you what it is. The reference is different for different things. We should use inWhite or limitWhite as appropriate in different places.
  • Scott Dyer: My code passes a white each time.
  • Pekka Riikonen: I'll make a v57.
  • Nick Shaw: I'll wait for v57 before making a new DCTL.

Meeting #144, March 20th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Luke Hellwig
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: This week there have been a few posts on ACES Central about the blue behavior. And we want to discuss the parameters needed to create a set of deliverables.
  • Pekka Riikonen: I did some testing of the effect of the eccentricity factor.
[Pekka showed various images with and without eccentricity in the model]
  • Pekka Riikonen: For most images there is no difference. At the very top end it slightly darkens blues and lightens reds. I suggest leaving it out to keep the transform simpler. It doesn't help the blue issue we see on the color wheel. Moving the blue primary closer to Thomas's coordinates slightly reduces the skew to cyan. But if we move it too far, blues go magenta in highlights. With the star image it helps a little. It's still not as good there as ARRI Reveal. I assume the effect comes from the curvature of the hue lines from the model.
  • Alex Fry: Those should be perceptually consistent hue.
  • Nick Shaw: In the original model the lines are constant hue for colors in the source data set, but by stretching them to work for colors further out have we distorted the hue consistency for normal colors?
  • Alex Fry: Are we creating a skew at the top with our path to white?
  • Pekka Riikonen: Chroma compression is non-linear and only affects M. It doesn't change J, so doesn't preserve chromaticity. Straight lines in chromaticity space don't stay straight.
  • Nick Shaw: In Daniele's video he was talking about the importance of preserving linearity in scene space. In a display rendering we can do whatever looks right.
  • Kevin Wheatley: Moving our LMS primary happens in linear, so it is no different from applying a matrix LMT, like Nick did.
  • Alex Fry: Can we visualize our output in JMh to see if we're adding skew, maybe from the final clamp?
  • Pekka Riikonen: I don't think it's that. The AP1 clamp does introduce skew.
  • Nick Shaw: I think sometimes people are fixated on images looking beautiful when untouched. There is a colorist in the loop to adjust problematic images. Images like blue bar are way outside AP1, and may need something doing to them, but it doesn't need to be in the DRT.
  • Kevin Wheatley: So there are two parts to the problem. One is the blue area that stands out, which we don't understand. Separately some blues skew cyan or purple, and a pre-grade could fix those. If we can fix the blue band by making it wider, the images will be easier to grade.
  • Pekka Riikonen: Björn Ottosson suggested the narrow blue is the result of compressing along constant hue lines. I did once suggest we might have smoothing along the hue axis. But we never looked at that.
  • Alex Fry: Smoother versions don't hit the corners.
  • Nick Shaw: Is it possible to do an LMT that rounds off the corners? It would prevent you hitting them, but you don't have to use it.
  • Alex Fry: A JMh based gamut compressor as an LMT could be useful.
  • Kevin Wheatley: There was a post about bumpy levels with a hue rotated ramp.
  • Nick Shaw: I couldn't replicate what he saw, but I think some of the jagginess came from using LUTs. If I hue rotate in Nuke the ramp goes outside the spectral locus very quickly.
[Nick showed his Nuke version of the hue rotated ramp]
  • Pekka Riikonen: There are still distortions. They are just smoother.
  • Kevin Wheatley: I think the non-linearity is combining with the gamut compression curve to produce these shapes.
  • Pekka Riikonen: Does reach mode affect it?
  • Nick Shaw: It's slightly smoother without reach mode.
  • Kevin Wheatley: Scott has made a list of suggested deliverables.
  • Scott Dyer: I've listed the current ACES ODTs and their Transform IDs. I'm adding a list of the ones we still want for ACES 2.0, and any additional ones. And I'm listing the parameter values to create those. How much of the CAM model are we using for viewing surrounds? Or will we do our own manual gamma? Also are there any cases where the limiting white and creative white are different?
  • Nick Shaw: And should we compensate? Some don't like the dark to dim in ACES 1. Josh Pines said his colorists preferred a straight conversion of encoding.
  • Scott Dyer: Similarly many people want the white to just be that of the display, but we have to give them options for at least D60. Probably the default should be no surround compensation, but the option should be in there.
  • Luke Hellwig: I'm not convinced the surround handling in the CAM handles it correctly. It comes from earlier CAMs than my model.
  • Kevin Wheatley: I have done a test of turning the lights on and off in the room, and I felt you need something to make them match. A gamma and colorfulness adjustment.
  • Scott Dyer: We tried to do experiments and came up with gamma and saturation values. But those got changed in the final version. We don't cover all the permutations in the current ODTs. We only do dark to dim for Rec.709.
  • Kevin Wheatley: People who have dark rooms have someone to generate LUTs. I think the issue is people with lighter rooms and sRGB. I always assumed the limiting white is the creative white, and everything after that is encoding.
  • Scott Dyer: In ACES 1 the SDR tone curve is for 48 nits and then we scale it for Rec.709. We've been working with a peak luminance of 100 nits in the Daniele curve. What about 108 nit Dolby Cinema? Just changing it to 108 isn't right.
  • Kevin Wheatley: We talked about a 200 nit version scaled for 108, halving it like we do for 48.
  • Nick Shaw: At 200 nits mid grey is 11.4 nits, and half that is 5.7, which is lower than the 7.2 the current Dolby Cinema ODT has.
  • Scott Dyer: We just picked 7.2 as a reasonable middle ground. If you want it brighter you can increase the source exposure.
  • Alex Fry: That's Dolby Cinema, but the DCI HDR spec is 300 nits [strictly 299.6]. Do you use 600 or 300 for that?
  • Kevin Wheatley: Where do you transition to not using the doubling/halving? We would need to test.
  • Scott Dyer: In my list I include a Rec.709 encoded as Rec.2100. What other common presets might people need?
  • Nick Shaw: That Rec.709 sim is useful for our testing, but how useful is it in the real world?
  • Alex Fry: I think it's useful in VFX to save switching monitor settings.
  • Scott Dyer: OCIO will handle the encoding part, but do we bake any surround compensation into what we hand off?
  • Kevin Wheatley: My restructured version takes in XYZ and outputs XYZ limited to the virtual display we are targeting. Everything else is just encoding.
  • Alex Fry: When we've tested changing the output surround, the effect is much more severe than the ACES 1 dark to dim.
  • Kevin Wheatley: If you change the surround, you should also change the adapting luminance.
  • Alex Fry: In practice I've always found the dark to dim kind of annoying rather than useful.
  • Nick Shaw: I'd be in favor of defaulting it to off, with a parameter to enable it for those who want it. Luke, is it reasonable to change the dark and average surround coefficients to values closer to the 0.59 of dim to reduce the effect?
  • Luke Hellwig: People have definitely done that before. But I don't know if they even do what you want.
  • Pekka Riikonen: If you change those values the gamut shape changes, so it affects a lot of other things.
  • Alex Fry: It does affect the highlight rendering.
  • Scott Dyer: That suggests the Hellwig parameters should be fixed, because we don't want to change the rendering. So a number of other things that use those values can be pre-calculated.
  • Nick Shaw: If we output to XYZ, with encoding separated, when we're in XYZ we can just apply an optional gamma to Y before handing it off.
  • Kevin Wheatley: An offline discussion we had was about the number of times the value 100 is used with different names.
  • Nick Shaw: And the possibility of not scaling to 100 at all, and working at 1.0 scaling. But that might have a knock-on effect on other parameters.
  • Alex Fry: I think we need a Rec.2020 unlimited version as Rec.2020 displays become more common.
  • Scott Dyer: Does anybody have ideas for better nomenclature than "sim"?
  • Alex Fry: "Sim" is what we used before, so it's familiar. I was comparing v28 and v55 on some production material, and they are very similar for normal images and v55 is better for the edge cases.
  • Pekka Riikonen: I noticed if we changed input and output to dark instead of dim it helps a bit with the blue.

Meeting #143, March 13th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Thomas Berglund
Daniel Brylka
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Alex Fry: The main thing to discuss is Nick's DCTL v55.
  • Nick Shaw: I have updated my pure DCTL implementations to v55. I've updated some check boxes so there is now an AP1 clamp rather than an M clamp. I've also posted a duplicate set with the tags and settings to use in Resolve's ACES Transforms folder, to add them to the built-in IDTs and ODTs. I didn't do D60 sim versions, but people could duplicate them and set the D60 sim check box. There are only 500 and 1000 nit PQ. I didn't do 4000. And the IDT inverse versions are only Rec.709, P3-D65 and PQ1000.
  • Alex Fry: I've found in Resolve using our LUT bake inverse and forward gets slow.
  • Nick Shaw: Could be GPU memory for two LUTs. I wonder how the pure DCTL will do.
  • Kevin Wheatley: I did some experimentation setting model parameters to the default. If you reintroduce the eccentricity function it darkens highly saturated blues, which is something we are after. We may be fighting ourselves by removing it. Using the stock blue primary doesn't work. I also looked at Luke's PhD model. A lot of that is HK mode, and HK mode breaks assumptions our code relies on. J can go above 100. I also did some code refactoring.
  • Nick Shaw: Didn't Luke suggest the eccentricity wasn't needed in our use case because it cancels out?
  • Kevin Wheatley: It definitely made a difference. It affected very saturated blues. I took the ACES still life and increased saturation x10, and the blue bottle changes with eccentricity.
  • Nick Shaw: Our diffuse blues like the Macbeth chart ones are darker than other renderings.
  • Kevin Wheatley: We had some feedback from a VFX house. Not entirely negative. They said the contrast was a bit low, which was our intent. I will compare our tone scale again to my averagely average curve.
  • Scott Dyer: We'll ship a higher contrast LMT.
  • Kevin Wheatley: We should look at Nick's post about options we can drop / lock. I stripped out some obvious ones in my branch. I went for all the "Nick" boundary solves. Variable gamma seems needed for the upper hull, but why don't we do it for the lower hull?
  • Nick Shaw: Does that vary so little it's not worth it?
  • Kevin Wheatley: I found the range of variation is similar.
  • Nick Shaw: The fixed 1.14 we are using for the lower is close to the lowest value in the solve. Why does the solve think it needs values up at 1.23, when 1.14 seems to invert fine?
  • Kevin Wheatley: As Nick said before, why does the cusp smoothing move the cusp in J as well as M?
  • Nick Shaw: I thought that intuitively, but Pekka said he found he needed to move it up and to the right. Is it because the cusp is generally high, so up and to the right approximates an angle that bisects the cusp? Might a variable angle that bisects the cusp be better?
  • Kevin Wheatley: There are two eccentricity functions in the code. The one in the model applied to M (currently removed) and one used in the chroma compression reach. The second one makes no sense to me. Or at least both reach cusps should use it.
  • Nick Shaw: I think that was an attempt to fix something that wasn't right. A fudge factor. Putting eccentricity back into the model would apply it globally.
  • Kevin Wheatley: We have some duplication in tables, and if they are the same we can remove any redundant ones.
  • Nick Shaw: I got rid of one of the reach tables in my DCTL because they are both AP1, so identical.
  • Kevin Wheatley: We have gamutCuspTable, gamutCuspTableReach, cgamutCuspTable, cgamutReachTable, and one or two gamma tables. The two reach tables are identical if we agree they both have to be AP1. If we move the cusp, that should be baked into the table. Or do we even need them unmodified?
  • Nick Shaw: I think the two reach tables interact weirdly if they are not the same.
  • Kevin Wheatley: Six axis compression can go. HK mode can go, because things break with it on. I've removed Bjorn compression, because we clamp to AP1 or recommend people do something themselves to sanitize the data. The LMS matrix we need to keep to tweak.
  • Nick Shaw: Two of the current primaries are the stock ones, and blue is moved.
  • Kevin Wheatley: We haven't really looked at viewing conditions. We may want to play with output conditions for a cinema version. Or rely on a 2x scale factor.
  • Nick Shaw: Luke suggested we should always discount the illuminant.
  • Kevin Wheatley: My code precomputes the achromatic, including discount illuminant, so we can leave the code in because it only happens once. We're not changing the tone scale, but are the parameters right?
  • Nick Shaw: We can lose the Daniele/linear drop-down, because linear is wrong and not the same as turning the tone scale off.
  • Kevin Wheatley: Reach mode clamp off is related to what we do with gamut compression.
  • Nick Shaw: That was an experiment to solve a problem that's now solved by the AP1 clamp.
  • Kevin Wheatley: We can eliminate that.
  • Nick Shaw: Focus distance gain is needed so saturated ramps like the dominant wavelength image go smoothly to white.
  • Kevin Wheatley: Clamp output and soft clamp should maybe be for implementers to do externally.
  • Nick Shaw: That's slightly harder for things like P3-D65 in Rec.2020, but not that hard.
  • Kevin Wheatley: We can leave it in the code for now.
  • Nick Shaw: Soft clamp rounds cube corners and is a problem for a round trip.
  • Kevin Wheatley: Default to off for now.
  • Scott Dyer: There are values called "HDR" next to chroma compression and expansion. What are those?
  • Kevin Wheatley: Those are the factors for compress and expand, applied proportionally with peak luminance to do HDR/SDR matching. There is an unexposed 0.2 value which is a limit on the effect of those. Pekka said without a 4000 nit monitor he can't judge whether the values make sense.
  • Pekka Riikonen: I explain that a bit in my chroma compression document. [note, this document refers to parameter values from v53, not v55]
  • Kevin Wheatley: Daniel asked in the chat what a dim environment for scene values means, because the scene is probably not dim. It's really because the output defaults to dim, so for the tone scale to work as intended the input needs to be the same. Decisions we've made are based on that, rightly or wrongly.
  • Nick Shaw: We also scale diffuse white to 100, which is arbitrary.
  • Kevin Wheatley: It's not "correct", but it's the reference we picked. Shebbe has posted some comments on ACES Central.
  • Nick Shaw: We won't actually brand it as "CAM DRT" on release, will we? It will just be ACES 2.0. It's tweaked from the true CAM.
  • Kevin Wheatley: We are using a CAM as a perceptually uniform working space to manipulate things in, and in a more advanced manner than ACES 1. e.g. the gamut mapper. We aren't using everything a CAM could do for us.
  • Nick Shaw: I think he's referring to the blue "trench" we discussed last week.
  • Kevin Wheatley: I feel adding back eccentricity helps that but doesn't solve it.
  • Pekka Riikonen: I remember trying the eccentricity, and it had a very small effect.
  • Kevin Wheatley: If it helps a bit I felt it was better to have it on.
  • Nick Shaw: The eccentricity factor exposed in the UI is the chroma compression one, not the model one we are discussing.
  • Alex Fry: The AGX comparison is interesting, but I don't think AGX is invertible.
  • Pekka Riikonen: When we compressed more the blue was smoother.
  • Christopher Jerome: I feel the eccentricity is a useful option to have.
  • Kevin Wheatley: It only affects really saturated colors, so doesn't affect the core rendering. So reintroducing it isn't a problem.
  • Nick Shaw: Blue bar has blues outside AP1, and we have accepted we only gracefully render values in AP1.
  • Kevin Wheatley: It's more important how easy it is to grade to the blue you want than how a particular image renders.
  • Pekka Riikonen: Was there a difference with reds?
  • Kevin Wheatley: I didn't see any but I don't test exhaustively. Jeffrey is saying if you hue rotate blue bar it is smooth.
  • Alex Fry: My older Intel MacBook Pro struggles with Nick's DCTL. The LUTs still have a place.
  • Nick Shaw: Hopefully the Resolve team will optimize the final version far more than I have.
[Kevin showed color wheels and the effect of eccentricity]
  • Kevin Wheatley: Moving the blue primary has a larger effect.
  • Nick Shaw: I think the stock blue primary may be inside AP1.
  • Kevin Wheatley: We've currently only moved the blue from the stock one. I think Thomas's moves them all to preserve the shape.
[Pekka showed the various primary coordinates]
  • Kevin Wheatley: Jeffrey said he photographed a piece of lapis with an Ursa Mini 12k in sunlight and it matched well. I don't know what the IDT for that camera is like. Rotating the hue on the still life, the blue changes differently to everything else. But I don't know how Nuke's hue shift node works.
  • Nick Shaw: The Elvis image from the RGC repo certainly comes out blue under v55.
[Nick showed that image]
  • Pekka Riikonen: With ACES 1 that image comes out almost flat blue. So we're much better.
  • Nick Shaw: And better than ACES 1 with the RGC too.
  • Kevin Wheatley: Now Pekka is here we should look at the list of settings again.
  • Scott Dyer: The input gamut only needs to be AP0 for the CTL. The limiting gamut is the target, which may or may not match the output encoding. Reach is AP1 for chroma compression and gamut compression.
  • Pekka Riikonen: The limit for chroma compression is also AP1.
  • Nick Shaw: That table is a cusp which has J and M values, but the reach one only needs M, correct?
  • Pekka Riikonen: cgamutReachTable is at limitJmax, and cgamutCuspTable is the actual AP1 cusp, but we only use the M value from that for normalization in chroma compression.
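A hedged sketch of the two tables' roles as Pekka describes them; the lookup helper and table layout are assumptions:

```python
# cgamutReachTable holds the AP1 boundary M at limitJmax per hue;
# cgamutCuspTable holds the full AP1 cusp (J and M) per hue, of which
# only M is used, as the normalization in chroma compression.
def chroma_compression_norm(M, h, cgamutCuspTable):
    cusp_J, cusp_M = lookup_hue(cgamutCuspTable, h)  # hypothetical lookup
    return M / cusp_M  # cusp J is carried in the table but unused here
```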
  • Nick Shaw: For the reach table we only need one, but we could have a drop down to select something other than AP1 for both.
  • Pekka Riikonen: Do we need that at this stage?
[Kevin went though for Pekka the discussion we had previously about the parameter list]
  • Pekka Riikonen: I think the lower hull gamma solve is wrong, or the constant 1.14 value wouldn't invert.
  • Kevin Wheatley: I'll investigate that. We earlier discussed moving the cusp diagonally instead of just in M.
  • Pekka Riikonen: Because of how the smooth minimum works it would bias the roundness to the longer line below the cusp, which then clips more. Increasing cusp J as well as M biases it back towards the shorter line. But the difference is not visible except removing clipping.
  • Kevin Wheatley: I'll keep working on removing the things we discussed.
[Nick showed his cusp visualization]
  • Pekka Riikonen: I tried referencing that visualization to experiment with horizontal cusp shifting. It might work with variable lower hull gamma.
  • Nick Shaw: What about a move that bisects the cusp angle?
  • Pekka Riikonen: I did try changing it with the cusp, but it didn't work very well.
  • Nick Shaw: How much should we keep tinkering and how much do we accept it is what it is and simplify the code for delivery?
  • Scott Dyer: I was going to say none. We've already passed the deadline.
  • Kevin Wheatley: I don't think people will accept the blue – the narrow range of colors in grading that look visually blue.
  • Pekka Riikonen: Can we do anything about that?
  • Kevin Wheatley: I think it's related to the blue primary. I'm worried the blue may come back to bite us.
  • Pekka Riikonen: Is that blue the same thing Bjorn discussed a long time ago in this thread? It's inherent in all models. Particularly perceptual ones. It's there in Oklab and ZCAM too.
  • Kevin Wheatley: That's because it's a projection of the corner of a cube. If we move the blue primary further out the interior becomes smoother.
  • Pekka Riikonen: I had to move it a long way and you lose highly saturated blues.
[Nick showed grading around blue in Resolve with his DCTL]
  • Nick Shaw: I'm not a colorist, but it doesn't feel to me like it's snapping to blue.

Meeting #142, March 6th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Daniel Brylka
Chris Clark
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We have some follow up from v55 that Pekka posted, and some discussions on ACES Central. And I have an image to show.
  • Alex Fry: There's discussion of how dark blue is rendering. Not sure what the other two being compared are.
  • Nick Shaw: I think they are ARRI Reveal and the DaVinci DRT.
  • Alex Fry: Our skies are definitely darker. It's the push-pull between saturation and brightness. The Frontier image is greener in ours, but the chromaticity is a little up towards green in the source. This is all without the RGC. The red image shows noise without the RGC.
  • Nick Shaw: I think that's like Red Xmas, with a lot of data outside the locus on the yellow side.
[Kevin showed his image of color wheels]
  • Kevin Wheatley: I am comparing ACES v1, plain sRGB, and our new rendering. It seems we collapse blue into a really narrow range. Red is quite narrow too, but blue is worse.
  • Alex Fry: Is that narrow range really correct, and people are used to other renderings collapsing a wide range all onto the blue primary?
  • Kevin Wheatley: That blue varies. It isn't all one blue.
  • Nick Shaw: Is that caused by the position of our blue LMS primary?
  • Pekka Riikonen: All our versions have done this. ZCAM blue was much lighter. I can only change it in Hellwig by varying the achromatic response.
  • Kevin Wheatley: We could change the range of blues that turn to blue, but we could run into out of gamut values with the AP1 clamp.
  • Pekka Riikonen: The only other way to lighten blues is to move the blue primary way out. But I don't think that's correct. And it doesn't affect surface blues like the ColorChecker blues.
[Pekka showed the effect of varying achromatic response and blue primary]
  • Alex Fry: Blue gets better but red gets worse.
  • Pekka Riikonen: I have to move the primary a long way. And it has the effect of bleaching out again.
  • Kevin Wheatley: So with the current model we see what we've always seen, but don't know if it's a critical problem, or just unfamiliar.
  • Alex Fry: There's always been an issue where blues go cyan if they aren't exactly on the blue axis. But what can we do? The blue primary is darker.
  • Kevin Wheatley: I've also looked at what the AP1 clamp does to near AP1 values. There are cases where the rendering with the clamp is 'better', but is it what people expect?
[Kevin showed his image of hue sweeps at different J values with increasing saturation]
  • Kevin Wheatley: The hue lines sometimes appear straighter, which may be misleading. The clamp makes things lighter, which makes sense because negative components rise to zero. I didn't finish my experiment. Maybe I can show more next week.
[Kevin showed rendered spheres at different hues]
  • Kevin Wheatley: These are sRGB primary balls at different intensities. They don't go as red, green and blue as you might expect. It shows the hues track better.
[Alex showed his render of balls overlaid onto a CIE plot]
  • Alex Fry: It shows a similar thing to Kevin's render. It shows how before many things collapsed to the same primary.
  • Alex Forsythe: That's really interesting. The Macbeth balls didn't show what I needed, because those colors are quite well behaved.
  • Kevin Wheatley: It's only on the far out colors where you can show what goes wrong. The way we project into Rec.709 things don't fall where they used to. You can find values that will fall on the primary. Without a lot of grading we won't know if that's good or bad.
  • Nick Shaw: If you have to hit one sweet spot to get the hue you want, might a gradient that goes away from that hue behave unexpectedly?
  • Kevin Wheatley: We need the right kind of test image to evaluate that.
  • Nick Shaw: With synthetic images it's hard to know what they should look like. If we use our own model to make something with perceptually smooth hue it will probably work brilliantly.
[Alex showed a new version of his CIExy balls]

Meeting #141, February 28th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Thomas Berglund
Rod Bogart
Chris Brejon
Daniel Brylka
Chris Clark
Alex Forsythe
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: We hope to lock various things down today. Then we can eventually cut out the bits we're not using.
  • Alex Fry: I have updated the Baselight bakes. I fixed a typo, and it now includes more variants. The one called 600 nit is actually our 500 nit, but that is the closest viewing condition in Baselight. Baselight doesn't need the sim versions, because you can encode any rendering for any viewing condition.
  • Pekka Riikonen: Are we going the "pex1" route?
  • Scott Dyer: v54_pex1 is what I've been showing people.
  • Kevin Wheatley: We've discussed whether smoothing should be included in the gamma finding.
  • Nick Shaw: It makes sense to use the same thing the actual gamut compression uses. I think Alex only left it out because he couldn't get it to work.
  • Alex Fry: My visualization showed odd hull shapes when I used the angled boundary solve.
  • Nick Shaw: I have got a smooth result for the gamma solve using the full boundary search as used in the processing.
  • Pekka Riikonen: Have you tested that with the non-pex1 version? Without compress mode it's quite different.
  • Nick Shaw: No. I thought we were heading for using pex1. With compress mode there is a large flat part on the top of the gamut, and a large gamma value is needed to contain that.
  • Kevin Wheatley: We need to decide between the two versions.
  • Pekka Riikonen: The AP1 clamp can cause the scene side skews we discussed last week.
  • Kevin Wheatley: That's why Nick favored moving that to an LMT to put it in the hands of the user.
  • Alex Fry: Pekka's simplifications mean the data must be within AP1. There are options for getting it into AP1.
  • Nick Shaw: Because our inverse is not 100% perfect, v54_pex1 inverts a Rec.709 cube to 99.9% within AP1, but the red corner inverts to just outside AP1. So if you clamp in the DRT to AP1, you don't get that red corner back in a round trip.
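A hedged sketch of the round-trip check Nick describes, with inverse_output_transform and AP0_to_AP1 as assumed wrapper names:

```python
import numpy as np

# Invert a display referred Rec.709 cube to scene values and report how
# much of it lands inside AP1 (all components non-negative). With an
# internal AP1 clamp, anything that inverts outside AP1 - like the red
# corner - is lost on the round trip.
def fraction_inside_AP1(n=33):
    g = np.linspace(0.0, 1.0, n)
    cube = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
    scene = inverse_output_transform(cube)  # hypothetical wrapper
    ap1 = AP0_to_AP1(scene)                 # hypothetical conversion
    return np.all(ap1 >= 0.0, axis=-1).mean()
```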
  • Pekka Riikonen: We can adjust things for that. I hoped in ACES 2 the RGC could be disabled by default. We handle a wider range without issues. The gamut group's images look better without the RGC. Or we could change the RGC parameters.
  • Nick Shaw: When we developed it we thought the parameters might need updating with ACES 2.
  • Kevin Wheatley: A lot of people bake it in to compensate for less than ideal IDTs. Changing the numbers makes it something different.
  • Pekka Riikonen: With the RGC we lose some benefits of switching off compress mode – darker reds and blues.
  • Nick Shaw: It makes red neons less saturated, but if we give people a clamping LMT as an option they can use that if they want really saturated neons.
  • Kevin Wheatley: I don't think we can change the RGC. But we can provide LMTs that may be based on the RGC. Are the changes pex1 makes desirable? We need to decide, then merge all our changes. Any other areas of concern?
  • Pekka Riikonen: I think my changes make HDR more colorful as John Frith wanted.
  • Kevin Wheatley: Moving to a version with no compress mode is appealing, but I think it went too far. If we can dial it back that's my preference.
  • Nick Shaw: We added compress mode to deal with extreme values that we are now not trying to deal with.
  • Pekka Riikonen: We need to handle them somewhere, because out of gamut values happen in noise, and then you need the RGC or a clamp.
  • Kevin Wheatley: But not the distortion compress mode introduces.
  • Nick Shaw: pex1 aims to match the existing look, other than slightly less HDR desaturation.
  • Pekka Riikonen: It looks quite similar, but there are differences.
  • Alex Fry: I can't see the red corner issue.
  • Nick Shaw: You need to look at BT.1886. The sRGB linear portion hides it.
  • Alex Fry: What happens if we don't have the clamp?
  • Scott Dyer: Red Xmas looks bad without the clamp.
  • Nick Shaw: And better with the clamp than the RGC.
  • Alex Fry: Thomas's spectral Cornell boxes show a big change from purple to blue with the clamp.
  • Nick Shaw: That's near UV, so outside AP1. The clamp will skew it. Like it makes the purple blue screen blue again.
[Alex showed various images and the effect of the RGC and AP1 clamp]
  • Pekka Riikonen: The RGC skews towards primaries too.
  • Kevin Wheatley: Are there values outside AP1 that render better without being clamped?
  • Alex Fry: Grading in non AP1 spaces can create values outside AP1.
  • Nick Shaw: Intuitively leaving the clamp outside the rendering does give the user more control. But I'm coming round to the idea that building it in doesn't cause a problem as people can use other methods to get into AP1 before the clamp, and then the clamp will have no effect. But we reduce the risk of people turning the clamp off and getting bad results.
  • Pekka Riikonen: The clamp doesn't take any options away from the user.
  • Nick Shaw: Are there gradients going from just inside AP1 to just outside that render better with no clamp?
  • Alex Fry: The blue screen goes purple but consistently.
[Alex showed a chromaticity plot of the purple blue screen staying linear with no clamp]
  • Alex Fry: The Rec.2020 spheres seem to have lower J values for red and blue before we do anything. So maybe looking darker is correct.
  • Kevin Wheatley: Do we have any noisy images?
[Alex showed an image with noisy shadows with and without the AP1 clamp]
  • Kevin Wheatley: Really clamping negatives in LMS is what is needed. The non-linearity in LMS made kinks outside the spectral locus.
  • Rod Bogart: What is this clamp you are looking at?
[Alex explained the rationale for the compress mode, and the AP1 clamp as an alternative to it]
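A minimal sketch of such an AP1 clamp in Python with NumPy (illustrative only; the matrix is the published AP0 to AP1 conversion, but the exact placement and form of the clamp in the DRT code may differ):

```python
import numpy as np

# AP0 (ACES2065-1) to AP1 matrix, as published in the ACES specifications
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])
AP1_TO_AP0 = np.linalg.inv(AP0_TO_AP1)

def clamp_to_ap1(aces):
    """Clamp input to the AP1 gamut at the start of the transform:
    convert AP0 -> AP1, clip negative components, convert back."""
    ap1 = np.asarray(aces) @ AP0_TO_AP1.T
    return np.maximum(ap1, 0.0) @ AP1_TO_AP0.T
```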
  • Nick Shaw: Do we leave it up to people or give them a "safer" transform?
  • Rod Bogart: Previous ACES have always clamped to AP1, haven't they? It won't be a new surprise to people.
  • Nick Shaw: Any strong objections to an internal clamp? I think I've come round to it not being a problem.
  • Kevin Wheatley: Ideally I wouldn't but I'm not going to veto it.
  • Christopher Jerome: Are there any images in the spectral locus but outside AP1 that might render differently?
  • Pekka Riikonen: Ideally we would clamp to the spectral locus, but that's not simple. An LMS clamp did not work when I tried it.
  • Alex Fry: Troy made a comment in the chat.
  • Troy Sobotka: I was just saying that the cheeks in red Xmas are outside the destination medium on the high side. The R channel exceeds 100% in the destination.
  • Nick Shaw: The ARRI bar image spans the locus quite well as a test.
[Alex showed the purple light in the background getting a little more saturated without clipping]
  • Kevin Wheatley: It's a compromise we made before, and I think we have to make it again. So we go with pex1, and look at tweaks for the red corner in the round trip.
  • Pekka Riikonen: I'll look into that.
  • Kevin Wheatley: Everything else is independent code optimizations etc. I'll merge my changes, and Pekka can make changes on top of that and we can call that v55. Then we can get feedback from people on that. And in the meantime we cull code that's not being used.
  • Rod Bogart: Thank you all for all your work.

Meeting #140, February 21st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Lars Borg
Daniel Brylka
Jeffrey D
Willem Nagtglas
Carol Payne
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We have Rémi here this week.
  • Remi Achard: I have a NumPy implementation of the DRT, just updated to v54. I'm trying to align it with the Blink version. I've found some issues with the top gamma solve, particularly in HDR. I've been testing the round trip with SDR sRGB and HDR 1000 nit P3-D65 limits. I found that the focusJ is calculated differently for the gamma solve than in the pixel processing.
  • Nick Shaw: In the gamma solve we set focus distance to 10,000 to make the vector horizontal, so focusJ becomes effectively irrelevant. I've always wondered why it needs to be horizontal for the solve to work. In theory it should use the same angled vector as the real compression.
  • Alex Fry: I got a weird shape when I used angled compression in the solve, and setting it to horizontal finds the actual gamma for the boundary.
  • Nick Shaw: It is the true gamma, but the intersection only approximates the true gamma, and for larger exponents the approximation gets worse. Hopefully we can work out what's happening more easily using the Python.
[Nick showed his Nuke visualization of the true hull and gamma approximation]
  • Nick Shaw: Around yellow the gamma gets very large, which isn't really needed except for the weird flattened bit near the cusp. To contain that you end up with a gamma that goes way outside the rest of the shape. Cusp smoothing gets it clear of the flat part anyway, so the large gamma is unnecessary.
  • Kevin Wheatley: We should probably use the cusp smoothing and the rest when finding the gamma for the approximation.
  • Nick Shaw: I wonder if the flat part is related to compress mode. It may be less of a problem with that off as Pekka suggests.
  • Pekka Riikonen: We could use Nick's visualization to fine tune the cusp smoothing factors.
  • Kevin Wheatley: I've been looking at reducing some noise in the lookups. I've changed the solves to use binary search, first in Python then Blink.
[Kevin showed plots of the difference with his new solves]
  • Kevin Wheatley: Mostly they match the old versions but are more precise and less quantized, and the old gamma solve is clipped at a lower value. I used a higher one. They also render twice as fast, mostly because the hue lookups used to be inefficient. My boundary searches march out in large steps until they pass the boundary, then binary search within that last step.
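A minimal sketch of that march-out-and-bisect search (illustrative; the real code searches M at each hue against the gamut boundary test, and the step size and iteration count here are placeholders):

```python
def find_boundary(in_gamut, step=10.0, iters=20):
    # March out from M = 0 in large steps until the boundary is passed...
    lo = 0.0
    while in_gamut(lo + step):
        lo += step
    hi = lo + step
    # ...then binary search within that last step.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_gamut(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy example: recovers the "boundary" of a simple predicate to ~1e-5
print(find_boundary(lambda M: M < 37.3))
```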
  • Nick Shaw: The gamma solve is very sensitive in the samples near the ends, which results in noise. Moving the cusp out, as smoothing does, reduces that.
  • Kevin Wheatley: The code changes are large, because in Blink you can't pass an array as a function parameter. Also once duplication is trimmed they will get smaller. We need to think about the order we merge things in.
  • Pekka Riikonen: We've had a lot of comments about the saturated colors. v53+ improves that. But blue still desaturates weirdly, and there are issues with saturated gradients. Over a year ago I tested without compress mode, which led to the Alternate Compress Mode thread. We need compress mode to handle negative LMS values. These can happen in innocent images, and without compress mode it breaks them. If compress mode causes the blue issues, how can we eliminate it? The simplest is to clamp to AP1. And if I also change the custom primaries, LMS stays positive. This is what I did in v54-pex1. The new primaries except blue are closer to the stock primaries. I found the clamp caused no problems with the inverse, and it improves blue gradients and the SDR HDR match.
[Pekka showed various images with v53 and v54-pex1]
  • Pekka Riikonen: Blue gradients in blue bar and synthetic ramps look better. This is a result of removing compress mode, not the clamp. It also makes the upper hull gamma much straighter. I don't see any reason the gamma should go above 1. I think where we have the spike is an error.
  • Kevin Wheatley: I think the change is in the right direction, but has overshot. In the image of columns of Rec.2020 discs the red and blue hold saturation too long, where the other colors blow out.
  • Pekka Riikonen: We can change that with the projection angle and focus gain.
  • Kevin Wheatley: I'm not sure the hard clamp will be acceptable for everybody. Nick had a plot of some odd skews.
[Nick showed a chromaticity plot where a saturation ramp skewed one way and then back the other with AP1 clamping]
  • Nick Shaw: I am a bit wary of a hard clamp being built into the rendering. I would prefer to do it in a default LMT, so people could turn it off and handle those out of gamut colors a different way if they wanted.
  • Pekka Riikonen: We have to do something with those out of gamut values, that can occur even in normal images in the noise.
  • Kevin Wheatley: Is that because the non-linearity is mirrored and steep at zero?
  • Pekka Riikonen: Without the clamp hue lines go in weird directions.
[Pekka showed a hue lines plot]
  • Pekka Riikonen: The clamp is not about preferred rendering. It makes the blue screen blue, but that's not the reason for it.
  • Scott Dyer: Red Christmas looks wrong without the clamp. Their faces go yellow and there is banding. I've seen negative values blowing up, but that could be the LUT limitations.
  • Pekka Riikonen: The compress mode compresses to inside AP1, and even to inside Rec.709 in places. It might help if it didn't compress as much, but I don't understand the algorithm enough to do that. I find the clamp a good solution, and we have the RGC if people have issues. I never found another way to deal with negatives.
  • Alex Fry: When I was looking at our near Rec.2020 laser projector, with Thomas's spectrally lit Cornell boxes, all the purples collapsed towards blue, but the RGC helped with that. But the RGC is the default for a lot of people and applications. We originally wanted the DRT to handle all images, but now AP1 is the only really valid input. Maybe it's best that constraining to AP1 is done externally. It might be possible to use a CAM based gamut compressor instead of the RGC. I noticed some artifacts with 54-pex1 on the hue wheels image.
  • Nick Shaw: In that image, like the columns one, you can see the red and blue holding saturation.
  • Pekka Riikonen: We can adjust that. I did experiments with compress mode, only using it at the start, but that didn't work.
  • Kevin Wheatley: Perhaps some things could be improved by tweaking the non-linearity near zero. Nick you had some feedback.
  • Nick Shaw: Today I visited John Frith at MPC. We looked at the images on an X300 in a reference viewing environment. They agreed that v53 and 54 addressed their concerns about neons in v52. Their only real concern was skin tones feeling less saturated in HDR than SDR. I feel that too. Pekka, you said there is a control for that.
  • Scott Dyer: I think the skin tone match is better in v54-pex1
  • Pekka Riikonen: I did change the HDR desaturation in that. I would maybe have gone further. I had to change them anyway when turning compress mode off.
  • Kevin Wheatley: We should expose that control so people can find their preferred value.
  • Scott Dyer: With 54-pex1 I have seen some artifacts in blue in a synthetic ramp image.
  • Pekka Riikonen: The blue cusp is not smooth.
  • Kevin Wheatley: What are our next steps? I'll merge my changes.
  • Nick Shaw: Does your new version still have the hues where it fails to find a gamma in HDR?
  • Kevin Wheatley: Yes, but I can clamp them like the current code does.
  • Nick Shaw: The problem is there is no correct answer. We're looking for a gamma value to match a curve which isn't exactly a gamma.
  • Kevin Wheatley: Pekka, where are you applying your AP1 clamp?
  • Pekka Riikonen: Right at the start.
  • Nick Shaw: So it could be external?
  • Pekka Riikonen: But how do you do that in the context of the ACES system? Without it innocent images may break, so the transform is broken. It would have to be enabled by default.
  • Alex Fry: That's not a massive change in behavior. The current RRT breaks with some images without the RGC.
  • Kevin Wheatley: So do we look into alternatives? We want to revise the saturation parameters. Is there anything else except the clamp?
  • Pekka Riikonen: It's pretty finished.
  • Nick Shaw: We need to finesse the gamma solve.
  • Kevin Wheatley: I assume we need to do it with the full path, with cusp smoothing and the angled solve. I'll look into that.
  • Nick Shaw: Pekka did some testing with cusp smoothing.
  • Kevin Wheatley: We should start a thread for people to suggest what options can be removed to simplify the code. I'll also look again at my code for alternative non-linearities.

Meeting #139, February 14th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Alex Forsythe
Christopher Jerome
Willem Nagtglas
Carol Payne
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Alex Fry: The v53 Bake script should be ground truth for last week's version, but I didn't push the right version until later in the week. So if anybody pulled it after the meeting they should pull again. Also the Nuke script for v53 didn't include the changes made in the meeting. So that might have confused people who weren't using the baked LUTs.
  • Pekka Riikonen: Testing the parameters we came up with in the last meeting I saw some artifacts on a color wheel in HDR. I found new parameters to remove them – 1.15 for cusp_to_mid_blend, and 1.4 for focus_distance.
  • Alex Fry: I wonder if that's related to the jumpiness we saw in the top hull gamma.
  • Pekka Riikonen: These parameter values don't affect the look. I also found I could reduce cusp smoothing to 0.19 without affecting the inverse. Optionally I also changed focus_gain_blend to 0.5 and focus_gain to 0.6, which I felt made the path to white smoother on the near infra-red ball. It has no effect on other images.
  • Alex Fry: It looks better but we need more testing. Just changing parameters, do we still call this v53 or make a v54?
  • Nick Shaw: In my DCTL I have no init() so declare all the lookups as constant arrays. I was using debugPrint() in the Blink and copying and pasting and reformatting, which was tedious. So I have ported the init() block to Python which generates the DCTL declarations. I then took the output of the Blink and Python and compared them in a spreadsheet, and the differences are mostly very small, which I put down to single vs double precision. There is most variation in the top hull gamma, which needs more investigation.
  • Kevin Wheatley: I made some plots of the output of Nick's Python. Most plot pretty smoothly. It shows some table entries are wasted on the hue index, or just zeros, which we knew. The reach tables are quantized to integers. The top gamma plot is very noisy and has one very sudden change. I added extra precision to the code, and the plot changes dramatically, so it's not just smoother. So that seems very sensitive to noise and the search values you pick. Nick and I previously discussed smoothing the values, but it's probably better to calculate them better in the first place.
[Kevin showed his plots comparing the original and finer grained top gamma solve]
  • Pekka Riikonen: That sharp transition is at the yellow cusp.
  • Alex Fry: I'm testing very close to the ends, which may be too sensitive. Maybe pull them in and rely on the cusp smoothing.
  • Pekka Riikonen: I think you needed those small samples to capture the yellow.
  • Nick Shaw: The sharp change is not necessarily wrong. The concavity could change suddenly across a cusp.
  • Kevin Wheatley: It seems odd that searching more finely finds a solution sooner, not just more smoothly. We have to ask how long we keep twiddling, and when we make a cutoff. Pekka tweaking the numbers to improve the image is worthwhile, but it would be good to understand why the artifacts are happening.
  • Alex Fry: Does switching to constant top gamma solve it?
  • Pekka Riikonen: No. [In fact it does, but Pekka was looking at the wrong image] I wondered if it was the cusp to mid blend, which gets clamped at 1.0. The lerp of the focusJ now varies with cusp height. The bias varies with the cusp so it doesn't darken yellows and cyans.
[Pekka showed the line in the code where focusJ varies with the cusp]
  • Alex Fry: What happens if you vary the sample points in the upper hull gamma fit?
  • Pekka Riikonen: I have a version where those are parameters. It changes the artifacts, but can't fix them.
  • Kevin Wheatley: We need to compare the true shape and the found gamma fit.
[Alex showed his Nuke plot of the fit gamma and true shape]
  • Nick Shaw: The jitter we see there is why I started looking at smoothing. Around yellow the top surface has a flat part, and the cusp where that becomes curved moves across the top as hue changes. When it gets to the end, the large gamma needed to contain it is suddenly no longer required, creating the sudden drop off.
  • Alex Fry: It's always seemed odd that the flat part is there.
  • Pekka Riikonen: Changing the primaries may affect that. Or compress mode.
  • Kevin Wheatley: The display gamut doesn't change with compress mode, but how we represent it does. I've seen similar odd shapes in other gamut visualizations.
  • Christopher Jerome: It would be interesting to see what real world emissions from a monitor look like, and what causes the abruptness between those hues.
  • Kevin Wheatley: Our approximation isn't great in some places, and because the search for the gamma fit is noisy it needs improvement. But it doesn't change the look, so we can put it out for testing while we work on improving it.
  • Nick Shaw: If the tables are going to be pre-calculated offline we don't need to worry about speed.
  • Kevin Wheatley: That ties to our discussion about matrices, where some are pre-computed and declared, but others are computed in the code.
  • Alex Fry: I merged Nick's PR to compute all the matrices.
  • Kevin Wheatley: It's a separate discussion about pre-computing matrices externally at double precision. Looking at Nick's Python output the reach tables have more quantization.
  • Nick Shaw: Those use a different method to populate them which just steps out in integer steps of M at each hue.
  • Kevin Wheatley: If we are pre-computing we can make the steps smaller. And we could iterate more efficiently with something like a binary search.
  • Alex Fry: It occurred to me we are currently using 100 nits as our SDR reference and scaling to 48 for theatrical, which is the opposite way round to ACES 1. Is that the right way round?
  • Nick Shaw: I think it has to be so you have a continuum from 100 nit SDR into HDR.
  • Kevin Wheatley: It suggests the golden master is the HDR, or at least 100 nits.
  • Nick Shaw: What about Dolby Cinema?
  • Kevin Wheatley: We would have to do 216 nits, and then scale that to 108.
  • Nick Shaw: We need to test that assumption.
  • Kevin Wheatley: Can we get access to a Dolby cinema room? We would need to be sure what we were testing before we asked.
  • Pekka Riikonen: Has anybody done more SDR HDR comparisons?
  • Scott Dyer: It's the same as before, but SDR is better in those reds, as people wanted.
  • Pekka Riikonen: Should we make an announcement on Lift Gamma Gain and ask colorists to test?
  • Nick Shaw: What version?
  • Pekka Riikonen: Version 53 is fine.
  • Alex Fry: What other targets do we need in the LUT bake? What about Rec.2020? And stick to D65?
  • Pekka Riikonen: This raises the issue of user facing parameters.
  • Kevin Wheatley: End users should only have canned transforms. The target parameters should be available for a color scientist/engineer to make a transform for a custom display. The other numbers which affect the look should be buried.
  • Pekka Riikonen: Would white point be a useful parameter for colorists?
  • Nick Shaw: Not if it can't be tracked.
  • Alex Forsythe: In the new repo we envisage having a set of standard output transforms which are just a set of parameters and call the DRT's API.
  • Nick Shaw: Resolve currently has the SSTS Output Transform available as a function in DCTL, and you can make your own custom Output Transform by just feeding it parameters. Because DCTL has no init() our lookups can't be calculated dynamically from parameters in DCTL.
  • Kevin Wheatley: How they actually implement it is out of our control.
  • Alex Forsythe: Scott has been investigating, and there is a way to do the equivalent of the init() in CTL.
  • Scott Dyer: You can declare a constant array and run a loop to populate it. Only the main() function runs per pixel.
  • Kevin Wheatley: Our main task is to refine the values in the top gamma lookup.
  • Alex Fry: I'll take Pekka's new stuff and bake v54 LUTs. All D65?
  • Kevin Wheatley: Yes, for now.
  • Pekka Riikonen: What questions should we ask colorists?
  • Nick Shaw: Mainly how does HDR and SDR compare and how easy is it to grade through? Can you easily get where you want?

Meeting #138, February 7th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Jeffrey D
Alex Forsythe
Christopher Jerome
Willem Nagtglas
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: There have been some discussions about the inverse and which gamma to use when. And Pekka has some updates.
  • Pekka Riikonen: Alex merged v53pex4 with the partially analytic inverse that I showed last week. Then I sent another PR which made some changes to keep the inverse to positive ACEScct values. 
  • Alex Fry: I've merged that as v53.
  • Pekka Riikonen: I made two changes. I had changed pex3 to use the lower hull gamma for the reach gamut, but now I've changed it back to use the model gamma. We have never investigated the accuracy of the reach mode to AP1. I changed the cusp smoothing scale factors. When we have a locked version I plan to try to find optimal values for all parameters, to minimize forward clipping but keep the inverse in AP1.
  • Kevin Wheatley: As you and Nick discussed, smaller gamma values may not reach out quite to AP1, but that means it won't invert outside it.
  • Nick Shaw: So we compress from just inside the reach gamut to just outside the target gamut.
  • Kevin Wheatley: There was another discussion about where the worst errors we saw last week came from. Nick and I both plotted the positions of round-trip errors above a threshold in the BT.1886 display gamut cube.
[Kevin showed 3D plots of the positions of the largest errors]
  • Kevin Wheatley: The largest errors occur along planes where one channel is zero.
  • Nick Shaw: I posted printouts of values at each step through the inverse then forward transform for the color with the largest error. The values invert back to the same values as far as output XYZ, and then a small error is noticeable when it goes back to linear RGB, and that is amplified by the BT.1886 inverse gamma to become ~4 10-bit code values. Everything we do in JMh seems to invert accurately. It's the final XYZ to RGB that introduces the error. Nuke's built-in ColorSpace transform does not accurately round trip RGB to XYZ to RGB. OCIO too. I have noticed that the XYZ to limit matrix is calculated from primaries, but the XYZ to output one is declared to 10 decimal places. Could we be using mismatched pairs of matrices? But I know matrix transforms with small values can have precision issues. Is it affected by doing it at 100 scale, not 1.0?
  • Kevin Wheatley: I investigated gamma followed by inverse gamma at different precisions.
[Kevin showed error distribution plots]
  • Kevin Wheatley: I did forward then inverse and inverse then forward. The errors were smaller in one direction. There's a clear pattern that repeats no matter what precision you use; just the magnitudes of the errors are different. It seems that avoiding scaling when you don't have to may improve the results. I did it in Python with NumPy. A GPU implementation is likely to be worse. No errors were as large as we see, but we're combining the gamma with matrices and other things.
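A minimal NumPy sketch in the spirit of that experiment, assuming a plain 2.4 power for BT.1886 (the exact curves and scaling in Kevin's tests may differ):

```python
import numpy as np

x64 = np.linspace(0.0, 1.0, 1024, dtype=np.float64)  # display code values

for dtype in (np.float32, np.float64):
    x = x64.astype(dtype)
    # EOTF then inverse EOTF (2.4 power, ideal-black BT.1886)
    y = np.power(np.power(x, dtype(2.4)), dtype(1.0 / 2.4))
    err = np.abs(y.astype(np.float64) - x64) * 1023.0  # error in 10-bit CVs
    print(dtype.__name__, "max round-trip error: %.5f CV" % err.max())
```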
  • Christopher Jerome: Nick, was your test sRGB or gamma?
  • Nick Shaw: The large errors are from the inverse gamma of BT.1886. The linear part of sRGB hides the errors because they are small in linear.
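A quick illustration of that amplification near black, assuming a pure 1/2.4 power for the BT.1886 inverse EOTF and the standard 12.92 slope of sRGB's linear toe:

```python
eps = 1e-6  # a small error in linear light

# The 1/2.4 power has infinite slope at zero, so a tiny linear error
# becomes a visible code value difference...
print((eps ** (1.0 / 2.4)) * 1023.0)  # ~3.2 10-bit code values

# ...while sRGB's linear segment only multiplies it by 12.92.
print((12.92 * eps) * 1023.0)         # ~0.013 code values
```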
  • Alex Fry: Seems we should calculate our matrices from scratch in the init(), not just declare them.
  • Kevin Wheatley: So has anyone looked at renderings with Pekka's latest? Are we happy with the partially analytic inverse? Is the complexity worth it?
  • Alex Fry: I feel it does flip the brightness relationship between some color pairs as you go up the exposure. It's a subjective tradeoff of color for brightness.
  • Nick Shaw: Somebody I was speaking to today commented on the highlight saturation difference between HDR and SDR in v52. So I'd like to give them v53. The A/B we are doing is not a real world scenario for an end viewer. In production people will compare them.
  • Pekka Riikonen: v53 is the darkest we've had. Changing focus distance to 1.4 will get close to v28.
[Alex showed his original v53 compared to Pekka's update with the spectrally lit Cornell box image]
  • Alex Fry: The relative brightness of red and yellow is flipped in the highest exposure.
  • Pekka Riikonen: You can change the focus distance and also lower the cusp to mid blend. We need to get user comments now with the darker mapping.
  • Nick Shaw: The lighting in this image is spectral, so mostly outside AP1, and we only reach to there.
  • Alex Forsythe: It may be worth looking at real neon lights.
[Alex brought up a Hollywood Boulevard neon sign image]
  • Alex Fry: Maintaining saturation does create a mismatch with the amount of light it appears to be throwing into the scene. I'll bake out new LUTs for v53 with the tweaked parameters.
  • Pekka Riikonen: So are we happy with the partial analytic inverse?
  • Alex Fry: If most of the errors are coming from the XYZ to RGB conversion, we need to check what errors don't come from that.
  • Kevin Wheatley: We should take RGB conversion out of the equation for testing, because we can't control the encoding step in people's implementations.
  • Pekka Riikonen: It's worth emphasizing we have a great inverse compared to ACES 1.
  • Nick Shaw: The non analytic part of the inverse didn't seem to be contributing to the errors. Those from elsewhere were much bigger.
  • Alex Fry: Let's look at an older version to compare.
  • Nick Shaw: The person I spoke to today said they slightly preferred HDR skin tones in v31, as they had more saturation. But older renderings didn't have a full inverse. It's much easier to get a pleasing rendering if you don't have to fill the cube and have an inverse.
  • Alex Fry: People who have been using v28 haven't had issues with HDR vs SDR. They aren't getting surprised when switching to HDR. They complain about not being able to get to the red corner.
  • Pekka Riikonen: We have control over the desaturation in the chroma compressor, so we can change it. I did lower it in 53. I feel the transform is quite neutral now, so you can grade it to where you want, as long as SDR and HDR match.
  • Nick Shaw: Christian has commented in the chat that they compared SDR and HDR with v28 and found a good match. They walked between the HDR room and SDR room, refreshing their eyes in between, which is what you should do. Do they 'feel' the same? Not toggling the same monitor.
  • Alex Fry: If you look at HDR first, SDR always looks trash when you switch to it. Looking at the locus ramps image, v28 feels like brightness goes up-down-up in some places.
  • Pekka Riikonen: That's a reason I now tie the cusp to mid blend to the actual cusp height. The LUT stress test image shows a dark band in blue.
  • Kevin Wheatley: That blue is where the blue LMS primary lives, and a consequence of where blue is in general.
  • Pekka Riikonen: I changed the blue primary in v53 after Nick commented that blue lights looked too dark. But maybe we could go back now we backed off the parameters. You would have to move the blue primary very negative to make that blue lighter.
  • Nick Shaw: The person I spoke to commented that v52 has fringing on the blue in the CG light saber image in HDR.
  • Pekka Riikonen: That started when we introduced reach mode.
  • Alex Fry: The blue is very bright in that image.
  • Pekka Riikonen: I think v31 had a bug, and didn't desaturate at the top end.
  • Nick Shaw: Pekka, is the value you set for the desaturation parameter your judgement of a good HDR / SDR match on your setup?
  • Pekka Riikonen: Yes. It affects HDR and SDR in v53. Try it yourselves.
  • Kevin Wheatley: Any artifacts are worse than subjective saturation, which can be graded.
  • Pekka Riikonen: The only artifact I know of is the Rec.2020 HDR. The M clamp fixed that but messed up the inverse.
  • Nick Shaw: To be clear, you mean limiting to Rec.2020 for a true Rec.2020 display. Arguably AP1 is so close to Rec.2020 you could turn the gamut mapper off and just have a small amount of clipping.
  • Alex Fry: It would need an algorithm to make it gradually turn off, for displays that approach Rec.2020. We have a laser projector, so I can try to look at it on that.

Meeting #137, January 31st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Christopher Jerome
Zach Lewis
Jeffrey D Mathias
Willem Nagtglas
Carol Payne
Pekka Riikonen
Troy Sobotka
Doug Walker

Meeting Notes

  • Kevin Wheatley: This week we have Doug Walker, Carol Payne and Remi Achard here representing OCIO. I started a discussion in an OCIO TSC meeting a few weeks ago. Hopefully Pekka can address the concerns Doug had.
  • Pekka Riikonen: Last week Alex introduced the hue dependent lower hull gamma. But I found it hardly varies at all. I found 1.157 worked well. I've been trying to make the experimental mapper I showed a few weeks ago invertible. Before it needed 8 rounds. With the current mapper, if we make the compression steeper it holds color up to limitJmax then clips to white. With my experimental version, as J goes to white M goes to zero. I've now made a version which uses the standard mapper up to a threshold, so it has an analytic inverse, and above the threshold (initially I tried the cusp, but now it's a blend between the cusp and limitJmax) it gains up slope_gain to make the compression more horizontal. Using the cusp to determine the threshold helps with cyans and yellows that were going darker. I'm now using the lower hull gamma rather than the model gamma everywhere. This makes it more scene than display referred. I adjusted the blue primary because Nick commented blues got too dark. Now we have a pretty perfect round trip, comparable to the current mapper. This post shows the difference between the mappers. It's now a 2 round inverse. The first round gets an approximation of the original J, and the second round uses that approximation in the inverse. The difference to the analytic version is very small. This Desmos shows the gain applied to the slope, and the controls for it.
  • Nick Shaw: To be clear, the parameters are for us to fine tune our constants, not to be exposed to end users.
[Pekka showed a diff of the analytic inverse and the two round approximation]
  • Nick Shaw: The analytic one is still not perfect due to precision issues. 1 or 2 10-bit code values after a round trip. And it looks like the approximation has similar precision.
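The two round structure Pekka describes might look something like the following skeleton (hypothetical; `inverse_step` stands in for the real per-pixel inverse, which needs the unknown pre-mapping J):

```python
def inverse_two_rounds(J_disp, M_disp, inverse_step):
    # Round 1: the exact inverse depends on the original (pre-mapping) J,
    # which is unknown, so use the display-side J as a first guess.
    J1, M1 = inverse_step(J_disp, M_disp, J_guess=J_disp)
    # Round 2: repeat the inverse using the round-1 estimate of J.
    return inverse_step(J_disp, M_disp, J_guess=J1)
```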
  • Kevin Wheatley: Do Doug or Carol want to ask anything?
  • Doug Walker: My concerns come from the fact that in ACES 1 we originally had a fully invertible model, and then near the end colorist feedback about the look meant we added some additional steps; we didn't discover until after release that it was no longer invertible. We had no automated unit tests which would have discovered that. On the listening tour one of the top 3 concerns that went into the v2 requirements was an analytic inverse. If the two step solve gets an inverse to near single float precision that's fine. The 1 part in 1000 errors I'm seeing I think are significant.
  • Kevin Wheatley: We never aimed for float precision. We aimed for 10-bit gamma corrected display referred precision for a round trip.
  • Doug Walker: It seems the inverse is not searching for the solution in the way I was worried about.
  • Carol Payne: The results Pekka is showing look good. But how much testing are you doing on a range of images? How are you confirming 2 rounds are enough?
  • Pekka Riikonen: I'm just looking at a round trip of a display unit cube with a CMS pattern. And I'm comparing forward direction in SDR and HDR.
  • Alex Fry: What proportion of the colorists were concerned about holding saturation? It trades intensity for color. I feel the neons by holding color break their relationship with the light they throw into the scene. In Thomas's spectral sweep of the Cornell boxes, the bright / dark relationship to surrounding colors flips round. Maybe we need to back it off a bit.
  • Pekka Riikonen: For me it's about SDR HDR match. Or we have to desaturate HDR. I also think the old mapper produces a blocky color which this can fix, even if we don't use it to hold saturation. Or horizontal mapping which desaturates very quickly.
  • Kevin Wheatley: Nick also looked at smoothing the gamma calculation. He saw some jitter in the gamma around hues. That might also help keep the table size down.
  • Pekka Riikonen: We also talked about optimizing the tables to make them evenly spaced.
  • Doug Walker: OCIO originally used a LUT implementation, but OCIOv2 uses an analytic implementation on the GPU for ACES. Will the same be possible for ACES 2?
  • Nick Shaw: I have a DCTL implementation of v52 which I've had running 4Kp24 in real time. It's simpler than the Blink because it only includes the code paths for the current selected parameters, and it has pre-baked lookups which in Blink are re-baked in the init() when you change a parameter. DCTL has no init() but I imagine OCIO may similarly pre-bake the lookups for the standard targets.
  • Doug Walker: OCIO has a set of viewing transforms and a set of displays, and you can use the same viewing transform with different displays. E.g. Rec.709 and sRGB use the same transform.
  • Nick Shaw: Our transform comes out to XYZ. So encoding could be separated in the same way.
  • Kevin Wheatley: We had that in mind. In OCIO would we expose any of the component parts as OCIO ops?
  • Carol Payne: If we componentize it the need to be analytical is more important. Initially we need the main targets for ACES 2.0, and later we can add more options.
  • Kevin Wheatley: We want it parameterized, so people know how to make a transform for any new display that comes along.
  • Scott Dyer: The CTL we ship will target standard outputs, but the algorithm is parameterized and we'll document how to create renderings for new targets.
  • Doug Walker: Looking at the discussion about highlight rendering, I noticed the mapper goes horizontal at max J. In other gamut mapping algorithms I've used, the mapping still has some direction at the maximum. If the point used in the quadratic solve was above max J, there would still be an angle at max J. It might solve the saturation issue.
  • Nick Shaw: I think for the kind of rendering colorists want the angle needs to change as M gets further out, and my quadratic solve has the same slope for any point along the compression line, which is what makes it invertible.
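To illustrate why a quadratic slope keeps the inverse closed form (a generic sketch; the actual slope function and root selection in the DRT differ):

```python
import math

def solve_intersect_J(J_s, M_s, a, b, c):
    """The compression line through the source (M_s, J_s) meets the
    achromatic axis at intersectJ, with a slope that is itself quadratic
    in intersectJ: slope(J) = a*J**2 + b*J + c. Substituting into
    J_s = intersectJ + slope(intersectJ) * M_s gives a quadratic in
    intersectJ, solved by the formula."""
    A = a * M_s
    B = b * M_s + 1.0
    C = c * M_s - J_s
    if A == 0.0:
        return -C / B
    disc = B * B - 4.0 * A * C
    return (-B + math.sqrt(disc)) / (2.0 * A)  # branch choice is an assumption
```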
  • Doug Walker: But if you keep that algorithm, and move J max above display peak.
  • Kevin Wheatley: I think being horizontal at the top was Nick replicating the way it worked before.
  • Nick Shaw: Because the path to white is done in the gamut mapper, I think if it wasn't horizontal, things wouldn't blow out to white.
  • Alex Fry: I remember if you kept it pointing down towards the focus you got weirdly saturated highlights.
  • Kevin Wheatley: We need to merge these variations and look at Doug's suggestion.

Meeting #136, January 24th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Alex Forsythe
Luke Hellwig
Jeffrey Mathias
Willem Nagtglas
Carol Payne
Pekka Riikonen
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: We have some feedback from OCIO, and Alex has lower gamut gamma finder updates.
  • Alex Fry: I've cleaned up the top gamma code and duplicated it for the bottom gamma, so we use a variable lower gamma for the hull approximation rather than a fixed value. We were using 1.145; now I see values ranging from 1.6 to 2. They are all in a narrow range, so we could narrow the search range and increase the precision. We also had an issue with absolute white not passing through the inverse, and I found that changing a check in the inverse gamut compression to test for a very small value rather than zero fixes it. Pekka's noticed some artifacts, and pointed out that a fixed but slightly large value for the lower gamma has the same effect as the variable one, and is helped by cusp smoothing, which puffs everything out, so we don't need to be that precise.
[Pekka showed examples of the artifacts with v53]
  • Pekka Riikonen: With gamma up you can see clipping and skews, particularly in the noise floor.
  • Nick Shaw: Reach mode clamp and soft clipping are off in v53, but on in v52. Does turning them on help?
  • Pekka Riikonen: It makes a minimal difference. I could get a perfect inverse with a fixed 1.15 lower gamma.
  • Kevin Wheatley: So is the search not finding the right values?
  • Pekka Riikonen: I tried all the things Alex mentioned – smaller range, higher precision. No difference. The values are higher than 1.15. It's quite linear near the bottom and the gamma is too large, so there's clipping. It's more critical for the lower hull gamma because of noise.
  • Nick Shaw: The search has smooth cusps off, so it finds a larger gamma value than is needed once that is on.
  • Pekka Riikonen: That doesn't help either. I think maybe the gamma is not as good an approximation as we think. Particularly at the lower end.
  • Kevin Wheatley: Could we flip the curve so the zero reference is at the cusp, and smoothing would fill in missing bits?
  • Nick Shaw: That is a possibility for the top part, but seems wrong for the bottom, because the model does use a gamma referenced to zero. We are looking at artifacts in a synthetic image, but I can see them in reds in a real image too.
[Nick showed the difference between v52 and v53 on the red dress in frame 51 of the sample set]
  • Alex Fry: I'm struggling to visualize the approximation vs the real hull. There is always an offset.
  • Nick Shaw: Do you think it's a visualization error, or is there an error in the approximation?
[Alex showed his visualization, and how it worked, and eventually fixed it so it showed the approximation overlaid on the real hull]
  • Kevin Wheatley: The OCIO TSC met yesterday, and discussed requirements for implementation. The iterative inverse created concern. So is the colorist feedback, and their desired look, more important than an analytic inverse? It is a bounded iteration, so there is a cost implication and an accuracy one.
  • Nick Shaw: I think Doug Walker may have misunderstood the iterative inverse, and thought it was unbounded.
  • Kevin Wheatley: I didn't go into detail, but pointed out to him that things are still in flux, and it was only last week we found that a new algorithm would be needed to address the colorist feedback. We need to be mindful of the inverse.
  • Carol Payne: Doug's point was that the priorities for a VFX house may be different to those of a colorist. Colorists need an inverse less often.
  • Nick Shaw: When you need an inverse you are probably stacking up multiple operations, so you want something as computationally inexpensive as possible.
  • Carol Payne: Because of how OCIO is architected it would be easier if it were computationally invertible. We could deal with iteration, but would prefer not to.
  • Nick Shaw: Pekka's proof of concept didn't use a loop. It was the same line six times in a row.
  • Kevin Wheatley: The precision isn't guaranteed and will vary for different input. Six might not be enough for e.g. P3.
  • Pekka Riikonen: The other option is to replace the quadratic with something else. The first chroma compression used a cubic curve which was invertible.
  • Nick Shaw: I'm not using a quadratic curve directly, so that's not what needs inverting. The equation for slope is a quadratic in intersectJ, and the fact that quadratics have a formula solution means we can find the intersectJ whose slope passes through the source values. Modifying the slope function stops it being trivially soluble. I experimented this week with a hack to add a modification in the middle of the DRT to compress neons. It's needed only in SDR. I take the length of the vector from the origin to (M, J) and apply a powerP compression to that, so it moves way-out values diagonally towards the origin. It's only in my Rec.709 DCTL for now.
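A rough sketch of that diagonal compression, using a power(p) curve of the same family as the RGC's (all parameter values here are placeholders, not Nick's, and J and M are assumed normalized):

```python
import math

def powerp(d, threshold, limit, power):
    # Power(p) compression: identity up to `threshold`, then a roll-off
    # that maps `limit` to 1.0 (same family of curve as the Reference
    # Gamut Compressor).
    if d < threshold:
        return d
    s = (limit - threshold) / math.pow(
        math.pow((1.0 - threshold) / (limit - threshold), -power) - 1.0,
        1.0 / power)
    return threshold + (d - threshold) / math.pow(
        1.0 + math.pow((d - threshold) / s, power), 1.0 / power)

def compress_neon(J, M, threshold=0.9, limit=2.0, power=1.2):
    # Compress the length of the (M, J) vector, pulling far-out values
    # diagonally towards the origin.
    r = math.hypot(M, J)
    if r == 0.0:
        return J, M
    k = powerp(r, threshold, limit, power) / r
    return J * k, M * k
```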
[Nick showed his DCTL and the effect of his neon compression]
  • Nick Shaw: It's subtle, but if I change the parameters to make it more obvious then the light saber image doesn't blow out to white. They look like green and red sticks!
  • Pekka Riikonen: The light sabers show the issue with the path to white. You want to maintain saturation, but at some point go to white. In my experimental version I found values that darkened neons but kept the light sabers clearly lights.
  • Kevin Wheatley: Is your neon compression similar to adjusting the chroma compression?
  • Pekka Riikonen: Chroma compression is lightness preserving. It only compresses M.
  • Nick Shaw: Mine compresses diagonally towards the origin.
  • Pekka Riikonen: In theory something like this could be added to the chroma compression, and the gamut mapper could then be lightness preserving.
  • Nick Shaw: We don't have time for too many experiments. My neon compression was just a very simple look at taking stuff that is way out and bringing it in. You could do something more subtle. Maybe modulate the compression with the angle of the diagonal. It would be interesting to look at what the JM values of the neons people want to preserve actually are.
  • Pekka Riikonen: Thomas's Cornell box image is useful for showing the issue. It's only reds and blues that need darkening.
  • Nick Shaw: My patch happens after the tone map, and I don't think you want to apply it in HDR. It would limit the neons in HDR. Could you make an SDR only LMT out of it?
  • Kevin Wheatley: Driving it by peak white could make it only really affect SDR. Maybe take account of the gamut too. That's one option. Pekka did you do any more on your proposal?
  • Pekka Riikonen: I don't have anything new. That's a bigger project. We'd have to change a lot of parameters, because v28 was quite different. We could make a version of the current mapper without the quadratic.
  • Nick Shaw: Your plot of the modified slope curve showed slope vs intersectJ, but you don't have intersectJ until you solve for it. And if you modify the slope equation you can't solve it. I'm not quite sure why the quadratic approach breaks down when you make it too steep. I'll try to investigate what's happening there.
  • Kevin Wheatley: We need to figure out how to preserve saturation or at least understand the saturation behavior.
  • Pekka Riikonen: What is the deadline?
  • Kevin Wheatley: We need to get something to OCIO in 3-4 weeks.
  • Nick Shaw: My DCTL may be easier for them to look at, because it only includes a path for the currently selected options.
  • Kevin Wheatley: The more we can document and simplify, the more time we have. OCIO has to produce a 2.4 version by August with all features for the next year's VFX platform, and then 2.4.1 in September to fix bugs.
  • Pekka Riikonen: The only real problem we have is the saturation. We don't have an inverse problem.
  • Nick Shaw: It's only the gamut mapper that is the concern. The overall rendering doesn't need to change. The question is whether we have time for more experiments if we have to deliver something in 3-4 weeks.
  • Kevin Wheatley: The safe option is the current version with bottom gamma tweaked. Then we still have to cull un-needed code, make CTL and document.
  • Nick Shaw: Jeffrey is saying in the chat that saturation is more critical in HDR. But in fact the HDR has plenty of saturation. It's the desaturation of neons in the SDR that makes it look different to the HDR.
  • Pekka Riikonen: Since the reach mapper the HDR is more saturated.
[Alex showed the now working hue slice plot of the real and approximate hull]
  • Alex Fry: If I use a large slope_gain, so that the line is horizontal, the search finds a closer match to the true hull shape. The values now all come out 1.15-1.17.
  • Pekka Riikonen: I have always thought the lower hull gamma was the model gamma, plus a little bit, for whatever reason.
  • Kevin Wheatley: The red top surface is concave, and I wonder if that contributes to the saturation problem.
  • Pekka Riikonen: I think so.
  • Alex Fry: I'll push a version which uses 10000 as slope_gain when finding the gammas. I'll render out an animation of the gamut approximation plot.
  • Pekka Riikonen: You can see the issue with the blue on the plot – how dark it is below the cusp and how light above it.
  • Alex Fry: At yellow it has an odd s-shaped top.
  • Kevin Wheatley: And there's a cusp that looks like clipping.
  • Nick Shaw: It might be interesting to look at the JMh values at that cusp to see what it is and what may be happening there.

Meeting #135, January 17th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: We have updates from Alex, Pekka and Nick, and I have some things to bring up.
  • Alex Fry: I've worked on multiple sample points for the top gamma solve. I sample at 0.1 and 0.9 as well as half way up.
[Alex showed a hue slice plot of the real boundary and the solved gamma approximation]
  • Alex Fry: The new gamma ends up puffed out a little more than the last one. It clears up most of the issues we saw in v52.
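One way such a multi-sample solve could work, under an illustrative parameterization of the upper hull (the real solve runs the actual boundary search machinery; this only shows the fitting step):

```python
import math

def fit_upper_gamma(samples, J_cusp, M_cusp, J_max):
    """Assume the upper hull between cusp and white is modeled as
    M(J) = M_cusp * ((J_max - J) / (J_max - J_cusp)) ** (1 / gamma).
    Each boundary sample (J_i, M_i) implies its own gamma; taking the
    largest keeps the approximation outside all of the samples."""
    gammas = []
    for J_i, M_i in samples:
        u = (J_max - J_i) / (J_max - J_cusp)  # 1 - fractional height
        gammas.append(math.log(u) / math.log(M_i / M_cusp))
    return max(gammas)

# e.g. hypothetical boundary samples at 0.1, 0.5 and 0.9 of the way to white
print(fit_upper_gamma([(46.0, 55.0), (70.0, 40.0), (94.0, 15.0)],
                      J_cusp=40.0, M_cusp=60.0, J_max=100.0))
```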
  • Nick Shaw: I notice in v52 the gamut compressor makes peak white go black.
  • Alex Fry: That still happens. Most of a cube round-trips, except a little bit down in purple, which seems to be caused by the reach mode clamp. Looking at a top view the round-trip is not too bad.
  • Nick Shaw: I suggest round-tripping a BT.1886 EOTF rather than sRGB. The linear portion of sRGB can hide a lot.
  • Alex Fry: Then we're losing more than I thought. It's less if I turn soft clip off.
  • Pekka Riikonen: Those aren't necessarily the final soft-clip values. They could be smaller. It's interesting that the reach clamp causes round trip problems, because it should be exactly at the reach boundary.
  • Nick Shaw: I've noticed that the reach clamp causes ramps to be bent round at the boundary, which can sometimes then be pulled in-gamut by the gamut compressor.
  • Pekka Riikonen: How well does the magenta approximation now line up to the real gamut?
[Alex showed the hue slice at magenta]
  • Kevin Wheatley: Maybe we need more samples. The array will probably be built offline for a given gamut in real implementations. So more samples isn't a big issue.
  • Alex Fry: With top gamma disabled the peak white doesn't disappear.
  • Kevin Wheatley: Obviously a maths error that we can fix.
  • Alex Fry: If I change the sample points maybe we can catch the bit that's missing around yellow.
  • Nick Shaw: I am working on updating my DCTLs to v52 as shown last week. I think I've added the new stuff, but it doesn't quite seem to match the Blink. I'll push what I have so people can try it out, and maybe somebody will see what I've done wrong. With the caveat that it's not yet fully working.
  • Pekka Riikonen: I posted about an experimental version I made. It uses the same gamut mapper as v52, but with an extra step to create a smooth path to white. It's just a proof of concept. I also made a post about the issues with the current mapper if you try to make the compression steeper to retain highlight saturation. It shows focus distance of 1.0 on ramps in various gamuts. The lightness mapping function which controls the projection angle of the compression doesn't approach display white or black smoothly. It will just clip if the projection is steep enough, so it just goes suddenly horizontal at display white. My experiment added an ease-in to the slope function, but that means it is only invertible with iteration. I see three options: 1) keep the current mapper and find the steepest value with smooth gradients; 2) make a reach version of the old mapper from before the quadratic approach, and accept the inverse has to be iterative; 3) add a separate path to white stage after the gamut mapper.
  • Nick Shaw: Shouldn't that be before the gamut mapper so the 'rendered image' has a path to white?
  • Alex Fry: Much as I hate the idea of rejigging a major part at this stage, the examples in your post are hard to argue with.
  • Pekka Riikonen: The ramps showing clipping are very steep projection to make the point.
  • Kevin Wheatley: You're talking about the steepness. Would changing the compression curve help?
  • Pekka Riikonen: Even without reach mode it's still a problem, so I don't think a different curve would help.
  • Kevin Wheatley: We should look at what the pixel values people have a problem with are, and where they want them pushed to. Looking at the hue slice plots, the only way to get significantly more M is to go down in brightness. Using HK mode to calculate the boundary would mean saturated colors would be perceived as brighter than they look on a graph. If we go too steep we may end up with the effects people don't like in other renderings.
[Pekka showed a Desmos plot of his modification of Nick's slope curve]
  • Pekka Riikonen: This was a useful adjustment to set the curve to what looked good.
  • Nick Shaw: Modifying the curve like that will break the intersect J solve because that is based on assuming the unmodified slope. So we'll be using the wrong intersectJ in the rest of the mapper. I don't know how large or important that error will be.
  • Pekka Riikonen: This is a proof of concept. I'm not suggesting using it, but rather making a reach mode version of the original mapper, and accept the approximation in the inverse.
  • Nick Shaw: If you do that we should ditch the whole quadratic solve, because its purpose is to create an analytic inverse, which we won't have any more.
  • Kevin Wheatley: How much feedback do we have wanting the darker mapping?
  • Scott Dyer: That's been the main consistent piece of feedback.
  • Pekka Riikonen: My proof of concept version has similar saturation to v35. Originally I had a three round inverse, which appeared to work with a linear EOTF, and after Nick suggested BT.1886, I found six rounds worked for that. But the darker mapping shows up that the boundary approximation is not as good as we thought.
  • Kevin Wheatley: We could try more samples around the cusp to see if that helps. And does the lower part need to be more complex like the top?
  • Alex Fry: I've just tried moving my sample points, and that fixed the issue around yellow. Increasing lower hull gamma fixed the blue one, as long as reach clamp is off. But a gamma lookup for the bottom makes sense.
  • Kevin Wheatley: 360 samples is an arbitrary number, just because of degrees. We should try bigger tables.
  • Alex Fry: Matthias's uneven approach explicitly stores the corner points.
  • Kevin Wheatley: I believe so.
  • Pekka Riikonen: I looked at a BT.1886 unit cube round trip with CAM_DRT_v052_pex_gc with six rounds and cusp smoothing 0.2, and it does miss a couple of small slices. But so does v52. Ten rounds should be enough. But steeper projections show up inaccuracies in the gamut approximation.
  • Kevin Wheatley: I was going to suggest we start tidying up for consistent code formatting. But actually we're not ready for that. When we are we can start pushing to a master repo on the AMPAS GitHub.
  • Scott Dyer: I'm working on a CTL version, and so we need to lock something soon, as we're supposed to be shipping code in a few weeks. Once we confirm the path we're taking I can merge that into the AMPAS repo.
  • Alex Fry: My next task is to rationalize the top gamut finder and write a bottom gamut finder.
  • Nick Shaw: Why does the code have a separate lowerHullGamma as a parameter, and model_gamma which is calculated?
  • Pekka Riikonen: The model_gamma is what the model uses, and is what Luke suggested we use for the reach gamut gamma.
  • Nick Shaw: But we know using the model gamma in our approximation doesn't quite work.
  • Pekka Riikonen: The question is whether the line from the cusp follows the model gamma.
  • Nick Shaw: I guess compress mode may distort that.
  • Pekka Riikonen: Luke suggested rescaling M using the model gamma. That keeps chromaticities constant.
  • Kevin Wheatley: I'm not sure. For where the boundaries are, we should use the model to compute what it computes. But for other purposes we don't have to use the model gamma in an approximation.
  • Alex Fry: Ideally we'd calculate the AP1 boundary at all J values, but because we only have it at one, we need to curve it as it goes down.
  • Kevin Wheatley: Is that representing a source space?
  • Alex Fry: It's more like a display if you had a display that could put out 100 nits at any chromaticity.
  • Nick Shaw: Or rather J of 100, not 100 nits. And we could easily check what the M for AP1 is at lower J, and see if that fits the gamma we use.
  • Alex Fry: Originally I just found a gamma that worked, but it turned out to be very close to the model gamma.
  • Pekka Riikonen: Maybe if that gamma is too small for AP1 that's what is causing issues with the M clamp. But we don't know what the issue is with Rec.2020 and why the M clamp fixes it. The issue is always in the magentas.
  • Kevin Wheatley: If the feedback says we need a steeper mapping that is smoother, unless somebody can work out a variation of Nick's code that does that, we're stuck with the iterative inverse.

Meeting #134, January 10th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Daniel Brylka
Chris Clark
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: We've merged some changes and also had some more colorist feedback.
  • Alex Fry: Yesterday I pushed some changes to the two repos. v52 has changes from Kevin, Nick and Pekka. My top gamma solve is now working. There are new v52 LUT bakes. The 540 nit version is now 500 nit, so it's not for one specific monitor. Some colorist feedback said they used the 540 nit one. The 1000 nit LUT inverse produces artifacts, even though the Blink inverts. Pekka pointed out a small magenta region being missed on a round trip. We may need two or more sample points for the top gamma.
  • Nick Shaw: My changes were to reorder the final encoding so clamping happens in the limiting space, not the encoding one, and it clamps to peak luminance, so 1000 nit PQ is clamped at 1000 nits not PQ 1.0, i.e. 10,000 nits. So there's a virtual display and then final encoding. I also made a small change to the top gamma solve so it uses the correct focus J rather than a hard coded value.
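A simplified sketch of that ordering, assuming a plain ST 2084 (PQ) encode and ignoring the limiting-to-encoding primaries conversion:

```python
import numpy as np

# ST 2084 (PQ) constants
m1 = 2610.0 / 16384.0
m2 = 2523.0 / 4096.0 * 128.0
c1 = 3424.0 / 4096.0
c2 = 2413.0 / 4096.0 * 32.0
c3 = 2392.0 / 4096.0 * 32.0

def pq_encode(nits):
    Y = np.clip(np.asarray(nits), 0.0, 10000.0) / 10000.0
    Ym = np.power(Y, m1)
    return np.power((c1 + c2 * Ym) / (1.0 + c3 * Ym), m2)

def encode_output(rgb_limit_nits, peak_nits=1000.0):
    # Clamp in the limiting space to the display peak (e.g. 1000 nits),
    # not after encoding, where clamping at PQ 1.0 would allow 10,000 nits.
    return pq_encode(np.clip(rgb_limit_nits, 0.0, peak_nits))
```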
  • Pekka Riikonen: My changes added the M clamp to the AP1 reach boundary after chroma compression. I also changed the chroma expansion so it uses the reverse of the compression rather than the inverse, because it was expanding colors close to neutral too fast.
[Pekka showed a Desmos plot of what he meant by the difference between inverse and reverse]
  • Nick Shaw: So the curve is rotated 180 degrees, instead of reflected in x = y.
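The distinction, sketched for a normalized curve on [0, 1] (illustrative; the real chroma compression curve has a closed-form inverse, and bisection is used here only to keep the sketch generic):

```python
def reverse(f, x):
    # "Reverse": the curve rotated 180 degrees about (0.5, 0.5),
    # i.e. g(x) = 1 - f(1 - x).
    return 1.0 - f(1.0 - x)

def inverse(f, y, iters=40):
    # "Inverse": reflection in x = y, found by bisection for monotonic f.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```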
  • Kevin Wheatley: My changes were cleanups, because the NaN removal was already in. We need to remove commented out code, and then review the actual comments, some of which are out of date.
  • Nick Shaw: There are a few redundant parameters. I think referenceLuminance, mmScaleFactor and daniele_n_r are all the same thing, which is the 100 we normalize to 1.0.
  • Kevin Wheatley: We've tidied up the uses of 360. Now I think gamutCuspTableSize is used properly.
  • Alex Fry: The suggestion to make all tables evenly spaced to remove the iterative lookups seems sensible.
  • Nick Shaw: Pekka suggested a single table to pull all cusp values from one lookup. That would need to be evenly spaced, as the current cusp tables have a different h value for each entry, because the spacing differs between gamuts.
  • Kevin Wheatley: Now we are enabling cusp smoothing, the exact cusp value is less critical.
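A minimal sketch of the evenly spaced idea, assuming a hypothetical one-entry-per-degree table: h itself addresses the table, so no per-entry h values or iterative search are needed.

```c
#include <math.h>

#define TABLE_SIZE 360

typedef struct { float J; float M; } Cusp;

/* With even spacing, the hue is the index; just wrap and interpolate. */
Cusp cusp_lookup(const Cusp table[TABLE_SIZE], float h)
{
    h = h - 360.0f * floorf(h / 360.0f);   /* wrap into [0, 360) */

    int   lo = (int)h;
    int   hi = (lo + 1) % TABLE_SIZE;      /* wrap around at 360 */
    float t  = h - (float)lo;

    Cusp out;
    out.J = table[lo].J + t * (table[hi].J - table[lo].J);
    out.M = table[lo].M + t * (table[hi].M - table[lo].M);
    return out;
}
```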
  • Alex Fry: The baked LUTs use the parameters from Pekka's PR, so different cusp to mid blend and cusp smoothing is on.
  • Pekka Riikonen: I think my parameters make saturated highlights as dark as we can. Any more and you get artifacts in ramps.
  • Kevin Wheatley: We had more colorist comments. Mostly positive. Somebody liked the black handling, where the colorist in last week's feedback didn't like it. One person liked the way hues bleached to white, but commented that bright saturated colors lost detail with the bleaching. They also saw artifacts in high luminance blue near the Rec.2020 boundary.
  • Pekka Riikonen: Hopefully the new clamp fixes that.
[Alex showed the German fairground image which was described as showing this artifact]
  • Kevin Wheatley: There was a comment that blue moonlight highlights lost detail, and blues got darker as you saturated them.
  • Pekka Riikonen: The blue in this model is very dark, and light above the cusp. Even darker with stock primaries.
  • Kevin Wheatley: They said the HDR/SDR match was better than the default Dolby conversion. It's preference.
  • Pekka Riikonen: The green dragon with red and blue balls, the bright side of the balls is very desaturated, which doesn't happen with ARRI Reveal.
  • Alex Fry: ACES 1.x clips, so maintains saturation.
[Pekka showed comparisons of ARRI Reveal, ACES 1.3 and v52, with the green dragon and spectrally lit balls]
  • Alex Fry: The primary and secondary cusps are all at different J values. For all hues to desaturate at the same point the cusps would have to be at a similar J.
  • Pekka Riikonen: The old mapper had a smooth gradient to white, but that needed an iterative inverse. The current gamut mapper is the limiting factor. This Desmos shows the slope plotted against intersect J. I made a version which mimics the old mapper, which goes into white more smoothly, using the focus distance gain from the old mapper. But it needs an iterative approximation for the inverse.
  • Nick Shaw: If we could make a curve with that shape, I don't think it would lead to a quadratic solution.
  • Kevin Wheatley: A curve like that would have to be at least a cubic.
  • Nick Shaw: Did the old gamut compressor with iterative inverse fill the display unit cube? Filling that is one reason we've moved to a different approach. Renderings like ARRI Reveal don't try to fill the cube.
  • Kevin Wheatley: So what are our next steps?
  • Nick Shaw: Do any of the issues with the current version mean it's unusable? Or given the time constraints should we say "it is what it is" and work on getting a shippable version of it?
  • Kevin Wheatley: If it's always better than v1, we've achieved something. We need to fix the top gamut solve so it doesn't miss that sliver, maybe adding more samples. Other than that we don't have true bugs. The issues are subjective. Are we waiting for other feedback?
  • Scott Dyer: Annie has been following up with people. But feedback has been way more positive than I anticipated.
  • Kevin Wheatley: We need to give last week's testers the updated versions.
  • Nick Shaw: And they got the DCTLs to see if their issues were LUT related.
  • Scott Dyer: Emily said the issue using gamma corrections still happened with the DCTL, but the DCTL fixed the gain in HDR issues. They are trying to get clearance to send examples of what they are seeing.
  • Alex Fry: Thomas normalized the spectrally lit balls image in something spectral. I tried normalizing them in J, and the columns hold saturation more similarly.
  • Kevin Wheatley: I thought we should use HK for output only. But then you need to include that in calculating the gamut hull.
  • Pekka Riikonen: A default LMT could address the highlight bleaching issues.
  • Kevin Wheatley: We haven't looked at LMTs. Can we make ones which match other transforms using inversion? It should be easier with our improved inversion.
  • Christopher Jerome: In the next round of testing, will it be one version, or several for people to choose their preference?
  • Nick Shaw: At this stage, if some like one, and some like another, what do we do?
  • Christopher Jerome: Could there be a smoothness problem with the path to white?
  • Kevin Wheatley: We've tried to use smooth functions, and our testing with ramps hasn't shown up any issues.
  • Alex Fry: Does cusp smoothing affect invertibility?
  • Pekka Riikonen: It didn't invert without it. The gammas aren't accurate, because the cusp isn't accurate. Smoothing moves the cusp out. Maybe we could lower cusp smoothing if the gamma was more accurate. The magenta is concave and has the widest cusp.
  • Kevin Wheatley: If we look at Nick's animations of hue slices, we can find the hue where it breaks and see what's happening.

Meeting #133, January 3rd 2024, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Jeffrey D Mathias
Alex Forsythe
Luke Hellwig
Christopher Jerome
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Welcome back after the holidays. Alex has been merging code, and Pekka and Nick have been discussing some options regarding clipping.
  • Pekka Riikonen: I discovered that the artifacts when limiting to Rec.2020 come from the clipping happening after the gamut map. If we clip to AP1 reach space first, the problems go away. I think we should clamp the input to AP1 from the start, because the transform doesn't handle values outside that. This fixes the blue screen that went purple.
[Pekka showed comparisons of images with and without AP1 clamping]
  • Pekka Riikonen: ACES 1 also clamps in the RRT. I have seen no issues from doing this.
  • Lars Borg: The blue screen is blue, but the glass is still purple.
  • Alex Fry: The chromaticities of the screen are actually purple.
  • Kevin Wheatley: The refraction changing things so they stay purple when the screen is blue could be more problematic than consistent purple.
  • Nick Shaw: You're basically introducing a skew in the scene space before the rendering, clamping the input and skewing things onto the boundary of AP1. People may like this blue, which is wrongly purple, going blue again but other colors which are really purple will also end up blue.
  • Lars Borg: For keying, if you change the colors of the screen but not those of the 50% screen, how do you handle that?
  • Nick Shaw: People should of course be keying in scene space, and only using the DRT downstream of the key. But we know some people will key in display space after a DRT.
  • Pekka Riikonen: The blue screen is an extreme example. Most images aren't affected like that.
  • Kevin Wheatley: On the synthetic image the purple column isn't as purple any more.
  • Pekka Riikonen: The clamp also fixes the issues I showed before with the gradients if we make the gamut compression too steep. So I have two clamps, the input clamp which fixes the blue screen and the clamp in JMh which is needed to fix the artifacts when limiting to Rec.2020.
  • Nick Shaw: That's limiting and encoding to Rec.2020, rather than limiting to P3 and encoding as Rec.2020? Limiting AP1 to Rec.2020 is a very small amount of compression, which will always be problematic.
  • Pekka Riikonen: Pre-clamping in JMh before the gamut mapper removes the artifacts, but I don't know why. The raspy line in the ARRI bar image that I solved with the ratio based limit comes back if you limit to Rec.2020. So the question is do we want to clamp? I think we should go for the better rendering.
  • Alex Fry: I'm not sure the blue screen is a better rendering. I am concerned about adding a clamp to fix this image.
  • Pekka Riikonen: If the rendering space is limited to AP1, does it matter if we clamp it?
  • Alex Fry: I don't think pre-skewing the data is desirable.
  • Pekka Riikonen: We don't handle data outside AP1 gracefully.
  • Alex Fry: But if we clamp we put it in gamut in the wrong place.
  • Kevin Wheatley: What do other renderings do with this image?
  • Pekka Riikonen: ARRI Reveal makes the screen blue.
  • Kevin Wheatley: But the glass still has purple in it. And looking at the chromaticities, our rendering is sort of right. It's not entirely wrong.
  • Nick Shaw: People can do an AP1 clamp of the input themselves, or use the RGC.
  • Alex Fry: The JMh clamp is a separate issue.
  • Pekka Riikonen: There we clamp M to what the gamut compression will reach to.
  • Kevin Wheatley: I feel that belongs in the chroma compression step.
  • Alex Fry: We can put a toggle in the UI to check it does what we want.
  • Kevin Wheatley: It won't affect the inverse because you're inverting to that limit.
  • Alex Fry: I have a v52 that incorporates Kevin and Nick's changes and my iterative top gamma. It all seems to work except the top gamma. That worked when I initially added it to v51.
  • Kevin Wheatley: We had some feedback from Company3.
[Alex showed the feedback summary]
  • Alex Fry: Mixed results. An improvement on ACES 1. Cleaner and colors track better and highlight rolloff is better. Interesting they mainly used the 540 nit version, which was a custom one for my OLED monitor. Should we include 250 and 500 nit versions in the standard set?
  • Nick Shaw: I found artifacts in the 540 nit version with blue bar that aren't in the 1000 nit.
[Nick showed the differences in the blue bar ceiling between 540 and 1000 nits with his DCTL]
  • Nick Shaw: I initially thought it might be an error in my DCTL, but it happens with the Blink too.
  • Pekka Riikonen: Is it the same desaturation we see in the 100 nit?
  • Nick Shaw: Maybe. At 100 nits that whole area gets desaturated, so maybe at 540 some is in the desat zone and some isn't. But it looks oddly blocky. We want to desat highlights less, so that might help.
  • Scott Dyer: Where did we get to with that? Tweaking parameters helped, but wasn't substantial.
  • Nick Shaw: Pekka, was it the input or JMh clipping that let you gamut map steeper?
  • Pekka Riikonen: That was the input clip to AP1. Otherwise if we map too steeply we get an issue in gradients, and I'm not even sure if it's invertible. I've seen some issues with inversion near white. But I'll wait for Alex's v52. Tweaking parameters I can't get back to the saturation of v35, because that had the saturation boost that pushed some things out of gamut for the gamut mapper to bring back in. The new chroma compression doesn't do that.
  • Kevin Wheatley: If we can hit every corner of the display, a colorist should be able to grade things to a place that renders to where they want. Then it's an artistic choice. What about the comments from CO3 on shadows and using gamma?
  • Pekka Riikonen: I couldn't see any issue with gamma in my testing.
  • Nick Shaw: Have they tried my DCTLs yet to check if they are just seeing LUT issues?
  • Alex Forsythe: I gave them the DCTLs but haven't heard back yet.
  • Nick Shaw: Make sure they have my latest. There are still artifacts with very high values that I need to investigate [fixed with latest commit].
  • Scott Dyer: I have a CTL implementation which seems to mostly match the others. I was having issues with square roots of negatives in the Bjorn compression, as Kevin has mentioned. Some of the thresholds and checks are different in the DCTL and Blink. We need to synchronize or document if differences are needed. I have some questions I'll ask you offline.
  • Kevin Wheatley: We need a lot of code tidying.
  • Nick Shaw: I've definitely simplified things in the DCTL, because I've only included one code path for the options we have selected in the Blink.
  • Pekka Riikonen: I have some simplifications to the chroma compression code which I'll submit once Alex has his v52 finished.
  • Kevin Wheatley: Given time pressures, Pekka's clamps could be the right solution. We need a short timescale for the reference CTL.
[The remainder of the meeting was collaborative bug finding in the Blink code, resulting in finding a fix for the iterative top gamma in SDR, but Alex needed to do more work to generalize it]

Meeting #132, December 20th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Michael De Caria
Alex Forsythe
Luke Hellwig
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen
Daniele Siragusano

Meeting Notes

  • Kevin Wheatley: I've been bug fixing and Scott has some feedback.
  • Scott Dyer: People were generally happy and said it improves on 1.x, but several said SDR desaturates red and blue highlights compared to HDR. Hopefully we can fix that by tweaking parameters. Also people commented on the blue-screen that goes purple, and some skies. We know why and need to explain it's the chromaticity of the blue in the source.
  • Nick Shaw: Does that happen with all blue-screens?
  • Scott Dyer: We need to check others.
  • Pekka Riikonen: There are other blue-screen images in the set and only that one goes purple. But I agree about the skies. They look more purple in HDR.
  • Scott Dyer: We need to explain that the algorithm is behaving as intended and provide a fix for blue behavior.
[Pekka showed the SDR highlight desaturation]
  • Pekka Riikonen: Highlights were more saturated in v28. HDR got more colorful with the change to reach mode, which exaggerated the difference. I tried tweaking parameters, but we can't go too far because gradients break and clip. With the old gamut mapper you could go further than with the quadratic. The best I found was to set mid-cusp bias all the way to mid (1.0) and focus distance to 3.4. We should get feedback on those settings.
  • Nick Shaw: When I designed the quadratic approach, I tried to make the behavior similar to the previous one, but it couldn't be exactly the same as that was not invertible. Previously the compression slope depended on the source J, and after compression that had changed, so you didn't know the slope to invert it. Now a point anywhere on the same compression line results in the same slope, which is a change in behavior. If we go back to a previous version because the look is preferred then we lose full invertibility.
  • Kevin Wheatley: Can we vary the slope algorithm to create a different slope for some regions?
  • Nick Shaw: Not really, because if it varies with source position like that we are back to non invertibility. The behavior I tried to emulate where compression went horizontal at white and black was there before. It didn't always focus towards one point, did it?
  • Pekka Riikonen: There was a focus distance gain modulated by J. That's why it wasn't invertible.
  • Nick Shaw: The gradient clipping you showed is on synthetic images with very high values. Should we worry about those if it doesn't affect real photographic images?
  • Pekka Riikonen: It's true those ramps go to the max half-float value.
  • Nick Shaw: Although perhaps we do need to worry about that as cameras' dynamic range increases, and people make CG with physically plausible values.
  • Kevin Wheatley: I have made some changes based on Nick's v51 which I think caught all the NaNs and infs. Alex has been merging.
  • Alex Fry: I'm working on a v52 incorporating everybody's changes, but I haven't got it right yet. I still have NaNs and artifacts. I think there's a problem with my top gamma initialization.
  • Nick Shaw: I need to incorporate Kevin's fixes into my DCTL.
[Nick showed his v51 DCTL in Resolve]
  • Nick Shaw: I have a Rec.709 and a PQ1000 version. They go in nodes, not as custom ODTs, so you have some parameters like D60 sim and 709 in PQ. The Rec.709 invert seems to work for display-referred images. The only issue I have found is that with the dominant wavelength image it breaks at higher values. I don't know why. They seem to match the LUTs except for the soft clip which I have working in the limiting space, so for P3 limited in Rec.2020 mine soft clips to P3, and the Blink currently soft clips to 2020, which does nothing. I've sent Alex a fix for that. On my M1 Mac Studio it runs real-time in HD. I need to make a P3 gamma 2.6 version.
  • Kevin Wheatley: Nick and I have been working on the list of code tidy ups, some of which are already in my fixes. So we're left with doing more testing and tweaking highlight saturation.
  • Alex Forsythe: We got some feedback from Company 3. They think they found an issue with highlights breaking when applying gamma with the LUTs.
  • Nick Shaw: They should try my DCTLs, in case what they are seeing is a LUT issue.
  • Kevin Wheatley: Mostly what's left is merging things and tidying up, and looking at feedback. I guess no meeting next week, and back in January.
  • Alex Forsythe: The timeframe is getting critical so we can get it to OCIO and other developers by February.
  • Nick Shaw: What will we do about the CTL deliverable, which has no init(), so everything runs per pixel? I guess we structure it so everything that only needs to run once is together, to help implementers optimize. The lookups we have now could be optimized, because they are unevenly spaced.
  • Alex Fry: It seems worth pre-processing those to make them even in the init().
  • Kevin Wheatley: Maybe at higher resolution to ensure we catch the corners.
  • Alex Forsythe: We need to declare something as 2.0 soon, and then after that you can work on 2.1.
  • Kevin Wheatley: We need to focus on parameter changes and glaring bugs. We don't have time for major rewrites.
  • Nick Shaw: Is there a possibility to feed things back into our code if developers we give it to see obvious optimizations? Or does the CTL have to be locked?
  • Alex Forsythe: It's not just going to OCIO, so any feedback would have to go towards the next version.
  • Kevin Wheatley: If somebody finds an obvious improvement it would be crazy not to share it.
  • Alex Forsythe: The CTL is just a reference of what is the right answer. People can implement it their own way to get to that answer.
  • Nick Shaw: The Baselight mathematical implementation of ACES 1.1 appears to follow the CTL pretty closely.
  • Daniele Siragusano: I think some splines are pre-baked, but it's pretty vanilla ACES. If people find optimizations it would be stupid not to share.

Meeting #131, December 13th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: I have found another case generating infs. I fixed it in my code, which I will merge at some point. Some yellow pixels in the Sony still life aren't infs, but get very saturated from an interaction between the compress/decompress and the non-linearity in the model. I don't have anything worth showing yet. I will have to look again at adding a restricted slope instead of the infinite one in the non-linearity.
  • Nick Shaw: Does the compress/decompress need to be symmetrical, given there's a non-linearity in between?
  • Kevin Wheatley: I see the compression on the way in as pre-conditioning the pixels, and you don't necessarily need that on the way out.
  • Nick Shaw: My testing suggested you do need the inverse on the way out of what you have on the way in. We are modifying the model, so you need to come out through the same modified model or the colors get skewed.
  • Pekka Riikonen: Maybe we could just have compress on the way in and uncompress on the way out. I remember testing that and it works, and doesn't change the rendering much. I talked about it in my alternate compress mode thread.
  • Nick Shaw: The v51 I posted last week didn't do what I thought it did. I should have noticed, because it changed things too much. Now I have a fixed version which has minimal effect except being accurately invertible. I've also created a document where we've been noting things we know are suboptimal. Stuff that should be fixed in future, but works for now. Pekka and I had a discussion about soft clipping, which I hadn't realized was on by default, including in the LUT bakes. We need to check if soft clipping causes problems for colorists who want to push hard against the boundary. And with inversion, if you invert a display cube then go forward with soft clipping, it slightly rounds off the corners. But is it close enough? Soft clip is obviously beneficial to the picture.
  • Pekka Riikonen: It was on in all LUTs except v50. I found a soft clip threshold of 0.999 (instead of the 0.995 it's set to currently) is enough to fix many artifacts. Our gamut mapper uses an approximation so we map outside the real boundary and then clip. Hard clipping can create blocks of solid color there.
[Pekka showed the beneficial effect of soft clip on images]
  • Pekka Riikonen: Even without soft clip it's much better than ACES 1.x.
  • Nick Shaw: All that clamping should happen in the limiting space, where it currently happens in the encoding space. Otherwise things like P3 limited in Rec.2020 aren't properly clamped to P3. Same for Rec.709 sim in PQ.
  • Pekka Riikonen: Cusp smoothing helps smooth the internal shape, but balloons out so creates more clipping. It does mean you can lower the lower hull gamma value.
  • Nick Shaw: If we find a sweet spot for those values, will it apply to any gamut? We don't want to need hand tuning per target.
  • Pekka Riikonen: We haven't tested that.
  • Nick Shaw: Do people need to be able to round-trip a P3 display cube? Graphics are mostly sRGB/Rec.709. When people have provided P3 graphics, do they use the range, or are they just encoded as P3?
  • Kevin Wheatley: The examples I've seen did not use the full range of P3.
  • Nick Shaw: So maybe we don't need to be able to invert the whole P3 cube? Rec.709 is more important.
  • Kevin Wheatley: I haven't had a need, but others may.
  • Nick Shaw: I raised a concern before about taking a display referred Rec.709 image with saturated colors and that if you inverted it and then went through the forward P3 DRT you might end up with over-saturation. I tested and in fact because it stays on the Rec.709 hue line, even if it hits the P3 boundary, it doesn't look wrong.
  • Kevin Wheatley: New logos may be done in Display P3 in future.
  • Nick Shaw: They would cause themselves problems if they do that, because Display P3 is common but not ubiquitous. So many would have devices that couldn't show their logo.
  • Kevin Wheatley: Do we have anything else except merging our various bits of code fixes.
  • Nick Shaw: I've experimented with some code that moves the clamping stages into the limiting space.
  • Pekka Riikonen: I am working on updated documentation of chroma compression.
  • Alex Fry: Have you discussed making all the lookups evenly spaced?
  • Nick Shaw: It's mentioned in the document as a future optimization.
  • Kevin Wheatley: Luke, is a uniform degree distribution the right choice?
  • Luke Hellwig: The hue steps are roughly perceptually uniform.
  • Kevin Wheatley: If we use 360 steps, do the gamut corners fall exactly on a sample? Or might we round off some corners by interpolating?
  • Alex Fry: In the unevenly spaced ones the primaries and secondaries are exactly sampled.
  • Nick Shaw: Those lookups store the actual float values of h, which include the corners. In an evenly spaced version you don't need to store h, because h is the index.
  • Kevin Wheatley: There are optimizations we could do of the lookups.
  • Nick Shaw: Even for uneven ones we could make the search for the right interval more efficient with a binary search.
  • Kevin Wheatley: Or we could up the resolution of even sampling until it is good enough. I also think we can eliminate some conversions from radians and degrees and back.
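A sketch of the binary search Nick mentions for the unevenly spaced tables (the table layout is an assumption): it finds the containing interval in O(log n) rather than stepping through entries linearly.

```c
/* h[] holds ascending hue values; returns i such that hue lies in
   [h[i], h[i+1]]. Assumes hue is within the table's range. */
int find_hue_interval(const float h[], int n, float hue)
{
    int lo = 0, hi = n - 1;
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (h[mid] <= hue)
            lo = mid;
        else
            hi = mid;
    }
    return lo;
}
```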

Meeting #130, December 6th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Michael De Caria
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Carol Payne

Meeting Notes

  • Alex Fry: I've been looking at procedurally generating the values for the top gamut gamma. We currently have 36 hand tuned values for Rec.709. I was slowed by Blink issues, with it not reloading includes unless you restart Nuke. I think we should recombine everything into one Blink file.
  • Kevin Wheatley: On Linux Nuke 14 it does reload includes.
  • Nick Shaw: I expanded my exploded version to have forward and inverse paths. That let me turn different bits on and off to see what inverted and what didn't. I believe the v51 invertible boundary finding ["Use Nick method for reach" check box] only has issues with round trips for hues where the gamut approximation for the top has a mismatch. I've also been working on a document which goes into the detail of my maths for the quadratic solve and invertible boundary finding. I need to work on it a bit more, but plan to post it to ACES Central.
  • Kevin Wheatley: I investigated the Bjorn compress/decompress as a possible source of NaNs.
[Kevin showed some plots of chromaticity grid before compression, after compression and after decompression]
  • Kevin Wheatley: Any chromaticities with negative y component are lost on the round trip. I removed threshold checks, and noticed we are sometimes taking the square root of negatives. If we take the absolute value before the square root, we get back a large proportion of the values. I need to look at what the thresholds were trapping for and try to handle them better without needing that. I haven't eliminated all odd pixels, but the NaNs are gone. I need to look into the source of the other edge cases.
  • Nick Shaw: Can we just add fabs() functions to the compress/decompress code?
  • Kevin Wheatley: Yes, but there are also other things like spow() being used unnecessarily where pow() would work, and powers of two can be replaced by x*x, and powers of one half by sqrt(). I think we can remove some of the divide by zero traps if we reason through what's happening.
  • Nick Shaw: The spow() function clamps at zero but has a comment that it originally mirrored but that caused problems. That may have been back with ZCAM, and we might not need the clamp any more.
  • Kevin Wheatley: There are other things like taking one over a value, but that value was calculated as a/b, so we could just use b/a in the first place. I need to keep looking at the source of these yellow pixels.
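Minimal sketches of the kinds of substitutions being discussed, illustrative only rather than the actual Blink code:

```c
#include <math.h>

/* fabsf() before the square root, instead of a threshold check to
   avoid taking the square root of a negative: */
float safe_root(float x)  { return sqrtf(fabsf(x)); }

/* pow() replaced where the exponent is a known constant: */
float square(float x)     { return x * x; }    /* instead of powf(x, 2.0f) */
float root(float x)       { return sqrtf(x); } /* instead of powf(x, 0.5f) */

/* And where code computes 1 / (a / b), just compute b / a directly: */
float ratio(float a, float b) { return b / a; }
```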
  • Nick Shaw: I'd like to propose my v51 as the latest version. I added the "Nick boundary method" option, but without checking that it's identical to v50. I can open a PR.
  • Alex Fry: Congratulations to Luke on defending his thesis. He's now a doctor of color! Do you have any thoughts on what we're doing with your model?
  • Kevin Wheatley: The NaNs I've been looking at are from something not part of the original model. It's a big abuse of the model, and I wondered if they were needed on the way out where we should only have real values.
  • Nick Shaw: It doesn't round-trip if you don't use compress mode on the way out, because we have a modified JMh space, which is not right for the standard model to deal with.
  • Kevin Wheatley: We've been exposing the rendering to more colorists to get feedback.
  • Scott Dyer: I've sent the LUTs to quite a few people, and we have people lined up to come in, including the ASC.
  • Kevin Wheatley: If we merge what we've all been looking at together, have we nailed some of the major bugs? I think so, but there are more edge cases.
  • Nick Shaw: None of this should change the v50 rendering people are looking at, and we can tell them any single pixel errors they see are something we will fix. My invertible reach boundary find doesn't change the rendering visibly except in some very extreme images.
  • Kevin Wheatley: My fixes change nothing except fixing pixels which had errors.
  • Alex Fry: I see Jeffrey points out Resolve now has Remote monitoring, which should allow remote HDR on an iPad, now our ODTs are properly tagged.

Meeting #129, November 29th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Chris Brejon
Daniel Brylka
Chris Clark
Michael De Caria
Alex Forsythe
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Scott Dyer: I'll be testing in our lab where we now have the Flanders 55" OLED as well as the X300. We're inviting the ASC and some colorists, and giving some post houses a package for their own systems. I want to check I'm using the latest best version, and have the right settings in Blink.
  • Nick Shaw: v50 just added the D60 sim option which is not relevant for the LUTs.
  • Alex Fry: v49 fixed some HDR artifacts, but there are no v49 LUTs, so v50 is what to use.
  • Nick Shaw: Are we sticking to AP1 chroma compression primaries?
  • Scott Dyer: I thought that changing them made some artifacts go away, but I was changing a lot of settings.
  • Alex Fry: Images with extreme values do render better with a larger reach. Early versions reached right out to ARRI Wide, and so rendered some extreme images like blue bar better. But the inverse path becomes ridiculous, and you need to push to those extreme values to hit the boundary, which you can't do with a normal working space like ACEScct or ACEScg. We have to say you need the RGC for extreme values.
  • Nick Shaw: As I mentioned before, if reach compression is set to AP1, unless chroma compression is also set to AP1, then AP1 boundary values get double compressed and end up inside the target gamut. And likewise the target gamut boundary inverts to outside AP1.
  • Pekka Riikonen: If we set it to a wider space it won't invert any more. I think the two reach gamuts should match.
  • Nick Shaw: That makes a strong case for AP1.
  • Kevin Wheatley: My only concern is the hard boundary that might make.
  • Nick Shaw: Is there a visible band, or does it fall off smoothly to doing nothing at the boundary?
  • Kevin Wheatley: It's continuous but not entirely smooth.
  • Pekka Riikonen: That's why I think the reach gamuts should be the same, and anything beyond that is clipped at the end.
  • Kevin Wheatley: We should look at the effect of combining that with the RGC.
  • Pekka Riikonen: I think the RGC parameters should be dialed back for ACES 2.0.
  • Nick Shaw: That was always the plan.
  • Pekka Riikonen: The v49 I pushed during the last meeting has some wrong parameters for ratio based compression. But that doesn't affect reach compression.
  • Nick Shaw: Version 50 is the same as v49 except I have added separate drop-downs for the primaries and white point of the limiting gamut. It affects the reference white used to convert JMh back to XYZ. Choosing ACES white with Rec.709 primaries is the same as the old D60 sim. There is also a "fit white" check box, which like the current D60 sim scales the maximum RGB channel of the simulated peak white back down to 100% so no channel is clipped. Although we talked before about how this kind of tilting of the neutral axis means some values are pushed out of one side of the encoding space and get clipped, and a 'hole' opens up on the other side.
  • Kevin Wheatley: It's a philosophical question, depending on what your intent is. And it affects inverses. So we should document that you should only use it if it does what you want.
  • Nick Shaw: I also started experimenting with using the same method I used to find the target gamut boundary to also find the reach gamut boundary, to make it invertible. It's work in progress. It now round trips reach compression pretty well, but a few values shift a little. I need to investigate more. Maybe it's the fit of the hull top gamma.
  • Kevin Wheatley: What else is still to be done?
  • Alex Fry: We need a way to find those top gamma values rather than hand tuning. All the reach boundaries are calculated in the code except the locus.
  • Kevin Wheatley: Cusp smoothing was on the list.
  • Pekka Riikonen: Pushing things out helps with some boundary inaccuracies.
  • Alex Fry: Having a larger constant value for the top gamma that included everything is the simple solution, but would lead to lots of clipping.
  • Nick Shaw: And even that larger value would differ for different targets.
  • Kevin Wheatley: Because display gamuts mostly align in hue, we might find it's just a scale factor for one set of values, which maybe you could calculate from e.g. just the green.
  • Alex Fry: Do you just look at the mid point and keep increasing gamma until it's over?
  • Nick Shaw: But an s-bend could have a mid point that's on the straight line, so you would wrongly choose gamma of 1.0.
  • Kevin Wheatley: I think we only have time to find out how to do it for the three main gamuts, and then document the principles if somebody needs to do it for another gamut.
  • Nick Shaw: We don't currently have a hard clip for gamuts smaller than the encoding gamut.
  • Kevin Wheatley: We need to add a clamp.
  • Alex Fry: We can just go XYZ to limiting primaries, clip negatives, then go to the encoding primaries.
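A sketch of that clamp, with the 3x3 matrices as hypothetical inputs: convert XYZ to the limiting primaries, clip negatives there, then convert to the encoding primaries.

```c
typedef struct { float v[3]; } Vec3;
typedef struct { float m[3][3]; } Mat3;

static Vec3 mul(const Mat3 *m, Vec3 x)
{
    Vec3 r;
    for (int i = 0; i < 3; i++)
        r.v[i] = m->m[i][0] * x.v[0]
               + m->m[i][1] * x.v[1]
               + m->m[i][2] * x.v[2];
    return r;
}

/* Clip in the limiting space, not the encoding space. */
Vec3 clamp_to_limit(Vec3 xyz, const Mat3 *xyz_to_limit,
                    const Mat3 *limit_to_encoding)
{
    Vec3 rgb = mul(xyz_to_limit, xyz);
    for (int i = 0; i < 3; i++)
        if (rgb.v[i] < 0.0f)
            rgb.v[i] = 0.0f;
    return mul(limit_to_encoding, rgb);
}
```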
  • Kevin Wheatley: Are we settled on gamut mapping method?
  • Pekka Riikonen: I think it has to be AP1 reach mode.
  • Kevin Wheatley: It doesn't need to invert to AP1 primaries. It needs to invert inside them.
  • Pekka Riikonen: You can do that with the ratio based version with the right parameters. And that would be simpler.
  • Kevin Wheatley: With reach compression, a Rec.709 cube inverts exactly to that boundary but where does P3 go to?
  • Alex Fry: With a P3 limit that should invert to AP1 too.
  • Nick Shaw: So if you invert a 709 cube through the 709 transform, then render the result through the P3 transform, does it end up at the P3 boundary?
  • Alex Fry: Yes, so the same amount of information ends up on screen for all targets.
  • Kevin Wheatley: And if somebody is doing two SDR deliverables, P3 and 709, how do those look to them?
  • Alex Fry: Within the range that isn't gamut compressed they should look the same, and beyond that they both have the same information but fitted to their own boundary.
  • Kevin Wheatley: Is that what people would expect?
  • Alex Fry: I think it is, because it's the same information you want to present.
  • Nick Shaw: But you don't want a 709 red logo going through the 709 inverse and P3 forward transform, and producing P3 red.
  • Kevin Wheatley: Ignoring the inverse, if you grade in 709 and then flick to P3, what should it do? The other way is ok. But what would people expect going 709 to P3?
  • Alex Fry: My assumption if I'm at the edge of 709 and go to P3 is that I will see more saturation.
  • Pekka Riikonen: If the scene is more saturated than P3, I'd expect P3 to show more than 709.
  • Nick Shaw: But does reach compression show more, or show the same and put it in a different place?
  • Alex Fry: It shows more because it's less compressed. Like with dynamic range, when we roll off to 100 nits and then go to a more capable display, it's the same information, but rolled off less.
  • Nick Shaw: It isn't though. Our curve rolls off and clips at linear 128 in the source for 100 nits, but clips at linear 876 or whatever for 1000 nits, as well as rolling off less.
  • Alex Fry: We don't have displays with 10x the gamut of 709. If we went from a 100 nit display to a 130 nit one, would we reveal more of the source or just roll off less aggressively?
[our current curve rolls off to linear 128 in the source for 100 nits, and linear 172 for 130 nits]
  • Kevin Wheatley: We are making a decision about what we do, but is it what people want, and if not can they trim to get what they want?
  • Christopher Jerome: Do clients ever specify their logo in P3?
  • Kevin Wheatley: It has happened.
  • Alex Fry: Talking about display referred graphic elements in scene referred terms doesn't really make sense.
  • Nick Shaw: Because 128 linear is required to hit peak white in SDR, peak white will invert to 128. Doing things like scaling that in scene space will cause problems because it's so bright. 60 linear will hit 99.5% SDR, which is effectively 100% on a waveform.
  • Alex Fry: That will still cause anti-aliasing issues.
  • Kevin Wheatley: Any other items?
  • Alex Fry: The OCIO config now has the 709 within P3-65 2.6 gamma handled by the config, not in the LUT. That's only OCIO for now.
  • Kevin Wheatley: Nick can continue with his v51. Who can look at the top gamma?
  • Alex Fry: I'll try to look at that.
  • Kevin Wheatley: And cusp smoothing?
  • Alex Fry: Anyone with Nuke can look at that with the Blink version.
  • Pekka Riikonen: It's just coming up with numbers that work for all hues, unless we make it hue dependent.
  • Nick Shaw: I don't think we have time for that.
  • Pekka Riikonen: We could bake a version with cusp smoothing for comparison, maybe 0.5. It helps with blues, but adds hue skews with reds.
  • Kevin Wheatley: Do we still have NaNs?
  • Pekka Riikonen: Yes.
  • Nick Shaw: I think it's in the noisy shadows, perhaps where the noise makes M very large for small J.
  • Pekka Riikonen: Also in some shadow pixels J and M are zero, but h is 180. I think NaNs come from compress mode.
  • Kevin Wheatley: I'll investigate that.

Meeting #128, November 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Carlos Aviles
Lars Borg
Daniel Brylka
Alex Forsythe
John Frith
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Nick and Pekka have been looking into issues discussed last week. Nick posted a summary.
  • Nick Shaw: I have been experimenting moving all the functions to an external library, and made simple RGB to JMh and JMh to RGB nodes.
[Nick showed his Nuke script and demonstrated what he discussed in his post]
  • Nick Shaw: The DRT with everything off going AP1 to AP1 has distortions on a CIExy plot. My simple nodes don't have that. If I clamp the incoming RGB to 65504, as the DRT does, I get the distortions. The dominant wavelength image has some huge values in it. So what should we clamp to? I also want to turn all clamps etc off, so I can see why they are needed. I can't see the tilt we've seen unless I use a different reference white going JMh to XYZ to what was used going XYZ to JMh. Deliberately changing the white point on the way out creates a D60 sim, so I think we should separate the limiting primaries and white in the UI. We would need scaling like the current sim ODTs to prevent clipping. The other thing I did was move all the functions in the Blink to separate library files, and then break the steps of the DRT into separate nodes which all use those same library files. The nodes only expose the parameters each step uses. The tonemap step in particular is affected by more parameters than you think.
  • Kevin Wheatley: The algorithm doesn't necessarily need all those parameters, but the current code uses them. The input parameters should be dealt with early on, and not affect things like gamut compression.
  • Alex Fry: It may use input values to calculate the cusp path.
  • Kevin Wheatley: My testing also showed the pure Hellwig conversion inverts accurately. My only concerns were with compress mode. Do we need the divergences from the model on the way out?
  • Nick Shaw: I can see with my version that you need a matching LMS matrix on the way out. Not using compress mode on the way out affects some aspects of the image, but doesn't make it look completely wrong.
  • Alex Fry: What if you set the over range values to zero?
  • Nick Shaw: I don't get the big distortions, but it moves some values onto the AP1 boundary which shouldn't be there. But the incredibly high values should be ending up at peak white for any realistic display peak. We don't go RGB to JMh and back.
  • Alex Fry: Can we see an sRGB cube going to JMh and back?
  • Nick Shaw: I only see a shift if the limiting primaries (even if gamut compression is off) have a non D65 white, which I think is correct, and why we should separate the white point there for D60 sim.
  • Kevin Wheatley: If we have a scale to prevent clipping, where does it go? Should it affect the limit finding as well? What do we mean by a simulation?
  • Alex Forsythe: The D60 sim came from somebody who had a Rec.709 monitor in the theatre and wanted the whites to match, so I took out the chromatic adaptation.
  • Nick Shaw: The current code has the necessary scale factors as magic numbers. It would be better if the code calculated what scale factor was necessary to get the maximum RGB value of D60 encoded as D65 back to 100%.
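A sketch of deriving that scale factor rather than hard coding it; the matrix and white point are assumed inputs:

```c
/* Convert the simulated (e.g. D60) white's XYZ through the display's
   XYZ-to-RGB matrix, and scale so the largest channel maps to 100%. */
float fit_white_scale(const float xyz_to_display[3][3],
                      const float white_xyz[3])
{
    float maxc = 0.0f;
    for (int i = 0; i < 3; i++) {
        float c = xyz_to_display[i][0] * white_xyz[0]
                + xyz_to_display[i][1] * white_xyz[1]
                + xyz_to_display[i][2] * white_xyz[2];
        if (c > maxc)
            maxc = c;
    }
    return 1.0f / maxc;   /* no channel clips after scaling */
}
```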
  • Kevin Wheatley: Should we allow arbitrary creative white?
  • Alex Forsythe: DCP projectors have headroom so scaling isn't necessary, so one channel can go over 48 nits.
  • Kevin Wheatley: Should the sim affect the gamut mapping?
  • Nick Shaw: Daniele has said that he would treat the whole process as if you had a Rec.709 D60 display, and calculate XYZ values for that, and then just encode to Rec.709 D65 to simulate that. Just changing the white point of the limiting gamut will achieve that.
  • Kevin Wheatley: There was discussion last week about reach compression.
  • Nick Shaw: That's why I think it isn't invertible at the moment. I'm not sure about reach compression anyway, because it's not gamut compressing source AP1. It's gone through tone mapping and chroma compression.
  • Alex Fry: Ideally it would reach to the boundary of the result of the chroma compression. That's complicated.
  • Kevin Wheatley: Were the artifacts coming from that or the transition above and below the cusp?
  • Pekka Riikonen: The HDR artifacts are removed in v49 by making the threshold a ratio of the reach distance. Because otherwise when the limit is close to 1, with a threshold of 0.75 compression does almost nothing.
  • Nick Shaw: I wondered if the external library version could become the master, so I don't need to break down each new version again.
[Nick showed a 3D JMh plot of the source of the dominant wavelength image]
  • Nick Shaw: You can see the M curves back in for high J. Should it do that, or is it just what the hyperbolic curve fitted to lower values happens to do with high values?
  • Alex Fry: I believe very bright values bleach out to your eyes.
  • Kevin Wheatley: John Frith asked about the current state of the transform. I believe that the middle is pretty stable, and we're messing with what happens around the edge cases. I think the look is pretty stable.
  • Pekka Riikonen: V35 was a bit more saturated.
[Nick showed a 3D JMh plot of the dominant wavelength image after chroma compression]
  • Nick Shaw: It gets pretty flat at the top, but the locus cone is still further out than the values are.
  • Pekka Riikonen: The limit does get very large. We could clamp it to a maximum value. Near zero the limit becomes very small. So there we need a slightly larger limit. Cusp smoothing will push that outside.
  • Alex Fry: I should probably bake LUTs of v49. Nick, can you add the white point separation for the limit to your broken out version?
  • Nick Shaw: I can do that.
  • Alex Fry: Should we merge Nick's library version?
  • Kevin Wheatley: I think that makes sense.
[Nick showed a comparison of v48 and v49 with the gamut mapping collage image]

Meeting #127, November 15th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Michael De Caria
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Alex Fry: We have had a private discussion where Pekka has pointed out some issues with the boundary finder, and some unattractive rendering, particularly with blues in HDR.
  • Pekka Riikonen: Compared to v46, v48 has harsh banding in blues. And I saw all kinds of artifacting in HDR.
[Pekka showed example images, and used a gain and gamma approach to show HDR images in SDR]
  • Pekka Riikonen: In the ARRI bar image the colorful back panels end up with a noticeable ragged band. It seems related to reach mode compression when the limit gets very small. I made the compression threshold variable, so it's a ratio of the limit. So it's close to the boundary if the limit is small.
  • Nick Shaw: But if the limit is close in to the boundary, the compressor should be doing almost nothing. It's mapping the range from the threshold to something just above one to a range from the threshold to exactly one.
  • Alex Fry: I guess it's related to the fact that the reach boundary parallels the target boundary below the cusp, but above it the target boundary goes in, and the reach boundary keeps going out, so you get larger limit values.
  • Pekka Riikonen: My change fixed 80% of the artifacts I was seeing, particularly in HDR.
  • Alex Fry: It also seems to fix a hue swing in the purple corner of the ARRI bar.
  • Pekka Riikonen: v46 looks smoother there, but that was using the chroma compression space, not AP1, so it's not a like for like comparison. But in some cgi images like the light saber one I'm seeing odd banding that v46 didn't have. I need to do more testing.
  • Nick Shaw: What changed between v46 and v48? My boundary finding attempted to be more accurate for invertibility, but should be finding about the same value. So the big change was the slope gain, which made compression steeper.
  • Alex Fry: The main thing in the baked LUTs is the chroma compression space used in v46, while v48 uses AP1 for both compression spaces. Looking at the collage of pathological images, putting reach compression space to the locus helped a lot. But that image has pretty wild chromaticity values.
  • Pekka Riikonen: The Fabian Matas nightclub image has a very visible ring around the blue light in HDR. I made some other small changes, so e.g. the cusp smoothing uses the same parameter in all modes. I also added an option to bypass the upper hull gamma and use a straight line.
  • Alex Fry: The values are tuned for Rec.709, but I don't know how wrong they are for e.g. HDR P3.
  • Pekka Riikonen: Bypassing it to a straight line had no effect on the artifacts.
  • Alex Fry: Yes, but I don't know if the values should be higher or lower in HDR. Some skews I saw look like clipping. The values we use are all about inverting. But we need a way to derive them procedurally.
  • Pekka Riikonen: Cusp smoothing means you can have lower gamma values, because that expands the boundary.
  • Nick Shaw: Is it definitely the gamut compression causing these artifacts? Because we've changed the chroma compression space.
  • Pekka Riikonen: I thought it had no effect, but it was not hooked up how I thought it was. Cusp smoothing also removes some of the artifacts. I need to investigate more, and will post my new version.
  • Alex Fry: I've noticed blue skies look similar to what we've seen before in SDR, but in HDR they are darker.
  • Pekka Riikonen: That's the steeper vector we mentioned before. I've changed it back to the old formula [in v49].
  • Nick Shaw: That should be the only change of look compared to before.
  • Pekka Riikonen: I think it is the change of reach gamut to AP1 that has changed things.
  • Alex Fry: That maybe makes a case for a slightly different inverse. We are compromising the forward rendering for the inverse.
  • Nick Shaw: But if people want their display referred images to land back in exactly the same place, they need to match. As I posted, if you want your inverse to land in AP1, the forward transform will only be able to handle AP1. Earlier versions that handled further out colors gracefully didn't invert properly.
  • Alex Fry: And people need to grade and do VFX in AP1.
  • Nick Shaw: Maybe we say the rendering only handles AP1, and tools need to have the RGC or a parametric version of it to bring values into AP1 if needed.
  • Alex Fry: Or maybe AP0. I still need to look into the white point issues we saw last week where achromatics shift. Scott, have you got feedback from the colorist you spoke to today?
  • Scott Dyer: He hated it! He likes ACES 1.0, and doesn't mind skews. He wants HDR to be scaled up SDR, with highlights maybe stretched out, which is different to what we chose to do.
  • Alex Fry: We could expose more parameters to allow that sort of thing.
  • Scott Dyer: People who don't like it can just use the thing that they like. We can't please everybody. He wanted to be able to clip out the red and blue spheres in the cg dragon image, and can't do that with our rendering. He also hated the tone scale.
  • Christopher Jerome: With the final deliverable we need to make sure we can make LMTs for popular renderings, like K1S1.
  • Alex Fry: That would be brute force going forward through the other rendering and backwards through ours.
  • Nick Shaw: If it's invertible and hits the boundary, you can do that. Although it's limiting, and will only definitely work in SDR, if the LUT is SDR.
  • Christopher Jerome: Does the cusp smoothing clip anything off?
  • Pekka Riikonen: It bulges out to contain the whole gamut, but means more gets clipped in the final target clip. I hope to find a way to work out the exact amount to expand, so the smoothing brings it back to the true cusp. Bulging out helps with inversion.
  • Nick Shaw: Alex, when you first introduced reach compression, you were only compressing horizontally, to make things simpler.
  • Alex Fry: Yes
[Nick sketched a representation of a target gamut and a reach gamut]
  • Nick Shaw: The intent of reach compression is that something on the reach gamut boundary maps exactly to the target gamut boundary. But finding the reach boundary along a horizontal line won't give the exact compression limit value to do that, if the compression is actually going at an angle. And it will give a different wrong value in the reverse direction from a compressed pixel.
  • Alex Fry: That may be why it's not quite round-tripping, and non-reach-mode does.
  • Nick Shaw: We could use exactly the same maths that I use to find the intersection with the bottom part of the target gamut to also find the intersection with the reach gamut. The values for the J intersection and slope used in the target intersection could be reused in finding the reach intersection. But this would be a subtle improvement. We still need to find the source of the banding and dithering we see. That feels like it might be to do with the extra clamps and limits we've added to protect against division by zero, and noise is making pixels fall either side of one of those thresholds.
  • Pekka Riikonen: I did try varying the clamps and it had no effect.
  • Christopher Jerome: So the idea of reach mode is that nothing in AP1 will get clipped?
  • Alex Fry: Yes, and things will invert to inside AP1.
  • Christopher Jerome: I was looking at the ARRI diver image, which has some very chromatic values. Rec.709 inverts fine and so does P3. But 709 limited P3 creates NaNs when you invert it.
  • Nick Shaw: If the forward and backwards transforms don't exactly match, the forward transform could produce values outside what will invert.
  • Alex Fry: That appears to be the case with the LUTs. The Blink doesn't do it.
  • Nick Shaw: A transform that is supposed to be limited to a gamut probably needs to clip to it, or it would fail QC like P3 in Rec.2020 would.
  • Christopher Jerome: Seeing where things fail can be useful in tracking down problems.

Meeting #126, November 8th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Anton Meleshkevich
Willem Nagtglas
Carol Payne
Pekka Riikonen
Daniele Siragusano

Meeting Notes

  • Kevin Wheatley: Alex, Pekka, Nick and myself have made small code changes.
  • Nick Shaw: Pekka noticed some artifacts in the shadows when he lifted the gamma, which turned out to be from single-precision float accuracy with small values in the quadratic solve. I changed to an alternative quadratic formula Kevin suggested, which improved this. I also altered the way the out of range J values are trapped for. I also removed the SSTS min/mid/max parameter and replaced it with a single peak luminance value, from which the Daniele mid value is calculated for the cusp to mid blend.
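For reference, one standard cancellation-avoiding formulation of the quadratic roots; whether this is exactly the form adopted is an assumption:

```c
#include <math.h>

/* The textbook (-b ± sqrt(b*b - 4*a*c)) / (2*a) loses precision in
   single-precision float when 4*a*c is tiny relative to b*b, because
   one root subtracts two nearly equal numbers. Computing the
   well-conditioned root first and deriving the other from the product
   of roots (c/a) avoids the cancellation. */
void stable_quadratic(float a, float b, float c, float *x1, float *x2)
{
    float q = -0.5f * (b + copysignf(sqrtf(b * b - 4.0f * a * c), b));
    *x1 = q / a;   /* well-conditioned root */
    *x2 = c / q;   /* other root, no subtraction of near-equal terms */
}
```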
  • Alex Fry: I added extra diagnostic modes to the breakout version to make troubleshooting easier. I baked out a set of LUTs of this v48. It uses ACEScct as a shaper, as we aren't trying to cover such a large range. Reach gamut is AP1 and so is the chroma compression gamut. I also added a 4000 nit version. Testing random display referred material and going inverse then forward, it behaves as expected.
  • Nick Shaw: I'm not seeing 100% invertibility with reach compression.
  • Alex Fry: No, not right to the edge. I'm not sure why.
[Alex showed a chromaticity plot where an inverted display cube doesn't quite hit the bounds of AP1]
  • Alex Fry: I hope we can diagnose this with the breakout version.
  • Nick Shaw: I found non reach gamut compression now inverts pretty accurately, but reach mode rounds off the edges of the cube. Is the value the reach compression goes to coming out the same in the forward and inverse directions? If it works the way the previous gamut intersection find did, it may not get quite the same result. The latest additions ensure finding the same gamut intersection in both directions.
  • Alex Fry: Reach compression just varies the parameters for the power P compression function.
  • Nick Shaw: To do that it has to find the intersection of the compression vector and the reach gamut boundary. The gamma is being applied to J / limitJmax, and J is different before and after compression. So the result will be slightly different.
  • Alex Fry: That's true, but I don't think it's the cause of what we are seeing in the chromaticity plot. That's a misalignment, which looks like using a wrong matrix somewhere.
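For context, a power(p) compression function of the general kind being referred to; this follows the shape of the one in the ACES reference gamut compression, and the parameters fed to it are the part that reach compression varies. It maps distances in [threshold, limit] into [threshold, 1].

```c
#include <math.h>

float compress_power_p(float v, float threshold, float limit, float power)
{
    if (v < threshold || limit < 1.0001f)
        return v;   /* below threshold, nothing to compress */

    /* Scale chosen so that v == limit maps exactly to 1.0. */
    float s = (limit - threshold)
            / powf(powf((1.0f - threshold) / (limit - threshold), -power)
                   - 1.0f, 1.0f / power);

    float t = (v - threshold) / s;
    return threshold + s * t / powf(1.0f + powf(t, power), 1.0f / power);
}
```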
  • Kevin Wheatley: We need to look into all these corner cases, but the key now is to get more people to try it.
  • Pekka Riikonen: I think in the new version the compression vector is steeper for higher peaks, due to the change in the way the focus distance is used.
  • Nick Shaw: My intent was to make the vector steepness modulated with cusp distance.
  • Pekka Riikonen: That is reasonable. But it might be better if you used limitJmax / 2 instead of the constant multiplier of 50 in this line.
  • Nick Shaw: I've only looked at the HDR on my MacBook Pro XDR display, and the match looks good to me. But we could change it if feedback from others suggests we should.
  • Alex Fry: I've been testing just going RGB to JMh and back, and it isn't quite round tripping.
  • Kevin Wheatley: The equations should invert.
  • Alex Fry: We may be mixing up the different whites in places.
  • Pekka Riikonen: That looks like the tilt we've seen before.
[Alex experimented with changing the whites used at different points in the code]
  • Kevin Wheatley: There was also discussion about cusp smoothing.
  • Alex Fry: The baked LUTs have no smoothing.
  • Pekka Riikonen: Without smoothing, because the threshold is inside the gamut there is a cusp inside the gamut, so it isn't smooth.
  • Alex Fry: I see that on plots but never on a real image.
  • Nick Shaw: A display gamut is a cube, so there is a change of direction at the edges.
  • Kevin Wheatley: So we need to look into the whites, and check for mismatches. We're aware of the cusp thing. Anything else?
  • Nick Shaw: The change of steepness of compression with peak that Pekka mentioned is obvious in the Desmos, but I haven't noticed an issue with images.
[Nick showed the effect in the Desmos]
  • Nick Shaw: In the current version I'm not seeing the big difference we saw between SDR 709 and P3 in the ARRI bar green bottle.
  • Christopher Jerome: There is still a difference, but not as much. The ALEXA 35 image of the diver with the red light is one where P3 is more blown out than 709.
  • Nick Shaw: We have to be careful comparing Rec.709 limited and P3 limited both viewed in P3. The P3 will have final clipping, but the 709 limited will have detail visible which is actually clipped in a real 709 on a 709 display.
  • Christopher Jerome: It seems to affect the rendering and the P3 rendering of the diver is not smooth.
  • Nick Shaw: I don't have the diver image, but RED Xmas is similar. I do see a band on the cheek in P3 that is less noticeable in 709. But I'm looking at both in P3. Looking at a real 709 there is clipping in the red channel on the cheek, but the P3 isn't quite clipped.
  • Alex Fry: One of the OCIO configs has 709 and P3 wrapped in P3, if you have an EDR Mac to compare on.
  • Scott Dyer: Are we happy to put out the LUT bakes Alex already made for people to look at?
  • Kevin Wheatley: Yes. We haven't changed anything today, just investigated the roots of issues.
  • Nick Shaw: We still have some NaNs we need to chase down too. I think those come from the compress mode, not the gamut compressor.
  • Kevin Wheatley: We need to look at the source of those, and the other invertibility shifts.
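A minimal sketch of the kind of round-trip check discussed above, assuming hypothetical `RGB_to_JMh` / `JMh_to_RGB` stand-ins for the actual conversions (identity placeholders here); it sweeps a grid of RGB values and reports the worst component error and any NaNs:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Identity placeholders -- swap in the real Blink/CTL conversions to test.
Vec3 RGB_to_JMh(Vec3 v) { return v; }
Vec3 JMh_to_RGB(Vec3 v) { return v; }

int main() {
    const int N = 33;                 // grid resolution per channel
    float maxErr = 0.0f;
    int nans = 0;
    for (int r = 0; r < N; ++r)
    for (int g = 0; g < N; ++g)
    for (int b = 0; b < N; ++b) {
        Vec3 in { r / (N - 1.0f), g / (N - 1.0f), b / (N - 1.0f) };
        Vec3 out = JMh_to_RGB(RGB_to_JMh(in));
        float err = std::max({ std::fabs(out.x - in.x),
                               std::fabs(out.y - in.y),
                               std::fabs(out.z - in.z) });
        if (std::isnan(err)) ++nans;
        else maxErr = std::max(maxErr, err);
    }
    std::printf("max round-trip error: %g  NaNs: %d\n", maxErr, nans);
    return 0;
}
```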

Meeting #125, November 1st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Alex Forsythe
Francesco Giardiello
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen
Christian Wieberg-Nielsen
Daniel

Meeting Notes

  • Kevin Wheatley: Nick posted his work on the gamut intersection approximation, with help from Pekka, and Alex has implemented it in the Blink.
  • Nick Shaw: I realized the discontinuity was because I was normalizing to the cusp J to apply the gamma, when I needed to normalize to the intersection with the J-axis. So I added an extra quadratic solve to find the J intersection of the line going through the cusp. To apply a gamma to something not scaled 0-1, you have to divide by something to make a particular value one, apply the exponent, and then multiply back by the same thing (see the sketch at the end of these notes). I was using the wrong value which meant it wasn't continuous across the cusp.
  • Kevin Wheatley: Nick and I were discussing what we should do when J goes above 100, where we saw the small fanning out last week. I propose that above there we keep the slope horizontal.
  • Nick Shaw: Shouldn't we just make M zero above 100? It tapers to achromatic at the peak, and it shouldn't fan out to anything non zero above that.
  • Pekka Riikonen: Should we also clip J to 100?
  • Kevin Wheatley: We deliberately let the Daniele curve go above 100, to allow a little room at the top.
  • Pekka Riikonen: I remember testing clipping J to 100 and it caused discontinuity of some ramps.
  • Nick Shaw: That may not be a problem with reach compression where anything in the locus ends up in gamut.
  • Kevin Wheatley: The other issue is that when you feed in an M of zero to the solve, the result is undefined. We need to trap for that.
  • Nick Shaw: When M is zero, no compression is needed, so output of zero M is correct.
[Alex then went through fixing some bugs in the Blink implementation of Nick's maths]
  • Alex Fry: I've hand tuned the gamma values so the approximation is always bigger than the true hull. The penalty is slight clipping instead of not being able to hit the edge.
[Alex added traps for M=0 and J>100]
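A minimal sketch of those two traps, with illustrative names and thresholds (the actual Blink code may differ):

```cpp
// Guard the boundary solve: achromatic input needs no compression, and
// above the J ceiling M tapers to zero rather than fanning out.
const float EPS   = 1e-6f;    // the "M < a very small number" trap
const float J_TOP = 100.0f;   // the peak J discussed above

// Stand-in for the usual intersection solve + compression.
float solveAndCompress(float J, float M) { return M; }

float compressedM(float J, float M) {
    if (M < EPS)   return 0.0f;   // zero M in, zero M out is correct
    if (J > J_TOP) return 0.0f;   // taper to achromatic above peak
    return solveAndCompress(J, M);
}
```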
  • Pekka Riikonen: I noticed previously that R=G=B came out with a slightly non-zero M. So we should investigate that.
  • Nick Shaw: I seem to remember when in Jab, a and b being zero didn't lead to zero M. For now we can trap for M < a very small number.
  • Kevin Wheatley: It may be multiple degrees <> radians conversions. The issue with tiny values is they are small, but non-zero and may be significant.
  • Alex Fry: For now I'm glad it's inverting properly. It's currently using hand tuned gamma values and linear interpolation, so could be improved. We need to find a way to calculate the appropriate gamma values at different hues. Maybe sample half way up and calculate a gamma.
  • Kevin Wheatley: That may not be enough, because the compress mode means it's sometimes wavy.
  • Nick Shaw: An s-shape might appear to be on a straight line if you sample the mid point so you would use a gamma of 1.0, which wouldn't work.
  • Alex Fry: I think it may be worth baking some new LUTs of this version.
[Kevin showed a 3D plot made with his C++]
  • Kevin Wheatley: There are some odd shapes that are wavier than you might think.
  • Nick Shaw: A while back I posted a GIF which showed the cusp shape around different hues.
  • Alex Fry: There is an s-curve at points.
  • Kevin Wheatley: Some of the calculations in Nick's code are independent of source value, so could be pre-calculated rather than done per pixel.
  • Alex Fry: At the same time as we calculate the cusp lookup. I'll bake some new LUTs with the caveat that the hand tuned gamma is only valid for Rec.709 so people can test it. We have to decide what reach mode should reach to.
  • Kevin Wheatley: I think AP1.
  • Pekka Riikonen: If the gamut compression uses AP1, I think the chroma compression space should be the same. Although it will have more clipping in the blues.
  • Alex Fry: I'll call the new LUTs v47, and look at how we might procedurally generate those gamma values.
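A minimal sketch of the normalize/apply/denormalize pattern Nick describes above, using the J-axis intersection of the line through the cusp as the reference (normalizing to the cusp J instead is what caused the discontinuity):

```cpp
#include <cmath>

// Apply an exponent to a value that isn't scaled 0-1: divide by a reference
// so that reference maps to 1.0, apply the exponent, multiply back.
float applyGamma(float J, float Jintersect, float gamma) {
    return Jintersect * std::pow(J / Jintersect, gamma);
}
```

With `Jintersect` taken from the extra quadratic solve, the curve stays continuous across the cusp.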

Meeting #124, October 25th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey Mathias
Carol Payne
Pekka Riikonen
Christian Wieberg-Nielsen
Daniel 

Meeting Notes

  • Kevin Wheatley: Daylight Saving changes are coming up. Any thoughts on convenient or inconvenient meeting times? I looked at the compress / decompress at the beginning of the DRT. I now kind of understand what it does. I noticed that doing a compress then uncompress round-trips to expected precision for positive values, for negatives it doesn't quite. I'm not sure why yet. Are we relying on that for our inverses? It also only guarantees that a certain range of values becomes positive. I've also been adding the other bits we use to my C++ which aren't in the stock model.
  • Nick Shaw: At the end of the model all display values should be positive, so hopefully round-trip errors aren't an issue there. So using the full inverse to get display referred values on the screen may not be affected by the issue. But we should still investigate.
  • Kevin Wheatley: We may or may not need compress mode on the way out, because values should be positive.
  • Nick Shaw: Using max(RGB) like the RGC means the max value is unchanged by the compression, so you have the same max value on the inverse. Using distance from an average, the average may be different when inverting.
  • Kevin Wheatley: I need to investigate more how it works.
  • Nick Shaw: I posted in ACES Central a new Desmos which adds a gamma curve in the top part, to try to better approximate the gamut shape. I picked one kind of inverted gamma curve. But there are two options. I don't know which better matches the real shape. We should look. The intersection maths in the Desmos is not what is currently in the DRT. I showed it earlier in the year as a possible option which is more accurately invertible, but it isn't in the code. It only approximates the intersection with a gamma curve, but is close for small gamma values. But the gamma is only an approximation of the true shape anyway, so the path of my intersection is probably as valid as the gamma curve.
  • Alex Fry: If m (the slope of the compression line, as in y = mx + c, not our M) is zero it matches exactly. For non-zero slope it seems to track better in the bottom part, and there is a jump at the cusp.
  • Nick Shaw: My maths obviously isn't as symmetrical as I thought! I realize I was assuming the slope was one way above the cusp and the other way below it. But the change is at focusJ, not the cusp. I guess there has always been a small jump at the cusp but it was so small I never noticed. I need to tweak the maths. It would work if cusp-mid blend was set all the way to cusp. But we don't necessarily want that.
  • Alex Fry: I've put Nick's maths into my code, as an alternative boundary method. I have a 36 entry UI for gamma values for the top part at 10 degree hue intervals, lerped between. I tuned them by eye to better match the hull.
[Alex showed a 3D plot of the cube that the dominant wavelength image gets mapped to]
  • Alex Fry: I can get the match a lot better but still not perfect. Maybe I need more than 36 points, or better interpolation.
  • Nick Shaw: And in some places the real shape is an s-curve that dips under and over, so a gamma can't match it.
  • Kevin Wheatley: We need to find the values which are a minimal difference but still larger than the real gamut.
  • Nick Shaw: Presumably we would populate a table of gamma values in the init() at the same time as we populate the cusp table. We talked before about fitting a polynomial. But if we do that, then the intersection solve becomes even more complex.
  • Thomas Mansencal: And also it needs to be inverted for the inverse.
  • Nick Shaw: So if I can fix the maths, a simpler gamma approximation that sits outside the hull may be better.
  • Christopher Jerome: Are there drastic changes in the parameters around the problem areas?
  • Alex Fry: It's through the yellow to magenta range there are issues.
  • Christopher Jerome: Are those an obviously poor fit to the gamma?
  • Alex Fry: I need a better way to visualize the error.
  • Nick Shaw: Maybe a 2D visualization of a hue slice would be better. Does your current implementation include cusp smoothing?
  • Alex Fry: No. It's just what's in your Desmos.
  • Nick Shaw: The Desmos maths just has a conditional for using the expression above or below the cusp. Pekka's smoothing works out both the above and below solutions for all the values and then does a smooth minimum between them (see the sketch at the end of these notes).
  • Pekka Riikonen: Smoothing set to zero they just meet at the cusp. Higher values make it curve around.
  • Alex Fry: That would round the cusp inwards. Do you have an additional scale factor?
  • Pekka Riikonen: Yes. I move the cusp outwards first, multiplying y and x. Could you do a curve for the top part with the version of the maths that's there now?
  • Nick Shaw: Probably, but it might still have the same discontinuity problem, and I didn't notice. I will try to look at the maths over the next week, but I'm away from home.
  • Kevin Wheatley: We want to be careful not to have too many parameters that have to be set perfectly to get a smooth result. Before it was smoothly wrong, but now it's quite wobbly towards white.
  • Alex Fry: I need to fix the iterative gamut compressor to have a ground truth reference.
  • Kevin Wheatley: We need to bulge it out and then clamp. We'll get small deviations, but only near saturated values, so maybe less visible. Where we are getting NaNs maybe we could set a value like -999, so you know there is no solution to the quadratic there.
  • Nick Shaw: In this Desmos for J above 100, if you pull M out too far there is no solution for a slope which passes through JM.
  • Alex Fry: In the old version there is a bit where values above 100 poke out then stop.
  • Kevin Wheatley: At the limit where there is no solution. Seems we should clamp the values, so J above 100 gives M=0.
[Alex made a visualization which showed the delta between the approximation and the true gamut as a line, and colored it to show the direction of the difference]
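A sketch of the smooth-minimum blend Pekka describes, using a common polynomial formulation (the Blink's exact function may differ); smoothing of zero reduces to a hard minimum, so the two solutions just meet at the cusp:

```cpp
#include <algorithm>

// Polynomial smooth minimum: k = 0 gives a hard min; larger k rounds the
// join between the above-cusp and below-cusp solutions over a wider range.
float smin(float a, float b, float k) {
    if (k <= 0.0f) return std::min(a, b);
    float h = std::clamp(0.5f + 0.5f * (b - a) / k, 0.0f, 1.0f);
    return b + (a - b) * h - k * h * (1.0f - h);
}
```

Because the rounding cuts inwards, Pekka scales the cusp outwards first so the blend doesn't eat into the gamut.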

Meeting #123, October 18th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: Pekka made some updates to SDR HDR matching.
  • Pekka Riikonen: v46 only changes HDR. People had commented that CAM DRT HDR matches ACES 1.x SDR better than it matches CAM DRT SDR. It seemed it was a saturation difference, so I lowered HDR saturation. I also fixed a bug. I also noticed the higher contrast of ACES 1.x made it look more like the HDR. The jump between ACES 1.x SDR and HDR seemed smaller than with CAM DRT. Also the skew towards the primary in ACES 1.x makes colors look more saturated, so maybe more like the Rec.2020. Other hue skews cause other issues and worse matches, so I don't think we should worry about that.
  • Alex Fry: Is your reduced HDR saturation a scale?
  • Pekka Riikonen: A value called `sat` in the code controls how saturation scales with peak luminance.
  • Kevin Wheatley: Was the bug you fixed affecting the shifts we were seeing last week?
  • Pekka Riikonen: No. That happens without chroma compression. That was just a bug introduced in v45. I also slightly adjusted the chroma compression space in v46.
[Pekka showed his plot of the compression space from his post]
  • Kevin Wheatley: I see it doesn't quite enclose AP1 red.
  • Pekka Riikonen: We can extend it. I also added these primaries as an option to the gamut mapper, and added AP0 and AP1 as options for the chroma compression.
  • Christopher Jerome: So this is the rendering space of the chroma compression?
  • Pekka Riikonen: Yes.
  • Christopher Jerome: Is it also the maximum something can be restored to in an inverse?
  • Pekka Riikonen: No. Only the gamut mapper defines that. This is the space the chroma compression affects. Anything outside it is unchanged.
[Pekka showed his plots of original scene chromaticities and scaled chromaticities]
  • Pekka Riikonen: The shift here is interesting.
  • Nick Shaw: Is that the white point moving from ACES white to D65?
  • Pekka Riikonen: We are discounting the illuminant.
  • Nick Shaw: There is still an adaptation from scene white to display white [so equal RGB in maps to equal display RGB]
  • Kevin Wheatley: It looks roughly like D60 to D65.
[Pekka showed the plot of the chroma compression]
  • Pekka Riikonen: You can see that beyond a point nothing moves.
  • Nick Shaw: Is there any feathering into what doesn't move?
  • Pekka Riikonen: The chroma compression has no continuous derivative. But the cusp is outside any display gamut, so I've never seen an artifact from it.
  • Alex Fry: If the gamut compressor only reaches to this same boundary, they don't end up on screen.
  • Kevin Wheatley: They still end up on screen but we don't assign any great merit to values out there.
  • Pekka Riikonen: The compression gets less the further out we go. But personally I would like to compress cyans more. But that would affect inversion.
  • Christopher Jerome: So the chroma compression is opposite to the gamut compression, in that it compresses more closer to neutral.
  • Pekka Riikonen: They are two sides of the same coin. In theory you could do both at the same time.
  • Nick Shaw: The chroma compression curve is M against J, where gamut compression is M against M. So I'm not sure they could be combined.
  • Kevin Wheatley: I've been looking at the early stages of the algorithm in my implementation, and adding back what is missing. I tried to find the reference for Bjorn's compress mode. I want to understand what it's doing and why.
  • Pekka Riikonen: I looked at it a bit in this thread. I tested the ACES gamut compressor. I also tested only using compress mode on the way in.
  • Nick Shaw: I think the intent of it is to make all LMS values positive when they go into the non-linearity, as that doesn't handle negatives well. It squeezes them first then unsqueezes them (see the sketch at the end of these notes). Although the unsqueeze is happening to something which is now non-linear.
  • Kevin Wheatley: There are alternatives for the non-linearity in the model. We need to understand and document it. And should we be using it on the way back to the display?
  • Pekka Riikonen: I found the link for using compress mode in extended Oklab.
  • Christopher Jerome: I think what ZCAM does with LMS primaries is looking to solve the same issues.
  • Nick Shaw: I think we already had this problem with ZCAM when we introduced compress mode. [actually I believe we had already introduced Hellwig]
[Alex showed the effect of different options on the mismatch between the approximation and real hull]
  • Alex Fry: It matches well in most places but in one area it bulges out in part and another bit is missed out. I looked at Desmos, and I can make the top boundary bulge out with a gamma, but haven't managed to get that into the intersection code. I want to first make it work with a single gamma value, and then I will try to lerp between an array of gammas for different hues. Ideally we would fit a function.
  • Nick Shaw: The Desmos you showed doesn't include the intersection maths. That's in this one. There isn't an analytic solution to the intersection of a power curve and a straight line. What I have is an approximation, including a 'fudge factor'. I'll look at how to add a curve to the top intersection.
  • Pekka Riikonen: We could expand things by compressing to a position close to the boundary rather than the boundary itself. Maybe ignore it in the  inverse.
  • Alex Fry: The approximation is to improve speed. But maybe if it causes problems we shouldn't use it.
  • Pekka Riikonen: Reach mode needs a perfect boundary to land things on.
  • Kevin Wheatley: A close approximation that errs on the side of being slightly larger means only minimal clipping. Currently it's a bug which means we can't invert everything.
  • Alex Fry: I pushed an update to the LUT bakes to remove NaNs, which cause problems for some systems.
  • Nick Shaw: Should we revisit the shaper space for the LUTs before we put out something for wider testing? Daniele commented that he saw artifacts from the LUTs.
  • Alex Fry: Limiting it to AP0 might help.
  • Kevin Wheatley: Or even something more aligned with display spaces. Anything else?
  • Pekka Riikonen: I found NaNs come from compress mode.
  • Kevin Wheatley: I'll look into that in my code.
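A very rough illustration of the intent Nick describes above (this is not Björn's actual maths): scale the components toward the achromatic axis just enough that the minimum becomes non-negative before the non-linearity:

```cpp
#include <algorithm>

// Pull LMS toward the achromatic axis so no component is negative going
// into the non-linearity. Illustrative only -- not the actual compress mode.
void squeeze(float lms[3]) {
    float a  = (lms[0] + lms[1] + lms[2]) / 3.0f;       // achromatic
    float lo = std::min({ lms[0], lms[1], lms[2] });
    if (lo >= 0.0f || a <= 0.0f) return;                // nothing to squeeze
    float s = a / (a - lo);                             // maps lo exactly to 0
    for (int i = 0; i < 3; ++i) lms[i] = a + (lms[i] - a) * s;
}
```

The matching unsqueeze applies the reciprocal scale, but after the non-linearity it is no longer an exact inverse, which is the round-trip caveat raised above.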

Meeting #122, October 11th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Chris Brejon
Daniel Brylka
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Alex Fry: I've updated the LUT repo with v45. The DCTLs now have the output color space tag, so Mac color management and SDI/HDMI metadata are correct and trigger HDR display if available. I found some bright pixels going black with the inverse. I need to investigate more.
  • Nick Shaw: Last week Pekka posted his dominant wavelength image generator. I've built my own version which includes the line of purples, and has a few extra controls.
  • Kevin Wheatley: That image reveals a few things. There are some very high M values at the red and blue ends of the purples. I've been testing how my C++ of the base model matches the Blink. It matches pretty well, so what we saw last week isn't from that. But the DRT version has extras like compress mode, which I don't have. I also stripped out a few unused parameters. Some very dark pixels don't inverse well, and they are values where my code doesn't match so well. It's right down in the noise so I'm not too concerned.
  • Alex Fry: I tested with a trailer from the internet, going inverse and forward. Going backwards from SDR and then through a forward HDR transform actually looks pretty good. But some orange values go black. They are right against the edge of the display cube, and so outside the reach of the DRT.
  • Nick Shaw: It makes sense that a value the DRT can't reach can't be inverted from.
  • Alex Fry: It could be from one of our multiple approximations. It shows why we need to reach the edge.
  • Kevin Wheatley: Smoothing inside a cube means it can't reach corners.
  • Alex Fry: The logo red round trips reasonably well.
  • Christopher Jerome: If the inversion issues are in the gamut mapping, would it go away if you converted to P3 before inverting?
  • Alex Fry: Now there are differences in the 2nd decimal place. The problem pixels still go black though. But they invert to less negative.
[Kevin showed a plot of how high the red and blue ends of the line of purples go]
[Pekka showed Nick's image plotted in CIExy through v45 XYZ <-> JMh]
  • Pekka Riikonen: If we go to ACEScct 1.468 (linear 65504.0 - Half Max) we see interesting effects particularly along the purples.
  • Alex Fry: Plotting Nick's image in display RGB through the DRT we see a void along the orange edge. That's where the problem pixels sit. I suspect that's above the gamut cusp in the area where our straight line approximation cuts a bit off. The iterative solver hits the boundary.
  • Pekka Riikonen: Cusp smoothing has not been optimized for reach mode. I think reach mode doesn't invert so well.
  • Kevin Wheatley: I checked Nick's image going XYZ < -> JMh in my C++ and the round trip error is very small.
  • Pekka Riikonen: The PowerP compression curve could be an issue. I'm not sure if it inverts for very high limits (see the sketch at the end of these notes).
  • Alex Fry: I think I've tested pretty high values and it seems to invert.
  • Alex Forsythe: We did a demo at the SMPTE/Colorist Society event. General reactions were good. Some felt that the SDR of ACES 1 was preferable, but that it also seemed to match the HDR of version 2 (v44). 2 or 3 people said that.
  • Kevin Wheatley: That is surprising.
  • Alex Forsythe: People really thought skin tones were more natural in the new version.
  • Nick Shaw: That seems contradictory to say skin tones are better than the old version but the old SDR matches the new HDR.
  • Alex Forsythe: We need to look more into this. Two of those who said this weren't colorists. We're going to be talking about ACES 2.0 in the TAC meeting tomorrow.
  • Pekka Riikonen: When I plot the inverse of reach mode, a lot of values explode outside the locus. Even AP1 reach, although less. I think if we use reach mode, the chroma compression should use the same gamut.
  • Alex Fry: I suspect it's the approximation vs the actual boundary.
  • Pekka Riikonen: What do we do about that?
  • Alex Fry: The forward direction needs to bias towards puffing out of the cube.
  • Nick Shaw: Can we just add a small gamma bend to the top part instead of the straight line?
  • Kevin Wheatley: We could sample an additional point or two between the cusp and peak, and fit a polynomial for each hue, calculated in the init() function. A simple power curve would be easier.
  • Alex Forsythe: How would we do curve fitting in CTL, which runs per pixel?
  • Alex Fry: It would be useful to just add a power function and check if puffing it out like that helps.
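For reference, a sketch of the power(p) compression family from the ACES 1.3 reference gamut compressor, which I believe is the same family as the DRT's PowerP curve (parameters here are illustrative). The forward curve asymptotes at thr + scl, so the analytic inverse only exists below that, which is the high-limit invertibility concern:

```cpp
#include <cmath>

// power(p) distance compression as in the ACES 1.3 RGC. dist < thr passes
// through; dist == lim maps exactly to 1.0; the curve asymptotes at thr + scl.
float powerPCompress(float dist, float lim, float thr, float pwr, bool invert) {
    if (dist < thr) return dist;
    float scl = (lim - thr) /
        std::pow(std::pow((1.0f - thr) / (lim - thr), -pwr) - 1.0f, 1.0f / pwr);
    float nd = (dist - thr) / scl;
    float p = std::pow(nd, pwr);
    if (!invert)
        return thr + scl * nd / std::pow(1.0f + p, 1.0f / pwr);
    if (dist >= thr + scl) return dist;   // at/beyond the asymptote: no inverse
    return thr + scl * std::pow(-(p / (p - 1.0f)), 1.0f / pwr);
}
```

As lim grows, scl grows and the curve flattens, so small errors near the asymptote blow up in the inverse.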

Meeting #121, October 4th, 1pm PT

[Meeting Recording]

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We have a planned showing of the current candidate at a SMPTE event. And we need to list the remaining issues.
  • Scott Dyer: At the SMPTE event tomorrow night they will be talking about HDR grading. I'm preparing a 4-up of ACES 1.3 Rec.709 and HDR, and the same for v44. Just showing some stills.
  • Kevin Wheatley: As Daniele commented, be aware there are shadow issues in the LUT implementations. There have been discussions about reaching the red corner, and Pekka showed plots. Pekka said adjusting a threshold value helped reach the corner and Daniele asked if that value varied per color space.
  • Pekka Riikonen: I can get to the corner with negative blues, but not with positive values. My plots have cusp smoothing off, so it's not that.
  • Kevin Wheatley: It's not to do with the position of the red primary compared to the locus, or reach mode would affect it.
  • Pekka Riikonen: Actually in reach mode it does get to the corner, but clips more. AP1 reach, at least.
  • Kevin Wheatley: Jeffrey asks if it's related to the green/magenta tilt we've seen.
  • Pekka Riikonen: I've tested stock primaries, and it's the same. I haven't tested compress mode off. But we need that.
  • Kevin Wheatley: I suppose the cyan magenta hook discussed previously is related. Any other issues?
  • Pekka Riikonen: A while ago I noticed a XYZ -> JMh -> XYZ round trip isn't perfect. I don't know if it's a bug or a precision issue. The spectral locus image shifts.
  • Kevin Wheatley: With my C++ implementation I only see round trip differences in the 7th decimal place.
  • Pekka Riikonen: Thomas's Colour version is the same.
  • Kevin Wheatley: We need to check the Blink. Maybe there's something like chromatic adaptation differences in directions.
[Pekka showed a chromaticity plot with the round-trip showing shifting]
  • Nick Shaw: The shift is varying with luminance, so points turn into lines on a CIExy plot.
  • Pekka Riikonen: I was only using XYZ <-> JMh with compress mode and custom primaries, but I forget which version. There's a Nuke version of the spectral locus image if people want it for testing.
  • Nick Shaw: I wrote a progress summary for next week's TAC, if people want to read that and see if they agree with what I wrote.
[Alex showed his 3D visualization of the red corner]
  • Alex Fry: There seems to be a bug with iterative gamut compress so it bulges at the top. I'll turn that off. In locus reach mode we seem to get within one code value of the corner. The locus lookup is an externally calculated look-up.
  • Pekka Riikonen: There are multiple look-ups with interpolation. The cusp table is at 1 degree intervals. Nothing is exact. Would we use the spectral locus if we go with reach mode?
  • Alex Fry: Spectral locus is purer, but I think for real world pipelines it's AP1. People are working in AP1, and want to hit the corners from that.
  • Pekka Riikonen: The blue corner is the only issue because it's so close to display blue. There's not much compression there. 
  • Kevin Wheatley: We might have to move it out a bit.
  • Pekka Riikonen: If we use reach mode, I think the chroma compressor should use the same space, whatever we choose.
  • Alex Fry: Or should the DRT be tolerant of more stuff, and AP1 is a medium term compromise as a working space? Does AP0 make more sense. The locus is more independent for an abstract display transform.
  • Kevin Wheatley: It's still dependent on an observer, and that observer is not a camera. If somebody shoots a not very saturated cyan, they expect it to hit the Rec.709 boundary [ColorChecker cyan is outside Rec.709] but if the locus compresses to the boundary, that cyan will end up deficient.
  • Alex Fry: Then it has to be graded there.
  • Kevin Wheatley: Grading controls can't easily shape something to the locus. It's easier to reshape to a 3-primary gamut. I think we need a well described boundary, but it needs to align well with incoming images, which are aligned with 3 primaries and a white point.
  • Alex Fry: Do we need two candidates for people to try?
  • Pekka Riikonen: A question is where we want to invert to? AP1 or slightly larger?
  • Kevin Wheatley: Inverting Rec.709 or P3?
  • Alex Fry: Each gamut boundary will invert to the same place with an inverse of its own transform, because that's what the reach compressor does.
  • Kevin Wheatley: Inside a range you get the same picture for any gamut, but more saturated colors will behave differently.
  • Nick Shaw: And the threshold is a percentage of the target gamut (see the sketch at the end of these notes). So for a larger gamut like P3, the bit inside 709 should stay the same, and you get a little more before the compressor kicks in.
  • Kevin Wheatley: Maybe we define a reference inverse relative to one target. And say this will put your display values somewhere sensible. We could test a camera shaped gamut for forward, but limit the inverse to AP1 for the reference gamut.
  • Nick Shaw: Aren't you only using the P3 inverse if you have a P3 image that you want to end up back in the same place on a P3 display. Having a 709 image, going through the 709 inverse and then using the forward P3 transform to a P3 display, that might be a problem.
  • Kevin Wheatley: That would be quite common. We could evaluate mismatched inverses. We have two issues to investigate. Why is the inverse JMh transform not getting back to quite where it should? Are the approximations compounding? The other question is what parameters we pick. That one will partly be solved by feedback from others. But we need to fix the bugs.
  • Pekka Riikonen: Are we allowed to come up with "AP2"?
  • Kevin Wheatley: Inside the transform, yes, and we can define a range of acceptable values. But we can't externalize it as a space. So how do we track down any bugs? Go step by step forward and back, going one step further each time.
[Alex rigged up a quick forward / inverse test, and Kevin showed his C++ version]
  • Kevin Wheatley: I'm seeing a difference in the 2nd decimal place for values of about 21000 in my C++, which is very minor as a percentage. But I can't change my primaries, and don't have compress mode.
  • Alex Fry: I still see the issue without compress mode and with stock primaries.
  • Christopher Jerome: Is the partial adaptation affecting things?
  • Nick Shaw: In the DRT we set degree of adaptation to 1.0 when we use "discount illuminant".
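A deliberately simplified sketch of Nick's point above (a linear ramp stands in for the DRT's PowerP curve): the threshold scales with the target gamut boundary, while the reach limit is absolute, so the region inside the threshold renders identically for Rec.709 and P3 targets:

```cpp
#include <algorithm>

// Illustrative only: compression kicks in at a percentage of the *target*
// gamut boundary M, and maps everything up to the absolute reach boundary
// (e.g. AP1) onto the remaining headroom.
float compressM(float M, float boundaryM, float reachM, float pct) {
    float thr = pct * boundaryM;             // e.g. pct = 0.9
    if (M <= thr) return M;                  // inner colors untouched
    float t = std::min((M - thr) / (reachM - thr), 1.0f);
    return thr + (boundaryM - thr) * t;      // reach boundary -> gamut boundary
}
```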

Meeting #120, September 27th, 1pm PT

Attendees

Kevin Wheatley
Scott Dyer
Nick Shaw

Chris Brejon
Daniel Brylka
Chris Clark
Alex Forsythe
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Pekka posted about his latest version using techniques from the gamut mapper in the chroma compression.
  • Pekka Riikonen: The chroma compression wasn't exact in where it reaches to, and Alex suggested using the "reach mode". I've done that in the version called v45-pex. The chroma compression uses a space I came up with, and I calculated the maximum M value at the display white J. M is normalized by this value multiplied by a gain value in the UI, currently set to 1. This means it won't compress or expand beyond that compression space (see the sketch at the end of these notes). What I'm currently using is close to AP1. I was able to use a wider compression gamut than before, because it is now more accurate where it goes to. This gives us what Alex was looking for. It greatly improved the inverse compared to v44. It changes the rendering a little. Magenta is lighter. Saturation changes a little, which we can tweak later if we want. I think it looks good, even with the default gamut mapper, not the reach gamut mapper.
  • Kevin Wheatley: So we can't combine this with the gamut mapper, because it uses different primaries. One concern I had was that the combined effect of two compressions could introduce kinks, perhaps outside that space.
  • Pekka Riikonen: I've noted before that my compression curve does not have a continuous derivative. I haven't found one that does and also has a closed-form inverse. But any kinks should be outside the gamut, at least for P3 and Rec.709.
  • Kevin Wheatley: We need to check the effect of the composite compression around the hues. There could be weirdness from applying the same techniques with two unaligned gamuts.
  • Pekka Riikonen: That could happen with the current one as well. If we used the same primaries for chroma compression and gamut compression the look would be different for each target.
  • Kevin Wheatley: Have you looked at SDR / HDR matching?
  • Pekka Riikonen: I have. With the old method the limit was scaled with peak luminance. Now it's constant. I need to find a different approach.
  • Kevin Wheatley: Won't the use of max M for peak J mean you have different scaling for HDR? You compress less for HDR because you start bigger, so you perceive it as more colorful.
  • Pekka Riikonen: The distance from max M to the locus increases with peak luminance. But it's a normalization factor, not a boundary. It's normalized to the cusp of the compression space. But it might work if we didn't do that.
  • Kevin Wheatley: Improving the inverse makes it promising.
[Pekka showed the new version vs v44 with images]
  • Pekka Riikonen: Unfortunately reds do desaturate faster with v45. The compression goes higher as J increases, so reds compress more. You can also see it with magenta with this flower.
  • Kevin Wheatley: Does it affect the ability to reach the corners.
  • Pekka Riikonen: It still has the issue with the red corner, but the others seem fine.
[Pekka showed a 3D plot with a small slice missing at the red cusp, and pointed to an odd distortion around magenta / blue / cyan]
  • Kevin Wheatley: That may be what Jed is referring to.
  • Pekka Riikonen: I think the red corner is reachable if we reduce compression. AP1 reach mode gets there, but everything else is pulled in.
  • Kevin Wheatley: AP1 is a little arbitrary, so maybe there's a better option.
  • Pekka Riikonen: Maybe the chroma compression space, which is a little bigger than AP1. For rendering AP0 actually looks quite good, but even more doesn't get reached. I think the chroma compression works well, but for gamut mapping, how far do we want to reach? It's still quite arbitrary.
  • Christopher Jerome: What was your objective when adjusting the primaries?
  • Pekka Riikonen: One thing was the inverse, and the other factor was that the space could be bigger because it is more accurate. Also there's no eccentricity factor any more as it's no longer necessary for a good inverse.
  • Christopher Jerome: Is there a place for the stock Hellwig or HK eccentricity?
  • Pekka Riikonen: Luke said the way we use M it would make no difference. It has a small impact on the look of the rendering.
  • Christopher Jerome: On the plots the inverse look very good.
  • Pekka Riikonen: Anything that goes outside on the inverse is the gamut mapper. The chroma compression only affects the interior. The chroma compression works the same as before. I'm just changing the limit where it stops compressing.
  • Kevin Wheatley: In the plot green and blue invert to clear points, but the green looks "chopped off".
  • Pekka Riikonen: Yes, and I don't understand why it doesn't go to the corner of the compression space. The old one was similar, but did have a point.
  • Kevin Wheatley: Judging things on a chromaticity plot isn't always the best.
  • Pekka Riikonen: I'm worried about the powerP curve. The higher the limit is the flatter it gets, and it may not be fully invertible. We could use a different compression function. Interestingly, AP1 reach does invert all the way to green.
  • Kevin Wheatley: It may be to do with the cube being projected to the flat CIExy plane. It may not be meaningful on a display.
  • Pekka Riikonen: The effect in yellow comes from the slight inaccuracy of the gamut approximation there. I could maybe fix it by increasing cusp smoothing.
  • Kevin Wheatley: So what are the pros and cons of the new version? The inverse is better. Is there more code?
  • Pekka Riikonen: There is an additional table, because I'm using a different gamut for compression.
  • Christopher Jerome: Could you simplify gamut compression by only doing reach mode once?
  • Pekka Riikonen: If they reached to the same place the lookup could be reused.
  • Christopher Jerome: Is there potential to refine one of the other methods of gamut mapping, when the space is more well known and within a reasonable boundary?
  • Pekka Riikonen: Maybe. The issue with the gamut mapper is what is the target we want to invert to. Is it AP1? Then that's what we have to reach to. But then images like blue bar will get clipped because there's a lot of blue outside AP1.
  • Kevin Wheatley: Mapping to different display gamuts you map to different places, but we ideally want a common rendering space.
  • Pekka Riikonen: This is the best inverse I've seen. We go out a little on red, which is why we miss that corner. The blues are the kind you see from cameras, which are negative in AP1.
  • Kevin Wheatley: Have you inverted a P3 input?
  • Pekka Riikonen: It's worse.
  • Kevin Wheatley: I'm less interested in better or worse, but how different is it? And what do you get if you invert 709 values in a P3 container? Same as 709 or different?
  • Pekka Riikonen: I haven't looked at images with the inverse, except forward, inverse, forward, which works fine.
  • Kevin Wheatley: We need to examine this new version with some test cases.
  • Pekka Riikonen: It's yet another chroma compression.
  • Kevin Wheatley: And as with all of them, the stuff in the middle doesn't change much. It's the edges.
  • Pekka Riikonen: I'll make LUTs for this version and when Alex is back next week we can discuss merging this.
  • Kevin Wheatley: Did you look at the issue Daniele posted?
  • Pekka Riikonen: That is caused by the gamut mapper. Without it it is straight. For Jed's post, blue to magenta has to turn, but it could be smoother. I'm not sure what causes that, or why reds go orange.
  • Kevin Wheatley: We need to look further at those. And we need to look at v45 with test cases to see if it's better or worse. Or where it's better and where worse.
  • Pekka Riikonen: There's an ACES unit test image (frame 449). Some parts of that collapse to black in our transform, where ARRI Reveal renders it normally.
  • Christopher Jerome: Is that just on the latest version?
  • Pekka Riikonen: All of them.
  • Kevin Wheatley: Something to look into.
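A rough sketch of the normalization Pekka describes at the top of these notes (names hypothetical; the stubs stand in for the real per-hue lookup and curve): M is normalized by the compression-space maximum M at display-white J times the UI gain, compressed, then rescaled, so nothing at or beyond that boundary is compressed or expanded:

```cpp
// Hypothetical stand-ins for the real per-hue table lookup and curve.
float reachMaxM(float h)        { return 100.0f; }  // max M of the space at display-white J
float compressionCurve(float m) { return m; }       // the existing chroma compression curve

float chromaCompressM(float M, float h, float gain /* currently 1.0 */) {
    float limit = reachMaxM(h) * gain;
    if (M >= limit) return M;        // outside the compression space: unchanged
    float m = M / limit;             // boundary of the space normalizes to 1.0
    return compressionCurve(m) * limit;
}
```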

Meeting #119, September 20th, 1pm PT

Attendees

Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Daniele Siragusano
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: Luke and Alex apologize for their absence. Alex published the combination of his and Pekka's work. Jeffrey posted about seeing some NaNs, but Alex did do this in a rush.
  • Pekka Riikonen: I think NaNs come from compress mode.
  • Kevin Wheatley: We need to trace values through to find the source of NaNs.
  • Pekka Riikonen: The quadratic in the gamut mapper can also produce NaNs.
  • Kevin Wheatley: We could look at other quadratic formulae. Or are there no roots there.
  • Pekka Riikonen: Limiting M makes NaNs go away.
  • Nick Shaw: The quadratic solve finds the angle for the compression vector which hits the J axis at a particular point. But outside the intended range there may be no solution which intersects at the right point (see the sketch at the end of these notes).
  • Kevin Wheatley: Did anybody look at v44 and 44b?
  • Scott Dyer: I had a look. But didn't have a chance to do side by side with HDR. I saw some NaN/LUT artifacts, which I'm sure we can fix.
  • Kevin Wheatley: Our experiments are to match HDR and SDR and 709 and P3. Then we look at the problematic images, and check they are better than before.
  • Christopher Jerome: I think the ARRI bar image where P3 clipped before 709, it's now about equal.
  • Kevin Wheatley: Hopefully in the coming weeks we can kill the obvious bugs, then noodle with parameters to help with any discrepancies. It would be helpful if people can comment on problem images in the DRT thread on ACES Central.
  • Christopher Jerome: Added control over when something hits the boundary and at what angle would help.
  • Kevin Wheatley: Are there any tests people think should be done that they can't do themselves?
  • Pekka Riikonen: HDR SDR comparison. I will push some small tweaks I've made to parameters and algorithms in v44.
  • Daniele Siragusano: Should 44 and 44b reach the mastering space bounds? They struggle to hit the red corner. Are we still wanting to hit the corners?
  • Pekka Riikonen: Hit or get very close. 44 expands colorfulness more, where the previous version maybe went over. In 44b the locus reach mode compresses more.
[Daniele showed in Baselight that it wasn't hitting the Rec.709 red corner]
  • Nick Shaw: It could be the LUT shaper is limiting it. That is the same one we've always used, and the DRT has changed.
  • Kevin Wheatley: We want to hit 709, and P3 would be nice. We have controls, particularly in the gamut mapping, that may help.
  • Nick Shaw: One thing I saw in the new Resolve 18.6 is that custom DCTL ODTs can be tagged with their color space, for OS color management and metadata. At IBC I mentioned we were using Blink not DCTL for prototyping, because Blink offers an init() function where DCTL runs everything per pixel. Nothing was promised, but I was told adding an init() to DCTL was not impossible. Maybe it would help if we showed them our use case as an example.
[Pekka showed a 3D JMh plot of v44b]
  • Pekka Riikonen: It doesn't hit the red corner in the Blink either. This is locus reach. AP1 reach is better. It's still missing a bit along the cyan edge.
  • Kevin Wheatley: That makes sense given display encodings cut a lot of the green to blue area off.
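A sketch of guarding the gamut-mapper quadratic so that inputs with no real root return a sentinel instead of a NaN (the -999 idea suggested in meeting #124 above):

```cpp
#include <cmath>

const float NO_SOLUTION = -999.0f;  // sentinel: no slope reaches the intercept

// Guarded quadratic solve for the compression-vector slope; the root choice
// depends on the actual maths, so the negative branch here is illustrative.
float solveSlope(float a, float b, float c) {
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f || a == 0.0f) return NO_SOLUTION;
    return (-b - std::sqrt(disc)) / (2.0f * a);
}
```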

Meeting #118, September 13th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Sean Cooper
Luke Hellwig
Jeffrey D Mathias
Pekka Riikonen
Christian Wieberg-Nielsen
Juan Pablo Zambrano

Meeting Notes

  • Kevin Wheatley: We have progress reports from Alex and Pekka.
  • Alex Fry: I've added two new versions to the repo. V42 is Pekka's v42_pex. v43 is a merge of my v42 and Pekka's 42_new_scaling, with Luke's compression method. I've updated the spectral locus reach compression with iterative gamut finding. This has no path to white. Just M scaling and any path to white comes from the gamut compressor. It's less saturated than before, and looks "beefy". There are Resolve, Baselight and OCIO versions as well as Blink.
[Alex showed some images through the DRT]
  • Pekka Riikonen: I took the version with Luke's exponent on the M scaling, which gives a desaturated result, and changed it to increase saturation but without pushing outside the locus. V42_pex3 looks very similar to the old version. It has chroma compression to give a path to white, and also expansion to make in-gamut colors look more as they did before. I also wrote a new HDR/SDR matching method. I scale the compression parameters with peak luminance. Now the SDR and HDR processes are the same, just with different parameters.
  • Kevin Wheatley: That will be better for in between peak luminances.
[Pekka showed images looking very similar to v35]
  • Pekka Riikonen: The reds are a little less saturated.
  • Nick Shaw: Can it still hit the red corner of the gamut?
  • Alex Fry: That may be more affected by gamut compressor settings than chroma compressor.
[Pekka showed the differences and original scene values on a chromaticity plot]
  • Pekka Riikonen: The chroma compression isn't perfect. It doesn't know where the locus is. It's hue dependent, and could be tweaked.
  • Alex Fry: I have the M boundary for the locus.
  • Pekka Riikonen: The chromaticities coming out of the chroma compression into the gamut mapper can be outside the locus in v35. This is much better.
[Pekka showed images through his new version]
  • Pekka Riikonen: The shadow noise is better than v35.
[Pekka showed the difference in the blurry neon image]
  • Kevin Wheatley: Different but not necessarily better.
  • Alex Fry: Are you using the angled ratio based gamut compressor?
  • Pekka Riikonen: Yes
  • Alex Fry: I want to see the effect of combining my reach based gamut compression with your new stuff.
[Pekka showed 3D JMh plots of the shape of hue sweep ramps at different peak luminances]
  • Pekka Riikonen: The shape of the path to white is more consistent. And there is a parameter we can easily tweak if we want to make HDR more or less colorful.
  • Alex Fry: I'll merge that and make a v44. I'll bake LUTs with that and the standard ratio based gamut compression. But I think we need a reach based version too.
  • Kevin Wheatley: This seems a good start from which we can tweak parameters and look for issues like can we hit the primaries. Can we invert P3, etc. We can noodle for a couple of weeks.
  • Alex Fry: And look at things like cusp smoothing.
  • Kevin Wheatley: And remove redundant options from the UI.
  • Nick Shaw: It needs to be more stable before porting to DCTL. There will need to be some pre-baked LUTs there for things like reach based compression.
  • Pekka Riikonen: I pushed pex4 with an additional check-box.
  • Kevin Wheatley: I'm thinking what point do I move on with my C++ version?
  • Alex Fry: Can we improve the gamut approximation? Maybe make the power function hue dependent (see the sketch after these notes).
  • Kevin Wheatley: Can we do better than the straight line for the top.
  • Nick Shaw: If the approximation encloses the gamut it's probably ok, with just small final clipping.
  • Pekka Riikonen: The cusp smoothing widens it out a bit which may help.
  • Kevin Wheatley: Pekka asked last week about hue smoothing. Maybe that's a finesse for later.
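A sketch of the hue-dependent power function Alex suggests, following the 36-entry, 10-degree table with linear interpolation later prototyped (see meeting #124 above); the table values themselves are hand-tuned and hypothetical:

```cpp
#include <cmath>

// Look up a gamma for the current hue from a 36-entry table at 10-degree
// intervals, with wrap-around linear interpolation.
float hueDependentGamma(float hueDeg, const float table[36]) {
    float x = std::fmod(hueDeg, 360.0f);
    if (x < 0.0f) x += 360.0f;
    x /= 10.0f;                        // table position in [0, 36)
    int i0 = (int)x;                   // lower entry
    int i1 = (i0 + 1) % 36;            // wraps 350-360 back to entry 0
    float f = x - (float)i0;
    return table[i0] * (1.0f - f) + table[i1] * f;
}
```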

Meeting #117, September 6th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We want to bring up some discussions from private emails this week. It was about what could make M compression follow Alex's locus "bowl" shape. Pekka pointed out that there may be other more important issues. Part of our mapping is not smooth, and we are only smoothing in one dimension. Smoothness may be more critical than other factors. Luke and Alex also had a call last night to discuss all this.
  • Alex Fry: We went over Luke's proposed mathematical solution [raising the M scale factor to the power of 1/cz, where cz is the exponent used in the J calculation, incorporating the surround compensation] (see the sketch at the end of these notes).
  • Luke Hellwig: We concluded it didn't quite work for what we want to achieve.
[Alex showed a 3D plot of the effect of applying the 1/cz exponent]
  • Alex Fry: If we can make this work, the chroma compressor should have less to do. And it may take some "taste factors" out of the equation.
  • Pekka Riikonen: I made a version of the current transform with Luke's suggested exponent. I still kept a non-linear saturation increase – more in shadows than highlights.
  • Nick Shaw: Is the cz exponent akin to the exponent we use in our gamut approximation, to approximate the way the display gamut curves like the locus bowl does? Does it vary with hue, like the gamut?
  • Luke Hellwig: My maths suggested that just the 1/cz should have done it. I need to investigate more.
  • Nick Shaw: Would it be affected by our LMS matrix tweak?
  • Luke Hellwig: No. It's separate.
  • Pekka Riikonen: With just the exponent it's a very cone-like shape, but the non-linear saturation makes it bulge more in the shadows.
  • Kevin Wheatley: A main contributor to the curvature is the non-linear function applied to LMS. But the LMS matrix itself just modifies the source, a bit like if you shot on a different camera. Also after last week's meeting I recreated the spike artifact  we were looking at using the CAM16 model. So it's not specific to Hellwig. But they both use the same non-linearity.
  • Christopher Jerome: The non-linearity has been a well known source of problems since CIECAM02. Can you tell whether particular LMS values will have issues in the non-linear transform?
  • Kevin Wheatley: It's as one channel approaches zero.
  • Nick Shaw: We've also modified the non-linearity with "compress mode" to prevent things going negative.
[Pekka showed his 3D plot of linear scaling with the exponent]
  • Alex Fry: Does that still have your saturation boost?
  • Pekka Riikonen: With just the ratio it doesn't look as saturated as what we've seen up to now. My saturation boost is to match the previous saturation. Without that the shape of scene and compressed JMh stay the same.
  • Alex Fry: That seems desirable to me.
  • Pekka Riikonen: It looks quite desaturated.
  • Christopher Jerome: Could the saturation boost be modulated by e.g. S, so the hull didn't change, but inner colors were boosted?
  • Pekka Riikonen: The in gamut compression already does something like that, but in the opposite direction. I'm ok with compressing saturation. I wouldn't want to increase it. That's more of an LMT.
  • Alex Fry: To me it looks less saturated than before, but not wrong.
  • Pekka Riikonen: The purpose of the chroma compression is to be able to reach the corners, and still have good skin tones and nice highlights. I think modulating the compression is necessary for this. The original highlight desat didn't get near the corners.
  • Nick Shaw: The saturation is subjective choice. As long as you can add it with an LMT, I don't mind a desaturated start point, as long as it doesn't clip saturation.
  • Pekka Riikonen: I think then we need a default LMT.
  • Alex Fry: My gut says now with this shape, the reach-based gamut compressor will have an easier time.
  • Christopher Jerome: Could the gamut compressor have positive compression in some areas, and negative (i.e. boost) in others?
  • Pekka Riikonen: Not as it is now. I don't disagree with the idea of not boosting the boundary, and boosting inner values. But the issue is the schedule. Then we have the smoothing issues.
  • Alex Fry: I see smoothness issues in graphs, but not in images.
  • Pekka Riikonen: Maybe in some colored lights.
  • Alex Fry: It's a 3-way tradeoff between hitting the corners, smoothing corners, and introducing distortions elsewhere. I see that with CG images. That's why I chose v28 to use in a production. I'd like to try Pekka's test version with my reach-based gamut compression [the gamut compression limit is absolutely defined rather than relative to the target gamut]. Reaching to AP1 or the locus will be consistent between targets.
  • Pekka Riikonen: I'm not sure what I just showed is ready to merge.
  • Nick Shaw: Because the chroma compression downstream of that is designed for a completely different input than what this now produces.
  • Pekka Riikonen: Exactly. If we give people this it will look completely different to what they've seen before.
  • Alex Fry: They are experimental, so that's ok.
  • Pekka Riikonen: It will be a bunch of work to update the rest of the DRT for it.
  • Nick Shaw: Did what we just saw still have the rest of the chroma compression enabled?
  • Pekka Riikonen: Yes. That changes the interior but not the boundary. I think we also need to talk about hue smoothing. We have hue axis discontinuities. It skews blue to cyan or magenta.
  • Nick Shaw: Might the reach based gamut compression help that?
  • Pekka Riikonen: I don't think so. I think it's from compressing along constant hue lines in the perceptual space.
  • Alex Fry: I feel the dark blue band in the defocused neon image is due to J changes in the gamut compression. My reach-based compression is currently horizontal.
  • Pekka Riikonen: We are smoothing the cusp in the h plane, but as a hue sweep goes round an edge of the gamut, there is a sharp transition. ARRI Reveal smooths around these edges.
  • Nick Shaw: ARRI Reveal doesn't need to be invertible or hit all the corners. They aim just for a nice looking image. A display gamut is inherently a cube with edges. If you want to hit those, you have to wrap around its sharp corners.
  • Kevin Wheatley: All we can do is go slightly wider and then have a final clipping. Jeffrey asked in the chat if the DCTL could have a slider. Only for experimenting. The final deliverable must be fixed. ACES has no mechanism to track parameters.
  • Christopher Jerome: Is it necessary that the M compression is pinned to change in J as it is now?
  • Kevin Wheatley: It's a good question. The tone curve is different for SDR and HDR, so our M compression changes with that. But is that what we should do? How much should the part of the HDR that can be represented on an SDR display be the same on both? And "the same" in what way?
  • Christopher Jerome: What about using slope of J, rather than change?
  • Pekka Riikonen: What is the timetable?
  • Kevin Wheatley: The intent was to wrap up in a couple of weeks, and move to testing, in order to work towards VFX Platform 2024.
  • Pekka Riikonen: The new approach couldn't be done in two weeks.
  • Kevin Wheatley: v35 was the last official baked version. What was wrong with that, and could it be fixed in two weeks?
  • Christopher Jerome: How does today's experiment affect images like blue bar, and the bright shadows on the pool balls?
  • Pekka Riikonen: It will look different with less saturation.
  • Nick Shaw: Would being less saturated change the pool ball shadows?
  • Pekka Riikonen: It could if that comes from the gamut mapper, which might not be applied there if it's less saturated. But what is the problem with the current approach that the new one is trying to fix?
  • Nick Shaw: The blue neons in blue bar are outside the locus. With the new approach they'd still be outside it.
  • Alex Fry: I'm keen that values inverted from the display space end up somewhere sensible. And a compressor that pulls way-out values to the boundary will invert to way out there. With the reach based gamut compressor, which reaches to an explicit boundary, if the chroma compressor puts values outside that, the gamut mapper won't bring them in. I've given up on the idea of one rendering that can handle any input. We need a limit on input we will support, or the inverse goes to infinity. I will bake an OCIO config of v41 so people can test that. There are disagreements about the angle of the gamut compression though. I find the P3 / 709 mismatch more objectionable than the way the highlights desaturate in the angled compression. Some highlight compression was coming from the gamut compressor. I prefer to have that in the "rendered image". I don't think 709 and P3 should change brightness.
  • Pekka Riikonen: I agree but I'm not sure the horizontal compression is the solution. Maybe we need a better lightness mapping.
  • Christopher Jerome: Is there a CAM model value that is constant for the locus boundary? So we can distinguish inside from out?
  • Nick Shaw: I don't think so. I don't think the locus has a special status in the model.
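A sketch of the idea discussed at the top of these notes (which Luke concluded doesn't quite work as-is): when the tone scale maps J to tonemappedJ, scale M by the same ratio raised to 1/cz, cz being the exponent from the model's J calculation, incorporating the surround compensation:

```cpp
#include <cmath>

// Scale M by the J tone-scale ratio raised to 1/cz. Illustrative of the
// proposal only; Luke noted this alone didn't achieve the intended shape.
float scaleM(float M, float J, float tonemappedJ, float cz) {
    if (J <= 0.0f) return 0.0f;                      // guard the ratio
    return M * std::pow(tonemappedJ / J, 1.0f / cz);
}
```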

Meeting #116, August 30th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Alex Forsythe
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: There have been some discussions on ACES Central.
  • Thomas Mansencal: Daniel showed two renders combined: a pure grey scale and 3 sets of RGB patches. He passed them through various display rendering transforms and then max merged them. Some show nothing as the patches blend together, and some don’t. You can get the same result from a basis change: rendering in one space and doing the DRT in another is equivalent to a basis change. Originally the ACES rendering space was 2065-1, and all sources were converted to that, then output to a different display. When he was working on the Lego Movie, Anders Langlands noticed that when rendering in 2065-1 the global illumination was doing weird things. Reds got hyper saturated, and blues were muted. He and I did some tests comparing spectral renders and RGB renders in different spaces. It showed how important the rendering space is for CGI and DRTs. The same maths in different spaces produces different results. We render in the same space for all inputs and outputs so we have the same rendering.
  • Kevin Wheatley: The concept of the rendering space is a bit of a red herring. Choosing a rendering space will fix some, but not all problems. Troy is also mentioning other image formation effects. The DRT doesn’t deal with those perception issues. One thing that may be causing us issues is the difference in tone scale for different renderings. The recent experiments were attempting to preserve colorimetry by preserving chromaticity.
[Alex showed the node graph and 3D plots of his hacked chromaticity preserving rendering]
  • Alex Fry: It works as expected for many colors, but for the near UV you get a spike of huge J.
  • Kevin Wheatley: I was thinking of doing the tone mapping on Y before going into the model at all. We know going into the model, zeroing out M and coming out you don’t get the same Y. You lose energy by zeroing the colors.
  • Nick Shaw: Aren’t you adding energy back when you put the original x and y in?
  • Kevin Wheatley: The benefit of going into the model is it handles negatives better than simply clipping Y.
  • Nick Shaw: It’s only one way to do it, but we need to take Y and do something with it that involves X and Z as well to produce a new Y. That’s going to be some kind of matrix, if it makes more values positive than there were before.
  • Kevin Wheatley: The modifications to the matrix in the model already do that, by using wider LMS primaries. Some kind of expansion like that alone would compress more values in. Effectively changing the primaries to derive the RGB to XYZ matrix.
  • Alex Fry: My rendering is producing the spike for near UV, but also putting a slight slant on everything. I hoped it would be flat. I’m looking at Thomas’s Mitsuba render, so all the values are plausible.
  • Luke Hellwig: Near UV at high luminance will have a J that’s off the charts. 
  • Alex Fry: When we exited the model J was capped at 100, but resetting xy and going back in I’m seeing some values of 400+.
  • Luke Hellwig: I’ll try to figure out what’s happening.
  • Nick Shaw: It seems reintroducing xy is changing J as well as M and h. Ideally we’d hold J constant and find the right M and h values which would come from the original x and y. I’m not sure how to do that.
  • Kevin Wheatley: There’s something about that area of the color space that affects M and J.
  • Nick Shaw: We know the model collapses beyond a certain point. We tweaked the matrix to keep our source values before the collapse. Perhaps reintroducing original x and y with J pushes things beyond the point of collapse.
  • Pekka Riikonen: What happens if you use the original primaries?
  • Alex Fry: I still see the same effect.
  • Christopher Jerome: We changed the LMS primaries to expand what produces positive values. But later we tweaked them more to create a pleasing result in the blue/cyan range. Is that right? Is there an optimization between those?
  • Thomas Mansencal: We pushed the virtual LMS primaries so the focus point for blue was outside AP0.
  • Pekka Riikonen: Thomas, in your primaries for green, one component is negative and one positive. For CAM16 it’s the other way round.
  • Thomas Mansencal: I’d have to check my notebook for the derivation.
[Nick showed his version of the chromaticity preserving rendering with images]
  • Nick Shaw: Mine should be doing the same as Alex’s in a slightly simpler way. It’s based on my simplified version of the conversion from achromatic J back to Y. I tone map that and add back x and y, then go to XYZ and back into the model (see the sketch after this exchange). Pekka’s chroma compression is not used, but the rest is the same for gamut compression. It shifts colors compared to the current version, particularly saturated colors, but images look reasonable for most normal images. The bluescreen image goes black in the background. The blue must be beyond the collapse point. There’s no explicit path to white, just the gamut compression.
  • Alex Fry: The path to white should be simpler.
  • Pekka Riikonen: How do the shadows look.
  • Nick Shaw: No obvious issues.
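A minimal sketch of the chromaticity-preserving step Nick and Alex describe, using the colour-science library. The tone scale here is a toy stand-in (not the actual Daniele curve), and the round trip back into the JMh model for gamut compression is omitted:

```python
import numpy as np
import colour

def chromaticity_preserving_tonemap(XYZ, tonescale):
    # Tone map Y while holding CIE x and y constant.
    xyY = colour.XYZ_to_xyY(XYZ)
    xyY[..., 2] = tonescale(xyY[..., 2])  # new Y, same chromaticity
    return colour.xyY_to_XYZ(xyY)

toy_tonescale = lambda Y: 1.1 * Y / (Y + 0.18)  # placeholder tone scale

XYZ = np.array([0.2065, 0.1220, 0.0514])
print(chromaticity_preserving_tonemap(XYZ, toy_tonescale))
```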
[Alex showed the 3D plot of the spike on the bluescreen image]
  • Alex Fry: The “table” is tilted, where we want it to stay flat in J. Just putting J back, M is still too far out.
  • Luke Hellwig: It comes from the cross-talk. It wouldn’t be a problem if J was just based on luminance. J changes when you don’t change luminance but bring back in x and y. I have to think more.
  • Thomas Mansencal: Is it the human observer image or the ALEXA one?
  • Alex Fry: I’m using the human observer version.
  • Kevin Wheatley: There’s nothing in the model that magically soft clips things. The only thing a bit like that is the non-linearity, which is an asymptote. Could we tweak the Daniele curve to do a soft clip?
  • Nick Shaw: A small amount of negative Y might be ok with a tweaked curve, but I think some values go significantly negative.
  • Kevin Wheatley: We already move the blue quite a bit. So we could tweak our XYZ matrix. Then use the model for the rest.
  • Alex Fry: Just tone mapping Y I see Y collapse for some values. I also still see the spike near UV.
  • Christopher Jerome: I made a post with some experiments. I put some other renderings like ARRI Reveal back into the model and plotted the M scaling against the log of input J. I tried fitting curves to what I saw, and an asymmetrical Gaussian fitted well. The ARRI render does go over 1 for the M ratio. It’s similar to Pekka’s curves. Relying just on the change in J to modulate M is a sensible starting point, but we don’t have much control. So I looked for a controllable curve (see the sketch after this exchange).
  • Pekka Riikonen: If you had a ramp with varying chroma, you would see a different curve. And perhaps a radical change as you went higher in chroma.
  • Christopher Jerome: I didn’t test highly saturated or desaturated colors. I used ColorChecker colors, 10 stops over and 10 under.
  • Pekka Riikonen: With our chroma compression you would see similar curves for ColorChecker colors. My ratio curve never hits zero by design, so the subsequent in-gamut compression has something to pull in on the path to white. I’m interested in seeing alternate curves. But some will make the inverse difficult, if we use M as a modulator for compression, as we don’t have the original M available in the inverse. The curve is technical, but there’s also a look component.
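A minimal sketch of the curve family Christopher describes fitting: an asymmetric Gaussian in log J, with an independent width on each side of the peak. The parameter values here are illustrative, not fitted:

```python
import numpy as np

def asymmetric_gaussian(log_J, mu=0.0, sigma_lo=1.5, sigma_hi=0.8, peak=1.0):
    # Different widths below and above the peak give the asymmetry.
    sigma = np.where(log_J < mu, sigma_lo, sigma_hi)
    return peak * np.exp(-0.5 * ((log_J - mu) / sigma) ** 2)

log_J = np.linspace(-5.0, 5.0, 256)
ratio = asymmetric_gaussian(log_J)  # candidate M scaling ratio vs. log input J
```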
  • Kevin Wheatley: I tested using my C++ implementation of the original 2022 model. I get the same near UV spike Alex saw.
  • Alex Fry: What we’d like is the values tracing down following the “bowl” of the locus, staying inside it.
  • Kevin Wheatley: So do we continue down this path?
  • Alex Fry: I think so. It makes chroma compression easier.
  • Pekka Riikonen: Have you tested without compress mode?
  • Alex Fry: Tone mapping in Y, without compress mode, the table tilt is still there, and there is a spike in M, but it doesn’t spike up in J.
  • Luke Hellwig: Are all those values in the LMS triangle? I think they cover the locus, but not AP0.
  • Alex Fry: All those values are in the locus. We need it so that values that start in the locus bowl remain in that bowl after tone mapping.
  • Nick Shaw: With our simple chroma compression, scaling M by the ratio J was changed by, that should move JM in a straight line toward the origin. I don’t quite understand how a point that starts on a concave surface ends up outside that surface if it’s moved along that line.
  • Luke Hellwig: What if you keep the new M and h but put back J?
  • Alex Fry: Then it stays below the table but the chromaticities still spike way out. Anything outside the bowl is a color we can’t produce in reality.
  • Christopher Jerome: What’s the spectral value of that tip?
  • Alex Fry: It’s right down the blue end of the locus. UV laser lit.
  • Luke Hellwig: In CIELab, that corner of the locus ends up with infinite chroma. This seems similar.

Meeting #115, August 23rd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley

Daniel Brylka
Chris Clark
Sean Cooper
Alex Forsythe
Luke Hellwig
Christopher Jerome
Willem Nagtglas
Pekka Riikonen
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: We’ve had a request from ACES leadership to try to wrap things up soon. We plan to do a few more iterations over the next 3-4 weeks, to lock it down and move to the next phase of testing and documenting, aiming for a deliverable early next year. Let me know if anybody has ideas on what’s still outstanding. The time scale is based on next year’s VFX Reference Platform and getting ACES v2 into OCIO in time.
  • Pekka Riikonen: I got chroma compression working without a saturation boost in v42 in my repo. It hasn’t changed much, just how the compression works internally. The scaling curve for Rec.709 isn’t near 1.0, so you could increase saturation a little if needed. It’s at 1.0 for 1000 nits and above 1.0 for 2000 nits, but that’s because tone scaled J is above scene J, so that’s needed to preserve saturation.
[Pekka showed his new saturation curves]
  • Pekka Riikonen: The look is very similar to v35. Alex, do you see the issues with the shape that you were concerned about?
  • Alex Fry: I’ll check it with my visualizer.
  • Pekka Riikonen: Kevin asked whether it’s a hard or soft boundary when the compression stops. The current curve does have a kink in the derivative, but it happens at far out values that will be gamut compressed. I haven’t seen any artifacts in images. I can’t find an invertible curve without that.
  • Kevin Wheatley: There were posts about the flare in the Daniele curve. Do you have comments on that?
  • Pekka Riikonen: I think the flare and mid grey lift are related. I don’t think the mid grey should be lifted as much at 1000 nits. So I made a version that sets mid grey to 13.3 nits at 1000 nits, which is the result of changing w_g from 0.14 to 0.1 (see the sketch below). I see a better contrast match between SDR and HDR. But flare is a separate discussion. I think Alexander’s post is saying he sees darker shadows at 1000 than 100 nits, but that may be because of the bright highlights in HDR. The values we use came from the Desmos plot, not from looking at images.
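A sketch of how w_g sets the mid-grey level, assuming the grey anchor of the published Daniele Evo Desmos curve (c_t = (c_d / n_r) · (1 + w_i · w_g), with w_i = log2(n / 100)); this is only the grey anchor, not the full tone scale:

```python
import math

def mid_grey_nits(peak_nits, w_g, c_d=10.013):
    # 18% grey output target in nits: c_d * (1 + w_i * w_g), w_i = log2(n / 100).
    w_i = math.log2(peak_nits / 100.0)
    return c_d * (1.0 + w_i * w_g)

print(mid_grey_nits(1000, 0.14))  # ~14.7 nits (assumed draft default)
print(mid_grey_nits(1000, 0.10))  # ~13.3 nits (Pekka's proposal)
```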
  • Kevin Wheatley: We’ve had feedback on the contrast, and generally people liked it. But the SDR / HDR match is harder to set up and visualize properly.
  • Pekka Riikonen: I wouldn’t change the Rec.709. My change of HDR mid grey is the opposite of what Alexander suggested, but to my eye improves the match. But it’s subjective.
  • Kevin Wheatley: It’s a parameter we can get feedback on. We will get more feedback, hopefully with more active participation from more colorists, and an Academy hosted setup.
[Alex showed his 3D JMh visualization of Pekka’s v42]
  • Alex Fry: It still seems to be pushing out a little at the bottom.
  • Pekka Riikonen: That’s from the tone scale not the compression.
  • Kevin Wheatley: The tone scale lowering J pushes in-gamut values out.
  • Pekka Riikonen: It’s unavoidable for Rec.709 because it’s very squeezed.
  • Kevin Wheatley: We talked about preserving chromaticity, but that might have the same problem with a far out CIExy value.
  • Alex Fry: I was hoping it might follow the locus curve as we tone map. I thought it was the chroma compressor puffing the bottom out. But just scaling M by the J ratio it still goes out.
  • Kevin Wheatley: Part of the issue may be that the Daniele curve and the Hellwig non-linear compression function have a similar, but not identical shape.
  • Luke Hellwig: Did you try maintaining chromaticity, as we discussed?
  • Kevin Wheatley: Not yet.
  • Luke Hellwig: I think we should drop M scaling and switch to preserving chromaticity. M scaling is complicating things.
  • Pekka Riikonen: That might be a lot of work.
  • Kevin Wheatley: Constant chromaticity is straightforward, but the impact on other things may need work.
  • Luke Hellwig: My concern is that the foundation you build everything else on needs to be solid.
  • Alex Fry: I was hoping that scaling M by the same factor as J would make it ride the line of the locus precisely.
  • Kevin Wheatley: It will ride a cone at 45 degrees. Going down in equal proportions you go down at 45 degrees towards the origin. But not for every point: as you move down the tone scale, the angle changes.
  • Luke Hellwig: Won't it ride that line if you do constant chromaticity, because those lines are constant chromaticity lines?
  • Kevin Wheatley: But will it look the same? 
  • Pekka Riikonen: I would guess the chroma compression would have to be adjusted quite a bit.
  • Christopher Jerome: Pekka, what’s the biggest change in the chroma compression curve?
  • Pekka Riikonen: Small tweaks to modulate the compression so you don’t need a saturation boost.
  • Christopher Jerome: If we preserve chromaticity we need a separate path to white and maybe shadow desaturation.
  • Kevin Wheatley: With the split out DRT it should be relatively easy to try chromaticity preserving instead of the M scaling, and keep everything after that.
  • Alex Fry: Not as it is currently. We need to break it down more.
  • Christopher Jerome: Is your chroma scaling curve hue dependent?
  • Pekka Riikonen: Not now. The rendering could be better if it was.
  • Kevin Wheatley: Preserving chromaticity would mean anything that started on the cone would stay on the cone, and things would stay inside or outside it. But where inside might change, which would change the look.

Meeting #114, August 16th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias

Meeting Notes

  • Kevin Wheatley: Luke has some comments. And we can discuss where we are and the direction we are going.
  • Luke Hellwig: The issue was what to do with M after the tone map rescales J. J and M have different non-linear relationships with light in the model, so just doing the same rescale of M doesn’t work. Pekka said my mathematical solution caused issues with yellows. I feel the option of simply preserving source CIExy chromaticity as you change J is preferable.
  • Nick Shaw: You could simply preserve chromaticity with an RGB ratio preserving tone map, maybe using J as the weighting, or maybe the Demos / Walker weighting (see the sketch below). Then go into the model for gamut mapping. We would keep the J we have now, but calculate new M and h to preserve xy.
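A minimal sketch of the RGB-ratio-preserving tone map Nick suggests. The weighted power norm below is a generic stand-in (the actual Demos / Walker weights differ), and the tone scale is a toy placeholder:

```python
import numpy as np

def ratio_preserving_tonemap(rgb, tonescale, weights=(0.25, 0.5, 0.25), p=2.0):
    # Tone map a weighted power norm of RGB, then scale RGB by the ratio;
    # this preserves RGB ratios and therefore CIE xy chromaticity.
    w = np.asarray(weights)
    norm = (np.sum(w * np.maximum(rgb, 0.0) ** p, axis=-1) / np.sum(w)) ** (1.0 / p)
    gain = tonescale(norm) / np.maximum(norm, 1e-10)
    return rgb * gain[..., np.newaxis]

toy_tonescale = lambda x: 1.1 * x / (x + 0.25)  # placeholder tone scale
print(ratio_preserving_tonemap(np.array([[0.9, 0.18, 0.05]]), toy_tonescale))
```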
  • Kevin Wheatley: What we get from J is a guaranteed positive Y when we go back to it to tone map. We could just clamp luminance to be always positive.
  • Nick Shaw: Bouncing via J does more than clamp Y, doesn’t it?
  • Kevin Wheatley: We choose to treat it as achromatic, which is like saying we ignore X and Z and set them based on the white point.
  • Nick Shaw: But it’s a new Y, somewhat unrelated to the original Y, due to going via J.
  • Kevin Wheatley: Is it that different?
  • Nick Shaw: Colors that would have produced a negative Y come out with positive Y, not just zero.
  • Kevin Wheatley: We need to look at what the real effect is.
  • Nick Shaw: The correlate of lightness is trying to be a more accurate model of the perceived brightness of a color than luminance.
  • Alex Fry: What are we actually trying to fix?
  • Kevin Wheatley: The issue is the impact of the different SDR and HDR tone scales, and just scaling M the same as J doesn’t create a visual match.
  • Nick Shaw: Also we are trying to preserve hue, but preserving chromaticity doesn’t necessarily preserve h.
  • Kevin Wheatley: We only need to preserve h in the gamut compression. Before that it doesn’t factor in.
  • Nick Shaw: So we need to assess whether values that don’t get gamut mapped look the same in SDR and HDR with the same chromaticity. But I don’t fully understand what Pekka’s in gamut compression does.
  • Kevin Wheatley: Because we change the LMS numbers, we don’t care that much what the origin hue is. We just want to be consistent.
  • Nick Shaw: Even if it’s not a pleasing image, do SDR and HDR look the same with the same CIExy?
  • Kevin Wheatley: With our code, there is a clear point we can inject chromaticities back in. You could simplify the use of the model for the first step, because you only need J.
  • Christopher Jerome: Part of Pekka’s chroma compression was path to white. How would that work with xy?
  • Kevin Wheatley: It can happen elsewhere. We are only looking at the first tone mapping stage.
  • Nick Shaw: Perhaps Pekka’s in-gamut compression can be stripped down if some of what it is fixing is no longer needed. But we still need the path to white. We’re just looking at replacing the first scaling of M.
  • Kevin Wheatley: We should still end up with M and h values that relate to the tone scale. Either M will be much greater (which I don’t believe) or it will be the same or desaturated.
  • Nick Shaw: Hopefully we won’t need Pekka’s chroma boost which pushes some things too far out.
  • Kevin Wheatley: So moving on, if we look at the various versions we’ve had, what version numbers would we look at to compare?
  • Alex Fry: I looked at 28 and 35 to pick one for a production.
  • Nick Shaw: Versions beyond 35 simplified things, like horizontal compression to make testing easier.
  • Alex Fry: My issues with 35 were all with the gamut compressor. Corner bulging. Clipping. Normal images were very similar.
  • Nick Shaw: Pekka built his new chroma compressor with the same aim. What were we unhappy with about 28, so we had to keep experimenting? Complexity?
  • Alex Fry: I think so. 28 didn’t use the gamut approximation.
  • Nick Shaw: Is 28 fully invertible?
  • Alex Fry: Yes, but it doesn’t quite reach the red corner, with positive AP1.
  • Christopher Jerome: Did 35 have some improved soft clipping in dark areas?
  • Alex Forsythe: I would suggest looking at the versions in a Git way. What is the head of a given branch, even if it went in different directions.
  • Kevin Wheatley: The gamut approximation was introduced in 29, and the new chroma compression in 32.
  • Alex Fry: I feel the gamut approximation introduced some issues due to the straight line used at the top.
  • Nick Shaw: My DCTL was v30, and I think was made possible by the approximation.
  • Alex Fry: Any thoughts on what Troy brought up last time?
  • Kevin Wheatley: The spatial and contextual stuff we can’t do anything about. The ramps are interesting.
[Alex showed Troy’s ramps rendered through the DRT]
  • Alex Fry: I feel one yellow looks too bright, when the source actually has less energy than the background.
  • Kevin Wheatley: Lower Y luminance?
  • Alex Fry: Yes.
  • Nick Shaw: Isn’t that partly because we use J because Y doesn’t accurately represent brightness? What if you use J, not Y?
  • Alex Fry: All the J values are lower than the surround. The source ACEScg values are all lower than the surround.
  • Kevin Wheatley: So Y is always lower than surround, but that yellow seems to glow?
  • Alex Fry: Yes. This could be an image printed on paper, filmed by a camera.
  • Kevin Wheatley: To me the red stands out most.
  • Christopher Jerome: Is this what an eccentricity function is for? I believe in recent versions Pekka has simplified the achromatic function. It may not even be using any blue/S, or may just be using green. Might that make it vulnerable to this?
  • Kevin Wheatley: I don’t recall that.
  • Alex Fry: The modified LMS matrix exaggerates the effect.
  • Kevin Wheatley: We modified the matrix to pull negative values in.
  • Christopher Jerome: Kevin, did you do any more work on your alternate non-linear curves?
  • Kevin Wheatley: Not yet. But I think although the first part needs to change, maybe for the later part we can use the correct model. We’re tweaking it because these cameras don’t match the HVS. Maybe some kind of pre-desaturation might make that less necessary.
  • Nick Shaw: We tweak the model to handle these “out there” chromaticities, but applying something like the RGC first would make that less necessary.
  • Kevin Wheatley: If the people on set only see monitoring through our rendering, that’s what we need to match.
  • Alex Fry: Actually, switching on HK mode seems to fix it, on the final output stage.
  • Kevin Wheatley: Using HK will make finding an approximation to the gamut harder.

Meeting #113, August 2nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Wes Donahue
Alex Forsythe
Francesco Giardiello
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen
Daniele Siragusano
Troy Sobotka

Meeting Notes

  • Nick Shaw: I posted a link to a movie that needs to be viewed in HDR, comparing gaining up an SDR display-referred image in both RGB and JMh, gaining J and M only by the same amount. On a non-HDR system the video gets tone mapped, so you don't see it properly. The JM-gained one gets more saturated as it gets brighter. The RGB gain preserves CIExy chromaticity, but it expands for the JM version. Constant chromaticity is not necessarily the aim. Hunt would suggest lower chromaticity is needed for the same perceived saturation at higher brightness. If the model modeled the Hunt effect you might expect chromaticity to be lowered as brightness rises.
  • Luke Hellwig: I don't think chromaticity drift is an intentional part of the model. I plotted the effect on chromaticity of taking [1, 0, 0] red through the model, raising J and M and then coming back. I think for CAM16 they fitted brightness to one set of data and colorfulness to another, and didn't look at how they tracked together. They have different non-linearities. I posted my suggestion that the drift is due to the exponent difference. The change in drift with surround confirms this.
  • Kevin Wheatley: We aren't adjusting dim/dark surround, but maybe we should be.
  • Luke Hellwig: I'm going in and out with the same condition. I'm just scaling like Nick did.
  • Nick Shaw: Do we want to maintain chromaticity? I thought the reason we were tone mapping using the model was to maintain perceived color while varying brightness. If we tweak the model so it maintains chromaticity, we're not really using the model except for gamut compression.
  • Luke Hellwig: I don't think the model treats colorfulness correctly with scaling. Chromaticity and saturation are very similar, so maintaining chromaticity is an easy way to maintain saturation. I tried adding the exponent into the colorfulness part of the model, and the drift is much less. You can use the adjusted model or not use it, depending on what looks better. Sometimes using the simple chromaticity model works better. A CAM should be usable to predict things, but I'm not sure about the rigor of the models.
  • Nick Shaw: We could test a simpler chromaticity preserving tone scale, and then go into the model for gamut compression.
  • Alex Fry: What would you tone scale?
  • Kevin Wheatley: You would tone scale luminance, and leave xy constant.
  • Alex Fry: But our initial problem was we had negative luminance. An appeal of tone mapping J was it was always positive.
  • Nick Shaw: You could use a different RGB weighting to tone scale RGB like the Doug Walker / Gary Demos weighted yellow power norm.
  • Pekka Riikonen: We should change the M formula to see how it behaves. But tonescaled J / J doesn't automatically produce pleasing looking images.
  • Kevin Wheatley: We could produce a faux camera LMS space to compute a weighting with all positive values. But tweaking the M formula is a simpler quick test.
  • Luke Hellwig: That might affect the curve of the hull.
  • Alex Fry: I feel it should be more of a cone.
  • Christopher Jerome: A stock CAM can produce significant changes if you use different parameters going in and out of the model. This graph and the one below show what you would have to change in J to match the effect of changing output surround or luminance. You could use a single tone curve, and then a really simple function on top of that to target different outputs, at least in J. I came out of the model through dark, dim and average, then went back in through average, and plotted the resulting J. Then I did the same with varying output reference white. It shows that if you took average in and out, but applied a power function, it would closely match using dark or dim out (see the sketch after this exchange).
  • Nick Shaw: Is the power function just the exponent difference between the surround parameters?
  • Christopher Jerome: I think it's effectively that, but it goes through a lot of other steps. It looks like a gain in J is enough to match a change of output reference white. This animation shows a comparison of adjusting output reference, for one to match white to 100 nit D65 and the other to match ACES 1.2 10 nit grey. That's like targeting two different display luminances. It shows what a brighter monitor should look like. There's no increase in CIExy values, despite the Hunt effect. J and M increase, but S stays almost the same. I've also looked at how M changes with under/over exposure in other renderings. We need to find what relationship of M to J gives the desired result. If we set it for one display, we could find a way to map that to others. I think this chromaticity plot shows the direction things should change, getting more chromatic with slope. Below, with luminance change there is no change of chromaticity.
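A minimal sketch of the matching Christopher describes: approximating a change of output surround with a simple power function applied to J, normalized to a reference J of 100. The gamma value here is illustrative; in practice it would come from the fit:

```python
import numpy as np

def surround_match(J_average, gamma=0.94):
    # Approximate a dim-surround output by a power function on the
    # average-surround J, normalized around J = 100.
    return 100.0 * (np.asarray(J_average) / 100.0) ** gamma

print(surround_match([20.0, 50.0, 90.0]))
```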
  • Luke Hellwig: That seems the wrong way round to me. It says that in a dark condition you need a more saturated stimulus to perceive the same colorfulness, which is the opposite to common sense.
  • Christopher Jerome: I thought that was a counterintuitive finding of Hunt.
  • Luke Hellwig: It could be an artifact of the J M non-linearity mismatch.
  • Christopher Jerome: It corresponds well to what ACES 1.x and K1S1 do.
  • Nick Shaw: ACES 1.x dark to dim reduces saturation going from theatrical to video. But that's going from 48 to 100 nits as well as dark to dim.
  • Christopher Jerome: In our candidates the slope does increase with brightness, which will look wrong if chroma is not increased too. And it will be hard to get a match.
  • Nick Shaw: So it seems when increasing brightness by just increasing reference white, chromaticity doesn't change.
  • Kevin Wheatley: Chris is describing a DRT with a single rendering, where the different output come from changing model parameters. That's ignoring highlight roll-off differences.
  • Christopher Jerome: It seems wise for different outputs to base them on where diffuse white lands. ARRI chose 100 and 200 nits for diffuse white.
  • Nick Shaw: 200 nit white comes from broadcasters wanting to match 203 nit white for HLG 75%.
  • Christopher Jerome: ICC profiles do something similar. ACES 1.2 HDR and SDR fits my ideas pretty well. I scaled ACES 1.2 to match mid grey to SDR. It's a close match.
  • Alex Forsythe: The issues people had with HDR / SDR match was the skews that come in where SDR is rolling off and HDR isn't. You don't see it in these ColorChecker images because they are in the mid range.
  • Daniele Siragusano: The ARRI ALF-2, RED IPP2 and ACES renderings which are all based on RGB curves are all very similar to each other in HDR, but quite different in SDR. In HDR the relaxed tone curve does very little, especially in the middle range. How is it we can consider any of them an HDR SDR match?
  • Christopher Jerome: This mid range area of the ColorChecker is about the peak in Pekka's plots where M needs most boosting. Here you need to maintain or even boost chroma as you add slope. As you go up, chroma needs to decline. More in SDR than HDR. And a CAM model won't give you that.
  • Alex Fry: Pekka mentioned in the chroma compression thread about matching HDR to the Rec.709. Shouldn't the HDR be the reference, and we look for the best possible SDR match to that?
  • Pekka Riikonen: That has been my goal. That's why the gamut compression maps colors darker so they appear more saturated. But it's an impossible task.
[Pekka showed a 3D plot of Luke's tweaked M]
  • Pekka Riikonen: It's a bit narrower.
  • Alex Fry: A bit less curvature?
  • Kevin Wheatley: I noticed that this plot comparing average to dark surround and ACES 1.2 was interesting.
  • Christopher Jerome: It's very close in the mid range. Apart from the highlight roll-off. It's similar to film curves and other classic plots.
  • Kevin Wheatley: If we could figure out where to put the adjustments, it would give another way of doing the highlight roll-off. People complaining about ACES 1.x complained of too much contrast. Was that because we were giving them a dark surround match, and they probably weren't in a dark environment?
  • Nick Shaw: ACES 1.x does include a dark to dim gamma adjustment.
  • Christopher Jerome: Should a theatrical version have the same mid grey (proportionally) as 100 nit video?
  • Kevin Wheatley: Josh Pines said they should be the same curve, just linearly scaled between 48 and 100 nits.
  • Alex Forsythe: That's what we did in ACES 1.x. It's the same curve fit to 48 or 100 nits.
  • Nick Shaw: The dark to dim gamma does change it a bit, which Josh suggested is more of a hindrance than a help.
  • Daniele Siragusano: Don't forget that screen size is also a factor in perception of projection. There are many factors. Even with the same viewing angle, a bigger screen looks brighter to you. A factor of two works quite well.
  • Nick Shaw: I just checked, and Cinema puts grey at 4.79 nits and video at 10.4.
  • Daniele Siragusano: Because of the tiny gamut nudge.
  • Alex Forsythe: We were cross referencing with film average picture levels and using LAD. Everything ended up at around 5 nits.
  • Kevin Wheatley: The next step is to evaluate modulating M with the cz parameter. It's more work to try adding a tone scale before going into the model. The intent would be to make it easier to match different outputs because the colors don't drift so much. We should still consider using what the model can give us, with different surround outputs.
  • Nick Shaw: I thought we agreed the effect of that was too strong for images rather than color patches.
  • Alex Forsythe: When I simulated different surrounds with grey surrounds in the image it seemed too strong.
  • Christopher Jerome: The plots in my last post show creating a match in S to the effect of the surround adjustments using scaling within the model space. Even the ACES 1.2 rendering is a pretty close fit on average.
  • Nick Shaw: Although that might just be because they are relatively muted colors in the middle brightness range, so the skews are minimal.
  • Christopher Jerome: This is S which fits. C and M don't fit at all.
  • Kevin Wheatley: Is the issue with dim/dark/average that there are only three? You can do any points in between. Finally, we should say there probably won't be a meeting next week, due to SIGGRAPH.

Meeting #112, July 26th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Alex Forsythe
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Pekka Riikonen
Troy Sobotka

Meeting Notes

  • Pekka Riikonen: I made another post after last week's meeting, adding a line to show the effect of mid grey changes. Changing exposure in JMh we only change J, but if we did it in RGB, M changes too. To keep saturation we should increase M as J increases.
  • Kevin Wheatley: We discussed whether, when talking about preserving colorfulness, we actually mean saturation.
  • Nick Shaw: But mid grey being higher means it has been compressed less at higher peak luminances, so doesn't multiplying M by tone-mapped J / J result in higher M already?
  • Pekka Riikonen: I talk about that in the second last paragraph of my post. Without chroma compression mid tones can end up less saturated at 1000 nits than for Rec.709. So you need a saturation boost >1 to match them. Tone-scaled J / J is always kept <1 so can never increase saturation. Allowing it to go above 1 would increase it. It's telling that at 1000 nits tone-scaled J / J can go over 1. But I do an adjustment of the chroma compression to keep it below 1 to get things into a similar range for the next step – the in-gamut compression.
  • Nick Shaw: Your plots don't show chroma compression, correct? So at 100 nits M looks wider because it isn't squeezed in yet.
  • Pekka Riikonen: Because saturation is M / J, increasing J without increasing M will decrease saturation.
  • Nick Shaw: Tone-scaling J then multiplying M by the tone-scaled J / J ratio seems like it should preserve M / J. We decrease J by tone-scaling and we decrease M as well by the same ratio.
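The algebra behind Nick's point, written out: with tone-scaled lightness J′ and M scaled by the same ratio, saturation (M over J, as Pekka defines it just above) is unchanged:

```latex
s' = \frac{M'}{J'} = \frac{M \,(J'/J)}{J'} = \frac{M}{J} = s
```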
  • Pekka Riikonen: We haven't decreased J. We have increased J. Middle grey values that we expect to come out a certain way have been lifted.
  • Kevin Wheatley: In the model looking at the first stage, when you multiply M by the same ratio, things stay nominally as they were. But then you map out to a display at a brighter intensity, which perhaps creates the discrepancy.
  • Pekka Riikonen: I did a test of increasing exposure in RGB, then reducing J to compensate. And you do get a higher saturation. Then I reduced RGB exposure and compensated by increasing J and the image got less saturated. Maybe Luke can answer, what does it mean in the model if you increase J without increasing M. What happens to the color?
  • Luke Hellwig: It gets less saturated.
  • Nick Shaw: I don't see that we are mapping to a brighter display after tone-scaling. The tone-scaling is what does that, and afterwards we are in display-referred JMh, and if the M / J ratio is the same there, the saturation is the same.
  • Kevin Wheatley: In SDR we have a rendering, and we want to match that in HDR, but that's affected by the fact that the tone-scale gives a different output for the same input in HDR.
  • Pekka Riikonen: I also tested not changing middle grey level with peak and then the match is very close.
  • Nick Shaw: Our rendering doesn't have a separate SDR and HDR. It's a continuum that we are examining two points on.
  • Kevin Wheatley: If we can find the parameters for these two, we can work them out for the in-betweens.
  • Luke Hellwig: Am I right that going into the model lightness is being divided by the exposure of the scene, and colorfulness is not? And coming out the same happens again?
  • Nick Shaw: We don't do anything like that in the model. It's already happened in RGB. It's scene-relative RGB, normalized so diffuse white is 1.0. We don't know the scene exposure.
  • Luke Hellwig: What are the white points going into and out of the model?
  • Kevin Wheatley: On the input a Y of 1.0 maps to J of 100. On the output it can be different display intensities. You would have to render it to check.
  • Nick Shaw: It's almost as if we have a scene that is a virtual display with a diffuse white at 100 nits. We have a tone-scale in the middle, but on the way out a J of 100 goes to 100 nits on any display.
  • Alex Fry: Early on I tried using metadata to use absolute scene values, but it caused more problems than it solved.
  • Kevin Wheatley: 1.0 in goes to J=100 and we scale that down to e.g. 80, to leave room for highlights.
  • Luke Hellwig: In these models lightness is a relative scale and colorfulness is absolute. If you don't use the same white luminance going into and out of the model, it could cause these saturation changes.
  • Nick Shaw: We do use the same reference luminance with Y=100 on the way in and out.
  • Pekka Riikonen: Isn't the point of the scaling step to scale M to the range of the display? Once we tone-scale J is it right to call it lightness any more?
  • Kevin Wheatley: My question is what differences we apply to the model on the way out for different devices?
  • Nick Shaw: I don't think we do anything different for different targets on the way out. We just go out display-referred JMh to display-referred XYZ and then encode that for the display.
  • Kevin Wheatley: If we did nothing and had some JMh values we wanted to map to a display, the J of 100 needs to map to whatever the diffuse white of the display is.
  • Nick Shaw: You mean use a reference white on the way out that's derived from the tone-scale?
  • Kevin Wheatley: If for SDR you map diffuse white to say 80 nits, for a 1000 nit display you don't map it to 80 nits. You map it to 160 or 200 nits, because you assume your environment is brighter.
  • Alex Fry: Currently that comes from the tone-scale. After the tone-scale, whatever has a J of 100 comes out at 100 nits on all displays.
  • Kevin Wheatley: If a pixel is at J of 100 in SDR, is the mapping to a higher value for HDR done by the tone-scale or something else?
  • Alex Fry: Only the tone-scale.
  • Nick Shaw: If you changed reference white on the way out you would double scale. Would using a different reference white on the way out change more than exposure?
  • Luke Hellwig: It would change saturation, so I don't think you should do that. If reference white in and out are the same, I don't think that's the cause of your saturation issue.
  • Kevin Wheatley: So if we feed grey scale through our rendering for SDR and HDR, do we get something that looks familiar?
  • Nick Shaw: I hope so!
  • Kevin Wheatley: Ideally you want a novel image so you have no preconceptions about what it looks like. Does it feel right on different displays, before we worry about any other aspects?
  • Pekka Riikonen: All the differences are driven by the tone-scale.
  • Thomas Mansencal: Just being achromatic is a big change to a familiar image.
  • Pekka Riikonen: It's clear that saturation reduces as we increase exposure. Ideally we could take a JMh value and ask the model what is the M value for an appearance match with a new J.
  • Thomas Mansencal: At higher luminance you will automatically get increased colorfulness from the Hunt effect.
  • Nick Shaw: Doesn't the model try to model the Hunt effect?
  • Pekka Riikonen: If you just map the image to a brighter display, it should look more colorful. But the tone-scale does the exposure change. So a color patch that should look the same is lighter, so not more colorful.
  • Thomas Mansencal: It depends how you compensate. How you need to compensate will vary per display, depending where you map.
  • Pekka Riikonen: I'm working on a new version I call "Hunt compensated" that scales M depending on peak luminance.
  • Nick Shaw: I know we need more than the simple M scaling to make the image look as we think it should, but with just that do we get an image that "matches" at different peaks, even if it doesn't look great?
  • Pekka Riikonen: If you just do tone-scaled J / J you will get a match, or even a more colorful image. Because the scaling curve will go over 1. We want to only compress.
  • Alex Fry: Does the simple compression ever boost?
  • Pekka Riikonen: For Rec.709 it stays way under one. Even the 1000 nit curve already goes above one. 2000 nits goes way above it. I am scaling the curve to keep it below one.
  • Nick Shaw: But then you need a saturation boost to look right. Isn't the model saying that an M gain >1 is what is needed for a match?
  • Pekka Riikonen: Indeed. For an appearance match in HDR we have to let it go above one.
  • Christopher Jerome: I did a lot of tests, and everything said today is exactly right. I will make a post. I think we're disrupting the intention of the model by changing J and then going out to 100. In SDR 1.0 maps to Y of about 45. The limited 1000 nit is just under 100 and the 1000 nit is just over 100. If they are pinned to the same grey, the tone scales have to change if they are mapping differently. But I think it's very easy to figure out what the model's predicted behavior is versus what it should be, and then we can adjust accordingly. I'll post my tests of different J and M for output XYZ, to get an idea of the mapping required. I used a stock CAM model. I think the JMh should be identical regardless of display. It's the XYZ that should change. If they are pinned to the same Y grey, the slope has to change if the peaks change. I feel some considerations have been missed.
  • Kevin Wheatley: Ignoring image rendering, the model should tell us how a given display code value will be perceived. So we should be able to build a mapping between the two. We're not currently doing that because we use the same parameters for all outputs.
  • Pekka Riikonen: The transform does scale with peak and gamut. We have a model of the different displays built into the transform.
  • Christopher Jerome: I don't think we're taking full advantage of the model. In my post I'll show HDR limited to 100 nits in sRGB with a ColorChecker, and I think it should be easy to match them. I've also analyzed how M changes in other renderings, ACES 1.2 and ARRI Reveal. They produce similar curves to Pekka's.
[Pekka showed the curve for tone-mapped J / J for 100 and 1000 nits]
  • Pekka Riikonen: At 1000 nits it goes over 1, and for an image that needs no gamut mapping or highlight roll-off this might produce a match. But we don't use that curve, because it goes above one at 1000 nits and very far above at higher peaks.
[Pekka showed the actual scaling curve from his new "Hunt compensation" version]
  • Pekka Riikonen: I concluded we do need a saturation boost. It makes sense to me, because if we did an exposure lift in RGB, J would be higher and so would M.
  • Alex Fry: I'm still worried we may end up pushing the spectral locus out beyond where it sits in 3D JMh.
  • Christopher Jerome: Pekka your curve doesn't get to zero at the top.
  • Pekka Riikonen: Yes. But the in-gamut compression then takes that to zero. And the gamut mapper ensures it hits white.
  • Christopher Jerome: I think you might be able to make a curve like that in one step if it wasn't tied to the tone-map.
  • Pekka Riikonen: My original chroma compression had a controllable engineered curve. I changed to this for simplicity.
  • Nick Shaw: We moved from ZCAM to Hellwig because it was simpler. But we need to be careful about then layering too many complex fixes on top of that.
  • Pekka Riikonen: We didn't get as far down the line with finessing ZCAM.
  • Kevin Wheatley: We agreed on middle grey increasing with peak, although some think it should be constant.
  • Pekka Riikonen: It would be useful for the purposes of chroma compression to have a version of the curve where middle grey stayed the same. That's what I'm doing in my new version and it gives a pretty decent match.
[Alex showed a 3D visualization of a color wheel at 0.18 tone mapped with and without chroma compression]
  • Kevin Wheatley: If you just turn up the brightness of a display, would you expect the colors to match?
  • Nick Shaw: Does a Rec.709 primary at different brightnesses look like "the same color"?
  • Alex Fry: I wouldn't expect Rec.709 values to be pushed outside the Rec.709 hull.
  • Kevin Wheatley: A few versions ago we had a reasonable SDR / HDR match. So one dimension of improvement from there is simple. Another is a better match. If we end up with something similar but with worse complexity it could be a dead end.
[Pekka showed his new chroma compression code]
  • Kevin Wheatley: Some of your modifications only apply to a portion of the image. Is there a risk of banding?
  • Pekka Riikonen: I hope not, but we should check.

Meeting #111, July 19th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Alex Forsythe
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas
Carol Payne

Meeting Notes

  • Kevin Wheatley: Pekka isn’t here this week, but has made some posts. There was discussion about chroma compression and another about what happens near neutral, where M doesn’t end up at zero. Also discussion of polar notation.
  • Nick Shaw: Daniele told me he has issues with working in polar form. I thought polar was just a different way to represent ab, and scaling M is the same as scaling ab (see the sketch below). He pointed out that near neutral a small change in ab can cause a large h change. But we keep h constant, so whatever it is, we go back out at the same hue. Daniele said “as long as you leave hue alone you’re fine”. So maybe we’re ok. But he was against “hue specific tweaks”. We do use h to drive other things, but mainly in gamut compression, which shouldn’t affect those near neutral values. I also read Pekka’s post on saturation recently, and had some thoughts. It makes sense that 100 nits will be the widest spread, as it’s compressed most in J. But chroma compression should bring that back in. Scaling M by the ratio J has been scaled by should keep the angle in JM from the origin the same. It’s similar triangles.
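A quick check of the equivalence Nick mentions, as a minimal sketch: scaling M with h held constant is identical to scaling a and b directly:

```python
import numpy as np

a, b = 0.8, -0.3
M, h = np.hypot(a, b), np.arctan2(b, a)  # polar form of (a, b)

k = 0.5  # scale M, hold h constant
a2, b2 = k * M * np.cos(h), k * M * np.sin(h)
assert np.allclose((a2, b2), (k * a, k * b))  # same as scaling a and b
```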
  • Kevin Wheatley: Roughly, because J is perceptual. But if it was right that’s all you would need to do.
  • Nick Shaw: I thought keeping that angle the same would be what you want, but then realized that is what we are doing (in the first step), but Pekka then has more steps to make it look right.
  • Kevin Wheatley: We’re compressing a lot in J, but colorfulness doesn’t have the same range.
  • Nick Shaw: M does increase with J, so you have large M values, which is why we have to scale them down when we’ve scaled J.
  • Alex Fry: Pekka’s plot just shows the shape of the bottom part of the curve.
  • Nick Shaw: We see lines not points, so it’s hard to judge. Pekka says the issue is caused by the change of mid grey level with peak luminance. If we didn’t vary grey, colors with the same J as mid grey would tone map to the same place for all peaks.
  • Kevin Wheatley: Is colorfulness near mid grey significant? Or is it just what we balance around?
  • Nick Shaw: What would a colour wheel with a J of 34 for all colors look like? Would it seem the same brightness?
  • Luke Hellwig: It would be affected by the HK effect, so more chromatic patches would look brighter. In your tone mapping, what is the highest J you start with? What luminance has a J of 100?
  • Nick Shaw: It’s scene relative. Diffuse white maps to J=100 on the way in.
  • Kevin Wheatley: Any J above 100 is “a specular”.
  • Luke Hellwig: So tone mapping reduces the range of J values?
  • Nick Shaw: For SDR we tone map J down to roll off to 100 which maps to 100 nits on the display.
  • Alex Fry: At 1000 nits J is a max of 284 after tone mapping.
  • Nick Shaw: Which is the J equivalent of 1000 nits, where J=100 is 100 nits.
  • Alex Fry: An input of 65504 (half max) produces J of 3722.
  • Nick Shaw: 222 (ACEScct 1.0) gives J=993
  • Kevin Wheatley: Red at 65504 produces J of 3312 which is tone mapped much lower, but has a high M. And that gets brought in by the ratio J was reduced by.
  • Alex Fry: I think Pekka is saying if you have a color wheel with a J of 34 (mid grey J) and put that through the 100 and 1000 nit renderings, do they match? I’d expect them to be perceptually equal, with different M because J is different. M should be higher because J is.
  • Kevin Wheatley: I was thinking M should be lower, because a brighter image will look more colorful.
  • Alex Fry: Should that cancel in JMh?
  • Kevin Wheatley: Should the same M appear equally colorful at different J?
  • Luke Hellwig: What we see as perceptually the same is a saturation match, not a colorfulness match.
  • Nick Shaw: Is saturation the ratio of M to J?
  • Luke Hellwig: Yes.
  • Nick Shaw: And when we scale M by our J compression ratio we do preserve that. But it’s not enough.
  • Luke Hellwig: I don’t think the colorfulness scale is perfect across different J values.
  • Kevin Wheatley: I think Pekka may be saying that the tone scale varies more than you think between different peaks.
  • Christopher Jerome: Would it be useful to work out the M values of e.g. Rec.2020 primaries at 100 and 1000 nits? Changing scene exposure increases M for the same source.
[Alex showed the visual result of varying J up and down on a color wheel without changing M]
  • Alex Fry: The appearance changes a lot, unless we scale M as well.
  • Luke Hellwig: Colorfulness is not like brightness. J is divided by the diffuse white point. M is not. Chroma is the one that is.
  • Kevin Wheatley: We must remember that our incoming scene values don’t get scaled, but our output values do. What Alex was showing was scaling for different displays, not an exposure ramp on input.
  • Alex Fry: I believe Pekka’s chroma compression first scales M down based on the tone scale, which is display specific. But the next chroma compression step is not, though the input to that has changed with the display. Is that the right order?
  • Nick Shaw: Pekka refers to the initial scaling as what’s needed to get the M values from the “mushroom” to the right range for the chroma compression.
  • Christopher Jerome: Are the values for surround we are using for the scene side appropriate?
  • Kevin Wheatley: We use dim in and out, so as long as it’s the same at both ends it is the tone curve that controls contrast, not the model. People haven’t complained about our current contrast.
  • Nick Shaw: We’re not using the model to do dark to dim. We may (or may decide not to) do something ourselves for dark to dim.
  • Christopher Jerome: I just wondered if changing surround would affect the relationship between the different components. But it may be slight.
  • Nick Shaw: The curvature of the bottom part of the gamuts changes with surround.
  • Christopher Jerome: So does using a different one give any benefit?
  • Luke Hellwig: I think it’s better not to use the surround effect from CAM16. I think it’s unproven.
  • Christopher Jerome: Regarding zeroing of M, what is the precision of the white point going in?
  • Nick Shaw: I think it’s calculated by running (1, 1, 1) through the RGB to XYZ matrix to calculate reference white (see the sketch after this exchange).
  • Alex Fry: We get a small M, but it’s weird it’s not zero.
  • Nick Shaw: One constant (k4) is a very small number raised to the power of 4, so it’s extremely small.
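The white point calculation Nick describes, sketched with the colour-science library (0.4.x attribute names assumed; the colourspace choice here is illustrative):

```python
import numpy as np
import colour

# (1, 1, 1) through the RGB-to-XYZ matrix gives the reference white;
# in principle this XYZ should come out of the model with M exactly zero.
matrix = colour.RGB_COLOURSPACES["ACES2065-1"].matrix_RGB_to_XYZ
XYZ_w = matrix @ np.ones(3)
print(XYZ_w, colour.XYZ_to_xy(XYZ_w))
```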
  • Kevin Wheatley: My code calculates and stores that at double precision. Thomas is asking if we are doubling up on the Hunt effect by increasing M with J. What should we be trying to preserve?
  • Nick Shaw: Just preserving saturation by scaling M by the J compression ratio produces skin tones we don’t like.
  • Kevin Wheatley: So the model is not tolerant to the s-shaped compression as much as we’d like.

Meeting #110, July 12th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Chris Clark
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Pekka and Alex have updates for us.
  • Pekka Riikonen: Last week we discussed if we could remove the saturation boost from the chroma compression and just compress less. I've made it work for 100 nits. It's a close match to the previous version. Only cyan is really different. It doesn't yet work for HDR. My assumption for scaling the curve based on peak luminance didn't work. I started again, and I now have a better HDR / SDR match, but still not as good as the previous one. But I think it can be done. My scaling of the curve based on peak luminance scaled it down too much.
[Pekka showed a plot of the scaling curve from an ACEScct ramp]
  • Pekka Riikonen: It's a plot of tonescaled lightness divided by original lightness (see the sketch below). The toe of the Daniele curve desaturated the shadows too much, so I removed the toe. I now have maths that sort of works for HDR. Ideally the lower part wouldn't change, but it does a bit. The curve gets wider with increasing peak, and the slope gets less. At 10000 nits the curve goes above 1.0, so it would increase M. We could use a different controllable curve, rather than the tonescale.
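A minimal sketch of how such a ratio plot is produced over an ACEScct ramp, using the colour-science library. The tone scale here is a toy stand-in, not the Daniele curve, and lightness is approximated by linear value rather than J:

```python
import numpy as np
import colour

toy_tonescale = lambda x: 1.2 * x / (x + 0.12)  # placeholder, not the Daniele curve

cct = np.linspace(0.05, 1.0, 512)              # ACEScct code value ramp
lin = colour.models.log_decoding_ACEScct(cct)  # back to linear
ratio = toy_tonescale(lin) / lin               # "tonescaled / original"
print(ratio[:4], ratio[-4:])
```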
  • Kevin Wheatley: I can see a kink in the shadows of that curve. It's not at the ACEScct break point.
  • Nick Shaw: Is this what Kevin mentioned last week, an interaction between the model's non-linearity and the tone curve producing a kink?
  • Kevin Wheatley: That's what I was thinking might happen, but hoped we didn't.
  • Pekka Riikonen: At 100 nits this is just the ratio of pre/post tonescale J. Above 100 nits it's scaled with some additional maths.
  • Kevin Wheatley: It at least seems to be the right shape of curve. Is it right that it goes concave as it increases? Or is it just that you don't yet have the right scaling factors as luminance increases?
  • Pekka Riikonen: I suspect the old curve was going a lot higher, but the saturation boost compensated. This seems to desaturate yellows too much in HDR.
  • Kevin Wheatley: Maybe we don't want the whole curve lifting, but we do want the end point lifting, creating a flatter shape, which might be easier to model.
  • Nick Shaw: That curve is just driven by ACEScct luminance, so isn't yellow compressed more because its luminance is higher, so it's further along the curve?
  • Pekka Riikonen: We don't want to lose the yellows so they can't be recovered. I see this as a technical step to get things to a manageable range so the in gamut compression can create the look.
[Pekka showed a plot of the curves for a ColorChecker ramp]
  • Kevin Wheatley: One curve is further out than all the others. Which is that?
  • Pekka Riikonen: It's the brightest neutral.
  • Kevin Wheatley: The curve should be smooth, so we need to test for kinks.
[Pekka showed the plot with the dominant wavelength image]
  • Nick Shaw: Interesting that the different wavelengths come out of black with different shapes, some turning down before they curve up.
  • Kevin Wheatley: So is it that we don't have the right parameters for the tonescale for the behavior we want? Or is the tonescale not what we want for the colorfulness, and we need a different but similar curve?
  • Pekka Riikonen: Because the tonescale is different for different peaks, the M scaling is different.
  • Alex Fry: I've been working on the gamut compressor, and I've made a version which can reach to different primaries or the spectral locus.
[Alex showed various 3D plots of the compression vectors]
  • Alex Fry: I am using the iterative gamut hull solve.
  • Pekka Riikonen: If you invert with the approximation, you have colors on the approximation hull that are just inside the real hull. So you can never get the real hull back.
  • Alex Fry: And values outside it can explode with the approximation.
[Alex then showed a CIExy plot, showing the boundary inverted to the selected gamut]
  • Nick Shaw: If your compression reaches to the locus, that means you would have to put an input color way out there on the locus, particularly for greens, for it to be on the display gamut boundary.
  • Alex Fry: That's an argument to use AP1. But this is just the gamut compressor; there is more between the input and that point. I have also been trying to modify Thomas's Cornell box, so J is constant for each image. I've not got there yet.
  • Kevin Wheatley: So you have a gamut compressor that works as you want. We just don't know what the parameters should be.
  • Alex Fry: My thinking is that if you have a red wall with more saturated red text on it, you should be able to read that text on all displays.
  • Nick Shaw: We don't guarantee that the same highlight range can be seen on all displays. The "value hitting the roof" varies.
  • Kevin Wheatley: I think you only want that up to a point. Also how do you "make room" in a smaller gamut?
  • Alex Fry: Currently the threshold where compression begins is still relative to boundary of the limiting gamut at that J and h. The display gamut tapers to a peak, but I have a bowl that keeps going out like a cone to J=100.
  • Kevin Wheatley: For HDR, J doesn't mean the same thing, because you're mapping it. J will go above 100 for 1000 nits.
  • Alex Fry: I need to check I'm dealing with things correctly for HDR.
  • Pekka Riikonen: You have the limitJmax value you can use, which is max J for that gamut and peak.
  • Alex Fry: The input to this is the output of the chroma compressor. So what it needs to reach to is whatever volume the chroma compressor outputs.
  • Nick Shaw: The volume will change for different targets, because the chroma compressor uses the tone curve, which varies with peak luminance.
  • Alex Fry: Ideally you reference it to input values, so positive values in some input gamut, maybe AP1, will land on screen.

Meeting #109, July 5th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Christophe Brejon
Daniel Brylka
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Pekka has added a breakout version of the chroma compressor and posted a description of how it works. And Alex added back in the iterative gamut boundary solve, to compare with the approximation.
[Kevin showed Pekka's post]
  • Kevin Wheatley: The first step is ratio based M scaling, modified with a saturation controlling parameter K. The result is good but perhaps the implementation is over complex.
  • Pekka Riikonen: Because the tonescale depends on peak luminance, we need to work out how to match colorfulness between SDR and HDR. I found the best way was to get the scaling curve to match for mid grey and below.
  • Kevin Wheatley: The top curve almost levels off, but the lower ones dip, particularly the bottom one. Is that the desired effect?
  • Pekka Riikonen: SDR has to desaturate faster than the 4000 nit version.
  • Kevin Wheatley: If I add a 10000 nit curve, it doesn't follow the others.
  • Pekka Riikonen: Same for the tone scale. Above 5000 nits mid grey drifts.
  • Kevin Wheatley: That may not be relevant for today's displays, but I wonder why?
  • Pekka Riikonen: Daniele pointed out that it's possible for the 10000 nit curve to go under the 100 nit curve, because of r_hit limitation.
  • Kevin Wheatley: Pekka shows plots of the result of the different stages at 50% saturation. Then images at the different stages. Everyone seems pleased with the result, so we want to preserve something close to this. Finally there is an illustration of the areas in the dominant wavelength image that are pushed outside the spectral locus by the saturation boost.
  • Pekka Riikonen: Not all tonescaled M values are smaller than the starting values because it's a global boost.
  • Kevin Wheatley: Then there was the post linking to other papers doing similar things.
  • Alex Fry: I put out a version which bolts in the old iterative solver for the gamut compressor. And now I've merged Pekka's breakout version, so v39 now has both. It has none of the focus controls, so is just horizontal compression.
[Alex showed the results of bypassing the different stages]
  • Pekka Riikonen: If we could leave the highly saturated colors untouched by the saturation boost, they wouldn't be pushed out. But adjusting colorfulness based on colorfulness, and still being invertible, is hard. I also tried a version which doesn't use the toe of the tonescale to calculate the M scaling. That retained more shadow and mid-tone saturation. But I still needed to boost saturation to make it look good. Increasing the toe didn't help either.
  • Kevin Wheatley: How much of your work is to make it "look nice"? Could we do that separately?
  • Pekka Riikonen: I've always adjusted based on looking at the pictures.
  • Alex Fry: Your saturation boost is fighting the desaturation. Do we ever want something to end up more saturated than it started?
  • Pekka Riikonen: Perhaps in the shadows.
  • Kevin Wheatley: The saturation boost is to counter the desaturation of the in gamut compression. Can we modify that so it doesn't need so much boost?
  • Pekka Riikonen: With just scaling the shadows are quite dull. Removing the toe flare makes that a bit better. The in gamut compression only affects the bottom end a bit. It's mostly the top end.
[Alex showed the effect of the saturation boost and the difference between the iterative gamut solve and the approximation]
  • Nick Shaw: The approximation uses a straight line at the top. Is the inaccuracy noticeable at all hues or just some?
  • Alex Fry: It seems hue dependent.
  • Pekka Riikonen: That's more difference than I would expect.
  • Alex Fry: There may be an error in how I put the iterative solve back in. I need to check more.
  • Nick Shaw: We need to ask why the saturation boost is needed. What's not working right that we need it?
  • Pekka Riikonen: That's the big difference from the old chroma compression which had more control (v31 and older) and didn't need as much boost. It was very complicated to give that level of control.
  • Nick Shaw: But still invertible?
  • Pekka Riikonen: Yes. The complexity was the issue. There were additional steps to control the scaling curves to match SDR and HDR. But with the old one, even taking the toe right down, I couldn't get skin tones as good as the current one. I always judge skin tones.
  • Christopher Jerome: I notice the SDR curve has a very different shape in how it dips.
  • Nick Shaw: Isn't that because it needs a faster path to white than HDR?
  • Pekka Riikonen: I've looked at exposure ramps in HDR, and I thought the SDR / HDR match was pretty good. Going back a year I made a saturation curve based on the derivative of the tone scale. But I needed more control, which is why I made this one.
  • Kevin Wheatley: It does look as if the lines cross, so a lower intensity line isn't always below a higher one.
  • Pekka Riikonen: It's scaled, so it may be. That's why it's not possible with this tonescale to have mid grey and below match exactly.
  • Kevin Wheatley: Might a different tonescale fix that?
  • Pekka Riikonen: Perhaps. We could make the curve do whatever we want at each peak luminance. We can decide the parameters rather than scaling them with n.
  • Nick Shaw: But then it couldn't be applied to arbitrary peak luminance. I think it needs to be an algorithm.
  • Kevin Wheatley: But maybe we don't have the right algorithm. If we didn't have the saturation boost, users might need to add a boost to taste.
  • Pekka Riikonen: The in gamut compression parameters are picked to work with boost. So if we remove the boost we can reduce the compression. I've adjusted for skin tones.
  • Kevin Wheatley: Or we define a boundary for boost, and reduce boost as we approach that. Maybe the locus.
  • Alex Fry: Or the original value. I'm dubious about increasing beyond the original value.
  • Pekka Riikonen: Perhaps the gamut boundary could be the limit. I did try a version like that, but you ended up with a sharp cusp inside the gamut. Smoothing it made a rounded shape that was almost the same as before.
  • Kevin Wheatley: That would also make it gamut dependent.
  • Pekka Riikonen: I think of the chroma compression as the other side of the coin to the gamut mapper. Together they produce a sigmoid, saturating values close to achromatic and reducing as it goes higher.
  • Alex Fry: Could we use the PowerP curve on saturation, with the pre-compressed value as the limit?
  • Pekka Riikonen: Then how do you invert?
  • Nick Shaw: You normalize at one point in the chroma compression. Could you apply a PowerP curve in that state?
  • Pekka Riikonen: Possibly.
  • Christopher Jerome: Is the saturation boost the same at all luminances?
  • Pekka Riikonen: No. It's driven by J.
  • Kevin Wheatley: It's a scale factor for Hellwig M.
  • Nick Shaw: The chroma compression only changes M. It doesn't touch J, and we always hold h constant.
  • Christopher Jerome: It does look like blue gets darker, and maybe skews a little.
  • Pekka Riikonen: That could be due to clipping.
  • Nick Shaw: Or because although J is constant, J is not the same as Y. We know we can have positive J with negative luminance.

Meeting #108, June 28th, 1pm PT

Attendees

Kevin Wheatley
Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon
Alex Forsythe
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We have two things to show from Alex and Nick.
  • Alex Fry: I have added new diagnostic modes which each do one step of the DRT, so we can break it into one node per step. This Nuke script contains the DRT both as a single node and broken down into steps. This makes it easier to turn bits on and off and play with things by intercepting the data at various stages. Nick asked on ACES Central about the extra input the tonemapping function required. It turns out to be redundant, because it is not used.
  • Nick Shaw: I have used the broken-down version and made a copy where I replace the tonemapping node with a series of my own separate nodes. I wrote an achromatic version of Hellwig J to/from luminance which removes all the stuff which is redundant for achromatic. It becomes two lines of code for each direction. It needs no parameters except L_A, Y_b and surround. Everything else cancels out for achromatic, and it becomes a 1D function. I pre-calculate all the viewing condition parameters in the init() function rather than per pixel. It can replace the full JMh to XYZ and back used for tonemapping, to apply the Daniele curve in the luminance domain. I also added nodes to divide tonemapped J by un-tonemapped J and scale M by that. It exactly matches the simple chroma compression in DRTv37.
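A minimal sketch of the achromatic reduction Nick describes, assuming the CAM16-style post-adaptation non-linearity. For achromatic input R = G = B the cone weights cancel in the A / A_w ratio, so each direction collapses to a 1D function; F_L, c and z stand for the viewing-condition terms (derived from L_A, Y_b and surround) that would be pre-computed in init(). Names and plumbing are hypothetical:

    import numpy as np

    def g(Y, F_L):
        # CAM16-style post-adaptation non-linearity (magnitude only)
        x = (F_L * Y / 100.0) ** 0.42
        return 400.0 * x / (x + 27.13)

    def Y_to_J(Y, F_L, c, z, Y_w=100.0):
        # mirrored about zero to preserve negatives, per the discussion
        return np.sign(Y) * 100.0 * (g(abs(Y), F_L) / g(Y_w, F_L)) ** (c * z)

    def J_to_Y(J, F_L, c, z, Y_w=100.0):
        A = g(Y_w, F_L) * (abs(J) / 100.0) ** (1.0 / (c * z))
        x = 27.13 * A / (400.0 - A)          # invert the non-linearity
        return np.sign(J) * (100.0 / F_L) * x ** (1.0 / 0.42)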
  • Kevin Wheatley: We discussed last week, what if you just took input Y without going to JMh and back?
  • Nick Shaw: That wouldn't be good to tonemap because our problem colors have negative Y, so would end up at zero.
  • Kevin Wheatley: We can now test that. I am concerned about going in and out of a hyperbolic function, applying Daniele's s-curve and going back into a hyperbolic. Combining multiple s-curves may produce an undesirable tonescale. If Daniele's curve maps to a region of the hyperbolic which doesn't do much, it may not be giving us anything much. A simple system may be more predictable.
  • Nick Shaw: But the J we want to tonemap isn't just a 1D transform of the original Y. It's affected by the original colorfulness. So colors with negative Y still have positive J.
  • Kevin Wheatley: On a log-log plot of the hyperbolic there is a region where it's almost a straight line, so it becomes a log contrast adjustment, so maybe we could eliminate some issues by simplifying. Just for image rendering. Then we use the model for gamut mapping and everything else.
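Kevin's near-straight-line observation is easy to sanity-check numerically; a small sketch using the CAM16 non-linearity constants, with the input scaling omitted for simplicity:

    import numpy as np

    # Local log-log slope of A(x) = 400 x^0.42 / (x^0.42 + 27.13).
    # Where the slope is roughly constant, the hyperbolic behaves like a
    # power law, i.e. a plain log contrast adjustment.
    x = np.logspace(-2.0, 2.0, 9)
    A = 400.0 * x ** 0.42 / (x ** 0.42 + 27.13)
    print(np.gradient(np.log(A), np.log(x)))   # ~0.42 at the low end, falling slowly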
  • Nick Shaw: My 1D function does not use the clamped spow function. It mirrors to preserve negatives, although the Daniele function clamps anyway. You wouldn't want to mirror the Daniele curve to extend it below zero. You would want to continue in a straight line.
  • Kevin Wheatley: Where an image has negative Y, how does J become positive?
  • Nick Shaw: I think it's the blue contribution to J which is not the case with Y. Which I see as why we're using J to tonemap.
  • Kevin Wheatley: But we've already distorted the LMS space to do image rendering. 
  • Alex Fry: Our distortions were to straighten hue lines. I think J has always stayed positive.
  • Luke Hellwig: Nick, how did you come up with your Y to J formula?
  • Nick Shaw: It's not just Y to J. It only works for achromatic, so I removed everything which cancels out for achromatic values.
  • Alex Fry: The model keeps J positive by design, even if Y is negative?
  • Luke Hellwig: Yes, because of the hyperbolic nature of that function.
  • Kevin Wheatley: Nick has shown the code can be simplified. Can it be modified to help with the chroma compression?
  • Alex Fry: We need to re-implement Pekka's chroma compression as raw nodes so we can play with its parameters and look at other options.
  • Nick Shaw: The simple chroma compression just scales M by the ratio J has been scaled by. So there is currently no path to white or black.
  • Alex Fry: That means it doesn't look good for normal stuff like skin-tones. We should now be able to play with Pekka's chroma compression to prevent the ballooning at the bottom.
  • Kevin Wheatley: We can also use the broken-down version to look at the gamut mapping and try other options.
  • Pekka Riikonen: I'm writing a post about the chroma compression, to explain it. I could make the steps of the chroma compression extra diagnostic modes, so they can be examined in the broken-down version. I have discovered that the ballooning is due to the global saturation boost that is my third step. Darker colors need a saturation boost. It pushes stuff that is already at the edge out of the locus.
  • Kevin Wheatley: So we need a modulation function on that saturation boost.
  • Pekka Riikonen: I think chroma compression is an important step, but it doesn't have to be done the way I have done it.
  • Alex Fry: The image after chroma compression is our image that we want to map to different targets.
  • Pekka Riikonen: How chroma compression changes with brightness affects how easy the image is to grade. OpenDRT had a very simple chroma compression step, which I found made it hard to grade through.
  • Alex Fry: My example script uses the locus gamut compression, but all the other options are available.
  • Kevin Wheatley: I feel that if the first stage is done well, the final stages should become simpler.
  • Alex Fry: I want to reintroduce the iterative gamut hull solve, to see what effects are coming from the approximation. The AP1 compression mode needs to be made J dependent.
  • Christophe Brejon: What stage do you feel you are at? Is it just code cleaning, or are there big rendering changes to come?
  • Alex Fry: I think things will only change for the edge cases. Normal colors have been pretty stable for a few versions. We are using v28 on a real show. The only issues are when people want to hit the corners. But it's better than ACES 1.2.
  • Christophe Brejon: Are you interested in feedback from CG vendors?
  • Alex Fry: Absolutely! I think with the old transforms people weren't shocked when they went from SDR to HDR, but for animation, you could get a big shock looking at something in HDR that was authored in SDR.

Meeting #107, June 21st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon
Daniel Brylka
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias

Meeting Notes

  • Kevin Wheatley: I've kept investigating the parts of the DRT. I wondered if it was possible to apply the Daniele curve more directly to J. Partly for efficiency. But also because by zeroing out M before going back to XYZ, you lose the energy from the colorfulness, so get back a lower Y. That Y is mapped through the Daniele curve, and we calculate a new J value from that. When we then do the ratio of J values and multiply M by that, we are desaturating highly saturated colors, which may or may not be desirable.
  • Nick Shaw: If you did get back the original Y value you could have just used the original, without this back and forth. Are we not deliberately calculating the Y of an achromatic which would produce the same J as our color? And that's what we tone map.
  • Kevin Wheatley: That's the question. Is that what we intend to do, or an accidental side effect? It may be useful, but there is a difference in how the function is applied, depending on M.
  • Alex Fry: Passing back through the model was always a temporary hack to get into the domain the tone curve was designed for.
  • Kevin Wheatley: At minimum we don't need to use the full inverse. We can simplify it to a 1D function because we know M is zero.
  • Nick Shaw: I feel it would be hard to make a matching version of the Daniele curve to apply directly to J, without just wrapping the inverse within the Daniele function.
  • Kevin Wheatley: The Daniele curve is s-shaped in log-log. So is the non-linearity in the model. Could the combination of the two cause wobbles? Could we do something simpler?
  • Nick Shaw: Are there not already multiple non-linearities in the model? It applies non-linear curves to LMS, which won't be at the same level, even for achromatics, and then takes a weighted sum of those. I have plotted a Y to J curve, and it didn't seem to have any oddities.
  • Luke Hellwig: The iso-J plane in JMh is not isoluminant. I've always wondered if that's good or bad. Is CAM16 J a better predictor of achromatic response than luminance? I don't know, but they aren't the same. What is the reason for the forward and inverse?
  • Kevin Wheatley: We first calculate JMh of the original color. We then zero M and h and invert to calculate luminance from J. Then we apply our tone curve to Y and go forward through the model to find a tone mapped J, and the M value is multiplied by the ratio of new and old J.
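Schematically (reusing the achromatic Y_to_J / J_to_Y sketch from the meeting #108 notes above, with tone_scale() as a stand-in for the actual Daniele curve):

    def tonemap_JMh(J, M, h, F_L, c, z, tone_scale):
        Y = J_to_Y(J, F_L, c, z)          # zero M and h, invert to luminance
        Y_ts = tone_scale(Y)              # apply the tone curve to Y
        J_ts = Y_to_J(Y_ts, F_L, c, z)    # forward again: tone-mapped J
        M_ts = M * (J_ts / J) if J != 0.0 else 0.0   # scale M by the J ratio
        return J_ts, M_ts, h              # h is preserved throughout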
  • Nick Shaw: The intent is that the net result is to tone map J. But our function works in absolute luminance.
  • Kevin Wheatley: We are preserving h, and scaling M by the same as J.
  • Luke Hellwig: Did we try applying the Daniele curve to the original XYZ?
  • Nick Shaw: We didn't, but isn't that then just XYZ curves instead of RGB, and would have skews?
  • Kevin Wheatley: The front end is a bit subjective. The curve is changing the dynamic range, and we then estimate M and h to preserve what the original color was.
  • Nick Shaw: It's about a hue preserving tone scale.
  • Luke Hellwig: You could apply the curve to Y, calculate J, and match that with M and h values from the original XYZ.
  • Kevin Wheatley: There could be a color shift depending on reference white.
  • Luke Hellwig: You could apply the curve just to Y and set X and Z to make it achromatic.
  • Nick Shaw: If we had the extra diagnostic modes to split the DRT into one node per step, it would be easy to try this kind of experiment.
  • Alex Fry: I've always thought we want the colorfulness to be reflected in the J value that we tone map.
  • Kevin Wheatley: A more extreme option is to map Y through the Daniele curve and pretend it's J, with a bit of scaling.
  • Alex Fry: Highly colorful values could look very different.
  • Kevin Wheatley: You could still use it to compute a J ratio. It's just a different look, and simplifies things.
  • Nick Shaw: The uniform color space is something else to add to the mix.
  • Kevin Wheatley: I did test that, and it is different, but not automatically better. My idea was that if we are manipulating J and M, it could be better to do so in a uniform space.
  • Luke Hellwig: I am skeptical of using UCS. It's only really applicable to calculating color differences. For large values it's not uniform.
  • Alex Fry: I've quickly built a comparison between Y and the result of going back and forth. The near UV one in Thomas's image is black in Y. Some are similar. More colorful ones are more different.
  • Nick Shaw: Out of gamut colors have negative Y which seems a bad start.
  • Alex Fry: I think that's why we went this route.
  • Nick Shaw: The Daniele curve is clamped at zero, so anything negative will become zero, and our out of gamut values are lost. Or does that only happen with out of gamut blues?
  • Alex Fry: I've had a look at Thomas's new virtual ALEXA rendering through my gamut compressor that reaches out to the locus.
[Alex showed the visual result and CIExy plot of the two versions]
  • Alex Fry: Christophe, have you found anything interesting recently?
  • Christophe Brejon: On Super Mario we used ACES 1.2 and limited our textures to sRGB. It was a steep learning curve. Before we weren't color managed. 1D viewing LUTs and no HDR output. We spent a lot of time adding OCIO to our tools. In the end the CTO said the DI didn't go any better or worse than before. The issues we had were with noise, because the adaptive renderer sampled based on the SDR ODT, and then noise would appear in the highlights of HDR. But this doesn't only happen with ACES. I think Troy made an important point asking "where is the image in the chain?" Sampling based on the image would be independent of the targets.
  • Alex Fry: We had the same issue on Peter Rabbit 1. For Peter 2 we sampled based on PQ. In our DRT I consider the image state to be where it is after Pekka's chroma compression. That is an image you could look at on a theoretical display with infinite gamut.
  • Nick Shaw: But it's still tone mapped by then, which is tied to the DR of the target. There is no one image.
  • Christophe Brejon: In ACES 1.x the image is located after the RRT, isn't it?
  • Kevin Wheatley: In theory, yes. But the various ODTs manipulate the image a bit.
  • Alex Fry: The hue skews are all from the ODT.
  • Christophe Brejon: Troy's AgX now includes HDR. Troy said HLG looks really good. Troy says ARRI Wide Gamut and LogC is what the image state is in with AgX.
  • Nick Shaw: That's similar to K1S1, that everybody likes – tone mapping in LogC AWG.
  • Christophe Brejon: There's more to AgX than the 'desaturation trick' of K1S1. It has stuff based on complementary purity.
  • Alex Fry: We'd be happy for Troy to come and give us input.
  • Nick Shaw: I think some of us assumed AgX only targeted SDR, which makes the task easier.
  • Christophe Brejon: He refers to an 'image formation chain' and feels he has located the image very precisely. So the rest is just display encoding.
  • Nick Shaw: Daniel Brylka says he's experimented with interpreting BT.1886 as HLG [he notes in his post that he also applies a Rec.709 to Rec.2020 conversion].
  • Christophe Brejon: You have very complex requirements, because you want invertibility, and to reach the corners of the cube. Troy focused on what he felt was most important, which was smoothness of gradients. Maybe you have no solution without a set of LMTs.
  • Alex Fry: We're mostly focusing on a transform you can create looks through.
  • Nick Shaw: We have said we may need to ship a default LMT, so people don't reject ACES 2.0 because they don't like the out of the box look.
  • Christophe Brejon: I understand the need to put display-referred imagery back in the same place. But inverting produces weird scene-referred values.
  • Alex Fry: It would be easier if, like games, we could have separate buffers. That's not how we make movies currently.
  • Christophe Brejon: The Academy is best placed to change things! There's also things like tinting the image. As Troy says, we are discounting a whole century of filmmaking. 

Meeting #106, June 14th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Sean Cooper
Luke Crawford
Francesco Luigi Giardiello
Christopher Jerome
Jeffrey D Mathias
Anton Meleshkevich
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: I've been revisiting the use of the CAM, and Alex posted on ACES Central.
  • Alex Fry: I've been looking at the idea of using the spectral locus as the limit in the final target gamut compressor. I originally had a version which used the locus at a fixed J, so was only correct at that level. I now have a pre-cached locus that I am scaling to be J dependent, and using a small power curve of 0.86 to approximate the true shape. This is in my rev37. I'm using a simpler gamut compressor with no slope to make this work for now.
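One plausible reading of that scaling, as a sketch (the 0.86 exponent is from the discussion; the table layout and the J_ref = 69 reference slice, mentioned in the meeting #105 notes below, are assumptions):

    import numpy as np

    def locus_M_limit(h, J, locus_table, J_ref=69.0, power=0.86):
        # locus_table: pre-cached (hue in degrees, locus M at J_ref) pairs
        M_ref = np.interp(h % 360.0, locus_table[:, 0], locus_table[:, 1])
        # scale the cached slice up and down with J via a small power curve
        return M_ref * (J / J_ref) ** power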
[Alex showed 3D visualizations of the compression vectors with rev37]
  • Alex Fry: I think it is better behaved with images like Thomas's laser lit Cornell box. Currently it is using a simpler chroma compressor than Pekka's. I can see why Pekka needed to add stuff to protect less saturated values. People look a bit desaturated. The ARRI 35 green bottle image P3/Rec.709 mismatch looks better, but I think that may be more to do with the horizontal gamut compression.
  • Kevin Wheatley: Am I right that there is no HK included? HK might explain the purer P3 green appearing brighter. Or is it just because the green primary is darker?
  • Alex Fry: Output HK makes things darker. I think the issue we see is mostly J changing during gamut compression. A larger compression moves further in J.
  • Kevin Wheatley: I just wondered if including HK might help compensate for the darker P3 primary.
  • Alex Fry: Does using the locus as a limit make sense?
  • Nick Shaw: I think so. It's a common target that isn't tied to any display gamut, but is scientifically defined, rather than just using something we invent.
[Alex showed visualizations comparing his simple chroma compression to Pekka's]
  • Alex Fry: Because I am reaching out to the locus, the part of Pekka's chroma compression that bulges beyond that will get clipped. But mine hits the interior colors too hard.
  • Kevin Wheatley: In traditional gamut mapping if you don't adjust values and just clip, you can project everything outside the gamut onto some definition of the nearest point on the boundary. You pick one thing to preserve – luminance, colorfulness – and what you preserve means other attributes have to change. You can do a 'fancy clip' or you have to move some of the interior values to make room. My other point is that for the same dynamic range we have different sized gamuts. Should we expect to see the same amount of detail in them all? For a larger gamut, your preserved volume can be bigger.
  • Alex Fry: Up to now the threshold parameter is normalized to the size of the gamut, so the preserved volume is relative to the gamut size. But we were also making the point we reach out to relative to the gamut size. My new version keeps the threshold relative, but reaches to the same point for all targets. Rec.709 reaches relatively quite far, but Rec.2020 reaches less far.
  • Nick Shaw: We need to be careful that we are clear when talking about compression. We have four things called compression: compress mode around the non-linearity; the non-linearity itself, which is also a compression; the chroma compression; and the final target gamut compression.
  • Alex Fry: The compression amount will be hue dependent as the distance from the boundary to the locus varies.
  • Kevin Wheatley: Before it was a fixed amount of compression, but that meant it sometimes went further than needed.
  • Alex Fry: Anton mentioned the artifacts on the defocussed neon signs. It's still there, but less with this version. There's still a slight blue band between cyan and magenta. This may be because of the horizontal gamut compression. Skin tones do look desaturated with rev35. I noticed that although our gamut approximation matched well in most places there is a discrepancy around red.
  • Pekka Riikonen: We always use a straight line for the top part, which is mostly a good approximation, but the real gamut is concave near red.
  • Alex Fry: This difference means the inverse explodes out around these areas for values on the boundary. I may temporarily go back to an iteratively found actual gamut boundary.
  • Nick Shaw: It's worth checking how many of the issues we see come from the approximation.
  • Kevin Wheatley: In cases where I would need an inverse I wouldn't need real time, as I would be baking it out. So an iterative solve would be ok.
  • Nick Shaw: If the straight line approximation is outside the true concave boundary, you can still hit that boundary.
  • Alex Fry: The inversion problems are actually around yellow and blue, where the approximation clips a bit off the true boundary.
  • Pekka Riikonen: Fixing that could improve the issues with the yellow inverse.
  • Kevin Wheatley: To get my head around what we are doing, I have been implementing the parts of the code independently. I went back to the papers, which made me look at how we use the model. In the output part we know the target and the viewing conditions, so we should use the model in its truest form. In the first part we are using the model for image rendering. So maybe we should decouple the parameterizations. That might simplify the choice of using HK, or adaptation etc.
  • Nick Shaw: That makes sense, but if the model is modified up front, we have JMh data that isn't true Hellwig JMh. So coming out through the real Hellwig model may not end up where we expect.
  • Kevin Wheatley: But still conceptually the first and last stages are different. With the tone mapping and chroma compression, we have image rendering with some subjective choice. I was looking at the spider looking plots and where they explode in CIExy in places where X + Y is about equal to -Z so the sum is at or near zero. So they explode when you divide by that. That's related to the LMS space. It makes sense for a camera with non-standard sensitivities to need a non-standard matrix. And if it has wider primaries we get desaturation for free. Going back out we can use a standard matrix which maps to the spectral locus.
  • Christopher Jerome: We talked about flow diagrams. Color coding the blocks in that might make things clearer. I've also experimented with the LMS primaries, which I think are crucial. Where is the image? Where does a ColorChecker SG (which mostly fits in Rec.709) get rendered? Focus on a good rendering of that, and everything else is the problem images, related to gamut mapping.
  • Kevin Wheatley: Sean made the original Miro block diagram which we should revisit.
  • Nick Shaw: Alex, we talked about adding extra diagnostic modes which would enable a Nuke node graph to be a block diagram, with one node per operation. But you've been busy with other stuff.
  • Alex Fry: I will refactor my diagnostic modes, maybe a new set starting at 50 for separate operations. Things like compress mode might only make sense on entry.
  • Sean Cooper: Compressing in one space then coming out via another might affect consistency of hue lines.
[Kevin showed his "spider" plot of values through the inverse model]
  • Kevin Wheatley: The collapses are where things sum to zero, and explode when you divide by that. I also looked at the kinks around yellow. That comes from the non-linearity being mirrored. People have proposed various solutions. A straight line to zero after a point produces a hard kink at the boundary of the LMS gamut. So I fitted a polynomial instead. Parameterizing that means I can alter the curves, but can't straighten out the kink because it mirrors. So I made a hybrid with a linear segment joined with a polynomial. The three are shown in this Desmos plot.  The parameters are scaled relative to 100, and the Gill model has a discrepancy at the high end after 150 where it becomes a straight line. The question is are we using the scalings correctly in the model, and are we scaling ACES 1.0 appropriately. We currently map it to 100. I think it's right. The colorimetry is different on the input and output. On input we can assume fully adapted, if the IDTs handled that. We could do something different on the output. I also looked at the M compression based on the J compression ratio. CAM16 has an appendix which refers to a Uniform Color Space. I wondered about doing the compression in CAM16 UCS, where delta of 1 in J is a delta of 1 in M. This Desmos shows the difference between using the simple J ratio and using UCS. It's subtly different, but may help because it will desaturate slightly less in most cases. UCS uses two slightly different curves. They cross over below zero, so we may have to do something different there.
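A sketch of what doing the M scaling in CAM16-UCS could look like. The J' and M' mappings below are the published CAM16-UCS ones (Li et al.); wiring them into the DRT this way is an assumption:

    import math

    def J_to_Jp(J):
        return 1.7 * J / (1.0 + 0.007 * J)          # CAM16-UCS lightness

    def M_to_Mp(M, c2=0.0228):
        return math.log(1.0 + c2 * M) / c2          # CAM16-UCS colorfulness

    def Mp_to_M(Mp, c2=0.0228):
        return (math.exp(c2 * Mp) - 1.0) / c2

    def compress_M_in_ucs(M, J, J_tonescaled):
        # scale UCS colorfulness by the UCS lightness ratio, then map back;
        # per the discussion this desaturates slightly less than the raw
        # J ratio in most cases
        if J <= 0.0:
            return 0.0
        ratio = J_to_Jp(J_tonescaled) / J_to_Jp(J)
        return Mp_to_M(M_to_Mp(M) * ratio)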
  • Alex Fry: The rendered image shouldn't have anything below zero.
  • Nick Shaw: The Daniele curve includes a clamp at zero.
  • Kevin Wheatley: The Daniele curve can slightly increase the output relative to input in some cases for higher display peaks.
  • Anton Meleshkevich: Most of the time the image is going to be clipped before the DRT by an ACEScct look LUT, either at the lens cap black or ACEScct zero. With ACEScct zero, there will be a small chromaticity linear extension of the AP1 gamut.
  • Kevin Wheatley: Cameras are optimized in different ways which can produce values outside AP0 and the spectral locus.
  • Nick Shaw: The RGC can't be implemented as an ACEScct LUT, because the problem colors have negatives far lower than represented by ACEScct zero.
  • Anton Meleshkevich: There's no need to support the out-of-AP1 colors from cameras, because the first and must-have step is to bring these colors into AP1 before any grading operations. And if a colorist pushed values out of gamut a look LUT will still clip them.
  • Kevin Wheatley: That kind of clipping produces hue distortions so we are trying to soft-clip. For the RGC we looked at a gamut that contained everything cameras were producing.
  • Alex Fry: It would be easier if we did only have to deal with values limited by something like the RGC.
  • Nick Shaw: It could be dangerous to assume an ACEScct look cube. People are starting to develop LMT tools which are not limited like this, and we need to be forward looking.
  • Alex Fry: People often use ARRI wide for LMTs now.
  • Anton Meleshkevich: From a user point of view it’s the maximum supported gamut needed at the moment. It’s about the amount of people who will benefit from easily reachable corners as a result of a smaller gamut support vs people who will actually need the support of the wider input gamut and don’t need to reach the corners. I've used the DRT on a few projects, particularly where I needed to preserve hue across deliverables. But I was only delivering SDR.

Meeting #105, June 7th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Luke Hellwig
Christopher Jerome
Zach Lewis
Jeffrey D Mathias
Willem Nagtglas
Christian Wieberg-Nielsen
Jp Zambrano

Meeting Notes

  • Alex Fry: Nick and Juan have been discussing Juan's gamut compressor. I've been working on my gamut compressor with a fixed limit, but not got a solution yet.
  • Jp Zambrano: The issue I see with my gamut compressor is that it's not connected to what the DRT uses. It's not perceptual. I see skews in the DRT, but I'm seeing them relative to chromaticity linear, which is not a good metric.
  • Alex Fry: We try to measure on its own terms. We expect curves in CIExy, but in JMh they are straight lines and vice versa. Some skews are related to where we compress to before final clamping, which produces flat spots and skews. Thomas's image shows these up.
  • Nick Shaw: If we pre-compress the scene data to the target space, the DRT gamut compression works less hard. JP showed compressing to Rec.709 before a Rec.709 DRT. We need a comparable image in different spaces, but we don't want to limit a P3 rendering to Rec.709. And pre-limiting to P3 won't match pre-limiting to Rec.709. It works only for a single deliverable. It's a question of what data goes into the DRT that we expect it to work well with. A DRT that works fine with normal images may need the RGC or a compressor such as JP's for problem images with residual errors in the IDT.
  • Alex Fry: Thomas's latest image was interesting, showing the Cornell box through a virtual CIE observer, and a Grasshopper camera. The CIE observer produces extreme values that are on the spectral locus. The Grasshopper 2 (an industrial camera) produces the kind of non-physical values we have been looking at. It has yellows of the kind we had problems with from the curve inversion, before we introduced compress mode. Is it better to pre-condition this kind of data rather than make the DRT handle all possibilities? I've been working on feeding the spectral locus boundary to our gamut compressor to set the limit. Currently the limit is relative to the target gamut, so varies. I pre-computed the distance to the locus. But it is currently only for one J value.
  • Luke Hellwig: The actual spectral locus is cone shaped, with a bit of a curve to it.
  • Alex Fry: I pre-cache it for one J, but it needs to be scaled up and down with J. It's a hack, because even XYZ values produce a slice that varies in J. So I do it over a range then sample a cross-section at constant J.
  • Nick Shaw: Is the cross-section the same shape at every J value?
  • Alex Fry: It appears to be.
  • Nick Shaw: So hopefully we can fit a curve function in J for the scale factor. A power curve or maybe a hyperbola.
  • Alex Fry: Maybe Luke can tell us what that curve would actually be.
  • Luke Hellwig: Not off the top of my head!
  • Alex Fry: The question is whether that is an appropriate place to reach for. My gut says it is.
  • Nick Shaw: So the DRT doesn't work too hard to reach to "nonsense values" as Troy calls them.
  • Alex Fry: We've worked hard to deal with these values. But it's up for debate. Where should the limits be?
  • Nick Shaw: The current RGC recommendation is to apply it everywhere. But the hope is AMF adoption will let people apply it as needed (and maybe a parametric variation) and track that.
  • Alex Fry: A limited DRT could use the RGC or another gamut compressor to condition the data.
  • Nick Shaw: An aggressive compressor like JP's that compresses infinity to the gamut boundary is ok if you don't need inversion. But mixed ACES pipelines may have VFX in ACES, but delivered back in camera native. A compression before VFX couldn't be inverted out after it.
  • Alex Fry: I'll work on finding a J dependent locus boundary.
[Alex showed how his current sampling works]
  • Nick Shaw: Simple scaling will only work if it scales evenly about achromatic.
  • Alex Fry: It feels like it does. I can check. Currently it's only right at J=69. It might get weird at the top, where the target gamut tapers to a point and we will be reaching way out to the locus. We have to see.
  • Christopher Jerome: Does the simulated camera include flare? I feel I'm seeing too many zeros. The RICD includes flare. The ACES synthetic chart includes flare.
  • Alex Fry: That's a Thomas question. Adding 0.0005 flare doesn't seem to make a big difference.

Meeting #104, May 31st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon
Daniel Brylka
Luke Crawford
Alex Forsythe
Luke Hellwig
Thomas Mansencal
Jeffrey D. Mathias
Pekka Riikonen
Daniele Siragusano
Juan Pablo Zambrano

Meeting Notes

  • Kevin Wheatley: Thomas posted an image on ACES Central, comparing ACES 1.2, AGX and rev35.
  • Thomas Mansencal: I could have used gradients or swatches, but wanted something tangible. We all know the Cornell box. We could even build it physically. I wanted to see what the transform is doing, which I think overall is good. I put Troy's AGX for comparison. I need to recheck my images because I don't think I encoded them right. But they are good enough to compare the various DRTs. I think the new one works much better. Especially in HDR. I would note the glow, particularly on the red column, compared to AGX. And the cyan at the very top and the pink one. Bill from Colorfront pointed this out in his demo. Chris also pointed it out on Red Xmas. I think if we fix that we are in a good place. Each column in my image is a particular wavelength – the BT.2020 primaries and secondaries, and the last two columns are 400nm and 700nm. The 400nm shows how purple breaks with the transform. The pink a bit too. I don't like AGX in the dark areas. I think it lacks contrast. I prefer ours.
  • Kevin Wheatley: With the cyan, ours seems to preserve saturation in highlights more than in other colors.
  • Thomas Mansencal: The virtual LEDs are normalized using luminous flux, but that won't make them the same perceptual brightness. I'd need to come up with something else if I wanted something perceptual that modeled HK. But it works ok with AGX.
  • Alex Fry: That's a really great test image.
  • Thomas Mansencal: I also wanted to do an experiment with the LED wall and a test subject to replicate the Xmas ball experiment across a range of hues. But I need clearance from work.
  • Alex Fry: The cyan at the top is quite compromised.
  • Thomas Mansencal: A few people – Bill Feightner, and Chris – have pointed out the issues with the glow at the terminator. But nobody replied. I don't like it, but didn't have time to do anything.
  • Alex Fry: I want to load your EXR in Nuke and see how the cusp smoothing affects things.
  • Alex Forsythe: There seems to be posterization at the edge of the shadow in the middle of the balls. I don't know if that's what you are talking about. A harshness to the stripe.
  • Thomas Mansencal: I'm sure I messed up the encoding, but it shouldn't affect that. I saw it on my monitor, even in HDR, although it's less offensive there. There is no AGX HDR to compare.
  • Alex Fry: I think part of this is the issue with P3 being less contained than sRGB. I always expected P3 rendering to be less difficult, but it's the other way around.
  • Kevin Wheatley: We talked about the idea of a single surface (like a gamut) but we never decided what that would be. Just because Rec.709 looks cleaner doesn't mean it's the best target for all displays.
  • Alex Fry: I'm not sure if it's where we're compressing to, or from which is the primary cause of these issues.
  • Thomas Mansencal: I'd like to try rendering the image using camera spectral sensitivities instead of the standard observer. The wavelengths of each LED are shown in tiny writing on the image.
  • Kevin Wheatley: Looking at the image locally and via Zoom screen share, some colors look radically different. In my C implementation I've started looking at the chroma compression to understand how it works. I notice there are three places that use various different eccentricities, some not to do with HK at all. Luke's recent paper mentions there are different interpretations of that, and perceived luminance is affected more for wider color gamuts than narrower ones. If we include HK in some places but not others we could have a mismatch.
  • Alex Fry: Pekka is the only person who has a good handle on the chroma compressor – effectively the rendering step.
  • Pekka Riikonen: I made a post a while back showing the effect of the different eccentricity factors, which are more to do with the inverse than the forward direction. I normalize M to a compression gamut cusp, which is a way of making it hue dependent, without the hue curve I used before. I wanted the inverse to fill AP1, and needed an eccentricity factor to do that. I looked at the Hellwig eccentricity, and CAM16, and then made a custom one, which kept yellow inside AP1 on inverse.
  • Kevin Wheatley: That's just the inverse from Rec.709, and the eccentricity tries to straighten things out.
  • Pekka Riikonen: Luke commented that in a perceptual space they shouldn't necessarily come out so even. Making it even in chromaticity space is not necessarily how it's supposed to be. The eccentricity can be turned off, of course. I don't know if this is the right thing to do. I'm not sure about inverting to fit AP1, as that is an impossible requirement if you need to handle input way outside AP1.
  • Kevin Wheatley: So you're trying to make positive AP1 values hit the boundary of AP1.
  • Pekka Riikonen: It's now easier to hit the corners in the forward direction than before, certainly. I did look at the ballooning that Alex mentioned last week. I consider that a bug. The chroma compression shouldn't expand anything. I know why it happens. When we scale the scene colorfulness values to the tone scaled range, the image gets very desaturated, so we add saturation to make a nice looking image. That pushes some of the values outside the locus. I don't think that should happen.
  • Alex Fry: If the rendered values are not plausible, it's hard to work out what the gamut compressor should do with them. I think our rendered image needs to be physically plausible.
  • Pekka Riikonen: The shape now comes from the tone curve. I'll see if I can modify the shape so the ballooning doesn't happen. The shape comes from tone scaled lightness divided by the original lightness.
  • Alex Fry: Is it possible to see the tone scaled chroma without adding the extra saturation?
  • Pekka Riikonen: Not at the moment.
  • Juan Pablo Zambrano: I'm using Nick's DCTL pure shader implementation of v30 so it's old.
[JP showed the difference when applying his compressor before the DRT]
  • Juan Pablo Zambrano: With color ramps, the channels balloon above 1.0. I think it's more to do with the image formation than the gamut compressor.
  • Kevin Wheatley: There may be precision issues there, as I'm seeing what look like interpolation artifacts.
  • Pekka Riikonen: Months ago I posted images of grey ramps with some color in them showing D65 through the rendering. That was with the old chroma compression but showed the same issue. Even close to achromatic we end up going above 1.0.
  • Juan Pablo Zambrano: I'm guessing this happens because the image is created with the Hellwig LMS matrix, and the LMS space is much bigger than XYZ. When saturated colors go to a smaller gamut they always get brighter, causing this kind of bowing.
  • Pekka Riikonen: The chroma compressor tries to deal with this for in-gamut colors, and out of gamut colors are left for the gamut mapper.
  • Juan Pablo Zambrano: It helped when I converted from AP0 to Rec.709, clipped it to that with my gamut compressor in a hue linear way, then went back to AP0. It ends up less saturated because I'm compressing to Rec.709.
  • Alex Fry: It will be easier for the rendering to handle if the values aren't in the "problem zone".
  • Juan Pablo Zambrano: I still get the problems when I clip to Rec.2020.
  • Alex Fry: Currently we aren't reaching all the way to AP1 to pull in to Rec.709, so some values still get clipped. It varies between target gamuts, which is the root of some of our problems.
  • Juan Pablo Zambrano: My gamut compressor is sort of based on HSV, not a specific gamut, but the luminance channel is max(rgb) because blues can have negative luminance. It keeps the maximum value constant, and compresses in a chromaticity linear way to whatever gamut you're in. Hellwig is not linear in CIExy, but that can cause problems for the compression.
  • Kevin Wheatley: But that's the space our compression is operating in, so hue is fixed and we only change J and M.
  • Alex Fry: This is like putting the RGC in front of things. Pre-conditioned data is easier to handle. Which comes back to "what is acceptable input data?"
  • Thomas Mansencal: I don't know exactly what AGX does, but tone mapping in a larger space automatically desaturates. If the virtual primaries are further away, they skew less in the original gamut.
  • Juan Pablo Zambrano: AGX does bend but controls what it bends to, as opposed to depending on the destination space.
  • Alex Fry: What was the max(rgb) approach of the RGC?
  • Nick Shaw: Basically it used max(rgb) as a proxy for achromatic, and kept the max channel unchanged. The other channels were normalized to the max, and any that was more than a certain distance from the max was moved towards the max with the PowerP compression curve. Then the normalization was inverted. It works in RGB with no color space conversions.
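A sketch of that max(rgb) scheme with a PowerP-style distance compression (the threshold, limit and power values here are placeholders, not the shipping RGC constants):

    import numpy as np

    def compress_distance(d, thr, lim, pwr):
        # identity below thr; above it the curve rolls off so that a
        # distance of lim lands exactly on 1.0 (the gamut boundary)
        s = (lim - thr) / (((lim - thr) / (1.0 - thr)) ** pwr - 1.0) ** (1.0 / pwr)
        nd = np.maximum(d - thr, 0.0) / s
        return np.where(d < thr, d, thr + s * nd / (1.0 + nd ** pwr) ** (1.0 / pwr))

    def gamut_compress(rgb, thr=0.8, lim=1.2, pwr=1.2):
        ach = np.max(rgb)                     # max(rgb) as achromatic proxy
        if ach == 0.0:
            return rgb
        d = (ach - rgb) / abs(ach)            # normalized distance from the max
        # un-normalize, keeping the max channel itself unchanged
        return ach - compress_distance(d, thr, lim, pwr) * abs(ach)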
  • Kevin Wheatley: Luke, am I right that in your latest paper you suggest that there is no hue dependence if you base things on CIECAM? It's sqrt(J^2 + 66*C) or something very simple like that.
  • Luke Hellwig: I argue that the hue dependency in HK is compensating for the lack of uniformity in a model. So there's no hue dependency for HK in CAM16.
  • Alex Fry: I want to keep working on my gamut compressor version that pulls from the same source to the gamut boundary of every target. I think that will help us to be visually consistent between devices. My experimental version is cusp dependent. But it needs to be J dependent too.
  • Nick Shaw: Currently the vector angles are calculated based on the target gamut. Would you calculate them based on one gamut for all targets?
  • Alex Fry: Perhaps, but I don't think the angle is the most important factor.
  • Pekka Riikonen: The distances moved will still be different, so the brightnesses will be different. I think the best match will come from matching the brightness. So the angles will have to differ.
  • Nick Shaw: That may make inversion hard. But worth testing. If it doesn't solve the issue we don't have to worry about inversion.
  • Alex Fry: Nick and I have been looking at breaking the transform into separate nodes. I'm looking at adding more diagnostic modes, to apply only a single step in each node. It makes it like a block diagram and better for testing. 
  • Pekka Riikonen: If the Python and DCTL versions were all in the same repo, I could update them all together.
  • Nick Shaw: Yes, but the Python exists only as a Colab I was playing with, which is just RGB to JMh. And the DCTL is a work in progress which is way behind the Blink. You can only update them together if they start at parity. If they are in the same repo, people will expect them to match.
  • Juan Pablo Zambrano: Looking at the output of rev35 in my color space, I can see a bump in saturation and the max values.
  • Alex Fry: That may be because we're not hard compressing to the target. Values that are too far out will still be out of gamut after gamut compression.
  • Pekka Riikonen: Kevin, did you implement your modified curve you showed before?
  • Kevin Wheatley: Not yet. I will post what I've done, so people can look at it.

Meeting #103, May 24th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Christophe Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Francesco Luigi Giardiello
Gralk Klorggson
Jeffrey D. Mathias
Carol Payne
Troy Sobotka
Juan Pablo Zambrano

Meeting Notes

  • Kevin Wheatley: Over the last week Pekka posted about the differences when using different gamuts for limiting. No concrete conclusions. It's preference. Daniel commented on the blue ball going purple. The path of "constant hue" for near blue tends towards purplish, and collapsing that to Rec.709 goes too purple. We think that's the cause, but don't have a solution. The other discussion is about how much of the various color spaces we should be able to handle. AP1? AP0?
  • Alex Fry: Daniel's blue is not implausible, but our model tweaks to support implausible blues introduce effects in areas where the CAM should track better.
  • Kevin Wheatley: CG people would expect us to support AP1.
  • Alex Fry: Perhaps the part of AP0 that's within the spectral locus. What does outside the locus and AP0 mean? A lot of our hacks are in order to do sensible things with implausible values. But real cameras produce these values. Much of Red Xmas is outside AP0. I'd put that down to a bad IDT. Are we working too hard to make that look good?
  • Kevin Wheatley: The RGC aims to make things positive in AP1.
  • Alex Fry: All of the ARRI Bar image is plausible.
  • Nick Shaw: Are we compromising in-range values to handle far out-of-range values? Making space to pull them in.
  • Alex Fry: The yellows where we get the inversion in the original CAM are non physical. But there are a lot of blues that are outside AP1 but in the locus. They are up for debate.
  • Nick Shaw: Maybe we can dial back on the extreme lengths we go to to try to fit everything in.
  • Alex Fry: I wouldn't want it to produce errors, but maybe some values could clip like the current RRT. I've been experimenting with a gamut compression that "reaches out" to the same gamut for all targets. I tried with AP1. Currently it is based only on the cusp so looks ok in 2D, but isn't right in 3D. Needs more work.
  • Kevin Wheatley: How much of the compression is part of the rendering? Then we try to represent that on a real display.
  • Alex Fry: The output of the chroma compressor is the image we are trying to represent. But we can't look at that.
  • Kevin Wheatley: What shape is that? And is it reasonable?
[Alex showed a 3D visualization of the output of the chroma compression of the spectral sweep]
  • Alex Fry: The chroma compression pulls the top in, but the bottom bulges out, which may be causing problems.
  • Carol Payne: We don't need to clip values outside a given range. We just say you shouldn't expect a "reasonable rendering" for them. But they could be kept for inversion.
  • Kevin Wheatley: The locus is a fuzzy boundary. And some areas within it may be a waste, like the cyan green boundary, which cameras won't produce.
  • Alex Fry: Thomas's spectrally rendered Cornell box has values out there.
  • Kevin Wheatley: If you try to capture too much, you could compromise too much. The AP0 blue is negative so doesn't make any sense. I wouldn't expect to render that "correctly". AP0 was just designed to contain the locus.
  • Alex Fry: AP1 may be too small. There are a lot of plausible values outside it.
  • Carol Payne: For the RGC we worked out a max of the common camera gamuts as where we reached out to and mapped to AP1.
  • Nick Shaw: Some think the RGC is too aggressive because cameras won't produce values in the corners of their encoding gamuts.
  • Kevin Wheatley: The locus isn't totally relevant, because a camera, not a person, saw these images.
  • Alex Fry: Is that an IDT question?
  • Kevin Wheatley: We can use the locus as a guide, but shouldn't hard clip there. Then there is the shape of the target gamut. The shapes vary, so you can't just shrink them down. There has to be a trade off. The spectral locus doesn't align with any of our destinations. You could use AP1 and expand it out a bit to cover purples and the problem yellows. Rec.2020 and AP1 are very similar by design. So if you used AP1, a Rec.2020 device would be the best representation of that.
  • Alex Fry: Because of the bulge at the bottom, the output of the chroma compression can't be represented, because it's made of negative light.
  • Kevin Wheatley: So we need to pick the right boundary, and see how we can make it compare across device gamuts. P3 and Rec.709, but also SDR and HDR.
  • Alex Fry: The mismatch is more obvious between SDR P3 and Rec.709. Going to HDR everything changes.
  • Nick Shaw: We could set HDR to be limited to Rec.709. Presumably we'd get the same kind of mismatches.
  • Kevin Wheatley: Juan says in the chat "the 'limit' should be the gamut where the image is formed." We don't currently have a definition for what that is. It's a side effect of the compression. If we pick something for that we can say if anything falls outside it you need to modify the pixels with a gamut compression or something before the Output Transform.
  • Alex Fry: This shape of the spectral sweep after chroma compression has a kink we should look into.
  • Kevin Wheatley: That shape is the image we need to pull in. We need to define it more formally. I don't have a problem going a little outside the spectral locus. Not too far or it's wasteful. Mapping a completely weird shape to a destination would make things hard.
  • Alex Fry: Rec.2020 is at least a screen we could look at.
  • Kevin Wheatley: I'd say AP1, which is almost Rec.2020.
  • Sean Cooper: Are these limits on the input side, or the maximum your compression will support? I'd decouple those.
  • Kevin Wheatley: It's not the maximum input, but a sensible range to target reasonable rendering of. We need to place a limitation and not try to handle everything so we can finish.
  • Sean Cooper: If it's on the input side it's ignoring grading, because colorists will turn up saturation.
  • Kevin Wheatley: AP1 input is a straw man start point. Alex suggests the spectral locus. Or AP0.
  • Sean Cooper: I think it's more helpful to define it in the rendering domain.
  • FW: We already are, because the shapes we are looking at are post-rendering.
  • Nick Shaw: I've been experimenting with taking the Blink code and running the different parts of it in separate nodes, so it forms a block diagram, and you can tap into intermediate values. It's not ready to show yet.
  • Kevin Wheatley: I've been implementing Luke's original paper in code to see what we changed.
  • Troy Sobotka: I agree with Sean, tying the picture formation to the rendering. You'll always have values outside that hull, which is why AP0 is a problem, because there is an implicit clip. You need to make those values meaningful. But what really matters is what you see in the picture. That's about the relationship between the values. The neck in Red Xmas, I think the issues are due to clipping before picture formation, and those on the cheek are clipping in the output domain. The RGC distorts hues because it's not chromaticity linear.
  • Alex Fry: How should we deal with Red Xmas? Should we pull those implausible values into AP0 so they are all meaningful?
  • Troy Sobotka: All camera spaces are opponency based, which means that you at least have an idea of the ratios, of the power of the purity of the given angle in relation to the overall intensity. So if you're going to pull them in, you have to at least consider how they're going to end up in the picture in relation to that sort of underlying, implicit assumption.
  • Alex Fry: I don't think the Red Xmas IDT has produced meaningful image data the way we define it. The encoding is wrong. It's an IDT problem.
  • Troy Sobotka: Cameras produce this data. You have to address it.
  • Alex Fry: There's an argument that anything outside AP1 is already gone in current color correctors.
  • Kevin Wheatley: It's not gone. It's negative AP1. It's not clipped in any color corrector. It's retrievable. There is a person watching the images as they change them. Ours is not sculpted per image.
  • Sean Cooper: I'm not suggesting you need to support infinite chromaticities. I just wanted to know what domain your limit is defined in. There is manipulation beyond the input domain.
  • Christophe Brejon: It would be good to have a demo of the gamut compressor Juan posted about on ACES Central.
  • Alex Fry: That would be good for next week.

Meeting #102, May 17th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Francesco Luigi Giardiello
Luke Hellwig
Christopher Jerome
Jeffrey D. Mathias
Willem Nagtglas
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Nick Shaw: I've made a Desmos plot with a modification from the original, where I invert the direction the compression vector steepness changes with cusp M. They are both just variations on equations that create the kind of behavior we think we want. They aren't the only possibilities. Pekka suggested that for a better P3/Rec.709 match with the ARRI bar green bottle, getting steeper, not flatter, with increasing M might be better. I put M_cusp as a divisor instead of a multiplier, and then added a multiplication factor to get the values in the same range as before. I also multiply by J_max, the J value for display peak, so things scale for HDR. This means it gets steeper based on the M cusp at different hues. That might not be what we want. You might want the change with hue to be as it was before, but have a divisor which is a single value for a gamut. The max M value of the cusp. Or perhaps something like the M value of the green primary could be representative of that. You would need a different constant multiplier to get the values to the right range. There are loads of possible variations. You could do things like raise the divisor to a power to control the rate of change, for example. Pekka implemented this in Blink so maybe he can comment.
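One possible reading of the two Desmos variants, as a sketch (k1 and k2 stand for the hypothetical range-matching constants Nick mentions):

    def vector_slope(M_cusp, J_max, k1=1.0, k2=1.0, inverted=True):
        # original: slope scales with cusp M (flatter as cusp M grows);
        # variant: M_cusp as a divisor, multiplied by J_max so the values
        # scale with display peak (steeper as cusp M grows)
        return k2 * J_max / M_cusp if inverted else k1 * M_cusp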
  • Pekka Riikonen: Overall the idea works. It's just a slightly different mapping result. The issue I saw was the yellow desaturated a lot faster, because the higher cusp makes the slope flatter. I compared it to the old version and ARRI Reveal. I looked at P3 and I don't think it solved the problem. I wanted to understand how big a problem the P3 / 709 appearance match is. I found it a very good match. Practically identical. It's the more saturated colors that differ. The cyan is the worst match. I don't know the correct answer. Increasing gamut compression for green and cyan helped the match. When I compared to ARRI Reveal I noticed the cyan is highly compressed compared to ours. Their match is better. But they don't have that intense "nuclear" cyan.
  • Kevin Wheatley: After the last meeting we looked at where the primaries fall for the various gamuts. The P3 green primary is darker, in terms of luminance, than the Rec.709 one. Is that a factor? Full green is more saturated but darker for P3. It may not be strictly true that one gamut is "bigger" than the other. There isn't a guarantee of containment.
  • Nick Shaw: Maybe this green falls in a place that's at a brightness Rec.709 can do but P3 can't. So P3 clips earlier.
  • Kevin Wheatley: because the cusp values are computed for the different gamuts, you get different vectors. Is that ideal? Maybe we should pick one. Even with the same vector it will hit a different gamut boundary at a different place. But is this an edge case? If you solve it by desaturating a bit more, should the DRT do that or is it a creative choice? If you don't have the problem color you might want to maintain the saturation.
  • Nick Shaw: We should be careful about comparing with other renderings, because we have constraints that they don't. We want to fill a display cube and be invertible. Others may just target looking good. The ARRI Reveal LUT doesn't fill the cube.
  • Kevin Wheatley: If we picked one gamut to use for compression we might pick P3 as it is the destination gamut for a few modern displays, and for HDR.
  • Pekka Riikonen: I noticed that the gradient issues were more noticeable in P3 than Rec.709. More compression and cusp smoothing didn't help. It seems better when you map to a larger gamut and then clip, even though you do get skews.
  • Nick Shaw: Are we talking about just using P3 to calculate vectors, or actually compressing to P3 and then clipping to Rec.709?
  • Kevin Wheatley: The latter. The question is whether this is part of the look, or gamut mapping?
  • Alex Fry: I was talking to Troy about this. Where is our "rendered image". I feel it's after Pekka's chroma compression. That's the formed image, if we had a display that could show it, and the gamut compression tried to fit that to a real display.
  • Kevin Wheatley: Because we don't have that display, we don't know if the P3 or Rec.709 is a better match.
  • Alex Fry: The chroma compression currently only acts on M. Could we do all our J modifications in that gamut agnostic step, and keep the gamut mapping linear and horizontal?
  • Pekka Riikonen: Even if the gamut mapper is technical, it does impart a look. In my post I showed that with a trivial LMT you can get close to ARRI Reveal. Does the out-of-the-box look matter if we can easily get where we want?
  • Kevin Wheatley: That's a good point. It has to be adaptable. We should focus on technical faults.
  • Pekka Riikonen: The Red Christmas artifacts looked worse in P3.
  • Alex Fry: I felt that was better in v28. Was that cusp smoothing?
  • Pekka Riikonen: I couldn't get that to help.
  • Kevin Wheatley: We need to understand where the issues come from.
[There was a discussion of the artifacts and differences between renderings of the images in this post. Alex showed his 3D visualization, showing that the red values on the cheeks in Red Christmas were still outside Rec.709 after gamut mapping.]
  • Alex Fry: So there is still clipping happening, which may be causing the artifacts. You have to desaturate it a lot to get it in gamut because they are so bright. But dropping the exposure a stop brings them in without changing saturation. You can see in my 3D plot that although the P3 green is darker, the Rec.709 primary is still contained within the top part of the P3 hull. I made a new visualization showing where the boundary of a gamut inverts out to through the gamut compressor. So it shows where the gamut compressor "reaches out" to. How much of the "formed image" ends up in gamut is quite different for Rec.709 and P3. But that may not be the only factor for the green in the bar image. I also added a visualization of the spectral locus at different intensities. You can see the gamut compressor reaches outside the spectral locus in places, but we knew that, because of the blues in blue bar. Is it sensible to attempt to be able to represent all positive AP0 values that are within the spectral locus?
  • Kevin Wheatley: Because of camera engineering compromises we know they produce values outside the spectral locus. So maybe the locus plus, say, 10% outside. Not the display gamut, but pulling from something based on the HVS in a perceptual space. Is the large area of green that all gamuts cut off, but which is inside the spectral locus, useful to pull from though?
  • Alex Fry: An image like Thomas's synthetic Cornell box with the colored balls has values that are all plausible and inside the spectral locus, but no real camera would produce. But AP1 would be a good experiment. I can't yet get setting the reach to AP1 to work.
  • Nick Shaw: Would it be a useful experiment to test color patches by moving them along a vector in JM, inside a gamut we can see, and see if they still feel like the same color? Check the gamut compression is doing the right thing. If we can't find a vector that does what we need, it's pointless playing with controlling those vectors.
  • Alex Fry: I still feel the mismatch is mainly due to the distance not the angle.
  • Pekka Riikonen: Nick's new version changes the angle to try to come closer to the same J value despite the distance travelled.
  • Alex Fry: Horizontal compression so the brightness matches creates a better P3 / 709 match.
  • Pekka Riikonen: Same for SDR / HDR. Christopher had commented that you see less detail in P3 than Rec.709 because Rec.709 is darker, so if you increase the compression the detail comes back.
  • Christopher Jerome: It was less that Rec.709 was different from P3, but that Rec.709 limited P3 was different from P3. As far as P3 is concerned, Rec.709 is just some abstract gamut. So if some other ideal gamut within P3 was selected, it could give the properties we want. I posted a possible ideal gamut.
  • Nick Shaw: So if targeting Rec.709 or P3 on a P3 display that can show both is not creating a match, that suggests the gamut mapper is not doing what we want.
  • Alex Fry: Compressing towards a smaller gamut within P3 risks making it hard for P3 to be all that it can be.
  • Christopher Jerome: We need to hit the corners of Rec.709. But is there a requirement to hit the corners of P3? Deep colors are the big advantage of P3 over Rec.709 for me.
  • Alex Fry: I think people expect to be able to hit the P3 corners.
  • Kevin Wheatley: Not necessarily with positive AP1 values.
  • Alex Fry: Although we're targeting Rec.709 and P3 now, it has to work for Rec.2020 and other gamuts in the future.
  • Christopher Jerome: Is there an ideal wide gamut that the compression could target?
  • Alex Fry: Maybe this should be in the chroma compress, not gamut compress, simplifying what the gamut compressor has to do.
  • Pekka Riikonen: The chroma compression already has an arbitrary compression gamut, which is larger than Rec.709 and smaller than P3. Changing that is an easy way to control compression for different colors.
  • Alex Fry: Can you explain that step more to me?
  • Pekka Riikonen: First we scale down M based on the tone scale applied to J, multiplying by tone-scaled J over original J. You get a curve where the M eventually hits black and white. Then I normalize M to that new compression gamut and compress it with a cubic function or something like that. We normalize to the cusp of the compression gamut. Then we compress from zero to a limit which is about 1.2 in v35, then we denormalize back to the original range, and apply a final saturation adjustment (see the sketch after these notes). The compression gamut cusp has that eccentricity factor. That is what gets the inverse within AP1. I tried Hellwig and CAM16 eccentricities, then came up with my custom one. Changing the gamut and eccentricity factor is an effective way of changing the compression. The gamut cusp smoothing is also expanding the gamut, pushing M out and raising the J value of the cusp.
  • Alex Fry: I'm concerned about that. I think it's creating more gamut clipping than we want.
[Alex showed the 3D plot of Thomas's spectrally rendered Cornell box through the different steps]
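[Editor's sketch of the chroma compression steps Pekka describes above, in Python. The structure (scale by tone-scaled J over J, normalize to the compression gamut cusp, compress, denormalize, apply saturation) follows the notes; the function and parameter names, the placeholder compression curve, and the guard values are illustrative assumptions, not the Blink/CTL code.]

    def chroma_compress(J, M, J_ts, M_cusp_cc, limit=1.2, sat=1.0):
        """Illustrative sketch only. J, M: pre-tone-scale correlates;
        J_ts: tone-scaled J; M_cusp_cc: cusp M of the (hue-dependent)
        compression gamut, including its eccentricity factor."""
        # 1. Scale M by tone-scaled J over original J, so M collapses
        #    towards zero at black and at peak white.
        M_scaled = M * (J_ts / max(J, 1e-6))
        # 2. Normalize against the compression gamut cusp at this hue.
        m = M_scaled / max(M_cusp_cc, 1e-6)
        # 3. Compress normalized values so m = limit maps to 1.0; a
        #    simple rational placeholder stands in for the real curve.
        m_c = (2.0 * m / limit) / (1.0 + m / limit)
        # 4. Denormalize and apply the final saturation adjustment.
        return m_c * M_cusp_cc * sat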

Meeting #101, May 10th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon De Lavergnée
Daniel Brylka
Alex Forsythe
Francesco Luigi Giardiello
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Willem Nagtglas
Carol Payne
Pekka Riikonen
Troy Sobotka
Juan Pablo Zambrano

Meeting Notes

  • Alex Fry: I've created a visualization to look at the behavior of the ARRI bar bright green bottle that was brighter in P3 than Rec.709. 
[Alex showed the scope images from his post, illustrating the compression angle and distance of the ARRI green in P3 and Rec.709]
  • Alex Fry: The green cusp is lower in P3 than Rec.709, and lower still in Rec.2020. Is the effect caused by being either side of the cusp? Or is it the different angles? I feel the distance travelled is the largest factor, which lowers J. The projection angle aims to preserve color at the expense of brightness. For comparison I'm setting focus distance to 1000 to make the lines almost horizontal, which makes things brighter but less saturated (see the sketch after these notes). The upside of that is more consistent brightness between gamuts, with the same dynamic range.
  • Pekka Riikonen: The Red Christmas image looks wrong.
  • Alex Fry: In HDR what you keep on the cheeks is intense redness, so maintaining color would be preferable for this image. But I'd say neither is "wrong". It's a trade-off.
  • Lars Borg: Could the focus distance be different for more and less saturated colors?
  • Nick Shaw: That could cause inversion issues.
  • Pekka Riikonen: We changed it so we didn't need to solve for the original color to invert. I think we should first look at the effect of having the same angle for both gamuts.
  • Kevin Wheatley: I felt neither of the two versions shown was quite right. I would pick somewhere in the middle.
  • Pekka Riikonen: Horizontal projection darkens dark colors more. We are brightening them, which may or may not be a good thing.
  • Nick Shaw: The steepness of the angles is affected by the M value of the cusp. I think I modeled that on the behavior of the previous version.
  • Kevin Wheatley: Hard coding the focus point makes it no longer hue dependent.
  • Nick Shaw: But it's a useful test. For the real thing we could use the cusp of one gamut, e.g. Rec.2020, to set the focus point for all targets.
  • Alex Fry: I still think the distance travelled is the biggest factor.
  • Scott Dyer: I've been experimenting with tone scales, following Alex's request to support fixed mid grey. Setting w_g to zero in Daniele's curve, the bottom part doesn't line up exactly. But is it visible? We would have to explain why it isn't identical. We abandoned the SSTS, which gives more control there. What was the issue with C2 continuity? I've never seen a visible artifact. I've been playing with tuning the SSTS to match our current curve.
  • Nick Shaw: Daniele's curve was appealing because the rendering code was very simple. But our DRT now has so much complexity elsewhere, that may not be such a big factor.
  • Scott Dyer: I'm close to getting a match with the SSTS. Maybe I'll post that when I have it, and Daniele can point out the issues.
  • Kevin Wheatley: If you fix mid grey across all dynamic ranges, something else has to give. With the Daniele curve, as you raise peak intensity everything slips a bit, but rationalized by the contribution of flare at the bottom. If you have too many constraints, you don't have the flexibility you need.
  • Pekka Riikonen: There was a tone scale thread where Jed compared curves. One of those maintains the toe shape.
  • Kevin Wheatley: The brighter a display gets, that affects the deepest blacks you can see, which is probably the reason for changing the shadows with peak brightness.
  • Scott Dyer: I prefer what we have now, but there's a requirement to also be able to achieve the fixed mid grey.
[Alex made a modified version of the DRT exposing the focus point (J and M) as a fixed parameter, independent of gamut]
  • Nick Shaw: The focus point isn't a real fixed position in my formulation. It comes out of the quadratic solve for J intersection (because the compression becomes horizontal at the top and bottom, the focus point is at infinite distance there - zoom out on the Desmos plot).
  • Pekka Riikonen: You could say the current gamut mapper changes the steepness in the wrong direction between Rec.709 and P3. It should be steeper for P3. Can the maths be changed so the angle changes in the other direction?
  • Christopher Jerome: I posted a simple experiment blending 50/50 between P3 and Rec.709 limited P3. Then I clipped one to 709, and not the other. The P3 is more colorful, but they look very similar. Could there be an in between for the gamut boundary?
  • Christophe Brejon: It might be useful if Pekka could define on ACES Central what he thinks is "wrong" about these images.
  • Pekka Riikonen: It feels like it desaturates too quickly.
  • Kevin Wheatley: It doesn't match my expectation for things like brake lights. But it's a preference.
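[Editor's sketch of the projection geometry under discussion, assuming a focus point on the J axis. The behaviors it shows match the notes - steepness scales with the cusp M (Nick's modification inverts that dependence), vectors go horizontal at the top and bottom of the lightness range, and a large focus distance makes all vectors nearly horizontal - but the formula itself is an illustrative stand-in, not the Desmos/Blink equations.]

    def compression_slope(J, J_cusp, M_cusp, focus_dist, J_max=100.0):
        """Slope (dJ per unit of M) of the gamut compression vector
        for a sample at lightness J. Illustrative stand-in only."""
        if J >= J_cusp:
            gain = (J_max - J) / max(J_max - J_cusp, 1e-6)  # 0 at J_max
            sign = -1.0                                      # aim downwards
        else:
            gain = J / max(J_cusp, 1e-6)                     # 0 at black
            sign = 1.0                                       # aim upwards
        # Steeper with larger cusp M; horizontal as focus_dist -> infinity.
        return sign * gain * M_cusp / focus_dist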

Meeting #100, May 3rd, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Sean Cooper
Luke Crawford
Alex Forsythe
Luke Hellwig
Christopher Jerome
Zach Lewis
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas
Pekka Riikonen
Daniele Siragusano

Meeting Notes

  • Alex Fry: Welcome to the 100th episode! We need to talk about whether we are hitting our requirements. There are some potential new requirements that came up recently too. We settled on the Daniele Evo tone-scale a while back, but we've been asked about supporting HDR with mid-grey in the same place as SDR, and only expanding the highlights. The SSTS did that by joining two curves. Is that important, and can Daniele's curve do it?
  • Daniele Siragusano: It should be possible.
  • Nick Shaw: Doesn't pinning mid grey and changing the top end also change the lower part?
  • Daniele Siragusano: Only if you have the value "hitting the roof" increasing with peak. If you fix that to a large value there can be an exact solution. I'll check. But how would you expose this option? Should we allow that?
  • Alex Fry: That would probably only be exposed in CTL for people doing custom stuff, not a user parameter.
  • Daniele Siragusano: And do you track it with AMF? How?
  • Alex Fry: To be decided. There were questions in the forum about whether we chose the right CAM model. We've been using Hellwig for a while, but are we adding patches to fix the difference from ZCAM? Chroma compression is only necessary for Hellwig, because of the shape of M at higher J. Hellwig has less code than ZCAM.
  • Thomas Mansencal: It would be useful to have a block diagram showing what each part of the code does. To help understand choices that were made.
  • Alex Fry: Jeffrey asked if the AP1 primaries are too tight. The rendering doesn't actually use AP1 at all. It's just the entry point, to XYZ and then to JMh. But we are trying to make Rec.709 invert to within AP1, which places a restriction.
  • Nick Shaw: The problem colors, e.g. blues, are outside AP1. If everything started in AP1 we wouldn't have so many problems to solve.
  • Alex Fry: Or within the spectral locus.
  • Nick Shaw: We need to be able to handle those values or we look bad in comparison to the camera's own rendering. It's harder for us because we have to support all sources.
  • Alex Fry: I don't fully understand Troy's post about visual fields, but I think he's saying CAMs can't work if they are applied per pixel. That may be true, but we practically have to work per pixel.
  • Thomas Mansencal: We have to. Any fixed spatial transform could produce haloing, and wouldn't be invertible.
  • Nick Shaw: The core rendering should be per pixel, and the grading systems need tools to deal with spatial issues.
  • Daniele Siragusano: Perhaps he's saying we shouldn't follow a CAM blindly, and then need multiple patches to fix things. Does an appearance model which works on color patches work per pixel with images?
  • Thomas Mansencal: What else can we do? We're at the limit of color science knowledge. Maybe a CAM isn't right, but what is?
  • Nick Shaw: An imperfect model is better than giving up and saying we can't do anything.
  • Thomas Mansencal: Chris's comment about the rendering getting complex to fix things is worth discussing.
  • Alex Fry: ACES 1.0 was very simple. We're more complex. But how much is too complex?
  • Thomas Mansencal: We have good reasons for our added complexities, and can explain the choices. And the image looks good.
  • Alex Fry: We've removed some parts of the model. Maybe we could lose more. And we've diverged from the model to handle colors outside AP1/AP0 which exist in our sources. If IDTs were better we'd need less.
  • Daniele Siragusano: Doing too much in one transform makes things complex. You could handle some things in an LMT. It's less fragile to have small parts than one complex process that needs to work for everything.
  • Nick Shaw: An LMT only for problem shots does not need to be included in inversion.
  • Daniele Siragusano: Many LMTs include image repair. Changing IDTs is a different discussion.
  • Thomas Mansencal: Better to sanitize the data than compromise the rendering.
  • Daniele Siragusano: The RGC already exists, but maybe a better rendering needs it less often. There's no one size fits all including edge cases. Same for IDTs.
  • Nick Shaw: Should we look at a scene side gamut compressor that uses a CAM to maintain hue, as the RGC doesn't?
  • Daniele Siragusano: The residual errors in IDTs make hue of extreme values fairly meaningless. The RGC maps towards primaries, which gets in gamut sooner.
  • Alex Fry: In our test image set, the normal images have been stable for a while. We've been tinkering to handle blue bar, red Christmas and the ACEScg lit CGI. Are we doing too much work just for edge cases?
  • Daniele Siragusano: We can look at cost functions, for what we lose and gain. How much bigger is the inversion space getting just to fit these extreme values?
  • Nick Shaw: A rendering which maps extreme values to the gamut boundary will inherently invert the gamut boundary back to those extreme values.
  • Alex Fry: Going back to the greens from last week which clipped more in P3 than Rec.709, I feel the difference is due to our compression limit being relative to the boundary. 1.3 x the P3 boundary is reaching further out. Should we try to reach out to a fixed point? E.g. the AP1 limit. Then Rec.709 and P3 would both invert to the same place. If you have a green wall which maps to the limit of Rec.709, with writing on it in a green only visible in P3, the two renderings don't have an appearance match.
  • Daniele Siragusano: In float you could maintain a difference, and clipping is a choice.
  • Alex Fry: The other thing is the greens rendering at different brightnesses. But these are extreme edge case values.
[Alex showed his 3D visualization of the compression vectors, showing Rec.709 compressing at a steeper angle, reducing brightness]
  • Alex Fry: Should the angle be consistent between gamuts?
  • Daniele Siragusano: Is J a valid proxy for brightness for values outside the spectral locus?
  • Alex Fry: Earlier on we kept J constant under gamut compression, but we added slope to do things like reduce brightness to maintain more saturation. But I think that's the cause of the brightness difference between P3 and 709 renderings.
  • Nick Shaw: Those colors are so far out they are still clipped after gamut compression. But maybe they clip above or below the cusp for different gamuts. This is the maths of the angle of the compression vector.
[Nick showed his Desmos plot]
  • Alex Fry: The distance difference is greater than the angle difference.
  • Nick Shaw: Related to my Desmos plot, a few weeks back Pekka posted about the M values going back up after a point. Outside the 0-100 range the angles of the vectors start pointing away from the gamut hull (and in extreme cases have no solution, leading to NaNs). That might cause the M bounce back in Pekka's plot. Because we taper to achromatic at display maximum, any J value above that should probably have an M of zero.
  • Christopher Jerome: The modified Hellwig primaries seem to increase saturation. Maybe the chroma compression is having to do more work because of this. Are we just moving primaries around or targeting something in JMh? Maybe things that were changed earlier on could be reverted due to subsequent changes.
  • Nick Shaw: The original Hellwig primaries are an option in the Blink.
  • Pekka Riikonen: All the images in my thread are with the original primaries. It doesn't make much difference. I like the original primaries, but blues get darker. The compress mode is the biggest hack but needed to deal with negative values.
  • Daniele Siragusano: I've been testing, and when setting w_g to zero in my curve, mid grey stays in the same place if you fix the "value hitting the roof", and even varying that, the variation in grey is small. The shadow shape is slightly different for low dynamic ranges. I think it's worth it for a rendering function as simple as two lines of code (see the sketch after these notes).
  • Nick Shaw: Opening up much brighter values will make a small disparity in shadows less noticeable.
  • Daniele Siragusano: To do it exactly you need a piecewise function, with its associated issues.
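[Editor's sketch of the overall form of the Daniele Evo tone scale being discussed: a power-hyperbolic shoulder - the "two lines of code" - followed by a quadratic toe term for flare. The constants below are placeholders; in the full derivation m_2 and s_2 are solved from peak luminance, w_g and the other constraints, which is not reproduced here.]

    def daniele_evo_shape(x, m_2=1.04, s_2=0.075, g=1.15, t_1=0.04):
        """Shape sketch only; constants are illustrative placeholders."""
        x = max(x, 0.0)
        f = m_2 * (x / (x + s_2)) ** g      # shoulder: the "two lines of code"
        return max(0.0, f * f / (f + t_1))  # quadratic toe for flare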

Meeting #99, April 26th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon
Daniel Brylka
Chris Clark
Alex Fry
Christopher Jerome
Zach Lewis
Thomas Mansencal
Jeffrey D Mathias
Willem Nagtglas
Carol Payne
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Scott Dyer: Alex Forsythe and I have been doing more in-person colorist testing with v33. We hope to do a couple more this week. Looking at HDR/SDR match, gradability, default appearance preferences, etc. I've also been thinking about LMTs to do things like match the contrast of ACES 1.0, and matching HDR/SDR mid greys.
  • Alex Fry: Pekka's v35 is available as baked LUTs in the repo. I've been looking more at the issue with the green bottle in the ARRI ALEXA 35 bar image, where P3 clipped more than Rec.709 (green channel clipped to 1.0, red to 0.0).
[Alex showed a 3D JMh visualization of the Rec.709 and P3-D65 SDR gamuts with a 'net' showing the gamut boundary approximation used as 1.0 in the normalization for target gamut compression. He showed the effect of varying different parameters and flipping between limiting to Rec.709 and P3-D65. This can't be summarized in words, so it's necessary to watch the recording.]
  • Alex Fry: It's most noticeable with the bright green of the bottle, but it's there in the red of the bauble too. I don't know the answer. Should the compression values be absolute rather than proportional?
  • Nick Shaw: They can't really be absolute, because the different gamut shapes give different normalizations.
  • Alex Fry: Turning compression right up fixes it, but messes up inversion.
  • Pekka Riikonen: I notice the green comes out darker in Rec.709, whereas in P3 it's compressed less. All the gamuts are compressed relatively with the same parameter values, but in absolute terms they differ.
  • Nick Shaw: Does the cusp to mid blend change anything?
  • Pekka Riikonen: It does, but focus distance has more effect.
  • Alex Fry: That makes the gamut compression more horizontal, so desaturates highlights more.
  • Christopher Jerome: Might it just appear more desaturated because the hue is different?
  • Nick Shaw: Is it that for sRGB the projection vector is below the cusp, but the P3 one is above it?
  • Alex Fry: The sRGB cusp is higher, interestingly.
  • Pekka Riikonen: I looked at this image in HDR today and it looks fine. Jeffrey noted that the blue channel clips in the ceiling of blue bar. We can compress blue more by changing the chroma compression gamut, and the eccentricity factor it uses. But then we just get a lighter blue at higher exposure. The ColorChecker blue will still be darker than it should be. The other fix I added recently was Nick's fix for near-achromatic values. That also helps with NaNs.
  • Nick Shaw: Is that just my conditional bypass for values below the threshold (see the sketch after these notes)? It's not my whole alternate gamut compression normalization?
  • Pekka Riikonen: Correct.
  • Christopher Jerome: I am going to make a post regarding one equation in ZCAM which may be helpful. It's designed to help with issues in blue.
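[Editor's sketch of the near-achromatic conditional bypass referred to above: below a small M threshold the gamut compressor's normalization approaches 0/0, so the sample is passed through unchanged. The threshold value is an arbitrary illustration, and compress_fn stands in for the real JMh gamut compression; the actual code may blend rather than hard-switch.]

    def gamut_compress_safe(J, M, h, compress_fn, m_threshold=1e-4):
        """Skip gamut compression for near-achromatic samples."""
        if M < m_threshold:
            return J, M, h           # effectively achromatic: leave untouched
        return compress_fn(J, M, h)  # normal gamut compression path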

Meeting #98, April 19th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Luigi Giardiello
Luke Hellwig
Christopher Jerome
Zach Lewis
Jeffrey D. Mathias
Willem Nagtglas
Joshua Pines
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Scott Dyer: Sorry for the last minute switch to Zoom. We hope to use Zoom from now on (bit.ly/acesvwg). Luke has a couple of questions.
  • Luke Hellwig: I have questions about how ACES works and how the DRT fits into that. We talk a lot about Rec.709, but what color space are we coming from? AP1?
  • Alex Fry: 99% of the time AP1 is what people will be using. VFX uses ACEScg (AP1 linear) and DI often uses ACEScct (log AP1) or maybe other working spaces.
  • Luke Hellwig: What color space do our test images like blue bar start in?
  • Alex Fry: They start in a camera encoding. Blue bar is ARRI Wide Gamut v3. We work in AP1, but try to maintain source values outside that as negative AP1 values. The ACES RGC is designed to bring values into AP1.
  • Luke Hellwig: Samsung are looking to get Rec.2020 displays out there in the longer term.
  • Alex Fry: We want to target wide gamut displays, but for testing have limited to P3, because most don't have Rec.2020 displays.
  • Luke Hellwig: Samsung have a quantum dot OLED which goes outside P3, if that's useful for testing.
  • Alex Fry: It would be good to test our assumptions about hue lines into wider gamuts. Currently the Blink only has preset display primaries, but we plan to be able to target arbitrary ones.
  • Luke Hellwig: Pekka noted he has a quantum dot OLED. It's not a reference display.
  • Alex Fry: With the old and new architectures you can target arbitrary primaries.
  • Joshua Pines: Studio deliverable requirements are to limit to P3 for now.
  • Pekka Riikonen: I've been looking more into the blue issue. I also looked at the inverse of the compression algorithm. If I remove the offsets the inverse works better. Removing these was mentioned in one of the papers Kevin linked to. It hardly changes the image visibly.
  • Nick Shaw: We should check the effect of this on the gamut hull shape, to make sure our approximation is still reasonable.
  • Pekka Riikonen: The addition and subtraction cancel out, but there will be precision loss.
  • Luke Hellwig: It seems reasonable to take it out.
  • Nick Shaw: Does it affect the slope at zero? Offsetting, applying a power curve and then subtracting the offset can be used to make a power function with finite slope at zero, which is sometimes useful (see the sketch after these notes).
  • Pekka Riikonen: It doesn't appear to affect the black level. Only color changes very slightly. The inverse round trip gets much better when using Bjorn's compress method. It has no effect when I use the ACES gamut compression approach on the LMS channels. The curve Kevin showed last week may improve things further. I've made a PR for v35 with these changes. I've also added a slider to go between the model's weightings for LMS and equal weightings. By default I've set that to 1, which is the model weights. Moving it towards zero makes blue lighter, but yellow hotter. I also changed the primaries a bit to improve HDR/SDR matching.
  • Alex Fry: I'll bake LUTs of v35.
  • Nick Shaw: I posted about the inverted shadows on the pool balls in blue bar. That isn't new. All our recent renderings do that. That image just highlights it. Moving the achromatic slider in v35 helps that. ARRI Reveal and K1S1 produce a less saturated, brighter blue light on the table, so the shadows are darker.
  • Pekka Riikonen: ARRI Reveal also makes the blue highlight on the ceiling brighter, and more cyan.
  • Alex Fry: We've noted before that blue can "suck light out" of pixels.
  • Lars Borg: If a rendering changes the order of brightness of colors that's a problem.
  • Alex Fry: Adding light shouldn't reduce brightness.
  • Luke Hellwig: It might be helpful to look at individual channels for the different rendering. Sometimes the red channel is going very dark.
[Pekka flipped through various DRT versions and ARRI Reveal, looking at different channels and the luminance (strictly luma – a weighted average of the non-linear channels)]
  • Alex Fry: We may be getting channel clipping from changes in gamut compression.
  • Pekka Riikonen: I guess it may be the new chroma compression.
  • Alex Fry: The balance between smoothness and hitting the corners.
  • Nick Shaw: A more desaturated rendering will always have channels that are more similar. Preserving saturation will have more exaggerated channel differences.
  • Luke Hellwig: Can we compare Rec.709 and P3 to see if the problems are from cramming it into Rec.709? The ARRI bar image seems to have more green channel detail in the Rec.709 image than the P3 one, which seems wrong.
  • Alex Fry: We need to investigate that. Looking at it in 3D may be useful.
  • Scott Dyer: I did testing with v33 with Josh's colorists. The feedback was very positive. We certainly need some pre-made LMTs. People want to do things like limiting HDR highlights to make practicals less distracting. We should make a 1D LUT to match the contrast of ACES 1.0. People are now starting with the HDR grade, and using Dolby Vision to derive SDR.
  • Alex Forsythe: People commented to me at NAB that with ACES they need to create very bright scene values to create peak white SDR titles. We should have a utility LMT to help with this. Also people mentioned workflow considerations for matching mid-greys.
  • Scott Dyer: We generally thought it was a good idea that mid grey luminance increases with peak. But what about people who want the same grey in SDR and HDR?
  • Alex Forsythe: We don't want people making ACES masters with 18% grey in an unusual place that only works with one ODT.
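[Editor's illustration of Nick's point about offsets and slope at zero: for an exponent p between 0 and 1, x**p has infinite slope at x = 0, while the offset form (x + o)**p - o**p has the finite slope p * o**(p - 1) there. The exponent and offset values below are arbitrary.]

    p, o = 0.43, 0.1  # arbitrary illustrative exponent and offset

    def plain_power(x):
        return x ** p                 # slope p * x**(p - 1) -> infinity at 0

    def offset_power(x):
        return (x + o) ** p - o ** p  # slope at 0 is p * o**(p - 1), finite

    eps = 1e-8
    print(plain_power(eps) / eps)   # enormous: the slope blowing up at zero
    print(offset_power(eps) / eps)  # ~ p * o**(p - 1), about 1.6 here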

Meeting #97, April 12th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Christophe Brejon
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Giardiello
Luke Hellwig
Christopher Jerome
Zach Lewis
Jeffrey D Mathias
Willem Nagtglas
Carol Payne
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: Scott has a report on the initial colorist tests.
  • Scott Dyer: This is just initial testing with us in the room with the colorist, to be sure things are applied properly, and initial reactions on e.g. HDR / SDR match. We tested with Universal post yesterday, with normal imagery. Nothing extreme. Very positive reaction. They preferred it to ACES 1.0. They liked the Rec.709 sim so they could toggle. We’re testing with Josh Pines and some Technicolor colorists later today.
  • Kevin Wheatley: I've been doing some research on CAM16 and earlier variants. I first focused on the step 1 hyperbolic function. I've collected some references. A few people pointed to failings of the hyperbolic function, including the infinite slope at zero, which creates issues with noisy images. One solution was a linear segment through zero to limit the slope (see the first sketch after these notes). Also at high values the hyperbolic goes flat, and it was suggested to extend it linearly after a threshold. I would also like to know what the scaling of our input is, as that affects where you break. The linear segment has a gradient discontinuity at the break point, so I came up with an alternative. I've started to implement this in my own code.
  • Pekka Riikonen: Should this make the curvy lines less curvy?
  • Kevin Wheatley: That comes partly from the mirroring about zero. We could extend it differently. Also a number of people found problems in the blue corner, so people looked at the LMS space. It's all similar to what we've been trying.
  • Nick Shaw: If the curve is linear through zero, that might help the wiggliness by not changing slope direction there.
  • Kevin Wheatley: Mine is not completely linear through zero, but still mirrors. We could extend linearly in negatives. But there is also a power function in the calculation of J, which will interact. So we need to be careful. Unfortunately noisy pictures always go below zero.
  • Pekka Riikonen: I wanted to make a new version for this week, but didn't get a good enough solution. To make HDR and SDR blue hues match, the blue gets very dark. Alex commented that the CAM16 primaries make the blue suck out light. Moving the primaries as we've done introduces skew. Why is it so dark? I've discussed options for changing the lightness metric with Luke. I made the weights of the RGB values in the lightness formula variables.
  • Nick Shaw: Are they RGB or LMS at that point?
  • Pekka Riikonen: They are LMS, or officially "sharpened RGB" in the paper. I found equal weights in the achromatic response produce the best result. It does affect other colors.
[Pekka showed the difference on the ColorChecker]
  • Pekka Riikonen: It needs more testing. I haven't looked at HDR or the inverse yet.
  • Kevin Wheatley: Your offset parameter is something that has been dropped in some versions. You need to reintroduce it later.
  • Pekka Riikonen: This only seems to affect the black level noticeably. I think I've included it everywhere I needed to. I am also now generating the inverse matrix on the fly, because of the variable achromatic response. Changing the weights doesn't seem to change hue, just lightness. I want to try including Kevin's new curve as well.
  • Kevin Wheatley: We could break the curve again at zero for a true linear extension. It shouldn't affect hue, but will affect chroma.
  • Pekka Riikonen: Darker colors get darker, so contrast increases.
  • Kevin Wheatley: I wonder if it's related to the HK effect, if you are adjusting what achromatic means.
  • Pekka Riikonen: All HK compensations only lift colors. Nothing gets darker. Luke suggested it was best not to include HK.
  • Alex Fry: What this does around blue makes sense, but I'm a bit worried about the inverting curve around red. Does that mean things are going negative?
  • Pekka Riikonen: I guess the curve reversal we see in red and yellow is where it becomes negative.
  • Kevin Wheatley: It may be related to where the primaries are.
  • Pekka Riikonen: I did try chromaticity linear compression which gives straighter lines.
[Pekka showed a version where you can select the type of compress mode]
  • Pekka Riikonen: The 2D plot doesn't tell the whole story.
  • Alex Fry: We've not seen problems with reds.
  • Nick Shaw: Is that because we don't have test imagery with red issues, or do cameras not produce values out there?
  • Kevin Wheatley: I've seen issues with brake lights.
  • Alex Fry: And Red Christmas has values out there. And the Venice candle image.
  • Pekka Riikonen: We could leave the current version as is and not worry about the blue hue.
  • Kevin Wheatley: So what outstanding items do we have, and what are our next steps?
  • Nick Shaw: Next steps are affected by Pekka's question – stick with the current or keep experimenting to reduce hue skews. It's blues skewing magenta as they desaturate that creates the HDR/SDR mismatch.
  • Pekka Riikonen: I'll keep tinkering. There are other issues like the NaNs from the gamut mapper.
  • Alex Fry: And creative white points.
  • Nick Shaw: Daniele's suggestion involved targeting a display which actually has the desired white point and then the rest is encoding.
  • Pekka Riikonen: The soft clip in v34 may help.
  • Alex Fry: The scaling to prevent clipping should be procedural this time, not magic numbers.
  • Nick Shaw: We also need to consider the additional roll-off to reduce the amount of scaling needed with a DCI P3 display. Does Baselight do that, or just scale as much as needed? Where did the extra roll-off come from? I assume somebody asked for it because the scaling was too great.
  • Scott Dyer: I forget the exact process. Perhaps Doug Walker suggested it. I hope it's relatively simple signal prep. We need it to be more consistent than in the current version.
  • Nick Shaw: In theory the largest value is R=G=B=1.0 for a display with the desired white. So you encode that for the target display, find the highest channel and divide all three by that (see the second sketch after these notes). I think I tried that once, and got a scale very close to the magic numbers in the current CTL.
  • Kevin Wheatley: You model the virtual device and find the closest point in the new encoding. The issue then is how big a scale you end up with.
  • Pekka Riikonen: I did some investigation of the gamut mapping producing NaNs. Something very odd happens with high values. The M channel doubles back, and some values produce NaNs.
  • Nick Shaw: We haven't tried Kevin's suggested alternate quadratic solve. It may be converging on zero divided by zero for horizontal gamut mapping, and the result becomes undefined, but should just be zero. I haven't had any time this week to investigate my more accurately invertible gamut compression. I'm getting an odd error where neutrals are changing brightness. Gamut compression should have no effect on those.
  • Alex Fry: I didn't get a chance to try adjusting the compression parameters to reduce skews as Pekka suggested. I was also wondering whether cusp smoothing expanding the gamut is the right thing to do. Or cusp smoothing at all.
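[First editor's sketch: the CAM16 step-1 hyperbolic nonlinearity Kevin describes, with the published linear-segment fix. The curve is the standard CAM16 post-adaptation function (the usual +0.1 offset omitted, as discussed in these meetings); the break point value is an arbitrary illustration, and where to break depends on the input scaling.]

    import math

    def cam16_nl(x, F_L=1.0):
        """Standard CAM16 step-1 nonlinearity, mirrored about zero."""
        t = (F_L * abs(x) / 100.0) ** 0.42
        return math.copysign(400.0 * t / (t + 27.13), x)

    def cam16_nl_linear_toe(x, F_L=1.0, x_b=0.26):
        """Linear segment through zero below the break point x_b, limiting
        the otherwise infinite slope at the origin. Note the gradient
        discontinuity this leaves at x_b - the issue Kevin's alternative
        aims to remove."""
        if abs(x) >= x_b:
            return cam16_nl(x, F_L)
        return cam16_nl(x_b, F_L) * (x / x_b)  # straight line through zero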
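[Second editor's sketch: the procedural white scaling Nick outlines - encode the virtual display's white (R=G=B=1.0 at the creative white point) in the target display's primaries, find the highest channel, and divide all three by it. The matrix argument is an assumption standing in for however the two displays' encodings are related.]

    import numpy as np

    def creative_white_scale(virtual_to_target):
        """virtual_to_target: assumed 3x3 matrix converting linear RGB
        from the virtual (creative white) display to the target display.
        Returns the scale that keeps the virtual white inside the cube."""
        target_white = virtual_to_target @ np.ones(3)  # encode R=G=B=1.0
        return 1.0 / target_white.max()                # divide by max channel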

Meeting #96, April 5th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Giardiello
Luke Hellwig
Christopher Jerome
Zach Lewis
Jeffrey D Mathias
Willem Nagtglas
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Alex Fry has some comments on discussions on ACES Central. Pekka has another update with work on the compression. Alex Forsythe and Scott plan to set up some testing through The Academy, supervised testing with some 'tame' colorists, who have their own media to test.
  • Pekka Riikonen: I posted about an alternate version of the LMS 'compress mode' we use. I wondered why the blue in the blue bar image doesn't match between SDR and HDR. SDR is purplish, but HDR is blue. There are two issues. How is compress mode supposed to work, and the choice of compression algorithm. The compression algorithm has the biggest effect in terms of the inverse, which in the new version is much improved. It also affects the hue. How compress mode works, and where we apply it, has a smaller impact. We use compression because we have negative LMS values. Compress mode straightens the hue lines in CIExy. Currently it compresses the LMS values before applying the non-linearity then uncompresses afterwards (see the sketch after these notes). And the same happens on the way out from JMh to XYZ. Why do we compress and uncompress multiple times? The LMS compression comes from Björn Ottosson's extended Oklab. It's unrelated to chroma compression or gamut compression. Does this uncompression after the non-linearity make sense? First I tried using a gamut compression in LMS, without the uncompress. Then I tried applying compression when going XYZ to JMh and uncompression when going JMh to XYZ. That's what I show in the rest of my post. I left the hue lines curvy to show that the choice of compression algorithm affects this. They could be straighter. The result for blues is much closer between SDR and HDR. Blue is darker because I'm using the stock CAM16 matrix. Blue stays more saturated for longer. The biggest difference is with the inverse, but that comes almost entirely from the choice of compression algorithm.
  • Kevin Wheatley: Philosophically, is this compression (like the tone-scale) part of the creative choice? Or is it a technical fix? Or something else? If you only apply it in one direction, it becomes part of a look. With compress and uncompress it's a technical correction, or limiter. You're slightly redefining what the image state is at the various points. Now the middle part is all in the compressed state.
  • Pekka Riikonen: We still do the final uncompression, which is interesting, because it's not the same chromaticities coming out. Does uncompression make sense when we have scene chromaticities in and display chromaticities out? But it helps with the inverse.
  • Kevin Wheatley: So do they need to match? Because there are two reasons for doing it.
  • Pekka Riikonen: If we clipped scene chromaticities to AP1 we wouldn't need compress mode. The model could handle all the values. In fact if you do that the image is almost identical.
  • Kevin Wheatley: The camera spaces don't match AP1, but the data is still meaningful.
  • Alex Fry: It's interesting in blue bar we've kept blue but lost some intensity, which has always been the trade off.
  • Pekka Riikonen: In the version using CAM16 primaries. But not in all variations I tried.
  • Alex Fry: The original idea of compressing and uncompressing was to keep it limited to one point of the chain. But maybe it is redundant.
  • Pekka Riikonen: What do the uncompressed non-linear values mean?
  • Kevin Wheatley: We could adjust the non-linearity because part of the problem is that the equation is ill defined for negatives. There have been alternatives proposed.
  • Pekka Riikonen: Just taking the uncompression away, the continuation of the hue lines seems wrong.
  • Alex Fry: The intent is to keep the hue lines straighter for longer, for cameras that produce data in that range.
  • Alex Forsythe: The camera sees the world a different way, so its chromaticity diagram is different from ours. We are projecting it onto a diagram that is not how it sees the world, but the data isn't 'junk' outside our spectral locus.
  • Alex Fry: That data contains stuff we want to represent.
  • Lars Borg: Each camera extends the spectral locus in different ways, so shouldn't that be dealt with by the IDT? We need a camera specific correction.
  • Kevin Wheatley: True, but what we need is just smoothness and predictability.
  • Nick Shaw: We can't and shouldn't try to correct for colorimetric differences between cameras. We just need a universal rendering that handles them gracefully.
  • Joshua Pines: And there would almost always be grading which may move things out, so the results need to be reasonable within colorists' expectations.
  • Kevin Wheatley: I'd be interested to know what causes the SDR/HDR mismatch that started all this off. It seems more logical to fix that at the back end, not the front.
  • Pekka Riikonen: We're currently doing it at the front to get rid of values the model can't handle. The mismatch seems to be because when we desaturate blue it skews purple, and that happens much sooner in SDR. It would happen, but later in HDR. But why does it skew purple? Blue is most problematic for matching, then red. The new version does not change the HDR rendering much. But SDR changes and matches better.
  • Alex Fry: Perhaps the hue lines hooking back really does represent what is happening, but we are straightening them out because they collapse in the stock model.
  • Pekka Riikonen: It's hard to know what should happen.
  • Alex Fry: Maybe we're breaking hue linearity in the normal region in order to protect against collapse with the extreme values.
  • Pekka Riikonen: I don't think we need to compromise, as long as we make HDR and SDR images match.
  • Kevin Wheatley: I think we need to move on. Alex, can you show your experiments?
  • Alex Fry: I have a ramp from some production imagery that causes problems. It goes from <1 orange to a very bright red. Comparing v28 to v34, v34 flattens faster and produces a kind of Mach band. It feels related to the final clipping to the display cube. Perhaps the new cusp smoothing, because it expands out rather than pinching in the cusp, may make more values go outside the cube and therefore clip.
[Alex showed plots of the differences]
  • Alex Fry: The other question was about differences between the Blink and the OCIO LUT implementation. I'm guessing we have some values that sit in an awkward position between the LUT vertices.
  • Kevin Wheatley: After the last meeting Alex was looking at what it took so that, in grading, pushing the color wheel to blue would make things hit the display's blue corner. If you add an offset to make it do that, it still hits the corner for P3 as well, and almost for Rec.2020.
  • Alex Fry: Without the offset they land close for Rec.2020 [presumably because AP1 is almost the same as Rec.2020]. With the offset it hits Rec.709 and near P3 but misses Rec.2020. It's desirable for colorists to hit the display corners by slamming the color wheel to e.g. blue.
  • Pekka Riikonen: What about opponency? Are blue and yellow aligned? In Hellwig they are not exactly opposite, but close.
[Alex added opponent colors to his 3D plot]
  • Alex Fry: I'm a little concerned about the amount of excursion [outside the display unit cube] I'm seeing.
  • Pekka Riikonen: The gamut mapper doesn't map everything in.
  • Alex Fry: Should we expect this much with AP1 ramps as input? I'm looking at a linear Rec.2020 display cube.
  • Kevin Wheatley: So AP1 primaries end up outside the Rec.2020 cube through the DRT. Although the cyan doesn't go out. So are there areas of cyan you can't reach with sensible values?
[Alex compared the same plot with v34 and v28, showing that v28 stayed mostly well within the cube]
  • Pekka Riikonen: I think we're seeing the effect of limiting the chroma compression, rather than compressing the whole range.
  • Alex Fry: That would explain the skews I saw due to final clipping.
  • Pekka Riikonen: We could define a sensible range in the forward direction that should fill the cube.
  • Alex Fry: Or maybe we need to make the gamut compressor more aggressive.
  • Christopher Jerome: Are we going to bake LUTs of the new version?
  • Alex Fry: We can, but I think we should call it v35.
  • Christopher Jerome: The extensions further out seem very erratic. Can we use a different compressor and straighten those in a predictable way?
  • Pekka Riikonen: I'm using the ACES gamut compression which compresses towards primaries. It's not chromaticity linear. I can make e.g. blue skew more cyan or magenta if that's what we want.
  • Kevin Wheatley: What is the source of that erratic behavior? It's partly the mirrored function with negative LMS values. Originally we tried linear extension. Also near zero the slope is very high, which can have undesirable noise effects. So there are a lot of things that go into figuring out what needs to happen.
  • Christopher Jerome: It seems the control over the strength of blue in Pekka's latest experiments is worth investigating further.
  • Pekka Riikonen: I'll open a PR with a tweaked version that makes the lines less curvy.
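[Editor's sketch, in the spirit of the 'compress mode' discussed above: pull each LMS channel's distance from the achromatic axis through a soft limit, so negative channels (distance greater than 1) come back non-negative; the inverse would be applied to uncompress after the nonlinearity. The curve below is a generic placeholder, not Björn Ottosson's actual formulation or the Blink code, and it compresses small distances more than a practical curve would.]

    import numpy as np

    def lms_compress(lms):
        """Generic distance-to-achromatic compression sketch."""
        lms = np.asarray(lms, dtype=float)
        ach = lms.mean()                 # achromatic value for this sample
        if ach <= 0.0:
            return np.zeros(3)           # wholly non-physical: collapse
        d = (ach - lms) / ach            # d > 1 where a channel is negative
        d_c = np.where(d > 0.0, d / (1.0 + d), d)  # [0, inf) -> [0, 1)
        return ach * (1.0 - d_c)         # compressed channels stay >= 0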

Meeting #95, March 29th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Christopher Jerome
Zach Lewis
Thomas Mansencal
Jeffrey D Mathias
Carol Payne
Pekka Riikonen

Meeting Notes

  • Nick Shaw: My DCTL and GLSL implementations are a couple of steps behind the Blink. They are still at v31. When I originally posted my shader implementation of the DRT for Baselight it had a placeholder inverse, which was only the curve. I've now added a full inverse. But it has some NaN-type artifacts. The code is derived from the DCTL, and should be functionally identical, so I haven't yet found what's happening. The DCTL has a NaN check at the end that just sets NaNs to black. But we should really investigate what might cause them.
  • Kevin Wheatley: Daniele raised concerns over performance. Have you seen that?
  • Nick Shaw: It maintains 1080p25 on my 2019 16" Intel MacBook Pro, so I haven't noticed performance issues. And it's not optimized at all yet. It would be good if others could test different Baselight systems. The other thing I realized while working on the GLSL was that our current approximation of the intersection of the compression vector with the gamut boundary is calculated using the J-axis intersection and the pixel JM value, and that gives a slightly different result for the compressed and uncompressed pixel values. So the inverse gamut compression doesn't use quite the same value. Finding the exact intersection of a straight line and a gamma curve is complex, but we don't need an exact value, as the gamma curve is only an approximation of the gamut shape. I came up with a new version that calculates the intersection only from the J-axis intersect and the slope, so is identical in both directions.
[Nick showed his Desmos plot of the alternative approach]
  • Alex Fry: It would be interesting to see how good a match it is to the real boundary.
  • Nick Shaw: I think the difference is negligible. It is just exactly the same in both directions rather than almost the same. The real boundary is close to the gamma approximation for most hues, but I think around h=0 it bends a bit more, and we are still using the same gamma. Does that have any effect on the rendering being better for some hues than others?
  • Kevin Wheatley: Hue of zero has a b value of zero. We need to be sure our atan2 function handles that properly and consistently on all systems. Also atan2 normally gives you radians, which we then convert to degrees. It's then converted back to radians when we use it later. And that can lose precision. Degrees are only useful for people. Removing the two conversions would help (see the sketch after these notes). Also radians go from -Pi to +Pi, not 0 to 360. We need to be sure that's handled correctly.
[Pekka then showed his updated rev33 implementation with the improved curve match, as described in his ACES Central post]
  • Pekka Riikonen: The tone-scaled lightness divided by original lightness of mid grey was different for different peak luminances in v32. In v33 I modified the curves for a match at and below mid grey, as we had before.
  • Nick Shaw: The most important thing is creating a perceptual match between different peak luminances. But should we be careful we're not nulling out differences that should be there due to the model?
  • Pekka Riikonen: The differences aren't from the model. We're doing it with the tone-scale.
  • Kevin Wheatley: I was thinking about something similar. The tone-scale means we're feeding a different image into the model, not a rendered image and asking it to adapt to different targets.
  • Nick Shaw: Is our entry point into the model wrong? Are we tone mapping in the wrong place and then needing to cancel out the effect of that?
  • Kevin Wheatley: I don't think so. Our tone mapping knows about the final target. It's an all in one transform, without the two step rendered image and then target mapping.
  • Pekka Riikonen: Another thing I looked at is different levels of exposure lift. Going from 10 nit grey at 100 nits to 15 nits at 1000 nits seems too much to me. I find that using 0.12 as w_g in the Daniele curve feels a better match for me.
  • Nick Shaw: We said we wouldn't change the value of 15 nits at 1000, because nobody had objected to that.
  • Pekka Riikonen: ACES 1.1 is a more finished image, so for our lower contrast image maybe a lower mid grey is more suitable, and people can add contrast and move grey if they want.
  • Alex Fry: Should we revisit our assumption that we do want to raise the mids with peak luminance?
  • Nick Shaw: Didn't people say they didn't like them being the same in 1.0, so 1.1 raised HDR mids?
  • Alex Fry: There is some disagreement. What do people here think?
  • Nick Shaw: Consumer TVs don't do SDR at 100 nits, so mid grey isn't 10 nits. But PQ may be more accurate, being absolute, so you may well get HDR greys where you specify, meaning HDR could look darker than SDR to home viewers.
  • Kevin Wheatley: We could look at what SDR mids are for consumer TVs.
  • Nick Shaw: Won't a 200 nit BT.1886 display just put mid grey at 20 nits instead of 10?
  • Kevin Wheatley: Depends on flare and display contrast.
  • Pekka Riikonen: 0.12 gives a match to Jed's curve, if that's important.
  • Alex Fry: The SDR match to the average data is the only important match, I think.
  • Pekka Riikonen: It would be good if more people could evaluate different lift values.
  • Alex Fry: Maybe we could bake out a series of PQ encoded images with different values for people to compare.
  • Kevin Wheatley: Jeffrey posted his impressions of v33. He felt the Rec.709 sim didn't match the others.
  • Alex Fry: He's looking at Resolve where the sim uses a different LUT to the Rec.709, where OCIO now uses the same LUT and either BT.1886 or PQ encodes the result.
  • Nick Shaw: The Rec.709 sim emulates an ideal SDR display with zero black. Might the SDR on a real display not match that? On an OLED I suppose they should match.
  • Alex Fry: Comparing the Rec.709 and sim version in Nuke they match.
  • Scott Dyer: In my testing I have always used the Rec.709 sim LUT, so I'll check and compare too. What version should I use for testing with Alex Forsythe?
  • Alex Fry: I would use v33. That has improved the 540 nit match. SDR and HDR were OK in v32.
  • Pekka Riikonen: I posted last week about the possibility of making the final gamut clip a soft clip. It's not in v33 yet. It seemed to work really well. I see it as a replacement for the final display clip, but maybe not used in the inverse. It does mean nothing ever quite reaches 1.0.
  • Nick Shaw: Does it tend to 1.0 at infinity?
  • Kevin Wheatley: It may be worth letting it clip a very small amount, so people can hit 1.0, at least with display quantized values. Otherwise there might be QC issues.
  • Nick Shaw: Going back to earlier, I have the Rec.709 and Rec.709 sim plus PQ to BT.1886 transform up in Resolve, and they don't quite match. But I think it's LUT precision.
  • Alex Fry: I can redo the Resolve version so Rec.709 and Rec.709 sim use the same LUT.
  • Kevin Wheatley: That LUT difference could be what Jeffrey was seeing. Chris asked if we will have an official channel for colorist feedback. We will initially have supervised colorist feedback with Alex and Scott, but there will be wider testing later. And they are all available in the repo if people want to test them.
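[Editor's sketch of Kevin's point about atan2: compute hue once in radians and wrap it to a single convention, instead of round-tripping through degrees. The function below uses standard NumPy calls; pinning down the a = b = 0 case explicitly is the portability concern raised above.]

    import numpy as np

    def hue_radians(a, b):
        """Hue angle from the opponent a/b pair, wrapped to [0, 2*pi).
        np.arctan2(0, 0) returns 0, but behaviour on the achromatic
        axis should be made explicit across implementations."""
        h = np.arctan2(b, a)            # result in (-pi, pi]
        return np.mod(h, 2.0 * np.pi)   # wrap negatives into [0, 2*pi)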

Meeting #94, March 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Christopher Jerome
Shebbe Klaasen
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Alex Fry: Jeffrey, were the differences you noticed between Rec.709 and Rec.709 sim, or Rec.709 and P3-D65?
  • Jeffrey D Mathias: It seemed to me that when in P3 the compression seemed to be different from that of Rec.709. Rec.709 seemed to work well. The scopes, for what they can show, seemed to indicate something differed.
  • Nick Shaw: Because Pekka's chroma compression uses the Rec.709 gamut, within that, a Rec.709 and P3 rendering at the same dynamic range should be the same.
  • Pekka Riikonen: Inside yes, but with a wider gamut you have less compression of saturated colors.
[Alex showed this on a chromaticity plot by toggling between Rec.709 and P3-D65 SDR]
  • Nick Shaw: In the LUT DRTs there's only P3-D65 at 1000 nits, so we wouldn't expect that to have the same chromaticities as 100 nit Rec.709, or look the same on a vectorscope.
  • Pekka Riikonen: With v32 the compression limit changes with peak luminance, so you can have more saturated colors in HDR. And I changed things a bit because red was coming out hot.
  • Kevin Wheatley: When you stare at a P3 display, you get used to it, and when you look at Rec.709, reds look more orange for example. The chromaticity changes I'm seeing look plausible. It would be interesting to look at 1000 nit Rec.709, so we see the different effects of the gamut and dynamic range.
  • Nick Shaw: We would expect a perceptual model to shift chromaticities to create a perceptual match at a different brightness.
  • Pekka Riikonen: In the previous version colorfulness was compressed throughout the range, but now after a threshold it is left unchanged for the gamut mapper to deal with. There is a point where you expect differences to the previous version.
  • Kevin Wheatley: We expect differences on scopes. But are they the differences we want? That's what we're testing so we can refine the parameter values.
  • Alex Fry: We need feedback from people who can look at it on P3 capable 100 nit displays, so they can see all the renderings properly.
  • Pekka Riikonen: I didn't use scopes at all. I went for what looked right to my eye.
  • Alex Fry: The Rec.709 and Rec.709 sim versions are different 3D LUTs, so there will be small differences from interpolation.
  • Scott Dyer: Alex Forsythe and I looked at rev31 this week on an X300. We were happy with the HDR/SDR match. We also compared them to the Colorfront LUTs. They have made slightly different choices. Reds in skin-tones seemed a little dead in ours compared to Colorfront. I'm looking for a metric to see what we could change to help skin-tones. Alex wants to get a candidate in the hands of a couple of Hollywood colorists. Any tweaks we want before that?
  • Pekka Riikonen: Did you see the same skin-tone issues in SDR and HDR.
  • Scott Dyer: Yes. They looked fine in isolation, but people will compare them to other renderings they like.
  • Nick Shaw: Perhaps ours is more neutral, and the Colorfront rendering may have something added for preference. But can you add that same thing to ours with a grade?
  • Pekka Riikonen: We do need to look at it with various LMTs for contrast etc.
  • Scott Dyer: Generally for reds in particular ours look much better than ACES 1.0.
  • Alex Fry: We need to be careful if we tune it, because it is so affected by where cameras land skin-tones. We want to be objective and neutral. The chroma compressor is the most subjective thing we have.
  • Nick Shaw: T-Cam aims to be objective and neutral, and I've always thought skin-tones look a little dead with that until you add a "look", which is why FilmLight provide a selection of those.
  • Scott Dyer: Maybe we ship a few looks, like others supply look libraries.
  • Kevin Wheatley: There's a question in the chat about how to compare. As Pekka says, side by side or toggling on the same display are ideal. Are LUT implementations optimal for these colorists to evaluate?
  • Alex Fry: Baselight uses the same cube for Rec.709 and Rec.709 sim, because it handles that automatically.
  • Kevin Wheatley: Has anything in 32 affected Alex and Scott's evaluation of 31?
  • Pekka Riikonen: They are very close. 32 may have slightly more saturated skin tones.
  • Kevin Wheatley: Scott noted in the chat that Colorfront's bright saturated highlights render a bit darker to maintain saturation.
[Alex showed the differences between 31 and 32]
  • Pekka Riikonen: I did change some parameters. I took the compression threshold to zero, as Nick had previously suggested. The threshold starts from achromatic, but the power is so high it's effectively linear for the first part. It will be interesting to know how easily colorists can get the skin-tones they want, and hue skews if they want them.
  • Alex Fry: And if they introduce deliberate hue skews, to make sure they are consistent across the different targets.
  • Kevin Wheatley: It would be interesting to try emulating the old ACES rendering with an LMT. We could emulate the 1.0 SDR and HDR and look at each on the other, which we couldn't do before.
  • Alex Fry: I'll look at refactoring the DCTL to use the Rec.709 cube and procedurally encode as PQ. Maybe the OCIO configs too.
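[For illustration, a rough sketch of the procedural PQ encoding Alex describes: take the BT.1886-encoded output of the Rec.709 cube, display-linearize it, place SDR peak at 100 nits, and apply the standard ST 2084 (PQ) encoding. The function names and the zero-black-offset BT.1886 decode are assumptions, not the actual DCTL.]

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Encode absolute luminance (cd/m^2) as a PQ code value."""
    y = np.clip(nits, 0.0, None) / 10000.0
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def rec709_cube_to_pq(bt1886_rgb, peak_nits=100.0):
    """Display-linearize the cube's BT.1886 output (simple 2.4 gamma,
    zero black offset assumed) and re-encode as PQ with SDR white at
    100 nits."""
    linear = np.clip(bt1886_rgb, 0.0, 1.0) ** 2.4
    return pq_encode(linear * peak_nits)
```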
  • Kevin Wheatley: I'll keep working on my C++ implementation and unit tests. Pekka asked about the white point for the compression gamut.
  • Pekka Riikonen: Should it use illuminant E?
  • Kevin Wheatley: It depends what achromatic means at that stage. It's defined by what J represents. Is that still D60 from the ACES input?
  • Pekka Riikonen: What about the creative white? Do we just come out of the model with the chosen white?
  • Kevin Wheatley: I think that's what Daniele was suggesting.
  • Alex Fry: Creative white is what we say J is on the way out. What the achromatic J axis means.
  • Pekka Riikonen: Is it possible to come out of the model with the creative white point and then go back in with the destination white?
  • Kevin Wheatley: Daniele suggested you should never do that. He suggested setting your display to creative white, so you never see anything outside that, and after that it's just encoding.

Meeting #93, March 15th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Giardiello
Luke Hellwig
Zach Lewis
Jeffrey D Mathias
Pekka Riikonen
Juan Pablo Zambrano

Meeting Notes

  • Kevin Wheatley: To help document the DRT I've been breaking it down and implementing the model in C++. Luke sent me expected input and output values to verify my implementation. Pekka posted on ACES Central about his latest experiments.
  • Pekka Riikonen: I've been looking at simplifying the chroma compression. I started by scaling down the values, and working from that. I like the results from my new prototype. I enabled cusp smoothing, and removed ZCAM and the linear extension, to simplify the code. The new chroma compression is much simpler. I scale M by the same ratio as J, then normalize so it's 1.0 at the cusp. So it's now hue dependent, because we use the cusp. My compression curve is much simpler than the old one, but has more control. I then reverse the normalization and add saturation. The compression is driven by tone scale lightness, so there's more compression for brighter values, creating a path to white. I'm currently always using Rec.709 as the compression gamut. It means anything within Rec.709 is identical on all displays. The rendering is very similar in SDR and HDR. I also scale the cusp with an eccentricity factor. I tried three curves for that - CAM16, Luke's curve, and a custom curve. My custom curve doesn't have the dip around cyan that Luke's does, to prevent under-compressing cyans and blues. I find the custom one works best.
[Pekka showed the inverse with the three curves]
  • Pekka Riikonen: Luke's curve is fitted to the Munsell data-set. Maybe that just doesn't have enough samples in that region. Without the eccentricity we get very small compression of blues, and yellow goes too far out on inverse.
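[A minimal sketch of the chroma compression flow Pekka describes, assuming a per-hue cusp lookup; the soft-knee curve is a placeholder rather than the prototype's actual curve, and the final saturation step is omitted:]

```python
def compress_chroma(J_in, J_ts, M_in, cusp_M, limit=1.2, strength=0.8):
    """J_in: lightness before the tone scale; J_ts: after it.
    cusp_M: M of the Rec.709 gamut cusp at this hue (assumed lookup)."""
    # 1. Scale M by the same ratio the tone scale applied to J.
    M = M_in * (J_ts / max(J_in, 1e-6))
    # 2. Normalize so the cusp sits at 1.0 (this makes the step hue dependent).
    m = M / cusp_M
    # 3. Placeholder compression curve rolling off toward the limit.
    m = m / (1.0 + strength * m / limit)
    # 4. Reverse the normalization.
    return m * cusp_M
```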
  • Luke Hellwig: You're concerned about where things fall on a chromaticity diagram. But that's not perceptually uniform. That's why the lines aren't straight.
  • Pekka Riikonen: It's not the eccentricity the model does. It's only applied to the cusp.
  • Luke Hellwig: So it's only an issue if you start from Rec.709 and only use the inverse model.
  • Alex Fry: You would never do that. You only use the inverse so things end up in the same place when you go back forward.
  • Kevin Wheatley: If the inverse produces negatives because values fall outside the working space, then any tool applied before the forward transform that doesn't preserve negatives loses that detail and it clips. Or you get yellows that the display device can't produce through the transform.
  • Pekka Riikonen: There are now parameters for the chroma compression - a limit, compression strength, and 'expansion strength' (how quickly the compression increases). It affects the path to white, for example. Currently it uses Rec.709, but it doesn't have to.
  • Alex Fry: Have you tried having it follow the limiting gamut?
  • Pekka Riikonen: That's what I tried initially. That changed the 'interior compression' for different gamuts. It made the compression gamut specific.
  • Kevin Wheatley: It's effectively part of the look of our rendering, and shouldn't vary with target. Rec.709 may not be ideal. It could be P3. But there's no reason it should be a display gamut.
  • Pekka Riikonen: Initially the SDR HDR match was bad, but then I made the expansion rate depend on peak luminance. Pure red still stands out a bit much. Maybe we should choose a space to normalize to which makes better HDR colors.
[Pekka showed the images from his post which compare v31 and the new v32]
  • Alex Fry: The fire image made me worried we had introduced some skews.
  • Pekka Riikonen: Nothing is clipped, so the curves in CIExy are the hue lines in the model. We may just be seeing some orange that was always there, but was previously desaturated.
  • Lars Borg: Can I suggest another test image? With Sony's HDR demo you need to compress the clouds round Mount Fuji without making them look like air pollution. If you increase saturation you make ugly clouds.
  • Alex Forsythe: Did you look at large smooth sweeps of color? I was concerned that the Alexa35 diving image with the red light had Mach bands. It may be internet compression.
  • Pekka Riikonen: Without cusp smoothing there is a noticeable band. With it I think it's better.
  • Alex Fry: We've used two different cusp smoothing approaches. One cuts into the gamut at the cusp.
  • Pekka Riikonen: Matthias's original did that, so mine compensates by scaling it out, and going up. It does introduce more clipping, but the angle is so shallow it shouldn't be visible.
  • Alex Fry: Maybe the sharper cusp is what causes the band in the image.
  • Pekka Riikonen: With this image the red light comes out bright pure red, which doesn't match SDR.
  • Alex Fry: It's tricky. We don't want to limit HDR with the SDR desaturation. It's easier to desaturate HDR highlights if you want that. Harder to add saturation if the rendering desaturates.
  • Lars Borg: That's a preference question not technical.
[Pekka showed the ARRI Reveal rendering for comparison]
  • Pekka Riikonen: There were images where I couldn't push the gamut compression as far as I would have liked without getting NaNs. I mentioned that on the GitHub issue.
  • Pekka Riikonen: Rec.709 blue still inverts outside AP1. Could we target AP0?
  • Alex Fry: Blue is tricky, because cameras do produce unreal values out there.
  • Pekka Riikonen: What gamut should we use for the compression?
  • Alex Fry: Even if it is really Rec.709 we should give it a different name.
  • Kevin Wheatley: I suggest changing the numbers a bit so they are definitely not Rec.709, so people don't think it is 'just Rec.709' and reuse some existing code, giving the wrong result.
  • Pekka Riikonen: Maybe we come up with a space that helps the blue.
  • Kevin Wheatley: In that GitHub issue I wondered if it could be the quadratic having unreal roots. But Nick said it isn't really a quadratic curve.
  • Nick Shaw: It isn't. It is an expression (which isn't quadratic) for the slope at a given J intersect. If you then add the constraint that the line has to pass through the JM coordinates of the pixel in question, that can be rearranged into a quadratic in J. You solve that to find what the J intersect needs to be so the compression vector passes through the required JM. There isn't guaranteed to be a solution for every JM value, and one of the quadratic roots gives the result if the pixel is above the cusp [focus J actually], and the other if it's below it.
  • Kevin Wheatley: I suggested a different quadratic solution formula, which might have different numerical constraints. So Pekka will open a PR for v32 and Alex will bake LUTs for people to test in SDR and HDR. I see Lars noted the Sony demo was graded for HDR.
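[One commonly used alternate form, of the kind Kevin suggests, avoids the catastrophic cancellation of the textbook quadratic formula. A hedged sketch, assuming real coefficients with a != 0:]

```python
import math

def stable_quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 without subtracting nearly equal
    numbers: q = -(b + sign(b) * sqrt(disc)) / 2, then x1 = q / a and
    x2 = c / q. Returns None when there is no real root (no valid
    J intersect for that JM pair)."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    return q / a, c / q  # pick the root per side of the cusp (focus J)
```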
  • Lars Borg: It was, but it would be useful to see how our algorithm renders that for an SDR display. You would need to apply an inverse HDR ODT.
  • Kevin Wheatley: It's a different test, and a bonus feature if it works.

Meeting #92, March 8th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Chris Clark
Michael De Caria
Francesco Giardiello
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We've had comments on the complexity of our model from a couple of sources. And Alex had some questions for Luke.
  • Nick Shaw: So far the Baselight implementation in the candidate repo has used baked LUTs. Daniele told me Baselight can implement a DRT as GLSL in an flspace file, so I've been porting my DCTL. I have the forward transform working, and Daniele commented he was a little wary about performance. My M1 Ultra Mac Studio gets 1080p25 playback, but I'd like feedback on performance on other Baselight systems. The DCTL has a constant array for the pre-calculated cusp table. GLSL can't initialize a constant array, so the code includes 360 lines setting each array element. That is in the body that runs per pixel. If Pekka fits a curve that will be better, but for now I suspect that array setup is the big performance hit. I will work on the inverse part when I have time.
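[A sketch of how those 360 assignment lines could be generated offline, since GLSL in an flspace shader cannot brace-initialize a constant array; the table file and names here are hypothetical:]

```python
import numpy as np

# Hypothetical precomputed cusp table: one (J, M) pair per degree of hue.
cusp_table = np.load("cusp_table_360x2.npy")  # shape (360, 2); placeholder file

def emit_glsl_table(table, name="gamutCuspTable"):
    """Emit one assignment per element, the workaround Nick describes."""
    lines = [f"vec2 {name}[360];"]
    for i, (J, M) in enumerate(table):
        lines.append(f"{name}[{i}] = vec2({J:.6f}, {M:.6f});")
    return "\n".join(lines)

print(emit_glsl_table(cusp_table))
```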
  • Kevin Wheatley: Those 360 lines aren't ideal! You would normally have a texture.
  • Nick Shaw: Don't know if that's possible. I'll ask Daniele.
  • Kevin Wheatley: It may mean we have to reconsider some things.
  • Alex Fry: Last week we looked at the 6/7 axis variable compression to help inversion around yellow. The guys at ILM felt maybe it's too complex, with patches on patches. I tend to agree. I'm looking at deriving those numbers procedurally, using e.g. AP0 as the outer boundary for the compression, or something similar. The chroma compression makes that more complex, but I hadn't looked deeply into that before. I played with bypassing the per hue part of that. The hue dependent version does add a 'wiggle' around yellow. We effectively have two gamut compressors. We need to constrain the M values that are too large once J is tone mapped. I wondered if M is the right component to use.
  • Luke Hellwig: You could perhaps scale M down by the ratio you have scaled J by. That would preserve saturation rather than colorfulness. Saturation is closely tied to chromaticity, so moving from a larger to a smaller gamut you'll still need to gamut map at the end.
  • Alex Fry: I'm trying to make the values easier to gamut map.
[Alex showed the 3D plot of how the M values stay large, as J is tone mapped, then chroma compression, then gamut compression]
  • Nick Shaw: The chroma compression includes the path to white, which we will still need if we use Luke's ratio based M reduction.
  • Pekka Riikonen: I have started looking at alternate chroma compression. I spoke to Luke about what he suggested. But we still need path to white and black on top of the 'normalization step'. What we do currently increases shadow colorfulness, which makes nicer looking images. I haven't had time to tinker.
  • Kevin Wheatley: How much of this comes from the s-curve? What would happen with just linear scaling?
  • Pekka Riikonen: Great question. If you plot colorfulness after the lightness tone map in a perceptual space like Hellwig, plotting the M channel over the lightness axis, you get an s-curve. Don't you need an s-curve for nice images?
  • Kevin Wheatley: We need a simpler, more controllable operation, so we don't need layers of adjustments. If we list all the things we get, we can look for one or maybe two steps that achieve them, not three or four.
  • Alex Fry: Is M the appropriate component?
  • Nick Shaw: Isn't chroma relative to J? So converting back after J tone mapping you would use a lower J.
  • Luke Hellwig: No, that wouldn't have an effect. Chroma is M divided by adapting luminance, so it's the same for our purposes.
  • Pekka Riikonen: I think that in the model the correlates are disconnected, but we are connecting them so when we go back to RGB. Chroma compression effectively connects compressed lightness and colorfulness, because the model doesn't do it for us. It would be nice if the model changed the colorfulness when we changed lightness, to maintain color appearance. We still need path to white.
  • Alex Fry: What about the s component?
  • Luke Hellwig: There is redundancy, so in the inverse model you can start with JMh, Qsh, Jsh, etc. Any three work. My Matlab code makes assumptions about which to use if two are given. Here you need to be explicit, because you are changing J, not Q. Maybe inverse with Jsh, so the change in lightness is reflected in saturation. My saturation metric is M over Q, but that's equivalent to C over J. M over J would represent the same idea.
  • Nick Shaw: So reducing M by the same ratio as J would have the same effect as using an M over J correlate?
  • Luke Hellwig: Yes. That would hold saturation constant.
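[In code, Luke's suggestion amounts to a one-line scaling; a sketch, with names invented here. As he notes above, this preserves saturation rather than colorfulness, so gamut mapping is still needed afterwards:]

```python
def scale_M_with_J(M, J_in, J_out):
    """Scale M by the same ratio the tone scale applied to J, holding
    saturation (~ M/J for this purpose) constant rather than colorfulness."""
    return M * (J_out / max(J_in, 1e-6))
```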
  • Alex Fry: I can hopefully bolt that together. Ideally I'd like a single gamut mapper, not two.
  • Nick Shaw: Could path to black and white be done as part of gamut compression? I'm unsure if path to black and white are part of picture rendering, or can be part of mapping to the display. Display gamuts taper to white and black.
  • Alex Fry: Early on we had no highlight desat, and we had yellow in highlights staying too long. Matthias's highlight desat only brought down M at the top.
  • Pekka Riikonen: That approach reduced pure colors too much to get nice highlights. So you couldn't reach colors like yellow. That's why I looked at what I did. Just scaling values as Luke suggests leaves highlights still too saturated.
[Pekka showed how his chroma compression smooths the values in the gamut]
  • Pekka Riikonen: The current one uses the derivative of the tone curve to normalize and do path to white together. I wanted to go back to something I abandoned months ago, which may be simpler. Without path to white skin tones look 'beefy'.
  • Alex Fry: I don't really understand what the hue dependent curve is.
  • Pekka Riikonen: I came up with a curve that compresses yellow less. It turns out to be almost identical to the curve Luke uses for HK. It may be coincidence. I just did it by looking at images.
  • Nick Shaw: Could it be in an LMT to remove subjective choices from the rendering?
  • Pekka Riikonen: If yellows get compressed too much an LMT can't make it hit yellow. How did you come up with the HK curve, Luke?
  • Luke Hellwig: It was fit to experimental data. But don't read too much into it. I'm working on a new model that doesn't have this shape of hue dependency. It may relate to yellow being high luminance.
  • Alex Fry: We would prefer to be objective, rather than subjectively nice pictures.
  • Pekka Riikonen: The goal is to reach some colors, not to look good.
  • Alex Fry: I would like to make the values that vary driven by something, like another gamut. That may help the inverse.
  • Pekka Riikonen: Normalizing M may allow us to take the gamut boundary into account in the chroma compression, allowing chroma compression and gamut mapper as a single curve.
  • Alex Fry: We want to avoid the RRT + ODT situation with two s-curves interacting. All these variations are similar in look for normal colors. It's what happens to bright and/or colorful values that varies. A single stage would be desirable.
  • Kevin Wheatley: Any other concerns?
  • Alex Fry: I had found some apparent discontinuities when I applied just the gamut compressor, with no tone curve or chroma compression.
  • Nick Shaw: Root finding in the quadratic solve may only work properly if the J values are in the expected range. Un-tonemapped J may give unexpected results.
  • Kevin Wheatley: Alternate root finding methods may be more robust.
  • Alex Fry: I did experiment with driving the compression from the cusp of another gamut. But something isn't working yet.

Meeting #91, March 1st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw

Lars Borg
Chris Brejon
Daniel Brylka
Michael De Caria
Alex Forsythe
Luke Hellwig
Jeffrey D Mathias
Carol Payne
Joshua Pines
Pekka Riikonen
Troy Sobotka
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: Last week we discussed creative white points. Recently Daniele made a post which clarifies some things, but doesn't answer everything I was thinking. Nick has continued working on his DCTL.
  • Alex Fry: I've been tinkering with compression parameters, and working on adding custom encoding and limiting primaries.
  • Nick Shaw: I've continued finessing the DCTL implementation, confirming it works under CUDA, OpenCL and Metal. I've moved the shared functions to a header file to reduce repetition of code. I've added an "invert" check-box to the Rec.709 one only. The DCTLs are fixed versions for various display targets to let people test without worrying about LUT precision. I'm not getting artifacts from NaNs any more, except in Pekka's extreme dominant wavelength test image. They shouldn't come up with people testing grading real images. I can't confirm the inverse perfectly matches the Blink, due to Blink crashing issues when duplicating a node. But round-tripping a display referred image gives a visual match, although the vectorscope moves a little in yellow and the cyan/blue region. I've made some efficiency improvements in the code, removing redundant XYZ to RGB conversions in the tone curve.
  • Kevin Wheatley: I see you questioned some parameter names. You renamed HALF_MINIMUM to float_epsilon, and noted that several variables have a value of 100 and may or may not mean the same thing.
  • Nick Shaw: For optimization much later on we can probably remove a few divides and multiplies by 100.
  • Kevin Wheatley: If anyone sees anything else like that please add it to the GitHub issue. If you try the Blink on a machine without a suitable GPU it locks up. We may need other implementations for testing.
  • Nick Shaw: Should I look into a Matchbox implementation for Baselight? One other thing. My cusp lookup is slightly different to the Blink. It uses a pre-calculated table from my Python which is at equal one degree steps in Hellwig h. The Blink uses uneven intervals of h, which means there is an iteration needed in the lookup, where the DCTL is much simpler. I did my calculations offline leveraging NumPy's vectorization.
  • Pekka Riikonen: I still plan to approximate the cusp.
  • Nick Shaw: And do you think you can generalize that, so we can find the fit for preset gamuts, but document how to replicate that for arbitrary gamuts?
  • Pekka Riikonen: Maybe we provide a Python script.
  • Alex Fry: What does the approximation gain over brute force?
  • Nick Shaw: It doesn't need to be precise, as the gamut map is a soft clip. Invertibility may be better. And the same lookup is used forward and back, as h is unchanged, so there is no inversion mismatch caused by using a lookup.
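[A sketch of the equal-step lookup Nick describes: because the table is sampled at one-degree intervals of Hellwig h, the index is direct and no iterative search is needed; filling the table happens offline. Names here are invented for illustration:]

```python
import numpy as np

def cusp_from_hue(h, table):
    """Look up the gamut cusp (J, M) at hue h in degrees.
    `table` has shape (360, 2), sampled at equal 1-degree steps of h,
    so linear interpolation between neighboring entries suffices."""
    h = h % 360.0
    i0 = int(np.floor(h))
    i1 = (i0 + 1) % 360
    t = h - i0
    return (1.0 - t) * table[i0] + t * table[i1]
```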
  • Kevin Wheatley: The other discussion last week was about white point. Are we happy with the LMS matrix white point?
[Alex showed his six-axis (now 7) hue-varying compression parameters]
  • Alex Fry: I'm loosening compression along the red to yellow boundary so you can fill a Rec.709 cube and invert to values in AP1 along this edge. The AP1 and Rec.709 blues are very close, so if we back off compression of blue, we won't compress a lot of the blues that do occur out there to within Rec.709. The round-trip cube looks much better. But you still need negative AP1 (but positive AP0) values to hit full Rec.709 blue. Pekka suggested we might combine chroma compression and gamut compression [did he suggest that, or just suggest a similar hue curve approach?] but that's not happening here.
  • Pekka Riikonen: It's important to look at highlights. Also fire will change because there will be more clipping in orange and yellow, so more skew. What is the input to the DRT? AP0 or AP1?
  • Kevin Wheatley: Officially AP0, but that's not what people are grading in. What we're interested in is an inverse that produces reasonable values.
  • Alex Fry: And forward too. People want to hit yellow in grading.
  • Pekka Riikonen: Is the cube using linear or 2.4 gamma?
  • Alex Fry: Linear.
  • Kevin Wheatley: Adding the 7th axis worries me a bit because it may be over-parameterizing, which could introduce unpredictable behavior.
  • Nick Shaw: It's pulled in flat along the boundary, but there are "waves" just inside it.
  • Kevin Wheatley: CIE 1931 xy is not the right space to judge it in, as it isn't uniform.
  • Pekka Riikonen: We should also look in 3D, because we compress shadows more to avoid clipping in shadows, which pushes those values out. Yellow and cyan always clip first in the shadows. We could handle this differently, e.g. with the path to black. But that's not hue dependent. But it could be, although that adds more complexity.
  • Alex Fry: Is it a reasonable goal that we must hit full yellow with positive AP0?
  • Pekka Riikonen: Maybe not for color grading.
  • Nick Shaw: For a graphic, if you do an inverse, as long as the system preserves negatives, it doesn't matter what the values are as long as they round trip. But in grading if you need to grade and hit a pure primary the grading tools can't push into negatives. Unless we change the grading primaries.
  • Pekka Riikonen: If we move the blue primary out, we could enclose the problem blues without negative values.
  • Nick Shaw: Then the blue would have a negative chromaticity y value. I seem to remember in the design of AP1 there was a desire not to have negative primary chromaticities.
  • Alex Forsythe: It wasn't a hard requirement, but it was preferred because if you have a negative primary, when you add blue in grading luminance goes down.
  • Nick Shaw: That may become less important with color space aware grading tools. But not everyone uses those.
  • Kevin Wheatley: I seem to remember negative values are problematic in ICC. For the current rendering you need a grading space larger than AP1 to target the whole of Rec.709.
  • Pekka Riikonen: Those blues are not uncommon in real camera imagery. E.g. blue bar.
  • Nick Shaw: ACES now permits grading spaces other than ACEScct.
  • Kevin Wheatley: Our job is not to rearchitect the whole system. Are we looking at moving the blue to fix an artifact we can fix another way?
  • Nick Shaw: If we want to elegantly handle images like blue bar with values out there, then that's where Rec.709 will invert to.
  • Kevin Wheatley: Ignoring cameras, if we want to be able to hit all of Rec.709 it doesn't depend on the source, but it does depend on the working space.
  • Alex Fry: An ideal IDT would put all values in AP0. Not necessarily AP1. I don't think we should have to go right out there just to hit Rec.709 yellow.
  • Pekka Riikonen: I'm ok with the modified compression in yellow. I think we need to look at red, and test the forward direction more. The red is not very far out compared to green.
  • Nick Shaw: We tweaked the input matrix for values we saw in real images, so they didn't collapse in blue. And the green red discrepancy may not be as much as you think, because CIExy is not uniform. That further out green may be the same perceptual distance as the red.
  • Kevin Wheatley: We really need to look at output images with real pictures. Last week we discussed white points. Daniele posted on ACES Central and subdivided the white points. I wasn't thinking about the encoding space. I was thinking about what Daniele calls the mastering space and its white point. We have ACES D60 incoming, a boundary of e.g. D65, and encoding primaries, say P3 D65. You then have the fourth consideration, which is your creative white. Daniele only has three. What do we do for "D60 sim" or "pick your own white"?
  • Nick Shaw: Daniele's post matches my thinking, which is that a D60 sim is targeting an actual display with D60 white (even if it's only virtual) and simulating that on a different display. So there is a cube the virtual display can fill, and when you show that on a different display, it leaves "holes" and may clip some values. With clipping all you could do is gamut map, to minimize the visibility of clipping, but I don't think you should fill the holes, as the colorist never saw those values on their D60 display.
  • Kevin Wheatley: I agree on filling holes. But do you clip or gamut map? Gamut mapping would have to target a weird hybrid gamut.
  • Alex Fry: A boolean intersection of the virtual and actual target.
  • Nick Shaw: Then the nice gamut approximation doesn't work. It feels easier to do two steps with our simple approximation than try to target a weird hybrid shape.
  • Alex Fry: Or we clip!
  • Kevin Wheatley: And if we look at images that may work.
  • Alex Forsythe: What is the specific use case? The D60 sim was created for somebody who wanted to wheel their Rec.709 display into a projection theatre and see a match. We took out the chromatic adaptation, and it matched.
  • Alex Fry: I've seen both the Rec.709 D60 sim and P3 DCI (which is effectively D60 sim) used quite a bit in the wild.
  • Kevin Wheatley: I suspect creative white is ~50 / 50 D60 / D65. So do they want equal RGB or not for their Rec.709 master?
  • Nick Shaw: And a lot may be based on misunderstanding of what should I do and why? Maybe picking one for the wrong reasons.
  • Alex Forsythe: I agree. The D60 sim terminology is confusing for people. People flip through transforms until they see one they like. We need to identify use cases, and specify how to use the tools to achieve the desired result.
  • Nick Shaw: Making D60 sim a special unique case may be confusing. Maybe better if you just call it "creative white" and say it's part of your creative choices.
  • Alex Fry: Like Baselight…
  • Alex Forsythe: I need to re-read Daniele's post, but I think creative decisions should be made in the grading space, and the output should be more technical.
  • Alex Fry: You can't change the white in an LMT, because even if it's different at lower values, as you go up it converges to the creative white point of the transform. In a D60 image, the sky would blow out to D65.
  • Pekka Riikonen: Should we have a tint control?
  • Kevin Wheatley: I have a chart I made with a range of ISO hue lines, going up in one stop increments. I'm only using an sRGB display transform, but it shows yellows bleeding through, because you can go higher in yellow than other colors. I wanted to see, if you have a white point that is more blue, what happens to values that are more yellow and can go brighter?
  • Alex Fry: It feels wrong to go brighter than creative white if it differs from display white.
  • Kevin Wheatley: Daniele talks about scaling the primaries. But then you do need a clamp first.
  • Alex Fry: We don't clamp at the edge of the limiting volume. It's a soft compression that can go beyond it.
  • Nick Shaw: To hard clip you really need to matrix to the limiting space, clip negatives, and then matrix to the encoding space. That's also a question that needs answering. If a deliverable is Rec.2020 limited to P3 D65 it may well fail QC if there are values outside P3.
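[A sketch of the hard-clip recipe Nick outlines, with identity placeholders standing in for matrices derived from the actual limiting (e.g. P3-D65) and encoding (e.g. Rec.2020) primaries:]

```python
import numpy as np

ENC_TO_LIM = np.eye(3)  # encoding RGB -> limiting RGB (placeholder matrix)
LIM_TO_ENC = np.eye(3)  # limiting RGB -> encoding RGB (placeholder matrix)

def clip_to_limiting_gamut(rgb_enc):
    """Matrix into the limiting space, clip anything outside [0, 1]
    there (negatives in particular), then matrix back, so e.g. a
    Rec.2020 deliverable limited to P3-D65 stays inside P3."""
    rgb_lim = np.clip(rgb_enc @ ENC_TO_LIM.T, 0.0, 1.0)
    return rgb_lim @ LIM_TO_ENC.T
```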
  • Alex Forsythe: I've never been a fan of the DCDM ODT. That feels like an encoding process that should happen afterwards.
  • Nick Shaw: Although the headroom to 52.37 nits in DCDM means there is no need for the scaling other "sim" ODTs have, because you can go above the 1.0 at 48 nits in one channel.
  • Kevin Wheatley: Eventually we need a mechanism to fit any creative white into any destination. Currently we have scaling, and some roll-offs, etc.
  • Nick Shaw: The scaling is only about fitting peak creative white. At the sides, if it clips it clips.
  • Alex Fry: I've been experimenting with allowing custom limiting and encoding gamuts, but it doesn't work yet.
  • Kevin Wheatley: Our current encoding space is effectively Daniele's mastering space. If we had a subsequent encoding step we could evaluate the impact of encoding into a different space.
  • Alex Fry: The JMh to XYZ makes the XYZ effectively our mastering space.
  • Nick Shaw: If we scale everything down to fit into the encoding gamut, the whole cube gets smaller. So maybe almost nothing pokes out, and clipping isn't an issue.
  • Kevin Wheatley: The tone scale goes through 1.0, so we potentially have higher values.
  • Nick Shaw: As Daniele said, if your colorist was grading on a real D60 display, clipped at 100%, and then you encode for a different one, you should clamp, so you don't put anything into the encoding that the colorist didn't see. I see sims as a special case, and clamping is ok.
  • Kevin Wheatley: Lose the word "sim" and say you have a D50 master, you build a D50 mastering space, render to that, clamp to it, then encode to a different space, scaling as necessary. The scaling is happening after the gamut compression. Should we be calculating what the scaling will be before the gamut mapping?
  • Nick Shaw: I think it's compressed to your D50 or whatever mastering display, and then you scale it to fit in your D65 display, even if some little bits poke out. I don't see inversion as relevant here. Is there a use case where you need to put a D65 white graphic with peak white into a master with a D60 creative white?
  • Kevin Wheatley: End titles and logos on a D50 mastered film?
  • Alex Fry: I think they just can't be at peak white.
  • Kevin Wheatley: We need to document how to bring in logos in that circumstance.
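[For the scale-to-fit step discussed above, a hedged sketch: express the creative (e.g. D50 or D60) peak white as linear RGB in the destination space and scale so its largest channel lands exactly at 1.0, applied after clamping to the mastering gamut. The white values below are illustrative placeholders, not computed ones:]

```python
import numpy as np

# Creative white expressed as linear RGB in the destination space
# (via chromatic adaptation + primaries matrices); placeholder numbers.
creative_white_rgb = np.array([1.02, 1.00, 0.94])

def scale_to_destination(rgb, white_rgb=creative_white_rgb):
    """Scale so the creative white's largest channel hits exactly 1.0
    in the destination encoding; applied after the clamp."""
    return rgb / white_rgb.max()
```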

Meeting #90, February 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer

Daniel Brylka
Michael De Caria
Francesco Luigi Giardiello
Luke Hellwig
Christopher Jerome
Zach Lewis
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen
Matthias Scharfenberg
J Schulte

Meeting Notes

  • Kevin Wheatley: Nick is absent today. He's been working on his DCTL implementation and found some bugs in his porting of the Blink but also some other errors, particularly with the white point used in the calculation of the XYZ to LMS matrix.
  • Alex Fry: I was wondering why the discount illuminant option didn't work as expected. Nick and Pekka had an email exchange about the weird values used as the custom primaries' white point. {4200, -1050} makes no sense. CAM16 uses {0.333, 0.333}. I think it was a slip of the mouse. It has no effect when discount illuminant is enabled, as we now have it. Using D65 white, toggling discount illuminant behaves as I expect.
  • Luke Hellwig: In CAM16, which my model is based on, the reference illuminant is the equal energy illuminant – X=Y=Z, or {1/3, 1/3} – which is yellower than D65. Using that would make discount illuminant have no effect. I recommend having discount illuminant on.
  • Alex Fry: I was assuming we could use this for creative white points. But it didn't work as I expected. Changing the limiting primaries between P3-D60, P3-D65 and P3-DCI with a P3-D65 encoded output makes a CMS pattern visually converge on those white points on a chromaticity plot. But we still need a scale factor for values >1 produced. That's what the old output transforms did. They did it after rendering, and because they just clipped at the gamut boundary it didn't make a difference. Now we're explicitly compressing, so ideally you would make a new target that is the intersection of the two gamut hulls.
  • Kevin Wheatley: Ideally we compute where it should be before gamut mapping. Then you get the maximum container to fit it into. If you compress then shift, you need to re-constrain it as values may shift out. But how do people who use it want it to function? Probably nobody will notice a little extra on one side due to the shift making more space. But at the top, part of the gamut is brighter than where the peak creative white gets shifted to, due to the scale. Is it something we have to solve? E.g. if we have a cube that has a D60 white that we want to show on a D65 display, we have to shift it and scale it. Then the display peak white is brighter than the shifted and scaled D60 peak white, the creative neutral. For invertibility we want to fill the gamut. But should we restrict values to never go above creative white peak?
  • Joshua Pines: In practice that's what we do. We know the creative white from the start, so build everything based on that. Our shows are 50/50 D60/D65 creative white. With a D60 show in SDR we don't let anything go to D65 white for a deliverable. HDR is a different case, but nothing goes up to peak in HDR.
  • Kevin Wheatley: If somebody does an extreme creative white adjustment, do we just say "we don't support that"? We want to be able to tell the gamut mapper what the modified boundary is. But for inversion we want to go all the way up to the top, and people can limit it downstream if they need to. We need to try to break it with test images.
  • Alex Fry: Probably easiest to test with sRGB images with different white points on a P3 screen. Maybe the skew from clipping after shifting is too minor to worry about, given we already soft compress.
  • Kevin Wheatley: Scott, what happened last time round?
  • Scott Dyer: With white points it's always hard to communicate the intent of the transform vs what people expect. We don't have clarity on what should be done. We have either just scale, or scale and roll-off in the current transforms. Scale only where possible, and roll off where the scale would be too large, e.g. in the P3 DCI D65 sim ODT.
  • Joshua Pines: We could just use a D65 white point and tell people to build an LMT if they want something else.
  • Alex Fry: That would be tricky because everything converges to neutral at the top.
  • Kevin Wheatley: The two that we have now are clear. D60 ACES in produces D60 on a display. Or equal RGB in produces D65 on a D65 display. Then you have creative choice of white, and we need to find the point in the transform to apply that. It seems to me it should be after the tone mapping and before the gamut mapper. You've got what your image is supposed to look like and the gamut mapper tries to preserve that as much as possible on the current display.
  • Joshua Pines: We ensure we keep neutrals consistent in all deliverables on the chosen white. People notice if they shift as they go up.
  • Kevin Wheatley: Because the tone scale doesn't clip at 1.0, you could have values that were slightly blue, near neutral and at peak brightness, and after shifting those end up at D65 peak white – brighter than the D60 creative white peak.
  • Alex Fry: I think you need a target gamut that is the intersection of the two gamuts multed together. Maybe we should test hard clipping, and see if it's noticeable.
  • Matthias Scharfenberg: I hope this is easier than we think. We are talking about the meaning of J=100 when M and h are zero. So if we don't discount the illuminant, and use D60, it would adapt the source to D60. Wouldn't it solve itself? If you want {1, 1, 1} to mean something different, you would do that on the conversion from input to JMh.
  • Kevin Wheatley: What if J=100 and M is slightly larger than zero?
  • Alex Fry: I suspect Pekka's chroma compression will ensure anything at 100 or above has an M of zero. Pekka, you were experimenting with a hue dependent curve for gamut compression.
  • Pekka Riikonen: That's used in the chroma compression. I was looking at a similar thing for gamut compression. It needs more testing.
  • Alex Fry: Inverting yellow is important for Rec.709 which is where we will have people with logos that need to end up on screen at source values.
  • Pekka Riikonen: So far when I try that it ruins the forward transform. Maybe I need the chroma compression curve to compress yellows more. I don't have that control yet. I also wanted to note that in the tone scale the lightness is using the limiting white, not the input white.
  • Alex Fry: That seemed the right choice because we have discount illuminant off. Luke, if we turn discount off, and set the reference white to the white of the input, should it invert in and out?
  • Luke Hellwig: If you use discount illuminant you're telling the model "just use the white point I give you". If you turn it off you're saying the white I give you doesn't actually appear white to a human observer. So the model makes its own white by mixing what you give it with the equal energy illuminant. I suggest keeping discount illuminant on, because that mixing doesn't reflect what really happens here.
  • Kevin Wheatley: In theatrical the screen is the only illumination, so the viewer adapts to that, e.g. D60 white. If we then make a video delivery we grade in a room with a D65 bias light, so D60 doesn't look white. So sometimes you may want D60 sim, sometimes not.
  • Joshua Pines: That's never come up, but it's an interesting point.
  • Kevin Wheatley: Pekka, you solved the issue of collapse near black and white by limiting the a value. Was the limit derived or "by eye"?
  • Pekka Riikonen: By eye. It was the value that didn't cause the collapse.
  • Kevin Wheatley: We may need to justify that numerically.

Meeting #89, February 15th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Carlos Aviles
Lars Borg
Daniel Brylka
Luke Hellwig
Christopher Jerome
Zach Lewis
Thomas Mansencal
Jeffrey D Mathias
Carol Payne
Pekka Riikonen

Meeting Notes

  • Alex Fry: Pekka has been working on improvements to the implementation of Nick's non-iterative gamut compressor. That's v30 in the repo. I have a v31 which adds per-hue compression parameters, and Pekka has been working on his own version of that. Nick has been working on a DCTL shader implementation.
  • Pekka Riikonen: v30/31 adds nothing new except the gamut mapper. The look of the rendering changes slightly. Normal colors don't change. Just out of gamut ones. I've changed the cusp-mid blend and focus distance to try and stay close to the previous look. It does darken some colors. I think we need a hue dependent lightness mapping too. I think yellow gets slightly too dark. It still has two values for compression distance, so shadows and highlights have different compression. There is still some clipping in cyan.
  • Nick Shaw: Cyan is the only Macbeth chart color that is outside Rec.709.
  • Pekka Riikonen: There will always be some clipping, as the gamut mapper doesn't map everything into gamut. I've experimented with a similar curve to the one I use for chroma compression to do per hue compression settings, but I don't have that in there yet. The six axis control is good to find settings, but for the final version I think we should fit a curve.
  • Kevin Wheatley: We have lots of different compressions doing different things. Could we unify them into one mechanism?
  • Nick Shaw: Some are target gamut dependent and some independent though.
  • Pekka Riikonen: Simpler would be nicer, but I don't know how.
  • Kevin Wheatley: I worry that multiple compressions might make for odd behavior in grading. We wanted to be able to justify all our parameters, rather than just what we found works.
  • Pekka Riikonen: Compressing the lightness means M values bend out, because with Hellwig M is larger at the higher values we've compressed, so the chroma compression brings that in and makes it smooth. I don't know if letting the gamut mapper affect the interior too would help.
  • Nick Shaw: Since we have generic compression and target dependent, I think two steps is ok, but no more.
  • Alex Fry: Last week we were looking at whether the chroma compression brought the green in too far, so it was already inside the gamut before gamut compression.
  • Pekka Riikonen: We should revisit that with the latest version.
  • Nick Shaw: In the current version I see that the PowerP exponent is 1.0, so the compression curve can be simplified if we stick with that.
  • Pekka Riikonen: I dialed it back to 1.0 in recent versions, but we should leave the control in there for now. You also suggested a couple of meetings ago setting the threshold to zero, which I disagreed with. But I did try it, and changed the exponent, and that reduced some issues with gradients.
  • Nick Shaw: There isn't a problem with starting compression at zero, as long as it comes in gently. Chroma compression already moves values that are in gamut, so they are not fixed.
  • Alex Fry: Looking at the inverse round trip with the latest version it looks cleaner than with the old iterative solve.
  • Pekka Riikonen: I experimented with keeping the inverse in AP1, but for yellow that only happens with zero compression.
  • Nick Shaw: It depends what the inverse is used for. For graphics, even if inverting produces negatives, that's fine, because they don't use the intermediate state. It just needs to go forwards through the transform again and end up where it started. If people are using an inverse to make an LMT LUT, it needs to be in a reasonable range.
  • Kevin Wheatley: If we add too many knobs to twiddle we're over-optimizing. It's like the red modifier. And we're optimizing for Rec.709. What about P3?
  • Nick Shaw: Is it possible to provide an LMT including a matrix for people who need to hit those boundaries, so it creates the necessary negative values that are hard to grade into?
  • Kevin Wheatley: Isn't that the same as adjusting the primaries of the model? Should we do these adjustments before the tone-scale?
  • Nick Shaw: We're in danger of optimizing to the nth degree to hit one of the criteria, which becomes a look which people have to fight if they don't want that look.
  • Pekka Riikonen: M=100 is not that far out for green, but it is for yellow.
  • Kevin Wheatley: So should the scaling parameter be calculated from the gamut? If a fixed 100 is wrong?
  • Nick Shaw: But testing with M=100 is arbitrary. It's a plotting convenience, but for some h and J values M=100 is a real color. For others it's way out there.
  • Pekka Riikonen: For yellow M=100 is a crazy value, and if we try to compress that in gamut we are compressing too much.
  • Kevin Wheatley: Nick, can you update us on what you've been doing?
  • Nick Shaw: I had waited before porting to DCTL because everything kept changing, so I thought I would have to keep starting from scratch. But now everything feels like it's stabilizing, so I've started work on a DCTL implementation of just the Rec.709 transform initially. I've worked from Pekka's v30 (I can add the 6-axis controls if we decide to keep those) and just ported the bits of the Blink that are actually being used in the current transform to DCTL – no ZCAM, no HK mod, etc. I haven't exposed any parameters. They are declared as constants, so to make a different version you duplicate it and change the constants. It's not for interactive experimenting – the Blink is for that. But it's an implementation for colorists to try where any problems they find are part of the transform, not a LUT limitation. It's in my repo, if people want to try it. But it's a work in progress. It's not fully working, and only tested on my M1 Mac [it currently does not work on other systems]. DCTL doesn't have an init() function like Blink. Everything runs per pixel. So I precalculated the 360 cusp values with my Python, and simply declare an array. For a deliverable we would have to clearly document how to populate such an array. But this means there is no iteration in there. There's currently a bug where very bright values end up black. There are no parameters, but there are check-boxes to enable the various stages, and a diagnostic mode to output JMh. It matches a LUT bake closely, but not quite. I don't know if there is a real difference, or if it is just LUT inaccuracy.
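[A sketch of how a table like Nick's 360 cusp values could be precalculated offline: the most saturated color per hue of a display gamut lies on the RGB cube edges between primaries and secondaries, so sample those edges, convert to JMh, and keep the maximum-M sample per one-degree hue bin. The `jmh_from_rgb` conversion is assumed to come from the group's Python model code:]

```python
import numpy as np

def jmh_from_rgb(rgb):
    """Assumed: display-linear RGB (N, 3) -> Hellwig JMh (N, 3)."""
    raise NotImplementedError

def build_cusp_table(samples_per_edge=4096):
    t = np.linspace(0.0, 1.0, samples_per_edge)
    z, o = np.zeros_like(t), np.ones_like(t)
    # Cube edges where one channel is 1, one is 0 and one sweeps 0..1.
    edges = np.concatenate([
        np.stack([o, t, z], -1), np.stack([t, o, z], -1),
        np.stack([z, o, t], -1), np.stack([z, t, o], -1),
        np.stack([t, z, o], -1), np.stack([o, z, t], -1),
    ])
    J, M, h = jmh_from_rgb(edges).T
    table = np.zeros((360, 2))
    for i in range(360):
        sel = (np.floor(h).astype(int) % 360) == i
        if sel.any():
            j = np.argmax(M[sel])
            table[i] = J[sel][j], M[sel][j]
    return table
```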
  • Kevin Wheatley: There are optimizations we could make in the quadratic solutions by using alternate forms. But the lack of iteration is a big win.
  • Nick Shaw: And although the rendering is not identical to the previous, neither is objectively right or wrong. They are just different.
  • Kevin Wheatley: Are the errors near black and white that we saw last week still there?
  • Pekka Riikonen: I don't see them any more.
  • Kevin Wheatley: Is there any other code we could remove?
  • Nick Shaw: I already took out a lot, leaving only what was actually used. I took out the chromatic adaptation, because that was only used in ZCAM. But that feels wrong. Our incoming XYZ is D60 adapted, and our output is D65. But we don't explicitly handle that anywhere. But maybe it's built in to the model.
  • Pekka Riikonen: Is this related to "discount illuminant", which is enabled?
  • Alex Fry: We do have neutral in going to neutral out.
  • Luke Hellwig: My model is based on CAM16, which has the Equal Energy Illuminant as the reference white point. Discount illuminant should be on, so whatever white point you give it, that will come out as achromatic in the model.
  • Thomas Mansencal: There is an XYZ_w input to the model where you specify your illuminant. So that's your input illuminant.
  • Kevin Wheatley: I was expecting two reference whites. If we were making wrong assumptions I would expect we'd see problems.
  • Alex Fry: We have in white and out white, calculated by passing white through the matrix for whatever your input space is.
  • Kevin Wheatley: It is possible to assume you have converted an equal value, so white stays white but the colors are wrong. But how wrong? And would you notice?
  • Luke Hellwig: Equal in to equal out should indicate you're handling the white balance correctly in these models. The model uses a Von Kries adaptation.
  • Alex Fry: Toggling "discount illuminant" doesn't quite do what I imagined, so I need to investigate.
  • Pekka Riikonen: Have we made the final decision on Hellwig?
  • Thomas Mansencal: Luke is here and Safdar isn't!
  • Kevin Wheatley: It's simpler, and the differences don't seem significant. And we have the author. I think we can remove ZCAM, but we'll make a notification on ACES Central.
  • Luke Hellwig: People can contact me if they have questions. Check the chat.

Meeting #88, February 8th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Chris Clark
Michael De Caria
Alex Forsythe
Luke Hellwig
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen
Matthias Scharfenberg
J Schulte
Christian Wieberg-Nielsen

Meeting Notes

  • Kevin Wheatley: We have some continuation of what Nick and Pekka have been doing. But first Luke has a question.
  • Luke Hellwig: I want to port the gamut compressor to Matlab for testing. Who should I ask for any help I need?
  • Nick Shaw: The Python may be an easier reference than the Blink. The Colab or the repo. It's Thomas's original, with additions by Pekka and me. Particularly if you're just looking at the gamut compression, not the rest of the DRT.
  • Thomas Mansencal: I can probably help.
  • Kevin Wheatley: Nick has been trying an experiment.
  • Nick Shaw: Our gamut compression inversion issue is because the compression direction is derived from the source J, and the compression changes J, so we don't know the vector to decompress along. I wondered if I could reverse that, so the vector was not modified by the compression. So the vector can't be dependent on the source pixel. So I wondered if I could make something where instead of working out a focus point from the source values, and then finding the intersection of a line from source to focus with the J-axis, we have a line from a point on the J axis where the angle is affected only by that J value, and which has the behavior we want (flat at 0, 100 and the focus J value, and sloping towards the focus elsewhere), and then we could solve for what J value makes the line pass through our source pixel coordinates. That way any value on the line, i.e. post-compression, has the same vector for decompression. I have the line of the form y = mx + c in this interactive Desmos plot with the equation for m which gives the desired behavior, and when I substitute that into the equation for the line and rearrange, it becomes a quadratic in J (the J-axis intersect). The coefficients of the quadratic are all constants in terms of the source value, cusp, and a control I added for line steepness. So I can substitute those into the quadratic formula and get the J value where the line passes through the source coordinates. My working steps are included in the Desmos plot. This is all a thought experiment based on geometry and mathematical concepts. But we don't want to compromise our forward transform for a clean inverse. Pekka has taken what I did and put it into the Blink so it can be applied to images.
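[Restating Nick's construction in symbols – a hedged outline, not the exact Desmos working:]

```latex
J = m(J_i)\,M + J_i
\quad\xrightarrow{\ \text{through}\ (M_s,\,J_s)\ }\quad
J_s = m(J_i)\,M_s + J_i
\;\Rightarrow\;
A\,J_i^{2} + B\,J_i + C = 0,
\qquad
J_i = \frac{-B \pm \sqrt{B^{2} - 4AC}}{2A}
```

[Here A, B and C are constants in terms of the source (M_s, J_s), the cusp, and the steepness control, and the root is chosen according to which side of the cusp (strictly, the focus J) the pixel lies on.]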
  • Matthias Scharfenberg: When I originally built the first DRT it was just based on mathematical ideas too. The proof is in the pudding.
  • Pekka Riikonen: My first impression was "it looks good, and we can work with it." It looks very similar to the previous version. I have v29 [this should be renamed v30, as a v29 already exists in the repo] with this in it. The difference is not big, and on normal images there's no difference. I did this very quickly, and at the moment there is an artifact near black and white.
  • Alex Fry: So this version now has no iteration?
  • Nick Shaw: Except the initial setup of the array of cusp values for 360 hues.
  • Alex Fry: I think the compression parameters may need to be hue dependent.
  • Pekka Riikonen: The cyan blue magenta gradient we looked at last week would benefit from darkening cyans.
  • Kevin Wheatley: The artifacts could be catastrophic cancellation near zero. Playing with the plot I felt I saw odd behavior near zero, where we needed to work out how we want it to behave.
[I think that was on an earlier plot, without the gain factor included, so the lines were much steeper. In the current version this doesn't happen for reasonable M values]
  • Pekka Riikonen: I'll keep experimenting, and try to have a new version for next meeting.
  • Joshua Pines: It may be worth checking for Mach bands where an out of gamut ramp crosses the conditional expression, because the second derivative may not be continuous.
  • Kevin Wheatley: We may want to look at partial derivatives in each direction.
  • Joshua Pines: The cusp is inherently not continuous in the second derivative as you travel across it.
  • Kevin Wheatley: We're hoping cusp smoothing will help with that, if you accept a little clipping.
  • Nick Shaw: Hopefully if the compression makes the slope shallow by the time clipping happens, it won't be too noticeable.
  • Pekka Riikonen: The clipping doesn't seem visible, and there aren't hue skews.
  • Kevin Wheatley: A progression towards a non-iterative solution seems like a big win. Does it add any new constraints?
  • Nick Shaw: I wondered if near zero behavior could be changed by moving my gamma-based boundary approximation so it tapered to just above or below zero.
  • Joshua Pines: Modifications like that might turn your quadratic into a cubic.
  • Nick Shaw: I did limit myself to not including any tweaks that would end up with a cubic, as I didn't have a simple formula solution for that.
  • Joshua Pines: There is, but it's more complicated.
  • Kevin Wheatley: Alex, you have some stuff to show.
[Alex demonstrated at 36:30 a 3D visualization of the chroma compression and gamut compression of a rotatable hue slice]
  • Alex Fry: It seems like at some points the chroma compression moves M values of 100 to within the gamut, so we can't hit that boundary. It's so far in that the gamut compressor has no effect.
  • Pekka Riikonen: It would be useful to see the actual intersection for the gamut mapper. Because we don't map to the boundary. We normalize to the boundary and then compress above a threshold, with the limit mapped to the boundary.
  • Alex Fry: The chroma compression has per hue controls, but they're not exposed.
  • Pekka Riikonen: They are there in the code, and could be adjusted.
  • Kevin Wheatley: If we tweak it for Rec.709, we could compromise P3.
  • Alex Fry: The chroma compression isn't target gamut dependent, so if we're over-compressing for Rec.709, we're definitely over-compressing for wider gamuts. We can't turn the chroma compression off, because it does useful things. But I think we need to modify it.
  • Pekka Riikonen: I wonder if it's related to inversion, because it's not green, it's the yellows that go outside AP1 when inverted.
  • Alex Fry: I think the yellow goes out because there's not much space to play with between the edge of Rec.709 and the edge of AP1. With green there's loads of space for inverted values to still be in AP1.
  • Kevin Wheatley: So if people want to hit the yellow boundary they will need negative AP1 values.
  • Alex Fry: There are two factors. The chroma compression is too aggressive at those hues, and the gamut compressor parameters may need to vary with hue.
  • Pekka Riikonen: The problem is that reducing the chroma compression will reduce the compression inside the gamut, and make it less smooth. My current compression curve doesn't have the necessary level of control.
  • Alex Fry: If the compression moved up and down with the cusp, it might fit better.
  • Pekka Riikonen: I have other candidates for the chroma compression, but I don't have those working yet.
  • Kevin Wheatley: It shouldn't be perfectly aligned with the cusp, or it would be gamut dependent and also change the look.

Meeting #87, February 1st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Chris Clark
Michael De Caria
Alex Forsythe
Luke Hellwig
Jeffrey D Mathias
Carol Payne
Pekka Riikonen
J Schulte

Meeting Notes

  • Kevin Wheatley: There will be some repetition from last week.
  • Alex Fry: I put what I showed last week into a post. It shows the dip between cyan and magenta at lower exposure and 3D visualizations showing the compression vectors. The boost in J happens as the pre-compressed values pass outside the gamut hull.
[Alex then talked through the content of his post]
  • Alex Fry: In the green region the chroma compression seems to be pulling the M values in so they are already in gamut before the gamut compression.
  • Lars Borg: If the gamut compression preserved luminance, or J, would it go away?
  • Alex Fry: Setting it to compress horizontally reduces the blue dip, but doesn't remove it completely. It looks bad at the top end on images like Red Christmas.
  • Nick Shaw: What is the reason for lifting at the low end? Might it reveal noise?
  • Kevin Wheatley: Matthias already modified the algorithm so it goes flatter at the very bottom. Perhaps we could try keeping it flat below the cusp and gradually bringing it down above it. I looked again at the gamut compression book, and people tried everything, and nothing is perfect. There's always a trade-off.
  • Pekka Riikonen: In v27 I had the min and max values for focus distance as an attempt to make it flatter below the cusp. I removed that because we decided that was the wrong domain to do that in. But I found in testing that for bright colors like yellow, if you bias to darkening it darkens yellows too much. I think the value should be higher for primary colors and lower for secondary colors.
  • Lars Borg: Should the cusp be used at all? It's an attribute of the display gamut not the image. It imposes limitations but should it influence how you map colors? It might influence relationships between colors.
  • Pekka Riikonen: As I posted, the straight hue line in perceptual space is not linear in chromaticity space, and it's causing skew.
  • Alex Fry: The whole model is based on straight lines in JMh not Yxy.
  • Pekka Riikonen: Skewing blue cyan makes it brighter. Maybe we should darken it.
  • Alex Fry: The model should give us constant perceptual brightness if J is constant.
  • Kevin Wheatley: The volume we're looking at is back-calculated into JMh, so we are mapping to the closest point in that volume, keeping h constant, and deciding what else to keep constant or change. If the model is accurate in that area it should visually be closest to our goal. We can ask if the goal is correct.
  • Pekka Riikonen: Looking at the image it doesn't seem correct.
  • Kevin Wheatley: It seems to be doing what we intend it to do. So we have to go back to whether our intent is correct.
  • Lars Borg: Is HK compensation lifting the blues?
  • Alex Fry: We have that in the model but it's off.
  • Luke Hellwig: It seems maybe the cyan looks bad because it's less saturated. The only way to regain saturation while reducing colorfulness is to reduce J. But that's a solution for this image. I don't know what it would do to others.
  • Kevin Wheatley: If you put the cusp-mid blend to 1 and set SSTS mid to zero, it will always slope down. It would be good if we could display that blue and see what hue it really appears. But without a laser display we can't.
  • Lars Borg: It's a problem with ITP and similar methods, thinking blues look cyan. We know the model isn't perfect. The hook is concerning, because it's not in line with the original straight line.
  • Nick Shaw: If the original cuts across hues, compressing part of it and leaving the rest untouched will always bend a straight line.
  • Alex Fry: We can't project along a line in a particular image, because that is image dependent.
  • Lars Borg: Will a straight line with constant hue stay straight?
  • Nick Shaw: We are compressing along straight lines of hue as represented by the Hellwig model, so a straight line in one hue will stay straight, in that space.
  • Kevin Wheatley: It might be nice to look at lines of constant colorfulness within the gamut to see if they look constant.
[at 30:30 Alex showed the effect on lines of constant hue at different J - bending up and down, but staying straight viewed from above]
  • Alex Fry: Looking at the hull shape, cyan is much further in than the blue cusp. So cyan can only be 40% as saturated as blue.
  • Kevin Wheatley: It's odd that blue-yellow is a smaller distance, since I believe we have better discrimination of blue-yellow. Is saturation the right space to think about this in? I suggest the model isn't quite right for those colors.
  • Alex Fry: As the line goes out of the gamut it is more clockwise around, so you need to be more cyan to represent that. Clipping just stops at the boundary. What is the right trade-off?
  • Lars Borg: Even mapping in XYZ you would get a bend towards cyan. Even a straight desaturation in RGB would go cyan.
  • Pekka Riikonen: I have posted on my latest update, v29. It uses a gamut approximation to a triangle with cusp smoothing. There's no iterative intersection finder, just the initial one which generates the look-up for the cusp. We could approximate that later. I took the eccentricity out to simplify the model. That doesn't change the rendering much, except in blue and magenta, which can be compensated for. It now uses dim viewing conditions, as Rec.709 should use. I removed focus distance min and max. I also removed the old tone scales. I urge people to try it. I posted images with and without cusp smoothing. It would be nice to have an inverse without iteration, if we feel that's important.
  • Alex Fry: Does the approximation work for gamuts other than Rec.709?
  • Pekka Riikonen: Yes, and the curve of the bottom part is driven by the model. The smoothing expands the boundary outside the gamut, which I think it should, so as not to eat into it.
  • Nick Shaw: What smoothing method are you using?
  • Pekka Riikonen: It finds the intersection of the line above the cusp and the one below it, and then takes a smooth minimum between them.
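[One common "smooth minimum" construction, shown for reference; the polynomial form and the smoothing width k are assumptions, not necessarily the exact function Pekka uses. Intersecting the lines above and below the cusp gives two candidate boundary values for a given J; taking smin of the two rounds off the corner where they meet.]
    def smin(a, b, k):
        # polynomial smooth minimum: blends towards min(a, b) within a band of width k
        h = max(k - abs(a - b), 0.0) / k
        return min(a, b) - h * h * k * 0.25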
  • Lars Borg: In the plot it looks like there is an increase of luminance separation between the higher values in the plot. This could amplify noise.
  • Nick Shaw: But the J has been compressed by the tone map before this, so perhaps the increased separation would be cancelled out (or at least made less significant) by the prior highlight compression.
  • Kevin Wheatley: Worth testing. So do we need to eliminate the iterative solve for the inverse? We need the iterative inverse because J has changed in a way we can't easily invert analytically. If it changed in a more predictable way it might be easier to invert.
  • Pekka Riikonen: The projection line changes depending on the original values, which you no longer have.
  • Kevin Wheatley: Is the compression monotonic? If some areas squeeze and some stretch it's harder to figure out.
  • Pekka Riikonen: We don't know the vector something was compressed along because we don't have the original point.
  • Nick Shaw: Is the focus point fixed for a given hue, or affected by the pixel's J? [It is affected by J]
  • Kevin Wheatley: It seems maybe we need a simpler way of choosing the focus point, and maybe we could pick one which has an inverse.
  • Lars Borg: The focus point doesn't have to be on the neutral axis.
  • Nick Shaw: It isn't. What is shown is the intersection of a line to the focus point with the neutral axis. But isn't its distance the other side of the axis fixed for a particular h? [it is not in fact. It varies with source J]
  • Alex Fry: If it was constant the lines would always converge towards the same point. But they vary with initial M. [J in fact]
  • Lars Borg: Your plot is starting from the same maximum M for all J values. But won't some of these be unreal?
  • Luke Hellwig: Cameras can generate non physically realizable colors.
  • Lars Borg: Hopefully we've solved that by the time we're in ACES.
  • Kevin Wheatley: Not necessarily. But we may be looking at colors beyond even what we will encounter.
  • Alex Fry: Our test set definitely includes pixels with negative ACES values.
  • Kevin Wheatley: The transform should be predictable, if not correct.
  • Alex Fry: It may be worth experimenting with a fixed focus distance, as that would make the inverse simpler.
  • Nick Shaw: I can look at adding code to my plotter to show where the focus point is. [the Colab and repo are updated to include displaying the focus point]

Meeting #86, January 25th, 2023, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Alex Forsythe
Luke Hellwig
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen
Simon Yahn

Meeting Notes

  • Kevin Wheatley: Nick and Pekka have been working on a tool for interactively examining the gamut boundary and intersection.
  • Pekka Riikonen: Nick created this tool and I added the gamut mapper. It shows a cross section of a hue slice and the projection line of a selected color when gamut mapped. It shows a dot at the intersection of the projection line with the gamut boundary, found using the same iterative method used in the DRT. It also shows the position of the compressed color. There is no smoothing of the cusp, which creates lines in gradients. The plot has an option to use an approximation to the gamut, using a triangle to the cusp. But that cuts a big slice off the gamut, which significantly affects images. Nick then added a gamma to add curvature, and with a gamma of 1.15 it tracks the boundary well. With real images I found a mismatch and had to use a gamma of 1.33 to get a match [this subsequently turned out to be due to a mismatch between the Python and Blink surround settings].
  • Nick Shaw: What is the result in the upper part, where we use a straight line with no gamma?
  • Pekka Riikonen: There is some concavity with some colors, particularly reds. But that may not be a bad thing. For some hues the straight line cuts slightly inside the true boundary, so we need to investigate the effect of that.
  • Nick Shaw: We don't actually clip in JMh to the boundary, so you can still "push through" it.
  • Pekka Riikonen: You need pretty "out there" colors, though. I'm not too worried about it. I've also experimented with a signed distance function (SDF). This gives the nearest point on the boundary to a color. So perpendicular to the boundary. I added that as an approximation option. As a test I use the SDF distance but with the current projection line. The SDF has smoothing inherent in the function, giving a smoothed triangle. I think we need smoothing around the cusp, but one that makes the shape larger rather than eating into the gamut.
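[A sketch of the SDF idea in miniature: signed distance to a triangle (negative inside), with a radius r subtracted so the zero contour moves outward, giving a smoothed triangle that is larger than the original rather than eating into it. Purely illustrative, not Pekka's code.]
    import numpy as np

    def sd_triangle(p, a, b, c):
        # unsigned distance to the nearest edge, with sign from an inside test
        def seg_dist(v, w):
            t = np.clip(np.dot(p - v, w - v) / np.dot(w - v, w - v), 0.0, 1.0)
            return np.linalg.norm(p - (v + t * (w - v)))
        d = min(seg_dist(a, b), seg_dist(b, c), seg_dist(c, a))
        def side(o, u):
            return (u[0] - o[0]) * (p[1] - o[1]) - (u[1] - o[1]) * (p[0] - o[0])
        s1, s2, s3 = side(a, b) >= 0, side(b, c) >= 0, side(c, a) >= 0
        inside = (s1 == s2) and (s2 == s3)
        return -d if inside else d

    def sd_smoothed(p, a, b, c, r=0.05):
        # subtracting r expands the boundary outward and rounds the corners
        return sd_triangle(p, a, b, c) - r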
  • Kevin Wheatley: The tool is also interesting to visualize where the other end of the projection line is, on the other side of the plot. It might also be interesting to see a plot along a different axis such as equal M. The 1.15 gamma value appears to come from the surround coefficients.
  • Nick Shaw: In the code of the J calculation there is an exponent of z * surround.c, which comes to 1.137 with our coefficients for dim. So close to 1.15. And it tracks with the boundary at dark and average too. It's good to have a formula for the exponent, rather than magic numbers.
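[The exponent Nick mentions, reconstructed from the CAM16 lightness equation J = 100 * (A / A_w)^(c * z), where c is the surround factor and z = 1.48 + sqrt(n); the background ratio n = 0.2 is an assumption here, but it reproduces the quoted 1.137.]
    import numpy as np

    n = 0.2                    # assumed background luminance ratio Yb / Yw
    z = 1.48 + np.sqrt(n)      # ~1.927
    for surround, c in {"dark": 0.525, "dim": 0.59, "average": 0.69}.items():
        print(surround, c * z)  # dim -> ~1.137, close to the fitted 1.15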
  • Kevin Wheatley: We need to let people play with the interactive plot.
  • Nick Shaw: It's a shame I can't make an interactive plot work in a Colab. So people need to download it and run it locally. It needs Colour Science for Python installed too.
  • Kevin Wheatley: Near the cusp it's quite flat, but cusp J changes with hue.
  • Nick Shaw: A gradient changing in hue but with constant M could go from above to below the cusp, and therefore maybe change direction sharply.
  • Kevin Wheatley: Alex has done some 3D visualizations.
  • Alex Fry: Anton posted some images showing a dark band in a gradient as you lower exposure. I've made some visualizations [see recording from 36:30] showing the source JMh values, then tone-mapped and finally gamut mapped, and lines joining the two.
  • Pekka Riikonen: Could the band be caused by the final clipping?
  • Alex Fry: I don't think the clipping here is enough to cause a dramatic change.
  • Luke Hellwig: I have an idea that the dark line is where the gamut compression is stopping, and everything beyond that is being desaturated.
  • Alex Fry: They are being brightened here, but I can reduce that by increasing focus distance. That helps a bit. But horizontal projection gives undesirable results.
  • Joshua Pines: Maybe horizontal projection works for dark stuff, but don't use it on bright stuff.
  • Luke Hellwig: The main thing is that part of the gradient is being compressed, and part isn't so no matter what you do it will look weird.
  • Kevin Wheatley: We need a softer inner portion.
  • Nick Shaw: If you set compression threshold to zero it begins softly all the way from achromatic. That might not do nice things to images. I just tried dropping it from 0.75 to 0.5 and the effect is not as dramatic as you might expect. Not even if you drop it to zero.
  • Pekka Riikonen: I would say it has a pretty big impact! I hope once we have a smooth cusp and larger intersection it will reduce these artifacts.
  • Alex Fry: I'm unsure about the smooth cusp, because we need to hit the corners.
  • Pekka Riikonen: If the intersection is larger than the boundary it doesn't eat into it.
  • Nick Shaw: But you still have to clip to the actual gamut for display, so your sharp cusps come back.
  • Alex Fry: And that may introduce hue skews. I found synthetic images designed to show it up had a problem with the sharp cusps, but with real images it wasn't such a problem.
  • Kevin Wheatley: So what do we do next?
  • Alex Fry: I'll try some more visualizations of sweeps with constant M.

Meeting #85, January 18th 2023, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Chris Brejon
Daniel Brylka
Alex Forsythe
Francesco Giardiello
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Carol Payne
James Pickett
Pekka Riikonen
Matthias Scharfenberg

Meeting Notes

  • Kevin Wheatley: Matthias will be joining us later to talk about the gamut compressor. Anton posted about a possible noise issue.
  • Alex Fry: He had a dawn shot from a real production, and was concerned that the blue sky was noisier than ACES 1.2. I made some plots to look at what’s happening as exposure changes. ACES 1.2 is clipping where we compress, which may hide some noise. It also hooks round in the plot where we have a straight line. But in some cases ACES 1.2 looks “nicer” in the abstract.
  • Lars Borg: You’re only looking at chromaticity. What about luminance? Maybe we should compare luminance noise. Tone mapping can often transfer the blue channel into the green, and blue is often noisiest.
  • Kevin Wheatley: v28 “splatters” the noise against the edge, so it spreads out, where 1.2 clips. It does get pushed into cyan.
  • Lars Borg: Can we determine the volume of the blob in a perceptual space, and see how it changes with mapping? If something extends its perceptual extent it will look more noisy.
  • Nick Shaw: Will using the same JMh perceptual space for the analysis that we use for the mapping bias our metric?
  • Luke Hellwig: Looking at the images, the noise looks similar to me. Just the color is different.
  • Pekka Riikonen: To me the clipping of 1.2 makes it accidentally look better because cyan skews blue.
  • Nick Shaw: Clipping makes more values identical, which will reduce noise.
  • Alex Fry: Also the tone curve is different. Is our curve lifting the noise and making it visible?
  • Pekka Riikonen: We should look with the same curve.
  • Kevin Wheatley: Nick has made some plots.
  • Nick Shaw: Earlier I posted a sweep round hue, showing the display gamut boundary and cusp path. I wondered how good an approximation a simple triangle would be. Pekka did some tests and found a triangle cuts off a slice of the gamut, because it cuts across the curve in the lower half. So I tried using a straight line at the top, and a gamma curve on the bottom part to match the shape. When I plot it the gamma curve matches very well (maybe there is a reason for this in the model). The straight line at the top is not too far off except around h=0, where it still encompasses the boundary. An approximation based on that might do ok for most hues. I wrote up a document (which I could post on ACES Central, if people want) to wrap my head around the gamut compressor, and so others could point out my misunderstandings.
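[A sketch of the approximation Nick describes, for one hue slice: a straight line from the cusp to white above the cusp, and a gamma curve from black to the cusp below it. Putting the exponent on the J ratio, and a white J of 100, are assumptions for illustration.]
    def boundary_M(J, J_cusp, M_cusp, gamma=1.15, J_max=100.0):
        if J >= J_cusp:
            # upper hull: straight line from the cusp (J_cusp, M_cusp) to white (J_max, 0)
            return M_cusp * (J_max - J) / (J_max - J_cusp)
        # lower hull: gamma curve from black (0, 0) up to the cusp
        return M_cusp * (J / J_cusp) ** (1.0 / gamma)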
  • Matthias Scharfenberg: Your first part showing the horizontal compression is correct. But the vertical component comes from compressing towards a focus point, which can be moved between the J value for SSTS mid and the J value of the cusp. You get a diagonal line, with compression normalized on its intersection with the boundary. But the focus point is not on the J axis. It’s at a distance on the other side of it, so the slope doesn’t get too steep and lift lightness for dark saturated colors. I divided the focus distance by the J distance from either black or white, depending on whether it’s below or above the cusp. This makes the slope approach zero as you approach the maximum or minimum J value. Taking the diagonal from the focus point to the value, the normalization zero is where that line crosses the J axis, and one is where it crosses the gamut boundary.
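[An attempt to put Matthias’s description into code; the names, the blend, and the normalization by J_max are guesses at a consistent structure, not the actual implementation.]
    def focus_point(J, J_cusp, J_mid, cusp_mid_blend, focus_dist, J_max=100.0):
        # the focus J slides between the SSTS mid J and the cusp J
        J_focus = (1.0 - cusp_mid_blend) * J_mid + cusp_mid_blend * J_cusp
        # dividing the focus distance by the J distance from white (above the
        # cusp) or black (below it) drives the projection slope towards zero
        # as J approaches the maximum or minimum
        if J > J_cusp:
            d = focus_dist / max((J_max - J) / J_max, 1e-6)
        else:
            d = focus_dist / max(J / J_max, 1e-6)
        # the focus point sits on the far side of the J axis, at negative M
        return J_focus, -d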
  • Nick Shaw: So because the J value is changed, you don’t know what the original J was, so can’t work out the compression ratio to invert. That’s why I was wondering if you could split the compression into two components: firstly the simple horizontal one, and secondly a part along a line from zero that kept the ratio of M to the boundary the same. That might make inversion easier. But I don’t know what that would look like visually.
  • Matthias Scharfenberg: Yes, the vertical movement obfuscates the original value. If there could be an alternate approach that mimics the result, but is invertible, that would be very useful. Also, how important is the vertical component with the current DRT?
  • Alex Fry: It does do something which is pleasing.
  • Nick Shaw: Pekka has tweaked the gamut compressor.
  • Pekka Riikonen: It still does broadly the same, but the focus distance is lerped between two values, based on the tone scale lightness. But it is a hack. Last weekend I experimented with the triangle approximation, and it does slice off a piece of the gamut. But we need tools to visualize this.
  • Kevin Wheatley: Nick also plotted the coordinates of the cusp, which has a weird shape. The points do coincide with the RGB and CMY corners.
  • Lars Borg: An experiment would be to take a wide gamut image, compress it to the cusp, and show it on a wide gamut display. If you can see where the Rec.709 corners are then the gamut mapping is wrong, as you’ve imposed the 709 look.
  • Matthias Scharfenberg: There is a cusp smoother in there.
  • Alex Fry: It’s still there but set to zero recently or you can’t reach the corners.
  • Pekka Riikonen: If the approximation was larger than the gamut, it could be smoother, and the angles might be shallow enough that clipping wouldn’t matter as much.
  • Alex Fry: Hitting corners is important, particularly for logos.
  • Pekka Riikonen: I did a Fourier series approximation, which fits well. But with the smoothing, do we do intersection with a triangle, then smooth that, or find the intersection with a smoothed shape?
  • Kevin Wheatley: And how do we measure what’s best.
  • Pekka Riikonen: To me the current version looks very nice. It’s about performance. So a similar look with better performance would be ideal for me.
  • Kevin Wheatley: So we could approximate the bottom part with Nick’s gamma curve. Is a straight line good enough at the top? If so the intersection is simpler in the forward direction. But what about the inverse?
  • Pekka Riikonen: I think we need the iterative solve if we use the current method.
  • Kevin Wheatley: But what about an approximation that preserves the ratio, as Nick suggested?
  • Nick Shaw: A visualizer of what is happening now would be useful to make an approximation that does a similar thing.
  • Pekka Riikonen: I have my Fourier approximation, but I’m not sure if my intersection is correct. A visualizer would help.
  • Alex Fry: Christopher commented in the chat about projecting towards black.
  • Nick Shaw: You could do that by setting SSTS mid to zero (as that value isn’t used anywhere else) and setting the mid-cusp slider all the way to mid.
  • Pekka Riikonen: There is a Shadertoy visualizer for Oklab. Something like that would be great.
  • Alex Fry: I’ll see what I can make in Nuke.
  • Nick Shaw: I’ll look at adding the gamut mapper to my Python.

Meeting #84, January 11th 2023, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Chris B
Lars Borg
Daniel Brylka
Chris Clark
Michael De Caria
William Feightner
John Frith
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen
Simon Yahn

Meeting Notes

  • Kevin Wheatley: Nick has been looking at the shape of the gamut cusp.
  • Nick Shaw: I posted an animation of the gamut boundary shape with hue, and a plot of the path of the cusp in J and M. As a double check I didn’t use the boundary solve from the DRT, but wrote my own using the JMh to XYZ from Thomas’ Python. It’s not a shape that lends itself to a function. There are six cusps, which I assume are red, green, blue, cyan, magenta and yellow. An approximation might have to use some interpolation between those points. Straight lines might be ok for the J path. M has looping lines between them so needs a more complex interpolation. Whatever we do needs to be documentable so somebody could apply the approach to an arbitrary gamut. My plots are of Rec.709. Pekka thought the J path is not used that much in the DRT, and you could pick a line you liked. But that wouldn’t be generalizable to any gamut.
  • Kevin Wheatley: The M and J have similar shapes in some regions. They come from the model, so the curves may be a projection of some aspect of the model.
  • Nick Shaw: I wondered if they were driven by the eccentricity function that goes in and out with hue.
  • Kevin Wheatley: It would be interesting (at least for J) to see the difference between a lerp between the six points and the actual curves. We should see if the shapes are similar for other gamuts. That would help in describing a method for approximating them.
  • Alex Fry: All the cusp does in the code is decide if you’re above or below to control the angle of the gamut compression. The hull from the cusp to the black and white points has a changing shape, which just approximating the cusp doesn’t capture.
  • Nick Shaw: A triangle to the cusp might be good enough. There is a kink in the bottom part sometimes, but that may be an error in my code. (it is)
  • Kevin Wheatley: What are we using these values for? How loose an approximation is good enough?
  • Alex Fry: Is the approximation just needed for performance?
  • Kevin Wheatley: And invertibility.
  • Alex Fry: Early on we used a 2D LUT of max M for J, h. We could look again at that, and play with coarseness.
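[A sketch of the 2D LUT idea Alex mentions: pre-sample the maximum M over a coarse (J, h) grid, then look it up bilinearly with wrap-around in hue. boundary_solve stands in for the iterative solver; the grid sizes are arbitrary.]
    import numpy as np

    def bake_lut(boundary_solve, NJ=32, NH=64, J_max=100.0):
        J = np.linspace(0.0, J_max, NJ)
        h = np.linspace(0.0, 360.0, NH, endpoint=False)
        return np.array([[boundary_solve(j, hh) for hh in h] for j in J])

    def sample_lut(lut, J, h, J_max=100.0):
        NJ, NH = lut.shape
        x = np.clip(J / J_max, 0.0, 1.0) * (NJ - 1)
        y = (h % 360.0) / 360.0 * NH       # hue wraps, so NH (not NH - 1) here
        i0, j0 = int(x), int(y) % NH
        i1, j1 = min(i0 + 1, NJ - 1), (j0 + 1) % NH
        fx, fy = x - int(x), y - int(y)
        a = lut[i0, j0] * (1 - fx) + lut[i1, j0] * fx
        b = lut[i0, j1] * (1 - fx) + lut[i1, j1] * fx
        return a * (1 - fy) + b * fy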
  • Nick Shaw: What about non-uniform LUT interpolation with vertices on the six cusps?
  • Alex Fry: If the LUT can’t be pre-calculated in the code, it can be declared in e.g. DCTL.
  • Kevin Wheatley: Because it doesn’t have to be exact, a mathematically defined shape could have a mathematically defined inverse.
  • Alex Fry: It may be worth looking at a crude mesh of just those six points.
  • Nick Shaw: That’s basically a 2^3 3D LUT with only the corners.
  • Pekka Riikonen: Mapping just to the cusp would change the look. Now we map to a blend between that and mid grey. The distance is based on the original distance of the hull from achromatic at a given J and h, and the vector is towards a blend between the opposite cusp and mid grey.
  • Kevin Wheatley: We would still use the existing mapper, but feed it with an approximated boundary.
  • Pekka Riikonen: We could do that or approximate the whole gamut shape, which is what Björn did, and I used in the Okish DRT. That would be most efficient performance-wise.
  • Luke Hellwig: I think those curves may come from the eccentricity function. Maybe you could bypass that. Within a single hue, the magnitude doesn’t matter.
  • Nick Shaw: Doesn’t the magnitude affect how much compression is needed to bring a color in gamut?
  • Luke Hellwig: Not if you change the scale of both the data and the boundary you need to get within, by not using the eccentricity at any stage.
  • Nick Shaw: I’ll plot the curve again with the eccentricity switched off.
  • Pekka Riikonen: I did a Fourier series fit of the curves. It can be closely approximated with only eight coefficients.
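[A sketch of a truncated Fourier series fit of the cusp path, in the spirit of what Pekka describes, via least squares. The sample data is a synthetic placeholder; order 4 gives nine coefficients (DC plus four cosine/sine pairs), about the count he mentions.]
    import numpy as np

    h = np.linspace(0.0, 360.0, 360, endpoint=False)
    M_cusp = 60 + 20 * np.cos(np.radians(h)) + 8 * np.sin(3 * np.radians(h))  # placeholder

    def design(h_deg, order=4):
        t = np.radians(h_deg)
        cols = [np.ones_like(t)]
        for k in range(1, order + 1):
            cols += [np.cos(k * t), np.sin(k * t)]
        return np.stack(cols, axis=-1)

    coeffs, *_ = np.linalg.lstsq(design(h), M_cusp, rcond=None)
    M_fit = design(h) @ coeffs   # periodic in h, so there is no seam at 0/360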
  • Kevin Wheatley: John Frith emailed me.
  • John Frith: One of our VFX supervisors has been testing and really likes it. He wants to use it on a real project. A real world test, and maybe publicity. We liked the noise improvement in v28, so maybe wait for that.
  • Alex Fry: v28 only exists as Blink, but I can bake LUTs.
  • Kevin Wheatley: It would have to come with a big disclaimer from The Academy. And you couldn’t use the name.
  • Alex Fry: On The Lego Movie we used a very early unreleased ACES version, which is very different from 1.0. So it’s fine as long as nobody expects it to match whatever we finally release.
  • Kevin Wheatley: SDR / matching feedback would be useful.
  • Alex Fry: It’s a LUT like any client LUT. It’s not ACES yet.
  • Pekka Riikonen: There have been some comments about banding in the LUTs, so the shaper may not be ideal.
  • Alex Fry: That was done back with ZCAM and what was needed to cover the inverse of Candidate C. You could probably go tighter.
  • Nick Shaw: If the show has one hero camera, you could use that camera’s native log and gamut as the shaper. Going back to my plot, I commented out the eccentricity, and there are still looping curves.
  • Pekka Riikonen: Gamuts are always strangely shaped in perceptual spaces.
  • Luke Hellwig: The equations are so complex I don’t think you could easily find what drives the shape.
  • Pekka Riikonen: I’ll test with my Fourier approximation.
  • Alex Fry: I’ll try a 2D LUT.

Meeting #83, January 4th 2023, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Daniel Brylka
Chris Clark
Alex Forsythe
Luke Hellwig
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Scott Dyer: Pekka, can you update us on what you posted on ACES Central?
  • Pekka Riikonen: Sure. Over the break I worked on improving the chroma compression, and looked into the gamut mapper. I opened a PR for v28. Previously the chroma compression curve also controlled the path to black. I added a separate controllable path to black, to help desaturate the noise floor, and control colorfulness near black. I moved my per hue chroma compression to a hue dependent curve similar to the one Luke uses for HK mode. The main chroma compression is not hue dependent, but it could be. I added some modes to the gamut compressor so it has a range of compression amount depending on lightness. There are more out of gamut colors for darker colors. This also helps with the inverse. I also added a range control for the focus point, driven by lightness. This was to address the overly light red that happens after the cusp. Zero is the achromatic axis, and one is the opposite cusp. Now instead of one focus point between those, you have a range – nearer achromatic (steeper) for lighter colors and nearer the cusp for darker colors (shallower). I’ve removed the old highlight desaturation, and put the clamp into the code, not a separate node. With gamma’d-up images you can see the shadow noise reduction. We could alter the chroma compression, and if we are willing to add more compression we can lower the max compression value, which will help with the inverse too. And the fewer colors are out of gamut, the easier it is for the gamut mapper. How useful is it to keep tweaking the gamut mapper if the implementation will change?
  • Nick Shaw: I’m thinking for the gamut boundary we could maybe find some fit like you did for the hue dependent compression, which approximates the path of the cusp. Then maybe the top of the section through the hue slice is nearly a straight line at the top, and maybe we find a curve for the bottom part.
  • Pekka Riikonen: Björn Ottosson’s gamut approximation just uses a triangle.
  • Nick Shaw: Straight lines make finding intersections easier. But maybe we can find a simple curve with a mathematical intersection solve.
  • Scott Dyer: Thanks Pekka. The noise in blacks is definitely a big improvement. What do you think we still need to look into before it’s a candidate?
  • Pekka Riikonen: I think it’s in a very good state. If we stick to this gamut mapper we could start user testing now. The HDR transform is great. The compromises are in Rec.709. Do we want at least a prototype of a final gamut mapper before user testing? Do we need a simpler mathematical gamut mapper with an exact inverse? Or is the current iterative one ok? Björn Ottosson has his new Okhsl color space which has a gamut mapper with a mathematical inverse, even though it changes lightness.
  • Alex Fry: I’m less concerned about performance if it looks right.
  • Nick Shaw: The iterative solve makes the inverse less accurate, which may affect the round trip accuracy I showed in my post. A Rec.709 cube passed through v28 inverse and forward pinches in a little at the edges. Because an inverse gamma curve is very steep at zero, a small error is magnified. It probably doesn’t affect the look much for inversion based LMTs, but if people have display referred brand graphics with exact colors they must hit, then it could be a problem.
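[A quick numeric illustration of Nick’s point: the encoding exponent 1/2.4 has unbounded slope at zero, so a small linear-light error near black becomes a much larger step in the encoded signal. The 2.4 and the error size are arbitrary.]
    def encode(v):
        return v ** (1.0 / 2.4)    # inverse of a 2.4 display gamma

    x, err = 1e-4, 1e-5
    print(encode(x + err) - encode(x))  # ~8.7e-4: the step is magnified ~87x
    print(err)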
  • Alex Forsythe: That’s a complaint we get regularly.
  • Alex Fry: Is there a meaningful scene representation of these logos, or should they be overlaid afterwards in display referred?
  • Nick Shaw: I think the cube is pinched in less than ACES 1.0 in the green/yellow edge.
  • Pekka Riikonen: Disabling compress mode makes it better.
  • Alex Fry: But we need that, or something like it.
  • Nick Shaw: I’ll test more to see what affects round trip accuracy. I’ll also try to plot the path of the cusp in J and M against h.
  • Pekka Riikonen: I tested using C instead of M, and it seemed to make no difference.
  • Luke Hellwig: The way you’re using it, I think M is fine.
  • Pekka Riikonen: Are there particular requirements for the inverse? What if it isn’t within AP1?
  • Alex Fry: For some apps that’s not a problem. For some it may be.
  • Nick Shaw: ACEScct covers a bit of negative, so within 0-1 ACEScct would be fine for any app.
  • Alex Fry: We should aim for AP1, at least for Rec.709.
  • Pekka Riikonen: Can the current version be implemented in DCTL, so we can test without LUTs?
  • Nick Shaw: The iterative solve may be problematic, as DCTL runs everything per pixel. There are no globals.
  • Alex Fry: Original ACES couldn’t run without LUTs on hardware at the time. If it’s defined procedurally, a LUT is ok for implementation. GPU real time would be ideal.
  • Pekka Riikonen: The Requirement document says “the algorithm shall be a series of discrete, ordered operations and not include any LUTs”.
  • Scott Dyer: That means no LUTs in the definition, not implementation. It should be able to be run procedurally to calculate exact values. But simple would be better if it’s feasible. We should look at that requirements list again.

Meeting #82, December 21st, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Michael De Caria
Alex Forsythe
Francesco Luigi Giardiello
Luke Hellwig
Christopher Jerome
Andy Maltz
Jeffrey D Mathias
Carol Payne
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Alex Fry: It’s the last meeting of the year so probably a short one. I don’t have anything to show. Nick has something.
  • Nick Shaw: I spent a lot of my time looking at the BlinkScript, because it wouldn’t compile on my M1 Mac. Blink varies between GPUs. I haven’t got the whole DRT working, but I extracted the XYZ <-> JMh functions to make a Nuke script to test the surround effect. Running a display referred image XYZ to JMh and back without the tone curve, I can change the surround parameters to see the effect on a rendered image of modifying for different viewing conditions. I can see the curve if I put a ramp through it. Leaving the JMh to XYZ on dim, I change the input conversion to dark, and it’s gained up. I assume because the image will be perceived as brighter in a dark surround, so J increases. I can counter that with a multiply. The net result turns out just to be a gamma curve (on neutrals) with an exponent which is the ratio of the 2nd of the three values in the surround coefficients (0.59 / 0.525 ≈ 1.12). I think the gamma is applied to J. There is a slight color shift on neutrals, but that may be my error. Saturation also changes due to the difference in the 3rd coefficient. The first coefficient does almost nothing; I think that’s related to adaptation, and I already have D65 image data. The saturation change is in the opposite direction to the one in ACES 1.0.
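[A check of Nick’s arithmetic: the CAM16 surround c values are 0.525 (dark), 0.59 (dim) and 0.69 (average), and since J is a power of the achromatic response with an exponent proportional to c, a dark-in/dim-out round trip on neutrals collapses to a gamma equal to the ratio of the two c values. Applying it directly to J, as below, is an illustrative assumption.]
    c_dark, c_dim = 0.525, 0.59
    gamma = c_dim / c_dark          # ~1.124, matching the measured ~1.12

    def dark_to_dim_J(J):
        # net effect on neutrals: a simple gamma applied to J
        return 100.0 * (J / 100.0) ** gamma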
  • Alex Fry: The three coefficients are the CAM16 induction factors.
  • Luke Hellwig: I did not change CAM16’s surround handling in my model. But I never felt it did what they said it was supposed to. I also have issues with CAM16 adaptation, so I never use a degree of adaptation less than 1.
  • Nick Shaw: Discount illuminant (which is how it’s used in the DRT) makes the degree of adaptation 1.0. Then the color shift I saw goes away.
  • Luke Hellwig: It makes sense to discount illuminant, because the reference white of CAM16 doesn’t match any psychophysical data. The third coefficient is a multiplier on the M channel.
  • Nick Shaw: So if we want to tweak the values to reduce the effect, it’s reasonable to just move the dark and average values closer to the dim values?
  • Luke Hellwig: I would just go with what looks right.
  • Alex Fry: It makes sense to just do a J gamma and M mult after the tone curve, and before we find the boundary. I was worried about going back and forth through the model again after everything else.
  • Nick Shaw: Feels like we can just find a J gamma and M mult value that suits our purposes. Scott, did the dark to dim gamma and desat values in 1.0 come from scientific testing or just what looked right?
  • Scott Dyer: Both. Experiments originally and then tweaked based on golden eye viewers.
  • Nick Shaw: Then there is the question of whether we should have a dark to dim modification at all. Particularly as it’s a small change. Should it just match a colorimetric conversion between Rec.709 and 2.6 gamma P3?
  • Joshua Pines: Put me in the no modification camp, and most of our colorists.
  • Alex Fry: Applying the surround modification in the display referred domain makes more sense than just using different values in scene and display.
  • Nick Shaw: That’s why it’s now just a gamma, not the sigmoid that doing it either side of the tone curve produced.
  • Alex Fry: I’ve been thinking about the inverse issue. The yellow corner is the only real problem. I will experiment with varying compression with hue. Inverting a Rec.709 cube you get a bulge on each edge, but only yellow goes outside AP1.
  • Nick Shaw: It’s interesting it doesn’t invert the blue out to Blue Bar type values, when those values map to the Rec.709 boundary.
  • Alex Fry: I think compress mode has more effect on yellows because that is where the zero crossing kink happens.
  • Nick Shaw: ACEScct can carry negative AP1 values, and grading systems don’t even clip negative ACEScct values. But grading tools don’t behave well down there.
  • Alex Fry: With many display referred images they go fine through an inverse and forward transform. But some, with yellow-green foliage you see a difference if you clamp to AP1 after the inverse. I’ll play with hue dependent compression. Also updating Matthias’s gamut volume slice visualization tool for Hellwig.
  • Scott Dyer: On the items list three things are labelled “decision needed”. LMS primaries, which CAM, and how to handle values outside the model.
  • Alex Fry: For the third one we seem to have settled on compress mode.
  • Pekka Riikonen: Linear extension doesn't seem to work. I'm still tweaking the primaries. v27 changes them to make the colors closer to original Hellwig with compress mode.
  • Alex Fry: I err towards Hellwig, as it’s simpler, and also we have Luke here to help.
  • Pekka Riikonen: My only issue is blues skewing magenta.
  • Alex Fry: I prefer that to ZCAM’s cyan shift.
  • Scott Dyer: So we can wrap early? No meeting next week, and back in the New Year.
  • Pekka Riikonen: What is the deadline for all this?
  • Alex Fry: As soon as possible.
  • Andy Maltz: We know this is hard. You’re moving faster than we did with 1.0. And you’re not building everything from scratch. But there is a window of relevance. What can leadership do to help? At some point we have to make hard decisions about what does and doesn’t get done before we ship. We just want transparency and candor.
  • Alex Forsythe: ASAP deadlines make it hard to plan. We need realistic dates based on realistic expectations of how long things take.

Meeting #81, December 14th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Michael De Caria
Alex Forsythe
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen
Christian Wieberg-Nielsen

Meeting Notes

  • Alex Fry: I've been updating the candidates repo to have inverses for all platforms. It only includes v26 and v27 (Pekka's new version) now. People have seen some banding, probably due to LUT resolutions. The inverse seems to work, but the Rec.709 cube inverts to values outside AP1. Reducing gamut compression helps, but you hit clipping sooner. It's a trade-off. It's worse with smooth-cusps on. ACES 1 just hard clips at the display gamut boundary.
  • Nick Shaw: ACES 1 round-trips perfectly for most of the Rec.709 cube, but the non-invertibility of the RRT sweeteners means some values can never be reached. It's a balancing act. We wanted something that doesn't need the RGC, but that means it needs to handle out of gamut values gracefully. So you do compromise things slightly for normal images, in order to deal with those extreme ones. Leaning on the RGC for the extreme values means there is less compression in the DRT, which means inverting doesn't produce values way out there.
  • Alex Fry: Even AP1 is extreme compared to Rec.709.
  • Nick Shaw: Sure. But if the DRT assumes everything is in AP1 (using the RGC if needed) the DRT compression can be such that display gamuts can invert to within AP1.
  • Pekka Riikonen: I've been experimenting with modifying the primaries so that with compress mode the colors are closer to those from original Hellwig. But some blues still render a bit purple. Live from params are enabled by default in v27 with the values I've found. I've also increased compression, which helps with the early channel clipping in shadows. But it makes inversion worse. It's a temporary solution. I also changed the shadow colorfulness boost to scale with peak luminance. That helped the HDR/SDR match. I've changed the compression curve so it doesn't go negative, which removes the NaNs. There are still some infs. I also played with settings to improve the HDR/SDR match. I felt mid grey at 1000 nits is a bit high, and also so is the shadow flare which increases with peak luminance. The w_g value of 0.14 in Daniele's curve puts mid grey at about 15 nits for 1000 nit peak, like ACES 1.2. Using 0.12 puts it at 14 nits, where Jed's curve had it, which I prefer. Or even lower.
  • Nick Shaw: Jed's values were just his personal preference, and maybe he also referred to what T-CAM does.
  • Pekka Riikonen: Daniele said a rule of thumb is a 10th of a stop increase in grey per stop of peak. That hits 14 at 1000. But I think it's maybe the shadow lift that puts me off.
  • Scott Dyer: This is interesting. I've left the tone curve locked for now. But it can still change. We only locked it to focus on color.
  • Nick Shaw: There is no right answer for mid grey. We kept it at about 15 because nobody complained about that in ACES 1.2. Maybe it is the shadow lift that makes you want to lower mid grey.
  • Scott Dyer: We have no control for shadows. It hinges and follows mid grey. The t_1 value controls it a bit.
  • Pekka Riikonen: We can ask about this in final user testing.
  • Scott Dyer: What are the biggest things to look at over the next few weeks?
  • Alex Fry: Inverting is the biggest for me. Inverting to something practical. Less gamut compression, or maybe an inverse that expands less, accepting that a round-trip will squeeze in a bit.
  • Pekka Riikonen: The issues on the repo are a good list for me. A big one is the gamut mapper. The current one is not practical in a final deliverable. If we change the mapper it will change the rendering. It would be good to see a gamut approximation implementation. Also the Hellwig vs ZCAM decision.
  • Alex Fry: Hellwig is appealing because it's simpler. Are we using the right components?
  • Pekka Riikonen: J yes. M I'm less sure. Maybe C.
  • Nick Shaw: If we use a gamut hull approximation, we need one for every gamut. I guess we find one for the main gamuts and document the method, so people can do it themselves for custom targets.
  • Nick Shaw: I did some investigation into surround compensation. I plotted the curve (in display linear) for dark surround in and dim out. It's an inverted sigmoid, not a simple gamma like ACES 1.0. Dark to dim brightens the image significantly and desaturates it. ACES 1.0 dark to dim brightens less and adds some saturation because the gamma is applied to luminance only. If I apply the curve from Hellwig dark to dim on luminance it adds saturation. I get a better match to the CAM by applying the curve to RGB. The desaturation is the opposite of ACES 1.0. Perhaps using surround factors on the way in and out is wrong. It's matching a dark surround scene to a dim surround display. ACES 1.0 is matching a dark surround display to a dim surround display. Maybe using it on the scene side is a wrong use of the CAM. Also perhaps the sigmoid comes from using it on the way in before the tone curve and on the way out after it.
  • Alex Fry: Perhaps we should take our rendered image and go back and forth though the isolated Hellwig with different surrounds.
  • Pekka Riikonen: What happens with ZCAM?
  • Nick Shaw: The effect is much smaller and seems to happen only at the top end, and not change the shadows. But I'm not sure it's hooked up right in ZCAM. I haven't tested, but I would think that to reduce the effect magnitude in Hellwig you could take the triples of values the conditions map to, and move the dark and average ones closer to the values for dim.
  • Alex Fry: Or we could use a gamma, the same as ACES 1.0.
  • Pekka Riikonen: Although then why are we using a CAM? We could use JzAzBz or Oklab or something like that.
  • Nick Shaw: I'll look at what curve is produced by going 48 nits dark to 100 nits dim with no tone curve. If it's close to a gamma we can just use one.
  • Alex Fry: We could apply a gamma to the J component, before we find the gamut hull, though that's invalid if we change things.
  • Nick Shaw: Is this the last meeting of the year?
  • Alex Fry: Could be. Can people do next week?
[people agreed they could do a meeting next week]

Meeting #80, December 7th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Carlos Aviles
Lars Borg
Daniel Brylka
Chris Clark
Michael De Caria
Luke Hellwig
Christopher Jerome
Jeffrey D Mathias
Carol Payne
James Pickett
Pekka Riikonen
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: There’s been plenty of discussion on the ACES Central thread.
  • Alex Fry: I’ve updated the repo so v26 now includes inverses. They are hooked up in OCIO and Baselight. I need to work out if/how it’s possible in DCTL. v26 has HK mode disabled based on feedback.
  • Chris Clark: At CIC I met Luke Hellwig. He’s been hired by Samsung, and part of his job is standards. I mentioned this group, and he said he had spoken to you. I gave him the meeting calendar. I think he’ll be able to help.
  • Kevin Wheatley: On ACES Central there has been lots of feedback from Jeffrey. And Pekka’s commented on the shadows.
  • Pekka Riikonen: I’ve mentioned shadow noise before. So I was looking at trying to desaturate the shadows. The NaNs went away. I also noticed channels clipping early. It also happens in the other candidates and ACES 1.x. It may be a gamut mapping issue. We want channels to go smoothly into black. I increased gamut mapping compression and it makes a difference.
  • Nick Shaw: Is that the conflict between hitting the corners of the cube and being smooth? Presumably that is clipping where colors aren’t pulled right into the cube?
  • Pekka Riikonen: It’s a path to black issue.
  • Alex Fry: In the JMh cloud, dark noise produces large M values. So those won’t be reproducible.
  • Pekka Riikonen: I presume we shouldn’t clip these channels if we want to preserve hue in shadows. Also with an ACEScct ramp the bottom end is small negative values, and they retain some color. I don’t know what to make of that. I don’t have a black desaturation solution yet.
  • Alex Fry: How much effect is there outside the shadows?
  • Pekka Riikonen: For normal images I don’t see any effect. Pure colors desaturate a tiny bit more. In the highlights the red channel goes above 1 and so gets clipped.
  • Alex Fry: So should we vary compression power with J?
  • Pekka Riikonen: I also looked at getting primaries back to the Hellwig colors without compression. I got green and yellow back a lot closer. But blue gets lighter. I can’t get rid of blue/magenta skew.
  • Kevin Wheatley: We have Luke Hellwig in the meeting!
  • Luke Hellwig: I’m Luke Hellwig. I’m finishing my PhD in color science at RIT, and just started at Samsung in the display lab and standards team. I talked to Thomas over the summer about this. I’m interested in learning more and willing to help.
  • [Alex summarized what we are doing]
  • Kevin Wheatley: In our industry we may trade scientific accuracy for smoothness and control.
  • Luke Hellwig: The issues you mention with the HK extension make sense, due to the derivative near zero. I’m working on something new that may fix that. I’m not sure it’s appropriate in a DRT because it is perceptual.
  • Nick Shaw: It depends if it can help recreate the perception of a bright saturated color on a display with smaller dynamic range and gamut.
  • Luke Hellwig: HK would boost dark colors which you don’t want.
  • Kevin Wheatley: We have a tone curve driven by display peak to fit brightness to the display. But what happens to colorfulness? We’re trying to use a perceptual model to handle that, which previous ACES versions didn’t.
  • Luke Hellwig: You are using JMh?
  • Alex Fry: Those components were what we used in ZCAM.
  • Nick Shaw: Is it appropriate to hold M constant for a perceptual match when you’ve tone mapped J?
  • Luke Hellwig: J is relative to the white point, and M is absolute. If you have a reflective object and turn up the light on it, its M value increases, but its C value stays the same. J is lightness and Q is brightness, and they are a pair. M and C are also a pair. You have to be careful about mixing them.
  • Kevin Wheatley: After tone mapping it is relative to display brightness. Our scene data is relative to a grey card that is 0.18, so it’s already normalized.
  • Nick Shaw: We have values above diffuse white in the input and we map diffuse white to 100 on the way into the model. Do we have out of gamut values and values over 1 going in? Should the model still work with those?
  • Luke Hellwig: There is a debate over how to handle values over diffuse white. CAM16 should be resilient to values over diffuse white, where CIELAB isn’t.
  • [Pekka showed Luke some images and the Nuke implementation]
  • Luke Hellwig: How are you doing the chroma compression?
  • Pekka Riikonen: It used to be just highlight desat, with a power(p) curve. Now it’s a more sophisticated chroma compression over the whole range, so colorfulness hits zero when we hit display white. And there is an extra bit to compress different hues differently so we can hit the cube corners.
  • Nick Shaw: And the chroma compression is driven by the derivative of the tone curve.
  • Pekka Riikonen: It started as that, but now it’s adjustable but based on the derivative. Compressed lightness with scene colorfulness looks terrible.
  • Luke Hellwig: If you reduce chroma you still need chromatic contrast, so there are still saturated areas.
  • Carlos Aviles: We at Technicolor Creative Services were looking at the candidates and a concert scene with a lot of blue, and we were concerned that it went cyan, and that might give a false idea of blue at the look dev stage.
  • Alex Fry: The blues in that image are way outside the spectral locus. Rev26 is an evolution of candidate C with a different model, so things have changed.
  • Carlos Aviles: We will look at the new version.
  • Kevin Wheatley: We’ve talked about if saturated colors should map to display primaries. Just looking at a 709 display, you think blue should just go to maximum blue. But P3 blue looks different. And more so for green.
  • Nick Shaw: Putting the RGC before v26 with a blue screen image that goes purple helps it back to blue.
  • Kevin Wheatley: But the highlights stay purple because you’re only changing a limited range.
  • Alex Fry: I should make new AVIF HDR/SDR comparison images with v26.
  • Pekka Riikonen: I agree with Jeff’s comment that there’s a small saturation difference between SDR and HDR which I’ll try to fix. 
  • Alex Fry: The highlights are less saturated, as they should be, but I see a good match in the normal range. HDR is just brighter in the way I expect.

Meeting #79, November 30th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Chris Clark
Michael De Caria
William Feightner
Alex Forsythe
Christopher Jerome
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Alex and Pekka have a v25 and v26 of the DRT.
  • Alex Fry: I've been chasing the M collapse bug. I found that turning off HK mode fixed it. Previously I was applying it in every XYZ <> JMh conversion, but that means J rides up and down multiple times based on h. Now I have it controllable in three places – on the way in; in the middle for boundary finding; and on the way out. It seemed to be the one in the middle causing a discontinuity on a tinted grad. Turning that off fixed that. HK on the way in only brightens saturated colors of certain hues, particularly blues. It may be a bit strong, but without it blues feel like they are sucking light out of the scene. As Nick pointed out, this and other things like surround do the right sort of thing but maybe should be dialed back. The blue light in the Terry Silver image also looks like negative light without HK.
  • Lars Borg: Do you have a test image for this with all colors, not just blue? Maybe a synthetic image on grey adding colored light and make sure lit areas are brighter not darker.
  • Alex Fry: It feels like HK changes the tone scale. It doesn't for a grey scale, but it does for most real images with color.
  • Pekka Riikonen: Maybe Hellwig just renders blue too dark. ZCAM renders it brighter.
  • Alex Fry: I've updated the LUT repo with v25 (HK in only), v26 (Pekka's update, no HK) and 26b (Pekka's update with HK in only). I still need to add inverses. The repo also contains a 540 nit version for JOLED displays.
  • Nick Shaw: I can't quite work out if HK in only is right or not, or if you should do out too.
  • Kevin Wheatley: We need to ask what are we modeling with HK mode? Do we have a virtualized rendering and a mapping of it to different displays? Or do we have a family of outputs that we're trying to match to each other? That affects HK at the start or end. Do we care about absolute intensity on a given display? That affects HK on or off. Same for surround.
  • Alex Fry: I assume the data represents the scene.
  • Kevin Wheatley: For the scene you apply it on the way in. To match displays you need to apply it for the display specific case.
  • Alex Fry: I settled on input only because output produced whacky results.
  • Kevin Wheatley: Currently you're subtracting it on the way out. And maybe for display you need to add it. You have a virtual perfect display, and you try to match that. But then the gamut calculation should include it.
  • Nick Shaw: With the display we know the absolute brightness so can use the model as intended. On the way in we're making assumptions from relative brightness values.
  • Kevin Wheatley: So maybe it should be off on the scene side because we don't know.
  • Alex Fry: The OCES display concept confuses me. With an ideal display aren't the values basically the same as the scene?
  • Kevin Wheatley: But the OCES display still has finite dynamic range, so the rendering compensated for the difference between that and the scene.
  • Pekka Riikonen: The previous chroma compression wasn't invertible, so I have a different technique, which also deals with the expansion of colors at higher exposure and is invertible. Neutrals stay neutral with exposure, so colorfulness is lower and HDR and SDR look similar to Candidate A SDR. I use a chroma compression curve based on the derivative of the tone scale rather than the difference between compressed and uncompressed. It also compresses the whole range, not just above middle grey. It compresses so as not to hit zero saturation where the curve hits peak. I have per hue desaturation control. It changes the rate of desaturation as lightness increases. It doesn't desaturate blacks, so noise is still colorful. So you could add black desaturation. There is a big difference on skin tone, which is obvious on the ARRI Isabella image. Looking at a tinted ramp, v25 expands the color with exposure. v26 keeps it consistent.
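[A rough sketch of chroma compression driven by the tone scale derivative, in the spirit of Pekka's description. The tone scale here is a generic stand-in, and the mapping from derivative to compression is a guess at the shape of the idea, not his implementation.]
    import numpy as np

    def tonescale(x):               # stand-in curve, not the DRT tone scale
        return 1.2 * x / (x + 0.18)

    def derivative(x, eps=1e-5):
        return (tonescale(x + eps) - tonescale(x - eps)) / (2.0 * eps)

    def compress_chroma(M, x, strength=1.0):
        # compression increases as the curve flattens, over the whole range:
        # k is 1 at black and falls towards 0 as the tone scale saturates
        k = derivative(x) / derivative(0.0)
        return M * np.clip(k, 0.0, 1.0) ** strength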
  • Kevin Wheatley: That should help with creative white points in the grade.
  • Pekka Riikonen: Looking at the 3D cube, it is full except yellows, but slightly skewed by the compress mode. We may need a matrix to fix that. Or adjust the effective primaries.
  • Kevin Wheatley: That's the logical place to do that.
  • Nick Shaw: Do we have the custom primary values to match "Stock" or "Thomas" as start points?
  • Kevin Wheatley: We could calculate them.
  • Pekka Riikonen: If the compression was adjustable in each dimension, maybe we could fix the color shifts.
  • Alex Fry: I forgot to mention, I also added controls for "discount illuminant" at different points. Previously it was all D65, so you couldn't do D60 on a D65 display.
  • Pekka Riikonen: In ZCAM blues outside the spectral locus went cyan. With compress mode Hellwig there is a magenta skew inside the locus. v26 also has compress mode in ZCAM, which fixes the cyan blues.
  • Alex Fry: I think a slider for HK could be useful.
  • Kevin Wheatley: We need to decide on one way of fixing these issues to reduce permutations. And also check the 3rd dimension, not just chromaticity diagrams.
  • Alex Fry: Regarding inversion, we have something that inverts, but does it invert the display cube to sensible values you can reach? They aren't in AP1.
  • Nick Shaw: Way out, or could you grad to it?
  • Alex Fry: By pushing into negative, yes. Worth looking at where a Rec.709 cube inverts to.
  • Chris Clark: We want to look at how the HDR/SDR match compares to a Dolby Vision conversion.
  • Joshua Pines: It's not just an ACES problem, but sometimes you can't match back to the SDR with the Dolby trims. Also if you have a studio logo that round-trips in SDR, what happens through an HDR output Transform?
  • Kevin Wheatley: Even Rec.709 vs P3.

Meeting #78, November 24th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Michael De Caria
Alex Forsythe
Christopher Jerome
Zach Lewis
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Alex has some updates with bug fixes.
  • Alex Fry: We were getting a slight shift in the RGB channels with a neutral ramp. There was a variable name issue that meant it wasn't being applied with Hellwig. I've also tidied the XYZ white points for consistency. Now in v24 discount illuminant brings the channels into alignment. Pekka flagged an issue where uneven input produces a discontinuity. I think it's related to the M collapse I see with Pekka's dominant wavelength image. I've updated the candidates repo to LUTs baked with v24 and added a release tag for where they were for the original tests. I've made new variations including P3-D65 and Rec.709 in a P3-D65 container, to make comparison easier. There are OCIO, Baselight and Resolve versions, and the OCIO configs have multiple versions for Linux and Windows HDR and Mac EDR. Currently they don't include inverses. I need to work on that.
  • Kevin Wheatley: There are discussions in the issues section of the official repo.
  • Alex Fry: I need to move my code to that.
  • Kevin Wheatley: We need to work out the licensing and contribution agreement for that.
  • Pekka Riikonen: I opened a PR where I refactor the diagnostic modes, which improves compile speed. It highlighted that there is a load of code that could be removed. Like the MMSTC.
  • Nick Shaw: Is the "focusJbypass" checkbox something that will be needed in the final version?
  • Alex Fry: It's part of my testing to find the reason for the M collapse.
  • Nick Shaw: If you turn it off the discontinuity changes. The values are continuous either side of the break point, but there is a blip at the break itself, as if a zero crossing or a division by zero is happening there. focusJbypass makes it snap to white at that point.
  • Pekka Riikonen: My post shows a grey ramp shifted to DCI white, and neutrals don't stay neutral. They actually expand in colorfulness. That doesn't happen with candidate A or the ARRI transforms. Anything neutral at normal exposure is neutral 5 stops over as well. Also some channels scale disproportionately, especially red, making highlights warmer. I think this is related to the fact that we're squeezing large scene colorfulness values into a smaller range after the tone scale. In HDR you don't see it so much because there is less compression. I can fix or adjust it with my new invertible chroma compression. I'm still working on that with Hellwig, which is harder because of the size of the space.
  • Kevin Wheatley: So are we mapping our ACES 1.0 to the right place in the model?
  • Alex Forsythe: Luke had a paper at CIC with some suggestions on how to deal with M changing as J changes.
  • Pekka Riikonen: I haven't read the paper. Are we sure the scaling by 100 for Hellwig is correct?
  • Kevin Wheatley: Maybe we should email Luke.
  • Pekka Riikonen: I have a kitchen sink ZCAM version of v24 with chroma compression. I can get a D65 curve going all the way up to where it hits display white and stays the same scaling all the way up. It makes images more neutral. Skin tones are close to candidate A. It is fully invertible in ZCAM. I am going to try to normalize the Hellwig space to a similar range to ZCAM to see if it works for that. ZCAM is easier to work with. With Hellwig I'm still getting NaNs.
  • Kevin Wheatley: Hellwig should be simpler, which is why we looked at it. How can we focus on core bugs and tidying up the code? It's hard for anyone except Alex and Pekka.
  • Alex Fry: I need to merge Pekka's PR, and I'll work on an inverse for the LUT bake.
  • Kevin Wheatley: How can more people assist?
  • Alex Fry: Reimplementing in Python will be useful. But with what Pekka and I are doing Blink is easier for looking at images.
  • Kevin Wheatley: We had three issues we needed to work on. One is related to the compression algorithm, which we have progress on. One is the effective LMS primaries. Those two interact.
  • Alex Fry: In Hellwig we are using the stock primaries, and the compression sorts out the collapse near blue.
  • Pekka Riikonen: I've implemented compress mode for ZCAM. But in that and Hellwig it shifts the colors. So I think we need to change the matrix too. I think it's the compression that's making the blues skew magenta in Hellwig.
  • Kevin Wheatley: The third issue is the iterative solve for the gamut hull. Inverting would be much easier if we could fit a function. The other issues are bugs.
  • Nick Shaw: The discontinuity seems to disappear if I turn off HK mode.
  • Kevin Wheatley: It may just be pushing it out of your plotting range.
  • Nick Shaw: It may be, but it doesn't seem so.
  • Pekka Riikonen: If I change the input I think I still get a break without HK. We don't have HK for ZCAM.
  • Alex Fry: Turning off HK fixes the M collapse as well.
  • Kevin Wheatley: We wouldn't expect that sort of change. It's just a multiply based on hue.
  • Nick Shaw: Is it part of the XYZ to JMh conversion, so applied every time? Is it appropriate to use in every JMh conversion, after we tone map?
  • Pekka Riikonen: It could be an order of operations issue if the tone scale changes J (a numeric illustration follows these notes).
  • Alex Fry: Maybe we only apply it on the first conversion to JMh. The current LUT bakes have HK on and use compress mode. The Nuke script to bake the LUTs and generate configs etc. is in the repo if people want to play.
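Pekka's order-of-operations point can be shown numerically. The gain function below is a made-up stand-in for the HK term (it does not reproduce the actual Hellwig extension); the point is only that a hue-based multiply on J does not commute with a nonlinear tone scale, so applying HK before or after tone mapping gives different results.

    import math

    def hk_gain(h_deg):
        # Hypothetical hue-dependent lightness gain; illustrative only.
        return 1.0 + 0.2 * abs(math.sin(math.radians(h_deg) / 2.0))

    def tone(J):
        # Placeholder nonlinear tone scale in J.
        return 100.0 * (J / 100.0) ** 0.8

    g = hk_gain(280.0)
    before = tone(g * 50.0)  # HK applied on the first conversion to JMh
    after = g * tone(50.0)   # HK applied again after the tone scale
    print(before, after)     # the two differ, because tone() is nonlinear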

Meeting #77, November 17th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Michael De Caria
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Alex has some visualizations to show.
  • Alex Fry: I wanted to visualize perceptual hue skews with gamut clamping in the JMh domain. So I built a Nuke Mh visualization tool. I've plotted the gamut hull of AP1 with lines showing the path in Mh to a Rec.709 clamped version. Some are relatively straight, some curve a bit, and by the primaries you see them collapse to those. The skews along the blue edge are almost perpendicular to the constant hue lines. When you plot the current CAM DRT the lines follow constant hue as you would expect, because we're testing it on its own terms. When I plot with my tool and zoom to the center, the three lines should converge to a point, but there's an odd kink. I wonder if that's related to the M collapse I've seen. I've generated a series of images and plots in xy and Mh. It's interesting to see dark noisy values that are out of gamut have small M values. Bright values can have large M. It's as you would expect, but good to see plotted. The new ALEXA 35 bar image has bright colorful values, basically within AP0, but it makes a giant "paint splat" in Mh. That's pulled in by the compressor in the DRT. Pekka pointed out in a DM a variation between ZCAM and Hellwig with the Daniele evo curve. I'm working on tools to chase down bugs, by doing things like isolating the output end.
  • Kevin Wheatley: It's interesting to see how the colors in this random selection of images map to straight lines in the [scene-referred] Mh plot. It suggests the model is working. They may of course be clipped and running up the edge of some value.
  • Alex Fry: Clipping seems to make things loop back. The synthetic images are interesting to look at too.
  • Nick Shaw: On the subject of the curve difference between Hellwig and ZCAM, I'm looking in Nuke, and I'm seeing more of an RGB misalignment than a curve difference.
  • Pekka Riikonen: I saw an exposure difference and an RGB difference. A little color in the highlights.
  • Nick Shaw: The color shift I see is there with all curves. Not just Daniele evo.
  • Pekka Riikonen: I don't remember seeing it in v20.
  • Kevin Wheatley: I am not sure the precision issue Jeffrey raised is really a problem. His concern was about the clamp to half-float limits. That's just clipping off possible infinities, not reducing the bit depth to half-float.
  • Alex Fry: Limiting the range to the +/-65504 that can be represented by a half-float (see the sketch after these notes).
  • Nick Shaw: I had a conversation with Scott on Slack about the inverse Daniele curve, and limiting input where the forward curve went flatter than a limit, to prevent division by zero. Half-max seemed like a reasonable upper limit.
  • Alex Fry: ACES 1.0 clamps the input to the half range too.
  • Scott Dyer: ACES EXR input should be in that range anyway. But all calculations should be done at higher precision.
  • Kevin Wheatley: Half is enough for camera sources, unless you have a multi-exposure HDR of the sun. But Jeff is concerned about the granularity of the 11 bits of precision per stop in half-float. Dolby made some plots for presentations showing PQ and half-float against the Barten curve of discriminable differences. Half zig-zags but stays well below Barten. 16-bit int log can have the same precision but smaller range.
  • Lars Borg: Half-float sits way below the PQ curve.
  • [Lars showed a plot of various curves compared to Barten]
  • Alex Fry: We can make a new LUT package, but I'd prefer to iron out the bugs first. A DCTL version will be a lot more work.
  • Nick Shaw: I started working on a DCTL version, but chasing Alex's changes meant repeating a lot of work, so I decided to wait until things were more locked down.
  • Alex Fry: J and I were talking about what our deliverable will be. Is CTL compulsory? Python?
  • Scott Dyer: For now we deliver an algorithm. Final form can come later, including CTL as the reference. I've been doing some stuff with Python and DCTL, but nothing ready to release for a while. LUTs of the current state would be helpful.
  • Alex Fry: I'll make some new LUTs. The candidates had three versions, for each of the three options. For one candidate we could have more variations. Perhaps include P3 and Rec.709 in P3, for comparison. Also do we limit HDR to P3? Full 2020 will be clipped differently by different displays, introducing different skews.
  • Scott Dyer: The DCI spec is similar: you can encode values outside P3, and those land wherever they land.
  • Joshua Pines: 99% of studio deliverable requirements reject anything outside P3.
  • Alex Fry: The Blink includes pre-calculated XYZ matrices, but we could make it calculate matrices for arbitrary primaries.
  • Scott Dyer: Looking at your Blink, is the highlight desat new?
  • Alex Fry: No. But Pekka's chroma compression is. That and highlight desat are an either/or. They are both checkboxes, but when chroma compress is on, highlight desat is bypassed.
  • Pekka Riikonen: I am working on an invertible version of my chroma compression. I was surprised the HDR render looked much less colorful than the SDR. I think some people commented on that in the tests for candidate C.
  • Alex Fry: My memory is C was a pretty good match and A and B were under saturated in SDR and oversaturated in HDR, compared to raw pixels with a matrix and EOTF.
  • Nick Shaw: I remember talking to Alex about my feeling C was desaturated in HDR, and whether my feeling was incorrect, and it was just relative to the other two that were too saturated.
  • Alex Fry: My ground truth was matrix and EOTF only on an HDR monitor.
  • Pekka Riikonen: It's interesting that most people preferred A in SDR, which is the least colorful in SDR. On my screen candidate A in SDR is closest to candidate C in HDR. Maybe we should match the SDR to the HDR colorfulness.
  • Alex Fry: From my end there was a feeling the SDR should have had more highlight desat. I think it's highlight roll off, not mid-tone saturation.
  • Pekka Riikonen: In SDR I see color casts in whites and greys which I don't see in HDR. A warm cast over everything that I don't see in HDR, which looks neutral.
  • Alex Fry: There was an errant CAT at one point that shifted the white point. But neutral came out neutral in the candidates.
  • Pekka Riikonen: I'll try and match the SDR to the HDR in the chroma compression. I feel the SDR is trying to squeeze more color into less range, where HDR has less squeezing.
  • Joshua Pines: When you looked at these images side by side, was that on one screen or different monitors?
  • Pekka Riikonen: Two monitors.
  • Joshua Pines: Check your monitor calibration.
  • Alex Fry: To eliminate that, try the SDR 709 packed into an HDR container. I'm using Nuke's viewer on an XDR display.
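For reference, the clamp discussed above is just a clip of possible infinities into the finite range of a half-float, not a conversion to half precision. A minimal sketch, assuming NumPy-style array input:

    import numpy as np

    HALF_MAX = 65504.0  # largest finite IEEE half-float value

    def clamp_to_half_range(values):
        # Clips possible infinities to +/-65504 while keeping full
        # working precision, as discussed above.
        return np.clip(values, -HALF_MAX, HALF_MAX)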

Meeting #76, November 9th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Carlos Aviles
Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
John Frith
Jeffrey D Mathias
Carol Payne
James Pickett

Meeting Notes

  • Alex Fry: I've been working on a Miro board to explain the flow of the DRT, and started to port it to Python to make testing values easier. I have an overview – RGB to JMh; forward tonescale on J, including highlight or chroma compression; gamut compression, where we search for the hull of the target gamut and pull M values in and J down/up; JMh to RGB (a toy sketch of this flow follows these notes). There is a more detailed version which breaks down those steps further. In my repo I have a basic stub of a Python version. I am using Thomas' Colour for the JMh conversions etc. I compared my original Blink Hellwig JMh, Thomas' Python and what's in the DRT and found some mismatches, which I posted about. It turned out there were scaling errors and an issue with passing the surround conditions. I've now fixed it and get a match. It doesn't fix the 'shelf' issue. Nick has bolted his Blink implementation of Daniele's latest curve into the DRT code.
  • Kevin Wheatley: How do your fixed values compare to the ZCAM model?
  • Alex Fry: They are still different, but not the 100x difference we saw before. I need to check the Hellwig paper again to be sure I'm scaling the input correctly.
  • Kevin Wheatley: The two models follow similar ideas, so should look similar.
  • Alex Fry: The end result looks very similar.
  • Nick Shaw: It must be scaled right, or the tone curve roll-off would be in different places.
  • Alex Fry: We go back through the model to scene linear before applying the tone curve, so they each invert themselves. The biggest difference I see is with the dominant wavelengths where Hellwig has the shelf which ZCAM doesn't. It's happening in the gamut compression step.
  • Kevin Wheatley: Scott's made a Git repo as a formal location for all this work. It raises the issue of licensing and contribution agreements.
  • Alex Forsythe: We use the ASWF CCLA. We are looking at a tool on GitHub.
  • Scott Dyer: Alex is working in his own fork of an original AMPAS repo. We'll use this new one for deliverables. Things for users to explore etc.
  • Kevin Wheatley: We can open a GitHub issue for each item that needs doing.
  • Carol Payne: Shouldn't it copy over the license from aces-dev, as it's going to be part of ACES?
  • Alex Forsythe: The ASWF suggest Apache 2 is the preferred license, with MIT second. "Draft your own" licenses like ACES are less preferred. We may want to consider ASWF guidance.
  • Carol Payne: Maybe then all of ACES should move to that license in the long term.
  • Alex Forsythe: Maybe we should stick with the ACES license for now, and if we change that in future we change everything.
  • Kevin Wheatley: As long as what we start with allows that future change. Using issues will let us track back in future.
  • John Frith: We at Technicolor have been looking at the candidates and have some feedback. I understand there's a move towards candidate C, which we generally agree with. We did notice saturated reds going orange. Can you hit fully saturated red? We have an internal K1S1 like technical LUT which lets us see shadow detail. The candidate tone mapping doesn't show as much shadow detail.
  • Alex Fry: Interesting you say red. Most have commented on greens and blues.
  • Kevin Wheatley: The latest version may have changed the behavior you noticed.
  • Nick Shaw: The candidates used the ZCAM model, and we've moved to a different CAM, so you should probably test that. Most people commented on the green in the Fairy bottle going cyan, which happens with both models.
  • Alex Fry: Because we compress along hue lines, the behavior is different to a clamping transform, which tends to collapse to the primaries.
  • Kevin Wheatley: So those clip to the different primaries of different displays, so have different hues. Which may or may not be what you want. We discussed that last week.
  • [Alex showed his plot of where Rec.2020 primaries collapse to in P3 and Rec.709 under different transforms]
  • Alex Fry: We hope if you had a Rec.2020 laser projector, a P3 display and a Rec.709 display, you might not get the saturation, but you would get a perceptual hue match between the three.
  • Nick Shaw: You could compare it to the way colorists used to display-referred grading complained about ACES behavior because it wasn't what they were used to. Or white in textures not being peak white through an ODT. If you can convince people there's a logic, they'll see the benefit in the long term.
  • John Frith: In CG if you put 1, 0, 0 people may be confused if it comes out orangey rather than max red.
  • Alex Fry: If you work with colour wheels like that. Working display referred we were used to pushing a primary all the way to the corner and that's what you get. Now it's more debatable what those ACEScg primaries "mean".
  • Alex Forsythe: It can be misleading to look at chromaticity diagrams. Maybe we should plot the path to the clipped primary in JMh.
  • Kevin Wheatley: And out there in green the JNDs aren't circular. So we should at least try 1976 plots, or M and h.
  • John Frith: I have two VFX supervisors who have different opinions. What about the tonescale and shadow detail? Is that fixed?
  • Nick Shaw: Does the LUT you use have lifted blacks like K1S1?
  • John Frith: Definitely more than the candidates.
  • Alex Fry: Can you hit display zero with your LUT, or is it lifted like K1S1?
  • Nick Shaw: You can hit display zero with K1S1 if you go down to LogC zero, which is negative linear.
  • Alex Fry: How do they find our curve compared to ACES 1.0?
  • John Frith: It's definitely going in the right direction. But the two supervisors disagree, as one wants a more finished contrasty look, and the other built our lower contrast LUT.
  • Kevin Wheatley: We've had similar discussions about "pretty" vs technical. I averaged tone scales people use. Some were ACES like and went down hard into black. But quite a few were lifted like K1S1. The candidates aren't a million miles from my average. But we didn't analyze the shadows in detail.
  • Carlos Aviles: Our feedback is related specifically to VFX. Finishing needs something different, and they don't need to see so much shadow detail. We're talking about an internal review LUT.
  • Nick Shaw: The question is could two LMTs meet those two needs? Your two supervisors with different opinions emphasize that there's no one right answer. So it's more important that the transform is flexible enough that both can get what they need with two LMTs. The out-of-the-box look is then less important if LMTs move to a primary place in every ACES implementation.
  • John Frith: If it's robust enough. But how do you test that?
  • Kevin Wheatley: A lot is related to invertibility.
  • Alex Forsythe: Would a mode that presented scene-linear data directly on the display be useful? For probing highlight and shadow detail? An LMT that undoes the tone-scale.
  • Alex Fry: I just don't use an LMT for that. Just a matrix and inverse EOTF. I could add a mode which only has roll off at the top, but still does gamut compression etc.
  • Kevin Wheatley: You don't want no toe. You want more toe. Lift. You add exposure and pull out the toe. We have something similar to MPC's, which adds extra flare. If there is flare in the room you need to lift it above display black a bit, to make it visible.
  • John Frith: It needs to be robust enough that people can make LMTs, but also if it's too contrasty artists add curves, and it's not scene-linear any more. You want something that looks pleasing but helps people make scene-linear images. But if people don't like the look out of the box, they won't use it and won't look further.
  • Kevin Wheatley: We are giving hooks in the code so that MPC and Technicolor or similar can make custom variations.
  • John Frith: Do we think the RGC will still be needed?
  • Alex Fry: The rendering is more tolerant to out of gamut values, but it's still preferable to get your working values within AP1.
  • Nick Shaw: But the RGC parameters may not stay the same. We always planned to revisit it when Output Transforms were updated.
  • Kevin Wheatley: And newer more complex IDTs may change things too. But even then after you grade it you may have negatives. So there is nothing in our group that says it has to go away.
  • John Frith: Have any invertibility tests been done on the candidates?
  • Alex Fry: Invertibility is one of our design requirements. Candidates A and B are mathematically invertible up to the point where they clip. C compresses more values in, but the inverse is more complex, and requires iteration.
  • Kevin Wheatley: I proposed a test for invertibility and precision, going backwards and forwards and checking if we can hit all values of a Rec.709 display. There are different use cases for inversion.
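The overall flow Alex describes at the top of these notes can be summarized in a toy sketch. Every helper below is a deliberately simplified stand-in (an HSV-based "JMh", a placeholder tone scale, a fixed M limit instead of the iterative hull search); only the order of operations follows the notes.

    import colorsys

    def rgb_to_JMh(rgb):
        # Toy stand-in for the RGB -> XYZ -> Hellwig JMh conversion.
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        return v * 100.0, s * v * 100.0, h * 360.0

    def JMh_to_rgb(J, M, h):
        # Inverse of the toy conversion above.
        v = J / 100.0
        s = 0.0 if v == 0.0 else min(M / (100.0 * v), 1.0)
        return colorsys.hsv_to_rgb(h / 360.0, s, v)

    def forward_tonescale(J):
        # Placeholder tone scale applied to J.
        return 100.0 * J / (J + 18.0)

    def chroma_compress(M, J):
        # Placeholder highlight desaturation: pull M in as J nears peak.
        return M * (1.0 - (J / 100.0) ** 4)

    def gamut_compress(J, M, h, max_M=80.0):
        # Placeholder hull: a fixed M limit instead of the iterative
        # boundary search used in the real DRT.
        return J, min(M, max_M)

    def cam_drt_sketch(rgb_scene):
        # RGB -> JMh; tone scale on J plus chroma compression; gamut
        # compression; JMh -> display RGB, per the flow in the notes.
        J, M, h = rgb_to_JMh(rgb_scene)
        J = forward_tonescale(J)
        M = chroma_compress(M, J)
        J, M = gamut_compress(J, M, h)
        return JMh_to_rgb(J, M, h)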

Meeting #75, November 2nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Michael De Caria
Alex Forsythe
Francesco Luigi Giardiello
Christopher Jerome
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Alex Fry: I've started work on implementing the Daniele tone curve, based on some code from Scott, and posted about it. Pekka noticed a variable name error. We need to bring variable names into line between implementations. Nick has made a Blink implementation of Daniele's curve.
  • Nick Shaw: I just took the code from my Tonescale DCTL and ported it to Blink. I moved the pre-calculations into the init() function so they only happen once. And I took the inverse curve from Scott's Matlab. My DCTL and Blink use the variable names from Daniele's Desmos.
  • Alex Forsythe: Do we have a list of items we want to tick off before giving it to colorists for feedback?
  • Alex Fry: I have some issues I'm looking into. There's a white point shift with Hellwig.
  • Alex Forsythe: Check that the matrix rows for RGB to RGB matrices sum to 1.0. And RGB <-> XYZ matrices are inverse pairs.
  • Nick Shaw: The same XYZ matrices are used in both models, so it should occur with both if it's the matrices.
  • Alex Fry: I'm also looking into the collapse in M with bright saturated values. There may be a gamut volume misalignment. Maybe a missing CAT.
  • Alex Forsythe: Will Blink run in Nuke non-commercial?
  • Scott Dyer: No.
  • Pekka Riikonen: Is the pre-matrix used in ZCAM also used with Hellwig?
  • Alex Fry: That's only used in the XYZ to LMS matrix. You also said you saw something not round-tripping. I see a very small shift with some values on the locus.
  • Pekka Riikonen: I was seeing something more dramatic.
  • Alex Fry: We need to keep investigating. These test values are pure wavelengths, and the compression doesn't pull everything in-gamut, so there is some clipping. That may be skewing things. Are we happy with sacrificing some smoothness to hit the corners?
  • Nick Shaw: I've pushed my Blink Daniele curve to my repo.
  • Alex Fry: I'll bolt that into my code.
  • Kevin Wheatley: How do we test the tone scale against our requirements? Is it C2 continuous? How much does mid grey rise with luminance? Are there asymptotes? Can we reach the edges we want to? We should have some automatic tests to be sure updates don't break any of that.
  • Alex Fry: Also, although we can hit the gamut edges, can we do it with plausible AP1 values? We may need to tune the compression, assuming AP1 is the working space for grading. Other questions would be: is it inverting correctly? Do we use the chroma compression if we can't invert properly? I need to look into the M collapse. The question "what does Rec.2020 green mean?" keeps coming up. Is it right to sacrifice intensity for accurate hue?
  • Nick Shaw: If something is "correct" but not what people expect, is it what we need?
  • Kevin Wheatley: It's like the D60 sim, if you have a Rec.709 monitor next to a D60 one, but elsewhere you want it to be D65. Hitting the primary in some circumstances may be what people expect. In others they need the hue to plausibly match between displays. The assumption that Rec.2020 green is the green you think it is may not be right. But if you shift it to the display primary, what happens to the colors in between, because they will be less correct. It makes sense to me not to do it. And if we have to do it, how do we warp it?
  • Nick Shaw: You can hit the green primary. Just not with AP1 [0, 1, 0]. It may be about education, and maybe you need a different color picker.
  • Alex Fry: Is hue or intensity more important? On a Rec.2020 projector, it is bluer than P3, but the overwhelming sensation is the intensity. You feel that more than the blueness.
  • Pekka Riikonen: What is the closest appearance match to that color? Maybe not the primary, or hue match, but in between.
  • Kevin Wheatley: With Rec.2020 primaries, everybody would see them differently because they are on the boundary. We can't lean too much on appearance matching. And it's not a 2D problem. There's luminance and saturation. And what about when you go to Rec.709, if you hit the P3 primary?
  • Pekka Riikonen: Has anybody seen the Fairy bottle under the new ARRI ALF4 transforms? It's similar, but darker, and cyanish green.
  • Kevin Wheatley: Does anybody have suggestions for how we might test it, so it doesn't drift during final tweaks? Alex, you mentioned clustering on the six axes.
  • Alex Fry: That may be the same issue. I also am not completely convinced ZCAM is better, when I toggle between them.
  • Pekka Riikonen: I have a version (v20_pex) with compress mode in ZCAM.
  • Alex Fry: Maybe then we wouldn't need the pre-matrix.
  • Pekka Riikonen: I think we will because compress mode shifts in gamut values. In my testing compress mode fixes the blue shifting to cyan. But reds do go magenta.
  • Alex Fry: Is compress mode variable per dimension?
  • Nick Shaw: I think only linear extension. But compress mode could be. But that might complicate things more.
  • Alex Fry: Pekka what has your impression been of Hellwig vs ZCAM?
  • Pekka Riikonen: For normal colors they're very similar. For the very colorful range Hellwig has better color because the space is so big. You get nice reds in the candle image.
  • Alex Fry: Hellwig and ZCAM have dramatic differences of scale. Maybe I'm not scaling something properly.
  • Alex Forsythe: Printing out values at each stage may help.
  • Nick Shaw: The DRT doesn't exist in Python, but are the elements there to build it?
  • Kevin Wheatley: He has all the models, we could validate against the BlinkScript.
  • Alex Forsythe: We should create a list of things to look into and we can assign people to things.
  • Pekka Riikonen: Thomas already has the compress mode.
  • Nick Shaw: I see the Colab uses the spow function, which defaults to mirroring. The Blink uses one which clamps at zero, with a note that it was changed to fix an issue.
  • Kevin Wheatley: The mirroring causes the kink in hue lines.
  • Nick Shaw: But the power function is used in many other places, and compress mode or linear extension makes the mirroring irrelevant there because it makes everything positive. Clamping without mirroring would cause a mismatch between Blink and Thomas' Python (the two variants are sketched after these notes).
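The two spow behaviors under discussion differ only for negative inputs. A minimal paraphrase of both (the real implementations live in Thomas' Colour library and the Blink code respectively; these are sketches, not copies):

    import math

    def spow_mirrored(x, p):
        # Mirrors the power function about zero, as Colour's spow does
        # by default.
        return math.copysign(abs(x) ** p, x)

    def spow_clamped(x, p):
        # Clamps negatives to zero first, as in the Blink variant.
        return 0.0 if x < 0.0 else x ** p

    print(spow_mirrored(-0.5, 0.42), spow_clamped(-0.5, 0.42))  # differ below zero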

Meeting #74, October 26th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Carol Payne
Pekka Riikonen

Meeting Notes

  • Alex Fry: I've been stripping out the SSTS code from the DRT and merging Pekka's work, and looking into some issues with very bright colorful values. Started putting the current version of the tonescale and parameters into it. Don't have anything to show yet.
  • Scott Dyer: The parameters we have are a close match to what people saw in the candidates. Good enough for now. I've been looking again at some stuff we did when developing ACES 1.0 to do with gamut mapping and path to white, and making visualizations. The biggest task is identifying what behavior we want. Then hopefully we can implement it simply. Tomorrow Rod Bogart is coming in to see some of the candidates. And others who are in LA could come in too. I'll post on ACES Central when I have something to show. We have a single peak luminance parameter to control the curves. With STEM2, JZ wanted to show HDR with the same grey value as SDR as a start point, then extend the highlights. That was easy with the SSTS. We need to see if that's possible with this curve with an LMT, and provide a mechanism to do it when we ship, because some people will want it. The whole curve changes now with peak luminance. With SSTS it was two B-splines joined at mid grey, so below that it was not altered by changing the peak. With this it changes slightly. It may not be significant. We need to provide controls for custom ODTs that let people do all the things they can do now.
  • Alex Fry: For most the grey boost to 15 nits in HDR is reasonable, but not everyone wants that behavior.
  • Pekka Riikonen: Daniele's curve has w_g, which defines how grey changes with peak luminance. Set that to zero and it doesn't change (illustrated after these notes).
  • Scott Dyer: That keeps mid grey constant, but there is still a small change in the shape below that. But that may not be noticeable.
  • Nick Shaw: I would guess that with the much larger change at the top you wouldn't notice the small change of shape in the shadows. I can expose w_g in the tonecurve DCTL so people can play with it.
  • Pekka Riikonen: I've been looking at an inverse for the chroma compress version. I don't think it has a mathematical inverse, so it needs an iterative one.
  • Alex Fry: I've been copying your code in. I'm still confused by the collapse in M I am seeing.
  • Pekka Riikonen: I said before that the increase of colorfulness as luminance increases was a bug, but it isn't. As we compress luminance, we need to compress chrominance as well. The gamut mapper doesn't touch colors inside the gamut, so we need a separate compression step. And the highlight desat only affects values above mid grey.
  • Alex Fry: That's based on the difference between uncompressed and compressed values, so is greatest for super-high values.
  • Pekka Riikonen: We need to work out the best way to compress chroma and yet still have highly saturated colors. What I showed is a naive version of that which proves the concept.
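The w_g behavior Pekka mentions can be illustrated with a hypothetical scaling law: if mid grey scales with peak luminance raised to w_g, then w_g = 0 pins grey across all peaks. The reference values below (10 nit SDR grey, 100 nit normalization, the 0.176 exponent) are illustrative, chosen only to reproduce the 10 to 15 nit rise mentioned in these meetings; they are not the actual curve parameters.

    def mid_grey_nits(peak_nits, w_g, g_sdr=10.0, n_r=100.0):
        # Hypothetical mid-grey scaling with peak luminance.
        return g_sdr * (peak_nits / n_r) ** w_g

    print(mid_grey_nits(1000.0, 0.0))    # 10.0 -- grey unchanged
    print(mid_grey_nits(1000.0, 0.176))  # ~15.0 -- grey rises with peak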

Meeting #73, October 19th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Jeffrey D Mathias
Joseph McCormick
Daniel Mulligan
James Pickett
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We've had interesting progress on the core DRT from Alex and Pekka.
  • Alex Fry: I think I've fixed the GPU crashes, with help from Pekka, who suggested adding a limit to the iterations in the gamut boundary search so it never loops infinitely (a sketch of such a capped search follows these notes). There may also be an issue with a reused variable. I've also separated the input and output viewing conditions settings so you can change them separately. The effect is more dramatic than I would like. Changing the Hellwig adapting field and background alters saturation, more like what I'd expect from changing viewing conditions. I've started a new thread on the DRT and posted rendered images with plots. You can see the gamut compression doesn't bring everything completely in, so there will be final clipping introducing skews. There's been discussion on the "right" thing to do with e.g. a Rec.2020 green. Most saturated for display (different per target) or try to preserve hue? Pekka's extended some of my ideas.
  • Pekka Riikonen: Bill's demo last week reminded me of some of the issues with our current candidate. He showed maintaining saturation in highlights. I tried two approaches, one which desaturated highlights and one which didn't, lerping between them based on colorfulness. Our JMh desat desaturates all colors equally. I added a chroma compression control varying per hue.
[for detail see recording from ~13 minutes]
  • Nick Shaw: Does it affect invertibility? Lerping can do that.
  • Pekka Riikonen: Good question. I haven't tested.
[Pekka demonstrated the effect of his modifications]
  • Alex Fry: So you tuned the values by eye?
  • Kevin Wheatley: Would this need to be different per display gamut?
  • Pekka Riikonen: I believe the same values would work for all, in HDR and SDR. Somebody needs to check. I only did this for ZCAM so far, as I was having trouble with Hellwig. I get lots of NaNs.
  • Scott Dyer: The synthetic chart has synthetic noise at various levels at the left.
  • Nick Shaw: If the noise is centered around zero there could end up being pixels with negative luminance, and high chroma.
  • Alex Fry: I found the two models need different parameters for some things.
[Pekka showed 3D plots of the Hellwig and ZCAM JMh spaces with and without gamut compression]
  • Lars Borg: Compressing all the colors to the same level will distort the balance between colors. A neon will look white but the color surrounding it retains the neon color, which is absurd. It's a challenging trade off.
  • Alex Fry: We're not committed to using Hellwig. The hope is Hellwig is simpler. But we need to look at images. Did Bill say they were using ZCAM or "something like ZCAM"?
  • Kevin Wheatley: I think he said they started there and then modified it.
  • Alex Fry: It's all up for grabs. If somebody can suggest an alternate gamut mapping.
  • Nick Shaw: We do need something without an iterative solve.
  • Pekka Riikonen: There are some harsh edges in ramps with Hellwig. The smooth cusps control helps, but then you can't reach the corners.
  • Alex Fry: I felt the sharp kink was a necessary tradeoff to hit the edges of the cube.
  • Lars Borg: Do those occur in real images?
  • Alex Fry: I've only seen it in synthetic images.
  • Kevin Wheatley: We need to be sure we're comparing like for like between the models, due to scaling differences.
  • Alex Fry: Currently we go XYZ to JMh and collapse M and h to zero then go back to scene linear XYZ, apply the tone scale (defined in scene linear) then back to the model. So the internal scaling of the model doesn't affect the tonescale. But what those values "mean" is different.
  • Kevin Wheatley: So the tuning parameter appropriate for one model may not be for the other.
  • Alex Fry: Things like viewing conditions get handed to the different models as what those models think dim/dark/average mean. But desat and gamut compression are just done with JMh values which mean different things. The shape of ZCAM does seem easier to handle.
  • Nick Shaw: Like the RGC, the gamut compression scales to the gamut boundary, so should normalize out to an extent.
  • Alex Fry: I think the highlight desat is more problematic. Pekka, can you rescale your plots so the two have comparable scales?
  • [at about 43 minutes Pekka showed the differences in the shapes]
  • Alex Fry: I'd like to see if your variable compression can be made to work with Hellwig. We need to find the source of those NaNs.
  • Pekka Riikonen: I only see them with compress mode and linear extension, because there are divisions by zero.
  • Alex Fry: With blue bar, compress mode causes some magenta skews.
  • Scott Dyer: People here have done some testing where they used the v1 SDR renderings and wanted to get an HDR version which started by matching, and then extended the highlights. They just changed the grey parameter to make it the same as SDR. Like the D60 sim being parameterized, we may need the same for treatment of saturated primaries – match hue or "most green".
  • Lars Borg: Also need similar looking brightnesses. If you have a green next to a yellow on a Rec.2020 display that look the same brightness, they should have similar brightnesses on a Rec.709 display.
  • Alex Fry: Looking at a Rec.2020 projector green, it's definitely bluer than 709, but so much more intense. Which do you preserve?
  • Lars Borg: It depends if it's the hero or the background.
  • Scott Dyer: It's not a problem unique to ACES.
  • Alex Fry: Maybe if you want max Rec.709 green you pick a color on that axis.
  • Kevin Wheatley: Then you have a problem in P3, in a single master world.
  • Lars Borg: You say it is bluer. I would say the other is too yellow. If you blend between the clipped and gamut compressed versions will that still be on the boundary?
  • Kevin Wheatley: Not necessarily in a 3D sense.
  • Alex Fry: It's mainly a CG issue where artists push the working space value to max green.
  • Kevin Wheatley: So we don't have an answer for now.
  • Alex Fry: Pekka, can you look into making your code work with Hellwig? I'll look into those NaNs. Scott, how are things with the tonescale?
  • Scott Dyer: I have some stuff I'll post on ACES Central about my investigations.
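The fix Alex mentions at the top of these notes, capping the iterations of the gamut boundary search, can be sketched as a bounded bisection. The predicate and limits here are illustrative; the real Blink search differs in detail.

    def find_boundary_M(J, h, in_gamut, M_max=200.0, max_iter=32):
        # Bisect for the largest M at (J, h) still inside the target
        # gamut. The hard iteration cap guarantees termination even if
        # the interval never converges, which was the GPU hang fix.
        lo, hi = 0.0, M_max
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if in_gamut(J, mid, h):
                lo = mid
            else:
                hi = mid
        return lo

    # Example with a toy gamut whose boundary sits at M = 60:
    print(find_boundary_M(50.0, 120.0, lambda J, M, h: M <= 60.0))  # ~60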

Meeting #72, October 12th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Chris Brejon
Chris Clark
William Feightner
Alex Forsythe
Francesco Giardiello
Christopher Jerome
Jeffrey D Mathias
Carol Payne
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: This week we have a guest demo from Colorfront.
  • William Feightner: We've been addressing the same problems as this committee for years, and done many iterations. Our concept is the same as ACES, having a virtual look and rendering that to various displays and environments. Over this screen share we can only do Rec.709. Maybe another time we can stream to HDR devices, like the new iPad, if people have them. The iPad matches my Sony X310 pretty well. I have the three candidates from July loaded as 3D LUTs in Colorfront Transkoder. The Colorfront engine works in a similar way, and I have made a look which uses the tone curve used in the candidates, but then uses the Colorfront rendering to take that to Rec.709, with gamut mapping etc. Our engine also has HK compensation so the apparent vividness of colors is maintained. Candidate C seems to really emphasize the reds. Our engine is based on a perceptual model. There's a look and the part that takes that to different displays. We virtualize our looks for 10000 nits. Comparing the Colorfront render to Candidate C, Colorfront is smoother and maintains hue, where C has hue shifts and discontinuity. With the ACES synthetic chart, our render holds color right up the ramps, where C goes to white and clips. I have the ACES RGC on, but even without it our engine handles things fine. I've also compared using the candidate C 1000 nit Rec.2020 PQ render and mapping it to Rec.709 with our engine, against the candidate's Rec.709 render. The 100 nit render is smoother, and the Rec.709 has some clumping and non-linearities. Looking at Red Xmas, the Colorfront render holds the red color on the faces, and candidate C has a visible band and then desaturates. And the reds go pink. We can tune our algorithm to favor luminance or chrominance when mapping to a smaller space. We've set it to about 50/50. Again the candidate's 1000 nit render mapped with our engine to Rec.709 looks a bit better. It's been an iterative process to modify the perceptual model and tune the parameters over a few years. On Red Xmas candidate A is a bit smoother. B really has problems. I gather C is the favorite and B has been abandoned. I'd be happy to look at newer iterations and comment. Colorfront has a parametric LMT which inverts out the RRT/ODT, so invertibility is important in whatever we do. We can export the LMT as CLF or a LUT for ACEScct.
  • Chris Clark: One thing I notice about the Colorfront rendering is the way it holds color in the ramps. How did you find a balance in your gamut mapping vs the risk of quantization if you do an inverse transform?
  • William Feightner: Our down mapping holds everything up to 10000 nits, and reduces the mid grey level slightly. Home TVs are fairly well calibrated for HDR, so mid grey falls where it should. But SDR is not 100 nit. It may be 200 nits or more. So that's considered in setting SDR grey. We do all this in a perceptual space, so the colors track all the way through. The trick is the gamut mapping for colors that don't fit. It's quite simple. The model also contains Hunt compensation for consistent color between SDR and HDR. Luminance is key. It's important that it continues to look brighter as we go up the ramps.
  • Alex Fry: Can you say what model you use?
  • William Feightner: It's based on ZCAM, but with adaptations to tune it for what we're doing. I don't know how much I can share, but we want to support ACES. We have our parametric LMT that's in beta in various products.
  • Alex Fry: What do your looks target, conceptually?
  • William Feightner: 10000 nits AP1 virtual reference display, with D65 white. That's a reasonable size and can be supported by 3D LUTs.
  • Alex Fry: Conceptually this is all very similar to where we're at. Some differences are down to arbitrary choices. How much desaturation in Rec.709 vs HDR, maintaining color vs intensity, etc. You're making different trade offs. Some like as much color as possible at the top. Others want less than we have now.
  • William Feightner: We start with HDR and then say "how close can we get to that in SDR" rather than starting with the SDR look that people are used to. SDR and HDR are really a continuum. And the environment is very important. Dark, dim, or bright makes a big difference. Dolby addressed that with local rendering. A 300 nit LED wall in a dark cinema environment is very different to the same wall in a brighter "dinner cinema" environment. People need to be educated about that. People have a narrow perspective comparing the candidates to manufacturers' LUTs, a lot of which are broken with big artifacts. But it's what they are used to. The big question is do all the versions give you the same feeling and look, and you can trim from there. As well as the choices you can make, the candidates I looked at had some bumps and miscoloration. But maybe that's improved in later versions.
  • Alex Fry: We currently have candidate C as our concept, but we are swapping the CAM from ZCAM to a modified Hellwig 2022 model. We've been working on things that are different between the models. My latest repo has rolled the variations with linear extension and the new compression model into a single code path. We have the HK extension, and the option of modified primaries. I have some plots of JMh to xy with the different options. Colors in Red Xmas tear apart because they fall across the kink in xy in the original Hellwig model. Last week I was moving the gamut mapping focus distance way out, to minimise changes in J. Now the values are more similar to start with, they don't get pulled apart. Compress mode seems quite successful, but currently some images cause major crashes with it. I need to investigate a suggestion Pekka sent. Nick has a separate fork, with an alternative linear extension.
  • Nick Shaw: Not much to say. It uses a two part sRGB-style curve, with parameters picked to approximate the 0.42 exponent used in the Hellwig model (a sketch of such a curve follows these notes). It doesn't seem to make any significant difference.
  • William Feightner: How do these compressions invert out?
  • Alex Fry: I've not tested in depth, but it should invert. On the subject of highlight desat, we have a gain control for it, but it's based on the delta between the compressed and uncompressed values, and compresses the M value.
  • William Feightner: It looks a bit clumpy to me. I feel the start should be that the HDR and SDR look the same. But that may not be what everybody wants.
  • Alex Fry: The trade off is if it looks the same color but doesn't communicate the intensity, you're losing a different sort of information. It's a choice. Do we prioritize brightness or colorfulness? In SDR you can't have both.
  • William Feightner: I can render a different version with our engine, that won't go white, but makes a different choice.
  • Alex Fry: Going forward, the thing will be what you can share, because we're trying to do the same thing.
  • William Feightner: I haven't looked much at the Hellwig model. But I will.
  • Alex Fry: The Hellwig model is mathematically simpler, and we have better contacts with the people who developed it. All the models work well with plausible colors, but we need to deal with implausible values from poorly formed IDTs or whatever, and put those values somewhere sensible. We want not to need the gamut compressor which you had on.
  • William Feightner: We don't need it on, but for grading it's easier to have colors constrained to the grading space. I think the stock aggression parameters are a little aggressive, and it could be tuned differently if it wasn't fixing issues in the RRT/ODT.
  • Alex Fry: I see Christophe linked to Troy's tweets. I need to look at his latest renderings. What's the latest on the tone scale? I'm still using the version of the curve from the candidates in July.
  • Scott Dyer: Has anybody checked the derivative of the curve? I posted a question on that.
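Nick's two-part curve experiment can be sketched as follows: a pure power with exponent 0.42 above a break point, and a linear segment through zero below it, with the break and slope derived from C1 continuity. The offset value is illustrative; the actual parameters in Nick's fork may differ.

    def two_part_power(x, g=0.42, a=0.055):
        # sRGB-style two-part curve approximating x**g at and above the
        # break point, with a finite slope through zero below it.
        x_b = (a / ((1.0 - g) * (1.0 + a))) ** (1.0 / g)  # break point
        s = (1.0 + a) * g * x_b ** (g - 1.0)              # linear slope
        if x >= x_b:
            return (1.0 + a) * x ** g - a
        return s * x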

Meeting #71, October 5th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Chris Clark
Michael De Caria
William Feightner
Alex Forsythe
Francesco Luigi Giardiello
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: Alex has some updates on the current candidate, and there were some posts on different ways of dealing with linear extension or an alternative.
  • Alex Fry: I've added a toggle for the HK extension to the model in Hellwig and Hellwig linear extension modes. I've fixed some inversion bugs. The break point for linear extension is now a triple of values. I have added diagnostic modes to look at the image state in JMh at various points in the chain. Mode 1 shows the input image in JMh before tone mapping. Mode 2 shows tone-mapped JMh, with highlight desat if enabled. Mode 3 adds the gamut compression. This is for me to make it easier to visualize what's happening. Linear extension and original Hellwig don't necessarily match even when L_B is set to zero. It still switches to linear at zero, so you have to set it very negative. I've noticed that the Hellwig variations pump up noise at the bottom that doesn't happen with ZCAM. It's most obvious with Red Xmas.
  • Nick Shaw: Is that affected by HK?
  • Alex Fry: It changes, but is present with and without HK. I've noticed Hellwig produces very high M values at low J, where ZCAM doesn't.
  • Nick Shaw: Although your plot is JMh in whatever the current model is, and ZCAM M and Hellwig M aren't necessarily the same thing, or scaled the same.
  • Kevin Wheatley: What is the effect on the noise of raising L_B in the linear extension?
  • Alex Fry: It rises first, then falls off.
  • Kevin Wheatley: The way the extension currently works (not passing through zero) it raises things as you raise L_B.
  • Thomas Mansencal: You could maybe force it to pass through zero.
  • Nick Shaw: That was what I was trying with my sRGB style 2-part curve experiment – something that approximated the power curve used in Hellwig in the same way sRGB approximates 2.2 gamma.
  • Thomas Mansencal: That's what CIECAM16 does. Maybe we could try that.
  • Nick Shaw: In my experiment I just went through Alex's code replacing spow with a corresponding 2-part curve, using the monCurve function from CLF. I just found some values that roughly match the 0.42 exponent in Hellwig.
  • Lars Borg: Have you checked if there are any places where there ends up being a gain between pixel values when you expect a compression? Maybe it's just that the source is noisy and we are amplifying it, when camera manufacturers expected it not to be seen because it is so far out of gamut. Gamut mapping can amplify noise. If you have an out of gamut blue, and the blue channel is the noisiest, when you desaturate you add noise from the blue into the other channels. The green channel is a 10x contribution to luminance compared to blue, so noise is more visible (the BT.709 weights sketched after these notes show the ratio).
  • Nick Shaw: Can your diagnostic show pre-compression LMS values, to see if adjacent pixels are falling either side of a particular threshold, so being treated very differently?
  • Alex Fry: Not currently. I'll figure something out.
  • Lars Borg: I have some test images for noise that may be helpful.
  • Thomas Mansencal: Does Blink allow you to pass a struct around for diagnosis?
  • Alex Fry: I don't know. Blink documentation is sparse.
  • Kevin Wheatley: The variations we are trying are to handle values around zero and into negative. We should investigate which is best to use, rather than try to fix all of them. In the thread Nick suggested making the curve pass through zero. I said there were many options for the form of the fit. Then Pekka pointed to Björn's approach, which compresses values to all be positive before the non-linear curve, and then decompresses them afterwards. Thomas posted a plot of this approach with Hellwig. It does slightly distort in-gamut values, but produces much straighter lines outside the gamut.
  • Lars Borg: Does the curve mirror about zero? Have you tried a linear start like sRGB, which has a limit on the slope at zero?
  • Nick Shaw: That was what my experiment did. It's in my fork of Alex's repo, if people want to look. But it's work in progress and doesn't include his latest bug fixes.
  • Lars Borg: Several camera log encodings use a linear segment at the start.
  • Kevin Wheatley: We need to look at the alternative ways to remove the wiggles. Then we need to look at the source of the noise to ensure we are not introducing new problems by fixing another one. If we drift too far, we risk never finishing. It is possible that the values fall on a bad part of the tone-scale which brings out noise.
  • Alex Fry: It does seem to be the gamut compression that brings out the noise.
  • Nick Shaw: That suggests there is something in the gamut compression that affects some pixels dramatically and leaves their neighbors untouched, so exaggerates differences.
  • Kevin Wheatley: Is it still compressing towards middle grey, so some values go up and some go down?
  • Lars Borg: You don't want to brighten noisy blue-blacks.
  • Nick Shaw: Can you disable that with the focus point control? Or mid-cusp blend?
  • Kevin Wheatley: We also had some feedback about reaching desired colors though the different candidates. We don't necessarily need 100% output gamut, but C was felt to be limiting for certain logos. Last week we thought there needed to be more highlight desaturation to mask artifacts, but that might compete with the gamut filling requirement.
  • Thomas Mansencal: That's more of an issue for graphics and CG.
  • Joshua Pines: We often have this issue with studio logos when everything has to go through one single view transform (not just ACES).
  • Kevin Wheatley: It's a regression, because it can be done with current ACES.
  • Thomas Mansencal: Within limits. There is that yellow edge.
  • Alex Fry: I wondered if it was impossible to produce that red through candidate C at all, or just impossible to produce it from AP1? It may need negative AP1 values.
  • Kevin Wheatley: Chris is suggesting a limited inverse transform for logos, which is less aggressive. Would we want to offer this kind of alternative for general use?
  • Alex Fry: We need to have suggested solutions for different situations. Like we currently have sRGB Output and sRGB Texture.
  • Joshua Pines: Having a custom LMT with ACES that everything has to go through makes things even harder with logos. There's a debate about if the LMT should be baked into the graded archive. People flip-flop on that.
  • William Feightner: Invertibility is important, because being able to do that has allowed us flexibility with ACES 1.x.
  • Alex Fry: We do need invertibility.
  • William Feightner: Looks are often destructive, so the rendering should be invertible, and the more aggressive look is in the LMT. No one look is right for everybody. That's why we have LMTs.
  • Kevin Wheatley: We discussed the idea of a default LMT.
  • William Feightner: ColorFront uses an LMT and then tone and gamut maps that down to whatever display.
  • Alex Fry: We are trying to have no look in our rendering, so people can use what LMT they want. On the subject of the gamut compression focus distance, cranking it way up reduces noise.
  • Nick Shaw: So focus at infinity would make compression be in M only, with no effect on J.
  • William Feightner: I just looked at the Red Xmas image in Rec.709 through the ColorFront rendering. I don't see any artifacts. I'd be happy to demonstrate our approach in our next meeting.
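Lars' 10x figure checks out against the BT.709 luminance weights:

    # BT.709 luma coefficients: Y = 0.2126 R + 0.7152 G + 0.0722 B.
    # Green contributes roughly ten times what blue does, so noise
    # pushed from blue into the other channels becomes far more visible.
    R_W, G_W, B_W = 0.2126, 0.7152, 0.0722
    print(G_W / B_W)  # ~9.9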

Meeting #70, September 28th, 1pm PT

Attendees

Kevin Wheatley
Nick Shaw

Rémi Achard
Daniel Brylka
Michael De Caria
Alex Forsythe
Francesco Luigi Giardiello
Christopher Jerome
Shebanjah Klassen
Jeffrey D Mathias
Carol Payne

Meeting Notes

  • Kevin Wheatley: Unfortunately Alex and Scott can't be here this week. Alex has something he will show next week. Francesco does have something he can show us.

Francesco showed a series of images taken with an ALEXA 35 and a RED KOMODO, lit with an Aputure RGBWW LED fixture and a "meeting light". These were monitored in SDR and 600 nit HDR, through a pair of LUTs created by Nick Shaw in Nuke, using Alex Fry's V16 CAM DRT. The DRT was left at default values, except the CAM used was changed to Hellwig 2022 with linear extension, and the LMS matrix was changed to the one proposed by Thomas Mansencal. It was subsequently noticed that the default value for highlight desat was 0.3, whereas a value of 1.3 would be needed to produce a closer match to the previous Candidate C. The LUTs were flattened transforms, going from camera log to display, and including the ACES 1.3 Reference Gamut Compression.

The images were not exposed with a meter, but rather set by eye to give a good visual match between the image on the HDR monitor and the actual scene. The scene contained two ColorChecker24 charts (one "Classic post 2015", and one of the new Calibrite editions) and a person for skintone reference.

The shots cycled through various lighting states, including highly saturated primary colors, and Francesco described his impression of the similarities and differences between what he saw in the scene and on the monitors.

Since it is not practical to transcribe all Francesco's observations, viewing the recording of the meeting is necessary.

Meeting #69, September 21st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Chris Clark
Sean Cooper
Bill Feightner
Alex Forsythe
Francesco Giardiello
Christopher Jerome
Thomas Mansencal
Jeffrey D Mathias
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: There have been some posts on ACES Central about tonescale, which triggered some discussions. There was an issue with the parameters we used for Daniele’s curve causing mid grey to start decreasing after a certain point. Pekka posted some parameter values which solve the issue, and also better match the previous candidates. I proposed we should analyze what that curve does when inverted, passing all 12-bit values through the inverse and forward curves, to see if we get them all back (a sketch of such a test follows these notes). We don’t need perfection, but practical inversion.
  • Scott Dyer: The function outputs luminance. 12-bit PQ won't all be used for e.g. 100 nits. What do we want it to do?
  • Nick Shaw: I imagined it would be 12-bit SDR, because the LUTs people want to invert are SDR.
  • Kevin Wheatley: I just thought 12-bits of whatever. At least more than 10. I notice the curves for lower intensity displays flatten more, so may clump more. Losing a couple of values top and bottom may be ok. We then maybe tweak values. As we move towards final, how do we measure against our requirements? If we test the various parts of the transform we may understand where issues we could see might come from. It would be good to collect our requirements in a post.
  • Nick Shaw: Is that specifically tonescale requirements so we can lock that down.
  • Kevin Wheatley: That would be good to start with. That’s one area people think the existing transform has problems.
  • Nick Shaw: On the subject of tonescale, I’ve made a DCTL implementation of tonescale only applied to a monochrome luminance based image. You can select SSTS or Daniele’s curve (with Pekka’s values) and see either 100 nit BT.1886 or PQ at 100, 250, 500, 1000, 2000 and 4000 nits. It’s in my repo. Be aware that other stuff like ZCAM in there is an out of date version. So you can look at an image through the tonescale, without the distractions of color.
  • Alex Forsythe: I think that’s a very useful thing to look at.
  • Pekka Riikonen: Have we decided on 128 for peak white in SDR? I think it’s ok.
  • Scott Dyer: It seems reasonable.
  • Chris Clark: Graphics is a big use case for inversion. Has anybody tested where SDR white comes out in HDR after inversion? There are a lot of opinions on this: 100 nits, 203 nits, etc.
  • Nick Shaw: You mean if SDR peak inverts to 128 scene-referred, where does that land at e.g. 1000 nit HDR?
  • Chris Clark: Yes. And perhaps a version of the inverse output transform could expose the knobs to control this.
  • Alex Fry: Because it’s a CAM, intuitively it should create a match between different peaks, but we haven’t specifically tested.
  • Kevin Wheatley: In HDR it will run out at wherever 128 maps to.
  • Alex Fry: I think grey scale will be ok, but what’s important for logos is what it does with saturated colors. That’s where funny inversions could catch you out.
  • Kevin Wheatley: If that’s it for tonescale, let’s move to color.
  • Alex Fry: I’ve bolted Hellwig with linear extension into my Blink DRT. The linear extension alone does not solve the blue-screen image clamping to black on the non-plausible blue, so I’ve added options to use an alternate LMS matrix proposed by Thomas, or to move the effective primaries around manually. It does make the blue swing to purple, though whether that should happen is debatable. We are diverging from the model when we do this. I also have a control for the break point for the linear extension. It’s currently the same for all channels. If you crank that up too far it collapses. I’ve also fixed some bugs, e.g. with the CAT. It currently uses the Daniele parameters from a while back. We need a versioning system to refer to different parameter combinations. I’ve made up a “Macbeth” with synthetic Lapis and laser primaries, rendered through the RICD. It could be useful also to render it through e.g. a synthetic ARRI.
  • Nick Shaw: And what about an ALEXA 35?
  • Sean Cooper: It should be better! I will look at providing some sample images from the ALEXA 35 for testing.
  • Thomas Mansencal: I could render the lapis through various camera SPDs to see where it lands.
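[For illustration, a rough Python sketch of the linear extension Alex describes: below a break point, a CAM16-style compression (the published form Hellwig 2022 builds on) is replaced by its tangent line, so small and negative values extrapolate linearly instead of mirroring. The 0.18 break value is a stand-in for the exposed control, and a per-channel version would simply pass a different break per channel:]

import numpy as np

def compress(x):
    # CAM16-style post-adaptation compression (constants from CAM16)
    t = np.power(x, 0.42)
    return 400.0 * t / (t + 27.13)

def d_compress(x):
    # analytic derivative of compress(), used to build the tangent
    t = np.power(x, 0.42)
    return 400.0 * 27.13 * 0.42 * np.power(x, -0.58) / (t + 27.13) ** 2

def compress_ext(x, brk=0.18):
    # tangent line through (brk, compress(brk)) used below the break point
    lin = compress(brk) + d_compress(brk) * (x - brk)
    return np.where(x < brk, lin, compress(np.maximum(x, brk)))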
  • Alex Fry: The Red Xmas image is one that rendered reasonably through ZCAM, but falls apart with the new one when you crank up the linear extension break point. I need to look into that. There may be bugs. That collapse happens with a linear ramp too.
  • Nick Shaw: The linear extension at a tangent is ending up making values that were zero positive, isn’t it? Could that be causing problems?
  • Sean Cooper: Have you been looking at HDR as well as SDR displays when tweaking this? Focusing too much on the result in SDR could cause problems in HDR.
  • Alex Fry: I would expect it to behave the same in all ranges. But I’ve mostly been looking at code, not images recently.
  • Kevin Wheatley: We have only been looking at the 2D CIExy plots. Could something be happening in the 3rd dimension that we’re not noticing?
  • Thomas Mansencal: It’s pretty stable as J changes.
  • Kevin Wheatley: So Alex will carry on checking his implementation. Sean will provide some images. We’ll try putting a 12-bit ramp backwards and forwards through the tone scale. Anything else?
  • Thomas Mansencal: I would be keen to test the extension of Hellwig with HK compensation. It’s only a few lines of code.
  • Francesco Giardiello: I have set up a test at Netflix with real cameras and viewing a set live and through the candidates. We’re doing it again on Friday in London, and anybody is welcome to come along. We have an ALEXA 35 and a Komodo, and various lights.

Meeting #68, September 14th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Alex Forsythe
Christopher Jerome
Thomas Mansencal
Jeffrey Mathias
Carol Payne

Meeting Notes

  • Kevin Wheatley: We have an update from Scott on the tone-scale, and Alex has an update on his Blink implementation.
  • Scott Dyer: I posted my tone-scale tests a couple of hours ago. Daniele had posted a new Desmos with log-log scale and an explanation. He has nice expressions for "hitting the roof" and mid grey increase with luminance. I compared Daniele's defaults to Jed's MMDC, ACES 1.0 and Kevin's median data. I adjusted Daniele's parameters to better match what we currently have, and changed the y-axis to PQ to see shadows better. Currently all HDR curves put mid grey at 15 nits. Dolby Cinema has 7.8 nit grey. Where would grey go for e.g. a 600 nit display? In v1 SDR was +/- 6.5 stops. Daniele's curve hits peak at a scene value of 384 (~11 stops above grey). How much tuning do we need to do before locking the curve and focusing on color rendering? I have only plotted, not looked at pictures. We need people to agree it looks ok at the various peaks with my tuned values. RAE said we wanted rationale for all the magic numbers. Is this enough? Do we need to hit very specific numbers? Exactly 10.0 for SDR grey?
  • Nick Shaw: Documenting choice of numbers is great, but there is creative choice here. Numbers coming from a documented continuous function would be ok by me, rather than how we currently say "this is 10, this is 7.8 and all these are 15" with no reason given. There's no line you can plot through those. Although there is still the issue of do we use the same curve but scaled for 100 and 48 nits. How do we justify that?
  • Alex Forsythe: I imagine a document like the ACES white point document, describing the origin of the numbers.
  • Scott Dyer: I like that we have a curve that defines these numbers, and it's based off the Michaelis-Menten curve. It may be ok to say we came to this as a good middle ground after testing.
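[For reference, the underlying Michaelis-Menten form, before Daniele's additional terms: $f(x) = f_{max} \cdot x / (x + k)$, which passes through zero, has initial slope $f_{max}/k$, and approaches $f_{max}$ only asymptotically.]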
  • Kevin Wheatley: Historically mid grey is about 10 because that matches background illumination. It's less clear for HDR. Some people have a lower value for HDR. Some have the same. In terms of cinema, there is only the screen as illumination in the room (ignoring safety lights) and that's why you set mid grey to average picture level of 4.8.
  • Nick Shaw: Isn't that a bit circular? You're setting mid grey to average picture level, but it's only the average picture level because of where you set mid grey.
  • Kevin Wheatley: It comes from the peak being transparent film which produces 48 nits, and dividing that by 10 is historically where you would be. It's not an equal density value for mid grey, but it visually pretty much is. I'd be happy starting from video, and saying for 100 nits 10 is what people have always done. My average data is almost exactly 10. The rest is preference.
  • Alex Forsythe: For cinema we can look at film. There are target densities for LAD on print film. Then calculate or measure the factor that will produce a given luminance off the screen. But it's usually ~10.
  • Alex Fry: With the current behavior, if you set peak to 48 instead of 100, is there a value of $w_g$ that gets the curve the same, so we can avoid the current post scale?
  • Kevin Wheatley: You can ask if in a dark room it should match exactly, because from a color science perspective it shouldn't.
  • Scott Dyer: We used the same curve because we thought "this is the SDR curve". 48 is about half 100, and 4.8 is a 10th of 48, so at 100 it's 10. It just works. That's how I looked at the candidates, using the 100 nit curve, but on a projector.
  • Alex Fry: It's how we have always done it. But what if you have something like a 250 nit SDR screen? Or a 203 nit SDR, which seems to be floating around.
  • Nick Shaw: That comes from broadcast, where they want an HDR screen with 203 nit diffuse white next to an SDR display in the truck. There's a lot of political debate about that. The ITU spec suggests using a 400 nit HLG screen, which puts diffuse white at about 100 nits. We don't have people looking at both side by side.
  • Alex Fry: Sometimes we do!
  • Nick Shaw: I feel you should go into the SDR room, come out and refresh your eyes then go into the HDR room and just "feel" like you're watching the same movie. Not see them side by side.
  • Carol Payne: Or start with HDR then look at SDR.
  • Kevin Wheatley: So is it ok to set gamma higher to match the other candidates? In the feedback people were ok with the mid-tones.
  • Scott Dyer: I think ACES 1.0 has 1.5 or 1.55 for mid slope. 1.2 is certainly less and the plot is comparable to the candidates, so meets the requirement to be less contrasty.
  • Kevin Wheatley: So then how many stops are we expecting to show? And is that just at the upper end, or do we need to extend at the lower end.
  • Scott Dyer: There's a post on where different renderings hit peak. But we're doing relative black. What do we say? It just goes down? It just works?
  • Kevin Wheatley: So with relative black, do we say there are a fixed number of stops below grey, and if you want to show more you lift it? That seems right, as we don't want to pull up too much noise into the blacks.
  • Scott Dyer: We could change Daniele's $r_{hit}$ expression so it goes in stops.
  • Kevin Wheatley: It's logarithmic, but the numbers are arbitrary.
  • Scott Dyer: There's no justification in SSTS either. In SDR 6.5 seemed reasonable for the cameras of the day. Also how important is it to hit peak at a reasonable scene level? Or do we want to leave room to preserve detail?
  • Kevin Wheatley: This is a question for a scene lighting expert. Where do DPs put their blown highlights?
  • Alex Forsythe: HDR makes that harder. You would get as many answers as you have DPs.
  • Scott Dyer: We could make $r_{hit}$ start at 16.5, which is 6.5 stops, then go to maybe 11 stops as it is now. We need to look at pictures.
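[Worked example: expressing the peak-hitting value in stops $s$ above mid grey gives $r_{hit} = 0.18 \cdot 2^s$, so $s = 6.5$ yields $r_{hit} \approx 16.3$ and $s = 11$ yields $r_{hit} \approx 369$, consistent with the scene value of 384 (~11 stops) quoted earlier.]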
  • Alex Fry: I suggest starting with what you have now and looking at pictures.
  • Kevin Wheatley: 128 is a lot more stops, but cameras developed since ACES 1.0 have more dynamic range. But we don't want people finding their camera's peak is low on the scale. If we look at where current cameras peak, we should go above that at least for SDR.
  • Nick Shaw: Should I write a DCTL version of this curve for people to look at? How many of these parameters should be exposed?
  • Kevin Wheatley: I would give them a fixed set of numbers, with constants behind the scenes. And a few controls.
  • Nick Shaw: So just SSTS and this curve, with a few peak nit options?
  • Kevin Wheatley: My only concern is that all these curves flatten. At what bit depth are they effectively clipped? We need to test whether at a certain bit depth we can still distinguish the last couple of code values. If we can, that satisfies the invertibility problem. Do you have to scale the curve up by a small "fudge factor" above peak, so you don't hit the flat section?
  • Nick Shaw: Doesn't that happen already with the "where it hits 1.0" parameter? If it hits 1.0 at a finite value, values above that produce output above 1.0, which you can choose to clip or not. But yes there could be quantization at a particular bit depth in PQ, say.
  • Kevin Wheatley: With an image at a given bit depth, if you go backwards through that curve, you ideally want unique values.
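[A minimal sketch of that uniqueness check, again with a stand-in curve and linear quantization; a real test would quantize in the display encoding, e.g. PQ:]

import numpy as np

def tonescale(x, k=0.18):
    return x / (x + k)                       # stand-in curve

bits = 12
x = 0.18 * np.power(2.0, np.linspace(-10.0, 10.0, 2 ** bits))  # +/-10 stops
y_codes = np.round(tonescale(x) * (2 ** bits - 1)).astype(int)
collisions = len(y_codes) - len(np.unique(y_codes))
print(collisions, "inputs collapse onto a duplicate output code")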
  • Jeffrey Mathias: [from text chat] Maybe minimum of 6 stops below and 8 above would be useful, but a few more on top might be handy sometimes. For bits might I suggest either 12 or 14... 10 too little for HDR and 16 is nice but maybe can do without.
  • Alex Fry: I have a progress update. I have a version of Hellwig with the linear extension. I have a parameter for the break point. It still collapses around blue. Thomas proposed a different matrix to stretch the blue corner. It's basically working but not yet inverting properly. So I haven't yet bolted it into the full DRT.
  • Kevin Wheatley: Jeff is asking about the red car post. We don't know the origin of the image. The red looks pretty red, and with ZCAM it goes orange.
  • Alex Fry: I assume it's a well out of gamut red.
  • Alex Forsythe: We need to get the source and track it down.
  • Alex Fry: The linear extension may well help with that.
  • Kevin Wheatley: I'm hoping that may solve many problems, even if it makes the model fit worse. It's a trade off. And moving the blue focus seems reasonable. There has been no psychometric testing that far out.
  • Thomas Mansencal: My matrix makes sure the singularity happens outside AP0. I calculated the effective primaries from the current matrix and then moved the blue one and calculated a new matrix. And I have the new version with HK effect accounted for, which may be worth adding in.
  • Nick Shaw: How much complexity does that add?
  • Thomas Mansencal: It's just a few lines of code.
  • Alex Fry: Right now there is a single value for the linear extension. It might help if there could be a different value on the blue side.
  • Nick Shaw: Are the effects on the two sides the same? Blue collapses to a singularity, where the yellow side has the mirrored curve.
  • Alex Fry: Lapis was mentioned. If we got spectral data for that we could render its ACES values. Better to simulate the RICD than use a real camera. It's a hard color for a real camera to capture.
  • Thomas Mansencal: I have an SPD for it.
  • Kevin Wheatley: So Alex needs to fix the inversion and try the HK extension and modified primaries. Maybe different thresholds for the three LMS channels.

Meeting #67, September 7th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Francesco Giardiello
Thomas Mansencal
Jeffrey Mathias

Meeting Notes

  • Kevin Wheatley: No completed items yet. Alex has a Nuke implementation, Nick has some DCTL questions. Scott has some tonescale stuff.
  • Alex Fry: I have a Blink implementation which includes selection between ZCAM and the Hellwig 2022 CAM. It has SSTS, Jed curve and Daniele curve. It's just the basic Daniele curve, without people's recent experiments. It no longer passes through IzAzBz. The image looks slightly warm. People mentioned that in feedback but I think I made it worse. Maybe missing a CAT. I have added an inverse [in fact this does not work yet]. It has a problem with out-of-gamut blues which collapse to black with the Hellwig model. I haven't added the linear extension yet. My next step is to add what Luke and Thomas have been experimenting with.
  • Nick Shaw: Switching between CAT types seems to have no effect.
  • Alex Fry: The CAT may not be wired up for Hellwig. We may just be looking at D60 on screen. The whole thing may be jumping back and forth more than necessary. I can share the code, but it's rough and ready.
  • Nick Shaw: I've made a start on a DCTL version, but nothing that works yet. I tried to follow the BlinkScript structure to make following your updates easier. But DCTL is too different – no global variables or init() function. I've played with a UI which is a simplified version of Alex's, because DCTL is limited in UI design. How much should we expose? Depends if it is for experimenting in this group, or for a wider audience.
  • Kevin Wheatley: I'd leave all the sliders exposed for now.
  • Alex Fry: Even when using other curves, the SSTS mid point is used for the focus of the gamut compression. We need a way to calculate that value.
  • Kevin Wheatley: Jeff suggested multiple DCTL nodes in Fusion.
  • Nick Shaw: Fusion isn't real-time, and the idea is to kind of follow the Blink and have it all in one. The DCTL ZCAM runs 1080p in real-time on the color page.
  • Kevin Wheatley: So the next thing is to add the linear extension and look into the color weirdness.
  • Thomas Mansencal: Should we change the XYZ to LMS matrix too?
  • Kevin Wheatley: To widen the LMS "primaries". Maybe a combination of the two methods.
  • Alex Fry: The code needs cleaning up.
  • Kevin Wheatley: That's less important for now.
  • Alex Fry: I need to look at Thomas' repo of Luke's latest work. Should be easy to bolt in.
  • Thomas Mansencal: I'll try to also implement the extra correlate to compensate for HK effect. You could maybe use that for the tonescale instead of J.
  • Scott Dyer: Alex, are you still using the tonescale that was in the candidates?
  • Alex Fry: Yes, that's Jed's dual contrast "spring" version, called MMSDC in the ToneScale drop-down.
  • Scott Dyer: I've been trying to make sense of doing HDR in Daniele's curve. I want to plot them all overlaid, including ACES 1.0 and Kevin's average data. For what scene value hits peak, Jed used these values to guide his fit. And Pekka posted some values from other renderings. ACES 1.0 uses 6.5 stops for SDR and interpolates for HDR. I'm not quite sure what to change in Daniele's parameters.
  • Nick Shaw: Do you need to change $n_r$ as well as $n$?
  • Scott Dyer: I think so, but I'm not quite sure how. I think the $r$ value is the scene value that hits 1.0. I'm working on values for HDR to make middle grey exposure rise continuously with peak luminance, as well as including more scene range.
  • Kevin Wheatley: That is "a behavior". Is it the behavior we want. People have expectations for middle grey, so driving it just from peak luminance is not necessarily right.
  • Scott Dyer: In ACES 1.0 we don't have a continuum. We have two values for SDR and HDR (plus 7.2 nits for Dolby cinema). I think there should be a continuum. And more scene range. People generally want their HDR picture brighter. Daniele has a w value for exposure. Right now SDR includes +/- 6.5 scene stops. It could be wider, but reasonably low for SDR.
  • Alex Fry: You also don't want to reveal garbage, where the source is clipped relatively low. In HDR you have to do something to hide that.
  • Scott Dyer: I like the simplicity of the curves. I haven't implemented the derivative, but I will. I think we can get to a curve that works for all dynamic ranges.
  • Kevin Wheatley: So a bit more work on tonescale and a bit more on colors will get us closer. Anything else?
  • Alex Fry: The linear extension will keep me busy.
  • Kevin Wheatley: We need to lock down some things before working on others as they all interact.
  • Alex Fry: The highlight desat behavior changes significantly between ZCAM and Hellwig. It may be my mistake. It needs investigating.

Meeting #66, August 31st, 1pm PT

[Meeting Recording] (Password:  z^@=2.Rx)

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Remi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Michael Harbour
Christopher Jerome
Asuka Kinney
Michael Maloney
Thomas Mansencal
Jeffrey D Mathias
Vinh Q Nguyen
Carol Payne
Pekka Riikonen
Blake Sloan
Jed Smith

Meeting Notes

  • Kevin Wheatley: This is our first meeting in Zoom. We have summarized and anonymized the survey results. You may agree or disagree with what is said, but it is the context for future decisions.
  • Alex Forsythe: ACES leadership is keen to release ACES 2.0 by the end of the year. We need to think what we need to do to aim for that.
  • Kevin Wheatley: We need to work on the tone scale. There is a theme to what people have said on that, and the candidates have the same tone scale. Some feedback may be due to a bad setting, but generally most think the mid-tone contrast is ok. In the highlights people see a gray wash, due to our soft roll-off compared to before. But some say it is more contrasty, which is interesting.
  • Alex Fry: The "more contrasty" may be an outlier due to a configuration error.
  • Thomas Mansencal: I saw mention of blacks being "crushed".
  • Kevin Wheatley: The range of scene values we display may be different to before. Or the slope is different. It would be good to figure out if there is a way to adjust that in the curve, and observe the result. It didn't jump out as wrong to me. ACES 1.x is more of a final look, where the candidates are not. I expected people to say it's flat. So the highlight comments make sense. Maybe we backed off too far, but the point is to be a starting point from where people can get where they want.
  • Alex Fry: It will be useful next time to explain more why it is as it is. 
  • Carol Payne: Going forward we should say what we're doing and why. Feedback we got said it was hard to judge because people didn't know the intentions.
  • Nick Shaw: We did ask "can you get it where you want?" but a lot of people are talking only about initial appearance.
  • Scott Dyer: They are comparing 6 renderings – HDR and SDR for each of the 3 candidates. Aiming for the end of the year, we need a release candidate soon. What questions do we need to answer for that? The biggest question for me is which blue is preferred. C is quite different. We can omit B. Is the look of blue in A because it has no gamut mapping? If one is different to the other, can we put a module from one into the other, simpler model, keeping our goals of simplicity and 100% invertibility?
  • Alex Fry: For people looking at HDR and SDR, C was preferred. SDR-only people preferred A. I think it comes down to skin handling, and the way C tried to maintain saturation at the top, particularly in SDR. Pekka's version was closer to the vibe of A. I think that improved on the things people didn't like about C, although it affected HDR/SDR matching. Blue handling is significant. Turning off gamut mapping lets blue collapse to the primary, but has downsides. The blue appearance is fundamental to how C works.
  • Carol Payne: We had people say they needed to connect it to a real scene they saw with their eyes. We should get DPs and colorists in front of a set, and then show them the rendering. We could help support this kind of test.
  • Nick Shaw: Is it worth making some camera log to display LUTs, so people can load it in their own camera, and see reality and a rendering?
  • Alex Fry: We've missed SIGGRAPH. Are there any other trade shows where we could do something?
  • Carol Payne: Camerimage? In Poland.
  • Alex Fry: I made some LogC LUTs for J to test. We could add those to the package if it's useful.
  • Michael Maloney: People in camera tests put up a SkyPanel and look for gamut artifacts. Is gamut mapping included in candidates or should we use the RGC with them?
  • Alex Fry: A and B just clamp to display, but C has display gamut mapping. But that is what causes the cyan shifts.
  • Michael Maloney: Our testing has leant towards A, but we have not done HDR testing.
  • Alex Fry: Maybe we need to explain what is different about A and C. It doesn't need to be blind any more.
  • Kevin Wheatley: Scott, I assume we want a release candidate ASAP, because it takes time from there. How long did it take last time?
  • Scott Dyer: Last time we didn't provide it in Baselight / Resolve / OCIO. We invited people in. So this time it may be quicker.
  • Alex Fry: Alex, are we talking about wrapping the architecture group by the end of year, or implementation too?
  • Alex Forsythe: The desire is for a final release to implementers by year end.
  • Scott Dyer: Including documentation and everything else for ACES 2.0?
  • Alex Forsythe: Expectation is everything! But there will have to be discussions with leadership.
  • Kevin Wheatley: Handing over to an implementation group may be achievable. Working backwards, we need a good handle on things in 4-5 weeks, say. So we can't progress 2 candidates much longer.
  • Scott Dyer: So let's pick one.
  • Carol Payne: I'm looking at notes from somebody who didn't complete the survey, but looked at a CLED in HDR. They said A was most consistent, but C would need a default LMT. We should pick the most adaptable one.
  • Thomas Mansencal: And simplicity, so it can run on GPU.
  • Alex Forsythe: The one people thought was worst may be most flexible. It's what we can do with them, not how they fall off the truck.
  • Blake Sloan: Are we abandoning the idea of an RRT? Will that be the LMT that matches 1.x?
  • Kevin Wheatley: Notionally OT = RRT + ODT. People wanted something flatter to start, but that means now people say "it's not very punchy!" It's easier to add contrast than remove it. We are replacing the RRT + ODT with OTs for each device.
  • Blake Sloan: If there was a display filling your FoV and able to reproduce the sun's brightness, the RRT would be unity.
  • Kevin Wheatley: Philosophically maybe yes. The RRT is to deal with not having infinite range. But movies never look like reality. We're making things look pretty, not "correct".
  • Carol Payne: If we can make 80% of people happy, and the other 20% can get to what they want, maybe with inverses.
  • Blake Sloan: There were no inverses supplied. Can all be inverted?
  • Alex Fry: They all can be, if you use the BlinkScript versions in the repo. A and B are simpler, and analytically invert. C needs an iterative solver. 
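[A minimal sketch of the kind of iterative solve such an inverse needs when there is no closed form: bisection on a monotonically increasing forward function. The curve here is a stand-in, not candidate C:]

def invert(forward, y, lo=0.0, hi=1.0e4, iters=60):
    # bisection: narrow [lo, hi] around the x where forward(x) == y
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if forward(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = invert(lambda v: v / (v + 0.18), 0.5)    # recovers 0.18 to ~1e-14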
  • Nick Shaw: The LUT versions cover a particular range, but the procedural versions should invert perfectly.
  • Alex Fry: There are inverse settings in all the nodes.
  • Nick Shaw: Baselight and Resolve let you define custom OTs including inverses.
  • Michael Maloney: Clean inversion is important to us, as there is no one look to rule them all.
  • Kevin Wheatley: We discussed the requirement to reach all the corners. Live action may not need that, but animation or graphics may. Which is more tuneable, RGB curves or applying the curve to a brightness component? I think A would take more fiddling later if we change the tone scale.
  • Scott Dyer: I'm torn. People like A because it is familiar. But we've been asked to solve problems and make something new. That leads to C.
  • Alex Fry: I agree. C is deliberately trying to match different displays, rather than just taking arbitrary slices of something. And that is not even considering HDR. I feel there is a path to fixing the concerns with C. Pekka's work on desaturation helps. We need to figure out how to handle the blue hooking round.
  • Michael Maloney: C including gamut compression is beneficial on set.
  • Kevin Wheatley: We have some experiments looking at fixing blue issues. I'd like to understand if our HDR / SDR tone scale match is right.
  • Pekka Riikonen: I posted the latest version with Daniele's tone scale a couple of weeks ago, which should match SDR and HDR pretty well.
  • Kevin Wheatley: I was more interested in whether we are capturing the right part of the scene dynamic range?
  • Nick Shaw: The values the candidates use now are from Jed's curve?
  • Pekka Riikonen: Yes.
  • Kevin Wheatley: We need to understand and parameterize those, so we can tune them. Do we always take e.g. 10 stops either way? Or do we see more or less as display range changes? There's no right answer. Josh told us colorists do one or the other. We need to pick one.
  • Carol Payne: As Thomas said simplification is important.
  • Kevin Wheatley: Switching out the CAM in C would simplify it.
  • Alex Fry: Blake's point that an infinite DR display would show all the scene range suggests that is the limit, and it decreases as display DR decreases.
  • Kevin Wheatley: Daniele did some normalization, but we need to parameterize those to things you can measure about a display and environment.
  • Nick Shaw: Daniele's curve by default maps infinity to 1.0, but includes a scale so you can map it to something above 1.0, so it passes through that at some finite value with finite slope.
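[In symbols: the base curve $f(x) = x/(x+s)$ reaches 1.0 only at infinity, but scaling by $m > 1$ gives $g(x) = m \cdot x/(x+s)$, which crosses 1.0 at the finite value $x = s/(m-1)$ with finite slope; output beyond that can be clipped or not.]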
  • Kevin Wheatley: How do we get to a point where we can analyze that and get numbers which produce curves we can compare to other curves. Fixing the blue may be harder. I'd be interested to try Luke's updated model.
  • Thomas Mansencal: His later updates also include the HK effect.
  • Kevin Wheatley: I think we can make a version of C that uses it relatively quickly.
  • Alex Fry: It's not a straight swap, but it should be possible. It has run as DCTL which is promising for a simpler version running in real-time.
  • Kevin Wheatley: Can we look at the tone scale and the blue handling separately? Can we get volunteers for tasks?
  • Alex Fry: I will bolt Luke's model into the code.
  • Nick Shaw: I can look into a DCTL version to verify GPU real-time. There is still an iterative solve in the gamut compression. Ideally we find a curve fit approximation for a final version.
  • Kevin Wheatley: We should test if the model works first, then move on to things like simplifying gamut mapping, and seeing if it can cover the gamut, which may mean dialing back the gamut mapping.
  • Blake Sloan: We can test the BlinkScripts and inversion.

Meeting #65, August 17th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Daniel Brylka
Bill Feightner

Meeting Notes

  • Kevin Wheatley: We've had a few more responses to the feedback form. Do we have enough info to progress? I think we can reject one candidate.
  • Scott Dyer: Candidate B was thrown together at the last minute as an extreme for HDR behavior, to see preferences. Clearly people don't want their HDR turned up to 11. They want a feeling of a match. B was the candidate chosen by most for HDR and SDR "if you had to eliminate one". For "only choose one" C was preferred for HDR and A for SDR. But the number of votes is small. Lots of comments, often contradictory! On tone scale people like the mid-tone contrast, but often say it's too low/crushed in shadows. People generally like the SDR HDR match of C. Hue shifts were not too bad for all three.
  • Nick Shaw: Somebody commented on the Rec.709 and the Rec.709 sim as if they were different. So next round we need to be clearer what that is – for people with HDR displays, use the Rec.709 sim rather than Rec.709 so you can switch without changing display settings.
  • Kevin Wheatley: We had feedback by email about being clearer and giving fewer options so people can test in depth.
  • Scott Dyer: Hopefully next time it will be a single release candidate, and seem more final.
  • Kevin Wheatley: A clear winner would have been nice. We need to extract what people did and didn't like from the comments, and work out if it's a limitation of the method or could be emulated with another. I feel A and C could emulate each other in some respects. It may be due to gamut mapping present in only one. The model clearly gives us something in HDR / SDR matching that A can't give, as you would predict. If we could find what people thought was wrong with C, that made A better for them, maybe we could fix that in C and focus on that. I don't know if that's possible. Maybe we should try A with gamut mapping, maybe taken from C.
  • Scott Dyer: That's what I was thinking. We've also had other experiments going on. The main thing was: are we on the right path? It seems if we end up with something similar to A or C that would be fine.
  • Kevin Wheatley: We should snapshot the responses now, but leave the survey open for extra comments. We should start analyzing what we have. Look at e.g. tone scale and compare with existing and my average data. We should summarize the comments anonymously so others can see them, and people can find example images which illustrate things.
  • Nick Shaw: Some comments seemed confusing, maybe due to terminology, because it seems some thought it was more contrasty than current ACES, which it shouldn't be.
  • Alex Fry: It seemed inverted. Maybe it's a setup issue at their end.
  • Scott Dyer: It would be good to summarize for people, and highlight key points that come up repeatedly. Sorting contradictions will be hard.
  • Nick Shaw: We need to find themes.
  • Kevin Wheatley: We will publish the summary and say "we will drop B", but check for anything about B we need to consider. Maybe some issues with C can be resolved by switching out its CAM for a simpler one, but preserving the general idea. Depends how much weight we put on SDR / HDR matching. If that's important we need to fix C. Otherwise it's less clear. I don't feel we have a formal description of why the contrast is what it is. We should parameterize the curve with meaningful numbers. We should also discuss meeting cadence. Is a week long enough to summarize this?
  • Scott Dyer: Let's take a week off and post the summary before the next meeting.
  • Nick Shaw: Another thing that's happened in the meantime is ARRI released a new camera and new display transforms. Daniele commented that their HDR / SDR match was greatly improved from ALF-2. Something else to go up against.
  • Kevin Wheatley: Without copying it!

Meeting #64, August 3rd, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
William Feightner
Carol Payne
Pekka Riikonen

Meeting Notes

  • Alex Fry: Not much to discuss. A couple of extra survey responses. No meeting next week due to SIGGRAPH. There’s been some tone scale discussion between Pekka, Jed and Daniele. I shared a version of the ZCAM DRT with Daniele’s curve, where you can switch tone scales. The default is very close to the current MM curve for SDR.
  • Pekka Riikonen: I just wanted to make the HDR and SDR match the MM tone scale. So I bolted Jed’s model onto Daniele’s curve to calculate the parameters. Now exposure matches the MMTC. The discussion is what exposure hits peak luminance. I went for 10 stops for all. Jed has varying values. Daniele hits peak at infinity.
  • Nick Shaw: Jed picked values he liked and fitted a curve.
  • Pekka Riikonen: I did the same for 10 stops. I prefer a constant exposure at peak, e.g. ACEScct 1.0.
  • Alex Fry: So same dynamic range for SDR and HDR? Is that to get the same scene values from an inverse?
  • Nick Shaw: Because ACES has historically let you first grade SDR then switch to an HDR OT, if you have the same DR and grade SDR to hit peak white, that becomes 4000 nits when you switch to HDR. My gut says those extreme HDR values should only come from very high scene values which you wouldn’t push SDR to in a grade.
  • Pekka Riikonen: In that case Jed’s model would be better.
  • Alex Fry: People want to hit white in SDR.
  • Pekka Riikonen: Or we call it infinity and they land where they land.
  • Nick Shaw: Some broadcasters mandate graphic white at 100% SDR. That would invert to huge scene values, with the associated issues where they act like lights when you scale them. Jed’s values are just what he felt reasonable, and he fitted a curve. We could start from a function, and decide we add x stops from the scene per stop of peak white. Daniele suggested something similar for mid grey – a tenth of a stop per stop of peak.
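[One reading of that mid grey suggestion, for illustration only: a tenth of a stop of mid grey per stop of peak relative to 100 nits gives $g(n) = 10 \cdot (n/100)^{0.1}$ nits, i.e. roughly 12.6 nits at a 1000 nit peak and 14.5 nits at 4000 nits.]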
  • Pekka Riikonen: My curve matches Jed’s curve except slightly in the shadows.
  • Nick Shaw: That’s because Jed had his spring function spliced on at the bottom, and Daniele’s doesn’t?
  • Pekka Riikonen: Yes. I have a function increasing exposure with peak luminance. It closely matches the existing.
  • Alex Fry: It’s good to match what we have. So we fix the issue but keep the look. Please keep reaching out for feedback. Lars, do you have any comment on my Ukrainian flag render?
  • Lars Borg: Ideally you have one flag indoors, and one outdoors and brightly lit, and you make sure they look reasonably the same. Even here you see the renderings are different colors.
  • Alex Fry: I got the flag colors off the web.
  • Lars Borg: An indoor flag might not be tone mapped much, but an outdoor hard lit one would be tone-mapped more.

Meeting #63, July 27th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Francesco Giardiello
Thomas Mansencal
Joshua Pines
Ilia Sibiryakov

Meeting Notes

  • Kevin Wheatley: We had a TAC meeting yesterday, and a "bug" reported to us by somebody trying to install the test package on CentOS [this turned out to be user error with the install path].
  • Alex Fry: I've been looking at Daniele's compression curve.
  • Scott Dyer: We gave the TAC a status update on the testing, and said we've only had 10 replies so far. Netflix and Universal will try to get more people to test. What if there is no consensus in the responses? Do we go to the option of many renderings?
  • Joshua Pines: If there is no consensus we've done our due diligence and we can choose one.
  • Kevin Wheatley: Scott has had a report of an issue which seems related to the size of the embedded LUT in the DCTL.
  • Nick Shaw: We could prove that as the cause if it worked with a 33^3 cube.
  • Alex Fry: I've implemented Daniele's original curve in Blink, and its inverse. Not put it in a DRT yet. The toe moves where the top lands, so we need an extra scale to hit 1.0.
  • Nick Shaw: I suspect one of the extra constants in Jed's version may be gain to do that. And he may have added additional gain so it passes through 1.0 with finite slope, rather than having an asymptote there.
  • Thomas Mansencal: It makes sense to me that the flare/glare compensation brings the peak down. Uniform flare subtraction will do that.
  • Kevin Wheatley: We need it to pass through 1.0 so it has an asymptote beyond that. It is not necessarily a straight scale. That might be a fudge.
  • Alex Fry: The effect is significant, and at the moment you can only get to 0.95, not 1.0.
  • Thomas Mansencal: It's happening after everything, which is not where I would put flare compensation. I would put it first, because it happens at the optical block, in scene space.
  • Kevin Wheatley: Does it handle capture flare or display flare?
  • Nick Shaw: Daniele already has his $n_r$, "what 1.0 should be", and peak luminance $n$, so you can add gain by changing those.
  • Thomas Mansencal: Did you ask Daniele what the flare was modeling? Camera or display?
  • Alex Fry: "Shadow Toe, flare/glare compensation - how ever you want to call it"
  • Kevin Wheatley: It behaves like display compensation, so the display adds back what you subtract. So maybe leave it at zero.
  • Nick Shaw: Jed had changed it to 0.01, rather than 0.05.
  • Thomas Mansencal: Daniele seems to suggest it's display flare, which seems dangerous to model in the DRT. You might have a display with no flare. I have a bit in mine, but I could set it to zero.
  • Kevin Wheatley: It comes back to whether we consider the display absolute or relative to black. We have Ilia here who has made an experimental output transform.
  • Ilia Sibiryakov: I've called it "Luminance DRT", but it's about the concept. It uses paths generated along hue lines in IPT to the edge of the gamut hull and from there to white. Then those paths get rounded. The scene-referred data is compressed using brightness curves, and then those paths used to de-chroma. It can make skin look "beefy". An alternate approach might be to start the path to white at the level of the maximum extent at a given hue.
  • Lars Borg: It also retains color separation, because if you go to the cusp you will have no separation between colors that would be mapped to the same color.
  • Ilia Sibiryakov: Also red seems to desaturate too quickly, and a bit with yellows.
  • Alex Fry: The corner smoothing is something similar to what we found with ZCAM DRT, when trying to avoid "echoes" at the cusps. But we also need to be able to hit the corners. Smoothing cusps means you need ridiculous values to get to those corners.
  • Kevin Wheatley: One idea is to slightly enlarge the gamut, so the smoothed area is still in the gamut. You still clamp to the actual gamut at the end.
  • Lars Borg: Your plot does not show any colors outside the target gamut.
  • Ilia Sibiryakov: Currently I just clip negative channels. What I have works with in-gamut data.
  • Lars Borg: Maybe clip after compression which will help reach the corners.
  • Alex Fry: Are you tone mapping in IPT too?
  • Ilia Sibiryakov: For this illustration, but my actual implementation compresses in luminance. It makes no difference. It could be improved with a better brightness model.
  • Lars Borg: Strong blues are low luminance and don't get compressed, so they get very pale. Yellows and blues are challenging. Yellows are high luminance and blues are low. Maybe a factor of 10. Imagine a series of Ukrainian flags at different brightness levels. You want them all to still look like the Ukrainian flag. Has anybody tested that?
  • Ilia Sibiryakov: I haven't found blue going pale. Yellow does a bit.
  • Lars Borg: If you desaturate yellows, but not blues, the flag looks strange.
  • Ilia Sibiryakov: Is it a requirement to be invertible?
  • Kevin Wheatley: Ideally, yes.
  • Alex Fry: Production realities mean that's necessary.
  • Joshua Pines: For any show that has a mandated ACES deliverable, but where somebody has supplied their own view transform, we need to invert the DRT to make an LMT. Ideally a true mathematical inverse, but a brute force inverse that's really close may be ok.
  • Kevin Wheatley: We don't have a mathematical inverse for the ZCAM DRT. We have an iterative solve.
  • Lars Borg: In the long term you can make a mathematically invertible curve that is a good approximation of what you want.
  • Kevin Wheatley: We have to remember that whatever we deliver, the fine art will always be left to the colorist. So we just need something they can use that's roughly right. What was your approach to rounding the paths?
  • Ilia Sibiryakov: It's a blending from 0-1 of the points on each path beyond a threshold (I found 0.4 worked) and then blending between those.
  • Kevin Wheatley: Kind of like a Bezier.
  • Ilia Sibiryakov: My C code pre-calculates these paths into a LUT that you index by CIE chromaticity. The Python just reads the EXR.
  • Kevin Wheatley: So you're interpolating that LUT? It's not the entire path?
  • Ilia Sibiryakov: Yes.
  • Thomas Mansencal: Is it a 3D LUT or 2.5D? And what kind of interpolation?
  • Ilia Sibiryakov: 3D LUT with trilinear interpolation.
  • Kevin Wheatley: It shares aspects with some of our existing transforms, and confirms some discoveries we've had.
  • Alex Fry: It would be interesting to compare IPT with ZCAM. This seems like the way, but the different models behave differently. What happens in IPT to data outside the spectral locus? Imaginary blues have non-intuitive paths they compress along. Do you have a hue line plot in IPT?
  • Ilia Sibiryakov: No. But I can make one.
  • Kevin Wheatley: IDTs and cameras are not perfect and produce imaginary colors that we need to deal with.
  • Ilia Sibiryakov: With luminance, if Y goes negative, the maths goes strange.
  • Alex Fry: That's an advantage of ZCAM. J stays positive.
  • Ilia Sibiryakov: Is JMh an IPTish space?
  • Kevin Wheatley: They are all similar, based on the same data.
  • Ilia Sibiryakov: Might JMh stay positive because it clips negative LMS values?
  • Kevin Wheatley: We've been looking into behaviors with negatives. One thing we looked at uses a mirrored power function, which causes wiggles in a plot. We've been experimenting with linear extrapolation below a threshold instead.
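[In symbols: the mirrored form extends a compression function $f$ as $f_m(x) = \mathrm{sign}(x) \cdot f(|x|)$, whose slope is unbounded at zero for a power-law $f$, causing the wiggles; the linear alternative instead replaces $f$ below a threshold $x_b$ with its tangent $f(x_b) + f'(x_b)(x - x_b)$, so values below the threshold extrapolate along a straight line.]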
  • Ilia Sibiryakov: It might be easier to do "footprint compression" first, so bring out of gamut values in before rendering.
  • Alex Fry: We considered making the Reference Gamut Compression a first step. Can't remember why we rejected that.
  • Nick Shaw: It breaks linearity, and compresses some values in near the limits.
  • Joshua Pines: If it's needed for something on set, it would have been applied upstream.
  • Kevin Wheatley: It could have been applied to camera data, but grading or VFX could push things out again.
  • Ilia Sibiryakov: It can't work for the infinite range.
  • Kevin Wheatley: But it should for 99.9% of what cameras can generate, and AP0 and AP1 primaries.

Meeting #62, July 20th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
William Feightner
Thomas Mansencal
Carol Payne
James Pickett
Joshua Pines
Pekka Riikonen
Ilia Sibiryakov
Mike Whipple

Meeting Notes

  • Kevin Wheatley: We've doubled responses, to six HDR plus one SDR. We'll chase by email this week. Thomas posted on linear extensions to Luke Hellwig's model.
  • Alex Fry: I tried the same with ZCAM, but less relevant following Thomas' work.
  • Kevin Wheatley: It shows as you approach the boundary of the LMS space, something weird happens. Luke suggested an extrapolation method which Thomas has implemented. It's promising at one intensity, but once intensity drops below a threshold it explodes.
  • Thomas Mansencal: I need to investigate the explosion. Maybe it's trying to crank colorfulness too high at low intensity. I've seen something similar in the Kim 2009 model.
  • Kevin Wheatley: I also saw a triangle appear in the middle.
  • Thomas Mansencal: That needs investigating too. Maybe ask Luke.
  • Nick Shaw: It should be investigated, but as we're using those lines as vectors to compress along for out of gamut values, those won't be anywhere near that central triangle.
  • Thomas Mansencal: We can test feeding values in. Maybe it will never happen in practice.
  • Nick Shaw: And with the low J explosion, could we slide the threshold where extrapolation starts based on J? You may not notice more being a straight line if it's dark.
  • Thomas Mansencal: I'll test if it's in the Kim base model.
  • Alex Fry: Are the explosions the extreme values looping back?
  • Thomas Mansencal: Maybe. I posted a GIF before of a similar effect with a polar grid.
  • Kevin Wheatley: You also said you make tweaks to the LMS "primaries".
  • Thomas Mansencal: Yes, I moved the focal points as you can see between the 1st and 2nd plots.
  • Lars Borg: At the top of the display gamut mapping of less saturated colors becomes important, and the volume tapers to white.
  • Thomas Mansencal: The Hellwig model is simpler than ZCAM. And Luke recently published a revised CIECAM02 and CAM16 with HK accounted for. I'm keen to try that.
  • Alex Fry: I had similar issues with my attempt to do linear extensions for ZCAM.
  • Kevin Wheatley: Pekka raised the kink in the derivative of the MM curve. I tried fitting a cubic as the extension, which is smoother. Nick pointed out that it's already smooth when $C_0$ is set to 1.0, matching Daniele's original. So we have two choices. 1, lose the extra contrast, or 2, find another way to do it. The result of my fit isn't really visible in the curve plot. Only in the derivative.
  • Thomas Mansencal: Do we want that contrast? What is it for?
  • Alex Fry: I can change the $C_0$ value between 1.2 and 1.0 in the Nuke implementation. It's a noticeable difference in the shadows.
  • Nick Shaw: But is it a creative choice?
  • Thomas Mansencal: We didn't exclude a default LMT.
  • Kevin Wheatley: It changes the slope at grey, and also adds a soft knee.
  • Thomas Mansencal: Because it's at 18%, if there are artifacts they will be more noticeable.
  • Pekka Riikonen: Daniele's original has a contrast parameter. Don't know if it's similar.
  • Nick Shaw: It doesn't do the same.
  • Kevin Wheatley: The survey results will help. How much contrast do people want? Where does the 1.2 value come from? We need to see the effect on images. Is it worth adding complexity to polish something we may not see?
  • Alex Fry: Whatever we do should end up similar to Kevin's average curve.
  • Nick Shaw: But not including the K1S1 raised black.
  • Kevin Wheatley: We need to be able to hit zero. My fit is only one solution. Any others?
  • Thomas Mansencal: A simple pivoted contrast might do it.
  • Nick Shaw: It doesn't have to be an exact fit, as we're only matching something based on coefficients which "looked right" to Jed.
  • Kevin Wheatley: Anything else?
  • Alex Fry: I want people to look more at Pekka's Oklab version, and its effect on saturation, which Nick and I had different feelings on.
  • Pekka Riikonen: I've been experimenting with varying the path to white desaturation at different hue angles. That might help hit the corners of the cube.

Meeting #61, July 13th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
William Feightner
Francesco Giardiello
Thomas Mansencal
Joshua Pines
Pekka Riikonen

Meeting Notes

  • Kevin Wheatley: We don't have much feedback, but a few people have worked on their own DRTs.
  • Pekka Riikonen: I liked what Björn Ottosson posted, especially on how the path to white works in per channel, so I wanted to turn his examples into a DRT. I started from Alex and Mathias' ZCAM BlinkScript. It's similar to OpenDRT, but it has a gamut mapper in it and scales the chroma on the path to white using the derivative of the tone scale. I go to Oklab non-linear LMS instead of linear LMS. I take the midpoint of min and max RGB and apply a tone scale to that. There I apply the MacAdam limit approximation to show where colors start to fluoresce, and won't be surface colors any more. That applies to the brightness values used in the norm, not to the input RGB values. After the tone scale it extracts the chroma from the LMS values and adds it back to the tone scaled values, scaling them with the derivative of the tone scale. This gives a path to white very similar to that of a per channel curve. Then there is gamut mapping using the one from the ZCAM DRT, but with LCh instead of JMh. I also tried to implement Björn's gamut approximation, but the maths didn't work exactly. It went negative, so I changed it a bit. The LCh approach has the same issue in the blue corner as ZCAM, but that goes away with the gamut approximation, because it's smooth enough. I used the same MM tone scale as the current candidates. I added an alternative to use the Oklab lightness metric, but it doesn't change much. The MacAdam limit changes the shape a little, so the yellows don't desaturate as quickly. The gamut approximation smooths the hard cyan/blue transition. In my post I said maybe the MacAdam limit isn't useful, but Troy messaged me and suggested I keep it. The 3x3 matrix is limiting, and I thought RGBCMY weights for the norm might be better so we can sculpt the path to white. I made a version of the ZCAM DRT which has that, so you can adjust the shape. I don't think you could do this as an LMT. The main difference to ZCAMv12 I see is that its neutrals tend to have a color cast, and here they become more neutral. It shows in skintones too.
[Pekka showed some images and plots through his Nuke implementation]
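[A very loose Python sketch of the derivative-driven idea Pekka describes above; the norm, the curve and its derivative are stand-ins for his actual implementation:]

import numpy as np

def tonescale(x, k=0.18):
    return x / (x + k)                       # stand-in MM curve

def d_tonescale(x, k=0.18):
    return k / (x + k) ** 2                  # its analytic derivative

def render(rgb):
    # norm is the midpoint of min and max RGB; chroma is the per-channel
    # offset from it, scaled back in by the tonescale's derivative so the
    # path to white approximates that of a per-channel curve
    norm = 0.5 * (rgb.min(axis=-1) + rgb.max(axis=-1))[..., None]
    return tonescale(norm) + (rgb - norm) * d_tonescale(norm)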
  • Alex Fry: One criticism of the current candidates has been lack of highlight desat.
  • Pekka Riikonen: My version of ZCAM uses a desat similar to the one in OpenDRT. But this also fixes them. With the current desat there is a bulge near the top of the taper. It shows clearly on some sky images.
  • Alex Fry: I'd blamed that on the underlying data, and thought it was hidden in the other candidates. But maybe it isn't.
  • Pekka Riikonen: Maybe ZCAM scales the colorfulness too much as you go higher. It's very clear if you take the highlight desat off. If you take the saturation to the same level, this derivative driven DRT looks very similar to RGBDT.
  • Nick Shaw: Have you looked at this in HDR?
  • Pekka Riikonen: I only have an SDR monitor.
  • Nick Shaw: Because the roll-off is in a different place in HDR, the derivative is different. Will HDR look different because of this? Or maybe, because it's only desaturation in a different place, with no hue change, opening up saturation in HDR is the "more" people want for HDR.
  • Alex Fry: I'll look in HDR, but it makes sense to me. When I look at the current one I often feel skies and highlight are already too desaturated. But that's HDR and SDR side by side. SDR alone it often feels better to have more desat in the highlights. But that's subjective. None maintain saturation as much as the Dolby CMU, which seems to do that almost at the expense of everything else.
  • Pekka Riikonen: Something else I noticed while doing this was that there is a kink in the derivative of the MM spring tone curve, where it changes to the MM function.
  • Nick Shaw: That's at 0.18 where it joins.
  • Thomas Mansencal: We said we wanted clear derivatives.
  • Kevin Wheatley: So that's C1 but not C2 continuous.
  • Pekka Riikonen: It may be fine here.
  • Thomas Mansencal: You may not see artifacts here, but in some cases you might.
  • Joshua Pines: It can cause mach bands. A psychophysical discontinuity, not a real one. But I haven't noticed that in any images here.
  • Nick Shaw: Can the coefficients in the equation be tweaked to make it continuous?
  • Joshua Pines: You can lerp between the two parts, but that complicates the equation.
  • Kevin Wheatley: We can try to force a problem, maybe fading a grad or sky to black.
  • Thomas Mansencal: A spline-based curve wouldn't have these problems. We could fit one to the function.
  • Kevin Wheatley: There has been some experimentation with the compression in the CAMs. It seems mirroring in compression is causing the distortions outside the spectral locus. It's a work in progress, breaking at an arbitrary point and extrapolating. It was done by Luke Hellwig himself. It gives a more predictable extrapolation. It may help with the blue / cyan issues if we pick the right break point. We're diverging from the model, but there is a range of observers, so the locus is fuzzy. Also the data the model was fit to probably didn't go out that far.
  • Thomas Mansencal: We could find the maximum in the LUTCHI data set, and set the break based on that.
  • Kevin Wheatley: It would be good to explain the choices we make.
  • Alex Fry: I wonder if it would apply to ZCAM too, as we already have that built.
  • Kevin Wheatley: I think we'll end up combining different ideas from different approaches. Camera encoding "gamuts" will produce values that fall outside this boundary. We want nice straight lines out there.
  • Lars Borg: If you just pick a threshold and extrapolate below that, you may get differences just because the intensity is lower, even though the chromaticity is the same. You could base the cut-off on chromaticity, so it doesn't change with intensity. Although color perception changes with intensity anyway!
  • Kevin Wheatley: We're only looking at a 2D chromaticity plot. It was just an experiment to find out "is this the cause of that curve". But now what does that mean?
  • Lars Borg: A different approach would be to change the LMS coordinates and make it wider.
  • Thomas Mansencal: Then it would just happen in a different place. It could still explode.
  • Lars Borg: It would be interesting to see that chromaticity diagram at different intensities.
  • Thomas Mansencal: I'll try at the weekend.
  • Kevin Wheatley: It might even be beneficial for near black noise.
  • Nick Shaw: Will a derivative based desat desaturate shadows too, because the curve flattens there too?
  • Pekka Riikonen: I believe so.
  • Alex Fry: ZCAM actually boosts saturation at the bottom to maintain colorfulness.
  • Kevin Wheatley: We need to send 'nudge' emails to our testers.
  • Alex Fry: Encourage people to fill in the form, and you all fill it in too.

Meeting #60, July 6th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Daniel Brylka
Chris Clark
Michael De Caria
Alex Forsythe
Joshua Pines

Meeting Notes

  • Scott Dyer: I posted the candidates after last week's meeting. We didn't include a timeline. We can send a reminder in two weeks, as we discussed. Only three responses so far, one SDR only. Please send to your contacts. I've started on a list of the operations in the current ODTs and OTs and the order they occur. It points out the inconsistencies. Nick has posted a bug on one case of order of operations. The Rec.709 and Rec.2020 SDR ODTs have a saturation adjustment, which is not in the HDR ones. I am trying to diagram the logical order of operations too. Is anything missing?
  • Alex Forsythe: The CAM has some things like surround compensation integrated. How do you plan to reconcile these differences?
  • Scott Dyer: When we only have one we won't need to. The CAM DRT (not included on the list) also does display gamut mapping. They can't match up, but if we list them we can make sure everything is taken care of, and there are parameters for them. 
  • Alex Forsythe: We want to take the learnings from each model, and propagate those to the final candidate. Thomas posted a couple of papers from Luke Hellwig. Has there been any work to incorporate those?
  • Alex Fry: These are new. We've looked at the Hellwig 2022 model.
  • Alex Forsythe: We should look into those new papers.
  • Alex Fry: We found issues when trying the previous Hellwig model, so we need to solve those before including others. If we go with candidate C, what it does would still work with a different CAM.
  • Scott Dyer: My list helps to find the inconsistencies caused by incremental additions to the original ODTs.
  • Alex Fry: Anybody who wants to fill out the form themselves, or knows anyone who might, please do.
  • Scott Dyer: We said we'd cancel meetings around SIGGRAPH.
  • Nick Shaw: Do we send out the two-week reminder before or after next week's meeting?
  • Scott Dyer: Before is probably best.

Meeting #59, June 29th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Chris Clark
Michael Parsons
Joshua Pines

Meeting Notes

  • Kevin Wheatley: Not many updates this week. Still waiting for pilot group feedback.
  • Scott Dyer: I am ready to send out the wider announcement. We should hear back from Josh first.
  • Joshua Pines: Those who don’t know DCTLs could use more specific instructions about what files go where. And on Unix the folder starts with dot, so is hidden and you can’t easily drag and drop. Need simple instructions to help colorists. We eventually got them to show up, and the picture changes, but they all look the same. It must be us doing something wrong.
  • Alex Fry: I’ve seen it apply the shaper and fail to find the LUT and fail silently.
  • Joshua Pines: Maybe. The shapers are inline in the DCTL, but the LUTs are externally referenced.
  • Nick Shaw: Was there any info in the Resolve logs?
  • Joshua Pines: There were no pop-up error windows, but we didn’t check the logs. But maybe it's just us.
  • Alex Fry: Others have reported success in Resolve, but they may be on Mac. Perhaps it's a Linux issue. Baselight people have had success on Linux.
  • Nick Shaw: We could clean up the ReadMe a bit. It includes irrelevant stuff about IDTs copied from the Resolve manual.
  • Scott Dyer: We'll do a few last tweaks and then put it out later today hopefully.
  • Kevin Wheatley: We also plan to go through the historical meetings and write a summary. Also we should make a block diagram for the parts that are common to whatever we do, to capture all the parts we need. The obvious one is the encodings. Some of the existing ones blend parts together. We should make a segmented block diagram, even if implementations may optimize.
  • Scott Dyer: We have some parts as existing CTL. But what are we lacking in the current CTL? We should write down what modules we need – EOTF, primaries, full to legal, etc. I imagine we make common presets, but make it easy for people to make custom ones. It's not consistent now. Also order of operations, because some clipping and scaling is in the wrong order. And there is the scaling and roll off for different white points.
  • Scott Dyer: That is inconsistent, and has hard coded magic numbers. The calculation should be in the code, so it could adapt to other cases.
  • Nick Shaw: Where there is roll off there is a balance to be struck between how much is in the scaling and how much in the roll off.
  • Scott Dyer: We need to decide the behavior and put it in the code.
  • Nick Shaw: We should also discuss what doesn't need to be in there. Is the full to legal in the CTL ever used? That is not normally used, because it's done by the app for SDI output if needed.
  • Scott Dyer: It doesn't hurt to include it, but we could take it out.
  • Joshua Pines: I agree it should happen at the video card. But a few studios bizarrely demand legal range deliverables. But it's a shame if it has to be part of the default package.
  • Scott Dyer: Some people may batch script ctlrender, so need it included.
  • Nick Shaw: It could be a separate CTL people could include in the pipe, rather than be in the Output Transform CTL.
  • Chris Clark: We have a colorist in Korea who has offered to translate it into Korean. Has translation come up? Colorists are the best people to translate. Can you make a single page version of the form, so he can look at it?
  • Scott Dyer: We can duplicate it and share it with him.
  • Kevin Wheatley: We've been remiss not considering translation.
  • Joshua Pines: Translation should be raised with the TAC.
  • Chris Clark: APAC countries tend to need translation. Other countries seem ok with English.
  • Nick Shaw: I dropped a PDF of the form and the SDR only one in the chat. We can post in English for now, and ask on the ACES Central post if anybody can volunteer to translate.
  • Scott Dyer: I'll make a Miro of the parts I think we need.
  • Kevin Wheatley: The old Miro is cluttered. We have TAC meetings coming up. Can we get feedback in time to report to them? The TAC also clashes with SIGGRAPH.
  • Alex Fry: I can put together a slide deck to report where we are to the TAC. I can't think of any questions for them.
  • Kevin Wheatley: We need to explain to them the "elephant in the room". How that's been managed.
  • Alex Fry: I downloaded the repo onto a Linux Resolve system, and it works for me. I've edited the ReadMe text too.
  • Scott Dyer: Very shortly I'll send it out with a post on ACES Central and an email to our full colorist list.

Meeting #58, June 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Chris Clark
Sean Cooper
Alex Forsythe
Thomas Mansencal
Carol Payne

Meeting Notes

  • Kevin Wheatley: Not much feedback yet. A few posts on ACES Central about use of the candidates. How long until we do a wider release?
  • Alex Fry: No feedback through the form yet, but people I've spoken to had no issues with installation. I think we should get it out there to get some opinions.
  • Kevin Wheatley: We have a target list, and can make an ACES Central post. Anything else?
  • Scott Dyer: We can post, email, and ask people to share it.
  • Carol Payne: You want a range of users. Not just those on ACES Central.
  • Thomas Mansencal: 3D Pro would reach a lot of people.
  • Kevin Wheatley: Do we go so wide straight away? Potential signal to noise ratio issues. Also it may be misinterpreted as a finished thing.
  • Nick Shaw: Maybe just our list initially and a public post later.
  • Alex Fry: ACES Central is not "general public". I don't think we will have the problem of too much feedback!
  • Thomas Mansencal: 3D Pro people are generally knowledgeable.
  • Carol Payne: Give people a deadline and I think they will fill out the form.
  • Kevin Wheatley: How long? We can do some things in the meantime.
  • Carol Payne: For the gamut compression I think we did 2-3 weeks.
  • Alex Forsythe: We want quality not quantity. We shouldn't weight all opinions equally.
  • Kevin Wheatley: 2 weeks, expecting 3 actually? We discussed what might be causing the wiggling in the alternate CAM. It could be the effective LMS primaries, or a compression function with a mirror for negatives. We should test which affects the artifacts.
  • Thomas Mansencal: I posted about some tests I did. It's prone to explosions. Changing the matrix breaks the predictions. Maybe interpolate between the matrices.
  • Nick Shaw: You haven't removed the mirroring about zero? The folding near the red corner is still there.
  • Thomas Mansencal: Yes, we could use a linear extension there.
  • Kevin Wheatley: If we find a sensible threshold value (because the spectral locus is "fuzzy") below which we extrapolate linearly instead of mirroring, then the curves maintain their paths where the fit is good, and out where we break it the data is missing anyway.
  • Lars Borg: You can create the horseshoe for non-standard observers and find the triangle which includes all of them.
  • Kevin Wheatley: That's two paths. Widen the triangle to make it compatible with more people, or extrapolate because we are less confident the further out we get.
  • Nick Shaw: Thomas' matrix straightens the lines in the bottom left corner, which is probably good before extrapolation or they will go in a direction we don't want.
  • Thomas Mansencal: I think we need to do both. The originals converge very near the 460nm point.
  • Alex Fry: We're not drawing these lines directly. Where do we put our extrapolation?
  • Kevin Wheatley: If we make the compression function linear below a point, not mirrored, that should do it.
  • Nick Shaw: That's the first test. Break anywhere without worrying about the ideal break point.
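[A minimal sketch of the two behaviors being discussed, assuming a simple power-style compression; the exponent and threshold here are illustrative stand-ins, not the model's actual values:]

    import numpy as np

    def compress_mirrored(x, p=0.42):
        # Current behavior: mirror the function about zero, f(-x) = -f(x),
        # which is what folds values back near the red corner.
        return np.sign(x) * np.abs(x) ** p

    def compress_linear_toe(x, p=0.42, t=0.01):
        # Proposed behavior: keep the curve above the threshold t, but
        # extrapolate linearly (slope matched at t) below it, where the
        # fit is unreliable and the data is missing anyway.
        f_t = t ** p
        slope = p * t ** (p - 1.0)
        return np.where(x >= t,
                        np.maximum(x, t) ** p,
                        f_t + slope * (x - t))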
  • Thomas Mansencal: I have the model in a Colab if people want to play.
  • Nick Shaw: Does Google Colab support interactive plotting?
  • Thomas Mansencal: No. We have to go back to what we did for gamut mapping. Or just keep redrawing the figures.
  • Kevin Wheatley: What we do here will be applicable to ZCAM too.
  • Nick Shaw: I see two mirrored functions in the code. Which one would we change?
  • Thomas Mansencal: The first is just for the white point, so I think it's the one in step 3.

Meeting #57, June 8th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Daniel Brylka
Michael De Caria
Alex Forsythe
Thomas Mansencal
Carol Payne
Joshua Pines

Meeting Notes

  • Kevin Wheatley: We may have some feedback from Josh. What next steps do we consider. Alex and Thomas have done some work with the new Hellwig-Fairchild 2022 CAM.
  • Joshua Pines: We felt the ReadMe instructions for installation need to be clearer. There are also difficulties with folder names on Linux Resolve. Eventually we got it installed and they appeared on the menus. When we select one something happens, but they all look the same. Maybe we're doing something wrong. Maybe there could be install scripts. We don't use Resolve in ACES mode. We do our own transform to ACEScct, and then a separate Output Transform.
  • Alex Fry: We need to work out if you're just seeing it in the shaper space and the DRT LUT is ignored, or if they are just so similar you don't see the difference.
  • Scott Dyer: We'll get Josh to write up notes of his feedback. But we can improve the ReadMe.
  • Kevin Wheatley: Maybe we take e.g. Marcie and supply three renderings through the candidates so they can check against those.
  • Nick Shaw: There were issues with brackets in file names, so maybe the issues on Linux Resolve are similar.
  • Thomas Mansencal: It could be case sensitivity. And we should check the Resolve version they have.
  • Alex Fry: Install scripts could be tricky with varied setups. Last week Thomas made a self-contained version of the Hellwig-Fairchild 2022 CAM, and I made a Blink version from that. It's not the whole thing, but the main components we need. Unfortunately Matthias' code uses an intermediate IzAzBz state from ZCAM in some places, which doesn't have an equivalent in this one. So I haven't made a DRT, but made this plot. It uses a value of 5 in J and 32 equal steps in H, ramping M from 0 to 100, and plots the path in CIExy. In the blue corner it's different from ZCAM, but has the same issue. And there's an odd kink I don't understand in the yellows. I think the model works well for real colors, but is not designed to care about unreal ones.
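[A sketch of how such a sweep can be generated; hellwig_JMh_to_XYZ is a hypothetical stand-in for whichever inverse-model function is used, and the actual API in Colour or the Blink port may differ:]

    import numpy as np

    def hue_sweep_paths(hellwig_JMh_to_XYZ, J=5.0, n_hues=32,
                        M_max=100.0, steps=64):
        # For each of n_hues hue angles, ramp colourfulness M from 0 to
        # M_max at fixed lightness J, invert the CAM to XYZ, and record
        # the CIE xy path that the sweep traces.
        paths = []
        for h in np.linspace(0.0, 360.0, n_hues, endpoint=False):
            XYZ = np.array([hellwig_JMh_to_XYZ(J, M, h)
                            for M in np.linspace(0.0, M_max, steps)])
            xy = XYZ[..., :2] / np.sum(XYZ, axis=-1, keepdims=True)
            paths.append(xy)
        return paths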
  • Lars Borg: I think it was only designed for Rec.709 colors.
  • Thomas Mansencal: I think Luke used a high order polynomial for a better fit of the hue eccentricity, so outside the fitted range the polynomial explodes.
  • Lars Borg: This plot is missing the location of the LMS triangle. I think the yellow kink would line up with one edge of that where one component is zero. It looks like a mirrored gamma curve about zero. The solution is to move the triangle out as is done with the PQ system.
  • Alex Fry: I think the model is behaving as designed, but we need to make it work with nonsense colors. If we could put it into the ZCAM DRT we'd have the same problems.
  • Thomas Mansencal: If you compare my ZCAM and Hellwig plots, you can see how Hellwig goes crazy beyond a certain point. We could talk to Luke about possible extrapolations. I think he'd be interested in discussing our practical use of his work.
  • Lars Borg: If we used a curve with a linear toe like sRGB has that might fix what happens at the zeroes.
  • Thomas Mansencal: I was thinking it was the polynomial fit, but we could try the linear extension approach.
  • Lars Borg: We mainly care what happens within the spectrum locus, and a straight line outside that is probably fine. We need a transform that at least covers all of AP0.
  • Thomas Mansencal: It's more work, but it can be done slowly in the background.
  • Nick Shaw: No point going too far too soon with it, only for our focus group to come back and say they don't like the result of CAMs at all.
  • Alex Fry: I feel it's a slight variation on ZCAM, not something totally different.
  • Kevin Wheatley: There was discussion of surround compensation. Do we look into that now?
  • Alex Fry: I'd like to look at the ZCAM parameters to do that. Of course that wouldn't work for the other candidates.
  • Nick Shaw: Hopefully when we see what the effect is in ZCAM we could emulate it with something simpler for the other candidates. Maybe just gamma in Y. Although as Josh said last week, many colorists prefer a straight colorimetric conversion with no compensation. So does the compensation bring enough to the table to be worth having?
  • Thomas Mansencal: Luke mentioned that there is not much background effect in the LUTCHI data set, and wondered if the way it's modeled is even correct. Things have been built on top of stuff without understanding what they meant. And they are in all the CAMs.
  • Kevin Wheatley: What about Carol's suggestion of documentation?
  • Alex Fry: Chris posted a list of relevant threads.
  • Kevin Wheatley: We need to go back through the notes and make a list of decisions.
  • Nick Shaw: I'll try to go back through all my notes and make a list of key points.

Meeting #56, June 1st, 1pm PT

Attendees

Alex Fry
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Zach Lewis
Thomas Mansencal
Carol Payne
Joshua Pines

Meeting Notes

  • Alex Fry: No big updates today, but a chance to think about what we might do next. We've sent out the test package to a small initial group to check. We have Baselight people, Resolve people and a Nuke person. Josh has heard from one of the people.
  • Joshua Pines: I'll help if needed, but I am staying back to see if it makes sense to a colorist without help. Can we track who has downloaded it?
  • Alex Fry: Not apart from the feedback form. So what else can we use our time for while waiting for feedback?
  • Scott Dyer: Maybe write down what we think is needed in the display encoding stage, as pseudocode. White point adaptation, surround compensation. We'll need that for whatever candidate we pick. I also wanted to explore the big differences between candidates I see in red and blue particularly. Is it gamut mapping or fundamental to the approach? Maybe we will need to replicate features from one in another one. Anything else?
  • Alex Fry: You have that in Colour, so I can look at porting it to Blink.
  • Thomas Mansencal: I can make a simplified self contained version.
  • Alex Fry: We only use some of the correlates from ZCAM.
  • Thomas Mansencal: The ones we use are common to ZCAM and CAM16/CIECAM02.
  • Alex Fry: I will also make some plots comparing what ZCAM does to Scott's smart clip for candidate B. They have similar effects on blue.
  • Alex Forsythe: I think investigating CAM16 is a good idea. Using the CAM I felt the HDR and SDR look more the same in terms of hue.
  • Alex Fry: It's particularly clear in the series of images with women's faces and a red dressing gown.
  • Alex Forsythe: With the CAM the skin-tones aren't the prettiest out of the box. But HDR and SDR are similar technically in hue in a way the others aren't. Saturation changes compared to the others… Don't know if it's good or bad.
  • Carol Payne: In the gamut mapping group we wrote an interim report on what we had done. It's useful to say what we looked at, even if we rejected it. It was helpful when writing the final reports.
  • Alex Fry: That's a good idea. Maybe use what Christophe wrote. Hopefully we don't find good stuff that was brought up but we never explored! The dim vs dark was something Thomas mentioned as a promising aspect of the CAMs. That's something we can look into. Do we even need it?
  • Joshua Pines: Traditional DI folks deal with theatrical vs grading suite simply with the 2.4 vs 2.6 gamma difference. The current ACES ODTs have something more sophisticated. Some people were confused that the Rec.709 version they got with a 2.4 / 2.6 gamma adjustment didn't match the ACES Rec.709.
  • Alex Fry: The current ones do a gamma adjustment only on Y, and a slight saturation tweak.
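[As a sketch of that existing mechanism; the constants are as recalled from the ACES 1.x CTL and should be treated as illustrative:]

    import numpy as np

    DIM_SURROUND_GAMMA = 0.9811  # ACES 1.x CTL value (illustrative)
    ODT_SAT_FACTOR = 0.93        # SDR ODT saturation tweak (illustrative)

    def dark_to_dim(XYZ):
        # Gamma on Y only: scaling all of XYZ by Y**(g - 1) raises Y to
        # the power g while leaving xy chromaticity unchanged.
        XYZ = np.asarray(XYZ, dtype=float)
        Y = np.maximum(XYZ[..., 1:2], 1e-10)
        return XYZ * Y ** (DIM_SURROUND_GAMMA - 1.0)

    def desaturate(rgb, sat=ODT_SAT_FACTOR,
                   w=(0.2126, 0.7152, 0.0722)):  # Rec.709 luma weights
        # Lerp each pixel towards its luma by (1 - sat).
        rgb = np.asarray(rgb, dtype=float)
        luma = rgb @ np.asarray(w)
        return luma[..., None] + sat * (rgb - luma[..., None])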
  • Thomas Mansencal: One of the things Luke noticed when deriving the CAMs from scratch is that surround didn't have as strong an effect as is currently in the CAMs. I'll find my notes from a meeting I had with him.
  • Alex Forsythe: If you run color patches through the current CAMs and vary only the surround parameter, you can look at the effect and see what it's changing.
  • Joshua Pines: I suggest one parameter should be the option to turn surround compensation off, if we include it. Similar to white point adaptation.
  • Thomas Mansencal: With a system like ACES with a fixed list of Output Transforms, the list gets exponentially longer when you add options.
  • Alex Fry: For non static implementations with toggles it's simpler. We do already have it with D60 sim.
  • Joshua Pines: So we already have choices you need to track. AMF will solve everything!

Meeting #55, May 25th, 1pm PT

Attendees

Alex Fry
Nick Shaw
Scott Dyer

Rémi Achard
Chris Clark
Alex Forsythe
Jean-Michel Gilbert
Carol Payne
Joshua Pines

Meeting Notes

  • Alex Fry: Not much to discuss this week. We're about to send out the testing package to the initial group. The sample image repo and candidates repo are now moved to the AMPAS GitHub. We just need to write the email. Nick has updated the Google Form so there is one HDR and SDR version and one SDR only one, removing HDR/SDR matching questions. I had some feedback about the Baselight implementations, and we may need to make it more flexible so people can easily put different DRTs on different cursors.
  • Nick Shaw: Might it be useful to include versions which have the shaper in the Truelight cube file? The questionnaire is as you saw it last week (bar a minor change for consistency of scale direction) and the SDR only version just has all questions related to HDR deleted.
  • Alex Forsythe: Rather than using an ordinal scale for rating, it might be better to either use a discrete ordinal scale or make it continuous.
  • Nick Shaw: Unfortunately that's not possible with Google Forms.
  • Chris Clark: How many of the colorists on the list I gave you will you send this to?
  • Alex Fry: Initially just a few people that we have direct contact with. Then all of them in the main test.
  • Chris Clark: Doug Delaney would be a good candidate.
  • Scott Dyer: We want a variety of people, HDR, SDR, Baselight, Resolve.
  • Alex Fry: We only want 4 or 5 people.
  • Nick Shaw: Ideally one SDR only, or pretending to only have SDR so they fill in that questionnaire version.

Meeting #54, May 18th, 1pm PT

Attendees

Alex Fry
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Michael Parsons
Joshua Pines

Meeting Notes

  • Alex Fry: Kevin is away this week. Nick has some changes to the survey form. I have a cutdown set of images proposed for the test group. And we need to come up with a list of names for our pilot phase. The candidates repo with the transforms for Resolve, Baselight and OCIO is currently still in my GitHub, but we'll move it to an AMPAS one soon. The ACES_ODT_SampleFrames repo contains 1080 AP0 EXRs of 77 selected frames, as well as AVIF versions and the GitHub Pages with comparisons. We picked what we thought was a range of varied representative images, and some of the gamut mapping problem images. Anything anybody thinks we've missed?
  • Nick Shaw: I've made changes based on last week's discussion. A notice about collecting but not publishing emails. The questions on your setup are required questions. I got different info on viewing conditions – ST 2080-3:2017 says a 5 nit surround, BT.2035 says 10 nits. We need to pick one.
  • Joshua Pines: 10 nits for SDR, and we use that for everything.
  • Nick Shaw: We probably want one for both so they can compare without changing lighting. Maybe let them use what they are used to, and tell us what that is. I've tried to make the form less repetitive, turning multiple questions into one question with a "multiple choice matrix" of answers where I could. For the zone saturation section I did two versions, one with repetitive separate questions, and one "cleaner" version with just three questions (shadows, mids, highlights) and a matrix of answers. Which do people prefer?
  • Scott Dyer: The separate version I felt I would be comparing them to themselves, and the simpler version comparing them to each other and ranking them.
  • Lars Borg: I think that's good. It lets you set a reference and rate the others relative to that.
  • Nick Shaw: The last section has a new question about their preferred HDR and SDR candidate, and one about any other questions we should have asked. That one's particularly useful for this pilot group.
  • Lars Borg: Are there any questions we could remove to prevent "survey fatigue"?
  • Nick Shaw: It would be nice if we could do conditional questions, so if somebody says they don't have an HDR monitor it doesn't show the HDR questions. But Google forms can't do that.
  • Lars Borg: If you moved some questions to the end, people might feel more productive towards the end.
  • Joshua Pines: Should we add an N/A option for HDR questions, for those who have SDR only?
  • Alex Fry: We could do separate forms, one with no HDR questions, and give people two links.
  • Nick Shaw: I added some descriptions to the hue and HDR "matching" sections describing what we mean by "hue consistency" and "matching".
  • Joshua Pines: Do they need SDR and HDR monitors side by side? Or is the SDR soft proofed in HDR?
  • Alex Fry: We have both Rec.709 and Rec.709 in BT.2100 PQ to make it easy to compare. We have a list of colorists from Netflix. We only want 5 or 6 for the pilot.
  • Joshua Pines: Are you just looking for colorists?
  • Alex Fry: Mostly, and people one of us has a good relationship with. We'll send out an email with direct links, so they don't have to go to GitHub.
  • Nick Shaw: That's what we did for gamut mapping.
  • Scott Dyer: Anything we can usefully do while we wait for responses?
  • Nick Shaw: By the time this goes out, we probably won't get anything back by next Wednesday. Do we skip next week and reconvene the week after with feedback?
  • Scott Dyer: Maybe. We'll let people know on ACES Central.
  • Alex Fry: We can post the email we send out on ACES Central, for those interested but not here now.

Meeting #53, May 11th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
Michael De Caria
Chris Clark
Francesco Luigi Giardiello
Jean-Michel Gilbert
Zach Lewis
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Mike Whipple

Meeting Notes

  • Kevin Wheatley: Priority today is finishing preparation for the initial tests. Initially a small number to find obvious issues. We have the Netflix colorist list.
  • Alex Fry: I hope that first run will only take a week or two.
  • Kevin Wheatley: What image set do we send with that? We have the 556 images. We haven't selected from that.
  • Joshua Pines: The initial group could feed back on whether the selected images are representative.
  • Kevin Wheatley: The chairs can pick a subset.
  • Jean-Michel Gilbert: Is there CG fire in there?
  • Kevin Wheatley: There must be a well lit human face.
  • Thomas Mansencal: And things that highlight defects. Mostly the current RRT is fine with normal images. It's just solid state lighting etc. that highlights issues.
  • Alex Fry: The three candidates are all similar for many images. I've pushed a fix to a problem with the DCTL Rec.709 sim in PQ. The latest Chrome 101 update has changed behavior and the HDR demo pages now don't work properly.
  • Kevin Wheatley: We have a list of questions for testers in a Google Form. There is necessarily a lot of repetition with the various permutations.
  • Nick Shaw: I was wondering if we should require names/emails or if it should be anonymous? And which questions do we make compulsory?
  • Alex Fry: Knowing who people are is useful.
  • Kevin Wheatley: We can say we won't publish their emails, but it's useful to the chairs to communicate with them. And to weight their feedback.
  • Carol Payne: That's what we did for the gamut mapping.
  • Nick Shaw: We should add a disclaimer about anonymizing published data.
  • Kevin Wheatley: For GDPR etc. we should make clear people are volunteering their information and it will only be used for this purpose. Then we ask about their setup.
  • Jean-Michel Gilbert: What counts as a reference HDR display?
  • Nick Shaw: We ask for make and model.
  • Lars Borg: Maybe just ask whether they are looking at HDR.
  • Nick Shaw: And with the SDR sim for SDR display, we can infer they are comparing that too.
  • Thomas Mansencal: Should we specify expected viewing conditions and calibration state?
  • Kevin Wheatley: We could ask if their viewing environment is a dark theatre, a grading suite with bias light or an office environment.
  • Thomas Mansencal: Knowing that could be useful feedback.
  • Joshua Pines: Colorist may not set up their own room, so can't answer detailed engineering questions.
  • Kevin Wheatley: Then we ask about the tone curve, and contrast in the different zones. Then, can you get it how you want? Somebody said on ACES Central they thought they could grade all of them to match. That's exactly the question. Could we use any of them to get what people want? We should steer them to answer that.
  • Jean-Michel Gilbert: We should ask separately for HDR and SDR.
  • Kevin Wheatley: That's the next section, SDR vs HDR. But we don't have a question about HDR vs SDR tone scale specifically. We want to gather that information without biasing things.
  • Alex Fry: And although the tone scale is the same for all three, sometimes it feels different due to color effects.
  • Carol Payne: I would stay away from saying "tone scale" which means different things to different people.
  • Nick Shaw: We can rename the section more generically "contrast".
  • Kevin Wheatley: We should add a question about comparing the contrast between HDR and SDR versions. We have SDR vs HDR, how well do they "match", plus general comments.
  • Nick Shaw: "Match" lets them interpret it as whatever it means to them. Hence the quotes.
  • Kevin Wheatley: Is matching what they want, out of the box, or can you get what you want?
  • Joshua Pines: Should we get more specific about how they feel it does or doesn't match?
  • Nick Shaw: The hope is the "any other comments" lets them specify without leading them.
  • Thomas Mansencal: Is "faithful" better that "match".
  • Joshua Pines: Match your expectations.
  • Jean-Michel Gilbert: Shadow level and contrast need to exactly match.
  • Lars Borg: Do we ask which is your preferred HDR reproduction?
  • Nick Shaw: We ask "if you could only pick one" at the end.
  • Kevin Wheatley: That could differ for HDR and SDR. Chris suggested on ACES Central that maybe they should be swappable. It's a valid choice to prefer one for HDR and another for SDR. Next we ask about hue consistency. This could have the same problem as the word "match". What do we mean by it?
  • Jean-Michel Gilbert: I would ask "do you get the hue behavior you expect?"
  • Kevin Wheatley: Then we ask whether there is a pattern of what they always do in grading, so we could consider baking that into the transform.
  • Alex Fry: When asking about hue, should we ask separately about HDR and SDR behavior?
  • Kevin Wheatley: Unfortunately Google forms doesn't allow branching based on answers. You could group the questions in various different ways.
  • Carol Payne: If it were me I would want to answer all the questions about one candidate, then all the same questions about the next, and so on.
  • Nick Shaw: I considered that, but after speaking to my wife (who is a psychologist and has designed many questionnaires) we agreed that doing that would be biased toward more accurate answers for the first candidate, with people being more careless and rushing when asked the same questions a second and third time.
  • Kevin Wheatley: Next we ask about zone saturation, and this is where it gets very repetitive, but we need to get replies on all permutations.
  • Jean-Michel Gilbert: Colorists may be sensitive to saturation in different zones, but others may just perceive the whole image as having too much or too little saturation.
  • Kevin Wheatley: Then we ask can you get it where you want.
  • Lars Borg: Would it be useful to ask if the transforms make your grading controls still feel orthogonal?
  • Alex Fry: We should ask about their grading space.
  • Lars Borg: Do they have to do more than in the past, to compensate with a different control after making an adjustment?
  • Joshua Pines: It's like "does adjusting saturation behave as you expect?"
  • Kevin Wheatley: I thought that was covered by "how easy was each to grade under?" Then we ask for a preference, positive and negative. Maybe we should extend that to HDR and SDR. Anything else we should ask?
  • Nick Shaw: The test package doesn't include inverses, so we can't ask about adding display referred graphics.
  • Kevin Wheatley: Asking about that sort of thing would be for later testing.
  • Thomas Mansencal: We should ask if people are happy to participate in further iterations of testing.
  • Kevin Wheatley: Maybe that should go at the front.
  • Joshua Pines: We could ask them if we've missed any questions.
  • Kevin Wheatley: We'll make changes to the questionnaire based on this feedback. Then, are people happy for us to put it out without doing this again?
[There was general agreement to this]

Meeting #52, May 4th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Daniel Brylka
Alex Forsythe
Francesco Luigi Giardiello
Zach Lewis
James Pickett
Joshua Pines

Meeting Notes

  • Kevin Wheatley: We're making progress with our candidates. Just checking what we still need to do for each. A (rgbDRT) is ok, now we have the licensing sorted. B we have outstanding questions about applying the smart-clip. C we're ok with the current state although we have some issues. So what about packaging?
  • Alex Fry: We have Resolve and Baselight implementations, and OCIO v1 configs, with the new shaper space.
  • Kevin Wheatley: I assume the two GitHub issues don't hold up testing.
  • Alex Fry: They are fixed. Scott posted some images comparing noise between candidates, and some images and plots showing the smart clip.
  • Scott Dyer: On some images the smart clip does good things, on some it does bad. I think overall it makes things worse.
  • Alex Fry: Is it exacerbating noise or not masking noise?
  • Nick Shaw: The smart clip certainly exacerbates noise in red areas by moving the middle (green) channel up to maintain ratios when blue is clipped to zero, adding yellow speckles. The noise is present in the blue channel, but is moved to the green one.
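[One plausible reading of that mechanism, per pixel; this is an illustration of the middle-channel move, not the actual candidate code:]

    import numpy as np

    def smart_clip(pre_clip, clipped):
        # After the hard clip, move the *middle* channel so that
        # (mid - min) / (max - min) matches the pre-clip pixel. In a red
        # area where blue noise clips to zero, green is recomputed, which
        # is how noise migrates from the blue channel to the green one.
        pre = np.asarray(pre_clip, dtype=float)
        out = np.asarray(clipped, dtype=float).copy()
        lo, mid, hi = np.argsort(pre)  # channel order of the original
        span = pre[hi] - pre[lo]
        if span > 0.0:
            t = (pre[mid] - pre[lo]) / span
            out[mid] = out[lo] + t * (out[hi] - out[lo])
        return out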
  • Kevin Wheatley: If in doubt leave it off for the testing.
  • [Scott showed 3D plots of the effect of the smart clip]
  • Kevin Wheatley: We can add it back in if it fixes something people don't like in the testing.
  • Scott Dyer: All of them will need some form of gamut mapping. Clipping isn't best, even if some like some of the effects.
  • Alex Fry: Two have no GM and one does.
  • Kevin Wheatley: Next, supporting documentation. We have a draft ReadMe. What still frames should we supply? We have ~500 images. Do we link to that?
  • Alex Fry: We can select a few (~20?) and link to all 500. And they should use their own too. Some normal images and some challenging. We can start a thread.
  • Kevin Wheatley: Anything we need to add to the ReadMe text?
  • Alex Fry: Description of the various LUTs. SDR, SDR in PQ, etc.
  • Kevin Wheatley: Description of how to set up your system and which variants to use.
  • Nick Shaw: Baselight separates the DRT from the Viewing Colour Space, so it's easy to do e.g. SDR in PQ.
  • Alex Fry: Resolve doesn't let you tag the output color space of custom ODTs, which is frustrating. But it's fine on the external monitor. There are two OCIO configs, one for standard Nuke and one for Mac EDR users.
  • Kevin Wheatley: So we can sort that reasonably quickly. Who do we send it to? We have a list from Netflix, for initial checking before the wider group. How long do we leave it out there for people to test?
  • Alex Fry: We should have a Google form with non-leading questions. What did the GM group do?
  • Nick Shaw: We had conformed timelines in Baselight and Resolve projects. But here we have multiple things to test. Maybe the projects should start set to ACES 1.3, so as not to bias towards one candidate.
  • Kevin Wheatley: This is less yay/nay, and more touchy-feely. Can you do what you want and achieve the looks you want?
  • Alex Forsythe: It's good to have this in writing which we didn't do in the past.
  • Zach Lewis: We previously discussed a default LMT. Should we mention that? This isn't necessarily the default "look of ACES 2.0".
  • Alex Fry: I don't think the current ones need an LMT as much as OpenDRT did, because it was hyper neutral. The important thing is can they achieve the look they want?
  • Kevin Wheatley: Lower starting contrast should help achieve more looks. It's mainly things like can you hit the corners? Does it do any nasty things when grading through it?
  • [Scott showed plots of the effect of the three transforms on the reds and blues (from ~37 minutes in the recording)]
  • Scott Dyer: Useful to look at, but I don't have conclusions. The plots are unclamped display linear before display encoding. I also took some radial hue ramps in Oklab, to start with something perceptually uniform.
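[A sketch of generating such ramps, using the inverse Oklab matrices from Ottosson's published reference implementation; the lightness, hue count and chroma range are illustrative:]

    import numpy as np

    # Inverse Oklab matrices from Ottosson's reference implementation.
    M1_INV = np.array([[1.0,  0.3963377774,  0.2158037573],
                       [1.0, -0.1055613458, -0.0638541728],
                       [1.0, -0.0894841775, -1.2914855480]])
    M2_INV = np.array([[ 4.0767416621, -3.3077115913,  0.2309699292],
                       [-1.2684380046,  2.6097574011, -0.3413193965],
                       [-0.0041960863, -0.7034186147,  1.7076147010]])

    def oklab_to_linear_srgb(Lab):
        lms = (np.asarray(Lab) @ M1_INV.T) ** 3
        return lms @ M2_INV.T

    def radial_hue_ramps(L=0.7, n_hues=24, steps=32, C_max=0.3):
        # One chroma ramp per hue at fixed Oklab lightness: straight
        # radial lines in the (a, b) plane, returned as linear sRGB.
        h = np.linspace(0.0, 2.0 * np.pi, n_hues, endpoint=False)
        C = np.linspace(0.0, C_max, steps)
        hh, CC = np.meshgrid(h, C)
        Lab = np.stack([np.full_like(CC, L),
                        CC * np.cos(hh), CC * np.sin(hh)], axis=-1)
        return oklab_to_linear_srgb(Lab)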
  • Alex Fry: We need to check the LUT I gave you for C, because some plots look odd.
  • Scott Dyer: I need to look properly in HDR and SDR in the Baselight in my lab.
  • Alex Fry: The example images now have useful comparisons with 1976 chromaticity plots of the candidates vs simple matrix and EOTF. Wiping between these lets you see the effect on chromaticities. The detail button goes full screen. That's also useful to compare HDR and SDR. You can see ZCAM moves chromaticities out as things get darker, to maintain perceptual colorfulness.
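[For reference, the CIE 1976 u′v′ coordinates used in those plots come straight from XYZ:]

    def xyz_to_uv_1976(X, Y, Z):
        # CIE 1976 UCS chromaticity (u', v').
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d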
  • Nick Shaw: We should point out that you can't judge noise on the web pages, as it is masked by heavy compression for the web.

Meeting #51, April 27th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
Michael De Caria
Francesco Giardiello
Jean-Michel Gilbert
Thomas Mansencal
James Pickett
Joshua Pines

Meeting Notes

  • Kevin Wheatley: There is some good news on licensing.
  • Alex Fry: rgbDRT is relicensed under MIT, so we can use that as candidate A. Open DRT is still withdrawn and under GPLv3. I've expanded my example images into a separate repo to separate the different comparisons, and HDR vs SDR. All the combinations. I've encoded the SDR as gamma 2.4 within a PQ container. In un-color-managed browsers they will look like undecoded PQ. I may do an SDR only version for people with no HDR display. I also have a diagnostic page to check your system's HDR handling. I also added a "ground truth" comparison to simple matrix and PQ EOTF, scaled so ACES 1.0 maps to 100 nits, so you can compare the un-tone-mapped ACES colors with their appearance after tone mapping.
[note: it may be necessary to change the color profile flags in Chrome to ensure HDR is displayed. Not everybody could get it to work.]
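[A sketch of the SDR-in-PQ packing mentioned above, using the SMPTE ST 2084 PQ encode; placing SDR display white at 100 nits is an assumption consistent with the scaling described:]

    import numpy as np

    # SMPTE ST 2084 (PQ) constants
    M1 = 2610.0 / 16384.0
    M2 = 2523.0 / 4096.0 * 128.0
    C1 = 3424.0 / 4096.0
    C2 = 2413.0 / 4096.0 * 32.0
    C3 = 2392.0 / 4096.0 * 32.0

    def pq_encode(nits):
        # Absolute luminance in cd/m^2 -> PQ code value in [0, 1].
        y = (np.clip(nits, 0.0, 10000.0) / 10000.0) ** M1
        return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

    def sdr_in_pq_container(code_g24, white_nits=100.0):
        # Decode gamma 2.4 SDR code values to display linear, place
        # display white at white_nits, and re-encode as PQ so the SDR
        # render can be viewed in an HDR (Rec.2100 PQ) container.
        linear = np.clip(code_g24, 0.0, 1.0) ** 2.4
        return pq_encode(linear * white_nits)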
  • Alex Fry: I can only guarantee it works with a Pro Display XDR in PQ referenced mode. It may be worth making a YouTube video that wipes through them.
  • Kevin Wheatley: The SDR only version would be useful. Everyone can look at that.
  • Scott Dyer: I posted about a new potential candidate B, based on earlier ratio restoring ACES code. The red issues we saw in the last meeting came from the values being pure P3 reds. Negative blue components ended up clipped, and the hue restore made it pure red, because the P3 red extends over the red-green line of AP0, and I was using AP0 as rendering primaries. I expanded my rendering primaries; going wider made the artifacts less objectionable. I think I have a suitable replacement for candidate B. Are we ready to put them out for preliminary testing? The render restores the pre-tone-map RGB ratio by moving the middle channel.
  • Lars Borg: How does it look on neons? We don't want neons washed out. Also when we first did the middle channel adjustment, blue channel noise moved into the other channels.
  • Scott Dyer: With this version I'm not seeing the noise we saw before.
  • Lars Borg: Can you analyze why it's not noisy?
  • Kevin Wheatley: Maybe it's due to the lower contrast tone scale.
  • Scott Dyer: I also remember artifacts round clipped highlights before. I'm also possibly concerned about the effect of the 'smart clip' which restores RGB ratios after display gamut clipping, effectively clipping out of gamut colors in a straight line towards the white point in CIExy. It does change the blues. The blues are the biggest variation between the renderings. Which is more correct I don't know.
  • Alex Fry: ZCAM does the same. It's comparable to what we had with the Fairy bottle.
  • Scott Dyer: Alex has made a Nuke version, and I'm working on a DCTL version.
  • Alex Fry: I also have an updated Baselight shaper, so the repo has 3 candidates again.
  • Kevin Wheatley: We could keep tweaking rendering primaries etc. for ever, but are we happy to put them out for preliminary testing?
  • Thomas Mansencal: Should we add a requirement that it should maintain hue as much as possible, on e.g. a well exposed Macbeth chart.
  • Alex Fry: How do we define that.
  • Thomas Mansencal: Perhaps change in OKlab. Also when ramping up exposure.
  • Kevin Wheatley: This would be after first testing.
  • Alex Fry: One thing we need to decide before testing is whether the smart clip should be on or off for candidate B. It makes sense to me. 
  • Scott Dyer: It seems good in theory, but looks weird with pictures.
  • Alex Fry: Like with ZCAM it's a question of preserving saturation vs hue.
  • Jean-Michel Gilbert: This is why I added my blue modifier to ZCAM, and my scaling in XYZ. Whatever works!
  • Lars Borg: Isn't it a creative decision? If a color is off blue, shouldn't it stay that way, rather than collapsing to the primary? Tone mapping shouldn't make creative decisions.
  • Alex Fry: We need to look at plots.
  • Lars Borg: How do they look on a wide gamut HDR display?
  • Alex Fry: P3 is generally the best we have.
  • Lars Borg: That's the reference when making a Rec.709 version. Nobody can say what the right color outside P3 is. Even the colorist hasn't seen it.
  • Jean-Michel Gilbert: The expensive ASUS can show 85% of Rec.2020.
  • Lars Borg: So can laser projectors, but they can't go to the spectrum locus for e.g. cyan. So P3 is still the reference and there are lots of ACES colors we can't display.
  • Scott Dyer: If the other renderings have no display gamut mapping, this one shouldn't either.
  • Alex Fry: The ZCAM DRT does have display gamut mapping. That's why it has a similar blue shift.
  • Scott Dyer: Some of these blues are bad ACES data which is outside the spectrum locus.
  • Alex Fry: I will rebake the examples page with chromaticity plots, so we can see where data is getting bent. I've also added a "detail" button to full screen individual images.
  • Lars Borg: That is helpful for colors darker than primaries. The other challenge is colors brighter than primaries. We don't have a way to visualize where those are, because they are within the primary triangle, but they are too bright. JzAzBz plots are great to see where colors outside the primary triangle go, but we don't have a way to chart distortions inside the triangle.
  • Thomas Mansencal: Plot.ly could do a 3D representation in a web page, but it would be a lot of work.
  • Alex Fry: We should post images with and without the smart clip, so people can decide which they prefer. Hopefully we can get a decision within a week. I wish I had 3 laser pointers at Rec.2020 primaries, so I could see what they look like. The guy shining them in his bathroom made it seem AP0 green should end up a bit cyan, not Rec.709 green.
  • Joshua Pines: If there were people in LA we could sort something with our laser projector. Or reach out to Dolby.
  • Alex Fry: It would be good to know if the sense of saturation is more important than the hue shift. Some LED walls go out pretty far, but not Rec.2020.

Meeting #50, April 20th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
Alex Forsythe
Francesco Luigi Giardiello
Joseph McCormick
Joshua Pines

Meeting Notes

  • Kevin Wheatley: Good news, the tone scale has been relicensed by Jed as MIT. The full OpenDRT is still GPLv3.
  • Alex Fry: I've removed OpenDRT from the test repo. I need clarification on RGB DT, which is in the same repo as Open DRT, so has the same license. I don't know Jed's intention there. RGB DT was the winner of the three tested RGB curve DRTs.
  • Kevin Wheatley: Scott has been working on an alternative to OpenDRT.
  • Scott Dyer: I've used old ACES code to make something with the same ratio preserving aims as OpenDRT. It uses the MM spring tonescale, and code which restores the RGB ratios after the tone curve, based on Doug Walker's code from 2012. It renders in AP0. I haven't tested other rendering primaries. It keeps very strong saturation in reds, which looks good in some cases, but in others, like RED Xmas, it clips badly. It certainly needs more work. Currently it's CTL. I'll work with Alex to get it into Nuke. It will give us something to test and see if people like ratio preserving behavior.
  • Kevin Wheatley: So if we find RGB DT is also withdrawn, do we want to pursue an alternate for that?
  • Alex Fry: We need to know if Jed is happy for us to use his suggested primaries, even if we build something new. We could find new primaries, looking at Thomas' proposed P3-based ones.
  • Nick Shaw: Did Jed ever post on ACES Central, proposing those primaries, or do they only exist in RGB DT? And did he find them just by dragging primaries around with his tool, and looking at images?
  • Kevin Wheatley: I suspect so. We should really have something more robustly derived.
  • Scott Dyer: My ratio preserving rendering makes reds clip badly with synthetic charts.
  • Lars Borg: Might it be because the red is on the target space primary? So the other channels are zero?
  • Scott Dyer: I will work on fixing that.
  • Alex Fry: I've updated my repo, taking out candidate B, and changing the shaper space so you can hit the full display extents. The OCIO config and Resolve version work. I need to fix the Baselight implementation. I've also made two web pages (linked from the docs) which compare candidates A and C in SDR and HDR. It only works with a Mac in Chrome with a Pro Display XDR. On one you compare SDR and HDR for each transform. On the other you compare A with C in SDR and in HDR. I don't know if it works on Windows. On Linux you may be able to force it to HDR if you have an HDR display connected. When comparing, look at how reds render in HDR and SDR. It doesn't work on iPad. The renders of the images were done with OCIO, so if LUTs cause issues they will show in the images. A Mac with an external HDR display won't work, as it pegs SDR white at 203 nits.
  • Nick Shaw: You could encode the SDR as PQ.
  • Kevin Wheatley: We will have a meeting next week, but it may be short if lots of people are at NAB.

Meeting #49, April 13th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Lars Borg
Daniel Brylka
Michael De Caria
Chris Clark
Alex Forsythe
Francesco Giardiello
Olivier Groulx
Zach Lewis
Thomas Mansencal
Carol Payne
Matthias Scharfenberg

Meeting Notes

  • Kevin Wheatley: Alex has posted a large number of images for assessment of possible Candidate A options.
  • Alex Fry: Originally there was an option K, but that was effectively the same as G, so I removed it. Everyone seemed to prefer G, which is Jed's RGB DT. H is ACES 1.2 with no sweeteners, and the MM tone curve. J is RGB DT but using Thomas' proposed P3 based primaries. They are all similar for in gamut, but different in the extremes. G seems to render best. The only image that looked odd was the green dragon, which is not a plausible image, being a pure AP1 green.
  • Scott Dyer: We have discussions for candidates A, B & C. A we seem to have a consensus on option G.
  • Kevin Wheatley: Definitely not H, due to the artifacts. We'll discuss Jed's withdrawal of OpenDRT next, but hopefully that shouldn't affect his RGB DT. Since he's not here, he can't explain, but we have to respect his decision. The question is how his licensing of the repo affects other transforms within the same repo, which includes RGB DT and JZDT.
  • Alex Fry: The main issue is whether the Michaelis-Menten dual contrast spring curve that we use in all the candidates is restricted by the license.
  • Nick Shaw: We need Daniele to give his opinion on any conflict with TCAM, and also because he proposed the Michaelis-Menten curve originally.
  • Thomas Mansencal: The curve is not new, just Jed's parameterization of it.
  • Nick Shaw: Does The Academy have any license statement about the status of anything posted publicly on ACES Central?
  • Kevin Wheatley: We need to get Jed's point of view. For SDR we have the SSTS and my average as other options.
  • Carol Payne: We could talk to the ASWF who obviously have lots of experience with open source licensing.
  • Scott Dyer: Björn Ottosson posted on the ACES Slack channel regarding the cyan corner of ZCAM. He's compressing before the non-linearity in the model, and uncompressing afterwards. He posted a Colab.
  • Lars Borg: Those are the same issues that occurred with the original ITP and others. Non-linearities at the edge of the selected primaries, which is why these fail outside Rec.709. Relaxing things at the blue corner is definitely the right way to go, but how do you know what's right? ICtCp does similar desaturation to move them out, but how much is right? It needs observer validation. There's a lack of data.
  • Kevin Wheatley: I think Mark Fairchild's paper that Thomas linked to may have been trying to eliminate the non-linearity.
  • Lars Borg: You can use wider primaries or LMS. JzAzBz has the same ugly non-linearity, for example. There is not enough data for the blue corner. We have a map that's wrong, and we don't know what the reality is.
  • Alex Fry: How do you get observer data in the imaginary area outside the map?
  • Lars Borg: The unreal data is different for different cameras, because it comes from errors, so the same scene color maps to different unreal values for different cameras. Maybe the right thing to do with that becomes a creative decision. Finding a good real color to represent the varying unreal colors is a challenge.
  • Alex Fry: It's substantially an IDT problem.
  • Lars Borg: Cameras don't meet the Luther condition. An IDT should never produce values outside the spectrum locus, but they do. But these models fail even before the spectrum locus.
  • Alex Fry: We're trying to build a rendering that produces reasonable results with garbage data.
  • Lars Borg: Even the same spectral color will produce different ACES values from different cameras.
  • Kevin Wheatley: A DoP may choose a camera because it does that.
  • Lars Borg: Then the question is what color did you want? It's what they saw on the monitor on set.
  • Kevin Wheatley: We want more predictable behavior. But what is the shape of that? We could offer different options for our testing.
  • Lars Borg: The ideal would be if it was easy for colorists to easily maintain hue while manipulating colors. Maybe it doesn't need to match the HVS, it has to feel linear.
  • Scott Dyer: All resized to 1920x1080 and in AP0.
  • Alex Fry: Scott mentioned that some earlier ACES experiments were norm based, so we could try something based on those if OpenDRT is not an option.
  • Scott Dyer: We have options to relatively quickly make a replacement for OpenDRT, rather than just removing it. 0.1 was a norm lookup, and 0.2 used RGB curves, but then restored the RGB ratios to their prior state by altering the middle channel. I'm experimenting with a replacement for candidate B, if we can still use the tone scale.
  • Nick Shaw: Jed removed some of the tweaks from OpenDRT, but it still is more than just a ratio-preserving norm-based tone scale. It has at least a path to white connected to the tone scale.
  • Scott Dyer: We have other path to white mechanisms from earlier ACES versions too. So we could use those.
  • Thomas Mansencal: Was RGB DT based on the ARRI K1S1?
  • Alex Fry: Only broadly conceptually, like any RGB rendering goes to rendering primaries, tone maps, then display encodes.
  • Nick Shaw: Only the 10% desat comes from what the K1S1 does. But that's not unique to that.
  • Alex Fry: Matthias had some stuff about ZCAM boundary smoothing.
  • Matthias Scharfenberg: Plotting the ZCAM output in 3D it looks like rounded corners in display linear, but once you add the 2.4 gamma inverse EOTF they go concave. The solve and smoothing happens in display linear, so I have experimented with shapers before the cusp smoothing, log-like or power. I didn't want it dependent on the output device. The smoothing makes sense, and looks better in plots, but is it needed for real images? I don't have conclusions yet.
  • Nick Shaw: The gamma encoded display values are what the colorist sees on the waveform. So they will feel they can't reach the corners.
  • Matthias Scharfenberg: I've experimented with various shapers. It can't be a straight power or log, because it needs to handle negatives.
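[A minimal example of the kind of curve that satisfies that constraint: log-like away from zero but defined, monotonic and odd-symmetric through it. This is purely illustrative, not Matthias' actual shaper:]

    import numpy as np

    def symlog_shaper(x, a=0.01):
        # Approximately linear (slope 1/a) through zero, log-like for
        # large |x|, and odd-symmetric: f(-x) = -f(x), so negative
        # display-linear values are handled without a discontinuity.
        return np.sign(x) * np.log1p(np.abs(x) / a)

    def symlog_shaper_inverse(y, a=0.01):
        return np.sign(y) * a * np.expm1(np.abs(y))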
  • Alex Fry: We should check how the image set we now have looks with and without the cusp smoothing.

Meeting #48, April 6th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
Zach Lewis
Thomas Mansencal
Joshua Pines

Meeting Notes

  • Kevin Wheatley: We wanted to give an update on the status of the test package. Alex created a thread.
  • Alex Fry: I stripped the descriptions from the ReadMe and made OCIO configs. There are now three LUTs for each candidate – Rec.709, Rec.2100 PQ limited to P3-D65 HDR, and now Rec.709 in a Rec.2100 PQ container, so you can view SDR on an HDR monitor. There are two OCIO configs, one standard and one for Nuke on Macs with EDR support. Zach has been working on an OCIOv2 config. We found a couple of issues. The LUTs for B and C can't hit the corners of the target display because the extreme values needed to do that are outside the LUT shaper range (AP0 ACEScct curve). We could use a different shaper curve covering a wider negative range, or we could come up with different shaper primaries.
[Alex showed CIExy plots of a P3-D65 unit cube through the inverse of B and C, with some primary triangles to contain the result]
  • Alex Fry: Even then I struggle to hit the yellow corner. It may be an issue with the transform, not the LUT. Open DRT needs a very bright saturated yellow to hit that corner. LUTs will be needed for some applications, so a LUT implementation must be possible. Also some DCCs may struggle to create the extreme values needed even with the procedural version to hit the corners. It may be a problem for the requirement to hit all the corners of the display cube. Christophe brought up the question of whether candidate A should use AP1 rendering primaries like ACES 1.0. Now it's just that without the sweeteners. Should we use RGB DT or something else?
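[A sketch of the check involved; drt_inverse and shaper_forward are hypothetical callables standing in for a candidate's inverse transform and the LUT shaper encode:]

    import itertools
    import numpy as np

    def unreachable_corners(drt_inverse, shaper_forward):
        # Push the eight display-cube corners backwards through the DRT,
        # then forwards through the shaper. Any corner whose shaper-encoded
        # value falls outside [0, 1] lies outside the LUT's input lattice,
        # so the baked LUT can never reach it.
        corners = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
        aces = np.array([drt_inverse(c) for c in corners])
        encoded = shaper_forward(aces)
        inside = np.all((encoded >= 0.0) & (encoded <= 1.0), axis=-1)
        return corners[~inside]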
  • Kevin Wheatley: We don't want to delay the test too much.
  • Alex Fry: RGB DT is ready now.
  • Scott Dyer: Those who have looked at it seem to have liked it as an RGB curve transform.
  • Alex Fry: There are good arguments for it. It already uses the same curve we have. The logic for the primaries seems sound.
  • Scott Dyer: What is the reason for candidate A? I think we need an RGB lookup to compare the others to. We don't need the Frankensteined V1 and RGB DT. They already have v1 to compare it to.
  • Alex Fry: The different tone curve in v1 makes a big difference. We should put up some images for people to compare current candidate A and RGB DT.
  • Thomas Mansencal: We could use rendering primaries based on P3. We often master in a P3 container, and view P3. Being closer to that is logical for a rendering basis, because the further you are, the more things skew. AP1 is rotated compared to P3.
  • Alex Fry: Would that mean different rendering primaries per target?
  • Thomas Mansencal: No. I would just use P3. It's close enough to aligning with Rec.709.
  • Alex Fry: What about Rec.2020 in 5 years time?
  • Thomas Mansencal: We change the rendering primaries for ACES 3.0.
  • Kevin Wheatley: Would you use P3 or an expanded form?
  • Thomas Mansencal: I posted a proposed expanded version on the same vectors as P3 but close to the spectrum locus.
  • Scott Dyer: We did try something like that and it works well because it's basically display referred rendering. I think we were concerned about limiting it for the future.
  • Thomas Mansencal: We're here now, and P3 would still be a reasonable limit and will be for a few years.
  • Alex Fry: How would different primaries for different displays work? Would there be different skews?
  • Kevin Wheatley: We'd still need proper gamut mapping to avoid the clipping skews. Maybe borrow from the ZCAM DRT display mapping. You would have to plan for a future update when the P3 limit was too constraining.
  • Thomas Mansencal: I'm normally future looking but here it's pragmatic. P3 is where we are now. sRGB has lasted 30 years and isn't going away.
  • Joshua Pines: Most studios will reject anything that goes even slightly outside P3. Of course some will complain there is no Rec.2020.
  • Thomas Mansencal: Should there be a separate display transform for laser displays?
  • Alex Fry: I'm curious how much the look changes as you vary rendering primaries if the tone curve is closer to the ODT.
  • Kevin Wheatley: Our current rendering pretty much uses Rec.2020, because AP1 is very close to that. You could compare that to a P3 rendering.
  • Alex Fry: Would you need a real Rec.2020 display?
  • Joshua Pines: You could look at what happens to memory colors like flesh tones, viewing both on a P3 display. We master in P3 and gamut limit to Rec.709 for that target.
  • Thomas Mansencal: The difference between P3 and Rec.2020 is huge compared to the difference between P3 and sRGB.
  • Kevin Wheatley: So can we take the test images we have and put them through the various different primary versions of candidate A and look for gotchas?
  • Alex Fry: We can easily make a thread.
  • Kevin Wheatley: Going back to the inverted plots and the shaper, Open DRT seemed larger, so does that need more sample points in the lattice?
  • Alex Fry: I think both do.
  • Kevin Wheatley: Although a large shaper space could be problematic, if the space is smooth, maybe the interpolation would be ok. ZCAM is more twisted, so interpolation will give bigger errors.
  • Alex Fry: I think the issue of reaching the extremes needed is more problematic than the shaper. Playing in Resolve, the controls wouldn't let you get there. That was in an ACEScct project with the current shapers. Even in Nuke I have to dial in big negatives.
  • Kevin Wheatley: Thomas posted a link to a paper by Mark Fairchild which may hit why ZCAM has these cyan issues.
  • Thomas Mansencal: I haven't emailed him yet but I will. I will also try to implement the paper in Colour. It's a rework of CIECAM97, CAM16 and CIECAM02. They cracked them open to understand the design choices, and found some stuff which may not be justified. Stuff from CAM97 that should no longer be used. I suspect they are working on something bigger.
  • Alex Fry: It would be good to tidy up ZCAM DRT with something that requires less fudging. Most of its magic seems to come from the target display gamut compression.
  • Kevin Wheatley: All this is for after this first round of testing.
  • Alex Fry: Yes, we initially need to know if people like this kind of compression at all.
 [Zach showed his OCIOv2 implementation]
  • Zach Lewis: I've implemented Alex's LUTs but with built in transforms instead of the 1D shaper LUTs. It would be nice to directly implement the RGB DT using OCIO built-ins. OCIOv2 makes it easier to pair sets of view transforms and displays. It would be better if the LUTs went direct to XYZ D65, but currently I use Alex's LUTs and an inverse transform to XYZ D65.
  • Kevin Wheatley: For those not familiar with OCIOv2 it uses the kind of split we have discussed where display encoding is independent of the view transforms.
  • Scott Dyer: I've had computer issues which mean I don't have the test images ready as I'd hoped. I'm going to go in to the office and sort things. I need to remove some images I don't have permission to share.
  • Nick Shaw: RGB DT isn't simply a tone curve in rendering primaries and display encoding. It follows the form of the ARRI K1S1 and has a 0.9 saturation to counter the saturation increase from RGB curves. I suspect it's similar in purpose to the global desaturation in the RRT.
  • Scott Dyer: That is logical to include.
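[In outline, that structure is something like the sketch below; the matrices, tone curve callable and AP1-style luma weights are illustrative assumptions, with 0.9 being the desaturation figure mentioned above:]

    import numpy as np

    AP1_LUMA = np.array([0.2722287168, 0.6740817658, 0.0536895174])

    def rgb_curve_drt(aces, to_rendering, tonescale, to_display, sat=0.9):
        # K1S1-style skeleton: 3x3 matrix to rendering primaries, the same
        # tone curve applied per channel, then a global desaturation to
        # counter the saturation boost that per-channel curves introduce.
        rgb = aces @ to_rendering.T
        rgb = tonescale(rgb)
        luma = rgb @ AP1_LUMA
        rgb = luma[..., None] + sat * (rgb - luma[..., None])
        return rgb @ to_display.T  # display encoding (EOTF) applied elsewhere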
  • Kevin Wheatley: We'll post test images to compare options X, Y and Z for candidate A.

Meeting #47, March 30th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Daniel Brylka
Caleb Carges
Alex Forsythe
Jean-Michel Gilbert
Zach Lewis
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith

Meeting Notes

  • Kevin Wheatley: We want to go through the status of the candidates. We have a to do list. Jed wanted to produce an LMT to show Open DRT in its best light. The others are "good enough". Francesco offered some images. We wanted to set people's expectations. What to look for and not to look for.
  • Jed Smith: I have not had time to make an LMT. But maybe there is benefit presenting Open DRT as a neutral start point.
  • Kevin Wheatley: We would highlight this in the ReadMe.
  • Alex Fry: Originally we thought they would be presented "blind" but I think we need some guidance notes.
  • Kevin Wheatley: Here is the draft ReadMe. We're only targeting two transforms, 100 nit SDR and 1000 nit HDR.
  • Scott Dyer: The ReadMe doesn't give details on the different candidates. Just general guidance and questions we want answered.
  • Alex Fry: We should ask about shadow saturation as well as highlight saturation.
  • Scott Dyer: Maybe shadows mids and highs.
  • Alex Fry: And note SDR is Rec.709 limited and HDR is P3 limited. So they should look at how the gamut changes between them as well as the dynamic range.
  • Joshua Pines: Is the intended audience colorists?
  • Alex Fry: Also DPs, VFX artists, game devs. Any possible endpoint.
  • Joshua Pines: It must be made clear what input the LUTs expect.
  • Alex Fry: The LUTs are in packages with DCTL for Resolve and .fltransform for Baselight, and install instructions so they appear as custom Output Transforms in an ACES project.
  • Scott Dyer: We can follow what was done for the gamut mapping group.
  • Kevin Wheatley: Initially it will be a small group, but when we widen out we need an OCIO config too.
  • Nick Shaw: Would a VFX artist be confused by these with no associated grade? Particularly Open DRT that needs an LMT. They might expect it to "look good out of the box".
  • Alex Fry: It depends on the artist.
  • Alex Forsythe: An OCIO v1 config may have issues caused by using LUTs. They will stress it and hit corner cases.
  • Alex Fry: That would be the case for all apps, as they are all using the same LUTs.
  • Kevin Wheatley: The GPU path which is limited to trilinear interpolation in OCIOv1 may be a problem.
  • Nick Shaw: Tetrahedral / trilinear is a project level setting in Resolve, so we should note that they must set tetrahedral.
  • Daniele Siragusano: A Baselight DRT family can specify tetrahedral with a flag.
  • Alex Forsythe: We should definitely start with a small group, even just to find issues with the distribution package.
[Scott showed the images he downloaded from Francesco, as well as the Academy collection of test images]
  • Scott Dyer: I think we only need to include selected images as single frames. We need to include "normal" images as well as the gamut issue extremes.
  • Nick Shaw: We should note that people may want to look at the results with the RGC both on and off, so they are not just judging a DRT's handling of out of gamut values.
  • Jed Smith: Open DRT is not intended to work with pre gamut compressed images.
  • Scott Dyer: We can conform everything to 1080p ACES EXRs.
  • Nick Shaw: For the GM testing we provided Resolve and Baselight projects with a pre-conformed sequence, and also a reference QuickTime.
  • Alex Fry: If they are single frames we can include a lot of images, to give variety.
  • Jean-Michel Gilbert: Perhaps we should separate the film and digitally originated images.
  • Kevin Wheatley: Or at least identify the source (if we know). The stocks and scanners probably no longer exist, so the film images couldn't be reproduced exactly.
  • Scott Dyer: The film images were just scanned as Cineon, and brought in as if they were ADX.
  • Nick Shaw: Are they graded at all? The file name says "graded".
  • Scott Dyer: Only a technical balancing grade to bring the various sources to the same start point. We mostly still have the source files.
  • Joshua Pines: Almost no scanners are ADX calibrated. But the ADX10 IDT works fine in our experience. That's what we mostly do.
  • Scott Dyer: The plan is to make a DropBox folder with the packages, and also just the images. Let's package it up as we intend to, then test it with a few close people.
  • Kevin Wheatley: Do we think we can achieve this by next week?
  • Alex Fry: I have some wording about the three candidates. Trying not to be leading.
  • Daniele Siragusano: Why is it not randomized and blinded?
  • Alex Fry: Candidate B (Open DRT) is so different I felt it needed explanation.
  • Joshua Pines: You could make one set called ABC and another DEF, where they aren't the same order.
  • Nick Shaw: We could talk in abstract about them having different approaches, but not specify which.
  • Alex Fry: Is it a problem if testers talk to each other?
  • Joshua Pines: We're not running a psychophysical experiment.
  • Daniele Siragusano: Isn't the idea that B needs an LMT a bit legacy? Everybody uses an LMT these days. Nobody uses a vanilla rendering.
  • Kevin Wheatley: Some do.
  • Joshua Pines: Colorists are normally just handed a show LMT.
  • Daniele Siragusano: Maybe colorists could try their existing LMTs to see what works with what.
  • Alex Fry: Am I right that Open DRT leaves display gamut mapping to the user?
  • Jed Smith: That's not wrong. It tries to be hue preserving within the display gamut volume, and per channel clips to be more saturation preserving outside it. It definitely has less look than A and C. I think VFX artists should maybe get something with a look, because that's what they are used to. Colorists can experiment.
  • Alex Fry: As it stands today, VFX artists and colorists get the same transform.
  • Nick Shaw: Would you give a VFX artist T-Cam with no look grade? Isn't that like Open DRT, which is designed to be very neutral, and needs "pleasing looking" grading applied?
  • Daniele Siragusano: The OCIO config we ship has no look. There's a difference between a technical rendering and a look.
  • Alex Fry: In my experience animated features don't use an LMT. And in VFX, an LMT can get in the way, so a stock RRT is often used.
  • Daniele Siragusano: T-Cam tries to give a clear view of the scene without creative changes. Isn't the idea that we don't put a look into a transform, so you can have many looks? I don't buy this default LMT thing.
  • Jed Smith: I see the default LMT as something to show people what's possible.
  • Joshua Pines: It comes down to the bimodal nature of our targets. Some want something they can drive in any direction. Others want it falling off the truck looking good or they won't use it. The default LMT is to cover both.
  • Daniele Siragusano: Isn't that part of the testing – how many people think it looks bad because it has no look?
  • Kevin Wheatley: So are we leaning towards removing the descriptions?
  • Nick Shaw: Or merging them into one generic one which says there is variation in philosophy, but doesn't point to which.
  • Scott Dyer: That's what my initial ReadMe tried to do.

Meeting #46, March 23rd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Charles Boileau
Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Luigi Giardiello
Jean-Michel Gilbert
Thomas Mansencal
James Pickett
Matthias Scharfenberg
Daniele Siragusano
Jed Smith

Meeting Notes

  • Scott Dyer: I've only looked at SDR so far, but it feels like the contrast of the new (MM) curve is noticeably lower than ACES 1.0. More like K1S1. That's not necessarily bad. It's easier to add contrast than dial it out. It raises the subject again of a default LMT. Testing will guide us to whether extra contrast belongs in an LMT or the tone curve.
  • Nick Shaw: What matters more is if the contrast is lower to the same degree in SDR and HDR.
  • Kevin Wheatley: Contrast can come from two places: from taking too much scene contrast and squeezing it into a display, or from the rate of highlight roll off.
  • Scott Dyer: The range captured seems good. 
[Scott showed a plot comparing Jed's (MM) curve, K1S1 and ACES 1.0]
  • Scott Dyer: I feel it loses a lot of its "pop" on images. I'm anticipating a reaction to such a dramatic look change.
  • Thomas Mansencal: Previous releases shipped with an LMT to emulate the previous look. I assume we would do that again, insofar as that is possible.
  • Kevin Wheatley: That's a reason to keep the transform simple. It makes making that type of LMT easier.
  • Alex Fry: Was there disagreement about contrast in the RAE document?
  • Scott Dyer: I don't think anybody wanted more contrast. Some wanted less, and others were ok with it as is. People are familiar with K1S1, so that's not a bad start point. You could make a 1D LUT to emulate v1-like contrast, although the color wouldn't match.
  • Daniele Siragusano: I think a lot of the contrast in ACES 1.0 came from the fact that many of the test images were film originated, so there was roll off in the negative which wasn't completely unbuilt. So there was a need to add highlight contrast. Digital cameras don't have this.
  • Scott Dyer: When developing 1.0, a film print-through curve was the reference.
  • Scott Dyer: We have them, but they are more recent than ACES 1.0 development.
  • Daniele Siragusano: Those are good for testing extremes, but they aren't lit like a DP would typically do.
  • Scott Dyer: Some DPs do use those kinds of lights.
  • Thomas Mansencal: We need to have an appropriate dataset. We need to provide a standard set of images to testers as well as asking them to try it on their own material. To remove biases a company may have.
  • Scott Dyer: That's what was done with the gamut compressor tests. How many images should we choose? It can't be too big, but must cover a range.
  • Thomas Mansencal: We need film originated, CG, digital cameras.
  • Kevin Wheatley: What else are we missing before releasing test candidates?
  • Alex Fry: I've updated my repo so they all have the same curve. Candidate A is based on ACES 1.0, but with no sweeteners and the MM curve. B is Open DRT. C is the ZCAM DRT with the MM curve and my modified LMS matrix to reduce the cyan skew of blues.
  • Scott Dyer: To confirm, candidate A still uses AP1 rendering primaries? We didn't adopt the primaries Jed suggested or look for others?
  • Alex Fry: Not yet. But we can discuss that. Jed's latest Open DRT removes several things that were in previous versions, with an intent to move them to a default LMT. Should we put that LMT in, to show it "at its best"? I've made LUTs for Baselight and Resolve, and a reference YouTube clip. The repo includes the Nuke script that generates the LUTs, which therefore includes all the DRTs.
  • Scott Dyer: I think what we ship out should give every candidate its best chance. With OpenDRT it's hard, because the LMT will remove artifacts, but those artifacts are part of the rendering. For candidate A, I am wary of using all the guts of v1. There could be other stuff in there that's limiting. I would make sure that's as simple as possible – tone curve, primaries, display encoding. I worry that using AP1 rendering primaries, when people may be looking at material rendered with ACEScg primaries, may give misleading feedback.
  • Alex Fry: Jed's RGB DT could be that candidate.
  • Jed Smith: My LMT for Open DRT adds a bit of gamut compression, a bit of non-linear chroma increase, and maybe a bit more contrast. What I would put in it for testing would depend on the audience. How are they using it? What information will they be given?
  • Kevin Wheatley: I imagined a single package for a range of users, as in the real world. But if we give them an LMT, which they can use or not, for only one of the candidates, is that a valid test? Though if the others don't need an LMT, that's probably acceptable.
  • Jed Smith: Will they be grading it?
  • Kevin Wheatley: A core use would be grading, and in comp we always get a grade from somewhere, or we make one. So there's always a grade, even if it's only minor.
  • Alex Fry: In VFX comp you don't tend to grade the plate, but in full CG and animation you do.
  • Jed Smith: So a minimal LMT that doesn't change contrast would help OpenDRT look "pleasing". I can make something.
  • Kevin Wheatley: Are the rendering primaries for A problematic? And for C are we happy with Alex's "bodged" matrix?
  • Jean-Michel Gilbert: I can't find the bodge matrix numbers in the BlinkScript.
  • Alex Fry: They are parameters in the node.
  • Kevin Wheatley: So we're happy with A and C, and just waiting for Jed's LMT for Open DRT… So what about packaging and documentation? And are we missing anything from our image set?
  • Alex Fry: We need to sort through submissions, and some of the gamut mapping test images. Maybe some of the Stuttgart images. Real production images would be nice but those are hard to clear.
  • Kevin Wheatley: What about STEM 2?
  • Jean-Michel Gilbert: I could get permission for some of our older stills.
  • Francesco Giardiello: I can share some material we shot for the AMF / IDT groups.
  • Kevin Wheatley: I think Jean-Michel posted something on ZCAM.
  • Jean-Michel Gilbert: I did an experiment with compressing in XYZ instead of JMH, but using a compression ratio derived from the J component of the JMH gamut compression.
[Jean-Michel showed the effect of his modified ZCAM on dark blues, which became less cyan]
  • Alex Fry: Maybe with my modified matrix that wouldn't be needed.
  • Nick Shaw: Alex, have you now modified your bodged matrix to fix the white point shift?
  • Alex Fry: Yes
  • Scott Dyer: I'll start a thread on ACES Central to discuss what image set we should use for testing, and update the DropBox Paper.

Meeting #45, March 16th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Lars Borg
Chris Brejon
Daniel Brylka
Alex Forsythe
Francesco Luigi Giardiello
Jean-Michel Gilbert
Zach Lewis
Thomas Mansencal
Carol Payne
James Pickett
Matthias Scharfenberg
Shane Smith
Jed Smith
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: Jed has made some posts on ACES Central, and Alex has incorporated Jed's curve into Matthias' work.
[Alex showed his unreleased update to the BlinkScript ZCAM DRT with optional use of the same curve as OpenDRT]
  • Alex Fry: Jed's curve only uses the peak luminance from the SSTS coefficients, but the desat in the DRT still uses the mid grey value. I can now make new LUTs using the same curve. We could also make a per channel DRT using that same curve.
  • Kevin Wheatley: So do we then have a tone scale we are happy using?
  • Scott Dyer: I like it, but I need to look at pictures on various displays. I also need to understand the constants, so we can document and justify them, e.g. the flare.
  • Kevin Wheatley: My question was, do we need to account for flare (using what model?) or is it handled by relative black? Also with a CAM based system do we need separate surround compensation or does the CAM do that? Thirdly what is the parameter space and how do we derive the parameters?
  • Carol Payne: We can load it on several displays (CLED, X310) in the Netflix office. Are there any particular targets we need? What about a non-reference display like an LG OLED?
  • Scott Dyer: The shadow behavior across different displays is something I wanted to look at.
  • Kevin Wheatley: We're looking at a BT.1886 SDR display, a 1000 nit P3-D65 display, and maybe a third one would be a DCI theatre with D60ish white. That covers the main bases. Then there's sRGB with a third environment. The SDR and HDR should be on the same display with both encapsulated in a PQ encoding.
  • Nick Shaw: When we encode SDR into PQ, if we set a black level representative of a reference SDR display (R-REC-BT.2035), do we emulate the per-spec BT.1886 black handling, which I don't think most real displays use, or just use a 2.4 gamma offset by the black level?
  • Kevin Wheatley: I think we measure a real display, and see what it does.
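[Editor's note: a minimal sketch of the two black-handling options Nick contrasts above, assuming a 100 nit white and an illustrative 0.1 nit black. The a and b constants follow the formula in BT.1886 Annex 1; the resulting luminances would then be encoded into PQ for comparison.]

```python
def bt1886_eotf(v, lw=100.0, lb=0.1, gamma=2.4):
    # Per-spec BT.1886: L = a * max(v + b, 0) ** gamma, with a and b
    # derived from the display white (lw) and black (lb) luminances.
    k = lw ** (1 / gamma) - lb ** (1 / gamma)
    a = k ** gamma
    b = lb ** (1 / gamma) / k
    return a * max(v + b, 0.0) ** gamma

def gamma_plus_offset(v, lw=100.0, lb=0.1, gamma=2.4):
    # The simpler alternative: a pure 2.4 gamma, scaled and offset so
    # code value 0 maps to display black and 1 maps to display white.
    return lb + (lw - lb) * v ** gamma
```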
  • Alex Fry: Do we make the LUTs all output PQ?
  • Jed Smith: I was summarizing my experiments, and showing my three different models. And I posted a per channel rendering using the same tone scale as the others.
  • Alex Fry: Our initial fall back candidate was just the current SSTS with sweeteners disabled.
  • Scott Dyer: AP1 is not an ideal rendering gamut, so I would be in favor of looking at other options. I don't think there is much value in the "least changes" version.
  • Alex Fry: It would be good to make the fallback as good as it can be, rather than "the one you're not meant to pick."
  • Kevin Wheatley: Thomas noted that ProPhoto was optimized to minimize hue skews. How do we pick a rendering gamut?
  • Jed Smith: My Nuke node lets you expand primaries out along the line of the original primary, and then offset perpendicular to that. I experimented with that, trying to match the perceptual hue lines.
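[Editor's note: the geometry Jed describes might look something like the sketch below – scale a primary along the line from the white point through it, then nudge it perpendicular to that line. The function and parameter names are hypothetical, not Jed's.]

```python
import numpy as np

def adjust_primary(primary_xy, white_xy, scale=1.0, offset=0.0):
    # Move a primary in xy: 'scale' expands it out along the line from
    # white through the original primary; 'offset' shifts it at 90
    # degrees to that line (unit perpendicular).
    p, w = np.asarray(primary_xy, float), np.asarray(white_xy, float)
    d = p - w
    perp = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    return w + scale * d + offset * perp
```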
  • Scott Dyer: We did similar experiments and tried to find error minimization. We never found one optimal space that worked for everything. The reason for AP1 is long, complicated and not documented. They worked well enough at the time, but were compromises.
  • Jed Smith: My RGB DRT follows the ARRI K1S1 model – transform input gamut to rendering gamut; apply tone scale; rendering gamut to display gamut, with 10% desaturation; inverse EOTF. Going back to mid grey being the focus for the gamut compression, my curves have no constraint on where mid grey ends up, as that would add a lot of math complexity. It starts at 10 nits at a 100 nit peak, and increases from there. The curve I'm currently using is the Michaelis-Menten with dual (pre and post tone map) contrast.
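[Editor's note: a minimal sketch of the K1S1-form pipeline Jed outlines. The Michaelis-Menten constants and the mean-based desaturation axis are illustrative assumptions, not Jed's actual parameterisation; with these numbers, mid grey (0.18) happens to land near 0.095 of display peak.]

```python
import numpy as np

def mm_tonescale(x, pre=1.2, post=1.35, s=0.6):
    # Michaelis-Menten sigmoid x / (x + s), with dual (pre and post
    # tone map) contrast exponents. All constants are illustrative.
    xp = np.maximum(x, 0.0) ** pre
    return (xp / (xp + s)) ** post

def rgb_drt(rgb_in, in_to_rend, rend_to_disp, eotf_inverse, desat=0.9):
    rgb = in_to_rend @ rgb_in                 # input -> rendering gamut
    rgb = mm_tonescale(rgb)                   # per-channel tone scale
    rgb = rend_to_disp @ rgb                  # rendering -> display gamut
    mean = rgb.mean()                         # illustrative desat axis
    rgb = desat * rgb + (1.0 - desat) * mean  # ~10% desaturation
    return eotf_inverse(np.clip(rgb, 0.0, 1.0))  # display encoding
```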
  • Kevin Wheatley: We could forward compute where grey ends up through that curve.
  • Jean-Michel Gilbert: Alex, do you have the code for your modification to the ZCAM matrix?
  • Alex Fry: I need to tweak them because there is a slight white shift, but I will post them.

Meeting #44, March 9th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw
Scott Dyer

Rémi Achard
Lars Borg
Daniel Brylka
William Feightner
Alex Forsythe
Jean-Michel Gilbert
Thomas Mansencal
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith
Jed Smith

Meeting Notes

  • Kevin Wheatley: Jed made some posts on tone mapping. They are a good summary of his work. And we need to clarify our tone scale requirements so we can move towards testing. Jed shows two variations on an equation, with exposure changed on either the scene or display referred side.
  • Daniele Siragusano: I think it's more surround compensation than exposure.
  • Kevin Wheatley: Should this function do the surround compensation, or do that with a CAM if we're using one?
  • Jed Smith: Any reason a power function in display linear is unsuitable for surround compensation?
  • Nick Shaw: I don't think it's unsuitable, but we need to pick the exponent values to use. Maybe a CAM could guide us to those.
  • Jed Smith: I don't mention surround compensation in that thread. I assumed that would be an additional tweak on top of the function. I was looking at the difference between applying contrast in scene and display referred.
  • Nick Shaw: The maths would be simpler if you could apply surround compensation just by modifying the exponent in the existing equation.
  • Jed Smith: My post is a summary of my previous posts. Daniele posted an excellent tone scale function, and it took me a while to understand it. The name "spring function" means a sigmoid function you can scale in y without the slope through the origin (the toe) changing. That's great for HDR – the mid grey and peak can change without the toe changing. A spring function is easy with scene and display side contrast. You set the x scale and y scale to the same thing, then you have an additional multiplier on the x scale to give a scene referred exposure control. I experimented with a pivoted contrast control, so mid grey doesn't change. That's good for SDR but not HDR, because the highlights get pushed very bright. I also tried a pivoted scene linear contrast of 1.3 with linear extension for all luminances, which works better for HDR. But I felt SDR needs more contrast.
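[Editor's note: the "spring" property Jed describes can be shown with any sigmoid of unit slope at the origin, e.g. f(x) = x / (x + 1): scaling x and y by the same factor s leaves the slope through the origin unchanged, since d/dx [s·f(x/s)] = f′(x/s). A sketch, with illustrative names:]

```python
def spring(x, s, exposure=1.0):
    # f(x) = x / (x + 1) has slope 1 at the origin. Scaling x and y by
    # the same factor s moves the peak/shoulder without changing the
    # toe slope; 'exposure' is the extra x-only multiplier Jed
    # mentions, acting as a scene-referred exposure control.
    xs = exposure * x / s
    return s * (xs / (xs + 1.0))
```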
  • Kevin Wheatley: The post raises some questions for requirements. Should contrast be fixed for all peak brightnesses, and then there is a separate surround compensation which might modify that contrast? Or should the tone scale be altered with peak brightness? And if you have a higher contrast display should we adjust the tone scale or pin it?
  • Daniele Siragusano: The model allows all those options by just driving things differently.
  • Kevin Wheatley: We have to pick one for our out-of-the-box implementation.
  • Daniele Siragusano: The model is monotonic and simple. I would call the adjustment pre/post tone curve, not scene/display referred, because it's hard to say in a DRT where it changes. It's a continuous process, with scene-referred input and display-referred output. And the scale for surround compensation is small, so if you think of 0 and 1 being pinned, a gamma affects mid-tones more than highlights or shadows. It compresses the highlights or shadows, depending which way you change it. And an exposure change before the tone map does visually almost the same thing. With small changes they are very comparable. For surround you could even get rid of the exponent, and do a small exposure change before tone mapping. It makes sense. If the background is brighter, you make the image brighter, and vice versa. I wouldn't bother with a sophisticated CAM for this.
  • Nick Shaw: How do you work out how much darker or brighter? That's why I suggested using a CAM for guidance, and approximating it with a power function.
  • Daniele Siragusano: Yes, if you can find an image appearance model. I've not seen one, and a color appearance model will heavily overcompensate.
  • Daniele Siragusano: We determined settings ourselves with a small group of colorists, and got much lower values than Bartleson-Breneman.
  • Alex Forsythe: Bartleson-Breneman is generally considered an overcompensation.
  • Daniele Siragusano: We used to only have one output transform for cinema and TV, with maybe a colorist to tweak things.
  • Lars Borg: The spread of curves is so wide, none would fit all observers.
  • Kevin Wheatley: Another aspect is how you map scene range into display range. First, is black absolute, or relative to the display? I think we agreed it should be relative. Second, where does mid grey go? Magic numbers with interpolation, or do we need a justification for those?
  • Joshua Pines: Historically mid grey at 10% of peak for SDR has been consistent. For HDR all bets are off. 10% of what? Reference white? There are two camps. Some want it the same as SDR. Some want it up to a stop brighter.
  • Kevin Wheatley: So do we put it say a stop higher, or leave it as is and leave it to grading?
  • Daniele Siragusano: If we start with 10 nits at 100 nit SDR, we need to decide how much it goes up with each stop of peak luminance. Or does it stay at 10 nits?
  • Nick Shaw: The initial ACES HDR Output Transforms used 10 nits, and people wanted it higher, so now we have 15 nits.
  • Joshua Pines: It was made a little brighter based on a small sample of features being done at the time. But people seem to like 15 nits for 1000 nits peak.
  • Jean-Michel Gilbert: I have found I need 20 nits to make HDR match SDR in a dim viewing environment.
  • Alex Forsythe: There was a fair amount of testing with Marvel and Disney that led to 15. And we added the option to shift the mid-tone with a parameter which can be exposed to the user. If you grade in an exposure change, the SDR is all pushed too high in the tone curve and blows everything out.
  • Daniele Siragusano: A parameterised output transform is a headache, because you want to change it per shot.
  • Scott Dyer: Why not a separate exposure node? I never intended people to use it as a user parameter.
  • Daniele Siragusano: If you have a slider, people want to touch it per shot. You should just make a different OT which is brighter.
  • Alex Forsythe: The output transform certainly shouldn't be a "creative knob".
  • Daniele Siragusano: One grading application exposed it as a user setting for the project.
  • Joshua Pines: If people want to go crazy using everything all the way up to 4000 nits, there can't be a one size fits all which gracefully maps that to SDR. What we have is a reasonable starting point. It's still early with HDR to know where middle grey should fall.
  • Scott Dyer: What decisions do we need to make to turn Jed's work into a proposed model we could adopt? And how do we evaluate it? Do we do a separate test phase? Or pick something that makes sense theoretically and roll that into the other testing?
  • Alex Fry: I'd like to bolt it into what we have now.
  • Jed Smith: Open DRT currently uses the Michaelis Menten post-tone-map contrast version. That was also Daniele's proposal.
  • Alex Fry: We should be able to easily add it to the RGB curve model.
  • Jed Smith: I experimented with keeping SDR and HDR contrast the same, and found that didn't look good. HDR needs less contrast to my eyes.
  • Kevin Wheatley: So because your display has more dynamic range, you don't need to add so much contrast to the image?
  • Jed Smith: For my 800 nit TV, HDR needs a contrast of 1.15 to 1.2, and SDR needed about 1.35.
  • Lars Borg: BBC lowers the gamma of HLG as brightness increases. BT.2100 has a formula for gamma from peak luminance.
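[Editor's note: the BT.2100 formula Lars refers to gives the HLG reference system gamma as a function of nominal peak luminance:]

```python
import math

def hlg_system_gamma(peak_nits):
    # ITU-R BT.2100: gamma = 1.2 + 0.42 * log10(Lw / 1000)
    # e.g. 1.2 at 1000 nits, ~1.33 at 2000 nits, ~1.07 at 500 nits.
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)
```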
  • Nick Shaw: The results of the BBC's experiments run counter to classical color science Stevens Effect compensation.
  • Kevin Wheatley: The value 1.35 is similar to what I've seen in my averagely average data. And it's similar to the SSTS for 100 nits.
  • Jed Smith: That value is the exponent used post tone map.
  • Joshua Pines: Which should be the same as the slope on a log-log plot.
  • Jed Smith: My graph is PQ on y and log2 on x.
  • Joshua Pines: PQ is effectively log in the mid range.
  • Kevin Wheatley: Log luminance might be a better y-axis.
  • Alex Forsythe: With log luminance, pay attention to whether the PQ data is black offset or not.
  • Joshua Pines: Traditional film response plots are log-log, just base 10 not 2.
  • Jed Smith: I just need to work out what value to put in the middle of a log y-axis. Maybe mid grey.
  • Nick Shaw: Or one mid grey, like 10 nits. It will vary for the different curves.
  • Alex Fry: For evaluation, I am keen to do that in the context of a real display transform, rather than black and white, because how a curve is applied makes a big difference to the appearance.
  • Kevin Wheatley: It boils down to how many rounds of testing do we have.
  • Nick Shaw: We don't want too many permutations of tone curves and transforms.
  • Jean-Michel Gilbert: Two display types – SDR and HDR – becomes multiple with different HDR types.
  • Alex Fry: We have to stick to one, e.g. 1000 nits.
  • Kevin Wheatley: Do we start with the magic numbers 10 and 15?
  • Jean-Michel Gilbert: I use 20 for HDR. With SSTS in the ZCAM DRT I had to raise it to that to get the blacks to look the same as SDR.
  • Daniele Siragusano: I think we should have a model where if you keep mid grey at 10 nits, 2 nits stays at 2 nits, whether it's HDR or not.
  • Jean-Michel Gilbert: I was using the same monitor for SDR and HDR and adjusted a control Windows has to get an SDR window where it should be.
  • Kevin Wheatley: I think we want to keep those kind of variables out of our testing, and encode SDR and HDR into the same PQ container and look at them on the same PQ monitor in the same HDR mode.
  • Scott Dyer: US moves to Daylight Saving before our next meeting. We'll work out how we deal with time changes, and let you all know.

Meeting #43, March 2nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Nick Shaw

Rémi Achard
Chris Clark
Alex Forsythe
Francesco Luigi Giardiello
Thomas Mansencal
Carol Payne
Daniele Siragusano
Shane Smith
Jed Smith

Meeting Notes

  • Kevin Wheatley: Apologies for the last minute cancellation of last week's meeting.
  • Alex Fry: I've been trying to deal with the ZCAM model's cyan issues. Hand tweaking the numbers in the LMS matrix to pull the distortion away from camera spaces we may have as sources. It's a variation on Matthias' "Abney's Abyss".
[Alex showed a chromaticity plot with original and modified ZCAM hue lines. Then images rendered through both.]
  • Alex Fry: I'm trying to keep the red and green corners as unchanged as possible, and stretch the blue corner out to make the lines straighter inside the spectrum locus. Ideally we would keep lines as they were at the start, and then extrapolate linearly before they curve round. Does anybody have any ideas how this could be done in a more robust way? Like Daniele's suggestion of re-fitting the model.
  • Thomas Mansencal: We don't have all the data sets to fit.
  • Kevin Wheatley: It would be interesting to understand where the curvature comes from. We've seen graphs with curves like this from other processes. It may be a function of the cone space. But it may just come from the fitting, rather than thinking about how the eye works. Is there a simpler approach that might give us similar shapes?
  • Nick Shaw: Is the shape of those curves exposure invariant?
  • Alex Fry: They change a little, but not much as you vary exposure. I could have averaged them for a range of brightnesses.
  • Thomas Mansencal: ZCAM isn't exposure invariant. One point of a CAM is to model changes in appearance with brightness. Say if it's fitted for the Bezold–Brücke shift.
  • Alex Fry: I'll try and make graphs which run through different levels.
  • Kevin Wheatley: Did you just vary matrix numbers? Or did you flatten it into four coordinates?
  • Nick Shaw: Would that work? Because one matrix is applied to non-linear data, and one is viewed through a non-linear transform.
  • Daniele Siragusano: Which matrix were you modifying?
  • Alex Fry: The XYZ to ZCAM LMS matrix. But just by varying matrix numbers.
  • Kevin Wheatley: You're only viewing a 2D result, but the transform is 3D.
  • Alex Fry: I'll have to double check there are no weird things happening in 3D.
  • Thomas Mansencal: Could you transform to a polar space and just target one range of hue? Changing the matrix affects everything, like the blue highlight fix. Maybe we could even just use the hue correlate of the model.
  • Nick Shaw: Then you have a hue qualified correction. Does that kind of patching risk adding noise?
  • Kevin Wheatley: We don't want to end up with things like the existing tweaks, which people don't like.
  • Nick Shaw: Replacing the red modifier with a blue modifier!
  • Daniele Siragusano: Changing the matrix changes decorrelation of luma and chroma, because it's like the Ycc matrix. The K weights in video language. This has many implications.
  • Thomas Mansencal: It's like changing the matrix of the a/b planes for L*a*b*.
  • Alex Fry: Does anybody know a better way to change the matrix, keeping some parts constant?
  • Thomas Mansencal: It's hard, if not impossible, because it's a linear transform. Rotating and scaling the space.
  • Alex Fry: Nine numbers is a lot of combinations.
  • Nick Shaw: Are you keeping row sums to unity? So it's only six numbers, because the other three are calculated?
  • Alex Fry: I changed all the numbers, but preserved row sums.
  • Thomas Mansencal: We could try to optimize it, based on some constraints.
  • Kevin Wheatley: Since the model is only well behaved for real colors, do we need a pre-CAM adjustment to bring the values in? We're already diverging from ZCAM by adding the tone mapping. We could split the core rendering and the mapping to devices, for which ZCAM might be fine as is.
  • Alex Fry: We talked before about putting something like the RGC on the front of the DRT.
  • Thomas Mansencal: It makes sense. You always pre-condition data before fitting.
  • Carol Payne: We wouldn't need to stick to the RGC parameters. But also the footage might already have had the RGC applied. We did test repeated application of the RGC, though, and it didn't seem to cause problems – the effect just diminishes.
  • Nick Shaw: Even if the RGC was applied, grading could push it to other places. Particularly crude offset type grading in ACEScct.
  • Alex Fry: I experimented with just clipping outside the spectral locus.
[Alex showed the effect of his tweaks on the "cyan intrusion" - 27:45]
  • Kevin Wheatley: We could do something similar to the derivation of the RGC parameters, looking at where values might start and where they need to end. The RGC brings things into AP1. We just need to be within the bounds where the model behaves well.
  • Nick Shaw: The RGC was not designed to be hue preserving, so it could add skews. We did have my "hexagonal" variation which preserved straight lines in xy. But that created some cusps as it transitioned across the primaries. I had a version which smoothed those, but it wasn't invertible.
  • Daniele Siragusano: We haven't discussed the tone curve, which has a dramatic effect on how all this behaves. The non-linear transform in ZCAM is a modified PQ. Could we try e.g. ACEScct? PQ has a large division, which is a nightmare for floating point calculation, especially half-float. Also the model creates bent lines which we have a problem with, but we are happy with areas where it doesn't bend much. We already have a mapping with straight lines.
  • Alex Fry: We could try one with straight lines.
  • Daniele Siragusano: I think Open DRT does this.
  • Kevin Wheatley: Should we split it, and take Jed's work and try the alternate gamut remapping, and see if that's what causes the problems?
  • Daniele Siragusano: It seems this model does good things for mapping to the actual display. Except very green liquids, where we prefer saturation preserving to hue preserving. A gamut compression which compresses a bit then at some point just clips, does a transition from hue preserving to saturation preserving. The renderings look good, so maybe you've found something that's fine. But check the effect on the other correlates. You may need to find a compromise where you look at all aspects of the model.
  • Nick Shaw: The ZCAM model is used for a number of things in the rendering. We don't have to use the same modified version everywhere. That tweak is specifically to make the M desaturation straighter when compressing into the display gamut. Although that might add a load of extra steps, going back out of one ZCAM model and into another.
  • Daniele Siragusano: That seems too hacky to me. I would try swapping out the model's non-linear curve.
  • Nick Shaw: You would then need to change the matrices, because they were fitted assuming the PQ curve.
  • Thomas Mansencal: Something else related to our work has been done by the lead developer of darktable. He's looking for a new uniform color space for adjusting saturation, done by fitting Munsell notation. Google's Material color system uses a new space derived from CAM16.

Meeting #42, February 16th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Michael De Caria
Alex Forsythe
Francesco Giardiello
Jean-Michel Gilbert
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: There have been a few posts on ACES Central following the discussions last week about the color of Fairy liquid. People have decided what it looks like to their eyes. Mostly green but some say they see a tinge of blue. The plots suggest it's in the green area. Alex's plots made me think about where that green should map to in a smaller gamut. 
  • Alex Fry: I'm not sure the bottle in the products image is Fairy. I think it's Palmolive. But we're now focusing on Fairy, which seems to be available everywhere. I measured the spectrum with a CR250 in sunlight, and a reference sample of white paint in the same light. Also a Listerine bottle and the Fairy bottle on my office desk. I bent the values in the images around to make the chromaticities close to those from the spectro. I could compare renderings in Rec.709, P3 and HDR with the actual bottle in my office. Neither P3 nor 709 can get close to the actual saturation of the liquid. Clipped renderings initially seem to have more punch, so feel closer, but looking closer they are more yellow than reality. Hue preserving lets P3 and 709 look the same. But there is something nice about the skewed ones. Is it a feature or a bug? Might LMTs skew towards display primaries to give some colors more punch?
  • Daniele Siragusano: What does the green of the label look like compared to the liquid?
  • Alex Fry: The label is a bit more yellow and less saturated. It's a graphic treatment. Advertising images do vary.
  • Scott Dyer: What are we looking to find out with this?
  • Alex Fry: Is it a fault with the ZCAM model producing something less desirable? The things people like about other renderings are technically bugs. They vary between 709 and P3.
  • Daniele Siragusano: It's two different intents, trading hue preserving and saturation preserving.
  • Kevin Wheatley: It shows that even if we had a better IDT, it's not the source of the problem. And is it a problem, or an alternative choice? Is hue preserving at the expense of all else the right approach? It's a different problem from the blues, which come from the shape of the transform curves, perhaps due to a lack of saturated colors in the training set.
  • Alex Fry: Blue may curve, but green is pretty straight.
  • Kevin Wheatley: So there is a choice to be made.
  • Alex Fry: Hue preserving is desirable because an LMT has something that's stable after it. But I see the appeal of an LMT that collapses out of gamut colors toward a chosen set of rendering primaries. You could choose to collapse toward 709 while rendering in P3. It's something you can't do if the skews are part of the DRT.
  • Daniele Siragusano: You're describing a grade.
  • Nick Shaw: Isn't that what we want – to let the colorist and client have choices, without the DRT getting in the way?
  • Alex Forsythe: Playing devil's advocate: are there expectations that particular colors render a particular way, which creates more work for the colorist? And people may dismiss the DRT as broken.
  • Alex Fry: Expectations are an issue. This we can explain, but the "pit of cyan" is harder.
  • Nick Shaw: Couldn't a default LMT make sure it matched expectations out of the box, but people can turn it off and take manual control?
  • Daniele Siragusano: If an LMT has fixes for things that never look right, maybe it's not the right thing.
  • Nick Shaw: If the DRT is neutral, the default LMT produces a "pleasing looking" rendering. But if the LMT is fixing something that's wrong, then there's an issue with the DRT.
  • Alex Forsythe: It's problematic if the DRT is not doing something that we define it should. If it does that, and the result is not pleasing, that's by design. It's about expectations for each piece, and verifying each part does what it should.
  • Nick Shaw: So LMTs need to be a first class citizen of ACES 2.0, with a defined place where they go and the default LMT is loaded by default in every app.
  • Matthias Scharfenberg: People might have to incorporate the default LMT in every LMT they make, which could complicate their processes.
  • Kevin Wheatley: So this suggests IDTs aren't the root of these issues. It suggests we may need to modify behavior in the blues, to straighten out lines and make things more predictable. We also wanted to talk about tone-scale. Jed made some posts and Scott's been doing some work.
  • Scott Dyer: Jed's taken the curve he posted about recently, and turned it into a model which adapts based on peak luminance. And he made plots comparing different models. I have similar plots. We need to look at how whatever tone-scale we pick works in the shadows towards black. The SSTS has a kludge to force minimum luminance to zero in PQ. I like Jed's function, but there is a lot of variation in the lower region. But in his most recent post that version stays more fixed.
  • Jed Smith: In my tests it works better when the curve maps zero to zero. Each version has pros and cons. I would defer to Daniele to say which is best. The main difference between this and what I called the Naka-Rushton one is this doesn't have intersection constraints. With a constraint, if you adjust mid grey contrast, it affects the shadows. The stability coming out of black in this function is an advantage. I counteract that behavior in my version of Naka-Rushton, so the behavior is similar.
  • Scott Dyer: When I designed the SSTS, one principle was that increasing peak luminance increases the scene range displayed. With SSTS it increases the scene value mapped to peak at 2000 and 4000. Maybe those are too high. We all want middle grey to increase in brightness as we go up in dynamic range. Currently it's inconsistent. I like that this one increases consistently, while maintaining the slope. It's increasing the number of stops mapped to the display, which is great. My concern is the blacks.
  • Daniele Siragusano: You can drive the Michaelis-Menten one the same way, depending how you parameterise the model. The model is flexible, not constrained. But yes, it makes sense to make the image brighter with more dynamic range.
  • Scott Dyer: And we have Josh's factor of two between 48 and 100 to make them the same.
  • Daniele Siragusano: Not just 48 nits, but any large screen with dark surround. For large screen, maybe you have another factor corresponding to the viewing condition changes.
  • Scott Dyer: We can nail down a tone-scale independently of color. It's a huge variable in the test LUTs. We don't want people reacting to tone-scale differences instead of color handling. What do we need to do to get agreement on a v2 ACES tone-scale?
  • Kevin Wheatley: If we ignore dim v dark and pick e.g. dim, and say there is some function referencing mid grey to peak output. What is the mapping?
  • Alex Forsythe: We're not saying it's linear – twice as much at 200.
  • Joshua Pines: It would be an interesting colorist poll asking going from 100 to 1000 or 1000 to 100, where do you want grey "falling off the truck"? Is there consensus or two camps?
  • Nick Shaw: Was there a poll which lead to the current 15 nit value?
  • Alex Forsythe: Long story! We may want to tweak some things, but we should stay close to the current values people seem to accept. Maybe we interpolate between those two that we have.
  • Nick Shaw: So do we design some candidate functions that pass through [100, 10] and [1000, 15] and ask people which they like at e.g. 500 nits?
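[Editor's note: one candidate of the kind Nick suggests – a power law (a straight line in log-log) through [100, 10] and [1000, 15] – is sketched below. Purely illustrative; other curves through the same two points would be equally valid candidates.]

```python
import math

def mid_grey_nits(peak_nits, p0=100.0, g0=10.0, p1=1000.0, g1=15.0):
    # Power-law interpolation through (p0, g0) and (p1, g1):
    # grey = g0 * (peak / p0) ** alpha, linear in log-log.
    alpha = math.log(g1 / g0) / math.log(p1 / p0)
    return g0 * (peak_nits / p0) ** alpha

print(mid_grey_nits(500.0))   # ~13.3 nits at 500 nits peak
```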
  • Joshua Pines: I don't think there will be much variation in interpolating between those.
  • Kevin Wheatley: What else do we need to do?
  • Scott Dyer: There are choices. People are sensitive to changes near black. If near black behavior changes, how do we want it to change? In theory, if a display goes darker, you want to see more detail down there. We need to look at it in detail, because it all looks similar on code value plots. We need confidence that what we intend actually happens on a display. We specify intended luminance, and PQ is theoretically absolute. But is it really? We want to make sure the feedback isn't that all displays look different.
  • Daniele Siragusano: Is OCES absolute or relative in the blacks?
  • Alex Forsythe: It was never clearly defined. If they are output-referred nit levels, are they relative to a "theatre black"?
  • Daniele Siragusano: To discuss shadow behavior you need to nail down if zero is the darkest a display goes, or zero nits.
  • Alex Forsythe: We need to put together hard proposals and look at the pros and cons.
  • Daniele Siragusano: Whether we use the natural flare as our shadow rendering or not is the question that goes with whether black is absolute or relative in the shadows.
  • Scott Dyer: Currently it's inconsistent. For v2 it should be defined and documented. There are weird differences between SDR and HDR. SDR has a weird definition of display black that it stretches things to.
  • Daniele Siragusano: It also affects how gamut is defined. Is red [1, 0, 0] or [1, small number, small number]? If this isn't defined the gamut is not defined.
  • Alex Forsythe: There is a relevant discussion on relative vs absolute black in SMPTE 432-1.
  • Kevin Wheatley: So is there a good reason not to start with the assumption that it's relative to display black?
  • Daniele Siragusano: In Baselight we use relative black. But then in plots you need to consider that, and add a linear light offset which is the flare.
  • Joshua Pines: Playing devil's advocate, if you have a zillion displays with different contrast ratios, it would seem easier if OCES was absolute, and you would have rules about how you go from absolute to these different displays, rather than having zero floating around relative to some unknown output device.
  • Daniele Siragusano: I've done this three times in three years, and always came to the conclusion that the default rendering is identical to what the natural flare would do.
  • Alex Forsythe: That's what's covered in the annex in 432-1. And relative more elegantly and simply handles it. So it's been covered before.
  • Joshua Pines: So does middle grey fall relative to different displays' blacks or an absolute nit level?
  • Alex Forsythe: It ends up floating relative to theatre black.
  • Joshua Pines: No. Mid grey gets nailed at an absolute nit level on the display regardless of black.
  • Daniele Siragusano: The effect of the floating becomes negligible above a certain level. After 1 nit the noise in any measuring device, including our eyes is much higher than the offset.
  • Daniele Siragusano: That's what I've always done, but I wanted to confirm there isn't a good case for doing it the other way.
  • Kevin Wheatley: So we can park that on relative. The next thing is slope/gamma near the middle, in terms of intensity. There was a desire for lower mid grey contrast. The SSTS seemed to already do that for SDR. Is that ok?
  • Joshua Pines: It would be good to know the numbers for slope in the old SDR and SSTS.
  • Jed Smith: You can model that exact curve, except the black behavior using Naka-Rushton.
  • Kevin Wheatley: It's interesting there is a mysterious coalescing round one idea, but it would be good to have some numbers. The other question is the highlights, and are we happy with the current range extension?
  • Jed Smith: I'm not happy. SSTS has too much HDR compression for my liking. As Scott said, the values that map to the peak white are very high. I think it's better if that is lower, so there is less compression in the highlights. But I'd need to compare on an HDR reference monitor.
  • Kevin Wheatley: So it's taking too much scene range and squeezing it into the display range?
  • Jed Smith: In mine it's lower so display peak happens before the curve gets to zero slope.
  • Kevin Wheatley: So we also don't want the terminal slope to be flat. Currently there's no control for that, is there?
  • Jed Smith: There is. You just have to change the position of the limit, so the clip point is some finite scene-linear value.
  • Scott Dyer: So we get the continuation naturally. It goes through a specific luminance at a specific point, but continues beyond it.
  • Daniele Siragusano: If you make the bound of your working space the peak of the target, a CG artist or colorist knows that 1.0 in log will map to display peak.
  • Jed Smith: Would that change with different display peaks?
  • Daniele Siragusano: I don't know.
  • Joshua Pines: ACEScct 1.0 is ~10 stops (10.274) above mid grey. It was chosen so cameras now and in the reasonable future wouldn't reach it. There is no magic about that number.
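[Editor's note: Josh's 10.274 figure checks out against the ACEScct spec (S-2016-001), whose log segment is y = (log2(x) + 9.72) / 17.52:]

```python
import math

x_at_one = 2.0 ** (17.52 - 9.72)               # linear value at ACEScct 1.0 (~222.9)
stops_above_grey = math.log2(x_at_one / 0.18)  # ~10.274 stops above mid grey
```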
  • Kevin Wheatley: Any other parameters we need to model?
  • Jed Smith: I need help with the model for flare compensation, because I only have an OLED HDR TV. I feel good about it for the HDR and 600 nit HDR presets, but above that I don't know. But the list for me is flare compensation, mid grey behavior as peak increases, slope (contrast) of the curve, and clipping point at peak luminance.
  • Kevin Wheatley: That's a good list to get set for one viewing condition, and then we can look at how to modify things for different environments.
  • Scott Dyer: I will look at Jed's stuff on an actual display and produce some plots on black handling, and we can get the tone-scale nailed down ASAP. Then we can look at a color model.
[Jed showed his Desmos plot of Naka-Rushton via Michaelis-Menten, to explain his reasoning why he believes Michaelis-Menten looks better for HDR - 01:05:30]

Meeting #41, February 9th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Alex Forsythe
Jean-Michel Gilbert
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith

Meeting Notes

  • Kevin Wheatley: We have some issues to discuss from ACES Central, and Matthias has an update.
  • Alex Fry: There are two things being discussed in the ACES Central thread - what AP1 blue looks like, and hue skews. What should happen with something at the AP1 blue primary, and what does that blue mean? Do we trust ZCAM out that far?
  • Nick Shaw: And the blues in images like the blue bar are way beyond AP1 blue. So camera originated material can have values way outside AP1.
  • Kevin Wheatley: Those values (not 'colors') outside AP1 will exist due to IDT matrices, and we have to be able to deal with them, whatever the philosophical debate about their meaning.
  • Alex Fry: Matthias has two approaches to these. One where you modify the model, and one where you sanitize the data on the way into the DRT, maybe integrating the RGC into the DRT.
  • Nick Shaw: Do we put the RGC in the DRT, or say people should have already used it after the IDT, and can add it again at the end of their grade if that has pushed values into a problematic area. Do we restrict people too much if we make it a hard coded part of the DRT?
  • Jean-Michel Gilbert: I prefer the idea of a DRT which can handle invalid data, and stick to the expected hue.
  • Alex Fry: But what is the hue of invalid data? The ZCAM model doesn't reach the AP0 blue primary, so can't represent that.
  • Nick Shaw: I would argue for making the RGC part of a default LMT, so people can push against it and not create silly colors, but can turn it off if they need to because it's limiting them.
  • Jean-Michel Gilbert: We already get the shift with e.g. [0, 0, 15].
  • Kevin Wheatley: So we have a tension between camera originated media and CGI/game engines, because those could generate very pure colors. So we need to handle it in a graceful manner. It's not just blue. I'm familiar with the color of washing up liquid, so this highlights the cyan issue creeping into greens.
  • Alex Fry: The lines in the model around green are pretty straight.
  • Alex Forsythe: If the ZCAM model is correct, the color may actually be cyan, and hues aren't being skewed by ZCAM, and it could be OpenDRT skewing it to green. That may match what we expect it to look like, but perhaps the original ACES value does represent cyan [due to the camera sensitivities and/or IDT]
  • Kevin Wheatley: The effect was there on Pekka's sun-lit image and the other [ALEXA] image, which was a studio shot.
  • Alex Fry: There could be IDT issues, because previously all these images would have only been viewed through a DRT which clips and therefore skews. So if the result matches your expectations, you might not be aware of the IDT error.
  • Joshua Pines: It could be the camera spectral sensitivities if the stimulus is very narrow band.
  • Alex Fry: It would be interesting if we could measure the bottle with a spectro.
  • Joshua Pines: It would also be interesting to compare the camera manufacturer's rendering.
  • Alex Forsythe: It would also be interesting to make up a different set of primaries, and move the green left or right, and compare how ZCAM and OpenDRT behave. My concern is that Open DRT may do something different with Rec.709 than it does with the make believe primaries. Then you would have inconsistency between devices.
  • Alex Fry: I believe Open DRT is hue preserving in the rendering, but has no display gamut mapping at the end.
  • Jean-Michel Gilbert: It also under-compresses as you raise peak luminance. So if you go to 1000 nits it almost doesn't compress the source, so it almost looks like ACES HDR.
  • Nick Shaw: I have a Fairy liquid bottle here and it certainly looks green, not cyan to me. If Pekka's image came from a display-referred image [it didn't] and went to ACES through an inverse output transform, it's baking in assumptions about what the forward rendering will do, so even if the ACES values are wrong, they would go back to the original green through the same forward transform, but not necessarily though a different one.
  • Alex Forsythe: The image could have a cyan appearance that's not negated by ZCAM, but is negated by OpenDRT.
  • Kevin Wheatley: You can see in Alex's animation that Open DRT has a translation which moves it closer to green where it hits the Rec.709 boundary, where ZCAM goes straight in towards white. If due to the IDT or camera sensitivities, or whatever, the color is incorrectly encoded as cyan, ZCAM may be doing the right thing, but it's not what we expect, so we think ZCAM is doing the wrong thing. Maybe we should do a test shoot with water and colored dyes, because it may be quite a pure color.
  • Alex Forsythe: Would Open DRT's folding to the boundary be different for different display gamuts?
  • Kevin Wheatley: You could test with e.g. P3 or Rec.2020.
  • Alex Fry: If you kept the green and blue primaries the same, and moved the red, would it fold to a different place, because it's moving towards wherever the red primary is, because that's the one that's negative?
  • Jean-Michel Gilbert: Open DRT works in LMS, not RGB.
  • Alex Fry: The last step to the display, which is where this clipping skew happens, is XYZ to RGB.
  • Matthias Scharfenberg: It's not really folding as it appears in the animation. It's just clamping the negatives. The folding comes from fading between clamped and unclamped. In the ZCAM DRT it would stay green if you turn off the limiting or set it to primaries which enclose that value. Then it clips to the display primaries.
  • Alex Fry: ZCAM will still have a slight skew, because we're not fully compressing to the boundary. We compress to near it, then clip.
  • Alex Forsythe: Ignoring the image and looking at the graphs, what is the behavior we want?
  • Kevin Wheatley: We want similar distortions for different devices. We could look at the shift clipping to Rec.709 and P3. That kind of distribution of colors along a straight line is common, because of how light behaves. It makes sense to me to preserve a straight line between white and a color.
  • Alex Forsythe: If you drew a straight hue line in e.g. JzAzBz from AP1 blue, there's no expectation it should land on any display's blue primary. So AP1 blue looking cyan doesn't necessarily ring alarm bells for me.
  • Alex Fry: In CGI there's an expectation if you push the blue slider to maximum, you get blue. And that is what happens with current renderings. Can anybody who's actually seen a Rec.2020 blue (which is very close to AP1 blue) comment what it looks like?
  • Matthias Scharfenberg: Bjorn Ottosson commented before that when desaturating a saturated blue towards white, you need to take out red and add green to compensate for Abney, which creates the curved paths and cyan skew which we're not sure we like. So I looked for what coefficients I could tweak to straighten these lines. I take the LMS matrix and change the Z contribution to L and M by multiplying one of them by a value, and dividing the other by the same value. I also compensate the diagonal values to keep the row sums at unity. It's no longer ZCAM, but does it help? I added a control which I called "Abney's Abyss" to a v11 of my ZCAM DRT which lets you adjust this. Taking it to 1.0 reintroduces at least a feel of magenta. I found 0.75 was a sweet spot. It has a negligible effect on normal colors. But it is a "hand wavy fudge factor".
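[Editor's note: a sketch of the tweak Matthias describes. Which of L and M gets multiplied vs divided, and how his control value maps onto the scale factor, are assumptions here; 0.75 is the sweet-spot value quoted above.]

```python
import numpy as np

def abney_abyss_tweak(lms_matrix, k=0.75):
    # Scale the Z contribution to L by k and to M by 1/k, then adjust
    # the diagonal entries so each row keeps its original sum (unity
    # for a row-normalised matrix).
    m = np.array(lms_matrix, dtype=float)
    row_sums = m.sum(axis=1)
    m[0, 2] *= k          # Z -> L
    m[1, 2] /= k          # Z -> M
    m[0, 0] += row_sums[0] - m[0].sum()   # restore row 0 sum
    m[1, 1] += row_sums[1] - m[1].sum()   # restore row 1 sum
    return m
```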
  • Jean-Michel Gilbert: I tried a similar approach, but playing with different coefficients.
  • Daniele Siragusano: You are effectively adding more points to the fitting data. Maybe you could actually do that, and add extra points and refit. It might be better than just looking at images when you're not sure if you can trust the input data. Adding extra points to stabilize data is common.
  • Matthias Scharfenberg: I don't think we have all the source data set.
  • Kevin Wheatley: It reminds me of how the blue highlight fix was made. Rather than changing matrix coefficients, could we move around the effective primaries that create that matrix?
  • Matthias Scharfenberg: You can't actually reverse effective primaries from the LMS matrix, like you can for an IDT matrix.
  • Daniele Siragusano: And ZCAM matrices are applied in non-linear space, which breaks assumptions.
  • Nick Shaw: Is that first LMS matrix not applied to linear data?
  • Daniele Siragusano: But the result is then transformed non-linearly and another matrix applied to that. You could also change the PQ parameters, because the whole thing is only based on a fitting.
  • Nick Shaw: They have already diverged from standard PQ. One exponent value (Rho) is changed in the equation, compared to ST.2084.
  • Alex Fry: What about compressing the data before applying the model, so it's all in an area where the model makes sense?
  • Matthias Scharfenberg: The RGC certainly reduces the cyan shift. The RGC parameters were derived for a particular purpose, so other parameters might be better.
  • Alex Fry: It would be nice if we had a way to compress everything to the spectral locus.
  • Matthias Scharfenberg: But how? What direction do you compress in, when hue is meaningless outside the spectral locus?
  • Kevin Wheatley: There are curves in the ZCAM hue lines in every direction, but for the other primaries they occur far enough out that cameras don't normally produce data out there. But blue is close enough that they do.

Meeting #40, February 2nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Chris Clark
Alex Forsythe
Francesco Giardiello
Jean-Michel Gilbert
Thomas Mansencal
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith
Jed Smith

Meeting Notes

  • Kevin Wheatley: We have an update from Matthias.
  • Matthias Scharfenberg: I am working on a v10 rewrite of my ZCAM DRT. I have the forward transform and am working on the inverse. I'm combining all the Nuke nodes into a single BlinkScript. I've fixed an error in v9 where the CAT wasn't applied. V9 compressed in J and then M separately. That makes inversion difficult. I am now compressing diagonally in J and M simultaneously. The look difference is minimal, and I hope it will make the inverse easier.
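Schematically (a sketch, not Matthias' actual kernel), diagonal compression moves each (J, M) sample along the line joining it to a focus point on the achromatic axis, so a single invertible 1D function drives both correlates at once:

```python
import math

def compress_towards_focus(J, M, J_focus, compress_fn):
    """Move (J, M) along the line towards the focus (J_focus, 0) on the
    achromatic axis; compress_fn is any invertible 1D curve applied to
    the distance from the focus. Hypothetical helper, for illustration."""
    dJ, dM = J - J_focus, M
    d = math.hypot(dJ, dM)            # distance from the focus point
    if d == 0.0:
        return J, M
    s = compress_fn(d) / d            # invertible whenever compress_fn is
    return J_focus + dJ * s, dM * s
```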
  • Alex Fry: I have built a tool that generates Resolve and Baselight LUT based DRTs from the v9 Nuke DRT. There's something happening in the shadows of ZCAM, and I'm not sure if it's in the DRT or a problem with the LUT implementation (note: it appears that the LUTs are a good match to the Nuke nodes).
  • Kevin Wheatley: What are we still missing before we give it to a small group of select reviewers? Who might we ask first before a wider audience? We need to make a package with sample images.
  • Nick Shaw: For the gamut mapping group I made self-contained Resolve and Baselight projects, with media and a reference movie. I suggest we do the same.
  • Carol Payne: We made a Google Form with specific questions we wanted answered. We asked about their setup. Did they have an HDR monitor? What OS? We can share our list of colorists.
  • Kevin Wheatley: I would start with a dry run of one or two colorists.
  • Carol Payne: We have some people at Netflix who used to work as colorists.
  • Alex Fry: We should warn people of the quirks they can expect at this stage.
  • Kevin Wheatley: Have we heard from Jed recently in case he has any imminent fixes? Matthias' ZCAM v9 seems close to being ok. SSTS is a known thing.
  • Jean-Michel Gilbert: OpenDRT v0.0.90b4 (current) looks too red for us. We use v0.0.80b3.
  • Thomas Mansencal: They don't have to all look the same. That could be useful feedback.
  • Kevin Wheatley: We're looking at how it behaves to grade under, not the out of the box look necessarily.
  • Nick Shaw: Is the LUT baked OpenDRT using its own curve? Or modified to use the SSTS to match the other two?
  • Alex Fry: I left it as is, because it's not a trivial swap.
  • Jean-Michel Gilbert: Having different tone scales is good for comparison.
  • Daniele Siragusano: People may like the tone scale from one, but other aspects of others, which would be useful feedback.
  • Kevin Wheatley: How many variations should we put out?
  • Joshua Pines: We should ask specific questions such as "which tone scale do you prefer?"
  • Jean-Michel Gilbert: I experimented with the OpenDRT tone scale in ZCAM, but not the other way round.
  • Thomas Mansencal: Are we going to put it in front of people other than colorists? VFX artists may want something that looks good out of the box.
  • Kevin Wheatley: In the wider group certainly.
  • Nick Shaw: We have said we are open to having a default LMT to make it look good out of the box.
  • Thomas Mansencal: Somebody still has to make that.
  • Kevin Wheatley: We haven't talked much about how we would handle mapping between different output devices. Do we have somebody well set up to test this?
  • Jean-Michel Gilbert: I've been trying that and the conclusion I've come to is we need a parametric DRT.
  • Carol Payne: You mean people with an HDR monitor?
  • Kevin Wheatley: That would be the start, but then if we can get two standard displays to look similar we can develop a framework to target anything in between. Jed's presets are "by eye", and we need something more rigorous.
  • Thomas Mansencal: Lots of people have the new M1 MacBook Pro.
  • Kevin Wheatley: That would be good if we need to broaden it beyond people with X300s.
  • Carol Payne: A few of us have access to XDRs, and could get access to others.
  • Joshua Pines: We deliver to standards. So if we target in-between levels we should replicate that on an X300.
  • Daniele Siragusano: I have access to the LG Pro 65 and 32, the EIZO Prominence, M1 MacBook Pro and an iPad Pro and they all look pretty good without calibration. Just a white balance. It's never been easier to get an HDR monitor, at least for natural images.
  • Kevin Wheatley: If we use many different devices we need a way to characterize the display, or at least have setup instructions, and environment. Jed has arrived, so do you have any imminent fixes?
  • Jed Smith: I am working towards a release candidate where I planned to remove a lot of stuff like gamut compression and perceptual hue, and add a default LMT. But you would be fine using the one that's on GitHub now. But tell people it is not designed for a good looking image out of the box. It needs a grade or LMT.
  • Kevin Wheatley: That's important. The balance between too much look in the DRT and keeping it flexible.
  • Thomas Mansencal: Should we wait for Jed's LMT?
  • Jed Smith: Depends. For colorists grading under it it's fine as is. But for people just looking at images out of the box, it needs the LMT.
  • Kevin Wheatley: Going back to ZCAM, is the cyan shift something inherent in the model, or something we can and should fix?
  • Matthias Scharfenberg: I think it's the model breaking down near the boundaries and outside the spectrum locus. The RGC helps it a lot.
[Matthias showed 3D visualizations of various gamuts in ZCAM. AP0 distorts badly in the blues]
  • Matthias Scharfenberg: Including the RGC in the OT seems heavy-handed. Where Y becomes negative the model breaks.
  • Jean-Michel Gilbert: My tests suggest it comes from their modifications to make things perceptual.
  • Nick Shaw: It uses PQ, which doesn't handle negatives.
  • Matthias Scharfenberg: There is a cone fundamentals matrix in there which maybe could be tweaked. It wouldn't be ZCAM any more. Near the gamut boundary you could say "it's not blue enough" but outside the spectrum locus it's a philosophical question what those colors "should" look like.
  • Thomas Mansencal: We're already modifying some things, so it's ok to diverge from ZCAM as long as what we do and why is well documented.
  • Nick Shaw: We're not branding it as ZCAM, so we don't have to be faithful to that. We do what works for what we need.
  • Kevin Wheatley: We will probably need to approximate it for performance anyway.
  • Jean-Michel Gilbert: We could lerp to ICtCp in the problem colors.
  • Jed Smith: In my testing that was worse, which was why I used JzAzBz.
  • Kevin Wheatley: The RGC may well already have been applied earlier in the chain.
  • Carol Payne: If that becomes a fixed part of anything it would be the IDT not the OT.
  • Daniele Siragusano: Problem colors could be reintroduced by CDL after the RGC. Color space aware grading controls help.
  • Carol Payne: That's where the parameterized gamut compression could be useful to the colorist.
  • Matthias Scharfenberg: The Blink version will have all the matrices in the code and they could be exposed for tweaking.

Meeting #39, January 26th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Charles Boileau
Daniel Brylka
Alex Forsythe
Jean-Michel Gilbert
Thomas Mansencal
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Jed Smith
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: Thomas has posted on ACES Central about dominant wavelengths.
  • Thomas Mansencal: I wanted to quantify the shift in dominant wavelength from ZCAM colorfulness compression. I plotted the resulting curved lines in CIExy, and then plotted a straight line between the intersection of those curves with the sRGB gamut boundary and the spectrum locus, to find the new dominant wavelength.
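For anyone wanting to reproduce this kind of measurement, Colour has a dominant-wavelength helper that performs the locus intersection (the chromaticity below is a made-up example; the gamut boundary intersection Thomas describes is a separate step):

```python
import numpy as np
import colour

xy = np.array([0.20, 0.25])          # hypothetical compressed chromaticity
xy_n = np.array([0.3127, 0.3290])    # D65 white point

# Intersect the line from the white point through the sample with the
# spectral locus; negative values denote a complementary wavelength.
wl, xy_wl, xy_cwl = colour.dominant_wavelength(xy, xy_n)
print(wl)
```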
  • Jean-Michel Gilbert: With default ref white 100 and Y_max 100?
  • Daniele Siragusano: But isn't this shift what the model is designed to do?
  • Thomas Mansencal: The model is behaving correctly, but it may not be what we want to see when we push color one way and then it gets compressed – pushing further towards blue and having it turn cyan may feel wrong. We're used to blue going purple, if anything.
  • Nick Shaw: Is this still the same in Matthias' v9 where he compresses in J as well as M?
  • Matthias Scharfenberg: I think so. It just changes the relation to where the gamut boundary is. It doesn't change the hue. It reminds me of the discussions in the gamut compression group about the Rösch color solid. This also showed that lines of equal perceived hue are almost certainly curved.
  • Thomas Mansencal: Because we are forcibly compressing, maybe we are over-predicting Abney.
  • Daniele Siragusano: It would be interesting to plot the same thing with the current ACES transform. The fact that it turns purple is something people complained about.
  • Kevin Wheatley: Is cyan worse, better or just different? Or could it just be 'turned down a bit'?
  • Thomas Mansencal: And you need to let the image do its work when it hits the eyeball. We don't want to predict everything. Same with colorfulness, which increases with brightness. Do we want to correct for that, so the SDR and HDR look the same? Or do we want the HDR to be more vivid?
  • Jean-Michel Gilbert: What's disturbing about the cyans is it also happens in the darks. In the bright range hue shifts are expected. We're used to sky going cyan in the bright range, although it's nice if it stays blue. I'm not sure about a cyan day-for-night.
  • Matthias Scharfenberg: Might there be a coefficient or set of coefficients in the ZCAM model we could tweak to reduce the cyan?
  • Jean-Michel Gilbert: I've played with scaling down the constants that define the LMS space in JzAzBz as a quick hack because they make things more orange and more cyan.
  • Alex Fry: It's not a problem if we don't use ZCAM exactly.
  • Nick Shaw: What if you looked at the data-set for the ZCAM fitting, culled some points which appear to contribute to the cyan, then re-ran the fitting?
  • Thomas Mansencal: We don't have access to the data-sets, but anyway the model is behaving correctly, and we may just not like the result. If we like everything else about ZCAM maybe we can correct for that.
  • Daniele Siragusano: But where you need to do that would be gamut dependent. And it feels a bit like duct-taping.
  • Thomas Mansencal: I want to run the same tests on some different models and see if the results are the same. IPT, Oklab…
  • Alex Fry: It would be nice to find a suitable model without having to build the DRT with every possible model.
  • Daniele Siragusano: I'm a bit confused. We do want orange fire to go yellow, but don't want bright blues to go cyan.
  • Thomas Mansencal: Maybe people don't mind the cyan.
  • Alex Fry: Playing with it in Resolve, it does fall into cyan a lot of the time. The band that gives you green is very narrow. You feel it when you work under it.
  • Thomas Mansencal: The various data sets have a lot of disagreement. Maybe the model is not good in cyan. The shift of dominant wavelength is very obvious in cyan, and not so much elsewhere.
  • Jean-Michel Gilbert: On the other end red only becomes orange when it's really bright, which to my mind is desirable.
  • Alex Fry: Matthias, do you want to talk about your v9?
  • Matthias Scharfenberg: The main difference is a smoothing I've applied to the gamut boundary based on Björn's comment about the compression creating a cusp within the gamut. This does make it harder to reach the corners of the gamut volume, so I've reduced the limit value in the compression. It removes some contouring that Jean-Michel had noted. It also has the effect of reducing some banding you could see in saturated shadows with v7. I have replaced the cusp/SSTS min drop-down from v7 with a slider defaulting to 50% between the two. I'm working on a v10 where I put the whole transform into a single Blink kernel. That would help to port to other implementations, and also make it simpler to e.g. tweak ZCAM parameters, because currently there are ZCAM transforms in multiple nodes.
  • Jean-Michel Gilbert: I was looking at the gamut mapping methods in BT.2407, and a lot of them add a hue mapping step. Should we look at that?
  • Matthias Scharfenberg: But we are deliberately trying to keep hue constant.
  • Jean-Michel Gilbert: I'm talking about misalignment between the source gamut and target gamut.
  • Daniele Siragusano: That's something broadcasters do to put colors on the vectorscope targets. When things get more saturated they put the primaries on the primaries.
  • Matthias Scharfenberg: That would make it less output device agnostic.
  • Alex Fry: There is sometimes an expectation that when a colorist slams the blue slider to maximum, they get max blue out, which the old ones do because they just clip. It may mean changes to how some grading controls work.
  • Nick Shaw: If it's not a slider but a color wheel, is there an angle the colorist can find that will be maximum blue?
  • Jean-Michel Gilbert: If you have a load of non displayable colors, you can display more of them, while compressing less, if you do realignment.
  • Joshua Pines: Hue preservation is paramount for colorists and creatives. Vectorscope targets are not as relevant as appearing the same on different displays. But also if colors are different you want to be able to discriminate between them.
  • Jean-Michel Gilbert: I'm referring to the picture on page 11 of BT.2407.
  • Daniele Siragusano: BT.2407 is talking about mapping between BT.2020 and BT.709, which is not the same use case as a DRT.
  • Kevin Wheatley: Have we nailed down what we want from the tone scale? We've looked a lot at BT.1886 monitors, but not a lot at matching that with HDR.
  • Jean-Michel Gilbert: I notice that when I look at the SDR transform in a window on an HDR monitor, it doesn't seem to match a calibrated SDR monitor.
  • Kevin Wheatley: We've not spent enough time looking at HDR/SDR comparisons, and figuring out what we want to achieve. We've looked a lot at hue and gamut boundaries recently.
  • Jean-Michel Gilbert: I got a better match when I limited my BT.2020 to P3-D65 then gamut mapped to Rec.709.
  • Scott Dyer: We need to identify the tone behavior we want. OpenDRT is based on the Michaelis-Menten curve. Jed posted his other curve recently. I'm looking to find a simple formulation to derive the parameters for any peak luminance. I hope to create an interactive plot for comparison and discussion.
  • Kevin Wheatley: Is a single parameter of peak luminance enough?
  • Daniele Siragusano: I think you need two. Peak and surround. I think my function does this and is super simple. You have peak, gamma and a toe/flare compensation.
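As a rough illustration of that parameterization (not Daniele's actual function, and the parameter values are placeholders), a Michaelis-Menten core with a surround gamma and a toe/flare term might look like:

```python
def tone_scale(x, peak=100.0, gamma=1.2, flare=0.01):
    """Illustrative only: Michaelis-Menten saturation core, a gamma for
    surround/contrast, and a simple flare subtraction, scaled to the
    display peak in nits. x is scene-linear input."""
    core = x / (x + 1.0)                             # saturates towards 1.0
    shaped = core ** gamma                           # surround adjustment
    toe = max(shaped - flare, 0.0) / (1.0 - flare)   # pull the toe to zero
    return peak * toe
```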
  • Kevin Wheatley: If we get a curve we like for BT.1886, how should that transfer to other displays? How much of that does the ZCAM model give us, and what do we have to do separately?
  • Daniele Siragusano: I approach it differently. I picked something which models something in nature. You apply it to HDR and SDR images, and it looks good. But it's not the only approach.
  • Kevin Wheatley: We need to work out what are the parameters that drive it. And what if e.g. the ones we choose don't produce an s-shape? Does that matter? Can the s-shape come from something else?
  • Joshua Pines: The s-curve has been used by artists for ever to reproduce HDR scenes in LDR, and painters knew nothing about the maths. The slope in the mids does need to remain consistent between SDR and HDR. We just change the roll off at either end.
  • Jean-Michel Gilbert: We feel the difference in contrast between HDR and SDR in current ACES is a bug. We do want a minimal difference in black level between the two to let HDR go darker.
  • Daniele Siragusano: It needs to be a continuous change between 100 nits, 200 nits, 500 nits, 1000 nits.
  • Nick Shaw: What about 100 down to 48 nits? There's a debate whether those should vary or be the same.
  • Joshua Pines: Because of all the other factors, including screen size, 100 nits and 48 nits feel the same when they have the same tone mapping.
  • Nick Shaw: So if something varies continuously between 100 and 1000 nits, and flattens so the rate of change is very small between 100 and 48, would that be close enough to the same?
  • Daniele Siragusano: If your dim to dark is a factor of two, you could say in a dark surround you use the curve for double the peak luminance. Then they end up the same.
  • Joshua Pines: We found display peaks less than a stop apart can use the same transform.
  • Daniele Siragusano: If you use the factor of two, you also get the same mapping on 108 nit projection as a 216 nit HDR display.
  • Jean-Michel Gilbert: In my tweaking of the ZCAM model I reduced the highlight desat, because we like saturation. But then it doesn't match SDR. So maybe we need to expose that as a parameter.
  • Daniele Siragusano: Not a parameter in the DRT. Or you have to track it, and people will keyframe it.
  • Joshua Pines: I think it should be a knob in the color corrector.
  • Alex Fry: It's hard to do this in an LMT unless it's ODT specific. We need some highlight desaturation in the DRT out of the box.
  • Kevin Wheatley: With current RGBW OLEDs you get that for free. But if more RGB OLEDs come out, then you have really colorful masters that don't look the same.
  • Alex Fry: I will update my repo of LUT based candidates for Baselight and Resolve to use Matthias' v9.
  • Kevin Wheatley: Troy posted some links to papers on brightness perception, and one which shows the origin of the Naka-Rushton equations.

Meeting #38, January 19th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Alex Forsythe
Francesco Luigi Giardiello
Jean-Michel Gilbert
Thomas Mansencal
Björn Ottosson
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Shane Smith

Meeting Notes

  • Kevin Wheatley: We should recap some discussions from ACES Central, and discuss how to prepare things for testing. To summarize the discussions, there's a tension between gamut volume coverage and the preferred path to white.
  • Björn Ottosson: To summarize my post. If you do a simple unwrapping of the RGB cube you see very obvious harsh edges. Those are still visible if you use a perceptual model. It's particularly obvious in blue because between pure blue and greenish blue, there's a huge drop in saturation. I only see a few possible solutions. One is to not reach the extents of the RGB cube. Or you can accept the problem, although you can hide it a bit with a mapping that affects the interior of the cube less. The third option is to accept hue distortions, at least in very saturated colors. The next part of the post shows mathematically why the derivative of the tone curve drives desaturation in an RGB curve approach. You also don't want your path to white to become fluorescent on the way. The last thing is about the input gamut. If you want to reach the corners of the RGB cube, and have a natural transform for normal colors, you have to go quite far out into imaginary yellows, for example, to get full yellow output.
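Björn's derivative argument is easy to verify numerically: apply any per-channel curve that flattens at the top, and the channel ratios converge as exposure rises (a throwaway sketch, not any particular DRT):

```python
import numpy as np

def f(x):
    return x / (x + 1.0)   # simple per-channel curve, flattens towards 1.0

rgb = np.array([4.0, 1.0, 0.5])          # a warm scene-linear color
for stops in (0, 2, 4):
    out = f(rgb * 2.0 ** stops)
    print(stops, np.round(out / out.max(), 3))
# The channel ratios converge towards [1, 1, 1] as the curve's slope
# falls: this is the desaturation (path to white) driven by the derivative.
```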
  • Kevin Wheatley: That all matches the things I had imagined we need to consider.
  • Jean-Michel Gilbert: ACEScg may be a suitable input gamut. It's close to Rec.2020 which is the target for HDR.
  • Björn Ottosson: That's an example of what I was saying. ACEScg yellow is close to natural real world yellows. So if you map that to the corner, you won't have a natural mapping for saturated yellows.
  • Daniele Siragusano: That's why most working spaces have imaginary primaries.
  • Björn Ottosson: It's useful to have that headroom available in grading.
  • Daniele Siragusano: One point that's missed is that if the slope when you hit clipping is shallow enough, you may not notice clipping artifacts. We have that with tone-mapping, where a tone mapper can overshoot peak white, and get clipped, but if the slope is shallow at that point it's not noticeable. It's just Mach bands at the end of the day, so I'm not afraid of the sharp edges of the hull. But I agree with most of what you said. I'm not a big fan of ACEScg, because it's too close to the target gamuts, so it's hard to make a good mapping, particularly for reds and yellows. Lightness is not a good metric to use for me, because it's too complex for a per pixel model. Energy conservation metrics may be a good thing, and if all reflective colors have a certain limit that may be a good place to start the path to white.
  • Alex Fry: Do people have strong opinions of the input gamut? Currently we chop everything outside AP1.
  • Kevin Wheatley: It ties to what I said last week about targeting a real or virtual intermediate device and then down-mapping to the real display looking for the best representation of that. Or at least do that conceptually. And it seems a set of virtual primaries might be what's needed. But not AP0, as the directions are inappropriate.
  • Thomas Mansencal: I did some tests putting AP0 primaries into the ZCAM model, and it blows up. I plotted some AP0 ramps, and it's obvious it's not a good choice.
  • Björn Ottosson: No perceptual model has been designed to behave outside the visible range.
  • Thomas Mansencal: Which is why I think you shouldn't use a perceptual model with non-realizable colors.
  • Daniele Siragusano: But if you start with XYZ and take Y, you break down as soon as you're near the spectral locus.
  • Björn Ottosson: So there would be an argument for making your working space use perceptual correlates, if that were feasible.
  • Jean-Michel Gilbert: sRGB will always clip if you have real yellow.
  • J. Schulte: What is real yellow? A particular wavelength?
  • Kevin Wheatley: We don't want to be constrained by Rec.709's limitations.
  • Nick Shaw: That's the logic for having a bigger target, so what you show in sRGB is just the best approximation you can manage of that.
  • Matthias Scharfenberg: Isn't choosing a virtual display with wide primaries just shifting the problem into the final display mapping? What does that get us?
  • Kevin Wheatley: I thought originally it would be better to target a real device, because you just get one and measure it. But you are then constrained by the limitations of that device.
  • Matthias Scharfenberg: My ZCAM DRT is input gamut agnostic. The first step is conversion to XYZ.
  • Daniele Siragusano: You can't fit everything into the ZCAM model.
  • Matthias Scharfenberg: The ZCAM model is not well behaved with imaginary colors. But ZCAM is just what we picked because it has a good balance between nice perceptual correlates and computational complexity. But I'm not married to ZCAM.
  • Björn Ottosson: What you've done is similar to what I did. You are implicitly defining the input gamut, because if you want to reach the bounds of the target gamut, there is a point in the input gamut that reaches them.
  • Matthias Scharfenberg: That is dependent on the target gamut.
  • Daniele Siragusano: So the intermediate stage needs enough "juice" to reach the target bounds.
  • Björn Ottosson: If you use something based on an LMS space, it falls apart once one of the LMS values goes negative. That's why the blue corner is problematic.
  • Daniele Siragusano: My main point was if you are shallow enough as you approach the boundary, clipping is not a visual problem.
  • Matthias Scharfenberg: That's why I used the same non-asymptotic compression algorithm we used in the RGC. And it helps with inversion, because we always end up with display-referred media we need to get into our scene-referred pipeline.
  • Thomas Mansencal: Within reason. We may not have to match the display-referred original exactly.
  • Alex Fry: One reason the concept of an intermediate OCES type step feels odd with the ZCAM DRT is there is nothing that happens before the ZCAM. No sweeteners. It's really all a fancy gamut mapping to the display. There is no rendering.
  • Kevin Wheatley: Conceptually OCES rendering is supposed to be the scene to display world mapping, then there is the display specific mapping. Or do we do it all in one, and find magic numbers that do what we want?
  • Daniele Siragusano: I never understood what the split of the two curves brings in terms of benefit. I think the simple model I suggested, based on one slider works well.
  • Alex Fry: Once we remove the sweeteners, what is the rendering part doing?
  • Jean-Michel Gilbert: When I baked my LUTs of the ZCAM DRT and OpenDRT I had to add different sweeteners for different targets, to fix the defects in the blue etc.
  • Matthias Scharfenberg: It is definitely still a work in progress. I have a new v8 I am working on and also some visualization tools. It is now also mapping in J towards either the SSTS mid point or the cusp of the gamut boundary, rather than just mapping M in a straight line.
[See Matthias' demo at 43 minutes in the recording]
  • Matthias Scharfenberg: V8 also removes the pre-calculated LUTs, and finds the boundary and cusp by iteration which is very expensive.
  • Björn Ottosson: In Oklab I have an analytic approximation to find the gamut boundary. There is benefit to keeping your compression boundary loose and outside the actual boundary. Compressing everything inside creates a kink following the cusp for colors near the gamut boundary.
  • Matthias Scharfenberg: Currently the complexity means I haven't managed to make an exact inverse.
  • Daniele Siragusano: I have a plot which shows how compressing to a boundary with cusps creates Mach bands.
  • Matthias Scharfenberg: We have to choose, do we make sure everything is compressed into the target gamut, or do we allow it to be a bit looser and then clip.
  • Kevin Wheatley: I prefer the second one, because if you give colorists something which can cover the gamut they can make it look right. If they can't get there, they are restricted. So I would tolerate a bit of error.
  • Nick Shaw: That's a key point, that there is always a person in the loop. It has to look decent out of the box, but not perfect for all images.
  • Matthias Scharfenberg: Those things Jean-Michel was compensating for are definitely a weakness in the model. Alex's plots clearly show a dog-leg towards cyan as you go from blue to white.
  • Björn Ottosson: If you compress to a smoother version of the sRGB hull, then clip, you remove those artifacts.
  • Alex Fry: I feel it's to do with ZCAM's constant hue lines not having the expected path for blues.
  • Jean-Michel Gilbert: It doesn't happen as much with OpenDRT and ACES 1.2. But I know ZCAM and OpenDRT are unfinished.
  • Nick Shaw: Are we premature in baking LUTs of WIP transforms? Might people judge them on things that we know will be improved?
  • Alex Fry: Feedback is always useful, and maybe some issues are only theoretical rather than real problems.
  • Kevin Wheatley: How might we simplify the gamut boundary?
  • Alex Fry: I have a work in progress Nuke project for baking LUTs and creating DCTL ODTs for Resolve. We need to do the same for Baselight. We need to consider whether to modify OpenDRT to use the same curve.
  • Thomas Mansencal: We need to make sure we have traceable version numbers.
  • Daniele Siragusano: The interaction between the grading space and the DRT is very important. I will make some plots.

Meeting #37, January 12th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Chris Clark
Alex Forsythe
Jean-Michel Gilbert
Thomas Mansencal
Carol Payne
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith
Troy Sobotka
Mike Whipple
Joachim Zell

Meeting Notes

  • Alex Fry: I've been looking at why Matthias' implementation looked different to mine. I realized I was using the original c9 curve for SDR, where Matthias was using the 100 nit SSTS curve. I made a version of his node with selectable curve, including c9, SSTS and Kevin's "average real production" curve. Kevin's curve is remarkably similar to the 100 nit SSTS, which we don't normally see, because the current SDR renderings are all based on 48 nits. So 100 nit SSTS may give us the lower contrast we've been asked for.
  • Kevin Wheatley: My data needs cleaning, because it doesn't quite hit 100 nits, and the bottom rolls off too much. I've looked again at the Doug Walker / Gary Demos presentation. They were looking at a curve with ends that were straight lines in log/log. Maybe we should add something similar to that to the SSTS. That does raise questions about handling of shadows and highlights, and how we might put that into an LMT, but the principle is interesting.
  • Nick Shaw: Won't a straight line on a log/log plot never reach zero? Might that be a problem for colorists who want CV 0 output?
  • Kevin Wheatley: You could maybe target zero out for zero in. My average data is just a way of quantifying what people mean by "lower contrast". It's a guide not an exact target.
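For reference, a straight line in log/log coordinates is a power law:

\[
\log_{10} y = m \log_{10} x + c \quad\Longleftrightarrow\quad y = 10^{c}\, x^{m}
\]

Since y > 0 for every x > 0, such a curve can approach zero but never output it; hence the suggestion of explicitly pinning zero in to zero out.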
  • Thomas Mansencal: Have you tried fitting a curve to it?
  • Kevin Wheatley: Not yet. We need to think what the slopes should be at the ends. That's why I rewatched Gary and Doug's SMPTE presentation. In some of their work there is discussion of how you should map grey, and whether it should be relative to display brightness and surround. Also, is a 709 display a good idea as our middle-ground reference? Should a real HDR display be the reference, and we map down? Or a virtual display like OCES?
  • Thomas Mansencal: These days an SDR reference seems retrograde; HDR makes more sense.
  • Kevin Wheatley: OCES has advantages, but also the disadvantage that there is no real display to validate against.
  • Thomas Mansencal: Have the BBC done a recent survey of consumer displays?
  • Kevin Wheatley: Not that I am aware of. Many now may be more than 1000 nits, but we don't have to chase manufacturers. What is our professional display target?
  • Alex Fry: If manufacturers make 500 nit SDR displays, it's up to them to make 100 nit content look sane on them. I've also done some experiments with varying the scaling of the input data for ZCAM, which has a big effect on saturation. The original choice of 100 nits was slightly arbitrary, and not necessarily appropriate for outdoor scenes. Going above 100 rapidly turns nasty on skin tones.
  • Kevin Wheatley: Is this a function of the surround value? It's like a transparency where the backlight is brighter than the surround environment. That is what happens if your backlight is too bright.
  • Daniele Siragusano: If you raise the output exposure, does it desaturate in the same way?
  • Nick Shaw: I just checked my DCTL implementation, and the effect of varying the surround value is pretty minimal. But it's possible I hard-coded the default value of 10 in there somewhere.
  • Daniele Siragusano: This is the value you want to vary to control the colorfulness of your rendering. You're setting up the brightness of your golden scene.
  • Jean-Michel Gilbert: I used 201 nits in my tweaked version. It enabled me to see some subtle red fog in one screen of our game I hadn't noticed before.
Jean-Michel Gilbert showed a comparison of his tweaked ZCAM DRT and OpenDRT, which he had posted about.
Matthias showed some visualizations he has been working on of hue slices in JMh, showing gamut boundaries at different hues.
  • Matthias Scharfenberg: I am looking at gamut compressing along a different line, so instead of only in M towards the J axis, it compresses up or down in J as well, perhaps towards the J value of the cusp of the target gamut at that hue. I don't have anything to share on that yet.
  • Nick Shaw: I'd be interested to look at the output of my DCTL implementation of the ZCAM DRT using that visualization tool, to see how my iterative approach to the gamut compression compares to the 2D LUT implementation.
Nick showed his DCTL implementation of the ZCAM DRT and explained the various parameters.
  • Nick Shaw: I was surprised it is able to play back in real time, given it is iterating 128 times for every pixel to find the M value at the gamut boundary. That could probably be optimized, but iteration is not ideal.
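The iteration Nick describes amounts to a bisection along M; a sketch under an assumed helper (jmh_to_display_rgb is hypothetical, standing in for a JMh-to-display conversion):

```python
def boundary_M(J, h, jmh_to_display_rgb, M_max=100.0, iterations=128):
    """Bisection for the largest M at (J, h) whose display RGB stays in
    [0, 1]. jmh_to_display_rgb is a hypothetical JMh -> linear RGB helper."""
    lo, hi = 0.0, M_max
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        r, g, b = jmh_to_display_rgb(J, mid, h)
        if 0.0 <= min(r, g, b) and max(r, g, b) <= 1.0:
            lo = mid     # still inside the gamut: push outward
        else:
            hi = mid     # outside: pull back in
    return lo
```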
  • TS: The Colour Science ZCAM implementation uses a "safe" power operation to protect against NaNs when you raise a negative value to a fractional power. I suggest the Nuke and DCTL implementations should do the same.
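The safe power in Colour (colour.algebra.spow) is, in essence, a sign-preserving power, roughly:

```python
import numpy as np

def spow(a, p):
    """Sign-preserving power in the spirit of colour.algebra.spow:
    raises |a| to p and restores the sign, so negative bases with
    fractional exponents return finite values instead of NaNs."""
    a = np.asarray(a, dtype=float)
    return np.sign(a) * np.abs(a) ** p
```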
  • Matthias Scharfenberg: I had a question about the Zhai (2018) chromatic adaptation. When you have tungsten balanced input, for example, it stays a bit orange on the output. So equal RGB input does not produce equal RGB output. I wondered what the issue is with a simple 3x3 Von Kries. But the main issue for me is the need to use iteration or a LUT to find the gamut boundary.
  • Thomas Mansencal: It's complex, and I think most people use LUTs.
  • Kevin Wheatley: Could we find an analytic approximation?
  • Nick Shaw: If the gamut compression isn't asymptotic at the boundary, then it's a bit fuzzy, so maybe it can be imprecise if you pick suitable compression parameters.
  • Alex Fry: Might an approximation mean there are parts of the gamut you can't hit? Or hue skews where things clip?
  • Thomas Mansencal: The shape not being convex also makes things more complicated.
  • Nick Shaw: If it's already compressed when it reaches the boundary, hopefully the skews from clipping will be minimal.
  • Alex Forsythe: Because they are additive, hopefully different display gamuts will all have similar shapes.
  • Lars Borg: Differences in the brightness of the blue, for example, of different gamuts might make an appearance match difficult.
  • Matthias Scharfenberg: You could pre-bake gamut boundary LUTs for the common display gamuts.
  • Lars Borg: You may not want to map to the cusp for every display, but to a common point for them all.
  • Alex Forsythe: Maybe we should reach out to Ján Morovič. We should also start to plan how we are going to test these algorithms.
  • Alex Fry: We could probably put them out to a select few DI people.
  • Nick Shaw: How many parameters would we expose in a test version?
  • Alex Forsythe: We need to ask specific questions.
  • Daniele Siragusano: Would it be easier to bake out LUTs?
  • Thomas Mansencal: How do we decide what parameters to bake into LUTs? It might be useful to make an OCIO config like we did for the gamut compression, so people could switch between them, and maybe ALF-2, IPP2 and TCAM as well.

Meeting #36, January 5th (2022), 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Alex Forsythe
Danny Gagatt
Frith John
Michael Parsons
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Shane Smith

Meeting Notes

  • Matthias Scharfenberg: It's just a "hand wavy" attempt based on the idea that the desat in an RGB curve comes from the channels converging in the roll-off, so I'm using the roll-off to create a mask for the desat. I think it makes things look nicer. I've also added an inverse path. I needed to clamp the output to 0-1 or I had some instability.
  • Alex Fry: That's a difference between Matthias' version and mine. Mine is inherently limited to 0-1 because the gamut compression roll-off is asymptotic at 1. Matthias' version still has finite slope through 1. It's up for discussion which approach is preferable.
  • Matthias Scharfenberg: I used the Power(P) function from the RGC, and not going asymptotic aids invertibility.
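For reference, the Power(P) compression of a normalized distance d has a closed-form forward and inverse along these lines (a sketch in the spirit of the RGC; thr, lim and pwr values here are placeholders, not the shipped per-channel parameters):

```python
def power_compress(d, thr=0.8, lim=1.2, pwr=1.2, invert=False):
    """Power(P)-style compression: pass-through below thr, d == lim maps
    exactly to 1.0, and the slope stays finite (non-asymptotic), so it
    inverts cleanly. The inverse is valid for inputs below thr + s."""
    if d < thr:
        return d
    # Scale chosen so that d == lim lands exactly on 1.0.
    s = (lim - thr) / ((((1.0 - thr) / (lim - thr)) ** -pwr - 1.0) ** (1.0 / pwr))
    nd = (d - thr) / s
    if not invert:
        return thr + s * nd / (1.0 + nd ** pwr) ** (1.0 / pwr)
    return thr + s * (nd ** pwr / (1.0 - nd ** pwr)) ** (1.0 / pwr)
```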
  • Nick Shaw: I find the idea of the DRT being able to exceed the display gamut, with optional clipping for the master, appealing.
  • Alex Fry: If you clip anywhere you will get a hue skew.
  • Matthias Scharfenberg: The RGC used distance limits based on cinema cameras. For this we could do something similar, or maybe use AP1, so anything in AP1 will map to within the display gamut volume.
  • Alex Fry: Pekka posted some interesting comparisons of ZCAM, OpenDRT and ACES 1.2, but with a modified OpenDRT that uses the SSTS, so all three use the same tone curve, removing that as a factor. Both our ZCAM DRTs use SSTS, but that is just to make a level playing field, not a proposed final tone curve. Everyone agrees we want lower default contrast. Maybe we're ready to test with Jed/Daniele's Michaelis-Menten based curve. It might be useful to make a version of the existing rendering which uses that curve, but keeps everything else the same. Then introduce it into the others.
  • Nick Shaw: While working on ZCAM in DCTL I noticed that the dark desaturated look of the blue Macbeth patch comes substantially from the gamut compression. I made a modified version of Matthias' Nuke implementation where I could disable the gamut compression. Is that related to the sparsity of blue samples in the LUTCHI data set?
  • Matthias Scharfenberg: It may also be because the current gamut compression is keeping the brightness constant, and just compressing the M correlate. Other gamut mapping approaches compress towards different targets. It could be better to perhaps compress darker colors towards a brighter achromatic, and maybe also brighter colors towards a darker one. I am planning to experiment with that.
  • Alex Fry: It was interesting in Pekka's post how similar the appearance of ZCAM and Open DRT were when both used the SSTS. In his latest post he has some plots showing hue before and after the three DRTs across an exposure ramp.
  • Alex Forsythe: This came out of a discussion about looking at ACES values before and after the DRT. Troy suggested that looking at perceptual correlates might be better. And also to look at it before and after the gamut mapping step.
  • Kevin Wheatley: It's a 2D version of what you can see in his earlier 3D plots. For grading I would want the one with least wiggles and bumps.
  • Daniele Siragusano: For further fitting through the DRT, predictable behavior is desirable. For grading I disagree. It would be boring to grade through. Every image has a position where it locks in and resonates with a DRT, and that is caused by the non-linearities. But clean is good for building LMTs, so you don't have to invert the non-linearities. Then the LMT is fragile because if you move it around then the LMT wiggles and DRT wiggles don't line up. If the DRT has no wiggles you need an LMT that wiggles. LMTs for a DRT with wiggles are difficult.
  • Alex Forsythe: I think we need to start putting some of these options in front of creatives. Particularly colorists.
  • Alex Fry: I think we would need to make LUTs and call them A, B, C, to abstract them.
  • Alex Forsythe: We need to work out how to present it.
  • Joshua Pines: You need to provide the current ACES rendering as a reference.
  • Kevin Wheatley: Is there anything else from our list we can set tasks for people to focus on?
  • Alex Fry: It's worth investigating the use of ZCAM for dark vs dim. But in my implementation changing that scaled everything, so more work is needed.
  • Nick Shaw: That doesn't happen with Matthias' version. You can use e.g. dark for the forward transform and dim for the inverse part.
  • Matthias Scharfenberg: It's fixed to dark on the way in, so the UI only controls what happens on the way out.
  • Daniele Siragusano: Using dark for scene data seems the wrong way round.
  • Kevin Wheatley: I was thinking the reference should be in the middle.
  • Alex Fry: We should also look at values other than 100 nits to map reference white to on the way in. 100 would be very dark for most scenes. Ideally we'd use the actual absolute scene value inferred via metadata, but that is not currently practical.
  • Matthias Scharfenberg: I tried to keep only a minimum of parameters exposed in my implementation, to keep the permutations simpler.
  • Nick Shaw: I am working on a Resolve implementation in DCTL based on Matthias' Nuke one. It's not ready to share yet. I've made it modular with a DCTL ZCAM implementation that can convert between any of XYZ, IzMh and JMh. It includes scaling on the input and output, so you can view the output of a node in Resolve's 0-1 range, but then scale it back up to the appropriate range on the input of the next node if you are building a chain of transforms following the structure of the internals of Matthias' group node. Resolve doesn't support expression linking values between nodes (AFAIK) so you have to enter the parameters in each node. I have a work in progress SSTS DCTL including the various scalings and conversion to JMh that happen in Matthias' Nuke version, but I don't yet have his highlight desat or gamut compression.
  • Alex Fry: Does anybody have thoughts on our choice of ZCAM?
  • Daniele Siragusano: I have reservations about ZCAM. It's built on ICtCp which had historical limitations, and they have kept those in JzAzBz. Also I'm wary of using PQ on scene-referred data.
  • Matthias Scharfenberg: We used it because it's the latest and seems to have good predictors for its various correlates. But we're open to people suggesting other models.
  • Daniele Siragusano: I don't know of any specific alternative that is public at the moment.
  • Nick Shaw: Am I right that you have said your reservations apply to all current CAMs, due to applying 3x3 matrices to non-linear data?
  • Daniele Siragusano: With legacy hardware, they were limited to matrix-LUT-matrix sequences of operations. But we aren't targeting that kind of limited hardware, are we? I'm also not sure that this is a good emulation of what happens in our visual pathway. There is also the philosophical question, "Do we want to 'pre-perceive' the image?"
  • Kevin Wheatley: That's comparable to the OCES concept of a real or virtual target display, and a CAM makes sense to me for matching other real displays to that. That's separate to the rendering.
  • Daniele Siragusano: I find it odd that we're shoe-horning the SSTS into the middle of a CAM-based approach. The CAM should be used to match the appearance of e.g. a 450 nit and a 100 nit display, and it does everything internally. Otherwise you are nulling the properties of the color appearance model, which is a big factor. The SSTS is applied in linear, yes? So you go into PQ, then back to linear, then apply the SSTS, then back to PQ? This going in and out is not natural for a CAM.
  • Nick Shaw: In my Resolve node graph there is a point where the data is in PQ normalized 0-1, so you can just use Resolve's curve tool on it. The only reason to go to linear and back is because the SSTS maths is designed for linear in and out. But the net result is just to apply an s-curve to the PQ I channel data.
  • Kevin Wheatley: So is anything crucial happening prior to the SSTS?
  • Alex Fry: It's just getting into that state where brightness is separated, so you can apply a curve to it.
  • Daniele Siragusano: The CAM is converting to non-linear LMS and then fitting it to the data, so it's inherently PQ. It would be interesting to just use e.g. the SSTS and do a fitting of the rendering space, until the skews align with some of the e.g. LUTCHI data. This is what Jed tried to do with changing the rendering primaries.
  • Alex Forsythe: That's kind of what they did with RIMM/ROMM.
  • Daniele Siragusano: You could use any Luminance Chrominance space, like Kodak Photo CD did.
  • Alex Forsythe: They could have used an RGB type space, because it's easy to transform between them and luminance/chrominance.
  • Joshua Pines: Luminance/chrominance spaces are normally used for data reduction.
  • Daniele Siragusano: ICtCp started because YCbCr didn't work well for HDR, as there is too much luma/chroma correlation. But building a full CAM on top of it is not what it was designed for. But maybe I'm worrying too much. The initial results look good. But we should look at different ways of deriving a luma/chroma space.
  • Kevin Wheatley: I will work on my average tone map, so I can provide some anonymized average data.
  • Alex Forsythe: I think we should work out how to get this out there for testing.
  • Alex Fry: And we need to work out what the timeline looks like for the rest of this.

Meeting #35, December 22nd, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Daniel Brylka
Sean Cooper
Michael De Caria
Carol Payne
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith
Troy Sobotka

Meeting Notes

Matthias showed his version of a ZCAM based DRT which includes explicit highlight desaturation – see recording.
Alex showed his 3D visualizations of his ZCAMish DRT. He noted how it shows the undesirable effects with bright yellows and blues.
  • Nick Shaw: Could we add a band-aid to the model to rebalance the yellow/blue, as they are opposite in hue? Or are there other candidate models we could try?
  • Alex Fry: Matthias and I discussed OKLab as a potential candidate.
  • Matthias Scharfenberg: It is a much simpler model than ZCAM.
  • Daniele Siragusano: How does OKLab work above diffuse white? But we could try various models using the same structure as this ZCAM approach.
  • Alex Fry: I did try a simple Yxy based model, and felt maybe it was my gamut compression that was the biggest factor in what I liked. People should read Troy's post on ACES Central, concerning keeping brightness vs keeping color.
  • Daniele Siragusano: Highlight bleaching in SDR has a big effect on matching SDR and HDR if HDR doesn't bleach. It is maybe better done in an LMT.
  • Matthias Scharfenberg: Our gamut compression is currently a pretty simple straight line approach. Other approaches would be different.
  • Daniele Siragusano: It's a very interesting exercise to see what the spectral locus at various intensities looks like in these CAM models. I am wary of models like ZCAM based on matrices applied to non-linear cone response. Maybe somebody could try redoing the fitting in the model targeting a more friendly display gamut.
  • Joshua Pines: I worry that some of the things we are discussing should be a knob for creative control on a shot by shot basis.
  • Nick Shaw: Could you have a parameterised LMT that was specific to the rendering that controlled e.g. saturation vs brightness?
  • Alex Fry: You could end up with a rendering that does nothing, and everything is in LMTs that are display-dependent.
  • Matthias Scharfenberg: At a gamut boundary you can either be asymptotic, which makes the boundary hard to reach, or my version which sets a limit which maps to the boundary, but further out values will go outside the gamut.
  • TS: That could cause accidental skews where values escape the display capabilities.
  • Daniele Siragusano: There is still a person in the loop who can correct for extreme imagery, so you don't have to give up too much in the rendering. There is always mastering clipping, like limiting to P3 in X'Y'Z' or Rec.2020.
  • Nick Shaw: Where are we at with reporting back to the TAC?
  • Alex Fry: The ZCAM approach is promising, but there's still a lot of work to do.
  • Scott Dyer: I can write a summary of what the SSTS tweaks will and won’t do. And there are still unknowns with how much work is involved in fixing that. So we can give the TAC our list of requirements and estimated T-Shirt sizes, so they can make some decisions about what's on the table and set some timelines.

Meeting #34, December 15th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Rémi Achard
Rod Bogart
Daniel Brylka
Jean-Michel Gilbert
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Shane Smith
Troy Sobotka

Meeting Notes

  • Alex Fry: I've been adding to my ZCAM DRT. Matthias has been doing a parallel similar but different approach.
  • Jean-Michel Gilbert: I have been working on matching colorimetry and contrast between HDR and SDR in our game. SDR is currently a little dark. You have to start from the same perceived contrast or you're constantly compensating one to match the other.
  • Nick Shaw: Your game is using the OpenDRT?
  • Jean-Michel Gilbert: Yes. But in our testing the 100 nit SDR has less contrast than the 1000 nit. I ended up removing the surround compensation and making the image brighter to get more out of SDR. Maybe we'll have to do separate SDR and HDR grades, but we hope not.
  • Joshua Pines: From the Hollywood side, separate trim passes for HDR and SDR are the norm. We call them trims, but they really are different worlds. When you turned off the surround compensation, did that improve the match?
  • Jean-Michel Gilbert: Yes. In OpenDRT the surround compensation is a multiplier for the contrast. So SDR 1.4 contrast with 0.9 surround compensation ends up with ~1.2 contrast, which is what HDR contrast is.
  • Nick Shaw: Alex, in your ZCAMish DRT are you doing any surround compensation? It looks like surround is set to "average" in both the forward and inverse transform.
  • Alex Fry: Nothing yet. In ZCAM when the surround for forward and reverse don't match it also changes the absolute brightness, so more work is needed to make that work.
  • Nick Shaw: You're applying the current ACES curve by inverting out the ZCAM curve to get a difference, so the end to end curve matches current ACES? So if the curve you're inverting changes, you would need to take that into account.
  • Alex Fry: It's crude and uses a couple of 1D LUTs, so as Pekka pointed out it doesn't perfectly match. I discovered that there was an issue with the way Nuke implemented my 2D lookup. It wasn't seeing the whole range, and the highlight compression wasn't happening. I've padded it out so it behaves correctly. There's still some clustering that needs investigating. I've also decoupled the 2D lookup so there are separate ones for <0 and >1, so you can soft clip them independently. That's v006 in the repo. I'll bake out some sample movies so people can see things moving. Matthias has been doing his work separately.
  • Matthias Scharfenberg: I based my version on Thomas' Colour Science library. Also instead of tone mapping the ZCAM J component as Alex does, I thought I would try something less perceptually tuned like the Iz component. I'm doing a similar thing to Alex with my 2D lookup for the boundary, but it's self-contained rather than baking out images. They are very similar. My different approach to the lookup gave me some interesting insights (see recording at 16:30). I was thinking about how hard it might be to approximate the boundary without a lookup. It needn't be perfect, because I used a different compression approach based on the one in the Reference Gamut Compression, which is not completely asymptotic, which is better for invertibility. I will share it if people want to see my version.
  • Alex Fry: I see you have some surround compensation in there.
  • Matthias Scharfenberg: I'm just exposing the surround parameters from the CAMs. I'm also using the SSTS from Jed's Nuke implementation so it is easy to do HDR variations. But currently the surround parameters seem to have no effect. I'll have to investigate.
In the recording at 24:30 Matthias shows how he constructs his boundary lookup dynamically, including the SSTS parameters.
  • Thomas Mansencal: Maybe we could speed up the LUT construction using Blink.
  • Matthias Scharfenberg: Yes, but then it wouldn't be accessible for people using Nuke non-commercial. Anyway I wanted to start by just getting it working.
  • Alex Fry: My implementation is based off the LuxPy implementation, so may not be identical to the paper.
  • Thomas Mansencal: It's not possible to match the paper exactly, because we don't know what chromatic adaptation they use. I used the two-step one the authors used previously. But it's an assumption. And the two-step is needed to make it round trip.
  • Matthias Scharfenberg: I used the Zhai 2018 two step method in mine.
  • Alex Fry: I just left it out, and assumed you have already transformed the data.
  • Matthias Scharfenberg: The highlight desat behavior creates some artifacts e.g. with the Mercedes building image.
  • Troy Sobotka: I think that chromatic attenuation is an important component that I don't think a CAM answers fully. The output medium can't express the stimulus, so how and why that compression happens is a fundamental question that I don't think has been addressed yet.
  • Alex Fry: I feel one thing I like about the ZCAM DRT is that it maintains a sense of brightness as values approach the edge of the gamut. That could be desirable or not.
  • Troy Sobotka: The Red Xmas has obvious bright bulbs in it, and we have learned expectations for that. If a representation doesn't match our expectations it triggers a sort of uncanny valley response. That feels to me like a very important design facet to address. With Blue Bar and others too, we don't get a sense of the magnitude of the stimulus from how they are rendered on the display.
  • Alex Fry: With the CG woman's face you don't get such a sense of the saturation with ZCAM, but you don't get the flattening out, and you maintain a sense of brightness in the overexposed version. Although that might be problematic to grade through if you want something bright and saturated.
  • Troy Sobotka: I'm not 100% sure about that, to be honest, because technically, if you're on a brightness metric, every chromatic stimulus mixture lives at a different position, and as a result for each one there would be, in theory at least, a fully relaxed version where you can run right out to the edge of the display medium's gamut volume. It would probably be very undesirable, but certainly parametrically tunable.
  • Jean-Michel Gilbert: It's certainly undesirable. We've been fighting against what OpenDRT does in certain extreme cases.
  • Troy Sobotka: OpenDRT doesn't use a lightness metric, which is why it suffers with chromatic attenuation on yellows. Björn Ottosson plotted some HSL sweeps which show this. And Alex has one, I think, plotted against ZCAM Jz which attempts to be a brightness metric, even if it's faulty. So each of those values can hit the medium's limit.
  • Nick Shaw: But the artifacts you see on that image are deliberate because it's an accumulation of values, deliberately stopping at a threshold to find it.
  • Troy Sobotka: Yes, but if you're using a brightness metric, you literally can go out to that and then hit the corner and across. If you use a different method of attenuation, you'll find that the high luminance ones like yellows will always attenuate the most just because they're tapered towards the edge as opposed to a brightness metric like the one which allows it to go all the way up to the edges.
  • Alex Fry: Can you explain your issues with the Z component of ZCAM?
  • Troy Sobotka: It's anchored in luminance, and because luminance was not designed for what it ends up being used for, it's problematic. It would be nice if there was an actual engineered reasoning behind the attenuation, as opposed to complementary light happy accidents, which I think is still the case with ZCAM.
  • Alex Fry: Any other models we should look at?
  • Troy Sobotka: Thomas might know.
  • Thomas Mansencal: I don't have a problem with luminance. It's a measure of stimulus, and psycho-physical experiments give you the observer response as a function of that. What other input would you have to the model except luminance?
  • Troy Sobotka: Luminance being the baseline for many of the models is fundamentally problematic for images. There's no manipulation of luminance that works for a chromatic source, which strikes me as a large showstopper for imagery.
  • Thomas Mansencal: We still end up creating stimulus from a device, and we can measure that, which is luminance. And we compute the response of an observer to that stimulus.
  • Troy Sobotka: I'm saying with chromatic sources that doesn't work.
  • Matthias Scharfenberg: Are you suggesting we keep J constant and just compress the colorfulness component? Because we have to compress something because we're limited by the output device.
  • Troy Sobotka: It's trivial to show that modifying luminance doesn't work.
  • Thomas Mansencal: We're not working on luminance. The CAM gives you the observer's perception of the luminance. We're in the model space. It may not be complete, but that's what it's trying to do.
  • Matthias Scharfenberg: Does ZCAM take the Helmholtz–Kohlrausch effect into account when it calculates brightness of chromatic sources, or does it just use Y? If someone has a better CAM then great, but currently ZCAM seems to be the best we have.
  • Thomas Mansencal: Even if you look at Nayatani 1995, which takes HK into account, it's still fed with XYZ and is built on the LUTCHI data set.
  • Troy Sobotka: Luminance was never designed for that. It was designed to be additive. Hence the problem. You can go all the way back to 1912, and there are plenty of researchers who have demonstrated that it's super- and sub-additive. Would it make sense to try Alex's experiments with a different model that's based in actual brightness?
  • Thomas Mansencal: How do you get that brightness when what you have is an RGB image? All you can do is convert it to XYZ and apply a model.
  • Troy Sobotka: I think we need to find a model that renders a hue sweep uniformly in terms of brightness.
  • Alex Fry: I wouldn't expect my LUT sweep to look uniform. But it should look uniform in the layers that make up the LUT before they hit the boundary. Only what is within the display gamut can be judged for uniformity.
  • Thomas Mansencal: Send us a paper Troy!
  • Troy Sobotka: I think CAM18sl has a component in it, and maybe a couple of others. I think Fairchild has one. There may be a correlation between the output medium's range of response and the amount of attenuation that needs to be applied that might be worth pursuing.
  • Thomas Mansencal: We need to be pragmatic. All the required research is not done. We need deliverables.
  • Alex Fry: Yes, we need to use existing research. We don't have the time or scope to do new research. Let's try and find the best one that exists. We have one more meeting for the year.

Meeting #33, December 8th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Sean Cooper
Michael De Caria
Alex Forsythe
Zach Lewis
Thomas Mansencal
Matthias Scharfenberg
Daniele Siragusano
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: We'll have a demo of a newly fixed ZCAMish DRT
  • Alex Fry: To be clear, I didn't fix anything in the DRT. One of the images had been imported incorrectly.
Alex summarized the ACES Central thread on the ZCAM DRT and the latest update to his DRT, adding HDR output, and with a 2D lookup for gamut compression.
  • Alex Fry: With most "normal" images it seems to do reasonable things. With more extreme ones it behaves differently. With some it perhaps desaturates too much. And some things seem to be related to the way ZCAM curves hue around in CIExy. Some unreal colors we might think of as "very blue" get curved round towards cyan as they are desaturated in ZCAM M. Same for CG images where the blue is slid to the max in ACEScg, giving you blues that are more cyan than you might expect.
  • Lars Borg: That may be a sign that the model isn't very good for blues. Even some Rec.709 blues end up cyan.
  • Alex Fry: As Thomas posted, the data set the model is fitted to is a bit thin in the blue corner, so the model will be extrapolating. Taking Chris's CG render and modifying it so it's as if it was lit with Rec.709 blue, it still ends up a bit cyan, but looks a lot more reasonable.
  • Lars Borg: I'm concerned that you're not using something like a max(RGB) model. With a camera pointed at a real scene, if you put a magenta filter in front of it the red and blue channels are unchanged. Only the green changes. If we map it to the luminance falling on the scene, we map the filtered color as strongly as we map the unfiltered color. But with a JCH type model we map the filtered color much less, so we keep it far stronger than the unfiltered color. I've seen that look very unnatural. If you have something that goes from white to blue, if the tone mapping darkens the whites, it should also darken the blues. With a JCH model it wouldn't do that. Video guys like to use max(RGB) because that indicates the intensity of the light more than luminance does.
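[A minimal numpy sketch of the norm choice Lars describes — illustrative only, not code from any candidate DRT; the tone curve and test values here are made up.]

```python
import numpy as np

W709 = np.array([0.2126, 0.7152, 0.0722])  # BT.709 luminance weights

def tone(x):
    # Placeholder compressive curve; each candidate uses its own.
    return x / (x + 1.0)

def max_rgb(rgb):
    return float(np.max(rgb))

def luminance(rgb):
    return float(W709 @ rgb)

def render(rgb, norm):
    # Ratio-preserving tone map: compress the norm, scale RGB by the same factor.
    n = norm(rgb)
    return rgb * (tone(n) / max(n, 1e-6))

white   = np.array([4.0, 4.0, 4.0])  # bright scene white
magenta = np.array([4.0, 0.4, 4.0])  # the same white behind a magenta filter

for norm in (max_rgb, luminance):
    print(norm.__name__, render(white, norm), render(magenta, norm))
# max_rgb attenuates both colors identically (red and blue are untouched by
# the filter); the luminance norm compresses the filtered color far less,
# leaving it relatively much stronger, as Lars describes.
```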
  • Daniele Siragusano: There are pros and cons to every model. max(RGB) has discontinuities where you cross from one component being maximum to a different one. And that slight conversion of the blues is something a lot of models suggest perceptually.
  • Lars Borg: Yes, and the Helmholtz–Kohlrausch effect suggests you don't want to darken blues as much. But given that in these spaces blue only has a ~5% contribution to luminance, it doesn't get darkened at all, so you end up with an out of gamut color, and you have to decide what to do with it. If you then desaturate it, is that a viable approach? Because I know many video guys don't like it.
  • Alex Fry: I have a feeling I may be getting more from the 2D lookup gamut compression than ZCAM itself, in terms of the things I'm liking. I'm compressing only in the M dimension after tone mapping. Each of my output presets uses a different LUT image baked out to represent the limits of each display cube.
  • Thomas Mansencal: I should point out that the LUTCHI data set I plotted is not the whole data set, as I don't have that. But it's the data set used to build appearance models.
  • Alex Fry: I was interested in the image Pekka posted which shows a pinched blue. And I don't know the reason for the weird discontinuity where the cyan ends meet.
  • Daniele Siragusano: In the animated GIF you posted, at the end the color space was not connected any more, which is worrying.
  • Alex Fry: I think those are the corners of the cube, and I'm not interpolating between them. That's just used to construct the maximum value M can have at a given position.
  • Nick Shaw: Nothing with gaps gets interpolated, because the generated map is continuous.
  • Alex Fry: I haven't checked whether the two sides of my map image match up, so that it's continuous in hue. But I'm still trying to figure out where the issue is coming from.
  • Nick Shaw: I checked tiling the map image in Nuke, and it appears to be continuous.
  • Alex Fry: It's also interesting in this plot to see how much of an ACEScg color wheel falls outside Rec.709.
  • Kevin Wheatley: I had some questions regarding the HDR and SDR. What parameters are being varied?
  • Alex Fry: I'm basically applying two curves, one which inverts the relationship between J and the original scene linear values, and a second one applying the current ACES Output Transform curves. So the result with a grey ramp should match the existing ACES transforms. To create the maps for SDR I'm taking a set of ramps with M varying in time, running it through the inverse of the JMH mode of ZCAM, normalizing it by a factor of 100, converting to sRGB and creating an inside/outside gamut matte, and multiplying that with the ramps, and using a recursive over to produce the LUT image. For HDR I had to ramp J to 226, rather than 1000, as 226 is the J value needed to produce 1000 nits. Then I use those LUT images to soft clip M with a user controllable threshold.
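[A hedged sketch of the M soft clip Alex describes. The exact curve and threshold in the prototype may differ, and M_max stands in for a value sampled from the baked LUT image at the pixel's J and hue.]

```python
def soft_clip_M(M, M_max, threshold=0.75):
    """Soft-clip colorfulness M against a per-(J, h) display limit M_max.

    Below threshold * M_max values pass through unchanged; above it they
    roll off asymptotically so M -> infinity lands exactly on M_max.
    """
    t = threshold * M_max
    if M <= t:
        return M
    span = M_max - t
    x = (M - t) / span
    return t + span * (x / (1.0 + x))

# M_max would be sampled from the baked LUT image; here it is made up.
print(soft_clip_M(0.5, M_max=1.0))  # 0.5   - inside the threshold, unchanged
print(soft_clip_M(2.0, M_max=1.0))  # ~0.96 - pulled inside the display limit
```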
  • Kevin Wheatley: So perhaps the gamut mapper you're using may have the largest contribution.
  • Alex Fry: Possibly. So I've made a simple test version that just used a radial Yxy space, and it breaks in some ways (particularly when Y collapses to zero or negative values) but it does give a lot of the stuff that I liked in the ZCAM version. But it shows that the gamut mapping approach can be adapted to any model with hue angle and distance.
  • Thomas Mansencal: But you're still working in a perceptually uniform space, so that's the contribution of the CAM, yes?
  • Kevin Wheatley: That's a good thing to have, but what's missing for me from the SSTS approach, apart from gamut mapping, is mapping between different display intensities.
  • Alex Fry: ZCAM has parameters relating to surround, but I haven't experimented with them. Maybe I can just expose them for people to play with.
  • Kevin Wheatley: I know we're not supposed to look at implementation, but would this kind of 2D lookup be acceptable? Or can we find a function to approximate it?
  • Alex Forsythe: Would symbolic regression be applicable? It's a machine learning type approach to figure out what function would fit.
  • Nick Shaw: You might also find that a curve fit approximation would smooth out some of the odd jaggies.
  • Kevin Wheatley: There are a huge number of gamut mapping approaches. Basically you're trying to find the nearest point on the surface that represents your gamut in that (hopefully perceptual) space. But there's no guarantee there will be a single nearest point, or that the result is smooth.
  • Alex Fry: Probably a game developer or render developer could find the solution easily.
  • Daniele Siragusano: Maybe it's worth first looking at signed distance functions. That's much more approachable on GPUs.
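[For reference, the standard axis-aligned box SDF applied to a display cube — a sketch only; a real mapper would want the distance evaluated in the perceptual space rather than in display RGB, where the boundary is no longer a cube.]

```python
import numpy as np

def sdf_display_cube(rgb):
    """Signed distance from a display-linear RGB point to the [0,1]^3 cube.

    The standard axis-aligned box SDF: negative inside the gamut, positive
    outside, and cheap enough to evaluate per pixel in a shader.
    """
    p = np.asarray(rgb, dtype=float) - 0.5  # recenter the cube on the origin
    q = np.abs(p) - 0.5                     # per-axis distance past each face
    outside = float(np.linalg.norm(np.maximum(q, 0.0)))
    inside = min(q.max(), 0.0)
    return outside + inside

print(sdf_display_cube([0.5, 0.5, 0.5]))  # -0.5 (cube center)
print(sdf_display_cube([1.2, 0.5, 0.5]))  #  0.2 (outside the red face)
```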
  • Alex Fry: Am I missing a trick? Can anybody see a different approach?
  • Daniele Siragusano: I don't think so.
  • Alex Forsythe: I like that this ZCAM approach seems more scientific than just looking at pictures.
  • Daniele Siragusano: I'm not sure about that. It's a big philosophical discussion what those experiments actually mean, and how they can be generalized. But at least if you have a method like this to make a DRT based on an opponent model, you could swap that later for a different model, or tweak the underlying model or color space.
  • Thomas Mansencal: It's also worth testing the Kim 2009 model. I may try it if I have any time during the break. It uses HDR datasets. I think it was used for FilmLight's T-CAM DRT.
  • Alex Fry: Anything we can make an implementation of in Nuke I can wire up pretty easily.
  • Daniele Siragusano: If you can come up with a programmatic way of doing that gamut mapping we can start to evaluate different models and spaces. You could just look at in-gamut colors, but I think including this kind of gamut mapping is helpful.
  • Alex Fry: The way it goes up currently seems fine. It's the imaginary colors coming back in that's causing the drama.
  • Daniele Siragusano: I find the desaturation a bit strong. Is that controllable with your gamut mapping?
  • Alex Fry: It's interesting how that varies with the image. With the ARRI Isabella image, for example, there's a very noticeable difference between the whites of the different papers, which isn't there with other DRTs.
  • Kevin Wheatley: But changing your gamut compression threshold and curve might alter that.
  • Alex Fry: I could also try separating the lookups for values that go negative, and those that go over one, because now it's one lookup.
  • Kevin Wheatley: I'd like to look more at using CAMs for going between different display intensities.
  • Thomas Mansencal: And viewing conditions, although that's really part of the same thing.
  • Kevin Wheatley: Perhaps we might be able to pick a single rendering curve and use a CAM to do the variations between viewing conditions.
  • Daniele Siragusano: But the CAM doesn't solve that. You still need tone mapping, but you just have more degrees of freedom in the dimension you choose.
  • Kevin Wheatley: Currently we do our tone mapping in ill-defined positions. Some of the SSTS may account for perceptual effects, but it's not clearly factored out.
  • Daniele Siragusano: So you hope by doing it in the right domain, things become more decoupled.
  • Kevin Wheatley: The only other thing is the Google Form. Has everybody filled that in? If not, please do.

Meeting #32, December 1st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Chris Clark
Francesco Luigi Giardiello
Jean-Michel Gilbert
Andy Maltz
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Shane Smith

Meeting Notes

  • Kevin Wheatley: Today we're going to go through the results of the Google form for the T-shirt sizes of the requirements.
  • Alex Fry: It's only been up a couple of days, but we have a few responses already.
  • The first few, defined for all half-floats, continuous and continuously increasing, are all small for all three.
  • No asymptotes is small/medium for all.
  • Lower contrast is small/medium for SSTS, and veering towards small for the other two.
  • Nick Shaw: The answers are influenced by how people interpret the question. Is it just lower contrast, or are we asking how hard it will be to decide the right contrast?
  • Alex Fry: I think I voted large on this, as I think there will be a lot of testing needed to decide the right contrast.
  • Nick Shaw: I think I answered small, just in terms of "yes the contrast should be lower".
  • Jean-Michel Gilbert: I need to revise my answers to take user testing into account.
  • Kevin Wheatley: Everything up to here was clearer, but if you interpret this one literally it is easy. It shows there is complexity to the question.
  • Alex Fry: The base set of transforms is no big deal for the SSTS or Open DRT – they do it already. ZCAM is a bigger ask, as it doesn't do it yet.
  • SDR and HDR matching everyone agrees won't happen for SSTS. Medium for the other two.
  • Jean-Michel Gilbert: We've been working on this for over a year, and there's no agreement on what a "match" is. It's different with lights on or lights off, and different people's opinions. I say more than medium for Open DRT and ZCAM.
  • Nick Shaw: The biggest problem is deciding what constitutes a match.
  • J. Schulte: It's very important, because feeling you have a match between your deliverables is a big reason to choose ACES.
  • Alex Fry: So large instead of medium?
  • Kevin Wheatley: For me the key is the distribution of answers. We don't have the same totals, so not everybody's answered all questions.
  • Scott Dyer: I didn't answer them all because sometimes I felt I didn't have enough information about the ZCAMish DRT.
  • Kevin Wheatley: Where I didn't answer, I didn't answer any, to keep the counts consistent. But it's a small sample size so it's only a pointer.
  • Nick Shaw: Definitely qualitative, not quantitative with an N of 9.
  • Alex Fry: We can leave it open so more people can respond.
  • Ability to produce arbitrary dynamic ranges falls out of the last two. Medium for SSTS and small/medium for the other two.
  • Hue shall be consistent is a hard no for SSTS, medium for the other two. For ZCAM, how we define hue is the biggest unknown.
  • Cube in a balloon is no for SSTS, small for Open DRT because that's where that definition comes from. ZCAM heavily unknown.
  • Spanning the display gamuts is small for SSTS and medium/large for the other two.
  • Using a CAM is a bit vague, but can't be done for SSTS, because that's too big a change. Medium for Open DRT and small for ZCAM because it is a CAM.
  • No LUTs is small for the first two and small/medium for ZCAM, because there are a lot of unknowns.
  • Nick Shaw: Your ZCAMish DRT currently uses a 1D LUT for the gamut compression?
  • Alex Fry: Yes. But I think it needs more than that.
  • Nick Shaw: Because it varies at different brightnesses.
  • Alex Fry: Yes, my LUT was only sampled at 100 nits. ZCAM doesn't fundamentally use any LUTs, but I used one to find the edge of the display gamut and pull stuff in.
  • Daniele Siragusano: Fitting display gamuts in ZCAM programmatically would be impossible.
  • Jean-Michel Gilbert: Bjorn Ottosson had a paper about finding the max saturation of a display gamut in different color spaces. And there's a ShaderToy version.
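[A generic sketch of the gamut-boundary search being discussed. Ottosson's write-up derives a cheap closed-form approximation for Oklab; this is the brute-force bisection equivalent that works for any model, with jmh_to_display_rgb as a hypothetical placeholder for the model's inverse.]

```python
def max_in_gamut_M(J, h, jmh_to_display_rgb, hi=1.0, iters=24):
    """Bisect for the largest colorfulness M at (J, h) inside the display.

    jmh_to_display_rgb is a placeholder for the model inverse
    (JMh -> display-linear RGB in [0, 1]).
    """
    def in_gamut(M):
        r, g, b = jmh_to_display_rgb(J, M, h)
        return 0.0 <= r <= 1.0 and 0.0 <= g <= 1.0 and 0.0 <= b <= 1.0

    lo = 0.0
    while in_gamut(hi):          # expand until we are outside the gamut
        lo, hi = hi, hi * 2.0
    for _ in range(iters):       # then bisect onto the boundary
        mid = 0.5 * (lo + hi)
        if in_gamut(mid):
            lo = mid
        else:
            hi = mid
    return lo
```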
  • Daniele Siragusano: Isn't ZCAM concave in some areas, so there isn't one clear winner if you go back and forth between ZCAM and sRGB?
  • Alex Fry: It may be, but I hadn't noticed.
  • No black boxes. Small for SSTS and Open DRT, small/medium for ZCAM.
  • Round tripping a cube of output values. Medium for SSTS and Open DRT, and large for ZCAM.
  • Conceptual OCES target, small for SSTS because it already does it, Medium for Open DRT, medium/large for ZCAM.
  • Separation of "rendering" and "preparation of display light", separating the ODT. Small/medium for the first two and small for ZCAM.
  • Kevin Wheatley: I'd be interested to know why people voted SSTS as harder. The modules are there already, but might just need some basic reordering. It's fairly clear where the cut should be.
  • Scott Dyer: Although the parts are there it is a bit of work to reorder them and make sure they are doing the right thing consistently. I say maybe medium particularly due to migrating the SDR transforms.
  • Nick Shaw: Migrating those to the SSTS will obviously change the look slightly, but do you think there needs to be more to it than picking the right SSTS parameters?
  • Scott Dyer: Hopefully not. But there are more weirdnesses in there than I remember.
  • Alex Fry: Avoid unnecessarily complex steps, a mixed bag here. Particularly for the SSTS.
  • Nick Shaw: I guess your assessment of difficulty depends how much you mind where the SSTS is now, so how much needs changing to get to a release candidate.
  • Alex Fry: I'm curious about the medium and hard votes. Do some think the SSTS is too complex already?
  • Jean-Michel Gilbert: The Bezier curves in the SSTS use up a lot of GPU power, so I bake a LUT.
  • Alex Fry: Emulating a display on a larger DR display, small
  • LMT now, shall institute preferential skews etc: an even spread of votes for SSTS. I guess with the SSTS you're fighting the existing skews. Medium for the other two.
  • Add contrast, easy across the board.
  • Allowing for creative white points, small/medium for all.
  • Kevin Wheatley: With SSTS you can't control white at the top in an LMT because of the roll-off.
  • Daniele Siragusano: You can't do it properly unless you do it further along.
  • Nick Shaw: We're not still under LMT, are we?
  • Alex Fry: No.
  • Kevin Wheatley: It depends then how you implement it. For a fixed set of white points you just have different ODTs, like the current D60 sims. For fully variable white points it could be harder.
  • Joshua Pines: If people decide on a D60 creative white, they want that preserved across all deliverables.
  • Kevin Wheatley: For that it becomes a function of the on-the-wire encoding.
  • Joshua Pines: I just want the option to turn the chromatic adaptation off.
  • Nick Shaw: That only helps with D60, not arbitrary creative whites.
  • Joshua Pines: Currently if people start working with a Rec.709 OT, they unknowingly impose a D65 white. Then for theatrical they have to go out of their way to match that.
  • Alex Fry: I was imagining something like how Baselight does it, where you pick the creative white independently of the device. And we ship a couple of fixed D60 presets.
  • Shall have simple display gamut mapping. That's beyond a small change to the SSTS, so medium/large. Medium/large for Open DRT. Medium/large/XXL for ZCAM, which is unknown.
  • Kevin Wheatley: To fulfill the previous ones we are probably looking at a simplistic gamut mapping that's a bit better than hard clipping. But we don't know what that is, or how hard it will be.
  • Alex Fry: Surround compensation. Even spread in the SSTS where we have it already, but it needs extending. How does it map to what we're trying to do? Sounds difficult across the board. But hopefully whatever we do applies across the board, if it's decoupled from the rendering… unless it's part of it for ZCAM.
  • Kevin Wheatley: Does anybody have other thoughts on what might be problematic? I like to look at this sort of thing for where people disagree. That suggests there is a misunderstanding or lack of clarity in the requirement.
  • Alex Fry: That certainly looks like the case for the last one (surround).
  • Jean-Michel Gilbert: Depends if you think we basically have it already or you think it needs hardware with a light sensor.
  • Chris Clark: I voted XXL on the assumption it meant doing our own perceptual experiment and getting some data.
  • Scott Dyer: I was thinking the same. I have concerns about the simple gamma that's in there now. It's only dim vs dark, and does it apply at all for HDR? There are a lot of unknowns.
  • Nick Shaw: Daniele, do I remember you saying a simple gamma does work quite well?
  • Daniele Siragusano: We found it did. There's a lot happening there, and you can only do an overall fit. Dark and bright scenes might need something different. But it's a classic Bartleson–Breneman equation. It's literally just an exposure compensation. But of course it needs to be validated.
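[A sketch of the kind of surround gamma being discussed, shaped like the ACES 1.x dark-to-dim step: a power slightly below 1.0 applied to luminance only, leaving chromaticity alone.]

```python
import numpy as np

DIM_SURROUND_GAMMA = 0.9811  # the value used in the ACES 1.x dark-to-dim CTL

def dark_to_dim(XYZ):
    """Surround compensation as a power function on luminance only.

    Mirrors the shape of the ACES 1.x approach: convert to xyY, apply a
    gamma slightly below 1.0 to Y, convert back; chromaticity is preserved.
    """
    X, Y, Z = np.asarray(XYZ, dtype=float)
    s = X + Y + Z
    if s <= 0.0 or Y <= 0.0:
        return np.array([X, Y, Z])
    x, y = X / s, Y / s                 # chromaticity, left untouched
    Y2 = Y ** DIM_SURROUND_GAMMA        # the "exposure-like" compensation
    return np.array([x * Y2 / y, Y2, (1.0 - x - y) * Y2 / y])
```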
  • Joshua Pines: Traditionally colorists have been happy with the simple 2.4 to 2.6 gamma difference. In fact many complained about the introduction of the surround compensation when ACES first came out. But generally I think we should be able to compensate for different environments. Maybe the colorists just weren't used to it. And people unfortunately do look at projection and a monitor simultaneously (even though they shouldn't) and surround compensation creates a mismatch.
  • Daniele Siragusano: Historically gamma was used successfully in implicit color management. sRGB, legacy video, or 2.4/2.6, so you could go from one to the other without any adaptation.
  • J. Schulte: Yes, those came from a need for a first order approximation of surround compensation.
  • Kevin Wheatley: But how much does it apply to HDR? It's accepted for SDR. HDR always feels off to me, like we don't understand it well enough to know what to do. So we need to turn these results into a set of actions. Could anybody who hasn't filled it out please try to do so, and post on ACES Central or get back to us, so we can prepare something for the TAC. Then we can choose some of these and try to do them, to validate people's guesses. Even if it's just documenting discrepancies that need fixing.

Meeting #31, November 24th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Daniel Brylka
Sean Cooper
Michael De Caria
Zach Lewis
Joe McC
Michael Parsons
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith

Meeting Notes

  • Kevin Wheatley: The TAC meeting was last week, and we went over the spreadsheet of requirements. We need to categorize which can be met by our three candidates, and how much effort each might require, by the end of the year. We need to feed back to the TAC whether the minimal set of changes would achieve a reasonable proportion of the requirements, so they can consider how much to extend the original deadline. We also want to discuss the meeting cadence between now and the end of January – stay every two weeks, or maybe move to every week with perhaps shorter meetings.
  • Joshua Pines: Do people think the TAC's requests are reasonable?
  • Kevin Wheatley: One difficulty we have is availability of people to carry out work.
  • Nick Shaw: It's not too much of an assumption to say we're sure we can't achieve all the requirements with a minor tweak to the current Output Transform, is it? Some definitely need a different approach. How do we define what counts as meeting enough of them? Doug Walker's "T-shirt approach"?
  • Kevin Wheatley: The T-shirt sizing measure is about how big a task something is and the amount of resources needed. But even if we had infinite resources, could a minimal set of tweaks satisfy enough of the requirements? We have three approaches on the table – minimal tweaks to the current version; Jed's OpenDRT or something based on it; or something based on a CAM. The CAM requires the most work, but given that what Alex did was put together quite quickly, the results look pretty good. The point of the exercise is to see whether we could achieve a reasonable proportion of these requirements if we had a limited deadline imposed on us.
  • Scott Dyer: The original aggressive timeline was set outside our control. This is a very complex problem. And I think the TAC has realised the timeline is unrealistic, so they want to make the problem smaller in the short term. So initially it's an evaluation of whether minimal changes are enough to become ACES 2.0, and of what requirements are showstoppers, so they could set a more realistic timeline – a target date for a release candidate for people to test (which they may not like). So we need to add three columns to the spreadsheet and evaluate each requirement against each candidate.
  • Joshua Pines: Each column would contain either "no" or small/medium/large for how much work it is.
  • Scott Dyer: Some things will require investigation to know how much work is needed.
  • Alex Fry: Tone scale defined for all float values shouldn't be controversial.
  • Nick Shaw: Likewise continuity, although there is obviously no continuous domain between e.g. NaN and infinity.
  • Kevin Wheatley: Perhaps infinity in would produce the same output as MAX_FLOAT in? I think monotonicity is also small.
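[One possible convention for Kevin's suggestion, sketched in numpy — an assumption for illustration, not an agreed behavior.]

```python
import numpy as np

HALF_MAX = 65504.0  # largest finite half-float

def sanitize(aces):
    """One possible reading of 'defined for all half-floats'.

    +inf behaves like HALF_MAX (as suggested above), -inf like -HALF_MAX,
    NaN collapses to 0, and everything is clamped to the half-float range
    before the tone scale sees it.
    """
    a = np.nan_to_num(np.asarray(aces, dtype=np.float64),
                      nan=0.0, posinf=HALF_MAX, neginf=-HALF_MAX)
    return np.clip(a, -HALF_MAX, HALF_MAX)
```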
  • Nick Shaw: Although it does depend on what we define as tone. Increasing luminance could be a decrease in a different "brightness" attribute along some gradients.
  • Alex Fry: I'd say Medium for ZCAM
  • Kevin Wheatley: Even for neutrals?
  • Joshua Pines: It's an under-defined requirement. Is it just luminance?
  • Scott Dyer: Originally the list said "neutral tone scale".
  • Kevin Wheatley: Non-asymptotic is easier with less roll-off, but that ties to less look and needing a default LMT (which nobody objected to last week). We can say medium in this first pass, and revisit it later.
  • Daniele Siragusano: Does tone scale mean for R=G=B?
  • Kevin Wheatley: I think it's some kind of cylinder around R=G=B that we can consider neutral.
  • Nick Shaw: Is there a metric for a just noticeable difference from neutral?
  • Daniele Siragusano: We could define it backwards from the display referred ACES white point.
  • Kevin Wheatley: But there's some complexity from the fact we never settled on whether you fully adopt the white, or if there could be something else like a bias light in the environment. But for now let's say the colorimetry of the nominal equal value output.
  • Scott Dyer: I'd say neutral tone scale where R=G=B which would result in ACES white at output unless you add additional modifications.
  • Kevin Wheatley: Of course some of our current outputs adapt fully to the display white.
  • Scott Dyer: But this requirement is conceptually for the RRT, which is for an OCES display which has an ACES white. It's easier if we split that up. Not all outputs need to have equal code values out for equal code values in.
  • Kevin Wheatley: Contrast is probably bigger than "small" because we don't know what the value should be.
  • Nick Shaw: As a requirement it just says "less than".
  • Kevin Wheatley: The requirement needs refining to put some bounds on that "less than". And we need to distinguish OCES from actual display output, because the actual output could vary with viewing conditions. And the dim/dark etc modifications could be another requirement. And "S-shaped"? Is it difficult for any of the current outputs to achieve?
  • Joshua Pines: It's easy to say it should be s-shaped. What the actual curve should be is much harder.
  • Nick Shaw: All the candidates have a curve at their core, so aren't we just putting a restriction on what shape that curve should be? Would anyone argue against s-shaped?
  • Alex Fry: There could be situations, with things like LED walls that rise straight out of black, where I'm not sure what it gets you. It seems it should be there, but I don't know if it always makes sense.
  • Daniele Siragusano: It's important to define the input and output domain. The current rendering is s-shaped for ACEScc input, but with ACEScct it takes a downturn at the bottom end if you plot against 2.4 gamma output code values.
  • Joshua Pines: I think we mean ACES linear in vs linear out on a log/log plot.
  • Kevin Wheatley: So again easy to make it a sigmoid curve. Harder to decide what curve.
  • Joshua Pines: That's the case for a lot of them. Easy to impose a requirement but hard to make the final choice.
  • Kevin Wheatley: Presets, it's again easy to say we need presets for all these, but much harder to say how we map things between them.
  • Joshua Pines: Just saying "there should be these presets" the answer is yes!
  • Kevin Wheatley: So HDR and SDR "matching" is the hard one.
  • Alex Fry: That would be very hard with the SSTS per channel approach.
  • Nick Shaw: You could argue for just "no" on that, because with RGB highlight roll-off in different places, different things will skew, so they will never match.
  • Kevin Wheatley: It depends what minimal change means. I was imagining we would render to a reference display, whether OCES or an actual realizable display, and then that's the appearance we try to match on other displays. The initial rendering would be simple, but then you would need color appearance matching, etc, which would be more complex.
  • Nick Shaw: I imagined minimal meant sticking to the current structure, but just taking the sweeteners out and changing the curve.
  • Kevin Wheatley: That was my next point, because if we stick to that a match will never be possible.
  • Alex Fry: I think adding the extra color appearance stuff would no longer be minimal.
  • Kevin Wheatley: And the other two?
  • Joshua Pines: It's unanswerable, because different people want different things. So it's unsolvable with a single set of transforms for each output device.
  • Kevin Wheatley: That's a different question. As Nick said, with RGB curves you can never get a match, but with the other two it's achievable once you decide what kind of match you want.
  • Daniele Siragusano: Maybe it's hard because the requirement is not fully written. We could write e.g. "HDR and SDR should match a reference scene as well as possible."
  • Kevin Wheatley: Perhaps, but what does a reference scene mean?
  • Daniele Siragusano: You would build a real scene at The Academy, put two monitors up and ask if they both match what your eyes see.
  • Joshua Pines: That's never the goal on a real project.
  • Daniele Siragusano: Not the final goal, but it could be a good starting point to which you add an LMT.
  • Alex Fry: This is the big one which will have a lot of discussion.
  • Kevin Wheatley: The next one, making in-between transforms, will either be trivial if it's built into the rendering, or difficult if we've made separate subjective decisions for each output, and there's no possible interpolation.
  • Nick Shaw: The ideal would certainly be to find a continuum that passes through all the presets.
  • Kevin Wheatley: That would have similar complexity for all three.
  • Alex Fry: I would say it's harder for the SSTS because there are already switches for different groups of outputs built in there at the moment.
  • Kevin Wheatley: It can be done, but it's hard to assess if it's right. As Daniele has said before, I'm not sure it's parameterized under the right variables to solve it. A CAM is kind of built for that, but with the SSTS you need to figure out scalings and magic numbers for each output.
  • Daniele Siragusano: Since Jed uses the Michaelis–Menten curve it should be easy to achieve.
  • Scott Dyer: It does it right now, but whether the result is right across the various displays is a different question.
  • Daniele Siragusano: In my testing it looked pretty good.
  • Scott Dyer: I think Jed's has a more sound foundation for consistency than the SSTS.
  • Kevin Wheatley: And I would think ZCAM would be similar, but it's currently less well developed.
  • Alex Fry: It's an unknown, but we could try it with Jed's curve.
  • Daniele Siragusano: That curve expects linear input, and ZCAM is fundamentally nonlinear.
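[For reference, the Michaelis–Menten form Daniele mentions, showing how intermediate peak luminances fall out of a single parameter. This is the general form only; OpenDRT's actual tonescale differs in its details, and the mid-grey target is just an illustrative value.]

```python
def mm_tonescale(x, peak_nits, grey_in=0.18, grey_out=10.0):
    """Michaelis-Menten style compression: y = peak * x / (x + k).

    k is solved so the curve maps mid grey (grey_in) to grey_out nits
    while saturating at peak_nits, so every intermediate peak luminance
    falls out of a single parameter.
    """
    k = grey_in * (peak_nits - grey_out) / grey_out
    return peak_nits * x / (x + k)

for peak in (100.0, 600.0, 1000.0):
    print(peak, mm_tonescale(0.18, peak))  # mid grey lands on 10 nits for all
```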
  • Kevin Wheatley: So hue preservation. That's a no for minimal. Jed's is small…
  • Alex Fry: In theory Jed's approach and ZCAM can do this relatively easily, but won't agree on what hue is.
  • Nick Shaw: Jed's does that by default, because it's Jed's definition of what his transform does.
  • Scott Dyer: It's how you reach the corners of the gamut.
  • Kevin Wheatley: The minimal approach only fits one definition because it clips. Changing that would not be small.
  • Nick Shaw: I would give it a no again, because the chroma compression of the SSTS is a "fortunate accident" resulting from how it works, rather than deliberately engineered like a cube in a balloon.
  • Kevin Wheatley: If you add a gamut compression to the current SSTS, there are different methods and no one "right" one, so it's not simple.
  • Alex Fry: For Jed's it is simple by definition, and I'd say at least medium for ZCAM. The way it interacts with gamut boundaries is complex.
  • Daniele Siragusano: Typically display gamuts look horrible in those spaces.
  • Nick Shaw: And ZCAM takes no account of display gamut, so you still need another layer on top of it.
  • Daniele Siragusano: You have to shoot backwards and project, but you can only do that by baking the result into a LUT.
  • Kevin Wheatley: We're nearly out of time, and only half way through. We need people to look at the rest before next time, and maybe vote. But this first pass is only part of the solution.
  • Alex Fry: I'll set up a Google Form for people to vote.
  • Joshua Pines: On the gamut coverage, Rec.709 and sRGB cover the same gamuts, so having both is redundant. DCI P3 reference projector is a gamut that is almost never used, and white points including D60 and D65 are allowed. We need to make sure we cover those two.
  • Daniele Siragusano: The reference projector is X'Y'Z' and P3 is not mentioned in the standard. This requirement needs rephrasing.
  • Kevin Wheatley: The last thing is do we change the meeting cadence and length?
  • Scott Dyer: We don't need to fill the whole hour if we don't need it.
  • Nick Shaw: If we need to deliver something by the end of the year, leaving in between weeks blank when there are so few weeks left seems a bad idea.
  • Kevin Wheatley: So let's meet every week, but only for as long as needed.

Meeting #30, November 10th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Chris Clark
Alex Forsythe
Francesco Luigi Giardiello
Andy Maltz
Carol Payne
Joshua Pines
Matthias Scharfenberg

Meeting Notes

  • Kevin Wheatley: Alex has something to show on Color Appearance Modeling.
  • Alex Fry: After Alex Forsythe mentioned CAMs I made a Nuke node of the ZCAM model that does forward and inverse directions. It takes XYZ data scaled to absolute Nits. It doesn't include the chromatic adaptation, so the input needs to be D65 adapted. It has many controls, but the most important are the inverse data modes. It takes the XYZ data and decomposes it into the ZCAM correlates as different Nuke layers. Then you choose which ones you want the inverse to reconstruct XYZ from. You can show the components in a layer contact sheet in Nuke. I had a go at making a rendering transform that uses ZCAM. I tone map the J (brightness) component, and I added a path to white and gamut limiter applied to the M component. The curve manually matches the neutral tone scale of ACES 1.2. Oddly, although the grey ramp is limited to 0-1, color ramps end up with excursions above 1, which I assume is it trying to maintain colorfulness when I modify the brightness. I compared it on various images to Open DRT and the current ACES SDR rendering. It seems to behave reasonably well on most images, although there are some hue skews. I don't know if that comes from the model or from what I'm doing, and those excursions beyond 1. I set the highlight desaturation by eye, so it's fairly arbitrary. You do get some blocking around some highlights. It does maintain more color in shadows than the other two. Fire gets those yellow skews that we like.
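[A structural sketch of the rendering Alex describes. The model and shaping functions are passed in as hypothetical placeholders, since the real logic lives in the Nuke node graph.]

```python
def render_pixel(XYZ_abs, cam_forward, cam_inverse, tone_curve, desat, limit_M):
    """Structural sketch of the ZCAM-based rendering described above.

    cam_forward/cam_inverse are placeholders for the model (XYZ <-> JMh),
    tone_curve is the neutral tone scale matched to ACES 1.2, desat is the
    path-to-white weighting, and limit_M is the soft gamut limiter.
    """
    J, M, h = cam_forward(XYZ_abs)     # absolute XYZ -> perceptual correlates
    J_out = tone_curve(J)              # tone map the J component
    M_out = limit_M(M * desat(J_out))  # fade and limit colorfulness
    return cam_inverse(J_out, M_out, h)
```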
  • Alex Forsythe: Are the hue skews coming from the transform to display primaries, where the CAM produces values out of the display gamut?
  • Alex Fry: I have a soft limiter that's supposed to restrict the C component to sRGB limits. It's done in JCH, but with limits derived from sRGB primaries. I don't know if it's working properly, so it could come from that.
  • Nick Shaw: Where you show the excursions in the color ramps, are those downstream of that soft clip.
  • Alex Fry: Those are without that, but they may not be affected by it, as they are within the extruded sRGB gamut triangle, but outside the gamut volume, where it tapers at the top.
  • Nick Shaw: So that could cause hue skews when you clip to the gamut volume for display.
  • Alex Forsythe: It's great to have this and be able to put images through it. I see similar things in my implementation. Did you use the two stage chromatic adaptation?
  • Alex Forsythe: As Chris asked in the chat, are you using the same scaling for the absolute Nit level for all the images? Because obviously the ACES data is relative.
  • Alex Fry: For now I'm just pegging ACES 1.0 to 100 Nits.
  • Alex Forsythe: I did the same, which is completely arbitrary. It's not as complex as it looks in the node graph. Also the perceptual correlates in the contact sheet aren't expected to look reasonable, as they include a lot of negatives. But we may be able to take this model and whittle it down to something that does what we need and is simpler. And we could back test the LUTCHI dataset which was used to generate the ZCAM model.
  • Nick Shaw: Presumably as well, if you've selected a particular set of components to use in the inverse transform, there are a number of other nodes that aren't being used, so that simplifies things.
  • Alex Fry: Yes, you only ever actually use three pathways through the complex reconstruction node graph.
  • Alex Forsythe: This is all great stuff that we should keep investigating, but now I think we need to jump to nailing down the requirements, which is still outstanding to check if any of these models are doing what we need them to.
  • Scott Dyer: I've tried to compile this from previous requirements lists from earlier ACES development, and discussions that have happened here and on ACES Central. Defining requirements is hard because it's a chicken and egg situation where you kind of need to see the results of things before you can focus the requirements. We can't lock this list down until we've tried more things. But we need the list. I divided it into categories, and gave a one-sentence rationale for each requirement. Some things I think are uncontroversial, like the tone scale. But there are no specifics on things like mid-tone contrast. It just says "less than ACES 1.0". Color reproduction is the one that's most undefined. How do you define what hue preserving is? I'll post it to ACES Central, so people can comment if they think anything is wrong or missing.
  • Alex Forsythe: That's great. We do need a final list of requirements soon. There are a lot of items, and I think we need to distinguish what is a requirement vs what is a desired behavior. We also need to not let preconceived notions on the solution dictate the language. E.g. the tone scale stuff feels like it's describing a solution like we already use, or kind of like Open DRT. In a CAM the tone scale is dictated by the CAM itself, not a separate tone scale. The tone scale Alex applied was in order to limit the data to a real display with finite dynamic range. So there will be an end to end tone map, but a lot of that will be done by the CAM.
  • Scott Dyer: That's what I was saying before about requirements being affected by solutions. I'm not sure how we can make a requirements list that everybody can agree on.
  • Nick Shaw: I read the line about tone scale as simply saying it should be mathematically applied, not a 1D LUT, and the internals of the CAM are presumably still mathematical. There just isn't one expression you can point to and say "this is the tone scale function."
  • Alex Forsythe: I am really talking about the language which I feel is currently bent towards one type of solution. We need to try to abstract them to make them implementation independent.
  • Kevin Wheatley: What if we invert them so they say what it should not do? I read a lot of those as "here are some cases where it used to break so we have the opposite of those as requirements". So we don't want breaks on the tone scale, and have certain slope constraints, because those caused inversion problems.
  • Scott Dyer: That's the intent of the rationale column. I'm not attached to anything on this list. In making the list I realized that particularly for color reproduction, making requirements is difficult. Things like highlight chroma compression have been discussed many times. Maybe these conflicting requirements will show we can't have a single solution.
  • Kevin Wheatley: Maybe we should separate those which are technical problems we are trying to fix vs preference based requirements.
  • Carol Payne: I think this is great, and if it points out that some requirements are conflicting that's part of its purpose. I see all the things we've coalesced on over the months of discussions well represented here. I think it's also good to start with a longer list and hone in on things. I agree on splitting technical fixes and preferences. One that's currently missing is the need to operate in real-time, so we can't do anything too crazy.
  • Alex Forsythe: This is a great first step, and we need to iterate on it quickly to hone in on a final list of requirements.
  • Scott Dyer: There is a lot still missing. Particularly the HDR/SDR compatibility that we've discussed.
  • Chris Clark: I think we need a column called "type" to separate stuff where people say things like "it should look pleasing" and other subjective stuff. And maybe there are some preference based things we can work out a way to make into something objectively measurable.
  • Nick Shaw: Maybe we can give things a weight, so we know what we might be prepared to let go if we get all the others.
  • Chris Clark: I thought the point about hitting the corners of a display gamut was missing, but it is there.
  • Kevin Wheatley: I put that down intentionally, as it was the one that most obviously highlighted the discrepancy between two camps that want different things, and shows we might need variations.
  • Alex Fry: I was curious about the one that says we maintain two conceptual RRT+ODT steps. Is that true of the SSTS? And do we need to keep it?
  • Scott Dyer: It does conceptually. I wanted to make sure we didn't just make separate paths for different displays, so we have one thing we target, and then remap that to different displays.
  • Kevin Wheatley: It's like an interface in a programmatic sense, a lingua franca that you interchange between the various stages, rather than something explicit.
  • Nick Shaw: Is it related to the lerp in the SSTS that goes between SDR and OCES? Maybe you could phrase it differently so it just says it needs to be able to handle anything up to and including some idealized display, and anything in between.
  • Alex Fry: In the SSTS you can target OCES, but other outputs don't pass through it like they did before.
  • Kevin Wheatley: With a CAM you would have some parameterized space that it targeted, that might not be OCES, but it's what you go from to other viewing conditions.
  • Scott Dyer: I think it's a useful conceptual step to maintain the integrity of what ACES is as a color management framework. But we need to finesse the language.
  • Kevin Wheatley: If you have a system like the ZCAM one Alex showed, you do have multiple steps that you have to keep separate, at least architecturally, where different compressions happen at different stages. But it may get optimized out in implementations for performance reasons.
  • Alex Fry: Was real-time a requirement for ACES 1.0? My memory is that originally it was always LUTs and only more recently it could realistically be done mathematically in real time.
  • Joshua Pines: Yes, it was initially assumed that implementations would bake it down to a LUT. We didn't have GPUs that could do the maths in real time.
  • Kevin Wheatley: So that is preferable but not a hard requirement.
  • Nick Shaw: And not just implementable as a shader, but one which is small enough that the GPU has enough resources left to do all the other grading operations it needs to.
  • Kevin Wheatley: Which is vague. What is simple enough? 4K or 8K at 60fps in real time? Different implementers will choose different cut points where they have to go to a LUT.
  • Joshua Pines: It's comparable to the CLF split between the preview tier and finishing tier. But real-time is a requirement that's hard to quantify. I like the objective/subjective split.
  • Kevin Wheatley: Objective requirements should be possible to formulate as a pass/fail test.
  • Joshua Pines: Even things like hitting the corners of the gamut that might seem objective, could in fact be subjective.
  • Alex Fry: Testing can be objective, but deciding if you want it is subjective.
  • Joshua Pines: One minor comment: 48 nit cinema is mentioned, which seems synonymous with a DCI P3 reference projector. We haven't mastered anything with a DCI white point in a long time. Mostly D60 or D65. You can't produce something that will only work with DCI white.
  • Kevin Wheatley: I had a question on my list as to whether we need to allow for white points other than device native.
  • Joshua Pines: The DCI spec allows for a triangle of creative whites at 48 nits that spans D55 and D65. So how do we accommodate that? Something to consider.

Meeting #29, October 27th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Rémi Achard
Chris Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Michael De Caria
Alex Forsythe
Jean-Michel Gilbert
Thomas Mansencal
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Shane Smith
Jed Smith
Andy 

Meeting Notes

  • Alex Fry: 3 things on the agenda - Scott's work on modifying the current SSTS; Jed's progress with Open DRT; and Alex has some thoughts on Color Appearance Models. Jed first. Why did you decide to fork off with the JzAzBz experiment?
  • Jed Smith: It's all experiments, and this is just another one. It's not another track. I have a couple of other ideas too. Open DRT is not the only one. I wanted to look at other LMS spaces, and JzAzBz has an LMS space in it. I also tried a different norm (maxRGB for JzAzBz, where Open DRT uses a weighted Euclidean distance).
  • Nick Shaw: You said before the choice of norm is the biggest factor affecting the appearance, is that right?
  • Jed Smith: It’s one major factor. It changes how hues are distorted. The rendering space is the other big factor.
  • Alex Fry: So the two approaches use basically the same architecture.
  • Jed Smith: The main difference is the norm. Also Open DRT uses an LMS space from Richard Kirk's paper. The other uses the LMS space from JzAzBz. Everything else is the same except JzAzBz doesn't have the choice of creative white point. My other experiments are related to the lack of saturation at the low end that happens with chromaticity preserving renderings. Per-channel renderings create a vibrance increase at the bottom end. I was looking at a rendering with a more finished appearance, in contrast to Open DRT.
  • Alex Fry: Is this something that couldn't be done with OpenDRT?
  • Jed Smith: It could be done with an LMT.
[Jed then showed his Nuke script demonstrating the structure of Open DRT and the JzAzBz DRT - see the recording at 6 minutes]
  • Alex Fry: Is the norm the 'tone' in isolation?
  • Jed Smith: Essentially, but Troy would disagree!
  • Thomas Mansencal: How did you derive the weights you are using?
  • Jed Smith: Just experimentally. My own preferences.
  • Thomas Mansencal: Would you say you prefer the result of maxRGB to the Euclidean norm for compositing?
  • Jed Smith: For me live action looks better under Euclidean, but animated is different and maxRGB might be preferable.
  • Thomas Mansencal: A maxRGB and an LMT might make sense, because if you desaturate in the rendering you can't get it back with an LMT. And you take the LMT out for animation. Rather than having both norms and choosing per project.
  • Alex Fry: I'm trying to wrap my head around how the norms relate to perceived brightness. MaxRGB seems cleanest.
  • Nick Shaw: With maxRGB, because it flips between using all of one channel or another, I believe if you have e.g. a noisy blue channel, when blue is maximum its noise can contaminate the other channels, because they are all scaled by a factor derived from the norm. I think Doug mentioned rejecting maxRGB for that reason in his paper with Gary Demos. Have you experienced issues with this?
  • Jed Smith: I haven't tested specifically for noise, but it seems like a problem that can be solved elsewhere by doing processing in the shadow grain to reduce it.
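[A tiny numpy demonstration of the contamination Nick describes, using made-up values.]

```python
import numpy as np

def render(rgb):
    n = rgb.max()                       # maxRGB norm
    return rgb * ((n / (n + 1.0)) / n)  # ratio-preserving tone map

clean = np.array([0.10, 0.08, 0.50])        # blue-dominant shadow pixel
noisy = clean + np.array([0.0, 0.0, 0.05])  # noise on the blue channel only

print(render(clean))
print(render(noisy))
# Because blue is the max channel, its noise perturbs the shared scale
# factor, so the red and green outputs wobble too, even though they
# themselves had no noise.
```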
  • Scott Dyer: We used a max in one of the pre 1.0 packages. We tried a ratio preserving approach, and in some shots noise exploded. I've been working on a couple of things. Firstly we need to agree on requirements. I'll make an ACES Central thread where people can propose requirements, and when we agree on each one I'll put it in a Google Doc. We had a set of requirements before 1.0. Things changed, and I don't know if we ended up meeting them all. We need requirements to define tests. When we have tests we can compare how models differ and find which best meets the requirements. I've also been working on a "minimum change" rendering – taking out the sweeteners and lowering contrast. Kevin gave me data for his average tone curve. I hope to post a DCTL so people can try it. It will be a baseline to compare against.
  • Alex Forsythe: I've been thinking about the relevance of CAMs for Output Transforms. To me tone mapping operators are a general attempt to compensate for scene to display transforms, but they are ad hoc. CAMs convert colorimetry associated with a set of viewing conditions to perceptual correlates. Using an inverse, you can calculate corresponding colorimetry for a different viewing environment. Up to now they've been limited. In the last 5-10 years they have improved. With tone mapping we build a model and a set of requirements, and ask if the model fits the requirements. CAMs are fit to data from viewers in different environments. So the model imposes the requirements. I want to separate creativity and objective science. I have a list of some CAMs. It really began with CIECAM02. It was limited, but Windows color management used it. Some CAMs have local operators to compensate for spatial issues. I'm particularly interested in ZCAM, which is based on JzAzBz. It's simple and is specifically an image appearance model as well as color appearance. CAMs can be complex, and most require that the input data is calibrated (absolute cd/m^2). The colorimetry you get out of the model is not limited to a display, so that needs to be addressed. I want the group to consider whether there is merit in looking at these objective models. Because they are fit to data sets they already have the various effects we try to model built in to them.
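[A structural sketch of the corresponding-colors idea Alex Forsythe outlines; cam_forward and cam_inverse are hypothetical placeholders for any of the models listed.]

```python
def corresponding_XYZ(XYZ_scene, cam_forward, cam_inverse, scene_vc, display_vc):
    """Corresponding colorimetry via any CAM (CIECAM02, CAM16, ZCAM, ...).

    scene_vc and display_vc carry the viewing-condition inputs the models
    are fitted against (adapting luminance, background, surround, white
    point); cam_forward/cam_inverse are placeholders for the model itself.
    """
    correlates = cam_forward(XYZ_scene, scene_vc)  # e.g. lightness, colorfulness, hue
    return cam_inverse(correlates, display_vc)     # same percept, display conditions

# Note the result is unbounded colorimetry: as Alex says, it still has to be
# mapped into the display's actual gamut and dynamic range afterwards.
```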
  • Alex Fry: I've been thinking about the absolute luminance factor. I've experimented in the past with embedding EV metadata so you know what the absolute luminance is. 0-10 data from a bright day and 0-10 data from a dimly lit room is not the same, and probably should be rendered differently.
  • Daniele Siragusano: Then you get an adaptive rendering which is not intuitive to grade against. But it forces you to consider what the reference scene is.
  • Nick Shaw: The question is how great is the variation caused by differences in absolute luminance compared to what a colorist will do creatively? We always have a colorist in the mix, and that's what we do currently – base the rendering on an average scene, and let the colorist make it look right.
  • Alex Fry: I was experimenting recently with taking scene colorimetry and dumping it directly to a display. At the same level it looks right, but if the imagery is from outdoors it looks completely wrong.
  • Thomas Mansencal: I've been looking at CAMs myself, and there has been good work recently. But the ACES 2.0 deadline is looming, and that feels like a more involved long term research project.
  • Alex Forsythe: True, but we don't want to go lowest common denominator just because of a deadline. In my experiments the results do not look horrible! My tests have been simplistic, particularly in terms of conversion to display code values. There are also some papers where people have been looking at exactly these issues related to tone mapping operators.
  • Thomas Mansencal: We do need to consider what happens in conditions outside the original data. Kim 2009 breaks in certain conditions. With a very dark surround and very bright display, the models generated negative brightness values. So you have to fix the model, which seems like a lot of work.
  • Jean-Michel Gilbert: You get a similar problem if you put an SDR render and a 1000 nit HDR render side by side on the same monitor, and try to match them so the HDR looks similar, but with a bit more range.
  • Alex Fry: We're not trying to develop a new CAM. It seems worth experimenting with the existing ones.
  • Alex Forsythe: I'm also trying to strip the models down to the core intention, as some are quite complex. Daniele has experience with this.
  • Daniele Siragusano: The data comes from simple images, and we found when applied to complex images the models would overcompensate. There's so much more going on that the models don't deal with – color constancy, lightness constancy etc. But it forces you to ask the right questions. But no off the shelf model worked for us. We had to take something and modify it and simplify it so it worked on real images. But that loses some objectivity, because you diverge from the original data.
  • Thomas Mansencal: TCAM is based on Kim 2009, yes? But how long did it take to make your version?
  • Daniele Siragusano: There was a lot involved.
  • Chris Clark: Back when I tried CAMs, we found they were counterintuitive for colorists to grade through. But they are a good reference to compare other things to. That was a while back though, and we were using CIECAM02.
  • Thomas Mansencal: CAM16 is a little better.

Meeting #28, October 13th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Rémi Achard
Daniel Brylka
Michael De Caria
Chris Clark
Sean Cooper
Alex Forsythe
Jean-Michel Gilbert
Zach Lewis
Francesco Luigi
Thomas Mansencal
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Shane Smith
Jed Smith
Mike Whipple

Meeting Notes

  • Alex Fry: On today's agenda is Kevin's proposal for resolving the conflict between swappable renderings and everything in the LMT; my thoughts on a Minimum Viable Product as a backstop; Christophe's excellent document retreading the thought process so far; and Jean-Michel has a report on the use of OpenDRT in a game.
  • Kevin Wheatley: My post is just an example of the sort of things we could build a collection of. I proposed a test case: we would like an output transform capable of filling every possible value of an 8-bit cube, for a Rec.709/sRGB display, and probably also P3. If we collect a series of these, we get objective tests we can test any proposal against. If we end up with conflicts between requirements (I believe we do) we have a concrete reason to discuss whether we allow variation in the rendering transform. There was debate as to whether 8 bits was enough, but I picked it as a base level for the proposal.
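[A sketch of how this proposal could be made into an automated test. forward and inverse are hypothetical callables mapping scene RGB to display RGB in 0-1 and back; the cube is subsampled to keep the sketch tractable, where a real test would cover all 16.7M codes.]

```python
import itertools
import numpy as np

def unreachable_8bit_codes(forward, inverse, step=8, tol=0.5 / 255.0):
    """Return 8-bit display codes the transform cannot reproduce.

    Each code value is pushed through a (hypothetical) inverse and back
    through the forward transform; anything off by more than half a code
    value is flagged as unreachable.
    """
    failures = []
    for code in itertools.product(range(0, 256, step), repeat=3):
        target = np.array(code, dtype=float) / 255.0
        if np.max(np.abs(forward(inverse(target)) - target)) > tol:
            failures.append(code)
    return failures
```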
  • Jean-Michel Gilbert: I propose adding PQ 600 and 1000 nits to that list.
  • Joshua Pines: Those are still limited to P3 in all today's deliverables, even if it's in a Rec.2020 container. So you just mean PQ dynamic range?
  • Jean-Michel Gilbert: I assumed when Kevin said P3, he meant 2.2 gamma like Apple Display P3.
  • Kevin Wheatley: I just wanted to be minimal and not controversial.
  • Alex Fry: But including gaming input is important. Do game HDR implementations limit to P3?
  • Jean-Michel Gilbert: Right now P3 is a goal. Currently we render sRGB and then stretch to P3 based on how bright or how chromatic it is.
  • Kevin Wheatley: J suggests that P3 limited within Rec.2020 is changing, so we can't rely on that.
  • Scott Dyer: If we do want to reach all corners of the display cube, this is a great way to objectify that and test for it. Also, people have been posting pictorial comparisons of preferences on ACES Central. It would be great if we could find an objective way of describing what they prefer, so we can test anything we come up with against those criteria.
  • Daniele Siragusano: Are we defining the input gamut too. It's not helpful if you can reach e.g. the yellow corner, but only by having an input of 3 million!
  • Kevin Wheatley: I didn't want to place too many constraints, but that is a good question.
  • Daniele Siragusano: You could create a lookup of random output values for every input in the cube, and the inverse of that would pass your test, but would be nonsensical.
  • Kevin Wheatley: A fail would be a rejection, but passing the test alone doesn't mean acceptance.
  • Jean-Michel Gilbert: I like what Christophe did, where he desaturated the results of various DRTs. That lets you see what's happening with luminance in the tone curve. I like Chris's images as a test, because they have saturated ACEScg values and sRGB primaries, so are good for seeing what a transform does.
  • Carol Payne: Your test seems logical as a baseline. It's only one requirement though.
  • Kevin Wheatley: Of course. But I didn't want them all to come from me. People can propose others on ACES Central before the next meeting.
  • Alex Fry: On to my proposal, which is a minimum fallback of an improved version of the current OTs. Brand new renderings are great, and we should do that. But we should also have a version of the SSTS that corrects the things we know are problematic, so we have a baseline to compare others against. The best possible SSTS, without the sweeteners or bugs:
  • Disabling the “RRT Sweeteners” (Improving invertibility)
  • Backing off the contrast (Parameter adjustment)
  • Document the surround factor (This would be new functionality for the HDR transforms)
  • Move the SDR transforms to SSTS (This would require some level of subjective assessment, as it currently produces a slightly different look)
  • Jean-Michel Gilbert: On Divinity II we used the SSTS for SDR, and nobody noticed.
  • Alex Fry: If you toggle between them, they are different, but neither is obviously better or worse. Just different.
  • Thomas Mansencal: That would make implementation simpler for 3rd parties such as OCIO, who already have the SSTS code path.
  • Nick Shaw: Jean-Michel, did you use it like the current SDR OTs, with a 48 nit peak, then stretching to fill, or did you use a peak white of 100 nits or so?
  • Jean-Michel Gilbert: We used 48 nits stretched to display relative.
  • Carol Payne: That seems a good base level to compare new renderings with.
  • Alex Fry: When we say lower contrast, what do we mean? Mid-tone contrast? The source range mapped to the destination? There is no single "lower the contrast".
  • Scott Dyer: I think I can make a proposed version for an MVP relatively easily. I can take suggestions for what lower contrast would be.
  • Kevin Wheatley: I can provide an average of the curves I looked at in LUTs.
  • Jean-Michel Gilbert: I've found it interesting looking at Nick's SSTS with sweetener bypass; the result is similar to disabling the perceptual dechroma in OpenDRT, particularly the red modifier.
  • Nick Shaw: We should probably make up an LMT of the sweeteners that people could try with the MVP.
  • Alex Fry: Next is Christophe's document. Everybody should read that. We won't discuss it today as he's not here. Anybody have anything else?
  • Jean-Michel Gilbert: I have some comments from various people at Larian with their subjective impressions of various images under the different iterations of the OpenDRT. They only reviewed in the thread on ACES Central, so not a reference viewing environment.
  • [Hopefully Jean-Michel's report can be posted publicly somewhere at some point.]
  • Thomas Mansencal: It's good to have a perspective from video game people, not just VFX and colorists.
  • Jean-Michel Gilbert: I think the best would be a blend of the JzAzBz DRT and OpenDRT v90b2.
  • Jed Smith: The big difference between those two is the norm. All chromaticity-preserving spaces look pretty similar. But the JzAzBz DRT uses max(RGB) and OpenDRT uses a weighted Euclidean distance norm. Choice of norm is the biggest decision.
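  • [For illustration, the two norm choices Jed contrasts; the weights below are placeholders, not OpenDRT's actual constants.]

```python
import numpy as np

def norm_max_rgb(rgb):
    """max(R, G, B) norm, as in the JzAzBz DRT."""
    return np.max(rgb, axis=-1)

def norm_weighted_euclidean(rgb, weights=(0.24, 0.67, 0.09)):
    """Weighted Euclidean distance norm, in the spirit of OpenDRT
    (weights here are illustrative only)."""
    w = np.asarray(weights)
    return np.sqrt(np.sum(w * np.asarray(rgb) ** 2, axis=-1))

# For a saturated red, max(RGB) sees only the red channel, while the
# weighted norm returns a much lower value - so a tonescale driven by the
# norm treats the same pixel very differently depending on the choice.
pixel = np.array([1.0, 0.1, 0.05])
print(norm_max_rgb(pixel), norm_weighted_euclidean(pixel))
```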
  • Alex Fry: We should also talk about the gamma 2.2 vs piecewise sRGB discussion. Jack Holm posted on the original intent of the standard, that it should be a piecewise EOTF. But some read the spec a different way, and some displays certainly use 2.2 for sRGB. I think we need an explicit 2.2 in a refresh of the SSTS as well as the current piecewise sRGB.
  • Daniele Siragusano: But in his later post Jack contradicts himself, saying the straight line part addresses the difference between the black levels of 0.1 nits in BT.1886 and 0.2 nits in sRGB, but it won't compensate if the encoding and decoding are exact inverses – they cancel out.
  • Alex Forsythe: That goes back to the idea that an unmodified Rec.709 image should look reasonable on an sRGB display. But if you have display colorimetry, and you go through the inverse of the EOTF and then forward through it in a display, you get back to what you started with, and the linear part prevents a zero slope at zero, which would otherwise create an infinite slope on inversion.
  • Jed Smith: To me, reading the spec for the first time recently it was clear that the intent was a piecewise encoding for a gamma 2.2 display.
  • Daniele Siragusano: That was the right thing to do back when we didn't have DRTs to handle flare compensation. Nowadays we want 1 to 1 in the display chain.
  • Kevin Wheatley: That's what we have assumed: the core rendering handles flare, and the encoding is just an inverse of the display. So we just need to be explicit in any output transform about what we expect the display to do. Then we have 2.2 and piecewise sRGB. We could even create a test image for people to verify what their display is doing.
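  • [For comparison, the two display assumptions under discussion: the standard piecewise sRGB EOTF and a pure 2.2 power function.]

```python
import numpy as np

def srgb_piecewise_eotf(v):
    """IEC 61966-2-1 piecewise sRGB EOTF (encoded signal -> display linear)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def gamma_22_eotf(v):
    """Pure gamma 2.2 EOTF, as many actual sRGB displays implement."""
    return np.asarray(v, dtype=float) ** 2.2

# When encode and decode use mismatched curves, the residual is largest in
# the shadows - the region the straight-line segment was meant to address.
v = np.linspace(0.0, 1.0, 11)
print(srgb_piecewise_eotf(v) - gamma_22_eotf(v))
```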
  • Jed Smith: Correct interpretation of the document matters less than what people actually do.
  • Joshua Pines: This goes back to people mistakenly thinking the piecewise Rec.709 curve was an EOTF, when it isn't.
  • Alex Fry: So two options, sRGB as it currently is, and a new 2.2 gamma.
  • Daniele Siragusano: But that adds confusion between 2.2 and 2.4. The number 2.2 is dangerous.
  • Joshua Pines: Correctly or incorrectly people now call pure 2.4 gamma BT.1886.
  • Kevin Wheatley: So to finish up, if anybody has other ideas for how to specify requirements, please post them on ACES Central.

Meeting #27, September 29th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Chris Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Alex Forsythe
Francesco Giardiello
Zach Lewis
Michael Parsons
Carol Payne
James Pickett
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Jed Smith

Meeting Notes

  • Alex Fry: We reported to the TAC last week. Some things are still unresolved. Before we get into that, let's look at what Scott has been doing.
  • Scott Dyer: I thought the consensus from the TAC was that our task was to deliver a new improved rendering. Whether we allow other renderings too is not relevant. Any candidate must meet our requirements. Does it improve on the issues with 1.2? If not, what must be changed so it does? Is that possible within Jed's current framework?

  • How do Jed’s tonescale presets behave/match across different display classes?
  • What is the default rendering intent? With the increased saturation in highlights I suspect that SDR and HDR will actually “feel” more of a match than in ACES 1.2. How can we evaluate this when most people only have access to one device at any given time?
  • Jed’s surround compensation is a param in his tone curve. What adjustments is this striving for? How were these presets derived (my suspicion is they were tuned by him - can we increase the sample set or do this more objectively?)? Does surround compensation belong in the rendering curve or as part of a separate operator in the display-linear space?

  • Scott Dyer: He's explored several perceptual effects, but his rendering remains simple. Some modules may or may not be needed, depending on their impact. I've investigated what the modules do. Looking at it in isolation with my test image suite, it's a reasonable starting point. My biggest problem is contrast in faces. It preserves more color, and it's very obvious that the contrast is lower than 1.2. Should we go half way between? In a plot of input (a ramp of ±10 stops, equally spaced in stops) vs Rec.709 output code value you can see the difference in mid-slope contrast.
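  • [A sketch of the kind of plot Scott describes, using a toy stand-in s-curve rather than the actual ACES 1.2 or OpenDRT renderings.]

```python
import numpy as np
import matplotlib.pyplot as plt

def toy_rendering(x, contrast):
    """Stand-in s-curve: power contrast pivoted at 18% grey into a simple
    Reinhard-style shoulder. Not any actual ACES rendering."""
    y = 0.18 * (x / 0.18) ** contrast
    return y / (y + 1.0)

stops = np.linspace(-10.0, 10.0, 512)   # ramp equally spaced in stops
scene = 0.18 * 2.0 ** stops             # scene-linear values around grey

for label, c in [("lower contrast", 1.0), ("higher contrast", 1.25)]:
    code_value = toy_rendering(scene, c) ** (1.0 / 2.4)  # rough display encoding
    plt.plot(stops, code_value, label=label)

plt.xlabel("scene exposure (stops from 18% grey)")
plt.ylabel("approx. Rec.709 code value")
plt.legend()
plt.show()
```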
  • Daniele Siragusano: It's important to know what log curve is on the x-axis. There is a kink on the 1.2 curve in that plot which suggests a piecewise curve like ACEScct.
  • Scott Dyer: The ramp is pure log. That curve is part of the quirks of the 1.0 RRT+ODT SDR rendering. On the plot the highlight roll-off is different from the 1.2 as well, but very similar to the K1S1. Looking at test images of faces, the 1.2 and OpenDRT look very different. On its own it's not objectionable, but compared to other renderings it's more colorful, and tonality is lacking. I don't think we need to go this low contrast, but other people can give their opinions. Maybe the colorist should be the one to add contrast.
  • Daniele Siragusano: You need to be careful with film originated images. The ADX unbuild does not remove film's shoulder. So the result is not really linear light, and you shouldn't judge contrast. It may be that you have ACES data with reduced highlight contrast, and a higher contrast rendering compensates for that.
  • Scott Dyer: Not all my 550 images are film. I see it on all the faces. It does seem that OpenDRT matches the "feel" of HDR and SDR better than 1.2. But I have only been looking on my laptop screen.
  • Alex Fry: Worth noting that how you apply the tonescale makes a massive difference. Jed's curve looks more similar to the SSTS if applied per channel. I have a comparison here (which needs to be viewed on an HDR display).
  • Scott Dyer: There are many ways that Jed's work improves on the issues in 1.2. My remaining questions are tonescale (very subjective) and surround compensation, which he has built in to the tone curve.
  • Jed Smith: I agree with your comments about face rendering. But I think the place to alter that is the LMT, and OpenDRT is always meant to have a default LMT.
  • Scott Dyer: We need to get it out to testers. Can they get it where they need to, and if everybody always does a similar thing, maybe that should go in a default LMT. But is the tone scale the right place to do surround compensation? Or should it be done in a separate step in display linear? I assume Jed just used his judgement to come up with reasonable surround compensation presets. It would be nice to put those options in front of a range of people with forced pair comparisons.
  • Jed Smith: Yes, I just looked at what other renderings did, and looked at results on my 800 nit HDR TV. It definitely needs more verification on an HDR reference monitor.
  • Scott Dyer: We did run experiments and, at least for SDR, a small gamma adjustment was enough to do that. We don't know about HDR. We need to find something that works for all surrounds, not just preset dark and dim. The last thing is rendering intent. I think OpenDRT matches HDR/SDR better. That may be a good base intent. Emulating SDR in HDR is easier, as we can just encode SDR in an HDR container. We need to explore if OpenDRT can match the two rendering intents that Josh has described. So that and default skin tone rendering are to be explored, but other than that I think the images do look really good. They don't have artifacts I have seen with other ratio-preserving approaches. So we need to see if this group thinks it can be a candidate, and what we might want to change.
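  • [A minimal sketch of the separate-step option Scott raises: surround compensation as a small power function in display-linear space. The 0.9811 exponent is the dim-surround gamma from the ACES 1.x transforms; applying it directly to normalized RGB here is a simplification for illustration.]

```python
import numpy as np

def dim_surround_compensation(display_linear, gamma=0.9811):
    """Apply a small gamma to normalized display-linear values.
    ACES 1.x applies a similar exponent to luminance with chroma rescaled;
    applying it per channel here is a simplification."""
    x = np.clip(np.asarray(display_linear, dtype=float), 0.0, None)
    return x ** gamma
```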
  • Alex Fry: Jed has a Youtube HDR clip comparing the SDR and HDR results of ACES 1.2 and (an earlier version of) the OpenDRT. There's been a thread on ACES Central of LMT experiments to try creating various looks, such as the 1.2 rendering. Jed, do you want to talk about your tools?
  • Jed Smith: I have tools to do things like control saturation at different points in a volume. Bending hue across a luminance range.
  • Alex Fry: Most LMTs are inverse LUT bakes, so it would be nice to have examples for making LMTs that weren't just to get round problems.
  • Jed Smith: LMT generating tools would be cool.
  • Alex Fry: Netflix had some questions they put together after the TAC, regarding swappable renderings.

  • Are we talking about only ACES supported rendering transforms (officially released), or user/custom/anyone-can-add-their-own rendering transforms? Or both, but one before the other?
  • What are the user stories for multiple rendering transforms and how are these defined?
  • Live Action / Film / “Default” Look
  • Animation/CG/Games
  • Something else?
  • Where does an LMT fail where a rendering transform succeeds, specifically?
  • Are we committed to AMF to track these things, and if so, within its current published spec?
  • Are we having this discussion because of true technical limitations, or because we don’t want to decide between two good things?
  • Should we do a round of user testing soon to help narrow in on some of the preference-based things we are discussing?

  • Carol Payne: We felt after the TAC that everybody was talking about slightly different things, based on their interpretations and preferences. Instead of a pro/con list, we came up with questions we felt haven't been fully answered "in text" in one place. What we mean by swappable rendering transforms, and whether we need them for technical reasons or to avoid making a choice.
  • Alex Fry: My interpretation was we would ship a single reference rendering transform, but leave it open for people to create their own. Like with IDTs, there are official ACES IDTs, but if you have a camera with no IDT, or you disagree with the methodology for an existing one, you can make your own IDT.
  • Carol Payne: Is that encouraged? If an IDT exists.
  • Daniele Siragusano: AMF has a slot for custom IDTs.
  • Jed Smith: Scott had a post with an example of making IDTs.
  • Carol Payne: I'm fine with making IDTs where they don't exist. But if there is a published IDT, shouldn't that be used?
  • Daniele Siragusano: If you have a special light source it's better to make a custom IDT for that, because the published one won't be optimized for it.
  • Alex Fry: So for DRTs I imagine we ship a default, and people can make their own, but those wouldn't be added to the ACES repo.
  • Scott Dyer: I think we have to make a framework and supply blocks for each part that allow people to do a pure ACES show. But if people want to swap components of that, we can't stop them.
  • Carol Payne: But if you swap things, how do you track that? Everybody will say AMF. But that would mean changing AMF. Currently the Output Transform slot only supports ACES transforms.
  • Daniele Siragusano: You can already reference external CLF files.
  • Carol Payne: That would mean swapping the whole Output Transform. There is no slot to swap out just the rendering.
  • Daniele Siragusano: I think we already agree we need to restructure so there is a clear cut without tone mapping in the ODT.
  • Carol Payne: If it's split into a rendering transform and an output transform, what goes into AMF will fit just fine. That split is how OCIOv2 does it, and others too.
  • Daniele Siragusano: Then it's one block you can swap. And it makes it easier to update in the future. Nothing is there forever.
  • Carol Payne: So do we agree we build a system now with one rendering, that in the future would support swapping that block. But we don't have the bandwidth to do that now?
  • Daniele Siragusano: In my meta-framework there is a vanilla transform, and the rules need to be developed about what constitutes a compliant alternative rendering. I acknowledge that's a lot of work. But it informs what we do now.
  • Kevin Wheatley: The question of "are we doing this for technical reasons or to avoid deciding" is key for me. I feel it is technical, because we have conflicting requirements from the original complaints. The list of user stories is, I think, where the conflicts come from. If we prove we can't solve both requirements, that proves we need a mechanism for swapping DRTs, even if that doesn't come until 2.1 or whenever. My example from my colorists is they want to hit particular saturated colors for pack shots or whatever, and everyone else wants a path to white. Those two are in conflict.
  • Daniele Siragusano: ACES history proves you can't solve that with one rendering. It took a lot of engineering, and still came to the conclusion we need another round to try again. Maybe it's not solvable.
  • Carol Payne: Maybe then we consider more than one official ACES rendering. But which is the default? Can you use both on one project?
  • Daniele Siragusano: First we have to acknowledge we can't make one to satisfy everything. Then we can agree on a default.
  • Carol Payne: I think we can make one that satisfies 80%, and that's pretty good.
  • Kevin Wheatley: If we acknowledge that's a requirement, we can progress with an architecture which has that requirement, and experiment with OpenDRT. And then maybe the default can be the one the majority prefer on the majority of images.
  • Alex Fry: Satisfying most of the people most of the time is easier if we have an escape valve for people it doesn't satisfy. Or we have what we have today, with weird hacks, or people completely working around it.
  • Daniele Siragusano: If only one party disagrees, what are Netflix's biggest concerns?
  • Carol Payne: Personally I think you're making it seem easier than it is. I like your pros and cons list, and one con is that it will be a lot of work. I don't see why, if we design it with a good cut point (which everybody seems to agree on), we can't proceed just with the default for now.
  • Daniele Siragusano: But we need an agreed standard for those who want to escape. This is how you build architectures, and then you implement it. I don't want to work on something where I feel the architecture is not right.
  • Alex Fry: The current version certainly seems to have the wrong cut point.
  • Chris Clark: Our main concern is we're relying on AMF being required, rather than being an added feature for sending round a look. Ed Giorgianni mentions that tag-based color management ultimately always fails. Tags are frequently wrong. It's called a reference rendering because you don't require metadata to know how to view an image.
  • Daniele Siragusano: It's only mandatory if you want to escape. Even today if you have an ACES 2065-1 image, without an AMF, you don't know how to view it. Was it shot 2, 5, or 7 years ago? It's a misconception that you can do this. It wasn't the case with film negatives either, not without a bit of forensics. Of course you need a default, so people can view an image if they know nothing, but I don't see it working if you force everyone to use the same rendering.
  • Chris Clark: We risk adding complexity for the majority, just to add an escape valve for a few.
  • Kevin Wheatley: There is already complexity. I have "ACES" shows today with 10 different LUTs, none of which are RRT based, and with no LMT tracking. So this would provide a way of tracking what is already happening.
  • Chris Clark: I worry about relying on metadata. AMF isn't even fully rolled out yet.
  • Jed Smith: What Kevin said matches my experience.
  • Daniele Siragusano: Mine too. For 90% of our users it's a nightmare.
  • Carol Payne: Then shouldn't everybody be jumping on AMF?
  • Kevin Wheatley: We're just the Output Transform group. We are saying it should be pluggable, and the ACES framework needs a way of tracking it, but it's not for us to say what that means. That should go to the Architecture TAC. We can't solve everybody's problems.
  • Daniele Siragusano: We can ship ACES 2 with the default one, but knowing the swappable thing will come later. We need to agree on that path.
  • Scott Dyer: We agree we need to make a vanilla system people can use, and moving the cut point intelligently will enable future changes. If we make a rendering for 2.0 that enables even 50% of those who have abandoned ACES or made crazy work-arounds to reconsider it, that would be great.
  • Daniele Siragusano: Maybe that would be the case for one show, but maybe not the next one.
  • Scott Dyer: If we have something which doesn't have the current known issues, why would you reinvent your workflow every time?
  • Daniele Siragusano: Not reinvent every time, but select from a few proven pipelines, to fit the needs of a particular show. Why would a standards body enforce something so creative as a rendering transform?
  • Scott Dyer: They don't have to do anything, but if they use ACES, they should use the ACES rendering.
  • Jed Smith: Doesn't a DP sit with a colorist at the start of a show and develop a look?
  • Scott Dyer: So we need to provide them with something that lets them create looks without restricting them.
  • Daniele Siragusano: I hear people say "I love this rendering and use it for 30% of my shows, but for another 40% I use this one," etc. Why don't we build a standardized system to deliver what the industry wants, rather than telling them what they should do.
  • Alex Forsythe: Let's start by building something that's as flexible as possible, and then figure out if it works for our purposes.
  • Nick Shaw: We need to get a rendering into the hands of colorists, and ask if they can get things to where they want them.
  • Kevin Wheatley: We do, but first we need to consider renderings for multiple output devices.
  • Alex Fry: Can we move the cut point, keep building prototypes, and keep this in the back of our heads?
  • Chris Brejon: Daniele said something in an earlier TAC which resonated with me. "Would you restrict all productions to use the same camera?"
  • Carol Payne: If our Output Transform is limiting, we should fix what makes it limiting, so it works for 80% of people. If 20% need something different, let them do that.
  • Alex Fry: If we can build something that does that, it would be great. More people should test Jed's OpenDRT, and test looks under it.

Meeting #26, September 15th, 1pm PT

Attendees

Alex Fry
Scott Dyer
Nick Shaw

Chris Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Alex Forsythe
Francesco Luigi Giardiello
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Jed Smith
Mike Whipple

Meeting Notes

  • Alex Fry: We talked to Rod Bogart last week, and he wanted us to go back through our assumptions and requirements and confirm we all agree on them before the TAC meeting. And anything we need TAC input on.
  • Scott Dyer: Wanted to confirm invertibility doesn't mean you will get everything back after inversion.
  • Nick Shaw: Usually it's more important to be able to invert display referred values, so the forward transform gets back to those. There you do get everything back (within reason) because you're starting from a limited subset.
  • Daniele Siragusano: Should we add constraints to the intermediate state? Or are we just looking at input and output? Perhaps limiting the slope on the inversion, so the intermediate data is still meaningful.
  • Alex Fry: Sometimes you need extreme values to get the right output.
  • Daniele Siragusano: Is 1 to 1 more important, or sane values you can work with?
  • Sean Cooper: Inversion is a gamut expansion in 3D.
  • Alex Fry: Two sets of transforms gets complex. I prefer 1 to 1 (for graphics) and maybe you massage the extreme values if needed.
  • Thomas Mansencal: We can provide guidelines on how to get a workable image if we choose the 1 to 1.
  • Alex Fry: Tonescale with less contrast, and finite slopes at ends.
  • Sean Cooper: That connects to invertibility.
  • Alex Fry: Hue preservation and highlight desaturation is the most controversial. What it means is debatable, but we want it.
  • Sean Cooper: Hue constancy in terms of hue category. Reds stay red etc.
  • Scott Dyer: We have example images where we have an idea how they should look, plus images from the gamut mapping.
  • Nick Shaw: Having not previously had access to an HDR display, I noticed today how much variation between display technologies there is. My RGBW OLED can't do both bright and saturated at the same time, but my iPad Pro can. Maybe a DRT which desaturates bright saturated values is a benefit if it makes displays match better.
  • Alex Fry: It makes it tricky that few of us have access to a display that doesn't have a gamut volume taper at the top. Point 4, display encoding is obviously needed. Any priority order for 1, 2 and 3?
  • Sean Cooper: We wouldn't want to make a garbage forward transform just to make it invertible.
  • Daniele Siragusano: It's a balancing act where you can't get 100% of any of them.
  • Alex Fry: What is not here is aesthetics out of the box vs a flexible transform.
  • Joshua Pines: We've talked about a default LMT that makes it look good, but can be switched off for flexibility.
  • Alex Forsythe: If it is doing something objective, the look is secondary, but if it's not doing something objective it needs to look good.
  • Alex Fry: What has past feedback been? I feel 0.1.1 looks nicest out of the box, but causes the most complaints about difficulty changing the look.
  • Alex Forsythe: Finding that balance is the nightmare problem.
  • Joshua Pines: Looking good but not imposing limitations is hard. Default LMT seems a popular solution to that.
  • J. Schulte: Decoupling the look from the encoding on the wire is the path forward.
  • Carol Payne: Within limits. Some things need to be in the OT and can't be an LMT.
  • Alex Fry: Some things can only be done by swapping out the whole rendering.
  • Joshua Pines: It's important to bring this up to the TAC. It's the main thing we're discussing and it doesn't fall under requirements.
  • Alex Fry: Nothing's off the table. But we're already not going to deliver a new OT in the original timeframe. If we get into larger architectural changes we're biting off more than we can chew.
  • Daniele Siragusano: I feel not accepting this is what's holding us up. Any argument can be solved if the RRT is just a default, but if you don't like it you can swap it out, and still be in the ACES framework.
  • Jed Smith: I think we need a procedure for decision making so we can move forward.
  • Carol Payne: Even if we allow swapping it out, we need a default transform that satisfies the majority, because most people will just use the default.
  • Daniele Siragusano: If you remove the edge cases you can be more bold because you have less constraints. My suggestion is you don't specify a color management workflow, just a color management framework – rules you need to follow. You provide IDTs, ODTs and a default rendering, but people don't need to use that to be ACES compliant. Archiving and mastering would benefit from that because many go to a lot of effort to undo the fixed components.
  • Nick Shaw: How would a universal archive represent an arbitrary transform? It would have to fall back on LUTs. Even CLF, with its various operators, needs to use a LUT3D to represent the current OTs.
  • Daniele Siragusano: I'm a big fan of programmatically derived stuff, but if somebody makes a display transform that is a shader plus something else they derived, why would we prevent that? People tend to make archives which contain sets of LUTs for all the transforms anyway.
  • AF: I thought the TAC was on board with having a swappable transform as well as the RRT.
  • Carol Payne: I'm not sure. I thought they said that was something we needed a larger discussion about.
  • Daniele Siragusano: What's the case against it?
  • Carol Payne: It's a lot of effort to design a system around 10% of people. They have the knowledge to do it anyway. It seems more implementation. The point of this architecture group is to design an output transform.
  • Nick Shaw: We talked about a bypass switch, so the IDT is colorimetric, the display encoding is colorimetric, and there is nothing in the middle, so an LMT does everything.
  • Daniele Siragusano: I don't think we help anybody by saying this is the only way.
  • Sean Cooper: We can't ignore the complexity this would bring. An infinitely complex system could solve all problems.
  • Alex Fry: I feel it's about defining where the cut points are.
  • Daniele Siragusano: AMF already has the mechanics to implement this. Having a "show" AMF of the viewing pipeline adds no complexity.
  • Sean Cooper: It adds no complexity to AMF, but it adds complexity to ACES.
  • Alex Forsythe: It needs some kind of anchor. If the transform can be anything, you could have a transform which needs such different data that the archive wouldn't be compatible with any other transform.
  • Daniele Siragusano: We already have many archives that have been through a Rec.709 bottleneck and an inverse. That's not desirable. That argument for invertibility rings alarm bells for me.
  • Alex Forsythe: Invertibility isn't just for that. People need to bring in display referred data for titles and graphics.
  • Carol Payne: We use invertibility for that all the time.
  • Thomas Mansencal: And AR.
  • Joshua Pines: It gets used for both. I think a solution is a super-user back door to supply your own rendering transform. Is that something we want to allow?
  • Carol Payne: A lot has to happen to support that and make it trackable. I thought we had agreed our first priority was to design a default rendering and a default LMT.
  • Daniele Siragusano: That's the wrong way round. If it's just a default you can swap, the design changes completely.
  • Alex Fry: We have to bear swappability in mind when we design it. The cut point at OCES doesn't satisfy that requirement. I think Baselight shifts the cut point to half way through the ODT to make it swappable.
  • Daniele Siragusano: Yes.
  • Alex Fry: Making something where you can swap between the SSTS and Jed's OpenDRT helps think about where the cut point needs to be.
  • Nick Shaw: I think Josh's point is important that a bypass for advanced users should be relatively hidden, so people don't think everybody has to do that.
  • Alex Forsythe: It shouldn't be so flexible that the rendering could be a straight line, and the ACES data becomes display referred.
  • Daniele Siragusano: A straight line doesn't work for different viewing conditions.
  • Joshua Pines: Defining constraints for reasonableness is tough.
  • Jed Smith: Do we have to support anybody's display transform, or are they on their own at that point?
  • Daniele Siragusano: We have that with LMTs. They could be anything. And delivery specs could mandate using the vanilla ACES transform. But if not the archive contains AMFs which define everything for the viewing pipeline.
  • Sean Cooper: We seem to be refusing to define what the framework will be. If we don't define the building blocks, how can we build anything? I don't think the effort of designing a new framework is any more than the effort of having to redo everything at a later date if we fix something now.
  • Carol Payne: If we fix the issues we know with the current rendering, and make it invertible, I think most productions I work with would be happy with making LMTs, and not need to swap out the rendering.
  • Daniele Siragusano: The way IMF lets you have a per-viewing-condition LMT is a back door to not baking a lossy transform into the master.
  • Carol Payne: I don't see making a framework change later as incompatible with just changing the rendering now, as long as we move the cut point.
  • Chris Clark: AMF has been difficult for implementers just as a metadata framework, so it would be more difficult if it also includes the rendering.
  • Daniele Siragusano: It would give implementers more incentive, because they get more features. And why would productions not use ACES if they could do what they wanted? IDTs are an example where you can make your own IDT and share it, which makes a case for swappable DRTs.
  • Sean Cooper: IDTs are more objective though.
  • Daniele Siragusano: A big question is "have we done things wrong in the past, or done the wrong things?"
  • Carol Payne: Like with the gamut compression, if you solve 80-85% of things that's a win.
  • Daniele Siragusano: Why do we split people into those who notice the problems and those who don't care?
  • Carol Payne: They care, they just don't need the level of granular control. LMTs they understand.
  • Daniele Siragusano: In the 2010s the ARRI rendering was the most popular with DPs. Why should a standards body say people can't use that?
  • Carol Payne: To me that's redefining what ACES is, which is a bigger fundamental question.
  • Alex Fry: Is the two week meeting cadence making things worse or better?
(general agreement that two weeks is easier for people but does slow progress down)
  • Chris Brejon: We should have a more fixed agenda for meetings so we know what we're going to discuss.

Meeting #25, September 1st, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Chris Clark
Francesco Luigi Giardiello
Jean-Michel Gilbert
Zach Lewis
Thomas Mansencal
Michael Parsons
Carol Payne
James Pickett
Matthias Scharfenberg
J. Schulte
Shane Smith
Jed Smith
Troy Sobotka

Meeting Notes

  • Kevin Wheatley: After last week's meeting we have the start of a spreadsheet with a list of viewing conditions and displays. And we have a Miro board of the flow of the current SSTS OT.
  • Alex Fry: Initially I was just doing a basic list of viewing conditions, without every permutation of gamma encoding etc., then Kevin added a tab with more detail. sRGB is the one where the reality differs most from the specs. Nobody uses an 80 nit display, so what should we target? 250 nits in a bright office environment? sRGB was probably the most commonly used in the original ACES configs, as the default in Nuke. But the reality of 2.2 vs piecewise needs discussion. Well defined ones like DCI theatre are easier.
  • Lars Borg: Also for BT.1886, are we looking at reference conditions or practical use conditions?
  • Kevin Wheatley: We do have sRGB displays that follow the specs with 80 nits and 2.2 gamma, although not D50 surround.
  • Jean-Michel Gilbert: Many monitors use 2.2 gamma not the piecewise, but the OS profiles expect piecewise. scRGB expects 80 nits diffuse white, which is not the real world.
  • Thomas Mansencal: Actually many big shops do use 80 nits if people are doing comp work in a dark surround.
  • Lars Borg: For an Output Transform, shouldn't we target consumer living rooms where this is being viewed, not reference conditions?
  • Nick Shaw: I would argue against that. We've always mastered on a 100 nit display in a dark room, and the viewers with their 300 nit display in their living room have viewed stuff mastered at 100 nits, so what they see looks normal to them. If you master to deliver your creative intent on their 300 nit display, it will look wrong to them.
  • Kevin Wheatley: I found a lot of variation in the specs. I put 2.2 as the gamma for sRGB, because that's what it says in the spec I had (1998 version, section 2.1, item 4):
  1. Display input/output characteristic (R, G, and B)     2.2
  • I didn't search for every single standard, but I know with sRGB there is controversy.
  • Alex Fry: I've worked at a facility where half the monitors were 2.2 and half were sRGB, and we needed different transforms for each. The surround is up for debate. Should the sRGB OT assume a bright room? On-the-wire is simpler, but surround compensation needs to be baked into the image.
  • Jean-Michel Gilbert: I think HP and MS designed sRGB as a bright surround version of Rec.709.
  • Kevin Wheatley: I found a wide range of values for HDR surround. Some so close you wonder if the difference is significant. Sunny vs cloudy etc. Some docs say grading suite illumination is 5 cd/m², but elsewhere it says 10%. But that's 10% of something that can vary.
  • Jean-Michel Gilbert: You asked whether we should support monitors other than the DisplayHDR list. I say yes. 1500 nits is becoming common, e.g. ASUS. DisplayHDR 400 and 600 are more common than 1000 due to cost.
  • Kevin Wheatley: We need to pick something, but give our implementation wiggle room in the parameters. Release a restricted subset.
  • Alex Fry: We don't want a default OCIO config with every variant, but we should test against a wide range.
  • Jean-Michel Gilbert: DisplayHDR 400 isn't HDR and needs to die! They don't even have to support beyond Rec.709.
  • Carol Payne: If we drop something included in previous ACES releases we need to explain why. I'm ok with limiting the list. We basically have two types of people - those who don't calibrate and need only SDR or HDR, and those who do calibrate and will know what OT they need. And game engines will be adapting to the scenario.
  • Alex Fry: It would be useful for a flip-booking application if it could adapt to the connected display, rather than hand off a 1000 nit buffer and let the display do some unknown mapping to e.g. 400 nits.
  • Kevin Wheatley: We can add another tab to the spreadsheet so people can make an include/exclude list. The Miro board shows where we are today with the order of the SSTS code as blocks.
  • Alex Fry: The SSTS block actually masks a lot more complexity, which maybe we should split. It's a discussion aid.
  • Kevin Wheatley: There are sticky notes of topics for discussion.
  • Alex Fry: There are dividing lines in the board. and one thing to discuss is where those should go, to allow swapping other rendering transforms. Currently I put a divide after the SSTS tone curve.
  • Kevin Wheatley: I added another line once it gets to display RGB. After that it is primarily on-the-wire encoding. The only questionable one is the scaling for D60. Is D60 the only creative white we want to deal with, or is it just the out-of-the-box one?
  • Alex Fry: In reality I feel most people are targeting D65.
  • Kevin Wheatley: We have one D60 sim show, but it's unusual. Ideally this bit would come before this line, so it could be plugged as a whole unit. We provide the mechanics for the on-the-wire encoding which is straightforward for each condition people need.
  • Nick Shaw: The scaling there isn't doing the creative white. It is there to scale down if the D60 sim (i.e. no CAT) results in values above 1.0.
  • Kevin Wheatley: That's why it feels like it should go straight after the CAT.
  • Thomas Mansencal: That's what we do with our custom stuff. If the CAT pushes things above 1, we normalize straight after.
  • Nick Shaw: With a CAT, [1, 1, 1] maps to [1, 1, 1], so it's the D60 sim which needs normalizing.
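  • [A sketch of the normalization Thomas and Nick describe: if the D60 sim (no CAT) pushes the display representation of white above 1.0, scale down immediately afterwards. Illustrative only, not the actual CTL.]

```python
import numpy as np

def normalize_after_sim(rgb, sim_white_rgb):
    """Scale so the largest channel of the simulated white lands at 1.0.
    'sim_white_rgb' is the display RGB of the creative white when the
    adaptation step is skipped."""
    scale = 1.0 / max(float(np.max(sim_white_rgb)), 1.0)  # only scale down
    return np.asarray(rgb) * scale
```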
  • J. Schulte: We explicitly differentiate between D60 and ACES white which is slightly different.
  • Kevin Wheatley: If you remove the D60/D-ACES distinction and make white a creative choice, that one has no special significance. Related: it's been noted before that the gamut clip and CAT are in the wrong order, because you clip and then potentially push values back out with the CAT. So that's a bug fix that needs doing. Any other comments?
  • Alex Fry: One thing I was unsure about was the global desaturation in the RRT sweeteners. Is that for a specific reason, or subjective?
  • Scott Dyer: There were those who thought the tone scale made things too saturated, and felt it should be there. But it's very subtle. I put it up front so it could potentially be split off into an LMT.
  • Lars Borg: I think it was done because in some test images some very saturated colors fell outside the tone mapping. But it affects all colors. The red modifier specifically pulls down saturated reds. Most discussions were about the far-out colors, but did it then sabotage the near-grey colors?
  • Scott Dyer: I was worried we were sculpting our rendering based on problematic images. There was a paper by Jon McElvain at Dolby which tried a 2D approach to IDTs. We reconverted some of the problem frames using the 2D approach instead of a 3x3 matrix IDT, and the difference in red was shocking. The other colors were pretty close. I don't want to sculpt the rendering based on images with poor 3x3 IDTs, when better images don't need it.
  • Lars Borg: The test images were captured on film, and who knows how they were processed. So the bottom line is to simplify.
  • Alex Fry: Except perhaps where the display curve is applied. That may get more complex.
  • Kevin Wheatley: Currently we have nowhere to handle the surround or appearance effects. And currently we only have clipping as gamut mapping which creates different artifacts on different displays.
  • Alex Fry: The SSTS OTs have no surround compensation, and there is a simple gamma for SDR.
  • Kevin Wheatley: There is nothing for other appearance effects like colorfulness related to brightness.
  • Thomas Mansencal: All that could be done by a CAM (except gamut mapping) including change of white point.
  • Jed Smith: One thing I found helpful when making the OpenDRT was to separate the parameter space from the rendering code. You separate the parameters exposed to the user, and the resulting calculated values passed to the rendering code. I combined multiple scale factors into a single multiply operation.
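  • [A structural sketch of the separation Jed describes, with invented parameter names – not OpenDRT's actual interface.]

```python
from dataclasses import dataclass

@dataclass
class UserParams:
    """What the user sees and adjusts."""
    exposure: float = 1.0
    grey_nits: float = 10.0
    peak_nits: float = 100.0

@dataclass
class RenderParams:
    """What the per-pixel rendering code sees: pre-resolved constants."""
    input_scale: float

def resolve(p: UserParams) -> RenderParams:
    # Fold several user-facing factors into one multiplier so the
    # per-pixel hot path is a single multiply.
    return RenderParams(input_scale=p.exposure * (p.grey_nits / p.peak_nits) / 0.18)

def render_pixel(x: float, rp: RenderParams) -> float:
    return x * rp.input_scale
```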
  • Alex Fry: It would be good to lay yours alongside the current one in the Miro board, so we could see where things could be split.
  • Kevin Wheatley: If we make it compatible with one thing, it might hopefully play better with others.
  • Alex Fry: How intertwined is OpenDRT with Daniele's tone curve? Could it be swapped for the SSTS, for example, or could yours be applied in RGB?
  • Jed Smith: It should be doable.
  • Kevin Wheatley: Is the one multiply for efficiency? Because keeping each scale next to what it relates to simplifies splitting it up. Also if you introduce any steps which are non-linear, combining the scalings at the end would not necessarily give the same result. Different gamut mappings might not be linear. What should we do before next time? It would be nice to fill out and lock the spreadsheet.
  • Alex Fry: I'm looking at things that are meaningfully different. Not every single on-the-wire variation. How are people using Desktop P3 in the wild? What surround assumptions? Dim desktop environment? 48 nit in a dark room?
  • Thomas Mansencal: And what EOTF? The Display P3 version with P3 primaries, D65 white and sRGB EOTF is quite common now.
  • Jed Smith: These could be input parameters, because the separation simplifies the way of thinking about it.
  • Alex Fry: Let's work together to represent OpenDRT in the same style on the Miro board. And the SSTS part should be exploded out. For next time, maybe we can get Troy to talk about sensation vs stimulation.

Meeting #24, August 18th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Alex Forsythe
Lars Borg
Daniel Brylka
Sean Cooper
Thomas Mansencal
Michael Parsons
Joshua Pines
Rémi Achard

Meeting Notes

  • Kevin Wheatley: I think we should revisit the purpose of our meetings and make sure we’re making progress towards our goals. We’ve agreed on an outline framework to construct the transform. We’ve talked about not needing to solve all problems in the stock transform, and giving advanced users a way around it. Jed and others have done good investigations of some concepts. But we need to remember we have to produce an actual deliverable. Maybe we should split off some of the group’s efforts to look at a minimum viable product. Scott has worked on that, taking what we have and tidying it up. In parallel Jed’s work fits in the framework we’ve discussed, and could be evaluated against it and integrated at some point in the future. Any thoughts?
  • Alex Fry: Can you define “split off”?
  • Kevin Wheatley: We don’t want to lose the efforts people have been making, but equally we don’t want that to distract from making incremental steps. Jed’s work is a dramatic change. We don’t have a lot of the original timescale left to evaluate whether things are acceptable, and if they solve the original problems.
  • Scott Dyer: Can you clarify the parts I’m cleaning up?
  • Kevin Wheatley: The origins of the various “magic numbers”. Small CTL bug fixes. Order of operations etc. But also looking at what it means to be “a bit less contrasty”. More “average”? That gives us a start point for a benchmark of what we want to achieve. We could also take in the suggestions from Sean and Daniele on how to split up the modules, to make them more explicit. Then we can move on to looking at other options which are more research than bug fixes.
  • Scott Dyer: We frequently talk about what’s wrong, but what parts of what we have do we like? There’s obviously things like hue shifts and clipping, which may sometimes produce desirable results, but are unpredictable and mostly undesirable. I need to look at the graphs from Sean and Daniele and see what parts are working. Are there sub-components that can be looked at independently? To give people stuff to look at, as well as Jed’s great work.
  • Alex Fry: The dim vs dark block of the code is one thing that could be looked at. We need it whatever we do. Looking at where in the code it goes.
  • Thomas Mansencal: More broadly adaptation in general. Which aspects are important, and where we put them. It affects the architecture quite significantly. I contacted Eric Reinhard through Josh, but haven’t got an answer yet. I asked him how he would architect a display transform these days, with regard to appearance modelling.
  • Alex Fry: I feel the question of tone scale and how to apply it are independent. The SSTS and the “Daniele function”.
  • Thomas Mansencal: I was looking at the Dropbox Paper. Have we done the phase one deliverables? The summary statements, refine the scope and rules of engagement. And are the phase two ones still valid?
  • Kevin Wheatley: I would agree that we did not complete those. Items like nomenclature etc. should have been done. We talked about a list of the minimal set of output devices and viewing conditions. Time is difficult for all of us.
  • Thomas Mansencal: Maybe we can dice the problem into smaller units that somebody could tackle in a few days, rather than a few weeks. I’m super glad Jed has found the time to do what he’s done, but I think he’s an outlier.
  • Kevin Wheatley: We need to produce a long list of small tasks, and then make a short list and start tackling them. 
  • Scott Dyer: One is the list of environments and devices for the most used outputs.
  • Thomas Mansencal: Maybe we could look at the ITU standards and the VESA HDR ones. Maybe not produce an OT for each, but make sure that they work.
  • Alex Fry: The 4000 nit display everywhere isn’t happening any time soon, so those variable displays are the reality now.
  • Kevin Wheatley: Other areas we could agree on are methods of evaluation. Scott, Alex, Nick and I have talked about having a black and white tone scale to sort that out before looking at color. Instead of solving everything at once. Any similar suggestions? Then we can find images that match against those criteria.
  • Sean Cooper: I was going to suggest categories of images we should evaluate. High dynamic range cinema cameras; graphics; 2D animation; 3D animation; Synthetic images; Abstract? Broad strokes of what we want to evaluate visually.
  • Kevin Wheatley: We can call on the other working groups for images, but we should have some novel images, so people aren’t wedded to what they should look like. We need documentation of all these things. We have Nick’s notes and the transcriptions, but we could start a document.
  • Alex Fry: I wonder if it’s worth walking through the code to see if we all agree with what’s in there. Do we agree on getting rid of the sweeteners?
  • Scott Dyer: I think we want the bare minimum necessary. That means getting rid of the sweeteners, or at least putting them in an LMT. But is there value in going through the CTL? It will change. I’d rather just look at it from the perspective of the block diagrams. We shouldn’t be influenced by the current code.
  • Alex Fry: Are we agreed we’re only looking at the SSTS (or equivalent) assuming the original two-parters are off the table going forward?
  • Kevin Wheatley: I think the intent is to be unified.
  • Scott Dyer: There is a lot of inconsistency currently - e.g. the gamut limited versions (but not all) and the different white points. And a few have a line of code for some Helmholtz–Kohlrausch effect compensation.
  • Thomas Mansencal: Is there a table summarising the variations?
  • Kevin Wheatley: There is a list of five on the DropBox Paper, but is it complete?
  • Scott Dyer: I can put together a table, with a column for each of these mechanisms.
  • Kevin Wheatley: Anybody want to make the list of reference monitors and environments?
  • Thomas Mansencal: It could be another tab in a big Google sheet.
  • Kevin Wheatley: Not publicly editable!
  • Scott Dyer: I’ll make the Google Sheet. And add a page for devices. Maybe some devices need multiple environments.
  • Sean Cooper: Isn’t most of this already in the Miro Board?
  • Kevin Wheatley: Maybe, but it’s not complete.
  • Thomas Mansencal: For me it’s about capturing everything, so you don’t miss anything out. It then helps when you implement the code.
  • Sean Cooper: It would be nice to revisit the Miro Board to make sure we feel we’ve brainstormed all the possible solutions. The pros and cons of different approaches. Hone it down to 2 or 3, and do small tests down each path, before committing to one.
  • Scott Dyer: We didn’t really decide on which of these frameworks made most sense, did we?
  • Kevin Wheatley: There was concern over one with per-shot adjustments after the display transform, but on the others we didn’t make any decision.
  • Sean Cooper: In brainstorming nothing is off the table. Then we do a more critical review. I think we did the first part, but is that as far as we got?
  • Scott Dyer: One thing I think we must include is the “intent switch”. Optimise or make it look like something else. Some want HDR to be brighter SDR. Some want more wow factor. Both are valid. Bare minimum is a binary switch.
  • Kevin Wheatley: That’s not an identical colorimetric match. Or that adds a third option.
  • Scott Dyer: Some people have two different displays in one room, and say “why don’t they match?”. But in separate places they would appear to.
  • Josh Pines: That brings up soft proofing. Do we want to allow previewing one device on another?
  • Alex Fry: We’ve all had to do that, so it’s in there somewhere.
  • Josh Pines: That answers the colorimetric match.
  • Scott Dyer: If we engineer it right that’s the easiest part. The “same feel” is harder to evaluate.
  • Josh Pines: JZ would say an exact match is something we need to show clients.
  • Sean Cooper: Why are they mutually exclusive? We already have “choices of flavor” such as D60 sim. Why is it off the table to have 100 nit sim HDR?
  • Scott Dyer: It’s not off the table. It should be something that the system produces consistently, and isn’t left to the user dialling in parameters.
  • Sean Cooper: Choice is already in there with simulated white points.
  • Scott Dyer: Take clouds: SDR rolls them off to white, where HDR could reveal more detail. But some people don’t want that. Currently the HDR OTs don’t do that. But you could have an HDR OT for 1000 nits with no more detail.
  • Sean Cooper: I don’t see why both can’t be provided.
  • Josh Pines: How easily are both provided? Or things in between? Suddenly there is exponential growth in the number of OTs. Or is there that magical parameterization people tend to shy away from? DPs often light for the windows to blow out a certain way, and then in HDR they see stuff they didn’t want to be seen. Ideally there is a color corrector knob which does that. So we produce multiple HDR candidates and they pick one that falls off the truck how they want.
  • Kevin Wheatley:  We’re assuming soft proofing on a more capable device. But what about the other way?
  • Alex Fry: I think it needs to go both ways. If you fall in love with the HDR highlights, you need to simulate that in SDR. That’s what Dolby Vision does, holding onto super colorful HDR skies.
  • Kevin Wheatley: There’s always a trade off, and that adds to the proliferation of OTs.
  • Josh Pines: From experience the candidates fall into two camps: keep the SDR roll-off, or open up. Nobody wants to split the difference.
  • Alex Fry: We definitely had the weird combo of HDR dynamic range and SDR color rendering.
  • Sean Cooper: There’s already choice of white point, so adding one more choice of HDR or SDR-in-HDR doesn’t seem bad. Opinions on how you like your HDR are polarising.
  • Kevin Wheatley: We should distinguish between what the code can do, and what renderings we provide out of the box.
  • Alex Fry: Even D60 sim adds complexity horror when it’s in a single flat menu. Now add to that…
  • Josh Pines: I think it’s a naming issue. That was what forced the situation of having to create a D65 white point version for theatrical.
  • Sean Cooper: It is complex, and if we decide how to simplify it for people we’re removing options.
  • Alex Fry: The SSTS is nice in that we provide some options, but users can easily do others by changing switches.
  • Kevin Wheatley: So Scott will make the spreadsheet of the current options, and it would be great if we could also make that list of devices and environments before the next meeting. We need to be focussed from now on.


Meeting #23, July 22nd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Lars Borg
Chris Brejon
Daniel Brylka
Sean Cooper
Francesco Luigi Giardiello
Zach Lewis
Thomas Mansencal
Michael Parsons
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith

Meeting Notes

Scott Dyer expanded on his explanation of the SSTS from the previous meeting, and showed the interactive Python plots that were used during development of the 1.1 OTs. He showed both an unconstrained version, and then a version using the constraints from ACES 1.1, which also included presets for various display classes. He demonstrated with this how the mid grey luminance adjustment is achieved using a scene side exposure adjustment, which effectively moves the curve side to side, not up and down, sliding the window of scene exposure mapped to the display. Open to debate if this is the right way to do it. It was done to keep the shape of the curve consistent.

  • Scott Dyer: The SSTS enabled me to adjust the curve in any way I needed. I feel we could use it to look at B&W images and find what we want the tone curve to do, and then add in things like the color stuff Jed has been working on. It's just one proposal. We could replace it with the Michaelis–Menten based curve Daniele proposed, or something else. I'm going to make a proposal for a changed SSTS curve, with different end point slopes and mid-tone contrast, and compare it to other popular renderings.
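  • [For reference, the generic Michaelis–Menten saturation function that such a curve builds on, shown in its simplest form – not Daniele's actual parameterization.]

```python
import numpy as np

def michaelis_menten(x, peak=1.0, k=0.18):
    """y = peak * x / (x + k): a compressive curve through the origin that
    approaches 'peak' asymptotically. With k = 0.18, 18% grey maps to
    exactly half of peak."""
    x = np.asarray(x, dtype=float)
    return peak * x / (x + k)

print(michaelis_menten(0.18))  # 0.5
```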
  • Daniele Siragusano: How are the values for the presets derived?
  • Scott Dyer: They are just the values used in the current OTs. They are the modifiable values in the Output Transform module in the current CTL.
  • Daniele Siragusano: How is mid grey chosen for e.g. an 800 nit monitor?
  • Scott Dyer: There are currently inconsistencies, and no definition for e.g. 800 nit mid grey. The 1000, 2000 and 4000 nit OTs all use 15 nits for grey, and Dolby Cinema uses 7.2. There's nothing in between, and no formula.
  • Daniele Siragusano: So the formula gives you freedom to change things, but doesn't tell you what the values should be. Do you need dynamic branching in the formula? Is there no simple y = f(x) equation?
  • Scott Dyer: Yes. It's piecewise in 4 parts – linear extensions at the ends, and two B-splines, with smarts to make the join smooth.
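  • [A structural sketch of such a 4-segment curve; the knots, slopes and spline callables are placeholders, not the actual SSTS coefficients.]

```python
def piecewise_tonescale(logx, knots, toe_slope, shoulder_slope,
                        spline_lo, spline_hi):
    """Evaluate a curve with linear extensions below knots[0] and above
    knots[2], and two spline segments in between. Note the per-sample
    branching, which is the GPU cost Daniele raises."""
    if logx < knots[0]:
        return spline_lo(knots[0]) + toe_slope * (logx - knots[0])
    if logx < knots[1]:
        return spline_lo(logx)
    if logx < knots[2]:
        return spline_hi(logx)
    return spline_hi(knots[2]) + shoulder_slope * (logx - knots[2])
```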
  • Daniele Siragusano: I'm thinking of computational efficiency. Dynamic branching on a GPU is not efficient, and it needs to have benefits to justify it. We need to think about how much complexity we need at each point, and what it gives us.
  • Kevin Wheatley: Is that an implementation discussion? Back to the parameters, it doesn't seem right to me to use scene exposure to move the mid grey level. It's related to lack of understanding of why the values are what they are. We should have a curve that defines where grey comes out at any level, not just particular instances.
  • Scott Dyer: The SSTS doesn't have to be the final form, but I think it is useful for intuitive control over the curve, to find the behavior we want.
  • Lars Borg: I notice the curve is not symmetrical in the low and high end. I assume there is a reason for that. We need to find what is the overall shape of the curve, and what do we need to keep tweakable?
  • Scott Dyer: The asymmetry comes from needing to match the original RRT, which targets a theoretical 0.0001–10000 nit display, with +18 stops mapping to 10000 nits, -15 stops to 0.0001 nits, and 18% grey at 4.8 nits. We need to focus on deciding what the desired behavior is for displays with different capabilities. B&W helps us focus only on the tone scale and not be distracted by other things.
  • Daniele Siragusano: You set black luminance to the capability of the reference monitor, yes? So ignoring PQ, if you have a relative EOTF, you map that value to zero before applying the inverse EOTF, so it's at zero in the encoding?
  • Lars Borg: In cinema, doesn't zero represent an unachievable absolute black, so you have to clip at 64 for what the projector can actually do?
  • Daniele Siragusano: No, the DCI spec is very explicit that black is relative, and zero maps to whatever is the darkest the projector can produce.
  • Scott Dyer: Absolute encodings complicate things, because if black is defined absolutely, you rarely actually get that output. Really you want it to float with natural flare etc.
  • Daniele Siragusano: So wouldn't it make sense to remove the black luminance parameter? Put it to zero and make it relative.
  • Scott Dyer: It's definitely confusing and inconsistent in the current version.
  • Lars Borg: But if you set black to zero, on two devices with different black levels, you get a different appearance of the shadow detail.
  • Joshua Pines: We have a canonical reference display for each standard, and that has a black level. What Scott is trying to answer is what range from the scene should be mapped to that display range. And he's doing that in nits. How you then go out to an encoding is a separate issue.
  • Daniele Siragusano: But that's the problem. You can't assign a black level a nit level.
  • Scott Dyer: The first RRT was in density. But for 1.1 everyone talked in nits. Density is more forgiving, because you can have a curve that keeps going to effectively infinite density. As Josh said, defining the curve in nits is a design, and it's not necessarily the nits you would measure off the end display. Maybe a finite slope at zero would help, as it keeps going down.
  • Daniele Siragusano: But if you map that to zero in OCES and clip at zero, you can never access that linear extension. The right side will affect how it comes out of black. Talking about nits in the shadows doesn't help, because it varies with APL. Sometimes in HDR black may be at 1 nit. Taking out the black level reduces the parameter space.
  • Kevin Wheatley: So then we have the question of how we choose how much scene exposure we fit into the display range. People must have an expectation with HDR that it shows more scene range, but maps it differently. Is it desirable to control the scene range that goes into a given transform, or is that the colorist's job?
  • Alex Fry: In my testing I felt you needed to change the RRT max stops value, rather than have it be consistent through all the transforms.
  • Scott Dyer: Currently it changes as an interpolation between +18 stops for a 10000 nit OCES display, and +6.5 stops for 48 nit cinema. It needed to stay consistent to avoid a v2.0 version bump.
  • Kevin Wheatley: Can we define where we put 18%, which is roughly APL? Then we can say, "How much of the dynamic range should we have below and above? And what should the curve between those points be?"
  • Joshua Pines: For SDR the 10% of peak rule of thumb has been borne out by practical tests.
  • Thomas Mansencal: What surround?
  • Joshua Pines: It seems to have held up in both dark and dim surrounds. But maybe the fact that dark surround is 48 nit peak and dim is 100 affects that. For HDR the jury's still out. Originally the HDR ODTs put mid grey at the same level as SDR. But people said it should be brighter.
  • Daniele Siragusano: In SDR it's always around the same level because you can't make it brighter. In HDR it varies much more, because you can make a bright scene bright, and a dark scene dark.
  • Joshua Pines: It would be useful if we had enough HDR material to do an analysis. In the current OTs it's somewhere near SDR, but a little brighter.
  • Daniele Siragusano: I'd like a system where the numbers fall out by themselves – 18% grey, diffuse white, etc.
  • Thomas Mansencal: We need to model it for a range of displays from 48 nits to a big 10000 nit display, and see where grey comes out as a function of surround and display capabilities.
  • Joshua Pines: The 10% verification came from work Lars Borg did on the Stem material. I believe the ASC is currently shooting Stem 2, and doing some HDR stuff. So maybe we'll have something we can analyse. If we set it to the mean, colorists can move it either way.
  • Daniele Siragusano: Has anybody visually analysed the formula I proposed, which ranges from SDR to HDR? Two lines of code, a multiply and a plus or minus, and I think it works surprisingly well.
  • Jed Smith: I've evaluated it extensively, and I like how it works. I hope to have more to show soon.
  • Joshua Pines: We could ask Dolby. They've refined their HDR to SDR algorithm over time.
  • Kevin Wheatley: I assume that works on the limited range of the human eye. If you stand back you can only see a limited range of what's on the HDR monitor, and that’s what you map to SDR.

Alex showed some slides showing the difference between the 4.8/48 nit SSTS curve scaled to 10/100 nits, and using 10/100 directly, producing different results. Then changing some internal parameters and removing the exposure offset, the two matched. Similarly, by default the curve of a 15/1000 nit SSTS didn't originally match one that used those values directly with no exposure shift. But by tweaking parameters it could be made to do so. This is about trying to understand the logic of the exposure shift.

  • Scott Dyer: It's hard to articulate. I'll try to find a way to explain it to you and everybody. Back when we started we couldn't really even look at HDR. We decided 6.5 stops up and down was a reasonable range for SDR. And then it was a design decision that HDR was interpolated between those and -15/+18.
  • Alex Fry: I get the 100 nit thing a bit more now. 48 nit stretched is still +/-6.5 stops, whereas direct 100 nits is more than that due to the interpolation. Maybe about 8 stops?
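
A rough illustration of where that "maybe about 8 stops" can come from, assuming the scene range interpolates linearly in log10 of peak luminance between the two anchors quoted above (48 nits at +6.5 stops, 10000 nits at +18 stops); the actual CTL may differ in detail:

```python
import numpy as np

def max_stops(peak_nits):
    # Lerp in log10 of peak luminance between the two anchors:
    # 48 nits -> +6.5 stops, 10000 nits -> +18 stops above mid grey.
    t = (np.log10(peak_nits) - np.log10(48.0)) / (np.log10(10000.0) - np.log10(48.0))
    return 6.5 + t * (18.0 - 6.5)

for nits in (48, 100, 1000, 4000, 10000):
    print(nits, round(max_stops(nits), 2))   # 100 nits lands near 8 stops
```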

Meeting #22, July 7th, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Rémi Achard
Daniel Brylka
Sean Cooper
Jean-Michel Gilbert
Zach Lewis
Francesco Luigi Giardello
Thomas Mansencal
Michael Parsons
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Jed Smith
Shane Smith

Meeting Notes

  • Scott Dyer: Some questions arose from Alex about the SSTS, so I thought it would be helpful to go through how the SSTS works (summarised here; see the recording from 3 minutes in for detail, and the illustrative sketch after this summary)
  • SSTS = Single Stage Tone Scale
  • Piecewise function. 2 B-splines joined at 18% point plus linear extensions top and bottom. Currently the extensions are flat.
  • RRT has a tone curve for a theoretical 10000 nit display with 100 million:1 contrast, so 0.0001 nits at the bottom.
  • The SSTS has the minimum knots needed to give the required control
  • The RRT maps +18 stops to 10^4 nits and -15 stops to 10^-4 nits
  • A lot of underlying math calculates the coefficients from a minimal set of parameters
  • PercentHigh and PercentLow controls affect the roll off top and bottom
  • TsParams is a struct of TsPoints for the SSTS
  • You only input y values for min, mid and max. x values are calculated automatically by linear interpolation between the RRT tone scale and the 48 nit one. There is an exposure shift in conjunction with the SSTS, which moves mid grey.
  • We want to change the end slopes from zero to some minimal number, TBD, for inversion. We will make code which can vary those for this group to experiment with. But we should never ship that.
  • Mid slope is currently 1.55. Probably reduce it to something between this and K1S1 contrast.
  • If we make these changes and look at B&W images we can focus only on the tone scale and lock that before working on color.
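
As a rough illustration of the structure summarised above, here is a hypothetical sketch of a four-piece curve in log-log space. It uses cubic Hermite segments as stand-ins for the actual quadratic B-splines, and the knot values quoted in these notes (0.02 / 4.8 / 48 nits, ±6.5 stops, mid slope 1.55), not the shipping coefficients:

```python
import numpy as np

def hermite(lx, lx0, lx1, ly0, ly1, s0, s1):
    # Cubic Hermite segment in log-log space: hits both end points with
    # the given end slopes, which is how smooth joins can be enforced.
    t = (lx - lx0) / (lx1 - lx0)
    h = lx1 - lx0
    return ((2*t**3 - 3*t**2 + 1) * ly0 + (t**3 - 2*t**2 + t) * h * s0 +
            (-2*t**3 + 3*t**2) * ly1 + (t**3 - t**2) * h * s1)

def ssts_like(x, x_mid=0.18, stops=6.5, y_min=0.02, y_mid=4.8, y_max=48.0,
              slope_end=0.0, slope_mid=1.55):
    # Four pieces: low linear extension, low segment, high segment,
    # high linear extension. The segments join at the 18% point.
    x_min, x_max = x_mid * 2.0**-stops, x_mid * 2.0**stops
    lx = np.log10(np.asarray(x, dtype=float))
    lx0, lx1, lx2 = np.log10([x_min, x_mid, x_max])
    ly0, ly1, ly2 = np.log10([y_min, y_mid, y_max])
    ly = np.select(
        [lx < lx0, lx < lx1, lx < lx2, lx >= lx2],
        [ly0 + slope_end * (lx - lx0),
         hermite(lx, lx0, lx1, ly0, ly1, slope_end, slope_mid),
         hermite(lx, lx1, lx2, ly1, ly2, slope_mid, slope_end),
         ly2 + slope_end * (lx - lx2)])
    return 10.0 ** ly  # nits

print(ssts_like([0.002, 0.18, 16.0]))  # toward 0.02, exactly 4.8, toward 48
```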
  • Alex Fry: That's really helpful.
  • Scott Dyer: I think the SSTS is pretty flexible, but I'm not opposed to replacing it. Jed has done lots of great work, and Daniele has his proposed curve. The RRT tone scale interpolated from in the 1.1 OTs exactly matches the RRT tone scale used in 1.0. But if you set the SSTS to 48 nits, 4.8 mid and 0.02 min, you don't get an exact match to the 1.0 RRT + ODT. Two more complex splines concatenated can't be matched with one simpler one. And the 1.0 100 nit curve is actually just the 48 nit curve stretched out.
  • Alex Fry: I'm still not 100% clear why a 100 nit SSTS curve isn't the same as a 48 nit one stretched out, if you keep the proportions between the control points the same.
  • Scott Dyer: It's because only the max changes to 100 nits. The mid point stays the same in the curve, and the exposure is raised before the curve by the amount needed to make the mid point hit 10 nits. So the curve is moving horizontally, not vertically.
  • Nick Shaw: This is just the underlying SSTS curves. It doesn't include the dark to dim, so the 48 and 100 line up exactly.
  • Scott Dyer: Yes, I took that small gamma adjustment out for this demo. I plan to make some candidate curves for people to look at, so I'd love feedback on what those curves should do.
  • Jean-Michel Gilbert: For SDR SSTS curves the absolute luminance doesn't matter because they are relative, so it all ends up 0-1.
  • Kevin Wheatley: That raises the question: is that the right thing to do? Raising mid grey is nominally because that is where the viewer might be adapted.
  • Scott Dyer: The SSTS is absolute, so using the SSTS makes 48 and 100 look different. The 100 nit curve is designed to go brighter, then it is squashed down when we normalise 0-1.
  • Nick Shaw: And after this you have the luminance to code value mapping, so Y_min to Y_max maps to min to max code value. It's a kind of range stretch which squeezes the bottom end down to code value zero, yes?
  • Scott Dyer: Yes. That's why there is that weird little kink at the bottom of the curve.
  • Thomas Mansencal: That kink could be because you don't have enough samples on your plot.
  • Scott Dyer: I don't think it is. The curves all come together at the top because you're using a curve designed for absolute luminance, and you're then mapping it to a relative space by just saying "fit this to 0-1". Currently you would only want to use the SSTS for absolute output like PQ.
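
A minimal sketch of the range stretch Nick and Scott describe (the 0.1 nit black for the 100 nit curve is a placeholder value): absolute curve output in nits is normalised so [Y_min, Y_max] fills 0-1 before the inverse EOTF, and subtracting Y_min is what squeezes the bottom end to code value zero.

```python
# Map absolute nits to the relative 0-1 range before encoding.
def nits_to_relative(y, y_min, y_max):
    return (y - y_min) / (y_max - y_min)

print(nits_to_relative(4.8, 0.02, 48.0))   # mid grey on the 48 nit curve
print(nits_to_relative(10.0, 0.1, 100.0))  # mid grey on a 100 nit curve
```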
  • Alex Fry: Historically that makes sense, because you started with 48 nit, and then made derivatives. But going forward if SSTS is the one true way…
  • Scott Dyer: Yes, we need to reconcile that.
  • Kevin Wheatley: Alex, did we cover your points already?
  • Alex Fry: I still need to think through a few things on that. My other demo is separate, so maybe Nick can go over his stuff first.
  • Nick Shaw: In a previous meeting I posted my DCTL which is the current SSTS OT but with check boxes to turn off the RRT sweeteners individually. One thing I've found using that (and I've raised an issue on aces-dev) is that the order of operations means the limit to primaries happens before the chromatic adaptation, so going e.g. from D60 to D65 creates some out of gamut values and also lowers the red channel so it clips early. You can see a little out of gamut sliver on a CIExy plot of something supposedly "P3-D65 limited". I added something to my code which lets you flip the order, so it does the CAT first. The effect is not visible to me in SDR except on the scopes (where the asymmetric clipping goes away) but I'd like to know from somebody with an HDR monitor if they see a difference. So this is not a proposal for a new OT, but really a fix to the existing one, which Scott is aware of, but didn't want to change behavior except in a major version bump.
  • Scott Dyer: Yes. There are a few minor fixes, but we didn't want to do anything that would drastically change the transform. The bug hasn't shown as a show-stopper.
  • Nick Shaw: It would cause a QC fail if the delivery spec was absolute about nothing outside P3 in a 2020 container.
  • Joshua Pines: That's everybody. We have to do a lot of post limiting, even for things caused by code value rounding.
  • Nick Shaw: In the blacks, something that is properly limited in float may not be once you quantize to 10 or 12 bits. Do you need to account for that in your limiting?
  • Joshua Pines: Pretty much.
  • Alex Fry: My experiments have been trying to make use of the fact that modern 16 inch MacBooks will do 500 nits, and let you use that range to see a bit of HDR. I have a text file which marries up screen brightness and absolute nits, so I can dial the brightness up and down and feed that into the SSTS. So what I'm doing here is driving the brightness up and down but keeping middle grey at the same value. At minimum you just have a roll off at 100 nits, but as I step it up, mid grey stays the same, but the highlights stretch out towards 500 nits. But the UI also changes, which makes your vision adapt. So it's a bit pointless unless you can black everything else out. That's in this GitHub. The other thing I did makes use of some Swift code which reads the max EDR range available. Normally you use a 1000 nit OT, and anything above the max EDR range is clipped. If you have an XDR you can show the 1000 nits, but a MacBook has much less. I'm reading the EDR headroom and feeding that into the SSTS. Normally you get a stop extra. This is in the category of "stupid Nuke tricks" but I found it useful to be able to look at HDR. This Nuke script has 3 versions of this idea. One based around 48 nit numbers, one around 100 (which, in my confusion, I was asking Scott about) and the other which uses Jed's OpenDRT.
  • Nick Shaw: I found that 200 nit limit (with 1.0 mapping to 100 nits) experimentally when I made my 500 nit OT on the assumption that it's a 500 nit display.
  • Kevin Wheatley: Is it useful for those who don't have a 1000 nit display? Is one stop enough?
  • Alex Fry: It's useful for me to not have something that applies additional tone mapping to try and fit things into your display's range.
  • Nick Shaw: I don't have an XDR, but I've ordered a 12 inch iPad Pro, so will be interested to see what can be done with that. I haven't read of anybody finding a way to connect it directly as an HDR external display.
  • Thomas Mansencal: You may need to render HDR videos.
  • Joshua Pines: The backwards ICC profile puts limits on things, and you have to "lie" about what's coming in to get the transforms to do what you want.
  • Alex Fry: I don't think Sidecar does HDR.
  • Nick Shaw: Maybe in future.
  • Kevin Wheatley: Maybe I'm confused like Alex, but the idea that you have identical curves between a 48 nit and 100 nit display seems wrong.
  • Joshua Pines: Colorist feedback has been that they want the same curve for 48 nit projection and 100 nit SDR, or they feel something's wrong. All our colorists would love the SDR version Scott showed with the dark to dim turned off. Our transforms match exactly, and colorists are confused that the ACES ones don't match.
  • Scott Dyer: The theatrical and video are close enough that people expect them to look the same. But what about a deliverable for 200 nit? Where's the cutoff where you start opening up the highlights for HDR?
  • Joshua Pines: When you have a hero deliverable, any ancillary deliverable that's within +/- one stop of it you can get away with being the same. Beyond that it no longer holds. The psychophysics of display size is a factor. 48 nits on a cinema screen feels about the same as 100 nits on a TV. That's why IMAX could be darker than 48.
  • Thomas Mansencal: Reflective vs emissive is a factor too.
  • Nick Shaw: Does it go back to telecine, where your master was the print, and the expectation was that a telecine of that looked the same?
  • Joshua Pines: Not really. The print had different roll off, and the telecine was from the neg. And people made lift/gain/gamma adjustments. But yes they were expected to "look" the same. The perfect match between SDR and theatrical was a major selling point when DI was introduced.
  • Kevin Wheatley: My experience is similar. But maybe if we have some kind of continuum based on adaptation state etc, maybe these factors cancel out, and it ends up in a similar place. It makes sense to me to have a continuum, rather than suddenly springing into life, particularly with what Alex was showing where the displays could be every level in between.
  • Thomas Mansencal: It's very common now that gaming laptops are 400, 500 nits.
  • Scott Dyer: I think we can do both with some switch. It comes down to rendering intent, as Ed Giorgianni talks about in his book. Do you optimise for a display, or make it match another? But it would freak colorists out if SDR didn't match cinema.
  • Alex Fry: I was surprised how much difference that 48 / 100 nit SSTS made in terms of saturation.

Meeting #21, June 23rd, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Daniel Brylka
Sean Cooper
Bill Feightner
Alex Forsythe
Jean-Michel Gilbert
Zach Lewis
Thomas Mansencal
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Jed Smith
Shane Smith
Mike Whipple

Meeting Notes

  • Alex Fry: Following my demo last meeting of an LMT that tried to keep hue skews from the current RRT under a hue preserving DRT, I did some tests to help understand what's happening. I made a test which radiates out from white in CIExy, limited to a circle that fits in AP1, then went up and down a couple of stops. I plotted this in display linear. The current ODT collapses in concavely as exposure increases. Rec.709 and P3 don't match, but I realized this is due to the desat matrix in the Rec.709 ODT. Turn that off and they match, other than gamut clipping to Rec.709. Jed's DRT stays circular to a point then bends as it approaches white. I took Jed's path to white out and added my own version which does not kink.
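
A hypothetical sketch of the kind of test image Alex describes (the 0.13 radius is a guess at "a circle that fits in AP1"; conversion of the chromaticities to RGB for rendering is omitted):

```python
import numpy as np

white = np.array([0.32168, 0.33767])                # ACES white point in xy
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
radii = np.linspace(0.0, 0.13, 16)                  # 0 = white, outer ring
# Rays of chromaticities radiating out from white in CIE xy.
xy = white + radii[:, None, None] * np.stack([np.cos(theta), np.sin(theta)], -1)
exposures = 2.0 ** np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # "up and down a couple of stops"
print(xy.shape, exposures)                          # (16, 360, 2) plus gains
```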
  • Jed Smith: Did you have quasi-perceptual hue desat on?
  • Alex Fry: I tried with and without and still got kinks.
  • Jed Smith: I didn't see that.
  • Thomas Mansencal: Me neither last time I checked.
  • Joshua Pines: Is preserving the circles the right thing to do? Straight lines in CIExy aren't perceptual hue.
  • Thomas Mansencal: I think it's just using straight lines as a model without any assumption that is perceptually hue invariant.
  • Lars Borg: Equal colorfulness is not a circle, but cutting the distance to white in half roughly halves colorfulness for a given color.
  • Thomas Mansencal: To match SDR and HDR a chromaticity preserving baseline is a good start, and then you add layers on top of that to do color appearance etc. Traditional color appearance models don't work like that. Kim or CIECAM02 don't attempt to be chromaticity preserving. It's the entire model that produces the appearance match. If you take the Hunt effect for example, that doesn't preserve chromaticity as luminance increases.
  • Kevin Wheatley: This approach models a rendering transform that is neutral to the output device, and then there's a secondary step to map colorfulness and other correlates.
  • Thomas Mansencal: That approach gives you the option to not include particular factors. Maybe you ignore Hunt, so your HDR "pops" more.
  • Alex Fry: If you remove the "happy accidents" you can explicitly model them.
  • Jean-Michel Gilbert: The Bezold-Brücke Effect is one that particularly bothers me with per channel.
  • Sean Cooper: Chromaticity or hue preserving is not really correct. "Dominant wavelength preserving" might be more accurate. Or "white light mixing".
  • Jean-Michel Gilbert: White light mixing brings to mind the Abney effect.
  • Nick Shaw: I like Jed's term, "chromaticity linear".
  • Lars Borg: Dominant wavelength doesn't seem appropriate for displays, because you're really just talking about the three primaries of a display. What about "chart angle"?
  • Jed Smith: Here are my plots (recording at 17 minutes) that have no distortions.
  • Alex Fry: You have no inverse EOTF? I included and then inverted out the display encoding.
  • Jed Smith: The effect of my perceptual path to white is just this small shift. Its main goal is to address the appearance differences between SDR and HDR, as I attempted to show with the images (that Thomas hated!) that I posted on ACES Central a while back.
  • Thomas Mansencal: I didn't hate it! Comparing 200 nits to 100 is fair enough. I think for 600 nits, reducing SDR to 1/6 is a stretch, as the highlights in SDR are all at about the same level. Now appearance modeling has come up: where do we slot in the modeling of these effects, and which ones do we want to model? If you have the option to enable Hunt or not on a per DRT basis, the number of combinations explodes.
  • Kevin Wheatley: From TAC meetings we should pick one that's ok, and have a mechanism for handling variants for the experts. I haven't experienced enough HDR LUTs to comment on which we should pick. Does anybody here?
  • Joshua Pines: We used to offer multiple HDR versions of view transforms, but now it's reduced to two. One with the highlight roll off and desat of the SDR, and the other that preserves saturation as you go brighter. It's a per project creative choice. No obvious majority, maybe 60:40. Today most people start with SDR, so they fall in love with the dailies with the SDR look. But the jury is still out. I would be ok with two.
  • Carol Payne: There is the current situation, and where we want it to go. Starting with the bigger canvas is ideal. HDR monitoring on set is becoming more common. When we're not working with e.g. Technicolor, what Netflix use is the ACES OTs, so fixing those will be a huge win.
  • Jean-Michel Gilbert: The last game we used SDR as the master then derived HDR later, and had so many issues. This time we're starting HDR.
  • Thomas Mansencal: Starting in SDR the gamut and dynamic range are very limited, so when you go to HDR you are almost extrapolating, which can be dangerous.
  • Alex Fry: As long as most people are viewing SDR, people will fall in love with what they see most.
  • Joshua Pines: Hypothetically with the two options we offer, if there was only one, the other could be achieved with an LMT.
  • Bill Feightner: As imperfect as the existing OTs are, the invertibility is very useful for LMTs. Will whatever we come up with be invertible?
  • Alex Fry: I think we agree that's essential.
  • Kevin Wheatley: One of our goals is to remove the remaining areas where it's not quite invertible. Put them in an LMT or leave them to grading decisions.
  • Thomas Mansencal: Josh, is Eric Reinhard still with Technicolor? I wonder if any of his recent work is relevant here.
  • Joshua Pines: His work is continuing, but not under Technicolor. That group's early research work was great. I can reach out and tell him about this group.

Meeting #20, June 9th 2021, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Valentin Alt
Chris Brejon
Daniel Brylka
Sean Cooper
Thomas Mansencal
Daniel Mulligan
Michael Parsons
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith
Shane Smith
Mike Whipple

Meeting Notes

  • Alex Fry: Following on from the thread on ACES Central, I've been experimenting with an LMT to reproduce the look of the older RRT under the Open DRT, without using a full invert. It uses a truncated version of the ODT and copies the chromaticities across to the scene values. There's also a curve to match the tone scale, which is accurate for a black and white image, but a bit less with color. It still needs more work on highlight appearance for some images. This is similar to something I did at my last company to keep the feel of the SDR in the HDR, so HDR skies desaturated the same but had HDR dynamic range, which masked variation.
  • Kevin Wheatley: I've done more with the LUT curves using Principal Component Analysis to look at properties contributing to variations from the average. Mid grey hit almost exactly 10 cd/m² (10% of peak). It produced a number of curves, smooth at first, but later ones getting noisier, which can create all the variations by being added in different proportions to the average.
[See recording from 12:35 for detail]
  • Kevin Wheatley: DNEG did the same with LUTs they had. They had more variation, but their process didn't quite replicate mine. I built something in Nuke where I could add the first five components to the average. The first component affects the position of the highlight roll-off.
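
A minimal sketch of this kind of analysis (the data and variable names are hypothetical stand-ins, not Kevin's actual script):

```python
import numpy as np

# 'curves' stands in for the real data: each row is one LUT's tone
# curve, sampled at the same scene-exposure positions.
rng = np.random.default_rng(0)
curves = rng.random((67, 256))
mean_curve = curves.mean(axis=0)

# SVD of the mean-subtracted matrix gives the principal components.
u, s, vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
components = vt          # rows are component curves, first = largest
weights = u * s          # per-LUT mixing proportions

# Any individual curve is approximately the mean plus a weighted sum
# of the first few components, as described above.
approx = mean_curve + weights[0, :5] @ components[:5]
print(np.abs(approx - curves[0]).max())   # residual from 5 components
```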
  • Daniele Siragusano: HDR10+ uses a similar approach to build curves using multiple fixed curves in different proportions.
  • Kevin Wheatley: I want to see if the main components are replicable with standard grading operators.
  • Daniele Siragusano: Typically you only have about five DRTs, mutated by further color grading. If you took your curves and plotted them in 3D I think you would see repeating behaviors. I'm often asked to make scene looks from DRTs and find it's a modified version of something I've done before. It's like virus evolution! The Adobe PFE is the basis of a lot of what I see. If you average all the images in the world you get a Gaussian distribution, and to compress to a Gaussian, where most of the distribution is in the middle, you use a sigmoid, which is the integral of a Gaussian. The visual system is fitted to nature, and sigmoids are ubiquitous in nature. And in film, percolation effects cause the toe, and saturation effects cause the highlight roll-off, so that creates a sigmoid. There's an experiment demonstrating saturation: throwing darts at a wall of balloons, where the more balloons have already been hit, the smaller the chance of hitting one.
  • Joshua Pines: We're often asked specifically to create variants of one show LUT. And those different CDLs plus the same LUT end up being passed around. The S-curve also comes from the history of painting, and what artists had to do to represent high dynamic range scenes on low dynamic range media.
  • Daniele Siragusano: Natural systems, valves, audio etc. all have this saturation effect which digital doesn't, so we need to engineer it.
  • Sean Cooper: Even single photon avalanche detectors naturally arrive at sigmoid behavior. Kevin, earlier you said the greatest variation in the log-log plot was at the low end, but you say the primary component affects highlights.
  • Joshua Pines: On a true log plot, any DRT that goes to zero will shoot down to minus infinity at some point.
  • Kevin Wheatley: I trimmed the bottom ends for that reason. But we will need to factor in something for dealing with where black goes. Maybe something like what Gary and Doug did at the top and bottom end, where it doesn't roll off completely. The average curve that you add things to could specify things like where black is, what the slope at mid grey is, where diffuse white goes, etc.
  • Sean Cooper: What domain were the components analysed in?
  • Kevin Wheatley: The y-axis is linear cd/m2, and the x-axis is linear in a range of stops, so logarithmic.
  • Joshua Pines: These LUTs were all SDR, yes? 10% middle grey is almost a law. It's always that in SDR. Lars got an intern to analyze average pixel value relative to peak in some varied ASC samples, graded to "look nice". It came to [0.11, 0.09, 0.11] and that slight magenta shift was due to neutralizing DCI white. LAD density on print was 1.0, and density is log10 of 1/transmittance, so a transmittance of 0.1 becomes a density of 1.0. The CTL for the ODTs explicitly says 0.18 maps to 10%.
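
For reference, the density relation Josh cites works out as:

```python
import math

# Status density D = log10(1 / transmittance): a print patch that
# transmits 10% of the light has density log10(1 / 0.1) = 1.0,
# the LAD aim mentioned above.
print(math.log10(1 / 0.1))  # -> 1.0
```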
  • Nick Shaw: Some variation in LUT exposure could be down to whether they were intended to be used with "print down" CDLs, or if that was baked into the LUT, because often there is a print down somewhere to force DPs to use more light.
  • Joshua Pines: Our show LUTs are always biased to make DPs overexpose a bit.
  • Daniele Siragusano: Isn't it also important to have a function which translates in a continuum between many different viewing conditions? If you take tone curves designed for simulating biological processes, like the Michaelis–Menten curve I proposed, you end up with the same thing.
  • Nick Shaw: It would be interesting to plot differences from that curve, rather than the average.
  • Daniele Siragusano: Or see if you could find parameters for that equation to match the average curve.
  • Kevin Wheatley: It's a shame we don't have a comparable dataset for HDR monitors.
  • Joshua Pines: If you have something that naturally translates to different viewing conditions, you only align the data for one viewing condition, and it maps from there, knowing you started by matching common SDR practice.
  • Joshua Pines: It's very interesting all these LUTs are so shockingly similar.
  • Kevin Wheatley: This was just the tone scale. I suspect if you looked at the color there would be more variation. But we don't need to worry too much about the aesthetic changes.
  • Rémi Achard: We'll look again at why we saw the variation we did. Maybe it's down to different categories of production. Live action vs animation etc.
  • Sean Cooper: It would be interesting to see if there was variation between on set LUTs and final DI ones.
  • Kevin Wheatley: The ones I had weren't final DI versions.

Meeting #19, May 26th 2021, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Sean Cooper
Harvey Landy
Zach Lewis
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith
Mike Whipple

Meeting Notes

  • Kevin Wheatley: At TAC meeting last week consensus seemed to be that alternate renderings were acceptable. So our reference rendering doesn't need to solve everything for everybody. What we do make should fit in a potentially interchangeable framework, like Daniele's document.
  • Daniele Siragusano: It's an expression of the idea, not an implementation guide. First it clarifies terminology: Color Management System (TCS, OCIO, etc. – actually does something in a real product) and Color Management Workflow (set of rules to drive a CMS). The suggestion is we come up with a meta-framework, so designers of a CMW define it against that meta-framework rather than a specific CMS. It can then be translated to any CMS – do it once, not many times. Things you might need to specify: distinction of scene and display referred; reference color space and observer; viewing conditions. CLF can support lots of stuff, once it's supported everywhere. We need a set of minimum requirements – input spaces and displays.
  • Kevin Wheatley: This group should only tackle what we need to, and not make more work for ourselves. But we should make our reference rendering based on something like this. We could put the existing rendering into this, and an updated version of it, and something started from scratch. That gives us multiple CMWs.
  • Daniele Siragusano: The meta-framework should not dictate order or process. Any custom workflow should be accommodated. Scene and display state and either open or opaque transforms between them for different viewing conditions. ICC Max is powerful but very complicated.
  • Thomas Mansencal: I thought the TAC showed some reluctance about ACES being a meta-framework. We should check back with Rod. I think they were ok with multiple transforms, but not sure about ACES becoming only a meta-framework.
  • Kevin Wheatley:  They didn't want ACES watered down so it became meaningless. But just providing the old and new renderings within this framework is providing something.
  • Carol Payne: There was concern if we open it up too much, what is ACES?
  • Daniele Siragusano: The paper ends with the analogy of 35mm as a meta-framework for what ACES would become.
  • Carol Payne: This framework could come from ACES or somewhere else. The bones of this are in OCIO. Is ACES the framework, or something that gets put in a framework?
  • Kevin Wheatley: The framework could just be a specification of the information you have to provide, rather than an actual implementation of a technology.
  • Thomas Mansencal: It crosses over with AMF, so that could be extended to incorporate this.
  • Daniele Siragusano: The question is what ACES should be. Is it one CMW, and others can't publish alternatives along that road? Then you get the situation we have now, where people need inverse workflows. Or do we say that the reality is we want diversity and innovation?
  • Thomas Mansencal: Intersection of requirements towards the mean can lead to mediocrity. It reminds me of Khronos and OpenGL, where you had 20 people at the table, and it was really hard to take a step forward.
  • Daniele Siragusano: This is just per pixel, and if you make it open enough that anything can be done with it, I don't think Khronos and Vulkan with OpenGL is comparable.
  • Alex Fry: ACES already is the block diagram, and some things you can swap out. Is it that different? It's just a graphic representation without the detail of the CTL.
  • Thomas Mansencal: It would be a big task. Do we have the cycles and energy to do it?
  • Carol Payne: As Kevin said, if we make the reference transform in such a way that bits can be modified or replaced, then later we could put a framework like this around it.
  • Daniele Siragusano: This is orthogonal to the reference rendering.
  • Joshua Pines: Originally ACES was just the color space and encoding (IIF – Image Interchange Framework).
  • Sean Cooper: It's a step backwards (not a regression) to focus on interchange as what ACES provides. Like CLF which is an über-LUT encompassing many transforms.
  • Alex Fry: So are we talking about an implementation or a set of ideas?
  • Thomas Mansencal: It needs to be a well defined spec. More concrete than Daniele did with that paper.
  • Daniele Siragusano: We could look at all the CMS that exist and see what is common to them.
  • Sean Cooper: We could go back to the TAC with the idea that the conflicting requirements have led us to this idea of a framework, and maybe another group could be formed to look at it.
  • Kevin Wheatley: It's good to go back with concrete examples. Anything else? Thomas mentioned "what does it mean to be parametric?" There was a desire to be careful, to keep parameters to something used at an implementation level to set up output for a new display type. Not changing LMTs on the fly.
  • Nick Shaw: Would creative choices like where mid grey maps to or mid tone contrast be parameters or in LMTs?
  • Kevin Wheatley: I think a reference rendering has to make a fixed choice on these things. Not a user parameter.
  • Alex Fry: It's interesting what Mike said last time about people preferring the brighter HDR of 1.2 for daylight scenes, and for darker scenes preferring the older one which was more similar to the SDR.
  • Scott Dyer: Those adjustments are just a scene-referred exposure adjustment – linear gain. Really the colorist should adjust the exposure.
  • Joshua Pines: We used the first version and were fine. But some colorists wanted it to "look good falling off the truck" so it was altered to something in the middle so they didn't have to go so far to go bright or dark.
  • Carol Payne: If you put it in an LMT the colorist doesn't know if it's part of the LMT or in the DRT.
  • Daniele Siragusano: If there is a slider, colorists will tweak it per scene or shot. I think a DRT should be static.
  • Alex Fry: Has anybody seen the SSTS mid grey setting in Resolve used for real?
  • Joshua Pines: We've seen it cause problems when it was introduced.
  • Nick Shaw: I think Michael Chenery said it was asked for by a client.
  • Kevin Wheatley: I think we agree it shouldn't be exposed to the user, which answers the question of how OCIO should deal with parameterized DRTs.
  • Alex Fry: I could imagine the choice of the HDR looking like the SDR or not would have to be a parameter, or you double the number of ODTs.
  • Kevin Wheatley: That would be easier to discuss once we have something in front of us.
  • Alex Fry: We have three renderings now, the current one, a stripped back version of it with the sweeteners in an LMT, and Jed's DRT. We could build a single system containing all those.
  • Scott Dyer: We could look at the three and see the pros and cons related to the issues that need solving. I feel a lot of the issues are small fixes, but maybe some need a different approach, which may introduce its own new issues.
  • Kevin Wheatley: So we need to list those and any others people suggest, and task somebody to build something so we can do that comparison.
  • Alex Fry: I can do a Nuke node graph that replicates the ACES block diagram.
  • Kevin Wheatley: Do we compare on a single display or multiple?
  • Alex Fry: Has to be multiple. Doing just Rec.709 is easy. SDR vs HDR is where it gets tricky.
  • Carol Payne: We could make an OCIO config like we did with the gamut mapping group.
  • Thomas Mansencal: We need a LUT implementation. But we need to be careful with precision, especially for HDR.
  • Alex Fry: I'll report back on the monitoring survey next time.
  • Kevin Wheatley: We need to agree a reference viewing environment, even if it's just turning the lights off and setting brightness. Then later we can extend to other situations. If people have anything else they want to bring up next week, maybe post on ACES Central first, or email me, so we have a heads up.
  • Chris Brejon: I can share the version I already have of what Alex is talking about building.
  • Alex Fry: One question is can we keep the ODTs separate? Where is the cut point? Do we stick with OCES?
  • Carol Payne: In OCIO XYZ is the display reference.
  • Alex Fry: I assume it wouldn't include the EOTF.
  • Kevin Wheatley: Not if you want to emulate one display on another.

Meeting #18, May 12th 2021, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Sean Cooper
Alex Forsythe
Francesco Giardiello
Thomas Mansencal
Michael Parsons
Carol Payne
James Pickett
Joshua Pines
Matthias Scharfenberg
Florian Schleich
Daniele Siragusano
Jed Smith
Shane Smith
Mike Whipple

Meeting Notes

  • Kevin Wheatley: I have made some plots showing the luminance tone curve of a range of 100 Rec.709 show LUTs I have, converted to map ACEScg (R=G=B) to display luminance Y.
  • Lars Borg: Plotting as log log would show if a distinct gamma curve is used.
  • Kevin Wheatley: It shows some trends. Some have high black levels and some low. RRT and others based on it clamp to zero. Ones derived from K1S1 cluster at a raised black. I plotted the average, which has a gradual roll-off rather than a clipped black. Mid greys are all at a similar level. I also plotted the slopes. It would be good if others could do the same with LUTs they have.
  • Lars Borg: The mid-tone contrast varies a lot, so the system should provide for controlling that.
  • Kevin Wheatley: The RAE paper talked about an "averagely average curve" but I didn't think it would come out that cleanly. I had 100 LUTs and removed duplicates to get that down to 67.
  • Thomas Mansencal: It's probably biased to the clients you work with. But they are the way big movies are currently done, so maybe it's ok to bias towards that.
  • Joshua Pines: I could easily see our LUTs in the plot. I like the idea of sampling a wide range.
  • Scott Dyer: It would be interesting to see where Jed's current iteration plots on there. Yours has less shadow contrast compared to ACES, which gives shadow "glow". Your highlights are more graceful and less contrasty. I'd be interested to see something somewhere in between.
  • Jed Smith: My formula is pretty customizable, so you could make something more like ACES.
  • Thomas Mansencal: It would be useful to see if it could be made to fit any of the curves Kevin showed. The SSTS, being a spline, can fit anything.
  • Daniele Siragusano: If you have a curve and add another before it you can match anything.
  • Thomas Mansencal: You risk the added curve and the RRT/ODT curve "fighting".
  • Daniele Siragusano: It's not normally a problem if the DRT has less contrast.
  • Nick Shaw: As long as the DRT goes to zero. If a curve flattens above zero you can't push through that.
  • Lars Borg: I've been surprised that many people want to map SDR black to ~3%.
  • Joshua Pines: We make sure our Rec.709 LUTs go to zero, or they will fail QC.
  • Lars Borg: The person I spoke to said "I have no control over blacks on the desktop. 3% is milky, but the same milky everywhere!"
  • Daniele Siragusano: The extreme slopes in the RRT based ones come from simple grading operators applied in the ACEScct working space. This happens when you change the working space but not the DRT. The RRT was not designed for ACEScct. You don't want sudden curve changes in the display encoding space, which are mostly pure gamma.
  • Nick Shaw: The newer color space aware grading tools make the working space less critical.
  • Jed Smith: I posted a couple of Nuke setups. The first is based on the K1S1: a tone curve applied in AWG, then ARRI's modified AWG to Rec.709 matrix (which alters the primaries to desaturate red and green) applied in display linear, then the inverse EOTF. I'd been thinking about the effect of rendering primaries, so I made a setup which lets you do a similar thing with various different primaries, including a method for dragging primaries around interactively in CIExy, to see how the rendering primaries affect output. You can adjust both the rendering primaries and the display primaries to affect saturation. It can use either Daniele's tone mapper, or a piecewise hyperbolic one I have been experimenting with.
  • Nick Shaw: Are there settings for that which let you use primaries backed out from the modified ARRI matrix?
  • Jed Smith: You could, but I didn't calculate the actual values.
  • Thomas Mansencal: I had a client who wanted to replicate IPP2, and the only way to do it was to change to RWG primaries for rendering. Have you also tried FilmLight E-Gamut?
  • Jed Smith: That's in there, along with DaVinci Wide and AP0 – you can select any of them.
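
A minimal sketch of the per-channel structure Jed describes (the matrix is an identity stand-in, not ARRI's actual modified AWG to Rec.709 matrix, and the curve and encoding are placeholders):

```python
import numpy as np

def tonescale(x):
    # Placeholder per-channel S-curve applied in the rendering primaries.
    return x / (x + 0.18)

# Identity stand-in: ARRI's actual modified AWG -> Rec.709 matrix
# (which desaturates red and green) is not reproduced here.
M_render_to_display = np.eye(3)

def per_channel_drt(rgb_render):
    # Tone map per channel, matrix to display primaries in display
    # linear, then an approximate inverse EOTF for encoding.
    display_linear = tonescale(rgb_render) @ M_render_to_display.T
    return np.clip(display_linear, 0.0, 1.0) ** (1.0 / 2.4)

print(per_channel_drt(np.array([0.18, 0.18, 0.18])))  # mid grey in, ~0.75 out
```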
  • Scott Dyer: That's great. We tried a ton of rendering primaries during development of ACES 1.0. We solved from a training set to find primaries that minimized hue distortions. We looked at how the RIMM/ROMM primaries were derived. Some worked well until you found an image they worked terribly on. Primaries close to the encoding ones of a given camera work for media from that camera. The simple chromaticity preserving method Nick showed last week looks great in HDR, but falls down in SDR.
  • Jed Smith: I showed this as an experiment but I still think chromaticity preserving is the way to go for HDR.
  • Nick Shaw: So if that works for HDR, we need to find the right appearance matching modifications to make it work in SDR.
  • Scott Dyer: We have building blocks that work in different situations, so we need to combine them to make something that fixes the issues we are having.
  • Alex Forsythe: The optimizations for finding RIMM/ROMM primaries work well in SDR but fall down in HDR, because it's so tone scale dependent, and in HDR the toe and shoulder are much further out. The majority of the colors that we're looking at aren't in that region. There's a strong link between the effect of the primaries and the tone scale you are applying in those primaries.
  • Alex Fry: Am I right that the early iterations used AP0 as the rendering space, and later changed to AP1?
  • Scott Dyer: The tone scale was constantly changing during development, so that made finding primaries difficult. I think we should nail down the tone scale we want first, by looking at black and white pictures.
  • Thomas Mansencal: Is there a case for having varying rendering primaries with different tone scales for different dynamic ranges?
  • Jed Smith: It seems like a bad idea to me.
  • Scott Dyer: It makes it easier if your rendering primaries are aligned with your display primaries, but we're making a display independent system.
  • Jed Smith: Looking at FilmLight and DaVinci's work, there may be merit in using a space that encompasses all the camera encodings.
  • Daniele Siragusano: Note that E-Gamut was not designed as a rendering gamut, so if it works as one it's an accident!
  • Thomas Mansencal: What appearance effects do we want to consider? The current OT has surround compensation. Hunt is an important one for me. People should look at the Gary Demos video for a run down of the phenomena to be considered.
  • Kevin Wheatley: We need to find a default rendering that we don't change, so when the conditions change we use an appearance model to adapt. It will break when trying to match between extremes, so won't solve everything, but should between similar conditions.
  • Thomas Mansencal: Now surround compensation is in the tone curve, but I think it should be a block after the rendering.
  • Alex Fry: Matching cinema and Rec.709 or SDR and HDR are two problems that probably need two solutions.
  • Jed Smith: Are there any CAMs commonly used for HDR?
  • Thomas Mansencal: I've used Hunt when switching between displays, like VR headsets which aren't HDR but are so near the eyes they might as well be. It's a very different viewing condition.
  • Alex Forsythe: It should be a similar compensation to what you use for SDR.
  • Kevin Wheatley: There has been color science research on HDR appearance models, but nothing is settled. One thing we definitely need is gamut mapping. Not the display referred one we have, but something better than the clipping in the current Output Transforms.
  • Alex Fry: VR headsets could be very useful for testing, because you control your entire environment, and could make two virtual displays you could turn your head between.
  • Jed Smith: The Helmholtz-Kohlrausch effect is one I think we need to consider, where saturated colors appear brighter. DRTs with artistic input seem to darken saturated colors.
  • Lars Borg: I can comment on that. One of the early ACES test images was astronauts in a grey parking lot with yellow stripes. After tone mapping the yellow lines looked too bright compared to the asphalt. Tone mapping needs to maintain the contrast between greys and color. That version failed to do that.
  • Daniele Siragusano: This is an interesting related article.
  • Kevin Wheatley: I plan to write up the process used to produce my graph.
  • Thomas Mansencal: We still need those documents on terminology etc.

Meeting #17, April 28th 2021, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Caleb Carges
Chris Clark
Sean Cooper
Alex Forsythe
Francesco-Luigi Giardiello
Jean-Michel Gilbert
Ebs Hasche
Zach Lewis
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
Florian Schleich
Daniele Siragusano
Jed Smith
Shane Smith
Troy Sobotka
Dave 

Meeting Notes

  • Kevin Wheatley: We plan to post a survey on ACES Central to find out what people have access to by way of monitoring and calibration kit. Anything else people think we should add? Nick has something to show.
  • Nick Shaw: I decided to experiment with making an LMT to work with a simple chromaticity preserving DRT, so all the path to white is in the LMT, and see how that LMT worked across different targets. I have no HDR monitor, so I gave it to Scott to look at on his X300, and he can comment. I wrote two simple DCTL DRTs, one based on the SSTS and one on the original Jed/Daniele tone-mapper. Both use the same approach of applying the tone map to a norm and then multiplying RGB by the ratio of the tone-mapped to un-tone-mapped value. No path to white or anything else. Display encoding is in a separate node, so you can look at the first 100 nits of HDR if you only have an SDR monitor, or output PQ if you have a PQ monitor. I made an LMT from a slightly tweaked K1S1 (curve extended linearly for highlights) and an inverse of the relevant SDR DRT. So in SDR it matches pretty exactly, because it goes forward through the DRT it just went backwards through, cancelling out and leaving you with the tweaked K1S1. In HDR, because the highlight desaturation is all in the LMT at SDR levels, this limits what HDR can do.
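
A minimal sketch of the norm-and-ratio approach Nick describes (the tone scale and the choice of norm here are placeholders, not his DCTL):

```python
import numpy as np

def tonescale(n):
    # Placeholder tone scale (scene linear in, display linear out),
    # standing in for the SSTS or the Jed/Daniele curve.
    return n / (n + 0.6)

def chromaticity_preserving_drt(rgb):
    # Tone map a norm, then scale RGB by the ratio of tone-mapped to
    # un-tone-mapped norm, so RGB ratios (chromaticities) are preserved.
    norm = np.max(rgb, axis=-1, keepdims=True)  # the choice of norm varies
    ratio = tonescale(norm) / np.maximum(norm, 1e-10)
    return rgb * ratio

print(chromaticity_preserving_drt(np.array([4.0, 2.0, 0.5])))
```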
  • Scott Dyer: It definitely limits the ability to get saturation back in HDR for skies etc. But for the kind of person who wants their HDR to look "the same" as the SDR it works. Highlights get brighter with more detail but no extra saturation. The character of the image stays the same – ARRI-like with extended highlights. I didn't try any grading through it.
  • Nick Shaw: Because the K1S1 maps LogC zero to display zero, LogC black (CV95) ends up quite high. This LMT matches that, so the blacks are high and you can't grade down through that floor like you could grading LogC through the real K1S1. I aimed to match K1S1 exactly, but for a real LMT I would sit the blacks down at zero, as you can grade them up, but not down.
  • Daniele Siragusano: Would you expect to match appearance between HDR and SDR with this kind of simple non-bleaching DRT?
  • Nick Shaw: Probably not because it's purely chromaticity preserving.
  • Daniele Siragusano: This is why I think you can't have a single LMT with this kind of DRT because the DRT does no appearance matching.
  • Kevin Wheatley: This shows that LMTs need to extend beyond what the SDR can do.
  • Nick Shaw: If there's no variation in the DRT saturation roll off between targets, the SDR will crash hard into its limits with a single LMT that keeps going there for the HDR.
  • Kevin Wheatley: FilmLight's Vimeo channel has good demos from Daniele on these issues. Particularly the T-CAM and sRGB ones. The appearance match needs to understand whether you are targeting both a different peak brightness and surround. The current OTs don't explicitly handle those permutations.
  • Daniele Siragusano: It's a nice idea to make the DRT very simple and put everything in the LMT, but that example shows you went too far. It needs to be a bit more complex so you don't need an LMT for every viewing condition.
  • Nick Shaw: Indeed. Otherwise it becomes academic whether you have a fixed DRT and an LMT family, or a DRT family.
  • Kevin Wheatley: Except for those who have to create all those LMTs. If you put more in the DRT it's hopefully less work across multiple projects.
  • Jean-Michel Gilbert: Maybe after the DRT you need an AP1 to target mapping using the method described in BT.2407. Using hue and luminance mapping they seem to achieve good results. [others indicated doubt that this was entirely the case]
  • Joshua Pines: Today if there's an SDR show LUT for dailies we make an LMT of that. Then for HDR we make an HDR version of it with a separate LMT, going through the inverse of the HDR ODT. If your initial SDR LUT is band limiting you're limited by that. People say can't you make an HDR friendly SDR LMT that's less limiting. Nick, you said you modified the K1S1 a bit.
  • Nick Shaw: Just extrapolating the curve linearly just before it goes flat. Nothing to add back saturation up there.
  • Joshua Pines: Right now we make different LMTs. If the client has one (SDR) LUT and asks for one LMT for all deliverables it's going to be tough.
  • Chris Clark: We see that too. But in the future, building LMTs from scratch, are we resigned that one LMT won't work? Isn't that a goal of ACES?
  • Joshua Pines: Depends how LMTs are created. Not every LMT will magically work for all outputs. If an LMT is band limiting and luminance limiting, it is limited. I've often been asked, can you make a K1S1esque LMT that extends up in the highlights and doesn't desaturate, but works to match in SDR?
  • Nick Shaw: It really depends on the differences between members of your DRT family, because they define the appearance relationship between SDR and HDR. And one person's idea of what that relationship should be might not match somebody else's.
  • Daniele Siragusano: If the LUT is SDR, you can only compare for a "match" with the SDR version. But making these kinds of scene looks is something we do on a daily basis. It comes down to well behaved DRTs. Monotonic and predictable. I would say you can satisfy a large group of people with one LMT. But of course you still need a trim grade. You can't have something that works for every scene.
  • Alex Fry: There's a distinction between LMTs designed from scratch for multiple outputs, and those which try to shoehorn an existing transform into the system.
  • Daniele Siragusano: We have scene looks to emulate K1S1, ALF-2, IPP2, and various ACES versions, and it works reasonably well. Of course you need to extrapolate, so something that produces, say, a dark red may not do the same when you extend it. What you need is something predictable, so if you see something in one viewing condition you trust the DRT to produce something reasonable in others. You need to compromise your perfect match in SDR to allow for the HDR.
  • Kevin Wheatley: We need to figure out what's missing from such a simple DRT. Also I believe the RRT settings were not optimally matched across the different conditions. ACES started with the SDR ODTs, and then the HDR ones came along, and didn't quite match because they were made in a different way. We need to avoid that by starting with something that can be adapted for all outputs.
  • Nick Shaw: My experiment was a demonstration of the limits, not a proposed solution. But there's the issue with HDR and SDR that they are different because SDR is relative, and ACES takes that same rendering for 48 nits and stretches it to 100 nits (with dark to dim adaptation), but HDR is absolute, and grey is a fixed nit value and the various versions open up the highlights.
  • Kevin Wheatley: Is it sensible to have one relative, and the other absolute?
  • Daniele Siragusano: Why do you say relative? Everything is relative to its peak.
  • Scott Dyer: We do do different things. SDR we map a tone curve to 0-1 and then convert that to code values. That's also kind of what we do in HDR, but…
  • Daniele Siragusano: Absolute and relative complicates it unnecessarily. BT.1886 is the same principle. You define 1.0 is 100 nits, and you can say what the nit value of every code value is. Same for PQ. And choosing to map 48 nits to 100 nits with a multiply rather than opening up the highlights is a decision to compensate for difference in screen size.
  • Alex Forsythe: There is the PQ issue that it defines itself as absolute, which is actually nonsensical. But other than that I agree with you.
  • Lars Borg: Every TV has a brightness control.
  • Scott Dyer: ACES 1.0 had 48 nit cinema as the reference, and we tried various ways to map our intent onto the display. Feedback always told us blacks were important and they looked wrong. So we leveraged SMPTE EG-432-1 Annex I – Encoding of Colorimetry above Theatre Black.
  • Daniele Siragusano: Absolute black is a misconception.
  • Kevin Wheatley: PQ tells you how to encode the values, not how to choose your black point. We could choose that based on a contrast ratio, with an assumption about minimum flare. Kind of like cinema black.
  • Alex Fry: It would be nice if the CTL was consistent. It would make it easier to read. It's confusing having something go through 48 nits but ending up at 100.
  • Nick Shaw: You can see the difference if you play with values in the SSTS. Using Y_MAX = 48 and Y_MID = 4.8 and then scaling gives you a different result to using Y_MAX = 100 and Y_MID = 10.
  • Kevin Wheatley: But the question is why is that the case. Is it deliberate for surround effects? Some of it comes from the history of the development, and differences in the ethos of implementation. Hopefully we can simplify that to one consistent mechanism. Then we need to figure out appearance compensations for surround etc. The mechanisms in the current CTL aren't complete.
  • Nick Shaw: If we separate the display encoding into a separate module then everything can be consistent up to that point, and then the last bit is just encoded however is necessary to get the encoding + decoding = NoOp for display colorimetry.
  • Scott Dyer: I'm working on some DCTL modular stuff to help put together prototypes like Jed's been doing in Nuke.
  • Jed Smith: I'm close to releasing a new version of what I've been working on. Then porting it to DCTL is next on my list.
  • Scott Dyer: Node trees are ideal for constructing a transform so you can switch off or switch out individual components.
  • Alex Fry: Troy, do you want to expand on what you asked in the chat?
  • Troy Sobotka: Using the Sean Cooper metaphor, are we filling the gamut like a balloon or like a bucket of water? It may be worth comparing both in terms of gamut volume.
  • Jean-Michel Gilbert: As luminance increases, the gamut volume always tapers to a single point.
  • Troy Sobotka: That's a separate issue. You can tip any RGB cube on its end and it tapers to a point. I'm talking about how that gamut volume fills up. Jed's approach is balloon-like. Has anybody tried the bucket-like?
  • Joshua Pines: Unfortunately it's a creative choice. Some want a beautiful gradual desaturation. Others want a "rave in the desert" look. I wish grading systems had knobs to control this. Do we want to impose a strategy on something that should be creative?
  • Troy Sobotka: I'm talking about underneath that architecture. That has an implicit assumption everybody's working with a balloon.
  • Daniele Siragusano: The question is how much space do you need to shoehorn natural images into an artificially shaped volume, and still get a natural looking image? And how much naturalism do you want to maintain?
  • Troy Sobotka: Everybody's talked about "bleaching" but what is the fundamental mechanic? Ultimately it fits into the display, so that's an ancillary post mechanic.
  • Daniele Siragusano: If you try to bring everything in, you sacrifice a lot. At some point you can let it go. So the balloon fills more than the gamut boundary.
  • Troy Sobotka: If you take say the red channel to maximum, there's no rule to say you have to start gamut compression before that.
  • Daniele Siragusano: There are no rules. But if you do that can you match a 4000 nit pulsar and a 48 nit cinema screen?
  • Jed Smith: The first version of the naive DRT I posted was like a balloon inside a box, and with the path to white, the boundaries of the cube are outside the balloon. It produced a pretty lifeless image. Later iterations have been more like what Daniele described, where the balloon is bigger than the cube. It doesn't go to white immediately for reasonably exposed images.
  • Nick Shaw: It makes sense to me to have something that has a path to white that converges outside the cube, so it gets close, but doesn't have to completely reach it within the cube.
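
One hypothetical way to express that (a sketch only, not a proposal; the overshoot parameter is made up for illustration) is to put the point of full desaturation beyond display peak, so values inside the cube approach achromatic but never have to reach it:

```python
import numpy as np

def path_to_white(rgb, overshoot=1.25):
    # use max(R,G,B) as an intensity proxy; full desaturation is only
    # reached at 'overshoot' times display peak, so within the cube
    # (intensity <= 1.0) the blend factor stays below 1.0
    intensity = np.max(rgb, axis=-1, keepdims=True)
    t = np.clip(intensity / overshoot, 0.0, 1.0)
    return (1.0 - t) * rgb + t * intensity  # lerp toward achromatic
```
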
  • Kevin Wheatley: Going back to the earlier discussion, if you have a Rec.709 gamma 2.2 display and a P3-D65 gamma 2.2 display, the LMT doesn't know that those have different gamut volumes. Only the DRT knows the destination.
  • Troy Sobotka: It's imperative that we define what the mechanic is doing or it's an ill defined problem that people are randomly testing solutions against.
  • Kevin Wheatley: I think we need to factor out different aspects, and only evaluate one at a time, so we're not comparing 4000 nit HDR all the way down to cinema in one go, and can understand which factors are more or less important.
  • Scott Dyer: I hope to put stuff on ACES Central soon from my work.

Meeting #16, April 16th 2021, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Sean Cooper
Francesco Luigi Giardiello
Jean-Michel Gilbert
Harvey Landy
Zach Lewis
Thomas Mansencal
Michael Parsons
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith
Troy Sobotka
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: We're proposing meetings every other week (same time) to give people more time to do things.
  • Chris Brejon: Can they alternate with OCIO meetings?
  • Kevin Wheatley: There are multiple OCIO meetings. We'll take a poll. One thing we haven't discussed is people's monitors and viewing environments. What do people have access to?
  • Thomas Mansencal: And calibration state.
  • Kevin Wheatley: Even a simple diagnostic pattern like this one I made would help. It could be more complex. Many factors – flare, EOTF, RGB primaries, white point, environment. We also need one environment to be the reference for the tone scale. The existing reference is dark cinema. Is that right, or is something in the middle better?
  • Joshua Pines: Two things in favor of cinema as the reference: 1. this is under the Academy of Motion Picture Arts and Sciences; 2. a cinema projector is something we can be confident is fairly consistent everywhere, in calibration and surround.
  • Kevin Wheatley: Indeed, that is a key reference. But I wondered if the tone scale should be based on just that, or if a more office environment could be defined or taken from a standard.
  • Nick Shaw: It might make sense to reference something in the middle, so you adapt up and down, instead of only up from the darkest.
  • Jean-Michel Gilbert: Starting from an average it might be easier for Dolby Vision metadata to adapt.
  • Thomas Mansencal: A brighter surround as a default might make sense because the way people watch movies has changed. CG artists don't tend to work in the dark either.
  • Joshua Pines: Our displays are reference standards. End users' displays are uncontrolled. sRGB vs 2.2 gamma etc. Daniele has a great webinar on that. Avoiding sRGB conundrums is preferable. When we are evaluating shadow roll off in the tone map, we're all going to be looking at slightly different things.
  • Thomas Mansencal: Unless we get all of us to calibrate our displays to the same standard, and have the reference surround. The viewing conditions define the ODT.
  • Joshua Pines: You're also at the mercy of the application you use to view it.
  • Alex Fry: We need diagnostic images, because the sRGB/2.2 difference is far greater than our tone scale adjustments.
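
The size of that difference is easy to quantify: the piecewise sRGB curve has a linear toe while a pure 2.2 gamma dives toward zero, so near-black code values decode almost an order of magnitude apart (using the IEC 61966-2-1 definition of sRGB):

```python
def srgb_eotf(V):
    # piecewise sRGB decoding per IEC 61966-2-1, with its linear toe
    return V / 12.92 if V <= 0.04045 else ((V + 0.055) / 1.055) ** 2.4

def gamma22_eotf(V):
    return V ** 2.2

V = 0.02  # a near-black code value
print(srgb_eotf(V), gamma22_eotf(V))  # ~0.00155 vs ~0.00018, roughly 8x apart
```
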
  • Thomas Mansencal: Some who say it's too contrasty may just be viewing it wrong. Can we make a chip system to assess displays without colorimeters?
  • Alex Fry: I've been using camera phones to assess surround, where the monitor is a known constant.
  • Kevin Wheatley: The BBC did surveys using phone light meters.
  • Jean-Michel Gilbert: ICC profiles seem to be broken in Windows 10.
  • Joshua Pines: Not just in Windows 10.
  • Thomas Mansencal: Limiting participation to people with hardware calibrated displays would be limiting.
  • Sean Cooper: We don't want to exclude people, but we could weight the opinions of those with hardware calibrated monitors higher. This group should all know what the state of our monitors is.
  • Thomas Mansencal: In the current situation not everybody may have a calibrated monitor at home. But we could provide tools to help calibrate. We should take a poll on ACES Central on what people have.
  • Kevin Wheatley: In the long term we definitely want to get into reference cinemas, but we want to be able to start work with what people have access to now.
  • Sean Cooper: For absolute assessments people need calibrated displays, but for relative judgements more can contribute.
  • Joshua Pines: In the end we'll have to show what we come up with to colorists, and they only trust their own calibrated displays. So we should also refer to calibrated displays.
  • Zach Lewis: We've been using iOS devices for VFX reviews. How accurate do you think those are? How accurate are iPads?
  • Joshua Pines: We've tested iOS devices, and device to device consistency is very good. But for high nit output on latest iPad Pros the blacks fog, so SDR is good, but HDR not so good. Other devices have better HDR than iOS but color management for them is non-existent.
  • Alex Fry: It's hard to mess up the settings on iOS.
  • Thomas Mansencal: Unless you leave True Tone on.
  • Kevin Wheatley: Remote viewing tools are acceptable unless compression gets in the way.
  • Zach Lewis: Would a common R&D OCIO config that we could all use be useful?
  • Thomas Mansencal: Until OCIO 2.0 is everywhere it would have to be LUT based. Some current complaints about ACES are actually due to LUT limitations in OCIO 1.0.
  • Kevin Wheatley: I imagine we would come up with implementations in DCTL and Nuke, like the gamut mapping group did. I don't think we need to go to OCIO.
  • Nick Shaw: Are people here going to be using computer monitors to assess results? Or do some have SDI/HDMI connected video monitors?
  • Kevin Wheatley: That's a poll topic. We need to think what our requirements and tolerances are.
  • Thomas Mansencal: Maybe we can design a small studio environment with a grey sheet behind the monitors, so that everybody can do the same.
  • Alex Fry: This [28:15 in recording] is what I've been using to assess environments. It relies on the monitor as a known constant. Harder if people don't all have the same monitor.
  • Thomas Mansencal: There's the thing people do in games where they adjust the gamma. Not that we should do that, but it gives you an idea of viewing conditions.
  • Alex Fry: I use an app called Halide which gives you DNG raw from an iPhone. And alternating-pixel gamma check images are a good way to check the whole pipeline. Should we start from a spec (BT.2035) that defines a viewing environment?
  • Daniele Siragusano: But the background changed in BT.2100 from 10 nits to 5 nits to align with CIE dim surround. And average grading suites.
  • Jed Smith: I've been working on HDR in my DRT, despite not having an HDR monitor. I've been looking at Daniele's curve. Trying to wrap my head round the relationship between the input and output domains.
  • Daniele Siragusano: Both domains are linear. Then there is a scale on the output side to align display spaces, so 48 nits Cinema doesn't end up at 48 nits in Rec.709. The normalized white, "what 1.0 is". In PQ 1.0 may mean 10000 nits. In HLG it's blurry, but say 1000.
  • Jed Smith: But you still set a relative exposure for the input domain, so the compression works correctly.
  • Thomas Mansencal: Exposure may be a creative choice.
  • Daniele Siragusano: But on average diffuse white at 1.0 is "correct" exposure.
  • Jean-Michel Gilbert: Maybe you are talking about the max nits of the monitor. A "family" of DRTs as Daniele calls it.
  • Jed Smith: There's a controversy between anchoring to 18% grey or white.
  • Daniele Siragusano: 18% is arbitrary. If 18% is 0.18, 20% is 0.2, 100% is 1.0 etc.
  • Joshua Pines: In the old days everything was based on peak white. But in HDR the displays have different levels and the decision was to decide where mid grey lands.
  • Jean-Michel Gilbert: If we fix mid grey, graphics white floats.
  • Jed Smith: Daniele, your curve doesn't explicitly map mid grey, does it? You map some scene value to some display value, and the rest falls where it falls?
  • Daniele Siragusano: The reference can be whatever you want, but you put a layer on top which calculates e.g. the values needed to map 0.18 to 0.1.
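
With a simple compressive curve that layer reduces to solving for an input scale. As a toy example only (a plain Michaelis-Menten-style curve standing in for Daniele's actual formulation), the exposure multiplier m that makes f(m * 0.18) = 0.1 has a closed form:

```python
def input_scale(x=0.18, y=0.1):
    # for f(v) = v / (v + 1), solving f(m * x) = y gives m = (y / (1 - y)) / x
    return (y / (1.0 - y)) / x

m = input_scale()        # ~0.617
v = m * 0.18
print(v / (v + 1.0))     # -> 0.1: mid grey lands at 10% of peak
```
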
  • Joshua Pines: That was a big debate when HDR arrived. For SDR we all agreed 18% maps to 10% of peak. And for HDR nobody wanted to reference peak brightness. The discussion was where does 18% land? Same as SDR? Brighter, and how much? Differences of opinion. We need it so it falls off the truck looking ok, and can be creatively graded up or down.
  • Daniele Siragusano: In practice dark scenes and light scenes need to be different, in terms of HDR vs SDR. We have to find an average.
  • Nick Shaw: We talked about the BBC's HLG stuff last week. As they found, consumer TVs aren't 100 nits for SDR, so mid grey is actually 10% of 250 or 300 nits, and ends up brighter than the 15 nits for HDR on the same TV.
  • Daniele Siragusano: That's why the BBC set graphics/diffuse white to 75% in HLG, which happens to be 203 nits for a 1000 nit monitor. But 203 isn't a magic number. Mastering that bright gives a better HDR/SDR match.
  • Joshua Pines: MovieLabs suggest when showing SDR on an HDR display you map white to 200, so mid grey ends up at about 20 nits.
  • Sean Cooper: A useful discussion is the number of degrees of freedom in the tone scale, and how many we constrain. We have mid grey and peak white. But if you want to vary the slope of mid grey, some equations won't allow that. A spline has infinite control.
  • Daniele Siragusano: Control just moves the problem elsewhere.
  • Jean-Michel Gilbert: In my industry the essential parameters are Ymin, Ymid and Ymax. The current parameters of the SSTS.
  • Kevin Wheatley: For inversion reasons the slopes at both ends need to be considered.
  • Thomas Mansencal: The SSTS has a layer above it to control it with fewer parameters. But what we use for research doesn't have to be what we finally deliver. We could fit a simpler model.
  • Daniele Siragusano: You have curve grades in all apps. Why redo that? More points = more fragile.
  • Thomas Mansencal: That's the reason for the SSTS: it simplifies the issues of the interaction between the RRT and ODT curves.
  • Nick Shaw: Daniele, your curve is flat at both ends, and you create slope at the ends by making the asymptote happen beyond peak white, yes?
  • Daniele Siragusano: Yes, you can do that.
  • Joshua Pines: Is the requirement for non-zero slope at both ends for mathematical, not aesthetic, reasons?
  • Daniele Siragusano: Inversion.

Meeting #15, April 7th 2021, 1pm PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw
Rémi Achard
Lars Borg
Chris Brejon
Sean Cooper
Alex Forsythe
Francesco Giardiello
Jean-Michel Gilbert
Ebs Hasche
Carly Kutcka
Zach Lewis
Thomas Mansencal
Carol Payne
Joshua Pines
J. Schulte
Daniele Siragusano
Jed Smith
Doug Walker
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: Here is a list of work areas:
1) Finish general requirements list gathering
2) Terminology
3) Categorise/catalogue reference displays/environments (list display colorimetry encodings, etc.)
4) Stripped back version of SSTS RRT/ODT
    Default contrast
    What clipping points should we remove?
    How should we map SSTS parameters to displays?
5) Alternate renderings
6) Appearance modeling (color, not image appearance)
7) LMT sub group - designing guidelines or a tool?
    Matching old renderings
8) Gamut mappings - do they need to know the display encoding and thus exist after the main rendering?
  • Alex Fry: Who has a particular interest in tackling any of these? We can make threads on ACES Central. There is a thread already on terminology. We need to agree what we mean by things to all be on the same page.
  • Carol Payne: I'm happy to pull out a list and put it on the DropBox Paper for people to agree/disagree. Tracking threads is hard.
  • Nick Shaw: ACES Central is great for discussion, but not as a repository to find information.
  • Alex Fry: No. 3 Daniele offered to write something last week.
  • Daniele Siragusano: I am working on something and will post when it's ready.
  • Scott Dyer: Whatever people write we can embed or link on the DropBox Paper so it's in one place. Google Docs, whatever.
  • Carol Payne: And we can link to relevant ACES Central posts.
  • Alex Forsythe: I wanted to remind people that the Slack group is not accessible to everybody. It's great for discussions, but please repost anything relevant to everybody on ACES Central, or on Rocket Chat.
  • Alex Fry: No. 4 Scott is working on something with the SSTS, allowing more adjustments of contrast etc. for experimenting.
  • Doug Walker: Gary Demos and I also worked on stuff and published SMPTE papers and demoed at SMPTE/HPA conferences. It addresses some clipping point issues.
  • Jean-Michel Gilbert: I did some experiments with a hacked version of HLG, to try and produce an SDR rendering. I got a result with similar levels to Jed's open DRT. I am surprised how dark the sRGB ODT is compared to the 1.2 HDR OTs.
  • Nick Shaw: The SDR ODTs map mid grey to 10 nits for 100 nit SDR, compared to 15 nits for the HDR OTs.
  • Alex Fry: No. 5 is Jed's open DRT. Any other ideas?
  • Nick Shaw: Connected to No. 6, appearance modeling, I did an experiment with HLG, because that is designed to have appearance matching between displays with different peak luminances built into the EOTF. This is not related to rendering, but assuming you have an HDR image you are happy with, I wanted to see how HLG would map that to a notional 100 nit HLG display to create an SDR image. I don't have an HDR display to compare, but I wrote a DCTL which lets you simulate the HLG EOTF at different peak luminances on a PQ display, based on the conversion in aces-dev, but with L_w and L_b parameterized and including the BT.2100 gamma calculation. I then had to use a Resolve Color Space Transform to view the first 100 nits of the PQ on a BT.1886 display. I used the ARRI Isabella image to look at a normal image with skin tones, rather than an extreme image like RED Christmas. At 100 nits it is not a pleasing result, looking rather like what you see with a component image where the luma signal is attenuated or missing. So perhaps a useful demo of something that won't work.
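
For anyone wanting to reproduce the experiment, a minimal sketch of the BT.2100 HLG EOTF with L_w parameterized might look like the following (assuming L_b = 0 and NumPy; this is not Nick's actual DCTL). As a sanity check, a 75% signal through this at L_w = 1000 lands at about 203 nits, the usual HLG graphics white figure:

```python
import numpy as np

# BT.2100 HLG inverse OETF constants
a, b, c = 0.17883277, 0.28466892, 0.55991073

def hlg_inverse_oetf(v):
    # HLG signal [0, 1] -> normalized scene linear [0, 1]
    return np.where(v <= 0.5, v * v / 3.0, (np.exp((v - c) / a) + b) / 12.0)

def hlg_eotf(rgb_signal, Lw=1000.0):
    rgb_s = hlg_inverse_oetf(np.asarray(rgb_signal, dtype=float))
    gamma = 1.2 + 0.42 * np.log10(Lw / 1000.0)  # BT.2100 system gamma
    # scene luminance from the BT.2020 weights
    Ys = 0.2627 * rgb_s[..., 0] + 0.6780 * rgb_s[..., 1] + 0.0593 * rgb_s[..., 2]
    # OOTF: scale each channel by Ys^(gamma - 1), then to display nits
    return Lw * np.power(np.maximum(Ys, 1e-12), gamma - 1.0)[..., None] * rgb_s

print(hlg_eotf(np.array([0.75, 0.75, 0.75])))  # ~203 nits per channel
```
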
  • Daniele Siragusano: This is not really how HLG is meant to be used. At 100 nits HLG should be decoded with plain 2.4 gamma (and a Rec.2020 to Rec.709 matrix). The HLG EOTF won't work below 500 nits.
  • Nick Shaw: I was under the impression that the SDR backwards compatibility of HLG was just so that in an OB truck not every monitor had to be HDR – you could still see a viewable image on a Rec.709 monitor, just a bit dark and desaturated.
  • Jean-Michel Gilbert: I also did an experiment to use HLG to match PQ on a 100 nit monitor.
  • Doug Walker: My impression of HLG was that it just fits the range to a given monitor, so highlights in sports don't clip. The gamma adjustment is actually moving things in the wrong direction.
  • Nick Shaw: The BBC experiments appeared to show the gamma needing to move in the opposite direction to what classical color science suggests in order to create a match. They did a "fit to fill" with the signal and then viewers adjusted the gamma to find a match.
  • Daniele Siragusano: It makes sense because when you are stretching the image you need to lower the gamma to bring the mids back down a bit (although overall they still go up). If you plot the grey scale for various luminances it looks like a DRT family between 500 and 2000 nits. There are hue skews in oranges with HLG, but they are not really that severe within that two stop range.
  • Doug Walker: Is that what we really want to do? In ACES the mids don't change with peak luminance. It's just the highlights that are opened up.
  • Daniele Siragusano: At a certain point there is no more to open up. With T-Cam we incrementally move mid grey up with peak luminance. A 10th of a stop in grey for every stop in peak. That's just something we came up with, not scientifically derived. ACES moves mid grey up (4.8 nits, 10 nits, 15 nits) up to 1000 nits peak, but then stops. Why?
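
That rule is simple to state in code (a sketch of the stated rule only; the 100 nit / 10 nit anchor is an assumption for illustration, not a FilmLight value):

```python
def mid_grey_nits(peak_nits, ref_peak=100.0, ref_grey=10.0):
    # a tenth of a stop of grey per stop of peak:
    # log2(grey / ref_grey) = 0.1 * log2(peak / ref_peak)
    return ref_grey * (peak_nits / ref_peak) ** 0.1

print(mid_grey_nits(1000.0))  # ~12.6 nits of mid grey at 1000 nits peak
```
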
  • Doug Walker: Raising grey makes sense, but what HLG does with gamma may make sense as a primitive gamut mapping, but not for appearance matching.
  • Jean-Michel Gilbert: My experiment was using HLG "wrong" to try and make it do something it wasn't intended to do.
  • Scott Dyer: The value of 15 nits for 1000 nits and above was chosen but not tested. Likewise 7.2 nits for 108 nit Dolby Cinema. I always intended it to scale continuously across the range based on the other parameters. So in ACES 2.0 it's likely to do that.
  • Daniele Siragusano: Here is a plot (32:00 in the recording) of display luminance of the same scene values with the HLG EOTF at different peak luminances. Clipping maps to display peak, but in the mid range it ends up at a similar level. On a 2.4 gamma monitor with Rec.2020 primaries you get a shoulder from the "wrong decoding" of HLG with a gamma function, and at 100 nits it's considerably darker. But the BBC did research and found no 4K monitor is 100 nits. At 200 nits the SDR decoding falls in a more similar place to the others, with a shoulder going to 200 nits. If the peak is rolled off instead of hard clipped, it looks very much like a DRT family.
  • Jean-Michel Gilbert: For SDR we found you have to calculate for higher luminance, and underexpose it, so when the monitor scales it up it looks correct.
  • Daniele Siragusano: It would have been simpler if Rec.2020 had been standardized for 250 or 300 nits, because we are grading much darker than what the viewer sees.
  • Doug Walker: I'm still wondering if we really want the contrast to be different between a 1000 and 4000 nit render. I think a content producer mastering on a 4000 nit display then looking at a 1000 nit version would expect maybe a slight drop in brightness, and highlight roll-off, but that the contrast would be the same.
  • Daniele Siragusano: It essentially is the same. But although HLG is a useful distribution system, I don't think we should be looking at it for scene to display rendering.
  • Nick Shaw: I wasn't looking at it for that. Just display matching across luminances.
  • Kevin Wheatley: In Daniele's plot it's hard to see which is which in the blacks. What should happen in the blacks is an interesting question, because not all displays behave the same there in terms of sequential contrast ratio. The white to black luminance ratio is what defines the contrast of a display, and I wonder if the black handling in some curves is appropriate.
  • Daniele Siragusano: The most important thing is that it doesn't produce different contrast in the shadows. I've seen spline implementations where moving things elsewhere creates wiggles in the shadows. That should be avoided. But black contrast can vary shot to shot with content due to flare. It's misleading to think about the shadows the same way as highlights because they are far more affected by flare.
  • Nick Shaw: And many consumer TVs (I don't mean FALD) modulate the backlight with content and you often can't turn that off, so it's a moving target.
  • Alex Fry: For the current dim vs dark surround it would be good to know where the current numbers came from. Magic numbers dialled in by eye? We need to expand on that going forward. No. 7 LMT subgroup. We need to look at the conflicting requirements – hue preserving, access to corners of display gamut, and the current per channel. Can you build an LMT that addresses one within the other? That would need to work with the group 4 and 5 teams. And are we building a tool to create LMTs? Anyone have a passion for LMTs? Some LMTs are just made in a grading tool, but there are some key ones like creating the appearance of the old SSTS within a hue preserving DRT. Is that possible? Do you need different DRTs for each display family? We need to build some LMTs for our current rendering and for any candidates.
  • Jean-Michel Gilbert: I think we will need different DRTs for different targets, and we'll need to invert the DRTs.
  • Thomas Mansencal: That's possible within reason. If a DRT desaturates in some places, you can't invert that. We're trying to design an elegant solution where we don't have to invert DRTs.
  • Scott Dyer: We've talked a lot about rolling off saturation vs maintaining it. For skies you tend to want to roll it off or it looks bizarre, For neons and tail-lights you want to maintain saturation. I don't know if it's possible without trying it. We've talked about if we can do that with an LMT so the RRT doesn't desaturate fully, so the desat is in the LMT and those who don't want it don't use that LMT. Is that possible? We need to explore.
  • Alex Fry: And some things in the current RRT we can move to an LMT.
  • Thomas Mansencal: LMTs are so tied to the display transform, it can't be a completely separate group.
  • Alex Fry: We aren't talking about separate groups with separate meetings. Just getting people to investigate these things.
  • Nick Shaw: Intuitively it seems one LMT won't be able to do highlight desaturation for all displays, but we need to investigate if two (HDR and SDR) are enough or we need one for every target.
  • Alex Fry: And can we match the old looks with Jed's DRT and vice versa?
  • Zach Lewis: Could the HLG stuff be used to make a shaper space for an LMT applied across ranges?
  • Kevin Wheatley: To emulate one display on another (going down) you just use one display's rendering with the other's encoding.
  • Daniele Siragusano: But if the goal of the DRT is to render from the same scene to different targets, if we need different LMTs then the DRT is not doing what it should. If we do our job right with the DRT, one LMT makes one look and the DRT does the appearance rendering for different targets.
  • Joshua Pines: It's content and creative dependent. One kind of matching between SDR and HDR won't serve everybody's vision. Some want extra detail in HDR, some don't.
  • Daniele Siragusano: You'll always need a trim pass. But that is scene dependent and may even be spatial.
  • Joshua Pines: You could argue that changing highlight roll off should be part of color correction in the trim. In the current situation it's not the case that one LMT works and all OTs do the "right thing".
  • Daniele Siragusano: The current situation certainly requires us to invert the DRT if we want a different rendering, so we need different LMTs.
  • Joshua Pines: Forgetting ACES for a moment, if somebody does the SDR first and then moves to HDR, we offer them three or four candidate HDR versions of their show LUT. The client and colorist select one which has the highlight handling they want. But if that level of control was in color correctors, we wouldn't need to do that, and could push it down the road as a color correction decision, not part of the LMT. It's a worthy, but ambitious, goal to have one LMT that works on all outputs, and it's just trim passes. But currently we've found that impossible.
  • Nick Shaw: Isn't it rather semantic if something is part of a different LMT for a target or part of a global timeline operator in a trim pass? Whether you call it part of the grade or part of the LMT, is it that different?
  • Joshua Pines: Good question. It really comes down to what's available in the knobs in current toolsets.
  • Alex Fry: There are different types of LMT. Film print emulations like Baselight has in its scene looks, which are effectively LMTs, and emulating previous ACES renderings.
  • Joshua Pines: Consider two clients. One wants their HDR and SDR to look exactly the same. The other wants the HDR tuned up to 11. Can one LMT with the differences done in the trim serve both clients?
  • Daniele Siragusano: It's not a problem with a swappable DRT.
  • Kevin Wheatley: But we still want one rendering for the common cases. No.8 gamut mapping (traditional display GM, not what the other VWG has done). That needs to know the gamut volume of the target, so needs to happen after the DRTs.
  • Lars Borg: To add a twist, if we ever consider running ACES through webkit or whatever, you can't access the details of the actual display, as that would be fingerprinting and would go against privacy rules. We can only get categories of display.
  • Kevin Wheatley: I imagined we would have categories of mastering display, and then that's passed to the browser which handles the difference with the actual display.
  • Alex Fry: I don't think we'll be sending raw ACES to the browser for a while. When I started using ACES I had always assumed there was some gamut compression. I didn't know it just clipped. We need to explore this more.
  • Kevin Wheatley: We can probably just borrow from one of the many books on the subject.
  • Jean-Michel Gilbert: ITU has BT.2407 on gamut mapping to Rec.709.
  • Jed Smith: I've done some testing of surround compensation. Daniele suggests a power function is adequate. I also looked at the Bartleson-Brenneman equations. I'm happy to investigate further.
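
For reference, the simplest power-function compensation is the one already in the ACES 1.x ODTs, applied to luminance only so chromaticity is untouched (a sketch; Jed's tests may use a different formulation, and other surround pairs would need other exponents):

```python
import numpy as np

def dark_to_dim(XYZ, gamma=0.9811):
    # 0.9811 is the dark-to-dim exponent from the ACES 1.x ODT code;
    # applying it to Y in xyY leaves chromaticity unchanged
    X, Y, Z = XYZ
    s = X + Y + Z
    x, y = X / s, Y / s
    Yc = np.power(Y, gamma)  # compressed luminance for the dim surround
    return np.array([x * Yc / y, Yc, (1.0 - x - y) * Yc / y])
```
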
  • Alex Fry: We need a way to evaluate surround compensation when we're all in different uncontrolled environments. Maybe windowing the image into a grey border.
  • Daniele Siragusano: That works too for adaptation. The adaptation is to the actual spectral characteristic of the monitor, not a backlight.

Meeting #14, March 31st 2021, 1pm PT

[Chat]

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Clark
Sean Cooper
Alex Forsythe
Jean-Michel Gilbert
Ebs Hasche
Zach Lewis
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Jed Smith
Doug Walker
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: I have some slides. Alex posed some questions.
  • Alex Fry: We felt we needed a yes/no on these before moving forward. Do we have conflicting requirements that a single transform can't address?
  • User who wants it to look great OOB with zero grade.
  • User who wants a transparent look for maximum creative grading.
  • User who wants to be able to hit all corners of the display.
  • Color Pipeline types who deal with edge cases
  • Can we start to build prototypes that try to address these issues?
  • Alex Fry: Not necessarily those in this room, but many people want to switch on ACES and have it look good. That may conflict with user 2 who wants it to keep out of the way of their creative intent. 3 is the graphics user who wants to hit all the corners of the display gamut. These three seem to conflict. If we agree on that we can build a system to cater for all of them.
  • Lars Borg: The higher skilled the operator, the less important to cater to them out of the box. Many know nothing about color management and want it to be acceptable when they show it to a client. So 1 is very important.
  • Alex Fry: Swapping DRTs happens in reality for people in this room.
  • Lars Borg: We aren't average users.
  • Jean-Michel Gilbert: From conversations with my leads and explaining our ACES workflow to them they know very little.
  • Thomas Mansencal: We have many use cases where we just want it to work. We don't always need a colorist. That's a win - less time, less money.
  • Lars Borg: A bit of history in print. Illustrator and InDesign had fixed color conversions, and when we added color management people wanted the change to be small when they switched document modes. When people switch from the out of the box grade they don't want it to look totally different and have to start from scratch. Controls in a decent start position, but now you can move them.
  • Carol Payne: Hopefully most of that can be done with an LMT. We need LMT controls and creation to be more accessible. Obviously not everything can be done with an LMT, but for the majority of users Lars is talking about it could be.
  • Lars Borg: For them maybe LMT + CDL is often enough.
  • Carol Payne: 1 & 2 and 3 & 4 group together. I feel 1 & 2 can both be achieved. But 3 & 4 conflict with them.
  • Lars Borg: 3 & 4 can require expertise. But 1 & 2 should be accessible with minimal knowledge.
  • Jean-Michel Gilbert: We're the experts so 3 & 4 aren't a problem for us because we can change things.
  • Daniele Siragusano: But you want to stay in the ACES framework. That's why my proposal was a swappable DRT within ACES, which is the only way I see to achieve all 4.
  • Lars Borg: And the box may not be from the vendor. It may come from the studio. Skilled colorists set things up then pass it to juniors to work on. But some people just buy the product and want it to work. They have no senior colorist, so call vendor tech support.
  • Alex Fry: If swappable DRTs become part of ACES, it will be supported everywhere, not just apps that support OCIO configs which allow you to do that now.
  • Kevin Wheatley: So it's yes to all the questions. Next here are some examples related to transparent grading:
  • low contrast period piece
  • Black and white
  • Sepia
  • Noir
  • Kevin Wheatley: What can be done with an LMT and out of the box, and what needs a replaced DRT? How far can you go with the reference implementation?
  • Joshua Pines: We just did a show which used about eight period looks. We had references. VFX exchange in ACES was a studio requirement. It may not be typical, but that sort of thing comes up a lot. We had to create LMTs for each look and different LMTs for SDR and HDR. Maybe 47 LMTs. We had to use inverse RRT/ODTs. With a loose DRT and a default LMT for novices, we could maybe do look dev through that.
  • Carol Payne: So with a looser DRT than the current could you have avoided inverting things?
  • Joshua Pines: We have some flat output renderings for creating looks through. Those work for many shows. Maybe not this one, as the looks were so severe and specific that stand-alones and inversion were necessary.
  • Nick Shaw: So the fact you have these suggests it is possible if the ACES rendering was like those.
  • Joshua Pines: Yes, but then you get requests for film print looks, and grading tools can't do that. You need a print emulation in the chain. So it's not always just a colorist working behind a loose DRT but it does happen a lot. It's a good target to have something you can easily do look dev behind.
  • Daniele Siragusano: If you have a loose DRT it doesn't look good without an LMT. So the previous 1 & 2 are a contradiction. If it looks nice it's harder to develop different looks under it.
  • Joshua Pines: It depends on the user. Some get it and can work under the DRT with even just CDL grades to get the look they want. Others are confused because it starts so flat. For them we make modified versions of the DRT.
  • Jean-Michel Gilbert: If you had e.g. Jed's chromaticity linear DRT would you have been able to work under that?
  • Kevin Wheatley: I think Jed's still has much more look to it than what Josh and Daniele are talking about.
  • Joshua Pines: Even when we get a great solution, is some cases people will still have to end up inverting, but it would be great if they didn't have to in the majority of cases.
  • Thomas Mansencal: The biggest problem in what Josh is describing is metadata tracking with so many looks. Won't AMF help that in the long run?
  • Joshua Pines: Absolutely if AMF works as advertised. And this was an exception. Most shows have one look.
  • J. Schulte: We have a similar situation with legacy looks we have to match for new content. So inversion will always be needed sometimes.
  • Daniele Siragusano: Unless we go the swappable DRT approach, so you just use your legacy rendering.
  • Alex Fry: Let's take a poll in the chat to see who favors what.
  • Carol Payne: What about something in the middle, with a default DRT and maybe default LMT, but with the possibility of switching them.
  • Kevin Wheatley: The default with the loose DRT and LMT suits most users, and switching is there for advanced users who know.
  • Alex Fry: Because currently if you swap DRT you're not really ACES any more.
  • Carol Payne: The default DRT is not in the current workflow, yet.
  • Doug Walker: We also need to consider what's practical. Because it takes the industry a long time to transition to something new.
  • Alex Fry: Baselight is an example that shows it's possible. It would just be reframing it as official ACES.
  • Carol Payne: That's down to tracking.
  • Nick Shaw: It can be done now in Baselight, Resolve, or anything that supports OCIO but they are different approaches that aren't cross-compatible.
  • Daniele Siragusano: That's why we need a standard way to define them.
  • Joshua Pines: That's another group's problem, tracking all this!
  • Doug Walker: An aim of ACES was to reduce the need to send a bunch of LUTs around. If we allow an official way to do that it certainly solves many issues, but I don't want to minimize the work that implies.
  • Jean-Michel Gilbert: Aren't LMTs the same as LUTs? [several heads shake]
  • Joshua Pines: The current situation is we still have to send a bunch of LUTs around on a per project basis. The hope is it happens less in future.
  • Daniele Siragusano: Inversion always introduces issues. You have a gradient that goes one way in the DRT and a different slope in the one you're inverting for, and the slopes conflict. Inversion is not the magic solution.
  • Thomas Mansencal: For example when you've desaturated, no inversion will get the color back.
  • Kevin Wheatley: My next slide is the list of who we are doing this for:
  • DOP - Should look the same from one end to the other
  • colorist - should allow grading flexibility
  • needs multiple deliveries - with minimal adjustments
  • DIT - implementation as cubes
  • Brand Logos - exact color targets
  • Look dev - 'Neutral' doesn't allow many things to hide in the shadows/highlights
  • Game engine - end user customisation of parameterisation to optimise for specific display and environments
  • Kevin Wheatley: Not everyone is at the same level of technical finesse. Some have hardware limitations and real-time requirements. How correct might a LUT implementation be?
  • Daniele Siragusano: Chris mentioned in the chat that we need a dedicated LMT working group. If the LMT is a fundamental building block, the LMTs and the Output Transform need to be developed together. They are very dependent on each other.
  • Carol Payne: I read that as a group looking into how you author LMTs and how they fit into various software. And guidelines on what should and shouldn't go into an LMT.
  • Chris Clark: I agree with you both. If you have a soft DRT you definitely need LMTs, and part of that is how you transport and load LMTs into different software. That's one reason I've been involved with AMF. We've made progress with ColorFront, and met with the Resolve team. I'm sure FilmLight will want to be part of it.
  • Kevin Wheatley: We definitely need to be able to easily pass around more than CDLs. We had a recent example where a spline was needed.
  • Joshua Pines: CDL was never supposed to be the be all and end all for look dev. There's a need for something beyond that.
  • Kevin Wheatley: It's hard enough for people to agree on what saturation means, never mind the myriad of other operators that might be needed. We need to make LMTs a first class citizen of ACES.
  • Daniele Siragusano: We should build a few LMTs because making a good LMT is not trivial. If a big part of the look comes from the LMT, we need to make prototypes for users.
  • Alex Fry: We need to supply at least one LMT to match the current RRT, and maybe a few others.
  • Jean-Michel Gilbert: I already think LMTs are a first class citizen, but it seems software vendors don't. We need a place for an LMT stack, not just a single slot.
  • Thomas Mansencal: If you want the ACES logo you should have to support LMTs.
  • Nick Shaw: Going back to what Lars said earlier about going from an LMT to where you want without starting from scratch, should LMTs have an opacity control? Baselight's scene looks have that.
  • Lars Borg: The intensity control should be inside the LMT so the strength applied is communicated.
  • Daniele Siragusano: The ARRI Look Library has each look in three strengths.
  • Alex Fry: Maybe all this belongs in an LMT authoring tool.
  • Lars Borg: But people may get an LMT and say, yes that's the look but it’s too much. Dial it back 10%. And it would be clip specific.
  • Daniele Siragusano: That's a trivial implementation detail to lerp it back.
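
i.e., something along these lines (lmt here stands for any look transform; the blend weight is the strength to be communicated):

```python
def apply_lmt(x, lmt, strength=1.0):
    # strength = 0.9 "dials the look back 10%"; note the blend generally
    # has no closed-form inverse even when lmt itself does, which is the
    # point Doug raises below
    return (1.0 - strength) * x + strength * lmt(x)
```
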
  • Alex Forsythe: Could you leverage the alpha channel to do that?
  • Lars Borg: No. that would make it output specific.
  • Daniele Siragusano: AMF should solve that.
  • Doug Walker: It doesn't have a closed form inverse if you're blending it back with the source.
  • Joshua Pines: Doug makes a good point. We usually need inverses for all our LMTs for output-referred content that sits with the rest and has to go out through the LMT and RRT.
  • Daniele Siragusano: With a shot based LMT you turn it off for that shot (when AMF is fully supported).
  • J. Schulte: Some graphics may have to be comped into the scene, so have to go through the whole transform.
  • Jean-Michel Gilbert: I would stencil it out. Mask it out and comp downstream.
  • Alex Fry: That doesn't really work for a film workflow where it all has to live together.
  • J. Schulte: If it's a scene entity it needs to include the glows, highlights and reflections like the rest of the scene.
  • Alex Fry: E.g. holograms in the scene, not UI elements overlaid on the screen.
  • Kevin Wheatley: Any volunteers? We've highlighted a number of things we need people to do. We've already had some volunteers behind the scenes. We need an interim document that captures the requirements. I'm open to suggestions on what people could help with, so we can move forward to testing some things out. We can then form sub-groups to look at the LMTs, the contrast of the tone curve and so forth. How we track the LMTs, DRTs, whatever we're replacing. If anybody has particular interests they can contribute on those.
  • Thomas Mansencal: I'm not clear what you want us to do. A document?
  • Kevin Wheatley: That's one thing. There's a few, but we don't currently have a list. We need to capture the requirements, going back through transcripts and making one document.
  • Carol Payne: I've said we're happy to help write this stuff down. Another thing is terminology definitions, so we all use the same words. Just everybody putting their thoughts in the chat has made it easier to see where everybody stands. We could do a survey, and collate people's answers into a document, and ask if everybody agrees with the direction we're going.
  • Alex Fry: If people have particular interests they can investigate those. Jed's doing his work on the hue preserving DRT. We need to investigate what LMTs through Jed's DRT look like. And through a stripped back version of the current SSTS.
  • Daniele Siragusano: I volunteer to write out the requirements for a swappable DRT, since it was my suggestion.
  • Alex Fry: Are people happy using Nuke to experiment?
  • Kevin Wheatley: We can convert between different things. Ultimately we'll deliver CTL, but use whatever you're comfortable with for experimenting.
  • Alex Fry: As Jed said, the more prototypes the better.
  • Jean-Michel Gilbert: I'm getting an HDR monitor, so I volunteer to help Jed with HDR.
  • Thomas Mansencal: What platform are we using for documentation?
  • Scott Dyer: Like the programming languages, use whatever you're comfortable with, and we can house or link it on the DropBox Paper.

Meeting #13, March 24th 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Chris Davies
Alex Forsythe
Francesco Luigi Giardiello
Jean-Michel Gilbert
Ebs Hasche
Keiran Lee
Zach Lewis
Thomas Mansencal
Joseph McCormick
Michael Parsons
Carol Payne
Joshua Pines
Daniele Siragusano
Jed Smith
Jamie 

Meeting Notes

  • Kevin Wheatley: We plan to sort out the time zone issues over the next week. Jed has something to demo.
  • Jed Smith: I posted on ACES Central to update you on the creation of this GitHub repo containing my experiments from the last month. I've been considering how much to release vs how much to hold back until it's more finished. I went with releasing more experiments for anybody who might find it useful. It's very experimental, with incremental releases. I've also started writing some documentation.

For details see the recording from about 4 minutes in, and read Jed's documentation.

  • It's based on separating a max RGB based achromatic and chroma represented by RGB ratios normalized by the achromatic.
  • The achromatic is tone mapped and then RGB scaled by the ratio of the tone-mapped to pre-tone-mapped achromatic.
  • A "path to white" is engineered because it doesn't happen naturally. It is based on luminance and optionally either chromaticity linear or perceptual (based on Björn Ottosson's Oklab space) which reduces skews like magenta on the blue to white path by copying the Oklab hue from before the desaturation into the result.

  • Nick Shaw: Would this work in an LMT? Or is the path to white tied to the display peak luminance?
  • Jed Smith: The path to white will be different for SDR and HDR, so that is something to be investigated.
  • Daniele Siragusano: What brightness do you put into Oklab? Because hue bends vary with brightness.
  • Jed Smith: Because it's done only with the RGB ratios brightness doesn't change much. It's effectively using the top end of Oklab. It may not be correct, but it looks good to me.
  • Jean-Michel Gilbert: That might explain some scene-dependent differences I saw. Red was ok, but I wouldn't expect blue to go cyan.
  • Jed Smith: It's an experiment, and maybe doesn't belong in a display transform.
  • Alex Fry: It's great to see that stuff tried with the hue invariant DRT.
  • Nick Shaw: Perhaps the way Oklab varies with brightness may be less relevant because when you're on the path to white you're at the top end by definition.
  • Kevin Wheatley: But will it be consistent between SDR and HDR displays, where the brightness is different for the same scene values? We have a "skewed goal" of making different displays look similar.
  • Jean-Michel Gilbert: If you only have an SDR display it may be worth looking at HLG output on it, which will look kind of ok.
  • Daniele Siragusano: Perceptual spaces are only defined relative to diffuse white.
  • Thomas Mansencal: This may be problematic for HDR.
  • Jed Smith: That's why I was using it with RGB ratios which are ranged 0-1. There is a gamut mapper included based on the one from the GM group, to pull things into the rendering gamut, BT.2020 here.
  • Thomas Mansencal: You're gamut mapping to 2020, but at this point you don't know what the target gamut is.
  • Jed Smith: Changing the render gamut to e.g. Rec.709 makes everything very desaturated. So using a small rendering gamut seems a bad idea.
  • Jean-Michel Gilbert: Rendering in a smaller gamut clips to its boundary. Tests have shown that rendering in Rec.2020 is more similar to spectral for RGB rendering.
  • Thomas Mansencal: I did those tests with Anders. You want a wide basis – Rec.2020 or ACEScg.
  • Jed Smith: I wondered about trying an LMS type space to get towards a perceptual representation.
  • Thomas Mansencal: It raises an important question about the rendering space. Different spaces give very different results because the basis vectors are oriented differently.
  • Nick Shaw: Your gamut mapper is bringing values into the rendering space to prevent artefacts during tone mapping, but a second gamut mapper may be needed later on to prevent clipping in the target gamut.
  • Thomas Mansencal: When you have a series of blocks in a process, the blocks don't know what comes later. How do you bubble back the target display information to earlier blocks?
  • Daniele Siragusano: It could be a parameter.
  • Thomas Mansencal: But you need to convey that parameter everywhere, and track it. But yes, it could be done.
  • Jed Smith: It would be trivial to add a compression to display gamut. You would convert to that, compress and then convert back.
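
Structurally that is just a convert-compress-convert sandwich (a skeleton only; to_display, from_display and compress are placeholders for matrix conversions and any bounded compression such as the gamut mapping VWG's):

```python
def compress_to_display(rgb_work, to_display, from_display, compress):
    rgb_d = to_display(rgb_work)   # convert to display primaries
    rgb_c = compress(rgb_d)        # pull out-of-gamut values inside
    return from_display(rgb_c)     # convert back to the working space
```
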
  • Jean-Michel Gilbert: It would be lossy if you compressed everything to Rec.709. I think we need two gamut compressors. Maybe more.
  • Thomas Mansencal: It seems cleaner to keep things separate. The DRT has a working space, and a later step compresses to display gamut, maybe along with applying perceptual things, if needed, and the inverse EOTF.

For details of Jed's Nuke script, see the recording from about 28:30.

  • Jed Smith: It all gets quite complex, and as Daniele said a while back if you're having to tweak too many things you're probably working in the wrong domain. So I'm experimenting to find something less hacky, but with a similar result. I'm approaching my aesthetic goal, but the way I get there is too complex. So I'm experimenting with some stuff using some maths I got from Björn Ottosson, which he called a "smooth-max". It's a bit like the weighted power norm I used previously. I'm playing with the weightings. I should be clear that although the HDR EOTFs are an option in there now, HDR is broken. Don't use it yet.
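
For reference, a weighted power norm has this general shape (a sketch; Jed's actual weights and exponent differ, and the smooth-max he mentions is a related but distinct construction):

```python
import numpy as np

def weighted_power_norm(rgb, w=(1.0, 1.0, 1.0), p=4.0):
    # as p grows this approaches max(rgb); the weights bias which
    # channels dominate the norm, and hence how saturated different
    # hues end up after tone mapping
    rgb = np.maximum(rgb, 0.0)
    return np.sum(np.asarray(w) * rgb**p, axis=-1) ** (1.0 / p)
```
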
  • Nick Shaw: The simpler model looks like it's achieving something very similar to the complex version. Simplicity is the ideal.
  • Jed Smith: That's my goal. Changing the component weights changes how saturated the different hues end up. But all this stuff may be better in an LMT. It allows me to darken certain hues if needed.
  • Jean-Michel Gilbert: I wouldn't darken anything. With the current OT we find we have to apply lift or gamma to lift the blacks at the start because everything is too dark.
  • Jed Smith: This isn't crushing blacks, or doing anything to the achromatic. It's just darkening certain hues.
  • Alex Fry: It would be interesting to see how much of this can be put into an LMT. Keiran Lee the colorist from Animal Logic is here, because I thought we should get a view from somebody "outside the bubble" who's been dealing with these things.
  • Keiran Lee: I don't have anything specific to say. I did use Jed's gamut compressor on Mortal Kombat when that became an ACES show. In general the issues we have are maintaining saturation in SDR with values near 100 nits, dialling back the "ACES look".
  • Kevin Wheatley: It's good to have external input from the creative side.
  • Nick Shaw: A colorist perspective is vital. Are there things in a DRT that stop them getting where they want creatively.
  • Keiran Lee: The only thing we struggle with currently is when directors ask for more saturation in SDR. HDR is fine, but for SDR we have to drop the brightness down to maintain saturation in skies, for example.
  • Thomas Mansencal: Question for Kieran in hue preservation. Do you find the current hue twists a problem? Or is it something you are used to and cope with?
  • Keiran Lee: Having used ACES on a bunch of shows I've adapted to it. New colorists have concerns when it doesn't feel like what they're used to.
  • Nick Shaw: Jed, your DRT is presumably too much in flux for it to be worth writing a DCTL implementation so a colorist could try grading through it and see if they hit limitations.
  • Alex Fry: A baked LUT version would be fine for that as a test.
  • Keiran Lee: We used a LUT as an LMT on Peter Rabbit. And on Mortal Kombat we had a lot of neons, and Jed's gamut compressor was really useful. Even if it skewed some colors, it brought them into a place where we could deal with them.
  • Jed Smith: It's changing too much at the moment, but I definitely plan a DCTL implementation and an inverse in the long run.
  • Kevin Wheatley: Nick has made a document with a few key thoughts.
  • Thomas Mansencal: Is this like the interim report we wrote for the gamut mapping group?
  • Nick Shaw: Not yet. It's brief notes of my thoughts. But it could become that.
  • Kevin Wheatley: We need to figure out how to move towards having concrete problems that can be investigated, maybe by breaking out into separate groups. We need to make a list of these issues, so a document is needed. I also made a list, but it still needs work. I think we should create some personas for the users – the colorist who wants to be able to reach all the 709 colors without the DRT impeding them, but also wants to have an HDR output that looks similar. We also need to pin down what similar means. That's something to research rather than just discuss. We need to create straw man proposals to refine. Does anybody disagree we need to move on to tackle individual issues? Sean's description of the architecture options lays those out and we need to work out which makes sense. And we need to define what might vary on a shot by shot basis.
  • Alex Fry: As we start to build stuff we'll find the rough edges.
  • Kevin Wheatley: As we said at the start we need to sort the time zone issues. We'll propose something on ACES Central.

Meeting #12, March 17th 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Mike Best
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Alex Forsythe
Jean-Michel Gilbert
Ebs Hasche
Zach Lewis
Francesco Luigi Giardiello
Thomas Mansencal
Michael Parsons
Carol Payne
Matthias Scharfenberg
Troy Sobotka
Pablo Garcia Soriano
Garrett Strudler
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: I hoped we could pick up with what Jed didn't have time to show us at the end of the last meeting, but he isn't here.
  • Jean-Michel Gilbert: I was just discussing the ACES 1.1 contrast curve with a new attendee, Mike Best. Maybe we made a mistake in Nuke but we were seeing something odd. It seemed to be applying a second lin to sRGB. It was brighter, and people said "this is how our game should look." But Resolve was applying the sRGB ODT correctly. So we started talking about the contrast curve, as was referred to in the RAE. We wondered about tweaking the SSTS, or looking at the naive DRT.
  • Kevin Wheatley: The contrast of the RRT targets a dark environment, and then the sRGB ODT changes the gamma for a slightly brighter (dim) environment. But you're asking why isn't it the same as a traditional sRGB curve.
  • Jean-Michel Gilbert: Unreal compensates for that by pre-exposing up by 1.5 stops, but we don't want to do that.
  • Kevin Wheatley: That's not standard ACES.
  • Mike Best: They're basically hacking it because gaming monitors are in a brighter environment.
  • Jean-Michel Gilbert: That's mentioned in the RAE
  • Kevin Wheatley: The question is does this use case merit a change? Is it environment or preference? If preference it should really be an LMT. But is that appropriate for a game engine? Is that why you're bringing up adjusting it in the SSTS?
  • Jean-Michel Gilbert: Whatever we do before the SSTS is crushed top and bottom by the zero slope there. So we wondered about changing the end point slopes.
  • Scott Dyer: The SSTS currently targets a dark surround and includes some legacy limitations. In one of the first VWGs that led to the HDR revisions in 1.1 we discussed the tone scale a lot, and flat end points came up, and the idea of giving those a small but finite slope, which helps inversion and means there isn't a plateau you can't get past. The tone scale is a critical discussion for this group. We've discussed making the tone scale adaptable for environment.
  • Jean-Michel Gilbert: I'm afraid I can't show my examples as they aren't cleared. But we tried a post rendering gamma, which is a hack, and wouldn't work in HDR. We haven't checked if the assets are handled properly, so it's possible we have an error there. We're going to investigate.
  • Kevin Wheatley: A gamma adjustment is what you need for dark to dim.
  • Jean-Michel Gilbert: Yes, but the curve we had is much more than you should need: a square root, instead of a 0.96 gamma.
  • Kevin Wheatley: This is an interesting use case. Any others? As one of the RAE authors I've said I had concerns about the end slopes and that the default contrast is a bit high for us, although that's a preference unless it causes technical problems.
  • Scott Dyer: There was a desire to lower the contrast in the proposal for this group, along with looking at the tone scale. Whatever mechanism we use, we need some tone mapping to add mid-tone contrast and highlight and shadow compression. The SSTS is useful for this.
  • Kevin Wheatley: In Daniele's approach where we target groups of displays rather than specific displays, does the SSTS include controls for target luminance?
  • Scott Dyer: Yes, I think anything up to 10000 nits. And it controls where mid grey falls. Some are exposed in the ACES 1.1 SSTS, but there's a lot more inside that are currently calculated automatically from the exposed parameters. If they were all exposed you could easily break the curve. There's a min, a mid and a max, and for each it's defined where it falls and what the slope there is. The bends are calculated from those. Currently mid tone contrast is ~1.5. Dropping it to e.g. 1.45 would make a visible difference.
  • Nick Shaw: Here's Jed's Nuke implementation of the 1.2 SSTS with the addition of my RRT sweetener controls. You can see the parameters, including the 1.5 mid-tone slope. I don't know what the other two slopes are, Jed would have to comment there.
  • Scott Dyer: I don't have a demo prepared, but I'll put something on ACES Central we can discuss next time. We need to settle on tone-scale values for the reference.
  • Kevin Wheatley: Troy asked in the chat "what is tone?" It's not constrained to RGB curves. Daniele mentioned that FilmLight found RGB was not suitable for appearance matching between targets. Different devices hit the roll-off at different levels so have different skews.
  • Scott Dyer: The term "tone" has always been used in photography. And we're talking about adjusting contrast and compressing the top and bottom end. Think Photoshop curves.
  • Troy Sobotka: We all understand what a curve is, but when we say "tone" what is the x-axis input domain and y-axis output domain? I don't think RGB covers "tone".
  • Scott Dyer: Think black and white: the x-axis is log scene exposure, and in photography the y-axis is density – or negative log density, which is like luminance. So to boost mid-tone contrast and compress the top and bottom to fit the scene range to display you need an s-curve. The exact x and y axes don't really matter, but it could be scene exposure and display luminance.
  • Troy Sobotka: B&W makes a good point. The scene RGB values are radiometric-like, so energy, so you're converting energy to luminance. I think we need to express this clearly to discuss the idea of tone. We're dealing with two different domains.
  • Nick Shaw: Since we haven't settled on the domain our tone curves will be applied in, we can't specify units. We just need to convert scene exposure to display luminance somehow. You have the starting and ending domains, but maybe a third domain in between that we apply the curve in. We haven't decided. So "tone mapping" is more an abstract concept at this stage.
  • Alex Forsythe: We shouldn't make assumptions about input and output domains when we discuss tone mapping. It's just input to output.
  • Kevin Wheatley: And the curve we use to do that has had some issues in the past such as infinite slope and too high a default contrast. Any other issues people have seen that we need to note before moving on to concepts for implementation?
  • Jean-Michel Gilbert: We're not just mapping scene luminance to display luminance. There's wider to smaller gamut too. Ideally it would be spectral, but normally it's RGB. We shouldn't forget hue mapping and path to white.
  • Kevin Wheatley: Gamut mapping is necessary too, but I see it as orthogonal and separate to tone mapping. I see tone mapping as focusing on dynamic range.
  • Nick Shaw: We shouldn't talk about tone mapping as if it has to include gamut mapping and hue correction as part of the same process. But of course those need to be done, with control of the way it goes (or doesn't go) to achromatic.
  • Jean-Michel Gilbert: When you map the path to white you can't escape mapping hues.
  • Kevin Wheatley: There's a conflict there, because we may need two paths to white or a mechanism that supports that. Taking a chromaticity straight line to white, and achieving values near the gamut boundary conflict.
  • Thomas Mansencal: Having a clean curve with a smooth derivative was brought up in the RAE paper. But I think that's sorted with the SSTS.
  • Scott Dyer: The SSTS has fewer knots and coefficients in the b-spline, which makes it smoother. Some bumps were asked for in there to match a film curve. I didn't like that.
  • Garrett Strudler: There's an interaction between luminance and chrominance which I think was what Jean-Michel meant.
  • Alex Forsythe: It's worth noting that strategies have been proposed to map just the "luminance channel" and not tone map the chrominance. That has pros and cons.
  • Lars Borg: Somebody did that for an exposure slider, and the results were horrible. You can create out of gamut colors. One ITU-suggested HDR-to-SDR tone mapping approach is based on tone mapping luminance, but then applying the result to all three channels identically.
  • Nick Shaw: But conceptually you can separate them. You still need to do something to the chrominance channels, but it's not inextricably linked to what you did to the luminance channels.
  • Jean-Michel Gilbert: I tried that too and it doesn't look good.
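The approach Lars describes – tone map a luminance channel, then apply the same ratio to all three channels – can be sketched as below. This is only the shape of the idea; the Reinhard-style curve in the usage line and the Rec.2020 luminance weights are assumptions, not the ITU specification.

```python
import numpy as np

def luminance_tonemap(rgb, curve, weights=(0.2627, 0.6780, 0.0593)):
    """Tone map luminance, then scale R, G and B by the same ratio so
    chromaticity is nominally preserved. `curve` is any scalar tone
    curve; the default weights are Rec.2020 luminance coefficients."""
    y = np.maximum(rgb @ np.asarray(weights), 1e-10)
    return rgb * (curve(y) / y)[..., None]

# e.g. a Reinhard-like curve, purely for illustration:
# sdr = luminance_tonemap(hdr_rgb, curve=lambda y: y / (1.0 + y))
```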
  • Zach Lewis: It was mentioned previously that you could use the slope of the tone curve to control other things. Could that be used to control the path to white?
  • Nick Shaw: I think Jed already has that in one of his experiments, using the derivative of the tone curve to scale saturation.
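A sketch of that idea, assuming a generic positive tone scale function (this is not Jed's actual code): estimate the curve's local log-log slope and use it to scale distance from an achromatic axis, so colors desaturate as the curve flattens.

```python
import numpy as np

def slope_driven_desat(rgb, tonescale, eps=1e-3):
    """Use the tone curve's local slope (d log out / d log in) as a
    saturation scale: slope near 1 in the linear section leaves color
    alone; slope toward 0 in the shoulder pulls toward achromatic."""
    a = np.maximum(rgb.mean(axis=-1, keepdims=True), 1e-10)  # crude achromatic axis
    slope = (np.log(tonescale(a * (1.0 + eps))) - np.log(tonescale(a))) / np.log1p(eps)
    return a + (rgb - a) * np.clip(slope, 0.0, 1.0)
```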
  • Kevin Wheatley: I worry we're not talking at the right level of detail. We need to settle on the architecture so we can move on to trying some things. Does anybody think we've missed any issues needed to define the requirements?
  • Garrett Strudler: We should ask "who are we building it for?" They aren't all experts and aren't all working in digital cinema.
  • Kevin Wheatley: There are two aspects – the architecture of what we are targeting it for, and the reference implementation of a rendering transform that we provide. These might be different for different people. If we say it's just A reference rather than THE reference, then it may be good for the non-expert out-of-the-box users. And the replaceability is great for expert users with edge cases. Then we don't have to worry about the edge cases in our reference rendering.
  • Thomas Mansencal: If you have two reference rendering transforms, neither is the reference.
  • Kevin Wheatley: I don't think we need to worry about that now. First we need to find out if we can fulfill all the requirements in one transform. Can we find a rendering that does a decent amount of the work, but we can implement the path to white prior to it? That's one approach – one rendering and a default LMT. But as Daniele said, once you have some mandatory parts and some swappable parts, whether you do it like that or allow swapping the whole thing as one lump is kind of moot. You have to manage and track the complexity. We don't want to just tick the box and match the spec. We want something that doesn't get in the way for expert users.
  • Thomas Mansencal: Specs can be updated.
  • Jean-Michel Gilbert: So indie studios should be able to use the out-of-the-box experience, including Unreal and Unity, if they update to ACES 2.0. Big studios will have experts.
  • Thomas Mansencal: I think in the next version Unreal and Unity will support OCIO. Whatever this group comes up with will end up in OCIO, so game engines will have access to it.
  • Jean-Michel Gilbert: 90% of ACES users won't have experts on staff so need the out-of-the-box experience.
  • Nick Shaw: I think it's a given that it needs to be as easy or easier than it is now for non-expert users, and just work. If you don't need the flexibility you shouldn't have to configure it.
  • Mike Best: As one of those users, comparing it to Photoshop, where you have a limited choice of color spaces and displays, shouldn't the basic version just be neutral?
  • Kevin Wheatley: There's a contradiction because for some the film-like bumps and wiggles are exactly what they want, and for others they are a problem.
  • Mike Best: If ACES is going to expand beyond film, it has to accommodate the needs of photography, billboards etc. whose requirements are different. So shouldn't the RRT be as neutral as possible, with different adjustments focusing on target outputs? Like in Photoshop you stick to a limited working space and focus on the target output.
  • Carol Payne: I don't think "film-makers" are happy with the default RRT. That was something that came up repeatedly on the feedback tour, which was mostly film-makers. So everybody, including film-makers is expecting a change to the default look.
  • Thomas Mansencal: And the gamut mapping work is related to other complaints, but that is linked to the Output Transform, because that's where the artifacts from out of gamut values appear.
  • Kevin Wheatley: Thanks, Mike, for your viewpoint. It's useful to have a "naive" perspective on this. And if, as Carol says, people expect change, how much change will they tolerate?
  • Thomas Mansencal: It's a good point. Changing to a hue preserving approach is a dramatic change in look, and I don't know how it would be received. So that's a reason to have two renderings, or an LMT to tweak it (if that's even possible).
  • Jean-Michel Gilbert: Just recently we looked at what EA had done where they had one hue preserving mapper, and one non hue preserving, and they said the artists preferred the one with hue shifts. But we found 100% of us (five people) liked the one without hue shifts better.
  • Kevin Wheatley: So how do people think this should be handled? Multiple renderings, or one rendering and push it to an LMT? LMTs should in theory be preferential, but if there are competing requirements then the variation has to go somewhere.
  • Thomas Mansencal: LMT if it's possible. It may not be but I think we should try.
  • Garrett Strudler: Counter to that I would say if you can swap out the RRT it should go there. Also does OCIO have a default LMT?
  • Kevin Wheatley: It doesn't currently in the ACES config, but we talked just yesterday about adding that slot, to better represent the ACES block diagram.
  • Thomas Mansencal: We have something like that at work to emulate the UE4 exposure gain.
  • Kevin Wheatley: I have similar configs with LMTs too, but there isn't an example in the current ACES config. So this decision has no impact on OCIO, but what about on-set tools etc.?
  • Thomas Mansencal: On set you're usually using a LUT, so you can bake whatever you need into that.
  • Nick Shaw: Prelight and LiveGrade both bake everything into a LUT, so you can have as many steps as you like because you don't need to be able to do it real-time in a shader.
  • Kevin Wheatley: But, Thomas, you don't think an LMT can do what we need?
  • Thomas Mansencal: I suspect not, because you're working under a curve in the SSTS which changes per target, so the skews change per target. And even to use the "inverse of what's going to happen" approach, you need to know what the curve will be. And an LMT is blind to the display. You would need to bubble back information from the target device. It seems to me best to have two transforms – one with max display saturation and one with a path to white. Maybe you can mix between the two. An LMT would certainly be cleaner. If you have two LMTs people will ask "which should I use?" and you'll get a gazillion questions on ACES Central.
  • Kevin Wheatley: So the two rendering transforms approach is similar to what Daniele was proposing.
  • Garrett Strudler: But if the renderings are very different, a Look Transform will do very different things under them.
  • Kevin Wheatley: I think Thomas was talking about LMTs needing to be different per output device, so you need multiple LMTs. But a traditional LMT is a ball park grade, that only has to be tweaked per device if they are vastly different. So if you have two things that have to both be changed together they might as well be one thing.
  • Thomas Mansencal: It's like with the separate RRT and ODT curves, it's hard to tweak one because it interacts with the other. But the SSTS makes that simpler.
  • Alex Fry: We already have LMTs that are tied to specific RRT versions, such as the emulations of older RRT versions. Do others have examples of LMTs that are linked to particular ODTs?
  • Zach Lewis: We have LMTs that only work with particular ODTs because they are emulating e.g. the K1S1.
  • Kevin Wheatley: Yes, we have LMTs like that which aren't really LMTs any more because they just invert the RRT and ODT, so are tied together. But real LMTs do exist.
  • Zach Lewis: But other LMTs designed to be used in conjunction with a final LMT are "LMT referred".
  • Jean-Michel Gilbert: In my mind LMTs should be creative. On the block diagram the goal is to put your grade as an LMT, stacked with other LMTs like emulations.
  • Alex Fry: LMTs like Scott's PFE example are real LMTs that behave similarly under different Output Transforms. And Daniele has a whole bunch of them in Baselight, designed for T-Cam, but that do work under other renderings.
  • Kevin Wheatley: I've been making a list of these points in a document I can polish and share, so people can add to it.
  • Mike Best: Adding ACES to Photoshop has been talked about. Where is that on the roadmap?
  • Lars Borg: I can't speak for the Photoshop team, but with Photoshop being ICC based currently it's not simple, so won't be quick. But if we make ACES simple so anybody can use it with no knowledge of cinema production then that's an incentive for Photoshop to add it.
  • Kevin Wheatley:  We need to put a poll on ACES Central about meeting times with clocks changing in different time zones.

Meeting #11, March 10th 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Alex Forsythe
John Frith
Francesco Luigi Giardiello
Jean-Michel Gilbert
Ebs Hasche
Harvey Landy
Zach Lewis
Thomas Mansencal
Joseph McCormick
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Jed Smith
Pablo Garcia Soriano
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: Sean has done a new diagram on the Miro board. Anything else anybody wants to bring up?
  • Alex Forsythe: I've put a collection of relevant literature together on Zotero.
  • Kevin Wheatley: What Sean put together compares various options people have suggested, and breaks them down into steps, categorized as more or less subjective, and what's per show or per shot.
  • Alex Forsythe: This is spot on. And not that different to the system as conceived today. Mostly just the delineation of the blocks.
  • Sean Cooper: I did this because in our conversations it feels sometimes people mean different things by the same terms. So this separates vocabulary from the high level architectural steps. So I use the "Explain It Like I'm 5" approach, so we can agree on the conceptual blocks needed. Then we can discuss more easily where to divide them.
  • Lars Borg: I'm worried the existing framework is easy for a non-expert to set up, and the more customizable you make it the harder it gets for them.
  • Nick Shaw: I don't think we're proposing removing easy defaults. You don't need to see any of this if you don't want to and it will "just work" as it does now. But we're talking about putting the hooks in for those who need and want to customize.
  • Jean-Michel Gilbert: I'm curious about the LMT2 block in the last framework. Is that display-referred?
  • Alex Fry: That was one option discussed on Slack – a post rendering LMT.
  • Kevin Wheatley: First we need to agree on the blocks, then figure out what goes together and what might be replaceable.
  • Thomas Mansencal: It was an output-referred block that may be needed for some effects. But it's a can of worms.
  • Kevin Wheatley: These are all just proposals. Nothing is decided. I would focus on the top row of green boxes. Is anything missing?
  • Daniele Siragusano: This could be the reference implementation, but I would not make any block mandatory.
  • Sean Cooper: I saw it as describing the concepts in the RRT 'blob'.
  • Kevin Wheatley: The order may be ambiguous in some cases. E.g. the subjective appearance match between targets and the different peak luminances. These tend to be done at the same time in some form of tone-scale. But mainly we need to think if anything's missing, and consider the ordering.
  • Nick Shaw: Isn't that split put there so you can separate the difference of appearance between SDR and HDR into a second LMT if that's what you want?
  • Kevin Wheatley: I think we would focus on the blocks in the top row, independent of implementations.
  • Sean Cooper: When I did these cards I was representing the idea that at some point somebody decides that two targets are a good representation of the same image to them. To allow ambiguity in what an appearance match means.
  • Kevin Wheatley: I bring this up because you could make that a cut point where you can swap a block, or in Daniele's approach you just swap the whole rendering. They are all things you need to do, but it doesn't have to be these blocks in this order. If there are too many swappable blocks, it may be simpler to swap one big block.
  • Daniele Siragusano: This is good to talk about because it could be the conceptual framework for the reference implementation, but if you are swapping out the whole rendering you don't have to define what's in it. It just has to get you from scene-referred to display referred.
  • Alex Fry: It's really useful to see where these blocks are cut in the various possible implementations. The two step RRT+ODT or SSTS.
  • Nick Shaw: The SSTS really does everything, including display encoding in one block, even if it's conceptually split internally.
  • Sean Cooper: There's a missing dimension here, because as well as per show and per shot, there is per target.
  • Jean-Michel Gilbert: We like the SSTS for games because it's one big block with parameters. We use it for SDR and HDR and have a calibration screen in the game to determine the parameters. The user adjusts a slider to find Ymin and Ymax for their monitor by matching a white box, and Ymid is always 18%. Like Unreal Engine we bake the blue highlight fix into the output transform then invert it after the tone-mapping, just before display encoding.
  • Thomas Mansencal: You can't do this with the standard ACES block diagram. You need a post rendering LMT or a custom rendering.
  • Daniele Siragusano: If the fix is static you can do it if you allow swapping out the whole rendering. If you allow per shot grading after the rendering then you're back to the 90s.
  • Thomas Mansencal: I'm not advocating for it, but if you have the second LMT block it's simple to replicate what UE4 does. With strict ACES I can't match UE4.
  • Daniele Siragusano: I can't see a case where this would be necessary.
  • Thomas Mansencal: Sometimes it's quicker to make a display-referred trim when you're under pressure.
  • Daniele Siragusano: The director doesn't care how you get there. They don't demand display-referred edits. Maybe the current output transform is too limiting if it's hard to achieve. Same for differently tinted scenes. You don't need display-referred trims. You need a DRT that lets you do that.
  • Lars Borg: Directors just say "give me the colors I want". They don't care how you do it.
  • Thomas Mansencal: Yes, but as an operator, if you have a display transform that completely desaturates your highlights, and you are asked to tint them, then you are screwed.
  • Lars Borg: If the RRT is non-clipping you can correct in display space and invert back to scene space. Same result.
  • Thomas Mansencal: Not with the current RRT, because there are colors you can't reach.
  • Lars Borg: We need to fix that.
  • Daniele Siragusano: If you're using inverses it's still a display-referred edit. Many archives are limited because of this back and forth to Rec.709. Better to have an elastic forward transform that lets you get anywhere.
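The round trip Lars and Daniele are debating can be written in one line: a display-referred correction sandwiched between the forward and inverse transforms is, formally, a scene-side operation. The callables below are placeholders, and the equivalence only holds where the rendering really is invertible (non-clipping).

```python
def fold_display_edit(aces_rgb, forward_ot, inverse_ot, display_edit):
    # Render, edit in display space, then invert back to scene space.
    # All four names are hypothetical, purely to show the structure.
    return inverse_ot(display_edit(forward_ot(aces_rgb)))
```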
  • Lars Borg: Every edit is display-referred in a sense, because that's what you look at. But is the steering wheel directly connected to the display or to the gear? And we're discussing the gear.
  • Daniele Siragusano: That's the crucial part. 98% of the time working display referred makes your job harder.
  • Carol Payne: It doesn't have to be connected directly to the display, and not doing that means not limiting yourself to that display.
  • Thomas Mansencal: For live grading a display that a camera is pointed at, display referred makes things easier. Cedric mentioned film stock matching, and we also have UE4. Not advocating for it, but I see places where it has advantages.
  • Alex Fry:  Would you make the second LMT a first class citizen of ACES?
  • Thomas Mansencal: I would not necessarily expose it to users by default, but have the block there for when needed. Sometimes it's not going anywhere else, so it's just the right thing to do. Photographers work output referred, and then they print it and that's it.
  • Sean Cooper: If you only target one display, why do you need ACES?
  • Thomas Mansencal: If you are using UE4, ACES is what it uses.
  • Carol Payne: We need to consider other industries, and real-time systems like game engines. But with UE4 you can just stick an extra slot in there with a display-referred adjustment if you need it.
  • Chris Brejon: Troy used tinting highlights as an example that doesn't work in scene referred.
  • Daniele Siragusano: You can do it. You just need a non-cinematic view transform that allows you to do that. It's a case for the swappable DRT.
  • Kevin Wheatley: I think we should list these use cases. Then we can look at a reference rendering that covers as many of those as possible. We need descriptions, not just a list, to remove ambiguity.
  • Jean-Michel Gilbert: I wouldn't put UE4 matching on that list because there is legacy pre-ACES stuff, like display-referred blending modes that it supports.
  • Kevin Wheatley: In our case for on set live reproduction we turn all that off. We're working on things before the IDT using UE to try to make light in the environment look correct. We obviously still have to view that through an emulation of the rest of the ACES pipeline, or print emulation. There are different use cases for game engines. There are lots of effects – tinted monochrome, Pleasantville-style cutting between black and white and color – that might be an overall effect or done separately. So we should list the cases and see if they are common. If they aren't maybe ACES isn't the right thing for them. If they are, maybe it's a case for the replace the whole thing approach.
  • Lars Borg: If I have a rendering that gives me the yellow I want on one device but not another, I need a correction that's device specific. How can we manage that combined with per shot in this scenario?
  • Daniele Siragusano: It's the missing dimension here. You need all of this for each target group of displays. So an overall edit you don't change shot to shot becomes part of the output transform, and you have a 100 nit version and a 1000 nit version. In my version 2 on the Miro board I exploded it out to show the missing dimension.
  • Lars Borg: With a display referred correction it's not clear it's only for one display. A director may want a similar correction for multiple displays. So you have to make a unique correction for each class of display, that can be shared. Is that realistic? Will the director view every flavor?
  • J. Schulte: For us, directors view every version, from theatrical to airline.
  • Kevin Wheatley: In other cases the director only sees one or two of those.
  • Thomas Mansencal: I think Cedric mentioned Eclair doesn't limit version variations, and they even put different masks on different versions to emphasize areas of the screen for larger displays. They don't limit where those edits happen.
  • Lars Borg: What about different aspect ratios? How do you approve those?
  • Kevin Wheatley: It's out of scope but analogous to grading for different outputs.
  • Lars Borg: Color is just one bit of metadata. They all need tracking.
  • Kevin Wheatley: There's no standard for that, yet.
  • Jed Smith: This all shows we can't solve all problems with one transform. So there's a need for flexibility for different use cases. But what is the scope of this group?
  • Kevin Wheatley: We need to provide at least one reference rendering transform which improves on the current one. We need to define improved. We want it to cover say 85% of cases. For the remainder, technical people can replace the whole thing themselves. But if something is common but incompatible, that's a case for replaceable chunks.
  • DL: Looking from the other side, if it's the reference implementation, but there can be others, we can be more bold. Solve only 80% but really nicely.
  • Jed Smith: I agree. The more cases we try to cover the more compromised our solution gets.
  • Carol Payne: If I was voting right now I'd go for the safer option that covers 85% of cases.
  • Lars Borg: If 85% covers the non-experts that's good. The experts will be skilled enough that they don't need Academy help.
  • Jed Smith: I think non-experts are at least 85% of ACES users.
  • Daniele Siragusano: Shouldn't ACES also be for communication between high-skilled users?
  • Carol Payne: I think that flexibility can be solved in implementation.
  • Alex Fry: I did some tinkering in Nuke to get a feel for separating the TCT part from the DET. I have the 1.0.3, the 1.2, Jed's early Naive DRT and 0.1.1 outputting XYZ with the display encoding in separate blocks. Just to see what happens and where the weaknesses are. This is Scott's analytic LMT, so we can see it through the different renderings that it wasn't designed for. That last step ends up so small – is there value in chopping it off rather than using something like the SSTS?
  • Kevin Wheatley: It's small but separating the "on the wire encoding" can be useful. If somebody invents a new HDR encoding we don't need a new rendering transform.
  • Alex Fry: It's useful to look at a 100 nit rendering on a 1000 nit display, or a 100 nit slice of the 1000 nit one on an SDR display.
  • Jed Smith: I've been iterating on the naive DRT, looking for a transform that could go at the end, preserving as much detail as possible, be chromaticity preserving and chromaticity linear, so it's a more technical transform, and the look goes earlier. I'll make a new post to document and explain it. I've been experimenting with luminance weights to change the behavior of different colors as exposure increases.
  • Kevin Wheatley: We'll create a document listing the use cases with some specificity.

Meeting #10, March 3rd 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Chris Brejon
Daniel Brylka
Chris Clark
Sean Cooper
Alex Forsythe
John Frith
Francesco Luigi Giardiello
Ebs Hasche
Cedric Lejeune
Zach Lewis
Thomas Mansencal
Joseph McCormick
Daniel Mulligan
Michael Parsons
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: Has everybody voted in the ACES Central poll on "do we need to hit display gamut boundaries"? 2nd agenda item – hue preserving v single channel curves.
  • Alex Fry: There was a long discussion between Thomas and Troy on the Slack (parallel to ACES Central). We should discuss some of the themes from that. Mostly it was about hue preserving seeming desirable on paper, but less satisfactory when you look at images. Thomas said he tried making an LMT to give us the skews we like from the per-channel approach under a hue preserving DRT.
  • Kevin Wheatley: We should look more at this idea that not everything has to be in the DRT if it can be in an LMT, particularly for these preference decisions. Was there an answer about if this is possible?
  • Alex Fry: It wasn't conclusive whether it worked for all targets.
  • Thomas Mansencal: Yes. I didn't spend that long trying, but it's hard if you have near infinite dynamic range, and you're trying to bend the hues along the luminance axis. Maybe a different approach would work, but it's a lot of work if we want yellowish highlights etc. under a naive DRT. It's not trivial to emulate with an unbounded range something that previously came for free.
  • Nick Shaw: I believe Jed is working on some parameterized LMTs to work with his naive DRT and create those pleasing looks that we like.
  • Kevin Wheatley: This is not to solve the problem, but to test if the architecture can do what we might need. Is it feasible?
  • Alex Fry: Fire brings up interesting discussions. Nick had some stuff from the pre-film world that was interesting.
  • Nick Shaw: This isn't anything more than showing that painters like Hieronymus Bosch definitely used gradations from red to orange to yellow to white to convey hot fire. It's not restricted to preconceptions from film looks. Also the fire in the Terminator 2 opening titles has a lot of yellow.
  • Alex Fry: Isolating these effects to deal with them explicitly is a giant science project. Maybe beyond the scope of this group. There's a good case that we should just take the current one and address where it has issues, rather than rethinking from scratch. Or at least have a start point based on an improved version of the current one, and a mechanism to swap it out for something like this hue preserving one. Nick, do you want to show what you did exploring the gamut bounds issues with the current one?
  • Nick Shaw: This Nuke script is based on Jed's BlinkScript implementation of the 1.2 Rec.709 OT, but I've hooked up controls to toggle the 3 RRT sweeteners on and off and look at their effect on a backwards/forwards round trip. And I put an image in there to show what they do to that. You can see that particularly the red to green to yellow edges have a big slice squeezed out of them. Saturated yellow gets very squeezed towards white. That comes from the global desaturation, and with that turned off we get full yellow back. It's the red modifier which distorts this lower part of the reds. And the effect of the glow module is pretty much invisible. You can see the effect of the red modifier on the red patches of the Macbeth chart, so that's there for a reason. But arguably the changes a colorist would make will be far more significant. So should that be part of the RRT? Also the global desaturation, which has a major impact on invertibility, but a very minor effect on the image. I wondered if the different matrix given for tone-mapped AWG in the ALEXA LogC curve usage in VFX white paper is broadly serving the same purpose as the global desaturation.
  • Joshua Pines: It may just be more related to what they had to do to get a nice looking image with the limited processing the original ALEXA hardware was capable of.
  • Nick Shaw: That's basically all that's in the K1S1 LUT everybody likes so much – just curve, matrix, curve.
  • Joshua Pines: It was built based on limitations, but I don't think ARRI would say it's the "correct" way to do it. When the hardware changed their processing changed.
  • Nick Shaw: Yes, as soon as the next generation of ALEXAs were capable of 3D LUT processing they added a more nuanced look in ALF-2, because they could.
  • Sean Cooper: I've only had 2 months' experience at ARRI, so I can't comment on history, but I can ask Harald and his team.
  • Nick Shaw: There's a post from Harald where he describes the processing that makes up the K1S1 LUT.
  • Alex Fry: Alex, can you give any more insight into the background of the RRT sweeteners?
  • Alex Forsythe: They sort of came out of historical attempts to match previous RRT versions. We did realize at the time they had minimal impact and that global desaturation affected gamut coverage. The red modifier was most important. Reds can get very hot very fast. That's why the red modifier is as it is in the 1.0 RRT. We need to pay attention to that.
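For reference, the global desaturation under discussion is a linear pull toward a luminance axis. A minimal sketch, assuming AP1 luminance weights and the 0.96 saturation factor from the published CTL (the CTL remains the authoritative form):

```python
import numpy as np

AP1_Y = np.array([0.2722287, 0.6740818, 0.0536895])  # AP1 luminance weights

def global_desat(rgb, sat=0.96):
    """Lerp each pixel toward its luminance. sat=1.0 disables it, which
    is what restores fully saturated yellows in Nick's demo above."""
    y = (rgb @ AP1_Y)[..., None]
    return y + sat * (rgb - y)
```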
  • Alex Fry: If those are pre tone curve, can we put the sweeteners in an LMT?
  • Alex Forsythe: We considered that at the time, and it gets into the philosophical "falling off the truck" discussion. If you put things that are necessary in an LMT, does that make the LMT required?
  • Nick Shaw: If we do change the architecture slightly, we can make it so people get this LMT by default, but they can turn it off if they need to hit all the corners.
  • Sean Cooper: If you create an LMT that's needed for a pleasing image, and 90% of users work under it, have you accomplished anything? And it raises an interesting question about invertibility. To get back to scene-referred values, the LMT is in the way.
  • Kevin Wheatley: The key question would be: is it 90%? If I picked a number I would invert that ratio in terms of what we see. Looking at the (low sample count) 21 votes, 5% do not care about hitting the boundary, 62% want to cover any display, and currently 33% want to cover Rec.709. So a majority want full coverage. Can we come up with something that solves these issues but is less strong? That would be an alternative to a swappable LMT that you can turn off.
  • Nick Shaw: Maybe the response is affected by the 3 options in the poll. If asked "Do you want to be limited?" people may instinctively say "no, don't limit me", so they don't have anything taken away, just in case. Maybe the wording could have been better.
  • Sean Cooper: Just to emphasize, I'm not specifically advocating for the sweeteners. But if 90% of people need something there…
  • Alex Fry: Different people may want different compromises, and an LMT makes it easier to choose.
  • Sean Cooper: It seems a good idea to define the goals of the rendering, and ideally put anything preference or aesthetic in an LMT. Just bringing up separation of responsibility in the architecture.
  • Alex Forsythe: Compensating for viewing environment and adaptation state all seems very objective, but I don't think the objective and preferential can be fully separated. We don't have a purely objective model.
  • Kevin Wheatley: Is that because you need to look at a picture to conceptualize, and then you can't help making aesthetic judgements?
  • Alex Forsythe: As Thomas said on Slack, there's no such thing as "no look". Unless you can objectively back up having matched the appearance of one state in another, you've made a choice.
  • Thomas Mansencal: And that's good. When film was developed people didn't just look at numbers, curves and charts. They looked at images. You can't put infinity in a container by looking at numbers. There has to be subjectivity.
  • Sean Cooper: You need a lot of manipulation to take a 32000 nit scene and match it on a 100 nit screen. But the objective of appearance matching seems a good primary goal. Aesthetic preference should be secondary.
  • Thomas Mansencal: I'm not sure there are any color appearance models to achieve that objective with full objectivity.
  • Sean Cooper: There's no calculator that gives us an appearance match for free, but it should be the primary conceptual objective.
  • Thomas Mansencal: I don't disagree. But it's only conceptual. We don't have that calculator, or even the data.
  • Joshua Pines: There is an elegance to separating the aesthetic elements from the purely technical. I've heard that Eastman Kodak had many more psychologists and psychophysicists on staff than color scientists or even chemical engineers. They had to put the subjective into their rendering due to the limitations they had. We shouldn't necessarily do that, but maybe as well as this group of color scientists, we need someone from the other side of the brain in this conversation.
  • Alex Forsythe: We had a big department called Human Factors focusing on image quality and psychophysics.
  • Joshua Pines: We can separate it into two stages, which Kodak couldn't, and that's not a bad thing. It's an enticing idea. It's hard these days to find one thing that all filmmakers will find pleasing. We want it to "look good falling off the truck" but be sure the colorists can get where they want to.
  • Sean Cooper: My point is that just like the RRT is a concept because no display can show ACES, it's still a conceptual objective. Appearance matching isn't a bad objective, even if it's conceptual.
  • Alex Fry: I think Troy's point was if you break it apart and don't have these effects happening "by accident" you have the opportunity to explicitly model them. But then we have to work out how to explicitly model the stuff we've been enjoying (or not) for free.
  • Kevin Wheatley: Why do we think the existing CAMs fail?
  • Alex Forsythe: We looked at CAT02 and CAT16 to a degree, and they don't work for images – it comes out pretty ugly. They are designed for flat patches of color, even though Windows Color System tried to use CIECAM02. People have tried to use image appearance models, even though those often include spatial processing. A few people tried, e.g. Mark Fairchild's iCAM06, but they never got traction.
  • Thomas Mansencal: As soon as you do convolution there is no possibility of inversion, so I wouldn't go there.
  • Kevin Wheatley: I think that's out of scope for 2.0, and maybe at all. We still need a colorist, and some aesthetic choices are for them to make. Back to CAT02, did you use that for appearance matching or for the gross effect of scene intensity to display intensity?
  • Alex Forsythe: We tried it in the RRT slot for scene space to output space, and it didn't work well for that. Maybe for dim to dark surround it could work. I haven't tried. That's what Windows tried with CIECAM02.
  • Thomas Mansencal: It's fair to say they do a bit of rendering – modifying contrast, changing saturation. Stacking them at the end would be doing rendering over rendering. So we would need to break them apart and maybe only use some modules for things like surround compensation.
  • Kevin Wheatley: Back to what Sean said about the RRT targeting a virtual display. The ODTs do some aspects of appearance modeling, that maybe something like CAM02 could handle. But we still need the main rendering, or multiples because we still have to consider Daniele's suggestion.
  • Alex Forsythe: That's why I was asking Daniele on Slack what experiments they've done with T-Cam.
  • Daniele Siragusano: We made a small set and put a camera on it. We measured with a spectrometer and corrected for IDT mismatches. Then we looked at the scene on an HDR monitor that we could virtually vary from 100 to 1000 nits. We had a tuneable backlight to change to dark or dim surround etc. We validated the domains we worked in while looking at the monitor, and then went to a cinema and looked there. It wasn't truly scientific, but we were aiming for pure appearance matching. We made a low contrast scene and looked at what worked and what didn't. Then we added scene looks to do all the other stuff, so there was no preference rendering in the DRT. T-Cam on its own is not a nice looking image, but it gives the colorist back the "bypass all" effect. You say "here is the boring rendering of how it looked on set", then you add a grade and people go "wow!" Everyone's happy. It's actually simpler than you would think.
  • Thomas Mansencal: Simple and elegant is what is hard. CAM02 is very complex.
  • Daniele Siragusano: And it's a patchwork on top of CIE19… It's dirty because it's a patchwork on a patchwork, and then you have a hue correction factor…
  • Alex Forsythe: It is ugly!
  • Sean Cooper: Might that not be what we should aim for? Not pretty, but an aesthetic match between scene and various devices? Keeping in mind technical transforms and enabling colorists to do what they want.
  • Joshua Pines: In principle I agree, but there was concern in early ACES development that people would say "let's try this new ACES thing" and think it didn't look any good and give up on it. So it's the balance of looking good vs. the elegance of simplicity. So if we can separate the two so people get a good looking default that can be easily replaced, that's OK.
  • Alex Fry: I think everybody's on board with that. It's just how do we break it apart?
  • Kevin Wheatley: Or how do we put constraints or requirements in place, because we need to complete that list and move on to more specific investigations.
  • Sean Cooper: So we still need to define a conceptual framework, and decide where boundaries are and the intent of different pieces.
  • Daniele Siragusano: The Academy don't have to do what we did. It's only one way, and there could be another.
  • Kevin Wheatley: There are two aspects – what should the ACES framework support, but also this group has to provide a rendering that improves on the current one, whether we end up with an architecture you can plug other renderings into or not.
  • Thomas Mansencal: You can make an OCIO config with whatever rendering you want now. There isn't a metadata standard for tracking it, but it's possible.
  • Kevin Wheatley: We do that all the time, but those don't tick the 'ACES' box, or require somebody like Josh to do a lot of work on the back end to fudge it.
  • Joshua Pines: If we were to split the current RRT and put the sweeteners in an LMT as a straw man, we could attack it from both sides, looking at alternative looks and alternative renderings.
  • Daniele Siragusano: Then the main look is from the RGB tone-mapping, but also the problems. We want yellow flames, but only get that in SDR, so you can't really put that in an LMT. And the choice of primaries also affects things.
  • Alex Fry:  Which takes us back to simulating this effect with an LMT and the hue preserving rendering.
  • Kevin Wheatley: But we still need a mechanism to produce values near the display boundary.
  • Sean Cooper: If we end up with different looks for different output devices, based on the RGB tone-mapping, that's a failure.
  • Alex Fry: Does the Bezold-Brücke Effect give you some of that yellow appearance at 1000 nits, and we have to fake it at lower brightness output?
  • Thomas Mansencal: We should also check with the same HDR display simulating SDR, so you take the different displays out of the equation, to separate observer metamerism from actual skews.
  • Alex Fry: How many people here have HDR displays for testing? Not many.
  • Sean Cooper: I wasn't talking about fire specifically. Just the fact that you can't get the same appearance on different displays if hue and chroma are being changed by different tone curves.
  • Daniele Siragusano: I can say it's not a perceptual effect where one backs off and the other kicks in, because it goes in crazy directions.
  • Sean Cooper: You can have whatever effect you want, as long as it's reproducible across all devices, or you've failed.
  • Daniele Siragusano: The whole image doesn't get lifted the same as the peak ratio. Most of the picture is only a little brighter. So it's only the compression between 50 and 100 nits that gets expanded, and that's not enough to cause bleaching in the eye in my experience. But you should do your own validation. We found that with RGB tone-mapping whatever rendering primaries (RIMM/ROMM, whatever) we used we could only reduce the effect, not eliminate it. Then you're choosing rendering primaries to minimize the other effects.
  • Kevin Wheatley: Cedric has commented in the chat about the comparison of standardizing format and sprockets vs. emulsion. We definitely want to standardize the architecture – the sprockets and the format. Any other points, Cedric?
  • Cedric Lejeune: The LMT can be confusing. Sometimes it's the grading, sometimes it's like the film stock, and I think we need to separate the two concepts.
  • Kevin Wheatley: That supports the concept of two LMTs, or an LMT and something else.
  • Cedric Lejeune: One on the scene side and one on the display side.
  • Daniele Siragusano: Can we agree we are against user based edits on the display side? (general agreement) If we allow display-referred edits per shot we're in real trouble.
  • Kevin Wheatley: So no display referred adjustments if possible. Sean is suggesting a vanilla rendering, with more in the look. It still feels we are missing something. Can people think before the next meeting what's missing, or how we evaluate it that is sufficient?
  • Thomas Mansencal: So where do gamut mapping and viewing condition adaptation happen?
  • Kevin Wheatley: I'd say post DRT but before colorimetry. It's at that point where you have a target gamut. After that nothing fancy happens. It's just mapped, and either fits in the gamut or gets clipped.
  • Thomas Mansencal: That's where you know what your target display is, so we don't need to send that information back upstream.
  • Daniele Siragusano: I'm not saying we can't do anything at that stage, but it should be part of the rendering, not something the user can load an LMT in.
  • Nick Shaw: Part of the system, not something the user gets to fiddle with.
  • Kevin Wheatley: It would be good if anybody has anything they would like to contribute. Hopefully we'll have an updated sample implementation from Jed. But it would be good to get sorted on the architectural stage to say we're 90% there so we can focus on the individual blocks.

Meeting #9: February 25th 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Chris Davies
William Feightner
Alex Forsythe
Francesco Luigi Giardiello
Ebs Hasche
Zach Lewis
Thomas Mansencal
Joseph McCormick
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Mike Whipple
Raymond Yeung

Meeting Notes

  • Alex Fry: Main outcome of TAC meeting was we need to clarify what we are trying to achieve, to see conflicts and where cut points should be.
  • Scott Dyer: A few major discussions. 1) Overall architecture – how we break up the diagram to be flexible but maintain goals. 2) Tone mapping and rendering intent – matches between HDR and SDR. 3) Chromaticity preserving or not. 3a) Gamut mapping – compressing OOG colors. Looking at e.g. Jed's chromaticity preserving approach, can we make an LMT which fixes the downsides – skin, fire, etc. Work out what skewed chromaticities would be with RGB 1D, and selectively apply those in a controllable way. I plan to experiment with this.
  • Alex Fry: Logical to have a chromaticity preserving RRT and an LMT that adds desirable effects, not the other way round.
  • Nick Shaw: How would that be affected by the fact that skews are greater the further you are into the roll-off, which varies with display dynamic range? So LMT would be different per target display.
  • Alex Fry: We should maybe look at this in terms of the Bezold-Brücke effect, where it may be happening naturally at high luminance, and we simulate it for SDR.
  • Thomas Mansencal: I tested the pyro. It's hard because we've been looking at fire for thousands of years and know how it should look. With Jed's DRT and my quick HSV one, it's hard to get it to where you want. We need a solid solution for fire and skin.
  • Alex Fry: 1D RGB gives us a lot of nice things for free. Troy and others think they're terrible accidents we should get rid of. Those need to be replaced with something.
  • Thomas Mansencal: It's not trivial to solve, but hopefully we are the people to do it.
  • Alex Fry: Is anything else out there hue/chromaticity preserving?
  • Thomas Mansencal: T-Cam?
  • Daniele Siragusano: It has no hue bends in the normal areas of the image.
  • Nick Shaw: Are people needing to use a Truelight scene look (LMT) with T-Cam, or are they just grading through it?
  • Daniele Siragusano: Both. People who want filmic looks are using our scene looks. Others just grade. Orange explosions are tied to the discussion of filmic rendering.
  • Thomas Mansencal: Explosion images (stock footage, nuclear tests) I found online were all orange. But I don't know how those were processed. May be worth looking at fire in old films. But it's never pinkish like the naive DRT gives us.
  • Alex Fry: We think fire is orange to our eyes. Or are we remembering the movie look?
  • Sean Cooper: We shouldn't jump too quickly to chromaticity preserving as the right solution. We're making pictures, not graphs. We should pinpoint what we like and don't like about what happens with RGB tone-scales. Yellow/orange maybe good, not purple/cyan.
  • Daniele Siragusano: What about the idea of an objective transform, rather than what we do/don't like? Maybe we should start by reproducing a scene.
  • Sean Cooper: Aesthetics don't have to be in the rendering but we should identify the likes/dislikes.
  • Nick Shaw: I wouldn't call them random skews. We know where and why they happen, and we lean into them where we like the result.
  • Thomas Mansencal: And colorists know where it's problematic and correct for it.
  • Sean Cooper: We need to see if we can achieve those things we like through a straw man DRT. If not we iterate.
  • Scott Dyer: We need prototypes to test to help understand and refine the requirements. We need to hone in on what we want to achieve and try stuff. Create action items so we don't get stuck.
  • Alex Forsythe: It's easy to just reproduce scene colorimetry on a screen (within its limits). But I think we agree we need some form of rendering.
  • Kevin Wheatley: Maybe we should do that to show what happens with no rendering, just relative colorimetry.
  • Alex Forsythe: That might be useful to compare against different renderings. But I don't think it's desirable as a finished version, although more or less so in different environments.
  • Daniele Siragusano: I didn't mean colorimetric.
  • Alex Forsythe: We should be clear on that. Some people think that is the goal.
  • Daniele Siragusano: We need color appearance models to compensate for luminance, dynamic range etc.
  • Lars Borg: Maybe in some situations (on set?) if the display has the same dynamic range as the scene we could reproduce it colorimetrically, but we want a filmic look for our movie. There are three parts – viewing condition compensation, gamut mapping and filmic look.
  • Kevin Wheatley: Filmic look or not is one controversial point.
  • Daniele Siragusano: You often want that, but the start point should be something which just renders the same scene well enough on all displays.
  • Lars Borg: The original RRT goal was a filmic look.
  • Alex Forsythe: That came and went with versions.
  • Lars Borg: But the RRT doesn't do gamut mapping or adjust for viewing conditions. It's not universal. There is only one.
  • Alex Forsythe: Historically we tried many things (RGB, luminance/chrominance etc.) and looked at PFE as a reference which people liked. So we built an analytic PFE that turned out to be very complex, not tunable and had some artifacts. So we simplified the model keeping the broad aims, and that was 0.1.1. We found that was great for those who wanted that aesthetic, but limiting for those that didn't. So we started to remove filmic aesthetic from the RRT. Ed's original paper just talked about viewing environment compensation.
  • Nick Shaw: Scott, didn't you use stuff from the original filmic look experiments in your analytic PFE LMT? An LMT seems the right place for those things. But it's hard to "look good falling off the truck" while staying away from aesthetic choices.
  • Alex Forsythe: One person's neutral is another person's look.
  • Thomas Mansencal: Even plain sRGB inverse EOTF is an aesthetic choice.
  • Kevin Wheatley: At least we can objectively reject that one as it doesn't satisfy our requirements.
  • Alex Fry: We want it to not skew, clip or run into the edge of gamuts.
  • Kevin Wheatley: Sometimes I do want it to run into the edge of the gamut.
  • Thomas Mansencal: That conflicts with the idea of roll-off unless you do something after the DRT in display space.
  • Kevin Wheatley: Probably don't need to hit the edge of really wide gamuts.
  • Thomas Mansencal: Close but not 100% is probably ok, but the current RRT means you can't reach saturated Rec. 709 yellows.
  • Alex Fry: On LEGO we had to push hard to get the saturated values we wanted, but it was all under the RRT. LEGO 1 was 0.1.1 and LEGO Batman onward was 1.0. The look of LEGO 1 benefited from the filmic stuff in 0.1.1. LEGO Batman pushed things harder.
  • Alex Forsythe: A good goal is to be able to reach those limits without a ridiculous LMT.
  • Scott Dyer: Or having to push grading controls ridiculously far. Colorists complained about that with early versions. Also back then many colorists were used to display referred grading.
  • Alex Forsythe: It's important to differentiate feedback based on not being used to grading under a LUT, because the controls don't react how they expect.
  • Daniele Siragusano: That was 10 years ago. Now scene-referred grading is well understood by most colorists.
  • Nick Shaw: Any colorists, or people who grade regularly in this group? Their feedback is important.
  • Joshua Pines: Unfortunately it's a bi-modal distribution. Some expect it to look good with the knobs at detents. Others understand that as long as they can get where they want, grading is their job. So the question is what should the default be? Looks good or maybe flatter but easy to get where you want?
  • Thomas Mansencal: They still shouldn't have to do all their work again when you switch SDR to HDR for example.
  • Daniele Siragusano: It's much easier to crush stuff you can initially see than have to dial back first to see what's there, then push out again to where you want. We found a less aggressive start is better, so you're always grading into the soft-clip.
  • Alex Fry: Is hitting all code values necessary?
  • Thomas Mansencal: I say it's pedantic to say all. To me appearing to be at the edge of gamut is enough. We could find e.g. a suitable deltaE.
  • Daniele Siragusano: Isn't this related to the working space? You can reach any point on any display if you go far enough out and put enough energy in there.
  • Alex Fry: I mean display, so reaching all the bounds of the display gamut.
  • Thomas Mansencal: However hard you push currently you can't reach saturated yellow.
  • Joshua Pines: Graphics are done display referred. If somebody typed RGB = 1, 0, 0 they expect to see that. For Rec.709 that's a problem today. Rec. 2020 maybe less so.
  • Scott Dyer: I test using a cube in display space and go inverse and forwards, and you want it to get back where it started even if it's a ridiculous ACES value in between.
  • Daniele Siragusano: But if it's not a sensible value it will explode in HDR, which doesn't help with multi-delivery.
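Scott's cube test can be sketched as below; `inverse_ot` and `forward_ot` are placeholder names for whatever Output Transform implementation is under test.

```python
import numpy as np

def cube_roundtrip_error(inverse_ot, forward_ot, n=17):
    """Send a display-referred RGB lattice through the inverse Output
    Transform and back, and report the worst absolute deviation."""
    g = np.linspace(0.0, 1.0, n)
    cube = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
    return np.abs(forward_ot(inverse_ot(cube)) - cube).max()
```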
  • Joseph McCormick: If it's to be truly universal we can't start with restricted criteria. Nobody wants to be the one who says a look is required, so we're stuck in a loop.
  • Chris Davies: Can we take a vote?
  • Thomas Mansencal: On ACES Central to open it up to more people.
  • Joshua Pines: Our colorists expect a DRT where they can get wherever they want, but for on set we give the DITs something more restricted for dailies. Then in DI we turn that trim (LMT) off. So a one-size-fits-all will be hard, unless we are saying we recommend a default LMT.
  • Carol Payne: I was thinking we would recommend a default LMT. Is that the decision – do we lean hard on the LMT, or have multiple DRTs? A starting point would be to write down what can and can't be done as an LMT.
  • Alex Fry: You have to decide, is everything in an LMT, and the DRT is just AP1 to display, or do you have roll-off. If you have roll off does that stop you hitting the corners?
  • Thomas Mansencal: Having a desat to white will limit what you can do in an LMT.
  • Chris Brejon: I've been doing grading tests with the naive DRT and found exactly that. I couldn't reach the saturation I wanted. So I wondered if display referred grading is an option.
  • Thomas Mansencal: It's a can of worms, but a last resort. It's conceptually dangerous.
  • Daniele Siragusano: Then you need to grade for every deliverable. There's no scene-referred archive master.
  • Thomas Mansencal: It's not desirable, but maybe for logos etc. it's necessary, or you need a different solution.
  • Lars Borg: For only one target we could have only colorimetric mapping, and the colorist is responsible for rendering, gamut mapping, film look. But that's not the normal situation. Maybe it should be a goal that the trim passes for different devices should be minimal.
  • Thomas Mansencal: And that reduces cost.
  • Sean Cooper: Rod Bogart said in the TAC we should list what truths should be "pinned to the wall" and that could be one. Also as an analogy, the two possibilities are either "water" that takes the shape of a container, or a "balloon" that just fills the volume, not every crevasse.
  • Lars Borg: You could get very different results on different devices. If one device can do an extreme red that another can't if you use that red you change the creative intent and emotional result.
  • Sean Cooper: I was more thinking about defining which part of the architecture was which aspect. So maybe an output transform that just maps to a device, whose job is space filling, and the rendering is responsible for the uniform aesthetic across all devices.
  • Thomas Mansencal: It's difficult because they are all tied together.
  • Sean Cooper: Yes, but I'm saying if a rendering passes something display referred to the 2nd half which does gamut mapping, that second half only cares about the boundaries of the gamut it needs to fill (discounting viewing conditions).
  • Alex Fry: It's tough with 2D flat graphics that go from scene to screen. Do you put the look in an LMT, then composite the graphics afterwards? What does the scene mean for a logo, if it's a flat thing not on scene geometry? There's a conflict.
  • Daniele Siragusano: I think we are focused too much on the logo edge case. We need to have a way to make that work, but 99% of the pixels are scene colors.
  • Joseph McCormick: It's super project dependent. In non-fiction there's a lot of archive. You want it to look the same, but when going from a smaller to larger gamut or dynamic range, what do you do? You have to tell ACES that you want to limit it to what the source was.
  • Daniele Siragusano: My answer is "different approaches for different projects".
  • Joseph McCormick: One other thing, can the Dropbox list of requirements be moved to a collaborative Google Doc?
  • Scott Dyer: You can comment on it already. I can give edit permissions to the Dropbox Paper to people who ask me.
  • Alex Fry: We'll put a poll up on ACES Central for "do we need to hit all display code values?"
  • Thomas Mansencal: Any practical tasks we need to do?
  • Alex Fry: An LMT that captures what we like about the current rendering through the naive DRT would be good. Thomas do you want to talk about your prototype?
  • Thomas Mansencal: It's a simple HSV gamut compression, because that's hue invariant. The white desaturation uses reciprocal compression. It does something very similar to Jed's, but it's easier to understand what's happening, using HSV saturation to gamut map in the display space (a sketch of this appears after these notes). This is like the output referred LMTs I was talking about.
  • Alex Fry: It's great to have things to play with.
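
A minimal sketch of the kind of operation Thomas describes above – hue-invariant gamut compression applied to HSV-style saturation with a reciprocal curve, in display space. The threshold and limit values are illustrative assumptions, not taken from his prototype.

```python
import numpy as np

def reciprocal_compress(s, threshold=0.8, limit=1.2):
    """Reciprocal (1/x-style) compression of values above `threshold`,
    placing `limit` exactly at 1.0, with a slope-matched join.
    Values beyond `limit` still land slightly above 1.0."""
    a = (1.0 - threshold) / (limit - 1.0)
    u = np.maximum(s - threshold, 0.0) / (limit - threshold)
    compressed = threshold + (1.0 - threshold) * (1.0 + a) * u / (u + a)
    return np.where(s <= threshold, s, compressed)

def compress_saturation(rgb, threshold=0.8, limit=1.2):
    """Hue-invariant gamut compression on display-referred RGB.
    Out-of-gamut colors have a negative channel, so their HSV-style
    saturation (max-min)/max exceeds 1.0; compressing saturation pulls
    them toward the neutral axis without changing hue or value."""
    rgb = np.asarray(rgb, dtype=float)
    mx = rgb.max(axis=-1, keepdims=True)
    mn = rgb.min(axis=-1, keepdims=True)
    s = np.where(mx > 0.0, (mx - mn) / np.maximum(mx, 1e-10), 0.0)
    k = np.where(s > 0.0,
                 reciprocal_compress(s, threshold, limit) / np.maximum(s, 1e-10),
                 1.0)
    # Scaling every channel's distance from the max by k rescales
    # saturation by k while leaving hue and value untouched.
    return mx - (mx - rgb) * k

# e.g. a slightly out-of-gamut red is brought inside the display cube:
print(compress_saturation([1.0, 0.2, -0.1]))   # -> ~[1.0, 0.29, 0.03]
```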

Meeting #8: February 17th, 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Alex Forsythe
Joe di Gennaro
Francesco Luigi Giardiello
Ebs Hasche
Zach Lewis
Thomas Mansencal
Joseph McCormick
Michael Parsons
Carol Payne
Matthias Scharfenberg
Florian Schleich
Daniele Siragusano
Jed Smith
Pablo Garcia Soriano
Doug Walker
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: FYI, we will be presenting an update report to the Architecture TAC later today. Last week we looked at the Miro board and discussed Daniele's suggestion of changing the cut points in the OT.
  • Chris Brejon: I updated the Miro board based on my interpretation of Daniele's suggestions.
  • Daniele Siragusano: It's close but not identical to mine.
  • Nick Shaw: There’s been ACES Central discussion related to the Giorgianni document that Alex Forsythe posted. That document is very informative.
  • Kevin Wheatley: Daniele, how does Chris' interpretation differ from your idea?
  • Daniele Siragusano: It makes it look like it forks from the LMT to different mastering displays. In my drawing the mastering display is just a parameter in the DRT boxes. I would say your diagram is missing the actual DRT. You're defining viewing conditions, but not the transforms into those. You need to add the Output Transform where the LMT is now, and the DRT in front of that. You are overemphasising the mastering space.
  • Chris Brejon: Like this?
  • Daniele Siragusano: Yes. If you are discussing how one color management workflow could be, thinking about the architecture for a while is no bad thing.
  • Chris Brejon: The question Jed and I had was: if you do gamut mapping, it can't be in the DRT, because that is display dependent.
  • Daniele Siragusano: That is an implementation issue, though it's a valid question. But architecturally you don't care what you are doing or how you are doing it. You just put it in that block on the diagram.
  • Scott Dyer: Where in this diagram would a switch for intent go? E.g. appearance matching vs. optimising for each display. Ed hints at that, but we don't currently support options there.
  • Daniele Siragusano: On my original diagram that would be swapping out the middle part for a family with a different rendering approach. I think it's easier to swap out the whole rendering pipeline rather than try to have an "intent" parameter like ICC does.
  • Alex Forsythe: In ICC terms "rendering intent" means "gamut mapping strategy".
  • Chris Clark: Scott, are you talking about e.g. outputting PQ encoding with a Rec.709 rendering?
  • Scott Dyer: That's something different, which is easy to achieve. I mean when people say they want their HDR to look like their SDR they don't want it to reveal extra range or gamut from the source, but still be brighter and use the range and maybe gamut of the monitor, so they feel like the same representation of the scene. We need terminology for this vs. optimising for all an HDR display can do, revealing more scene information. We want the system to be able to do this. The other simulation idea of toggling SDR and HDR renderings on an SDR display is easy.
  • Nick Shaw: You just hook a rendering up to the "wrong" display encoding.
  • Lars Borg: Separating the gamut mapping from the encoding of the device data. One concern is when I put SDR on my HDR display, where do I put the peak SDR brightness in HDR? 100 nits to match studio reference, or 200 or 300 nits like a consumer HDR display (see the PQ sketch after these notes)? There should be a control for this. And to be able to emulate a monitor I don't have – an LG display with certain APL conditions.
  • Daniele Siragusano: You would use a different rendering transform targeting a lower nit peak display, then a colorimetric transform to your mastering display. This is what we were talking about earlier with the mastering color space, which may be part of the DRT. So you can see a 600 nit or 400 nit rendering, even though you have a 1000 nit monitor.
  • Chris Clark: Would you need to have a hierarchy of gamut or color volume, as you can obviously only simulate a less capable display? You can't simulate 1000 nits on a 100 nit monitor.
  • Daniele Siragusano: You can simulate the first 100 nits, which can be useful.
  • Kevin Wheatley: I've seen the use of a 300 nit monitor to give you some extra over what 100 nits would show.
  • Thomas Mansencal: But you don't have the contrast, so your blacks are raised.
  • Nick Shaw: That's what Apple call EDR, where you can have a bit of "HDR" on a 500 nit display like the 16" MacBook Pro, putting 100 nits at 20% of peak.
  • Daniele Siragusano: You could create a DRT that does that on a 300 nit 2.2 gamma monitor.
  • Nick Shaw: This is a sidetrack, but Apple use floating point buffers, where 1.0 goes to notionally 100 nits (but I think it's affected by your brightness control) and >1 values go higher.
  • Daniele Siragusano: You are referring to sRGB+, or P3+?
  • Nick Shaw: I think what Lars was talking about was the fact that a reference SDR monitor is 100 nits, but because SDR is relative not absolute, once it goes to consumers their TVs may be, say, 300 nits, and you may want to emulate that.
  • Alex Fry: I assume for something like that you would store it in absolute nits, so not 1.0 but 300, or 48, whatever.
  • Lars Borg: Unless you work in a reference environment, your actual display won't be 100 nits. Are we assuming that all these devices, even for preview by creatives, are in the reference conditions?
  • Alex Fry: For the fixed ones we've got to assume it's in the intended environment.
  • Thomas Mansencal: You have to assume it's calibrated to spec, or you can't do any work.
  • Alex Fry: There's an argument for other OTs targeting brighter rooms and surrounds, but when targeting a specification it's got to be what it says on the tin.
  • Kevin Wheatley: That's what we do. With a row of monitors, even if there's more light at one end of the room, we assume they're all the same. Preset a number of conditions you're willing to calibrate for.
  • Lars Borg: So nobody's verifying that just because it looked good at 100 nits in a reference viewing environment, that it doesn't look like crap on a 300 nit display in a living room?
  • Thomas Mansencal: You can't account for that. You have things like Philips Ambi-light with shifting LED colors. You can't do anything about that.
  • Nick Shaw: That's what all TV looks like to that person. Everything they watch was mastered at 100 nits in a reference environment, and they look at it however their TV is set up – that's what they're used to.
  • Lars Borg: True!
  • Kevin Wheatley: We do have a consumer TV in the color suites. It is calibrated and has a D65 bias light. It's a "this is as good as you're going to get at home" reference. You would go round in circles if you chased two reference monitors.
  • Alex Fry: At a certain point it's the TV manufacturer's problem. If they are running way out of spec, they need to compensate. Unless there's a new 300 nit TV standard we need to target.
  • Thomas Mansencal: Dolby Vision IQ, presented at CES 2021, adapts the TV to the environment.
  • Alex Fry: Surround variation is a factor to consider. Daniele, Baselight has a bright surround option as well as dark and dim, yes?
  • Daniele Siragusano: That's something needed from an implementation standpoint. But for architecture you have as many as you need. But yes, for implementation, surround could be a parameter.
  • Alex Fry: Would that go on the left or right of this diagram?
  • Daniele Siragusano: Each is a different member of the group in the middle. Same viewing condition needs the same display colorimetry. You could have two outputs with the same encoding, but targeted for different surrounds. It's like the different HDR peak renderings, all encoded as PQ.
  • Alex Forsythe: It's important to separate the display colorimetry from the encoding. The encoding is a last technical step. The important distinction is what is the colorimetry you're aiming for.
  • Nick Shaw: Current ACES (per CTL) has the same display XYZ colorimetry, and Rec.709 primaries, just encoded with different EOTFs. Baselight, and this diagram, treat them differently because they have a different surround.
  • Kevin Wheatley: So one of the other potential intents is how we approach the boundary of a display gamut, with a hard clip or a "path to white" as Jed has shown. The current implementation only has one rendering.
  • Daniele Siragusano: In this diagram that is handled by switching out the DRT family.
  • Kevin Wheatley: So Daniele mentioned contrast ranges of displays, but has anybody done anything except black point compensation or flare? Is that part of the environment condition or something separate?
  • Daniele Siragusano: Conceptually I think it lives in the Output Transform. If you compress shadows as well as highlights, it's a function of each viewing condition. You might do something different for projectors, which have different types of flares. And something different again for a 300 nit active screen with a dark surround and much larger ISO contrast.
  • Kevin Wheatley: So is it part of the rendering rather than the encoding? Because I could see it either way.
  • Daniele Siragusano: If there's perceptual stuff in the output transforms then they are not purely colorimetric, and you can't translate between them. And you have fixes for different displays built in which causes all sorts of incompatibilities. I would keep the output part as pure as possible. Otherwise you can't easily swap out the rendering, because you have assumptions in each about what is going to happen later on.
  • Joseph McCormick: A lot of Dolby Vision's content mapping is based around optimising the black point for the display. If something is not part of a standardized output encoding, I don't see how it can not be part of the rendering stack.
  • Daniele Siragusano: Scene-referred black is a whole other discussion. What does 0.0 in linear light mean? The mean of the noise, or something else? The Output Transform needs to have some assumption about that.
  • Joseph McCormick: That's a good point. I tested going through an inverse Rec.709 ODT, which doesn't map zero to zero, so you go through a forward HDR OT, and SDR blacks aren't black in HDR.
  • Daniele Siragusano: And important for VFX. If you have a display transform which needs a negative scene-referred value to hit display black (as many camera renderings do), a colorist doing a pre-grade will create negative values in VFX pulls.
  • Alex Forsythe: Alex, one of your slides for the TAC touches on this. This diagram looks like you're mapping scene-linear zero to display zero, which is something we don't do in ACES. Generally we map 18% to somewhere, and then we have a white point which maps to display white, and anything beyond that is clipped, and the same for black. It's not zero to zero. It's x stops below grey to zero (a worked example follows these notes).
  • Nick Shaw: Isn't the discrepancy when you go backwards through an SDR OT and forwards through an HDR one partly related to the different methodologies they use? The SSTS maps scene-referred zero close to display zero, but the SDR RRT+ODT doesn't.
  • Alex Forsythe: We have a switch in there for PQ, because it is absolute. DCDM X'Y'Z' values are relative to theatre black, which I think makes more sense.
  • Daniele Siragusano: I agree. Display zero just means "as dark as you can get", which is still positive, and varies with APL and flare. The closer you approach zero, the less you can trust what you encode in there. That's why I advocate mapping zero to zero in a relative system. No energy on set (whatever that means) maps to "as dark as you can get" on the display.
  • Alex Forsythe: The downside of that is that to go to zero with the tone-scale in a consistent way, you end up with a really long tail. If you map something above zero to the minimum value you get something asymptotic and super-flat.
  • Daniele Siragusano: You mean the infinite slope of power functions at zero?
  • Alex Forsythe: Slightly different subject but same effect. And that can cause problems going backwards through it. I'll write an ACES Central post about this. I'm not advocating one way or the other. It's not as simple as mapping zero to zero.
  • Daniele Siragusano: But you're talking about mapping a positive value to zero. Many camera DRTs map a negative value to zero. A slightly positive scene value mapping to zero is not as bad. Mapping noise mean to display zero means cutting off half the noise floor, but I would say that just means it's not flared correctly. But camera manufacturers want lens cap black to be at zero. And in post, negatives produce headaches.
  • Kevin Wheatley: In my experience you just have to add a little flare to make the minimum zero, then reverse it on the way out.
  • Daniele Siragusano: But if you map black to black, the colorist who does the pre-grade does this for you.
  • Carol Payne: People are moving away from VFX pre-grades, except in some TV situations, so you aren't working on top of a grade. It's more like a CDL at the end of the chain.
  • Daniele Siragusano: But then you have to go to log for the CDL and then back to linear, do your VFX, then back to log and invert the CDL, then back to linear. You could lose more precision than using a VFX pre-grade. But that's just my perspective, and it's nothing to do with this group!
  • Jed Smith: Just pointing out that the mapping of zero to zero shown in that diagram was not a proposal for a rendering. The diagram was to illustrate a point about how colors approached white at the gamut boundary.
  • Alex Forsythe: The diagram shows well how we're taking a "slice" of the scene range, and if you slide the slice up and down you get more highlight and less shadow information. That's how the print negative system works. You can slide it up and down as you print, but it's not pinned to Dmin of the negative. You don't change contrast as you slide back and forth. You're just changing the piece of the negative that you're seeing.
  • Kevin Wheatley: Can we summarise anything from all that?
  • Alex Forsythe: I would say we can summarise it as we need to think about the tone-scale of the rendering and how we make sure that the display values cover the full 0-1 range.
  • Joseph McCormick: For graphics it's important if they had some things at black, they don't want it to map to a washed out black on a PQ monitor. This kind of black point management is similar to how highlights are handled in the current system.
  • Kevin Wheatley: I think the way to handle it is to add a false line near black – the lowest point we think will be uncontaminated – and below that there is variability of interpretation. Then the amount of flare for a given device could be the lowest possible output, or the appropriate amount of brightness for the surround. And you need to decide whether to reproduce that on another display with better contrast, or to increase contrast because they really meant zero. And with graphics you also have alpha channels, but that's for another time.
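
As a concrete illustration of Lars's question above about where SDR peak white should sit on a PQ display, here is a small sketch using the SMPTE ST 2084 inverse EOTF. The candidate placements are the ones mentioned in the discussion, not a recommendation.

```python
import numpy as np

# SMPTE ST 2084 (PQ) inverse EOTF constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_inverse_eotf(nits):
    """Absolute luminance in cd/m^2 -> non-linear PQ signal in [0, 1]."""
    y = np.asarray(nits, dtype=float) / 10000.0
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

# SDR is relative, so displaying it in PQ means choosing an absolute
# level for its 100% white. The candidates mentioned above:
for peak in (100.0, 200.0, 300.0):
    print(f"SDR white at {peak:5.0f} nits -> PQ code value "
          f"{pq_inverse_eotf(peak):.3f}")

# Apple's EDR instead keeps the buffer relative: 1.0 is notionally SDR
# white, so a 500 nit panel showing 100 nit SDR white has 5x headroom,
# i.e. SDR white sits at 20% of peak as Nick describes.
```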
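And a short worked example of Alex Forsythe's "x stops below grey" point – the tone scale pins a slice of the scene range to the display range, rather than mapping scene zero to display zero. The value of x here is purely illustrative, not the actual ACES tone scale parameter.

```python
GREY = 0.18            # scene-linear mid grey
x = 6.5                # stops below grey pinned to display black (assumed)
scene_black = GREY * 2.0 ** (-x)
print(scene_black)     # ~0.002 scene-linear; values below this clip to
                       # black, rather than scene zero hitting display zero
```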

Meeting #7: February 10th, 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Alex Forsythe
John Frith
Francesco Luigi Giardiello
Ebs Hasche
Harvey Landy
Thomas Mansencal
Michael Parsons
Carol Payne
Joshua Pines
Matthias Scharfenberg
J. Schulte
Daniele Siragusano
Jed Smith
Garrett Strudler
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: Thomas created this Miro board of some things we've been discussing. The first diagram here is based on my original architecture diagram, showing the current architecture of scene-referred -> LMT -> RRT -> ODT combos, showing the various boundaries. After the RRT the data is display-referred, and then we have a number of potential outputs – SDR theatrical, HDR theatrical, SDR TV, HDR TV, VFX artist displays, VR/AR headsets, phones & tablets. LED wall as shown for virtual production is a red herring.
  • Thomas Mansencal: That is for e.g. huge LED wall in Dubai, not virtual production.
  • Kevin Wheatley: Yes we do stuff like that for digital signage like Times Square. We have the other diagram on the Miro board from Daniele. The 1st bit's the same, and the output side is similar – inverse EOTF and matrix. The difference is the families of transforms in the middle. ACES stock default, and the idea is somebody else could provide alternative sets of transforms.
  • Daniele Siragusano: ODTs belong in groups which need the same rendering, but e.g. different EOTF, like HLG/PQ at 1000 nits. Whatever the middle block does, perceptual stuff or whatever, you need to know which block to connect which group of displays to.
  • Alex Forsythe: That is similar to how OTs are currently constructed, conceptually. But it's not obviously broken down like that in the CTL. There may be exceptions, but that was the intention.
  • Nick Shaw: Like the way the sRGB and Rec.709 ODTs are the same except the final EOTF. Same display XYZ.
  • Daniele Siragusano: I wanted to make sure it was clear cut at this stage. So we don't put stuff that is part of display rendering into the ODT as it is now, like highlight roll-off, or it won't be compatible with other transforms.
  • Alex Forsythe: I'm not opposed to splitting it up so the display encoding is in its own little bit.
  • Thomas Mansencal: It's similar to the way OCIO 2.0 breaks ACES down with its built-in transforms. Breaking it down into reusable components, which is cleaner architecturally.
  • Alex Fry: I imagine once you have a set of OCES variants you connect these different middle sections into a finite set of encoding ODTs.
  • Kevin Wheatley: So what are the separate OCES or targets for a set of renderings? How do we group them? That defines how you would extend it to a new device that doesn't fit into one of the groups.
  • Daniele Siragusano: I disagree. If there are too many assumptions in there you have many outliers which don't fit. So the less you specify the better. Ideally the grouping is just an ID. The parameters are there conceptually, but you don't build them into the model. That's too early.
  • Thomas Mansencal: We've had those kind of discussions on ACES Central, but parameterization opens a can of worms. ACES currently is made up of immutable blocks with no option for parameterization. Not saying we shouldn't do it, but it's super-complicated.
  • Daniele Siragusano: Some output transforms might not have the same parameter space. There might be parameters for every group of output transforms to derive those transforms. But the author might do that in different ways. Do you know what I mean?
  • Kevin Wheatley: So if we have these big boundaries, how would somebody know how to implement something? If the boundaries aren't specified they could do anything. We don't want to go back to 10-15 years ago, where we just had a bunch of black box LUTs.
  • Daniele Siragusano: This would all be hooked up so if you go to a particular output, the standard defines the viewing condition. DCI X'Y'Z' and BT.1886 define their viewing conditions. The system picks the right transform from the family. You don't mix and match. Not unless you manually override to e.g. simulate Rec.709 on an HLG display.
  • Alex Forsythe: Is this conceptually the same as what the parameterized HDR OTs do now? Where device capabilities and viewing conditions are specified as parameters?
  • Daniele Siragusano: That would be one implementation of one group, but I wouldn't put it in the design of the transform derivations. I know some people have a 10000 nit transform and a 100 nit transform, and simply blend in between them. The architecture shouldn't dictate how you derive the parameters. It's just a way of saying "these outputs belong together and need the same transform." So one group is PQ and HLG. Another is sRGB, 2.2 gamma 709 and Adobe RGB. Then the package author offers a transform for each group.
  • Kevin Wheatley: So does an implementer have to provide examples of a certain minimum set of these?
  • Daniele Siragusano: You could define a minimum set as a requirement to be ACES compliant. A route to groups 1-6. Then later maybe VR 360 adds a new viewing environment, and we add number 7. But maybe there's no peak luminance defined for that, because it might be better defined by other parameters.
  • Nick Shaw: Does that mean somebody making a custom setup who envisaged e.g. only HDR and SDR TV would be obliged to make e.g. a theatrical transform in order to make an ACES compliant master, even though they had no plans to release theatrically?
  • Daniele Siragusano: That would be up to the production. It's the case today where not all DRTs include all targets. RED IPP2, ALF-2, Truelight CAM, ACES, they don't all include all the same targets. But there should be a minimum set.
  • Carol Payne: That makes sense to me, at least in terms of architecting the vanilla transform family. How other things fit into this feels more like implementation to me. Do we need to decide these things now when designing the vanilla transform?
  • Daniele Siragusano: We do need to define the cut points.
  • Alex Fry: Is the encoding as it crosses a boundary defined in the spec?
  • Daniele Siragusano: That's an implementation detail. It could be anything. On the other side of the line it's just a matrix and inverse EOTF.
  • Kevin Wheatley: One that needs feedback is gamut mapping. The gamuts in a group need to be compatible?
  • Daniele Siragusano: Yes. They have to share the same mastering color space. Otherwise a simple colorimetric transform between them wouldn't work. You don't put Rec.709 and Display P3 in the same group. They are different viewing conditions. They need different gamut mapping approaches. But you limit things by having a suitable wide gamut mastering space, the way in HDR you typically put P3 in a Rec.2020 or P3 container.
  • Alex Fry: So in your example, Adobe RGB is Rec.709 limited?
  • Daniele Siragusano: In this example yes. So you can send a 709 grade to a DP who has an Adobe RGB display, and they see the same as you. You don't want them to see more.
  • Thomas Mansencal: The situation would be similar if one day Adobe implement ACES in Lightroom, for example.
  • Kevin Wheatley: So you could include Rec.709 2.4 gamma in that same group, but not HDR.
  • Daniele Siragusano: You could do a 2.4 gamma Rec.2020 HDR where the peak is 1000 nits if you really wanted (that would go in the PQ/HLG group). But I don't think that would be a good idea!
  • Thomas Mansencal: You could do that for an LED wall.
  • Daniele Siragusano: Like digital signage.
  • Alex Forsythe: I'm still not sure how this differs from the parameterized HDR OTs in ACES 1.2. The parameters specify max and min luminance, and where 18% grey ends up. You have encoding primaries and limiting primaries, so you could do Rec.709 on a 2020 display if you wanted to. It sounds like you're describing that, but enumerated instead of parameterized to create transforms on the fly.
  • Daniele Siragusano: What this is describing is a generalised system and ACES could be one thing that fits in there. But it's more abstract, and could be used to define a pipeline where people just give you a bunch of LUTs for different targets.
  • Alex Forsythe: So can anything be mapped onto this? E.g. DPX files of density data with a PFE LUT.
  • Daniele Siragusano: Yes if they do the work to create a set of transforms from that LUT. It's what happens now.
  • Alex Forsythe: Currently it's a bit abstract. I'd like to see some specific use cases.
  • Thomas Mansencal: One obvious difference in Daniele's diagram is that there is no highlight roll-off on the display side. Even mid grey mapping, it's all on the left.
  • Daniele Siragusano: Good point. There is still work to do to define the reference point etc. I can show you an example from Baselight. This is the ALF-2 DRT. It defines a set of viewing conditions with strings – "Video-100", "VideoWide-100", "VideoWide-1000", "Cinema-48" etc. If tomorrow we need 500 nit cinema we could add that. These are proprietary transforms as cubes, and for each viewing condition the forward and inverse transforms are defined, including the input and output color spaces, and the mastering space definition (a sketch of this structure follows these notes).
  • Alex Forsythe: This feels a bit like an AMF file to me.
  • Josh Pines: These are FilmLight DRT Family definitions. We use them a lot. It's a convenient way to define a set of transforms for various deliverables.
  • Daniele Siragusano: We express the ACES transforms in a similar way, but with parameters not LUTs. RED IPP2 and Truelight CAM work the same. It's a good abstract way to define a color pipeline. And then every EOTF and matrix combination needs to have a string defining which transform to use.
  • Chris Clark: How much do we expect AMF to track this? Maybe it's a TAC discussion to update AMF to include these new transforms. Tracking the OT family is important.
  • Alex Forsythe: I need to think if it can be mapped onto the current AMF. We need to keep it in mind.
  • Carol Payne: Another way to visualise this is the built-in transforms in OCIO 2.0. That is very similar to what Daniele is describing.
  • Daniele Siragusano: Yes it's very similar. If ACES were to define meta-transforms, all the transforms could be described using those. At the moment if someone like Josh wants to deploy a set of transforms, they need to make a Truelight Color Space package, an OCIO config, a set of LUTs, something for ColorFront, and so on for each different system. It's a lot of work. It would be much easier to specify it in a meta-framework, and the systems would each translate it.
  • Kevin Wheatley: Whether we end up with ACES supporting multiple families, this is a useful abstraction.
  • Daniele Siragusano: It would make it easier to port whatever we come up with to OCIO and other similarly structured systems.
  • Thomas Mansencal: The OCIO 2.0 config generator would be much easier if we took this approach. Now the generator traverses the CTL, and there is a lot of manual mapping, which is extra legwork.
  • Daniele Siragusano: For Baselight we put the TCT part of the ODT into the DRT, so the ODT becomes purely colorimetric. We moved the dividing line.
  • Alex Forsythe: It sounds similar to the 1.2 SSTS HDR OTs. It's just the dividing lines are in different places. The only thing that's different is the enumerated sets of core renderings, rather than one parametric one. Unless I'm misunderstanding.
  • Daniele Siragusano: It's an implementation issue. If the transforms are parametric, that's fine.
  • Scott Dyer: To me the 1.1 HDR transforms were a proof of concept. Applying it to SDR would change the look. That's for a full version rev. Although it was parameterized underneath, we have an enumerated list of the common transforms. But pro users can make their own custom ones. We need to know how it's going to be structured, with these block diagrams, because it will inform architectural choices. What goes in an LMT – desat etc. – so it's easier to bypass. The output that we want with the flexibility we need.
  • Alex Fry: The 1.1 parameterized OTs were never meant to be user facing, were they? Just a better way to expose it than raw CTL.
  • Alex Forsythe: It was also a way of unifying the approach. 1.0 had a general way of going ACES to display, but each variation was slightly different. In 1.1 the only thing that changes is the details of the target display. It was introduced for HDR, but we hoped eventually to move over completely to that model.
  • Nick Shaw: Does the SSTS handle the white point scalings, and extra highlight roll off that's in the DCI-P3 ODT?
  • Scott Dyer: Not yet, but we were aware of that. There may even be placeholder text in the CTL. Not needed for HDR in 1.1. When we do it there may be a different mechanism for handling things which doesn't need the P3 exception. And if we use scale factors, the math that calculates them should be explicit in the code. Currently they were done manually by looking at a graph.
  • Nick Shaw: It should be relatively simple to derive scale factors – one over the maximum possible value (see the worked example after these notes).
  • Scott Dyer: In theory, but it's never that simple.
  • Nick Shaw: And then for P3 the scale factor comes out too big, so there's an extra fudge factor with roll off.
  • Josh Pines: Just don't use DCI-P3. That makes things much easier. One other thing is creative white point. Currently there is a chromatic adaptation to D65 right at the end of the ODT. But half our shows have a D60 creative white that they want preserved in all deliverables. I don't know how we solve creative white.
  • Nick Shaw: FilmLight solve that very nicely.
  • Alex Forsythe: Most ODTs assume you want equal ACES values to come out as equal display values, except the D60 sim versions which make equal ACES values come out as D60.
  • Josh Pines: I'm bringing this up because everybody is saying "put everything creative in the LMT" but you can't do that for creative white point. Currently you have to switch ODT based on the choice of creative white. It would be nice to make that choice upstream and have it preserved through the ODT.
  • Daniele Siragusano: We addressed it very simply, propagating back what the neutral axis in scene referred actually means. In scene-referred you just have neutral, and then for each viewing condition you say what that means on the display.
  • Josh Pines: Yes, Baselight does it. But I do get confused which way it goes, and pick the wrong way every time! I'm just flagging up so we bear in mind what's the best way to keep the choice of creative white on the creative side, and it may need to be sent along as metadata to the ODTs.
  • Alex Fry: This confused people at Animal Logic when we first adopted ACES – why some ODTs had a D60 sim version. It's only explained in the CTL.
  • Daniele Siragusano: The key point is it's applied after the DRT in display space. It can't be in an LMT because things always get compressed after that towards display white.
  • Josh Pines: Academic color scientists think there should always be a chromatic adaptation to display white. It's color science 101, except in our industry we want creative control. We need to separate mechanical and creative.
  • Alex Forsythe: The D60 sim came from the remastering of the 1996 101 Dalmatians. I think they were remastering theatrical from the Blu Ray, and had a BVM sat next to the projection, and so you had D60 next to D65. So I removed the chromatic adaptation for the BVM, so the white points matched, even though the calibration white point was different.
  • Josh Pines: That's how it is now. People want them to match, right or wrong. Going the other way, we had a superhero show where they use the regular Rec.709 for on set viewing and dailies, then freaked out when they saw projection at D60. So now we needed new transforms with D65 on projection. So we need to address creative white point, separate from the "mechanical".
  • Thomas Mansencal: We could call that the "encoding white point".
  • Daniele Siragusano: But also a lot of our customers deliberately use different white points for dark surround and dim surround. D60 for dark and D65 for everything else.
  • Josh Pines: Maybe that's just what they are used to. We go out of our way to preserve creative white 99% of the time. But both should be possible.
  • Daniele Siragusano: It doesn't matter which is correct. We need to support both.
  • Kevin Wheatley: In summary, Daniele's proposal seems favorable to most people. But maybe more sub-blocks in the middle section should be separated, and that may inform parameterization. Maybe creative white is one of them. People should think about what falls on which side of each dividing line. Some things are in the diagram but not called out as dividing points – things related to viewing conditions, rendering intent, emulating other devices etc. Maybe within this family concept they can be a parameter, not separate. But we need to think about it.
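
A hypothetical sketch of the "family" structure Daniele describes above – one rendering per named viewing condition, with the purely colorimetric encoding (matrix plus inverse EOTF) kept outside it. The names and schema here are illustrative, not FilmLight's actual format.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

Transform = Callable  # scene-referred RGB -> mastering-display-referred RGB

@dataclass
class DRTFamily:
    name: str
    renderings: Dict[str, Transform] = field(default_factory=dict)
    inverses: Dict[str, Transform] = field(default_factory=dict)

    def render(self, viewing_condition: str, rgb):
        # The system picks the member matching the standardized viewing
        # condition; you don't mix and match unless you deliberately
        # override (e.g. to simulate Rec.709 on an HLG display).
        return self.renderings[viewing_condition](rgb)

# Each output encoding declares which family member it needs; outputs in
# the same group share a rendering and mastering space, and differ only
# in the final matrix and inverse EOTF.
OUTPUTS = {
    "Rec.709 / BT.1886": "Video-100",
    "sRGB": "Video-100",              # same rendering, different EOTF
    "Rec.2100 PQ 1000 nits": "VideoWide-1000",
    "Rec.2100 HLG 1000 nits": "VideoWide-1000",
    "DCI X'Y'Z' 48 nits": "Cinema-48",
}
```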
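And a worked example of Nick's "one over the maximum possible value" scale factor, applied to the D60 creative white discussed above: an unadapted D60 white on a D65 Rec.709 display over-ranges in one channel, and the limiting scale is that channel's reciprocal. The matrix is the standard XYZ-to-Rec.709 one; this is a sketch of the construction, not the CTL itself.

```python
import numpy as np

# Standard XYZ -> linear Rec.709 (D65) matrix.
XYZ_TO_REC709 = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def xy_to_XYZ(x, y):
    # Chromaticity -> XYZ at unit luminance.
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# A D60 creative white shown unadapted on a D65 Rec.709 display:
d60_rgb = XYZ_TO_REC709 @ xy_to_XYZ(0.32168, 0.33767)
print(d60_rgb.round(4))                 # red channel over-ranges (~1.047)
print((1.0 / d60_rgb.max()).round(4))   # limiting scale comes out ~0.955
```

For P3, as Nick notes above, the factor derived this way comes out too large to use directly, hence the extra roll-off in the DCI-P3 case.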

Meeting #6: February 3rd, 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Chris Davies
Alex Forsythe
John Frith
Joe di Gennaro
Francesco Luigi Giardiello
Ebs Hasche
Thomas Mansencal
Carol Payne
Joshua Pines
Matthias Scharfenberg
Florian Schleich
J. Schulte
Daniele Siragusano
Jed Smith
Troy Sobotka
Mike Whipple
Raymond Yeung
Joachim Zell

Meeting Notes

  • Kevin Wheatley: Discussions have continued on ACES Central, and hopefully we can look at some pictures. Some topics were controversial, some not.
  • Nick Shaw: I have this Nuke script to view various images through the v1 and v2 DRTs Jed posted about (and demonstrated in an earlier meeting) as well as current ACES, K1S1 and IPP2. On Chris's suggestion I have a color wheel, and the vectorscope is interesting to see distortions. But a "weird" shape is not necessarily bad. I think Jed's approach uses a weighted power norm to tonemap, scaling linear RGB together to preserve chromaticity. Jed can describe it better. He posted an initial version, then a v2 where he added some aesthetic modifications, saturation roll-off in the highlights, etc. When I opened it up I saw some hue based qualifiers like those used in the red modifier in the "RRT sweeteners". It's quite noticeable that Jed's transform is more desaturated than the others, but that's not necessarily bad if saturation can be added in an LMT. I think Alex Forsythe said he added a saturation boost after Jed's DRT.
  • Alex Forsythe: Yes, I added a sat node after it.
  • Nick Shaw: What I am looking at here is Jed's default settings for both v1 and v2.
  • Alex Forsythe: I posted on ACES Central a version where I adjusted the settings for the highlight rendering to try and get something more "cinematic".
  • Nick Shaw: The overexposed poker table was one example where Jed's rendering came out sort of flat looking in an attempt to maintain highlight saturation. But maybe different settings would help there, and create the blown out look we expect.
  • Alex Forsythe: The default settings didn't let you reach 100% display white. I looked at a ramp on the waveform while tweaking settings to get a more natural highlight roll-off.
  • Alex Fry: Jed's here now if he wants to elaborate.
  • Nick Shaw: If you could recap what you showed before, and explain what you changed in v2.
  • Jed Smith: I don't have a prepared presentation! It separates things and manipulates luminance based on the achromatic axis and manipulates hues using RGB ratios. Those are done in two blocks then combined as 0-1 normalised display linear, then matrixed to display colorimetry followed by an inverse EOTF. I'm still debating order of operations. Here's an sRGB hue sweep encoded as ACEScg. It gets split into three ranges. A pivoted slope is applied to the middle range, highlight compression in the shoulder, and toe compression of shadows. Another set of nodes works out a factor for compressing RGB towards a value (1.0 here) with a lerp. The factor for highlight compression is based on the difference between the linear section and compressed shoulder. You divide one by the other, then take one minus that to get a 0-1 ranged factor for the lerp (a sketch of this factor follows these notes). You then have black and white point adjustments and gamma for that lerp factor. And the same thing happens for the shadows with a lerp to the target black value. It's all pretty experimental. Any questions so far?
  • Chris Brejon: What are the differences between v1 and v2? I found the ColorChecker orange looked a bit weird in v2.
  • Jed Smith: In v2 I wanted to find a way to modify images like the fire to render more pleasingly without losing the chromaticity preserving aspect. I modified the path to white behavior and brightness in different hue regions without introducing hue skews, assuming chromaticity preserving is our goal. It's experimental, not necessarily elegant. I used the complement of normalised RGB ratios, as we did for the gamut compression VWG. Anything over 1.0 is outside the gamut. I wondered what if we use this inverse RGB ratio (distance from achromatic axis) to modify the lerp to white factor. So e.g. to change orange we use the blue inverse RGB ratio and use it to modify the path to white factor by multiplying the two together. And applying a power function to it lets us modify the behavior. So we're using the inverse RGB ratios to separate hue regions and control the path to white for those (see the sketch after these notes). There's also something taken from ACES 0.1.1 to isolate a hue range as an alpha and bias the result. And I can also use it to adjust the brightness of a region. It's lots of hacking, but I think it makes fire and skin tones look a bit more cinematic. Looking at hue swatches before and after, we see it preserves straight lines away from the white point in CIExy.
  • Alex Forsythe: Just clarifying, those straight lines in xy don't represent constant perceptual hue.
  • Josh Pines: They are “hue” preserving in xy, but they wouldn't be straight lines in L*a*b*
  • Thomas Mansencal: Perceptual hue is not straight lines in RGB either, because the chromaticity diagram is not hue preserving. If you add white to a pure monochromatic light it will twist, right?
  • Alex Forsythe: So the transform keeps things on the same straight line to white in xy, but those aren't hue lines.
  • Daniele Siragusano: Those lines show how chromaticities are projected towards white as they get brighter.
  • Kevin Wheatley: They are just mixtures, but when you transform to a different color space you change the basis, so if it was hue preserving by chance in one color space, it won't be in another. So it's thinking in terms of linear mixes of light, not how we perceive color.
  • Josh Pines: It's a different thing, but great work and a very good way to start. Maybe we could try a similar approach in a perceptual space.
  • Nick Shaw: That was all in SDR. I'm trying to think if the same principles extend to HDR. 0-1 being display peak to black doesn't necessarily mean the same thing in HDR. Any thoughts?
  • Alex Fry: I assume it would work. It's still 0-1, but a bigger 1.
  • Kevin Wheatley: I think the principle applies but a shoulder at a particular point will occur at a completely different point on the intensity scale. That could look weird, if your shadows end up looking very different between HDR and SDR. Highlight roll off might be ok if that's the desired "HDR is bigger and better" result, but if you want it to look similar it wouldn't work.
  • Josh Pines: That might be unsolvable. That's creative intent. How do you want your highlight hues in HDR? That's hard to answer with a one size fits all approach.
  • Thomas Mansencal: People may want HDR to look different because that's how they make money. Why pay extra if it looks the same? Producers may want something different.
  • Daniele Siragusano: It's not about seeing something different. It's about seeing more of it.
  • Thomas Mansencal: Still in terms of color appearance, do we want to preserve the appearance of things? I don't know.
  • Daniele Siragusano: I think the default should preserve appearance. Going HDR to SDR what else would you do as a default but match appearance? Make HDR more red?!
  • Alex Fry: I've seen both. Sky, for example, that's blown out in SDR. Sometimes you want the HDR blown out but brighter. Sometimes you want it to look more blue.
  • Daniele Siragusano: Sure, but what should the system do as default. Afterwards it should be flexible enough you can do what you like.
  • Kevin Wheatley: You're saying you can't add a color shift or change the overall aesthetics.
  • Daniele Siragusano: Some renderings add saturation in HDR. People I speak to don't like that.
  • Joachim Zell: I would say it should start similar, but then you can go any direction. With a "back to matching" button. With wow factor if you want it. Starting similar, but what does similar mean?
  • Carol Payne: I don't agree we should make HDR and SDR look the same, if that's what we're saying.
  • Daniele Siragusano: Not literally the same as in 100 nits HDR, but an appearance match.
  • Joachim Zell: A red Ferrari and Jennifer Lopez's skin tone should be the same, but Carol makes a good point. But clients feel comfortable they're in a good facility if, walking between the SDR and HDR room, they feel they are seeing the same thing. Then they say "show us what else we can do."
  • Carol Payne: What about when you start from the HDR "biggest box"?
  • Joachim Zell: Ferrari red and skin tones should look the same, but of course they will be darker.
  • Josh Pines: It's a religious argument how it should fall off the truck. But as long as you can do either it's worthwhile discussing. What concerns me is if we have an OT that desaturates, whatever you do before that, it ends up desaturated. Colorists must be able to get where they want to go easily.
  • Thomas Mansencal: You must be able to reach the full volume of the display, especially in HDR.
  • Alex Fry: As Jed continues, we should check we can make an LMT that gives the old appearance through it.
  • Jed Smith: I don't see what I'm working on as the "off the truck" look. That would come from a default LMT, which is not represented here yet.
  • Thomas Mansencal: It looks like your fire tweaks are in the middle of your transform. Ideally that should be before as an LMT.
  • Jed Smith: It's tricky, because the highlight desat (path to white) only kicks in when one channel hits maximum, which is tied to the display. So it wouldn't work in scene linear. But the RGB ratio biasing and lightening/darkening could work in scene-linear.
  • Daniele Siragusano: Is there enough space from one primary to white to do this roll-off?
  • Nick Shaw: If it only started when one channel hit the maximum, that channel would stop hard.
  • Jed Smith: That's why the bias factor starts earlier. If you adjust the values so it doesn't do that then it doesn't look nice.
  • Nick Shaw: So the modifications are like a "levels" control applied to the adjustment?
  • Jed Smith: Yes, but applied to a non-linear factor because it's relative to the amount of tone compression applied to the norm.
  • Kevin Wheatley: As mentioned earlier, it's worth testing this approach in a perceptually uniform world, so the path to white behaves the same for different hues, because the distances mean the same.
  • Jed Smith: I did originally use JzAzBz for the path to white, and it wasn't that visually different, but wasn't chromaticity preserving. So is that valuable?
  • Chris Davies: When you say chromaticity preserving, do the xy coordinates stay the same as intensity changes? How is that distinguished from hue preserving? If it goes to white, the chromaticity must be changing.
  • Jed Smith: You mean is that path to white along a constant perceptual hue?
  • Chris Davies: I would say chromaticity is a point, not a line or plane. In your plot, when you say chromaticity preserving you mean something that starts on a vector stays on it?
  • Lars Borg: That sounds like hue preserving.
  • Sean Cooper: The chromaticity preserving refers specifically to the tone mapping. That is chromaticity preserving, not the whole transform.
  • Lars Borg: If you say chromaticity preserving, whatever coordinate system you are using the point has to stay the same. Moving along a line is not chromaticity preserving.
  • Daniele Siragusano: There are two modules. The tone mapping is chromaticity preserving, and the highlight roll off is "linear additive mixture preserving" – mixtures of light.
  • Lars Borg: But if it's outside Rec.709 gamut where does it get moved into 709?
  • Daniele Siragusano: He's not doing that yet.
  • Lars Borg: OK so the tone mapping changes luminance and preserves the xy coordinates.
  • Nick Shaw: Effectively unbounded Rec.709, so chromaticity is preserved but may be outside 709 at this stage.
  • Jed Smith: This comes back to order of operations. Currently it's a 3x3 matrix at the end from rendering primaries to display primaries. But with a chromaticity preserving approach, could this happen before the tone mapping? And is a 3x3 sufficient?
  • Daniele Siragusano: If it's chromaticity preserving the primaries don't matter. It's only the non chromaticity preserving part where rendering primaries make a difference.
  • Kevin Wheatley: The rendering primaries do need to be next to the end.
  • Daniele Siragusano: The primaries affect your achromatic calculation, whatever you are using as a norm – max(RGB) whatever.
  • Lars Borg: But whatever the primaries, equal RGB is always grey.
  • Daniele Siragusano: That's why with different primaries you have different weights.
  • Lars Borg: But if equal RGB is always neutral, the matrix wouldn't change where the greys are.
  • Daniele Siragusano: It changes the brightness of it, which affects the path to white.
  • Lars Borg: The tone mapping probably needs to be done in display space, because only then do you know your range.
  • Daniele Siragusano: But then you change the appearance when you go from Rec.709 to P3 etc.
  • Lars Borg: That's the challenge. We can't preserve appearance across devices with different characteristics unless we limit to the lowest common denominator. So for example where do I put 2020 green? Do I make it slightly cyan in 709, or map it to the 709 green (which I think is wrong)? You will have detail loss, unless you're very clever. That's a challenge.
  • Daniele Siragusano: In the last 5 minutes, shall we talk about the ethics(?) of the transform? There's a lot we can do, but what should we do? If you play around with stuff you end up with hundreds of lines of code, but you may be chasing your own tail.
  • Lars Borg: Excellent point. The more code, the harder it is to maintain and tweak. We should simplify it as much as possible, or it's like machine learning where we get a "blob" we can't characterize, because it's too complex.
  • Alex Fry: Actually this is way less complex than the current OT!
  • Daniele Siragusano: But this is only part of it. We need to pull the hand-brake to not keep going for two years!
  • Jed Smith: If we agree we’re separating the rendering transform from the "off the truck looking good" LMT, it should be as simple as possible. A good start to build on.
  • Alex Fry: I think an LMT for the first iteration.
  • Nick Shaw: All these qualifiers pushing hue regions around definitely feel like they belong in an LMT.
  • Kevin Wheatley: Long ago when I did stuff with DI, I made appearance models for different display devices and environments and found it made the displays behave more similarly. The colorfulness and so forth was affected by what you were looking at. That was for film, Rec.709 and P3 (no HDR at the time). I think we should look at appearance spaces to normalise things better.
  • Thomas Mansencal: It's an important point. We know that if you are outside as the sun rises and luminance changes, the appearance of things changes. Light a chart with 100 nits or 1000 nits and the appearance changes. So the fundamental question is, do we want the appearance to be the same at 100 nits and 5000 nits, because that wouldn't happen in the real world. It's the Bezold-Brücke effect I was discussing with Alex, where wavelengths expand as luminance increases, changing appearance and making things more yellow. Do we compensate for that? Or provide a choice to do it?
  • Alex Fry: You mean simulate it with an SDR display, and let it happen in the eye with an HDR one?
  • Thomas Mansencal: Exactly. You need to compensate for that if you want your HDR and SDR to appear the same.
  • Daniele Siragusano: I don't think it's that significant. Even for a 4000 nit grade you don't make diffuse white 5 stops brighter. Or if you do it's for a very bright environment, where your whole steady state adaptation goes up.
  • Scott Dyer: We shouldn't worry how complex an algorithm is for now. It's useful exploration to hone in on requirements. We can worry about making a simple algorithm later. Let's not get ahead of ourselves.
  • Alex Fry: It's much easier to discuss the desirability of hue or chromaticity preservation when you have a prototype to poke at.
  • Scott Dyer: Absolutely. Even just to make sure we're talking about the same things, to say what we do and don't like.
  • Nick Shaw: Jed, is your transform easily invertible so we can make an LMT using the back and forth method?
  • Jed Smith: Not necessarily the experimental per hue stuff, but the main transform I think so.
  • Nick Shaw: If we can make an inverse of v1, we can make a LUT that emulates the current OT through that.
  • Alex Fry: It's also interesting to try a non brute force method that gets you 80-90% there.
  • Nick Shaw: True. But it's often easier to start with a brute force version as a reference.
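
A minimal sketch of the highlight lerp factor as Jed describes it above – one minus the ratio of the compressed shoulder to the linear section, shaped by a "levels"-style adjustment. The curve and parameter values are stand-ins, not Jed's actual settings.

```python
import numpy as np

def shoulder_compress(x, start=0.5):
    """A simple stand-in shoulder: linear below `start`, Reinhard-style
    compression above it."""
    x = np.asarray(x, dtype=float)
    over = np.maximum(x - start, 0.0)
    return np.where(x > start, start + over / (1.0 + over), x)

def whiteness_factor(norm):
    """1 - compressed/linear: zero where no compression has happened,
    approaching one as highlight compression increases."""
    linear = np.maximum(np.asarray(norm, dtype=float), 1e-6)
    return np.clip(1.0 - shoulder_compress(linear) / linear, 0.0, 1.0)

def levels(f, black=0.05, white=0.9, gamma=1.5):
    """Black/white point and gamma adjustments applied to the factor,
    so the path to white can start earlier than the hard maximum."""
    return np.clip((f - black) / (white - black), 0.0, 1.0) ** gamma

def path_to_white(rgb, norm):
    """Lerp display-linear RGB toward the target white (1.0 here) by the
    adjusted factor, as in the highlight desaturation block."""
    f = levels(whiteness_factor(norm))[..., None]
    return rgb * (1.0 - f) + f
```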
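And a sketch of the "inverse RGB ratio" hue-region weighting from v2 – per-channel distance from the achromatic axis, used to scale the path-to-white factor for a region such as orange. The power value is illustrative.

```python
import numpy as np

def inverse_rgb_ratios(rgb):
    """1 - RGB / max(RGB): zero on the achromatic axis, growing as the
    color moves away from neutral; exceeds 1 outside the gamut (as used
    in the gamut compression VWG)."""
    rgb = np.asarray(rgb, dtype=float)
    mx = np.maximum(rgb.max(axis=-1, keepdims=True), 1e-6)
    return 1.0 - rgb / mx

def orange_region_weight(rgb, power=1.5):
    """For orange, the blue channel's inverse ratio acts as the
    hue-region isolator; a power function shapes how quickly the
    weight kicks in."""
    inv = inverse_rgb_ratios(rgb)
    return np.clip(inv[..., 2], 0.0, 1.0) ** power

# The region weight multiplies the lerp-to-white factor, so orange
# highlights can be held back (or pushed) without introducing hue skews:
# f_modified = f * orange_region_weight(rgb)
```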

Meeting #5: January 27th, 2021, 11am PT

Attendees

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Chris Davies
John Frith
Joe di Gennaro
Francesco Luigi Giardiello
Zach Lewis
Thomas Mansencal
Joseph McCormick
Carol Payne
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Jed Smith
Troy Sobotka
Garrett Strudler
Doug Walker
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: We need to make the requirements more concrete. The list on DropBox has been edited. Any additions or comments? We need to report progress to the TAC in a couple of weeks. 
  • Scott Dyer: I tried to make the uncontroversial aspects into a list by category. It's a work in progress. To lead to objective criteria, to see if we’re headed in the right direction as we do stuff.
  • Kevin Wheatley: Are we happy with the 4 main requirements?
  • Nick Shaw: Does "span the space" mean you should be able to hit any color in a display gamut?
  • Kevin Wheatley: That's been a complaint from colorists. It's a limit if they can't make every color a device can make. And in VFX you may have display referred reference for a creature. It doesn't have to span 100%, but you need to reasonably match what an artist painted in Photoshop. So logos in ads aren't invertible – they get gamut mapped into the RRT volume.
  • Nick Shaw: You get slices cut off the edges of the 709 unit cube when you go backwards and forwards, particularly the yellow and green edges (a sketch of this round-trip check follows these notes).
  • Daniele Siragusano: Doesn't this conflict with the requirement to fit the entire dynamic range smoothly into a given display? You can't squash everything gracefully without clipping but also be able to easily reach the limits.
  • Alex Fry: I'd say it does. How do we get round it?
  • Kevin Wheatley: For me it doesn't matter if the values get inverted to crazy numbers, as long as they go back close to where they started.
  • Alex Fry: That's ok for graphics, but if it ends up on a texture it gets bizarre quickly!
  • Kevin Wheatley: Yes there are conflicts. That also brings up that gamut mapping isn't on the list.
  • Daniele Siragusano: If only artists used the same OT when creating the logo, there'd be no problem!
  • Kevin Wheatley: In the real world there's always looping back and forth.
  • Thomas Mansencal: We had exactly the inversion issue just yesterday. After a round trip things didn't look how they should.
  • Daniele Siragusano: So would you favor a rendering that was harsher at the boundary so you could reach it? That would be preferred for motion graphics.
  • Thomas Mansencal: I probably would. That's where you get precision issues. Particularly with half-floats on the GPU.
  • Alex Fry: This makes it harder to hit the "simpler" target.
  • Daniele Siragusano: It's easier to have a harsh clip in the OT and put a soft clip in an LMT than the other way round.
  • Alex Fry: The inverse is still tricky then, because you have to invert the RRT and LMT.
  • Daniele Siragusano: Maybe you don't use the LMT for those assets.
  • Alex Fry: Unless it's in a scene where it's mixed with other things.
  • Scott Dyer: These items were supposed to provoke a reaction if they weren't hitting the mark. Invertibility may mean different things to different people. For archive you need an inverse to ACES. And if you don't change it in ACES you can get it back identically. Others may have different needs.
  • Daniele Siragusano: You always do something while it's in scene referred. Or it would be easy. In an HDR remaster (which people expect to be easy with ACES) the SDR archive material explodes.
  • Alex Fry: SDR studio logos going out in HDR can catch you out.
  • Daniele Siragusano: SDR needs to be made properly scene-referred and that's not trivial.
  • Josh Pines: SDR to good HDR is a black art, and we don't need to solve that here. Not as a design requirement. It's so content dependent.
  • Thomas Mansencal: We could provide guidelines but not tools.
  • Daniele Siragusano: If you provide an inverse transform, it implies you can go Rec.709 to scene-referred. People will interpret it that way.
  • Kevin Wheatley: I name color spaces for this so nobody chooses them by mistake.
  • Josh Pines: 99% of our ACES projects people use non ACES view transforms, then we end up using inversion to make the ACES deliverable that studios require. So we need invertibility today. The concept of hitting the primaries is a religious question. In the early ACES days there was a tug of war between making things future-proof and extensible, and making it look good "falling off the truck". If ACES looked bad compared to e.g. K1S1, people wouldn't use it. Looking good involves some creative intent, and that usually means gamut limiting, etc. It can't hit everything and still look good with no LMT. Low contrast and neutral means it needs grading. There's a conflict.
  • Carol Payne: I agree. That's why I think we should move as much as possible to the LMT. And we can design a few LMTs to fulfil the different criteria. And educate around choices and flexibility.
  • Josh Pines: A default LMT that looks good but is not required is a great idea.
  • Daniele Siragusano: The extension of that is that the OT is a no-op and everything is in the LMT. But then you can't have one LMT that makes things look the same on all displays. Or we have to do the thing I'm not allowed to talk about! Lots of productions want their own renderings, and inversion as part of the design seems bad to me. Quantization etc.
  • Alex Fry: If everything is in the LMT, do you just take the next step and say you can swap out the RRT?
  • Josh Pines: The ACES requirement forces inversion on us today.
  • Daniele Siragusano: Josh you missed the discussion on this a couple of weeks ago. Most productions are swapping out the rendering. We need to accept that reality.
  • Scott Dyer: We've gone to both extremes over time. A lot of people want ACES to be "color science in a box". Put an image through an IDT and an OT and it looks reasonable. Just the basics to make a scene-referred image viewable in a theatrical environment. Increased contrast, and shadow and highlight compression. This isn't just film and photography. It goes back to painting. We all know that just mapping scene colorimetry looks terrible, flat etc. So there's a bare minimum for an acceptable (but not great looking) picture. And maybe a default LMT for a filmic look if people want that. Maybe currently it's too restrictive, so people use elaborate workarounds to still be an ACES project. I wouldn't want to make it a straight line!
  • Thomas Mansencal: Regarding the contrast… WETA doesn't use ACES except my small department. But if you look at the trend in display transforms over the years, it's a reduction in contrast. I think the reason is HDR displays. It also makes look dev easier. You make the look, because CG skin is not compressed by the OT. In the 100-year history of filmmaking, things have been designed to make white actresses have soft-looking skin. So if you make a CG actress from a sharp DSLR scan, do your look dev, then render through a CG cinema lens, it looks soft. It means education is needed.
  • Chris Brejon: I wanted to go back to what Daniele said in an early meeting, that his idea didn't mean there couldn't be a good vanilla transform. You can still have a swappable rendering that has a default which is the ACES one.
  • Chris Davies: If we have an approach where a single creative LMT is used we need a way to pass metadata about the OT parameterisation.
  • Daniele Siragusano: Let me clarify my proposal. You would provide a default rendering transform, but people wouldn't have to use it. But they would need to provide a set of transforms for a set of targets: Rec.709, PQ HDR… And you have a defined package to contain all this (see the illustrative sketch at the end of these notes). And if you don't do this you can still use the standard transform. So you have a framework, but swappable components.
  • Alex Fry: The ACES block diagram makes it look like you can swap out the RRT, but in reality it's too intertwined with the ODTs. We could rejig things so it was replaceable. Otherwise we end up with it being a no-op and everything is an LMT, and that's just moving the complexity around.
  • Kevin Wheatley: So the system would do the legwork we can all agree on, like display encoding.
  • Scott Dyer: What's stopping people doing that now, building something with e.g. K1S1? Baselight does the togglable output transforms nicely, but… What do we need to design differently?
  • Josh Pines: I like the idea of an RRT that does the bare minimum, and hopefully looks ok. What stops people "rolling their own" is the studio requirement to deliver ACES archives that look correct through the current OTs. We reverse engineer an LMT for each deliverable.
  • Scott Dyer: That's terrifying, because it's potentially mangling the data.
  • Daniele Siragusano: A lot of people are forced to do this. If we do this everybody wins… the people who don't have a color engineer have a standard set of renderings, maybe optimised for different projects, animation etc.
  • Thomas Mansencal: A good stock transform is important from a studio standpoint for asset sharing across shows. You need something stable and predictable. The current RRT is negatively impacting look dev work. It makes it easier for smaller entities to set things up. Load your ACES config and you're done. You don't need to create 10 different LMTs which can be hard to track across shows.
  • Scott Dyer: So are we saying the block diagram would be as it is now, but with multiple RRTs like there are multiple IDTs, ODTs etc.?
  • Kevin Wheatley: Not necessarily.
  • Alex Fry: It's an option. We don't have a conclusion.
  • Sean Cooper: I think it was more referring to the OT, not the 2-part RRT/ODT.
  • Daniele Siragusano: To keep it simple, we have scene-linear and display linear, and forget about the EOTF stuff (everyone agrees on the maths of that). Then you have a "family" of OTs for different viewing conditions.
  • Joseph McCormick: Where is the viewing condition / creative split in this set of differently optimised OTs? If it's creative it belongs in the LMT.
  • Daniele Siragusano: It doesn't really matter how people get there. If they have LMTs and OTs or bake the LMTs into the OTs, it doesn't matter. Having different LMTs and OTs for each master is only a workaround so we don't bake the inversion into the files. A set of CLFs specifying scene-referred to different display-referred targets is enough.
  • Joseph McCormick: It's an organizational decision. Where do you track what?
  • Daniele Siragusano: If I was a color engineer in a facility I think I would come up with a generic OT and some look transforms in scene-referred. Others might do it differently. There are lots of reasons why you might want different OTs for different viewing conditions, white point etc.
  • Sean Cooper: What about a "Certified Federation of Output Transforms"? An ACES group would validate that something used best practices. So people could propose OTs to add to the system. Open source, no black boxes.
  • Thomas Mansencal: Like the IDT and Logo Program.
  • Daniele Siragusano: Some facilities wouldn't want to open-source their work.
  • Sean Cooper: They could propose a documented "do nothing OT" and put everything in the LMT. 
  • Daniele Siragusano: We don't validate every CLF file as being sensible.
  • Sean Cooper: More like IDTs where we don't accept any random IDT as part of ACES. We'd reject OTs that did everything display-referred, and push for best practices.
  • Thomas Mansencal: Stuff that's not certified gets into ACES. E.g. the GoPro IDT that's in the OCIO config and is widely used. It's worth noting that things like that could happen.
  • Daniele Siragusano: Isn't that a good thing? We explain how to do things, then when somebody makes a camera, somebody else makes an IDT.
  • Thomas Mansencal: Just raising the point that there is an escape mechanism for vetting transforms.
  • Daniele Siragusano: You need to define a configuration so your custom OT works in all systems and renders the same. I'm not saying it's easy.
  • Chris Davies: This is perhaps increasing the complexity of ACES. One great thing about ACES is that small outfits "turn on the ACES switch and it just works" and they are really happy with it. And on the input side it's getting so it's handled more automatically. If we make the output side more complex we lose this simplicity for small facilities. Who's our audience?
  • Daniele Siragusano: But if you do nothing, the vanilla OT which this group comes up with is applied. But if a system sees a config file with CLFs etc., it uses that.
  • Kevin Wheatley: It's not increasing complexity. It's acknowledging the complexity many of us currently hide from people.
  • Sean Cooper: Comparable to keeping track of the ACES version.
  • Josh Pines: I agree a default is important, and this group needs to deliver that, regardless of the hooks for custom renderings.
  • Thomas Mansencal: Breaking that could kill ACES. Lots of people use ACES because it's simple, and we need to keep that. Simpler if anything.
  • Scott Dyer: The two big successes of ACES are the IDT concept, and the understanding that color management is needed. I need to look at the block diagram and see where the hooks would go, but I don't think the two objectives necessarily conflict. Standard for those who need it, and flexibility if you have a Josh Pines.
  • Alex Fry: An advantage of the flexibility is the standard can be more attractive (like 1.1) because it doesn't have to be all things to all people.
  • Nick Shaw: Do we say that custom RRTs need only have a forward transform? Inversion isn't needed?
  • Thomas Mansencal: Probably
  • Josh Pines: Until you get an output-referred studio logo.
  • Daniele Siragusano: The vanilla one needs an inverse
  • Thomas Mansencal: If you make a custom one it's your job to make sure it works for your case.
  • Scott Dyer: Definitely keep things going on ACES Central.
  • Kevin Wheatley: This block diagram is just something I did quickly before the meeting. It sort of represents what Daniele is describing. It's very simple without parameters for now. But remember we need to deliver CTL or similar from this group.
  • Daniele Siragusano: I will write a proposal on ACES Central.
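
To make the "defined package" idea above concrete, here is a minimal sketch of what such a package might look like, expressed as a Python structure. Every name, field and file name below is invented for illustration; this is not a proposed specification.

```python
# Purely illustrative sketch of a "package of renderings" with a fallback
# to the standard (vanilla) transform. All names here are hypothetical.
RENDERING_PACKAGE = {
    "name": "ExampleShow_DRT_v1",
    "fallback": "ACES_vanilla_OT",          # used if a target is missing
    "targets": {                            # one CLF per display target
        "Rec.709 100 nit": "drt_rec709.clf",
        "P3-D65 48 nit": "drt_p3_cinema.clf",
        "Rec.2100 PQ 1000 nit": "drt_pq_1000.clf",
    },
}

def transform_for(target):
    """Return the CLF for a display target, or fall back to the standard rendering."""
    return RENDERING_PACKAGE["targets"].get(target, RENDERING_PACKAGE["fallback"])
```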

Meeting #4: January 20th, 2021, 11am PT

Attendance

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Lars Borg
Chris Brejon
Daniel Brylka
Chris Clark
Liam Collod
Sean Cooper
Chris Davies
Alex Forsythe
John Frith
Joe di Gennaro
Francesco Luigi Giardiello
Harvey Landy
Zach Lewis
Thomas Mansencal
Joseph McCormick
Carol Payne
Matthias Scharfenberg
Daniele Siragusano
Jed Smith
Troy Sobotka
Pablo Garcia Soriano
Doug Walker
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: We should talk about conversations continued on ACES Central after the last meeting.
  • Scott Dyer: Jed posted last night on ACES Central. Maybe later he can talk about that.
  • Kevin Wheatley: There's been some discussion about the names of the artifacts. Technical vs creative bias on naming.
  • Nick Shaw: Posterization is the effect of quantization of the image data. It gets called banding if it creates visible bands in gradients like skies. But posterization is anything where there are multiple visible patches of the same color due to quantization. Posterization and solarization get used interchangeably, because they have similar visual results, but to me solarization is a flat area of color due to clipping. It comes from a chemical effect when film emulsion saturates. But in digital we can call it solarization if it's from clipping, and posterization if it's from quantization. Is everyone happy with those definitions?
  • Thomas Mansencal: Yes (everyone else seemed to nod)
  • Doug Walker: Solarization is a reversal, where something that should be brighter is darker.
  • Alex Forsythe: It's done in photography by flashing the film.
  • Nick Shaw: Is there a digital equivalent? Are we happy to use it to refer to clipping? This would be clipping in all channels. Clipping in one channel causes one type of hue skew: that channel can no longer increase, so as the others do, the perceived hue changes. The other kind is where the RGB ratios change gradually as one channel goes into roll off before the others. Desaturation as channels roll off is part of the look we're used to, and an accidental benefit of RGB curves. I personally don't mind a bit of hue skew there, but others have strong opinions. It's up for discussion what happens to hue and saturation as exposure increases. What Jed has demonstrated is a combination of the power weighted norm from Doug and Gary's Core Rendering Algorithm with the SSTS, applying a gain-based tone curve to preserve RGB ratios for hue invariance (a minimal sketch of this kind of ratio-preserving tone scale appears at the end of these notes). Chris's light saber images are a good example of bright saturated images no longer looking like light sources (to my eyes) if they maintain saturation.
  • Thomas Mansencal: This is on an SDR display. An HDR display like an LED wall could make a saturated red so bright it hurts your eyes.
  • Nick Shaw: This brings up the issue of HDR appearance vs SDR. Does everybody agree with my definitions or have anything else they want to raise?
  • Alex Forsythe: I posted images of a sweep of Macbeth colors, then with exposure raised 10 stops. It never goes white unless R=G=B in the source.
  • Nick Shaw: That's a result of that approach deliberately maintaining RGB ratios.
  • Lars Borg: The classic example is a neon light shining on a background. In SDR if you don't desaturate, the light ends up looking darker than the background, because it is more saturated.
  • Alex Forsythe: Look at the examples on ACES Central. Neons look good. Suns don't. As Scott said. Hue linearity and saturation are good… unless they aren't.
  • Scott Dyer: It's a great example for discussion. We had a norm on the tone scale in 0.1.1. It was great for tail lights and neon. But in some cases it needed to desaturate. An example is the image of the lamp on a shelf behind a girl. It needs Jed's JzAzBz desaturation. This stuff we looked at before is worth revisiting to discuss ideal highlight behavior.
  • Doug Walker: In Gary Demos' work typically the power norm tone scale is followed by another operation that desaturates the highlights. We should consider a combination of approaches.
  • Thomas Mansencal: There is a chasm between what different people want. Some are adamant about hue linearity. Some aren't. It's hard to find a middle ground. Orthogonal problems may have orthogonal solutions.
  • Alex Forsythe: Another factor is simplicity. Simple things tend to behave predictably.
  • Lars Borg: Complexity can be hard to fix, e.g. hue angles.
  • Alex Forsythe: We had complex algorithms in the early days that made nice pictures, but colorists hated how the image responded to grading.
  • Daniele Siragusano: Ending up with something complex may mean you are in the wrong domain. Domain changes may help.
  • Thomas Mansencal: Technical constraints can be helpful, as we had with the gamut mapper, to limit subjectivity.
  • Nick Shaw: Simplicity also means less computationally expensive, which is a benefit for real-time grading systems.
  • Lars Borg: If we have a set of "good renderings" we can use ML. Not to create a "black box" but to find a best-fit simple mathematical model.
  • Thomas Mansencal: One reason to have a big data set. It wouldn't really be ML. Just basic optimisation.
  • Alex Fry: We need to define requirements and constraints.
  • Kevin Wheatley: We should put anything which comes down to preference in an LMT. Rather than varied OTs.
  • Thomas Mansencal: AMF becomes very important then for tracking the viewing pipeline for an archive.
  • Kevin Wheatley: So the rendering should be what is common to everything.
  • Nick Shaw: Is it possible if the rendering is not hue preserving to wrangle hue preservation back in an LMT? It could be hard and maybe destructive. Earlier hue restore added noise.
  • Scott Dyer: It brought out noise that was there.
  • Doug Walker: Introduced blue channel noise to the green channel which was more visible. And colorists didn't like what the hue preservation did.
  • Scott Dyer: Looking at this will be helpful to decide what we do and don't want.
  • Chris Davies: The OT/LMT split could be based on fixing flaws in capture. Should the OT fix hard clipping in the source? Older OTs are overcomplicated because they try to fix flaws in camera data.
  • Alex Forsythe: The sun will always clip! An LMT just to fix the sun seems too much?
  • Chris Davies: Different cameras produce different clipped data of the sun.
  • Alex Forsythe: We had this exact discussion. Jim Houston said "the colorist should fix the hard clip of the sun".
  • Chris Davies: People might say K1S1 looks nicer. The amount of highlight desaturation is a preference. But we need to come to agreement.
  • Kevin Wheatley: We don't want to forcibly desaturate everything. Above a certain threshold you need to, but at lower levels, particularly for HDR, you may not want to.
  • Jed Smith: My desat is controllable. It's JzAzBz, so hue linear. Customisation for varied displays is trivial.
  • Daniele Siragusano: JzAzBz scales Az and Bz with exposure. So you lose connection to the actual display. Same for shadows. If you plot the spectral locus, it tapers, eventually to within even Rec.709. Perceptual uniformity at the expense of an "alien shaped display". What boundary do you map to? Lowering Jz will push colors out of gamut for the same Az and Bz.
  • Jed Smith: Currently I'm not modifying Jz.
  • Daniele Siragusano: So you have picked a plane? Then you lose the perceptual scaling.
  • Lars Borg: That would result in totally desaturated highlights.
  • Jed Smith: That's what I'm doing. In that space, desaturating e.g. blue doesn't make it go purple.
  • Lars Borg: There's no perfect hue linear space.
  • Daniele Siragusano: That one's ok in the blue corner but not yellows. There's no color resolution there, which is bad for skin-tones. You could use JzAzBz on the blue side, and something else on the other. But then maybe you're not in the right domain.
  • Jed Smith: So do we want a chromaticity-preserving tone scale? We need to decide.
  • Lars Borg: Some are ok with orange going yellow in the highlights. I'm not, because it isn't real life. I want it to look like real life, but just darker and a bit desaturated, keeping perceptual hue. Some don't mind clipping, maybe because they are used to it.
  • Daniele Siragusano: Definitely. A lot of preference for expectations.
  • Lars Borg: Maybe it's part of the "film look".
  • Daniele Siragusano: It should be the same for HDR and SDR and anticipate future technology.
  • Lars Borg: If a colorist looks at HDR and SDR displays, how would they want a bright orange HDR ball to look in SDR? Probably a desaturated orange ball.
  • Daniele Siragusano: Now with colored lights, if the set is lit to look one way in HDR, it is bad if it has different hues in SDR. My view is that the start point should be what you saw on set, and that should be maintained across a wide range of displays. The look doesn't matter so much.
  • Kevin Wheatley: We should note that statement down. We could attach some degree of objectivity to that.
  • Alex Forsythe: There can be different intents for different displays. Sometimes you want highlights to desaturate in HDR as they did in SDR. Sometimes not. That's a choice, and it makes sense to have a control in there for that choice.
  • Daniele Siragusano: Parameterized display rendering transforms.
  • Alex Fry: It supports the idea of breaking it into two parts that can be handled separately.
  • Kevin Wheatley: Let's have a quick demo from Jed for those who haven't seen what he posted.
  • Jed Smith: It's quite simple. A tone scale and an EOTF for the display. And I tried different desaturation methods for highlights. I found JzAzBz, where one channel represents scene luminance and the other two are opponent red/green and blue/yellow, with zero being achromatic. I used the log of max(RGB) as a shaper for the desat (a hedged sketch of this appears at the end of these notes). I think it looks nicer than the stock rendering, at least for very saturated scenes.
  • Alex Fry: That's great to actually see something "on paper".
  • Daniele Siragusano: The EOTF doesn't change hue, because the inverse EOTF in the transform and the monitor EOTF cancel out.
  • Doug Walker: It's just "coding on the wire" to get the display linear values to the screen.
  • Alex Forsythe: Currently we go via XYZ – the actual display linear XYZ values that end up on screen.
  • Daniele Siragusano: Why is the mask computed in log but applied in linear?
  • Jed Smith: Creative choice. Mapping to 0-1 to use as a multiplier for desat strength.
  • Scott Dyer: Lots of ways to do that. One way is to use the derivative of the tone scale to guide fall off. Or a ratio between that slope and chroma/saturation. That works across different dynamic ranges.
  • Jed Smith: It looked like 0.1.1 was doing something like that.
  • Alex Forsythe: Reminder that 0.1.1 freaked out colorists.
  • Daniele Siragusano: We kept it as an option in Baselight for a long time, because some colorists liked it. It was polarizing. We now have a Scene Look to mimic it.
  • Alex Fry: I loved it, and we used it on the first LEGO movie.
  • Chris Davies: Also loved it.
  • Kevin Wheatley: Anybody hate it?
  • Scott Dyer: I liked how it looked, but hated that it was blended 1D LUTs.
  • Chris Davies: Did anybody try LMS?
  • Thomas Mansencal: JzAzBz kind of is that, with a non-linear transform to make it perceptual.
  • Lars Borg: RGB, XYZ and LMS are all linear, so they have the same hue appearance shifts. You need curves to make A and B represent perceptual hue, which is a challenge to do.
  • Thomas Mansencal: There is a new space, OKLab, from Björn Ottosson. It might be worth looking at (a sketch of the conversion appears at the end of these notes).
  • Lars Borg: It's not easy to design this kind of hue linear space. Jab uses three channel curves. No current model takes into account that it is not symmetrical around white.
  • Alex Forsythe: I recently saw a paper on ProLab but haven't looked at it in detail.
  • Thomas Mansencal: It's pretty much like CIE Luv and I think suffers from the same issues. I'm going to add it to Colour Science for Python this weekend.
  • Alex Forsythe: We tried tone mapping in luminance/chrominance spaces, but because the chrominance didn't get the same transform it looked weird. "Pastel" and people didn't like it. But all these deficiencies could be overcome.
  • Thomas Mansencal: Jed, you should try OKLab
  • Daniele Siragusano: But you need to consider the nominal range. A root based transfer function is not good above 1.0.
  • Jed Smith: That's why JzAzBz was good.
  • Daniele Siragusano: It's PQ with something on top to fit psychophysical data.
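
A minimal sketch of the ratio-preserving, gain-based tone scale described above, in Python with NumPy. The curve is a simple Reinhard-style stand-in rather than the SSTS, and the norm is one generic power-weighted form; the actual curve, norm and weights in Doug and Gary's Core Rendering Algorithm may differ.

```python
import numpy as np

def tonescale(x):
    # Stand-in compressive curve; the real system would use the SSTS here.
    return x / (x + 1.0)

def power_norm(rgb, p=2.0, w=(1.0, 1.0, 1.0)):
    # Generic power-weighted norm; max(RGB) is the limit as p grows large.
    rgb = np.maximum(rgb, 0.0)
    return (w[0] * rgb[..., 0] ** p
            + w[1] * rgb[..., 1] ** p
            + w[2] * rgb[..., 2] ** p) ** (1.0 / p)

def ratio_preserving_tonemap(rgb):
    # Tone-scale the norm, then apply the result as a gain on all three
    # channels, leaving RGB ratios (and hence chromaticity) untouched.
    n = power_norm(rgb)
    gain = np.where(n > 0.0, tonescale(n) / np.maximum(n, 1e-10), 0.0)
    return rgb * gain[..., np.newaxis]
```

Because all three channels receive the same gain, hue is invariant by construction; the trade-off discussed above is that very bright saturated sources then never desaturate toward white on their own.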
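A hedged sketch of the kind of highlight desaturation Jed describes, assuming the JzAzBz converters from Colour Science for Python (function names as in its 0.3.x API; they may differ in other versions). The mask shape and all threshold values here are invented for illustration.

```python
import numpy as np
import colour  # colour-science; converter names per its 0.3.x API

def desaturate_highlights(XYZ, rgb_for_mask, lo_stops=0.0, hi_stops=3.0,
                          strength=0.8):
    # Build a 0-1 mask from log2(max(RGB)), per Jed's description of a
    # log max(RGB) shaper. lo/hi/strength values are made up here.
    n = np.maximum(np.max(rgb_for_mask, axis=-1), 1e-6)
    mask = np.clip((np.log2(n) - lo_stops) / (hi_stops - lo_stops), 0.0, 1.0)

    # JzAzBz is PQ-based, so XYZ should be absolute (Y in cd/m^2),
    # per Daniele's point about the nominal range.
    Jab = colour.XYZ_to_JzAzBz(XYZ)

    # Scale the opponent channels Az/Bz toward zero (achromatic);
    # Jz is deliberately left unmodified, as Jed says above.
    Jab[..., 1:] *= (1.0 - strength * mask)[..., np.newaxis]
    return colour.JzAzBz_to_XYZ(Jab)
```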
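For anyone wanting to experiment before OKLab lands in Colour Science for Python, a sketch of the linear sRGB to OKLab conversion. The matrices are copied from Björn Ottosson's published reference implementation; verify against the original post before relying on them.

```python
import numpy as np

# Linear sRGB -> approximate cone responses (constants from Ottosson's post).
M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
               [0.2119034982, 0.6806995451, 0.1073969566],
               [0.0883024619, 0.2817188376, 0.6299787005]])

# Non-linear LMS -> OKLab (L lightness, a/b opponent axes).
M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050,  0.4505937099],
               [0.0259040371,  0.7827717662, -0.8086757660]])

def linear_srgb_to_oklab(rgb):
    lms = rgb @ M1.T
    lms_nl = np.cbrt(lms)  # cube root; note Daniele's caveat that root-based
                           # transfer functions behave poorly above 1.0
    return lms_nl @ M2.T
```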
END

Meeting #3: January 13, 2021, 11am PT

Attendance

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Chris Brejon
Chris Clark
Liam Collod
Sean Cooper
Chris Davies
Alex Forsythe
John Frith
Francesco Luigi Giardiello
Harvey Landy
Zach Lewis
Thomas Mansencal
Joseph McCormick
Carol Payne
Joshua Pines
Matthias Scharfenberg
Daniele Siragusano
Troy Sobotka
Pablo Garcia Soriano
Garrett Strudler
Mike Whipple
Raymond Yeung
Joachim Zell

Meeting Notes

  • Alex Fry: For now we’re just continuing the opening discussions from last year. Where do we want it to go?

  • Nick Shaw: The great fixed vs variable debate.

  • Kevin Wheatley: Scott said there was scope for some variability in the charter. Daniele mentioned splitting into parts. 

  • Scott Dyer: Option to use other renderings exists already, combined with aspects of ACES. I think this group should focus on fixing the ACES rendering, not changing the architecture of ACES. We can look at other renderings as guidance while fixing.

  • Nick Shaw: Regardless of other renderings, we need a solid out-of-the-box ACES default that works for people.

  • Scott Dyer: It’s not “us vs them”. ACES doesn’t have to be the only or “best” approach. The question “how much ACES must you use to be an ACES show?” is not our problem.

  • Carol Payne: Yes. We should fix our own house first!

  • Thomas Mansencal: That’s why we’re all here. To make ACES better.

  • Kevin Wheatley: What are the issues? List from DropBox Paper, in no particular order: clipping and posterization. How much is this exacerbated by the OCIO implementation?

  • Thomas Mansencal: Definitely made it worse. OCIO 2.0 should fix. We need images to show issues. It’s compounded with gamut mapping issues. 

  • Alex Fry: Less present in early versions.

  • Daniele Siragusano: It may have begun with the rendering space change from AP0 to AP1.

  • Scott Dyer: Easier for camera renderings in their own encodings – all positive values. Currently we clip when we bring it to AP1 – hence the gamut mapping work. Ideally the rendering would eliminate the need for gamut mapping.

  • Nick Shaw: ODTs clip to display gamut. But that may be more related to hue skews.

  • Thomas Mansencal: I don’t see posterisation – banding. We need to define the terms, with example images.

  • Alex Fry: Historical reason for AP1?

  • Scott Dyer: First time around design goals were a moving target. Fix one, break one. It’s why we need example images. As long as we don’t design just to fix the example image. AP1 came from colorist feedback on “feel”. RGBish space. Although v0.1 rendered in AP0 it used a matrix that effectively shifted the primaries.

  • Nick Shaw: Colorist “feel” is more related to the grading space.

  • Scott Dyer: Testing rendering space and working space concurrently. Limited access to colorist time. Less than ideal rigor.

  • Kevin Wheatley: Orthogonal feels better when adjusting. Particularly for “monitor focussed” colorists. When they add red they want it to go redder.

  • Joshua Pines: I remember it differently. Originally AP0 was to be exchange and working space. That worked for colorists. VFX pushed back. Negative blue primary was baffling. Adding blue made luminance go down. Whether colorists or VFX, push back meant a move to a working space more similar to existing spaces.

  • Scott Dyer: True. Different reasons at different times.

  • Daniele Siragusano: What was the lesson? Are the working space and rendering space too intermingled? 

  • Scott Dyer: Some reaction was due to lack of familiarity. Grading “under a LUT”. Wish we’d had more time and more feedback.

  • Nick Shaw: Things have moved on. Baselight’s Base Grade and the Resolve equivalent abstract the tools from the working space.

  • Alex Fry: I was thinking more of AP0 in the rendering. It flips between AP0 and AP1 a few times.

  • Scott Dyer: For particular test images, AP1 helped the blues. But issues may have been due to gamut mismatches. AP1 clamping is limiting. Detailed reasons lost.

  • Daniele Siragusano: Intermingling of rendering space is linked to per channel tone mapping. Different approaches would make it less relevant.

  • Scott Dyer: This is linked to hue linearity. But what do we define as hue? How do we judge it? IPT? CIELAB?

  • Daniele Siragusano: Change of tone scale for different deliverables affects hue differently. If you’re ok with hue skews, HDR should skew the same way. So 1D RGB curves are no good (a small numeric illustration appears at the end of these notes).

  • Alex Forsythe: We tried a lot of things. Max RGB etc. made beautiful chart images, but sometimes bad pictures. Images must still look reasonable. Need good test images. It is of course subjective. First time round was too subjective.

  • Scott Dyer: We forgot the colorist! We need to have more idea why an image “looks bad”. Bad source? Bad IDT?

  • Alex Fry: Worth going through the same process with test images and documenting it. Relearning the lessons.

  • Kevin Wheatley: Besides hue, we could group tone scale related issues – invertibility, default contrast. How are these related to HDR? Would the SSTS applied to SDR have the same issues?

  • Alex Forsythe: We were aware of HDR when we started. We had prototype HDR displays and also used SDR displays to look at sections of the HDR range. We weren’t naive to HDR.

  • Kevin Wheatley: So are the inconsistencies accidental? Related to different approach?

  • Alex Forsythe: Depends which inconsistencies. Deliberate choice on how to use HDR range. Not match SDR. Some want HDR to use same scene range as SDR. That inconsistency is deliberate. Hue rotations were an accepted artefact of the good things from RGB curves, which we felt looked natural compared to other options.

  • Joshua Pines: Creative community split. Be good to parameterize options on how to use HDR range. Currently a compromise where HDR mid grey is a bit brighter.

  • Alex Fry: In the past we have made stuff which kept the desaturation of SDR to “hide the sins in the highlights”, but used the HDR range. But really that should be in an LMT.

  • Daniele Siragusano: It’s a spatial problem as well. Down to sharpness of highlight details. Preference can be misleading. Highlight range without sharpness still looks “filmic”.

  • Kevin Wheatley: That’s a case for a wider default, with more in the LMT. Or do these things need to be part of the display rendering?

  • Daniele Siragusano: Good question! I think we should keep outputs “clean”, and do more in LMTs.

  • Scott Dyer: We had HDR at the start, but focussed on SDR theatrical. We’ve learned a lot now.

  • Joachim Zell: Clients are important. They want the same look and feel everywhere, as a start point. ODTs should give comfort, but with the option to push further.

  • Scott Dyer: Related to simulation of one display on another. Look of HDR vs theatrical is a choice.

  • Daniele Siragusano: Useful while grading HDR to visualise SDR on the same monitor by toggling on a simulation. A preview, not a creative tool. JZ is talking about an appearance match.

  • Joachim Zell: To the early ODTs we later added ambient light compensation, but forgot to document which is which. We should have all the options, but be able to come back to “normal” to comfort our clients.

  • Daniele Siragusano: What is “normal”? What if you start with the HDR?

  • Joachim Zell: Obviously then you can’t go the other way.

  • Daniele Siragusano: 48 nits in PQ shouldn’t be a design goal of the rendering. It comes for free if you split things.

  • Alex Fry: Easy in Baselight, but elsewhere you need multiple ODTs. Too many ODTs in e.g. OCIO gets unwieldy.

  • Kevin Wheatley: We’ve covered most of the listed issues. Are there others? We should group these points – tone scale; should adjustments be made in RGB or elsewhere; etc.? Some people may not be here because they have too many issues.

  • Joachim Zell: We should involve the TAC team. ACES Central too.

  • Nick Shaw: What about people who have too many issues with ACES so don’t use it and aren’t on ACES Central? We need to bring them in. How?

  • Joachim Zell: We have a good group here. Covers the world and different areas of the industry.

  • Chris Brejon: Thomas and I were discussing in the chat. Maybe we need to define with examples what posterisation, mach bands, etc. are. Are the DropBox frames made with CTL or OCIO?

  • Scott Dyer: I use CTL to avoid errors. The comparisons of other renderings used the gamut mapping OCIO config for preview purposes.

  • Chris Brejon: I noticed the ACES 709 OTs in that config are not identical.

  • Scott Dyer: We must look into that.

  • Kevin Wheatley: That’s time. Thanks everyone. Next time we can discuss 1 or 2 areas in a focussed way, and start brainstorming solutions. We can ask the TAC to hunt for new people. Please suggest any new topics.

  • Joseph McCormick: May I suggest a topic: emulation ODT vs LMT. If LMTs, they need to be available to everyone.
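
A small numeric illustration of the per-channel (1D RGB) hue skew point above: the same input run through two different compressive curves (as for two different deliverables) ends up with different RGB ratios. The curve and values are arbitrary; any per-channel compressive curve shows the same behavior.

```python
import numpy as np

def curve(x, g):
    # Arbitrary compressive stand-in for a per-channel tone curve.
    return x / (x + g)

orange = np.array([2.0, 0.5, 0.05])   # a bright, saturated orange

for g in (0.8, 0.2):                  # two curves, as for two deliverables
    out = curve(orange, g)
    print(g, out / out.max())
# Input ratios are [1.0, 0.25, 0.025]. After g=0.8 they become roughly
# [1.0, 0.54, 0.08]; after g=0.2 roughly [1.0, 0.79, 0.22]. The ratios
# change (a skew toward yellow), and change differently per curve, so SDR
# and HDR versions of the same scene would skew differently.
```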

END

Meeting #2: December 16, 2020, 1pm PT

Attendance

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Chris Brejon
Harald Brendel
Annie Chang
Chris Clark
Liam Collod
Sean Cooper
Alex Forsythe
Pablo García Soriano
Francesco Giardiello
Joseph Goldstone
Ebs Hasche
Harvey Landy
Lou Levinson
Thomas Mansencal
Joseph McCormick
Carol Payne
Matthias Scharfenberg
J Schulte
Daniele Siragusano
Troy Sobotka
Kimball Thurston
Doug Walker
Mike Whipple
Raymond Yeung

Meeting Notes

  • Kevin Wheatley: Put up the list of issues from the DropBox Paper and asked the group "what's missing?"
  • Alex Fry: Where did the "magic numbers" in the existing version come from? Red modifier etc?
  • Scott Dyer: Details are lost to history, but originally came from an aim to be "pseudo filmic" in the early days. Now there is a desire to be more neutral as a start point. Glow came from perceived filmic look. Should it be moved to an LMT? It adds complexity and creates problems with invertibility.
  • Doug Walker: I don't consider it a "sweetener". It's compensating for the saturation effect of the RGB tone scale.
  • Thomas Mansencal: Counterpoint is artist aiming for e.g. saturated red have to fight the red modifier.
  • Alex Forsythe: It is compensating for "hot" reds.
  • Scott Dyer: Red modifier and glow are different. Glow is aesthetic. The red modifier can be taken out if the artefact it fixes is dealt with in another way. Joseph Goldstone, Jon McElvain and others tested alternate (2D-LUT based) IDTs, and found reds weren't as hot through the RRT with a "better" IDT.
  • Thomas Mansencal: Is the evaluation image set available?
  • Scott Dyer: Images still available but clearances need to be verified. Ditto for new submissions to this group.
  • Daniele Siragusano: Can't look at one building block in isolation. OT shouldn't compensate for inadequacies of IDT. Need to test OT with LMTs to ensure it's not creatively limiting.
  • Kevin Wheatley: Must make sure we can recreate previous RRT looks through whatever we come up with. People think current version has too much look. They end up inverting the RRT. We shouldn't go for a singular look, because that's a major complaint.
  • Kimball Thurston: We could evaluate the IDT separately using synthetic imagery. Only things like Macbeth charts where there is reference spectral data.
  • Kevin Wheatley: There is a debate about what should live in what block.
  • Harald Brendel: Natural images are ok because the IDT is optimised for e.g. skin tones. Difference < 5 ∆E. Saturated colors are more of an issue.
  • Scott Dyer: This time round we want to be less subjective. Must not "look horrific". Define objective criteria for things it should do and not do. Red chroma could have been quantified, rather than subjective judgement.
  • Daniele Siragusano: The "elephant in the room". One Output Transform vs building a standardized system for arbitrary output transforms. Hard to believe there is one transform to rule them all which is ideal for animation, live action etc.  But we still need a "vanilla" default. Use CLF to build a flexible system.
  • Carol Payne: How does that differ from an LMT?
  • Daniele Siragusano: It is using LMTs to do most adjustments so the OT almost "does nothing". Maybe you end up with one LMT for each viewing condition. 
  • Alex Forsythe: Mimic film process (not film look). You can still print an old neg on current stock and not be too far from intent. Look was in the stock. ACES wanted to do the same – bake creativity into the neg.
  • Kevin Wheatley: What is in the fixed section? Environment? Tone scale in the LMT?
  • Daniele Siragusano: You could say for an ACES pipeline you have to have some standard viewing condition, and you provide CLFs for different viewing conditions. Archive has a bunch of CLFs and AMFs which describe the rendering. Similar to OCIO. If you don't have somebody to make transforms, you can use a vanilla ACES rendering, or ARRI, RED, Josh Pines, John Quartel etc. Everybody is undoing the RRT to do this anyway.
  • Alex Fry: So like Baselight? Cut point shifted between RRT and ODT. Varied DRTs.
  • Daniele Siragusano: Can't see the future.
  • Harald Brendel: Surely appearance matching for different displays and viewing conditions should be part of the ODT? This part is technical.
  • Daniele Siragusano: Color appearance matching may not be fixed. May be differing desires for HDR and SDR. People are cherry picking aspects of Baselight system e.g. if they like the viewing environment compensation.
  • Harald Brendel: Back to Alex's point, there were always rendering choices in film, by selecting the print stock.
  • Doug Walker: One grade holding up across different devices is an important core part of ACES. Daniele made a good point. Should the architecture just be a way of storing and transporting an output transform? There should always be a reference OT, but you don't have to use it.
  • Thomas Mansencal: Not mutually exclusive. A lot of people like having the standard transform and what it gives them, or else they wouldn't be using it. Yes, there are people who don't like it; we need a default transform that looks good but could be removed.
  • Daniele Siragusano: Yes but we shouldn't enforce use of the vanilla transform.
  • Scott Dyer: Do we "bless" what people already do as part of ACES? The advantage of ACES is you don't need to keep track of loads of individual show LUTs. It looks the same to everybody. Once you establish “your look” in ACES, you can reuse that and don’t have to reinvent again and again.
  • Daniele Siragusano: But currently it's limiting. It's getting easier to pass metadata around. AMF.
  • Alex Fry: Baselight flexibility of DRT is interesting.
  • Thomas Mansencal: Stability is important. Don't want to do the work again for every show. We have one working space, and you view things under one view transform. For each show you may have a client LUT for review, but mostly everything is under one view transform which is what makes assets reusable.
  • Alex Fry: Not ideal to make view transforms with inverted RRT to cancel it.
  • Harald Brendel: When facilities have a custom LUT they don't hand it out. If it's parameterised, who "owns" this parameter set?
  • Daniele Siragusano: Studios are asking facilities to hand everything over these days.
  • Kevin Wheatley: We hand everything over in VFX, but it's not perfectly reproducible by somebody else. There's a question for ACES leadership. Do we "fix and improve" or make something more flexible? Rarely pure ACES at Framestore. You could build a way of tracking the current non-ACES pipelines within ACES.
  • Chris Brejon: For a lot of people it's important that it works out of the box, if you don't have a color scientist.
  • Daniele Siragusano: But a range of presets (ARRI, RED, FilmLight etc.) could be built in and you could swap them.
  • Kevin Wheatley: We need to decide the path we are going to take.
  • Alex Forsythe: Per Giorgianni, the process of rendering is fundamentally objective. Scene details are known, and then to go to a given display device you have to compensate. The intent of the RRT was to do what was common to all devices.
  • Harald Brendel: Preferred reproduction does not apply. We want objectivity as a starting point for developing a look.
  • Daniele Siragusano: You need to tune a model for a type of scene (high contrast, low contrast etc.) and it won't work as perfectly for different scenes. It's hard to be objective. We have a colorist in the pipeline who tunes for these variations. It's hard to make LMTs for OTs with kinks, because the LMT needs inverse kinks, and that's then not exposure invariant (a small numeric check of this property appears at the end of these notes).
  • Joseph Goldstone: A complex RRT leads to brittle LMTs.
  • Kevin Wheatley: No meeting over Christmas / New Year.
  • Scott Dyer: But you can post on ACES Central
  • Thomas Mansencal: We should do a poll on what people like and don't like. We did one for the RAE, but maybe we'll get more hits this time.
  • Joseph Goldstone: How do we pull in those who don't normally participate on ACES Central?
  • Kevin Wheatley: Lots of people who don't use ACES Central.
  • Alex Fry: Lift Gamma Gain and other places.
  • Joseph Goldstone: Some people had a business interest in ACES not being ready. Some wrote it off. We need to know why. We can reach out privately.
  • Kevin Wheatley: I can talk to people at Framestore.
  • Joseph Goldstone: Ask e.g. Josh Pines which colorists don't like ACES.
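
A minimal numeric check of the exposure invariance property mentioned above: a scene-referred transform f is exposure invariant if it commutes with a gain, i.e. f(k·x) = k·f(x). The example transforms below are arbitrary.

```python
import numpy as np

def is_exposure_invariant(f, x, k=2.0):
    # Exposure invariant <=> f commutes with a gain: f(k*x) == k*f(x).
    return np.allclose(f(k * x), k * f(x))

rng = np.random.default_rng(0)
x = rng.random((1000, 3))

M = np.array([[ 1.10, -0.05, -0.05],
              [-0.05,  1.10, -0.05],
              [-0.05, -0.05,  1.10]])

print(is_exposure_invariant(lambda v: v @ M.T, x))   # True: a matrix LMT is fine
print(is_exposure_invariant(lambda v: v ** 0.9, x))  # False: a per-channel power
                                                     # (like "inverse kinks") is not
```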
Thanks everybody. That's it for the year!

Meeting #1: December 02, 2020, 1pm PT

Attendance

Alex Fry
Kevin Wheatley
Scott Dyer
Nick Shaw

Rémi Achard
Dennis Adams
Lars Borg
Chris Brejon
Harald Brendel
Chris Clark
Liam Collod
Sean Cooper
Chris Davies
Dennis Felton
Alex Forsythe
John Frith
Francesco Giardiello
Joseph Goldstone
Ebs Hasche
Harvey Landy
Zach Lewis
Thomas Mansencal
Joe McCormick
Daniel Mulligan
Carol Payne
Joshua Pines
Peter Postma
Matthias Scharfenberg
Daniele Siragusano
Troy Sobotka
Doug Walker
Mike Whipple
Raymond Yeung
Joachim Zell

Meeting Notes


Group Goals
  • Create a stable, robust, capable, and visually pleasing rendering transform to be used for ACES
  • Correct for known artifacts when using the current model
  • Resolve design limitations and inconsistencies of current model
  • Enable unforeseen use cases with easy and parameterized output variables
  • Simplify
  • Maintain backwards compatibility
  • Document the how and the why of all design decisions

Kevin: Hoping that we can come together and come up with some form of solution to some of these goals. We may not meet all of these goals, but this outlines the scope of the work we’re going to do.

Deliverables
Kevin: As an architecture group, we are not necessarily going to produce what might be called “production ready” implementations. But I think it is reasonable for us to provide a reference implementation and test cases for validation.

Kevin: Unlike previous efforts, we are aiming to do as much open documentation as possible so that things are clear to everybody.

Potential Threats
There are a lot of things that could derail a project like this.
  • Finding the right balance between “Color Science” and “What Works”
  • Difficulty in having a shared viewing environment under current restrictions
  • Expanding the use case too much could make the system too vague
  • Difficult separation of the technical vs aesthetic aspects of the problem
  • Refactor existing vs independent new beginning?
  • Existing vendor solutions
  • Potentially long duration of the project

Kevin: A few I've picked out to highlight -
  • I think we’re going to run into issues because of the current pandemic situation which will make it difficult to have a shared viewing environment. However, we’re also going to turn that into a positive aspect because in previous iterations it was very much the case that if you weren’t able to get to the room where everybody was viewing you were kind of on your own. 
  • There’s also potentially a long duration of the project - so we could run adrift pretty easily. Hoping we can establish suitable processes that will allow us to keep on top of deliverables and sort of incrementally improve as we go on.

Roadmap
Kevin: Phase 1 is about gathering requirements - getting everybody to contribute what they have thought is right or wrong with the existing system and where it needs improvement.
In today’s market, HDR deliverables are key. And should also try to consider other output types such as VR, gaming, etc.

We want to refine the scope, establish rules of engagement about how we want to move forward to actually implementing the algorithm in Phase 2.

Call for proposals
  • How the algorithm should be architected
  • How we might establish a good test environment or testing criteria for it (we might have a splinter group that just works on that)

Kevin: Phase 2 is going to be trialing the solutions and iterating.
In the end there will be a CTL deliverable but along the way other languages or implementations are completely valid.

Questions
Josh Pines: Just to be clear, does the scope of this group include the RRT or is this just the ODT?
A: Yes. The term Output Transform means the combined RRT/ODT (defined in ACES 1.0 Component Names). This is a redesign of the system for rendering scene-linear ACES2065-1 to specific displays.

Chris Brejon: Is there a plan for a spreadsheet or somewhere to track the answers to the prompt questions (i.e. what is right? What is wrong? etc.)
A: We are open to suggestions for other tools. But for now, ACESCentral is the appropriate place to elaborate on issues. We will curate those discussions and put the gist on the Dropbox Paper site, with links back to the forum posts that contain more context and detail.

Josh Pines: Currently it’s unclear how or even if ACES is compatible with productions that require a DolbyVision deliverable.

Joachim Zell: We also need to have the ability to make the content look the same across different displays and/or optimize for each display.

Colorimetric match and appearance match are not the same

Joseph: I think we need both - to optimize for the medium and to be able to make them match across displays.

Parameters should not be accessible to the user. Parameters exist for technical reasons to allow for making outputs to different setups. Creative intent should not be instilled via parameters.

HDR dynamic metadata: how do we design for, and also test for, how it will affect our outputs?

Daniele: You can add another abstraction layer between the rendering and the display, which allows an intermediate step that you can target, then use other transforms to create the various outputs. Keep the appearance-based stuff in the code cleanly separated from the boring calibration things.
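
To illustrate that layering under stated assumptions, a purely hypothetical Python sketch: the appearance rendering happens once, into an intermediate reference state, and each real display is then reached by calibration math only. All function names, targets and numbers here are invented.

```python
import numpy as np

def render_to_intermediate(scene_rgb):
    # Appearance rendering happens once (placeholder compressive curve).
    x = np.maximum(scene_rgb, 0.0)
    return x / (x + 1.0)

DISPLAYS = {
    # Hypothetical per-display data: a primaries matrix (identity here for
    # simplicity) and a simple power as a stand-in for the inverse EOTF.
    "rec709_100nit": {"matrix": np.eye(3), "inv_eotf_power": 1.0 / 2.4},
}

def intermediate_to_display(img, display):
    d = DISPLAYS[display]
    img = img @ d["matrix"].T                      # calibration math only
    return np.clip(img, 0.0, 1.0) ** d["inv_eotf_power"]

def output_transform(scene_rgb, display="rec709_100nit"):
    return intermediate_to_display(render_to_intermediate(scene_rgb), display)
```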

D60 sim: Its inclusion almost doubles the number of Output Transforms. Could this be an LMT?

Meeting schedules
Preference was indicated for alternating meeting times, so that most participants could join at least every other week.

All meetings are recorded and posted with transcriptions (and you can increase the playback speed up to 2x so you can get through the meetings twice as fast!)

The group leadership will distribute a poll to get preferred day/time and we will establish a regular meeting cadence.

Chris Davies: The dynamic range being covered by the Output Transforms is worth review. The huge range has caused issues in the past.
A: This concern might be tied in to the OCIO v1 implementations which were LUT based - and this should be improved greatly by OCIO v2. [We should follow up with Chris to understand this issue better]

Chris Davies: What do we do with imagery that is display-referred?

We need to define what we mean by “invertible”. What must it do? What do we want to avoid?

Daniele: Two use cases:
  • Some just need to take an output-referred image into ACES so it can be rendered forward again (no modification, just to make it align with other imagery) – fine
  • Others want to take an output-referred image back into ACES and do grading on it or otherwise modify it – potentially problematic

Ideally an Output Transform would be capable of filling the gamut volume of every target. Currently this is not the case (due to RRT “sweeteners”) which causes problems for invertibility.

Thomas: AR is an example use case where display-referred imagery needs to be mapped onto geometry in scene space.

Joseph: One important issue in the earlier development was to try to preserve saturation in brightly colored highlights such as taillights and neons. There was a lot of effort to do that. 

Max(RGB) based tone-mapping was used in 0.1.1, but then rejected.

Implementation code exists for Gary and Doug’s Core Rendering Algorithm. Hopefully this can be shared with the group for evaluation.