OpenDCX

Deep Compositing Extended

OpenDCX 2.2.2 documentation

Theory of Operation

Modified excerpts from the DigiPro 2015 paper Improved Deep Image Compositing Using Subpixel Masks, which is available at DreamWorks’ research portal.

Deep Compositing Workflow Challenges

_images/flat_vs_deep_03.jpg

The current industry-standard workflow for rendering and handling deep images was outlined in [Hillman 2013] and implemented in the OpenEXR library starting with version 2.0 [Kainz 2013]. The manipulation of deep data for compositing is often performed in The Foundry’s Nuke compositing package, which provides a specialized tool set for manipulating deep data that conforms to the OpenEXR 2.0 recommendations.

In this workflow each deep sample contains at least one Z-depth value defining the sample’s distance from camera, or two Z values (termed Zfront and Zback) which define the depth range the sample covers. A depth range of 0 indicates a hard surface while a range > 0 indicates a homogeneous volume segment. The color encoded into such a volumetric sample is the color at Zback, with logarithmic interpolation used to determine the color at any depth between Zfront and Zback.
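
For reference, a minimal C++ sketch of that logarithmic interpolation as described in the OpenEXR 2.0 deep documents (function and parameter names here are illustrative only, not part of any library):

#include <cmath>

// Log-interpolate a volumetric sample's alpha and premultiplied color at a
// fractional position t in [0,1] between Zfront and Zback. Alpha accumulates
// exponentially through a homogeneous volume, so the visible alpha at t is
// 1 - (1 - alpha)^t. Note the degenerate case alpha == 1.0: the segment
// cannot be subdivided this way (see the hard-surface discussion below).
void logInterpolateSample(float alpha, const float rgb[3], float t,
                          float outRgb[3], float& outAlpha)
{
    outAlpha = 1.0f - std::pow(1.0f - alpha, t);
    // Scale the premultiplied color by the same proportion as the alpha:
    const float scale = (alpha > 0.0f) ? (outAlpha / alpha) : t;
    for (int i = 0; i < 3; ++i)
        outRgb[i] = rgb[i] * scale;
}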

While this workflow works well for combining volumetric and hard-surface samples or combining multiple volumetric samples, it does not work as well when combining hard-surface samples with each other. This is primarily due to:

  1. Lack of subpixel spatial information.
  2. No ability to pixel-filter deep samples.
  3. Only logarithmic interpolation of samples is supported.

Lacking additional surface information, there is no way to determine the x/y correlation between samples, so it is impossible to correctly weight their respective contributions to the final flattened pixel. One way around this is to multiply (pre-weight) the sample color & alpha by its pixel contribution, or coverage, but that only works while the samples are kept isolated. Since flattening is performed on the depth-sorted samples front-to-back by successive under operations, the weighting must be non-uniform to compensate for the decreasing contribution of each successive sample.
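
For context, flattening depth-sorted, premultiplied samples by successive under operations accumulates roughly as in this sketch (illustrative only, not the actual flattener):

// Front-to-back "under" accumulation of depth-sorted, premultiplied samples.
// Each successive sample is attenuated by the transparency (1 - accumulated
// alpha) remaining in front of it, which is why any coverage pre-weighting
// baked into the samples must anticipate this decreasing contribution.
struct Sample { float r, g, b, a; };

Sample flattenUnder(const Sample* sorted, int count)
{
    Sample out = {0.f, 0.f, 0.f, 0.f};
    for (int i = 0; i < count; ++i) {
        const float viz = 1.0f - out.a;   // transparency remaining in front
        out.r += sorted[i].r * viz;
        out.g += sorted[i].g * viz;
        out.b += sorted[i].b * viz;
        out.a += sorted[i].a * viz;
    }
    return out;
}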

When these pre-weighted samples are interleaved with other samples during a deep merge operation there’s no guarantee the correct sample weighting will still exist, leading to visual artifacts.

Common-Edge and Overlapping Surfaces Example

Examples of deep sample arrangements that do not flatten properly using the existing OpenEXR 2.0 methodology due to the lack of subpixel spatial information. The top section of each diagram shows the correct result from a renderer integrating at subpixel resolution, while the bottom section shows the incorrect result from the current OpenEXR 2.0 methodology.

_images/openexr2.0_flattening_01.jpg

Ex. 1: Common-Edge Surfaces - surfaces that may or may not connect but align to camera with a shared edge.

_images/openexr2.0_flattening_02.jpg

Ex. 2: Overlapping Surfaces - surfaces that are separated in depth but occlude each other.

Subpixel Masks - Per Sample A-Buffer

_images/subpixel_mask_grid_01.jpg

Outputting many subpixel surface fragments as deep samples requires a tremendous amount of memory and disk space, both of which are already stressed by existing deep compositing workflows.

A better way to retain the subpixel spatial information while reducing deep sample count is to combine (collapse) subpixel surface fragments together while simultaneously building up a subpixel bitmask, interpreted as a 2D array of bits. This bitmask provides x/y correlation and pixel coverage information with a minimum of additional per-sample storage. The bitmask should be at least 8x8 (64 bits) to adequately capture high-frequency details like fur and hair. Larger masks could be used, but their storage needs become prohibitive and supporting variable-sized masks severely complicates sample management.
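
For illustration, an 8x8 mask packs into a single 64-bit word and the sample’s pixel coverage is simply the fraction of set bits; a minimal sketch (the type and function names here are illustrative, not the actual OpenDCX API):

#include <cstdint>

typedef uint64_t SpMask8;                    // 8x8 subpixel mask, one bit per bin
const SpMask8 SPMASK_FULL_COVERAGE = ~SpMask8(0);

// Set the bit for subpixel bin (sx, sy), with sx and sy in 0..7:
inline void setSubpixel(SpMask8& mask, int sx, int sy)
{
    mask |= SpMask8(1) << (sy*8 + sx);
}

// Pixel coverage of a sample = fraction of enabled subpixel bins:
inline float coverage(SpMask8 mask)
{
    int count = 0;
    for (SpMask8 m = mask; m != 0; m >>= 1)
        count += int(m & 1);
    return float(count) / 64.0f;
}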

Deep pixel flattening is performed at each subpixel mask bit by finding all deep samples that have that bit enabled, depth-sorting them, and merging front-to-back while handling sample overlaps. The subpixel’s flattened result is then integrated with the other subpixel results in a pixel filter to produce the final pixel. This produces more accurate results for overlapping and common-edge surfaces since the depth ordering of the deep samples at each subpixel location is handled uniquely. It also reduces aliasing along surface edges and eliminates the incorrect color mixing of overlapping opaque samples.
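
Building on the Sample/flattenUnder and SpMask8 sketches above, the per-subpixel flattening loop can be outlined as follows (a box filter stands in for the pixel filter, segment-overlap handling is omitted, and the names are illustrative rather than the OpenDCX API):

#include <algorithm>
#include <vector>

struct DeepSample { float Zf, Zb; Sample color; SpMask8 spmask; };

// Flatten a deep pixel by visiting each of the 64 subpixel bins separately,
// then averaging the 64 results (a simple box pixel filter).
Sample flattenDeepPixel(const std::vector<DeepSample>& samples)
{
    Sample pixel = {0.f, 0.f, 0.f, 0.f};
    for (int bin = 0; bin < 64; ++bin) {
        // Gather the samples whose mask has this subpixel bit enabled:
        std::vector<const DeepSample*> covering;
        for (const DeepSample& s : samples)
            if (s.spmask & (SpMask8(1) << bin))
                covering.push_back(&s);

        // Depth-sort front-to-back, then merge with successive unders:
        std::sort(covering.begin(), covering.end(),
                  [](const DeepSample* a, const DeepSample* b) { return a->Zf < b->Zf; });
        std::vector<Sample> sorted;
        for (const DeepSample* s : covering)
            sorted.push_back(s->color);

        const Sample sub = flattenUnder(sorted.data(), int(sorted.size()));
        pixel.r += sub.r; pixel.g += sub.g; pixel.b += sub.b; pixel.a += sub.a;
    }
    // Average the 64 subpixel results:
    pixel.r /= 64.f; pixel.g /= 64.f; pixel.b /= 64.f; pixel.a /= 64.f;
    return pixel;
}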

Since the subpixel mask already encodes pixel coverage, it is important to keep surface opacity and pixel coverage separate so that interleaving uncorrelated deep samples does not produce weighting artifacts upon flattening. The surface color is still premultiplied by the surface opacity but is not premultiplied by coverage, which is captured in the subpixel mask pattern.

The final result from this flattening and pixel filtering is not exactly the same as the result from the renderer’s filter (for example, jittered subpixel locations have been lost), but it is significantly better than no filtering at all.

_images/deep_weighting_comparison_v2.jpg

Surface Flags and Sample Interpolation

LINEAR_INTERP = 0x0001

Deep merging of separate hard-surface renders will often produce aliasing along any surface intersections due to the lack of surface slope information at the intersection points. Subpixel masks do not help here as both surfaces likely have full pixel coverage at these locations. The surface normal is of limited value since orientation alone does not provide enough information without corresponding depth information. What is needed is the Z-depth range that the surface covers within the deep pixel, so that the visible portion of each surface’s contribution can be linearly weighted relative to the other surface’s contribution, resulting in an anti-aliased intersection.

This Z-blending effect can be visually significant at the edges of rounded objects, where the angle of the surface to camera is most oblique, and in regions where the slopes of intersecting surfaces are nearly equal. However, actually performing the linear interpolation is a challenge in the current OpenEXR workflow since samples with thickness > 0 are assumed to be volumetric segments and only log interpolation is supported. We need some way of knowing whether a deep sample is hard-surface or volumetric, and it must be stored on the sample so that deep merging hard-surface and volumetric samples together retains that information through subsequent deep operations.

_images/deep_interpolation_comparison_01.jpg

To handle this we add a hard-surface flag bit to each deep sample and store the bit in a 16-bit half-float flag channel as an integer value. If the flag is on (1.0) the flattener performs linear interpolation of the sample’s depth range, and if it’s off (0.0) it performs the normal log interpolation for a volumetric sample. The flattening algorithm must carefully handle overlaps of differing surface types when combining sample segments.
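
A simplified sketch of how a flattener might select the interpolation based on this flag (the names and the particular linear weighting shown are illustrative assumptions, not the exact OpenDCX code):

#include <cmath>

const unsigned LINEAR_INTERP = 0x0001;

// Interpolate a sample's alpha at fraction t in [0,1] of its Zfront..Zback
// range. Hard-surface samples (LINEAR_INTERP set) are weighted linearly by
// the visible portion of their depth range, which remains well behaved even
// when alpha == 1.0; volumetric samples use the standard log interpolation.
float interpolateAlpha(float alpha, float t, unsigned flags)
{
    if (flags & LINEAR_INTERP)
        return alpha * t;                          // linear: visible fraction of the surface
    return 1.0f - std::pow(1.0f - alpha, t);       // logarithmic: homogeneous volume
}

// The flag is stored as an integer value in a half-float channel, so reading
// it back is a simple float-to-integer conversion:
inline unsigned flagsFromChannel(float flagChannelValue)
{
    return unsigned(flagChannelValue + 0.5f);
}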

_images/flattening_01.jpg

This scheme is backward-compatible with existing volumetric images written without the hard-surface flag since the flag channel will be filled with zeros when deep pixels are merged together during compositing. We have modified our renderer to set this flag when outputting deep samples, as it is aware of the surface type during shading. We also expose controls for the user to set or change this bit on Nuke’s deep OpenEXR reader, or with a separate custom DeepSurfaceType operator.

The side-by-side closeups above compare the hard-surface intersections of two offset, overlapping teapots with and without linear interpolation. Note that (a) and (b) appear identical: although the samples in (b) have depth ranges, log interpolation fails when sample alpha is 1.0, which is a very common case with hard surfaces.

Matte Flag and Mutual Cutouts

MATTE_OBJECT = 0x0002

Another useful custom attribute to store on each sample is whether or not it is a matte object. Making an object the matte source for another object is a common operation in production rendering. The result is similar to setting the matte object’s color to black (0) and merging it with other objects so that the black matte object becomes a holdout of the others. Unfortunately, just setting the object color to zero does not produce a correctly held-out alpha, and setting the matte object’s alpha to zero simply makes it a transparent black object, producing no holdout at all.

Because of this the matte operation is handled as a special case in a renderer when merging surface samples together and requires some indicator that a surface is matte, either via a shader setting or a geometry attribute. This surface information is normally only valid inside the renderer and is difficult to pass on to later compositing steps. The matte flag bit has the float value 2.0 and is checked during flattening (or by any operation that needs to take matte-ness into account), where the matte operation is handled just as it is in the renderer.
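
A minimal sketch of one way a flattener can honor the matte flag during front-to-back accumulation (illustrative only): the matte sample still occludes the samples behind it, but contributes neither color nor alpha to the output.

const unsigned MATTE_OBJECT = 0x0002;

struct FlagSample { float r, g, b, a; unsigned flags; };

// Front-to-back accumulation with matte (holdout) support. The remaining
// transparency in front of each sample is tracked explicitly so a matte
// sample can occlude later samples without adding to the output color/alpha.
FlagSample flattenWithMattes(const FlagSample* sorted, int count)
{
    FlagSample out = {0.f, 0.f, 0.f, 0.f, 0};
    float viz = 1.0f;                       // transparency remaining in front
    for (int i = 0; i < count; ++i) {
        const FlagSample& s = sorted[i];
        if (!(s.flags & MATTE_OBJECT)) {
            out.r += s.r * viz;
            out.g += s.g * viz;
            out.b += s.b * viz;
            out.a += s.a * viz;
        }
        viz *= (1.0f - s.a);                // matte and non-matte both occlude
    }
    return out;
}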

_images/matte_cutouts_01.jpg

Additive Flag and Partial Subpixel Coverage

ADDITIVE = 0x0004

PARTIAL_SPCOVERAGE = 0x0100..0xff00

Sometimes an 8x8 spmask abuffer does not have enough resolution to adequately capture the coverage weights of a render or a transformed deep image.

For example, at certain fractional subpixel transform offsets, rotations, or scales there is not enough abuffer resolution to adequately capture the transformed spmask, leading to aliasing of the transformed image. Simply increasing the spmask resolution is not a practical option, so to compensate the partial-spcoverage flag was expanded to 9 bits (only 8 bits are actually stored; the 9th bit is implied by the ADDITIVE bit), providing 256 levels of partial subpixel-coverage weight per deep sample, not per subpixel.

Note

The partial subpixel-coverage weight is shared between all enabled subpixels in a deep sample.

_images/subpixel_mask_grid_02.jpg

It would be great to store a weight per subpixel bin, but that is also not practical and is equivalent to increasing the abuffer resolution.

To provide multiple per-subpixel coverage weights we duplicate the deep sample into a full-spcoverage sample and one or more partial-spcoverage samples, one for each weight that we need. Only the subpixel bits that share the same (or nearly the same) coverage weight are enabled in each spmask. The color channels for the partial samples include subpixel coverage but not pixel coverage.

For example, an input deep sample with transformed subpixel bins straddling multiple output subpixel bins is split into multiple output deep samples, where the first output sample has the full-spcoverage mask and the other output samples are the partial-spcoverage samples, which include value weighting and are additive.

Note

Additive-ness is a key attribute when flattening partial-spcoverage samples! Otherwise partials from one surface will incorrectly suppress partials from another sample. The flattener algorithm bundles the additive and non-additive samples when determining sample splits and merges.

_images/partial_subpixel_coverage_01.jpg
_images/partial_subpixel_coverage_02.jpg

RayTracing Combiner

Output two or more duplicated deep samples for each combined surface sample. The first sample is the ‘full’ subpixel mask where each on bit indicates a saturated subpixel bin, which will be the case for the vast majority of combined-hard-surface pixels (furry characters being the exception). What constitutes the spbin saturation level depends on how you’re distributing samples.

Where there are non-saturated spbins (usually at the edges of surfaces) you output additional deep samples with the spbins enabled for those partial subpixels, and set the partial-spcoverage-weight value in the flags channel (8 bits worth). To reduce the number of possible additional partial samples you threshold the partials into a list of weight bins and enable the spmask bits for all weights in common. For example, if multiple subpixel samples from the same surface in the same pixel have a partial subpixel coverage of 50% then one additional output deep sample is added with a 0.5 partial weight and all spbins enabled which have that 50% weight. If there’s a 25% partial then that’s another additional output deep sample. The ‘full’ and ‘partial’ output deep samples have mutually-exclusive spmasks - that is, there should be no bits in common between their spmasks.

In the OpenDCX flattening algorithm, as each subpixel bin is visited the partial samples for that bin are additively combined so that partials add together rather than cross-suppress each other. Thus each subpixel bin in the 8x8 grid has potentially 256 levels of subpixel transparency, which is further divided down when all subpixel flattened results are integrated together. Smarter guys than me can compute how many possible transparency levels that works out to...
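
A rough sketch of the weight-binning step described above, assuming the renderer has already computed a coverage weight for each of the 64 subpixel bins of one combined surface sample (the names and quantization details here are illustrative):

#include <cstdint>
#include <map>

// Build one 'full' spmask plus one spmask per quantized partial weight.
// The partial masks carry an 8-bit coverage count (1..255) that is later
// stored in the flags channel alongside the ADDITIVE bit.
void bucketSubpixelCoverage(const float weights[64],
                            uint64_t& fullMask,
                            std::map<int, uint64_t>& partialMasks) // count -> mask
{
    fullMask = 0;
    partialMasks.clear();
    for (int bin = 0; bin < 64; ++bin) {
        const float w = weights[bin];
        if (w <= 0.0f)
            continue;                                   // bin not covered at all
        const int count = int(w * 256.0f + 0.5f);       // quantize to 1..256
        if (count >= 256)
            fullMask |= uint64_t(1) << bin;             // saturated bin -> 'full' sample
        else if (count > 0)
            partialMasks[count] |= uint64_t(1) << bin;  // shared-weight partial sample
    }
    // The caller emits one deep sample for fullMask (no partial bits, ADDITIVE off)
    // and one ADDITIVE deep sample per entry in partialMasks, with the sample's
    // values scaled by count/256 and the count packed into flag bits 0x0100..0xff00.
    // The full and partial spmasks are mutually exclusive by construction.
}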

Adding additional deep samples may seem expensive, but it’s much, much cheaper than increasing the subpixel mask resolution, and it’s not practical to have varying mask sizes due to the channel limitations in apps and OpenEXR itself.

If any of the 8 partial-spcoverage bits are enabled, the deep sample values include partial subpixel-coverage weighting (spcoverage) equal to the binary value of the coverage bits. These bits work in conjunction with the ADDITIVE flag to fully define partial or full spcoverage.

Partial-spcoverage count-to-weight conversion is biased by 1, so the maximum coverage count is 256 (0x100), indicating full-spcoverage, and common weights like 0.75, 0.5 and 0.25 are represented exactly. Logically 256 is identical to 0, since a zero-spcoverage sample cannot exist and the full-spcoverage state is the same as no partial-spcoverage at all, so we don’t actually require a 9th bit to store the value 256.

i.e. when the ADDITIVE bit is enabled and any of the partial bits are set, the combination represents a partial weight:

flag bits  count    weight
 0x00004   (256)    (1.0)   <<< implied value
 0x0ff04    255      0.9961
 0x0c004    192      0.75
 0x08004    128      0.5
 0x04004     64      0.25
 0x00104      1      0.0039

When accumulating partial-spcoverage weight counts, once maxSpCoverageCount is reached or exceeded the sample is marked as full-coverage by clearing the coverage value to 0x00000 and also clearing the ADDITIVE flag.
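
In code, the flag packing, count-to-weight conversion, and full-coverage promotion might look like this sketch (names are illustrative; only the bit values come from the flag definitions above):

#include <cstdint>

const uint32_t ADDITIVE           = 0x0004;
const uint32_t PARTIAL_SPCOVERAGE = 0xff00;   // 8 stored count bits, biased by 1
const int      maxSpCoverageCount = 256;      // full coverage, implied (not stored)

// Extract the partial-spcoverage count from the flag bits. A stored count of
// 0 means no partial coverage at all, i.e. the implied full coverage of 256.
inline int spCoverageCount(uint32_t flags)
{
    const int stored = int((flags & PARTIAL_SPCOVERAGE) >> 8);
    return (stored == 0) ? maxSpCoverageCount : stored;
}

inline float spCoverageWeight(uint32_t flags)
{
    return float(spCoverageCount(flags)) / float(maxSpCoverageCount);
}

// Accumulate an additional partial count onto a sample; if the total reaches
// or exceeds maxSpCoverageCount the sample becomes full-coverage: clear the
// stored count bits and the ADDITIVE flag.
inline void addSpCoverage(uint32_t& flags, int count)
{
    const int total = int((flags & PARTIAL_SPCOVERAGE) >> 8) + count;
    flags &= ~PARTIAL_SPCOVERAGE;
    if (total >= maxSpCoverageCount)
        flags &= ~ADDITIVE;
    else
        flags |= uint32_t(total) << 8;
}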

(work in progress)