Frequently Asked Questions


What is OpenDCX?

OpenDCX is an open-source C++ library comprising extensions for the OpenEXR library and its deep file format. It adds support for per-sample deep metadata, notably subpixel masks and surface-type flags, and provides utility functions to manipulate deep pixels with this metadata. It is developed and maintained by DreamWorks Animation for use in rendering and compositing applications typically encountered in feature film production. Subpixel-mask deep compositing provides anti-aliasing of deep images while keeping memory and disk sizes in reasonable check, and received favorable feedback when presented at DigiPro 2015 in the paper "Improved Deep Image Compositing Using Subpixel Masks". We hope to encourage industry adoption of subpixel-mask deep compositing technology with eventual inclusion in standard production libraries, applications and renderers.

What license is OpenDCX distributed under?

OpenDCX is released under the Modified BSD License.

Is there a Contributor License Agreement for OpenDCX?

Yes. Developers who wish to contribute code to be considered for inclusion in the OpenDCX distribution must first complete the Contributor License Agreement and submit it to DreamWorks (directions are in the CLA).

Why should I use OpenDCX?

The most common reason is if you want more accurate combining of separate hard-surface renders, or are having issues with the pixel filtering of flat renders not matching that of deep renders. A secondary reason is the ability to tag samples as matte-cutout without adversely affecting subsequent operations, and to have the cutout state accurately applied during flattening.

What does OpenDCX stand for?


What is the version numbering system for OpenDCX?

OpenDCX tracks the OpenEXR library's major/minor version number and increments only the patch number (the third number). So, for example, OpenDCX 2.2.1 and 2.2.5 require OpenEXR 2.2, while OpenDCX 3.0.15 and 3.0.22 would require OpenEXR 3.0.

Why use an 8x8 fixed subpixel mask rather than a variable sized mask?

Through experimentation we found an 8x8 mask to be the most reasonable compromise between quality and storage requirements. Initially, 4x4, 8x8, and 16x16 masks were tried and tested.
4x4 was found to be too coarse, providing only 16 levels of transparency for aa/mblur, though conveniently it took only a single 32-bit float to store while still leaving room for flag bits.
16x16 provided a much finer subpixel resolution and 256 transparency levels, but required eight 32-bit floats to store and manage. While the eight mask channels compress well inside the OpenEXR file, once decompressed into memory they consumed significantly more memory and erased much of the savings realized from collapsing samples together.
8x8 turned out to be the best compromise between subpixel resolution, aa/mblur transparency quality and memory use. While an 8x8 mask provides only 64 levels of aa/mblur transparency, which initially seems too low to avoid visible quantization artifacts, in practice antialiasing ramps at surface edges rarely extend past 2-3 pixels, and smooth motion-blur streaks are typically dithered and visible for only a fraction of a second. Keep in mind that surface opacity is stored separately at full precision (a 16 or 32-bit float alpha channel). In production testing the 64-level quantization of pixel coverage did not prove to be a problem.
However, 2D-transforming deep pixels by subpixel amounts and resampling the 8x8 masks does not provide enough resolution to avoid incorrect output sample weights, which introduces aliasing. This is avoided by allowing partial-coverage samples, identified by the partial-coverage weight, to be additively combined with adjacent samples when flattened. See the DeepTransform sample() method for an example of producing partial-coverage samples.

Can OpenDCX support a variable-sized subpixel mask? (Update Feb 2017)

No. An evaluation of the practical cost/benefit factors showed that increasing the mask size did not improve the subpixel quality enough to justify the cost of carrying the additional bits around on every deep sample. Release 2.2.2 instead adds support for a per-sample partial subpixel-coverage weight which, when used in conjunction with the 8x8 mask, provides effective high-resolution subpixel coverage.

Can post pixel-filtering a deep render in comp perfectly match a pixel-filtered flat render from a renderer?

Yes and no. One of the goals of the OpenDCX scheme is to transfer subpixel information produced in the renderer's subpixel loop out of the renderer to the compositing step by storing some of it in the OpenEXR file - specifically the subpixel mask and the hard-surface/matte-cutout indicators. However, to save space we do not store a sample's true subpixel location and only know its location(s) inside a regularly-spaced grid. We can filter across pixel boundaries using these grid locations but can never perfectly replicate the original distribution of samples in the renderer, leading to slight differences in filtering between a pre-filtered flat render and a post-filtered deep render.
However, in practice this has not been a problem since even using a regular distribution of samples is significantly better than not having any subpixel information. A more important factor is replicating the renderer's pixel-filter algorithm and profile in the flattening operation.

How can there be disk-space/memory savings if you're adding additional data to each sample?

By collapsing samples - i.e. consolidating or combining samples together.
This can be done because multiple samples are no longer required to provide partial-coverage weighting to a pixel due to antialiasing and motion blur - the subpixel mask provides that. The additional storage cost of the subpixel mask is outweighed by the reduction in overall sample count, and normally results in a net reduction in disk-space and memory consumption. Typically a full-color deep sample consists of six channels - RGBA, Z & ZBack - where RGBA are 16-bit half-floats and Z & ZBack are 32-bit floats. A single deep sample thus costs 16 bytes (4*2 + 2*4), and the spmask and flag channels add another 10 bytes (2*4 + 2). Collapsing just two samples into one saves 6 bytes (16*2 - (16 + 10)) even after adding the additional channels.
As detailed in the DigiPro paper, collapsing samples together can destroy high-frequency detail so one must be careful to use multiple criteria for the combining logic.
Note that collapsing works well for hard-surface renders but not for volumetric renders, as there's often only one sample per pixel and therefore nothing to combine. However, volumetric renders normally do not require subpixel information and can be written without the spmask & flag channels, with any sample reduction achieved by combining multiple volume step samples into log-interpolated ranges.

Why support linear sample interpolation rather than using the log interpolation defined in the OpenEXR standard?

When multiple hard-surface samples are combined (collapsed) within a single pixel they typically have varying Z values that must be somehow combined. If only the nearest Z is used for the final combined sample then Z overlaps between multiple hard-surfaces can only be resolved with a binary choice resulting in aliasing that's commonly seen in Z-merged flat renders.
A better solution is to find the nearest/farthest Z range and store that in the final combined sample as Z(front) and ZBack, allowing depth overlaps to be blended and thus reducing aliasing at intersections. However, depth blending requires the sample to be interpolated within the Z(front)-ZBack range, and unfortunately the default log sample interpolation defined in OpenEXR does not allow this in practice for opaque hard-surfaces. Log sample interpolation works well when sample opacities are less than 1.0, which is typical of volumetric samples; however, hard-surface samples are typically opaque (opacity 1.0), where log interpolation mathematically fails - an opaque sample's transmission is zero, whose log is undefined, so the sample interpolates to full opacity at any depth fraction rather than blending. Linear interpolation solves this for opaque surfaces but is not appropriate for interpolating volumetric objects, thus the need to support both.
To identify which interpolation mode to use, we added a flag bit to each sample, set to 1 to mark a hard-surface sample. The flattening algorithm checks that bit when splitting/merging overlapping samples and performs the appropriate interpolation.
In the absence of the spmask channels the default behavior is to use log interpolation, as defined in the OpenEXR standard.

Do I always need to add subpixel masks to my deep renders?

No; subpixel masks are most useful for hard-surface renders, so adding them to volumetric renders is usually unnecessary. If a volumetric render's surface is solid enough to appear as a hard surface (perhaps for very thick smoke) then subpixel masks may be useful, but generally speaking the transparency of volumetric renders does not require accurate overlap resolution at a subpixel level.
OpenDCX's flattening algorithm interprets an empty subpixel mask (all bits off) as a special case and treats it as full-coverage, so you can mix subpixel-mask and non subpixel-mask samples together in the same deep pixel and it will flatten correctly.

Are there example images with subpixel masks encoded in them?

Yes, please see the Downloads page.
Please note the DigiPro paper mentions the surface-flags metadata being stored in a half-float channel named spmask.3 - this was changed in the OpenDCX release to spmask.flags so that spmask channels 3-8 would remain available for additional mask bits in the future. The example images use the new channel name.