oneAPI Deep Neural Network Library (oneDNN)  1.6.0
Performance library for Deep Learning
Primitive Attributes: Post-ops

oneDNN implements some basic capabilities of operation fusion using the post-ops attributes API. Operation fusion typically reduces memory bandwidth pressure, which leads to higher performance.

Post-ops are operations that are appended after a primitive. They are implemented using the Primitive Attributes mechanism. If there are multiple post-ops, they are executed in the order they have been appended.

Currently the following post-ops are supported by the library:

| Post-ops \ Primitive | Convolution | Inner Product | Batch Normalization |
|----------------------|-------------|---------------|---------------------|
| Eltwise              | Partial     | Partial       | Partial             |
| Sum                  | Partial     | N/A           | N/A                 |
| Depthwise            | Partial     | N/A           | N/A                 |

Just like Primitive Attributes, the post-ops are represented by an opaque structure (dnnl_post_ops_t in C API and dnnl::post_ops in C++ API) which is copied once it is attached to the attributes using the C++ dnnl::primitive_attr::set_post_ops or C dnnl_primitive_attr_set_post_ops functions. The attributes then must be passed to a primitive descriptor creation function to take effect. Below is a simple skeleton for the C++ API:

dnnl::post_ops po; // default empty post-ops
assert(po.len() == 0); // no post-ops attached
po.append_SOMETHING(params); // append some particular post-op
po.append_SOMETHING_ELSE(other_params); // append one more post-op
// (!) Note that the order in which post-ops are appended matters!
assert(po.len() == 2);
dnnl::primitive_attr attr; // default attributes
attr.set_post_ops(po); // attach the post-ops to the attr
// further po changes would not affect attr
primitive::primitive_desc op_pd(params, attr); // create a pd with the attr
Note
Different post-ops can be chained together by appending one after another. Note that the appending order matters: the sequence of the post-ops is executed in the order of appearance.
Warning
Different primitives may have different post-ops support. Moreover, the support might also depend on the actual implementation of a primitive. For instance, the library generally does not support post-ops for reference primitives (which are typically very slow, so there is no point in doing the actual fusion). Robust code should handle errors accordingly. See the section on attributes error handling.
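A minimal sketch of such handling, assuming an attr with post-ops attached (as in the skeleton above), a convolution operation descriptor conv_d, and an engine; the fallback strategy is application-specific:

try {
    auto conv_pd = dnnl::convolution_forward::primitive_desc(
            conv_d, attr, engine);
} catch (dnnl::error &e) {
    if (e.status == dnnl_unimplemented) {
        // No implementation supports these attributes. Fall back, e.g.,
        // by creating the primitive without post-ops and applying the
        // remaining operations as separate primitives.
    } else
        throw;
}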
Note
Post-ops do not change the memory format of the operation destination memory object.

The post-op object can be inspected using the dnnl::post_ops::kind() function that takes an index of the post-op (which must be less than the value returned by dnnl::post_ops::len()), and returns its kind.
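For example, one can iterate over the attached post-ops as follows (a small sketch; po is a post-ops object as in the skeleton above):

for (int idx = 0; idx < po.len(); ++idx) {
    switch (po.kind(idx)) {
        case dnnl::primitive::kind::sum: /* handle sum post-op */ break;
        case dnnl::primitive::kind::eltwise: /* handle eltwise post-op */ break;
        default: break; // other kinds, e.g., convolution for depthwise
    }
}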

Supported Post-ops

Eltwise Post-op

The eltwise post-op enables fusing a primitive with an Eltwise primitive. This is probably one of the most popular kinds of fusion: an eltwise (typically an activation function) with preceding convolution or inner product.

The dnnl::primitive::kind of this post-op is dnnl::primitive::kind::eltwise.

API:

The parameters (C++ API for simplicity):

void dnnl::post_ops::append_eltwise(
        float scale, // scaling factor (described below)
        algorithm alg, float alpha, float beta // same as in eltwise primitive
        );

The alg, alpha, and beta parameters are the same as in Eltwise.

The eltwise post-op replaces:

\[ \dst[:] = \operatorname{Op}(...) \]

with

\[ \dst[:] = scale \cdot \operatorname{eltwise}( \operatorname{Op}(...) ) \]

The intermediate result of \(\operatorname{Op}(...)\) is not preserved. Hence, in most cases this kind of fusion cannot be used during training.

The \(scale\) factor is supported in INT8 inference only. For all other cases the scale must be 1.0.
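For example, appending a ReLU post-op with a scale of 1.0 (a minimal sketch; see also the chained examples below):

dnnl::post_ops po;
po.append_eltwise(
        /* scale = */ 1.f, // must be 1.0 outside INT8 inference
        /* alg kind = */ dnnl::algorithm::eltwise_relu,
        /* alpha (neg slope) = */ 0.f,
        /* beta (unused for relu) = */ 0.f);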

Sum Post-op

The sum post-op accumulates the result of a primitive with the existing data. Prior to accumulating the result, the existing value would be multiplied by scale.

The kind of this post-op is dnnl::primitive::kind::sum.

This feature might improve performance for cases like residual learning blocks, where the result of a convolution is accumulated to the previously computed activations. The scale parameter can be used in INT8 inference only (see Primitive Attributes: Quantization), when the result and the previous activations have different magnitudes. For all other cases the scale must be 1.0.

The sum post-op replaces

\[ \dst[:] = \operatorname{Op}(...) \]

with

\[ \dst[:] = scale \cdot \dst[:] + \operatorname{Op}(...) \]

If the data type parameter is specified, the original destination tensor will be reinterpreted as a tensor with the provided data type. Because it is a reinterpretation, data_type and the destination data type must have the same size. As a result, the computation will be:

\[ \dst[:] = scale \cdot \operatorname{as\_data\_type}(\dst[:]) + \operatorname{Op}(...) \]

Note
  • Currently only a u8/s8 data type parameter is supported.
  • CPU:
    • No support for different destination and sum data types.
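For example, if the destination memory was created with the u8 data type but the accumulation should reinterpret the existing values as s8, one could append (a sketch):

po.append_sum(
        /* scale = */ 1.f,
        /* data type = */ dnnl::memory::data_type::s8);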

Depthwise Post-op

Appends a depthwise convolution as a post-op. This post-op can be fused only with a 1x1 convolution. Models such as MobileNet_v1 use stacks of separable convolutions (a depthwise convolution followed by a 1x1 convolution); in such a stack, each 1x1 convolution is followed by the next depthwise convolution, which provides an opportunity to fuse the 1x1 convolution with the bandwidth-limited depthwise convolution.

The dnnl::primitive::kind of this post-op is dnnl::primitive::kind::convolution.

There are two variants of this post-op: dw_k3s1p1 for stride 1 and dw_k3s2p1 for stride 2.

API:
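The appender for the stride-1 variant is declared as follows (the stride-2 variant, dnnl::post_ops::append_dw_k3s2p1, takes the same parameters):

void dnnl::post_ops::append_dw_k3s1p1(
        memory::data_type weights_data_type,
        memory::data_type bias_data_type,
        memory::data_type dst_data_type,
        int mask,
        const std::vector<float> &scales);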

For better readability, below we assume a 2D convolution and use the following notation:

conv_1x1: convolution with weights spatial dimensions equal to 1, i.e., kh = kw = 1.

conv_dw: depthwise convolution with weights spatial dimensions equal to 3, i.e., kh = kw = 3, g = oc = ic, and pad_l = pad_r = {1, 1}.

The Depthwise post-op replaces

\[ dst[:] = Conv_{1x1}(...) \]

with

\[ dst[:] = Conv_{dw}(Conv_{1x1}(...)) \]

The final output dimensions after the depthwise post-op are defined as

\[ dst_{conv_{dw}} = \{ n, oc_{1x1}, \operatorname{ceil}(oh_{conv_{1x1}}/stride), \operatorname{ceil}(ow_{conv_{1x1}}/stride) \} \]

where oh_conv_1x1 and ow_conv_1x1 are the height and width of the conv_1x1 destination.
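For example, with a hypothetical conv_1x1 destination of \(\{1, 64, 56, 56\}\) and the stride-2 variant, the fused destination shape would be \(\{1, 64, \operatorname{ceil}(56/2), \operatorname{ceil}(56/2)\} = \{1, 64, 28, 28\}\).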

Supported data types

| conv 1x1 output data type | depthwise post-op output data type | depthwise post-op weights data type | depthwise post-op bias data type |
|---------------------------|------------------------------------|-------------------------------------|----------------------------------|
| u8, s8                    | u8, s8, s32, f32                   | s8                                  | f32, s32                         |
| f32                       | f32                                | f32                                 | f32                              |
| bf16                      | bf16, f32                          | bf16                                | f32, bf16                        |
Note
  • Currently only supported for 2D 1x1 convolution.
  • Only an eltwise post-op can be part of the post-op chain (i.e., the sum post-op is not supported).
  • The dst_1x1, wei_dw, and dst_dw memory descriptors are assumed to be created with dnnl_format_tag_any.

Examples of Chained Post-ops

Different post-ops can be chained together by appending one after another. Note that the order matters: the post-ops are executed in the order they have been appended.

Let's consider some examples.

Sum -> ReLU

This pattern is pretty common for the CNN topologies of the ResNet family.

dnnl::post_ops po;
po.append_sum(
        /* scale = */ 1.f);
po.append_eltwise(
        /* scale = */ 1.f,
        /* alg kind = */ dnnl::algorithm::eltwise_relu,
        /* neg slope = */ 0.f,
        /* unused for relu */ 0.f);

dnnl::primitive_attr attr;
attr.set_post_ops(po);

auto conv_pd = convolution_forward::primitive_desc(conv_d, attr, engine);

This will lead to the following primitive behavior:

\[ \dst[:] = \operatorname{ReLU}(\dst[:] + \operatorname{conv}(\src[:], \weights[:])) \]

Tanh -> Sum -> ScaleShift

This is a hypothetical example that illustrates the sequence of operations applied. We also set all the scales to values other than 1.0 and use dnnl::primitive_attr::set_output_scales which will be covered in Primitive Attributes: Quantization. Unfortunately (or fortunately) the sequence is not supported by the library and is merely used to illustrate the semantics of post-ops.

dnnl::post_ops po;
po.append_eltwise(
        /* scale = */ s_tanh,
        /* alg kind = */ dnnl::algorithm::eltwise_tanh,
        /* unused for tanh */ 0.f,
        /* unused for tanh */ 0.f);
po.append_sum(
        /* scale = */ s_sum);
po.append_eltwise(
        /* scale = */ s_linear,
        /* alg kind = */ dnnl::algorithm::eltwise_linear,
        /* scale = */ alpha,
        /* shift = */ beta);

dnnl::primitive_attr attr;
attr.set_output_scales(0, {s_conv});
attr.set_post_ops(po);

auto conv_pd = convolution_forward::primitive_desc(conv_d, attr, engine);

This will lead to the following primitive behavior (for better readability the tensors are designated by their names only; i.e., [:] is omitted):

\[ \dst = s_{linear} \cdot ( \alpha \cdot ( s_{sum} \cdot \dst + s_{tanh} \cdot \tanh ( s_{conv} \cdot \operatorname{conv}(\src, \weights) ) ) + \beta ) \]

Relu -> Depthwise -> Relu

An example of fusing depthwise convolution with 1x1 convolution in MobileNet.

dnnl::post_ops po;
po.append_eltwise(
        /* scale = */ 1.f,
        /* alg kind = */ dnnl::algorithm::eltwise_relu,
        /* neg slope = */ 0.f,
        /* unused for relu */ 0.f);
po.append_dw_k3s1p1( /* or po.append_dw_k3s2p1 for depthwise with stride=2 */
        /* depthwise weights data type = */ dnnl::memory::data_type::s8,
        /* depthwise bias data type (undef implies no bias) = */ dnnl::memory::data_type::undef,
        /* depthwise destination data type = */ dnnl::memory::data_type::u8,
        /* mask for output scales of depthwise output = */ mask,
        /* output scales for depthwise output = */ scales_depthwise);
po.append_eltwise(
        /* scale = */ 1.f,
        /* alg kind = */ dnnl::algorithm::eltwise_relu,
        /* neg slope = */ 0.f,
        /* unused for relu */ 0.f);

dnnl::primitive_attr attr;
attr.set_output_scales(0, {output_scales_1x1_conv});
attr.set_post_ops(po);

auto cpd = convolution_forward::primitive_desc(conv_1x1, attr, engine);
auto dw_weight_md = cpd.query_md(query::exec_arg_md,
        DNNL_ARG_ATTR_POST_OP_DW | DNNL_ARG_WEIGHTS);
auto dw_bias_md = cpd.query_md(query::exec_arg_md,
        DNNL_ARG_ATTR_POST_OP_DW | DNNL_ARG_BIAS);

This will lead to the following primitive behavior:

\[ dst = ReLU_{depthwise} ( scales_{depthwise} \cdot ( conv_{depthwise} ( ReLU_{1x1} ( scales_{conv_{1x1}} \cdot ( conv_{1x1}() ) ) ) ) ) \]
