DreamNet: A Deep Riemannian Manifold Network for SPD Matrix Learning

Rui Wang, Xiao-Jun Wu, Ziheng Chen, Tianyang Xu, Josef Kittler; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3241-3257


Methods for symmetric positive definite (SPD) matrix learning have attracted considerable attention in many pattern recognition tasks, as they can capture and learn informative statistical features while respecting the Riemannian geometry of the SPD manifold on which the data reside. Aided by advances in deep learning, several Riemannian networks (RiemNets) for nonlinear processing of SPD matrices have recently been studied. However, it is pertinent to ask whether greater accuracy gains can be realized simply by increasing the depth of RiemNets. The answer appears to be negative, as deeper RiemNets tend to be difficult to train. To explore a possible solution to this issue, we propose a new architecture for SPD matrix learning. Specifically, to enrich the deep representations, we build a stacked Riemannian autoencoder (SRAE) on the tail of the backbone network, i.e., SPDNet [1]. With this design, the associated reconstruction error term encourages the embedding functions of the SRAE and of each constituent RAE to approach an identity mapping, which helps to prevent the degradation of statistical information. We then insert several residual-like blocks, realized with shortcut connections, to augment the representational capacity of the SRAE and to simplify the training of a deeper network. The experimental evidence demonstrates that our DreamNet achieves improved accuracy with increased depth.
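The ingredients named above — SPDNet-style bilinear layers, eigenvalue rectification, an autoencoder with a reconstruction-driven near-identity mapping, and a residual shortcut — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the helper names (`bimap`, `reeig`, `rae_block`), the plain matrix-sum shortcut, and the choice of weights are assumptions made for the example.

```python
import numpy as np

def bimap(X, W):
    # SPDNet-style bilinear mapping W X W^T, which sends an SPD matrix
    # to another SPD matrix when W has full row rank.
    return W @ X @ W.T

def reeig(X, eps=1e-4):
    # Eigenvalue rectification (ReEig): clamp small eigenvalues so the
    # output stays safely positive definite.
    vals, vecs = np.linalg.eigh(X)
    return (vecs * np.maximum(vals, eps)) @ vecs.T

def rae_block(X, W_enc, W_dec):
    # One residual-like Riemannian autoencoder block (hypothetical sketch):
    # encode, rectify, decode, then add a shortcut connection so the block
    # can stay close to an identity mapping. A sum of SPD matrices is SPD,
    # so the output remains on the manifold. In training, a reconstruction
    # term such as ||X - X_rec||_F^2 would pull the mapping toward identity.
    Z = reeig(bimap(X, W_enc))      # encoding
    X_rec = reeig(bimap(Z, W_dec))  # decoding / reconstruction
    return X + X_rec                # shortcut connection

# Toy usage with an orthogonal (Stiefel-manifold-like) weight matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
X = A @ A.T + 8 * np.eye(8)                       # an SPD input
W = np.linalg.qr(rng.standard_normal((8, 8)))[0]  # orthogonal weights
Y = rae_block(X, W_enc=W, W_dec=W.T)              # output is again SPD
```

In the full network, several such blocks would be stacked on the tail of the SPDNet backbone, with the reconstruction losses summed across blocks.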

Related Material

@InProceedings{Wang_2022_ACCV,
  author    = {Wang, Rui and Wu, Xiao-Jun and Chen, Ziheng and Xu, Tianyang and Kittler, Josef},
  title     = {DreamNet: A Deep Riemannian Manifold Network for SPD Matrix Learning},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {3241-3257}
}