Nonlinear Regression via Deep Negative Correlation Learning

Le Zhang, Zenglin Shi, Ming-Ming Cheng, Yun Liu, Jia-Wang Bian, Joey Tianyi Zhou, Guoyan Zheng, Zeng Zeng

Fig. 1: Decision surfaces on an artificial spirals classification dataset for (i) conventional ensemble learning and (ii) negative correlation learning (NCL).

Abstract

Nonlinear regression has been extensively employed in many computer vision problems (e.g., crowd counting, age estimation, affective computing). Under the umbrella of deep learning, two common solutions exist: i) transforming nonlinear regression into a robust loss function that is jointly optimizable with the deep convolutional network, and ii) utilizing an ensemble of deep networks. Although both achieve some improvement, the former is limited by its reliance on a single hypothesis and the latter suffers from much larger computational complexity. To cope with these issues, we propose to regress in an efficient “divide and conquer” manner. The core of our approach is a generalization of negative correlation learning, which has been shown, both theoretically and empirically, to work well for non-deep regression problems. Without extra parameters, the proposed method controls the bias-variance-covariance trade-off systematically and usually yields a deep regression ensemble in which each base model is both “accurate” and “diversified.” Moreover, we show that each sub-problem in the proposed method has lower Rademacher complexity and is thus easier to optimize. Extensive experiments on several diverse and challenging tasks, including crowd counting, personality analysis, age estimation, and image super-resolution, demonstrate the superiority of the proposed method over challenging baselines as well as its versatility.
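
For concreteness, the classic negative correlation learning objective that DNCL generalizes (Liu and Yao's NCL, for an ensemble of M regressors f_1, …, f_M) can be written as follows; λ is the accuracy-diversity trade-off, and the deep formulation in the paper may add task-specific terms:

e_k = \frac{1}{2}\bigl(f_k(x) - y\bigr)^2 + \lambda\,\bigl(f_k(x) - \bar{f}(x)\bigr)\sum_{j \neq k}\bigl(f_j(x) - \bar{f}(x)\bigr), \qquad \bar{f}(x) = \frac{1}{M}\sum_{m=1}^{M} f_m(x)

Since \sum_j \bigl(f_j(x) - \bar{f}(x)\bigr) = 0, the penalty equals -\lambda\bigl(f_k(x) - \bar{f}(x)\bigr)^2, so λ = 0 recovers independent training of the base models, while larger λ pushes each base model away from the ensemble mean, trading individual error against covariance.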

Paper

  • Nonlinear Regression via Deep Negative Correlation Learning, Le Zhang, Zenglin Shi, Ming-Ming Cheng, Yun Liu, Jia-Wang Bian, Joey Tianyi Zhou, Guoyan Zheng, Zeng Zeng, IEEE TPAMI, 2020. [pdf|code|project|bib]
  • Crowd Counting with Deep Negative Correlation Learning, Z Shi, L Zhang, Y Liu, X Cao, Y Ye, MM Cheng, G Zheng, IEEE CVPR, 2018. [pdf|bib|code]
@article{zhang2020dncl,
    author={Le Zhang and Zenglin Shi and Ming-Ming Cheng and Yun Liu and Jia-Wang Bian and Joey Tianyi Zhou and Guoyan Zheng and Zeng Zeng},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    title={Nonlinear Regression via Deep Negative Correlation Learning},
    year={2020},
    pages={1-16},
    doi={10.1109/TPAMI.2019.2943860},
}

Method

Fig. 2: Our DNCL regression is formulated as ensemble learning with the same number of parameters as a single CNN. DNCL processes the input with a stack of typical convolutional and pooling layers. A “divide and conquer” strategy is then adopted to learn a pool of regressors, each regressing the output on top of a convolutional feature map from the top layers. Each regressor is jointly optimized with the CNN by an amended cost function that penalizes correlations with the others, making a better trade-off among the bias, variance, and covariance of the ensemble.
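
To make the amended cost function concrete, here is a minimal PyTorch sketch of an NCL-style ensemble loss. It follows the classic formulation quoted above rather than the released code; the name ncl_loss and the default lam value are illustrative assumptions.

import torch

def ncl_loss(preds, target, lam=0.5):
    """NCL-style loss for an ensemble of deep regressors.

    preds:  (M, B, ...) tensor holding the M base-model predictions.
    target: (B, ...) tensor holding the regression target.
    lam:    accuracy/diversity trade-off; lam = 0 trains the base
            models independently (illustrative default, not from the paper).
    """
    mean_pred = preds.mean(dim=0, keepdim=True)        # ensemble output f_bar
    mse = 0.5 * (preds - target.unsqueeze(0)).pow(2)   # per-model squared error
    # NCL penalty: (f_k - f_bar) * sum_{j != k}(f_j - f_bar) = -(f_k - f_bar)^2,
    # which penalizes each base model's correlation with the rest of the ensemble.
    penalty = -(preds - mean_pred).pow(2)
    return (mse + lam * penalty).mean()

Because every base regressor predicts from its own slice of the shared top-layer feature maps, the ensemble keeps the parameter count of a single CNN; only the loss changes.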

Applications

DNCL has been found useful in a variety of computer vision applications we have tried so far. If you find it useful in your application and want to share it with others, please contact us and we will add a link on this project page.

Application 1: Crowd Counting

Fig. 3: Visualization of the diversity among all 64 base models. The input image and the ground-truth number of people are shown in (a-b). The density maps predicted by a conventional ensemble and by NCL are shown in (c) and (d), respectively. The pairwise Euclidean distances between the predictions of individual base models are shown for the conventional ensemble (e) and DNCL (f). The proposed DNCL method leads to much more diversified base models, which yield better overall performance.
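
The diversity matrices in (e-f) can be reproduced in a few lines; below is a sketch assuming each base model outputs one predicted density map (the function name is ours):

import torch

def pairwise_prediction_distances(preds):
    """Pairwise Euclidean distances between base-model predictions.

    preds: (M, H, W) tensor, one predicted density map per base model.
    Returns an (M, M) matrix; larger off-diagonal entries indicate
    more diverse base models, as in Fig. 3 (e-f).
    """
    flat = preds.flatten(start_dim=1)       # (M, H*W)
    return torch.cdist(flat, flat, p=2.0)   # (M, M)
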
Tab. 1: Comparison of different methods on the UCF_CC_50 dataset.

Application 2: Personality Analysis

Tab. 2: Personality prediction benchmarks using the mean accuracy A and coefficient of determination R² scores. The results of the first 6 methods are reproduced from [3] and [42].

Application 3: Age Estimation

Tab. 3: Results of different age estimation methods on the MORPH [95] and FG-NET [96] datasets.

Application 4: Image Super-resolution

Fig. 4: Visual comparison of 4× super-resolution results from different methods.
Tab. 4: Average PSNR/SSIM/IFC scores for image super-resolution at scale factors ×2, ×3 and ×4 on the Set5, Set14, BSD100 and Urban100 datasets.
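
For reference, PSNR (the first metric in Tab. 4) is a fixed function of the mean squared error between the super-resolved and ground-truth images; a minimal sketch for images scaled to [0, 1]:

import torch

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio in dB between super-resolved (sr)
    and ground-truth (hr) images, assumed to lie in [0, max_val]."""
    mse = (sr - hr).pow(2).mean()
    return 10.0 * torch.log10(max_val ** 2 / mse)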