Leveraging maximum entropy and correlation on latent factors for learning representations

  • Zhicheng He
  • Jie Liu*
  • Kai Dang
  • Fuzhen Zhuang
  • Yalou Huang
  • *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Many tasks involve learning representations from matrices, and Non-negative Matrix Factorization (NMF) has been widely used due to its excellent interpretability. Through factorization, sample vectors are reconstructed as additive combinations of latent factors, which are represented as non-negative distributions over the raw input features. NMF models are significantly affected by the distribution characteristics of the latent factors and the correlations among them, and they face the challenge of learning robust latent factors. To this end, we propose to learn representations with an awareness of the semantic quality evaluated from the aspects of intra- and inter-factors. On the one hand, a Maximum Entropy-based function is devised for the intra-factor semantic quality. On the other hand, the semantic uniqueness is evaluated via inter-factor correlation, which reinforces the aim of semantic compactness. Moreover, we present a novel non-linear NMF framework. The learning algorithm is presented, and its convergence is theoretically analyzed and proved. Extensive experimental results on multiple datasets demonstrate that our method can be successfully applied to representative NMF models and boosts performance over state-of-the-art models.
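To make the setting concrete, the following is a minimal sketch of plain NMF via multiplicative updates, together with illustrative diagnostics for the two properties the abstract discusses: the entropy of each latent factor's distribution over features (intra-factor) and the pairwise correlation between factors (inter-factor). This is not the paper's objective function; the function names and the use of simple Lee–Seung updates are assumptions for illustration only.

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Factorize non-negative X (m x n) as W @ H, with W: m x k, H: k x n.

    Uses standard Lee-Seung multiplicative updates for the squared
    Frobenius loss; the paper's regularized objective is not implemented here.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update factor matrix H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update coefficient matrix W
    return W, H

def factor_entropy(H, eps=1e-9):
    """Shannon entropy of each latent factor (row of H), viewed as a
    distribution over input features -- an intra-factor quality measure."""
    P = H / (H.sum(axis=1, keepdims=True) + eps)
    return -(P * np.log(P + eps)).sum(axis=1)

def factor_correlation(H, eps=1e-9):
    """Pairwise cosine similarity between latent factors -- an
    inter-factor redundancy measure (lower off-diagonal = more unique)."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)
    return Hn @ Hn.T

if __name__ == "__main__":
    X = np.abs(np.random.default_rng(1).random((30, 20)))
    W, H = nmf(X, k=4)
    rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print("relative reconstruction error:", round(rel_err, 3))
    print("factor entropies:", np.round(factor_entropy(H), 2))
```

In a regularized variant such as the one the paper proposes, terms built from quantities like `factor_entropy` and the off-diagonal of `factor_correlation` would be folded into the training objective rather than computed after the fact.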

Original language: English
Pages (from-to): 312-323
Number of pages: 12
Journal: Neural Networks
Volume: 131
State: Published - Nov 2020
Externally published: Yes

Keywords

  • Correlated latent factor learning
  • Maximum entropy
  • Non-negative Matrix Factorization
