TY - GEN
T1 - The linear geometry structure of label matrix for multi-label learning
AU - Chen, Tianzhu
AU - Li, Fenghua
AU - Zhuang, Fuzhen
AU - Guo, Yunchuan
AU - Fang, Liang
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Multi-label learning annotates a data point with the relevant labels. Under the low-rank assumption, many approaches embed the label space into a low-dimensional space to capture label correlation. However, these approaches usually have weak prediction performance because the low-rank assumption is often violated in real-world applications. In this paper, we observe that the linear representation of the row and column vectors of a label matrix does not depend on its rank structure and can capture the linear geometry structure of the label matrix (LGSLM). Inspired by this observation, we propose the LGSLM classifier to improve prediction performance. More specifically, after rearranging the columns of the label matrix in decreasing order of the number of positive labels, we capture the linear representation of the row vectors of the compact region in the label matrix. Moreover, we also capture the linear and sparse representation of the column vectors using the $$L_1$$-norm. Experimental results on five real-world datasets show the superior performance of our approach compared with state-of-the-art methods.
AB - Multi-label learning annotates a data point with the relevant labels. Under the low-rank assumption, many approaches embed the label space into a low-dimensional space to capture label correlation. However, these approaches usually have weak prediction performance because the low-rank assumption is often violated in real-world applications. In this paper, we observe that the linear representation of the row and column vectors of a label matrix does not depend on its rank structure and can capture the linear geometry structure of the label matrix (LGSLM). Inspired by this observation, we propose the LGSLM classifier to improve prediction performance. More specifically, after rearranging the columns of the label matrix in decreasing order of the number of positive labels, we capture the linear representation of the row vectors of the compact region in the label matrix. Moreover, we also capture the linear and sparse representation of the column vectors using the $$L_1$$-norm. Experimental results on five real-world datasets show the superior performance of our approach compared with state-of-the-art methods.
KW - Linear representation
KW - Multi-label learning
KW - Sparse representation
UR - https://www.scopus.com/pages/publications/85091557677
U2 - 10.1007/978-3-030-59051-2_15
DO - 10.1007/978-3-030-59051-2_15
M3 - Conference contribution
AN - SCOPUS:85091557677
SN - 9783030590505
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 229
EP - 244
BT - Database and Expert Systems Applications - 31st International Conference, DEXA 2020, Proceedings
A2 - Hartmann, Sven
A2 - Küng, Josef
A2 - Kotsis, Gabriele
A2 - Khalil, Ismail
A2 - Tjoa, A Min
PB - Springer Science and Business Media Deutschland GmbH
T2 - 31st International Conference on Database and Expert Systems Applications, DEXA 2020
Y2 - 14 September 2020 through 17 September 2020
ER -