TY - GEN
T1 - VTLayout
T2 - 18th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2021
AU - Li, Shoubin
AU - Ma, Xuyan
AU - Pan, Shuaiqun
AU - Hu, Jun
AU - Shi, Lin
AU - Wang, Qing
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Documents often contain complex physical structures, which make the Document Layout Analysis (DLA) task challenging. As a pre-processing step for content extraction, DLA has the potential to capture rich information in historical or scientific documents on a large scale. Although many deep-learning-based methods from computer vision have already achieved excellent performance in detecting Figures in documents, they remain unsatisfactory at recognizing the List, Table, Text, and Title category blocks in DLA. This paper proposes VTLayout, a model fusing the documents’ deep visual, shallow visual, and text features to localize and identify different category blocks. The model comprises two stages, with three feature extractors built in the second stage. In the first stage, the Cascade Mask R-CNN model is applied directly to localize all category blocks of the documents. In the second stage, the deep visual, shallow visual, and text features are extracted and fused to identify the category blocks of the documents. As a result, the classification power for different category blocks is strengthened on top of the existing localization technique. The experimental results show that the identification capability of VTLayout is superior to that of the most advanced DLA methods on the PubLayNet dataset, with an F1 score as high as 0.9599.
AB - Documents often contain complex physical structures, which make the Document Layout Analysis (DLA) task challenging. As a pre-processing step for content extraction, DLA has the potential to capture rich information in historical or scientific documents on a large scale. Although many deep-learning-based methods from computer vision have already achieved excellent performance in detecting Figures in documents, they remain unsatisfactory at recognizing the List, Table, Text, and Title category blocks in DLA. This paper proposes VTLayout, a model fusing the documents’ deep visual, shallow visual, and text features to localize and identify different category blocks. The model comprises two stages, with three feature extractors built in the second stage. In the first stage, the Cascade Mask R-CNN model is applied directly to localize all category blocks of the documents. In the second stage, the deep visual, shallow visual, and text features are extracted and fused to identify the category blocks of the documents. As a result, the classification power for different category blocks is strengthened on top of the existing localization technique. The experimental results show that the identification capability of VTLayout is superior to that of the most advanced DLA methods on the PubLayNet dataset, with an F1 score as high as 0.9599.
KW - Document layout analysis
KW - Fusion of visual and text
KW - PubLayNet
KW - VTLayout
UR - https://www.scopus.com/pages/publications/85118987003
U2 - 10.1007/978-3-030-89188-6_23
DO - 10.1007/978-3-030-89188-6_23
M3 - Conference contribution
AN - SCOPUS:85118987003
SN - 9783030891879
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 308
EP - 322
BT - PRICAI 2021
A2 - Pham, Duc Nghia
A2 - Theeramunkong, Thanaruk
A2 - Governatori, Guido
A2 - Liu, Fenrong
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 November 2021 through 12 November 2021
ER -