TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel

Abstract: Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, and hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.

Cite (ACL): Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413–8426, Online. Association for Computational Linguistics.

Anthology ID: 2020.acl-main.745
Venue: ACL | Month: July | Year: 2020 | Address: Online | Publisher: Association for Computational Linguistics | Pages: 8413–8426
DOI: 10.18653/v1/2020.acl-main.745
Bibkey: yin-etal-2020-tabert
Code: facebookresearch/tabert