§ Browse Thesis Bibliographic Record
System ID U0002-0407202415110900
DOI 10.6846/tku202400452
Title (Chinese) 基於圖神經網路與大語言模型之多輪知識圖譜問答
Title (English) A Multi-turn Knowledge Graph Question Answering system based on Graph Neural Network and Large Language Model
Title (third language)
University Tamkang University
Department (Chinese) 資訊工程學系博士班
Department (English) Department of Computer Science and Information Engineering
Foreign Degree School Name
Foreign Degree College Name
Foreign Degree Institute Name
Academic Year 112
Semester 2
Year of Publication 113
Student (Chinese) 梁原霖
Student (English) Yuan-Lin Liang
Student ID 809416018
Degree Doctoral
Language English
Second Language
Oral Defense Date 2024-06-06
Number of Pages 81
Committee Advisor - 張志勇 (cychang@mail.tku.edu.tw)
Co-advisor - 石貴平 (kpshih@mail.tku.edu.tw)
Committee Member - 廖文華
Committee Member - 蒯思齊
Committee Member - 黃仁俊
Committee Member - 武士戎
Keywords (Chinese) 知識圖譜
多輪問答系統
圖卷積神經網絡
語言模型
語義解析
Keywords (English) Knowledge graph
multi-turn question answering system
graph convolutional neural network
language model
semantic parsing
Keywords (third language)
Subject Classification
Chinese Abstract
多輪知識圖譜問答(MQA)是一項複雜的挑戰,不僅需要對當前查詢進行推理,還需要整合先前互動的上下文。這項任務需要同時理解知識圖譜的結構和語義。目前的模型雖然有效,但在提供多輪推理過程的清晰解釋性洞察方面仍有不足。本研究提出了一個全面的框架,結合了查詢增強、語義解析和先進的語言模型技術,以提高問答過程的效果。
提出的框架由幾個相互關聯的模塊組成,旨在處理問答任務的特定方面。初始模塊運用查詢增強技術,將當前查詢轉換為增強查詢,利用先前對話的上下文線索,從而提高查詢的清晰度和相關性。隨後,第二個模塊從知識圖譜和表格中檢索信息,包括兩種數據模塊:知識圖譜模塊和表格模塊。KGIR採用兩階段模型,結合知識圖譜結構學習和文本到Cypher轉換技術,實現精確的信息檢索。在第一階段,我們的模型採用先進的深度學習技術來學習知識圖譜的複雜結構。這個階段涉及捕捉圖譜中的關係、實體和上下文細節,使模型能夠全面理解底層信息架構。學習到的知識隨後用於創建知識圖譜的豐富表示,為模型的第二階段做準備。第二階段從用戶查詢中抽取關係和實體。表格模塊利用多重深度學習模型框架,通過文本到結構化查詢語言(SQL)的轉換技術查找表格信息。第四個模塊是一個語言模型,它整合處理後的數據和語義上下文,生成連貫且與上下文相關的回應。此模塊不僅回答增強查詢,還評估查詢與數據之間的語義相似性,從而提供雙重功能。在整個過程中,每個步驟都設計為可解釋的,使用戶能夠追蹤從查詢到最終答案的推理路徑。
在既定的多輪問答基準測試中的實證評估表明,所提出的框架達到了高準確度,並在可解釋性方面提供了顯著改進。通過無縫整合圖神經網絡、語義解析和語言建模,我們的系統為處理知識豐富環境中的複雜對話和動態互動設立了新標準。這種方法不僅提高了多輪問答系統的穩健性,還為對話系統和語義搜索應用的未來發展開闢了道路。
English Abstract
Multi-turn question answering (MQA) over knowledge graphs presents a complex challenge that involves not only reasoning over the current query but also integrating the context from previous interactions. This task demands an understanding of both the structure and the semantics of the knowledge graph. Current models, while effective, still fall short of providing clear interpretative insights into the multi-turn reasoning process. This work introduces a comprehensive framework that combines query enhancement, semantic parsing, and advanced language models to improve the efficacy of the question-answering process.
The proposed framework consists of several interconnected modules, each designed to handle a specific aspect of the question-answering task. The initial module employs query enhancement to transform the current query into an enhanced query that leverages contextual cues from previous dialogues, thereby improving the clarity and relevance of the query. Subsequently, the second module retrieves information from the knowledge graph and from tables, and comprises two data modules: a knowledge-graph module and a tabular module. The knowledge-graph module, Knowledge Graph Information Retrieval (KGIR), utilizes a two-stage model that combines graph neural network and Transformer techniques with a Text-to-Cypher conversion technique for precise information retrieval. In the first stage, our model employs advanced deep-learning techniques to learn the intricate structure of the knowledge graph. This stage involves capturing relationships, entities, and contextual nuances within the graph, enabling the model to develop a comprehensive understanding of the underlying information architecture. The learned knowledge is then utilized to create an enriched representation of the knowledge graph, setting the stage for the second phase of the model. The second stage extracts the relationships and entities from the user queries. The tabular module utilizes a framework of multiple deep-learning models to look up tabular information with a Text-to-Structured Query Language (Text-to-SQL) conversion technique. The fourth module, a language model, integrates the processed data with the semantic context to generate coherent and contextually relevant responses. This module not only answers the enhanced query but also evaluates the semantic similarity between the query and the data, thus providing a dual functionality. Throughout the process, each step is designed to be interpretable, allowing users to trace the reasoning path from the query to the final answer.
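As an illustrative aid only, the following minimal Python sketch mirrors the pipeline described above: query enhancement, knowledge-graph retrieval through a generated Cypher query, tabular retrieval through a generated SQL query, and answer generation by a language model. Every name, function body, and query string in it is a hypothetical placeholder; the actual components are the DQE-KSR-AG modules detailed in Chapter 4.

    # Minimal sketch of the multi-turn QA flow described in the abstract.
    # All classes, functions, and query strings are illustrative placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class DialogueState:
        # Previous (question, answer) turns kept for query enhancement.
        history: list = field(default_factory=list)

    def enhance_query(query: str, state: DialogueState) -> str:
        # Query enhancement (placeholder): fold recent dialogue context into the current turn.
        context = " ".join(q for q, _ in state.history[-2:])
        return f"{context} {query}".strip()

    def kg_retrieve(enhanced_query: str) -> list:
        # Knowledge-graph branch (placeholder): a Text-to-Cypher model would emit a query
        # like the one below, which is then run against the graph database.
        cypher = "MATCH (e)-[r]->(v) WHERE e.name = $entity RETURN type(r), v.name"
        return [("cypher", cypher)]

    def table_retrieve(enhanced_query: str) -> list:
        # Tabular branch (placeholder): a Text-to-SQL model would emit a query over the tables.
        sql = "SELECT value FROM facts WHERE subject = :entity"
        return [("sql", sql)]

    def generate_answer(enhanced_query: str, evidence: list) -> str:
        # Answer generation (placeholder): a language model fuses the query with the evidence.
        return f"Answer to '{enhanced_query}' grounded in {len(evidence)} retrieved item(s)."

    def answer_turn(query: str, state: DialogueState) -> str:
        # One multi-turn QA step: enhance, retrieve from both sources, generate, update history.
        enhanced = enhance_query(query, state)
        evidence = kg_retrieve(enhanced) + table_retrieve(enhanced)
        answer = generate_answer(enhanced, evidence)
        state.history.append((query, answer))
        return answer

    state = DialogueState()
    print(answer_turn("Who directed Inception?", state))
    print(answer_turn("When was it released?", state))

Running the two example turns shows the intent of the first module: the follow-up question is answered only after being enhanced with the context of the preceding turn.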
Empirical evaluations on established multi-turn question answering benchmarks demonstrate that the proposed framework achieves high accuracy and offers significant improvements in interpretability. By seamlessly integrating graph neural networks with semantic parsing and language modeling, our system sets a new standard for handling complex dialogues and dynamic interactions in knowledge-rich environments. This approach not only enhances the robustness of multi-turn question answering systems but also opens avenues for future advancements in dialogue systems and semantic search applications.
Third-Language Abstract
Table of Contents
Outline	V
List of Figures	VII
List of Tables	VIII
Chapter 1. Introduction	1
1.1 Background	1
1.2 Research Goals	5
1.3 Organization of the Thesis	6
Chapter 2. Related Work	7
2.1 The information retrieval method	7
2.2 The semantic parsing method	8
2.3 The conversational question-answering	9
2.4 Tabular semantic parsing	10
Chapter 3. Preliminary	14
3.1 Transformer	14
3.2 BERT	17
3.3 BART	19
3.4 LLAMA	19
Chapter 4. Methodology	21
4.1 Assumptions and Problem Statement	22
4.1.1 Knowledge Graph Data Type	22
4.1.2 Tabular data type	25
4.1.3 Objective	28
4.2 The Proposed DQE-KSR-AG Framework	28
4.2.1 Dialogue Topic Segmentation Preprocessing	30
4.2.2 Query Enhancer	32
4.2.3 Knowledge Graph Information Retrieval (KGIR)	35
4.2.4 Structured Query Construction and Retrieval (SQCR)	48
4.2.5 Answer Generation (AG)	56
Chapter 5. Experiment and Analysis	58
5.1 Experiment and Performance Analysis of KGIR	58
5.1.1 Data preparation	58
5.1.2 Performance analysis	59
5.2 Experiment and Performance Analysis of SQCR	64
5.2.1 Data preparation	64
5.2.2 Performance analysis	64
5.3 Performance of the DQE-KSR-AG framework	71
5.3.1 Data preparation	71
5.3.2 Framework evaluation	71
Chapter 6. Conclusion and Future Work	74
Reference	76

Figure 1. 1. The example of the multi-turn question answering with a knowledge graph	4
Figure 3. 1. The structure of the Transformer model	14
Figure 3. 2. The example of BERT tasks during pre-training and fine-tuning phases [9]	18
Figure 3. 3. The structure of the LLAMA	20
Figure 4. 1. The diagram of the proposed DQE-KSR-AG	29
Figure 4. 2. The diagram of Query Enhancer	34
Figure 4. 3. The Diagram of KGIR	37
Figure 4. 4. The Information Extraction (IE) model in detail	38
Figure 4. 5. The Cypher generation model in detail.	43
Figure 4. 6. The diagram of the PCP algorithm.	46
Figure 4. 7. The diagram of the SQCR	49
Figure 4. 8. The diagram of the SWSF-Model.	50
Figure 4. 9. The VE-model in detail.	53
Figure 4. 10. The diagram of the V-model.	55
Figure 4. 11. The diagram of the RAG.	56
Figure 5. 1. The comparison of KGIR and BERT on entities and relationships extraction task with evaluation metrics	61
Figure 5. 2. The comparison of CG-model and baseline seq2seq on Cypher generation task with evaluation metrics.	62
Figure 5. 3. Examples of error cases.	63
Figure 5. 4. Performance of the compared methods and evaluations of the tasks.	65
Figure 5. 5. Performance of the methods and evaluations of the value extraction with top-k values extracted.	67
Figure 5. 6. Performance of the methods and evaluations of the logic form and execution results.	68
Figure 5. 7. Examples of case analysis.	70


Table 2. 1. Summary and comparison of techniques used in related work (KG)	9
Table 2. 2. Summary and comparison of techniques used in related work (Tabular)	12
Table 5. 1. Comparison of evaluation results between models in each task	62
Table 5. 2. Experiment results of the methods on the TableQA dataset.	69
Table 5. 3. The results of the ablation study.	69
Table 5. 4. Evaluation of the DQE-KSR-AG framework and RAG	72



References
[1] 	I. Harrando and R. Troncy. (2022). "Combining semantic and linguistic representations for media recommendation." Multimedia Systems, vol. 28, pp. 2161–2173.
[2] 	L. Dong, F. Wei, M. Zhou, and K. Xu. (2015). "Question answering over freebase with multi-column convolutional neural networks," in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Beijing, China. Association for Computational Linguistics, pp. 260–269. 
[3] 	N. Chakraborty, D. Lukovnikov, G. Maheshwari, P. Trivedi, J. Lehmann, and A. Fischer. (2021). "Introduction to neural network-based question answering over knowledge graphs." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 11.
[4] 	S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao. (2018). "Answering Natural Language Questions by Subgraph Matching over Knowledge Graphs," IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 5, pp. 824-837.
[5] 	Y. Gu, S. Kase, M. Vanni, B. Sadler, P. Liang, X. Yan, and Y. Su. (2021). "Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases." In Proceedings of the Web Conference 2021. Association for Computing Machinery, New York, NY, USA, pp. 3477–3488. 
[6] 	T. N. Kipf, & M. Welling. (2017). "Semi-Supervised Classification with Graph Convolutional Networks." 5th International Conference on Learning Representations, ICLR 2017, Toulon, France.
[7] 	X. Shao, C. Dai and K. Hu. (2022). "Research on Question Answering of Lung Cancer Based on Knowledge Graph," 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA, pp. 3692-3696. 
[8] 	B. Tang, X. Chen, D. Wang and Z. Zhao. (2022). "KAFNN: A Knowledge Augmentation Framework to Graph Neural Networks," 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, pp. 1-8. 
[9] 	J. Devlin, M. W. Chang, K. Lee, and K. Toutanova. (2019). "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proc. Conf. North Amer. Chapter Assoc. Comput. Linguistics, Hum. Lang. Technol., pp. 4171–4186.
[10] 	M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. (2018) "Deep contextualized word representations," in Proc. Conf. North Amer. Chapter Assoc. Comput. Linguistics, Hum. Lang. Technol., pp. 2227–2237. 
[11] 	A. Radford, K. Narasimhan, and T. Salimans. (2018). "Improving Language Understanding by Generative Pre-Training." [Online]. Available: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper. 
[12] 	H. Sun, B. Dhingra, M. Zaheer, K. Mazaitis, R. Salakhutdinov, and W. W. Cohen. (2018). "Open domain question answering using early fusion of knowledge bases and text," in Proc. Conf. Empir. Methods Natural Lang. Process., pp. 4231–4242.
[13] 	J. Zhang, X. Zhang, J. Yu, J. Tang, J. Tang, C. Li, and H. Chen. (2022). "Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering." In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, pp. 5773-5784.
[14] 	Y. Chen, L. Wu, and M.J. Zaki. (2019). "Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases." In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics, pp. 2913–2923. 
[15] 	A. Krizhevsky, I. Sutskever, and G. E. Hinton. (2012). "ImageNet classification with deep convolutional neural networks." Communications of the ACM, vol. 60, pp. 84-90.
[16] 	H. Sun, T. Bedrax-Weiss, and W. W. Cohen. (2019). "PullNet: Open domain question answering with iterative retrieval on knowledge bases and text." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics, pp. 2380–2390.
[17] 	M. Defferrard, X. Bresson, and P. Vandergheynst. (2016). "Convolutional neural networks on graphs with fast localized spectral filtering." In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16). Curran Associates Inc., Red Hook, NY, USA, pp. 3844–3852. 
[18] 	Y. Sun, Q. Shi, L. Qi, and Y. Zhang. (2022). "JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering." In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle, United States. Association for Computational Linguistics, pp. 5049–5060.
[19] 	W. Zheng, J. X. Yu, L. Zou, and H. Cheng. (2018). "Question answering over knowledge graphs: question understanding via template decomposition." Proceedings of the VLDB Endowment, vol. 11, pp. 1373-1386.
[20] 	S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao. (2018). "Answering Natural Language Questions by Subgraph Matching over Knowledge Graphs," IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 5, pp. 824-837.
[21] 	S. Aghaei, S. Masoudi, T. R. Chhetri, and A. Fensel. (2022). "Question answering over knowledge graphs: a graph-driven approach," 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Niagara Falls, ON, Canada, pp. 296-302.
[22] 	B. Yang, W. Yih, X. He, J. Gao, and L. Deng (2014). "Embedding Entities and Relations for Learning and Inference in Knowledge Bases." CoRR, abs/1412.6575. 
[23] 	T. Trouillon, C.R. Dance, É. Gaussier, J. Welbl, S. Riedel, and G. Bouchard. (2017). "Knowledge Graph Completion via Complex Tensor Factorization." J. Mach. Learn. Res., vol. 18, pp. 130:1-130:38. 
[24] 	G. He, Y. Lan, J. Jiang, W. X. Zhao, and J. Wen. (2021). "Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals." Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Israel and Virtual, March 8-12, pp. 553-561.
[25] 	M. A. Hearst. (1997). "TextTiling: segmenting text into multi-paragraph subtopic passages." Comput. Linguist., vol. 23, no. 1, pp. 33–64.
[26] 	Y. Song, L. Mou, R. Yan, L. Yi, Z. Zhu, X. Hu, and M. Zhang. (2016). "Dialogue session segmentation by embedding-enhanced textTiling." Interspeech 2016, pp. 2706–2710. 
[27] 	K. W. Church. (2017). "Word2Vec." Natural Language Engineering, 23(1), 155–162. doi:10.1017/S1351324916000334. 
[28] 	Y. Xu, H. Zhao, and Z. Zhang. (2021). "Topic-aware multi-turn dialogue modeling." Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 16, pp. 14176–14184.
[29] 	I. Mele, C. I. Muntean, F. M. Nardini, R. Perego, N. Tonellotto, and O. Frieder. (2020). “Topic propagation in conversational search,” in Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, J. X. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, and Y. Liu, Eds. ACM, 2020, pp. 2057–2060.  
[30] 	L. Yang, H. Zamani, Y. Zhang, J. Guo, and W. B. Croft, (2017). "Neural matching models for question retrieval and next question prediction in conversation," ArXiv Preprint 1707.05409, 2017. 
[31] 	S. Yu, J. Liu, J. Yang, C. Xiong, P. Bennett, J. Gao, and Z. Liu, (2020) “Few-shot generative conversational query rewriting,” in Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, ser. SIGIR ’20. New York, NY, USA: Association for Computing Machinery, p. 1933–1936. 
[32] 	J. Hao, Y. Liu, X. Fan, S. Gupta, S. Soltan, R. Chada, P. Natarajan, E. Guo, and G. Tur. (2022). "CGF: Constrained generation framework for query rewriting in conversational AI," in EMNLP 2022.
[33] 	S. Vakulenko, S. Longpre, Z. Tu, and R. Anantha. (2021). "Question rewriting for conversational question answering," in Proceedings of the 14th ACM International Conference on Web Search and Data Mining. ACM, pp. 355–363. 
[34] 	H. Su, X. Shen, R. Zhang, F. Sun, P. Hu, C. Niu, and J. Zhou. (2019). “Improving multi-turn dialogue modelling with utterance ReWriter,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 22–31. [Online]. Available: https://aclanthology.org/P19-1003. 
[35] 	J. Gao, C. Xiong, P. Bennett, and N. Craswell, "Neural approaches to conversational information retrieval," CoRR, vol. abs/2201.05176, 2022. [Online]. Available: https://arxiv.org/abs/2201.05176. 
[36] 	D. H. D. Warren and F. C. N. Pereira. (1982). "An efficient easily adaptable system for interpreting natural language queries," Comput. Linguistics, vol. 8, no. 3–4, pp. 110–122. 
[37] 	I. Androutsopoulos, G. Ritchie, and P. Thanisch. (1993). "MASQUE/SQL: An efficient and portable natural language query interface for relational databases," in Proc. 6th Int. Conf. Ind. Eng. Appl. Artif. Intell. Expert Syst., p. 327.
[38] 	A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, I. Polosukhin, (2017). "Attention is all you need," in Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), Curran Associates Inc., Red Hook, NY, USA, 6000–6010. 
[39] 	M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. (2018) "Deep contextualized word representations," in Proc. Conf. North Amer. Chapter Assoc. Comput. Linguistics, Hum. Lang. Technol, pp. 2227–2237. 
[40] 	A. Radford, K. Narasimhan, and T. Salimans. (2018). "Improving Language Understanding by Generative Pre-Training." [Online]. Available: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper. 
[41] 	W. Hwang, J. Yim, S. Park, and M. Seo. (2019). "A comprehensive exploration on WikiSQL with table-aware word contextualization," arXiv:1902.01069. [Online]. Available: http://arxiv.org/abs/1902.01069. 
[42] 	S. Han, N. Gao, X. Guo and Y. Shan. (2022). "RuleSQLova: Improving Text-to-SQL with Logic Rules," 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, pp. 1-8, doi: 10.1109/IJCNN55064.2022.9892938. 
[43] 	P. He, Y. Mao, K. Chakrabarti, and W. Chen. (2019). "X-SQL: representation with context," arXiv:1908.08113. [Online]. Available: https://arxiv.org/abs/1908.08113. 
[44] 	V. Zhong, C. Xiong, and R. Socher. (2017). "Seq2SQL: Generating structured queries from natural language using reinforcement learning," arXiv:1709.00103. [Online]. Available: http://arxiv.org/abs/1709.00103. 
[45] 	L. Dong and M. Lapata. (2016). "Language to logical form with neural attention," in Proc. 54th Annu. Meeting Assoc. Comput. Linguistics, pp. 33–43. 
[46] 	R. Jia and P. Liang. (2016) "Data recombination for neural semantic parsing," in Proc. 54th Annu. Meeting Assoc. Comput. Linguistics, pp. 12–22. 
[47] 	X. Xu, C. Liu, and D. Song. (2017). "SQLNet: Generating structured queries from natural language without reinforcement learning," arXiv:1711.04436. [Online]. Available: https://arxiv.org/abs/1711.04436. 
[48] 	T. Yu, Z. Li, Z. Zhang, R. Zhang, and D. Radev. (2018). "TypeSQL: Knowledge-based type-aware neural Text-to-SQL generation," in Proc. Conf. North Amer. Chapter Assoc. Comput. Linguistics, Hum. Lang. Technol, pp. 588–594. 
[49] 	L. Dong and M. Lapata. (2018). "Coarse-to-fine decoding for neural semantic parsing," in Proc. 56th Annu. Meeting Assoc. Comput. Linguistics, pp. 731–742. 
[50] 	B. McCann, N. Shirish Keskar, C. Xiong, and R. Socher. (2018). "The natural language decathlon: Multi-task learning as question answering," arXiv:1806.08730. [Online]. Available: http://arxiv.org/abs/1806.08730. 
[51] 	C. Wang, K. Tatwawadi, M. Brockschmidt, P.-S. Huang, Y. Mao, O. Polozov, and R. Singh. (2018). "Robust Text-to-SQL generation with execution-guided decoding," arXiv:1807.03100. [Online]. Available: http://arxiv.org/abs/1807.03100. 
[52] 	T. Shi, K. Tatwawadi, K. Chakrabarti, Y. Mao, O. Polozov, and W. Chen. (2018). "IncSQL: Training incremental Text-to-SQL parsers with non-deterministic oracles," arXiv:1809.05054. [Online]. Available: http://arxiv.org/abs/1809.05054. 
[53] 	T. Yu, R. Zhang, M. Yasunaga, Y. C. Tan, X. V. Lin, S. Li, H. Er, I. Li, B. Pang, T. Chen, E. Ji, S. Dixit, D. Proctor, S. Shim, J. Kraft, V. Zhang, C. Xiong, R. Socher, and D. Radev. (2019). "SParC: Cross-Domain Semantic Parsing in Context," In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 4511–4523.
[54] 	K. Xu, Y. Wang, Y. Wang, Z. Wang, Z. Wen, and Y. Dong. (2022). "SeaD: End-to-end Text-to-SQL Generation with Schema-aware Denoising". In Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, United States, pp. 1845–1853. 
[55] 	M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. (2020). "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871–7880, Online. 
[56] 	H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. (2023). "LLAMA: Open and Efficient Foundation Language Models." ArXiv, abs/2302.13971.
[57] 	H. Touvron, L. Martin, K.R. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D.M. Bikel, L. Blecher, C.C. Ferrer, M. Chen, G. Cucurull,  D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A.S. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I.M. Kloumann, A.V. Korenev, P.S. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E.M. Smith, R. Subramanian, X. Tan, B. Tang, R. Taylor, A. Williams, J.X. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, & T. Scialom. (2023). "LLAMA 2: Open Foundation and Fine-Tuned Chat Models." ArXiv, abs/2307.09288. 
[58]	A. Guo, X. Li, G. Xiao, Z. Tan, and X. Zhao. (2022). "SpCQL: A Semantic Parsing Dataset for Converting Natural Language into Cypher. " In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (CIKM '22). Association for Computing Machinery, New York, NY, USA, pp. 3973–3977.
[59]	T. Yu, R. Zhang, K. Yang, M. Yasunaga, D. Wang, Z. Li, J. Ma, I. Li, Q. Yao, S. Roman, Z. Zhang, and D. Radev. (2018). "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 3911–3921. 
[60]	N. Sun, X. Yang, and Y. Liu. (2020). "TableQA: a Large-Scale Chinese Text-to-SQL Dataset for Table-Aware SQL Generation." ArXiv, abs/2006.06434.
[61]	K. Wang, W. Shen, Y. Yang, X. Quan, and R. Wang. (2020). "Relational Graph Attention Network for Aspect-based Sentiment Analysis." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3229–3238.
Thesis Full-Text Use Authorization
National Central Library
Does not agree to grant a royalty-free license to the National Central Library
On campus
Printed thesis available on campus immediately
Electronic full text: authorization not granted
Bibliographic record available on campus immediately
Off campus
Authorization to database vendors not granted
Bibliographic record available off campus immediately
