Retrieval-augmented generation

From Wikipedia, the free encyclopedia

Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information.[1] With RAG, LLMs retrieve relevant information from a specified set of documents before responding to user queries. These documents supplement information from the LLM's pre-existing training data.[2] This allows LLMs to use domain-specific or updated information that is not available in the training data.[2][3] For example, this helps LLM-based chatbots access internal company data or generate responses based on authoritative sources.

RAG improves large language models (LLMs) by incorporating information retrieval before generating responses.[4] Unlike traditional LLMs that rely on static training data, RAG pulls relevant text from databases, uploaded documents, or web sources.[1] According to Ars Technica, "RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process to help LLMs stick to the facts."[6] This method helps reduce AI hallucinations,[4][5] which have caused chatbots to describe policies that do not exist, or to recommend nonexistent legal cases to lawyers who are looking for citations to support their arguments.[6]

RAG also reduces the need to retrain LLMs with new data, saving on computational and financial costs.[1][7] Beyond efficiency gains, RAG allows LLMs to include sources in their responses, providing greater transparency: users can cross-check the cited content to verify its accuracy and relevance.

The term RAG was first introduced in a 2020 research paper[4] from Meta.[8][3]

RAG and LLM limitations


LLMs can provide incorrect information. For example, when Google first demonstrated its LLM tool "Google Bard", the LLM provided incorrect information about the James Webb Space Telescope. This error contributed to a $100 billion decline in the company's stock value.[6] RAG is used to prevent such errors, but it does not solve every problem. For example, LLMs can generate misinformation even when pulling from factually correct sources if they misinterpret the context.[9] MIT Technology Review gives the example of an AI-generated response stating, "The United States has had one Muslim president, Barack Hussein Obama." The model retrieved this from an academic book rhetorically titled Barack Hussein Obama: America's First Muslim President? The LLM did not "know" or "understand" the context of the title, and so generated a false statement.[2]

LLMs with RAG are programmed to prioritize new information, a technique that has been called "prompt stuffing." Without prompt stuffing, the LLM's input comes only from the user; with prompt stuffing, additional relevant context is added to this input to guide the model's response. This approach provides the LLM with key information early in the prompt, encouraging it to prioritize the supplied data over pre-existing training knowledge.[10]
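As an illustration, the following is a minimal sketch of prompt stuffing in Python. The template wording, function name, and example strings are assumptions made for this sketch, not part of any particular RAG framework.

```python
def stuff_prompt(user_query: str, retrieved_chunks: list[str]) -> str:
    """Prepend retrieved context to the user's query ("prompt stuffing").

    Placing the retrieved text early in the prompt encourages the model
    to ground its answer in that text rather than in training data alone.
    """
    context = "\n\n".join(retrieved_chunks)
    return (
        "Use only the context below to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}\nAnswer:"
    )

print(stuff_prompt("When was the company founded?",
                   ["Acme Corp was founded in 1999 in Austin, Texas."]))
```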

Process


Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating an information-retrieval mechanism that allows models to access and utilize additional data beyond their original training set. AWS states, "RAG allows LLMs to retrieve relevant information from external data sources to generate more accurate and contextually relevant responses" ("indexing").[11] This approach reduces reliance on static datasets, which can quickly become outdated. When a user submits a query, RAG uses a document retriever to search for relevant content from available sources before incorporating the retrieved information into the model's response ("retrieval").[12] Ars Technica notes that "when new information becomes available, rather than having to retrain the model, all that's needed is to augment the model's external knowledge base with the updated information" ("augmentation").[6] By dynamically integrating relevant data, RAG enables LLMs to generate more informed and contextually grounded responses ("generation").[5] IBM states that "in the generative phase, the LLM draws from the augmented prompt and its internal representation of its training data to synthesize an engaging answer tailored to the user in that instant."[1]
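The four stages can be illustrated with a self-contained toy pipeline. The bag-of-words "embedding" below is a deliberately crude stand-in for a neural encoder, and the final LLM call is left as a printed prompt; everything here is an illustrative assumption rather than a production design.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use a neural encoder."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Indexing: embed every document once, ahead of query time.
docs = ["The warranty covers parts for two years.",
        "Returns are accepted within 30 days of purchase."]
index = [(embed(d), d) for d in docs]

# Retrieval: pick the document most similar to the query.
query = "How long is the warranty?"
qv = embed(query)
best = max(index, key=lambda pair: cosine(qv, pair[0]))[1]

# Augmentation: splice the retrieved text into the prompt.
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"

# Generation: a real system would now send `prompt` to an LLM.
print(prompt)
```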

RAG key stages


Indexing


Typically, the data to be referenced is converted into LLM embeddings, numerical representations in the form of a large vector space.[9] RAG can be used on unstructured (usually text), semi-structured, or structured data (for example, knowledge graphs).[13] These embeddings are then stored in a vector database to allow for document retrieval.[14]

Overview of RAG process, combining external documents and user input into an LLM prompt to get tailored output
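A minimal sketch of the indexing stage, using NumPy as an in-memory vector store. The hash-seeded random vectors stand in for a real embedding model and carry no semantic meaning; they are assumptions made so the example runs without external services.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic random vector; a stand-in for a real embedding model."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)   # unit length, so dot product = cosine

chunks = ["RAG combines retrieval with generation.",
          "Embeddings are stored in a shared vector space."]
vector_store = np.stack([embed(c) for c in chunks])   # shape: (num_chunks, dim)
print(vector_store.shape)
```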

Retrieval


Given a user query, a document retriever is first called to select the most relevant documents that will be used to augment the query.[2][4] This comparison can be done using a variety of methods, which depend in part on the type of indexing used.[1][13]
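A sketch of the comparison step for dense embeddings, assuming unit-length vectors so that the dot product equals cosine similarity. The toy store and query are illustrative.

```python
import numpy as np

def retrieve(query_vec: np.ndarray, vector_store: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored vectors most similar to the query."""
    scores = vector_store @ query_vec      # cosine similarity for unit vectors
    return np.argsort(scores)[::-1][:k]   # best-scoring indices first

store = np.eye(4)                          # four toy unit vectors
query = np.array([0.9, 0.1, 0.0, 0.0])
query /= np.linalg.norm(query)
print(retrieve(query, store, k=2))         # -> [0 1]
```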

Augmentation


The model feeds this relevant retrieved information into the LLM via prompt engineering of the user's original query.[11][15] Newer implementations (as of 2023) can also incorporate specific augmentation modules with abilities such as expanding queries into multiple domains and using memory and self-improvement to learn from previous retrievals.[13]
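A sketch of one such augmentation module: expanding a query into several variants and merging the retrieval results. The hard-coded expansion rules and toy corpus are assumptions; real systems often ask an LLM to generate the variants.

```python
def expand_query(query: str) -> list[str]:
    """Produce simple variants of the query to widen retrieval coverage."""
    return [query, f"definition of {query}", f"{query} examples"]

def retrieve_many(queries: list[str], retrieve_fn, k: int = 3) -> list[str]:
    """Run retrieval per variant and merge results, dropping duplicates."""
    seen, merged = set(), []
    for q in queries:
        for doc in retrieve_fn(q, k):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

corpus = {"retrieval augmented generation": ["doc-A", "doc-B"],
          "definition of retrieval augmented generation": ["doc-C", "doc-A"]}
print(retrieve_many(expand_query("retrieval augmented generation"),
                    lambda q, k: corpus.get(q, [])[:k]))   # -> doc-A, doc-B, doc-C
```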

Generation


Finally, the LLM can generate output based on both the query and the retrieved documents.[2][16] Some models incorporate extra steps to improve output, such as the re-ranking of retrieved information, context selection, and fine-tuning.[13]
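A sketch of two of those extra steps, re-ranking followed by context selection under a length budget. The scoring function is a placeholder for a cross-encoder or other reranker, and the character budget is an illustrative assumption.

```python
def select_context(passages: list[str], score_fn, budget_chars: int = 500) -> list[str]:
    """Re-rank passages, then keep the best ones until the budget is spent."""
    reranked = sorted(passages, key=score_fn, reverse=True)
    chosen, used = [], 0
    for p in reranked:
        if used + len(p) > budget_chars:
            break
        chosen.append(p)
        used += len(p)
    return chosen

passages = ["Short but highly relevant passage.",
            "A long, loosely related passage " * 10,
            "Another relevant snippet."]
# Placeholder scorer that favors short passages; real systems use a reranker model.
print(select_context(passages, score_fn=lambda p: 1.0 / len(p), budget_chars=100))
```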

Improvements


Improvements to the basic process above can be applied at different stages in the RAG flow.

Encoder


These methods focus on the encoding of text as either dense or sparse vectors. Sparse vectors, which encode the identity of a word, are typically dictionary-length and contain mostly zeros. Dense vectors, which encode meaning, are more compact and contain fewer zeros. Various enhancements can improve the way similarities are calculated in the vector stores (databases).[17]

  • Performance improves by optimizing how vector similarities are calculated. Dot products enhance similarity scoring, while approximate nearest neighbor (ANN) searches improve retrieval efficiency over K-nearest neighbors (KNN) searches.[18]
  • Accuracy may be improved with Late Interactions, which allow the system to compare words more precisely after retrieval. This helps refine document ranking and improve search relevance.[19]
  • Hybrid vector approaches may be used to combine dense vector representations with sparse one-hot vectors, taking advantage of the computational efficiency of sparse dot products over dense vector operations.[17] (A sketch of a hybrid score follows this list.)
  • Other retrieval techniques focus on improving accuracy by refining how documents are selected. Some retrieval methods combine sparse representations, such as SPLADE, with query expansion strategies to improve search accuracy and recall.[20]
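As referenced above, a sketch of the dense/sparse distinction and a simple weighted hybrid score. The tiny vocabulary, toy dense vectors, and the weighting value are assumptions; production systems normalize the two score scales and tune the weight.

```python
import numpy as np

vocab = ["rag", "retrieval", "generation", "banana"]

def sparse_vec(text: str) -> np.ndarray:
    """Dictionary-length term-count vector: mostly zeros (sparse)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def hybrid_score(dense_q, dense_d, sparse_q, sparse_d, alpha: float = 0.5) -> float:
    dense = float(dense_q @ dense_d)     # semantic similarity from dense vectors
    sparse = float(sparse_q @ sparse_d)  # exact-term overlap via a cheap sparse dot product
    return alpha * dense + (1 - alpha) * sparse

q, d = "retrieval for rag", "rag retrieval generation"
dense_q = np.array([0.6, 0.8])           # toy dense embeddings
dense_d = np.array([0.7, 0.7])
print(hybrid_score(dense_q, dense_d, sparse_vec(q), sparse_vec(d)))
```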

Retriever-centric methods


These methods aim to enhance the quality of document retrieval in vector databases:

  • Pre-training the retriever using the Inverse Cloze Task (ICT), a technique that helps the model learn retrieval patterns by predicting masked text within documents.[21]
  • Progressive data augmentation, as used in Diverse Augmentation for Generalizable Dense Retrieval (DRAGON), improves dense retrieval by sampling difficult negative examples during training.[22]
  • Supervised retriever optimization aligns retrieval probabilities with the generator model's likelihood distribution. This involves retrieving the top-k vectors for a given prompt, scoring the generated response's perplexity, and minimizing the KL divergence between the retriever's selections and the model's likelihoods to refine retrieval.[23] (A sketch of this idea follows this list.)
  • Reranking techniques can refine retriever performance by prioritizing the most relevant retrieved documents during training.[24][12]
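As referenced in the list above, a toy sketch of the retriever-alignment idea: turn retriever scores and the generator's document-conditioned likelihoods into distributions over the top-k documents, then compute the KL divergence that training would minimize. The numbers, and the direction of the divergence shown, are illustrative assumptions rather than a faithful reproduction of any one paper.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

retriever_scores = np.array([2.0, 1.0, 0.5])     # retriever's scores for its top-k documents
generator_loglik = np.array([-1.2, -0.7, -3.0])  # LM log-likelihood of the answer given each doc

p_retriever = softmax(retriever_scores)
p_generator = softmax(generator_loglik)

# Training would update the retriever to drive this divergence down.
kl = float(np.sum(p_generator * np.log(p_generator / p_retriever)))
print(f"KL(generator || retriever) = {kl:.4f}")
```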


Language model

Retro language model for RAG. Each Retro block consists of Attention, Chunked Cross Attention, and Feed Forward layers. Black-lettered boxes show data being changed, and blue lettering shows the algorithm performing the changes.

By redesigning the language model with the retriever in mind, a network 25 times smaller can achieve perplexity comparable to that of its much larger counterparts.[25] Because it is trained from scratch, this method (Retro) incurs the high cost of the training runs that the original RAG scheme avoided. The hypothesis is that, by being given domain knowledge during training, Retro needs less focus on the domain and can devote its smaller weight resources to language semantics alone. The redesigned language model is shown here.

It has been reported that Retro is not reproducible, so modifications were made to make it so. The more reproducible version is called Retro++ and includes in-context RAG.[26]

Chunking


Chunking involves various strategies for breaking the data into the pieces that will be vectorized, so that the retriever can find fine-grained details in it.[14]

Different styles of data have patterns that a well-chosen chunking strategy can take advantage of.

Three types of chunking strategies are:

  • Fixed length with overlap. This is fast and easy. Overlapping consecutive chunks helps to maintain semantic context across chunks (a sketch of this strategy follows this list).
  • Syntax-based chunking breaks the document up into sentences. Libraries such as spaCy or NLTK can help with this.
  • File format-based chunking. Certain file types have natural chunks built in, and it is best to respect them. For example, code files are best chunked and vectorized as whole functions or classes. HTML files should leave <table> or base64-encoded <img> elements intact. Similar considerations apply to PDF files. Libraries such as Unstructured or LangChain can assist with this method.
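As referenced in the first item above, a minimal sketch of fixed-length chunking with overlap. Sizes are in characters for simplicity; token-based windows are common in practice.

```python
def chunk_fixed(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks where consecutive chunks share `overlap` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "RAG splits documents into chunks so the retriever can find details. " * 10
chunks = chunk_fixed(doc, size=120, overlap=30)
print(len(chunks), chunks[0][-30:] == chunks[1][:30])   # consecutive chunks share 30 characters
```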

Knowledge graphs


Rather than using documents as a source to vectorize and retrieve from, knowledge graphs can be used. One can start with a set of documents, books, or other bodies of text, and convert them to a knowledge graph using one of many methods, including language models. Once the knowledge graph is created, subgraphs can be vectorized, stored in a vector database, and used for retrieval as in plain RAG. The advantage here is that graphs have more recognizable structure than strings of text, and this structure can help retrieve more relevant facts for generation. Sometimes this approach is called GraphRAG.[citation needed]
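A sketch of one way the graph-to-vector step can work: verbalizing knowledge-graph triples into short sentences that can then be embedded and indexed like any other chunk. The triples and wording are illustrative assumptions.

```python
# (subject, relation, object) triples from a hypothetical knowledge graph
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
]

def verbalize(subgraph: list[tuple[str, str, str]]) -> str:
    """Turn triples into retrievable text for embedding."""
    return ". ".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in subgraph) + "."

chunk = verbalize(triples)
print(chunk)   # this string would now be embedded and stored in the vector database
```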

Hybrid search

Sometimes vector database searches can miss key facts needed to answer a user's question. One way to mitigate this is to do a traditional text search, add those results to the text chunks linked to the retrieved vectors from the vector search, and feed the combined hybrid text into the language model for generation.[citation needed]
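A sketch of the mitigation just described: run a keyword search alongside the vector search and pass the de-duplicated union of both result sets to the generation step. Both search functions are toy stand-ins for a real text index and vector database.

```python
def keyword_search(query: str, docs: list[str]) -> list[str]:
    """Naive term-match search; real systems would use an inverted index (e.g., BM25)."""
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def vector_search(query: str, docs: list[str]) -> list[str]:
    """Placeholder: pretend the embedding model ranked docs[0] nearest."""
    return docs[:1]

docs = ["Invoice 4417 was paid on 2024-05-02.",
        "Refund policy: 30 days from delivery."]
query = "invoice 4417"
hybrid = list(dict.fromkeys(vector_search(query, docs) + keyword_search(query, docs)))
print(hybrid)   # combined, de-duplicated context for generation
```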

Token-level embeddings

Since vector search relies on embedding individual chunks, much of the granular, token-level information cannot be obtained via pure or hybrid vector search. For higher accuracy, one can instead create embeddings for individual tokens and compute the Chamfer distance between them. This leads to significantly better results at the cost of speed. Tools such as Morphik aim to make this technique scalable by using a combination of software and hardware acceleration.
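A sketch of token-level scoring with a Chamfer-style similarity: each query-token embedding is matched to its closest document-token embedding and the maxima are summed (the "MaxSim" idea used by late-interaction retrievers). The random matrices stand in for real token embeddings.

```python
import numpy as np

def chamfer_similarity(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    """Sum, over query tokens, of the best dot-product match among document tokens."""
    sims = query_tokens @ doc_tokens.T     # (n_query, n_doc) pairwise scores
    return float(sims.max(axis=1).sum())   # best document token per query token

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 32))    # 4 query tokens, 32-dim embeddings (stand-ins)
d = rng.standard_normal((20, 32))   # 20 document tokens
print(chamfer_similarity(q, d))
```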

Evaluation and benchmarks


RAG systems are commonly evaluated using benchmarks designed to test both retrieval accuracy and generative quality. Popular datasets include BEIR, a suite of information retrieval tasks across diverse domains, and Natural Questions, Google's open-domain question-answering dataset.

In high-stakes domains like law and healthcare, domain-specific benchmarks are increasingly used. For instance, LegalBench-RAG[27] is an open-source benchmark designed to test retrieval quality over legal documents. It evaluates recall and precision for different RAG pipelines using real-world legal questions and documents.
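For concreteness, a sketch of the two retrieval metrics named above, computed at a cutoff k for a single query; benchmarks average these over many queries. The document identifiers are made up.

```python
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(d in relevant for d in retrieved[:k]) / k

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of all relevant documents found in the top k."""
    return sum(d in relevant for d in retrieved[:k]) / len(relevant)

retrieved = ["case-12", "case-40", "case-7", "case-3"]
relevant = {"case-12", "case-3", "case-99"}
print(precision_at_k(retrieved, relevant, k=4))   # 0.5   (2 of 4 retrieved are relevant)
print(recall_at_k(retrieved, relevant, k=4))      # ~0.67 (2 of 3 relevant were found)
```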

Challenges


RAG is not a complete solution to the problem of hallucinations in LLMs. According to Ars Technica, "It is not a direct solution because the LLM can still hallucinate around the source material in its response."[6]

While RAG improves the accuracy of large language models (LLMs), it does not eliminate all challenges. One limitation is that while RAG reduces the need for frequent model retraining, it does not remove it entirely. Additionally, LLMs may struggle to recognize when they lack sufficient information to provide a reliable response. Without specific training, models may generate answers even when they should indicate uncertainty. According to IBM, this issue can arise when the model lacks the ability to assess its own knowledge limitations.[1]

RAG systems may retrieve factually correct but misleading sources, leading to errors in interpretation. In some cases, an LLM may extract statements from a source without considering its context, resulting in an incorrect conclusion.[12] Additionally, when faced with conflicting information, RAG models may struggle to determine which source is accurate, and in the worst case may combine details from multiple sources, producing responses that merge outdated and updated information in a misleading manner. According to the MIT Technology Review, these issues occur because RAG systems may misinterpret the data they retrieve.[2]

References

  1. ^ a b c d e f "What is retrieval-augmented generation?". IBM. 22 August 2023. Retrieved 7 March 2025.
  2. ^ a b c d e f "Why Google's AI Overviews gets things wrong". MIT Technology Review. 31 May 2024. Retrieved 7 March 2025.
  3. ^ a b Singhal, Rahul (Nov 30, 2023). "The Power Of RAG: How Retrieval-Augmented Generation Enhances Generative AI". Forbes.
  4. ^ a b c d Lewis, Patrick; Perez, Ethan; Piktus, Aleksandra; Petroni, Fabio; Karpukhin, Vladimir; Goyal, Naman; Küttler, Heinrich; Lewis, Mike; Yih, Wen-tau; Rocktäschel, Tim; Riedel, Sebastian; Kiela, Douwe (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. pp. 9459–9474. arXiv:2005.11401. ISBN 978-1-7138-2954-6.
  5. ^ a b Turow Jon, Kiela Douwe (March 26, 2025). "RAG Inventor Talks Agents, Grounded AI, and Enterprise Impact". Madrona.
  6. ^ a b c d "Can a technology called RAG keep AI models from making stuff up?". Ars Technica. 6 June 2024. Retrieved 7 March 2025.
  7. ^ Mishi, Javed. "Retrieval-Augmented Generation for Enterprise Search Systems". Nextbridge. Hajra Naeem. Retrieved 11 July 2025.
  8. ^ "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks". ai.meta.com. 2020.
  9. ^ a b Xu, Sherlock (January 25, 2024). "Understanding Retrieval-Augmented Generation: Part 1". www.bentoml.com.
  10. ^ "Mitigating LLM hallucinations in text summarisation". BBC. 20 June 2024. Retrieved 7 March 2025.
  11. ^ a b "What is RAG? - Retrieval-Augmented Generation AI Explained - AWS". Amazon Web Services, Inc. Retrieved 16 July 2024.
  12. ^ a b c Kiela Douwe, Turck Matt (March 6, 2025). "Top AI Researcher on GPT 4.5, DeepSeek and Agentic RAG | Douwe Kiela, CEO, Contextual AI". YouTube.
  13. ^ a b c d Gao, Yunfan; Xiong, Yun; Gao, Xinyu; Jia, Kangxiang; Pan, Jinliu; Bi, Yuxi; Dai, Yi; Sun, Jiawei; Wang, Meng; Wang, Haofen (2023). "Retrieval-Augmented Generation for Large Language Models: A Survey". arXiv:2312.10997 [cs.CL].
  14. ^ a b Sankar, Shrinivasan (Feb 13, 2024). "Retrieval Augmented Generation(RAG) — A quick and comprehensive introduction". ai-bites.net.
  15. ^ Kiela Douwe, Ho Alan (Oct 13, 2023). "Where did Retrieval Augmented Generation come from, and where is it going?". YouTube.
  16. ^ Lewis, Patrick; Perez, Ethan; Piktus, Aleksandra; Petroni, Fabio; Karpukhin, Vladimir; Goyal, Naman; Küttler, Heinrich; Lewis, Mike; Yih, Wen-tau; Rocktäschel, Tim; Riedel, Sebastian; Kiela, Douwe (2020). "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 9459–9474. arXiv:2005.11401.
  17. ^ a b Luan, Yi; Eisenstein, Jacob; Toutanova, Kristina; Collins, Michael (26 April 2021). "Sparse, Dense, and Attentional Representations for Text Retrieval". Transactions of the Association for Computational Linguistics. 9: 329–345. arXiv:2005.00181. doi:10.1162/tacl_a_00369. Retrieved 15 March 2025.
  18. ^ "Information retrieval". Microsoft. 10 January 2025. Retrieved 15 March 2025.
  19. ^ Khattab, Omar; Zaharia, Matei (2020). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". doi:10.1145/3397271.3401075.
  20. ^ Wang, Yup; Conroy, John M.; Molino, Neil; Yang, Julia; Green, Mike (2024). "Laboratory for Analytic Sciences in TREC 2024 Retrieval Augmented Generation Track". NIST TREC 2024. Retrieved 15 March 2025.
  21. ^ Lee, Kenton; Chang, Ming-Wei; Toutanova, Kristina (2019). "Latent Retrieval for Weakly Supervised Open Domain Question Answering" (PDF).
  22. ^ Lin, Sheng-Chieh; Asai, Akari (2023). "How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval" (PDF).
  23. ^ Shi, Weijia; Min, Sewon; Yasunaga, Michihiro; Seo, Minjoon; James, Rich; Lewis, Mike; Zettlemoyer, Luke; Yih, Wen-tau (June 2024). "REPLUG: Retrieval-Augmented Black-Box Language Models". Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Mexico City: Association for Computational Linguistics. pp. 8371–8384. arXiv:2301.12652. doi:10.18653/v1/2024.naacl-long.463. Retrieved 16 March 2025.
  24. ^ Ram, Ori; Levine, Yoav; Dalmedigos, Itay; Muhlgay, Dor; Shashua, Amnon; Leyton-Brown, Kevin; Shoham, Yoav (2023). "In-Context Retrieval-Augmented Language Models". Transactions of the Association for Computational Linguistics. 11. MIT Press: 1316–1331. arXiv:2302.00083. doi:10.1162/tacl_a_00605. Retrieved 16 March 2025.
  25. ^ Borgeaud, Sebastian; Mensch, Arthur (2021). "Improving language models by retrieving from trillions of tokens" (PDF).
  26. ^ Wang, Boxin; Ping, Wei (2023). "Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study" (PDF).
  27. ^ LegalBench-RAG (2024)