
JAWS PANKRATION 2024


The Ultimate RAG Showdown (Kendra, KB for Bedrock, etc...)

Lv300

8/24/2024 08:20 (UTC)

Session Info

One of the most common use cases for generative AI is RAG (Retrieval-Augmented Generation).

RAG allows for more useful responses by combining knowledge information with generative AI.

In this session, we build RAG on AWS in multiple patterns and verify which configuration provides the best accuracy.
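As a minimal sketch of one such pattern, the snippet below queries a Knowledge Base for Amazon Bedrock through boto3's `retrieve_and_generate` API. The knowledge base ID and model ARN are placeholders, and the helper that assembles the request payload is an assumption of this sketch, not part of the session materials.

```python
# Sketch: RAG via Knowledge Bases for Amazon Bedrock.
# KB_ID and MODEL_ARN below are hypothetical placeholders.
KB_ID = "EXAMPLEKBID"
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-haiku-20240307-v1:0"
)


def build_rag_request(question: str,
                      kb_id: str = KB_ID,
                      model_arn: str = MODEL_ARN) -> dict:
    """Assemble the retrieve_and_generate request payload."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


def ask(question: str) -> str:
    """Send the question to Bedrock and return the generated answer."""
    import boto3  # requires AWS credentials at call time
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(question))
    return response["output"]["text"]
```

With a knowledge base already ingesting documents from S3, a call like `ask("What is Amazon Kendra?")` returns an answer grounded in the registered data.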

Kazuaki Morita

- AWS Community Builders -

- AWS Top Engineers (APN) -

- AWS Ambassadors (APN) -

- AWS All Certified Engineers (APN) -



Session Category
Machine learning


AWS Services
Bedrock
Kendra
OpenSearch Service
OpenSearch Serverless

Session Materials


Session Summary (by Amazon Bedrock)
    The speaker introduces themselves and their work on X (likely Twitter). They live in Nara, Japan, and have published a book on Bedrock development. The presentation focuses on RAG (Retrieval-Augmented Generation), a technique that provides external information to AI for generating responses. This method helps reduce AI hallucinations by giving it access to guided knowledge.

    The speaker discusses three ways to build RAG using AWS services:

    1. Amazon Bedrock (Knowledge Bases): easy to set up from the management console, with frequent feature updates. It uses S3 to store documents and generates responses based on the registered data.
    2. Amazon Kendra: an enterprise search service with multiple connectors for various data sources. It supports FAQ formats and can improve search accuracy.
    3. A custom solution using AWS services: an application that performs text extraction, indexing, and response generation.

    The speaker compares these methods on ease of setup, flexibility, and performance. For the performance evaluation, they used AWS What's New articles in Japanese, generated questions automatically, and tested each RAG system's ability to answer correctly. The results showed varying performance across the methods, but overall the responses were generally satisfactory. The speaker notes that the choice of language model and document format can significantly impact performance.

    In conclusion, the speaker suggests that the evaluation method itself needs assessment and expresses hope for dedicated RAG evaluation features in Amazon Bedrock's model evaluation functionality.
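The custom pipeline summarized above (text extraction, indexing, response generation) can be sketched in miniature. This is a toy stand-in: word-overlap scoring replaces real embeddings (which in practice might come from a model on Bedrock), and all function names here are illustrative, not from the session.

```python
# Toy custom-RAG sketch: chunk documents, index them, retrieve the
# best-matching chunk for a query, and assemble a grounded prompt.
from collections import Counter


def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word chunks (the 'indexing' step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def score(query: str, passage: str) -> int:
    """Word-overlap score; a crude stand-in for embedding similarity."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())


def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk that best matches the query."""
    return max(chunks, key=lambda c: score(query, c))


def build_prompt(query: str, context: str) -> str:
    """Assemble the prompt passed to the generative model."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a real deployment the retrieved chunk would be sent to a foundation model; swapping the scorer for vector search (e.g. OpenSearch Serverless) is the usual upgrade path.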

©JAWS-UG (AWS User Group - Japan). All rights reserved.