Overview
This solution provides a robust Retrieval-Augmented Generation (RAG) system on AWS, designed to support a web interface where users can ask questions over internal company data from Slack, Confluence, and Jira. The architecture starts with Amazon S3, which stores data snapshots from Slack messages, Confluence articles, and Jira tickets. AWS Lambda functions periodically pull and process updates from these platforms through their respective APIs, keeping the data current. The information is then indexed in LanceDB, a serverless vector store for data and AI workloads that provides a search layer optimized for fast query retrieval.
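To make the ingestion step concrete, here is a minimal sketch of what one of the periodic Lambda functions might look like. Everything here is illustrative: the `handler` signature follows the standard AWS Lambda Python convention, but the document structure, field names, and chunking parameters are assumptions, and the actual calls to the Slack/Confluence/Jira APIs, Amazon Bedrock, and LanceDB are omitted.

```python
# Hypothetical sketch of the periodic ingestion step: a Lambda handler
# receives raw documents (Slack messages, Confluence pages, Jira tickets),
# splits them into overlapping chunks suitable for embedding, and would
# then embed and index them. All names and fields are illustrative.

def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def handler(event, context):
    # In the real system these records would be fetched from the Slack,
    # Confluence, and Jira APIs; here they arrive as placeholder input.
    documents = event.get("documents", [])
    records = []
    for doc in documents:
        for i, chunk in enumerate(chunk_text(doc["body"])):
            records.append({
                "source": doc["source"],   # e.g. "confluence"
                "doc_id": doc["id"],
                "chunk_id": i,
                "text": chunk,
            })
    # Next steps (omitted): embed each chunk via an Amazon Bedrock
    # embedding model and upsert the vectors into LanceDB.
    return {"indexed": len(records)}
```

Overlapping chunks are a common choice here because they reduce the chance that a relevant sentence is split across a chunk boundary and lost to retrieval.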
When a user enters a question through the web interface (hosted on Fargate), the query is sent to an Amazon Bedrock embedding API, which converts it into a vector. LanceDB then performs a hybrid search, combining vector (semantic) search with full-text search, to retrieve the most relevant data. The model combines the contextually relevant snippets and generates a coherent response, which is returned to the user.
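The hybrid retrieval described above can be sketched with a small example. LanceDB supports hybrid (vector plus full-text) search natively; the fragment below only illustrates the underlying idea of merging the two ranked result lists, using reciprocal rank fusion (RRF), a common fusion technique. The document IDs and rankings are made up for illustration.

```python
# Illustrative sketch of the hybrid retrieval step: results from a
# vector (semantic) search and a full-text (keyword) search are merged
# with reciprocal rank fusion (RRF). Documents found by both searches
# accumulate score from each list and rise toward the top.

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked ID lists; higher fused score ranks first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc3", "doc1", "doc7"]    # nearest-neighbour order (assumed)
keyword_hits = ["doc1", "doc5", "doc3"]   # full-text match order (assumed)

merged = reciprocal_rank_fusion([vector_hits, keyword_hits])
# doc1 and doc3 appear in both lists, so they outrank single-list hits
```

The top-ranked snippets would then be passed, together with the user's question, to a Bedrock-hosted model to generate the final answer.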
This solution delivers insights by combining company data with advanced machine learning, giving users immediate answers drawn from aggregated knowledge across multiple communication and documentation platforms.
| Sold by | Protagona |
| --- | --- |
| Categories | |
| Fulfillment method | Professional Services |
Pricing Information
This service is priced based on the scope of your request. Please contact the seller for pricing details.
Support
For any questions about this offering or what Protagona can do for you, please reach out to us and we'll get you taken care of: