An industry leader in specialised broking and insurance, the client manages complex placements and handles large volumes of documents and correspondence between clients and insurers.
Challenge
The client had accumulated over 16 terabytes of data in its OpenText system. The information — a mix of emails, PDFs, spreadsheets, and policy documents — was largely unstructured and carried little metadata. Employees struggled to search, analyse, or interpret the content effectively. As a result, important information was overlooked, and decision-making was slower and less consistent than required.
Solution
Firemind developed a Retrieval Augmented Generation (RAG) solution to make the client's data searchable and usable. A custom AWS Lambda process identified and categorised files, while Amazon Kendra indexed the content so staff could query it. Using large language models through Amazon Bedrock, employees were able to ask questions in plain English and receive concise, accurate answers drawn directly from the documents.
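The query flow described above can be sketched in Python with boto3: retrieve relevant passages from the Kendra index, stitch them into a grounded prompt, and ask a Claude model via Bedrock. This is a minimal illustration, not the client's implementation; the index ID, model ID, passage limit, and prompt wording are all assumptions.

```python
import json

# Placeholders -- not the client's actual configuration.
KENDRA_INDEX_ID = "REPLACE_WITH_INDEX_ID"
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"


def build_prompt(question, passages):
    """Stitch retrieved passages into a grounded prompt for the model."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the excerpts below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )


def ask(question, kendra, bedrock):
    """Retrieve passages from Amazon Kendra, then query Claude via Bedrock."""
    result = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
    passages = [item["Content"] for item in result["ResultItems"][:5]]
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": build_prompt(question, passages)}],
    })
    response = bedrock.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(response["body"].read())["content"][0]["text"]


if __name__ == "__main__":
    import boto3

    print(ask("What is the excess on this claim?",
              boto3.client("kendra"), boto3.client("bedrock-runtime")))
```

Grounding the prompt in retrieved excerpts, rather than asking the model directly, is what keeps answers tied to the source documents.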
Services used
- Amazon Kendra – indexing and querying of unstructured data
- Amazon Bedrock – Anthropic Claude models for summarisation and responses
- AWS Lambda – file identification and preprocessing
- Amazon S3 – scalable storage and retrieval
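The Lambda preprocessing step might look like the sketch below: an S3-triggered handler that maps each incoming object to a coarse category so it can be routed for indexing. The category names and extension mapping are illustrative assumptions, not the client's actual schema.

```python
import os

# Illustrative extension-to-category mapping (an assumption, not the
# client's actual classification scheme).
CATEGORIES = {
    ".eml": "email", ".msg": "email",
    ".pdf": "document", ".docx": "document",
    ".xls": "spreadsheet", ".xlsx": "spreadsheet", ".csv": "spreadsheet",
}


def categorise(key):
    """Map an S3 object key to a coarse document category."""
    ext = os.path.splitext(key)[1].lower()
    return CATEGORIES.get(ext, "other")


def handler(event, context):
    """Lambda entry point for S3 ObjectCreated events."""
    results = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        results.append({"key": key, "category": categorise(key)})
    return results
```

In practice the handler would also write the category back as metadata so Kendra can use it as a filterable attribute, but the routing logic above is the core of the step.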

Results
- Accuracy of 9/10 in responses to claim file queries
- Manual search time reduced from hours to seconds, improving productivity across underwriting and claims teams
- Consistent answers across departments, lowering the risk of missed information
- Clear path to scale with further integration into OpenText and additional use cases identified