TextKernel is a global leader in AI technology for recruitment, providing solutions to over 2,500 corporate and staffing organisations worldwide. Their expertise lies in intelligent information extraction, search, and matching technology for talent acquisition.
Challenge
Processing large volumes of CVs requires significant compute power, which can lead to high operational costs. TextKernel wanted to explore whether Amazon Bedrock’s large language models could deliver the same or better parsing quality while reducing processing time and expenses.
They needed a scalable, cost‑effective way to extract structured data from CVs while delivering consistent, high‑quality output at high volume.
Solution
Firemind developed a proof of concept using Amazon Bedrock to build a chain‑of‑thought extraction process. Custom prompts guide the model through identifying and extracting key CV data fields, which are stored in Amazon DynamoDB for fast retrieval.
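The case study does not publish the actual prompts or schema, but a minimal sketch of this kind of extraction step might look like the following, assuming a Claude model on Bedrock, an illustrative field list, and a `cv-extractions` DynamoDB table (all hypothetical names):

```python
# Minimal sketch of a chain-of-thought CV extraction step on Amazon Bedrock.
# The model ID, table name, field list, and prompt wording are illustrative
# assumptions, not TextKernel's production prompts or schema.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
table = boto3.resource("dynamodb").Table("cv-extractions")  # hypothetical table

FIELDS = ["name", "email", "phone", "skills", "work_history", "education"]  # assumed fields

PROMPT = """You are parsing a CV. Work step by step:
1. Identify where each section (contact details, experience, education, skills) begins.
2. For each target field, quote the supporting text before deciding on a value.
3. Finish with a single JSON object containing exactly these keys: {fields}.
   Use null for anything the CV does not state.

CV:
{cv_text}
"""

def extract_cv_fields(cv_id: str, cv_text: str) -> dict:
    """Run the chain-of-thought prompt and persist the structured result."""
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{
                "role": "user",
                "content": [{"type": "text", "text": PROMPT.format(
                    fields=", ".join(FIELDS), cv_text=cv_text)}],
            }],
        }),
    )
    completion = json.loads(response["body"].read())["content"][0]["text"]

    # Naive pull of the final JSON object out of the model's reasoning;
    # a production system would validate this against a schema.
    fields = json.loads(completion[completion.index("{"): completion.rindex("}") + 1])

    table.put_item(Item={"cv_id": cv_id, **fields})  # keyed for fast retrieval
    return fields
```

In practice a function like this would run inside the AWS Lambda processing step described below, once per ingested CV.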
The architecture leveraged Amazon S3 for data ingestion, AWS Lambda for processing, and AWS Step Functions for workflow automation – delivering a system capable of high‑volume CV parsing with reduced latency and lower costs.
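The case study does not include the Step Functions definition itself, but one common way to wire this kind of fan‑out is a Map state that hands each ingested S3 key to the parsing Lambda. The sketch below uses placeholder names, ARNs, and concurrency limits:

```python
# Hypothetical sketch: registering a Step Functions state machine that fans a
# batch of CV object keys (from the S3 ingestion step) out to a parsing Lambda.
# All names and ARNs are placeholders, not the production configuration.
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "ParseCVs",
    "States": {
        "ParseCVs": {
            "Type": "Map",
            "ItemsPath": "$.cv_keys",      # list of S3 keys supplied as input
            "MaxConcurrency": 50,          # bounded parallelism per batch
            "Iterator": {
                "StartAt": "ParseOneCV",
                "States": {
                    "ParseOneCV": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:parse-cv",  # placeholder
                        "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
                        "End": True,
                    }
                },
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="cv-parsing-pipeline",                                    # placeholder name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/cv-parsing-sfn-role",  # placeholder role
)
```

Bounding MaxConcurrency is one way to process batches in parallel while keeping Bedrock and Lambda usage within account quotas.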
Services used
- Amazon Bedrock
- AWS Lambda
- Amazon DynamoDB
- Amazon S3

Results
- 40% cost reduction compared to OpenAI models
- 2x faster processing speed
- Maintained or improved parsing quality
- Fully scalable for high‑volume workloads
- Reduced operational latency