Junior Software Engineer
Junior Software Engineer specializing in AI quality and testing, with a strong background in shipping TypeScript-based products. Experienced in building automated test suites with frameworks such as Playwright and Cypress to surface complex bugs and edge cases. Demonstrated ability to contribute code enhancements and uphold rigorous engineering quality standards across a diverse portfolio of self-built projects. Committed to improving AI-driven web and mobile applications by analyzing user feedback and implementing technical fixes, and comfortable navigating complex codebases to ensure reliable, high-performance rendering in video generation technologies.
Tata Consultancy Services
Dec 2021 — Feb 2023
• Built TestPilot E2E, a comprehensive end-to-end testing suite developed with TypeScript and Playwright to automate complex user workflows.
• Implemented automated API validation layers to ensure data consistency and backend stability across multiple application endpoints.
• Improved regression testing efficiency by creating a modular framework of reusable test cases that reduced manual verification time.
• Identified and documented technical bugs and critical edge cases, delivering detailed reports to support rapid engineering fixes.
• Contributed to the software development lifecycle by integrating automated scripts that validate feature performance before production deployment.
• Maintained an organized repository of technical test documentation and engineering insights to track product quality trends.
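An API validation layer like the one described above can be sketched as a type-guard check run against raw responses before UI assertions execute. This is a minimal illustrative sketch, not the actual TestPilot E2E code; the `UserRecord` shape and field names are assumptions.

```typescript
// Illustrative record shape (assumed, not from the real project).
interface UserRecord {
  id: number;
  email: string;
  active: boolean;
}

// Validate that a raw API payload matches the expected record shape,
// so backend schema drift is caught before it breaks downstream tests.
function validateUserRecord(payload: unknown): payload is UserRecord {
  if (typeof payload !== "object" || payload === null) return false;
  const p = payload as Record<string, unknown>;
  return (
    typeof p.id === "number" &&
    typeof p.email === "string" &&
    p.email.includes("@") &&
    typeof p.active === "boolean"
  );
}
```

Using a type predicate (`payload is UserRecord`) lets the same check serve both as a runtime assertion in tests and as a compile-time narrowing step in shared helper code.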
• Built VidQA Forge, an automated testing framework developed in TypeScript for evaluating AI-powered video generation platforms.
• Designed comprehensive test suites to validate prompt-to-video accuracy, rendering consistency, and model performance under edge-case scenarios.
• Identified and documented critical defects related to processing latency, failed video renders, and invalid input handling to improve reliability.
• Implemented automated validation layers applying engineering rigor to monitor system performance and ensure the stability of the AI video generation pipeline.
• Analyzed user feedback patterns from social channels to simulate real-world usage scenarios and expand the automated test repository.
• Maintained technical documentation and engineering insights within the repository to support continuous quality improvements for AI-driven features.
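The invalid-input handling above can be illustrated with a small prompt validator that rejects inputs known to break render jobs. The length limit, error messages, and control-character rule are assumptions for illustration, not VidQA Forge internals.

```typescript
interface ValidationResult {
  ok: boolean;
  error?: string;
}

// Assumed limit for illustration; real systems tune this per model.
const MAX_PROMPT_LENGTH = 500;

// Reject prompts that would fail or hang the video-generation pipeline
// before they are ever submitted as a render job.
function validateVideoPrompt(prompt: string): ValidationResult {
  const trimmed = prompt.trim();
  if (trimmed.length === 0) {
    return { ok: false, error: "empty prompt" };
  }
  if (trimmed.length > MAX_PROMPT_LENGTH) {
    return { ok: false, error: "prompt exceeds length limit" };
  }
  // Control characters frequently cause failed renders downstream.
  if (/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/.test(trimmed)) {
    return { ok: false, error: "prompt contains control characters" };
  }
  return { ok: true };
}
```

Edge-case suites can then enumerate boundary inputs (empty, whitespace-only, exactly-at-limit, over-limit) against this one function.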
• Developed PromptSight AI using TypeScript to evaluate AI-generated responses for relevance, coherence, and overall output quality.
• Integrated OpenAI APIs to facilitate the comparison of multiple prompt variations and ensure consistent performance across diverse datasets.
• Implemented automated scoring logic to streamline prompt engineering workflows and identify inconsistencies in AI model behavior.
• Tested core engine components to discover edge cases and document technical bugs related to API response handling and data accuracy.
• Created comprehensive test suites to validate scoring algorithms and maintain high quality standards for AI-generated content.
• Maintained detailed technical documentation and engineering logs to track the improvement of prompt evaluation accuracy and system reliability.
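Automated scoring logic of the kind described above can be sketched as a simple relevance metric: the fraction of expected terms that appear in a model response. This is a hypothetical heuristic in the spirit of PromptSight AI, not its actual algorithm; the function name and scoring scheme are assumptions.

```typescript
// Hypothetical relevance scorer: fraction of expected terms present in
// the response, from 0.0 (no overlap) to 1.0 (all terms present).
function relevanceScore(response: string, expectedTerms: string[]): number {
  if (expectedTerms.length === 0) return 0;
  const words = new Set(
    response.toLowerCase().split(/\W+/).filter(Boolean)
  );
  const hits = expectedTerms.filter(
    (t) => words.has(t.toLowerCase())
  ).length;
  return hits / expectedTerms.length;
}
```

Scoring each prompt variation against the same expected-term list makes inconsistent model behavior show up as score variance across runs.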
Lindsey Wilson University · Kentucky
Dec 2025