Our Projects: Speaking Volumes and Proving That Our Impact is Inherent
For many people, dreams and aspirations remain intangible and difficult to act upon. The client, a social media and advertising specialist, wanted a tool that could turn these abstract ideas into structured, actionable plans, enabling users to articulate their dreams in natural language and receive tailored visual and practical guidance.
We developed a unique AI-powered platform that combines conversation and visualisation to bring users’ dreams to life. Utilising LLAMA 7B served through FastAPI for dialogue and intent extraction, the app guides users through conversations that help them articulate their aspirations. The backend integrates advanced tools such as diffusion models, Midjourney, and DALL·E to generate dream-related images from textual input and user-provided facial images. With React Native for the front end and a backend stack that includes MySQL, PyTorch, and Chroma Vector DB, the app also summarises each dream and provides a step-by-step plan to achieve it.
Users can now transform their ideas into clear, actionable goals with visual representations that motivate and inspire. The platform not only saves users time but also introduces a novel and emotionally engaging way to chart a journey toward fulfilling their dreams, setting it apart in both UX and utility.
Managing a diverse and complex pool of HR documents and user expectations posed several challenges. These included processing large volumes of PDFs and web content, supporting multilingual communication, ensuring response relevance and traceability, and integrating with live employee data for personalised queries—all while delivering consistent user satisfaction across the board.
We built a sophisticated Retrieval-Augmented Generation (RAG) system integrated with similarity scoring, document parsing, and dynamic querying features. Leveraging LLAMA 7B, LangChain agents, PyTorch, and Chroma Vector DB, the system retrieves precise answers from HR policies and HRMS documentation, including page-level references. It supports multilingual NLP, allowing users to interact in their preferred languages. Real-time access to MySQL databases ensures queries related to individual profiles, benefits, and balances are resolved on demand. Built using ReactJS, the interface offers a seamless and intuitive experience across devices.
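To make the retrieval step concrete, here is a minimal, self-contained sketch of similarity-based lookup with page-level traceability. It stands in for the production stack (LLAMA 7B, LangChain agents, Chroma Vector DB) with a toy bag-of-words embedding and cosine similarity; the sample policy passages, file names, and page numbers are illustrative assumptions, not real HR content.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; the production system uses learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative HR passages carrying document and page metadata for traceability.
chunks = [
    {"doc": "leave_policy.pdf", "page": 4,
     "text": "Employees accrue annual leave at 1.5 days per month of service."},
    {"doc": "hrms_manual.pdf", "page": 12,
     "text": "The HRMS dashboard shows attendance and benefits balances."},
]

def retrieve(query, chunks, top_k=1):
    """Rank chunks by similarity to the query and return the best matches."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c["text"])), reverse=True)
    return ranked[:top_k]

hit = retrieve("How many days of annual leave do employees accrue?", chunks)[0]
print(f"{hit['doc']} (p. {hit['page']}): {hit['text']}")
```

Returning the source document and page number alongside each answer is what gives users the traceability described above.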
The AI assistant now delivers accurate, multilingual answers to HR policy and HRMS queries, saving users substantial time. By citing document names and page numbers or linking directly to web sources, the tool enhances trust and reliability while dramatically reducing the load on HR departments for common queries.
Navigating HR legal compliance was a daunting task for users, who had to manually sift through numerous state-specific documents or search the internet to find accurate information on topics such as minimum wage, working hours, and leave policies. This fragmented approach made it difficult to ensure that queries were answered quickly, accurately, and per regional compliance standards.
To streamline access, we developed an AI-powered solution that enables users to ask compliance-related queries in natural English. The system uses a robust backend built with Python, PyTorch, LLAMA 7B, LangChain agents, and similarity models. It is trained on a comprehensive set of Indian state-wise legal documents and can retrieve precise answers from these texts. Users are shown the document source and page number alongside the answer, and if no match is found internally, the system intelligently searches the web for verified responses. The interface, developed using ReactJS and integrated with Chroma Vector DB and MySQL, also offers chat history and feedback capabilities.
Users can now access state-wise HR legal compliance information in seconds without switching platforms or manually searching documents. The solution has significantly reduced the time and effort spent by end users on compliance-related queries, ensuring accuracy and traceability through document-backed responses.
Clients using Sphinx’s HRMS wanted a way to generate operational reports using plain English, eliminating the need to navigate through multiple menus or predefined reporting formats. Speed, flexibility, and user-friendliness were key requirements.
A conversational AI feature was introduced, enabling users to request data insights and charts using everyday language. Developed with LLAMA 7B, PyTorch, and similarity models, and supported by Python, ReactJS, and Chroma Vector DB, the engine processed queries to generate live data and visualisations. Users could provide feedback, access their top 10 historical chats, and use dynamic filters in their queries to refine report results.
The natural language interface significantly simplified report generation for end-users, enabling data retrieval on-demand, whether predefined in the system or not. This saved valuable time and made HRMS reporting accessible to all users, regardless of technical expertise.
A defence sector client required autonomous drone navigation over designated areas with optimal travel time and distance. The aim was to minimise energy consumption and avoid obstacles without manual intervention, while maintaining full area coverage.
An AI/ML-based flight planning system was developed using TensorFlow and Python, connected to a neural network pipeline (NNP) that predicted potential collisions and dynamically adjusted the route. The algorithm processed image resolution, flight speed, coverage area, and height to generate efficient paths. Additional clustering techniques and motion trajectory estimation ensured adaptive routing even in unpredictable environments. All analytics and telemetry data were managed through a MySQL backend.
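The route-generation idea can be illustrated with a simplified coverage sweep: the camera's ground footprint is derived from flight height, and the area is covered in back-and-forth passes spaced to guarantee overlap. The pinhole footprint formula, field-of-view value, and overlap factor below are assumptions for illustration only, not the client's proprietary algorithm, which additionally handled collision prediction and motion trajectory estimation.

```python
import math

def ground_footprint(height_m, fov_deg=60.0):
    """Width of the ground strip imaged at a given altitude (simple pinhole model;
    the 60-degree field of view is an assumed camera parameter)."""
    return 2 * height_m * math.tan(math.radians(fov_deg / 2))

def lawnmower_path(area_w, area_h, height_m, overlap=0.2):
    """Generate back-and-forth waypoints covering a rectangular area.
    Pass spacing is the footprint width reduced by the desired image overlap."""
    spacing = ground_footprint(height_m) * (1 - overlap)
    waypoints, x, leftward = [], 0.0, False
    while x <= area_w:
        ys = (area_h, 0.0) if leftward else (0.0, area_h)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += spacing
        leftward = not leftward            # alternate sweep direction each pass
    return waypoints

path = lawnmower_path(area_w=100.0, area_h=50.0, height_m=30.0)
```

Flying higher widens the footprint and reduces the number of passes, which is how height trades off against image resolution in the parameters listed above.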
The drone system successfully navigated complex terrains with minimal oversight, achieving full area coverage while maintaining optimal energy efficiency. It reduced the need for manual programming and greatly enhanced mission safety and reliability.
Clients sought a solution to monitor physical spaces through video feeds, with the added capability of detecting and classifying humans by count, age, and gender in real time. Traditional surveillance systems lacked the intelligence to perform these tasks with accuracy and efficiency.
Sphinx engineered an AI-based video analytics engine integrated with SSD MobileNet and CaffeModel frameworks, supported by Python, OpenCV, DLIB, and Centroid Tracking algorithms. The system was designed to monitor live camera feeds, automatically detecting the presence of individuals. It further classified each subject’s estimated age and gender and tracked unique visitors to avoid duplication in the people count. Data was stored and visualised via a MySQL backend with SQLAlchemy ORM.
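Centroid tracking, used here to keep the people count free of duplicates, can be sketched in a few lines: each new detection is matched to the nearest previously tracked centroid, and only unmatched detections receive fresh IDs. Detection itself (SSD MobileNet/CaffeModel) is stubbed out with hard-coded coordinates, and the distance threshold is an assumed value.

```python
import math

class CentroidTracker:
    """Assign stable IDs to detections across frames by nearest-centroid matching,
    so the same person is not counted twice (a minimal version of the technique)."""
    def __init__(self, max_dist=50.0):
        self.next_id = 0
        self.objects = {}          # id -> (x, y) of last known centroid
        self.max_dist = max_dist

    def update(self, centroids):
        assigned = {}
        unmatched = list(centroids)
        for oid, (ox, oy) in list(self.objects.items()):
            if not unmatched:
                break
            # Match each tracked object to its closest new detection.
            best = min(unmatched, key=lambda c: math.hypot(c[0] - ox, c[1] - oy))
            if math.hypot(best[0] - ox, best[1] - oy) <= self.max_dist:
                assigned[oid] = best
                unmatched.remove(best)
        for c in unmatched:        # detections with no match become new visitors
            assigned[self.next_id] = c
            self.next_id += 1
        self.objects = assigned
        return assigned

tracker = CentroidTracker()
tracker.update([(10, 10), (200, 40)])        # frame 1: two people appear
ids = tracker.update([(14, 12), (205, 38)])  # frame 2: both move slightly
unique_visitors = tracker.next_id            # still 2, not 4
```

Because small movements re-use existing IDs, the visitor count reflects unique people rather than raw per-frame detections.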
The solution allowed users to monitor demographic data and footfall in real time, enabling more accurate reporting and decision-making for crowd management, security, and operational analytics. The captured image, age, gender, and count were readily accessible, enhancing overall situational awareness.
HR teams preferred not to upload policy documents in static formats but sought an intelligent system that could dynamically answer employees’ questions derived from internal documentation. The objective was to create a solution capable of generating precise, AI-driven answers without manual document browsing.
Sphinx developed a smart Q&A engine embedded within its HRMS platform, leveraging the LLAMA 7B chat model along with Python, Flask, and MySQL for backend processing. The system allowed users to upload multiple documents, whether text-based or image-based, and used OCR tools like EasyOCR and PyTesseract to convert content into searchable text. The engine then applied transformer models to automatically generate both short and long answers to potential user queries, storing them in a structured database. Questions with lower answer confidence were earmarked for AI model fine-tuning to improve future accuracy.
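The confidence-gating step, in which weaker answers are earmarked for model fine-tuning, can be sketched as below. The OCR and transformer QA stages are omitted, and the threshold value and sample questions are illustrative assumptions rather than production figures.

```python
# A sketch of the confidence-gating step: answers scoring below a threshold are
# earmarked for fine-tuning. The QA model itself is stubbed; in the real system
# a transformer model produces each answer together with a confidence score.
FINE_TUNE_THRESHOLD = 0.6  # assumed cutoff, not the production value

def triage_answers(qa_results, threshold=FINE_TUNE_THRESHOLD):
    """Split model outputs into servable answers and fine-tuning candidates."""
    servable, to_finetune = [], []
    for item in qa_results:
        (servable if item["confidence"] >= threshold else to_finetune).append(item)
    return servable, to_finetune

results = [
    {"question": "How many casual leaves per year?", "answer": "12", "confidence": 0.93},
    {"question": "Is sabbatical leave paid?", "answer": "partially", "confidence": 0.41},
]
servable, to_finetune = triage_answers(results)
```

Queued low-confidence pairs become labelled training data over time, which is how the engine's accuracy improves with use.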
The system empowered HR teams and employees alike by providing direct, concise answers without needing to consult entire policy documents. With more than 90% answer accuracy for matched queries, users experienced a faster, more intuitive way to interact with institutional knowledge.
HR consultants using the HRMS platform were burdened with the time-consuming task of manually extracting essential details, such as name, email, skills, and experience, from a high volume of uploaded resumes. This manual process became virtually impossible when handling resumes at scale, significantly hampering efficiency.
Sphinx developed an AI-powered Resume Parsing Engine that could process multiple resumes in parallel, offering both scalability and speed. Integrated with NLP frameworks such as SpaCy and NLTK, and supported by Python’s Flask and SQLAlchemy ORM, the system automatically detected whether a resume was a text document or an image. If an image, it used AI-based OCR via PyTesseract and EasyOCR to convert it into readable text. Up to 30 resumes could be processed simultaneously, extracting vital candidate information like contact details, language proficiency, work experience, and skills. This structured data could be retrieved individually or in batches as needed.
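The extraction step can be illustrated with a regex-based sketch. The production engine relies on SpaCy and NLTK models rather than fixed patterns, and the skill vocabulary and sample resume below are hypothetical stand-ins; text-vs-image detection and OCR happen upstream.

```python
import re

# Hypothetical skill vocabulary; the production engine uses trained NLP models
# (SpaCy, NLTK) rather than a fixed list.
KNOWN_SKILLS = {"python", "sql", "react", "tensorflow", "excel"}

def parse_resume(text):
    """Pull basic candidate fields out of raw resume text with regexes:
    email, phone number, and any skills found in the known vocabulary."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s-]{8,}\d", text)
    words = set(re.findall(r"[a-zA-Z+#]+", text.lower()))
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        "skills": sorted(KNOWN_SKILLS & words),
    }

sample = """Jane Doe
jane.doe@example.com | +91 98765 43210
Skills: Python, SQL, React"""
fields = parse_resume(sample)
```

Running one such parser per worker is what lets a batch of resumes (up to 30 in the production system) be processed in parallel.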
The Resume Parser significantly reduced the time and effort required by HR professionals to screen resumes. Especially when dealing with large datasets, the system provided a near-instantaneous turnaround, increasing operational productivity and allowing consultants to focus on higher-value tasks.
Traditional HRMS systems could not support employees with internet-based queries or perform on-the-fly data analytics. Users wanted the ability to ask general questions and analyse raw data without relying on separate tools or departments.
ChatGPT was integrated into the HRMS platform to enable a conversational interface for answering general-purpose questions. Users could also upload CSV files and ask analytical questions about their data. The system, built using FastAPI and Python, could generate both summaries and graphical charts based on user queries, delivering instant business intelligence.
The integration of ChatGPT transformed the HRMS into a more intelligent and interactive platform. Users gained the ability to self-serve for both general knowledge and data analysis, significantly improving accessibility, efficiency, and user satisfaction within the organisation.
As part of their HRMS product suite, the client needed to provide users with a chatbot capable of handling routine HR tasks without human intervention. Key requirements included automating functions like checking leave balances, tracking attendance, querying HR policies, and submitting leave requests.
An AI-driven chatbot was created using RASA X, RASA NLU, and RASA Core, with additional backend development in Python, and hosted with NGINX and MariaDB. The bot allowed users to interact via a web-based platform to perform common HR functions. Natural language understanding was enhanced with BERT and T5 models to ensure conversational accuracy and contextual responses.
The chatbot significantly enhanced the user experience within the HRMS platform. It reduced the burden on HR departments by automating common queries and requests. The hybrid chat model also ensured a seamless mix of digital assistance with options for escalation to human support when needed.
A legal audit firm in Germany needed an automated way to assess whether its clients, ranging from small businesses to large enterprises, were GDPR compliant. Manual audits were time-consuming, and the firm also required a user-friendly interface where clients could ask GDPR-related questions and understand potential penalties.
An AI-powered chatbot was developed using RASA NLU, RASA Core, and Hugging Face BERT models, hosted on a Python-based backend with NGINX and MariaDB. The chatbot assessed GDPR compliance by asking users a structured set of questions and calculating potential penalties. Additionally, it responded to ad-hoc GDPR queries and allowed users to leave custom compliance-related questions for follow-up.
The chatbot offered a responsive, easy-to-use platform for GDPR compliance checking. Its hybrid workflow enabled a seamless experience combining automated assessments and manual query handling. Moreover, the AI model’s training process ensured transparency and traceability, making the solution reliable and scalable for various client sizes.
A leading biomedical instrumentation company sought AI/ML-driven predictive insights across its sales and customer lifecycle, drawing on data locked away in its existing CRM and related tools.
The solution involved extracting and transforming data from existing CRM and other tools to feed AI/ML models. These models, built using Keras, TensorFlow, Pandas, NumPy, and Matplotlib, powered a web-based platform offering role-based dashboards. Predictions and insights were generated for sales, marketing campaigns, lead scoring, and customer churn. The platform also integrated with social media APIs (Twitter, Facebook), used MongoDB and PostgreSQL for storage, and was developed with Microsoft .NET Core and React-Redux.
The implementation led to substantial business gains: improved sales governance, increased productivity, reduced call times, and cost savings. Most importantly, it enabled focused and goal-oriented sales activities by offering predictive insights across the customer lifecycle.
A global leader in video analytics and non-cooperative face recognition sought to develop highly accurate and defect-free surveillance products tailored for diverse sectors, ranging from oil and gas to Smart Cities. They required comprehensive product lifecycle management from an Offshore Development Centre (ODC), alongside scalable systems capable of operating across critical infrastructure such as airports, libraries, and banks.
Sphinx designed and developed a comprehensive suite of AI-driven security products, including facial recognition, license plate recognition, non-motion detection, video compression (H.264, HEVC, MPEG4), and mobile surveillance interfaces. Leveraging technologies such as Microsoft .NET, ASP.NET, C++, WPF, OpenCV, and FFMPEG, we integrated advanced video encoding/decoding tools and deployed proprietary algorithms to enhance visual analysis. The development also involved creating a smart pass system and implementing a vision stack tailored to a broad range of environments, including high-traffic urban zones.
The client realised a marked enhancement in operational security, supported by a scalable and intelligence-driven architecture. Our solutions also enabled on-demand business analytics and led to the development of a patented algorithm to redact and restore video segments. This allowed precise and secure image manipulation, increasing confidence in forensic and security applications.
A health and wellness provider needed a secure AI/ML-powered digital assistant capable of identifying early burnout symptoms and offering corrective support. The system required customisation by therapists, coaches, and nutritionists and needed to evolve daily through continuous learning.
We developed a responsive, role-based web platform using Angular 9, RASA (NLU and Core), Python, and Node.js. This system allowed the creation of intent libraries, contextual entities, and customisable conversation flows. Real-time dashboards and analytics were embedded to monitor interactions, while model training (manual and automated) was fully integrated with testing environments. The backend was secured using NGINX and powered by MariaDB for data storage and retrieval.
The result was a user-friendly and transparent platform that enabled a hybrid approach to mental wellness support. It combined digital assistance with human oversight, ensuring both scalability and personalisation. The assistant successfully delivered structured, traceable stress management outcomes across user cohorts, all delivered on time and within scope.
An international software development firm faced health-related concerns due to its touch-based attendance systems. They required a touchless, AI-enabled attendance solution that could authenticate multiple employees simultaneously without physical contact, especially critical during times of heightened health awareness.
We developed a facial recognition solution using Windows Server REST APIs and TensorFlow, integrating live video feeds through IP and web cameras. A FaceNet model was used for facial feature extraction, utilising triplet loss and optimisers like ADAM and ADAGRAD. The classifier employed was an SVM (Support Vector Machine), ensuring high accuracy for real-time authentication. The solution was implemented using C# .NET, Python, and CV2, supported by Visual Studio and TFS.
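FaceNet's triplet loss, mentioned above, can be stated in a few lines: it pushes an anchor's distance to a matching face (positive) below its distance to a non-matching face (negative) by at least a margin. The toy 2-D embeddings below are purely illustrative; real FaceNet embeddings are 128-dimensional unit vectors.

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss: penalise triplets where the anchor-positive
    distance is not smaller than the anchor-negative distance by `margin`."""
    d = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))  # squared L2 distance
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

# Toy 2-D "embeddings": the positive is farther from the anchor than the
# negative, so this triplet is badly ordered and incurs a loss.
anchor, positive, negative = [0.0, 0.0], [2.0, 0.0], [1.0, 0.0]
loss = triplet_loss(anchor, positive, negative)   # 4 - 1 + 0.2 = 3.2
```

Once training drives this loss to zero across triplets, same-person embeddings cluster tightly, and a downstream classifier such as the SVM described above can separate identities reliably.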
The client achieved seamless, real-time attendance tracking with zero physical interaction. Alerts were generated automatically, improving both safety and operational efficiency. The system also supported multi-user facial recognition in controlled environments, reinforcing workplace hygiene and data integrity.
An international real estate developer managing large public venues needed a thermal screening solution to detect elevated body temperatures among visitors and staff, helping to mitigate the risk of disease transmission across their properties.
We built a facial temperature recognition system using Thermal IP cameras, integrated with RESTful APIs on a Windows Server. The backend, developed in Python and C#, utilised TensorFlow for real-time thermal image processing and classification. The system was calibrated with black body references to ensure accuracy and deployed through WCF/Web API with web and mobile dashboards for monitoring.
The platform enabled contactless detection of fever symptoms for multiple individuals simultaneously, with instant alert generation and comprehensive MIS reporting. This not only improved public health safety within the client’s properties but also demonstrated effective automation and accountability through technology.
Legacy parking systems reliant on physical tickets or RFID cards were not only environmentally unfriendly and prone to misuse but also inefficient in terms of vehicle identification. The client needed an AI-powered solution to automate vehicle access and billing without manual intervention.
We deployed a system leveraging IP cameras and TensorFlow-based license plate recognition (LPR) to automate entry and exit processes. At the point of entry, vehicle images were captured and time-stamped; at exit, the system cross-referenced entry data and calculated parking fees. The solution was delivered using Python, C# .NET, WCF/Web API, and ONVIF protocols, supported by third-party SDKs and integrated via Visual Studio IDE.
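The entry/exit billing logic can be sketched as follows. The plate numbers, tariff, and rounding rule are illustrative assumptions, and the LPR step that recognises the plate from the camera image happens upstream.

```python
from datetime import datetime, timedelta

BLACKLIST = {"MH12XX0000"}   # hypothetical blacklisted plate
RATE_PER_HOUR = 40           # assumed tariff, not the client's actual pricing

def vehicle_entry(plate, entry_time, log):
    """Record a recognised plate with its entry timestamp; deny blacklisted vehicles."""
    if plate in BLACKLIST:
        return False
    log[plate] = entry_time
    return True

def vehicle_exit(plate, exit_time, log):
    """Cross-reference the entry record and bill for elapsed time,
    rounded up to the next hour with a one-hour minimum."""
    elapsed_s = (exit_time - log.pop(plate)).total_seconds()
    hours = max(1, -(-int(elapsed_s) // 3600))   # ceiling division
    return hours * RATE_PER_HOUR

log = {}
t0 = datetime(2024, 1, 1, 9, 0)
vehicle_entry("KA01AB1234", t0, log)
fee = vehicle_exit("KA01AB1234", t0 + timedelta(hours=2, minutes=15), log)  # 3 hours billed
```

Because the plate itself is the key, no ticket or RFID card is needed, which is what removes the misuse and waste of the legacy approach.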
The client saw a dramatic reduction in human error and unauthorised access. Vehicle details, entry timestamps, and payment histories were logged and processed automatically, with blacklisted vehicles denied entry. The system ensured efficient traffic flow, eco-friendliness, and enhanced security, transforming the parking experience into a seamless, smart operation.
In response to COVID-19 protocols, organisations needed a way to ensure that employees entering their premises were wearing masks. The challenge was to detect multiple human faces simultaneously, recognise whether individuals were wearing masks, and trigger notifications if non-compliance was observed. The goal was not just detection but to use facial recognition as a touchless access and alert system in real time.
A robust application was developed using TensorFlow for image recognition, CV2 as the camera module, and Caffe models for facial detection. Built on a Windows Server environment, the backend exposed its features through REST APIs, while client applications for both web and mobile were configured to display access statuses and MIS reports. The system functioned seamlessly with IP camera feeds in controlled environments. The entire tech stack included C++, Python, C# .Net, Visual Studio 2019 with TFS, and WCF/Web API for system integration and communication.
The result was a highly effective system capable of detecting multiple faces simultaneously and issuing mask-related alerts in real time. It enabled organisations to monitor employee compliance without manual checks, thereby reducing transmission risks. Additionally, it introduced touchless facial recognition, enhancing both hygiene and operational efficiency in workplace access control.