Smart Auto Assessment and Roadside Technical Help Interface
Chirayu Sanghvi and Alina Vereshchaka
Department of Computer Science and Engineering,
State University of New York at Buffalo
Emergency vehicle accidents pose significant challenges to operational efficiency and financial stability within emergency services, impacting organizations and communities. These incidents result in substantial repair costs, prolonged vehicle downtime, and potential legal liabilities, straining crucial public safety resources. Additionally, issues like inflated repair costs, inefficient roadside assistance, and lengthy insurance processes compound these challenges. The Smart Auto Assessment and Roadside Technical Help Interface (SAARTHI) provides a comprehensive end-to-end solution, leveraging computer vision technologies for rapid damage assessment, consistent repair cost estimation, immediate repair coordination, towing services, and faster insurance processes. By addressing these issues, SAARTHI enhances the efficiency and reliability of emergency response systems, ensuring continuous service coverage and improved operational readiness. This socially beneficial solution strengthens emergency services and promotes community safety and wellbeing.
Roadside Assistance · Repair Cost Estimation · Prompt Help
The SAARTHI framework integrates advanced AI techniques to streamline vehicle damage assessment and repair coordination for emergency vehicle drivers. Users register on the SAARTHI platform to view damage history and start new assessments by uploading vehicle images. AI processes these images for damage detection, non-maximum suppression, and cost estimation, presenting a detailed assessment report. A chatbot, developed with JavaScript and BotUI, assists users with report creation and connects them to the nearest SAARTHI agent via Google Maps API for repair requests. Verified SAARTHI agents, available 24/7, are registered by an administrator and assigned unique IDs for tracking. The system identifies the nearest available agent using the Haversine formula, ensuring prompt assistance. Post-repair, users receive a detailed cost breakdown, which is added to their organization's monthly report.
Our approach to car damage assessment aims to precisely detect, classify, and contour vehicle damages, leveraging the capabilities of instance segmentation and object detection. We utilize a Mask R-CNN model, which features a ResNet-50 backbone, implemented via the MMDetection toolbox. Initially pre-trained on the COCO dataset, the model is fine-tuned on the CarDD dataset to enhance its performance specifically for emergency vehicle assessments. This fine-tuning process ensures higher accuracy and reliability in detecting various types of vehicle damages, providing a robust solution for comprehensive damage evaluation.
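The fine-tuning setup described above can be sketched as an MMDetection config fragment. The base config name, data paths, and checkpoint path below are illustrative placeholders, not the exact files used in our experiments; the six class names follow the CarDD dataset's damage categories.

```python
# Sketch of an MMDetection config for fine-tuning a COCO-pretrained
# Mask R-CNN (ResNet-50 + FPN) on CarDD. Paths and the base config
# name are placeholders.
_base_ = 'mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'

# CarDD defines six damage categories.
classes = ('dent', 'scratch', 'crack', 'glass shatter', 'lamp broken', 'tire flat')

model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=6),   # replace the 80-class COCO heads
        mask_head=dict(num_classes=6)))

data = dict(
    train=dict(
        img_prefix='data/CarDD/train/',
        classes=classes,
        ann_file='data/CarDD/annotations/instances_train.json'))

# Start from COCO weights so only the detection heads need substantial retraining.
load_from = 'checkpoints/mask_rcnn_r50_fpn_1x_coco.pth'
```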
To enhance vehicle damage detection accuracy, we utilize the Non-Maximum Suppression (NMS) algorithm. NMS resolves overlapping bounding boxes by ensuring each damage type is represented by a single, most relevant bounding box. It computes the Intersection over Union (IoU) to measure overlap, retaining the box with the highest confidence score and suppressing others with an IoU above a specified threshold. This iterative process continues until only significant bounding boxes remain. By eliminating redundancy, NMS significantly improves the precision of damage detection, crucial for accurate repair cost estimation.
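The greedy NMS procedure described above can be sketched as follows; this is a minimal NumPy implementation of the standard algorithm, not the exact routine used in SAARTHI.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns the indices of the retained boxes."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle between box i and every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)   # Intersection over Union
        # Suppress boxes whose overlap with box i exceeds the threshold.
        order = rest[iou <= iou_threshold]
    return keep
```

For example, two heavily overlapping detections of the same dent collapse to the single higher-confidence box, while a disjoint scratch detection is kept.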
Repair Cost calculation demonstration
Our repair cost estimation process begins with the refined detection results obtained through Non-Maximum Suppression (NMS). Each detected damage type is associated with a predefined base cost, adjusted by a severity factor derived from the bounding box area and confidence score. The final repair cost is calculated using the following formula:
Cost = Base Cost × (1 + (Area × Score) / Normalization Factor)
This method ensures precise and reflective cost estimations based on damage severity. By focusing on accurate detection and cost calculation, our approach provides a robust solution for vehicle damage assessment and repair coordination.
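The cost formula above reduces to a short function. The per-class base costs and the normalization factor below are illustrative placeholders, not the values used in deployment.

```python
def repair_cost(damage_type, area, score, base_costs, normalization=1e5):
    """Cost = Base Cost x (1 + (Area x Score) / Normalization Factor).

    area: bounding-box area in pixels; score: detection confidence in [0, 1].
    base_costs maps each damage class to a predefined base cost."""
    base = base_costs[damage_type]
    return base * (1 + (area * score) / normalization)

# Hypothetical base costs per damage class (currency units).
BASE_COSTS = {'dent': 150.0, 'scratch': 80.0, 'crack': 200.0}

# A 200 x 100 px dent detected with 0.9 confidence:
# 150 x (1 + 20000 x 0.9 / 100000) = 177.0
cost = repair_cost('dent', 200 * 100, 0.9, BASE_COSTS)
```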
Salient Object Detection (SOD) enhances the detection of car damages by refining the boundaries of irregular and slender shapes. SOD focuses on highlighting the most noticeable and severe damage areas while filtering out less relevant parts. This is particularly useful in complex scenes where it is more important to identify key areas of damage than to classify them. By applying a modified U-Net model, SOD improves the precision and clarity of detected damages. Using the CarDD dataset, SOD processes images to accurately locate and define the boundaries of damages. This approach significantly enhances the accuracy of vehicle damage detection, providing clearer and more precise assessments for repair coordination.
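The boundary-refinement step can be sketched as post-processing of the U-Net's saliency map: binarize the per-pixel probabilities and extract the tight extent of the salient (damaged) region. The 0.5 threshold and the function below are illustrative, not SAARTHI's actual pipeline.

```python
import numpy as np

def salient_region_box(saliency, threshold=0.5):
    """Binarize a saliency map and return the tight bounding box
    (x1, y1, x2, y2) of the salient region, or None if nothing is salient.

    saliency: (H, W) array of per-pixel saliency probabilities in [0, 1]."""
    mask = saliency >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)                 # row/column indices of salient pixels
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```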
To optimize emergency vehicle assistance, the system identifies the nearest available repair agent when a user requests help. The distance between the user and each agent is calculated using the Haversine formula, ensuring precise measurement based on geographic coordinates. The formula is as follows:
d = 2r * atan2(sqrt(a), sqrt(1 - a))
where a = sin^2(Δφ / 2) + cos(φ1) * cos(φ2) * sin^2(Δλ / 2),
Δφ and Δλ are the differences in latitude and longitude, φ1 and φ2 are the latitudes of the two points, and r is the Earth's radius.
Agents are then sorted by proximity, and the closest agent is contacted first. If the nearest agent is unavailable or at full capacity, the system automatically moves to the next closest agent. This method ensures rapid and efficient allocation of resources, minimizing response times and enhancing the overall efficiency of the emergency repair process.
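The distance computation and agent-allocation steps above can be sketched as follows. The agent record schema (`lat`, `lon`, `available`) is an illustrative assumption, not SAARTHI's actual data model.

```python
from math import radians, sin, cos, atan2, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)      # Δφ
    dlmb = radians(lon2 - lon1)      # Δλ
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * atan2(sqrt(a), sqrt(1 - a))

def nearest_available_agent(user, agents):
    """Sort agents by proximity to the user and return the closest available one.

    user: (lat, lon) tuple; agents: list of dicts with 'lat', 'lon', 'available'."""
    ranked = sorted(agents,
                    key=lambda a: haversine_km(user[0], user[1], a['lat'], a['lon']))
    for agent in ranked:
        if agent['available']:        # skip agents at full capacity
            return agent
    return None
```

If the nearest agent is unavailable, the loop simply falls through to the next closest, mirroring the fallback behaviour described above.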
For our research using the CarDD dataset, the model achieved a bounding box Mean Average Precision (bbox mAP) of 0.5380, indicating strong object detection performance with high precision at IoU thresholds of 0.5 and 0.75. The segmentation masks (seg. mAP) achieved an overall mAP of 0.5190, demonstrating high precision across different object sizes. Training metrics showed a consistent upward trend in accuracy and a steady decline in loss, reflecting increasingly accurate predictions and convergence. The model exhibited excellent performance across various object sizes, particularly excelling with larger objects. Additionally, it maintained robust detection capabilities even at higher IoU thresholds. These results confirm the model's effectiveness in damage detection and segmentation, ensuring reliable and precise assessments for vehicle repair processes, enhancing the overall efficiency and accuracy of the SAARTHI framework.
Our model’s performance in salient object detection during testing shows strong results. It achieved an average loss of 0.5826, indicating a good match between predicted and actual values, and an accuracy of 0.8645, reflecting a high proportion of correct predictions. Precision and recall, both at 0.87, suggest effective identification of true positives with balanced accuracy. The F1-score of 0.87 confirms the model's robust overall performance. These results demonstrate the model's strong capability in accurately detecting and classifying salient objects in vehicle damage assessment.