Real-Time Quality Check for AI - Human in the Loop
An AI developer wants to perform dip-stick (spot-check) testing of their model's output and needs a simple, effective way to do so - for example, the Predible use case.
Create appropriate projects
Create account on CARPL at demo.carpl.ai
Create Dataset
Name: QC_Dataset
Description: Dataset for Real-Time QC of AI
Modality: <Choose appropriate>
```python
import requests

url = "https://demo.carpl.ai/api/v1/create_dataset"
payload = {
    'dataset_name': 'QC_Dataset',
    'dataset_description': 'Dataset for Real-Time QC of AI',
    'modality': 'DX'
}
# Replace <access_token> with the token returned by the login call.
headers = {'Authorization': 'Bearer <access_token>'}

response = requests.request("POST", url, headers=headers, data=payload,
                            allow_redirects=False)
print(response.text.encode('utf8'))
```
Create Validation Project
Name: Real-Time QC Validation
Description: A validation project to check for AI errors in real-time - human in the loop
Algorithm: Run Inferencing <Choose appropriate>
Dataset: QC_Dataset
```python
import requests

url = "https://demo.carpl.ai/api/v1/create_project"
payload = {
    'project_name': 'Real-Time QC Validation',
    'project_description': 'A validation project to check for AI errors in real-time - human in the loop',
    'algorithm': '<algorithm_id>',
    'dataset_id': '<dataset_id>'
}
# Replace <access_token> with the token returned by the login call.
headers = {'Authorization': 'Bearer <access_token>'}

response = requests.request("POST", url, headers=headers, data=payload,
                            allow_redirects=False)
print(response.text.encode('utf8'))
```
Upload Data and Run Inferencing
One-time checks
```python
# Log in to obtain an access token.
import requests

url = "https://demo.carpl.ai/api/v1/login"
payload = {'emailid': '####', 'password': '####'}

response = requests.request("POST", url, data=payload)
print(response.text.encode('utf8'))
```

```python
# List available datasets (to look up the dataset_id).
import requests

url = "https://demo.carpl.ai/api/v1/datasets"

response = requests.request("POST", url)
print(response.text.encode('utf8'))
```

```python
# List available projects (to look up the project_id).
import requests

url = "https://demo.carpl.ai/api/v1/projects"

response = requests.request("POST", url)
print(response.text.encode('utf8'))
```
Upload data to QC_Dataset
```python
import requests

url = "https://demo.carpl.ai/api/v1/upload"
payload = {'dataset_id': '<dataset_id>', 'anon': '0/1'}  # set anon to '0' or '1'
files = [('file', open('/path/to/file', 'rb'))]
# Replace <access_token> with the token returned by the login call.
headers = {'token': '<access_token>'}

response = requests.request("POST", url, headers=headers, data=payload,
                            files=files)
print(response.text.encode('utf8'))
```
Run inference in Validation Project
```python
import requests

url = "https://demo.carpl.ai/api/v1/run_algorithm_val"
payload = {
    'project_id': '<validation_id>',
    'studies': '["d21f59437e74298df2d4ab453ee2c9f0e8319f1283369175b58054532d5fcd70"]'
}

response = requests.request("POST", url, data=payload)
print(response.text.encode('utf8'))
```
Repeat steps 2 and 3 every time you load new data.
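The API calls above can be chained into a single helper for the repeat cycle (upload new data, then run inference). This is a minimal sketch, not an official client: the endpoint paths come from the snippets above, but the response field names (`token`, `study_id`) are assumptions about the JSON shape and may need adjusting against the actual API responses.

```python
import requests

BASE = "https://demo.carpl.ai/api/v1"

def run_qc_cycle(email, password, dataset_id, project_id, dicom_path):
    """Log in, upload one study, and trigger inference on it.

    Endpoint paths match the snippets above; the 'token' and
    'study_id' response fields are assumptions -- check the actual
    JSON returned by the API before relying on them.
    """
    # 1. Log in to obtain an access token (assumed 'token' field).
    login = requests.post(f"{BASE}/login",
                          data={"emailid": email, "password": password})
    token = login.json()["token"]

    # 2. Upload the DICOM file into the dataset.
    with open(dicom_path, "rb") as fh:
        up = requests.post(f"{BASE}/upload",
                           headers={"token": token},
                           data={"dataset_id": dataset_id, "anon": "1"},
                           files=[("file", fh)])
    study_id = up.json()["study_id"]  # assumed field name

    # 3. Run inference in the validation project on the new study.
    return requests.post(f"{BASE}/run_algorithm_val",
                         data={"project_id": project_id,
                               "studies": f'["{study_id}"]'})
```

Calling this helper once per new file implements the "repeat steps 2 and 3" loop described above.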
Log in to CARPL
Click on Validation Projects
Click on Classification
View the Real-Time QC Validation Project
A list of cases with inference results will appear
Click on “Details >> View”
Click on “View Dicom” to see the case in CARPL Viewer
Click on the GT column on the “View” pop-up and modify the Ground Truth appropriately
Click on the “Validation” tab and observe the Area Under Curve / Dot-Plot / Summary Stats
Export a Validation Report by clicking on “Generate Report” on the top right corner
This process can be repeated whenever new data is added to CARPL