Dr. Chollette Olisah - Principal Investigator

Artificial Intelligence Enhanced Smallholder Farm Software Tool

Funder: Global Challenges Research Fund (GCRF) and BBSRC, £21,000: The project aimed to develop a proof-of-concept decision support system comprising farmer education modules, farmer-to-market access modules, and a novel crop yield prediction model. The prediction model was designed to help farmers assess how the current state of the farm would affect crop yield. The project was successfully executed and delivered within the agreed timeframe.

Role: Africa Lead. 

Outcome: 1) Proof-of-concept decision support system developed into an Android mobile application. 2) Publication.


Augmented Berry Vision – Real-time Augmented Display of Spectral Ripeness Cues in Berry Farms

Funder: Innovate UK, 50877 - £90,189: UWE's collaborative role on the project covered the design and development of a proof-of-concept model for detecting ripeness in mature blackberry fruit, where ripeness cues are visually challenging for fruit growers. The work centred on a novel multi-input convolutional neural network (CNN) ensemble classifier for detecting subtle traits of ripeness in blackberry fruits.
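
As a general illustration of the multi-input CNN idea described above, the sketch below fuses two image branches before classification. The input modalities (an RGB crop and a single-channel spectral crop), the branch design, and the class count are assumptions for illustration, not the project's actual architecture.

    # Illustrative sketch only: a two-branch multi-input CNN with late fusion,
    # assuming an RGB crop and a second (e.g. spectral) crop per berry.
    import torch
    import torch.nn as nn

    class BranchCNN(nn.Module):
        """Small convolutional feature extractor for one input modality."""
        def __init__(self, in_channels):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
        def forward(self, x):
            return self.features(x).flatten(1)  # (batch, 32)

    class MultiInputRipenessNet(nn.Module):
        """Fuses both branches and predicts ripeness classes (e.g. unripe/ripe/overripe)."""
        def __init__(self, num_classes=3):
            super().__init__()
            self.rgb_branch = BranchCNN(3)
            self.spec_branch = BranchCNN(1)
            self.classifier = nn.Linear(32 + 32, num_classes)
        def forward(self, rgb, spec):
            fused = torch.cat([self.rgb_branch(rgb), self.spec_branch(spec)], dim=1)
            return self.classifier(fused)

    model = MultiInputRipenessNet()
    logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64))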

Role: Principal Investigator and Technical lead.

Outcome: 1) Functional model and 2) Publication.


AI for Quality Control – Data-Driven Surface Inspection – National Composites Centre

Funder: Collaborative industry project with NCC, £188,642: This project aimed to design and develop a robust machine learning framework, including a deep convolutional neural network, for out-of-plane wrinkle defect detection and classification during the preform stage of wind turbine blade manufacturing.
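
To show how a CNN classifier of this kind is typically applied to large surface scans, the hedged sketch below tiles a scan into patches and flags those a trained model labels as defective. The patch size, stride, and class index are illustrative assumptions, not project specifics.

    # Illustrative sketch only: patch-wise application of a trained CNN classifier
    # to a large surface image, flagging patches predicted as "wrinkle".
    import numpy as np
    import torch

    def classify_patches(scan, model, patch=128, stride=64, wrinkle_class=1):
        """scan: 2D numpy array (grayscale surface image); model: trained torch classifier."""
        flagged = []
        for y in range(0, scan.shape[0] - patch + 1, stride):
            for x in range(0, scan.shape[1] - patch + 1, stride):
                tile = scan[y:y + patch, x:x + patch].astype(np.float32)
                tensor = torch.from_numpy(tile).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
                with torch.no_grad():
                    pred = model(tensor).argmax(dim=1).item()
                if pred == wrinkle_class:
                    flagged.append((x, y))  # top-left corner of suspect patch
        return flagged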

Role: Co-Investigator and technical lead.

Outcome: 1) Proof-of-concept automated system for out-of-plane wrinkle defect detection and classification in an actual manufacturing process. 2) Paper in preparation.



Detection of Saturation and Uneven Illumination in a Single Image to aid ADAS Camera Calibration 

Funder: Belron International Ltd - £42,421.67: A project to reliably localise regions of uneven illumination and light saturation on a calibration board, in order to overcome calibration system failures caused by occlusion of light over the calibration marks. Fundamental machine vision principles were applied to localise the regions of uneven illumination and saturation on the conventional Belron workshop calibration board.
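
The hedged sketch below shows one conventional machine vision route to this kind of localisation: intensity thresholding for saturation, and comparison against a heavily blurred illumination estimate for unevenness. The thresholds and kernel sizes are illustrative assumptions, not the delivered method.

    # Illustrative sketch only: localising saturated and unevenly illuminated regions
    # in a single grayscale image using basic OpenCV operations.
    import cv2
    import numpy as np

    def find_problem_regions(gray, sat_thresh=250, unevenness=40, blur_ksize=51):
        """gray: uint8 grayscale image of the calibration board."""
        # Saturation: pixels at (or near) the top of the dynamic range.
        saturated = (gray >= sat_thresh).astype(np.uint8) * 255
        # Uneven illumination: large deviation from a smoothed illumination estimate.
        background = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
        uneven = (cv2.absdiff(gray, background) > unevenness).astype(np.uint8) * 255
        # Clean small speckles before reporting the affected regions.
        kernel = np.ones((5, 5), np.uint8)
        saturated = cv2.morphologyEx(saturated, cv2.MORPH_OPEN, kernel)
        uneven = cv2.morphologyEx(uneven, cv2.MORPH_OPEN, kernel)
        return saturated, uneven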

Role: Co-Investigator and Technical lead. 

Outcome: Software.

Machine Learning for Advanced Driver Assistance System (ADAS) Camera Pose Estimation

Stage 1

Funder: Belron International Ltd - £33,000: This was an investigation into ADAS camera pose estimation through the development of a functional machine vision (for preprocessing the images) and machine learning (for discovering changes in the yaw and roll patterns of the ADAS camera) framework. The framework included the design and development of a pose estimation model that achieved high accuracy, and the work identified constraints that could limit the model's application in the real world. The goal of the project was to enable an ADAS camera to be replaced at a specified position with minimal error, without manufacturer information.
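
As a generic illustration of the learning problem described above (estimating small pose angles from image-derived features), the sketch below fits a simple multi-output regressor. The feature representation, placeholder data, and model choice are assumptions for illustration, not the project's framework.

    # Illustrative sketch only: regressing camera pose angles (e.g. yaw, pitch)
    # from precomputed image features using a simple learned model.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Assumed data: one feature vector per calibration image (e.g. derived from
    # detected calibration-board keypoints) and the corresponding pose angles.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))        # placeholder features
    y = rng.normal(size=(200, 2))         # placeholder [yaw, pitch] in degrees

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)

    predicted_pose = model.predict(X[:1])  # estimated [yaw, pitch] for one image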


Stage 2

Funder: Belron International Ltd - £27,777.78: An extended investigation into the application possibilities of the proposed ADAS camera pose estimation model at Belron workshops. Experiments across different dimensions of the real-world constraints established the conditions under which the model can be applied. The ADAS pose (pitch and yaw) estimation errors fell well within the acceptable margin, and application recommendations were made to Belron International Limited.


Role: Co-Investigator – Technical lead. 

Outcome: Functional model.

Vision Detection for Early Signs of DD Lesions and Lameness within Dairy Cattle

Funder: UKRI £491,929 (£148,746): A collaborative project between HoofCount, AgriEPI and UWE. The project aimed to explore machine vision and machine learning techniques for the design and development of a framework for early detection of digital dermatitis (DD) and white line lesions on the hooves of dairy cattle. Because data were collected in real time, each cow had to be associated with its ID for identification, and the redundant frames in the captured video had to be reduced to a manageable set of relevant frames. A novel filtering algorithm was therefore designed and developed to sort hoof frames into separate categories, ensuring that the "lifted hoof" frames are preserved for DD and white line detection. The algorithm performed with a high level of accuracy and minimal error.
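
To illustrate the general shape of such a frame-filtering step (not the project's actual algorithm), the hedged sketch below keeps only the video frames in which a hypothetical detector reports a lifted hoof with sufficient confidence, tagging each kept frame with the cow's ID.

    # Illustrative sketch only: reducing a video to the frames relevant for hoof
    # inspection, assuming a hypothetical detector that scores "lifted hoof" per frame.
    import cv2

    def filter_lifted_hoof_frames(video_path, cow_id, detector, score_thresh=0.8):
        """detector(frame) is assumed to return a confidence score in [0, 1]."""
        kept = []
        cap = cv2.VideoCapture(video_path)
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if detector(frame) >= score_thresh:
                kept.append({"cow_id": cow_id, "frame_idx": frame_idx, "frame": frame})
            frame_idx += 1
        cap.release()
        return kept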


Role: Technical Lead.

Outcome: A novel filtering algorithm with a hoof detection model.

Early Detection of Cancer in Cases of Barrett's Oesophagus using Robotic Endoscopic Image Analysis through Deep Image Retrieval

The project focused on evaluating the capability of an AI/ML algorithm to differentiate between a normal oesophagus and Barrett's oesophagus. Using the PIVI criteria, an NPV, sensitivity and specificity of 100%, 100% and 90% were achieved, respectively. Although the AI/ML model shows promising results for this classification problem, it will be challenged when two similar samples, low-grade dysplasia (LGD) and inflammation, are presented to it. A follow-on project therefore aims to design and develop AI/ML algorithms for distinguishing an LGD from a regenerative inflammation, in order to detect an LGD lesion accurately and predict its risk of progression to high-grade dysplasia (HGD). Progression, as referred to here, is the AI/ML computational grouping of LGD samples into what the model considers to be risk levels, which are then used to infer progression to HGD.
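
For clarity on the reported figures, the short sketch below shows how NPV, sensitivity and specificity are computed from binary confusion-matrix counts; the counts used here are placeholders, not the study's data.

    # Illustrative sketch only: computing NPV, sensitivity and specificity from
    # binary confusion-matrix counts (placeholder numbers, not study data).
    def screening_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)   # true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        npv = tn / (tn + fn)           # negative predictive value
        return sensitivity, specificity, npv

    print(screening_metrics(tp=45, fp=5, tn=45, fn=0))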


Role: Technical lead.

Outcome: Barrett's Oesophagus classification model.