Douglas J. Kurrant, PhD, P.Eng.

CURRENT PROJECTS

USE DEEP LEARNING TO DEVELOP COMPUTER VISION BASED CLASSIFIER
(for BowRiver Precision Farm Technologies)
I am developing a computer vision tool that is applied to crop imagery to classify the growth stage of the crop. The technology operates in real time and helps farmers time critical management activities such as herbicide application, insect treatment, and fungicide decisions. The tool becomes particularly important when farmers are managing multiple crops, each at a different growth stage. The classification results are also used to validate and refine the corresponding phenology model of the crop; the phenology model development is described in the "Apply machine learning techniques to weather and soil data to enhance phenology models" project.
I have teamed up with a corporation that farms on a large scale in order to refine the business model and to improve the functionality of the technology. Technical refinements include the ability to identify and classify weeds and pests that may also appear in the crop images.
Over the coming growing season I will also acquire an extensive repository of imagery from a multitude of crops on my farm, and augment this database with images acquired over the corporation's operation. This will improve the generalization and accuracy of the deep learning models used to classify crop growth stage, along with the weeds and pests that may also appear in the images.
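As a flavour of the classification pipeline, here is a toy numpy sketch of the conv -> ReLU -> pool -> softmax structure such a classifier is built on. The stage labels are illustrative placeholders (a real system would follow a staging scale such as BBCH), and the weights here are random; the production tool uses a trained deep network rather than this single layer.

```python
import numpy as np

# Hypothetical growth-stage labels -- illustrative only.
STAGES = ["emergence", "tillering", "stem_extension", "heading", "ripening"]

def conv2d(img, kernels):
    """Naive 'valid' 2-D convolution: img (H, W, C), kernels (K, kh, kw, C)."""
    K, kh, kw, C = kernels.shape
    H, W, _ = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1, K))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[i, j, k] = np.sum(img[i:i+kh, j:j+kw, :] * kernels[k])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_growth_stage(img, kernels, w, b):
    """Conv -> ReLU -> global average pool -> linear -> softmax."""
    feat = np.maximum(conv2d(img, kernels), 0.0)   # ReLU feature maps
    pooled = feat.mean(axis=(0, 1))                # global average pooling
    return softmax(pooled @ w + b)                 # class probabilities

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                      # stand-in crop image patch
kernels = rng.standard_normal((8, 3, 3, 3)) * 0.1  # random (untrained) filters
w = rng.standard_normal((8, len(STAGES))) * 0.1
b = np.zeros(len(STAGES))

probs = classify_growth_stage(img, kernels, w, b)
print(STAGES[int(np.argmax(probs))], probs.round(3))
```

In practice the feature extractor is many such layers deep and fine-tuned on labelled crop imagery, which is exactly why the image repository described above matters.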
I will post updates to the blog on all of the developments, and describe the project in more detail once the farming season starts.

APPLY MACHINE LEARNING TECHNIQUES TO WEATHER AND SOIL DATA TO ENHANCE PHENOLOGY MODELS
(for BowRiver Precision Farm Technologies)
I am implementing a network of field-specific IoT sensor platforms over a number of fields to gather weather and soil data for the associated crop phenology models. A key aim is to use the predicted results from the crop growth classifier described in the "Use deep learning to develop computer vision based classifier" project to validate and refine the corresponding phenology model of each crop.
The crop phenology model repository will be extended to include models for common weeds and insects. This will allow the growth stage predictions of the crops to be made relative to the predicted growth stages of the weeds and insects. Machine learning techniques are applied to these data in order to improve the accuracy and predictive quality of the phenology models.
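To give a sense of the kind of calculation the sensor data feed, here is a sketch of a classic phenology building block: growing degree day (GDD) accumulation with cumulative-GDD stage thresholds. The base/cap temperatures and the threshold values below are placeholders, not the actual model parameters (real values are crop- and cultivar-specific).

```python
def daily_gdd(t_max, t_min, t_base=5.0, t_cap=30.0):
    """Growing degree days for one day, capped-average method.
    t_base / t_cap are crop-specific; 5 C / 30 C are placeholders."""
    t_max = min(t_max, t_cap)
    t_min = max(t_min, t_base)  # common convention: clamp below base temp
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def predict_stage(temps, thresholds):
    """Accumulate GDD over (t_max, t_min) pairs and return the last
    stage whose cumulative-GDD threshold has been reached."""
    total = sum(daily_gdd(hi, lo) for hi, lo in temps)
    stage = "pre-emergence"
    for name, needed in thresholds:  # ordered (stage, cumulative GDD)
        if total >= needed:
            stage = name
    return total, stage

# Illustrative thresholds only.
WHEAT_THRESHOLDS = [("emergence", 120), ("tillering", 350),
                    ("stem extension", 650), ("heading", 950)]

temps = [(22.0, 8.0)] * 40            # 40 days of stand-in sensor readings
total, stage = predict_stage(temps, WHEAT_THRESHOLDS)
print(f"{total:.0f} GDD -> {stage}")
```

The machine learning layer sits on top of simple accumulators like this, correcting their predictions against observed growth stages (including the classifier's output from the computer vision project).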
I will post updates to the blog on all of the developments, and describe the project in more detail once the farming season starts.


RACERS EDGE - A HIGH OCTANE VARIANT OF MACHINE LEARNING FOR BRACKET RACING APPLICATIONS!
(for BowRiver Precision Farm Technologies)
I am developing an IoT sensor platform (STMicroelectronics STM32L496ZG microcontroller running an RTOS) positioned beside the race track to acquire weather, racetrack, and race car data. A front-end server (Avnet Quectel BG96 module) is incorporated into the sensor platform and collects the weather and race track data; the race car data are transferred to the front-end server via a pair of 915 MHz FSK packet radios. The front-end server consolidates these data and sends them to a back-end cloud server for storage and analytics. Machine learning algorithms are then applied to the data to infer a recommended 'dial-in time', which is communicated, along with other relevant information, to the driver's smart device via the back-end cloud server.
The 'dial-in time' is the time the driver estimates it will take his or her car to cross the finish line. If the car goes faster than its dial-in (called 'breaking out'), it is disqualified. The crucial point of bracket racing is that a premium is placed on the consistency of performance of the driver and car rather than on raw speed. Hence, the motivation for developing this tool is to improve the accuracy and consistency of the racer's 'dial-in times'.
I have teamed up with an experienced NHRA bracket racer for this upcoming season to refine the product and the corresponding business model. I am also extending the product to predict junior racers' 'dial-in times'. The goal is to demonstrate the utility of the product to the NHRA community. Future work includes teaming up with an NHRA mechanic to extend the tool's artificial intelligence capabilities to refine the race car's fuel injection and ignition timing, further improving the accuracy of the 'dial-in times'. The next version of the tool will also move the machine learning inference from the cloud to an 'edge' computing approach.
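As a much-simplified illustration of the inference step, here is a toy least-squares model that maps track conditions to a predicted elapsed time. The features, data, and model form are all illustrative; the deployed system uses richer sensor data and more capable models.

```python
import numpy as np

# Stand-in training data: each row is (density_altitude_ft, track_temp_C)
# with the car's measured elapsed time in seconds. All values illustrative.
X_raw = np.array([[1200, 25], [1500, 28], [ 900, 22],
                  [2000, 33], [1700, 30], [1100, 24]], float)
y = np.array([10.42, 10.47, 10.38, 10.55, 10.50, 10.41])

# Ordinary least squares with an intercept column; a production model
# would also use humidity, wind, 60-ft times, and far more runs.
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def recommend_dial_in(density_alt, track_temp):
    """Predict the car's elapsed time for today's conditions."""
    return coef @ np.array([1.0, density_alt, track_temp])

print(f"suggested dial-in: {recommend_dial_in(1400, 27):.2f} s")
```

The same prediction would be pushed from the back-end cloud server to the driver's smart device; moving this inference to the sensor platform itself is the 'edge' computing step planned for the next version.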
I am really looking forward to acquiring the test, training, and validation data for this project! I will post all of the cool pictures and updates to the blog on all of the developments, and describe the project in more detail once the race season starts.

DEVELOP MACHINE LEARNING APPROACHES TO SEGMENT MEDICAL IMAGES
(for the Universities of Calgary and Manitoba)
With my colleagues at the Universities of Calgary and Manitoba, I have developed a robust and flexible medical image segmentation technique comprising an unsupervised machine learning method reinforced with hypothesis testing and statistical inference. Partitioning medical images into tissue types permits quantitative assessment of regions that contain a specific tissue. The assessment facilitates the evaluation of an imaging algorithm in terms of its ability to reconstruct the properties of various tissue types and to identify anomalies. The key advantage of the algorithm over other approaches, such as threshold-based segmentation, is that it supports this quantitative analysis without prior assumptions such as knowledge of the expected tissue property values. Moreover, it can be used in scenarios where there is a scarcity of data available for supervised learning.
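As a simplified illustration of the unsupervised component only (the actual technique additionally incorporates hypothesis testing and statistical inference), here is a plain k-means segmentation of a synthetic property map. The "tissue" values are placeholders, not measured dielectric properties.

```python
import numpy as np

def kmeans_segment(values, k, iters=50):
    """Cluster reconstructed property values into k tissue classes with a
    plain k-means loop (Lloyd's algorithm, evenly spaced initial centers)."""
    flat = values.reshape(-1, 1).astype(float)
    centers = np.linspace(flat.min(), flat.max(), k).reshape(-1, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(flat - centers.T), axis=1)
        new = np.array([flat[labels == j].mean() if (labels == j).any()
                        else centers[j, 0] for j in range(k)]).reshape(-1, 1)
        if np.allclose(new, centers):
            break
        centers = new
    return labels.reshape(values.shape), centers.ravel()

# Synthetic "image": background, one tissue region, one inner anomaly
# (values are placeholders), plus measurement-like noise.
img = np.full((20, 20), 5.0)
img[5:15, 5:15] = 20.0
img[8:12, 8:12] = 45.0
img += np.random.default_rng(1).normal(0, 0.5, img.shape)

labels, centers = kmeans_segment(img, k=3)
print(sorted(centers.round(1)))
```

Note that no expected property values were supplied, only the number of classes; this is the sense in which the unsupervised approach avoids the prior assumptions a threshold-based method needs.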
A manuscript was prepared and recently submitted to IEEE Antennas and Wireless Propagation Letters - special cluster on Machine Learning Applications in Electromagnetics, Antennas, and Propagation. I will post updates to the blog on all of the developments, and describe the project in more detail once the manuscript has been accepted for publication.
Presently, I am conducting a study that compares results obtained using a simple threshold segmentation technique with results from unsupervised and supervised machine learning techniques. I am using UNet and Mask R-CNN for the supervised techniques. I have started preparing a manuscript that describes the study and I will post updates to the blog on all of the developments at a later date.

DEVELOP SIGNAL PROCESSING ALGORITHM TO EXTRACT SKIN RESPONSE FROM BACKSCATTERED FIELDS
(for the University of Calgary)
For radar-based microwave breast imaging, the skin response overwhelms the backscattered responses that arise from the internal structure (i.e., scatterers associated with the gland and tumor). In order to image and detect a tumor response, it is critical that the skin response be suppressed without impacting the tumor response. I am presently developing a signal processing algorithm for the University of Calgary that operates on time-domain backscattered fields (i.e., radar reflection data) in order to extract the skin response from the backscattered fields.
This is a very challenging and complex problem, as the gland and tumor responses are 50-80 dB below the skin response (not much above the noise floor!), and there is significant overlap between the skin and the gland/tumor responses. Hence, the individual responses cannot be delineated with a time gate or window (e.g., a Hann window). The 'classical' machine learning models that I previously implemented required hand-crafted features rather than a multi-layer CNN to extract the features. These classical approaches were successful for simple scenarios, but failed as the object of interest increased in complexity (e.g., an increase in the heterogeneity of the tissue in which the tumor is embedded).
In order to solve this complex and challenging problem, I developed a novel deep learning solution from scratch. Accordingly, the results need to be carefully examined and scrutinized. I will post updates to the blog on all of the developments, and describe the project in more detail once the results have been validated (and the manuscript has been submitted for review).
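For intuition, here is a minimal numpy sketch of a classical baseline for skin suppression, average-trace subtraction, which exploits the fact that the skin response is nearly identical across antenna channels while the tumor response is not. This is only an illustrative baseline on synthetic traces, not the deep learning solution described above; the signal shapes and amplitudes are invented (though the roughly 66 dB skin-to-tumor ratio is within the range noted above).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)           # normalized time axis

# Synthetic multi-antenna radar traces: a large skin response common to
# all channels, plus a small channel-shifted "tumor" response and noise.
n_ch = 16
skin = 100.0 * np.exp(-((t - 0.2) / 0.02) ** 2)
traces = np.empty((n_ch, t.size))
for ch in range(n_ch):
    tumor = 0.05 * np.exp(-((t - 0.5 - 0.01 * ch) / 0.02) ** 2)
    traces[ch] = skin + tumor + rng.normal(0, 0.001, t.size)

# Average-trace subtraction: removing the cross-channel mean suppresses
# the (channel-independent) skin response while the channel-dependent
# tumor response largely survives.
residual = traces - traces.mean(axis=0)

skin_win = t < 0.3
print("skin peak before/after:",
      np.abs(traces[:, skin_win]).max(),
      np.abs(residual[:, skin_win]).max())
```

The baseline works here because the synthetic skin response is identical across channels; in measured data the skin response varies with local skin thickness and antenna position, and it overlaps the gland/tumor responses in time, which is what motivates the learned approach.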


REGULARIZATION TECHNIQUES
(for the Universities of Calgary and Manitoba)
The science of reconstructing the dielectric properties of an object from electromagnetic field measurements is known as microwave tomography (MWT) and requires solving an inverse scattering problem. The typical approach is to recast the inverse scattering problem as the minimization of a suitable functional (i.e., a cost functional). There are a number of challenges that must be overcome in order to solve these problems.
First, the inverse scattering problem is highly nonlinear, and this nonlinearity increases with frequency. To increase resolution in the reconstructed image, the frequency of the illuminating field may be increased. While this leads to the possibility of resolving finer structures, it also causes problems when imaging large-scale structures (relative to the illuminating wavelength) and high-contrast objects. The reason is that the electric fields, or more specifically the scattered fields, are nonlinearly related to the inhomogeneity of the scattering objects. This nonlinearity is a consequence of multiple scattering, and it becomes more pronounced as the object's size increases relative to the illuminating wavelength or as the contrasts of the inhomogeneity become large.
Second, the inverse scattering problem is severely ill-posed. That is, small perturbations of the measurement data due to noise contamination, lead to large variations in the reconstructions. For typical MW measurement systems, the number of reconstruction elements (i.e., the dimension of the solution space) far exceeds the number of independent data resulting in non-unique solutions that contribute to the ill-posedness of the problem. The information collected by these systems is upper-bounded, so multi-view and multi-illumination strategies (i.e., increasing the number of transmitters and receivers around a breast) do not fully resolve this issue.
Third, regularization techniques are required to preserve stability but are difficult to implement and often lead to lower-resolution reconstructions. Here, regularization is defined in the mathematical sense: a process of introducing additional information in order to solve an ill-posed problem. Due to the nonlinear nature of the cost functional, a closed-form solution does not exist, which means that the microwave image reconstruction proceeds iteratively (i.e., the solution is approximated by converting the nonlinear problem into a series of linearized steps).
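To make the second and third challenges concrete, here is a small numpy sketch of zeroth-order Tikhonov regularization on a synthetic ill-conditioned linear system, standing in for one linearized step of the reconstruction. The operator, noise level, and regularization parameter are all illustrative, not taken from an actual MWT system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned forward operator standing in for one linearized step
# (e.g., a Jacobian in a Gauss-Newton scheme).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)                 # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(0, 1e-6, n)   # noisy "measurements"

def tikhonov_solve(A, b, alpha):
    """Minimize ||Ax - b||^2 + alpha * ||x||^2 (zeroth-order Tikhonov)."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ b)

naive = np.linalg.solve(A, b)             # unregularized: noise blows up
reg = tikhonov_solve(A, b, alpha=1e-10)

print("unregularized error:", np.linalg.norm(naive - x_true))
print("regularized error:  ", np.linalg.norm(reg - x_true))
```

Even with noise six orders of magnitude below the signal, the unregularized solve is dominated by amplified noise, while the penalty term trades a little resolution (the small-singular-value components are damped) for stability; this is exactly the tension described above.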
I have spent the past 13 years investigating MW imaging and regularization techniques in order to improve image quality in terms of enhanced resolution and improved sensitivity/specificity to malignant tissue. I am presently collaborating with colleagues at the Universities of Calgary and Manitoba to develop a technique that uses ultrasound fields to acquire internal structural information which is used as prior information for the MWT algorithm. Read the journal article that describes the project here.
Another project that I am working on is the development of a technique that terminates the iterative algorithm early, which is itself a form of regularization. Without such termination, the regularized solutions can become highly oscillatory, increasing the variance of the reconstructed profile; the inverse solution may then be unstable, and unwanted features (e.g., measurement noise) may dominate the reconstruction. I will post updates to the blog on all of the developments, and describe the project in more detail once the results have been validated (and the manuscript has been submitted for review).
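A minimal sketch of this idea, using Landweber iteration on a synthetic ill-conditioned problem with Morozov's discrepancy principle as the termination rule. The actual MWT algorithm and stopping criterion differ; this only illustrates how stopping the iteration early acts as regularization.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned synthetic problem whose exact solution lies in the
# well-resolved directions -- a stand-in for a linearized MWT step.
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -4, n)
A = U @ np.diag(s) @ V.T
x_true = V[:, :5] @ rng.standard_normal(5)

noise = rng.normal(0, 1e-4, n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)      # known here; estimated in practice

def landweber_morozov(A, b, delta, tau=1.1, max_iter=100000):
    """Landweber iteration terminated by Morozov's discrepancy principle:
    stop once ||A x - b|| <= tau * delta. Iterating further would start
    fitting the noise, so early termination is itself a regularization."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures convergence
    for it in range(max_iter):
        r = A @ x - b
        if np.linalg.norm(r) <= tau * delta:
            return x, it
        x -= step * (A.T @ r)
    return x, max_iter

x_stop, iters = landweber_morozov(A, b, delta)
x_full = np.linalg.solve(A, b)     # "fully converged": noise-dominated

print(iters,
      np.linalg.norm(x_stop - x_true),
      np.linalg.norm(x_full - x_true))
```

The iteration stops after a handful of sweeps, long before it begins reconstructing the noise that dominates the fully converged solution; choosing *when* to stop for the real nonlinear problem is the substance of the project described above.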