Recent Projects

Two months ago, we purchased a robot called the temi 3 (available here). This robot features a large touch screen, LIDAR for obstacle avoidance, and built-in text-to-speech, natural language processing, and automatic speech recognition that require no setup or training. The tablet attached to the top of the robot runs Android and controls all functions of the robot through an open-source SDK. Temi demos and videos are available on the Robot Temi YouTube channel. After learning about all the functionality baked into this robot and the extensive documentation of its SDK, we were confident that we could use it to develop applications in our environment. There are two projects we are interested in working on. Both projects use the smell inspector device described in this post to classify smells.

Analyzing Smells in Hospital Rooms

First, we want to detect whether urine or stool is present in a patient’s hospital room. This is in early development, and testing is taking place within our own office. Temi comes with a “patrol” mode, which visits all predefined waypoints on a floor map. To allow temi to roam freely, the robot needed to be led around our floor to build a map of the area and to add waypoints (e.g. Cody’s office, Snack Room, Conference Room). Once this was complete, we could programmatically start a patrol using the SDK (a rough sketch of the patrol-and-sniff loop is shown below). After mapping and patrolling were set up, we needed a way to mount the smell inspector sensor to the robot. Dmitry […]
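To illustrate the patrol-and-sniff idea, the sketch below pairs a loop over the mapped waypoints with a smell check at each stop. The robot and sensor objects and their methods are hypothetical stand-ins (the actual temi SDK is an Android library, and the Smell Inspector has its own driver); this is not our production code.

```python
# Hypothetical sketch of the patrol-and-sniff loop described above.
# `robot` and `sensor` are illustrative stand-ins, not the actual temi SDK
# (an Android/Java library) or the SmartNanotubes API.

import time

WAYPOINTS = ["Cody's office", "Snack Room", "Conference Room"]
TARGET_SMELLS = {"urine", "stool"}

def patrol_and_sniff(robot, sensor, threshold=0.8):
    """Visit each mapped waypoint, sample the air, and flag target smells."""
    for waypoint in WAYPOINTS:
        robot.go_to(waypoint)             # hypothetical: drive to a saved waypoint
        robot.wait_until_arrived()        # hypothetical: block until navigation completes
        time.sleep(10)                    # let the E-nose channels settle at the location
        probabilities = sensor.classify() # hypothetical: {smell_name: probability}
        detected = {s: p for s, p in probabilities.items()
                    if s in TARGET_SMELLS and p >= threshold}
        if detected:
            print(f"Alert at {waypoint}: {detected}")
```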
Acknowledgements

This project is a collaboration between the Center for Applied AI and the following researchers:
Dr. Yuan Wen (Institute for Biomedical Informatics, Department of Physiology)
Dr. Laura Brown (Department of Physiology)
Dmitry “Dima” Strakovsky (College of Fine Arts)

Background

Volatile organic compounds (VOCs) are chemicals that evaporate easily at room temperature and are commonly found in products and materials such as paints, cleaning supplies, fuels, and solvents. The measurement of these compounds has been evolving for decades. Today, thanks to advancements in collection techniques, collection devices fit in the palm of your hand and can detect trace amounts of VOCs. These devices are used for many applications, but we are interested in their applications to healthcare. Specifically, can we collect breath samples to create a machine-learning model that accurately predicts a patient’s blood glucose? The VOC sensor device we use is named the “Smell Inspector” and is described in the “Smell Inspector” section below. We aim to 1) create a dataset with blood glucose measurements and 2) infer the blood glucose measurement from a breath sample in real time.

Introduction

This project is in its early phases. However, we have begun to collect breath samples from 4 volunteers in our office. None of these volunteers has diabetes or access to a blood glucose measurement device, so we decided to test the effectiveness of the VOC sensor by classifying peppermint breath vs. normal breath. By doing this, we can see how sensitive the smell inspector is […]
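As a minimal sketch of that peppermint-vs.-normal experiment, assuming the breath samples have already been exported to a CSV with one row per sample (the file name and column layout are illustrative, not the device’s actual export format):

```python
# Minimal sketch: classify peppermint vs. normal breath from Smell Inspector
# channel readings. The CSV name and column layout are assumptions; any
# per-sample feature vector from the detector channels would work the same way.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed layout: one row per breath sample, channel_* feature columns,
# and a "label" column containing "peppermint" or "normal".
df = pd.read_csv("breath_samples.csv")
X = df.filter(like="channel_")
y = df["label"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```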
In 2020, Kentucky had the third-highest drug overdose fatality rate in the United States, and 81% of those deaths involved opioids. The purpose of this project is to provide research support to combat the opioid epidemic through machine learning and forecasting. The goal is to provide accurate forecasts at different geographical levels to identify which areas of the state are likely to be the most “high risk” in future weeks or months. With this information, adequate support could be prepared and provided to those areas, with the hope of treating victims in time and reducing the number of deaths associated with opioid-related incidents. The first step was to analyze which geographical level would be most appropriate for building and training a forecasting model. We had EMS data containing counts of opioid-related incidents at six different geographical levels: state, county, zip code, tract, block group, and block. Through experimentation, we found that the county level is likely the most appropriate scale. The state level is too broad for useful results, while any level smaller than zip code proved too sparse. Machine learning models rarely perform well when trained on data that consists mostly of zeroes, and smaller geographical levels contain too few positive examples of incidents for any model to successfully learn the trends of each area. Additionally, the temporal level was chosen to be monthly, rather than yearly or weekly, because early testing suggested the best performance at the monthly scale. Even […]
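A minimal sketch of the county-by-month roll-up described above, with column names that are assumptions about the EMS extract rather than its actual schema:

```python
# Sketch: roll raw EMS opioid-incident records up to county-month counts,
# the level we found most workable. Column names ("incident_date", "county")
# are assumptions about the EMS extract, not the actual schema.

import pandas as pd

ems = pd.read_csv("ems_opioid_incidents.csv", parse_dates=["incident_date"])

monthly = (
    ems.assign(month=ems["incident_date"].dt.to_period("M"))
       .groupby(["county", "month"])
       .size()
       .rename("incident_count")
       .reset_index()
)

# A naive seasonal baseline to compare learned forecasters against:
# predict next month's count from the same county's count 12 months earlier
# (assumes a row exists for every county-month).
monthly["naive_forecast"] = monthly.groupby("county")["incident_count"].shift(12)
print(monthly.tail())
```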
In 2021, our team started development of a LoRaWAN network (see link) to cover the University of Kentucky campus/hospital and most, if not all, of Lexington (see link). This project is still active as we continue to develop new applications, but creating infrastructure to support this network has become a challenge. Fortunately, a new offering from Amazon called Sidewalk has recently begun rolling out, which provides this infrastructure (see link). In brief, Amazon Sidewalk is a low-bandwidth, long-range wireless network that aims to enhance the connectivity of smart devices within neighborhoods and cities by utilizing existing infrastructure and creating a shared network. The network operates on a portion of the 900 MHz spectrum, allowing devices to communicate over longer distances than traditional Wi-Fi networks. Amazon Sidewalk uses a combination of Bluetooth Low Energy (BLE) and the 900 MHz spectrum to extend the range of compatible devices, such as smart locks, outdoor lights, and pet trackers. The Sidewalk bridge, which acts as a gateway device, connects the network to the internet through a user’s home Wi-Fi network. However, it’s important to note that Amazon Sidewalk uses a small portion of a user’s internet bandwidth, which is shared with nearby devices, including those owned by other Sidewalk users. (Protocol reference here.) This means that a specific subset of Amazon devices, including Alexa/Echo devices and Ring cameras, can be used to receive Bluetooth, FSK, and LoRaWAN transmissions from sensor devices, offering extensive coverage (specifically within Lexington, KY, […]
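To make the data path concrete, the sketch below shows how an uplink forwarded from Sidewalk into AWS might be decoded by a small Lambda function. The event field names (PayloadData, WirelessDeviceId) and the 2-byte little-endian reading are assumptions for illustration, not the exact message schema.

```python
# Sketch of a Lambda handler that decodes a Sidewalk/LoRaWAN uplink forwarded
# by an IoT rule. The event shape ("PayloadData" as base64, "WirelessDeviceId")
# and the 2-byte little-endian sensor reading are assumptions for illustration.

import base64
import json
import struct

def lambda_handler(event, context):
    raw = base64.b64decode(event["PayloadData"])   # assumed field name
    (reading,) = struct.unpack("<H", raw[:2])      # assumed payload format
    record = {"device_id": event.get("WirelessDeviceId"), "reading": reading}
    print(json.dumps(record))                      # e.g. hand off to storage/alerting
    return record
```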
In early April 2023, Meta AI released Segment Anything (SAM), a machine-learning-based segmentation model. The released SAM model was trained on a very general image dataset, so we have been re-training SAM to specifically process mammograms and identify any abnormalities within them. In 2020, there were an estimated 2.3 million new cases of breast cancer, and one of the detection methods is using mammograms to visualize potentially cancerous abnormalities. The goal is to train SAM to automatically detect and annotate abnormalities in mammograms, with the intent of processing mammograms with greater accuracy and speed than current methods. The repository version of SAM is a parameterized predictive model that uses only the information provided by Meta AI to create the parameters that guide SAM in identifying and segmenting different image components. Currently, we are working on training SAM specifically on mammograms so we can add and change parameters to focus more specifically on breast cancer detection. The expectation is that SAM will soon be able to identify abnormalities in a mammogram and, soon after, annotate those abnormalities to determine what they are (cancer, mineral deposits, healthy tissue, etc.). As we progress, the expectation is that after specifically identifying cancer or cancer-related abnormalities in mammograms, the SAM model can be expanded to screening other tissues for cancer.
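For context, prompting the released SAM checkpoint for a single mask looks roughly like the sketch below, using Meta’s segment-anything package; the checkpoint path, image file, and click location are placeholders.

```python
# Sketch: prompt the released SAM checkpoint for a mask around a suspected
# region of a mammogram. The checkpoint path, image file, and click location
# are placeholders; the calls follow Meta's published segment-anything package.

import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("mammogram.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (x, y) near the region of interest.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[int(np.argmax(scores))]
```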
We developed a program that visualizes the raw input data and ML-based smell detection analysis of the SmartNanotubes Smell Inspector. The Smell Inspector is based on electronic-nose (E-nose) technology that uses nanomaterial elements to detect odors, or volatile organic compounds (VOCs). Classification of smells occurs through pattern-recognition algorithms incorporated as trained ML models. There are diverse potential applications, particularly in health care settings, such as disease detection through breath sampling. The program consists of a user-friendly GUI application built with the Python Tkinter library. It continuously checks the sensor’s connection status, allows the user to initiate the sensing process, and displays the raw signals on a bar plot, along with the probabilities of the detected smells, updating in real time. The current program uses a trained neural-network model to detect the smell of coffee. As we progress, we plan to improve the quality of the interface and expand the range of trained models to cover a wider range of scent classifications.
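The heart of such an interface is a periodic refresh loop. The sketch below shows that Tkinter pattern with a placeholder read_channels() standing in for the actual Smell Inspector driver; the channel count and refresh period are arbitrary.

```python
# Minimal sketch of the Tkinter refresh pattern used by the GUI: poll the
# sensor on a timer and redraw the display. `read_channels` is a stand-in
# for the actual Smell Inspector driver, and the 500 ms period is arbitrary.

import random
import tkinter as tk

def read_channels(n=8):
    # Placeholder for the real sensor read; returns one value per channel.
    return [random.random() for _ in range(n)]

root = tk.Tk()
root.title("Smell Inspector Monitor")
canvas = tk.Canvas(root, width=400, height=200, bg="white")
canvas.pack()

def refresh():
    canvas.delete("all")
    values = read_channels()
    bar_w = 400 / len(values)
    for i, v in enumerate(values):
        # Draw one bar per channel, scaled to the canvas height.
        canvas.create_rectangle(i * bar_w + 4, 200 - v * 190,
                                (i + 1) * bar_w - 4, 200, fill="steelblue")
    root.after(500, refresh)  # schedule the next poll

refresh()
root.mainloop()
```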
Clinicians often produce large amounts of data, from patient metrics to drug component analysis. Classical statistical analysis can provide a peek into data interactions, but in many cases, machine learning can provide additional insight into the features. Recently, with the boom of new artificial intelligence models, these clinicians are more interested in applying machine learning to their data. However, in many cases, they may not possess the knowledge and skills needed to effectively train a model and run inference with it. Fortunately, using AutoML techniques and a user-friendly web interface, we can give these clinicians a way to automatically train many different machine learning models on tabular data and find which one produces the best results. Therefore, we present CLASSify as a way for clinicians to bridge the gap to artificial intelligence. Even with a web interface and clear results and visualizations for each model, it can be difficult to interpret how a model achieved its results or what that could mean for the data itself. Therefore, the interface also provides explainability scores for each feature that indicate its contribution to the model’s predictions. With this, users can see exactly how each column of the data affects the model and may gain new insights into the data itself. Finally, CLASSify also provides tools for synthetic data generation. Clinical datasets frequently have imbalanced class labels or protected information that necessitates the use of synthetically generated data that follows the same patterns and trends as the real data. With this interface, users can generate entirely new […]
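Conceptually, the model-comparison and feature-attribution steps look like the sketch below, with scikit-learn standing in for the actual AutoML backend and placeholder file and column names.

```python
# Sketch of CLASSify's two core ideas on a tabular dataset: try several model
# families and report per-feature contributions. scikit-learn stands in for
# the actual backend; the file and column names are placeholders.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("clinical_table.csv")                 # placeholder dataset
X, y = df.drop(columns=["outcome"]), df["outcome"]     # placeholder label column

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")

# Per-feature contribution for one model via permutation importance.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
best = models["gradient_boosting"].fit(X_train, y_train)
result = permutation_importance(best, X_test, y_test, n_repeats=10, random_state=0)
for feature, score in zip(X.columns, result.importances_mean):
    print(f"{feature}: {score:.3f}")
```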
Segment Anything is a segmentation algorithm created by Meta Research. To make segmentation of medical images available to UK Hospital staff, we needed a web interface that allows a layperson to interact with segmentation. Meta Research provided a sample web interface that displayed precompiled segmentations automatically, but it did not support correcting segmentations or creating them manually. From there, however, the open-source community began to tinker, and we now have Segment-Anything-WebUI, which features a more robust toolset for segmenting images in the browser without needing to precompile any of the segmentations for viewing. Additionally, it allows you to upload local files to be segmented and then save the segmentations as JSON objects. This repository was the basis of the version we have developed at the Institute for Biomedical Informatics.

Accessing the Application

The web application is available in two forms. The first is the hub site, which is hosted on University of Kentucky systems and is intended to assist in the annotation of medical images as well as the training of more useful and impressive model checkpoints for Segment Anything, which will improve annotation with the goal of automatic or single-click annotation. The second is downloading and building the repository on your own local machine. Instructions for building and running the site are available in the repository readme.

How It Works

Upload A File: opens a file browser and allows you to upload an image to segment. The image must […]
We developed an AI model for detecting ultrasound image adequacy and positivity for the FAST exam (Focused Assessment with Sonography in Trauma [1]). The results have been accepted for publication in the Journal of Trauma and Acute Care Surgery. We deployed the model (based on DenseNet-121 [2]) on an edge device (Nvidia Jetson TX2 [3]) with faster-than-real-time performance (on video, 19 fps versus the 15 fps expected from an ultrasound device) using TensorRT [4] performance optimizations; a sketch of the export path is shown after the references below. The model is trained to recognize adequate views of the LUQ/RUQ (Left/Right Upper Quadrant) and positive views of trauma. The video below demonstrates the model’s prediction of view adequacy. The device can be used as a training tool for inexperienced ultrasound operators, to aid them in obtaining better (adequate) views and to suggest the probability of a positive FAST exam. The project is a collaboration with the University of Kentucky Department of Surgery. The annotated data was provided by Brittany E Levy and Jennifer T Castle.

[1] https://www.ncbi.nlm.nih.gov/books/NBK470479/
[2] Huang G, Liu Z, van der Maaten L, Weinberger KQ. Densely Connected Convolutional Networks. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). 2017:2261-2269. DOI: 10.48550/arXiv.1608.06993
[3] https://developer.nvidia.com/embedded/jetson-tx2
[4] https://developer.nvidia.com/tensorrt
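A common route to TensorRT on a Jetson is to export the trained network to ONNX and then build an engine (for example with the trtexec tool). The sketch below shows that export step for a fine-tuned DenseNet-121 in PyTorch; the class count, input size, and file names are assumptions, not the exact configuration used in this project.

```python
# Sketch of a typical deployment path: export a fine-tuned DenseNet-121 to
# ONNX, then build a TensorRT engine on the Jetson (e.g. with trtexec).
# Class count, input resolution, and file names are assumptions.

import torch
import torchvision

NUM_CLASSES = 4  # placeholder; the actual label set (adequacy/positivity per view) is not shown here

model = torchvision.models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_CLASSES)
model.load_state_dict(torch.load("fast_densenet121.pt", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # assumed input resolution
torch.onnx.export(model, dummy, "fast_densenet121.onnx",
                  input_names=["image"], output_names=["logits"],
                  opset_version=13)
# On the Jetson: trtexec --onnx=fast_densenet121.onnx --fp16 --saveEngine=fast.engine
```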