Multispectral-Based Imaging and Machine Learning for Noninvasive Blood Loss Estimation
Blood loss estimation during surgical operations is crucial for making appropriate transfusion decisions. Emerging practical solutions, e.g., the Triton System, use image processing and artificial intelligence (AI) to quantify blood loss from images of blood-soaked sponges. Triton uses an infrared or depth camera to identify the region of a color (RGB) image corresponding to a surgical textile. However, computing depth is computationally expensive and provides only shape information. In this research, we propose a multispectral imaging and machine learning approach to quantify blood loss directly from images of surgical sponges. Near-infrared (NIR) and visible (Vis) light sources are used in conjunction with an RGB imaging sensor without an NIR filter. With this setup, in addition to the improved focus and reduced background interference on the gauze image afforded by blood's NIR absorption properties, both color and shape information can be exploited. Results show that the multispectral imaging approach yielded improvements of +28.30%, +48%, +27.97%, and +25.72% in MAE, MSE, RMSE, and MAPE, respectively, compared to using a single Vis wavelength or RGB image.
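The reported gains are relative reductions in standard regression error metrics. As a minimal sketch of how such metrics and improvements can be computed, the following Python snippet defines MAE, MSE, RMSE, and MAPE and a percent-improvement helper; the function names and sample volumes are illustrative assumptions, not the paper's published evaluation code or data.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE, and MAPE for blood-volume estimates (mL).
    Hypothetical helper for illustration; not the study's actual code."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    # MAPE assumes no zero ground-truth volumes
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errors, y_true)) / n
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape}

def improvement(baseline, proposed):
    """Percent reduction of an error metric relative to a baseline model."""
    return 100.0 * (baseline - proposed) / baseline

# Illustrative values only (not results from the study):
true_ml = [50.0, 120.0, 80.0]
pred_ml = [55.0, 110.0, 84.0]
print(regression_metrics(true_ml, pred_ml))
```

A positive `improvement` value indicates the proposed multispectral model lowered that error metric relative to the single-wavelength or RGB baseline.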