Safety of ICU patients can now be monitored through facial recognition technology

Published On 2019-06-05 13:35 GMT   |   Update On 2019-06-05 13:35 GMT

Critically ill patients are routinely sedated in the intensive care unit (ICU) to prevent pain and anxiety, permit invasive procedures, and improve patient safety. Nevertheless, providing patients with an optimal level of sedation is challenging. Patients who are inadequately sedated are more likely to display high-risk behavior such as accidentally removing invasive devices.


Now, a team of Japanese researchers has used facial recognition technology to develop an automated system that continuously monitors patient safety in the ICU. The system predicts, with 75% accuracy, when ICU patients are at high risk of unsafe behavior such as accidentally removing their breathing tube.


The new study, presented at the Euroanaesthesia congress (the annual meeting of the European Society of Anaesthesiology), held in Vienna, Austria, from June 1-3, 2019, suggests that the automated risk-detection tool has potential as a continuous monitor of patient safety. It could ease the staffing constraints that make it difficult to observe critically ill patients continuously at the bedside.


"Using images we had taken of a patient's face and eyes, we were able to train computer systems to recognize high-risk arm movement", says Dr. Akane Sato of Yokohama City University Hospital, Japan, who led the research.


"We were surprised about the high degree of accuracy that we achieved, which shows that this new technology has the potential to be a useful tool for improving patient safety, and is the first step for a smart ICU which is planned in our hospital."


The study included 24 postoperative patients (average age 67 years) who were admitted to ICU in Yokohama City University Hospital between June and October 2018.


The proof-of-concept model was created using pictures taken by a camera mounted on the ceiling above patients' beds. Around 300 hours of data were analyzed to find daytime images of patients facing the camera in a good body position that showed their face and eyes clearly.


In total, 99 images were used for machine learning: an algorithm analyzed the labelled images and learned to recognize patterns in them, in a process that loosely resembles the way a human brain learns new information. Ultimately, the model was able to flag high-risk behavior, particularly movement around the subject's face, with high accuracy.
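The study does not describe the model's architecture, so as an illustration only, here is a minimal sketch of the general idea of supervised classification: feature vectors extracted from images (e.g. describing body position) are used to train a model that labels new observations as safe or high-risk. All feature values, labels, and function names below are invented for this sketch and are not the authors' method.

```python
# Hypothetical sketch: a nearest-centroid classifier labelling feature
# vectors (e.g. arm elevation, head angle) as "safe" or "high-risk".
# All data and names here are synthetic, for illustration only.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled):
    """labelled: dict mapping label -> list of feature vectors.
    Returns one centroid per label."""
    return {label: centroid(vecs) for label, vecs in labelled.items()}

def predict(model, vec):
    """Return the label whose centroid is closest to vec."""
    return min(model, key=lambda label: distance_sq(model[label], vec))

# Toy training set: a handful of two-dimensional "posture features".
training = {
    "safe":      [[0.10, 0.20], [0.20, 0.10], [0.15, 0.15]],
    "high-risk": [[0.90, 0.80], [0.80, 0.90], [0.85, 0.85]],
}
model = train(training)
print(predict(model, [0.88, 0.92]))  # near the high-risk centroid
```

In practice such a system would use far richer image features and a far more capable model, but the workflow is the same: label example images, fit a model, then classify new frames as they arrive from the bedside camera.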


"Various situations can put patients at risk, so our next step is to include additional high-risk situations in our analysis, and to develop an alert function to warn healthcare professionals of risky behavior. Our end goal is to combine various sensing data such as vital signs with our images to develop a fully automated risk prediction system", says Dr. Sato.


The authors note several limitations including that more images of patients in different positions are needed to improve the generalisability of the tool in real life. They also note that monitoring of the patient's consciousness may improve the accuracy in distinguishing between high-risk behavior and voluntary movement.
