I’ve mostly found that smart alerts just overreact to everything and cause alarm fatigue, but one of the better features EPIC implemented was actually letting clinicians (nurses, doctors, etc.) rate the alerts and comment on why an alert was or wasn’t helpful, so we can actually help train the algorithm, even around facility-specific policies.
For instance, one thing I rated that actually turned out really well: we were getting suicide watch alerts on pretty much all our patients, telling us we needed to get a suicide sitter order because their C-SSRS scores were high (the suicide risk screening “quiz”). I work in inpatient psychiatry. Not only are half my patients suicidal, but a) I already know, and b) our environment is specifically designed to manage what would be moderate-to-high suicide risk on other units, by making most of the implements restricted or completely unavailable. So I rated that alert poorly every time I saw it (which was every time I opened each patient’s chart for the first time that shift, then every 4 hours after; it was infuriating) and specified that that particular warning needed to not show for our specific unit. After the next update I never saw it again!
So AI and other “smart” clinical tools can work, but they need frequent, high-quality input from the people actually using them. And the quality matters: most of my coworkers didn’t even know the feature existed, let alone that they’d need to write a coherent reason for their rating to be actionable.
Honestly, I just wish it would apologize less. Maybe it’s my strong autism genes from both parents, but I don’t understand how people prefer that. I just ain’t got time to read about how sorry it is every time I wanted a slightly different answer and it didn’t understand the way I phrased it the first time. I’m not mad, ffs; sometimes it even takes a few tries to communicate with a human. I’m not gonna blame a toddler-aged computer algorithm for not knowing what I meant when I say whatever dumb shit I’m saying today.