Accelerated research is one of the most promising outcomes of implementing AI in healthcare. The technology shows particular promise in disease prediction and detection. A notable example comes from Cedars-Sinai, whose Department of Medicine leveraged AI to predict certain heart conditions in patients. Using machine learning and deep learning, Cedars-Sinai Health System's AI can distinguish fatal from treatable forms of sudden cardiac arrest. Further development and adoption of similar technologies could soon aid physicians and improve treatment for patients with rare conditions.
AI-assisted oncology also shows promise. At Rutherford House, staff have spearheaded the use of AI to assist physicians with treatment planning. So far, Rutherford Health's system leverages AI for radiotherapy planning, but CMO Professor Karol Sikora says machine learning could soon drive patient choice by presenting patients with their options alongside the respective risks.
EMRs and Proprietary Software
The new model of AI-assisted care presents obstacles of its own. For example, many AI products include safeguards that prevent staff from seeing how they process information. In some cases, AI vendors distance themselves from their technologies after the initial sale. This creates problems if a medical malpractice case involves the AI product. Civil litigation attorney Matt Keris explains the pitfalls of the resulting process, called audit trail discovery.
"The biggest reason why we have to do retrospective analysis on AI systems is because it doesn't tell you why it's recommending a treatment or how it came to that conclusion. So that part of the process is rather scary and can make our cases much more complex to explain to everyone else evaluating these cases."
If a claim arises, your AI solution must have an auditable trail that you can readily produce. Ask your potential vendors if the solution you're considering can produce the documentation you need. Avoid general, non-purpose-built products that don't offer ongoing support for your organization's unique needs.
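An auditable trail, at minimum, means recording what the model saw, which version produced the output, and what it recommended, in a store that can be produced during discovery. The sketch below illustrates that idea only; the function name, model version string, and fields are hypothetical, not from any vendor's actual product.

```python
import hashlib
import json
from datetime import datetime, timezone

# In practice this would be an append-only store outside the vendor's control;
# an in-memory list stands in for it here.
AUDIT_LOG = []

def log_recommendation(model_version, patient_inputs, recommendation):
    """Record what the model saw and what it recommended, for later discovery."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the record is verifiable without storing
        # protected health information directly in the log.
        "input_hash": hashlib.sha256(
            json.dumps(patient_inputs, sort_keys=True).encode()
        ).hexdigest(),
        "recommendation": recommendation,
    }
    AUDIT_LOG.append(record)
    return record

# Hypothetical usage: every model call passes through the logger.
entry = log_recommendation(
    model_version="cardio-risk-2.1",
    patient_inputs={"age": 67, "ef_percent": 35},
    recommendation="refer for electrophysiology consult",
)
```

Even a record this simple answers the questions Keris raises: which model version ran, on what inputs, and what it told the clinician.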
Sometimes, even simple AI problems can put patients and providers at risk. For example, AI products for at-home care require internet access; if a patient's connection is slow or unreliable, the system may fail to alert their care provider, potentially resulting in injury or death. A lack of tech literacy can also complicate AI-assisted at-home care, especially for senior citizens. Consider the end user and choose a solution that matches the user's abilities.
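One mitigation for the connectivity risk described above is a server-side staleness check: if a home device stops checking in, staff are prompted to fall back to a phone call rather than silently missing alerts. This is a minimal sketch; the 15-minute threshold and function name are illustrative assumptions, not a vendor specification.

```python
from datetime import datetime, timedelta, timezone

# Assumed threshold: how long a home device may go silent before
# the care team is warned that alerts may not be getting through.
HEARTBEAT_TIMEOUT = timedelta(minutes=15)

def connection_is_stale(last_heartbeat, now=None):
    """Return True if the device hasn't checked in within the timeout,
    so staff can escalate to a manual check instead of relying on AI alerts."""
    now = now or datetime.now(timezone.utc)
    return (now - last_heartbeat) > HEARTBEAT_TIMEOUT

# Hypothetical usage with two devices' last check-in times.
now = datetime.now(timezone.utc)
silent_device = now - timedelta(hours=1)
healthy_device = now - timedelta(minutes=5)
```

The design choice here is to treat silence itself as a reportable event, so a weak Wi-Fi connection degrades to a staff follow-up rather than a missed emergency.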
Despite these risks, the role of machine learning and artificial intelligence in healthcare will continue to expand. Administrators must weigh the risks against the potential value of these tools as they mature. Decision-makers must consider their organization's unique needs and find a vendor whose product and support meet their goals.