These days, it seems like everyone is talking about the implications of a future powered by AI. While machine learning certainly isn’t new to the healthcare industry, OpenAI’s recent release of GPT-4 and other generative AI solutions has effectively created an “AI arms race,” with companies vying to build the next digital health unicorn, or at least to dip their toes into the tech deep enough to claim some level of expertise.
Across industries, GPT-4’s predecessor, ChatGPT, already boasts more than one million registered users. Despite the media hype, the unprecedented rush of AI gurus, and the resulting backlash, including a petition from tech leaders such as Steve Wozniak and Elon Musk to pause the rapid pace of AI development, the adoption of AI technology doesn’t appear to be slowing anytime soon. Healthcare is no exception.
This week, Modern Healthcare reported that electronic health records software firm Epic Systems officially made its foray into AI when the company announced its new partnership with Microsoft’s Azure OpenAI Service, which uses OpenAI’s GPT-4 language model. While the company didn’t offer a specific timeline for the rollout, it did share its plans to use the technology to draft asynchronous communications with patients and to surface recommendations from its data visualization tool, rather than requiring manual searches.
The company isn’t alone. As numerous healthcare companies test the waters of AI, the technology has the potential to transform the future of healthcare delivery. While a majority of these early applications are well-positioned to reduce redundant or monotonous tasks, healthcare experts and technology companies are already dreaming up additional areas for its use, including achieving interoperability, affirming clinical diagnoses, and using chatbots to streamline patient communications. It remains to be seen how receptive healthcare leaders will be when AI actually hits the ground, however.
“Generative AI is very powerful, but we need to be cautious. Letting a machine generate uncontrolled communications with patients is risky. Using AI as a generative suggestion tool to augment communications is a more conservative approach to AI, without relinquishing human control.”
David Floyd
Senior Vice President, Engineering
Challenges of AI in healthcare
The industry shouldn’t shy away from AI’s power. When used responsibly, the technology has the potential to be another useful tool in the healthcare innovation toolbox, says David Floyd, Upfront’s senior vice president of engineering.
Roughly 68% of patients believe healthcare providers need to improve their interactions with patients. In the areas of patient engagement, we know that offering personalized digital communications is critical to driving outcomes and health systems’ operational success. Could AI be the key to unlocking an even better patient experience?
Only time will tell. At Upfront, we’ve been longtime advocates for tech-enabled personalization to authentically connect with patients and motivate them to care. For example, our technology allows our client strategy teams to identify care and communication gaps among specific patient populations, so that we can address challenges head-on with human-centered design.
Once we gain a better understanding of specific patient populations, we’re able to guide healthcare providers on creating the most relevant and resonant content to positively influence consumer health behaviors. Today, we’re exploring how AI might accelerate health outcomes in the near future.
“There are a number of scenarios where AI and deep learning intertwine,” Floyd adds. “It can be especially helpful as a supervised tool to provide recommendations around frequency of communications, channel preferences, and guiding healthcare providers on the best ways to interact with patients and motivate them to care.”
Still, using AI in healthcare surfaces its share of ethical questions and considerations. Generative AI is a powerful technology to be sure, but it’s critical to balance the tool’s benefits with a prudent examination of its implications and challenges. If our goal is to “humanize healthcare,” we can’t lose sight of serving our patients in pursuit of the latest anthropomorphic tech.
“Generative AI is very powerful, but we need to be cautious,” warns Floyd. “Letting a machine generate uncontrolled communications with patients is risky. We’ve taken the approach of using AI as a generative content suggestion tool to augment communications in a way that leverages the historical data and insights we’ve collected through our proprietary psychographics model. That’s a more conservative approach to AI, but without relinquishing total human control.”
Consider this recent study from Pew Research, for example, which found more than 60% of Americans would be uncomfortable with a provider relying on AI in their own healthcare experiences, especially when used to diagnose disease or recommend medical treatments. The study cited other patient concerns, such as racial bias, medical errors, and health privacy as potentially dimming AI’s bright light.
If patients already feel like a number when they receive healthcare services, how will interacting with a robot change their sentiment?
Let’s take a closer look at three urgent questions facing the healthcare industry as it considers how to responsibly adopt AI in a new era of machine learning:
How will AI impact user privacy?
With AI developments rolling out faster than regulators can manage, protecting healthcare data privacy is a top concern. Because AI’s effectiveness depends on enormous quantities of patient data, experts say it opens the door to a host of cybersecurity vulnerabilities. According to a 2021 study published in BMC Medical Ethics, the way AI is implemented raises concerns over the “access, use, and control of patient data,” because that data is often in private (third-party vendor) hands. Additionally, the study pointed to difficulties in confidently anonymizing data through AI-driven methods, increasing the risk of data breach or manipulation. As AI technologies are adopted in healthcare, it will be critical to ensure technology vendors are adhering to the highest security standards to shield patient data from abuse.
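One concrete safeguard behind "the highest security standards" is pseudonymizing patient identifiers before records ever reach an AI vendor. The sketch below is a minimal, hypothetical illustration, not any particular vendor's method: it uses keyed hashing (HMAC), with the secret key held by the healthcare organization, so identifiers can be linked across datasets without anyone outside the organization being able to reverse them.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would live in a key vault
# and be rotated, never hard-coded.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymize(patient_id: str) -> str:
    """Return a stable, non-reversible token for a patient identifier.

    A keyed hash (HMAC-SHA256) is used instead of a plain hash because
    low-entropy identifiers like medical record numbers can be brute-forced
    against an unkeyed hash.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# The same patient always maps to the same token, so records can be
# linked across datasets without exposing the underlying identifier.
token_a = pseudonymize("MRN-000123")
token_b = pseudonymize("MRN-000123")
assert token_a == token_b
assert token_a != pseudonymize("MRN-000124")
```

Pseudonymization alone is not full de-identification (quasi-identifiers like dates and ZIP codes can still re-identify patients), which is exactly the residual risk the BMC Medical Ethics study flags.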
How does AI affect patient consent?
Speaking of protecting patient data, responsible AI in healthcare will require informed patient consent. Recently, digital mental health company Koko came under fire when its co-founder admitted that its use of ChatGPT to respond to more than 4,000 users was implemented without patients’ knowledge.
While the co-founder maintained that real humans “co-piloted” the technology, supervising or tweaking messages generated through its support platform, patients stopped responding to the messages once they learned they had been written by bots. The company has since pulled the feature from its platform. In the future, expect scrutiny to zero in on vendors that violate existing informed consent laws.
Will algorithmic biases result in gaps in care?
Perhaps one of the biggest concerns around AI rests in the technology’s potential to “deepen racial and economic inequities.” According to the American Civil Liberties Union, inherent biases in the data used to train AI language models have surfaced in numerous examples, producing gender and racial unfairness. While a lack of representation among engineering teams is partly to blame, it will be incumbent upon healthcare enterprises and vendors to monitor healthcare data for bias.
As healthcare works to reverse disparities and promote health equity, these biases must be taken into account. Having high-quality data can help combat these biases and ensure that AI models are robust and fair.
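Monitoring for bias can start simply. The sketch below, with made-up records and a hypothetical tolerance threshold, illustrates one common fairness check, comparing a model's positive-prediction rate across demographic groups (often called demographic parity); it is one of several checks a team might run, not a complete fairness audit.

```python
from collections import defaultdict

def positive_rates(records):
    """records: iterable of (group, prediction) pairs, prediction in {0, 1}.

    Returns the fraction of positive predictions per group.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in records:
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in positive rates between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical predictions for two patient groups, "A" and "B".
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(records)

# Flag the model for human review if the gap exceeds a chosen tolerance.
if parity_gap(rates) > 0.1:
    print("Potential bias: positive-rate gap of", round(parity_gap(rates), 2))
```

A gap by itself doesn't prove unfairness (base rates can legitimately differ), but tracking it over time gives teams an early signal that a model deserves closer scrutiny.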
Keeping an eye on the future
At Upfront, we see plenty of benefits for AI in aiding the transformation of healthcare, but making patients feel seen and heard, and guiding them to optimal care, must remain our top priorities.
Building a foundation of trust with patients and gathering feedback is critical when leveraging any new technologies, and will be crucial to AI’s adoption and success. Is your company using or considering AI applications? What challenges have you encountered? How do you think AI will benefit healthcare in the future? Let us know in the comments.