New Ways to Deny Care: AI’s Transformation of the Claims Denial Process
Artificial Intelligence (AI) has become a powerful tool across industries, including healthcare and insurance. Its implementation, however, is not without controversy. Major insurance companies are now facing scrutiny over questionable uses of AI. Cigna, a prominent insurer, has recently come under fire for its reported claims-review process, raising concerns about patient access to necessary care and treatments. The use of algorithms and AI to approve or deny claims without proper evaluation has sparked outrage, prompting insurance commissioners and federal lawmakers to take action. In this blog, we delve into the issues surrounding Cigna’s practices, the concerns raised by federal lawmakers, the need for regulation, and the advocacy efforts to ensure fair and ethical use of AI in the healthcare industry.
Cigna Comes Under Fire
Cigna recently came under fire for its reported claims process. According to an investigation by ProPublica and NPR, Cigna has been denying claims without even reading them first. This means that many patients have been denied access to necessary care and treatments without even having their claims properly evaluated.
Insurance commissioners across the country are taking notice of this irresponsible behavior and attempting to hold Cigna accountable, citing a violation of fiduciary duty as justification for stronger regulations.
“If these figures are at all illustrative of Cigna’s commercial appeal and reversal rates, it would suggest that the PXDX review process is leading to policyholders paying out of pocket for medical care that should be covered under their health insurance contract,” Rep. Cathy McMorris Rodgers said in a letter to Cigna.
Ultimately, Cigna’s use of AI to approve, or more likely deny, claims is just another draconian example of healthcare insurers putting profits before people. Denying claims without even looking into them is utterly unacceptable.
This is one big reason federal lawmakers continue to raise concerns about insurers’ use of algorithms and AI to approve or deny claims.
Federal Lawmakers Raise Concerns
Sen. Ron Wyden voiced his concerns in a June 8 Senate Finance Committee hearing, saying he is “increasingly concerned by the potential for abuse when it comes to the use of big data and algorithms in healthcare.”
“If insurance companies are getting bigger, and buying companies that specialize in developing algorithms, it strikes me that they are going to be in a position to invest in new ways to deny care,” Mr. Wyden said. “That strikes me as a prescription for trouble.”
There needs to be more oversight and regulation of the algorithms used to approve and deny claims. However, as with most AI-related debates, the issue ultimately comes down to user intent.
Karen Joynt Maddox, MD, associate professor of medicine at Washington University in St. Louis, commented: “We should figure out how to harness the algorithms for good rather than evil…because it’s being done in an unregulated and unknown manner, we’re seeing people get out ahead who are in the business of trying to deny care.”
What’s next for AI in healthcare? Hopefully, proper regulation.
The AMA will advocate for greater regulatory oversight of AI use by health insurance companies; in fact, seven national specialty societies introduced a resolution to that effect at the 2023 AMA Annual Meeting.
According to the AMA, “The question raised by such use of technology is whether its use is in compliance with the state and federal insurance regulations that govern payer decision-making on whether to approve claims or prior-authorization requests.”
No doubt, there is a need for further policy on the use of AI in healthcare given its quick evolution and adoption. We support the AMA’s policies and advocacy to guide the proper and fair use of AI in healthcare.
Has your revenue cycle been impacted by claims denials? Allia Group can help recover lost revenue in other places. Contact us to learn more about our unique litigation model to recover underpayments for out-of-network care.