What are the legal and cultural challenges to using data for AI applications?

Current Legislation Concerning AI

In the past year, there have been several key proposals to monitor and govern AI across a wide array of industries. While these bills may not specifically govern health data, they would help establish a broader regulatory environment for AI. Of these, the Algorithmic Accountability Act represents the most direct attempt to regulate health data and applications overseen by technology companies.

  • Algorithmic Accountability Act of 2019: This act would require companies that possess or control large amounts of personal data to study and fix flawed computer algorithms that may result in “inaccurate, unfair, biased or discriminatory decisions impacting Americans.” The Act specifically authorizes the Federal Trade Commission (FTC) to create regulations requiring companies to carry out impact assessments of highly automated decision systems and to correct any issues those assessments uncover.
  • Artificial Intelligence Initiative Act (AI-IA): Introduced in May 2019, the AI-IA builds on the President’s Executive Order and directs key agencies, such as the National Science Foundation, the National Institute of Standards and Technology, and the Office of Science and Technology Policy (OSTP), to invest in AI research and development and to support the development of an AI science and technology workforce pipeline. The Act also creates a National Artificial Intelligence Advisory Committee within OSTP.
  • AI in Government Act: This Act would establish an AI Center of Excellence within the General Services Administration (GSA) to develop innovative uses of AI in government.

For more information about legislative updates for artificial intelligence, please visit the Center for Data Innovation’s AI Legislation Tracker for the United States.

Legal challenges

  • Inconsistent restrictions on data use. Policymakers and healthcare practitioners have noted that different types of health data carry different legal and regulatory constraints on their use, which creates legal complications. For example, researchers have noted that administrative and claims data, clinical data, and certain types of surveillance data, such as survey data, can include sensitive, individual-level information, and the use of these data types is often restricted under existing privacy frameworks such as HIPAA. Meanwhile, patient-generated data, such as data collected from mobile applications and wearable devices, can also contain sensitive information about individuals, ranging from fertility treatments to mental health conditions. However, relatively few legal guidelines protect these emerging data types from misuse.
  • Concerns about intellectual property. Participants at the Roundtable on AI also discussed the challenges of using and sharing proprietary data and algorithms. Data collected in drug development trials, through private-sector health surveys, or by other means could benefit researchers and organizations developing AI applications in the health sector, and sharing the algorithms behind proprietary AI models could improve their accuracy. But while all parties stand to benefit from sharing data and algorithms, it is difficult to balance that benefit against companies’ need to protect their intellectual property for competitive advantage.


Cultural challenges

  • Underlying bias in health data. Some Roundtable participants highlighted concerns about bias and a lack of diversity in health data, which can have serious consequences when the data are used for AI development. As one expert notes, “If the data are flawed, missing pieces, or don’t accurately represent a population of patients, then any algorithm relying on the data is at a higher risk of making a mistake.” For example, studies have pointed out that cultural biases and biases in how data are collected can leave Black and Latino patients in emergency rooms with 40 percent less access to pain medication than white patients; a toy sketch of how such a disparity might be audited appears after this list.
  • Data silos and administrative hurdles. While HHS is developing more efficient ways for its operating agencies to share data – for example, by developing common data use agreements (DUAs) – it is still difficult for HHS agencies to share data with each other, and it can be even more difficult for organizations outside of government to obtain data from HHS. Roundtable participants said that it can take 12 to 18 months to get access to data from various agencies and offices within HHS. Cultural changes are needed to reduce the administrative hurdles that prevent timely data sharing.
  • Overly restrictive interpretations of HIPAA. Some Roundtable participants noted that fears about violating HIPAA have created a risk-averse environment for data sharing. While HIPAA is intended to protect patient privacy, the Office of the National Coordinator for Health IT notes that it does allow data sharing and use under specific conditions. Participants suggested that HHS could provide more guidance on what is and is not permissible under HIPAA in different contexts. For more information on how the Office for Civil Rights has relaxed HIPAA privacy rules during COVID-19, please visit the “HIPAA and Patient Data” section here.
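
To make the kind of disparity described in the first bullet concrete, below is a minimal, hypothetical audit sketch: it computes group-level treatment rates from toy records and the relative gap between them. The records, group labels, and resulting percentage are invented for illustration and do not come from any real dataset or from the studies cited above.

```python
# Hypothetical bias audit: compare pain-medication rates across patient groups.
# All records below are invented toy data for illustration only.
from collections import defaultdict

records = [
    {"group": "white", "received_pain_med": True},
    {"group": "white", "received_pain_med": True},
    {"group": "white", "received_pain_med": True},
    {"group": "white", "received_pain_med": False},
    {"group": "Black", "received_pain_med": True},
    {"group": "Black", "received_pain_med": False},
    {"group": "Black", "received_pain_med": False},
    {"group": "Black", "received_pain_med": False},
]

totals = defaultdict(int)   # patients seen, per group
treated = defaultdict(int)  # patients who received pain medication, per group

for r in records:
    totals[r["group"]] += 1
    treated[r["group"]] += int(r["received_pain_med"])

rates = {g: treated[g] / totals[g] for g in totals}
print(rates)  # {'white': 0.75, 'Black': 0.25}

# Relative disparity: how much less often one group is treated than another.
disparity = 1 - rates["Black"] / rates["white"]
print(f"Relative gap in access: {disparity:.0%}")  # 67% in this toy data
```

Running a check like this against real clinical data, before a model is trained on it, is one way to surface the kinds of disparities participants described.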


Use Case: UnitedHealth’s Optum Division Accused of Racial Bias

In October 2019, New York State officials at the Department of Financial Services launched an investigation into a UnitedHealth algorithm developed by Optum, requesting that UnitedHealth either prove that its algorithm is not discriminatory or stop using it. A study published in Science revealed that this widely used algorithm, developed for healthcare companies to assign risk scores to patients, assigned the same risk scores to Black patients as to white patients even when the Black patients were actually much sicker. Because less money is spent on the care of Black patients with the same level of need, the algorithm, which relied on health costs as a proxy for health needs, incorrectly concluded that Black patients were healthier than equally sick white patients. This had the unintentional effect of reducing the number of Black patients flagged to receive extra care by more than half. Booz Allen Hamilton has noted that controlling for bias in algorithms should be an industry imperative, and one that would entail making data and algorithms available for external review. A short sketch of this proxy-label failure mode follows.
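
The failure mode in this case, using healthcare cost as a proxy label for healthcare need, can be illustrated in a few lines. Below is a minimal hypothetical sketch, not the actual Optum model: the patients, costs, and threshold are invented to show how a cost-trained score under-flags equally sick patients on whom less money has historically been spent.

```python
# Minimal sketch of proxy-label bias: a "risk score" that reflects cost,
# not clinical need. All values below are hypothetical.

CARE_THRESHOLD = 8_000  # flag patients whose cost-based score exceeds this

# Two patients with identical clinical need (same chronic-condition count),
# but historically unequal spending on their care.
patients = [
    {"id": "patient_1", "chronic_conditions": 5, "annual_cost": 10_000},
    {"id": "patient_2", "chronic_conditions": 5, "annual_cost": 6_000},
]

for p in patients:
    # A model trained to predict cost effectively learns cost, not need;
    # here the recorded cost stands in for the model's prediction.
    risk_score = p["annual_cost"]
    flagged = risk_score > CARE_THRESHOLD
    print(f"{p['id']}: need={p['chronic_conditions']} conditions, "
          f"score={risk_score}, flagged for extra care: {flagged}")

# patient_1 is flagged; patient_2 is not, despite identical clinical need.
# The Science study traced the bias to exactly this label choice and
# suggested retraining on labels closer to actual health need.
```

Auditing the label a model is trained on, and opening data and algorithms to the kind of external review Booz Allen Hamilton describes, are the sorts of checks that could have surfaced this bias earlier.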