Abstract
Bias is a key issue in expert and public discussions about Artificial Intelligence (AI). While some hope that AI will help to eliminate human bias, others are concerned that AI will exacerbate it. To highlight the political and power aspects of bias in AI, this contribution examines the so far largely overlooked topic of the framing of bias in AI policy. Among the diverse approaches to diagnosing problems and suggesting prescriptions, we can distinguish two stylized framings of bias in AI policy: one more technical, the other more social. The powerful technical framing suggests that AI can be a solution to human bias and can help to detect and eliminate it. It is challenged by an alternative social framing, which emphasizes the importance of social contexts, the balance of power, and structural inequalities. The technical framing sees a simple technological fix as the way to deal with bias in AI. For the social framing, we suggest approaching bias in AI as a complex wicked problem that requires a broader strategy involving diverse stakeholders and actions. The social framing of bias in AI considerably expands the legitimate understanding of bias and the scope of potential actions beyond a technological fix. We argue that, in the context of AI policy, intersectional bias should not be perceived as a niche issue but rather as a key to radically reimagining AI governance, power, and politics in more participatory and inclusive ways.
Suggested Citation
Inga Ulnicane & Aini Aden, 2023.
"Power and politics in framing bias in Artificial Intelligence policy,"
Review of Policy Research, Policy Studies Organization, vol. 40(5), pages 665-687, September.
Handle: RePEc:bla:revpol:v:40:y:2023:i:5:p:665-687
DOI: 10.1111/ropr.12567
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:bla:revpol:v:40:y:2023:i:5:p:665-687. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: https://edirc.repec.org/data/ipsonea.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.