Korean Journal of Psychology: General
[ Article ]
The Korean Journal of Psychology: General - Vol. 40, No. 4, pp.459-485
ISSN: 1229-067X (Print)
Print publication date 25 Dec 2021
Received 14 Dec 2021; Accepted 14 Dec 2021
DOI: https://doi.org/10.22257/kjp.2021.12.40.4.459

Fairness Criteria and Mitigation of AI Bias

Hyo-eun Kim
Department of Humanities, Hanbat National University

Correspondence to: Hyo-eun Kim, Department of Humanities, Hanbat National University, 125 Dongseo-daero, Yuseong-gu, Daejeon (08827). Tel: 042-821-1738, E-mail: hyoekim26@hanbat.ac.kr

Abstract

AI bias is not only an issue of social impact and governance but also an issue of the robustness of AI systems. Bias problems that did not arise under the symbol-processing paradigm of AI now enter at each step of system construction as computers become artificial-neural-network-based autonomous intelligent systems. The objective of this paper is to examine the forms of bias introduced at each stage of building an AI system, the fairness criteria used to judge bias, and methods of bias mitigation. The various types of fairness are difficult to satisfy simultaneously, and different combinations of criteria and factors are needed depending on the field and context in which AI is applied. Likewise, methods that mitigate bias in training data, classifiers, or predictions do not completely block bias on their own, and a balance between bias mitigation and accuracy must be sought. Even if bias is identified through unrestricted access to an algorithm in an AI audit, it is difficult to conclude definitively whether the algorithm is biased. Bias mitigation techniques are moving beyond simply removing bias toward the twin tasks of reducing bias while preserving system robustness and of reconciling the various types of fairness. In conclusion, these characteristics imply that policies and education aimed at recognizing AI bias and seeking solutions should move beyond recognizing the issue at a conceptual level and be pursued in terms of bias identification and adjustment grounded in an understanding of the system.
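As a minimal illustration of the claim that the various fairness criteria are hard to satisfy at the same time, the Python sketch below (not drawn from the paper; the group labels, outcomes, and predictions are hypothetical) shows a classifier that satisfies demographic parity across two groups while violating equal opportunity, simply because the groups' base rates differ.

    # Toy data (hypothetical): protected group, true outcome, model prediction.
    group = [0, 0, 0, 0, 1, 1, 1, 1]
    y     = [1, 1, 0, 0, 1, 1, 1, 0]
    y_hat = [1, 1, 0, 0, 1, 0, 0, 1]

    def positive_rate(preds, mask):
        """Share of positive predictions among the selected examples."""
        selected = [p for p, m in zip(preds, mask) if m]
        return sum(selected) / len(selected)

    # Demographic parity compares P(y_hat = 1 | group).
    dp = [positive_rate(y_hat, [g == k for g in group]) for k in (0, 1)]

    # Equal opportunity compares the true-positive rate P(y_hat = 1 | y = 1, group).
    eo = [positive_rate(y_hat, [g == k and t == 1 for g, t in zip(group, y)])
          for k in (0, 1)]

    print("positive-prediction rates by group:", dp)  # [0.5, 0.5] -> demographic parity holds
    print("true-positive rates by group:", eo)        # [1.0, 0.33...] -> equal opportunity fails

Which criterion should take priority, and how any accompanying loss of accuracy is handled, depends on the application domain and context, as noted above.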

Keywords:

machine learning, bias, fairness, data, algorithm

