
IT GOVERNANCE IN MITIGATING ALGORITHMIC BIAS: ETHICAL PRACTICES TO ENSURE TRANSPARENCY AND FAIRNESS IN THE PROCESSING OF PERSONAL DATA BY ARTIFICIAL INTELLIGENCE

Summary: IT Governance plays a central role in mitigating algorithmic biases in AI by implementing processes and practices that are ethical, auditable, and transparent. However, the major challenge is balancing technological innovation with data protection and the rights of data subjects. When implemented effectively, these practices reinforce legal and ethical compliance, promoting equity and social responsibility without undermining the competitiveness of companies.

Keywords: IT Governance; Artificial Intelligence; Algorithmic Bias.

The growing use of algorithms in the processing of personal data by artificial intelligence (AI) has been a central topic in debates on ethics and social implications around the world. Although they offer significant efficiency gains, these algorithms often reproduce social biases embedded in their training data (i.e., the information used to develop them), resulting in systemic discrimination. In automated credit-granting decisions, for example, AI can favor already privileged groups, reinforcing pre-existing inequalities.

It is in this context that information technology (IT) governance plays a fundamental role in mitigating these biases. Through the adoption of ethical practices and robust mechanisms for supervising AI operations, it is possible to promote a more transparent, equitable, and responsible use of algorithms, minimizing risks and protecting the rights of individuals guaranteed by the laws of various jurisdictions and favoring fairer automated decisions.

What are algorithmic biases and what are the related ethical implications?

Algorithmic biases are distortions in the results of AI systems caused by prejudices present in the data used to train them. In other words, if the historical data used to train an AI system is biased with respect to a particular racial or socioeconomic group, the algorithm is likely to reproduce that bias.
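This mechanism can be illustrated with a minimal sketch (the data and scenario below are hypothetical, not drawn from the article): a system that simply learns historical approval frequencies per group will faithfully reproduce whatever disparity exists in those historical decisions.

```python
from collections import defaultdict

# Hypothetical training records: (group, approved)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # group A: 75% approved
    ("B", True), ("B", False), ("B", False), ("B", False),  # group B: 25% approved
]

def train_approval_rates(records):
    """Learn per-group approval frequency from historical decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

rates = train_approval_rates(history)
# The learned scores mirror the historical disparity exactly:
# rates["A"] == 0.75, rates["B"] == 0.25
```

Any real model is more sophisticated than this frequency table, but the underlying point holds: without intervention, the output inherits the statistical imbalance of its inputs.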

A classic example occurs in recruitment processes: when an algorithm is trained with data that favors candidates with a certain profile, it may disregard talent from underrepresented groups, perpetuating discrimination. In this regard, a study conducted in 2022 using ChatGPT 3.5 revealed that names associated with Black candidates were less likely to be selected compared to names of white or Asian candidates, reinforcing inequalities in the labor market.

In this sense, it is essential that companies seek not only broader data collection, with representativeness in data sets, but also implement solid IT governance to combat algorithmic bias, as will be analyzed below.

IT governance in combating algorithmic bias and the difficulties of its implementation

Algorithmic bias can be mitigated through the implementation of robust IT governance, which involves adopting a set of processes, policies, procedures, and practices for risk management, ensuring the ethical use of data, and improving the performance and security of technological operations.

This implementation can be achieved by creating ethics committees dedicated to AI, maintaining human supervision and continuous monitoring of training data (mainly through audits), and hiring data professionals from different backgrounds to promote diversity.
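One concrete form such an audit can take is a fairness metric computed over automated decisions. As a hedged sketch (the metric choice, threshold, and data below are assumptions for illustration, not prescribed by the article), the "demographic parity difference" compares positive-outcome rates across groups and flags gaps above a chosen tolerance:

```python
def demographic_parity_difference(outcomes):
    """outcomes: dict mapping group -> list of booleans (e.g. loan approvals).
    Returns the largest gap in positive-outcome rate between any two groups."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample of automated decisions
decisions = {
    "group_a": [True, True, True, False],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}

gap = demographic_parity_difference(decisions)
# gap == 0.5; an audit policy might flag any gap above, say, 0.2
flagged = gap > 0.2
```

Which metric and threshold are appropriate is itself a governance decision; an ethics committee would typically document that choice alongside the audit results.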

On the other hand, making AI systems explainable and auditable is not easy, as it involves several technical and operational challenges. The first stems from the complexity of AI models themselves, especially those based on deep learning. The second is that complete and accessible documentation of algorithm development requires specific technical knowledge, and many companies lack internal specialists with the necessary expertise, as well as the funds and time to hire external ones.

In the same vein, transparency in the disclosure of information to impacted stakeholders can be more complex when personal data is used by companies that have trade secrets. The Brazilian General Data Protection Law (LGPD), as well as the European Union’s General Data Protection Regulation (GDPR), for example, establish that data processing must be carried out transparently so that data subjects are fully aware of how their information is being used. Compliance with such legal requirements, in this context, could compromise the protection of trade secrets.

In addition, the use of AI has been the subject of regulatory concern in several countries: in the European Union, the AI Act came into force this year, with the main objective of establishing a legal framework for the development, placing on the market, entry into service, and use of AI systems. The AI Act seeks to simultaneously guarantee the protection of fundamental rights and support innovation, establishing more or less stringent compliance requirements depending on the level of risk associated with the use of AI, in addition to stipulating that developers must use representative data and conduct regular audits to ensure transparency.

Similarly, the Algorithmic Accountability Act is currently being debated in the US Congress, proposing that companies using AI in high-risk sectors be required to conduct regular audits. In Brazil, Bill N° 2.338/2023 is also pending, focusing on security, transparency, and the protection of fundamental rights during the use and development of AI.

It should be noted, therefore, that despite the considerable challenges involved in implementing IT governance, this measure is essential for the secure use of algorithms and, above all, for striking a balance between innovation, ethical responsibility and compliance with current and future laws on the subject.

Conclusion

The use of algorithms in the processing of personal data has been growing steadily, generating numerous debates about ethics and social implications as they reproduce social biases embedded in training data.

In this scenario, the implementation of robust IT governance is necessary not only to promote sustainable and inclusive development, but also to protect the rights of individuals.

Although there are considerable challenges, measures such as audits, diversity in teams, and the creation of ethics committees help ensure that automated decisions are fair and comply with the LGPD, GDPR, AI Act, and future AI regulations in Brazil and the United States, without compromising business competitiveness.

Article originally published in the magazine "Fronteiras Digitais – Perspectivas Multidisciplinares em Cibersegurança, Privacidade e Inteligência Artificial" (Digital Frontiers: Multidisciplinary Perspectives in Cybersecurity, Privacy and Artificial Intelligence), 2025 edition, produced by the Brazil-Canada Chamber of Commerce (CCBC) and the National Association of Data Privacy Professionals (APDADOS), p. 44.

Camila Lisboa Martins (cmartins@gtlawyers.com.br)

Jessica Ferreira (jferreira@gtlawyers.com.br)