AI Series: Ethical AI – ESG risks and social impacts 

“The advent of generative AI and automation technologies is reshaping the future of work, posing essential questions regarding social equity and labour market resilience.” – Harry Freeman, Consultant

Part 3 of our five-part series.

As artificial intelligence (AI) becomes a central part of businesses across the globe, ethical considerations are moving from the fringes to the mainstream of ESG thinking. Alongside its business benefits, AI introduces a diverse array of social risks that organisations must actively manage to remain compliant, competitive, and credible. Algorithmic bias, transparency gaps, and reduced human oversight are no longer merely technical problems – they are material ESG issues requiring system-level remedies.

AI and algorithmic bias: An emerging governance challenge 

Bias is an inherent risk in AI. Algorithms trained on partial or unrepresentative data sets are likely to amplify existing social inequalities in employment, lending, healthcare, and even criminal justice. A 2019 study published in Science found that a widely used clinical algorithm in the US healthcare system systematically underestimated Black patients’ health needs because healthcare costs were used as a proxy for health status in the training data. Similar biases have been found in facial recognition software, credit scoring models, and hiring tools.

For ESG-conscious companies, algorithmic bias creates reputational, legal, and operational risks. Bias in AI is not merely a societal issue – it is a material risk with implications for human rights, governance, and regulation. Forward-looking companies are therefore embedding fairness audits, diverse data sources, and interdisciplinary oversight into their AI governance frameworks, as illustrated below.
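To make the idea of a fairness audit concrete, the sketch below computes two widely used group-fairness metrics – the demographic parity difference and the disparate impact ratio – for a set of binary model decisions. The function, data, and group labels are hypothetical, invented purely for illustration; a real audit would draw on actual decision logs and a governance-approved metric set.

```python
import numpy as np

def fairness_audit(y_pred, group):
    """Simple group-fairness metrics for binary decisions (1 = favourable)."""
    rates = {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}
    best, worst = max(rates, key=rates.get), min(rates, key=rates.get)
    # Demographic parity difference: gap in favourable-outcome rates.
    dp_diff = rates[best] - rates[worst]
    # Disparate impact ratio: the "four-fifths rule" flags values below 0.8.
    di_ratio = rates[worst] / rates[best]
    return rates, dp_diff, di_ratio

# Hypothetical decisions from a hiring model (illustrative only).
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates, dp_diff, di_ratio = fairness_audit(y_pred, group)
print(rates)                # {'A': 0.8, 'B': 0.4}
print(round(dp_diff, 3))    # 0.4
print(round(di_ratio, 3))   # 0.5 -> fails the four-fifths rule
```

In this invented example, group B receives favourable outcomes at half the rate of group A, so the model would be escalated for review under the four-fifths heuristic.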

Employment, inclusion, and the socioeconomic implications of automation 

The advent of generative AI and automation technologies is reshaping the future of work, posing essential questions regarding social equity and labour market resilience. According to a 2023 McKinsey report, up to 30% of working hours in the US economy could be automated by 2030. Retail, customer service, and administrative support – sectors that typically employ high shares of women and minority workers – are particularly vulnerable.

The ESG impacts of worker displacement are substantial. Responsible AI strategies must incorporate inclusive transition planning, upskilling programs, and transparent labour risk assessments. Investors increasingly look to human capital management disclosures for evidence that companies are approaching the social impacts of AI-driven change with care and foresight.

Responsible innovation and the role of ethics-by-design 

Ethics-by-design is gaining momentum as a best-practice methodology for AI development. Drawing on academic work on anticipatory governance and responsible innovation, it promotes integrating ethical, legal, and societal considerations into AI systems from the earliest stages of design.

This entails proactive risk mapping, stakeholder engagement, and scenario planning to surface unintended consequences before deployment. For example, the EU’s AI Act mandates a risk-based classification of AI systems and imposes strict governance requirements on “high-risk” applications, as sketched below.
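As a schematic illustration of what risk-based classification can look like inside an organisation – not legal guidance, and with a hypothetical use-case register – the sketch below maps example AI use cases onto the AI Act’s four risk tiers:

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The EU AI Act's four risk tiers (summarised, not exhaustive)."""
    UNACCEPTABLE = "prohibited practices, e.g. social scoring"
    HIGH = "strict governance obligations, e.g. hiring or credit scoring"
    LIMITED = "transparency duties, e.g. chatbots"
    MINIMAL = "no additional obligations, e.g. spam filters"

# Hypothetical internal register triaging use cases against the tiers.
use_case_register = {
    "CV-screening model": AIActRiskTier.HIGH,
    "customer-support chatbot": AIActRiskTier.LIMITED,
    "inbox spam filter": AIActRiskTier.MINIMAL,
}

for use_case, tier in use_case_register.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

A register of this kind is one way to operationalise proactive risk mapping: every proposed system is triaged against the tiers before development begins, so governance obligations are known at the design stage.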

Towards transparency and trust in AI 

Transparency is at the heart of responsible AI and aligns directly with ESG expectations of accountability. However, the majority of AI systems – particularly those built on deep learning – are “black boxes,” offering little interpretability to users or auditors.

Explainable AI (XAI) techniques are emerging as the bridge between performance and transparency. These include model-agnostic methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), which generate post-hoc explanations of black-box predictions without requiring changes to the underlying model. For ESG disclosures, the ability to explain how an AI system reached a particular conclusion is increasingly tied to stakeholder trust and regulatory compliance.
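As a minimal sketch of how such an explanation is produced in practice – assuming the open-source shap library, a scikit-learn model, and synthetic data standing in for something like a credit-scoring dataset – the example below attributes individual predictions to per-feature contributions:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for, e.g., a credit-scoring dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic, post-hoc explanation: SHAP attributes each prediction
# to per-feature contributions without modifying the trained model.
explainer = shap.Explainer(model, X)
explanation = explainer(X[:10])

# SHAP values for the first explained prediction, one row per feature.
print(explanation.values[0])
```

LIME takes a complementary approach, fitting a simple, interpretable surrogate model locally around each individual prediction rather than attributing contributions across the whole model.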

CDP and the Task Force on Social-related Financial Disclosures (TSFD, in development) are both placing more emphasis on the governance of emerging technologies. Ethical AI practices are therefore becoming essential not only to internal risk management but also to how organisations are positioned in ESG ratings, rankings, and capital markets.

Building social resilience into AI strategy

The future of ESG leadership lies in embedding ethical AI into the core business model and disclosures. This means creating in-house governance structures for AI ethics, empowering multidisciplinary teams, and instituting robust KPIs covering fairness, transparency, and accountability.

Organisations that set the standard on ethical AI will not only reduce their reputational and regulatory exposure, but will also be well placed to unlock the full potential of AI in inclusive, equitable, and socially resilient ways. With the line between technology and society increasingly blurred, instilling ethical foresight into AI development is no longer a choice – it is a requirement for sustainable innovation.

Author: Harry Freeman, Consultant, Simply Sustainable

