What digital rights mean for investors

How the rapid development of technology is changing the approach of fund managers and the role advisers play

(via Pexels)

The pace at which technological advancements are changing the way we live cannot be overstated – it is far outpacing the speed of regulation.

Artificial intelligence, one of the biggest of these advances, has raised questions about the risks associated with such technologies and the control they wield over our lives.

In recent years it has also given rise to a growing number of investor engagement groups active on the topic of digital human rights.

So what does this mean for investors and the stocks leading the charge on technological developments?

This report is worth 30 minutes of CPD.

Majority of advisers now more likely to use actively engaging fund manager

The majority of advisers are much more likely to use an asset manager who actively engages with company management, following on from the introduction of the Financial Conduct Authority's sustainability disclosure requirements and labelling rules, according to the latest FT Adviser Talking Point poll.

As part of the new rules, distributors must communicate the labels and provide access to consumer-facing disclosures to retail investors, either on a relevant digital medium for the product or using the channel they would ordinarily use to communicate information. 

They must also keep the labels and consumer-facing disclosures up to date with any changes that the firm makes to a label or the disclosures.

The survey shows 93 per cent of respondents said they were much more likely to use an asset manager who actively engages with company management, 2 per cent said they were less likely to do so, while 5 per cent said they were indifferent.

Wes Wilkes, chief executive of Net-Worth Ntwrk, says: "If a firm is not already actively engaged with fund managers with whom they are trusting client assets then they shouldn't be using them. In terms of communication and access for retail investors, technology is the answer.

"A client can access all relevant disclosures, risk warnings and holding summaries within just a few clicks on an app or an online portal. There really is no excuse for this not being in the hands of the investor and automatically updated."

ima.jacksonobot@ft.com

The impact of digital rights on investing

The rapid development of technology is accelerating the debate around digital human rights and the approach of fund managers toward investments.

Advisers also have a role to play in the debate.

Digital rights are typically defined as human rights specific to digital products and services.

When you consider that more than two-thirds of the global population own a smartphone or use the internet, the transformative effect of technology on humankind cannot be overstated.

But this change has also brought concerns about the risks of using these technologies, who controls the data, and their potential negative impacts.

The human rights risks run across the digital value chain, from design to the consumption and use of products and services.

Jessica Wan, social research lead at Redwheel, says: “Technological advances are far outpacing the speed of regulations. Simply put, we believe fund managers cannot wait for regulators to develop guidance to start addressing the human rights risks and impacts. 

“Fund managers are drawing from international human rights norms and existing tried and tested frameworks such as the UN Guiding Principles on Business and Human Rights and the OECD Due Diligence Guidance for Responsible Business Conduct to address the emerging digital human rights concerns.”

Rise of the machines

One of the biggest technological advances that has helped push digital human rights up the agenda is AI.

And over the past year, there has been a boom in generative AI, a sub-set of machine learning. 

Kate Elliot, head of ethical, sustainable and impact research at Greenbank, says: “It is important to note that this technology has been around for a while but catapulted into the public awareness in late 2022 with the launch of public-facing tools such as ChatGPT, based on models developed by OpenAI. 

“This rapid rise to fame, often combined with a lack of technical understanding, led to huge amounts of speculation on the ways – good or bad – that AI may shape our future societies."

As a result, several jurisdictions have been looking at the issue of AI safety recently and how to regulate its use.

Technological advances are far outpacing the speed of regulations. Fund managers cannot wait for regulators to develop guidance to start addressing the human rights risks and impacts. 
Jessica Wan, Redwheel

Danielle Essink, senior engagement specialist at Robeco, says: “Given the speed at which AI is being developed, there is no doubt that in the next few decades this technology will transform our economy and society in ways we cannot imagine. 

“AI adoption is expected to continue to grow across different industries, and benefits such as cost reduction and improved efficiency are expected to remain significant. AI also represents massive opportunities to contribute to positive societal changes, such as detecting patterns in environmental data, or improving the analysis of health information. 

“At the same time, AI could cause new problems or aggravate existing ones if companies do not have enough understanding of the risks associated with these technologies.”

Unlike traditional AI systems that are designed to recognise patterns and make predictions, GenAI creates new content in the form of images, text, audio, and more. 

GenAI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response. 

Ethics in technology

Worryingly, one of the known abuses of the technology is the 'deepfake': highly realistic fabricated videos, images and speech. 

Essink says: “Many people cannot tell the difference between the AI and the real thing and companies involved in these technologies need to make sure they work on mitigating the potential misuse of their products by working together with industry peers and experts on a standard to inform the public on the origin of images.”

Privacy and data protection, freedom of expression and non-discrimination are important digital human rights that are affected by digital technologies. 

Other concerns around big tech and data privacy include the spread of misinformation and disinformation.

Fund manager engagement

Essink says this is one of the key focus areas in Robeco’s engagement with tech platforms, where it sees particular risk around elections.

At the same time, tech platforms face fundamental business risk if their content management policies do not meet the expectations of users, advertisers, and regulators.

Essink says: “We want to understand the efficacy of the approach tech platforms take to combating misinformation and disinformation. We have ongoing dialogues with tech companies around these issues and do not shy away from filing shareholder proposals on this topic if we feel that transparency, and thereby the opportunity to assess the approach, is lagging.

“We are encouraged by the signal that a large number of investors welcome further disclosure on human rights.”

AI could cause new problems or aggravate existing ones if companies do not have enough understanding of the risks associated with these technologies
Danielle Essink, Robeco

In recent years there have been cases where fund managers have used their votes to oppose big tech companies on digital rights.

Greenbank's Elliot says another key issue with fast-moving tech is bias in training datasets and algorithms.

AI and machine learning involve extrapolating from patterns in training data to generate likely future outputs; if that data contains bias, the bias will persist in the models unless corrected.
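This mechanism can be sketched in a few lines of Python. The data and the naive frequency-based "model" below are purely hypothetical illustrations, not any firm's method: a model that simply learns historical hiring rates per group will reproduce whatever skew the history contains.

```python
# Toy illustration (hypothetical data): a model trained on biased history
# reproduces that bias in its predictions unless explicitly corrected.
from collections import defaultdict

# Hypothetical training data: (group, hired) pairs with a built-in skew –
# group A was hired 80% of the time, group B only 40% of the time.
training_data = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 40 + [("B", 0)] * 60
)

# "Training": learn the historical hire rate for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in training_data:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire_rate(group):
    """Predict a candidate's hire probability from historical rates alone."""
    hires, total = counts[group]
    return hires / total

# The skew in the historical data persists in the model's output.
print(predict_hire_rate("A"))  # 0.8
print(predict_hire_rate("B"))  # 0.4
```

A real recruitment model is far more complex, but the failure mode is the same: without auditing and correction, historical discrimination becomes a predicted outcome.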

Regulation needed

She adds: “Regulation is starting to address particularly problematic uses of technology, for example remote biometric identification in public spaces and predictive policing, but it is still critical for companies to conduct human rights due diligence and be transparent about the potential risks of their technology. 

“What is essential is that these digital human rights risks are proactively identified by companies, and appropriately managed and mitigated, and we need regulatory incentives and enforcement to support this.”

Voting power

In recent years, Essink says she has seen a growing number of investor engagement groups becoming active around the topic of digital human rights. 

One such is the Investor Alliance for Human Rights, which has been instrumental in bringing investors and experts on the topic together to encourage shared learning and an understanding of the human rights impacts.

The alliance describes itself as a "collective action platform for responsible investment grounded in respect for people’s fundamental rights".

Its 200-plus members include asset management firms, public pension funds, trade union funds, faith-based institutions, family funds, and endowments. 

Back in January, members of the alliance filed 15 proposals for the 2023 proxies of Alphabet, Amazon and Meta. The proposals raised a variety of human rights concerns at each company, ranging from inadequate content moderation and the proliferation of hate speech, to a lack of transparency and accountability in the use of opaque algorithms and artificial intelligence, violations of privacy rights, and the risks of big tech's targeted advertising business model, as well as corporate governance concerns such as the dual-class share structures prevalent in the tech sector that limit shareholders' voting rights. 

Investors said "taken together, the issues raised in the proposals speak to the power and influence these tech giants wield over society and highlight how a lack of adequate oversight structures to mitigate potential harms raises risks for all stakeholders". 

The alliance also runs an initiative using research from Ranking Digital Rights, a non-profit that scores technology and telecoms companies on their respect for privacy and freedom of expression.

Elliot notes that tech stocks have been under scrutiny for many years in relation to the human rights implications of their products and services, with a focus on the potential for human rights risks or other social harms.  

For example, Ranking Digital Rights has been assessing and benchmarking tech and telecom companies on digital human rights for almost a decade, while NGOs have a longstanding history of engaging with companies to improve disclosures and practice around human rights due diligence. 

Regulation is starting to address particularly problematic uses of technology, but it is still critical for companies to conduct human rights due diligence and be transparent about the potential risks. 
Kate Elliot, Greenbank

She adds: “The increased scrutiny and regulation of AI just brings another facet of technology into this debate. 

“We believe that companies which have established human rights due diligence frameworks are likely to be well-positioned to meet any demands from increased regulation and to build public trust through transparency and accountability.”

Wan says as an investor Redwheel is able to leverage research and engage with relevant portfolio companies about areas they can improve on in the realm of digital rights.

Earlier this year, Aviva Investors was one of many companies that supported the Investor Alliance's letter to the European Commission in response to the EU AI Act, citing the need for better regulation of AI development. 

Louise Piffaut, head of ESG – liquid markets at Aviva Investors, says: “That’s not to say we can’t have responsible innovation, but we need to make sure human-rights risks are better managed. 

"That requires better understanding of unintended consequences by companies, regulators and industry bodies.

“Regulation also needs to be outcome-seeking, with a focus on human rights protections, because technology develops and evolves relatively quickly, resulting in loopholes which can be exploited by some.”

Investor awareness

Wan says investors should be aware of how their portfolio companies are using AI systems, and how these can lead to adverse human rights impacts beyond the tech sector.

For instance, companies using AI systems to recruit talent can unintentionally perpetuate discriminatory practices, as AI can exacerbate existing biases, including those around gender and ethnicity, present in the data it is trained on.

She adds: “There are human rights risks for end-users, both individual and business end-users. AI systems are not immune to the significant human rights challenges faced by social media platforms. 

“In a way, digital human rights are no different from other issues investors need to tackle, including the just transition or human rights violations in global supply chains. The good news is that human rights due diligence, which is already being applied in other sectors, can also be leveraged in the context of digital human rights.”

Investors should identify the most salient risks and prioritise companies in their portfolio. 
Jessica Wan, Redwheel

The first step to engagement, Wan recommends, is for investors to understand their digital human rights risks exposure across their portfolio. This can range from companies operating in the AI value chain to companies using AI systems.  

She adds: “We think it is crucial for investors to take a value chain lens in assessing these risks, from workers in the digital supply chain to end-users. Investors should identify the most salient risks and prioritise companies in their portfolio. 

“After understanding their risk exposure, investors should assess how their highest risk companies are identifying, preventing, mitigating and addressing their digital human rights issues and remedying any adverse impacts. Drawing from this process, investors can have an informed engagement with their investees.”

Elliot notes: “While these events have brought the debate around technology and human rights to the fore, digital human rights are not a new thing and investors have been engaging on this issue for many years.”

Adviser role

According to Elliot, advisers also have a role to play in investor engagement on digital human rights.

She says: “Education is a key role that financial advisers can play in this debate. There is a lot of confusion around what AI is and the potential risks and opportunities that it presents, and advisers can help their clients navigate these tricky concepts.

“Asking questions to draw out sustainability preferences is also key. A client may mention they are concerned about human rights and it’s then important to talk through what this means to them and whether digital human rights are encompassed in their views.”

Essink adds: “In our view, institutional investors need to engage on a broad set of environmental, social and governance issues, including the role of AI. 

“Financial advisers can translate this into clear advice for their clients as to which investment strategies best capture these engagement efforts in their fund management.”

Ima Jackson-Obot is deputy features editor at FT Adviser

Why we engage with companies on digital rights

Digital rights are a critical issue in today’s digitally connected world.

They fall within the category of human rights, which is one of our six priority themes for engaging with companies. 

As active owners we engage with our investee companies to ensure that their products and services do not cause harm or adversely affect human rights.

We believe companies that respect and uphold the rights of consumers will be more successful over the long term, making them more attractive as investments.

Increased use of technology highlights importance of digital rights

Individuals’ rights need to be considered, given that our reliance on technology increasingly poses risks to our data privacy in both our personal and professional lives.   

And while digital technologies offer many opportunities, they also bring concerns around issues such as online welfare, data governance, use of surveillance technologies, and algorithmic bias. 

Regulations and best practices are emerging to better protect consumers’ digital rights. Companies that are seen to be negligent around digital rights may lose consumers’ trust and be subject to legal and financial risks. 

These risks are very real.

For example, earlier this year Meta (parent company of Facebook and Instagram) was fined €390mn (£336mn) by EU regulators for its gathering of personal data used for targeted advertising.

It was also fined €1.2bn over the transfer of EU user data to the US. 

As financial penalties materialise, we expect to continue to hold companies to account for their handling of human rights-related issues.

We use our influence to encourage companies to adopt responsible digital rights policies and practices, and to follow best practice.

Engaging on the issue of digital rights can also help to ensure investee companies keep pace with regulatory change. 

Overall, our engagements can contribute towards promoting positive change and mitigating the risks in the digital economy.

Adapting engagement to changing digital landscape 

The digital world moves very quickly.

For example, the past year has witnessed the rapid take-up of AI tools such as ChatGPT, bringing a new set of possibilities but also new risks to consider. 

As active owners, we need to be nimble and ensure that we are engaging with companies as new kinds of challenges and opportunities emerge. 

When it comes to engaging with companies around AI, there are several areas we aim to address.

We seek to understand their AI strategies and risk management practices.

We engage with them to understand how the board and senior management oversee AI-related risks and ask them to establish and disclose their overarching principles for the responsible use of AI.

We also seek to understand how these principles are embedded throughout the organisation.

Engaging on digital rights in practice

Social media companies are at the forefront of changing societal expectations and an evolving regulatory landscape.

Their ability to help protect the rights of their users will be key to protecting their long-term sustainability.  

We have had several engagements with social media companies.

For example, our engagement with Meta has been developing since 2018, when we engaged on issues facing the company under the EU's General Data Protection Regulation.

Since then, we have had several engagements focused on Meta’s content moderation policies and practices, including meeting with a leader of the product policy team in 2022.

Separately to that, in 2022 we signed a letter asking Meta to clarify how it considers, develops and deploys AI.

The letter was sent in collaboration with the World Benchmarking Alliance and the Collective Impact Coalition.

The investor group then held a call with Meta’s director responsible for AI and the vice-president of civil rights, in which we gained more insight into the company’s plans to build board competence on AI and how it works cross-functionally to monitor AI. 

We have also been collaborating with a group of investors to engage the company on corporate governance, particularly around board structure.

And at Meta’s AGM this year we voted in support of a resolution asking for a third-party assessment of the human rights impact of the company’s targeted advertising policies and practices. 

Consistent and evolving discussions are essential for desired outcomes to be achieved.

We continue to engage investee companies on what we believe are the most material digital rights-related issues.

Katie Frame is social engagement lead at Schroders

Produced by Dionne Gibb, production editor at FT Adviser. (Photos via Envato Elements, and FT Fotoware)
