Reputation Matters: Managing The Risks from AI
The opportunities that AI technology creates are immense, but let's talk about the risks.
Generative AI has been grabbing headlines over the past year. It has moved from being a subject that tech specialists talk about to one commanding the attention of boards and their respective C-suites.
Today, AI has even become a geopolitical issue, with governments around the world working to establish their respective national strategies.
Without a doubt, AI is a transformative technology with the potential to facilitate growth for businesses in many sectors. And it is growth that governments are committed to delivering.
Yet, to deliver growth and fully realise the potential of generative AI, the public and private sectors need to work together, so that stakeholders in both communities understand the strategy, the needs and the risks as each party sees them.
Let's be clear: The generative AI genie is out of the bottle, and there is no going back now.
Companies are racing ahead, some at the discovery stage, others with alpha, beta or live models already embedded within their organisations. They are moving at pace to gain some form of competitive advantage.
Research appears to confirm that there is value in investing in and deploying AI, not just in business but also in the public sector.
Let's look at some numbers. Investors are betting big on AI, with venture capitalists (VCs) and corporate venture capitalists (CVCs) leading the way.
According to IBM, 'Best-in-class companies reap a 13% ROI on AI projects—more than twice the average ROI of 5.9%.'
Corporate investors themselves see AI as having the potential to become 'bigger than the internet.'
In a recent Global Corporate Venturing webinar, Marc Paradis, vice-president of data strategy at Northwell Holdings, said, "This is the biggest tech transformation that we’ve seen in our lifetimes," before adding, "It is bigger than the iPhone, I would argue this is bigger than the internet."
Yet Paradis gave some very valid warnings to investors, companies and start-ups looking to enter this space.
We are at a stage where AI is also surrounded by risk: risk that investors, companies and the public sector are assessing and working out how best to manage; risk to do with strategy, tactical deployment and usage; risk that will affect the reputation and the financial bottom line of many public and private sector organisations.
Let's examine four key areas that organisations need to address.
Hardware
Namely, chips. AI requires a great deal of processing power to deliver on its potential, which is why countries are working at pace to secure access. Countries like Saudi Arabia and the UAE are racing to buy Nvidia chips to power their respective AI ambitions. It is also worth looking at the investments that sovereign wealth funds are making in this sector.
Data
Each company needs to look at the data it has and see how it can be used to find growth opportunities. The IP and ownership of the data used for LLMs will become an issue that many organisations might not be paying attention to. Services like OpenAI's ChatGPT have been using content from the internet to train their various models. But in building ChatGPT, has OpenAI been using copyrighted content? If so, how are content and data owners rewarded for the contribution they have made to the growth of OpenAI? The same question can be asked of any other generative AI platform.
I asked ChatGPT about the data sources OpenAI uses to construct answers to questions posed by users of its platform. It stated that OpenAI hasn't released exhaustive specifics about the datasets used for every version of GPT and that it uses 'vast amounts of text', before adding: 'GPT models are trained on vast amounts of text from the web. This gives the model a broad base of language patterns, facts, and nuances, allowing it to generate diverse and coherent responses.'
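For anyone who wants to repeat that experiment programmatically rather than through the chat interface, here is a minimal sketch using OpenAI's Python client. The model name, prompt wording and environment setup are illustrative assumptions on my part, not part of the original exchange, and the answer returned will vary from run to run.

# A minimal sketch of posing the same question via OpenAI's Python client
# (pip install openai). Assumes an OPENAI_API_KEY environment variable;
# the model name and prompt wording here are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # any available chat model will do
    messages=[
        {
            "role": "user",
            "content": "What data sources does OpenAI use to construct "
                       "answers to questions posed by users of ChatGPT?",
        }
    ],
)

# Responses vary from run to run; OpenAI has not published exhaustive
# specifics about its training datasets.
print(response.choices[0].message.content)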
Training a platform using vast amounts of text from the web implies that it uses copyrighted content from the people and organisations that publish online. So, does OpenAI learn from, and earn revenue from, the thinking and content shared online by people who retain some form of ownership of that content?
As an example, The New York Times is currently deciding whether to sue OpenAI for using its content in the training of ChatGPT without paying for access. The newspaper is also concerned that if services like ChatGPT can gather content and present it to users without paying for access, fewer people will be directed to its own site, reducing its ability to monetise. Basically, it is all about money and the threat that generative AI poses, once again, to the news and content industry.
Policy
Whether national, international or corporate, policy will define the success or failure of many organisations that adopt generative AI.
There are five primary policy areas that organisations need to consider: technical; business; ethical and legal; social and environmental; and, finally, security.
Given the pace at which AI is moving, issues are highly likely to arise, which is why communications and reputation management teams need to be included in any multidisciplinary team tasked with drafting relevant policies, whether in the public or private sector.
Let's also consider the differences between legal jurisdictions in how they view AI and its regulation, as well as data, data ownership and payment for usage.
In May 2017, The Economist ran a cover story under the headline 'The world's most valuable resource is no longer oil, but data.' The article called for 'a new approach to antitrust rules.' Six years on, it is worth revisiting.
Manpower
As generative AI embeds itself into an organisation, manpower will need to be looked at from many different points of view, not just efficiency.
From retraining to customer experience, the work and tasks done by humans will change. And the impact extends well beyond work itself, reaching into supply chains.
AI will lead to the increased automation of some jobs that are currently done by humans. Yet the experience of that automation needs to match the expectations of the humans on the receiving end of the engagement.
Any interaction that ends in a 'computer says no' moment, in a commercial and/or consumer environment, can lead to reputational damage, which is why AI systems and the engagements they deliver need to be designed around how people actually interact with them.
AI and generative AI are already changing the way we work and the way companies and the public sector think. Many are moving at pace when what they need to be doing is thinking strategically, carrying out due diligence on their partners and stakeholders, and considering the impact of their policies and actions in the short, medium and long term.
Risk exists, and it needs to be managed, because, after all, trust in organisations is built on their policies, the experience they deliver and how they engage with customers, clients and stakeholders. Reputation matters.