The rate at which the advanced AI landscape is progressing is blazing fast. But so are the risks that come with it.
The situation is such that it has become difficult for experts to foresee risks.
While most leaders are increasingly prioritizing GenAI applications over the coming months, they are also skeptical of the risks that come with it – data security concerns and biased outcomes, to name a few.
Mark Suzman, CEO of the Bill & Melinda Gates Foundation, believes that “while this technology can lead to breakthroughs that can accelerate scientific progress and boost learning outcomes, the opportunity is not without risk.”
Image by Author
Let Us Start With Data
Consider this – a famous Generative AI model creator states, “It collects personal information such as name, email address, and payment information when necessary for business purposes.”
Recent times have shown multiple ways it can go wrong without a guiding framework.
- Italy expressed concerns over unlawfully collecting personal data from users, quoting “no legal basis to justify the mass collection and storage of personal data for ‘training’ the algorithms underlying the platform’s operation.”
- Japan’s Personal Information Protection Commission also issued a warning for minimum data collection to train machine learning models.
- Industry leaders at HBR echo data security concerns and biased outcomes
As the Generative AI models are trained on data from almost all of the internet, we are a fractional part hidden in those neural network layers. This emphasizes the need to comply with data privacy regulations and not train models on users’ data without consent.
Recently, one of the companies was fined for building a facial recognition tool by scraping selfies from the internet, which led to a privacy breach and a hefty fine.
However, data security, privacy, and bias have all existed from pre-generative AI times. Then, what has changed with the launch of Generative AI applications?
Well, some existing risks have only become riskier, given the scale at which the models are trained and deployed. Let’s understand how.
Hallucination, Prompt Injection, and Lack Of Transparency
Understanding the internal workings of such colossal models to trust their response has become all the more important. In Microsoft’s words, these emerging risks are because LLMs are “designed to generate text that appears coherent and contextually appropriate rather than adhering to factual accuracy.”
Consequently, the models could produce misleading and incorrect responses, commonly termed hallucinations. They may emerge when the model lacks confidence in predictions, leading to the generation of less accurate or irrelevant information.
Further, prompting is how we interact with the language models; now, bad actors could generate harmful content by injecting prompts.
Accountability When AI Goes Wrong?
Using LLMs raises ethical questions about accountability and responsibility for the output generated by these models and the biased outputs, as is prevalent in all AI models.
The risks are exacerbated with the high-risk applications, such as in the healthcare sector – think of the repercussions of wrong medical advice on the patient’s health and life.
The bottom line is that organizations need to build ethical, transparent, and responsible ways of developing and using Generative AI.
If you are interested in learning more about whose responsibility it is to get Generative AI right, consider reading this post that describes how all of us can come together as a community to make it work.
As these large models are built on top of worldwide material, it is highly likely that they consumed creation – music, video, or books.
If the copyrighted data is used to train AI models without obtaining the necessary permission, crediting, or compensating the original creators, it leads to copyright infringement and can land the developers in serious legal trouble.
Image from Search Engine Journal
Deepfake, Misinformation And Manipulation
The one with a high chance of creating ruckus at scale is deepfakes—wondering what deepfake capability can land us into?
They are synthetic creations – text, images, or videos, that can digitally manipulate facial appearance through deep generative methods.
Result? Bullying, misinformation, hoax calls, revenge, or fraud – something that does not fit the definition of a prosperous world.
The post intends to make everyone aware that AI is a double-edged sword – it is not all magic that only works on initiatives that matter; the bad actors are also a part of it.
That is where we need to raise our guards.
Take the latest news of a fake video highlighting the withdrawal of one of the political personalities from the upcoming elections.
What could be the motive? – you might think. Well, such misinformation spreads like fire in no time and can severely impact the direction of the election process.
So, what can we do not fall prey to such fake information?
There are various lines of defense, let’s start from the most basic ones:
- Be skeptical and doubtful of everything you see around yourself
- Turn your default mode – “it might not be true,” rather than taking everything at face value. In short, question everything around you.
- Confirm the potentially suspicious digital content from multiple sources
Prominent AI researchers and industry experts such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, and Yuval Noah Harari have also voiced their concerns, calling for a pause on developing such AI systems.
There is a large looming fear that the race to build advanced AI, matching the prowess of Generative AI can quickly spiral out and go out of control.
Microsoft has recently announced that it will protect the buyers of its AI products from copyright infringement implications as long as they comply with guardrails and content filters. This is a significant relief and shows the right intent to take responsibility for the repercussions of using its products – which is one of the core principles of ethical frameworks
It will ensure that the authors retain control of their rights and receive fair compensation for their creation.
It is a great progress in the right direction! The key is to see how much it resolves the authors’ concerns.
So far, we have discussed the key ethical implications related to the technology to make it right. However, the one that stems from the successful utilization of this technological advancement is the risk of job displacement.
There is a sentiment that instills fear that AI will replace most of our work. Mckinsey recently shared a report about what the future of work will look like.
This topic requires a structural change in how we think of work and deserves a separate post. So, stay tuned for the next post, which will discuss the future of work and the skills that can help you survive in the GenAI era and thrive!
Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.