Much has been said about the positives of Artificial Intelligence (AI), but the negatives are just as important to the discourse. Experts worried about the harms of AI have called for control mechanisms, and they now seem to have gotten some measure of them from the White House.
At a July meeting, U.S. President Joe Biden and the country's top AI companies, including Google, Meta, Microsoft, Amazon, and OpenAI, reached an agreement to help make AI safer for consumers. The agreement came at a time of considerable and legitimate concern about the technology. The AI companies made eight commitments after meeting with the president.
However, questions remain. Does this agreement go far enough? What are the enforcement mechanisms for the commitments? We’ll answer these questions and more in this article.
Overview of AI Reception and Concerns
Last November, ChatGPT launched to critical acclaim and greatly boosted the popularity of AI. Its ease of use and free access made it the first AI tool available to millions of people worldwide, and they loved the results. ChatGPT answered their most complex questions and even offered words of wisdom on request. So, what's not to love about AI?
Some industry experts think there are things not to love about AI, at least for now. The Future of Life Institute called for a pause on training the most advanced AI systems until experts and authorities learn more about them. Geoffrey Hinton, widely regarded as one of the godfathers of AI, quit his job at Google in May, warning that AI chatbots may soon be more intelligent than humans.
AI tools like ChatGPT and Bing can write convincingly human-like text and generate fascinating new images and other media. However, such content can also deceive people and spread disinformation, among other risks.
Tesla CEO Elon Musk, who is no stranger to AI, has been one of the loudest voices warning about its downsides. He has acknowledged that AI can be very beneficial, but argues it could be harmful without regulation.
Musk recently met with U.S. congressional leaders, including Senate Majority Leader Charles Schumer and House Speaker Kevin McCarthy, to enlighten lawmakers and discuss potential regulations. These discussions, among others, are expected to spur the U.S. Congress to write legislation enshrining safeguards into law.
Threats of AI Systems to Society

Undoubtedly, AI is a positive addition to the world, but it also raises near-term concerns about privacy, inequality, safety, bias, and security. As these systems become more powerful, the risks of misuse and accidents increase with them. Here are the most significant threats AI systems pose to society:
Disinformation: Deepfakes have existed for a long time, but they have usually been easy to spot. That may no longer hold with AI, which could increase the quantity and quality of fake media so drastically that people can no longer tell reality from simulation.
Insecurity: AI can enhance biosecurity, cybersecurity, and other aspects of warfare in unprecedented ways, which poses a significant threat to society if AI-powered security systems fall into the wrong hands.
Job Displacement: AI has begun to make some human roles obsolete as automation replaces workers across various sectors. This displacement could worsen, forcing a rethink of the social and fiscal fabric of economies.
Bias: AI systems may reproduce biases contained in their source material. Such biases may be sexism or racism, leading to societal harm.
Accidents: AI systems may fail in unpredictable ways and cause severe physical harm.
Details of the Agreement Between President Biden and AI Companies to Address the Threats

After President Biden met with the AI companies, the White House released a set of eight voluntary commitments that all parties agreed to. Here is a summary:
- Undergoing internal and external security testing (the latter by independent experts) of AI systems before public release.
- Sharing information on managing AI risks across the industry and with governments, academia, and civil society.
- Protecting proprietary and unreleased model weights by investing in cybersecurity and insider-threat safeguards.
- Allowing third-party discovery and reporting of vulnerabilities in their AI systems for prompt correction.
- Developing robust technical mechanisms, such as watermarking, to help users identify AI-generated content.
- Publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use.
- Prioritising research on the societal risks of AI systems, including protecting privacy and preventing harmful bias and discrimination.
- Developing and deploying advanced AI systems to help address society's greatest challenges, such as cancer prevention and climate-change mitigation.
Conclusion
While this agreement is a good first step, it's not set in stone: the commitments are voluntary, so AI companies face no legal liability for breaking them. But it may only be a matter of time before tighter regulations arrive. The White House said it is working on an executive order and pursuing bipartisan legislation for further AI regulation.
Until laws regulating AI are in place in the U.S. and other countries, AI companies remain responsible for adhering to the safety commitments they made.
Frequently Asked Questions
What Is the UK Doing About AI Safeguards?
The UK government has also shown concern about AI systems. In fact, the British government has announced that it is hosting the first global summit on AI safety this autumn.
Are AI’s Negatives Enough to Seek a Ban on the Technology?
Like other major innovations, AI has negatives, but the positives far outweigh the risks. Instead of seeking the technology’s ban, stakeholders should advocate for the responsible use of AI systems.
What Are the Penalties Against AI Companies That Violate the New Agreement?
At the moment, AI companies that default on the commitments face no punishment because the agreement is non-binding. However, that may soon change as new laws arrive.