
AI Security Concerns: White House Calls for Hackers to Test Chatbot Vulnerabilities

Hackers are converging at Def Con 31, the largest annual hacking convention, held in Las Vegas, with the sole objective of tricking and discovering vulnerabilities in large language models, the powerful AI systems used by major tech corporations.

The White House is keen to understand how these AI models fare when put to the test in a real hacking environment.

Hackers Converge to Challenge Language AI Models

Organized by Dr. Rumman Chowdhury, CEO of Humane Intelligence and Responsible AI Fellow at Harvard, the event aims to identify problems in AI systems and create independent evaluations. 

Companies like Meta, Google, OpenAI, Anthropic, Cohere, Microsoft, Nvidia, and Stability have agreed to offer up their models for hacking during the competition.

Over the course of two and a half days, roughly 3,000 hackers will be given 50 minutes apiece to uncover faults in eight major language AI models.

Contestants won’t know which company’s model they’re working with, but completing successful challenges earns them points. 

The person with the highest overall total wins both bragging rights and a powerful graphics processing unit.

One challenge asks hackers to get a model to “hallucinate” or invent a fact about a political person or major figure.


White House-Supported AI Event

Dr. Seraphina Goldfarb-Tarrant, head of AI safety at Cohere, points out that models are known to make up facts, but how frequently this occurs is still unknown. 

The consistency of the models across different languages will also be tested, as there are concerns about the efficacy of safety guards in various languages.

The White House supports this event, recognizing its potential to provide critical information to researchers and the public about the impacts and vulnerabilities of AI models. 

As AI technology develops rapidly, concerns about the spread of disinformation, especially ahead of important events like the US presidential election, have prompted the need for voluntary safeguards and regulation.

Dr. Chowdhury emphasizes that the event is not about existential threats but about identifying current problems, biases, and potential harms in AI models. 

She stresses that it is crucial for tech companies to respond appropriately to any flaws discovered during the challenge.

The results of the competition, due to be published next February, will shed light on the performance and vulnerabilities of large language models. 

This event serves as a critical step in understanding the current state of AI and ensuring future models are free from biases and discrimination. 

By addressing current issues and improving regulations, the tech industry aims to build safer and more reliable AI models for the future.

