The darkest side of AI is "deepfakes," according to Microsoft
Posted: Mon Jan 06, 2025 4:46 am
The increasingly ubiquitous nature of artificial intelligence (AI) is causing as much excitement as concern. The frenetic pace of the technology's development has many fearing that the dystopian plots of countless science fiction films will turn ominously real. But what is the greatest danger lurking within AI? Brad Smith, president of Microsoft, believes the biggest hidden risk of the technology is "deepfakes": content that is as disturbingly realistic as it is shamelessly false.
Speaking to US lawmakers in Washington to discuss AI regulation, Smith stressed the need to take appropriate measures to ensure that people know clearly when a photo or video is real or has been generated by artificial intelligence, in order to prevent the technology from being misused.
" We are going to have to address the problems posed by deepfakes , particularly when such deepfakes are embedded in the cyber operations of foreign forces. These are the kinds of activities that are already being implemented by the governments of Russia, China and Iran," Smith warned.
"We need to take steps to protect against the alteration of legitimate content with the intent to deceive or defraud people through the use of AI," said the Microsoft president .
Smith also urged lawmakers to create new rules to safeguard national security. “A new generation of export controls is needed to ensure that AI models are not stolen or used in ways that violate U.S. export control requirements,” he said.
It is necessary to ensure that AI remains under human control
For several weeks, Washington lawmakers have been debating the most appropriate laws to keep AI on a tight leash in a context that has simultaneously allowed a good number of companies to flourish in the heat of this technology (many of which are based in the United States).
Last week, Sam Altman, CEO of OpenAI, the company behind ChatGPT, told the United States Senate that AI could potentially interfere with the integrity of electoral processes, making it imperative to regulate the technology.
Altman also called for global cooperation and the introduction of incentives to ensure compliance with security standards directly related to AI.
Speaking in Washington yesterday, Smith urged people to take responsibility for the potential chaos caused by AI and to protect critical infrastructure from this technology. “It is essential to ensure that machines are always subject to effective human oversight, and that those who design and operate them are accountable to others. In short, we must ensure that AI remains under human control,” Smith said.
Smith also urged developers of powerful AI models to monitor the use of their technology and provide public reporting on machine-generated content. This would ultimately help identify fake content and promote transparency, he said.
The US Congress has several proposals on the table that focus on regulating AI that could endanger people’s lives or livelihoods (in medicine and finance, for example). In addition, lawmakers are considering introducing rules to ensure that AI is not used to promote discrimination or violate civil rights.