Showing posts with label AI. Show all posts

Monday, March 30, 2026

Brief Hallucinations (Andrew McCabe and Allison Gill; Unjustified podcast)

  • Hackers linked to Iran have breached FBI Director Kash Patel’s personal emails. 
  • Attorney General Pam Bondi sent a Jack Smith progress memo to Congress outlining Trump's motive for illegally retaining classified documents. 
  • A top deputy to U.S. Attorney Jeanine Pirro acknowledged in a closed-door hearing this month that the Justice Department did not have evidence of wrongdoing in its criminal investigation of Fed Chair Jerome Powell. 
  • Legal experts are stunned after a federal judge catches DOJ lawyers using artificial intelligence to write briefs. 
  • Plus listener questions.

Thursday, May 29, 2025

The Hopkins Forum: Can the U.S. Outpace China in AI Through Chip Controls? (John Donvan; Open to Debate podcast)

We are excited to announce the second debate of The Hopkins Forum, a partnership between Open to Debate and Johns Hopkins University’s Stavros Niarchos Foundation (SNF) Agora Institute.

The AI revolution is underway, and the U.S. and China are racing to the top. At the heart of this competition are semiconductors—especially advanced GPUs that power everything from natural language processing to autonomous weapons. The U.S. is betting that export controls can help check China’s technological ambitions. But will this containment strategy work—or could it inadvertently accelerate China’s drive for self-sufficiency? Those who think chip controls will work argue that restricting China’s access gives the U.S. critical breathing room to advance AI safely, set global norms, and maintain dominance. Those who believe chip controls are inadequate, or could backfire, warn that domestic chipmakers, like Nvidia and Intel, also rely on sales from China. Cutting off access could harm U.S. competitiveness in the long run, especially if other countries don’t fully align with U.S. policy.

As the race for AI supremacy intensifies, we debate the question: Can the U.S. Outpace China in AI Through Chip Controls?
 

Saturday, April 6, 2024

State-backed cyber groups will use AI to disrupt elections in the US, South Korea and India, Microsoft warns

There was also increased use of AI-generated TV news anchors, a tactic also used by Iran, with the "anchor" making unsubstantiated claims about Lai's private life, including fathering illegitimate children. 

Microsoft said the news anchors were created by the CapCut tool, which is developed by Chinese company ByteDance, the owner of TikTok...

Monday, September 4, 2023

Will ChatGPT Do More Harm Than Good? (Open to Debate podcast)

It’s poised to “change our world.” That’s according to Bill Gates, referring to ChatGPT, the advanced AI chatbot that seems to be all the rage. The tool, developed by OpenAI and backed by Microsoft, the company Gates co-founded, takes questions from users and produces human-like responses. 

“GPT” stands for “Generative Pre-trained Transformer,” which describes how the underlying artificial intelligence is designed and trained. Yet despite the chatbot’s swelling popularity, it is not without controversy. Everything from privacy and ethical questions to concerns about the data it draws on has some worried about the effects it will ultimately have on society. 

Its detractors fear job losses, a rise in disinformation, and even long-term damage to humans’ capacity for reason and writing. Its advocates tout the advantages ChatGPT will inevitably lend organizations, its versatility and iterative ability, and the depth and diversity of the data it draws from. Against this backdrop, we debate the following question: Will ChatGPT do more harm than good?




Thursday, July 1, 2021

Weapons with minds of their own (Center for Investigative Reporting; Reveal podcast)

The future of warfare is being shaped by computer algorithms that are assuming ever greater control over battlefield technology. Will this give machines the power to decide who to kill?

The United States is in a race to harness gargantuan leaps in artificial intelligence to develop new weapons systems for a new kind of warfare. Pentagon leaders call it “algorithmic warfare.” But the push to integrate AI into battlefield technology raises a big question: How far should we go in handing control of lethal weapons to machines?

We team up with The Center for Public Integrity and national security reporter Zachary Fryer-Biggs to examine how AI is transforming warfare and our own moral code.

In our first story, Fryer-Biggs and Reveal’s Michael Montgomery head to the U.S. Military Academy at West Point. Sophomore cadets are exploring the ethics of autonomous weapons through a lab simulation that uses miniature tanks programmed to destroy their targets.

Next, Fryer-Biggs and Montgomery talk to a top general leading the Pentagon’s AI initiative. They also visit the legendary hacker conference known as DEF CON and hear from technologists campaigning for a global ban on autonomous weapons.

Machines are getting smarter, faster, and better at figuring out who to kill in battle. But should we let them?


Sunday, March 21, 2021

How to 'Futureproof' Yourself In An Automated World (Terry Gross, Fresh Air podcast)

'New York Times' tech columnist Kevin Roose says we've been approaching automation all wrong. "What we should be teaching people is to be more like humans, to do the things that machines can't do," he says. We talk about misconceptions about A.I., how algorithms decide who gets government assistance, and which jobs are less likely to be automated. His new book is 'Futureproof.'






