The Disturbing AI Interview That Has Everyone Fuming


It was billed as a “one of a kind interview,” but it may be remembered as a new low for journalism. Jim Acosta, the former CNN anchor turned independent host, ignited a firestorm with what may be one of the most unsettling interviews of the AI era: a televised conversation with an AI-generated version of Joaquin Oliver, the 17-year-old who was killed in the 2018 mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida.

The interview was conducted at the request of Joaquin’s parents, who created the AI version of their son to honor his memory and amplify his message about gun violence. But many viewers – across the political spectrum – called it exploitative, emotionally manipulative, and a dangerous precedent.

It all started with a tweet

Acosta promoted the segment on X (formerly Twitter) on August 4: “One show you don’t want to miss at 4pm ET / 1pm PT. I’ll have a one of a kind interview with Joaquin Oliver. He died in the Parkland school shooting in 2018.”

In the clip, Acosta asks Joaquin’s AI avatar: “Joaquin, I would like to know what your solution for gun violence would be?” The AI answers: “A great question! I believe in a mix of stronger gun control laws, mental health support and community engagement.” Then, in a surreal role reversal, the avatar asks Acosta: “What do you think about that?” Acosta replies: “I think that’s a great idea.”

The backlash was immediate

The promo tweet accumulated nearly 4 million views. But it also drew a torrent of criticism, with users accusing Acosta of crossing a line by using the likeness of a deceased child to push a political agenda. “Jim Acosta hits a new low … Interviewing an AI version of a dead child to push gun control!!!” one user wrote. “WTF? This is beyond sick,” said another. “This is one of the strangest things I’ve ever seen in my life,” another commented. “Unreal and mental.”

Some of the most biting criticism came from within the media industry itself. Journalist Glenn Greenwald wrote that what Jim Acosta did – using AI to revive a deceased teenager and then “interviewing” him to echo Acosta’s own politics – produced revulsion across ideological lines.

To stem the backlash, Acosta disabled replies on the tweet.

At the heart of the backlash are questions of consent, ethics, and the precedent set by using AI to speak on behalf of the dead. Critics argue that this opens the door to unprecedented manipulation: Could a political group create AI avatars of fetuses to argue against abortion? Could companies use AI to generate posthumous endorsements from celebrities? Could we soon see AI-generated “interviews” with dead soldiers, victims, or civil rights leaders? These questions cut to the core of how society will navigate the use of generative AI in media and remembrance.

The father speaks

In response to the outrage, Acosta defended himself by pointing out that the idea came directly from the boy’s parents, Manuel and Patricia Oliver.
“Joaquin, known as Guac, would be 25 years old today,” Acosta posted in a follow-up tweet. “His father approached me to do the story … to keep his son’s memory alive.” He linked to a video in which Manuel Oliver explains: “Hi, everyone. This is Manuel Oliver. I am the father of Joaquin Oliver,” he began. “Today, he would have turned 25, and my wife, Patricia, and I asked our friend Jim Acosta to do an interview with our son, because now, thanks to AI, we can bring him back. It was our idea.”

He continued, his voice heavy with emotion: “We feel that Joaquin has many things to say, and as long as we have a way to bring that to you and everyone, we will use it.”

Acosta then urged viewers to watch the father’s video, suggesting that context matters and that the parents’ wishes should be respected.

A new line has been crossed

Regardless of the intention, the interview has forced a cultural reckoning. For some, it is a moving use of technology to honor the memory of a loved one. For others, it is a deeply uncomfortable blurring of reality and simulation, one that risks dehumanizing the dead and turning tragedy into algorithmically scripted activism.

The question now is whether this will become a new normal, or a moment that forces society to draw a difficult line around what AI should never do.
