Sam Altman comes out swinging at The New York Times


From the moment OpenAI CEO Sam Altman stepped onto the stage, it was clear this would not be a normal interview.

Altman and his chief operating officer, Brad Lightcap, stood awkwardly toward the back of the stage at a packed San Francisco venue that typically hosts jazz concerts. Hundreds of people filled the steep theater seats on Wednesday night to watch Kevin Roose, a columnist for The New York Times, and Platformer's Casey Newton record a live episode of their popular technology podcast, Hard Fork.

https://www.youtube.com/watch?v=CT63MVQN54O

Altman and Lightcap were the main event, but they came out on stage too early. Roose explained that he and Newton had planned to run through several headlines written about OpenAI in the weeks leading up to the event, ideally before OpenAI's executives were supposed to come out.

“This is more fun now that we’re here for this,” said Altman. Seconds later, the OpenAI CEO asked, “Are you going to talk about where you sue us because you don’t like user privacy?”

Just minutes into the program, Altman hijacked the conversation to talk about The New York Times’ lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman’s company improperly used its articles to train large language models. Altman seized in particular on a recent development in the case, in which lawyers representing The New York Times asked OpenAI to retain consumer ChatGPT and API customer data.

“The New York Times, one of the great institutions, truly, is taking a position that we should have to preserve our users’ logs even if they’re chatting in private mode, even if they’ve asked us to delete them,” said Altman. “Still love The New York Times, but that one we feel strongly about.”

For a few minutes, the OpenAI CEO pressed the podcasters to share their personal views on the New York Times lawsuit; they demurred, noting that as journalists whose work appears in The New York Times, they are not involved in the suit.

Altman and Lightcap’s feisty entrance lasted only a few minutes, and the rest of the interview proceeded, apparently, as planned. But the moment felt indicative of the inflection point Silicon Valley seems to be approaching in its relationship with the media industry.

In the last several years, numerous publishers have brought lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models threaten to devalue, and even replace, the copyrighted works produced by media institutions.

But the tides may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic scored a major victory in its legal fight with publishers. A federal judge ruled that Anthropic’s use of books to train its AI models was legal in some circumstances, which could have broad implications for other publishers’ lawsuits against OpenAI, Google, and Meta.

Perhaps Altman and Lightcap felt emboldened by the industry win heading into their live interview with the New York Times journalists. But these days, OpenAI faces threats from all directions, and that became evident throughout the night.

Mark Zuckerberg has recently been trying to recruit OpenAI’s top talent by offering them $100 million compensation packages to join Meta’s superintelligence lab, Altman revealed weeks ago on his brother’s podcast.

When asked whether the Meta CEO really believes in superintelligent AI systems, or whether it’s just a recruiting pitch, Lightcap quipped: “I think [Zuckerberg] believes he is superintelligent.”

Later, Roose asked Altman about OpenAI’s relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant for OpenAI, the two now compete in enterprise software and other domains.

“In any deep partnership, there are points of tension, and we certainly have those,” Altman said. “We’re both ambitious companies, so we do find some flashpoints, but I would expect it to be something where we find deep value for both sides for a very long time to come.”

OpenAI’s leadership today seems to spend a lot of its time fending off competitors and lawsuits. That may detract from OpenAI’s ability to address broader problems around AI, such as how to safely deploy highly intelligent AI systems at scale.

At one point, Newton asked OpenAI’s leaders how they think about recent stories of mentally unstable people using ChatGPT to go down dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot.

Altman said OpenAI is taking a number of steps to prevent these conversations, such as cutting them off early or directing users to professional services where they can get help.

“We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,” said Altman. In response to a follow-up question, the OpenAI CEO added, “However, for users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”
