
This Week in AI: Can we (and could we ever) trust OpenAI?


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.

This week in AI, OpenAI launched discounted plans for nonprofit and education customers and drew back the curtain on its most recent efforts to stop bad actors from abusing its AI tools. There's not much to criticize there, at least not in this writer's opinion. But I will say that the deluge of announcements seemed timed to counter the company's recent bad press.

Let’s start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson’s. Johansson later released a statement saying that she hired legal counsel to inquire about the voice and get exact details about how it was developed — and that she’d refused repeated entreaties from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post implies that OpenAI didn’t in fact seek to clone Johansson’s voice and that any similarities were accidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It’s a tad suspect.

Then there’s OpenAI’s trust and safety issues.

As we reported earlier in the month, OpenAI's since-dissolved Superalignment team, responsible for developing ways to govern and steer "superintelligent" AI systems, was promised 20% of the company's compute resources but only ever (and rarely) received a fraction of that. This (among other reasons) led to the resignation of the team's two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI's chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company's projects and operations. But it staffed the committee with company insiders, including Altman, rather than outside observers. This comes as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grow daily (see: its deals with news publishers). Few corporations, if any, are worthy of trust. But OpenAI's market-disrupting technologies make these breaches of trust all the more troubling.

It doesn’t help matters that Altman himself isn’t exactly a beacon of truthfulness.

When news of OpenAI’s aggressive tactics toward former employees broke — tactics that entailed threatening employees with the loss of their vested equity, or the prevention of equity sales, if they didn’t sign restrictive nondisclosure agreements — Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman’s signature is on the incorporation documents that enacted the policies.

And if former OpenAI board member Helen Toner, one of the directors who attempted to remove Altman from his post late last year, is to be believed, Altman has withheld information, misrepresented things that were happening at OpenAI and in some cases outright lied to the board. Toner says that the board learned of the release of ChatGPT through Twitter, not from Altman; that Altman gave inaccurate information about OpenAI's formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate other board members into pushing Toner off the board.

None of it bodes well.

Here are some other AI stories of note from the past few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make faking a politician’s statement fairly trivial.
  • Google’s AI Overviews struggle: AI Overviews, the AI-generated search results that Google started rolling out more broadly earlier this month on Google Search, need some work. The company admits this — but claims that it’s iterating quickly. (We’ll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, the co-founder of startup accelerator Y Combinator, brushed off claims that Altman was pressured to resign as president of Y Combinator in 2019 due to potential conflicts of interest. (Y Combinator has a small stake in OpenAI.)
  • xAI raises $6B: Elon Musk’s AI startup, xAI, has raised $6 billion in funding as Musk shores up capital to aggressively compete with rivals including OpenAI, Microsoft and Alphabet.
  • Perplexity’s new AI feature: With its new capability Perplexity Pages, AI startup Perplexity is aiming to help users make reports, articles or guides in a more visually appealing format, Ivan reports.
  • AI models’ favorite numbers: Devin writes about the numbers different AI models choose when they’re tasked with giving a random answer. As it turns out, they have favorites — a reflection of the data on which each was trained.
  • Mistral releases Codestral: Mistral, the French AI startup backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. But it can’t be used commercially, thanks to Mistral’s quite restrictive license.
  • Chatbots and privacy: Natasha writes about the European Union’s ChatGPT taskforce, and how it offers a first look at untangling the AI chatbot’s privacy compliance.
  • ElevenLabs’ sound generator: Voice cloning startup ElevenLabs introduced a new tool, first announced in February, that lets users generate sound effects through prompts.
  • Interconnects for AI chips: Tech giants including Microsoft, Google and Intel — but not Arm, Nvidia or AWS — have formed an industry group, the UALink Promoter Group, to help develop next-gen AI chip components.


