Much of the focus in generative AI has been on the text-based interfaces used to generate text, images and more. The next wave appears to be voice, and it is moving quickly. In the latest development, Google announced today that it will be adding Chirp 3, its HD text-to-speech model, to its Vertex AI development platform starting next week.
Last week, Google quietly announced that Chirp 3 would feature 8 new voices in 31 languages. Use cases for the platform include building voice assistants, creating audiobooks, developing support agents and adding voiceovers to videos. The news was announced at an event at Google DeepMind's offices in London.
Google's efforts come at the same time as others are also pushing ahead with their own voice AI work. Last week, Sesame, the startup behind the viral, strikingly realistic AI voices "Maya" and "Miles", announced it was opening its model to developers to build their own customized applications and services on top of its technology.
Notably, there will be usage restrictions on Chirp 3 in an attempt to keep a lid on misuse. "We are just working through some of these things with our safety teams," said Thomas Kurian, Google Cloud CEO, at a press event today.
ElevenLabs is one of the leading startups that have raised hundreds of millions of dollars in funding to expand their work in AI voice services.
The news brings Chirp 3 into the same stable as the newest versions of Gemini, Google's flagship LLM, which are currently in testing, as well as its Imagen image generation model and its pricey Veo 2 video generation tool.
It remains to be seen whether what Google releases with Chirp 3 will be as "realistic" as some of the other efforts to create "human" voices (Sesame's work stands out in particular). But as Demis Hassabis, DeepMind's CEO, pointed out, this remains a marathon, not a sprint.
"In the near term … this idea that [AI is] a silver bullet for everything in the next couple of years, I don't see that happening yet. I think we are still a few years away from something like AGI," he said. "It will change things … over the next decade, so the medium to longer term. One of those interesting moments in time."
Google launched Vertex AI in 2021 as a platform for developers to build machine learning cloud services. That was, of course, well before the explosion of interest in AI, and especially generative AI, that came with the launch of OpenAI's GPT services.
Since then, the company has doubled down on Vertex AI, in part as it plays catch-up with other hyperscalers like Microsoft and Amazon, which are also building generative AI tools for developers. In addition to building generative AI applications on top of Gemini, developers can use Vertex AI to label data, train and tune models, and deploy models to production. It will be interesting to see whether it moves to expand its walled garden to models beyond those made by Google itself.
Google has been building "Chirp" voice services for years, going back to using the name as a codename for its early efforts to compete against Amazon's Alexa service.