If you've scrolled through social media lately, you've probably noticed many … dolls.
Dolls are all over X and Facebook feeds. Instagram? Dolls. TikTok? You guessed it: dolls, and tutorials on how to make dolls. There are even dolls all over LinkedIn, probably the most serious and least funny member of the gang.
You can call it the Barbie AI treatment or the Barbie box trend. Or, if Barbie isn't your thing, you can go with AI action figures, action figure starter packs or the ChatGPT action figure trend. But whatever you call it, the dolls are everywhere.
And while they share some similarities (boxes and packaging that mimic Mattel's Barbie, personalized accessories, a plastic smile), they're all as different as the people who post them, with the exception of one decisive common feature: they're not real.
In the new trend, people are using generative AI tools like ChatGPT to reimagine themselves as dolls or action figures, complete with accessories. It has proven to be hugely popular, and not just with influencers.
Celebrities, politicians and big brands have all jumped in. Journalists reporting on the trend have made versions of themselves posing with microphones and cameras (though this journalist will spare you that). And users have created almost every notable figure you can imagine, from billionaire Elon Musk to actress and singer Ariana Grande.
According to tech media website The Verge, the trend actually started on the professional social networking site LinkedIn, where it was popular with marketers looking for engagement. As a result, many of the dolls you'll see out there are trying to promote a business or a side hustle. (Think "social media marketing doll" or "SEO manager doll.")
But it has since spread to other platforms, where everyone, it seems, is having a bit of fun finding out whether life in plastic really is fantastic. Still, according to several AI experts who spoke to CBC News, it's not necessarily harmless fun.
"It's still very much the Wild West when it comes to generative AI," said Anatoliy Gruzd, a professor and director of research for Toronto Metropolitan University's Social Media Lab.
"Most policy and legal frameworks haven't fully caught up with the innovation, leaving AI companies to decide how to use the personal data you provide."
Privacy concerns
The popularity of the doll-generating trend isn't surprising from a sociological point of view, says Matthew Guzdial, an assistant professor of computing science at the University of Alberta.
"This is the kind of internet trend we've had ever since we've had social media. Maybe back then it was things like a forwarded email or a quiz where you'd share the results," he told CBC News.
But as with any AI trend, there are some concerns about data use.
Generative AI in general presents considerable privacy challenges. As Stanford University's Institute for Human-Centered Artificial Intelligence (Stanford HAI) notes, privacy issues on the internet are not new, but AI is so "data hungry" that it magnifies the scale of the risk.
"If you provide an online system with very personal data about yourself, such as your face or your job or your favourite colour, you should do so with the understanding that this data isn't just useful for achieving the immediate result, like a doll," said Wendy Wong, a political science professor at the University of British Columbia who studies AI and human rights.
That data is fed back into the system to create future responses, said Wong.
In addition, there are concerns that "bad actors" can use data scraped online to target people, Stanford HAI states. In March, for example, Canada's Competition Bureau warned of a rise in AI-related fraud.
According to new research from TMU's Social Media Lab, around two-thirds of Canadians have tried using generative AI tools at least once. However, about half of the 1,500 people the researchers surveyed had little understanding of how these companies collect or store personal data, the report says.
With that in mind, Gruzd suggests caution when using these new apps. But if you do choose to experiment, he suggests looking in the settings for an option to opt out of having your data used for training or other third-party purposes.
"If no such option is available, you may want to reconsider using the app. Otherwise, don't be surprised if your likeness appears in unexpected contexts, such as online ads."
The environmental and cultural effects of AI
Then there's the environmental impact. CBC's Quirks & Quarks has previously reported on how AI systems are an energy-intensive technology with the potential to consume as much electricity as an entire country.
A study out of Cornell University estimates that training OpenAI's GPT-3 language model in Microsoft's U.S. data centres could evaporate 700,000 litres of clean fresh water, for example. Goldman Sachs has estimated that AI will drive a 160 per cent increase in data centre power demand.
The energy required to generate artificial intelligence leaves a considerable carbon footprint, but AI is also increasingly being used as a tool for climate management. CBC's Nicole Mortillaro breaks down where AI emissions come from and the innovative ways the technology is helping the planet.
The average ChatGPT query takes approximately 10 times more electricity than a Google search, by some estimates.
Even Sam Altman, CEO of OpenAI, commented on the popularity of the images, writing on X last month that the company had to temporarily introduce some limits while working on making the feature more efficient, because its graphics processing units were "melting."
"It's super fun seeing people love images in ChatGPT.
But our GPUs are melting.
We are going to temporarily introduce some rate limits while we work on making it more efficient. Hopefully won't be long!
ChatGPT free tier will get 3 generations per day soon."
While the AI-generated dolls flood our social media feeds, a counter-version by artists protesting the devaluation of their work is also spreading, under the hashtag #StarterPackNoAI.
It follows similar concerns raised about the last AI trend, in which users generated images of themselves in the style of Tokyo-based animation house Studio Ghibli, sparking a debate over whether AI steals the work of human artists.
Despite the concerns, Guzdial says these kinds of trends are positive, at least for AI companies trying to expand their user base. These models are extremely expensive to train and keep running, he said, but if enough people use them and come to rely on them, companies can raise their subscription prices.
"That's why these kinds of trends are so good for these companies, which are deep in the red."