OpenAI CEO Sam Altman laid out an expansive vision for the future of ChatGPT at an event hosted by the VC firm Sequoia earlier this month.
When asked by an attendee how ChatGPT could become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.
The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus all your data from other sources. And your life just keeps appending to the context,” he described.
“Your company just does the same thing for all your company’s data,” he added.
Altman may have data-driven reasons to think this is the natural future of ChatGPT. In the same discussion, when asked about interesting ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then run complex prompts against that data.
Moreover, with ChatGPT’s memory options (which can draw on previous conversations and memorized facts as context), he said one trend he has noticed is that young people “don’t really make life decisions without asking ChatGPT.”
“A gross oversimplification is: older people use ChatGPT as a Google replacement,” he said. “People in their 20s and 30s use it as a life advisor.”
It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents Silicon Valley is currently trying to build, that’s an exciting future to think about.
Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel needed for a wedding abroad and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.
But the scary part? How much should we trust a big, for-profit tech company to know everything about our lives? These are companies that don’t always behave in exemplary ways.
Google, which began life with the motto “Don’t be evil,” lost a lawsuit in the US that accused it of anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok was randomly bringing up a South African “white genocide” when people asked it completely unrelated questions. Many observers took the behavior to mean deliberate manipulation of its response engine at the command of its South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising that the team had fixed the tweak that caused the problem.
Even the best, most reliable models still simply make things up from time to time.
So an all-knowing AI assistant could help our lives in ways we can only begin to imagine. But given Big Tech’s long history of questionable behavior, it is also a situation ripe for misuse.