Sam Altman’s goal for ChatGPT to remember ‘your whole life’ is both exciting and disturbing

OpenAI CEO Sam Altman laid out a sweeping vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.
When one attendee asked how ChatGPT could become more personalized, Altman replied that he ultimately wants the model to document and remember everything in a person's life.
The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context,” he described.
“Your company just does the same thing for all your company’s data,” he added.
Altman may have a data-driven reason to think this is ChatGPT’s natural future. In the same discussion, when asked about cool ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then use “complex prompts” against that data.
Moreover, with ChatGPT’s memory options, which can use previous chats and memorized facts as context, he said one trend he’s noticed is that young people “don’t really make life decisions without asking ChatGPT.”
“A gross oversimplification is: older people use ChatGPT as, like, a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”
It’s not a big leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents Silicon Valley is currently trying to build, that’s an exciting future to think about.
Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel needed for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.
But the scary part? How much should we trust a Big Tech for-profit company to know everything about our lives? These are companies that don’t always behave in model ways.
Google, which began with the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week was randomly discussing a South African “white genocide” when people asked completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising the team had fixed the tweak that caused the problem.
Even the best, most reliable models still hallucinate from time to time.
So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, that’s also a situation ripe for misuse.