OpenAI researchers warned the board about an AI discovery that could threaten humanity
Miscellaneous / November 23, 2023
A secret project called Q* could have been one of the reasons for Sam Altman's dismissal.
The day before OpenAI CEO Sam Altman was ousted, several of the company's staff researchers wrote a letter to the board of directors warning of the discovery of a powerful artificial intelligence that, they said, "could threaten humanity." Reuters reports this, citing its own sources.
The letter was cited as one factor in the board's long list of grievances that led to Altman's dismissal. Reuters was unable to speak with the letter's authors or obtain comment from OpenAI. However, after the request for comment, the company acknowledged in an internal message to employees the existence of a project called Q* (pronounced Q-Star).
Some OpenAI employees believe that Q* could be a breakthrough in the pursuit of so-called artificial general intelligence (AGI). OpenAI defines AGI as an autonomous system that outperforms humans at most economically valuable work.
So far, Q* can only solve math problems at an elementary-school level, according to one source, but its success on such tests makes researchers very optimistic about its prospects.
What sets such an AI apart is its ability to set goals, break complex problems into smaller ones, work across a wide range of tasks, and find solutions that account for context from different domains. Simply put, such an intelligence could, in theory, replace a highly qualified specialist with deep knowledge, skills, and extensive experience.
Researchers see mathematics as a frontier for generative AI. Today's models are good at writing and translation by statistically predicting the next word, which is why answers to the same question can vary greatly. But conquering mathematics, where there is only one correct answer, would imply that AI has broader reasoning abilities resembling human intelligence. That could be applied, for example, to novel scientific research, experts say.
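As a toy illustration only (not tied to OpenAI's systems, and with made-up words and probabilities), the sketch below shows what "statistically predicting the next word" means and why sampled answers to the same question can differ between runs:

```python
import random

# Toy sketch: a language model assigns a probability to each candidate
# next word given the text so far. The distribution below is invented
# purely for illustration.
next_word_probs = {
    "Paris": 0.62,   # hypothetical probabilities for the prompt
    "London": 0.21,  # "The capital of France is ..."
    "Lyon": 0.12,
    "blue": 0.05,
}

# Greedy decoding always picks the most likely word, so the answer is fixed.
greedy_choice = max(next_word_probs, key=next_word_probs.get)

# Sampling draws a word according to the probabilities, so repeated runs
# can produce different answers to the same question.
words, weights = zip(*next_word_probs.items())
sampled_choice = random.choices(words, weights=weights, k=1)[0]

print("greedy:", greedy_choice)
print("sampled:", sampled_choice)
```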
In their letter to the board of directors, the researchers highlighted both the power of the AI and its potential dangers. They also pointed to the work of a team of "AI scientists," whose existence was confirmed by multiple sources. That team is exploring how to optimize existing AI models to improve their reasoning and, ultimately, to have them perform scientific work, one of the people said.
Altman led the effort that made ChatGPT one of the fastest-growing software applications in history and secured from Microsoft the investment and computing resources needed to move closer to AGI. Shortly before his dismissal, at a summit of world leaders in San Francisco, he spoke of serious achievements that were just around the corner.
Four times in the history of OpenAI, the most recent just in the last couple of weeks, I have had the chance to be in the room when we push back the veil of ignorance and advance the frontier of discovery, and getting to do that is the professional honor of a lifetime.
Sam Altman
A day later, the board fired Altman.
Perhaps one day all these decisions and events will form the basis of a book, an entire film, or even a history textbook.
Also read about AI 🤖🧐🤖
- 7 ChatGPT alternatives
- People from Apple presented the AI Pin, a screenless smartphone replacement based on ChatGPT
- 8 myths about artificial intelligence that even programmers believe