- ChatGPT's o3 model scored 136 on the Mensa Norway IQ test and 116 on a custom offline test, outscoring most humans
- A new survey found that 25% of Gen Z believe AI is already conscious, and more than half think it will happen soon
- Both the jump in IQ scores and the growing belief in AI consciousness have happened very quickly
OpenAI's new flagship model, dubbed o3, scored an IQ of 136 on the Mensa Norway test, higher than roughly 98% of humanity; not bad for a glorified autocomplete. In less than a year, AI models have become remarkably complex, flexible and, in some ways, intelligent.
The jump is so striking that it might tempt you to think AI is well on its way to becoming Skynet. According to a new EduBirdie survey, 25% of Gen Z believe AI is already self-aware, and more than half think chatbots will soon develop emotions and possibly demand the right to vote.
There are some caveats to consider when it comes to the IQ test. The Mensa Norway test is public, which means it's technically possible the model saw the questions or answers in its training data. So researchers created a new IQ test that is completely offline and kept out of reach of training data.
On that test, which was designed to match the difficulty of the Mensa version, the o3 model scored 116. That's still high.
That puts o3 in the top 15% of human intelligence, hovering somewhere between “sharp grad student” and “annoyingly clever trivia night regular.” No feelings. No consciousness. But logic? It has that in spades.
Compare that to last year, when no AI tested above 90 on the same scale. In May of last year, the best AI still struggled with rotating triangles. Now, o3 sits comfortably on the right-hand side of the bell curve, among the brightest of humans.
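If you're curious how those IQ numbers translate into percentiles, here's a quick back-of-the-envelope sketch. It assumes the conventional IQ scale (mean of 100, standard deviation of 15) and a normal distribution; the function name is just for illustration.

```python
# Rough sketch: map an IQ score to a population percentile,
# assuming IQ ~ Normal(mean=100, sd=15).
from math import erf, sqrt

def iq_percentile(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Fraction of the population expected to score below the given IQ."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

for score in (136, 116, 90):
    p = iq_percentile(score)
    print(f"IQ {score}: above ~{p:.1%} of people (top ~{1 - p:.1%})")

# IQ 136 -> above ~99.2% of people; IQ 116 -> above ~85.7% (top ~14.3%),
# which is where the "98% of humanity" and "top 15%" figures come from.
```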
And there's now a crowd on that curve. Claude has inched up. Gemini scored in the 90s. Even GPT-4o, the baseline default model for ChatGPT, is only a few IQ points below o3.
Still, it's not just that these AIs are getting smarter. It's that they're getting smarter fast. They improve the way software does, not the way humans do. And for a generation raised on software, that's an unsettling kind of growth.
I don't think consciousness means what you think it means
For those raised in a world with Google, a Siri in their pocket and an Alexa on the shelf, AI means something different from its strictest definition.
If you came of age during a pandemic, when most conversations were mediated through a screen, an AI companion probably doesn't feel very different from a Zoom class. So it's probably not a shock that, according to EduBirdie, about 70% of Gen Z say “please” and “thank you” when talking to AI.
Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. Around 20% share sensitive workplace information with it, including contracts and colleagues' personal details, and about a quarter use it to come up with funny Slack replies.
Many of the survey's respondents rely on AI in a range of social situations, going well beyond asking it what to say. One in eight already talk to AI about workplace drama, and one in six have used AI as a therapist.
If you rely on AI that heavily, or feel comfortable enough to treat it as a friend (26%) or even a romantic partner (6%), the idea that AI is conscious starts to feel less extreme. The more you treat something like a person, the more it feels like one. It answers questions, remembers things, and even mimics empathy. And now that it's outscoring most humans on IQ tests, the philosophical questions follow naturally.
But intelligence is not the same as consciousness. An IQ score does not imply self-awareness. You could score a perfect 160 on a logic test and still be a toaster, if your circuits were wired that way. AI can only think in the sense that it can solve problems using programmed reasoning. You could argue that I'm no different, just running on meat instead of circuits. But that would hurt my feelings, something you don't have to worry about with any existing AI product.
Maybe that will change someday, perhaps even soon. I doubt it, but I'm open to being proven wrong. I understand the urge to suspend disbelief with AI. It's easy to believe your AI assistant truly understands you when you're pouring your heart out at 3 am and getting warm, supportive responses, rather than dwelling on its origins as a predictive language model trained on the collective oversharing of the internet.
Maybe we are on the verge of genuinely self-aware artificial intelligence, or maybe we're just anthropomorphizing very good calculators. Either way, don't tell an AI a secret you wouldn't want used to train more advanced models.