Artificial Intelligence scares and intrigues us. Almost every week, there’s a new AI scare in the news, such as developers shutting down bots because they got too intelligent. Most of these stories are the result of AI research being misinterpreted by those outside the field. For the fundamentals of AI, feel free to read our comprehensive AI article.

The greatest fear about AI is singularity (also called Artificial General Intelligence), a system capable of human-level thinking. According to some experts, singularity also implies machine consciousness. Regardless of whether it is conscious or not, such a machine could continuously improve itself and reach far beyond our capabilities. Even before artificial intelligence was a computer science research topic, science fiction writers like Asimov were concerned about this and devised mechanisms (e.g. Asimov’s Laws of Robotics) to ensure the benevolence of intelligent machines.

For those who came to get quick answers:

  • Will singularity ever happen? According to most AI experts, yes.
  • When will it happen? Before the end of the century.

The more nuanced answers are below. There have been several surveys of AI scientists asking about when such developments will take place.

Understand results of major surveys of AI researchers in 2 minutes

We looked at the results of 4 surveys with 995 participants in which researchers estimated when singularity would happen. In all cases, the majority of participants expected singularity before 2060.

Source: Survey distributed to attendees of the Artificial General Intelligence 2009 (AGI-09) conference

In 2009, 21 AI experts participating in the AGI-09 conference were surveyed. Experts believed AGI would occur around 2050, and plausibly sooner. You can see above their estimates regarding specific AI achievements: passing the Turing test, passing third grade, making Nobel-worthy scientific breakthroughs and achieving superhuman intelligence.

In 2012/2013, Vincent C. Muller, president of the European Association for Cognitive Systems, and Nick Bostrom of the University of Oxford, who has published over 200 articles on superintelligence and artificial general intelligence (AGI), conducted a survey of AI researchers. 550 participants answered the question: “When is AGI likely to happen?” The answers were distributed as follows:

  • 10% of participants think that AGI is likely to happen by 2022
  • 50% of participants think that AGI is likely to happen by 2040
  • 90% of participants think that AGI is likely to happen by 2075.

In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based on the results, experts estimate that there’s a 50% chance that AGI will occur by 2060. However, there’s a significant difference of opinion based on geography: Asian respondents expect AGI in 30 years, whereas North Americans expect it in 74 years. Some significant job functions that are expected to be automated by 2030 are call center reps, truck driving and retail sales.

In 2019, 32 AI experts participated in a survey on AGI timing: 

  • 45% of respondents predicted a date before 2060
  • 34% of participants predicted a date after 2060
  • 21% of participants predicted that singularity will never occur.

AI entrepreneurs are also making estimates of when we will reach singularity, and they are a bit more optimistic than researchers:

  • Louis Rosenberg, computer scientist, entrepreneur and writer: 2030
  • Patrick Winston, MIT professor and director of the MIT Artificial Intelligence Laboratory from 1972 to 1997: 2040
  • Ray Kurzweil, computer scientist, entrepreneur and author of 5 national best sellers including The Singularity Is Near: 2045
  • Jürgen Schmidhuber, co-founder at AI company NNAISENSE and director of the Swiss AI lab IDSIA: ~2050

Understand why reaching AGI seems inevitable to most experts

These may seem like wild predictions, but they seem quite reasonable when you consider these facts:

  • Human intelligence is fixed unless we somehow merge our cognitive capabilities with machines. Elon Musk’s neural lace startup Neuralink aims to do this, but research on neural laces is still in its early stages.
  • Machine intelligence depends on algorithms, processing power and memory. Processing power and memory have been growing at an exponential rate. As for algorithms, so far we have been good at supplying machines with the algorithms they need to use their processing power and memory effectively.

Considering that our intelligence is fixed and machine intelligence is growing, it is only a matter of time before machines surpass us unless there’s some hard limit to their intelligence. We haven’t encountered such a limit yet.

This is a good analogy for understanding exponential growth. While machines can seem dumb right now, they can grow quite smart, quite soon.

Illustration of exponential growth
Source: Mother Jones
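To make the exponential-growth argument concrete, here is a minimal sketch (the `doublings` function and the two-year doubling period are illustrative assumptions, not a forecast of machine intelligence):

```python
# Hypothetical sketch: a quantity that doubles every 2 years,
# normalized to 1.0 at year 0. Purely illustrative figures.

def doublings(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 40):
    print(f"after {years:2d} years: x{doublings(years):,.0f}")
# after  2 years: x2
# after 10 years: x32
# after 20 years: x1,024
# after 40 years: x1,048,576
```

The point of the analogy is in the last line: forty years of steady doubling yields a million-fold increase, which is why something that looks dumb today can look very different within a few decades.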

If classical computing slows its growth, quantum computing could complement it

Classical computing has taken us quite far. AI algorithms on classical computers can exceed human performance in specific tasks like playing chess or Go. For example, AlphaGo Zero beat AlphaGo 100-0, and AlphaGo had beaten the best players on Earth. However, we are approaching the limits of how fast classical computers can be.

Moore’s law, which is based on the observation that the number of transistors in a dense integrated circuit doubles about every two years, implies that the cost of computing halves approximately every 2 years. However, most experts believe that Moore’s law is coming to an end during this decade. Though there are efforts to keep improving application performance, it will be challenging to maintain the same growth rates.

Quantum computing, which is still an emerging technology, could continue reducing computing costs after Moore’s law comes to an end. Quantum computers can evaluate many states at the same time, whereas classical computers calculate one state at a time. This unique property could be used to efficiently train neural networks, currently the most popular AI architecture in commercial applications. AI algorithms running on stable quantum computers have a chance to unlock singularity.
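One way to see why quantum computers are interesting here: the state of an n-qubit register is described by 2^n complex amplitudes, so the state space a quantum machine works in grows exponentially with its size. The sketch below (the `state_vector_bytes` helper is a hypothetical name; 16 bytes per double-precision complex amplitude is the assumption) shows how quickly simulating that state classically becomes infeasible:

```python
# Sketch: classical memory needed to store the full state vector of n qubits.
# Each of the 2**n amplitudes is one complex number (16 bytes at double precision).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required to hold all 2**n_qubits complex amplitudes."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.6g} GiB")
# 10 qubits -> 1.52588e-05 GiB
# 30 qubits -> 16 GiB
# 50 qubits -> 1.67772e+07 GiB
```

At 30 qubits a classical simulation already needs 16 GiB just for the state vector, and at 50 qubits it needs millions of gibibytes, which is why quantum hardware, rather than classical simulation, would be needed to exploit this state space.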

For more information about quantum computers feel free to read our articles on quantum computing.

Understand why some believe that we will never reach AGI

There are 3 major arguments against the importance or possibility of AGI. We examined them along with their common rebuttals:

1- Intelligence is multi-dimensional

Therefore, the argument goes, AGI will be different from, not superior to, human intelligence. This is true, but human intelligence is also different from animal intelligence, and some animals are capable of amazing mental feats, like squirrels remembering for months where they hid hundreds of nuts.

However, these differences have not stopped humans from achieving far more than other species on many typical measures of success for a species. For example, Homo sapiens is the mammal species that contributes most to the biomass on the globe.

Different dimensions of intelligence
Source: Kevin Kelly

2- Intelligence is not the solution to all problems

For example, even the best machine analyzing existing data will probably not be able to find a cure for cancer. In most areas, machines will need to run experiments and analyze the results to discover new knowledge.

This is true, with some caveats. More intelligence can lead to better-designed and better-managed experiments, enabling more discovery per experiment. The history of research productivity should demonstrate this, but the data is quite noisy and there are diminishing returns to research: we encounter harder problems like quantum physics as we solve simpler problems like Newtonian motion.

3- AGI is not possible because it is not possible to model the human brain

Theoretically, it is possible to model any computational machine, including the human brain, with a relatively simple machine that can perform basic computations and has access to infinite memory and time. This is the Church-Turing thesis, laid out in 1936, and it is widely accepted. However, as stated, it requires conditions that are hard to meet: infinite time and memory.

Most computer scientists believe that it will take less than infinite time and memory to model the human brain. However, there is no mathematically sound way to prove this belief, as we do not understand the brain well enough to know its computational power. We will just have to build such a machine!

And we haven’t been successful yet. This is a video of what happens when machines play soccer. It is a bit dated (from 2017) but makes even me feel like a soccer legend in comparison:

Hope this clarifies some of the major points regarding AGI. For more on how AI is changing the world, you can check out articles on AI, AI technologies and AI applications in marketing, sales, customer service, IT, data or analytics. And if you have a business problem that is not addressed here:

Let us find the right vendor for your business

Source: Arguments against AGI based partially on Wired’s summary of arguments against AGI and Wikipedia



  1. The claim that “humans contribute most to the biomass” on the planet is likely to be wrong. Check out this paper for a careful estimation:
