
Future of Deep Learning according to top AI Experts

Cem Dilmegani
updated on Nov 20, 2025

Deep learning currently delivers the best results for many AI applications. But there’s debate about its ultimate potential. Geoffrey Hinton believes deep learning will eventually solve all problems,1 while other scientists point to fundamental flaws without clear solutions.2

As interest grows among researchers, developers, and the public, breakthroughs are likely. Turing Award winners and other experts expect advances from capsule networks, deep reinforcement learning, and hybrid approaches that address deep learning’s current limitations.

Why Interest in Deep Learning Keeps Growing

Public interest

Deep learning interest has surged since 2012, when Geoffrey Hinton’s team showed it could dramatically improve image recognition accuracy.3 4  

Why people care:

Better predictions: Deep learning improves decision-making accuracy with data-driven insights.

Works with messy data: Unlike traditional machine learning, deep learning learns from unstructured and unlabeled datasets—analyzing images, text, and audio without extensive preprocessing.

These capabilities deliver real operational and financial benefits. After Hinton’s 2012 breakthrough, companies started investing heavily. Interest has remained stable since 2017.

Google search frequency for “deep learning” shows sustained public interest:

Research community

According to the AI Index, deep learning publications on arXiv increased almost sixfold over five years. ArXiv hosts open-access scientific articles in physics, mathematics, and computer science (both peer-reviewed and non-peer-reviewed).

Figure 1: Publications on deep learning have increased drastically. Source: AI Index

Developer community

Most popular open-source deep learning libraries:

  • TensorFlow and Keras (Google)
  • PyTorch (Meta, released 2016—rapidly growing)
  • Scikit-learn
  • Caffe
  • MXNet
  • Microsoft Cognitive Toolkit (CNTK)

These platforms help developers build deep learning models without having to start from scratch. PyTorch, in particular, has seen explosive growth since its 2016 release.

Figure 2: GitHub’s most favored open-source libraries since 2014. Source: AI Index

Open source libraries for deep learning are generally written in JavaScript, Python, C++, and Scala.

What are the technologies that can shape deep learning?

Deep learning pioneers Geoffrey Hinton, Yoshua Bengio, and Yann LeCun (Turing Award winners), along with Gary Marcus, suggest new methods to address current limitations.5

These methods include introducing reasoning or prior knowledge to deep learning, self-supervised learning, capsule networks, etc.

Introduction of non-learning-based AI approaches to deep learning

1. Hybrid Neuro-Symbolic AI

Gary Marcus, a cognitive scientist and prominent critic of deep learning, argues that current deep learning is data-hungry, shallow, brittle, and struggles to generalize.

Figure 3: Venn diagram of hybrid systems. Source: arXiv

For more on Gary Marcus’ ideas, feel free to read his articles: Deep Learning: A Critical Appraisal from 2018 and The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence from 2020.6 7

Four possibilities for deep learning’s future:

Unsupervised learning: If systems determine their own objectives and reason at more abstract levels, major improvements become possible.

Symbol-manipulation & hybrid models: Combining deep learning with symbolic systems (which excel at inference and abstraction) could deliver better results.

Insights from cognitive psychology: Understanding innate human mental machinery, common sense knowledge, and narrative comprehension could improve learning models.

Bolder challenges: Generalized AI should be multi-dimensional like natural intelligence to handle real-world complexity.

Marcus proposes a four-step program:

  1. Hybrid neuro-symbolic architectures: Embrace other AI approaches (prior knowledge, reasoning, rich cognitive models) alongside deep learning
  2. Rich cognitive frameworks: Build partly-innate cognitive structures and large-scale knowledge databases
  3. Abstract reasoning tools: Enable effective generalization
  4. Cognitive model representation: Develop mechanisms for representing and inducing cognitive models

2. Capsule Networks (CapsNets)

Geoffrey Hinton introduced capsule networks in 2017 as a new neural network architecture. Unlike CNNs, capsules work with vectors instead of scalars.8

The problem with CNNs: They approach object recognition differently than humans. CNNs struggle with rotation and scaling. Show a CNN an upside-down face, and it might not recognize it as a face.

How capsules work: They encapsulate results into vectors. When an image’s orientation changes, the vector moves accordingly. This helps models generalize better and handle rotated or scaled objects more like humans do.
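The vector idea can be seen in the squash nonlinearity from Hinton’s 2017 capsule paper, which rescales a capsule’s output vector so its length falls between 0 and 1 while its direction is preserved. A minimal NumPy sketch:

```python
import numpy as np

def squash(s, eps=1e-8):
    """Squash a capsule's output vector so its length lies in [0, 1).

    Short vectors shrink toward zero; long vectors approach unit length.
    The length can be read as the probability that the entity the capsule
    represents is present, while the direction encodes its pose.
    """
    norm_sq = np.sum(s ** 2)
    norm = np.sqrt(norm_sq) + eps
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

# A long input vector keeps its direction, but its length approaches 1.
v = squash(np.array([3.0, 4.0]))      # input length is 5
print(np.linalg.norm(v))              # close to 1, same direction as input
```

Because pose lives in the vector’s direction rather than in scalar activations, a rotation of the input shifts the vector instead of destroying the activation, which is what helps capsules handle rotated or scaled objects.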

3. Deep Reinforcement Learning

Deep reinforcement learning combines reinforcement learning with deep learning. Traditional reinforcement learning operates on structured, hand-crafted state representations; deep reinforcement learning can learn to optimize objectives directly from unstructured inputs such as raw images.

What it’s good for: Complicated control problems and target optimization actions. Models learn to maximize cumulative reward.

Yann LeCun’s view: Reinforcement learning works well in simulations but needs many trials and provides weak feedback. However, it requires less labeled data than supervised learning models.
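To make the reward-maximization idea concrete, here is a toy Q-learning sketch on a five-state chain. The weight table stands in for the deep Q-network; a real deep RL system would replace it with a neural network over raw observations, but the update rule is the same:

```python
import numpy as np

# Toy chain environment: states 0..4, reward 1 for reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)  # move left / move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

rng = np.random.default_rng(0)
# This weight table stands in for the deep Q-network's outputs.
W = np.zeros((N_STATES, len(ACTIONS)))
gamma, lr = 0.9, 0.5

for _ in range(300):                          # episodes
    s, done = 0, False
    while not done:
        a = int(rng.integers(len(ACTIONS)))   # random exploration (off-policy)
        nxt, r, done = step(s, ACTIONS[a])
        target = r + (0.0 if done else gamma * np.max(W[nxt]))
        W[s, a] += lr * (target - W[s, a])    # move Q-value toward TD target
        s = nxt

# The greedy policy learned from reward alone: always move right.
greedy = [ACTIONS[int(np.argmax(W[s]))] for s in range(N_STATES - 1)]
print(greedy)
```

Note how the agent receives only a sparse scalar reward rather than labeled examples, which is the weak-feedback, many-trials trade-off LeCun describes.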

4. Few-shot learning (FSL)

Few-shot learning works with small amounts of training data—addressing deep learning’s data hunger problem.

Real application: Healthcare. Few-shot learning detects rare diseases even when training data contains only a handful of images. This matters when you can’t collect thousands of examples (rare conditions, expensive imaging).

Few-shot learning reduces computational costs and data collection requirements.
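One widely used few-shot approach is prototype-based classification: average the handful of labeled support examples per class, then assign each query to the nearest prototype. The sketch below uses raw features as a stand-in for a trained embedding network:

```python
import numpy as np

def prototypes(support_x, support_y):
    """Mean embedding per class from a handful of labeled examples."""
    classes = np.unique(support_y)
    return classes, np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )

def classify(query, classes, protos):
    """Assign each query to the class of its nearest prototype."""
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# 2-way 3-shot toy episode: only three examples per class.
sx = np.array([[0.0, 0.1], [0.1, 0.0], [0.0, 0.0],
               [1.0, 1.1], [1.1, 1.0], [1.0, 1.0]])
sy = np.array([0, 0, 0, 1, 1, 1])
classes, protos = prototypes(sx, sy)
print(classify(np.array([[0.2, 0.2], [0.9, 0.9]]), classes, protos))
```

With only three examples per class, no gradient training happens at classification time, which is why this style of model suits rare-disease settings where thousands of images simply do not exist.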

5. GAN-based data augmentation

Generative Adversarial Networks (GANs) create meaningful new data from unlabeled original data.

How it works:

  1. GANs generate synthetic data
  2. This synthetic data becomes training data
  3. Models train on the expanded dataset

Real results: A study on insect pest classification showed GAN-based augmentation helps CNNs perform better than classic augmentation methods while reducing data collection needs.9
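The three steps above can be sketched as a simple pipeline. The generator below is only a placeholder (a random linear map from latent vectors); a real pipeline would plug in a trained GAN generator at the same spot:

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(n, dim=4):
    """Placeholder for a trained GAN generator.

    In a real pipeline this would be a network mapping random latent
    vectors to realistic synthetic samples; here a fixed linear map
    stands in so the pipeline stays runnable.
    """
    latent = rng.standard_normal((n, 2))
    decode = rng.standard_normal((2, dim))
    return latent @ decode

# 1) generate synthetic data, 2) merge it with the real training set,
# 3) train the downstream model on the expanded dataset.
real_x = rng.standard_normal((100, 4))
synthetic_x = generator(50)
augmented_x = np.concatenate([real_x, synthetic_x])
print(augmented_x.shape)   # (150, 4)
```

The downstream classifier never needs to know which rows are synthetic; the augmentation only changes the size and diversity of the training set.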

6. Self-Supervised learning

Yann LeCun considers self-supervised learning a key component of future deep learning. Understanding how humans learn quickly could unlock self-supervised learning’s full potential and reduce reliance on large, annotated datasets.

Self-supervised learning models work without labeled data; they make predictions if they have quality data and inputs of possible scenarios.
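One common pretext task illustrates how labels come for free: rotate unlabeled images by 0/90/180/270 degrees and train a model to predict which rotation was applied. A minimal sketch:

```python
import numpy as np

def rotation_pretext(images):
    """Build a labeled task from unlabeled images.

    Each image is rotated by 0, 90, 180, and 270 degrees, and the
    rotation index becomes the label; no human annotation is needed.
    """
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)

# Ten unlabeled 8x8 'images' become a 40-sample, 4-class dataset for free.
unlabeled = np.random.default_rng(0).standard_normal((10, 8, 8))
x, y = rotation_pretext(unlabeled)
print(x.shape, y[:4])   # (40, 8, 8) [0 1 2 3]
```

A network trained on this pretext task must learn useful visual features to tell rotations apart, and those features can then be fine-tuned on a small annotated dataset.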

Other approaches

  • Imitation learning: When reinforcement learning provides few rewards, imitation learning offers an alternative. Agents learn tasks by imitating a supervisor’s demonstrations (observations and actions). Also called Learning from Demonstration or Apprenticeship Learning.
  • Physics guided/informed machine learning: Physical laws are integrated into the training process to make deep learning models more interpretable and to improve prediction accuracy.10
  • Transfer learning: Helps machines transfer knowledge learned in one domain to another.
  • Others: Motor learning and brain areas like cortical and subcortical neural circuits may be new fields of inspiration for machine learning models.11 12
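As a minimal sketch of the physics-guided idea, the loss below combines an ordinary data-fit term with a penalty for violating an assumed physical law; here the toy law is the ODE dy/dx = -y, checked via finite differences (both the law and the weighting are illustrative choices, not a specific published method):

```python
import numpy as np

def physics_informed_loss(x, y_pred, y_obs, lam=1.0):
    """Data-fit loss plus a penalty for violating a known physical law.

    The (assumed) law here is dy/dx = -y; its residual is estimated
    with finite differences and added to the usual mean-squared error.
    """
    data_loss = np.mean((y_pred - y_obs) ** 2)
    dy_dx = np.gradient(y_pred, x)              # numerical derivative
    physics_residual = np.mean((dy_dx + y_pred) ** 2)
    return data_loss + lam * physics_residual

x = np.linspace(0.0, 2.0, 50)
exact = np.exp(-x)                    # satisfies dy/dx = -y exactly
print(physics_informed_loss(x, exact, exact))   # near zero
```

Because the physics term penalizes predictions that break the law even where no data exists, models trained with such losses tend to extrapolate more sensibly than purely data-driven ones.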

5 deep learning applications that could make an impact in the future

1- Climate crisis

Deep learning models like GPTs analyze business documents (invoices, utility bills) to automatically generate detailed carbon footprint calculations.

How it works:

  • Transportation data in invoices reveals fuel consumption and CO₂ emissions
  • Electricity usage in utility bills highlights energy inefficiencies

These detailed calculations drive more sustainable practices.
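Once quantities are extracted from documents, the calculation step itself is simple multiplication by emission factors. A simplified sketch, with placeholder factors (the figures and field names below are illustrative assumptions, not official conversion values):

```python
# Placeholder emission factors in kg CO2 per unit (illustrative only).
FACTORS = {"diesel_litre": 2.68, "kwh_grid": 0.4}

def footprint(items):
    """Sum kg of CO2 over extracted (quantity, unit) line items."""
    return sum(qty * FACTORS[unit] for qty, unit in items)

# e.g. a fuel invoice for 120 L of diesel plus a 900 kWh utility bill
extracted = [(120, "diesel_litre"), (900, "kwh_grid")]
print(footprint(extracted))   # ~681.6 kg CO2
```

The hard part the deep learning model solves is the extraction: reliably pulling the quantities and units out of unstructured invoices and bills so this arithmetic can run automatically.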

Deep learning also predicts renewable energy generation from wind and solar sources, improving climate forecasting and change prediction.13

2- Transportation

Tesla’s Autopilot and Full Self-Driving (FSD) use deep learning to optimize driving behavior, learning patterns that reduce energy consumption.

Key sustainability benefits:

Battery efficiency: Adjusts speed to maintain optimal battery performance and reduce energy consumption on long trips.

Energy-efficient routing: Uses real-time data to find routes that minimize energy use, considering traffic, elevation changes, and other variables.

3- Agriculture

John Deere integrates deep learning into tractors and harvesters:

Crop monitoring: Models analyze satellite and drone imagery to assess crop health, identify pests, and detect diseases early.

Precision planting: Autonomous tractors use deep learning to make real-time decisions on seed depth, spacing, and speed—minimizing soil disruption.

Yield prediction: Models predict crop yields based on weather, soil conditions, and historical data.

4- Mining

Mining companies use deep learning in autonomous haul trucks to optimize routes, reducing fuel consumption. These trucks operate 24/7 with minimal human oversight, making mining more energy-efficient and reducing environmental impact.

5- Waste management

ZenRobotics uses deep learning in robotic arms to sort and separate recyclables from waste. The AI identifies and classifies materials (plastics, metals, paper) with high accuracy, reducing waste sent to landfills.

For more, you can watch 3 AI experts share their views during AAAI 20.

Cem Dilmegani, Principal Analyst
Cem has been the principal analyst at AIMultiple since 2017. AIMultiple informs hundreds of thousands of businesses (as per similarWeb) including 55% of Fortune 500 every month.

Cem's work has been cited by leading global publications including Business Insider, Forbes, Washington Post, global firms like Deloitte, HPE and NGOs like World Economic Forum and supranational organizations like European Commission. You can see more reputable companies and resources that referenced AIMultiple.

Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised enterprises on their technology decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.

He led technology strategy and procurement of a telco while reporting to the CEO. He has also led commercial growth of deep tech company Hypatos that reached a 7 digit annual recurring revenue and a 9 digit valuation from 0 within 2 years. Cem's work in Hypatos was covered by leading technology publications like TechCrunch and Business Insider.

Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.