What is Meta Learning? Techniques, Benefits & Examples [2024]

The performance of a learning model depends on its training dataset, the algorithm, and the algorithm's parameters. Many experiments are required to find the best-performing algorithm and parameter settings. Meta learning approaches help find these while reducing the number of experiments required, resulting in better predictions in a shorter time.

Meta learning can be used for different machine learning settings (e.g., few-shot learning, reinforcement learning, natural language processing). Meta learning algorithms make predictions by taking the outputs and metadata of machine learning algorithms as input, and they can learn to use the best of those predictions to produce better ones.

In computer science, meta learning studies and approaches started in the 1980s and became popular after the work of Jürgen Schmidhuber and Yoshua Bengio on the topic.

What is meta learning?

Meta learning, also known as “learning to learn”, is a subset of machine learning in computer science. It is used to improve the results and performance of a learning algorithm by changing some aspects of that algorithm based on experiment results. Meta learning helps researchers understand which algorithm(s) generate the best predictions from a given dataset.

Meta learning algorithms use metadata of learning algorithms as input. Then, they make predictions and provide information about the performance of these learning algorithms as output. For non-technical users, metadata is data about data. For example, the metadata of an image in a learning model can be its size, resolution, style, date created, and owner.
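
As a rough illustration, the sketch below trains a meta-model on metadata from past experiments (dataset size, number of features, class balance) to recommend which algorithm to try first on a new dataset. The meta-features, algorithm names, and data are illustrative assumptions, not details from this article.

```python
# Minimal sketch: a meta-model that learns, from metadata about past experiments,
# which algorithm is likely to perform best on a new dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Metadata of past experiments: [n_samples, n_features, class_balance_ratio]
meta_features = rng.uniform([100, 5, 0.5], [10000, 200, 1.0], size=(60, 3))

# Label: which algorithm performed best on each past dataset. The rule below is
# fabricated so the example runs end to end; in practice this comes from recorded runs.
best_algorithm = np.where(meta_features[:, 0] > 5000,
                          "gradient_boosting", "logistic_regression")

# The meta-learner maps dataset metadata -> recommended algorithm.
meta_learner = RandomForestClassifier(n_estimators=50, random_state=0)
meta_learner.fit(meta_features, best_algorithm)

# For a new dataset, the meta-learner predicts which algorithm to try first.
new_dataset_metadata = np.array([[8000, 40, 0.9]])
print(meta_learner.predict(new_dataset_metadata))
```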

Systematic experiment design is the most important challenge in meta learning.

Why is meta learning important now?

Machine learning algorithms have some challenges, such as

  • Need for large datasets for training
  • High operational costs due to many trials/experiments during the training phase
  • Long experimentation/trial times to find the model that performs best on a given dataset

Meta learning can help machine learning algorithms tackle these challenges by optimizing learning algorithms and identifying those that perform better.

What is the interest in meta learning?

Interest in meta learning has been growing over the last five years, accelerating especially after 2017. As the use of deep learning and advanced machine learning algorithms has increased, the difficulty of training these learning algorithms has driven interest in meta learning studies.

How does meta learning work?

In general, a meta learning algorithm is trained on the outputs (i.e., the models’ predictions) and metadata of machine learning algorithms. After training, its skills are tested and used to make final predictions.

Meta learning covers tasks such as

  • observing how different machine learning models perform on learning tasks
  • learning from metadata
  • performing faster learning processes for new tasks

For example, we may want to train a model to label different breeds of dogs.

  • We first need an annotated data set
  • Various ML models are built on the training set. They could focus just on certain parts of the dataset
  • The meta training process is used to improve the performance of these models
  • Finally, the meta training model can be used to build a new model from a few examples based on its experience with the previous training process, as sketched below
Source: KDnuggets
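
The sketch below illustrates one assumed detail of such a setup: how small “episodes” (tasks with a handful of labeled images per breed) could be sampled for meta training. The breed names, file names, and counts are made up for illustration.

```python
# Minimal sketch: sampling "episodes" for meta training. Each episode is a small task:
# a few breeds, a few labeled images each (support set), plus query images for evaluation.
import random

# Hypothetical annotated pool: breed -> list of image identifiers.
labeled_pool = {
    "beagle": [f"beagle_{i}.jpg" for i in range(20)],
    "poodle": [f"poodle_{i}.jpg" for i in range(20)],
    "husky": [f"husky_{i}.jpg" for i in range(20)],
    "corgi": [f"corgi_{i}.jpg" for i in range(20)],
}

def sample_episode(pool, n_way=2, k_shot=3, n_query=2):
    """Pick n_way breeds, then k_shot support and n_query query images per breed."""
    breeds = random.sample(list(pool), n_way)
    support, query = [], []
    for breed in breeds:
        images = random.sample(pool[breed], k_shot + n_query)
        support += [(img, breed) for img in images[:k_shot]]
        query += [(img, breed) for img in images[k_shot:]]
    return support, query

support_set, query_set = sample_episode(labeled_pool)
print(len(support_set), len(query_set))  # 6 support and 4 query examples for a 2-way task
```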

What are the approaches and applications in meta learning?

Meta learning is used in various areas of the machine learning domain. There are different approaches in meta learning, such as model-based, metric-based, and optimization-based approaches. Some common approaches and methods in the meta learning domain are briefly explained below.

Metric Learning

Metric learning means learning a metric space in which to make predictions. This approach gives good results in few-shot classification tasks.
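
A minimal sketch of a metric-based approach, assuming a prototypical-network-style setup: each class is represented by the mean of its support embeddings, and a query example is assigned to the nearest prototype. The toy embeddings stand in for the output of a learned embedding network.

```python
# Minimal sketch: nearest-prototype classification in an embedding (metric) space.
import numpy as np

def classify_by_prototypes(support_embeddings, support_labels, query_embedding):
    """Assign the query to the class whose mean support embedding is closest."""
    classes = sorted(set(support_labels))
    prototypes = {
        c: support_embeddings[np.array(support_labels) == c].mean(axis=0) for c in classes
    }
    distances = {c: np.linalg.norm(query_embedding - p) for c, p in prototypes.items()}
    return min(distances, key=distances.get)

# Toy embeddings; in practice these come from a learned embedding network.
support = np.array([[0.1, 0.2], [0.0, 0.3], [0.9, 0.8], [1.0, 0.7]])
labels = ["cat", "cat", "dog", "dog"]
print(classify_by_prototypes(support, labels, np.array([0.95, 0.75])))  # "dog"
```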

Model-Agnostic Meta-Learning (MAML)

The neural network is trained so that it can adapt to new tasks faster from only a few examples. MAML is a general, task-agnostic optimization algorithm used to train a model’s parameters for fast learning with a small number of gradient updates.
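
Below is a first-order sketch of the MAML idea (a simplification of the full second-order algorithm) on toy linear-regression tasks: the meta-parameters are trained so that a single inner gradient step on a new task’s support set already reduces the query-set loss. The task distribution, learning rates, and model are illustrative assumptions.

```python
# Minimal first-order sketch of MAML: meta-train an initialization so that one
# gradient step on a new task's small support set already gives a good fit.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)              # meta-parameters [slope, intercept]
inner_lr, outer_lr = 0.05, 0.01

def loss_grad(params, x, y):
    """Gradient of mean squared error for predictions y ~ slope * x + intercept."""
    err = params[0] * x + params[1] - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

for _ in range(2000):                                 # meta-training iterations
    slope, intercept = rng.uniform(-2, 2, size=2)     # sample a new task
    x_support, x_query = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    y_support = slope * x_support + intercept
    y_query = slope * x_query + intercept

    # Inner loop: adapt to the task with one gradient step on the support set.
    adapted = theta - inner_lr * loss_grad(theta, x_support, y_support)

    # Outer loop (first-order): update meta-parameters using the query-set gradient
    # evaluated at the adapted parameters.
    theta -= outer_lr * loss_grad(adapted, x_query, y_query)

print("meta-learned initialization:", theta)
```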

Recurrent Neural Networks (RNNs)

Recurrent neural networks (RNNs) are a class of artificial neural networks applied to machine learning problems involving sequential or time series data. RNN models are commonly used for language translation, speech recognition, and handwriting recognition tasks. In meta learning, an RNN can serve as an alternative meta-learner: a recurrent model gathers data from the dataset sequentially and processes it as new inputs.
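
A conceptual, forward-pass-only sketch of an RNN used as a meta-learner: the task is presented as a sequence of (input, previous label) pairs, so the hidden state can accumulate task information and condition later predictions. The dimensions and weights are arbitrary assumptions, and no training loop is shown.

```python
# Conceptual sketch: an RNN meta-learner reads a task as a sequence of
# (features, previous label) pairs; its hidden state carries task memory.
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 8          # 2 features + 1 previous-label slot
W_xh = rng.normal(0, 0.1, (hidden_dim, input_dim))
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))
W_hy = rng.normal(0, 0.1, (1, hidden_dim))

def rnn_meta_learner(examples):
    """examples: list of (features, label) pairs presented sequentially."""
    h = np.zeros(hidden_dim)
    prev_label = 0.0
    predictions = []
    for features, label in examples:
        x = np.concatenate([features, [prev_label]])   # feed the previous label, not the current one
        h = np.tanh(W_xh @ x + W_hh @ h)               # update task memory
        predictions.append((W_hy @ h).item())
        prev_label = label
    return predictions

task = [(np.array([0.2, 0.5]), 1.0), (np.array([0.1, 0.4]), 1.0), (np.array([0.9, 0.8]), 0.0)]
print(rnn_meta_learner(task))
```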

Stacking/Stacked Generalization

Stacking is a subfield of ensemble learning and is used in meta learning models. Both supervised and unsupervised learning models can benefit from stacking. Stacking includes the following steps (a brief sketch follows the list):

  1. Learning algorithms are trained by using available data
  2. A combiner algorithm (e.g., a meta learning model or a logistic regression model) is created to combine the predictions of these learning algorithms, which are referred to as ensemble members.
  3. The combiner algorithm is used to make final predictions.
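
A brief sketch of these steps using scikit-learn’s StackingClassifier; the choice of base learners and combiner below is illustrative, not a configuration from this article.

```python
# Minimal stacking sketch: base learners -> combiner -> final predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: base learning algorithms (the ensemble members).
ensemble_members = [("rf", RandomForestClassifier(random_state=0)), ("svm", SVC())]

# Step 2: a combiner (here, logistic regression) is trained on their predictions.
stacked_model = StackingClassifier(estimators=ensemble_members,
                                   final_estimator=LogisticRegression())

# Step 3: the fitted combiner produces the final predictions.
stacked_model.fit(X_train, y_train)
print(stacked_model.score(X_test, y_test))
```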

What are the benefits of meta learning?

Meta learning algorithms are used to improve machine learning solutions. The benefits of meta learning are

  • higher model prediction accuracy:
    • Optimizing learning algorithms: for example, optimizing hyperparameters to find the best results. This optimization task, normally done by a human, can be handled by a meta learning algorithm instead (see the sketch after this list)
    • helping learning algorithms better adapt to changes in conditions
    • identifying clues to design better learning algorithms
  • a faster, cheaper training process
    • Supporting learning from fewer examples
    • Increasing the speed of learning processes by reducing necessary experiments
  • Building more generalized models: learning to solve many tasks rather than only one, since meta learning does not focus on training one model on one specific dataset
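
A minimal sketch of the hyperparameter-optimization benefit noted above: a cross-validated grid search tries candidate settings and keeps the best, replacing manual tuning. The model and parameter grid are illustrative assumptions.

```python
# Minimal sketch: automated hyperparameter search instead of manual tuning.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate hyperparameter values to evaluate.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

search = GridSearchCV(SVC(), param_grid, cv=5)   # cross-validated search over the grid
search.fit(X, y)

print(search.best_params_, search.best_score_)
```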

This article was drafted by former AIMultiple industry analyst Ayşegül Takımoğlu.
