AIMultiple Research

Guide to Natural Language Understanding (NLU) in 2024

Cem Dilmegani
Updated on Feb 14
5 min read

Understanding speech or text in context is a problem that must be solved to build more capable conversational AI solutions. Natural Language Understanding (NLU) is a field focused on understanding the meaning of text or speech so that systems can respond appropriately: it identifies both what is being said and the purpose behind it.

NLU, the technology behind intent recognition, enables companies to build efficient chatbots. To help corporate executives improve the odds that their chatbot investments succeed, we address NLU-related questions in this article.

What is Natural Language Understanding?

NLU, a subset of natural language processing (NLP) and conversational AI, helps conversational AI applications determine a user's purpose and direct them to the relevant solutions.

NLU is an AI-powered approach to recognizing patterns in human language. It enables conversational AI solutions to accurately identify a user's intent and respond to it. In conversational AI, the critical point is to understand what the user says, or wants to say, in both speech and written language.

NLU helps computers understand human language by separately analyzing and interpreting its basic parts.

Figure: How NLP and NLU differ (Source: Stanford).
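To make intent recognition concrete, the sketch below scores each candidate intent by keyword overlap with the user's utterance. The intents and keyword lists are invented for this example; production NLU systems use trained statistical models rather than keyword lists.

```python
import re

# Invented intents and keyword lists for illustration only.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "account", "much", "money"},
    "transfer_funds": {"transfer", "send", "move", "pay"},
    "report_issue": {"problem", "broken", "error", "complaint"},
}

def recognize_intent(utterance: str) -> str:
    # Tokenize crudely: lowercase words only, punctuation stripped.
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    # Score each intent by how many of its keywords appear.
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(recognize_intent("How much money is in my account?"))  # check_balance
```

A real system replaces the keyword overlap with a classifier trained on labeled utterances, but the input/output contract (utterance in, intent out) is the same.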

Today, there are many NLU applications in different areas. Some examples are:

  • Chatbots
  • Voice-driven assistants
  • Natural language search: searching for information using everyday spoken language.
  • Web-scale information extraction: finding the most relevant information among the enormous volume of data on the web.
  • Legal discovery: searching policies and laws expressed in natural language.
  • Content summarization

5 Things to pay attention to while choosing NLU solutions

1. Language support:

The NLU platform should support the language of the input data. Currently, NLU quality in some non-English languages is lower because those languages offer less commercial potential. However, with increased research interest, this is changing.

2. Result quality: 

A good NLU solution should be able to recognize linguistic entities, extract the relationships between them, and use semantic analysis to understand the content, no matter how it is expressed. In general, understanding the context of speech depends on the following:

Computing power: 

The greater the capacity of NLU models, the better they are at predicting speech context. In fact, the relationship between an NLU model's computational capacity and its effectiveness (e.g., GPT-3) is one of the factors driving the development of AI chips that support larger model training runs.

Training data set

In general, ML models learn from experience. Therefore, their predictive ability improves as they are exposed to more data. The same reasoning applies to NLU.

Data quality

Raw data becomes valuable for NLU models only after it has been cleaned, organized, and labeled.
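As a minimal sketch of what such preparation looks like, the snippet below normalizes raw utterances (the example logs are invented) before labels are attached:

```python
import re

def clean(text: str) -> str:
    """Normalize a raw utterance: lowercase, strip noise, collapse whitespace."""
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9\s]", "", text)  # drop punctuation and symbols
    return re.sub(r"\s+", " ", text)         # collapse repeated whitespace

# Invented raw chatbot logs, paired with manually assigned intent labels.
labeled_examples = [(clean(t), label) for t, label in [
    ("  WHERE is my ORDER???  ", "order_status"),
    ("cancel    my subscription pls!!", "cancel_subscription"),
]]

print(labeled_examples[0])  # ('where is my order', 'order_status')
```

Real pipelines add steps such as deduplication, spelling normalization, and inter-annotator agreement checks on the labels, but the principle is the same: models train on cleaned, labeled pairs, not raw logs.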

3. Speed: 

Understanding the language is only one step in a conversational AI pipeline; other steps include generating a response or acting on the query. Therefore, perceiving and interpreting language should be fast. However, there may be a trade-off between result quality and computation speed, so a selection should be made depending on the application area.

4. Flexibility: 

Adaptability to different solution areas is important. This is achieved by the training and continuous learning capabilities of the NLU solution.

5. Usability: 

The solution should be easy to use for both technical and non-technical employees. A solution with different interfaces can allow a non-technical employee (a customer service agent, for example) to actively improve the system through feedback.

Why is it important now?

The main reasons why natural language understanding is important today can be explained by these points:

NLU can be used as a tool that will support the analysis of an unstructured text

People express themselves in various ways, and phrasing can vary from person to person. For personal assistants in particular, correctly understanding the user is essential. NLU transforms the complex structure of language into a machine-readable structure, which enables text analysis and allows machines to respond to human queries.
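A minimal sketch of this transformation, using hand-written regular expressions and invented intent and slot names (real systems learn these mappings from data rather than relying on fixed patterns):

```python
import re

# Invented intents and slot patterns for illustration only.
PATTERNS = [
    ("set_alarm", re.compile(r"\b(?:wake me|set an? alarm)\b.*?(\d{1,2}(?::\d{2})?\s*(?:am|pm)?)", re.I)),
    ("get_weather", re.compile(r"\bweather\b.*?\bin\s+(\w+)", re.I)),
]

def parse(utterance: str) -> dict:
    """Turn a free-form utterance into a machine-readable intent + slot."""
    for intent, pattern in PATTERNS:
        m = pattern.search(utterance)
        if m:
            return {"intent": intent, "slot": m.group(1)}
    return {"intent": "unknown", "slot": None}

print(parse("What's the weather in Paris tomorrow?"))
# {'intent': 'get_weather', 'slot': 'Paris'}
```

The output structure (intent plus slots) is what downstream logic consumes — the same contract that commercial NLU services expose through their APIs.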

The amount of unstructured text that needs to be analyzed is increasing

Computers can perform language-based analysis 24/7 in a consistent and unbiased manner. Considering the amount of raw data produced every day, NLU, and hence NLP, are critical for the efficient analysis of this data. A well-developed NLU-based application can read, listen to, and analyze it.

Industry analysts also see significant growth potential in NLU and NLP

Analysts estimate a CAGR above 20% for the 2020-2025 period. According to Markets Insider research from 2019, the global natural language processing (NLP) market is expected to be worth $35 billion by 2025, with a 22% CAGR over the 2020-2025 period. The main underlying reason for this growth is the shift from product-centric to customer-oriented experiences. The increasing demand for smart devices and IoT is also contributing to the widespread use of NLU.

For example, a recent Gartner report points out the importance of NLU in healthcare. NLU helps to improve the quality of clinical care by improving decision support systems and the measurement of patient outcomes. 

How to evaluate the accuracy of NLU solutions?

NLU models can perform well on a single, specific task, but accuracy and precision can drop when they are applied across different tasks. It is best to compare the performance of different solutions using objective metrics.

For example, a benchmark can be created by building a test set with a significant number of examples (e.g., >100) from customer service data and comparing different services on it. Since most services are available as easy-to-register APIs, checking the test set against API responses should be straightforward.
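Such a comparison can be sketched as below; `query_service` is a hypothetical stand-in for a vendor's API client, and the test utterances and labels are invented:

```python
# Invented labeled test set: (utterance, expected intent) pairs.
test_set = [
    ("what's my balance", "check_balance"),
    ("send 50 dollars to alice", "transfer_funds"),
    ("my card is broken", "report_issue"),
]

def query_service(service: str, utterance: str) -> str:
    # Placeholder: in practice, call the vendor's REST API here
    # and extract the predicted intent from the response.
    canned = {
        "what's my balance": "check_balance",
        "send 50 dollars to alice": "transfer_funds",
        "my card is broken": "report_issue",
    }
    return canned.get(utterance, "fallback")

def accuracy(service: str) -> float:
    """Fraction of test utterances for which the service predicts the gold intent."""
    correct = sum(query_service(service, u) == gold for u, gold in test_set)
    return correct / len(test_set)

print(f"service_a accuracy: {accuracy('service_a'):.2f}")  # 1.00
```

Running the same loop against each vendor's real API, on the same test set, yields directly comparable numbers.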

Here is a benchmark article by SnipsAI, an AI voice platform, comparing the F1 scores, a measure of accuracy, of different conversational AI providers.

SnipsAI compares its system with Google's API.ai, Facebook's Wit.ai, Microsoft's Luis.ai, and Amazon's Alexa using its open-source dataset on GitHub. The dataset includes 2,400 queries for each of the 7 user intents tested.
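The F1 score used in such benchmarks is the harmonic mean of precision and recall. A minimal sketch of computing it for one intent class, on invented gold labels and predictions:

```python
# Invented gold labels and one service's predictions, aligned by position.
gold = ["play_music", "get_weather", "play_music", "play_music"]
pred = ["play_music", "play_music", "play_music", "get_weather"]

def f1_for(intent: str) -> float:
    """F1 = harmonic mean of precision and recall for a single intent class."""
    tp = sum(g == p == intent for g, p in zip(gold, pred))          # true positives
    fp = sum(p == intent and g != intent for g, p in zip(gold, pred))  # false positives
    fn = sum(g == intent and p != intent for g, p in zip(gold, pred))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(round(f1_for("play_music"), 3))  # 0.667
```

Averaging the per-intent F1 scores (macro-F1) gives a single number per service, which is how provider comparisons like the one above are typically summarized.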

GLUE and its successor SuperGLUE are the most widely used benchmarks for evaluating a model's performance on a collection of tasks, rather than a single task, in order to maintain a general view of NLU performance. GLUE consists of nine sentence- or sentence-pair language understanding tasks covering single-sentence classification, similarity and paraphrase, and inference. GLUE also provides a leaderboard for the benchmark.

What are the leading NLU companies?

As in many emerging areas, technology giants have a large presence in NLU. Some startups, as well as open-source APIs, are also part of the ecosystem.

  • Haptik
    • Knowledge base
    • AI recommendations
    • Smart variants
  • Microsoft
    • Knowledge Exploration Service
    • Language Understanding Intelligent Service (LUIS)
    • Azure Translator APIs
  • Google
    • Dialogflow
    • Translate API
    • Cloud Natural Language API
  • IBM
    • Watson Conversation Service
    • Watson Tone Analyzer
  • Amazon
    • Comprehend
    • Lex

Open-source alternatives are:

  • Facebook’s Wit.ai 
  • Rasa NLU
  • FuzzyWuzzy
  • PyNLPl
  • Stanford CoreNLP

Feel free to learn more about conversational AI from our research articles.

If you have questions about how natural language understanding can help your business, feel free to ask us.
