Differentiating Natural Language Processing (NLP) and Predictive Analytics

Artificial Intelligence (AI) has recently come to the forefront of public attention because of impressive results from large language models, both those trained by OpenAI and those emerging from the open-source landscape. In this blog post, I will contrast some of the more traditional uses of predictive analytics with use cases for large language models (LLMs) and natural language processing (NLP).

Predictive analytics uses historical data to reveal patterns or trends and predict future outcomes. These predictions are then used to aid decision making. Predictive analytics typically employs machine learning, statistics, and data mining techniques. Let us look at some use cases for predictive analytics that have gained traction and produced measurable change.
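To make that concrete, here is a minimal sketch of training a predictive model on historical records and scoring new cases. The feature names, the data, and the choice of scikit-learn's logistic regression are illustrative assumptions, not a description of any deployed system.

```python
# Illustrative only: hypothetical features and outcomes, not real program data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a historical case: [prior_evictions, months_behind_on_rent, household_size]
X = np.array([
    [2, 6, 3],
    [0, 0, 1],
    [1, 3, 4],
    [3, 9, 2],
    [0, 1, 5],
    [2, 7, 1],
])
# 1 = the adverse outcome occurred (e.g., lost housing), 0 = it did not
y = np.array([1, 0, 0, 1, 0, 1])

# Fit a simple model to the historical data.
model = LogisticRegression()
model.fit(X, y)

# Score new, unseen cases; higher scores suggest where to target assistance first.
new_cases = np.array([[1, 5, 2], [0, 0, 3]])
risk_scores = model.predict_proba(new_cases)[:, 1]
print(risk_scores)
```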

In one striking example, Los Angeles County used predictive analytics to identify individuals at risk of homelessness, and then used those predictions to intervene with financial assistance. After the pilot program, 90% of the 54 at-risk recipients self-reported that they were able to keep their homes. [1]

The use of predictive analytics within child welfare is more challenging. Ideally it would help identify children who are at higher risk of maltreatment by analyzing patterns and risk factors associated with past abuse. However, the risk of producing incorrect predictions is higher, especially when those predictions reflect unfavorable bias in the models or in the data they were trained on.
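One simple sanity check, sketched below, is to compare how often a model flags cases across demographic groups before acting on its scores. The column names and data are hypothetical, and a real bias audit would go considerably further.

```python
# Hypothetical audit data: compare the model's flag rate across groups.
import pandas as pd

scored = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],
    "flagged": [1,   0,   0,   1,   1,   0],  # model's high-risk flag
})

# A large gap in flag rates between groups is a signal to investigate
# the model and its training data before using the predictions.
flag_rates = scored.groupby("group")["flagged"].mean()
print(flag_rates)
```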

Fortunately, predictive analytics is not the only possible use case for AI. NLP algorithms can sift through vast amounts of language data, far more than any person could analyze. For instance, by analyzing real-time feeds from social media, NLP models can provide early warnings during a natural disaster or other crisis. [2] Likewise, they can be used to analyze emerging trends within market research. Language translation models have shown remarkable success and are widely used in the form of Google Translate and related products; on-device speech recognition has become standard on phones, tablets, and computers.
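As a rough illustration of the social-media monitoring idea, the sketch below uses the Hugging Face transformers zero-shot classification pipeline to tag short posts with crisis-related labels. The model name and label set are assumptions for the example, not a description of any production system.

```python
# Illustrative sketch: tag incoming posts with crisis-related labels.
from transformers import pipeline

# Model choice is an assumption; any NLI-based zero-shot model could be used.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "Water is rising fast on Elm Street, we need help getting out",
    "Great farmers market downtown this weekend, see you there!",
]
labels = ["flood emergency", "request for rescue", "routine update"]

for post in posts:
    result = classifier(post, candidate_labels=labels)
    # The top-ranked label and its score indicate whether the post needs attention.
    print(post, "->", result["labels"][0], round(result["scores"][0], 2))
```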

Another leading use case of LLMs is search. With the advent of large language models, NLP has shown the ability to comprehend and reply to natural language at an impressive level. LLMs have grown to billions of automatically tuned parameters, and new methods of data preparation combined with increased computational power have produced impressive results in question answering.
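As a small example of the question-answering capability, the sketch below uses the transformers question-answering pipeline to extract an answer from a short passage. The model name and text are illustrative assumptions.

```python
# Illustrative extractive question answering over a short passage.
from transformers import pipeline

# Model choice is an assumption; it is a commonly used model for this task.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Large language models are trained on broad collections of text and can be "
    "adapted to tasks such as search, summarization, and question answering."
)
answer = qa(question="What tasks can large language models be adapted to?", context=context)
print(answer["answer"], round(answer["score"], 2))
```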

Like the deep learning models referenced above, large language models are able to capture complex patterns within text data. At Augintel we are able to leverage the patterns within these large models to improve accuracy and efficiency in information retrieval.
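One common way to apply these models to retrieval, sketched below, is to embed both the query and the documents and rank documents by cosine similarity. The sentence-transformers library, the model name, and the example notes are assumptions for illustration; this is not a description of Augintel's internal implementation.

```python
# Illustrative semantic search: rank notes by similarity to a natural-language query.
from sentence_transformers import SentenceTransformer, util

# Model choice is an assumption for the example.
model = SentenceTransformer("all-MiniLM-L6-v2")

notes = [
    "Discussed housing options and referred the family to a local shelter.",
    "School reported improved attendance over the last month.",
    "Follow-up call scheduled regarding the food assistance application.",
]
query = "Has the family received any housing support?"

note_embeddings = model.encode(notes, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every note, printed highest first.
scores = util.cos_sim(query_embedding, note_embeddings)[0]
for score, note in sorted(zip(scores.tolist(), notes), reverse=True):
    print(round(score, 3), note)
```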

The scale of data within social services organizations can be staggering, encompassing millions of notes from individual case workers. By improving search, Augintel helps case workers and organizational policymakers access the information they need quickly. As a result, case workers have more time to spend in the field, and decision makers gain a better understanding of the data within their organization. As NLP continues to advance, Augintel is keeping pace.

[1] https://www.smartcitiesdive.com/news/cities-predict-homelessness-analytics/635242/

[2] https://www2.deloitte.com/nz/en/blog/consulting/2023/revolutionising-disaster-response.html

DROP US A LINE FOR MORE INFORMATION OR TO SCHEDULE A DEMO