Can sentiment explanations make you richer?

March 25, 2024

Let’s imagine for a moment that an algo-trading company, “Fin-Sent-X,” has enabled a new market platform through which anyone can subscribe to any of its machine learning (ML) models. These models can predict future financial indicators, such as changes in stock value, based on historical news feeds.

According to the company’s website, their newly developed algorithm uses an advanced AI architecture to extract the “sentiment” of each historical news article, namely the polarity of its tone.

Then, using a series of such sentiments, the ML model can predict whether a stock’s value will rise or fall. Would you subscribe to such a service with your own life savings? How long would you let the platform place bids on your behalf without really understanding how the model works? Wouldn’t you want some mechanism that can help you inspect WHY a particular sentiment is derived from any given news article, so you could gain more trust in the service?

In the FAME EU project, IBM Research is enabling this capability as an independent and tradable technology component that can be integrated with any machine learning model to explain its sentiment classification, by leveraging the power of generative AI and large language models (LLMs).

What is sentiment analysis?

Sentiment analysis (SA), or opinion mining, is a natural language processing technique used to determine whether data is positive, negative, or neutral. SA is often performed on textual data to help businesses monitor brand and product sentiment in customer feedback and understand customer needs. In financial forecasting, SA is a valuable tool that can be used to analyze market atmosphere and predict stock market trends.
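To make the idea concrete, here is a minimal lexicon-based polarity scorer. This is an illustrative sketch only, not the SA model used in FAME, and the word lists are made up for the example:

```python
# Hypothetical polarity lexicons -- a real SA model would be far richer.
POSITIVE = {"gain", "surge", "profit", "beat", "growth"}
NEGATIVE = {"loss", "drop", "miss", "decline", "risk"}

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' by counting polar words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("Shares surge after profit beat"))  # positive
```

Production systems replace the word counts with a trained classifier, but the interface stays the same: text in, polarity label out.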

The FAME project is setting up a marketplace platform with the ability to trade and integrate different types of digital assets, such as data, models, and in our case, an analytical service for post-hoc explainability of SA classifications.

While the inner scheme developed for eliciting sentiment explanations is itself novel, the work pursued in FAME has also revealed that such information plays a dual role. Not only does it help humans make sense of the inner workings of the model that determines the sentiment, to ensure it aligns with their expectations, but, perhaps more surprisingly, it also improves the accuracy of the machine learning model that predicts future stock trends when its sentiment feed input is enriched with sentiment explanations.

The objective in FAME is to enable, via its platform, a complementary explainability model that can “wrap around” any trading-algorithm analytics that predicts stock market trends, replacing the original news feed input with a sentiment feed that reduces its size without compromising the accuracy of its market predictions. To enable such a service, a new model was developed with the power of generative AI to automatically highlight, for each sentiment, a sufficient subset of the terms in the original news narrative that most prominently justify its classification. As a somewhat surprising by-product of this work, it was also found that enriching the input for market prediction to include not only the derived sentiments, but also the terms that explain each of them, yields more accurate predictions of next-day stock value.
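The three input variants discussed above can be sketched as follows. The article texts, sentiments, and explanatory terms here are made up for illustration; the point is only the relative size of each feed:

```python
# Hypothetical news feed and its derived sentiments/explanations.
articles = [
    "Shares plunged on mounting losses at the retailer",
    "The bank posted record profits and raised guidance",
]
sentiments = ["negative", "positive"]
explanations = [["plunged", "losses"], ["record", "profits"]]

full_feed = articles                     # original full-text input
sentiment_feed = sentiments              # compact replacement
enriched_feed = [f"{s} [{' '.join(t)}]"  # sentiment + explanatory terms
                 for s, t in zip(sentiments, explanations)]

def size(feed):
    """Crude size measure: total word count across feed entries."""
    return sum(len(entry.split()) for entry in feed)

print(size(full_feed), size(sentiment_feed), size(enriched_feed))  # 16 2 6
```

The enriched feed stays far smaller than the full text, yet carries more signal than the bare sentiments, which is consistent with the accuracy gain reported below.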

How do we achieve this?

Our approach leverages an LLM to find a set of sufficient terms that explains the sentiment of the original news narrative, and then uses that set to enrich the input to the machine learning model.
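A minimal sketch of the zero-shot elicitation step is shown below. The prompt wording and the `query_llm` stub are hypothetical; FAME’s actual prompts and its GPT-3.5 Turbo call are not reproduced here:

```python
def build_prompt(narrative: str, sentiment: str, k: int) -> str:
    """Assemble a hypothetical zero-shot prompt asking for k justifying terms."""
    return (
        f"The following news text was classified as {sentiment}.\n"
        f"List the {k} terms from the text that best justify this "
        f"classification, comma-separated.\n\nText: {narrative}"
    )

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; a deployment would query GPT-3.5 Turbo.
    return "plunged, losses"

def explain_sentiment(narrative: str, sentiment: str, k: int = 2) -> list[str]:
    """Return the k terms the LLM offers as justification for the sentiment."""
    reply = query_llm(build_prompt(narrative, sentiment, k))
    return [term.strip() for term in reply.split(",")]

print(explain_sentiment("Shares plunged on mounting losses", "negative"))
# ['plunged', 'losses']
```

The returned terms are what gets appended to each sentiment in the enriched input feed.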

To enable such an explainability service, IBM has extended the SAX4BPM library, developed as part of the FAME EU project, to cope with text input. The service is machine-learning-model-agnostic: it can explain the output of any SA model, given a narrative and its corresponding sentiment as input. The newly developed functionality utilizes GPT-3.5 Turbo, an LLM, to generate the explanations, and is made deployable as part of the overall FAME landscape. It is packaged as a platform-tradeable component that can be integrated with any SA model to explain its predictions while remaining independent of the SA model itself.

The core idea underlying the explainability algorithm is a novel interactive scheme that employs the LLM as a zero-shot explainable AI (XAI) model to determine the k sufficient terms that yield the sentiment determined by the SA model, regardless of the type of model used for the latter. Following the enablement of this service, a comparative analysis was conducted with an LSTM model used for predicting market trends against historical data on currency-pair values. The test was originally intended to assess the viability of replacing the full-text news feed with the corresponding extracted feed of sentiments. However, the assessment was further extended to test the effect of enriching the sentiment feed, with each individual sentiment also accompanied by its explanatory k terms.
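One plausible reading of the interactive scheme is a loop that grows k until the candidate terms, taken alone, reproduce the SA model’s verdict. Both `sa_model` and `ask_llm_for_terms` below are hypothetical stand-ins, and the exact interaction protocol in FAME may differ:

```python
def sa_model(text: str) -> str:
    """Stub SA classifier: negative iff a bearish word appears."""
    return "negative" if {"plunged", "losses"} & set(text.lower().split()) else "neutral"

def ask_llm_for_terms(narrative: str, sentiment: str, k: int) -> list[str]:
    # Stand-in for the zero-shot LLM query; returns k candidate terms.
    ranked = ["plunged", "losses", "shares", "mounting"]
    return ranked[:k]

def k_sufficient_terms(narrative: str, sentiment: str, max_k: int = 5) -> list[str]:
    """Smallest set of LLM-proposed terms that alone reproduces the sentiment."""
    for k in range(1, max_k + 1):
        terms = ask_llm_for_terms(narrative, sentiment, k)
        # Sufficiency check: do the terms by themselves yield the same label?
        if sa_model(" ".join(terms)) == sentiment:
            return terms
    return []

print(k_sufficient_terms("Shares plunged on mounting losses", "negative"))
# ['plunged']
```

Because the sufficiency check goes through the SA model’s own predictions, the loop works for any classifier, which is what makes the service model-agnostic.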

The outcome of this comparison surprised its developers: the LSTM model achieves greater accuracy in its predictions when leveraging the enriched input. This opens up new horizons for investigating additional applications where explainability information can serve not only to enhance human interpretability but also to improve the precision of AI models.

More information: