Fake news is written to confuse and manipulate public opinion. As such, its intent is always to deceive. But the outcome of twisting facts is, arguably, most evident in financial markets, where there’s always money to be made by shifting people’s trust. Share prices, after all, are as much a product of demand as they are of fiscal fundamentals.
Researchers at the University of Göttingen, the University of Frankfurt, and the Jožef Stefan Institute in Ljubljana, Slovenia, have developed a new framework that, they hope, will help us identify such content. Since malevolent actors can tailor content to appear genuine (by avoiding incriminating terms, for example), the team focused on other aspects of the text.
No swindlin’
“Here we look at other aspects of the text that makes up the message, such as the comprehensibility of the language and the mood that the text conveys,” says Professor Jan Muntermann from the University of Göttingen, co-author of the paper describing the approach.
The authors used machine learning for the task. The algorithm was tasked with creating analytical models that can identify suspicious messages based on characteristics other than their wording. In very broad strokes, it operates similarly to a spam filter.
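The paper doesn’t ship code, but to make the idea concrete, here is a minimal sketch in Python of what “classifying by style rather than wording” could look like. The feature set (readability proxies, hype-laden punctuation, a toy mood lexicon), the example texts, and the labels are all illustrative assumptions on my part, not the authors’ actual model:

```python
# Minimal sketch (not the authors' code): classify messages by style, not wording.
# Readability and "mood" proxies stand in for the non-wording characteristics
# described above; the texts and labels are made up purely for illustration.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def style_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_word_len = np.mean([len(w) for w in words]) if words else 0.0
    avg_sent_len = len(words) / max(len(sentences), 1)          # crude readability proxy
    exclamations = text.count("!") / max(len(sentences), 1)      # hype / mood proxy
    hype_words = sum(w.lower() in {"surge", "soar", "guaranteed", "breakthrough"} for w in words)
    return [avg_word_len, avg_sent_len, exclamations, hype_words]

# Toy training data: 1 = suspicious promotional message, 0 = ordinary filing language.
texts = [
    "Guaranteed breakthrough! Shares will soar next week!!!",
    "The company reported quarterly revenue consistent with prior guidance.",
    "This stock is about to surge, do not miss out!!!",
    "The board approved the previously announced dividend of $0.12 per share.",
]
labels = [1, 0, 1, 0]

X = np.array([style_features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)

print(clf.predict([style_features("Massive surge guaranteed, buy now!!!")]))  # likely [1]
```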
However, there are important differences. For example, today’s spam filters can be appeased by removing incriminating words, so there is a constant back and forth between fraudsters and the systems meant to keep them at bay. To counteract this, the team tested an approach that involves using several overlapping detection models to increase the system’s accuracy (its ability to tell apart fake news from valid information) and robustness (its ability to see through attempts to hide fake news). They explain that even if flagged words are removed from a piece of text, the algorithm can still identify it as fake news based on other linguistic features.
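My reading of the “overlapping models” idea, in sketch form: run a keyword filter and a style-based check in parallel and flag a message if either one fires, so that stripping the trigger words alone isn’t enough to slip through. The trigger list and thresholds below are assumptions for illustration, not values from the paper:

```python
# Sketch of overlapping detectors (illustrative, not the paper's implementation):
# a message must evade *every* model to go undetected, which is what makes the
# combined system more robust than any single filter.
TRIGGER_WORDS = {"guaranteed", "insider", "skyrocket"}   # assumed example list

def keyword_model(text: str) -> bool:
    return any(w in text.lower() for w in TRIGGER_WORDS)

def style_model(text: str) -> bool:
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    hype = text.count("!") / sentences                    # crude "mood" signal
    return hype > 0.5

def flag(text: str) -> bool:
    # Overlapping detectors: flag if any one of them fires.
    return keyword_model(text) or style_model(text)

print(flag("Guaranteed returns, this stock will skyrocket."))                     # True (keywords)
print(flag("This little-known company is about to change everything! Buy now!"))  # True (style)
print(flag("Quarterly results were in line with analyst expectations."))          # False
```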
“This puts scammers into a dilemma. They can only avoid detection if they change the mood of the text so that it is negative, for instance,” explains Dr Michael Siering. “But then they would miss their target of inducing investors to buy certain stocks.”
The main intended purpose of this system is to identify attempts to manipulate the corporate news ecosystem in order to influence stock prices — which can lead to major monetary losses for a lot of people. The authors envision a system where their approach can be used as a type of market watchdog, which would flag such attempts at market manipulation and lead to a temporary suspension in the trading of affected stocks. Alternatively, it could potentially become a source of evidence for criminal prosecutions in the future.
Either way, the implementation of such a system would go a long way towards improving public and corporate confidence in the stock market. Normally this wouldn’t really be relevant news for us here, but seeing as retail investors (i.e. us common Joes and Janes) now account for an estimated 10% of US stock trading by volume, I’m certain at least some of you partake as well.
It would be extremely interesting to see how such a system would impact the evolution of the “meme stocks” we’ve seen recently. Although the largest of these undeniably enjoyed major grassroots support, there were definitely a lot of pieces trying to sway public opinion both for and against them. Would a system such as the one detailed here help boost retail confidence in meme stocks, in particular? Or would it stifle their growth by dampening the hype around them? Given that the framework has already been trialed and the results published, I think it’s a safe bet that we’re going to find out in the future.