
How Lyric Jain fights the Fake News menace

By P Rajendran
August 16, 2018 08:53 IST

The tools he uses are logic, the reputations of the sources, and how much emotions influence the content.
P Rajendran reports for Rediff.com from New York.

IMAGE: Lyric Jain. Photograph: Kind courtesy Lyric Jain

Fake news has been around a long time. But those who suffer the consequences are condemned to endure it again if they lack the logical arsenal to fight wars about facts.

Lyric Jain's Logically aims to do just that -- let people know whether the news they come across is trustworthy or not.

The tools he uses are logic, the reputations of the sources, and how much emotions influence the content.

Jain felt a need to keep tabs on news that economised on the truth after seeing what it did to push the British to vote their country out of the European Union, and the Americans to vote in Donald J Trump.

Still, he said, he should not have been surprised.

 

"We found a satire magazine from 1860 (Puck)," Jain said. "It mentioned 'fake news'. As a term and a concept it's been around for a very long time."

Driven by enthusiasm, Jain dove in, but found the problem was a little harder to solve than he had imagined.

"At the very start, I approached it a bit too naively. I thought this was just a problem with detection," he said.

"It's not quite as simple as that. It's more a combination of how we change how people behave with content, as well as challenges of almost instant verification."

"We approached it from a citizen-education perspective in which we try to get people to be as sceptical about information they find online but we also provide them with the tools, and do all the heavy lifting ourselves using our (software)."

Funded by an MIT project, Jain and his team attacked the problem more analytically.

They check who owns the domain, how old it is, whether the owners are credible or happen to be experts, whether they have any biases, and whether the claims have already been debunked by fact-checkers.

These are the checks a fact-checker or journalist would run on every piece of information they encountered, if they had infinite time and resources.

"People lead very busy lives and we do all the heavy lifting (for them)."

Logically, Jain said, relied on three major models working together to assess credibility.

"We've got metadata-based indicators. That would be stuff like the domain, the authors, their historic backgrounds and biases."

"There the text itself -- if there is any emotive language or circular logic or logical fallacy, if it contains claims that have been debunked by fact-checkers."

"Our third model is social media monitoring. We look at how people are responding to it in monitoring how certain claims and certain stories are passed through different people in different networks on social media."

Jain conceded that not every issue could be addressed, and that not all logical fallacies were equally easy to measure.

"We've got a knowledge base that contains widely accepted information, stuff that is absolutely fact," he said. "Additionally, we've got access to a statistical database which allows us to verify statistical claims quite easily.

"Logical fallacies, in particular, are one of our lower-performing models, but they improve the accuracy of our overall ensemble models."

"Stuff like overuse of emotive language, appeals to emotion, circular reasoning, ad hominem attacks, also for straw man fallacies."

Ad hominem, for example, involves attacking a person, while a straw man involves arguing against a case the opponent has not made.

And given the limited size of the knowledge base, the software may not know what else a person has done to deserve a label, or said elsewhere. But taken together with other information in the media, assessments can still be made, Jain pointed out.

IMAGE: Frederick Burr Opper, Puck magazine, volume 35, no 887 (March 7, 1894), centerfold. Kind courtesy: Library of Congress

"We have data sets that are barely consistent... in listing out typical straw man and ad hominem attacks," he said. "(It's) more than using a list, a sort of dictionary -- a finite list. It's more than language and the parsing behind that language that we're able to gauge."

"So even if its a type of attack or type of fallacy that hasn't been registered in our existing 700,000 article knowledge base, is not documented in the 700,000 articles that made their knowledge base, it will be able to (make an assessment) by analysing the language.'

Jain admitted that some of the assessments worked better than others.

"In some situations, some of these models are less sound, but the idea is that when all three of them are working together, it's a very low chance that misinformation could seep through," he said.

Asked about claims of bias against Facebook, YouTube and other outlets when they blocked content they deemed objectionable, Jain said his group was keenly aware of how perceptions worked and had found its own way to address the matter.

"We are definitely agnostic in terms of our political affiliations," he said, adding that in urban cities it is natural attract people with liberal sensitivities.

"When we recruit people for our company we have passionate people on both sides of the political aisle. That's a good way of making sure that concerns from either side are addressed, specifically in terms of banning content on Logically."

"Unless in extreme circumstances -- if it's stuff that can incite violence or if it's terrorist content -- we will not take down content on Logically."

"It will only be marked and flagged as potential hate speech or potential misinformation. All we can do is make people aware of the context of the information."

"We don't believe in censoring information."

The thing is, in December 2016, Facebook itself had tried to flag untruthful content with the ambiguous term 'disputed', explaining that to mean that independent fact-checkers had disputed its accuracy.

They stopped it within a year, citing a number of problems, one of which Jain's team found particularly disquieting.

With masterly understatement, Jain said, "One of the findings from that is slightly concerning for us: if something is indicated and flagged as misinformation, people were more likely to believe it than if it wasn't tagged," admitting that this was a very concerning revelation.

"What we hypothesise is that (because) Facebook tags say very few things, tagging anything explicitly draws attention to it," he added.

Logically tries to circumvent that by ensuring there is always a tag, so that no single flag draws special attention; the tag is just another indicator of what the content is.
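One way to picture that design choice -- purely a hypothetical sketch, not Logically's implementation -- is a labelling function that assigns every item one of several tags, so that 'potential misinformation' is simply one value on a scale that all content carries:

```python
def tag(credibility_score: float) -> str:
    # Every item receives a label, so no single tag stands out the way a rare
    # "disputed" flag did; the thresholds here are invented for illustration.
    if credibility_score >= 0.75:
        return "credible"
    if credibility_score >= 0.5:
        return "unverified"
    if credibility_score >= 0.25:
        return "potentially misleading"
    return "potential misinformation"

print(tag(0.9), "|", tag(0.04))  # credible | potential misinformation
```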

Of course, considerable evidence suggests that people dealing with information that contradicts their belief just dig in deeper and, perversely enough, become more convinced of their original views.

The Logically team is doing a lot of work to ensure accurate tagging, Jain said, describing an effort that involves natural language scientists, developers, designers, fact-checkers and journalists.

Speaking of the situation in India and the spate of social media rumours, such as those on WhatsApp, that have even resulted in murders, Jain said Logically is holding discussions to help address such untruths that pass for information.

According to him, "Maharashtra is very active in monitoring things that lead to violence. However, in other parts of India we are not sure what the best course of action is."

Still, Jain said, incendiary information is not what he primarily hopes to address.

"One thing we are picking up quite significantly in India and other parts of Asia is health misinformation," he said, providing an example of such a questionable idea: 'Drink orange juice and give up your cancer medication.'

He cited the example of his grandmother, who was diagnosed with cancer while in the UK.

"She used to spend winters in India. Over there, she kind of gave up her cancer meds because of similar kinds of information."

Jain was born in Mysore and would perhaps have remained there had his sister Rythm not gone to the UK and got into the habit of collecting degrees -- at last count, six.

After coming on trips to visit her, the family decided to settle in the UK, where Jain finished his schooling and went on to more interesting things, including a stint at Cambridge in the UK, and at MIT in the US.

"I'd say I wasn't very good at it early on. Especially when I first moved I was still finding my identity. Until last August-September, I hadn't been to India in six-seven years; but since then I've been there -- five times??" Jain said.

"I've really enjoyed my time there seeing some of the extended family. I'm definitely enjoying reconnecting with India now."

P Rajendran / Rediff.com