
AI could help scientists fact-check covid claims amid a deluge of research

by Karen Hao, MIT Technology Review

An experimental tool helps researchers wade through the overwhelming amount of coronavirus literature to check whether emerging studies follow scientific consensus.

Why it matters: Since the start of the coronavirus pandemic, there has been a flood of relevant preprints and papers, produced by people with varying degrees of expertise and vetted through varying degrees of peer review. This has made it challenging for researchers trying to advance their understanding of the virus to sort scientific fact from fiction.

How it works: The SciFact tool, developed by the Seattle-based research nonprofit Allen Institute for Artificial Intelligence (AI2), is designed to help with this process. Type a scientific claim into its search bar, say, "hypertension is a comorbidity for covid" (translation: hypertension can cause complications for covid patients), and it will populate a feed with relevant papers, labeled as either supporting or refuting the assertion. It also displays the abstract of each paper and highlights the specific sentences within it that provide the most relevant evidence for assessing the claim.
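
To make the shape of that output concrete, here is a minimal, runnable Python sketch of the kind of result the tool displays. The class names, field names, and example paper are illustrative assumptions, not AI2's actual API or data:

    from dataclasses import dataclass

    @dataclass
    class EvidenceSentence:
        text: str   # a sentence drawn from the paper's abstract
        label: str  # "SUPPORT" or "REFUTE"

    @dataclass
    class VerifiedPaper:
        title: str
        label: str       # the paper's overall stance toward the claim
        rationale: list  # the highlighted EvidenceSentence entries

    # The kind of feed entry the tool might show for the example query above
    # (the paper and sentence here are invented for illustration):
    result = VerifiedPaper(
        title="Hypertension and outcomes in covid-19 patients (hypothetical)",
        label="SUPPORT",
        rationale=[
            EvidenceSentence(
                text="Patients with hypertension had higher rates of severe "
                     "complications in this cohort.",
                label="SUPPORT",
            )
        ],
    )
    print(f"{result.title}: {result.label}")
    for ev in result.rationale:
        print(f"  [{ev.label}] {ev.text}")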

How it was built: The system is built on top of a neural network called VeriSci. It was trained on an existing fact-checking data set compiled from Wikipedia and fine-tuned on a new scientific fact-checking data set containing 1,409 scientific claims, accompanied by 5,183 abstracts.
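
The two-phase recipe, train on a general fact-checking set, then fine-tune on the scientific one, can be sketched with a toy stand-in. The data, labels, and the simple linear classifier below are illustrative assumptions; the real system trains the neural VeriSci pipeline, not a bag-of-words model:

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    LABELS = ["REFUTE", "SUPPORT"]
    vec = HashingVectorizer(n_features=2**16)
    clf = SGDClassifier()  # a linear classifier that can be updated incrementally

    # Phase 1: train on general-domain (claim + evidence, label) pairs.
    general = [
        ("the earth orbits the sun. observations confirm heliocentrism.", "SUPPORT"),
        ("vaccines cause autism. large studies found no such link.", "REFUTE"),
    ]
    X = vec.transform([text for text, _ in general])
    clf.partial_fit(X, [label for _, label in general], classes=LABELS)

    # Phase 2: fine-tune on scientific (claim + evidence, label) pairs.
    scientific = [
        ("hypertension is a comorbidity for covid. hypertensive patients "
         "showed higher complication rates.", "SUPPORT"),
    ]
    X = vec.transform([text for text, _ in scientific])
    clf.partial_fit(X, [label for _, label in scientific])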

Researchers at AI2 curated the latter data set using Semantic Scholar, a publicly available database of scientific papers, which the nonprofit launched and has maintained since 2015. They randomly selected a sample of papers from a few dozen well-regarded journals in the life and medical sciences, including Cell, Nature, and JAMA. They then extracted the sentences in the papers that included citations and asked expert annotators to rewrite them into scientific claims that could be corroborated or contradicted by the literature. For every claim, the annotators then read through the abstracts of the corresponding citations and identified the sentences containing supporting or refuting evidence.
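
An illustrative record of what one round of that annotation might produce: a claim rewritten from a citation sentence, paired with the evidence sentences found in the cited abstracts. The field names and values below are assumptions for illustration, not the data set's exact schema:

    annotated_claim = {
        "claim": "Hypertension is associated with severe covid-19 outcomes.",
        "source_citation_sentence": (
            "Several cohort studies report worse outcomes for hypertensive "
            "patients [12, 14]."
        ),
        "evidence": [
            {
                "abstract_id": 12,    # one of the cited papers
                "label": "SUPPORT",   # or "REFUTE"
                "sentences": [3, 5],  # indices of the evidence sentences
            }
        ],
    }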

How it performs: When the researchers tested VeriSci on scientific claims related to covid-19, it retrieved relevant papers and labeled them accurately 23 out of 36 times (about 64%). Despite this imperfect performance, the result still outperforms the same neural network trained on other existing fact-checking data sets, and it serves as the first known proof of concept that an AI-based system for scientific fact-checking is possible. In the future, some of the tool's errors could be reduced with more training data; others will require further advances in natural-language understanding.

What it should and shouldn't be used for: SciFact is meant to help scientists researching covid-19 quickly check their hypotheses or emerging claims against the existing scientific literature. It is not meant to dispel the kinds of misinformation or conspiracy theories that circulate on social media (e.g., that covid-19 is a bioweapon) or to assess opinion-based statements (e.g., that the government should require people to stand six feet apart to slow the spread of the virus). Given the tool's experimental nature, experts should still read the abstracts rather than rely solely on the "support" and "refute" labels. The researchers also note that the tool doesn't check the legitimacy of the papers it retrieves, so experts should exercise judgment.
