Algorithm mines materials science literature for new discoveries

July 05, 2019 // By Rich Pell
Researchers at Lawrence Berkeley National Laboratory have found that text mining of scientific literature using an algorithm with no training in materials science can uncover new scientific knowledge.

In their study, the researchers collected 3.3 million abstracts of published materials science papers and fed them into an algorithm called Word2vec. By analyzing relationships between words, the algorithm was able to predict discoveries of new thermoelectric materials years in advance and suggest as-yet unknown materials as candidates for thermoelectric applications.
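
A minimal sketch of this kind of pipeline, assuming the abstracts have already been collected into a text file and using the gensim library, might look like the following. The file name, tokenization, and all hyperparameters other than the 200-dimensional vectors mentioned below are illustrative assumptions, not details of the actual study.

```python
# Sketch: training a Word2vec model on materials-science abstracts.
# Assumes gensim >= 4.0 and one abstract per line in "abstracts.txt"
# (hypothetical file name).
from gensim.models import Word2Vec

# Naive whitespace tokenization; a real pipeline would need chemistry-aware
# preprocessing, e.g. keeping formulas such as "Bi2Te3" as single tokens.
with open("abstracts.txt", encoding="utf-8") as f:
    sentences = [line.lower().split() for line in f]

model = Word2Vec(
    sentences,
    vector_size=200,  # 200-dimensional word vectors, as in the article
    window=8,         # context window size (assumption)
    min_count=5,      # drop very rare tokens (assumption)
    sg=1,             # skip-gram variant (assumption)
    workers=4,
)
model.save("matsci_word2vec.model")
```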

"Without telling it anything about materials science, it learned concepts like the periodic table and the crystal structure of metals, says Anubhav Jain, a scientist in Berkeley Lab's Energy Storage & Distributed Resources Division. "That hinted at the potential of the technique. But probably the most interesting thing we figured out is, you can use this algorithm to address gaps in materials research, things that people should study but haven't studied so far."

Their study, say the researchers, shows that text mining of scientific literature can uncover hidden knowledge, and that purely text-based extraction can establish basic scientific concepts. The project, they say, was motivated by the difficulty of making sense of the overwhelming volume of published studies.

"In every research field there's 100 years of past research literature, and every week dozens more studies come out," says Berkeley Lab scientist Gerbrand Ceder, who helped lead the study. "A researcher can access only fraction of that. We thought, can machine learning do something to make use of all this collective knowledge in an unsupervised manner – without needing guidance from human researchers?"

The researchers collected the 3.3 million abstracts from papers published in more than 1,000 journals between 1922 and 2018. Word2vec took each of the approximately 500,000 distinct words in those abstracts and turned each into a 200-dimensional vector, or an array of 200 numbers.

"What's important is not each number, but using the numbers to see how words are related to one another," says Jain, who leads a group working on discovery and design of new materials for energy applications using a mix of theory, computation, and

