Journal:Terminology spectrum analysis of natural-language chemical documents: Term-like phrases retrieval routine
|download    = [http://jcheminf.springeropen.com/track/pdf/10.1186/s13321-016-0136-4?site=jcheminf.springeropen.com http://jcheminf.springeropen.com/track/pdf/10.1186/s13321-016-0136-4] (PDF)
}}
{{Ombox math}}
==Abstract==
'''Background''': This study seeks to develop, test and assess a methodology for automatic extraction of a complete set of ‘term-like phrases’ and to create a terminology spectrum from a collection of natural language PDF documents in the field of chemistry. A ‘term-like phrase’ is defined as one or more consecutive words and/or alphanumeric string combinations, with unchanged spelling, which convey specific scientific meanings. A terminology spectrum for a natural language document is an indexed list of tagged entities, including recognized general scientific concepts, terms linked to existing thesauri, names of chemical substances/reactions and term-like phrases. The retrieval routine is based on n-gram textual analysis with sequential execution of various ‘accept and reject’ rules, taking into account morphological and structural [[information]].
A significant part of the text pre-processing stage is the selection of individual tokens that are general English words and the recognition of various meaningful text strings, namely: general scientific terms (actually identified at the final terminology spectrum building stage but described here for convenience); tokens denoting chemical elements, stable isotopes and measurement units; and tokens which cannot be part of any term. This work is performed using specially developed dictionaries, described in detail in Table 1.
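The dictionary-driven token recognition described above can be sketched as a set of lookups; this is an illustrative approximation, not the authors' implementation, and the dictionary contents shown here are toy subsets of the real files described in Table 1:

```python
# Illustrative dictionary-based token recognition; the sets below are
# toy stand-ins for the dictionaries of Table 1.
GENERAL_ENGLISH = {"abbreviate", "academic", "accelerate"}
STOP_LIST = {"fig.", "et", "etc.", "i.e.", "ltd"}
ELEMENT_SIGNS = {"H", "He", "Li", "Be"}
UNITS = {"ppm", "kV", "mol"}

def recognize(token):
    """Return a coarse label for a single token."""
    if token in ELEMENT_SIGNS:
        return "element"
    if token in UNITS:
        return "unit"
    low = token.lower()
    if low in STOP_LIST:
        return "stop"              # cannot be part of any term
    if low in GENERAL_ENGLISH:
        return "general-english"
    return "candidate"             # may enter term-like phrases
```

A token not caught by any dictionary remains a candidate for term-like phrase retrieval.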


{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="60%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="4"|'''Table 1.''' Developed/modified dictionaries used for recognition of general English words, general chemical science terms and tokens with special meaning
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Dictionary/''Usage for''
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Description
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Reference
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Examples
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|General chemical science terms<br />&nbsp;<br />''Selection of general terms (chemical and from related fields of physics, mathematics …)''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~7500 General scientific terms in chemistry, physics and mathematics<br />&nbsp;<br />IUPAC Compendium is used
  | style="background-color:white; padding-left:10px; padding-right:10px;"|http://goldbook.iupac.org/<br />&nbsp;<br />IUPAC Compendium of Chemical Terminology (''Gold Book'')
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Naphthenes, solvation energy, osmotic pressure, reaction dynamics …
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|General English words dictionary<br />&nbsp;<br />''Selection of general English words''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~58,000 general English words. It is based on the Corncob Lowercase Dictionary, modified by us for the stated goals; 566 words often used in scientific terminology were excluded
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Modified Corncob Lowercase list of more than 57,000 English words http://ru.scribd.com/doc/147594864/<br />&nbsp;<br />Corncob Lowercase (see Additional file 3 for excluded words)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Abbreviate, academic, accelerate …<br />&nbsp;<br />'''Excluded''': Abrasion, absorption, aerosol …
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Stop list<br />&nbsp;<br />''Filtering tokens which are not part of terms in any way''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~2060 tokens. List contains the words, abbreviations and so on, which cannot be incorporated into any term-like phrases
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Proprietary design (see Additional file 4)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|e.g., de, ca., fig., al., co-exist, et, etc., i.e., ltd …
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Stable isotopes<br />&nbsp;<br />''Filtering n-grams containing digits''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~250 isotopes. It is based on The Berkeley Laboratory Isotopes Project’s isotopes database
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Proprietary design, based on The Berkeley Laboratory Isotopes Project’s DB: http://ie.lbl.gov/education/isotopes.htm (see Additional file 5)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|1H, 2H, 3He, 4He, 6Li, 7Li …
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Chemical elements signs<br />&nbsp;<br />''Filtering n-grams containing digits''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~126 chemical elements. It is based on periodic table
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Proprietary design, based on periodic table (see Additional file 6)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|H, He, Li, Be, B, C, N, O, F …
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Measurement units<br />&nbsp;<br />''Filtering n-grams containing units of measure''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~100 records now, partially based on IUPAC ''Gold Book''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Proprietary design, partially based on http://goldbook.iupac.org/ (see Additional file 7)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|(a.u.), (ev), a.u, °C, ppm, kV, mol, g<sup>−1</sup>, ml<sup>−1</sup>, gcat, gcat h …
|-
|}
|}
Some additional explanation is needed regarding the general English dictionary, the stop list dictionary, and the procedure for recognizing general scientific terms.
More than 560 words either found in scientific terminology (for instance: "acid", "alcohol", "aldehyde", "alloy", "aniline", etc.) or occurring in composite terms (for example, "abundant" may be part of the term "most abundant reactive intermediates") were excluded from the original version of the Corncob Lowercase Dictionary.
The ''IUPAC Compendium of Chemical Terminology'' (the only well-known and time-proven dictionary) is used as the source of general chemistry terms. To find the best way to match an n-gram to a scientific term from the compendium, a number of experiments were performed, resulting in the following criteria:
1. An n-gram is considered a general scientific term if all of its tokens are the words of a certain IUPAC ''Gold Book'' term, regardless of their order; and
2. If (n − 1) of the n-gram's tokens coincide with (n − 1) words of an IUPAC ''Gold Book'' term, and the remaining word occurs among other terms in the dictionary, then the n-gram is also considered a general scientific term.
Some examples may be given. The n-gram "RADIAL CONCENTRATION GRADIENT" is a general scientific term because the phrase "concentration gradient" is in the compendium and the word "radial" is part of the term "radial development." The n-gram "CONTENT CATALYTIC ACTIVITY" is a general term because the term "catalytic activity content" is present in the compendium and differs from the n-gram only by word order. The n-gram "TOLUENE ADSORPTION CAPACITY" is not considered a general term, despite the fact that two words coincide with the term "absorption capacity," because the remaining word "TOLUENE" is special and is not found in the compendium. The n-gram "COBALT ACETATE DECOMPOSITION" is not considered a general term either as only the term "decomposition" may be found.
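The two matching criteria can be sketched as follows; this is a simplified illustration in which word matching is exact lowercase equality and the term list is a toy stand-in for the compendium (the real procedure presumably also handles morphology):

```python
def is_general_term(ngram_tokens, gold_terms):
    """Order-free matching of an n-gram against a term dictionary."""
    tokens = [t.lower() for t in ngram_tokens]
    term_wordsets = [set(term.lower().split()) for term in gold_terms]
    vocab = set().union(*term_wordsets)  # every word of every term
    # Criterion 1: all tokens are the words of one term, in any order.
    if set(tokens) in term_wordsets:
        return True
    # Criterion 2: n-1 tokens form a known term and the remaining
    # token occurs somewhere among the dictionary's terms.
    for i, word in enumerate(tokens):
        rest = set(tokens[:i] + tokens[i + 1:])
        if rest in term_wordsets and word in vocab:
            return True
    return False

# Toy stand-in for the compendium, covering the examples above.
GOLD = ["concentration gradient", "radial development",
        "catalytic activity content", "decomposition"]
```

With this toy dictionary, "radial concentration gradient" is accepted via criterion 2 and "content catalytic activity" via criterion 1, while "cobalt acetate decomposition" is rejected, mirroring the worked examples in the text.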
A final comment concerns the stop list dictionary, which at first glance may look like a set of arbitrary words. Actually, it is based on a series of observations of term-like phrases wrongly identified by an earlier version of the terminology analysis system.
====Strict filtering====
The last step of the text pre-processing stage is strict filtering, developed to remove unnecessary words and meaningless combinations of symbols. If at least one n-gram token is labeled with the strict filtering tag ("rubbish" : "true"), the n-gram is not considered a term-like phrase. At this stage, the procedure looks for certain character sequences, as described by the filtering rules (Table 2) and not exempted by the list of exceptions (Table 3): successive digits, special symbols, measurement units, symbols of chemical elements, brackets and so on. Custom regular expressions and the standard dictionaries described in Table 1 are used for this procedure. A general scheme of strict filtering parsing is illustrated in Fig. 4.
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="70%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="3"|'''Table 2.''' Rules for strict filtering procedure
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|No.
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Rule
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Examples
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|1
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''SpecialSymbolsRule'''''<br />&nbsp;<br />True if a token contains at least one of the special symbols different from: . -,/: () [] + = @ ®
  | style="background-color:white; padding-left:10px; padding-right:10px;"|SIZE(**), SELECTIVITY%, NIMG_650, H2S↔35SCAT, 1AUDAE_AM, ΔGADS, H0 ≦−8.2
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|2
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''StopListRule'''''<br />&nbsp;<br />True if a token is in the stop list (Table 1)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|LITERATURE, VIEWPOINT, PERCENT, PRESENT, IMPORTANCE, FUNDAMENTAL, CONCLUSION, TYPICALLY, EXAMPLE, INTRODUCTION
|-
  | colspan="3"|'''Rules of regular expressions''':<br />&nbsp;<br />True, if a token satisfies at least one of the regular expressions from the following list...
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|3
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''4DigitRule'''''<br />&nbsp;<br />True if a token contains four or more digits in succession
  | style="background-color:white; padding-left:10px; padding-right:10px;"|FQM-3994, RYC-2008-03387, 20000H-1, MAT2010-21147, CO(0001)-CARBIDE, CO(111)/CO(0001), RU(0001) ELECTRODE
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" rowspan="2"|4
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''3DigitRule'''''<br />&nbsp;<br />True if a token contains three digits in succession
  | style="background-color:white; padding-left:10px; padding-right:10px;"|215KMTA, 220ML, 148H-1, CU2O(111), AU{111}-CEO2{100}, MGO/AG(100)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''2DigitRule'''''<br />&nbsp;<br />True if a token begins with one or two digits
  | style="background-color:white; padding-left:10px; padding-right:10px;"|12C16O-13C16O, 31P{1H}, 2-PROPANOL, 2-METHYL-1-BUTENE, 3-METHYL-1,3-BUTADIENE, 15 %H3PW12O40/TIO2
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|5
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''UnitsRule'''''<br />&nbsp;<br />True if a token ends with a string from the dictionary of measurement units (Table 1)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|KJMOL-1, MMOL.MIN-1, KJ.MOL-1, G.GZEOLITE-1.H-1, CM3.MIN-1.G-1
|-
|}
|}
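The digit and unit rules of Table 2 can be approximated with a few regular expressions; the sketch below is illustrative, the unit suffix list is a toy subset of the measurement units dictionary (Table 1), and the exception pass of Table 3 is omitted:

```python
import re

# Approximations of the regular-expression rules in Table 2.
FOUR_DIGITS = re.compile(r"\d{4}")        # 4DigitRule
THREE_DIGITS = re.compile(r"\d{3}")       # 3DigitRule
LEADING_DIGITS = re.compile(r"^\d{1,2}")  # 2DigitRule
UNIT_SUFFIXES = ("mol-1", "min-1", "g-1", "h-1")  # toy subset

def strict_filter(token):
    """Return the name of the first rule a token violates, else None."""
    t = token.lower()
    if FOUR_DIGITS.search(t):
        return "4DigitRule"
    if THREE_DIGITS.search(t):
        return "3DigitRule"
    if LEADING_DIGITS.match(t):
        return "2DigitRule"
    if any(t.endswith(s) for s in UNIT_SUFFIXES):
        return "UnitsRule"
    return None   # token survives strict filtering
```

A token for which no rule fires keeps "rubbish" : "false"; a real implementation would then test the Table 3 exceptions before finally tagging a token as rubbish.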
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="70%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="3"|'''Table 3.''' Exceptions for strict filtering procedure
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|No.
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Exception
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Examples
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|1
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Facet_Index_4digits'''''<br />&nbsp;<br />Token denotes the substance containing a four-digit facet index. The list of chemical element signs is used (Table 1).
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': RU(0001); CO(0001)-CARBIDE; α-FE2O3(0001)<br />&nbsp;<br />''rubbish'': HPG1800B; RYC-2008-03387; 20000H-1
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|2
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Miller_Index_3digits'''''<br />&nbsp;<br />Token denotes the substance containing a three-digit crystallographic Miller index. The list of chemical element signs is used.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': CEO2(111); PT(111); AU{111}-CEO2{100}; (NI,AL)(111); AL2O3/NIAL(110)<br />&nbsp;<br />''rubbish'': R873; 50WX8-100; 270-470OC
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|3
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Substances_3digits'''''<br />&nbsp;<br />Token denotes chemical containing three digits in succession. Chemical elements signs list and regular expressions as <code>EL/\{\d{3}\}</code> are used.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': 15N218O; H235S; H218O-SSITKA; H216O/H218O<br />&nbsp;<br />''rubbish'': FA100; TSVET-500; CE-440
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|4
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Isotopes'''''<br />&nbsp;<br />Token denotes an isotope. Stable isotopes and chemical elements signs lists are used (Table 1).
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': 13C CP-MAS NMR; 12C16O-13C16O MIXTURE; 31P MAS NMR SPECTROSCOPY<br />&nbsp;<br />''rubbish'': 04,21H; 11H; 11HV; 1 %18O2; -1H-1; 57CO
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|5
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Substances_2digits'''''<br />&nbsp;<br />Token denotes substance, which begins with one or two digits.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': 5-PENTANEDIOL; 2-AMINOBENZENE-1,4-DICARBOXYLATE; 5-BROMO-3-(N,N-DIETHYLAMINO-ETHOXY)-2-METHYLINDOLE<br />&nbsp;<br />''rubbish'': 2R,3S; 2LFH; 5NICZPOL; 1KPM; 4-CP
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|6
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Catalysts'''''<br />&nbsp;<br />Token denotes a catalytic system which is a chemical composition with the "." character.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': 1.5AU/C; 1.0CUCOK/ZRO2; CE0.9PR0.1O2; CU0.2CO0.8FE2O4; MG3ZN3.-XFE0.5AL0.5; LAFE0.7NI0.3O3-Δ; CE0.8GD0.2O2-Δ; MN0.8ZR0.2<br />&nbsp;<br />''rubbish'': VOL. %; (B)2.5 %; DISP.[%]
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|7
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Comp'''''<br />&nbsp;<br />Token denotes the chemical or catalyst composition. Tag <code>COMP</code> is used.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': 20 %CU/ZNAL; 0.4 %PD/AL2O3; 4 %PT-4 %RE/TIO2; (5 %)PB(10 %)-SBA15<br />&nbsp;<br />''rubbish'': 50 %AIR; 1.5 %WT; 0-2.5MOL %; CA.23 %
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|8
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Cryst_hydrates'''''<br />&nbsp;<br />Tokens denote crystalline hydrates. Regular expressions as <code>*[A-Za-z].*H2O$</code> are used.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': AL(NO3)3*6H2O; FE2(SO4)3.9H2O; AUCL4(NH4)7[TI2(O2)2(CIT)(HCIT)]2.12H2O;<br />&nbsp;<br />''rubbish'': 0.6 %H2O; 0.03 %C3H6; 0.06286*T;
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|9
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''SpatialDimension'''''<br />&nbsp;<br />Token denotes the 1-, 2- or 3-dimensional method or pattern.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': 2D-SAXS; 2D-GC; 1D-3D COPPER – OXIDE; 1D-STRUCTURE; 1D COPPER – OXIDE<br />&nbsp;<br />''rubbish'': 12-MR; 1LATTICE; 16ACR; 60HPW
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|10
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''Names'''''<br />&nbsp;<br />Token denotes a proper name. A set of regular expressions is used for recognition.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': BRØNSTED ACID; BRӦNSTED BASIC SITE; MӦSSBAUER SPECTROSCOPY;<br />&nbsp;<br />''rubbish'': L’ARGENTIЀRE; PROCESS’S
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|11
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''OscarTags'''''<br />&nbsp;<br />True if a token has any Oscar tag and matches the following regular expressions: <code>\-[A-Za-z]{2}</code>, <code>\{</code>, <code>\[*[A-Za-z]</code> and etc.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''terms'': STEM-HAADF; L-CYSTINE; DI-TERT-BUTYLPEROXIDE;[AU(EN)2]2[CU(OX)2]3<br />&nbsp;<br />''rubbish'': 128°- Y-ROTATED; π- BACKDONATION; CONVERSION(%);CU(1)MN; M1(2); ACTIVITY [2]
|-
|}
|}
''EL'': designation of any chemical element; ''IS'': designation of any stable isotope
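As one concrete example, the ''Isotopes'' exception of Table 3 can be sketched as a single regular expression plus a lookup in the chemical element signs list; the element set here is a toy subset, and the real check also consults the stable isotopes dictionary:

```python
import re

ELEMENT_SIGNS = {"H", "He", "C", "P", "Co"}   # toy subset of Table 1
ISOTOPE = re.compile(r"^(\d{1,3})([A-Z][a-z]?)$")

def isotope_exception(token):
    """True if the token looks like a mass number plus element sign."""
    m = ISOTOPE.match(token)
    return bool(m and m.group(2) in ELEMENT_SIGNS)
```

Note how case sensitivity does the filtering: "57CO" fails because the all-caps "CO" is not a valid element sign, matching the rubbish column of the table.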
[[File:Fig4 Alperin JofCheminformatics2016 8.gif|664px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="664px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Fig. 4''' General scheme of strict filtering tagging</blockquote>
|-
|}
|}
The following examples may be given to illustrate the decision-making process of defining a token as "valid" or "rubbish" (Fig. 5).
[[File:Fig5 Alperin JofCheminformatics2016 8.gif|711px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="711px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Fig. 5''' Examples of strict filtering tagging</blockquote>
|-
|}
|}
====Summary of pre-processing stage====
The final result of the text pre-processing stage is marked and structured text with tagged tokens. These tags are then used by various rules for term-like phrase selection. As not all tags from OSCAR4 and the Penn Treebank tag set are needed, only a few of them are used in the term-like phrases retrieval procedure. The consolidated list of all tags that may be assigned to tokens at different steps of the text pre-processing stage is given in Table 4.
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="70%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="5"|'''Table 4.''' The consolidated list of all tags assigned to tokens at different steps of the text pre-processing stage; it is also indicated whether a tag is used in strict filtering or in term-like phrases retrieval procedure with help of POS-based rules.
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Group of tags
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Tag
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Explanation
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Strict filtering
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Morphological pattern
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" rowspan="14"| POS
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>JJ</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Adjective
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>JJR</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Adjective, comparative
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>VBG</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Verb, gerund or present participle
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n ≥ 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>VBD</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Verb, past tense; includes the conditional form of the verb "to be"
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>VBN</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Verb, past participle
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>NNP</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Proper Noun, singular
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>NN</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Noun, singular or mass
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n ≥ 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>NNPS</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Proper Noun, plural
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n ≥ 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>NNS</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Noun, plural
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n ≥ 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>IN</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Preposition or subordinating conjunction
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>DT</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Determiner
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>RB</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Adverb
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 2)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>RBS</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Adverb, superlative
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 2)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>FW</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Foreign word
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (n-grams n > 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" rowspan="2"| OSCAR
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>CM</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Chemical matter
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (all n-grams)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>ONT</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Ontological term
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (all n-grams)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" rowspan="3"| Own tags
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>COMP</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Chemical composition
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (all n-grams)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>rubbish</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Token to which strict filtering is applied
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (all n-grams)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|<code>GCST</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|General Chemistry Scientific Term
  | style="background-color:white; padding-left:10px; padding-right:10px;"|
  | style="background-color:white; padding-left:10px; padding-right:10px;"|Yes (all n-grams)
|-
|}
|}
As an illustration of tag assignment, consider the example in Figure 6, which shows a sentence in which a few tokens have been tagged. For instance, the token ''2.7 %CO/10.0 %H2O/He'' carries the tags ('''pos''' = "CD"; '''lemma''' = "2.7 %CO/10.0 %H2O/He"; '''oscar''' = "CM"; '''rubbish''' = "false"; '''exception''' = "comp"). Every token has at least two tags: '''<code>pos</code>''' (holding the part-of-speech information) and '''<code>lemma</code>''' (the lemma of the token). In addition, some tokens related to chemistry (indicating chemical substances, formulas, reactions, etc.) have the tag '''<code>oscar</code>''', taking the value <code>CM</code> or <code>ONT</code>. Last but not least is the tag '''<code>rubbish</code>''' ("true" or "false"), marking tokens to which strict filtering is applied.
[[File:Fig6 Alperin JofCheminformatics2016 8.gif|664px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="664px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Fig. 6''' An illustration of tags assignment to different tokens</blockquote>
|-
|}
|}
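The per-token tag set can be pictured as a simple record; this is a hypothetical representation mirroring the tags described above, not the authors' actual data structure:

```python
# Hypothetical per-token record mirroring the tags described above.
token = {
    "text": "2.7 %CO/10.0 %H2O/He",
    "pos": "CD",                      # part of speech (Penn Treebank)
    "lemma": "2.7 %CO/10.0 %H2O/He",  # lemma of the token
    "oscar": "CM",                    # OSCAR4 chemical-matter tag
    "rubbish": False,                 # strict filtering verdict
    "exception": "comp",              # Table 3 exception that fired
}

def mandatory_tags_present(tok):
    """Every token must carry at least 'pos' and 'lemma'."""
    return "pos" in tok and "lemma" in tok
```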
===N-grams spectrum retrieval procedure===
As defined earlier in our study, the term "n-gram of length ''n''" denotes a sequence of ''n'' consecutive tokens situated within the same sentence, with useless tokens (at the moment, only definite/indefinite articles) omitted. The n-gram set is obtained by moving a window ''n'' tokens long through an entire sentence, token by token. This process is repeated for all sentences in a set of texts: <math id="M1">T = \left\{ {T_{1},T_{2},\ldots,T_{m}} \right\}</math>
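The moving-window extraction can be sketched in a few lines; following the text, articles are the only tokens skipped here:

```python
def ngrams(sentence_tokens, n, skip=("a", "an", "the")):
    """All n-grams of one sentence, omitting articles first."""
    toks = [t for t in sentence_tokens if t.lower() not in skip]
    # Slide a window of n tokens through the sentence, token by token.
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
```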
For a set of texts, each n-gram may be characterized by its textual frequency of occurrence <math id="M2">f_{T}\left( T_{i} \right)</math>, the total number of occurrences of the n-gram within a text <math id="M3">T_{i}</math>, and by its absolute frequency of occurrence <math id="M4">f_{A} = \sum\limits_{i}f_{T}\left( T_{i} \right)</math>, the total number of occurrences across all texts. As a result, each n-gram may be described by a vector <math id="M5">\mathbf{F}\left( T \right) = \left\{ {f_{T}\left( T_{1} \right),f_{T}\left( T_{2} \right),\ldots,f_{T}\left( T_{m} \right)} \right\}</math> over the set of texts, enabling additional procedures for n-gram filtering and text information analysis.
The full n-gram data set is redundant, which complicates analysis, so different filtration procedures are applied for specific purposes. For instance, threshold filtering based on the values of <math id="M6">\text{max}f_{A} = \text{max}\sum_{i}f_{T}\left( T_{i} \right)</math> and <math id="M7">\text{max}f_{T}\left( T_{i} \right)</math> may be used.
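The frequency vector '''F'''(''T'') and a simple threshold filter can be sketched as below. This assumes each text has already been reduced to its list of n-grams; the threshold values are illustrative, not the authors' settings.

```python
from collections import Counter

def frequency_vectors(texts):
    """texts: list of n-gram lists, one per text T_i.
    Returns {ngram: [f_T(T_1), ..., f_T(T_m)]}."""
    counters = [Counter(t) for t in texts]
    vocab = set().union(*counters)
    return {g: [c[g] for c in counters] for g in vocab}

def threshold_filter(vectors, min_abs=2, min_text=1):
    """Keep n-grams whose absolute frequency f_A = sum_i f_T(T_i)
    and maximum textual frequency meet the given thresholds."""
    return {g: v for g, v in vectors.items()
            if sum(v) >= min_abs and max(v) >= min_text}
```

For example, an n-gram occurring twice in one text and once in another has the vector [2, 1] and absolute frequency 3, so it survives a `min_abs=3` threshold while rarer n-grams are dropped.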
===Module of terminology spectrum building===
The final stage of the analysis is to distinguish, among the scores of n-grams, the term-like phrases, general chemistry scientific terms, names of chemical entities and useless n-grams. Calculating the textual and absolute frequencies of term occurrence completes the terminology spectrum building.
To select term-like n-grams, sets of accept and reject rules are applied. They are all based on the token tags assigned at previous steps and on the developed dictionaries (Table 1). Each set of rules determines whether an n-gram of a given length is a term-like phrase by analyzing its structure. All rules are applied consecutively: if an n-gram matches a reject rule, the procedure stops and the n-gram is declared non-term-like; if it matches an accept rule, it is declared a term-like phrase, possibly with a special meaning (e.g. a general chemistry scientific term or chemical entity). If no rule applies, the n-gram is also considered a term-like phrase. A few general rules apply to n-grams of any length; there are also tailored sets of rules for 1-grams (Table 5), 2-grams (Table 6) and longer (n > 2) n-grams (Table 7).
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="60%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''Table 5.''' Accept and reject rules succession for unigrams (1-grams)
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Description
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Examples
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''GeneralChemTermRule (accept rule)'''''<br />&nbsp;<br />True if a 1-gram is a general chemistry scientific term
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''StrictFilteringTagRule (reject rule)'''''<br />&nbsp;<br />True if a 1-gram consists of a token with the strict filtering tag <code>rubbish:true</code>
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''ShortTokensRule (reject rule)'''''<br />&nbsp;<br />True if a 1-gram consists of a short token of length less than three characters; this rule excludes noise present in documents, such as axis labels
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''UnitsRule (reject rule)'''''<br />&nbsp;<br />True if a 1-gram contains a string being a measurement unit from the dictionary (Table 1)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''ChemUnigramRule (accept rule)'''''<br />&nbsp;<br />True if a 1-gram is tagged by any OSCAR tag and by one of the following POS tags: <code>FW</code>, <code>NNP</code>, or tagged by tag <code>COMP</code>; selected unigrams are assumed and marked to have a chemical sense
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''Term-like'': barium, phenanthrene, pentanol, xanes
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''GeneralEnglishDictRule (reject rule)'''''<br />&nbsp;<br />True if a 1-gram is in the General English Dictionary (Table 1)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''Filtered'': topography, paint, plateau, pool, searching, file, addenda, improvement, theme …<br />&nbsp;<br />''Term-like'': hydrocalcite, acetylacetone, cracking, ageing
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''UnigramPOSRule (reject rule)'''''<br />&nbsp;<br />True if a 1-gram is not a noun or a gerund; term-like 1-gram must be tagged with the following POS tags: <code>VBG</code>, <code>NN</code>, <code>NNPS</code>, <code>NNS</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''Filtered'': schematized, suddenly, skeletal, behind<br />&nbsp;<br />''Term-like'': ethylene, hydrocalcite, leaching, 12n-decylhexadecanamide, sulfamethoxazole, anchoring
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''UnigramAddRules (reject rules)'''''<br />&nbsp;<br />Set of regular expressions to filter unigrams denoting various ions, signs, captions, etc.
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''Filtered'': M(O2), GA15.6, PW91, V2.1, G(D), TI(V), PD(I), PT0, P(X), BA2+, CE(3+), cm3, CH3, AA, Cu2+, Mo6+, Et-CP, GC–MS, Zn-Al
|-
|}
|}
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="60%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''Table 6.''' Reject and accept rules consecution for bigrams (2-grams)
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Description
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Examples
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''GeneralChemTermRule (accept rule)'''''<br />&nbsp;<br />Same rule as for 1-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''StrictFilteringTagRule (reject rule)'''''<br />&nbsp;<br />Same rule as for 1-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''ShortTokensRule (reject rule)'''''<br />&nbsp;<br />True if a 2-gram consists only of short tokens of length less than three characters
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''IdenticalTokensRule (reject rule)'''''<br />&nbsp;<br />True if a 2-gram contains at least two identical tokens
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''UnitsRule (reject rule)'''''<br />&nbsp;<br />True if any token in a 2-gram ends with measurement unit string from the dictionary (Table 1); it should be noted that measurement unit may consist of several tokens, for example, the "g/h" consists of three tokens ["g", "/", "h"]
  | style="background-color:white; padding-left:10px; padding-right:10px;"|PPM C7H14, 70ML MIN-1, CM3MIN-1 H2, MIN-1 FLOW, H-1 GAS, PPM N2O/AR, ML G-1MIN-1, MOL-1 HYDROLYSIS, PPM NOX/5%O2/N2
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''BiGramPOSRule (accept rule with exception)'''''<br />&nbsp;<br />True if the first token is tagged with one of the following POS tags: <code>JJ</code>, <code>JJR</code>, <code>FW</code>, <code>VBG</code>, <code>VBD</code>, <code>VBN</code>, <code>NN</code>, <code>NNP</code>, <code>NNPS</code>, <code>NNS</code>; and the second token is tagged with one of: <code>FW</code>, <code>VBG</code>, <code>NN</code>, <code>NNP</code>, <code>NNPS</code>, <code>NNS</code><br />&nbsp;<br />Exception: the following combinations are not allowed: <code>VBG, VBG</code>, <code>VBG, FW</code>, and <code>NNP, FW</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''Term-like'': Andronov bifurcation, Na2CO3 impregnation, nickel catalyst; supported MgO, anchored lysine, stirred glass; carbonaceous particle, temperature-programmed adsorption, Fischer–Tropsch catalyst; in situ EXAF, UV–VIS spectroscopy, Raman spectroscopy<br />&nbsp;<br />''Filtered due to exception'': involving reforming, reforming minimizing, using in, Shimada etc.
|-
|}
|}
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="60%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''Table 7.''' Reject and accept rules consecution for n-grams (n ≥ 3)
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Description
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Examples
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''GeneralChemTermRule (accept rule)'''''<br />&nbsp;<br />Same rule as for 2-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''StrictFilteringTagRule (reject rule)'''''<br />&nbsp;<br />Same rule as for 2-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''ShortTokensRule (reject rule)'''''<br />&nbsp;<br />Same rule as for 2-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''IdenticalTokensRule (reject rule)'''''<br />&nbsp;<br />Same rule as for 2-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="2"|'''''UnitsRule (reject rule)'''''<br />&nbsp;<br />Same rule as for 2-grams
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''''ManyGramPOSRule (accept rule with exception)'''''<br />&nbsp;<br />True if the '''first''' token is tagged with one of the following POS tags (noun, gerund, adjective, adverb or participle): <code>NN</code>, <code>NNP</code>, <code>VBG</code>, <code>VBD</code>, <code>VBN</code>, <code>JJ</code>, <code>JJR</code>, <code>RB</code>, <code>RBS</code>, <code>FW</code>; every '''middle''' token, in any position, is tagged (additionally allowing preposition or determiner): <code>NN</code>, <code>NNP</code>, <code>VBG</code>, <code>VBD</code>, <code>VBN</code>, <code>JJ</code>, <code>JJR</code>, <code>RB</code>, <code>RBS</code>, <code>FW</code> + '''<code>IN</code>''', '''<code>DT</code>'''; and the '''last''' token is tagged (gerund or noun): <code>VBG</code>, <code>NN</code>, <code>NNP</code>, <code>NNPS</code>, <code>NNS</code><br />&nbsp;<br />Exception: the following combinations are not allowed (they describe phrases that look torn from their context): <code>VBG, NN</code>, <code>VBG, IN</code>, <code>VBN, NN</code>, <code>VBN, JJ</code>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|''Term-like'': X-ray fluorescence spectrometer; Brønsted basic site; Pd(110) surface oscillation; doping CsPW with platinum; catalyzed N2O decomposition; crystalline phase transition; catalyzed oxidation of NO; complete photoreduction of Pd(II); propagating thermosynthesis; reforming of the biomass; drying inside the microscope column<br />&nbsp;<br />''Filtered due to exception'': used during steam reforming; catalyzed by metalloporphyrin; investigated by XRD; using atomic absorption
|-
|}
|}
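The sequential application of the rule cascade in Tables 5, 6 and 7 can be sketched as follows. The rule names mirror the tables, but the predicates here are simplified stand-ins for illustration, not the authors' full implementations.

```python
def short_tokens_rule(ngram):
    # Reject: every token is shorter than three characters.
    return all(len(t) < 3 for t in ngram)

def identical_tokens_rule(ngram):
    # Reject: the n-gram contains repeated tokens.
    return len(set(ngram)) < len(ngram)

# Rules fire in order; the first matching rule decides the verdict.
RULES = [
    ("reject", short_tokens_rule),
    ("reject", identical_tokens_rule),
    # ... GeneralChemTermRule (accept), UnitsRule (reject),
    # POS-based accept rules with exceptions, etc. ...
]

def classify(ngram):
    """Apply rules consecutively; if no rule applies, the n-gram
    is considered a term-like phrase."""
    for verdict, rule in RULES:
        if rule(ngram):
            return "term-like" if verdict == "accept" else "rejected"
    return "term-like"
```

For instance, `classify(("Raman", "spectroscopy"))` falls through both reject rules and is kept, while `classify(("cm", "h"))` is rejected by the short-tokens rule.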
The following examples illustrate the decision-making process of whether an n-gram may be considered a term-like phrase (Fig. 7).
[[File:Fig7 Alperin JofCheminformatics2016 8.gif|664px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="664px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Fig. 7''' An illustration of term-like phrases retrieval procedure with POS based accept rules</blockquote>
|-
|}
|}
The next step in the terminology analysis stage is the tagging of term-like phrases to describe their roles as entities with special meanings. At the moment the following tags exist: <code>term-like phrase</code>, <code>general chemistry term</code>, and <code>chemical entity</code>. The final step is an additional filtration procedure that reduces the number of term-like phrases by removing short term-like phrases which are parts of longer n-grams. The filter is applied when the absolute frequencies of occurrence of the short and long n-grams are equal.
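This final subsumption filter can be sketched as below: a shorter phrase is dropped when it is contained in a longer retained phrase and both have the same absolute frequency (i.e. the short phrase apparently never occurs on its own). Phrases are represented as token tuples and `freqs` maps each phrase to its absolute frequency; this is an assumed representation for illustration.

```python
def is_subphrase(short, long_):
    """True if the token tuple `short` occurs contiguously in `long_`."""
    n, m = len(short), len(long_)
    return any(long_[i:i + n] == short for i in range(m - n + 1))

def subsumption_filter(freqs):
    """Drop short phrases subsumed by longer ones with equal
    absolute frequency of occurrence."""
    keep = dict(freqs)
    for short in list(keep):
        for long_ in freqs:
            if (len(long_) > len(short)
                    and freqs[long_] == freqs[short]
                    and is_subphrase(short, long_)):
                keep.pop(short, None)
                break
    return keep
```

For example, if "Raman" and "Raman spectroscopy" each occur three times, the unigram is removed while the bigram is retained.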
==Results and discussion==
An example of automatic term-like phrases retrieval is shown in Fig. 8 with some term-like and filtered-off n-grams highlighted. For the filtered-off n-grams the reject rules used are given as well. For the detailed results of terminology analysis for one preselected Congress abstract see the Additional file 1.
[[File:Fig8 Alperin JofCheminformatics2016 8.gif|784px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="784px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Fig. 8''' An example of terminology analysis results (with some term-like and filtered-off n-grams ''highlighted'')</blockquote>
|-
|}
|}
To understand the overall performance of the term-like phrases retrieval routine, the full set of text abstracts from five EuropaCat events was processed, and the obtained data were statistically analyzed (see Table 8). The term-like phrases retrieval procedure reduces the total number of available n-grams to about 1–3 percent, depending on the n-gram length n.
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="60%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="6"|'''Table 8.''' Consolidated table of experimental results on terminology analysis of EuropaCat abstracts set<br />&nbsp;<br />Number of texts: 6387; total amount of tokens: 5,148,124 (EuropaCat 2013, 2011, 2009, 2007, 2005)
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|n
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|N—total number of n-grams
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|N<sub>TL</sub>—total number of term-like phrases (% of N)
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|N<sub>GS</sub>—total number of general scientific terms (% of N<sub>TL</sub>)
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|N<sub>COMP</sub>—total number of phrases with tag <code>COMP</code> (% of N<sub>TL</sub>)
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|N<sub>CM</sub>—total number of phrases with OSCAR tag <code>CM</code> (% of N<sub>TL</sub>)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|1
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~5.15 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|68,811 (~1.3 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|574 (0.8 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|8776 (12.7 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|40,354 (58.6 %)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|2
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~4.94 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|135,002 (~2.7 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|11,263 (8.3 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|5199 (3.9 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|52,641 (38.9 %)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|3
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~4.74 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|130,706 (~2.8 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|1031 (0.8 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|5194 (4 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|64,101 (49.0 %)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|4
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~4.54 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|118,893 (~2.6 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|41 (0.03 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|4064 (3.4 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|56,047 (47.1 %)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|5
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~4.35 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|94,546 (~2.2 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|5 (0.005 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|3390 (3.6 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|43,550 (46.0 %)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|6
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~4.16 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|58,775 (~1.4 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|2469 (4.2 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|29,992 (51.0 %)
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|7
  | style="background-color:white; padding-left:10px; padding-right:10px;"|~3.97 × 10<sup>6</sup>
  | style="background-color:white; padding-left:10px; padding-right:10px;"|46,224 (~1.2 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|2403 (5.2 %)
  | style="background-color:white; padding-left:10px; padding-right:10px;"|26,030 (56.3 %)
|-
|}
|}
Table 8 demonstrates that the maximum absolute number of term-like n-grams corresponds to n = 2 (bigrams), in good accordance with the well-known average term length in scientific texts. On the other hand, term indexes are often limited to n-gram lengths n = 1, 2, 3. The limit n = 3 appears sufficient for general science vocabulary (see the N<sub>GS</sub> values in Table 8, the number of general scientific terms found), but it is not sufficient for a specialized thesaurus (e.g. for catalysis). The number of term-like n-grams with the <code>COMP</code> tag is also large for different n, including n > 3. In summary, long-length term retrieval is a distinctive feature of the suggested approach.
It is also seen from Table 8 that roughly half of the term-like phrases carry the OSCAR tag <code>CM</code>. Note also that if any single token of a plausible term-like phrase carries an OSCAR tag, the system assigns the same tag to the whole phrase; this may explain the close percentage values for phrases of different lengths.
To assess the overall effectiveness of the term-like phrases retrieval procedure, the achievable precision and recall values must be quantified. To do this, a preliminary comparison between automatically and manually selected term-like phrases was performed with the help of two professional chemical scientists, who picked out the term-like phrases from a limited set of arbitrarily selected documents. To include a phrase in the list of term-like phrases, consensus between both experts was required. It should be noted that the experts were not required to follow the same moving-window procedure over entire sentences used for n-gram isolation. Moreover, the experts took into account and analyzed information contained in simple grammatical structures typical for scientific texts, such as enumerations. This leads to additional differences between the sets of expert-selected and automatically selected term-like phrases (for an example see Fig. 9).
[[File:Fig9 Alperin JofCheminformatics2016 8.gif|709px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="709px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Fig. 9''' An example of terminology analysis results (with some automatically retrieved and expert selected term-like phrases)</blockquote>
|-
|}
|}
The data obtained through expert terminological analysis were compared with the automatically retrieved terms, and the precision (P), recall (R) and F-measure values were calculated. In this paper, precision<ref name="WPPrec">{{cite web |url=https://en.wikipedia.org/wiki/Precision_and_recall |title=Precision and recall |publisher=Wikimedia Foundation}}</ref> indicates the fraction of automatically retrieved term-like phrases that coincide with expert-selected ones; recall is the fraction of the experts' selected term-like phrases that are retrieved by the system.
<math id="M8">\begin{array}{l}
{P = \frac{\text{Number}\,\text{of}\,\text{coincidences}}{\text{Number}\,\text{of}\,\text{term-like}\,\text{phrases}\,\text{retrieved}\,\text{by}\,\text{the}\,\text{system}};\quad R = \frac{\text{Number}\,\text{of}\,\text{coincidences}}{\text{Number}\,\text{of}\,\text{terms}\,\text{retrieved}\,\text{by}\,\text{experts}}} \\
{\left( {\text{Number}\,\text{of}\,\text{coincidences}} \right) = \text{Number}\,\text{of}\,\left\{ {\left( \begin{array}{l}
{\text{Term-like}\,\text{phrases}} \\
{\text{retrieved}\,\text{by}\,\text{experts}} \\
\end{array} \right) \cap \left( \begin{array}{l}
{\text{Term-like}\,\text{phrases}} \\
{\text{retrieved}\,\text{by}\,\text{the}\,\text{system}} \\
\end{array} \right)} \right\}} \\
\end{array}</math>
Both precision and recall may therefore be used as measures of the relevance and efficiency of the term-like phrase retrieval process. In simple terms, a high precision value means that substantially more genuine term-like phrases are selected than erroneous ones, while a high recall value means that most of the term-like phrases present in the text are selected.
Very often these two measures (P and R) are combined into a single value, the F<sub>1</sub>-measure<ref name="WPF1">{{cite web |url=https://en.wikipedia.org/wiki/F1_score |title=F1 score |publisher=Wikimedia Foundation}}</ref>, to provide an overall characteristic of system performance. The F<sub>1</sub>-measure is the harmonic mean of P and R, reaching 1 at best and 0 at worst:
<math id="M9">F_{1} = 2PR/\left( {P + R} \right)</math>
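The three measures are straightforward to compute; as a check, the snippet below reproduces the whole-set values from Table 9 (466 coincidences, 872 system-retrieved phrases, 655 unique expert phrases).

```python
def prf1(coincidences, n_system, n_expert):
    """Precision, recall and F1 from raw counts."""
    p = coincidences / n_system   # fraction of system phrases confirmed
    r = coincidences / n_expert   # fraction of expert phrases found
    f1 = 2 * p * r / (p + r)      # harmonic mean of P and R
    return p, r, f1

p, r, f1 = prf1(466, 872, 655)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.53 0.71 0.61
```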
The results on the number of expert selected and automatically retrieved term-like phrases, number of coincidences and calculated P, R and F<sub>1</sub> values are represented in Table 9. For the detailed results of terminology analysis for one preselected text, see the Additional file 1.
{|
| STYLE="vertical-align:top;"|
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="70%"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="7"|'''Table 9.''' Precision, Recall and F-measure estimated from the data obtained for five arbitrarily selected texts<br />&nbsp;<br />No. 1—Design, synthesis and catalysis of recoverable catalysts assembled in emulsion and…, C. Li et al. (2005)<br />No. 2—Understanding reaction pathways on model catalyst surfaces, F. Gao et al. (2007)<br />No. 3—Solid acid catalysts Based on H3PW12O40 Heteropoly Acid: Acid and Catalytic Pr…, A.M. Alsalme et al. (2011)<br />No. 4—Advantages of using TOF–SIMS method in surface studies of heterogeneous…, M.I Szynkowska et al. (2005)<br />No. 5—ECS-Materials: synthesis and characterization of a new class of crystalline…, G. Bellussi et al. (2007)<br />
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Text no.
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Number of terms retrieved by two experts
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Number of term-like phrases retrieved by the system
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Number of coincidences
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Precision
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|Recall
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;"|F<sub>1</sub>-measure
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|No. 1
  | style="background-color:white; padding-left:10px; padding-right:10px;"|164
  | style="background-color:white; padding-left:10px; padding-right:10px;"|221
  | style="background-color:white; padding-left:10px; padding-right:10px;"|135
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.61
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.82
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.70
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|No. 2
  | style="background-color:white; padding-left:10px; padding-right:10px;"|155
  | style="background-color:white; padding-left:10px; padding-right:10px;"|174
  | style="background-color:white; padding-left:10px; padding-right:10px;"|96
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.55
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.62
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.58
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|No. 3
  | style="background-color:white; padding-left:10px; padding-right:10px;"|170
  | style="background-color:white; padding-left:10px; padding-right:10px;"|172
  | style="background-color:white; padding-left:10px; padding-right:10px;"|113
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.66
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.66
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.66
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|No. 4
  | style="background-color:white; padding-left:10px; padding-right:10px;"|68
  | style="background-color:white; padding-left:10px; padding-right:10px;"|119
  | style="background-color:white; padding-left:10px; padding-right:10px;"|40
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.34
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.59
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.43
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"|No. 5
  | style="background-color:white; padding-left:10px; padding-right:10px;"|125
  | style="background-color:white; padding-left:10px; padding-right:10px;"|215
  | style="background-color:white; padding-left:10px; padding-right:10px;"|106
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.50
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.85
  | style="background-color:white; padding-left:10px; padding-right:10px;"|0.63
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" colspan="4"|'''P, R and F values calculated for the entire five-text set:'''<br />&nbsp;<br />''Unique expert term-like phrases'': 655<br />''Term-like n-grams'': 872<br />''Coincidences'': 466
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''0.53'''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''0.71'''
  | style="background-color:white; padding-left:10px; padding-right:10px;"|'''0.61'''
|-
|}
|}
It may be concluded therefore that further improvements can be made with term-like phrase retrieval efficiency by bringing into consideration the knowledge of typical grammatical structures used in scientific texts<ref name="BolshakovaAHeur15" /><ref name="BolshakovaLSPL10">{{cite book |chapter=LSPL-patterns as a tool for information extraction from natural language texts |title=New Trends in Classification and Data Mining |author=Bolshakova, E.; Efremova, N.; Noskov, A. |editor=Markov, K.; Ryazanov, V.; Velychko, V.; Aslanyan, L. |publisher=ITHEA |pages=110–118 |year=2010 |isbn=9789541600429}}</ref> as well as numeric values of both textual and absolute frequencies of n-gram occurrences.
It is also seen that the first version of the terminology analysis system delivers sufficiently high precision and recall values for the term-like phrases retrieval process. Some comparison can be made with the values P = 0.34–0.40, R = 0.11–0.14, F<sub>1</sub> = 0.17–0.20 reported<ref name="KimAuto13" /> for such well-known keyphrase retrieval systems as Wingnus, Sztergak and KP-Miner, although this comparison should be treated with caution, since the systems being compared pursue different goals (term-like phrase vs. keyphrase retrieval).
==Conclusions==
As mentioned in the introduction, scientific publications remain the most important sources of scientific knowledge, and new methods for retrieving meaningful information from natural language documents are particularly welcome today. The structural foundation of any such publication consists of widely accepted terms and term-like phrases that convey useful facts and shades of meaning of the document's content.
The present study aims to develop, test and assess a methodology for automated extraction of a full terminology spectrum from natural language chemical PDF documents, retrieving as many term-like phrases as possible. Term-like phrases are defined as one or more consecutive words and/or alphanumeric string combinations which convey specific scientific meaning, with spelling and context unchanged from the real text. The terminology spectrum of a natural language publication is defined as an indexed list of tagged entities: recognized general science notions, terms linked to existing thesauri, names of chemical substances/reactions and term-like phrases. The retrieval routine is based on n-gram text analysis with sequential application of complex accept and reject rules. The main distinctive feature of the suggested approach is that it picks out all parsable term-like phrases, rather than selecting a limited set of keyphrases meeting predefined criteria. The next step is to build an extensive term index of a text collection. The developed approach neither takes semantic similarity into account nor differentiates between similar term-like phrases (distinct evaluation metrics may be employed for this at later stages). The approach, which comprises a number of sequentially running procedures, shows good results in terminology spectrum retrieval compared with well-known keyphrase retrieval systems.<ref name="KimAuto13" /> The term-like phrase parsing efficiency is quantified with precision (P = 0.53), recall (R = 0.71) and F<sub>1</sub>-measure (F<sub>1</sub> = 0.61) values calculated from a limited set of documents manually processed by professional chemical scientists.
Terminology spectrum retrieval may be used to perform various types of text analysis across document collections. We believe that this sort of terminology spectrum may be successfully employed for text information retrieval and for reference database development. For example, it may be used to develop thesauri, to analyze research trends in subject fields by registering changes in terminology, to derive inference rules in order to understand particular text content, to look for similarity between documents by comparing their terminology spectra within an appropriate vector space, and to develop methods to automatically map a document to a reference database field.
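The vector-space comparison of documents mentioned above can be sketched briefly. This is a minimal illustration, not the authors' implementation: the phrases and counts are hypothetical, and a terminology spectrum is reduced here to a bag of term-like phrases with frequencies.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(spectrum_a, spectrum_b):
    """Cosine similarity between two term-frequency spectra."""
    common = set(spectrum_a) & set(spectrum_b)
    dot = sum(spectrum_a[t] * spectrum_b[t] for t in common)
    norm_a = sqrt(sum(v * v for v in spectrum_a.values()))
    norm_b = sqrt(sum(v * v for v in spectrum_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical spectra of two abstracts (counts are invented)
doc1 = Counter({"selective oxidation": 3, "propane": 2, "vanadium catalyst": 1})
doc2 = Counter({"selective oxidation": 2, "propane": 1, "zeolite": 4})
```

A document pair with many shared term-like phrases then scores close to 1, while disjoint spectra score 0.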
For instance, if a set <math id="10">T = \left\{ {T_{1},T_{2},\ldots,T_{m}} \right\}</math> contains a collection of texts from different time periods (in our research, several events of the EuropaCat research conference were used), the analysis of textual and absolute frequencies of occurrence makes it possible to follow the "life cycle" of each term-like phrase at the quantitative level (term usage increasing, decreasing and so on). That gives a unique capability to discover research trends and new concepts in the subject field by registering changes in terminology usage in the most rapidly developing areas of research. Moreover, similar dynamics of change over time for different terms often indicates an associative linkage between them (e.g. between a new process and a newly developed catalyst or methodology).<ref name="GusevAnExp12">{{cite journal |title=An express analysis of the term vocabulary of a subject area: The dynamics of change over time |journal=Automatic Documentation and Mathematical Linguistics |author=Gusev, V.D.; Salomatina, N.V.; Kuzmin, A.O.; Parmon, V.N. |volume=46 |issue=1 |pages=1–7 |year=2012 |doi=10.3103/S0005105512010025}}</ref> Indicator words or phrases such as "for the first time," "unique," and "distinctive feature" may also be used to detect things like new recipes or catalyst compositions for the explored process.
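The "life cycle" tracking described above amounts to following a phrase's frequency across the time-ordered collections <math>T_1, \ldots, T_m</math>. A minimal sketch with invented counts (the per-year figures below are placeholders, not data from the study):

```python
from collections import Counter

# Hypothetical per-event term counts for a conference series (the T_i in the text)
collections_by_year = {
    2005: Counter({"hydrodesulfurization": 12, "photocatalysis": 3}),
    2009: Counter({"hydrodesulfurization": 10, "photocatalysis": 15}),
    2013: Counter({"hydrodesulfurization": 7, "photocatalysis": 34}),
}

def term_trajectory(term):
    """Absolute frequency of a term-like phrase in each time slice, oldest first."""
    return [counts[term] for _, counts in sorted(collections_by_year.items())]
```

A monotonically rising trajectory flags an emerging concept; parallel trajectories for two phrases hint at the associative linkage mentioned above.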
The usage of terminology spectra for information retrieval will be the subject of our subsequent publications.
==Declarations==
===Author's contributions===
BA contributed to software development and architecture. AK conceived of the project and the tasks to be solved. AK and LI designed and performed the experiments, tested the applications and offered feedback as chemical experts. NS and VG were responsible for the L-gram analysis algorithm and scientific feedback. VP conceived and coordinated the study. All authors contributed to the scientific and methodological progress of this project. All authors read and approved the final manuscript.
===Acknowledgements===
Financial assistance provided by Russian Academy of Sciences Project No. V.46.4.4 is gratefully acknowledged.
===Competing interests===
The authors declare that they have no competing interests.
===Open access===
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
===Additional files===
'''Additional file 1.''' The detailed example of PDF transformation with terminology analysis performed by experts and by automatic analysis: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM1_ESM.pdf 13321_2016_136_MOESM1_ESM.pdf]
'''Additional file 2.''' OSCAR4 tokenizer modification: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM2_ESM.pdf 13321_2016_136_MOESM2_ESM.pdf]
'''Additional file 3.''' List of excluded words from general English Corncob-Lowercase list: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM3_ESM.pdf 13321_2016_136_MOESM3_ESM.pdf]
'''Additional file 4.''' List of stop words used: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM4_ESM.pdf 13321_2016_136_MOESM4_ESM.pdf]
'''Additional file 5.''' List of stable isotopes: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM5_ESM.pdf 13321_2016_136_MOESM5_ESM.pdf]
'''Additional file 6.''' List of chemical element symbols: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM6_ESM.pdf 13321_2016_136_MOESM6_ESM.pdf]


'''Additional file 7.''' List of measurement units: [https://static-content.springer.com/esm/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_MOESM7_ESM.pdf 13321_2016_136_MOESM7_ESM.pdf]


==References==


==Notes==
This presentation is faithful to the original, with only a few minor changes to presentation. In some cases important information was missing from the references, and that information was added. Numerous grammar errors were also corrected throughout the entire text. Finally, the original document on SpringerOpen includes a reference that doesn't clearly get placed inline. It's assumed the final citation from Gusev ''et al.'' was meant to be placed in the last paragraph, which is where we have put it.


<!--Place all category tags here-->
[[Category:LIMSwiki journal articles (added in 2016)‎]]
[[Category:LIMSwiki journal articles (all)‎]]
[[Category:LIMSwiki journal articles (with rendered math)]]
[[Category:LIMSwiki journal articles on chemical informatics]]
[[Category:LIMSwiki journal articles on software]]



Abstract

Background: This study seeks to develop, test and assess a methodology for automatic extraction of a complete set of ‘term-like phrases’ and to create a terminology spectrum from a collection of natural language PDF documents in the field of chemistry. The definition of ‘term-like phrases’ is one or more consecutive words and/or alphanumeric string combinations with unchanged spelling which convey specific scientific meanings. A terminology spectrum for a natural language document is an indexed list of tagged entities including: recognized general scientific concepts, terms linked to existing thesauri, names of chemical substances/reactions and term-like phrases. The retrieval routine is based on n-gram textual analysis with sequential execution of various ‘accept and reject’ rules, taking into account morphological and structural information.

Results: The assessment of the retrieval process, expressed quantitatively with precision (P), recall (R) and F1-measure values calculated from a limited set of documents manually processed by professional chemical scientists (the full set of text abstracts belonging to five EuropaCat events was processed), has proved the effectiveness of the developed approach. The term-like phrase parsing efficiency is quantified with precision (P = 0.53), recall (R = 0.71) and F1-measure (F1 = 0.61) values.

Conclusion: The paper suggests using such terminology spectra to perform various types of textual analysis across document collections. This sort of terminology spectrum may be successfully employed for text information retrieval, for reference database development, for analyzing research trends in subject fields and for assessing similarity between documents.

Fig0.5 Alperin JofCheminformatics2016 8.gif

Keywords: Terminology spectrum, natural language text analysis, n-gram analysis, term-like phrases retrieval, text information retrieval

Background

The current situation in chemistry, as in any other field of natural science, is characterized by substantial growth in the number of natural language texts (research papers, conference proceedings, patents, etc.), which remain the most important sources of scientific knowledge and experimental data, of information about modern research trends and of the terminology used in the subject areas of science. This greatly increases the value of such powerful information systems as Scopus®, SciFinder®, and Reaxys®, which are capable of handling large text document databases, especially those fitted with advanced text information retrieval capabilities. In fact, both the efficiency and productivity of modern scientific research in chemistry depend strongly on the quality and completeness of its information support, which is oriented primarily toward advanced and flexible reference search, and toward discovering and analyzing text information to afford the most relevant answers to user questions (substances, reactions, relevant patents or journal articles). The main ideas and developments in information retrieval methods coupled with techniques of full text analysis are now well described and examined.[1]

In conventional information systems, the majority of text information retrieval and discovery methods are based on specific sets of pre-defined document metadata, e.g. keywords or indexes of terms characterizing text content. User queries are converted, using an index, into information requests expressed as combinations of Boolean terms, bringing into play the vector space and term weights. Probabilistic approaches may also be employed to take into account such features as term distribution, co-occurrence information and relationships derived from information retrieval thesauri (IRT), and to include them in the analytic process. Such indexes have primarily had to be produced and updated manually by trained experts, but the possibility of automated index development now attracts closer attention.

It is assumed that the structural foundation of any scientific text is its terminology, which may, in principle, be represented by an advanced IRT. However, limitations inherent in conventional IRTs lead to difficulties in applying them in practical text analysis procedures. Typically, such thesauri are made manually in a very labor-intensive process and often are constructed to reflect general terminology only. Terms from thesauri originally represent a formally written description of scientific concepts and definitions, which may not exactly match the real usage and spelling found in scientific texts. Moreover, a thesaurus developed for one type of text may be less efficient or not applicable when used with another. A good example is the IUPAC Gold Book compendium of chemical nomenclature, terminology, units and definition recommendations.[2] Terminology drafted by IUPAC experts spans a wide range of chemistry but does not describe any field in detail and represents only a well-established upper level of scientific terminology. In summary, IRT-based text analysis alone is unable to solve the problem of the variability of scientific texts written in natural languages, because the accuracy of matching thesaurus terms with real text phrases leaves much to be desired.

It should also be noted that the language of science evolves faster than general natural language, especially in chemistry and molecular biology. Thus, the analysis of the terminology of a subject text collection should be done automatically, using both primitive extraction and sophisticated knowledge-based parsing. Only automated data analysis can process and reveal the variety of term-like word combinations in the constantly changing world of scientific publications. Automated parsing and analysis of document collections or isolated documents for term-like phrases can also help to discover the various contexts in which the same scientific terminology is used in different publications, or even in different parts of the same publication.

There is nothing new in the idea of automated term retrieval. Typically, the terminology analysis of text content is focused on recognition of chemical entities and on automatic keyphrase extraction aimed at providing a limited set of keywords which might characterize and classify the document as a whole. Two main strategies are usually applied: machine learning and the usage of various dictionaries with automated selection rules (heuristics) coupled with calculated features[3] such as TF-IDF.[4][5] Keyphrase retrieval procedures therefore typically involve the following stages: initial text pre-processing; selecting keyphrase candidates; applying rules to each candidate; and compiling a list of keyphrases.[6] A few existing systems have been analyzed in terms of the precision (P), recall (R) and F1-score attainable on existing keyphrase extraction datasets. For such well-known systems as Wingnus, Sztergak, and KP-Miner, these values are reported as P = 0.34–0.40, R = 0.11–0.14, and F1 = 0.17–0.20.[6] Open-Source Chemistry Analysis Routines (OSCAR4)[7] and ChemicalTagger[8] may also be mentioned as NLP tools for the recognition of named chemical entities and for parsing and tagging the language of text publications in chemistry.
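TF-IDF, named above as a commonly calculated feature, can be written compactly. A minimal sketch (the corpus layout here is hypothetical, and real systems apply additional normalization and smoothing):

```python
from math import log

def tf_idf(phrase, doc_counts, corpus):
    """TF-IDF of a candidate phrase.

    doc_counts: phrase->count dict for one document.
    corpus: list of such dicts, one per document (doc_counts among them).
    """
    tf = doc_counts.get(phrase, 0) / max(1, sum(doc_counts.values()))
    df = sum(1 for d in corpus if phrase in d)        # document frequency
    idf = log(len(corpus) / df) if df else 0.0
    return tf * idf
```

Phrases that are frequent in one document but rare across the collection score highest, which is what makes TF-IDF a useful keyphrase-candidate feature.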

However, the above-mentioned keyphrase extraction approaches have some inherent shortcomings, owing to a significant number of cases where a limited set of automatically selected top-ranked keyphrases does not properly describe the document in detail (e.g., a paper may contain the description of a specific catalyst preparation procedure that is not the main subject of the paper). It may also be seen from the aforementioned values of P, R and F1 that in many cases the extracted keyphrases do not adequately match the keyphrases selected by experts. Exact matching of keyphrases is a rather rare event, partially due to the difficulties of taking into account nearly similar phrases, for instance, semantically similar phrases. On the other hand, even though widely used n-gram analysis can build a full spectrum of the token sequences present in a text, it may also produce a great deal of noise, making the results difficult to use. Some attempts have been made to take into account the semantic similarity of n-grams and to differentiate between rubbish and plausible keyphrase candidates.[9][10]

The problem of automatic recognition of scientific terms in natural language texts has been explored over recent decades.[11] That research has shown that taking linguistic information into account may improve term extraction efficiency. Information about the grammatical structure of multi-word scientific terms, their text variants, and the context of their usage may be represented as a set of lexico-syntactic patterns. For instance, values of P, R and F-measure equal to 73.1, 53.6 and 61.8 percent, respectively, were obtained for term extraction from scientific texts (in Russian only) on computer science and physics.[12]

A "terminology spectrum" of a natural language publication may be defined as an indexed list of tagged token sequences with calculated weights, such as recognized general scientific notions, terms linked to existing thesauri, names of chemical entities and "term-like phrases." Term-like phrases are not exactly keyphrases or terms in the usual sense (as published in thesauri). Such term-like phrases are defined here as one or more consecutive tokens (represented by words and/or alphanumeric string combinations) which convey specific scientific meaning, with spelling and context unchanged from the real text document. For instance, a term-like phrase may look similar to a specific generally used term but with different spelling or word order, reflecting the usage of the term in a different context in a natural language environment. Consequently, term-like phrases may describe real text content and the essence of the real processes that the scientific research handles, which makes the analysis of such phrases extremely useful. This sort of terminology spectrum of a natural language publication may be considered a kind of knowledge representation of a text and may be successfully employed in various information retrieval strategies, text analysis and reference systems.[13]

The present work aims to develop and test a methodology for automated retrieval of a full terminology spectrum from any natural language chemical text collection in PDF format, with term-like phrase selection being the central part of the procedure. The retrieval routine is based on n-gram text analysis with sequential execution of a complex grouping of "accept" and "reject" rules, taking into account morphological and structural information. The term "n-gram" denotes here a text string or a sequence of n consecutive words or tokens present in a text. The numerical assessment of the efficiency of the automated term-like phrase retrieval process performed in this paper is calculated by comparing automatically extracted term-like phrases with those manually selected by experts.
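The n-gram definition above translates directly into code. A minimal sketch of extracting all n-grams up to level seven (the maximum level stored by the system described later):

```python
def ngrams(tokens, max_n=7):
    """All token n-grams of length 1..max_n, in text order.

    The system described in the article stores nested n-gram
    structures up to level seven, hence the default.
    """
    out = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            out.append(tuple(tokens[i:i + n]))
    return out
```

For a sentence of k tokens this yields k unigrams, k−1 bigrams, and so on; the accept/reject rules then prune this full spectrum down to term-like phrases.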

Methods

Text collection used for experiments

Chemical catalysis is a foundation of the chemical industry and represents a very complex field of scientific and technological research, encompassing chemistry, various subject fields of physics, chemical engineering, materials science and much more. One of the most representative research conferences in catalysis is the European Congress on Catalysis, or EuropaCat, which has been chosen as a source of scientific texts covering a wide range of research themes. A set of abstracts of the EuropaCat conferences of 2013, 2011, 2009, 2007, and 2005 (about 6,000 documents from all five Congress events) has been used for textual analysis in the present study. All abstracts are in PDF format.

General description of terminology spectrum retrieval process

The developed system of terminology spectrum analysis consists of the following sequentially running procedures or steps, as depicted in Fig. 1.

Fig1 Alperin JofCheminformatics2016 8.gif

Fig. 1 General scheme of the terminology spectrum building process with term-like phrases retrieval

The server side of the terminology spectrum analysis system runs on the Java SE 6 platform, and the client is a PHP web application used to view texts and the results of terminology analysis. To store all data collected in the terminology retrieval process, the cross-platform document-oriented database MongoDB is used.[14] The choice in favor of MongoDB was driven by the need to process nested n-gram structures up to level seven.
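The nested n-gram structures mentioned above suggest a document shape along the following lines. The field names are purely illustrative assumptions for this sketch, not the article's actual MongoDB schema:

```python
# Hypothetical shape of one stored n-gram record (illustrative field names only)
ngram_doc = {
    "text": "toluene adsorption capacity",
    "level": 3,                              # n-gram length, up to 7
    "tokens": [
        {"word": "toluene", "pos": "NN", "oscar": "CM"},
        {"word": "adsorption", "pos": "NN", "oscar": "ONT"},
        {"word": "capacity", "pos": "NN", "oscar": None},
    ],
    "tags": {"term_like": True, "general_term": False},
    "freq": {"doc": 4, "collection": 17},    # textual / absolute frequencies
}
```

A document store fits this naturally because each record nests per-token morphological and chemical annotations under the phrase itself, without a fixed relational schema.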

The main stages and analytic methods involved in the process are discussed in the following sections.

Text materials conversion with PdfTextStream library

Scientific texts are mainly published in PDF format, which typically does not contain any information about document structure and is therefore not suitable for immediate text analysis. Thus, a document first has to be preprocessed by converting the PDF file into text format and analyzing its structure (highlighting titles, authors, headings, references, etc.) with the aim of making the text suitable for further content information retrieval (see Fig. 2). The following steps are used with the PdfTextStream library[15] (stages 1–2 in Fig. 1) to make such a PDF transformation (for a detailed example see Additional file 1):

Fig2 Alperin JofCheminformatics2016 8.gif

Fig. 2 An example of PDF-to-text transformation

1. Isolate text blocks which have the same formatting (e.g., bold, underline, etc.).

2. Remove empty blocks and merge blocks located on the same text row.

3. Analyze the document structure by classifying each block as containing information about the publication title, headings, authors, organizations, e-mails, references or content. To perform this analysis, a set of special taggers has been developed which are executed sequentially to analyze and tag each text block. Taggers utilize such features as the position of the first and last rows of a text block, its text formatting, the position of the block on a page, etc. All developed taggers have been adjusted to handle each conference event individually.

4. Filter text blocks to remove unclassified ones, for instance, those situated before the publication title, because such blocks typically contain useless and already known information about a conference or journal.

5. Unify special symbols (such as variants of the dash, hyphen, and quote characters) and remove space characters placed before brackets in crystal index notation, etc. Regular expressions are used.
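Step 5 can be sketched with regular expressions. The exact character lists used by the authors are not given, so the dash and quote sets below are assumptions:

```python
import re

DASHES = "\u2012\u2013\u2014\u2212"   # figure dash, en dash, em dash, minus sign
QUOTES = {"\u2018": "'", "\u2019": "'", "\u201c": '"', "\u201d": '"'}

def unify_symbols(text):
    """Normalize dash/quote variants; drop spaces before crystal-index brackets."""
    text = re.sub("[%s]" % DASHES, "-", text)
    for fancy, plain in QUOTES.items():
        text = text.replace(fancy, plain)
    # e.g. "Pt (111)" -> "Pt(111)": join a 3- or 4-digit index to the symbol
    text = re.sub(r"(?<=[A-Za-z])\s+\((\d{3,4})\)", r"(\1)", text)
    return text
```

Unification matters downstream: without it, the same crystal facet or hyphenated term would surface as several distinct n-grams.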

Text pre-processing

The text pre-processing stage (step three in Fig. 1) is used to transform a text document obtained from stages one and two into a unified structured format with markup. During this stage the text is split into individual words and sentences (tokenization), followed by a morphological analysis that includes highlighting objects such as formulas and chemical entities, removing unnecessary words and meaningless combinations of symbols, and recognizing general English words and tokens with special meaning (units, stable isotopes, acronyms, etc.). The result of this stage is a fully marked-up structured text to be stored in the database. The following steps are involved in the text pre-processing stage.

Tokenization

A tokenizer from the OSCAR4 library is used for splitting a text into words, phrases and other meaningful elements. The tokenizer has been adapted for better handling of chemical texts.

The present study established that the original OSCAR4 tokenizer had, in view of our needs, some shortcomings. The first issue was the separation of tokens at a hyphen "-", which often led to mistakes in recognizing compound terms. To overcome this issue, the parts of the source code responsible for splitting tokens at hyphens were commented out (see Additional file 2). The next problem was that some complex tokens representing various chemical compositions were treated by the tokenizer as sequences of tokens (see Fig. 3). In such cases it was necessary to combine those isolated tokens into an integrated one. The modified tokenizing procedure now merges tandem tokens separated by either the "/" or ":" character, provided that they are marked by the OSCAR4 tag CM or incorporate a chemical element symbol. Additionally, tokens that look like "number %" and are situated at the beginning of such a phrase describing a chemical composition are merged into the integral token too (see Fig. 3).

Fig3 Alperin JofCheminformatics2016 8.gif

Fig. 3 An example of the tokenization process. Frames outline the results of modified OSCAR4 tokenizer, additional outer frames isolate tokens describing a chemical composition (possessing the tag "COMP").

An example of the work of the modified tokenizer is shown in Fig. 3. Blue frames hold the tokens identified by the modified OSCAR4 tokenizer. Additional red frames outline tokens which are combined into integral ones. Such tokens are marked with the isolated tag COMP. This tag is used by the accept rule ChemUnigramRule to identify one-word n-grams describing chemical compositions.
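The merging behavior described for the modified tokenizer can be approximated as follows. This is a simplified sketch: `is_chemical` stands in for the OSCAR4 CM-tag/element-symbol check, and the "number %" prefix merge is omitted:

```python
def merge_composition(tokens, is_chemical):
    """Merge runs like ['Pt', '/', 'Al2O3'] into one composition token.

    is_chemical(tok) is a placeholder predicate for "tagged CM by OSCAR4
    or containing a chemical element symbol".
    """
    merged, i = [], 0
    while i < len(tokens):
        tok = tokens[i]
        last = tok  # last plain component, used for the chemical check
        while (i + 2 < len(tokens) and tokens[i + 1] in ("/", ":")
               and is_chemical(last) and is_chemical(tokens[i + 2])):
            tok += tokens[i + 1] + tokens[i + 2]
            last = tokens[i + 2]
            i += 2
        merged.append(tok)
        i += 1
    return merged
```

The merged token would then carry the COMP tag used by the ChemUnigramRule accept rule described above.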

Then the position of each token in the text is determined. Splitting the series of tokens into sentences finalizes the tokenization process; this is realized with the help of the WordToSentenceAnnotator routine of the Stanford CoreNLP library.[16][17]

Morphological analysis and labeling tokens with their POS tags

Morphological analysis (the Stanford CoreNLP library[18] is used) maps each word to a set of part-of-speech tags (the Penn Treebank Tag Set[19] by Stanford CoreNLP is used). Typical tags used in the research are: NN (plural NNS) — noun; VB — verb; JJ — adjective; CD — cardinal number, etc. For full information about the POS tags used by the terminology spectrum building procedure, see Table 4 (later in the paper).
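POS tags of this kind are typically consumed by accept rules later in the pipeline. As a hypothetical illustration (not one of the article's actual rules), an n-gram might be kept only if its tag sequence looks like a noun phrase:

```python
def noun_phrase_like(pos_tags):
    """Toy accept rule: adjectives/nouns only, ending in a noun.

    A common noun-phrase shape; the article's real rule set is larger
    and this pattern is only an assumed example.
    """
    if not pos_tags or pos_tags[-1] not in ("NN", "NNS"):
        return False
    return all(t in ("JJ", "NN", "NNS") for t in pos_tags)
```

Under this toy rule, "catalytic/JJ activity/NN" passes while "shows/VBZ activity/NN" is rejected.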

Lemmatization

Lemmatization is the process of grouping together different inflected word forms so they can be treated as a single item. In the present work, however, lemmatization is only used to replace nouns in the plural form with their lemmas. Preliminary experiments demonstrated that additional lemmatization is not helpful and leads to a significant loss of meaningful information (for example, "reforming process" leads to the "reform" and "process" lemmas, with the loss of the name of a very important modern industrial chemical process in refining).
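The plural-only lemmatization policy can be sketched as a POS-gated rule. The suffix rules and irregular-plural map below are stand-ins for a real morphological lexicon, not the Stanford CoreNLP implementation:

```python
# Tiny stand-in for a real irregular-plural lexicon
IRREGULAR = {"analyses": "analysis", "spectra": "spectrum", "indices": "index"}

def lemmatize_plural(word, pos):
    """Reduce plural nouns (NNS) to their lemma; leave everything else alone."""
    if pos != "NNS":            # only plural nouns are reduced, per the article
        return word
    if word in IRREGULAR:
        return IRREGULAR[word]
    if word.endswith("ies"):
        return word[:-3] + "y"
    if word.endswith(("ses", "xes", "ches")):
        return word[:-2]
    if word.endswith("s"):
        return word[:-1]
    return word
```

Gating on the NNS tag is what prevents the "reforming" → "reform" loss described above: a VBG token is passed through untouched.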

Recognition of names of chemical entities

Meta-information about the names of chemical entities is very important in various term-like phrase retrieval strategies. The open source OSCAR4 (Open Source Chemistry Analysis Routines)[7][20] software package is applied for selection and semantic annotation of chemical entities across a text. Among the variety of tags and attributes utilized by the OSCAR4 routine, only the following are used in the present study:

1. CM — chemical term (chemical name, formula or acronym);

2. RN — reaction (for example, epoxidation, dehydrogenation, hydrolysis, etc.);

3. ONT — ontology term (for example, glass, adsorption, cation, etc.).

When a token is part of a recognized chemical entity, the token gets the same OSCAR4 tag as the whole entity.

Recognition of tokens with special meaning

A significant part of the text pre-processing stage is the selection of individual tokens that are words of general English and the recognition of various meaningful text strings, namely: general scientific terms (actually performed at the final terminology spectrum building stage but described here for convenience); tokens denoting chemical elements, stable isotopes and measurement units; and tokens which cannot be part of any term in any way. This part of the work is performed using specially developed dictionaries, described in detail in Table 1.

{| class="wikitable"
|+ Table 1. Developed/modified dictionaries used for recognition of general English words, general chemical science terms and tokens with special meaning
! Dictionary/Used for !! Description !! Reference !! Examples
|-
| General chemical science terms: selection of general terms (chemical and from related fields of physics, mathematics …)
| ~7,500 general scientific terms in chemistry, physics and mathematics; the IUPAC Compendium is used: http://goldbook.iupac.org/
| IUPAC Compendium of Chemical Terminology (Gold Book)
| Naphthenes, solvation energy, osmotic pressure, reaction dynamics …
|-
| General English words dictionary: selection of general English words
| ~58,000 general English words, based on the Corncob Lowercase Dictionary modified by us for the stated goals; 566 words often used in scientific terminology were excluded
| Modified Corncob Lowercase list of more than 57,000 English words: http://ru.scribd.com/doc/147594864/ (see Additional file 3 for excluded words)
| Abbreviate, academic, accelerate … Excluded: abrasion, absorption, aerosol …
|-
| Stop list: filtering tokens which are not part of terms in any way
| ~2,060 tokens; the list contains words, abbreviations and so on which cannot be incorporated into any term-like phrase
| Proprietary design (see Additional file 4)
| e.g., de, ca., fig., al., co-exist, et, etc., i.e., ltd …
|-
| Stable isotopes: filtering n-grams containing digits
| ~250 isotopes, based on The Berkeley Laboratory Isotopes Project's isotopes database
| Proprietary design, based on The Berkeley Laboratory Isotopes Project's DB: http://ie.lbl.gov/education/isotopes.htm (see Additional file 5)
| 1H, 2H, 3He, 4He, 6Li, 7Li …
|-
| Chemical element signs: filtering n-grams containing digits
| ~126 chemical elements, based on the periodic table
| Proprietary design, based on the periodic table (see Additional file 6)
| H, He, Li, Be, B, C, N, O, F …
|-
| Measurement units: filtering n-grams containing units of measure
| ~100 records at present, partially based on the IUPAC Gold Book
| Proprietary design, partially based on http://goldbook.iupac.org/ (see Additional file 7)
| (a.u.), (ev), a.u, °C, ppm, kV, mol, g−1, ml−1, gcat, gcat h …
|}

Some extra explanation needs to be given on the general English dictionary, the stop list dictionary and the procedure of recognition of general scientific terms.

More than 560 words either found in scientific terminology (for instance: "acid", "alcohol", "aldehyde", "alloy", "aniline", etc.) or occurring in composite terms (for example, "abundant" may be part of the term "most abundant reactive intermediates") were excluded from the original version of the Corncob Lowercase Dictionary.

The IUPAC Compendium of Chemical Terminology (the only well-known and time-proven dictionary) is used as a source of general chemistry terms. To find the best way to match an n-gram to a scientific term from the compendium, a number of experiments have been performed which resulted in the following criteria:

1. N-gram is considered a general scientific term if all n-gram tokens are the words of a certain IUPAC Gold Book term, regardless of their order; and

2. If (n − 1) of n-gram tokens coincide with the (n − 1) words of an IUPAC Gold Book term, and the remaining word is among other terms in the dictionary, then the n-gram is considered a general scientific term too.

Some examples may be given. The n-gram "RADIAL CONCENTRATION GRADIENT" is a general scientific term because the phrase "concentration gradient" is in the compendium and the word "radial" is part of the term "radial development." The n-gram "CONTENT CATALYTIC ACTIVITY" is a general term because the term "catalytic activity content" is present in the compendium and differs from the n-gram only by word order. The n-gram "TOLUENE ADSORPTION CAPACITY" is not considered a general term, despite the fact that two words coincide with the term "adsorption capacity," because the remaining word "TOLUENE" is special and is not found in the compendium. The n-gram "COBALT ACETATE DECOMPOSITION" is not considered a general term either, as only the term "decomposition" may be found.
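The two criteria can be implemented as bag-of-words checks against the compendium. A sketch reproducing the examples above (`gold_terms` stands in for the lower-cased Gold Book entries; the real matching code is not given in the article):

```python
def is_general_term(ngram_words, gold_terms):
    """Criterion 1: the n-gram's words equal some term's words, any order.
    Criterion 2: n-1 words match a term and the rest occur in other terms."""
    gold_bags = [frozenset(t.split()) for t in gold_terms]
    gold_vocab = set().union(*gold_bags) if gold_bags else set()
    bag = set(w.lower() for w in ngram_words)
    for term_bag in gold_bags:
        if bag == term_bag:                                   # criterion 1
            return True
        if len(bag & term_bag) >= len(bag) - 1 and (bag - term_bag) <= gold_vocab:
            return True                                       # criterion 2
    return False
```

With a toy dictionary containing "concentration gradient", "radial development", "catalytic activity content", "adsorption capacity" and "decomposition", this sketch reproduces all four worked examples from the text.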

The final comment concerns the stop list dictionary, which at first glance may look like a set of arbitrary words. Actually, it is based on a series of observations of the term-like phrases wrongly identified by an earlier version of the terminology analysis system.

Strict filtering

The last step in the text pre-processing stage is strict filtering, developed to remove unnecessary words and meaningless combinations of symbols. If at least one n-gram token is labeled with the strict filtering tag ("rubbish" : "true"), then that n-gram is not considered a term-like phrase. At this stage, certain character sequences — as described by the filtering rules (Table 2) and not exempted by the list of exceptions (Table 3) — are looked for. These are successions of digits, special symbols, measurement units, symbols of chemical elements, brackets and so on. Custom regular expressions and the standard dictionaries described in Table 1 are used for this procedure. A general scheme of strict filtering parsing is illustrated in Fig. 4.

Table 2. Rules for strict filtering procedure
No. Rule Examples
1 SpecialSymbolsRule
 
True if a token contains at least one of the special symbols different from: . -,/: () [] + = @ ®
SIZE(**), SELECTIVITY%, NIMG_650, H2S↔35SCAT, 1AUDAE_AM, ΔGADS, H0 ≦−8.2
2 StopListRule
 
True if a token is in the stop list (Table 1)
LITERATURE, VIEWPOINT, PERCENT, PRESENT, IMPORTANCE, FUNDAMENTAL, CONCLUSION, TYPICALLY, EXAMPLE, INTRODUCTION
Rules of regular expressions:
 
True, if a token satisfies at least one of the regular expressions from the following list...
3 4DigitRule
 
True if a token contains four or more digits in succession
FQM-3994, RYC-2008-03387, 20000H-1, MAT2010-21147, CO(0001)-CARBIDE, CO(111)/CO(0001), RU(0001) ELECTRODE
4 3DigitRule
 
True if a token contains three digits in succession
215KMTA, 220ML, 148H-1, CU2O(111), AU{111}-CEO2{100}, MGO/AG(100)
5 2DigitRule
 
True if a token begins with one or two digits
12C16O-13C16O, 31P{1H}, 2-PROPANOL, 2-METHYL-1-BUTENE, 3-METHYL-1,3-BUTADIENE, 15 %H3PW12O40/TIO2
6 UnitsRule
 
True if a token ends with a string from the dictionary of measurement units (Table 1)
KJMOL-1, MMOL.MIN-1, KJ.MOL-1, G.GZEOLITE-1.H-1, CM3.MIN-1.G-1
Table 3. Exceptions for strict filtering procedure
No. Exception Examples
1 Facet_Index_4digits
 
Token denotes the substance containing a four-digit facet index. The list of chemical element signs is used (Table 1).
terms: RU(0001); CO(0001)-CARBIDE; α-FE2O3(0001)
 
rubbish: HPG1800B; RYC-2008-03387; 20000H-1
2 Miller_Index_3digits
 
Token denotes the substance containing a three-digit crystallographic Miller index. The list of chemical element signs is used.
terms: CEO2(111); PT(111); AU{111}-CEO2{100}; (NI,AL)(111); AL2O3/NIAL(110)
 
rubbish: R873; 50WX8-100; 270-470OC
3 Substances_3digits
 
Token denotes a chemical containing three digits in succession. The list of chemical element signs and regular expressions such as EL/\{\d{3}\} are used.
terms: 15N218O; H235S; H218O-SSITKA; H216O/H218O
 
rubbish: FA100; TSVET-500; CE-440
4 Isotopes
 
Token denotes an isotope. Stable isotopes and chemical elements signs lists are used (Table 1).
terms: 13C CP-MAS NMR; 12C16O-13C16O MIXTURE; 31P MAS NMR SPECTROSCOPY
 
rubbish: 04,21H; 11H; 11HV; 1 %18O2; -1H-1; 57CO
5 Substances_2digits
 
Token denotes a substance which begins with one or two digits.
terms: 5-PENTANEDIOL; 2-AMINOBENZENE-1,4-DICARBOXYLATE; 5-BROMO-3-(N,N-DIETHYLAMINO-ETHOXY)-2-METHYLINDOLE
 
rubbish: 2R,3S; 2LFH; 5NICZPOL; 1KPM; 4-CP
6 Catalysts
 
Token denotes a catalytic system which is a chemical composition with the "." character.
terms: 1.5AU/C; 1.0CUCOK/ZRO2; CE0.9PR0.1O2; CU0.2CO0.8FE2O4; MG3ZN3.-XFE0.5AL0.5; LAFE0.7NI0.3O3-Δ; CE0.8GD0.2O2-Δ; MN0.8ZR0.2
 
rubbish: VOL. %; (B)2.5 %; DISP.[%]
7 Comp
 
Token denotes the chemical or catalyst composition. Tag COMP is used.
terms: 20 %CU/ZNAL; 0.4 %PD/AL2O3; 4 %PT-4 %RE/TIO2; (5 %)PB(10 %)-SBA15
 
rubbish: 50 %AIR; 1.5 %WT; 0-2.5MOL %; CA.23 %
8 Cryst_hydrates
 
Tokens denote crystalline hydrates. Regular expressions such as *[A-Za-z].*H2O$ are used.
terms: AL(NO3)3*6H2O; FE2(SO4)3.9H2O; AUCL4(NH4)7[TI2(O2)2(CIT)(HCIT)]2.12H2O;
 
rubbish: 0.6 %H2O; 0.03 %C3H6; 0.06286*T;
9 SpatialDimension
 
Token denotes the 1-, 2- or 3-dimensional method or pattern.
terms: 2D-SAXS; 2D-GC; 1D-3D COPPER – OXIDE; 1D-STRUCTURE; 1D COPPER – OXIDE
 
rubbish: 12-MR; 1LATTICE; 16ACR; 60HPW
10 Names
 
Token denotes a proper name. A set of regular expressions is used for recognition.
terms: BRØNSTED ACID; BRӦNSTED BASIC SITE; MӦSSBAUER SPECTROSCOPY;
 
rubbish: L’ARGENTIЀRE; PROCESS’S
11 OscarTags
 
True if a token has any OSCAR tag and matches regular expressions such as \-[A-Za-z]{2}, \{, \[*[A-Za-z], etc.
terms: STEM-HAADF; L-CYSTINE; DI-TERT-BUTYLPEROXIDE;[AU(EN)2]2[CU(OX)2]3
 
rubbish: 128°- Y-ROTATED; π- BACKDONATION; CONVERSION(%);CU(1)MN; M1(2); ACTIVITY [2]

EL = designation of any chemical element; IS = designation of any stable isotope

Fig. 4 General scheme of strict filtering tagging

The following examples may be given to illustrate the decision-making process of defining a token as "valid" or "rubbish" (Fig. 5).


Fig. 5 Examples of strict filtering tagging

Summary of pre-processing stage

The final result of the text pre-processing stage is the marked and structured text with tagged tokens. These tags are then used by various rules for term-like phrase selection. As not all tags from OSCAR4 and the Penn Treebank tag set are needed, only a few of them are used in the term-like phrases retrieval procedure. The consolidated list of all tags that may be assigned to tokens at different steps of the text pre-processing stage is specified in Table 4.

Table 4. The consolidated list of all tags assigned to tokens at different steps of the text pre-processing stage; it is also indicated whether a tag is used in strict filtering or in the term-like phrases retrieval procedure with the help of POS-based rules.
Group of tags Tag Explanation Strict filtering Morphological pattern
POS JJ Adjective Yes (n-grams n > 1)
JJR Adjective, comparative Yes (n-grams n > 1)
VBG Verb, gerund or present participle Yes (n-grams n ≥ 1)
VBD Verb, past tense (includes the conditional form of the verb to be) Yes (n-grams n > 1)
VBN Verb, past participle Yes (n-grams n > 1)
NNP Proper Noun, singular Yes (n-grams n > 1)
NN Noun, singular or mass Yes (n-grams n ≥ 1)
NNPS Proper Noun, plural Yes (n-grams n ≥ 1)
NNS Noun, plural Yes (n-grams n ≥ 1)
IN Preposition or subordinating conjunction Yes (n-grams n > 1)
DT Determiner Yes (n-grams n > 1)
RB Adverb Yes (n-grams n > 2)
RBS Adverb, superlative Yes (n-grams n > 2)
FW Foreign word Yes (n-grams n > 1)
OSCAR CM Chemical matter Yes Yes (all n-grams)
ONT Ontological term Yes Yes (all n-grams)
Own tags COMP Chemical composition Yes (all n-grams)
rubbish Token for which strict filtering to be applied Yes Yes (all n-grams)
GCST General Chemistry Scientific Term Yes (all n-grams)

As an illustration of tag assignment, the following example may be given. Figure 6 shows an example sentence in which a few tokens have been tagged. For instance, the token 2.7 %CO/10.0 %H2O/He carries the tags (pos = "CD"; lemma = "2.7 %CO/10.0 %H2O/He"; oscar = "CM"; rubbish = "false"; exception = "comp"). Every token has at least two tags: pos (holding the part-of-speech information) and lemma (corresponding to the lemma of the token). In addition, some tokens related to chemistry (indicating chemical substances, formulas, reactions, etc.) have a tag oscar taking the value CM or ONT. Last but not least is the tag rubbish ("true" or "false"), marking tokens for which strict filtering is to be applied.
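The tag structure a single token carries after pre-processing can be sketched as a plain record. The field names mirror the example above, while the dictionary layout itself is an assumption for illustration.

```python
# One tagged token from the example sentence above
token = {
    "text": "2.7 %CO/10.0 %H2O/He",
    "pos": "CD",          # Penn Treebank part-of-speech tag
    "lemma": "2.7 %CO/10.0 %H2O/He",
    "oscar": "CM",        # OSCAR tag: chemical matter
    "rubbish": False,     # strict-filtering flag
    "exception": "comp",  # exempted as a chemical composition (Table 3)
}

def is_chemical(tok):
    # Tokens tagged CM or ONT are chemistry-related
    return tok.get("oscar") in {"CM", "ONT"}
```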

Fig. 6 An illustration of tag assignment to different tokens

N-grams spectrum retrieval procedure

As defined earlier in our study, the term "n-gram of length n" denotes a sequence of n consecutive tokens situated within the same sentence, with omission of useless tokens (at the moment, only definite/indefinite articles). The n-gram set is obtained by moving a window of n tokens through an entire sentence, token by token. This process is repeated for all sentences in the set of all texts.
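The sliding-window extraction just described can be sketched as follows; the function name and token representation are assumptions for illustration.

```python
# Useless tokens omitted before windowing (currently only articles)
ARTICLES = {"a", "an", "the"}

def ngrams(sentence_tokens, n):
    """Move a window of n tokens, token by token, through one sentence."""
    tokens = [t for t in sentence_tokens if t.lower() not in ARTICLES]
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
```

For the sentence fragment ["the", "radial", "concentration", "gradient"], the 2-gram set is [("radial", "concentration"), ("concentration", "gradient")]: the article is dropped before the window moves.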

For a set of texts, each n-gram may be characterized by its textual frequency of occurrence (the total number of the n-gram's occurrences within a single text) and by its absolute frequency of occurrence (the total number of occurrences across the whole set of texts). As a result, each n-gram may be described by a vector within a set of texts, enabling the development of additional procedures for n-gram filtering and text information analysis.

The full n-gram data set is redundant, which creates difficulties for analysis. For specific purposes, different filtration procedures are to be applied; for instance, threshold filtering based on the values of these frequencies may be used.
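The two frequencies and a simple threshold filter can be sketched as follows; the function and variable names are assumptions, not the authors' implementation.

```python
from collections import Counter

def frequencies(texts):
    """texts: list of documents, each a list of n-grams (tuples)."""
    textual = [Counter(doc) for doc in texts]  # per-text occurrence counts
    absolute = Counter()                       # counts over the whole set
    for c in textual:
        absolute.update(c)
    return textual, absolute

def threshold_filter(absolute, min_count=2):
    # Keep only n-grams whose absolute frequency reaches the threshold
    return {g for g, c in absolute.items() if c >= min_count}
```

The per-text counters together form the occurrence vector of an n-gram across the text set mentioned above.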

Module of terminology spectrum building

The final stage of the analysis is to distinguish, among the scores of n-grams, the term-like phrases, general chemistry scientific terms, names of chemical entities and useless n-grams. The calculation of textual and absolute frequencies of term occurrence completes the terminology spectrum building.

To select term-like n-grams, sets of accept and reject rules are applied. They are all based on the token tags assigned at previous steps and on the developed dictionaries (Table 1). The intention of each set of rules is to determine, by analyzing its structure, whether an n-gram of a given length is a term-like phrase or not. All rules are applied in a consecutive manner. If an n-gram conforms to an accept or reject rule in the rule sequence, the procedure stops, declaring the n-gram either a non-term-like or a term-like phrase, possibly having a special meaning (e.g., a general chemistry scientific term or chemical entity). If no rule is applicable, the n-gram is considered a term-like phrase too. There are a few general rules that can be used for analysis of n-grams of any length. There are also tailored sets of rules for 1-grams (Table 5), 2-grams (Table 6) and for long (n > 2)-grams (Table 7).
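The sequential accept/reject evaluation can be sketched as a rule chain. Each rule returns "accept", "reject", or None when not applicable, and an n-gram matched by no rule defaults to a term-like phrase; the example rules here are simplified assumptions.

```python
def classify(ngram, rules):
    """Apply accept/reject rules consecutively; stop at the first verdict."""
    for rule in rules:
        verdict = rule(ngram)
        if verdict is not None:
            return verdict
    return "accept"  # no rule applied: considered term-like

def general_chem_term_rule(ng):   # accept rule (toy dictionary lookup)
    return "accept" if ng == ("concentration", "gradient") else None

def short_tokens_rule(ng):        # reject rule: only very short tokens
    return "reject" if all(len(t) < 3 for t in ng) else None

rules = [general_chem_term_rule, short_tokens_rule]
```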

Table 5. Accept and reject rules succession for unigrams (1-grams)
Description Examples
GeneralChemTermRule (accept rule)
 
True if a 1-gram is a general chemistry scientific term
StrictFilteringTagRule (reject rule)
 
True if a 1-gram consists of a token with the strict filtering tag rubbish:true
ShortTokensRule (reject rule)
 
True if a 1-gram consists of a short token of length less than three characters; this rule excludes noise existing in documents, such as axis labels and so on.
UnitsRule (reject rule)
 
True if a 1-gram contains a string being a measurement unit from the dictionary (Table 1)
ChemUnigramRule (accept rule)
 
True if a 1-gram is tagged by any OSCAR tag and by one of the following POS tags: FW, NNP, or tagged by tag COMP; selected unigrams are assumed and marked to have a chemical sense
Term-like: barium, phenanthrene, pentanol, xanes
GeneralEnglishDictRule (reject rule)
 
True if a 1-gram is in the General English Dictionary (Table 1)
Filtered: topography, paint, plateau, pool, searching, file, addenda, improvement, theme …
 
Term-like: hydrocalcite, acetylacetone, cracking, ageing
UnigramPOSRule (reject rule)
 
True if a 1-gram is not a noun or a gerund; term-like 1-gram must be tagged with the following POS tags: VBG, NN, NNPS, NNS
Filtered: schematized, suddenly, skeletal, behind
 
Term-like: ethylene, hydrocalcite, leaching, 12n-decylhexadecanamide, sulfamethoxazole, anchoring
UnigramAddRules (reject rules)
 
Set of regular expressions to filter unigrams denoting various ions, signs, captions, etc.
Filtered: M(O2), GA15.6, PW91, V2.1, G(D), TI(V), PD(I), PT0, P(X), BA2+, CE(3+), cm3, CH3, AA, Cu2+, Mo6+, Et-CP, GC–MS, Zn-Al
Table 6. Reject and accept rules consecution for bigrams (2-grams)
Description Examples
GeneralChemTermRule (accept rule)
 
Same rule as for 1-grams
StrictFilteringTagRule (reject rule)
 
Same rule as for 1-grams
ShortTokensRule (reject rule)
 
True if a 2-gram consists only of short tokens of length less than three characters
IdenticalTokensRule (reject rule)
 
True if a 2-gram contains at least two identical tokens
UnitsRule (reject rule)
 
True if any token in a 2-gram ends with a measurement unit string from the dictionary (Table 1); it should be noted that a measurement unit may consist of several tokens; for example, "g/h" consists of three tokens ["g", "/", "h"]
PPM C7H14, 70ML MIN-1, CM3MIN-1 H2, MIN-1 FLOW, H-1 GAS, PPM N2O/AR, ML G-1MIN-1, MOL-1 HYDROLYSIS, PPM NOX/5%O2/N2
BiGramPOSRule (accept rule with exception)
 
True if the first token is tagged with one of the following POS tags: JJ, JJR, FW, VBG, VBD, VBN, NN, NNP, NNPS, NNS; and the second token is tagged with one of: FW, VBG, NN, NNP, NNPS, NNS
 
Exception: the following tag combinations are not allowed: (VBG, VBG), (VBG, FW) and (NNP, FW)
Term-like: Andronov bifurcation, Na2CO3 impregnation, nickel catalyst; supported MgO, anchored lysine, stirred glass; carbonaceous particle, temperature-programmed adsorption, Fischer–Tropsch catalyst; in situ EXAF, UV–VIS spectroscopy, Raman spectroscopy
 
Filtered due to exception: involving reforming, reforming minimizing, using in, Shimada etc.
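The BiGramPOSRule above, with its exception list, can be sketched as follows. The tag sets are taken from the table; the function shape is an assumption.

```python
# Allowed POS tags for the first and second token of a 2-gram (Table 6)
FIRST_POS = {"JJ", "JJR", "FW", "VBG", "VBD", "VBN", "NN", "NNP", "NNPS", "NNS"}
SECOND_POS = {"FW", "VBG", "NN", "NNP", "NNPS", "NNS"}
# Exception: combinations the rule must not accept
FORBIDDEN = {("VBG", "VBG"), ("VBG", "FW"), ("NNP", "FW")}

def bigram_pos_rule(pos1, pos2):
    if (pos1, pos2) in FORBIDDEN:
        return None               # exception applies: no accept verdict
    if pos1 in FIRST_POS and pos2 in SECOND_POS:
        return "accept"
    return None
```

For instance, a noun-noun bigram such as "nickel catalyst" (NN, NN) is accepted, while a gerund pair like "involving reforming" (VBG, VBG) falls through due to the exception.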
Table 7. Reject and accept rules consecution for n-grams (n ≥ 3)
Description Examples
GeneralChemTermRule (accept rule)
 
Same rule as for 2-grams
StrictFilteringTagRule (reject rule)
 
Same rule as for 2-grams
ShortTokensRule (reject rule)
 
Same rule as for 2-grams
IdenticalTokensRule (reject rule)
 
Same rule as for 2-grams
UnitsRule (reject rule)
 
Same rule as for 2-grams
ManyGramPOSRule (accept rule with exception)
 
True if the first token is tagged with one of the following POS tags (noun, gerund, adjective, adverb or participle): NN, NNP, VBG, VBD, VBN, JJ, JJR, RB, RBS, FW; any middle token is tagged with one of: NN, NNP, VBG, VBD, VBN, JJ, JJR, RB, RBS, FW, plus preposition or determiner (IN, DT); and the last token is tagged with one of: VBG, NN, NNP, NNPS, NNS (gerund or noun)
 
Exception: the following tag combinations, describing phrases which look as if torn from their context, are not allowed: (VBG, NN), (VBG, IN), (VBN, NN), (VBN, JJ)
Term-like: X-ray fluorescence spectrometer; Brønsted basic site; Pd(110) surface oscillation; doping CsPW with platinum; catalyzed N2O decomposition; crystalline phase transition; catalyzed oxidation of NO; complete photoreduction of Pd(II); propagating thermosynthesis; reforming of the biomass; drying inside the microscope column
 
Filtered due to exception: used during steam reforming; catalyzed by metalloporphyrin; investigated by XRD; using atomic absorption

The following examples may be given to illustrate the decision-making process of whether an n-gram may be considered a term-like phrase or not (Fig. 7).


Fig. 7 An illustration of the term-like phrases retrieval procedure with POS-based accept rules

The next step in the terminology analysis stage is the tagging of term-like phrases to describe their roles as entities having a special meaning. The following tags exist at the moment: term-like phrase, general chemistry term, and chemical entity. The final step is an additional filtration procedure aimed at reducing the number of term-like phrases by removing short term-like phrases which are parts of longer n-grams. The criterion for applying this filter is equality of the absolute frequencies of occurrence of the short and long n-grams.
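This final filtration step can be sketched as follows: a shorter phrase is dropped when it occurs inside a longer phrase and both have the same absolute frequency. The names are assumptions for illustration.

```python
def subsumption_filter(phrase_freq):
    """phrase_freq maps a phrase (tuple of tokens) to its absolute frequency."""
    def contains(longer, shorter):
        m = len(shorter)
        return any(longer[i:i + m] == shorter
                   for i in range(len(longer) - m + 1))
    return {
        p: f for p, f in phrase_freq.items()
        # Drop p if some strictly longer phrase contains it with equal frequency
        if not any(len(q) > len(p) and g == f and contains(q, p)
                   for q, g in phrase_freq.items())
    }
```

For example, if "concentration gradient" and "radial concentration gradient" both occur exactly three times, only the longer phrase survives; a standalone phrase with a different frequency is kept.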

Results and discussion

An example of automatic term-like phrases retrieval is shown in Fig. 8 with some term-like and filtered-off n-grams highlighted. For the filtered-off n-grams the reject rules used are given as well. For the detailed results of terminology analysis for one preselected Congress abstract see the Additional file 1.

Fig. 8 An example of terminology analysis results (with some term-like and filtered-off n-grams highlighted)

To understand the overall performance of the term-like phrases retrieval routine, the full set of text abstracts belonging to five EuropaCat events was processed, and the obtained data were statistically analyzed (see Table 8). It may be seen that the term-like phrases retrieval procedure reduces the total number of available n-grams to a range of 1÷3 percent, depending on the n-gram length n.

Table 8. Consolidated table of experimental results on terminology analysis of EuropaCat abstracts set
 
Number of texts: 6387; total amount of tokens: 5,148,124 (EuropaCat 2013, 2011, 2009, 2007, 2005)
n N—total number of n-grams NTL—total number of term-like phrases (% of N) NGS—total number of general scientific terms (% of NTL) NCOMP—total number of phrases with tag COMP (% of NTL) NCM—total number of phrases with OSCAR tag CM (% of NTL)
1 ~5.15 × 106 68,811 (~1.3 %) 574 (0.8 %) 8776 (12.7 %) 40,354 (58.6 %)
2 ~4.94 × 106 135,002 (~2.7 %) 11,263 (8.3 %) 5199 (3.9 %) 52,641 (38.9 %)
3 ~4.74 × 106 130,706 (~2.8 %) 1031 (0.8 %) 5194 (4 %) 64,101 (49.0 %)
4 ~4.54 × 106 118,893 (~2.6 %) 41 (0.03 %) 4064 (3.4 %) 56,047 (47.1 %)
5 ~4.35 × 106 94,546 (~2.2 %) 5 (0.005 %) 3390 (3.6 %) 43,550 (46.0 %)
6 ~4.16 × 106 58,775 (~1.4 %) - 2469 (4.2 %) 29,992 (51.0 %)
7 ~3.97 × 106 46,224 (~1.2 %) - 2403 (5.2 %) 26,030 (56.3 %)

Table 8 demonstrates that the maximum absolute number of term-like n-grams corresponds to the value n = 2 (bigrams), which accords well with the well-known average term length in scientific texts. On the other hand, term indexes are often limited to the n-gram lengths n = 1, 2, 3. The limit n = 3 looks good enough for general science vocabulary (see the NGS value in Table 8, the number of general scientific terms found), but it is not sufficient for a specialized thesaurus (e.g., for catalysis). The number of term-like n-grams with the COMP tag is also large for different n, including n > 3. In summary, long-length term retrieval is the distinctive feature of the suggested approach.

It is also seen from Table 8 that nearly half of the total number of 1-grams have the OSCAR tag CM. It should also be noted that if a plausible term-like phrase has at least one token with an OSCAR tag, the whole phrase is considered by the system to have the same tag. This may explain the close values (in percentages) for phrases of different lengths.

To assess the overall effectiveness of the term-like phrases retrieval procedure, it seems necessary to quantitatively answer the question of what precision and recall values can be achieved. To do that, a preliminary comparison between automatically and manually selected term-like phrases was performed with the help of two professional chemical scientists, who picked out the term-like phrases from a limited set of a few arbitrarily selected documents. To include a phrase in the list of term-like phrases, agreement between both experts was required. It should be noted here that the experts were not required to follow the same procedure of moving a window of n tokens through an entire sentence that is used for n-gram isolation. Moreover, the experts took into account and analyzed the information contained in some simple grammatical structures typical for scientific texts, such as structures with enumeration and so on. This leads to additional differences between the sets of expert-selected and automatically selected term-like phrases (for an example, see Fig. 9).

Fig. 9 An example of terminology analysis results (with some automatically retrieved and expert-selected term-like phrases)

The data obtained through expert terminological analysis were compared with the automatically retrieved terms, and the precision (P), recall (R) and F-measure values were calculated. In this paper, the precision[21] indicates the fraction of automatically retrieved term-like phrases which coincide with expert-selected ones. Recall is the fraction of the experts' selected term-like phrases that are retrieved by the system.

Both precision and recall may therefore be used as measures of the relevance and efficiency of the term-like phrase retrieval process. In simple terms, high precision values mean that substantially more term-like phrases are selected than erroneous phrases, while high recall values mean that most of the term-like phrases are selected from the text.

Very often these two measures (P and R) are used together to calculate a single value called the F1-measure[22] to provide an overall system performance characteristic. The F1-measure is the harmonic mean of P and R, where F1 reaches 1 at its best and 0 at its worst: F1 = 2PR/(P + R).
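These measures can be checked numerically against the whole-set counts reported in Table 9 (655 expert phrases, 872 retrieved, 466 coincidences); the function name is an assumption.

```python
def prf1(n_expert, n_retrieved, n_common):
    p = n_common / n_retrieved      # precision
    r = n_common / n_expert         # recall
    f1 = 2 * p * r / (p + r)        # harmonic mean of P and R
    return p, r, f1

p, r, f1 = prf1(655, 872, 466)
# rounds to P = 0.53, R = 0.71, F1 = 0.61, as reported in Table 9
```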

The results on the number of expert selected and automatically retrieved term-like phrases, number of coincidences and calculated P, R and F1 values are represented in Table 9. For the detailed results of terminology analysis for one preselected text, see the Additional file 1.

Table 9. Precision, Recall and F-measure estimated from the data obtained for five arbitrarily selected texts
 
No. 1—Design, synthesis and catalysis of recoverable catalysts assembled in emulsion and…, C. Li et al. (2005)
No. 2—Understanding reaction pathways on model catalyst surfaces, F. Gao et al. (2007)
No. 3—Solid acid catalysts Based on H3PW12O40 Heteropoly Acid: Acid and Catalytic Pr…, A.M. Alsalme et al. (2011)
No. 4—Advantages of using TOF–SIMS method in surface studies of heterogeneous…, M.I Szynkowska et al. (2005)
No. 5—ECS-Materials: synthesis and characterization of a new class of crystalline…, G. Bellussi et al. (2007)
Text no. Number of terms retrieved by two experts Number of term-like phrases retrieved by the system Number of coincidences Precision Recall F1-measure
No. 1 164 221 135 0.61 0.82 0.70
No. 2 155 174 96 0.55 0.62 0.58
No. 3 170 172 113 0.66 0.66 0.66
No. 4 68 119 40 0.34 0.59 0.43
No. 5 125 215 106 0.50 0.85 0.63
P, R and F values calculated for the entire five-text set:
 
Unique expert term-like phrases: 655
Term-like n-grams: 872
Coincidences: 466
0.53 0.71 0.61

It may be concluded therefore that further improvements can be made with term-like phrase retrieval efficiency by bringing into consideration the knowledge of typical grammatical structures used in scientific texts[12][23] as well as numeric values of both textual and absolute frequencies of n-gram occurrences.

It is also seen that the first version of the terminology analysis system delivers sufficiently high precision and recall values in the term-like phrases retrieval process. Some comparison can be made with the values P = 0.34÷0.40, R = 0.11÷0.14, F1 = 0.17÷0.20 reported[6] for such well-known keyphrase retrieval systems as Wingnus, Sztergak and KP-Miner, although such a comparison may not be entirely valid given the different goals of the systems (term-like phrase vs. keyphrase retrieval) being compared.

Conclusions

As mentioned in the introduction, scientific publications are still the most important sources of scientific knowledge, and new methods aimed at retrieving meaningful information from natural language documents are particularly welcome today. The structural foundation of any such publication is widely accepted terms and term-like phrases conveying useful facts and shades of meaning of a document's content.

The present study is aimed at developing, testing and assessing a methodology for automated extraction of a full terminology spectrum from natural language chemical PDF documents, while retrieving as many term-like phrases as possible. Term-like phrases are defined as one or more consecutive words and/or alphanumeric string combinations which convey specific scientific meaning, with spelling and context unchanged from the real text. The terminology spectrum of a natural language publication is defined as an indexed list of tagged entities: recognized general science notions, terms linked to existing thesauri, names of chemical substances/reactions and term-like phrases. The retrieval routine is based on n-gram text analysis with sequential application of complex accept and reject rules. The main distinctive feature of the suggested approach is that it picks out all parsable term-like phrases, rather than selecting a limited set of keyphrases meeting predefined criteria. The next step is to build an extensive term index of a text collection. The developed approach neither takes into account semantic similarity nor differentiates between similar term-like phrases (distinct evaluation metrics may be employed to do this at later stages). The approach, which includes a number of sequentially running procedures, appears to show good results in terminology spectrum retrieval as compared with well-known keyphrase retrieval systems.[6] The term-like phrase parsing efficiency is quantified with precision (P = 0.53), recall (R = 0.71) and F1-measure (F1 = 0.61) values calculated from a limited set of documents manually processed by professional chemical scientists.

Terminology spectrum retrieval may be used to perform various types of text analysis across document collections. We believe that this sort of terminology spectrum may be successfully employed for text information retrieval and for reference database development. For example, it may be used to develop thesauri, to analyze research trends in subject fields of research by registering changes in terminology, to derive inference rules in order to understand particular text content, to look for the similarity between documents by comparing their terminology spectrum within an appropriate vector space, and to develop methods to automatically map a document to a reference database field.

For instance, if a set contains a collection of texts from different time periods (in our research, several different events of the EuropaCat research conference were used), the analysis of textual and absolute frequencies of occurrence makes it possible to follow the "life cycle" of each term-like phrase on a quantitative level (increasing or decreasing term usage, and so on). This gives a unique capability to discover research trends and new concepts in the subject field by registering changes in terminology usage in the most rapidly developing areas of research. Moreover, similar dynamics of change over time for different terms often indicates the existence of an associative linkage between them (e.g., between a new process and a developed catalyst or methodology).[24] Indicator words or phrases such as "for the first time," "unique," and "distinctive feature" may also be used to detect things like new recipes or a catalyst composition for the explored process.

Usage of terminology spectrum for information retrieval will be the subject of our subsequent publications.

Declarations

Author's contributions

BA contributed to software development and architecture. AK conceived of the project and the tasks to be solved. AK and LI designed and performed the experiments, tested the applications and offered feedback as chemical experts. NS and VG were responsible for the L-gram analysis algorithm and scientific feedback. VP conceived and coordinated the study. All authors contributed to the scientific and methodological progress of this project. All authors read and approved the final manuscript.

Acknowledgements

Financial assistance provided by Russian Academy of Science Project No. V.46.4.4 is gratefully acknowledged.

Competing interests

The authors declare that they have no competing interests.

Open access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Additional files

Additional file 1. The detailed example of PDF transformation with terminology analysis performed by experts and by automatic analysis: 13321_2016_136_MOESM1_ESM.pdf

Additional file 2. OSCAR4 tokenizer modification: 13321_2016_136_MOESM2_ESM.pdf

Additional file 3. List of excluded words from general English Corncob-Lowercase list: 13321_2016_136_MOESM3_ESM.pdf

Additional file 4. List of stop words used: 13321_2016_136_MOESM4_ESM.pdf

Additional file 5. List of stable isotopes: 13321_2016_136_MOESM5_ESM.pdf

Additional file 6. List of chemical element symbols: 13321_2016_136_MOESM6_ESM.pdf

Additional file 7. List of measurement units: 13321_2016_136_MOESM7_ESM.pdf

References

  1. Salton, G. (1991). "Developments in Automatic Text Retrieval". Science 253 (5023): 974–980. doi:10.1126/science.253.5023.974. PMID 17775340. 
  2. "IUPAC Gold Book". International Union of Pure and Applied Chemistry. 2014. http://goldbook.iupac.org/. 
  3. Hussey, R.; Williams, S.; Mitchell, R. (2012). "Automatic keyphrase extraction: A comparison of methods". eKNOW, Proceedings of The Fourth International Conference on Information Process, and Knowledge Management: 18–23. ISBN 9781612081816. 
  4. Eltyeb, S.; Salim, N. (2014). "Chemical named entities recognition: a review on approaches and applications". Journal of Cheminformatics 6: 17. doi:10.1186/1758-2946-6-17. PMC PMC4022577. PMID 24834132. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4022577. 
  5. Gurulingappa, H.; Mudi, A.; Toldo, L.; Hofmann-Apitius, M.; Bhate, J. (2013). "Challenges in mining the literature for chemical information". RSC Advances 3 (37): 16194–16211. doi:10.1039/C3RA40787J. 
  6. Kim, S.N.; Medelyan, O.; Kan, M.-Y.; Baldwin, T. (2013). "Automatic keyphrase extraction from scientific articles". Language Resources and Evaluation 47 (3): 723–742. doi:10.1007/s10579-012-9210-3. 
  7. Jessop, D.M.; Adams, S.E.; Willighagen, E.L.; Hawizy, L.; Murray-Rust, P. (2011). "OSCAR4: A flexible architecture for chemical text-mining". Journal of Cheminformatics 3: 41. doi:10.1186/1758-2946-3-41. PMC PMC3205045. PMID 21999457. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3205045. 
  8. Hawizy, L.; Jessop, D.M.; Adams, N.; Murray-Rust, P. (2011). "ChemicalTagger: A tool for semantic text-mining in chemistry". Journal of Cheminformatics 3: 17. doi:10.1186/1758-2946-3-17. PMC PMC3117806. PMID 21575201. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3117806. 
  9. "Re-examining automatic keyphrase extraction approaches in scientific articles". MWE '09 Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications: 9–16. 2009. ISBN 9781932432602. 
  10. "Approximate matching for evaluating keyphrase extraction". RANLP '09: International Conference on Recent Advances in Natural Language Processing: 484–489. 2009. 
  11. Castellvi, M.T.C.; Bagot, R.E.; Palatresi, J.V. (2001). "Automatic term detection: A review of current systems". In Bourigault, D.; Jacquemin, C.; L'Homme, M.-C.. Recent Advances in Computational Terminology. John Benjamins Publishing Company. pp. 53–87. doi:10.1075/nlp.2.04cab. ISBN 9789027298164. 
  12. Bolshakova, E.I.; Efremova, N.E. (2015). "A Heuristic Strategy for Extracting Terms from Scientific Texts". In Khachay, M.Y.; Konstantinova, N.; Panchenko, A.; Ignatov, D.I.; Labunets, V.G.. Analysis of Images, Social Networks and Texts. Springer International Publishing. pp. 297–307. doi:10.1007/978-3-319-26123-2_29. ISBN 9783319261232. 
  13. Salton, G.; Buckley, C. (1991). "Global Text Matching for Information Retrieval". Science 253 (5023): 1012–1015. doi:10.1126/science.253.5023.1012. PMID 17775345. 
  14. Chodorow, K.; Dirolf, M. (2010). MongoDB: The Definitive Guide. O'Reilly Media. ISBN 9781449381561. 
  15. "PDFxStream". Snowtide Informatics Systems, Inc. 2016. https://www.snowtide.com/. 
  16. "Stanford CoreNLP – A suite of core NLP tools". Github. 2016. http://stanfordnlp.github.io/CoreNLP/. 
  17. "The Stanford CoreNLP Natural Language Processing Toolkit". Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations: 55–60. 2014. doi:10.3115/v1/P14-5010. 
  18. Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y. (2003). "Feature-rich part-of-speech tagging with a cyclic dependency network". NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology 1: 173–180. doi:10.3115/1073445.1073478. 
  19. Taylor, A.; Marcus, M.; Santorini, B. (2003). "The Penn Treebank: An Overview". In Abeillé, A.. Text, Speech and Language Technology. 20. Springer Netherlands. pp. 5–22. doi:10.1007/978-94-010-0201-1_1. ISBN 978-94-010-0201-1. 
  20. "Semantic enrichment of journal articles using chemical named entity recognition". ACL '07 Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions: 45–48. 2007. 
  21. "Precision and recall". Wikimedia Foundation. https://en.wikipedia.org/wiki/Precision_and_recall. 
  22. "F1 score". Wikimedia Foundation. https://en.wikipedia.org/wiki/F1_score. 
  23. Bolshakova, E.; Efremova, N.; Noskov, A. (2010). "LSPL-patterns as a tool for information extraction from natural language texts". In Markov, K.; Ryazanov, V.; Velychko, V.; Aslanyan, L.. New Trends in Classification and Data Mining. ITHEA. pp. 110–118. ISBN 9789541600429. 
  24. Gusev, V.D.; Salomatina, N.V.; Kuzmin, A.O.; Parmon, V.N. (2012). "An express analysis of the term vocabulary of a subject area: The dynamics of change over time". Automatic Documentation and Mathematical Linguistics 46 (1): 1–7. doi:10.3103/S0005105512010025. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation. In some cases important information was missing from the references, and that information was added. Numerous grammar errors were also corrected throughout the entire text. Finally, the original document on SpringerOpen includes a reference that doesn't clearly get placed inline. It's assumed the final citation from Gusev et al. was meant to be placed in the last paragraph, which is where we have put it.