AWS for Industries
How to find key geoscience terms in text without mastering NLP using Amazon Comprehend
Geosemantics is the application of linguistic techniques to geoscience. Geoscientists often have access to more reports than they can reasonably read, so they are commonly challenged to filter through reports to find relevant information (for example, this report about the Wolfcamp and Bone Spring shale plays). Traditional Natural Language Processing (NLP) techniques such as Named Entity Recognition (NER) must be trained to identify geologically relevant terms.
Amazon Comprehend provides a suite of NLP tools and pre-trained models for common tasks such as NER. While these models are trained for general use, they are designed to be extensible to domain-specific text.
In this post, we build a custom entity recognizer using Comprehend through the AWS SDK for Python (Boto3).
Stratigraphic named entity recognition
Stratigraphy is a branch of geology concerned with the study of rock layers and layering. Stratigraphic intervals are the result of global and local conditions. These intervals have known rock properties that are used to reduce uncertainty about the subsurface. Exploration geology reports provide information about the stratigraphy, biological markers (biostratigraphy), and associated geological age (chronostratigraphy).
Approach to building an NER
Named Entity Recognition models are built with supervised machine learning. To train a model, the scientist needs training text and annotations for the terms of interest, often provided as a key-value entity list. The British Geological Survey (BGS) built an NER model using Stanford’s CoreNLP system. Their training and testing data is publicly available.
We want to train the NER model to classify entities within the text so we can use this information for further analysis. The BGS entity list identifies general geological terms (labeled LEXICON), chronostratigraphic terms, and biostratigraphic terms (labeled BIOZONE). For this analysis, we focused only on chronostratigraphic terms.
The training document contains lines from a geological report. For example:
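A hypothetical line illustrating the format (the BGS file contains text from real reports):

```
The Toarcian mudstones are overlain by Aalenian sandstones and limestones.
```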
The entity list file is a structured list of entities including:
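For Amazon Comprehend, the entity list is a CSV file with Text and Type column headers. A minimal sketch with representative chronostratigraphic terms (the ERA type label is our own choice):

```
Text,Type
Jurassic,ERA
Cretaceous,ERA
Toarcian,ERA
```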
Comprehend requires the data in this specific format. You need three documents: the training text, the testing text, and the entity list. We will use the BGS’s text and entity labels, formatted for use with Comprehend.
Walkthrough
The code and support files are available here. You can also use this AWS CloudFormation template to create all the resources needed for this project in your account. Alternatively, you can download and unzip the dataset onto your computer from the geosemantics_comprehend_blog_data.zip file.
In this example, we create a custom entity recognizer to extract information about geologic eras. To train a custom entity recognition model, you can provide data to Amazon Comprehend in one of two ways: annotations or an entity list. In this example, we use the entity list method. We removed variations of the names such as “Age”, “Epoch”, and “Eon.” BGS provides data segmented into training and test sets. These files, plus the entity list, must be uploaded to Amazon S3, and Comprehend must be granted permission to access the S3 bucket through an IAM role.
Using a Python Jupyter Notebook in Amazon SageMaker, execute the code below to create an Amazon Comprehend custom entity training job. This snippet assumes that the training, test, and entity documents are in the same SageMaker folder as the Jupyter Notebook.
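A minimal sketch of that cell (the bucket name, IAM role ARN, and ERA type label are placeholders; substitute your own):

```python
import boto3

# Placeholders: replace with your own bucket, role, and file names
bucket = "geosemantics-blog-data"
data_access_role_arn = "arn:aws:iam::111122223333:role/ComprehendDataAccessRole"
files = ["train.txt", "test.txt", "entity_list.csv"]

s3 = boto3.client("s3")
comprehend = boto3.client("comprehend")

# 1. Copy the training, test, and entity documents to Amazon S3
for name in files:
    s3.upload_file(name, bucket, name)

# 2. Define the training job, pointing Comprehend at the S3 inputs,
# 3. and start training the custom entity recognizer
response = comprehend.create_entity_recognizer(
    RecognizerName="geologic-era-recognizer",
    LanguageCode="en",
    DataAccessRoleArn=data_access_role_arn,
    InputDataConfig={
        "EntityTypes": [{"Type": "ERA"}],  # the label we chose for chronostratigraphic terms
        "Documents": {"S3Uri": f"s3://{bucket}/train.txt"},
        "EntityList": {"S3Uri": f"s3://{bucket}/entity_list.csv"},
    },
)
recognizer_arn = response["EntityRecognizerArn"]
```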
This cell executes three key tasks.
- It copies the training, test, and entity documents to Amazon S3.
- It defines an Amazon Comprehend custom entity extraction training job and directs Amazon Comprehend to read the training and entity documents from Amazon S3.
- It begins the training job, which takes 20-25 minutes to train the model.
You can check the status of the training job every 60 seconds using the code below. Once the status is “TRAINED” you can proceed to the next step.
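A minimal polling sketch, reusing the comprehend client and recognizer_arn from the previous cell:

```python
import time

# Poll the recognizer status every 60 seconds until training finishes
while True:
    status = comprehend.describe_entity_recognizer(
        EntityRecognizerArn=recognizer_arn
    )["EntityRecognizerProperties"]["Status"]
    print(status)
    if status in ("TRAINED", "IN_ERROR"):
        break
    time.sleep(60)
```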
Optional:
With the code below, Amazon Simple Notification Service (Amazon SNS) can send you a text message once the training is complete.
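A sketch with a hypothetical topic name and phone number:

```python
sns = boto3.client("sns")

# Hypothetical topic and phone number; replace with your own
topic_arn = sns.create_topic(Name="comprehend-training-complete")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550123")

# Publish once the polling loop above reports TRAINED
sns.publish(
    TopicArn=topic_arn,
    Message="Amazon Comprehend custom entity training is complete.",
)
```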
Once the training is complete, Amazon Comprehend reports accuracy metrics for the entities in the training dataset. The precision indicates that 99.17% of the entities the model identified were correct; the recall, the proportion of actual entities that the model correctly identified, was 98.36%.
Metric | Value (%) | Description
Precision | 99.17 | Positive predictive value
Recall | 98.36 | True positive rate
F-1 Score | 98.76 | Harmonic mean of the precision and recall
Test your model
To test the model, we create a detection job. We provide a few parameters including the format of the test document and where to save the results. When the detection job is complete, Comprehend will save the results as JSON files in your output S3 bucket path.
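A minimal sketch of the detection job, consistent with the placeholder names used earlier:

```python
# Run the trained recognizer against the test document
detection = comprehend.start_entities_detection_job(
    JobName="geologic-era-detection",
    EntityRecognizerArn=recognizer_arn,
    LanguageCode="en",
    DataAccessRoleArn=data_access_role_arn,
    InputDataConfig={
        "S3Uri": f"s3://{bucket}/test.txt",
        "InputFormat": "ONE_DOC_PER_LINE",  # treat each line as a separate document
    },
    OutputDataConfig={"S3Uri": f"s3://{bucket}/output/"},
)
job_id = detection["JobId"]
```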
As before, you can choose to receive an Amazon SNS message once the detection job is complete.
You might now notice that Amazon Comprehend has picked up additional words with varying spellings. This is how Comprehend differs from a simple text lookup: Comprehend uses a probabilistic model based on natural language processing to identify chronostratigraphic terms.
Example input (a hypothetical line for illustration):
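```
The Cretaceous mudstones overlie Jurassic sandstones.
```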
Example response (a sketch of the JSON output format, with offsets matching the hypothetical input above):
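```json
{
  "Entities": [
    {"BeginOffset": 4, "EndOffset": 14, "Score": 0.9983, "Text": "Cretaceous", "Type": "ERA"},
    {"BeginOffset": 33, "EndOffset": 41, "Score": 0.9917, "Text": "Jurassic", "Type": "ERA"}
  ],
  "File": "test.txt",
  "Line": 0
}
```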
The results show:
- Offset: BeginOffset and EndOffset give the positions, in characters from the start of the line, where the identified text begins and ends.
- Score: Comprehend’s confidence that the identified text is of the specified type, ranging from 0 to 1.
- Type: The type of the entity extracted based on the training model.
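As an illustration, here is a minimal sketch for reading the results, assuming you have downloaded the job’s output.tar.gz from S3 and extracted the JSON-lines file (named output) locally:

```python
import json

# Each line of the extracted output file is one JSON document
with open("output") as f:
    for line in f:
        doc = json.loads(line)
        for entity in doc["Entities"]:
            print(doc["File"], doc["Line"], entity["Text"],
                  entity["Type"], round(entity["Score"], 3))
```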
Extension
Amazon Comprehend can be used for batch inference, as we have done here, or for real-time inference as described in this blog post.
Conclusion
In this post, we built a custom entity recognition model to identify geologic eras without using NLP frameworks. The Amazon Comprehend response provides metadata that can be used for filtering geoscience documentation. Combined with a search index like Amazon Elasticsearch Service or Apache Solr, these results could substantially reduce the time geoscientists spend searching for data in reports.
Future projects could extend this to additional entity types or apply text extraction methods to convert PDF reports into tabular data with the appropriate metadata about the geological age. This model can scale to analyze documents of arbitrary length.
The workflow for this type of analysis is typically batch, but the model could be extended to provide near-real-time inference by deploying the model as an endpoint.
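A brief sketch of that extension (the endpoint name is hypothetical, and the endpoint takes several minutes to become active):

```python
# Deploy the trained recognizer behind a real-time endpoint
endpoint = comprehend.create_endpoint(
    EndpointName="geologic-era-endpoint",
    ModelArn=recognizer_arn,
    DesiredInferenceUnits=1,
)

# Once the endpoint status is IN_SERVICE, call it on single documents
result = comprehend.detect_entities(
    Text="The Cretaceous mudstones overlie Jurassic sandstones.",
    EndpointArn=endpoint["EndpointArn"],
)
print(result["Entities"])
```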
Try custom entities now from the Amazon Comprehend console and get detailed instructions in the Amazon Comprehend documentation. This solution is available in all Regions where Amazon Comprehend is available. Please refer to the AWS Region Table for more information.