target_extraction.allen.models package¶
Subpackages¶
- target_extraction.allen.models.target_sentiment package
- Submodules
- target_extraction.allen.models.target_sentiment.atae module
- target_extraction.allen.models.target_sentiment.in_context module
- target_extraction.allen.models.target_sentiment.interactive_attention_network module
- target_extraction.allen.models.target_sentiment.split_contexts module
- target_extraction.allen.models.target_sentiment.util module
- Module contents
Submodules¶
target_extraction.allen.models.target_tagger module¶
- class target_extraction.allen.models.target_tagger.TargetTagger(vocab, text_field_embedder, pos_tag_embedding=None, pos_tag_loss=None, label_namespace='labels', encoder=None, feedforward=None, label_encoding=None, crf=True, include_start_end_transitions=True, constrain_crf_decoding=None, calculate_span_f1=None, dropout=None, verbose_metrics=False, initializer=<allennlp.nn.initializers.InitializerApplicator object>, regularizer=None)[source]¶

Bases: `allennlp.models.model.Model`
The `TargetTagger` encodes a sequence of text with an optional `Seq2SeqEncoder`, then uses either a Conditional Random Field or a simple softmax model to predict a tag for each token in the sequence. This is in effect the same as the `CrfTagger`, with the following differences:

- It does not require a `Seq2SeqEncoder`.
- It does not require a `CRF` module; the simpler softmax over the logits can be used instead.
Parameters:

- vocab: `Vocabulary`, required. A Vocabulary, required in order to compute sizes for input/output projections.
- text_field_embedder: `TextFieldEmbedder`, required. Used to embed the `tokens` `TextField` we get as input to the model.
- pos_tag_embedding: `Embedding`, optional (default=`None`). Used to embed the `pos_tags` `SequenceLabelField` we get as input to the model.
- pos_tag_loss: `float`, optional (default=`None`). Whether to predict POS tags as an auxiliary loss. The float represents the amount by which to scale that loss within the overall loss function. The POS tags are predicted using a CRF if the main task uses a CRF; otherwise, like the main task, they are predicted by greedy decoding based on the softmax. NOTE: we always assume that the label encoding for POS tags is in BIO format.
- encoder: `Seq2SeqEncoder`, optional (default=`None`). The encoder that we will use in between embedding tokens and predicting output tags.
- label_namespace: `str`, optional (default=`labels`). This is needed to compute the SpanBasedF1Measure metric. Unless you did something unusual, the default value should be what you want.
- feedforward: `FeedForward`, optional (default=`None`). An optional feedforward layer to apply after the encoder.
- label_encoding: `str`, optional (default=`None`). Label encoding to use when calculating span F1 and constraining the CRF at decoding time. Valid options are "BIO", "BIOUL", "IOB1", "BMES". Required if `calculate_span_f1` or `constrain_crf_decoding` is true.
- crf: `bool`, optional (default=`True`). Whether to use a CRF; if not, the model just chooses the max label over the softmax (greedy decoding).
- include_start_end_transitions: `bool`, optional (default=`True`). Whether to include start and end transition parameters in the CRF.
- constrain_crf_decoding: `bool`, optional (default=`None`). If `True`, the CRF is constrained at decoding time to produce valid sequences of tags. If this is `True`, then `label_encoding` is required. If `None` and `label_encoding` is specified, this is set to `True`. If `None` and `label_encoding` is not specified, it defaults to `False`.
- calculate_span_f1: `bool`, optional (default=`None`). Calculate span-level F1 metrics during training. If this is `True`, then `label_encoding` is required. If `None` and `label_encoding` is specified, this is set to `True`. If `None` and `label_encoding` is not specified, it defaults to `False`.
- dropout: `float`, optional (default=`None`). Use [Variational Dropout](https://arxiv.org/abs/1512.05287) for sequences and normal dropout for non-sequences.
- verbose_metrics: `bool`, optional (default=`False`). If true, metrics will be returned per label class in addition to the overall statistics.
- initializer: `InitializerApplicator`, optional (default=`InitializerApplicator()`). Used to initialize the model parameters.
- regularizer: `RegularizerApplicator`, optional (default=`None`). If provided, will be used to calculate the regularization penalty during training.
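For orientation, a minimal construction sketch is given below. It is not taken from the package's docs: it assumes the AllenNLP 0.9-era API that this class builds on, and the toy vocabulary, 50-dimensional embeddings, and BiLSTM encoder are illustrative choices only.

```python
import torch
from allennlp.data import Vocabulary
from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper
from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder
from allennlp.modules.token_embedders import Embedding
from target_extraction.allen.models.target_tagger import TargetTagger

# Toy vocabulary: a padded "tokens" namespace and a non-padded BIO
# "labels" namespace (so "O" receives index 0).
vocab = Vocabulary()
for token in ["the", "laptop", "screen", "is", "great"]:
    vocab.add_token_to_namespace(token, namespace="tokens")
for label in ["O", "B", "I"]:
    vocab.add_token_to_namespace(label, namespace="labels")

# Embed single token ids, then encode the sequence with a BiLSTM.
embedder = BasicTextFieldEmbedder(
    {"tokens": Embedding(num_embeddings=vocab.get_vocab_size("tokens"),
                         embedding_dim=50)})
encoder = PytorchSeq2SeqWrapper(
    torch.nn.LSTM(50, 25, batch_first=True, bidirectional=True))

model = TargetTagger(vocab=vocab,
                     text_field_embedder=embedder,
                     encoder=encoder,
                     label_encoding="BIO",
                     crf=True,
                     calculate_span_f1=True)
```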
- forward(tokens, pos_tags=None, tags=None, metadata=None)[source]¶

Parameters:

- tokens: `Dict[str, torch.LongTensor]`, required. The output of `TextField.as_array()`, which should typically be passed directly to a `TextFieldEmbedder`. This output is a dictionary mapping keys to `TokenIndexer` tensors. At its most basic, using a `SingleIdTokenIndexer`, this is: `{"tokens": Tensor(batch_size, num_tokens)}`. This dictionary will have the same keys as were used for the `TokenIndexers` when you created the `TextField` representing your sequence. The dictionary is designed to be passed directly to a `TextFieldEmbedder`, which knows how to combine different word representations into a single vector per token in your input.
- pos_tags: `torch.LongTensor`, optional (default=`None`). A torch tensor representing the sequence of POS tags, of shape `(batch_size, num_tokens)`.
- tags: `torch.LongTensor`, optional (default=`None`). A torch tensor representing the sequence of integer gold class labels, of shape `(batch_size, num_tokens)`.
- metadata: `List[Dict[str, Any]]`, optional (default=`None`). Metadata containing the original words in the sentence to be tagged under a 'words' key, as well as the original text under a 'text' key.
An output dictionary consisting of:
- logits: `torch.FloatTensor`. The logits that are the output of the `tag_projection_layer`.
- class_probabilities: `torch.FloatTensor`. A tensor of shape `(batch_size, num_tokens, tag_vocab_size)` representing a distribution over the tag classes per word. NOTE: when using the CRF, the class with the highest probability will not necessarily be the predicted tag, as that tag might not be globally optimal for the sentence.
- mask: `torch.LongTensor`. The text field mask for the input tokens.
- tags: `List[List[int]]`. The predicted tags, from the Viterbi algorithm if a CRF is being used, otherwise from the max over the logits using the softmax approach.
- loss: `torch.FloatTensor`, optional. A scalar loss to be optimised. Only computed if gold label `tags` are provided.
- words: `List[str]`, optional. A list of tokens that were the original input into the model.
- text: `str`, optional. A string that was the original text that the tokens came from.

- Return type: `Dict[str, Tensor]`
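As an illustration, here is a hypothetical call on the toy `model` and `vocab` from the construction sketch above; the random token ids and all-`O` gold tags are made up, and in practice the ids would come from a `TextField` indexed with a `SingleIdTokenIndexer`:

```python
import torch

batch_size, num_tokens = 2, 5
# Random token ids under the "tokens" key, skipping the padding (0) and
# OOV (1) indices that the padded "tokens" namespace reserves.
tokens = {"tokens": torch.randint(2, vocab.get_vocab_size("tokens"),
                                  (batch_size, num_tokens))}
# Gold BIO labels; index 0 is "O" in the toy "labels" namespace.
tags = torch.zeros(batch_size, num_tokens, dtype=torch.long)

output = model(tokens=tokens, tags=tags)
print(output["tags"])  # predicted tag indices per sentence
print(output["loss"])  # scalar loss, present because gold tags were given
```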
- get_metrics(reset=False)[source]¶

Returns a dictionary of metrics. This method will be called by `allennlp.training.Trainer` in order to compute and use model metrics for early stopping and model serialization. We return an empty dictionary here rather than raising, as it is not required to implement metrics for a new model. A boolean `reset` parameter is passed, as frequently a metric accumulator will have some state that should be reset between epochs. This is also compatible with `Metric`s. Metrics should be populated during the call to `forward`, with the `Metric` handling the accumulation of the metric until this method is called.

- Return type: `Dict[str, float]`
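For example, a training loop would typically read and reset the accumulated metrics at the end of each epoch. A sketch, reusing the toy `model` above; the exact metric keys depend on the configuration:

```python
# After forward has been called with gold tags over an epoch's batches:
epoch_metrics = model.get_metrics(reset=True)
print(epoch_metrics)  # e.g. accuracy and, if enabled, span-based F1 scores
```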
- get_softmax_labels(class_probabilities, mask)[source]¶

This method has copied a large chunk of code from the `SimpleTagger.decode` method.

Parameters:

- class_probabilities: A tensor containing the softmax scores for the tags.
- mask: A tensor of 1's and 0's indicating whether a word exists.

Returns a list of lists, where each inner list contains integers representing the most likely tag label index based on the softmax scores. Only the tag label indices for words that exist, based on the mask provided, are returned.

- Return type: `List[List[int]]`
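The behaviour can be approximated with the short sketch below. It is an illustrative re-implementation, not the package's code, and it assumes the mask marks each sequence's real (non-padding) tokens:

```python
import torch
from typing import List

def softmax_labels_sketch(class_probabilities: torch.Tensor,
                          mask: torch.Tensor) -> List[List[int]]:
    # class_probabilities: (batch_size, num_tokens, tag_vocab_size)
    # mask: (batch_size, num_tokens), 1 where a word exists, 0 for padding
    predictions = class_probabilities.argmax(dim=-1)
    labels = []
    for row, row_mask in zip(predictions, mask):
        # Keep only the tag indices of words that exist according to the mask.
        labels.append(row[row_mask.bool()].tolist())
    return labels
```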