target_extraction.allen.modules.target_position_weight package
Submodules
target_extraction.allen.modules.target_position_weight.relative_target_position_weight module
class target_extraction.allen.modules.target_position_weight.relative_target_position_weight.RelativeTargetPositionWeight(zero_target_word_weighting=False)[source]
Bases: target_extraction.allen.modules.target_position_weight.target_position_weight.TargetPositionWeight
The weighting performed here is the following:
$$1 - \frac{|\theta - i|}{n}$$
Where $i$ is the location of the token, $\theta$ is the location of the nearest target token (there can be more than one target token in the sentence if the target is a multi-word target), and $n$ is the token length of the text. The weight of the target tokens by default is $1$, thus target tokens are not down-weighted. This is the same weighting as equation 7 within Chen et al. 2017 and equation 2 in Zhao et al. 2019. A worked example of this weighting follows the parameter description below.
- Parameters
zero_target_word_weighting – If True, a weight of 0 is applied to all target words (the same as masking the target words). This is the same weighting function as in Zhang et al. 2019.
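As a minimal sketch of the weighting formula above (the sentence, target index, and distance values here are made up for illustration), each token's weight can be computed directly from its distance to the nearest target token:

    import torch

    # Hypothetical example: "the food was great" with the target "food"
    # at index 1; each value is the distance |theta - i| of a token to
    # the nearest target token, and n is the token length of the text.
    relative_distance = torch.tensor([1.0, 0.0, 1.0, 2.0])
    n = relative_distance.shape[0]

    weights = 1.0 - (relative_distance / n)
    # tensor([0.7500, 1.0000, 0.7500, 0.5000]) -- the target token keeps
    # a weight of 1; with zero_target_word_weighting=True its weight
    # would instead be set to 0.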
forward(targets_features, relative_target_positions, sequence_mask)[source]
- Parameters
targets_features (Tensor) – A tensor of shape (batch * num_targets, text_sequence_length, dim). This tensor will be returned weighted by the position of the tokens in the sequence with respect to the target tokens.
relative_target_positions (Tensor) – A tensor of shape (batch, num_targets, text_sequence_length). This tensor contains the position of each token relative to its associated target tokens in the sample.
sequence_mask (Tensor) – A tensor of shape (batch * num_targets, text_sequence_length). The mask determines which tokens are to be weighted based on their position in the sequence.
- Return type
Tuple[Tensor, Tensor]
- Returns
A tuple of two tensors: 1. A tensor of shape (batch * num_targets, text_sequence_length, dim), where the targets_features have been weighted based on each token's position relative to its sample's respective target token position. 2. A tensor of shape (batch * num_targets, text_sequence_length) representing the weights that the targets_features have been multiplied by to produce the first tensor in this tuple.
- Raises
ConfigurationError – If the first dimension of targets_features is not of size batch size * num targets.
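Below is a hypothetical usage sketch of forward. The shapes follow the parameter descriptions above, while the random values and the integer dtype of relative_target_positions are assumptions:

    import torch
    from target_extraction.allen.modules.target_position_weight.relative_target_position_weight import (
        RelativeTargetPositionWeight)

    batch, num_targets, seq_len, dim = 2, 2, 5, 10
    weighter = RelativeTargetPositionWeight()

    # Dummy inputs with the documented shapes (values are illustrative).
    targets_features = torch.rand(batch * num_targets, seq_len, dim)
    relative_target_positions = torch.randint(0, seq_len,
                                              (batch, num_targets, seq_len))
    sequence_mask = torch.ones(batch * num_targets, seq_len)

    weighted_features, weights = weighter(targets_features,
                                          relative_target_positions,
                                          sequence_mask)
    assert weighted_features.shape == (batch * num_targets, seq_len, dim)
    assert weights.shape == (batch * num_targets, seq_len)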
target_extraction.allen.modules.target_position_weight.target_position_weight module
class target_extraction.allen.modules.target_position_weight.target_position_weight.TargetPositionWeight[source]
Bases: torch.nn.modules.module.Module, allennlp.common.registrable.Registrable
A TargetPositionWeight is a Module that represents different methods that can weight a target sample's encoded text by the position the tokens take in the text with respect to the target tokens.
forward(targets_features, relative_target_positions, sequence_mask)[source]
- Parameters
targets_features (Tensor) – A tensor of shape (batch * num_targets, text_sequence_length, dim). This tensor will be returned weighted by the position of the tokens in the sequence with respect to the target tokens.
relative_target_positions (Tensor) – A tensor of shape (batch, num_targets, text_sequence_length). This tensor contains the position of each token relative to its associated target tokens in the sample.
sequence_mask (Tensor) – A tensor of shape (batch * num_targets, text_sequence_length). The mask determines which tokens are to be weighted based on their position in the sequence.
- Return type
Tuple[Tensor, Tensor]
- Returns
A tuple of two tensors: 1. A tensor of shape (batch * num_targets, text_sequence_length, dim), where the targets_features have been weighted based on each token's position relative to its sample's respective target token position. 2. A tensor of shape (batch * num_targets, text_sequence_length) representing the weights that the targets_features have been multiplied by to produce the first tensor in this tuple.
- Raises
ConfigurationError – If the first dimension of targets_features is not of size batch size * num targets.
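Because TargetPositionWeight is Registrable, new weighting schemes can be registered and then selected by name in configuration. A minimal sketch, assuming the standard AllenNLP Registrable API; the 'uniform' name and the class below are hypothetical, not part of the library:

    from typing import Tuple
    import torch
    from target_extraction.allen.modules.target_position_weight.target_position_weight import (
        TargetPositionWeight)

    @TargetPositionWeight.register('uniform')  # hypothetical registered name
    class UniformTargetPositionWeight(TargetPositionWeight):
        # Applies no positional down-weighting: every unmasked token
        # keeps a weight of 1.
        def forward(self, targets_features: torch.Tensor,
                    relative_target_positions: torch.Tensor,
                    sequence_mask: torch.Tensor
                    ) -> Tuple[torch.Tensor, torch.Tensor]:
            weights = sequence_mask.float()
            return targets_features * weights.unsqueeze(-1), weights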