target_extraction.allen.modules.inter_target package

Submodules

target_extraction.allen.modules.inter_target.inter_target module

class target_extraction.allen.modules.inter_target.inter_target.InterTarget[source]

Bases: torch.nn.modules.module.Module, allennlp.common.registrable.Registrable

An InterTarget is a Module that takes as input a tensor of shape (batch, num_targets, dim), where the tensor represents the features for each target within a text. The output is a tensor of the same shape, (batch, num_targets, dim), but where each target's representation has been encoded with information from the surrounding targets in the same text.

forward(targets_features, mask)[source]
Parameters
  • targets_features (Tensor) – A tensor of shape (batch, num_targets, dim)

  • mask (Tensor) – A tensor of shape (batch, num_targets). The mask determines which targets are padding and which are not; 0 indicates padding.

Return type

Tensor

Returns

A tensor of shape (batch, num_targets, dim), where the features from the other targets have been encoded into each target's representation by this model.

get_input_dim()[source]
Return type

int

Returns

The dim size of the input to forward, which is of shape (batch, num_targets, dim)

get_output_dim()[source]
Return type

int

Returns

The dim size of the return from forward, which is of shape (batch, num_targets, dim)
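To make the interface above concrete, here is a minimal sketch in plain PyTorch of a hypothetical InterTarget-style module. The class name, trivial pass-through behaviour, and masking step are illustrative assumptions, not part of the library; a real implementation would mix information between targets rather than leave each one unchanged. The sketch only demonstrates the (batch, num_targets, dim) in/out contract and the mask semantics (0 marks a padded target).

```python
import torch
from torch import nn, Tensor


class PassThroughInterTarget(nn.Module):
    """Hypothetical minimal InterTarget: returns the target features
    unchanged, illustrating only the (batch, num_targets, dim) input and
    output contract and the mask semantics (0 indicates padding)."""

    def __init__(self, dim: int) -> None:
        super().__init__()
        self._dim = dim

    def forward(self, targets_features: Tensor, mask: Tensor) -> Tensor:
        # Zero out padded target positions so padding carries no features.
        # mask: (batch, num_targets) -> (batch, num_targets, 1) to broadcast.
        return targets_features * mask.unsqueeze(-1).type_as(targets_features)

    def get_input_dim(self) -> int:
        # Dim size of the input to forward: (batch, num_targets, dim).
        return self._dim

    def get_output_dim(self) -> int:
        # Dim size of the return from forward: same shape as the input.
        return self._dim
```

A real subclass would replace the body of forward with some cross-target encoding while keeping the same shapes and dim accessors.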

target_extraction.allen.modules.inter_target.sequence_inter_target module

class target_extraction.allen.modules.inter_target.sequence_inter_target.SequenceInterTarget(sequence_encoder)[source]

Bases: target_extraction.allen.modules.inter_target.inter_target.InterTarget

forward(targets_features, mask)[source]
Parameters
  • targets_features (Tensor) – A tensor of shape (batch, num_targets, dim)

  • mask (Tensor) – A tensor of shape (batch, num_targets). The mask determines which targets are padding and which are not; 0 indicates padding.

Return type

Tensor

Returns

A tensor of shape (batch, num_targets, dim), where the features from the other targets are encoded into each other through the sequence_encoder, e.g. an LSTM. In the LSTM case, each target is encoded in order from the first (left-most) target to the last (right-most) target in the text; if the encoder is bi-directional, the LSTM also encodes from the last target back to the first. This type of encoding is a generalisation of Modeling Inter-Aspect Dependencies for Aspect-Based Sentiment Analysis; from that paper it would model equation 4.

get_input_dim()[source]
Return type

int

Returns

The dim size of the input to forward, which is of shape (batch, num_targets, dim)

get_output_dim()[source]
Return type

int

Returns

The dim size of the return from forward, which is of shape (batch, num_targets, dim)
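The encoding described above can be sketched in plain PyTorch. This is a hypothetical stand-in for SequenceInterTarget, not the library's implementation: a (bi-directional) LSTM is run over the num_targets dimension, so each target's features absorb context from the targets before it and, when bidirectional, after it. The class name and the choice of halving the hidden size in the bidirectional case (so the output dim matches the input dim) are assumptions made for the sketch.

```python
import torch
from torch import nn, Tensor


class LSTMInterTarget(nn.Module):
    """Hypothetical sketch: encode each target with context from its
    neighbouring targets by running an LSTM along the num_targets axis."""

    def __init__(self, dim: int, bidirectional: bool = True) -> None:
        super().__init__()
        # Halve the hidden size when bidirectional so the concatenated
        # forward/backward states give an output of size `dim` again.
        hidden = dim // 2 if bidirectional else dim
        self._lstm = nn.LSTM(dim, hidden, batch_first=True,
                             bidirectional=bidirectional)
        self._dim = dim

    def forward(self, targets_features: Tensor, mask: Tensor) -> Tensor:
        # (batch, num_targets, dim) -> (batch, num_targets, dim):
        # left-to-right (and right-to-left, if bidirectional) over targets.
        encoded, _ = self._lstm(targets_features)
        # Keep padded target positions (mask == 0) at zero.
        return encoded * mask.unsqueeze(-1).type_as(encoded)

    def get_input_dim(self) -> int:
        return self._dim

    def get_output_dim(self) -> int:
        return self._dim
```

In AllenNLP proper, the sequence_encoder passed to SequenceInterTarget would typically be a registered Seq2SeqEncoder rather than a raw nn.LSTM, but the data flow is the same.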

Module contents