Abstract:
Machine reading comprehension multiple-choice tasks have attracted wide attention in natural language processing, but the performance of existing pre-trained models on logical-reasoning multiple-choice tasks leaves room for improvement. An improved DeBERTa-Discourse Graph Neural Network (DeBERTa-DGNN) model is proposed, combining DeBERTa with a discourse graph neural network. The DeBERTa model extracts token-level features, and the hidden relationships between sentences are captured by a graph: the sequence features output by DeBERTa are segmented into discourse units, which serve as the nodes of a logic graph. To preserve the order information of the original text, positional information is added to each discourse unit in the logic graph. Because long-distance nodes in the logic graph are difficult to make interact effectively, a multi-head self-attention mechanism is introduced over the graph nodes to alleviate this problem. The proposed model achieves a test accuracy of 68.40% on the ReClor dataset, a significant improvement over the baselines.
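The pipeline the abstract describes (encode tokens, pool them into discourse-unit nodes, add positional information, and let distant nodes interact via multi-head self-attention) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, mean-pooling of units, sinusoidal position encoding, and untrained (identity-projection) attention are all assumptions made for clarity.

```python
import numpy as np

def split_into_discourse_units(token_feats, boundaries):
    """Pool encoder token features (e.g. DeBERTa outputs) into one vector
    per discourse unit by mean pooling.

    token_feats: (seq_len, d) array.
    boundaries:  list of (start, end) index pairs, one per discourse unit.
    """
    return np.stack([token_feats[s:e].mean(axis=0) for s, e in boundaries])

def add_position_encoding(nodes):
    """Add a sinusoidal position signal so the original text order of the
    discourse units is preserved in the graph."""
    n, d = nodes.shape
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d)
    pe = np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
    return nodes + pe

def multi_head_node_attention(nodes, num_heads=4):
    """Scaled dot-product self-attention over graph nodes, split into heads,
    so long-distance discourse units can interact directly.
    (Learned Q/K/V projections are omitted for brevity.)"""
    n, d = nodes.shape
    dh = d // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = nodes[:, h * dh:(h + 1) * dh]
        scores = q @ k.T / np.sqrt(dh)
        scores -= scores.max(axis=-1, keepdims=True)  # numeric stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)      # softmax over nodes
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1)
```

A usage sketch: given a 12-token encoded sequence of width 8 split into three discourse units, `multi_head_node_attention` returns updated node features of shape `(3, 8)` in which every unit has attended to every other unit regardless of distance.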