GENERATIVE INTERPRETABLE RECOMMENDATION INTEGRATING IMAGE AND TEXT
Abstract
To address two problems in interpretable recommendation models, namely that generated explanations often do not accord with users' actual preferences and that the attributes of user and item labels from different backgrounds remain under-studied, we propose a novel generative interpretable recommendation model integrating image and text (GIR-IT). A multi-modal long short-term memory network based on hybrid fusion is constructed to fuse image and text information and thereby improve the interpretability of the model. Item labels are added to the model via one-hot encoding, and users' personalized preferences are captured by mining the correlations between users and item labels, which improves the recommendation accuracy of the model. Experimental results on Amazon datasets show that, compared with the classic baseline models BPR, VBPR, NRT, and VECF, the proposed GIR-IT model improves the Top-N recommendation accuracy metric F1 by 8.272% to 11.693% and reduces the loss of interpretable text generation by 2.73% to 33.48%.
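For intuition, the hybrid-fusion idea described above can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the class name HybridFusionLSTM, all dimensions, and the specific choice of early fusion by concatenation plus late fusion of the one-hot item label are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridFusionLSTM(nn.Module):
    """Illustrative sketch: image and text features are fused early by
    concatenation and fed to an LSTM; the one-hot item label is fused
    late with the LSTM's final hidden state before the output layer.
    All names and dimensions are hypothetical, not the paper's."""

    def __init__(self, img_dim=512, txt_dim=300, n_labels=20,
                 hidden_dim=128, vocab_size=5000):
        super().__init__()
        self.n_labels = n_labels
        self.img_proj = nn.Linear(img_dim, hidden_dim)  # project image features
        self.txt_proj = nn.Linear(txt_dim, hidden_dim)  # project text features
        # Early (feature-level) fusion: concatenated modalities drive the LSTM.
        self.lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        # Late (decision-level) fusion: one-hot item label joins the final state.
        self.out = nn.Linear(hidden_dim + n_labels, vocab_size)

    def forward(self, img_feats, txt_feats, item_label):
        # img_feats: (B, T, img_dim); txt_feats: (B, T, txt_dim);
        # item_label: (B,) integer label ids.
        fused = torch.cat([self.img_proj(img_feats),
                           self.txt_proj(txt_feats)], dim=-1)
        _, (h_n, _) = self.lstm(fused)                   # h_n: (1, B, hidden_dim)
        label_1h = F.one_hot(item_label, self.n_labels).float()
        joint = torch.cat([h_n[-1], label_1h], dim=-1)   # (B, hidden_dim + n_labels)
        return self.out(joint)                           # logits over explanation words

# Example: a batch of 4 users, 10-step sequences, 20 hypothetical item labels.
model = HybridFusionLSTM()
logits = model(torch.randn(4, 10, 512), torch.randn(4, 10, 300),
               torch.randint(0, 20, (4,)))               # -> (4, 5000)

Fusing the two modalities at the input and the label at the output is only one of several possible hybrid-fusion layouts; the abstract does not specify where each fusion step occurs in GIR-IT.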