Generative Interpretable Recommendation Integrating Image and Text

Abstract: To address two problems in interpretable recommendation, namely that generated explanations often agree poorly with users' actual preferences, and that the tag attributes of items for users of different backgrounds are under-studied, we propose a novel generative interpretable recommendation model integrating image and text (GIR-IT). A multi-modal long short-term memory (LSTM) network based on hybrid fusion combines image and text information, improving the interpretability of the model. Item tags are one-hot encoded and added to the model, and personalized user preferences are captured by mining the correlation between users and item tags, improving the model's recommendation accuracy. Experimental results on the Amazon dataset show that, compared with the classic baseline models BPR, VBPR, NRT, and VECF, the proposed GIR-IT model raises the Top-N recommendation accuracy metric F1 to 8.272%–11.693% and reduces the explanation text generation loss by 2.73%–33.48%.
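The abstract names two mechanisms: hybrid fusion of image and text features in a multi-modal LSTM, and one-hot encoded item tags injected into the model. The PyTorch sketch below illustrates one plausible reading of that design; the `HybridFusionMMLSTM` class, the layer sizes, and the concrete fusion scheme (early concatenation of projected modalities at each step, late concatenation before the output layer) are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class HybridFusionMMLSTM(nn.Module):
    """Illustrative hybrid-fusion multi-modal LSTM for explanation generation.

    NOTE: a hypothetical sketch -- dimensions and fusion details are
    assumptions, not taken from the GIR-IT paper.
    """

    def __init__(self, vocab_size, num_tags, img_dim=4096,
                 embed_dim=256, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Early fusion: project image features and the one-hot tag vector
        # into the word-embedding space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.tag_proj = nn.Linear(num_tags, embed_dim)
        self.lstm = nn.LSTM(embed_dim * 3, hidden_dim, batch_first=True)
        # Late fusion: re-inject the projected modalities next to the
        # LSTM state before predicting the next explanation word.
        self.out = nn.Linear(hidden_dim + embed_dim * 2, vocab_size)

    def forward(self, tokens, img_feat, tag_onehot):
        # tokens: (B, T) word ids, img_feat: (B, img_dim),
        # tag_onehot: (B, num_tags) one-hot encoded item tags.
        B, T = tokens.shape
        w = self.word_embed(tokens)                       # (B, T, E)
        ctx = torch.cat([self.img_proj(img_feat),
                         self.tag_proj(tag_onehot)], -1)  # (B, 2E)
        ctx_seq = ctx.unsqueeze(1).expand(B, T, ctx.size(-1))
        h, _ = self.lstm(torch.cat([w, ctx_seq], -1))     # (B, T, H)
        return self.out(torch.cat([h, ctx_seq], -1))      # (B, T, vocab)
```

A quick shape check, under the same assumptions:

```python
model = HybridFusionMMLSTM(vocab_size=10_000, num_tags=50)
tokens = torch.randint(0, 10_000, (4, 12))   # a batch of 4 explanation texts
img = torch.randn(4, 4096)                   # e.g. precomputed CNN features
tags = nn.functional.one_hot(torch.randint(0, 50, (4,)), 50).float()
logits = model(tokens, img, tags)            # (4, 12, 10000)
```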
