Wang Chengcheng, Wang Yajun, Nan Jiangang. GENERATIVE INTERPRETABLE RECOMMENDATION INTEGRATING IMAGE AND TEXT[J]. Computer Applications and Software, 2025, 42(9): 317-323. DOI: 10.3969/j.issn.1000-386x.2025.09.042

GENERATIVE INTERPRETABLE RECOMMENDATION INTEGRATING IMAGE AND TEXT

  • Aiming at the problems that recommendation explanations produced by interpretable recommendation models often fail to match users' actual preferences, and that the label attributes of users and items from different backgrounds are under-studied, we propose a novel generative interpretable recommendation model integrating image and text (GIR-IT). A multi-modal long short-term memory network based on hybrid fusion is constructed to fuse image and text information, improving the interpretability of the model. Item labels are incorporated into the model via one-hot encoding, and users' personalized preferences are captured by mining correlations between users and item labels, which improves the model's recommendation accuracy. Experimental results on Amazon datasets show that, compared with the classic baseline models BPR, VBPR, NRT and VECF, the proposed GIR-IT model improves the Top-N recommendation accuracy metric F1 by 8.272% to 11.693% and reduces the loss of interpretable text generation by 2.73% to 33.48%.
