Video Keyframe Extraction Algorithm Based on Multi-Feature Fusion Similarity


  • Abstract: In content-based video retrieval, inefficient keyframe extraction produces keyframes with poor representativeness and degrades the performance of the whole retrieval system. To address this, a keyframe extraction algorithm based on multi-feature fusion similarity is proposed. The algorithm first detects shots by combining color histograms with a fully convolutional neural network, segmenting the video into shots with higher internal content correlation. Keyframes are then extracted within each shot using multi-feature fusion similarity, and redundant keyframes are removed using deep-feature similarity, yielding more accurate results. Experimental results show that the extracted keyframes summarize the video well and are applicable to video retrieval and summarization; the overall recall and precision reach 85.61% and 83.21%, respectively. Compared with other algorithms, the keyframes extracted by this algorithm exhibit relatively little redundancy.
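
The abstract only names the pipeline stages and gives no implementation details, so the following Python sketch is purely illustrative: the specific features, fusion weights, and thresholds (`w_color`, `shot_threshold`, `keyframe_threshold`, the color/edge/texture similarities) are assumptions, and both the fully convolutional network used in shot detection and the deep-feature redundancy-removal step are omitted. OpenCV histogram comparison stands in for the paper's similarity measures.

```python
import cv2
import numpy as np

# Illustrative sketch only: the paper does not specify these features,
# weights, or thresholds; they are assumed here for demonstration.

def color_histogram(frame, bins=16):
    """Normalized HSV color histogram of a BGR frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def fused_similarity(f1, f2, w_color=0.5, w_edge=0.3, w_texture=0.2):
    """Weighted fusion of color, edge, and texture similarities (weights assumed)."""
    s_color = cv2.compareHist(color_histogram(f1), color_histogram(f2),
                              cv2.HISTCMP_CORREL)
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
    e1, e2 = cv2.Canny(g1, 100, 200) > 0, cv2.Canny(g2, 100, 200) > 0
    s_edge = (e1 & e2).sum() / max((e1 | e2).sum(), 1)    # edge-map overlap
    s_texture = np.corrcoef(g1.ravel(), g2.ravel())[0, 1]  # gray-level correlation
    return w_color * s_color + w_edge * s_edge + w_texture * s_texture

def detect_shot_boundaries(frames, shot_threshold=0.6):
    """Start a new shot wherever adjacent-frame color similarity drops sharply."""
    boundaries = [0]
    for i in range(1, len(frames)):
        s = cv2.compareHist(color_histogram(frames[i - 1]),
                            color_histogram(frames[i]), cv2.HISTCMP_CORREL)
        if s < shot_threshold:
            boundaries.append(i)
    return boundaries

def extract_keyframes(frames, shot_threshold=0.6, keyframe_threshold=0.75):
    """Take the first frame of each shot, then add a frame whenever its fused
    similarity to the most recent keyframe falls below keyframe_threshold."""
    boundaries = detect_shot_boundaries(frames, shot_threshold) + [len(frames)]
    keyframes = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        last_key = frames[start]
        keyframes.append(start)
        for i in range(start + 1, end):
            if fused_similarity(last_key, frames[i]) < keyframe_threshold:
                keyframes.append(i)
                last_key = frames[i]
    return keyframes  # indices of candidate keyframes
```

In this sketch the thresholds control the trade-off the abstract describes: a lower `shot_threshold` merges shots, while a higher `keyframe_threshold` admits more keyframes and leaves more redundancy for the (omitted) deep-feature filtering stage to remove.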

     
