
Please use this identifier to cite or link to this item: http://localhost:8080/xmlui/handle/123456789/292
Title: Certain Investigations on Content Based Video Indexing and Retrieval Using Heuristic Approaches
Other Titles: https://shodhganga.inflibnet.ac.in/handle/10603/303674
https://shodhganga.inflibnet.ac.in/bitstream/10603/303674/2/02_certificates.pdf
Authors: Brindha, N
Visalakshi, P
Keywords: Engineering and Technology
Computer Science
Computer Science Software Engineering
Data Mining
Multimedia
Video Indexing
Issue Date: 2019
Publisher: Anna University
Abstract: Data mining aims to refine the knowledge that fits the data in order to identify interesting information within the vast sets of raw data generally held in databases. A multimedia database is a collection of related multimedia objects that must be stored and managed effectively. Multimedia mining analyses multiple information media in order to find relationships between the varied types of data; mining is a challenging task because of the unstructured nature of the objects. Video plays the most prominent role in multimedia data, encompassing media types such as audio, image, graphics and text. The technology behind videography includes capturing, recording and processing the still images caught at each moment. Video data mining tries to discover recurrent video patterns from a large video database automatically and is one of the core challenging research problems. A video retrieval system is a framework for handling video data that identifies the required and similar video clips from a video collection. Content-based search analyses the actual content of the video, whereas metadata search relies on keywords, tags and descriptors. The unique feature of a video is its rich semantic content. Content-based retrieval therefore aims to provide a quick and easy mechanism for indexing and storing visual data by content, allowing the user to perform an easy search and retrieval. Nowadays, search engines such as Google and Yahoo use textual annotation to search for video data. The major drawback is the incorrect correlation of the textual data associated with the video; the second drawback is that the user is interested only in the content, i.e. the visual appearance of the object. To overcome these problems, the search is performed on the video content itself. The Content-Based Video Indexing and Retrieval System (CBVRS) is a recent innovation that overcomes the limitations of text-based video retrieval and is today widely adopted. CBVRS assists the user in searching a video sequence and retrieving the required video from a large database based on its content. It is also a suitable method for retrieving the required video clip through feature extraction and classification. CBVRS has shown significant results for meaningful video data retrieval in areas such as news broadcasting, advertising, distance learning, medical applications, security and sports. Video retrieval is always in demand owing to the growth of multimedia data.

NEED FOR THE STUDY

The significance of video-based data retrieval for large applications and problem solving in various domains such as engineering and the medical sciences, including day-to-day activities (Wankhede et al. 2015), has motivated the present research. Many researchers, including Ja-Hwung Su et al. (2010), have suggested different methods for searching, indexing and retrieving videos from databases, but a reliable and effective system is still an open problem. Weiming Hu et al. (2011) suggested that the goal of video standards is to ensure compatibility between description interfaces for video contents in order to facilitate the development of fast and accurate video retrieval algorithms. One of the key functions in CBVR is to bridge the "semantic gap", which refers to the gap between user understanding and system perception. To overcome this problem, feature extraction and classification are applied.
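To make the notion of content-based (rather than metadata-based) analysis concrete, the short Python sketch below selects candidate key frames by differencing consecutive frames. It is an illustration only, assuming OpenCV and NumPy; the threshold, function name and sample file name are not taken from the thesis.

```python
# Minimal sketch of content-based key-frame selection by frame differencing.
# Threshold, function name and file name are illustrative assumptions.
import cv2
import numpy as np

def extract_keyframes(video_path, diff_threshold=30.0):
    """Return indices of frames whose mean absolute difference from the
    previous frame exceeds diff_threshold (a crude shot-change cue)."""
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            keyframes.append(idx)          # always keep the first frame
        else:
            diff = cv2.absdiff(gray, prev_gray)
            if float(np.mean(diff)) > diff_threshold:
                keyframes.append(idx)      # large change: candidate key frame
        prev_gray = gray
        idx += 1
    cap.release()
    return keyframes

print(extract_keyframes("sample_clip.mp4"))
```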
Low-level features such as color and texture are easy to measure and compute; however, it is a paramount challenge to connect low-level features to semantic meaning, especially where the intellectual and emotional aspects of the human operator (user) are involved. Feature extraction is performed on the key frame, and the classification of the matched frames is carried out using a Support Vector Machine (SVM). The high-level motion feature is obtained from the matched frame. The method adopted in this research is a hybrid Support Vector Machine-Echo State Neural Network (SVM-ESN) that classifies the video content based on these features.

OBJECTIVES OF THE STUDY

The objective of the research is to classify the videos in the web video library available in the MSR (Microsoft Research) dataset and retrieve the appropriate video from each category for user utilization. The first phase executes video segmentation using the video scene segmentation method. The second phase classifies the contents of the video using keyframe extraction, adopting the frame difference between frames and the block matching algorithm. The third phase extracts the low-level and high-level features using the Fuzzy Local Information C-Means (FLICM) clustering algorithm and the Perceived Motion Energy Spectrum (PMES) technique. In the fourth phase, a hybrid SVM-ESN classifier is used to bridge the semantic gap. The computational results indicate that the proposed FLICM and PMES methods are effective for feature extraction. They also show that the adopted SVM-ESN supervised learning method yields improved performance for video classification and retrieval compared with other existing methods. The methods are evaluated using measures such as precision, recall and F-score. The proposed approach significantly reduces the user's effort in composing a query and captures the user's information need more precisely.
URI: http://localhost:8080/xmlui/handle/123456789/292
Appears in Collections: Electronics & Communication Engineering

Files in This Item:
File: 02_abstracts.pdf | Description: ABSTRACT | Size: 68.09 kB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.