No. | Slide | Text |
1 |
 |
Content-Based Image Retrieval using the Bag-of-Words Concept. Fatih Cakir, Melihcan Turk, F. Sukru Torun, Ahmet Cagri Simsek |
2 |
 |
Outline: Introduction, Bag-of-Words Concept, Dictionary Formation, Content-Based Image Retrieval using BoW, Results, Conclusion, References |
3 |
 |
Introduction: Motivation. The huge amount of multimedia content demands sophisticated analysis rather than simple textual processing of metadata such as annotations or keywords. Traditional methods for retrieving images are not very satisfactory and may not meet user demand. E.g., typing ‘Apple’ into Google Images returns Apple products as well as the apple fruit; the main reason is ambiguity in language. There are several other limitations. |
4 |
 |
Introduction: Motivation. CBIR systems compensate for such issues by analyzing the actual ‘content’ of the image, yielding a more effective feature for describing the image than user-defined metadata. Content may be texture, color, or any other information that can be derived from the image itself. One promising idea is to represent images as ‘words’, analogous to text retrieval solutions: document ~ image, term (word) ~ visual word. First introduced in [3]. |
5 |
 |
Bag of ‘words’ Concept |
6 |
 |
Bag-of-Words Concept: Analogy to documents. Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image. |
7 |
 |
Bag-of-Words Concept: Analogy to documents. Each image can be represented as a histogram, where each bin corresponds to a visual word in the dictionary and the value of the bin is the frequency of occurrence of that visual word. |
8 |
 |
Bag of ‘words’ Concept. Hence, we consider an image as a document, and just as words/terms define a document, visual words define an image. Words are known, but what are ‘visual words’? We need to define a dictionary. |
9 |
 |
2.1. Bag of ‘words’ Concept: Construct a dictionary |
10 |
 |
Dictionary Formation: Feature Extraction. Represent each patch/interest point with SIFT descriptors [1, Lowe ’99]. |
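A minimal sketch of this step, assuming OpenCV's SIFT implementation (the `extract_sift` helper and the image path are illustrative, not part of the deck):

```python
# Detect DoG interest points and describe each with a 128-D SIFT
# vector (Lowe [1]). Assumes opencv-python >= 4.4, where SIFT is
# available in the main module.
import cv2

def extract_sift(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()                       # DoG detector + SIFT descriptor
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return descriptors                             # (num_keypoints, 128) array
```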
11 |
 |
Dictionary Formation: Vector Quantization |
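A sketch of the quantization step; scikit-learn's K-means and the `build_dictionary` helper are assumptions (the slides do not name a library), while K=3000 matches the value reported on the details slide:

```python
# Pool SIFT descriptors from all training images and cluster them;
# the K cluster centers become the visual-word dictionary.
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(descriptor_list, k=3000):
    all_descriptors = np.vstack(descriptor_list)   # stack per-image (n_i, 128) arrays
    kmeans = KMeans(n_clusters=k, n_init=1).fit(all_descriptors)
    return kmeans                                  # kmeans.cluster_centers_ = visual words
```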
12 |
 |
Example Dictionary |
13 |
 |
Example ‘Visual words’ |
14 |
 |
Image Representation: codeword frequency histogram |
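A sketch of how an image becomes a codeword-frequency histogram, building on the hypothetical helpers above:

```python
# Assign each SIFT descriptor of an image to its nearest visual word
# and count occurrences, yielding a K-bin frequency histogram.
import numpy as np

def bow_histogram(descriptors, kmeans):
    words = kmeans.predict(descriptors)            # nearest codeword index per descriptor
    return np.bincount(words, minlength=kmeans.n_clusters).astype(float)
```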
15 |
 |
Content-Based Image Retrieval using BoW. We have seen how to represent images using the BoW concept, i.e., with histograms; it is a mapping of the classical text representation onto the image domain. Hence, based on the similarity of histograms, we can return ranked results given a query image (see the sketch below). Category search: retrieving an arbitrary image representative of a specific class. A subset of the Caltech 101 dataset [2] is used. |
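A sketch of the retrieval step under these assumptions: `database` is a hypothetical list of (image_id, histogram) pairs, and L2 distance is the similarity measure named on the details slide:

```python
# Rank database images by Euclidean (L2) distance between BoW
# histograms; the smallest distances come first.
import numpy as np

def rank_images(query_hist, database, top_k=10):
    scored = [(img_id, np.linalg.norm(query_hist - hist))
              for img_id, hist in database]
    return sorted(scored, key=lambda pair: pair[1])[:top_k]
```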
16 |
 |
Content-Based Image Retrieval using BoW. Given a query image, return the top-k most similar results. A ‘positive’ or ‘true’ match is considered to be one within the same category. The mean average precision (MAP) is computed for each category using 10 query images; a sketch follows below. |
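An illustrative sketch of this evaluation, assuming the standard definition of average precision (the slides do not spell out the formula): a hit is a retrieved image from the query's category, and a category's MAP averages over its 10 queries:

```python
# `ranked_categories` is the category label of each retrieved image,
# in rank order; `per_query_rankings` is one such list per query.
import numpy as np

def average_precision(ranked_categories, query_category):
    hits, precisions = 0, []
    for rank, cat in enumerate(ranked_categories, start=1):
        if cat == query_category:
            hits += 1
            precisions.append(hits / rank)         # precision at each relevant rank
    return float(np.mean(precisions)) if precisions else 0.0

def category_map(per_query_rankings, query_category):
    return float(np.mean([average_precision(r, query_category)
                          for r in per_query_rankings]))
```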
17 |
 |
Content-Based Image Retrieval using BoW: Details. For vector quantization, K-means is used with K=3000; hence the dictionary contains 3000 visual words and the histogram has 3000 bins, one per visual word. The L2 norm (Euclidean distance) is used as the similarity measure. Visual words are represented using Lowe's SIFT descriptors, and interest points are extracted using DoG (Difference of Gaussians). For each of the 18 categories, 10 query images are used, and the average MAP value is taken as the category's success rate. |
18 |
 |
Results: MAP of category-based queries |
19 |
 |
Results: MAP for varied dictionary sizes |
20 |
 |
Results. The ‘Motorbikes’ category has the highest MAP (0.70); the lowest is the ‘camera’ category (0.07). The average of the MAP rates is 0.25. As the dictionary size gets larger (i.e., more visual words), images are represented more accurately, hence MAP values increase. Performance seems to converge for K > 3000. |
21 |
 |
Conclusion. Content-Based Image Retrieval systems have gained considerable interest among researchers as multimedia files such as images and videos have entered our lives dramatically over the last decade. Textual analysis alone is not sufficient for effective retrieval systems. Analogous to document representation, an image can be described by ‘visual words’ (the BoW concept). Using only this feature, the results are highly satisfying. |
22 |
 |
References. [1] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004. [2] Caltech 101 dataset: http://www.vision.caltech.edu/Image_Datasets/Caltech101/. [3] J. Sivic and A. Zisserman. Video Google: a text retrieval approach to object matching in videos. In Proc. ICCV, 2003. |
23 |
 |
Thank You. Questions and Demo! |