Finding & Managing Information on the World Wide Web. Tolerant retrieval. Index construction. Instructor: Dimitrios Katsaros, Ph.D.

Transcript:

Finding & Managing Information on the World Wide Web (Εύρεση & Διαχείριση Πληροφορίας στον Παγκόσμιο Ιστό)
Instructor: Dimitrios Katsaros, Ph.D. @ Dept. of Computer, Telecommunications & Network Engineering, University of Thessaly
Lecture 2: 21/02/2007

Tolerant retrieval & Index construction

Tolerant retrieval

In this lecture
Tolerant retrieval: wild-card queries, spelling correction, Soundex.

Wild-card queries

Wild-card queries: *
mon*: find all docs containing any word beginning with mon. Easy with a binary tree (or B-tree) lexicon: retrieve all words in the range mon ≤ w < moo.
*mon: find words ending in mon. This is harder: maintain an additional B-tree for terms written backwards; then retrieve all words in the range nom ≤ w < non.
Exercise: from this, how can we enumerate all terms meeting the wild-card query pro*cent?
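As a small illustration of the range lookups above, the sketch below uses a sorted term list and Python's bisect as a stand-in for the B-tree lexicon; the names (lexicon, prefix_range) and the toy vocabulary are illustrative assumptions, not part of the lecture.

```python
import bisect

# A sorted list of terms stands in for the B-tree lexicon;
# bisect gives the same "range of keys" operation a B-tree range scan would.
lexicon = sorted(["moment", "mondial", "monday", "money", "monkey",
                  "month", "moo", "moon", "salmon", "sermon"])

def prefix_range(sorted_terms, prefix):
    """All terms w with prefix <= w < next(prefix), e.g. mon <= w < moo for 'mon'."""
    lo = bisect.bisect_left(sorted_terms, prefix)
    # Incrementing the last character of the prefix gives the exclusive upper bound.
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    hi = bisect.bisect_left(sorted_terms, upper)
    return sorted_terms[lo:hi]

print(prefix_range(lexicon, "mon"))   # mon* -> monday, mondial, money, monkey, month

# *mon: keep a second lexicon of reversed terms and do a prefix lookup on 'nom'.
reverse_lexicon = sorted(t[::-1] for t in lexicon)
print([t[::-1] for t in prefix_range(reverse_lexicon, "nom")])   # -> salmon, sermon
```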

Query processing
At this point, we have an enumeration of all terms in the dictionary that match the wild-card query. We still have to look up the postings for each enumerated term. E.g., the query se*ate AND fil*er may result in the execution of many Boolean AND queries.

B-trees handle *'s at the end of a query term. How can we handle *'s in the middle of a query term (especially multiple *'s)? The solution: transform every wild-card query so that the *'s occur at the end. This gives rise to the permuterm index.

Permuterm index
For the term hello, index it under: hello$, ello$h, llo$he, lo$hel, o$hell, where $ is a special symbol.
Queries: X → lookup on X$; X* → lookup on X*$; *X → lookup on X$*; *X* → lookup on X*; X*Y → lookup on Y$X*; X*Y*Z → ??? Exercise!
Example: query hel*o, with X = hel and Y = o: look up o$hel*.
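A minimal permuterm sketch, under the idea above of moving the * to the end of the query and then prefix-matching rotated keys; the linear scan over dictionary keys stands in for a B-tree range lookup, and the names and toy vocabulary are illustrative assumptions.

```python
# Every rotation of term+'$' maps back to the original term; a wildcard
# query is rotated until its '*' is at the end, then answered by prefix match.

def rotations(term):
    s = term + "$"
    return [s[i:] + s[:i] for i in range(len(s))]

def build_permuterm(terms):
    index = {}
    for t in terms:
        for r in rotations(t):
            index.setdefault(r, set()).add(t)
    return index

def permuterm_lookup(index, query):
    """Handle queries with a single '*', e.g. X*, *X, X*Y."""
    rotated = query + "$"
    while not rotated.endswith("*"):               # rotate the '*' to the end
        rotated = rotated[-1] + rotated[:-1]
    prefix = rotated[:-1]                          # drop the trailing '*'
    hits = set()
    for key, terms in index.items():               # a B-tree would do a range scan here
        if key.startswith(prefix):
            hits |= terms
    return hits

idx = build_permuterm(["hello", "help", "halo", "hero"])
print(permuterm_lookup(idx, "hel*o"))   # X*Y with X=hel, Y=o -> rotate to o$hel* -> {'hello'}
print(permuterm_lookup(idx, "hel*"))    # prefix query -> {'hello', 'help'}
```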

Permuterm query processing
Rotate the query wild-card to the right, then use B-tree lookup as before. Permuterm problem: it roughly quadruples the lexicon size (an empirical observation for English).

Bigram indexes
Enumerate all k-grams (sequences of k chars) occurring in any term. E.g., from the text "April is the cruelest month" we get the 2-grams (bigrams): $a, ap, pr, ri, il, l$, $i, is, s$, $t, th, he, e$, $c, cr, ru, ue, el, le, es, st, t$, $m, mo, on, nt, h$. Here $ is a special word-boundary symbol. Maintain an inverted index from bigrams to the dictionary terms that match each bigram.

Bigram index example:
$m → mace, madden
mo → among, amortize
on → among, around

Processing n-gram wild-cards
The query mon* can now be run as $m AND mo AND on. Fast and space-efficient, this gets the terms that match the AND version of our wild-card query, but it would also enumerate moon. We must post-filter these terms against the query; the surviving enumerated terms are then looked up in the term-document inverted index. (A sketch of this pipeline appears after this block.)

Processing wild-card queries
As before, we must execute a Boolean query for each enumerated, filtered term. Wild-cards can result in expensive query execution. Avoid encouraging laziness in the UI: "Type your search terms, use * if you need to. E.g., Alex* will match Alexander."

Advanced features
Avoiding UI clutter is one reason to hide advanced features behind an "Advanced Search" button. It also deters most users from unnecessarily hitting the engine with fancy queries.
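A small end-to-end sketch of the n-gram wildcard pipeline referenced above (bigram index, AND of the query's bigrams, post-filter); the function names and toy dictionary are assumptions for illustration, and a real engine would follow this with postings lookups.

```python
import re

def bigrams(term):
    padded = f"${term}$"                              # $ marks word boundaries
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def build_bigram_index(dictionary):
    index = {}
    for term in dictionary:
        for g in bigrams(term):
            index.setdefault(g, set()).add(term)
    return index

dictionary = ["month", "moon", "monday", "salmon", "demon"]
index = build_bigram_index(dictionary)

# mon* -> bigrams of the fixed part: $m, mo, on (no trailing-$ gram, the term may continue)
candidates = index["$m"] & index["mo"] & index["on"]   # {'month', 'moon', 'monday'}
pattern = re.compile(r"^mon.*$")                       # post-filter against the original query
matches = {t for t in candidates if pattern.match(t)}  # drops the false match 'moon'
print(matches)                                         # {'month', 'monday'}
```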

Spelling correction

Spell correction
Two principal uses: correcting document(s) being indexed, and retrieving matching documents when the query contains a spelling error.
Two main flavors:
Isolated word: check each word on its own for misspelling. Will not catch typos that result in correctly spelled words, e.g., form instead of from.
Context-sensitive: look at the surrounding words, e.g., "I flew form Heathrow to Narita."

Document correction
Primarily for OCR'ed documents; correction algorithms are tuned for this. Goal: the index (dictionary) contains fewer OCR-induced misspellings. Can use domain-specific knowledge, e.g., OCR can confuse O and D more often than it would confuse O and I (the latter pair being adjacent on the QWERTY keyboard, so more likely interchanged in typing).

Query mis-spellings
Our principal focus here. E.g., the query Alanis Morisett. We can either retrieve documents indexed by the correct spelling, OR return several suggested alternative queries with the correct spelling ("Did you mean?").

Isolated word correction
Fundamental premise: there is a lexicon from which the correct spellings come. Two basic choices for this:
A standard lexicon, such as Webster's English Dictionary, or a hand-maintained industry-specific lexicon.
The lexicon of the indexed corpus, e.g., all words on the web, all names, acronyms etc. (including the mis-spellings).

Isolated word correction
Given a lexicon and a character sequence Q, return the words in the lexicon closest to Q. What's "closest"? We'll study several alternatives: edit distance, weighted edit distance, n-gram overlap.

Edit distance
Given two strings S1 and S2, the minimum number of basic operations to convert one to the other. Basic operations are typically character-level: insert, delete, replace. E.g., the edit distance from cat to dog is 3. Generally found by dynamic programming.

Edit distance
Also called Levenshtein distance. See http://www.merriampark.com/ld.htm for a nice example plus an applet to try on your own.

Weighted edit distance
As above, but the weight of an operation depends on the character(s) involved. Meant to capture keyboard errors, e.g., m is more likely to be mis-typed as n than as q; therefore, replacing m by n is a smaller edit distance than replacing it by q. (The same ideas are usable for OCR, but with different weights.) Requires a weight matrix as input; modify the dynamic programming to handle weights.
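A minimal dynamic-programming sketch of (unweighted) Levenshtein distance; plugging per-operation costs from a weight matrix into the same recurrence gives the weighted variant described above. The code is illustrative, not from the lecture.

```python
def edit_distance(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = edit distance between s1[:i] and s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                       # delete all of s1[:i]
    for j in range(n + 1):
        dp[0][j] = j                       # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + cost) # replace (or match)
    return dp[m][n]

print(edit_distance("cat", "dog"))   # 3, as in the slide's example
```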

Using edit distances
Given a query, first enumerate all dictionary terms within a preset (weighted) edit distance. (Some literature formulates weighted edit distance as a probability of the error.) Then look up the enumerated dictionary terms in the term-document inverted index. Slow, but there is no real fix; tries help. For better implementations see the Kukich and Zobel/Dart references.

Edit distance to all dictionary terms?
Given a (mis-spelled) query, do we compute its edit distance to every dictionary term? Expensive and slow. How do we cut the set of candidate dictionary terms? Here we use n-gram overlap for this.

n-gram overlap
Enumerate all the n-grams in the query string as well as in the lexicon. Use the n-gram index (recall wild-card search) to retrieve all lexicon terms matching any of the query n-grams. Threshold by the number of matching n-grams. Variants: weight by keyboard layout, etc.

Example with trigrams
Suppose the text is november; its trigrams are nov, ove, vem, emb, mbe, ber. The query is december; its trigrams are dec, ece, cem, emb, mbe, ber. So 3 trigrams overlap (of 6 in each term). How can we turn this into a normalized measure of overlap?

One option: Jaccard coefficient
A commonly-used measure of overlap. Let X and Y be two sets; then the J.C. is |X ∩ Y| / |X ∪ Y|. It equals 1 when X and Y have the same elements and zero when they are disjoint. X and Y don't have to be of the same size. Always assigns a number between 0 and 1. Now threshold to decide if you have a match, e.g., if J.C. > 0.8, declare a match.

Matching trigrams
Consider the query lord; we wish to identify words matching 2 of its 3 bigrams (lo, or, rd):
lo → alone, lord, sloth
or → border, lord, morbid
rd → ardent, border, card
A standard postings merge will enumerate these. Adapt this to using the Jaccard (or another) measure.
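A tiny sketch of the trigram-overlap and Jaccard computation for the november/december example above; the helper names are illustrative.

```python
def ngrams(term, n=3):
    return {term[i:i + n] for i in range(len(term) - n + 1)}

def jaccard(x, y):
    return len(x & y) / len(x | y)

nov, dec = ngrams("november"), ngrams("december")
print(sorted(nov))                   # ['ber', 'emb', 'mbe', 'nov', 'ove', 'vem']
print(len(nov & dec))                # 3 overlapping trigrams, as on the slide
print(round(jaccard(nov, dec), 3))   # 3 / 9 = 0.333 -> below a 0.8 threshold, no match
```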

Caveat
Even for isolated-word correction, the notion of an index token is critical: what's the unit we're trying to correct? In Chinese/Japanese, the notions of spell-correction and wildcards are poorly formulated/understood.

Context-sensitive spell correction
Text: "I flew from Heathrow to Narita." Consider the phrase query "flew form Heathrow". We'd like to respond "Did you mean flew from Heathrow?" because no docs matched the query phrase.

Context-sensitive correction
We need the surrounding context to catch this; NLP is too heavyweight for this. First idea: retrieve dictionary terms close (in weighted edit distance) to each query term. Now try all possible resulting phrases with one word fixed at a time: flew from heathrow, fled form heathrow, flea form heathrow, etc. Suggest the alternative that has lots of hits?

Exercise
Suppose that for "flew form Heathrow" we have 7 alternatives for flew, 19 for form and 3 for heathrow. How many corrected phrases will we enumerate in this scheme?

Another approach
Break the phrase query into a conjunction of biwords (Lecture 2). Look for biwords that need only one term corrected. Enumerate the phrase matches and rank them!

General issue in spell correction
We will enumerate multiple alternatives for "Did you mean?" and need to figure out which one (or small number) to present to the user. Use heuristics: the alternative hitting the most docs; query log analysis plus tweaking for especially popular, topical queries.

Computational cost
Spell-correction is computationally expensive. Avoid running it routinely on every query? Run it only on queries that matched few docs.

Thesauri
Thesaurus: a language-specific list of synonyms for terms likely to be queried: car/automobile, etc. Machine learning methods can assist; more on this in later lectures. Can be viewed as a hand-made alternative to edit distance, etc.

Query expansion
Usually do query expansion rather than index expansion: no index blowup, though query processing is slowed down. Docs frequently contain equivalences. May retrieve more junk: expanding puma to jaguar retrieves documents on cars instead of on sneakers.
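A toy sketch of query-time expansion with a hand-made thesaurus, in the spirit of the slide above; the synonym table and names are illustrative assumptions.

```python
THESAURUS = {
    "car": ["automobile"],
    "puma": ["jaguar"],          # may retrieve junk: cars instead of sneakers
}

def expand_query(terms):
    """Expand each query term into an OR of the term and its synonyms."""
    expanded = []
    for t in terms:
        synonyms = THESAURUS.get(t, [])
        expanded.append("(" + " OR ".join([t] + synonyms) + ")")
    return " AND ".join(expanded)

print(expand_query(["cheap", "car"]))   # (cheap) AND (car OR automobile)
```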

Soundex

Soundex
A class of heuristics to expand a query into phonetic equivalents. Language-specific, mainly for names. E.g., chebyshev and tchebycheff.

Soundex typical algorithm
Turn every token to be indexed into a 4-character reduced form. Do the same with query terms. Build and search an index on the reduced forms (when the query calls for a soundex match).
http://www.creativyst.com/doc/articles/soundex1/soundex1.htm#top

Soundex typical algorithm
1. Retain the first letter of the word.
2. Change all occurrences of the following letters to '0' (zero): 'A', 'E', 'I', 'O', 'U', 'H', 'W', 'Y'.
3. Change letters to digits as follows: B, F, P, V → 1; C, G, J, K, Q, S, X, Z → 2; D, T → 3; L → 4; M, N → 5; R → 6.

Soundex continued
4. Remove one of each pair of consecutive identical digits.
5. Remove all zeros from the resulting string.
6. Pad the resulting string with trailing zeros and return the first four positions, which will be of the form <uppercase letter> <digit> <digit> <digit>.
E.g., Herman becomes H655. Will hermann generate the same code? (A sketch implementing these steps follows below.)

Exercise
Using the algorithm described above, find the soundex code for your name. Do you know someone who spells their name differently from you, but whose name yields the same soundex code?
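A compact sketch implementing the six steps above; the function name and the handling of non-letter characters are assumptions. It reproduces the slide's example, mapping both Herman and hermann to H655.

```python
def soundex(term: str) -> str:
    """4-character Soundex reduction, following the steps on the slides."""
    codes = {**dict.fromkeys("AEIOUHWY", "0"),   # step 2
             **dict.fromkeys("BFPV", "1"),       # step 3
             **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"),
             "L": "4",
             **dict.fromkeys("MN", "5"),
             "R": "6"}
    word = term.upper()
    digits = [codes.get(ch, "0") for ch in word]
    # Step 4: collapse runs of the same digit to a single digit.
    collapsed = [d for i, d in enumerate(digits) if i == 0 or d != digits[i - 1]]
    # Steps 1 and 5: keep the first *letter*, drop zeros from the rest.
    tail = [d for d in collapsed[1:] if d != "0"]
    # Step 6: pad with trailing zeros and keep four positions.
    return (word[0] + "".join(tail) + "000")[:4]

print(soundex("Herman"))    # H655, as on the slide
print(soundex("hermann"))   # H655 as well: the doubled n collapses in step 4
```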

Language detection
Many of the components described above require language detection: for docs/paragraphs at indexing time, and for query terms at query time (much harder). For docs/paragraphs we generally have enough text to apply machine learning methods; for queries we lack sufficient text, so augment with other cues, such as client properties/specification from the application, the domain of query origination, etc.

What queries can we process?
We have: a basic inverted index with skip pointers, a wild-card index, spell-correction, Soundex. So we can process queries such as (SPELL(moriset) /3 toron*to) OR SOUNDEX(chaikofski).

Aside: results caching
If 25% of your users are searching for britney AND spears then you probably do need spelling correction, but you don't need to keep on intersecting those two postings lists. The web query distribution is extremely skewed, and you can usefully cache results for common queries; more later.

Exercise
Draw yourself a diagram showing the various indexes in a search engine incorporating all this functionality. Identify some of the key design choices in the index pipeline: Does stemming happen before the Soundex index? What about n-grams? Given a query, how would you parse and dispatch sub-queries to the various indexes?

The exercise on the previous slide is the beginning of "what do we need in our search engine?" Even if you're not building an engine (but instead use someone else's toolkit), it's good to have an understanding of the innards.

Resources
MG 4.2.
Efficient spell retrieval: K. Kukich. Techniques for automatically correcting words in text. ACM Computing Surveys 24(4), Dec 1992.
J. Zobel and P. Dart. Finding approximate matches in large lexicons. Software: Practice and Experience 25(3), March 1995. http://citeseer.ist.psu.edu/zobel95finding.html
Nice, easy reading on spell correction: Mikael Tillenius. Efficient Generation and Ranking of Spelling Error Corrections. Master's thesis, Royal Institute of Technology, Sweden. http://citeseer.ist.psu.edu/179155.html

Index construction (Δημιουργία Ευρετηρίου)

Index construction
How do we construct an index? What strategies can we use with limited main memory?

Our corpus for this lecture
Number of docs = n = 1M. Each doc has 1K terms. Number of distinct terms = m = 500K. 667 million postings entries.

How many postings?
Number of 1's in the i-th block = nj/i. Summing this over the m/j blocks, we have

    Σ_{i=1}^{m/j} nj/i = nj · H_{m/j} ≈ nj · ln(m/j).

For our numbers, this should be about 667 million postings.

Recall index construction
Documents are parsed to extract words, and these are saved with the document ID.
Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
[Table: the resulting (term, docID) pairs in parsing order: (I, 1), (did, 1), (enact, 1), (julius, 1), (caesar, 1), ..., (caesar, 2), (was, 2), (ambitious, 2).]

Key step
After all documents have been parsed, the inverted file is sorted by term. We focus on this sort step. We have 667M items to sort.
[Table: the same (term, docID) pairs before and after sorting by term: (ambitious, 2), (be, 2), (brutus, 1), (brutus, 2), (capitol, 1), (caesar, 1), (caesar, 2), ...]

Index construction
As we build up the index, we cannot exploit compression tricks: we parse docs one at a time, and the final postings for any term are incomplete until the end. (Actually you can exploit compression, but this becomes a lot more complex.) At 10-12 bytes per postings entry, this demands several temporary gigabytes.

System parameters for design
Disk seek ~ 10 milliseconds. Block transfer from disk ~ 1 microsecond per byte (following a seek). All other ops ~ 10 microseconds, e.g., compare two postings entries and decide their merge order.

Bottleneck
We parse and build postings entries one doc at a time, and must now sort the postings entries by term (then by doc within each term). Doing this with random disk seeks would be too slow: we must sort N = 667M records. If every comparison took 2 disk seeks, and N items could be sorted with N log₂ N comparisons, how long would this take?

Sorting with fewer disk seeks
12-byte (4+4+4) records (term, doc, freq). These are generated as we parse docs. We must now sort 667M such 12-byte records by term. Define a block as ~10M such records; we can easily fit a couple into memory, and will have 64 such blocks to start with. Sort within blocks first, then merge the blocks into one long sorted order. (A toy block-sort sketch follows after this block.)

Sorting 64 blocks of 10M records
First, read each block and sort within it: quicksort takes 2N ln N expected steps, in our case 2 x (10M ln 10M) steps. Exercise: estimate the total time to read each block from disk and quicksort it. 64 times this estimate gives us 64 sorted runs of 10M records each. We need 2 copies of the data on disk throughout.

Merging 64 sorted runs
Merge tree of log₂ 64 = 6 layers. During each layer, read runs into memory in blocks of 10M, merge, write back.
[Figure: runs 1-4 on disk being read into memory, merged, and written back as a merged run.]
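A toy sketch of the block-sort idea above, with tiny in-memory "runs" standing in for on-disk files; the block size, record sample, and names are illustrative assumptions.

```python
import heapq

records = [("caesar", 2), ("brutus", 1), ("caesar", 1), ("ambitious", 2),
           ("killed", 1), ("brutus", 2), ("noble", 2), ("killed", 1)]

BLOCK_SIZE = 3   # in the lecture's numbers this would be ~10M records

# Step 1: cut the record stream into blocks and sort each block (the sorted "runs").
runs = [sorted(records[i:i + BLOCK_SIZE]) for i in range(0, len(records), BLOCK_SIZE)]

# Step 2: k-way merge of the sorted runs (a real system merges layer by layer on disk).
merged = list(heapq.merge(*runs))

# Step 3: collapse the sorted pairs into postings lists.
postings = {}
for term, doc in merged:
    postings.setdefault(term, [])
    if not postings[term] or postings[term][-1] != doc:
        postings[term].append(doc)

print(postings)
# {'ambitious': [2], 'brutus': [1, 2], 'caesar': [1, 2], 'killed': [1], 'noble': [2]}
```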

Merge tree
[Figure: the merge tree; at the bottom level are the 64 sorted runs (1, 2, ..., 63, 64) of 10M records each; the levels above have 32 runs of 20M/run, 16 runs of 40M/run, 8 runs of 80M/run, then 4 runs, 2 runs, and finally 1 run (sizes left as an exercise).]

Merging 64 runs
Time estimate for disk transfer: 6 layers in the merge tree x (64 runs x 120MB x 10^-6 sec per byte of disk block transfer time) x 2 for read + write ~ 25 hrs. Why is this an overestimate? Work out how these transfers are staged, and the total time for merging.

Exercise: fill in this table (Step / Time):
1. 64 initial quicksorts of 10M records each
2. Read 2 sorted blocks for merging, write back
3. Merge 2 sorted blocks
4. Add (2) + (3) = time to read/merge/write
5. 64 times (4) = total merge time

Large memory indexing
Suppose instead that we had 16GB of memory for the above indexing task. Exercise: What initial block sizes would we choose? What index time does this yield? Repeat with a couple of values of n, m. In practice, spidering is often interlaced with indexing; spidering is bottlenecked by WAN speed and many other factors (more on this later).

Distributed indexing
For web-scale indexing (don't try this at home!) we must use a distributed computing cluster. Individual machines are fault-prone: they can unpredictably slow down or fail. How do we exploit such a pool of machines?

Distributed indexing
Maintain a master machine directing the indexing job, considered "safe". Break up indexing into sets of (parallel) tasks. The master machine assigns each task to an idle machine from a pool.

Parallel tasks
We will use two sets of parallel tasks: parsers and inverters. Break the input document corpus into splits, where each split is a subset of documents. The master assigns a split to an idle parser machine; the parser reads a document at a time and emits (term, doc) pairs.

Parallel tasks
The parser writes the pairs into j partitions, each for a range of terms' first letters (e.g., a-f, g-p, q-z; here j = 3). Now to complete the index inversion.

Data flow
[Figure: the master assigns splits to parsers and partitions to inverters; each parser writes (term, doc) pairs into the a-f, g-p and q-z partitions, and each inverter collects one partition (a-f, g-p or q-z) and produces its postings.]

Inverters
An inverter collects all (term, doc) pairs for one partition, sorts them, and writes the postings lists; each partition thus contains a set of postings. The above process flow is a special case of MapReduce. (A toy sketch of the parser/inverter flow follows below.)

Dynamic indexing
Docs come in over time: postings updates for terms already in the dictionary, and new terms added to the dictionary. Docs also get deleted.

Simplest approach
Maintain a big main index; new docs go into a small auxiliary index; search across both and merge the results. Deletions: keep an invalidation bit-vector for deleted docs and filter the docs output on a search result by this invalidation bit-vector. Periodically, re-index into one main index.
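A toy, single-process sketch of the parser/inverter flow described above; the partition boundaries, function names, and sample splits are illustrative assumptions, and a real deployment runs these as parallel tasks coordinated by the master.

```python
from collections import defaultdict

def partition_of(term):
    """Route a term to one of three partitions by its first letter."""
    c = term[0]
    return "a-f" if c <= "f" else ("g-p" if c <= "p" else "q-z")

def parse(split):
    """Parser: read one split of documents, emit (term, doc) pairs per partition."""
    partitions = defaultdict(list)
    for doc_id, text in split:
        for term in text.lower().split():
            partitions[partition_of(term)].append((term, doc_id))
    return partitions

def invert(pairs):
    """Inverter: sort the pairs of one partition and build its postings lists."""
    postings = defaultdict(list)
    for term, doc_id in sorted(pairs):
        if not postings[term] or postings[term][-1] != doc_id:
            postings[term].append(doc_id)
    return dict(postings)

splits = [[(1, "brutus killed caesar")], [(2, "caesar was ambitious")]]
partitioned = defaultdict(list)
for split in splits:                              # in reality: many parser machines
    for part, pairs in parse(split).items():
        partitioned[part].extend(pairs)
index = {part: invert(pairs) for part, pairs in partitioned.items()}   # inverter machines
print(index["a-f"])   # {'ambitious': [2], 'brutus': [1], 'caesar': [1, 2]}
```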

Issue with big and small indexes
Corpus-wide statistics are hard to maintain. E.g., when we spoke of spell-correction: which of several corrected alternatives do we present to the user? We said, pick the one with the most hits. How do we maintain the top ones with multiple indexes? One possibility: ignore the small index for such ordering. We will see more such statistics used in results ranking.

Building positional indexes
Still a sorting problem (but larger). Why? Exercise: given 1GB of memory, how would you adapt the block merge described earlier?

Building n-gram indexes
As text is parsed, enumerate n-grams. For each n-gram, we need pointers to all dictionary terms containing it (the postings). Note that the same postings entry can arise repeatedly in parsing the docs, so we need an efficient hash to keep track of this. E.g., that the trigram uou occurs in the term deciduous will be discovered on each text occurrence of deciduous.

Building n-gram indexes
Once all (n-gram, term) pairs have been enumerated, we must sort them for inversion. Recall that the average English dictionary term is ~8 characters, so about 6 trigrams per term on average. For a vocabulary of 500K terms, this is about 3 million pointers, which can be compressed.

Index on disk vs. memory
Most retrieval systems keep the dictionary in memory and the postings on disk. Web search engines frequently keep both in memory: a massive memory requirement, feasible for large web service installations, less so for commercial usage where query loads are lighter.

Indexing in the real world
Typically, we don't have all documents sitting on a local filesystem. Documents need to be spidered; they could be dispersed over a WAN with varying connectivity, so we must schedule distributed spiders (we have already discussed distributed indexers). Content could also be (secure content) in databases, content management applications, or email applications.

Content residing in applications
Mail systems/groupware and content management systems contain the most valuable documents. HTTP is often not the most efficient way of fetching these documents; use native API fetching via specialized, repository-specific connectors. These connectors also facilitate document viewing when a search result is selected for viewing.

Secure documents
Each document is accessible to a subset of users, usually implemented through some form of Access Control Lists (ACLs). Search users are authenticated, and a query should retrieve a document only if the user can access it. So if there are docs matching your search but you're not privy to them: "Sorry, no results found." E.g., as a lowly employee in the company, I get no results for the query "salary roster".

Users in groups, docs from groups
Index the ACLs and filter results by them.
[Figure: a documents x users 0/1 matrix; 0 if the user can't read the doc, 1 otherwise.]
Often, user membership in an ACL group is verified at query time, a slowdown.
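A tiny sketch of ACL-based result filtering as described above; the ACL table, document names, and user names are illustrative assumptions.

```python
ACL = {                      # doc_id -> set of users allowed to read it
    "salary_roster.doc": {"hr_admin"},
    "newsletter.doc": {"hr_admin", "lowly_employee"},
}

def filter_by_acl(result_doc_ids, user):
    """Keep only the hits the authenticated user is allowed to see."""
    return [d for d in result_doc_ids if user in ACL.get(d, set())]

hits = ["salary_roster.doc", "newsletter.doc"]      # raw matches for some query
print(filter_by_acl(hits, "lowly_employee"))        # ['newsletter.doc']
print(filter_by_acl(hits, "hr_admin"))              # both documents
```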

Exercise
Can spelling suggestion compromise such document-level security? Consider the case when there are documents matching my query, but I lack access to them.

Compound documents
What if a doc consisted of components, each with its own ACL? Your search should get a doc only if your query matches one of its components that you have access to. More generally: a doc assembled from computations on components, e.g., in Lotus databases or in content management systems. How do you index such docs? No good answers.

Rich documents
(How) Do we index images? Researchers have devised Query Based on Image Content (QBIC) systems: "show me a picture similar to this orange circle" (watch for the lecture on vector space retrieval). In practice, image search is usually based on metadata such as the file name, e.g., monalisa.jpg. New approaches exploit social tagging, e.g., flickr.com.

Passage/sentence retrieval
Suppose we want to retrieve not an entire document matching a query, but only a passage/sentence, say, in a very long document. We can index passages/sentences as mini-documents, but what should the index units be? This is the subject of XML search.