Finding & Managing Information on the World Wide Web


Finding & Managing Information on the World Wide Web. Instructor: Dimitrios Katsaros, Ph.D., Dept. of Computer, Telecommunications & Networks Engineering, University of Thessaly. Lecture 3: 28/02/2007

Index Compression & Term Weighting

Index Compression

Corpus size for estimates. Consider N = 1M documents, each with about L = 1K terms. At an average of 6 bytes/term (incl. spaces/punctuation), that is 6 GB of data. Say there are m = 500K distinct terms among these.

Recall: Don't build the matrix. A 500K x 1M matrix has half a trillion 0's and 1's, but it has no more than one billion 1's: the matrix is extremely sparse. So we devised the inverted index, and query processing for it. Where do we pay in storage?

Where do we pay in storage? (Figure: the dictionary for the two-document Julius Caesar example, each term listed with its number of docs and total frequency, with pointers into a postings file of (docID, freq) entries.)

Index size. Stemming/case folding/no numbers cuts the number of terms by ~35% and the number of non-positional postings by 10-20%. Stop words: rule of 30: the ~30 commonest words account for ~30% of all term occurrences in written text [= # positional postings]. Eliminating the 150 commonest terms from the index will reduce non-positional postings by ~30% without considering compression; with compression, you save only ~10%.

Storage analysis. First, we will consider space for postings: basic Boolean index only, no analysis for positional indexes, etc. We will devise compression schemes. Then we will do the same for the dictionary.

Postings: two conflicting forces. A term like Calpurnia occurs in maybe one doc out of a million; we would like to store this posting using log2 M ~ 20 bits. A term like the occurs in virtually every doc, so 20 bits/posting is too expensive; prefer a 0/1 bitmap vector in this case.

Postings file entry. We store the list of docs containing a term in increasing order of docID. Brutus: 33, 47, 154, 159, 202. Consequence: it suffices to store gaps: 33, 14, 107, 5, 43. Hope: most gaps can be encoded with far fewer than 20 bits.

Variable length encoding. Aim: for Calpurnia, we will use ~20 bits/gap entry; for the, ~1 bit/gap entry. If the average gap for a term is G, we want to use ~log2 G bits/gap entry. Key challenge: encode every integer (gap) with about as few bits as needed for that integer. Variable length codes achieve this by using short codes for small numbers.

(Elias) γ codes for gap encoding. Represent a gap G as the pair <length, offset>. length is ⌊log2 G⌋ in unary and uses ⌊log2 G⌋ + 1 bits to specify the length of the binary encoding of the offset. offset = G − 2^⌊log2 G⌋ in binary, encoded in ⌊log2 G⌋ bits. Recall that the unary encoding of x is a sequence of x 1's followed by a 0.

γ codes for gap encoding. E.g., 9 is represented as <1110, 001>; 2 is represented as <10, 0>. Exercise: what is the γ code for 1? Exercise: does zero have a γ code? Encoding G takes 2⌊log2 G⌋ + 1 bits, so γ codes are always of odd length.
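A minimal sketch of the γ encoder just described (the function name is ours):

```python
def gamma_encode(g):
    """Elias gamma code for a gap g >= 1: unary length, then binary offset."""
    assert g >= 1, "gamma codes are defined for positive integers only"
    binary = bin(g)[2:]               # e.g. 9 -> '1001'
    offset = binary[1:]               # drop the leading 1 -> '001'
    length = '1' * len(offset) + '0'  # unary code for the offset length -> '1110'
    return length + offset

# The slide's examples:
print(gamma_encode(9))  # 1110001
print(gamma_encode(2))  # 100
print(gamma_encode(1))  # 0   (the exercise: empty offset, unary length is just '0')
```

Note that `gamma_encode(1)` answers the exercise: 1 has the one-bit code '0', and zero has no γ code at all since the scheme only covers positive integers.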

Exercise. Given the following sequence of γ-coded gaps, reconstruct the postings sequence: 1110001110101011111101101111011. γ-decode the gaps, then accumulate them to recover the full postings.
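A sketch of the matching decoder, applied to the exercise's bit string (the function name is ours; this assumes the string is a clean concatenation of γ codes):

```python
from itertools import accumulate

def gamma_decode(bits):
    """Decode a concatenation of Elias gamma codes into a list of gaps."""
    gaps, i = [], 0
    while i < len(bits):
        length = 0
        while bits[i] == '1':   # unary part: count 1's up to the 0 terminator
            length += 1
            i += 1
        i += 1                  # skip the '0' terminator
        offset = bits[i:i + length]
        i += length
        gaps.append((1 << length) + (int(offset, 2) if offset else 0))
    return gaps

gaps = gamma_decode("1110001110101011111101101111011")
postings = list(accumulate(gaps))   # the first gap is the first docID itself
print(gaps, postings)
```

Walking the string by hand gives gaps 9, 6, 3, 59, 7, hence postings 9, 15, 18, 77, 84.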

What we've just done: encoded each gap as tightly as possible, to within a factor of 2. For better tuning, and for a simple analysis, we need a handle on the distribution of gap values.

Zipf's law. The kth most frequent term has frequency proportional to 1/k. We use this for a crude analysis of the space used by our postings file pointers. Not yet ready for analysis of dictionary space.

Zipf's law: log-log plot (figure).

Rough analysis based on Zipf. The ith most frequent term has frequency proportional to 1/i; let this frequency be c/i. Then, with m = 500,000 terms, Σ_{i=1}^{m} c/i = 1. The mth harmonic number is H_m = Σ_{i=1}^{m} 1/i, so c = 1/H_m ≈ 1/ln m = 1/ln(500K) ≈ 1/13. So the ith most frequent term has frequency roughly 1/(13i).
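The harmonic-number approximation above is easy to check numerically (assuming m = 500,000 as on the slide):

```python
import math

m = 500_000
H_m = sum(1.0 / i for i in range(1, m + 1))   # the m-th harmonic number
c = 1.0 / H_m

print(math.log(m))   # ln(500K) ~ 13.1, so c ~ 1/13 as the slide claims
print(c)             # ~0.073 (H_m is ln m + Euler's constant, ~13.7)
```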

Postings analysis contd. Expected number of occurrences of the ith most frequent term in a doc of length L is: L·c/i ≈ L/(13i) ≈ 76/i for L = 1000. Let J = Lc ≈ 76. Then the J most frequent terms are likely to occur in every document. Now imagine the term-document incidence matrix with rows sorted in decreasing order of term frequency:

Rows by decreasing frequency (figure): with the N docs as columns and the m terms as rows, the J most frequent terms each have N gaps of 1; the next J most frequent terms each have N/2 gaps of 2; the next J have N/3 gaps of 3; and so on.

J-row blocks. In the ith of these J-row blocks, we have J rows, each with N/i gaps of i each. Encoding a gap of i takes 2⌊log2 i⌋ + 1 bits, so such a row uses space ~(2N log2 i)/i bits. For the entire block: (2NJ log2 i)/i bits, which in our case is ~1.5 x 10^8 (log2 i)/i bits. Sum this over i from 1 up to m/J = 500K/76 ≈ 6500. (Since there are m/J blocks.)

Exercise. Work out the above sum and show it adds up to about 53 x 150 Mbits, which is about 1 GByte. So we've taken 6 GB of text and produced from it a 1 GB index that can handle Boolean queries! Neat! Make sure you understand all the approximations in our probabilistic calculation.
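The sum can be checked numerically with the constants from the preceding slides (a sketch, not a solution to hand in):

```python
import math

N, J, m = 1_000_000, 76, 500_000
# Sum of (2 N J log2 i) / i bits over the m/J blocks, as on the previous slide.
total_bits = sum(2 * N * J * math.log2(i) / i for i in range(1, m // J + 1))
print(total_bits / 8 / 2**30)   # roughly 1 GByte, matching the slide's estimate
```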

Caveats. This is not the entire space for our index: it does not account for dictionary storage (next up), and as we get further, we'll store even more stuff in the index. The analysis assumes the Zipf's law model applies to the occurrence of terms in docs, and all gaps for a term are taken to be the same! It does not talk about query processing.

More practical caveat: alignment. γ codes are neat in theory, but, in reality, machines have word boundaries (8, 16, 32 bits). Compressing and manipulating at individual-bit granularity is overkill in practice and slows down query processing. In practice, simpler byte/word-aligned compression is better; see the Scholer et al. and Anh and Moffat references. For most current hardware, bytes are the smallest unit that can be manipulated very efficiently. This suggests the use of a variable byte code.

Byte-aligned compression. Used by many commercial/research systems; a good low-tech blend of variable-length coding and sensitivity to alignment issues. Fix a word width of, here, w = 8 bits. Dedicate 1 bit (the high bit) to be a continuation bit c. If the gap G fits within (w − 1) = 7 bits, binary-encode it in the 7 available bits and set c = 0. Else set c = 1, encode the low-order (w − 1) bits, and then use one or more additional words to encode ⌊G/2^(w−1)⌋ using the same algorithm.
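A sketch of this scheme (function names are ours; following the slide's recursion, low-order bytes are emitted first and the continuation bit marks "more bytes follow"; production variants often flip both conventions):

```python
def vb_encode(gap):
    """Variable byte code: 7 data bits per byte, high bit = continuation."""
    out = []
    while True:
        low = gap & 0x7F            # low-order 7 bits
        gap >>= 7
        if gap == 0:
            out.append(low)         # c = 0: last byte of this gap
            break
        out.append(0x80 | low)      # c = 1: more bytes follow
    return bytes(out)

def vb_decode(data):
    gaps, g, shift = [], 0, 0
    for b in data:
        g |= (b & 0x7F) << shift    # splice in the next 7 bits
        if b & 0x80:
            shift += 7              # continuation set: keep reading
        else:
            gaps.append(g)          # last byte: emit the gap
            g, shift = 0, 0
    return gaps

stream = b"".join(vb_encode(g) for g in [33, 14, 107, 5, 43])
print(len(stream), vb_decode(stream))   # 5 one-byte codes round-trip cleanly
```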

Exercise. How would you adapt the space analysis for γ-coded indexes to the variable byte scheme using continuation bits?

Exercise (harder). How would you adapt the analysis for the case of positional indexes? Intermediate step: forget compression; adapt the analysis to estimate the number of positional postings entries.

Word-aligned binary codes. More complex schemes, indeed ones that respect 32-bit word alignment, are possible. Byte alignment is especially inefficient for very small gaps (such as for the commonest words). Say we now use a 32-bit word with 2 control bits. Sketch of an approach: if the next 30 gaps are 1 or 2, encode them in binary within a single word; if the next gap is large, encode just it in a word; for intermediate gaps, use intermediate strategies; use the 2 control bits to encode the coding strategy.

Dictionary and postings files (figure): the dictionary (term, number of docs, total frequency) is usually in memory; the postings file of (docID, freq) entries is gap-encoded, on disk.

Inverted index storage. We have estimated postings storage; next up: dictionary storage. The dictionary is in main memory, postings on disk. This is common, and allows building a search engine with high throughput. For very high throughput, one might use distributed indexing and keep everything in memory; in a lower-throughput situation, you can store most of the dictionary on disk with a small in-memory index. Tradeoffs between compression and query processing speed; a cascaded family of techniques.

How big is the lexicon V? It grows (but more slowly) with corpus size. An empirically okay model is Heaps' law: m = kT^b, where b ≈ 0.5, k ≈ 30-100, and T = # tokens. For instance, TREC disks 1 and 2 (2 GB; 750,000 newswire articles): 500,000 terms. m is decreased by case-folding and stemming; indexing all numbers could make it extremely large (so usually don't). Spelling errors contribute a fair bit of size. Exercise: can one derive this from Zipf's law?
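Heaps' law is a one-liner; the token count and the choice k = 30 below are our assumptions, picked to show the TREC figure is in the right ballpark:

```python
def heaps_m(T, k=30, b=0.5):
    """Heaps' law vocabulary estimate m = k * T^b (slide: b ~ 0.5, k ~ 30-100)."""
    return k * T ** b

# TREC disks 1 and 2: 2 GB of newswire. At a rough 6 bytes/token that is
# about 3.3e8 tokens (our assumption); with k = 30 the estimate lands near
# the slide's 500,000 terms.
print(heaps_m(330_000_000))
```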

Dictionary storage, first cut. Array of fixed-width entries: 500,000 terms at 28 bytes/term = 14 MB (term: 20 bytes; freq: 4 bytes; postings ptr: 4 bytes). E.g., a 999,712; aardvark 71; ...; zzzz 99. Allows fast binary search into the dictionary.

Exercises. Is binary search really a good idea? What are the alternatives?

Fixed-width terms are wasteful. Most of the bytes in the Term column are wasted: we allot 20 bytes even for 1-letter terms, and we still can't handle supercalifragilisticexpialidocious. Written English averages ~4.5 characters/word. Exercise: why is/isn't this the number to use for estimating the dictionary size? The average dictionary word in English is ~8 characters: short words dominate token counts but not the type average.

Compressing the term list: dictionary-as-a-string. Store the dictionary as a (long) string of characters; the pointer to the next word marks the end of the current word. Hope to save up to 60% of dictionary space: ...systilesyzygeticsyzygialsyzygyszaibelyiteszczecinszomo... Each entry keeps a frequency (e.g., 33, 29, 44, 126), a postings pointer, and a term pointer into the string; we binary-search these term pointers. Total string length = 500K x 8B = 4 MB; pointers must resolve 4M positions: log2 4M = 22 bits = 3 bytes.

Total space for compressed list: 4 bytes per term for the frequency, 4 bytes per term for the pointer to postings, 3 bytes per term pointer, and an average of 8 bytes per term in the term string. Now an average of 11 bytes for the term itself, not 20; 500K terms give 9.5 MB.

Blocking. Store pointers to every kth term string; example below with k = 4. We need to store term lengths (1 extra byte each): ...7systile9syzygetic8syzygial6syzygy11szaibelyite8szczecin9szomo... Per block we save 9 bytes on 3 pointers and lose 4 bytes on term lengths.

Net. Where we used 3 bytes/pointer without blocking (3 x 4 = 12 bytes for k = 4 pointers), we now use 3 + 4 = 7 bytes for 4 pointers. Shaved another ~0.5 MB; we can save more with larger k. Why not go with larger k?

Exercise. Estimate the space usage (and savings compared to 9.5 MB) with blocking, for block sizes of k = 4, 8 and 16.
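A sketch of the space estimate for this exercise, under the per-term layout from the earlier slides (4 B freq + 4 B postings ptr + avg. 8 B of term characters, plus one 1 B length field per term and one 3 B term pointer per block; the helper name is ours):

```python
def blocked_dict_mb(k, n_terms=500_000):
    """Estimated dictionary size in MB with blocking factor k."""
    per_term = 4 + 4 + 8 + 1              # freq, postings ptr, string chars, length byte
    per_block = 3                         # one term pointer per k-term block
    return (n_terms * per_term + (n_terms // k) * per_block) / 1e6

for k in (4, 8, 16):
    print(k, blocked_dict_mb(k))          # 8.875, 8.6875, 8.59375 MB vs. 9.5 MB unblocked
```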

Impact on search. Binary search down to a 4-term block, then linear search through the terms in the block. With 8 terms: a binary tree gives an average of (1 + 2·2 + 4·3 + 4)/8 = 2.6 compares; with blocks of 4 (binary tree over blocks), the average is (1 + 2·2 + 2·3 + 2·4 + 5)/8 = 3 compares.

Exercise. Estimate the impact on search performance (and the slowdown compared to k = 1) with blocking, for block sizes of k = 4, 8 and 16.

Total space. By increasing k, we could cut the pointer space in the dictionary, at the expense of search time: space drops from 9.5 MB toward ~8 MB. Net, postings take up most of the space; they are generally kept on disk, with the dictionary compressed in memory.

Extreme compression (see MG). Front-coding: sorted words commonly have a long common prefix, so store differences only (for the last k−1 terms in a block of k): 8automata 8automate 9automatic 10automation becomes 8{automat}a 1◊e 2◊ic 3◊ion, where {automat} encodes the shared prefix "automat" and each number is the extra length beyond it. Begins to resemble general string compression.
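A sketch of front coding (function names are ours). Note this greedy variant shares the prefix with the immediately preceding term rather than fixing one prefix per block of k, so it can share slightly more than the slide's scheme:

```python
import os

def front_code(sorted_terms):
    """Each term after the first is stored as (shared-prefix length, suffix)."""
    out, prev = [], ""
    for term in sorted_terms:
        p = len(os.path.commonprefix([prev, term]))
        out.append((p, term[p:]))
        prev = term
    return out

def front_decode(coded):
    terms, prev = [], ""
    for p, suffix in coded:
        prev = prev[:p] + suffix      # rebuild from the previous term's prefix
        terms.append(prev)
    return terms

coded = front_code(["automata", "automate", "automatic", "automation"])
print(coded)   # [(0, 'automata'), (7, 'e'), (7, 'ic'), (8, 'on')]
print(front_decode(coded))
```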

Extreme compression. Using (perfect) hashing to store terms "within" their pointers: not great for vocabularies that change. Large dictionary: partition into pages, use a B-tree on the first terms of pages, and pay a disk seek to grab each page. If we're paying 1 disk seek anyway to get the postings, it is only another seek per query term.

Compression: two alternatives. Lossless compression: all information is preserved, but we try to encode it compactly; this is what IR people mostly do. Lossy compression: discard some information; using a stopword list can be viewed this way, and techniques such as Latent Semantic Indexing (later) can be viewed as lossy compression. One could prune from postings the entries that are unlikely to turn up in the top-k list for any query on the word; this is especially applicable to web search, with huge numbers of documents but short queries (e.g., Carmel et al., SIGIR 2002).

Top-k lists. Don't store all postings entries for each term, only the best ones. Which ones are the best ones? More on this subject later, when we get into ranking.

Resources. IIR Chapter 5; MG 3.3, 3.4. F. Scholer, H.E. Williams and J. Zobel. 2002. Compression of inverted indexes for fast query evaluation. Proc. ACM SIGIR 2002. V.N. Anh and A. Moffat. 2005. Inverted index compression using word-aligned binary codes. Information Retrieval 8: 151-166.

Parametric Search

Parametric search: fields. Most documents have, in addition to text, some meta-data in fields, e.g., Language = French, Format = pdf, Subject = Physics, Date = Feb 2000, etc. A parametric search interface allows the user to combine a full-text query with selections on these field values, e.g., language, date range, etc.

Parametric search example. Notice that the output is a (large) table; various parameters in the table (column headings) may be clicked on to effect a sort.

Parametric search example. We can add text search.

Parametric/field search. In these examples, we select field values. Values can be hierarchical, e.g., Geography: Continent, Country, State, City. This is a paradigm for navigating through the document collection: e.g., "Aerospace companies in Brazil" can be arrived at by first selecting Geography, then Line of Business, or vice versa. Filter the docs in contention and run text searches scoped to the subset.

Index support for parametric search. Must be able to support queries of the form: find pdf documents that contain "stanford university" (a field selection on doc format plus a phrase query). For field selection, use an inverted index of field values to docIDs, organized by field name; use compression etc. as before.

Parametric index support. Optionally provide richer search on field values, e.g., wildcards: find books whose Author field contains s*trup. Range search: find docs authored between September and December. An inverted index doesn't work (as well) here; use techniques from database range search (see for instance www.bluerwhite.org/btree/ for a summary of B-trees), and use query optimization heuristics as before.

Field retrieval. In some cases, we must retrieve field values, e.g., the ISBN numbers of books by s*trup. Maintain a forward index: for each doc, those field values that are retrievable. An indexing control file specifies which fields are retrievable (and can be updated). We are storing primary data here, not just an index (as opposed to "inverted").

Zones. A zone is an identified region within a doc, e.g., Title, Abstract, Bibliography; generally culled from marked-up input or document metadata (e.g., PowerPoint). The contents of a zone are free text, not a finite vocabulary. Indexes for each zone allow queries like "sorting in Title AND smith in Bibliography AND recur* in Body", but not queries like "all papers whose authors cite themselves". Why?

Zone indexes, simple view (figure): a separate dictionary-plus-postings index per zone (Title, Author, Body, etc.), each of the same form as the dictionary and postings files seen earlier.

So we have a database now? Not really. Databases do lots of things we don't need: transactions, recovery (our index is not the system of record; if it breaks, simply reconstruct it from the original source). Indeed, we never have to store the text in a search engine, only indexes. We're focusing on optimized indexes for text-oriented queries, not an SQL engine.

Scoring and Ranking

Scoring. Thus far, our queries have all been Boolean: docs either match or not. Good for expert users with a precise understanding of their needs and the corpus, and for applications that can consume 1000's of results. Not good for (the majority of) users with poor Boolean formulation of their needs; most users don't want to wade through 1000's of results (cf. the use of web search engines).

Scoring. We wish to return, in order, the documents most likely to be useful to the searcher. How can we rank-order the docs in the corpus with respect to a query? Assign a score, say in [0,1], to each doc for each query. Begin with a perfect world: no spammers, nobody stuffing keywords into a doc to make it match queries. (More on adversarial IR under web search.)

Linear zone combinations. First generation of scoring methods: use a linear combination of Booleans, e.g.: Score = 0.6·<sorting in Title> + 0.3·<sorting in Abstract> + 0.05·<sorting in Body> + 0.05·<sorting in Boldface>. Each expression such as <sorting in Title> takes on a value in {0,1}, so the overall score is in [0,1]. For this example the scores can only take on a finite set of values; what are they?

Linear zone combinations. In fact, the expressions between <> on the last slide could be any Boolean query. Who generates the Score expression (with weights such as 0.6, etc.)? In uncommon cases the user, in the UI; most commonly, a query parser that takes the user's Boolean query and runs it on the indexes for each zone.

Exercise. On the query bill OR rights, suppose that we retrieve the following docs from the various zone indexes. Author: bill in 1, 2; rights in none. Title: bill in 3, 5, 8; rights in 3, 5, 9. Body: bill in 1, 2, 5, 9; rights in 3, 5, 8, 9. Compute the score for each doc based on the weightings 0.6 (Author), 0.3 (Title), 0.1 (Body).
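The exercise's postings table is garbled in this transcript, so the sets below are one plausible reading (an assumption). Under that reading, the linear zone combination can be computed as:

```python
# Zone weights and postings for each query term (assumed reconstruction).
zones = {
    "author": (0.6, {"bill": {1, 2},        "rights": set()}),
    "title":  (0.3, {"bill": {3, 5, 8},     "rights": {3, 5, 9}}),
    "body":   (0.1, {"bill": {1, 2, 5, 9},  "rights": {3, 5, 8, 9}}),
}

def score(query_terms):
    """Each zone contributes its weight if the OR of the terms matches there."""
    scores = {}
    for weight, postings in zones.values():
        matching = set().union(*(postings[t] for t in query_terms))
        for d in matching:
            scores[d] = scores.get(d, 0.0) + weight
    return scores

print(sorted(score(["bill", "rights"]).items()))
```

With these sets, docs 1 and 2 score 0.7 (Author + Body) and docs 3, 5, 8, 9 score 0.4 (Title + Body), consistent with the score-accumulation slide below.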

General idea. We are given a weight vector whose components sum up to 1; there is a weight for each zone/field. Given a Boolean query, we assign a score to each doc by adding up the weighted contributions of the zones/fields it matches in. Typically users want to see the K highest-scoring docs.

Index support for zone combinations. In the simplest version we have a separate inverted index for each zone. Variant: have a single index with a separate dictionary entry for each term-zone pair, e.g.: bill.author: 1, 2; bill.title: 3, 5, 8; bill.body: 1, 2, 5, 9. Of course, compress zone names like author/title/body.

Zone combinations index. The above scheme is still wasteful: each term is potentially replicated for each zone. In a slightly better scheme, we encode the zone in the postings: bill: 1.author, 1.body; 2.author, 2.body; 3.title. As before, the zone names get compressed. At query time, we accumulate contributions to the total score of a document from the various postings, e.g.:

Score accumulation. bill: 1.author, 1.body; 2.author, 2.body; 3.title. rights: 3.title, 3.body; 5.title, 5.body. As we walk the postings for the query bill OR rights, we accumulate scores for each doc in a linear merge as before; docs 1, 2, 3, 5 end up with scores 0.7, 0.7, 0.4, 0.4. Note: we get both bill and rights in the Title field of doc 3, but score it no higher. Should we give more weight to more hits?

Where do these weights come from? Machine-learned relevance. Given a test corpus, a suite of test queries, and a set of relevance judgments, learn a set of weights such that the relevance judgments are matched. Can be formulated as ordinal regression. More in next week's lecture.

Full text queries. We just scored the Boolean query bill OR rights; most users are more likely to type bill rights or bill of rights. How do we interpret these full-text queries? No Boolean connectives; of several query terms, some may be missing in a doc; only some query terms may occur in the title, etc.

Full text queries. To use zone combinations for free-text queries, we need a way of assigning a score to a pair <free-text query, zone>. Zero query terms in the zone should mean a zero score; more query terms in the zone should mean a higher score; scores don't have to be Boolean. Will look at some alternatives now.

Incidence matrices. Recall: a document (or a zone in it) is a binary vector X in {0,1}^v; the query is a vector Y; score by the overlap measure |X ∩ Y|.

Term       A&C  JC  Temp  Ham  Oth  Mac
Antony      1    1   0     0    0    1
Brutus      1    1   0     1    0    0
Caesar      1    1   0     1    1    1
Calpurnia   0    1   0     0    0    0
Cleopatra   1    0   0     0    0    0
mercy       1    0   1     1    1    1
worser      1    0   1     1    1    0

(Columns: Antony and Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth.)

Example. On the query ides of march, Shakespeare's Julius Caesar has a score of 3; all other Shakespeare plays have a score of 2 (because they contain march) or 1. Thus in a rank order, Julius Caesar would come out tops.

Overlap matching. What's wrong with the overlap measure? It doesn't consider: term frequency in the document; term scarcity in the collection (document mention frequency): of is more common than ides or march; the length of documents (and queries: the score is not normalized).

Overlap matching. One can normalize in various ways: Jaccard coefficient |X ∩ Y| / |X ∪ Y|; cosine measure |X ∩ Y| / sqrt(|X|·|Y|). What documents would score best using Jaccard against a typical query? Does the cosine measure fix this problem?
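A toy illustration (the documents are hypothetical) of how these normalizations behave on binary vectors represented as sets, and why Jaccard rewards very short documents:

```python
import math

def jaccard(x, y):
    return len(x & y) / len(x | y)

def cosine_binary(x, y):
    # On binary vectors the dot product is the overlap |X ∩ Y| and each
    # Euclidean norm is sqrt(|X|).
    return len(x & y) / math.sqrt(len(x) * len(y))

query = {"ides", "of", "march"}
doc_short = {"ides", "march"}                                  # a 2-term "doc"
doc_long = {"ides", "of", "march"} | {f"w{i}" for i in range(97)}  # 100 terms

print(jaccard(doc_short, query), jaccard(doc_long, query))          # 0.667 vs 0.03
print(cosine_binary(doc_short, query), cosine_binary(doc_long, query))
```

The tiny document wins overwhelmingly under Jaccard; the cosine's square-root length normalization softens, but does not remove, the effect.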

Scoring: density-based. Thus far: position and overlap of terms in a doc (title, author, etc.). Obvious next idea: if a document talks about a topic more, then it is a better match. This applies even when we only have a single query term: a document is relevant if it has a lot of the terms. This leads to the idea of term weighting.

Term Weighting

Term-document count matrices. Consider the number of occurrences of a term in a document (bag of words model). A document is a vector in N^v: a column below.

Term       A&C  JC   Temp  Ham  Oth  Mac
Antony     157   73   0     0    0    0
Brutus       4  157   0     1    0    0
Caesar     232  227   0     2    1    1
Calpurnia    0   10   0     0    0    0
Cleopatra   57    0   0     0    0    0
mercy        2    0   3     5    5    1
worser       2    0   1     1    1    0

(Columns: Antony and Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth.)

Bag of words view of a doc. Thus the doc "John is quicker than Mary." is indistinguishable from the doc "Mary is quicker than John." Which of the indexes discussed so far distinguish these two docs?

Counts vs. frequencies. Consider again the ides of march query: Julius Caesar has 5 occurrences of ides; no other play has ides; march occurs in over a dozen; all the plays contain of. By this scoring measure, the top-scoring play is likely to be the one with the most of's.

Digression: terminology. WARNING: in a lot of IR literature, "frequency" is used to mean "count". Thus "term frequency" in IR literature is used to mean the number of occurrences of a term in a doc, not divided by document length (which would actually make it a frequency). We will conform to this misnomer: in saying term frequency we mean the number of occurrences of a term in a document.

Term frequency tf. Long docs are favored because they're more likely to contain query terms. We can fix this to some extent by normalizing for document length. But is raw tf the right measure?

Weighting term frequency: tf. What is the relative importance of 0 vs. 1 occurrence of a term in a doc? 1 vs. 2 occurrences? 2 vs. 3 occurrences? Unclear: while it seems that more is better, a lot isn't proportionally better than a few. Can just use raw tf. Another option commonly used in practice: wf_{t,d} = 0 if tf_{t,d} = 0, and 1 + log tf_{t,d} otherwise.

Score computation. Score for a query q: Score(q, d) = Σ_{t in q} tf_{t,d}. [Note: 0 if no query terms are in the document.] This score can be zone-combined; we can use wf instead of tf in the sum. Still doesn't consider term scarcity in the collection (ides is rarer than of).

Weighting should depend on the term overall. Which of these tells you more about a doc: 10 occurrences of hernia, or 10 occurrences of the? We would like to attenuate the weight of a common term; but what is "common"? Suggest looking at collection frequency (cf): the total number of occurrences of the term in the entire collection of documents.

Document frequency. But document frequency (df) may be better: df = number of docs in the corpus containing the term.

Word       cf     df
ferrari    10427
insurance  10440  3997

Document/collection frequency weighting is only possible in a known (static) collection. So how do we make use of df?

tf x idf term weights. The tf x idf measure combines: term frequency (tf, or wf), some measure of term density in a doc; and inverse document frequency (idf), a measure of the informativeness of a term, i.e., its rarity across the whole corpus. idf could just be the raw count of the number of documents the term occurs in (idf_i = 1/df_i), but by far the most commonly used version is: idf_i = log(n / df_i). See Kishore Papineni, NAACL 2, 2002 for theoretical justification.

Summary: tf x idf (or tf.idf). Assign a tf.idf weight to each term i in each document d: w_{i,d} = tf_{i,d} · log(n / df_i), where tf_{i,d} = frequency of term i in document d, n = total number of documents, and df_i = the number of documents that contain term i. Exercise: what is the weight of a term that occurs in all of the docs? The weight increases with the number of occurrences within a doc, and increases with the rarity of the term across the whole corpus.
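The formula is a one-liner (the slides leave the log base unspecified; base 10 is assumed here):

```python
import math

def tfidf(tf, df, n):
    """w_{i,d} = tf_{i,d} * log(n / df_i), per the summary slide."""
    return tf * math.log10(n / df)

n = 1_000_000
print(tfidf(3, 1, n))   # rare term (df = 1): weight 3 * 6 = 18
print(tfidf(3, n, n))   # term in every doc: log(1) = 0, so weight 0
```

The second call answers the slide's exercise: a term occurring in all n docs has idf = log(n/n) = 0, hence weight 0 regardless of tf.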

Real-valued term-document matrices. Function (scaling) of the count of a word in a document (bag of words model); each doc is a vector in R^v. Here, log-scaled tf.idf. Note the entries can be > 1!

Term        A&C   JC    Temp  Ham   Oth   Mac
Antony      13.1  11.4   0.0   0.0   0.0   0.0
Brutus       3.0   8.3   0.0   1.0   0.0   0.0
Caesar       2.3   2.3   0.0   0.5   0.3   0.3
Calpurnia    0.0  11.2   0.0   0.0   0.0   0.0
Cleopatra   17.7   0.0   0.0   0.0   0.0   0.0
mercy        0.5   0.0   0.7   0.9   0.9   0.3
worser       1.2   0.0   0.6   0.6   0.6   0.0

Documents as vectors. Each doc d can now be viewed as a vector of wf x idf values, one component for each term. So we have a vector space: terms are axes; docs live in this space; even with stemming, we may have 20,000+ dimensions. (The corpus of documents gives us a matrix, which we could also view as a vector space in which words live: transposable data.)

Recap. We began by looking at zones in scoring, and ended up viewing documents as vectors in a vector space. We will pursue this view next time.