ΕΠΛ660. Hyperlink Analysis

Hyperlink Analysis

Course Contents Anchor text Link analysis for ranking Markov Chains PageRank and variants How can I improve the PageRank of my Web pages? HITS

The Web as a Directed Graph Page A Anchor hyperlink Page B Assumption 1: A hyperlink between pages denotes author perceived relevance (quality signal) Assumption 2: The anchor of the hyperlink describes the target page (textual context)

Anchor Text WWW Worm - McBryan [Mcbr94]: intended to locate all of the WWW-addressable resources on the Internet. For the query IBM, how to distinguish between: IBM's home page (mostly graphical), IBM's copyright page (high term freq. for "ibm"), a rival's spam page (arbitrarily high term freq.)? A million pieces of anchor text containing "ibm" send a strong signal to www.ibm.com.

Indexing anchor text When indexing a document D, include anchor text from links pointing to D. Example anchor contexts pointing to www.ibm.com: "Armonk, NY-based computer giant IBM announced today", "Joe's computer hardware links: Compaq HP IBM", "Big Blue today announced record profits for the quarter".

Indexing anchor text Can sometimes have unexpected side effects - e.g., evil empire. Can index anchor text with less weight.

Anchor Text Other applications Weighting/filtering links in the graph HITS [Chak98], Hilltop [Bhar01] Generating page descriptions from anchor text [Amit98, Amit00]

Citation Analysis Citation frequency. Co-citation coupling frequency: co-citations with a given author measure impact; co-citation analysis [Mcca90]. Bibliographic coupling frequency: articles that co-cite the same articles are related. Citation indexing: who is this author cited by? (Garfield [Garf72])

Query-independent ordering First generation: using link counts as simple measures of popularity. Two basic suggestions: Undirected popularity: Each page gets a score = the number of in-links plus the number of out-links (3+2=5). Directed popularity: Score of a page = number of its in-links (3).

Query processing First retrieve all pages meeting the text query (say venture capital). Order these by their link popularity (either variant on the previous page).

Link Analysis Ranking Algorithms Start with a collection of web pages Extract the underlying hyperlink graph Run the LAR algorithm on the graph Output: an authority weight for each node

Link Analysis: Intuition A link from page p to page q denotes endorsement page p considers page q an authority on a subject mine the web graph of recommendations assign an authority value to every page

Algorithm input Query independent: rank the whole Web PageRank (Brin and Page 98) was proposed as query independent Query dependent: rank a small subset of pages related to a specific query HITS (Kleinberg 98) was proposed as query dependent

InDegree algorithm Rank pages according to in-degree: w_i = |B(i)|, the number of pages pointing to page i. Example ranking from the figure: 1. Red Page 2. Yellow Page 3. Blue Page 4. Purple Page 5. Green Page
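As a rough illustration (my own sketch, not part of the slides; the edge list is an invented toy graph), the InDegree ranking can be computed like this:

```python
# Toy sketch of the InDegree algorithm; the edge list is an invented example.
edges = [("A", "C"), ("B", "C"), ("B", "D"), ("C", "A"), ("D", "C")]

nodes = {n for edge in edges for n in edge}
in_degree = {n: 0 for n in nodes}          # w_i = |B(i)|
for src, dst in edges:
    in_degree[dst] += 1

# Rank pages by in-degree, highest first.
for page in sorted(nodes, key=lambda n: in_degree[n], reverse=True):
    print(page, in_degree[page])           # C has 3 in-links, A and D have 1, B has 0
```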

Spamming simple popularity Exercise: How do you spam each of the following heuristics so your page gets a high score? Each page gets a score = the number of in-links plus the number of out-links. Score of a page = number of its in-links.

Pagerank scoring Imagine a browser doing a random walk on web pages: start at a random page; at each step, go out of the current page along one of the links on that page, equiprobably. In the steady state each page has a long-term visit rate - use this as the page's score.

Not quite enough The web is full of dead-ends. Random walk can get stuck in dead-ends. Makes no sense to talk about long-term visit rates.

Teleporting At a dead end, jump to a random web page. At any non-dead end, with probability 10%, jump to a random web page. With remaining probability (90%), go out on a random link. 10% - a parameter.
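A minimal sketch of how a raw link graph might be turned into a "teleporting" transition matrix under these rules; the three-page graph and the variable names are invented for illustration:

```python
# Sketch (my own illustration, not the lecture's code) of building a
# teleporting transition matrix from a link structure with a dead end.
import numpy as np

# Made-up 3-page example: page i maps to the pages it links to.
links = {0: [1, 2], 1: [2], 2: []}    # page 2 is a dead end
n, alpha = 3, 0.10                    # alpha = teleport probability (the 10% above)

P = np.zeros((n, n))
for i, outs in links.items():
    if outs:                          # non-dead end: follow a random out-link...
        P[i, outs] = (1 - alpha) / len(outs)
        P[i, :] += alpha / n          # ...or teleport with probability alpha
    else:                             # dead end: always jump to a random page
        P[i, :] = 1.0 / n

assert np.allclose(P.sum(axis=1), 1)  # rows sum to 1: a stochastic matrix
```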

Result of teleporting Now cannot get stuck locally. There is a long-term rate at which any page is visited (not obvious, will show this). How do we compute this visit rate?

Random walks Random walks on graphs correspond to Markov Chains The set of states S is the set of nodes of the graph G The transition probability matrix is the probability that we follow an edge from one node to another

Markov chains A Markov chain consists of n states, plus an n × n transition probability matrix P. At each step, we are in exactly one of the states. For 1 ≤ i, j ≤ n, the matrix entry P_ij tells us the probability of j being the next state, given we are currently in state i.

Markov chains A Markov chain describes a discrete-time stochastic process over a set of states S = {s_1, s_2, ..., s_n} according to a transition probability matrix P = {P_ij}, where P_ij is the probability of moving to state j when at state i, and Σ_j P_ij = 1 (stochastic matrix). Memorylessness property: the next state of the chain depends only on the current state and not on the past of the process (first-order MC); higher-order MCs are also possible.

Markov chains Clearly, for all i, Σ_{j=1..n} P_ij = 1. Markov chains are abstractions of random walks.

Ergodic Markov Chain Whatever vector we start with, when the power method is applied to a Markov matrix P it converges to a unique positive vector, called the stationary vector. Convergence requirements: P is stochastic: its rows sum to 1. P is irreducible: the underlying graph is strongly connected (a directed graph is called strongly connected if for every pair of vertices u and v there is a path from u to v and a path from v to u). P is aperiodic: for any pages P_i and P_j there exist paths from P_i to P_j (with any repetitions) of any length.

Ergodic Markov Chains A Markov chain is ergodic if: 1. the corresponding graph is strongly connected, and 2. it is not periodic. Ergodic Markov Chains are important since they guarantee that the corresponding Markovian process converges to a unique distribution, in which all states have strictly positive probability.

Ergodic Markov chains For any ergodic Markov chain, there is a unique long-term visit rate for each state. Steady-state probability distribution. Over a long time-period, we visit each state in proportion to this rate. It doesn t matter where we start.

Probability vectors A probability (row) vector x = (x_1, ..., x_n) tells us where the walk is at any point. E.g., (0 0 0 ... 1 ... 0 0 0), with the 1 in position i, means we're in state i. More generally, the vector x = (x_1, ..., x_n) means the walk is in state i with probability x_i, with Σ_{i=1..n} x_i = 1.

Change in probability vector If the probability vector is x = (x_1, ..., x_n) at this step, what is it at the next step? Recall that row i of the transition probability matrix P tells us where we go next from state i. So from x, our next state is distributed as xP.

Steady state example The steady state looks like a vector of probabilities a = (a_1, ..., a_n): a_i is the probability that we are in state i. (Figure: a two-state chain with transition probabilities 1/4 and 3/4.) For this example, a_1 = 1/4 and a_2 = 3/4.

How do we compute this vector? Let a = (a_1, ..., a_n) denote the row vector of steady-state probabilities. If our current position is described by a, then the next step is distributed as aP. But a is the steady state, so a = aP. Solving this matrix equation gives us a. So a is the (left) eigenvector for P. (Corresponds to the principal eigenvector of P with the largest eigenvalue.) Transition probability matrices always have largest eigenvalue 1.

One way of computing a Recall, regardless of where we start, we eventually reach the steady state a. Start with any distribution (say x = (1 0 ... 0)). After one step, we're at xP; after two steps at xP^2, then xP^3 and so on. "Eventually" means: for large k, xP^k ≈ a. Algorithm: multiply x by increasing powers of P until the product looks stable.
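A small sketch of this algorithm in Python; it assumes the two-state chain from the steady state example, taking both rows of P to be (1/4, 3/4), which is consistent with the stated steady state, and uses an arbitrary stopping tolerance:

```python
# Power-iteration sketch: multiply x by P until it stops changing.
import numpy as np

# Assumed transition matrix of the 2-state example above (both rows 1/4, 3/4).
P = np.array([[0.25, 0.75],
              [0.25, 0.75]])

x = np.array([1.0, 0.0])        # start anywhere, e.g. x = (1, 0)
for _ in range(100):
    x_next = x @ P              # one step of the walk: x P
    if np.allclose(x_next, x, atol=1e-12):
        break
    x = x_next

print(x)                        # -> [0.25 0.75], the steady state a
```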

PageRank PageRank is one of the methods Google uses to determine a page's relevance or importance. Sergey Brin and Lawrence Page did not use the notion of a Markov chain, but the notion of the random surfer.

Pagerank summary Preprocessing: Given the graph of links, build matrix P. From it compute a. The entry a_i is a number between 0 and 1: the pagerank of page i. Query processing: Retrieve pages meeting the query. Rank them by their pagerank. Order is query-independent.

PageRank algorithm [BP98] Good authorities should be pointed to by good authorities. Random walk on the web graph: pick a page at random; with probability 1-α jump to a random page; with probability α follow a random outgoing link. Rank according to the stationary distribution. 1. Red Page 2. Purple Page 3. Yellow Page 4. Blue Page 5. Green Page

The PageRank Algorithm: Basic idea A page that receives many references is generally expected to be more important. Web pages are not all equally important: www.joe-schmoe.com vs. www.stanford.edu. References (inlinks) as "votes": www.stanford.edu has 23,400 inlinks, www.joe-schmoe.com has 1 inlink. Not all references are equally important: indirect citations are considered (references from pages that themselves have many references count as more important). A recursive problem!

PageRank: Basic equation PageRank is a vote, by all the other pages on the Web, about how important a page is. A link to a page counts as a vote of support. The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85. PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)) Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be 1.

PageRank: How is it computed? Let's take the simplest example network: two pages, each pointing to the other: A B. Each page has one outgoing link (the outgoing count is 1, i.e. C(A) = 1 and C(B) = 1). Guess 1: We don't know what their PR should be to begin with, so let's take a guess at 1.0 and do some calculations with d = 0.85: PR(A) = (1-d) + d(PR(B)/1), PR(B) = (1-d) + d(PR(A)/1); PR(A) = 0.15 + 0.85*1 = 1, PR(B) = 0.15 + 0.85*1 = 1. The numbers aren't changing at all! So it looks like we started out with a lucky guess!!!

PageRank: How is it computed? Guess 2: No, that's too easy, maybe I got it wrong. Ok, let's start the guess at 0 instead and recalculate: PR(A) = 0.15 + 0.85 * 0 = 0.15, PR(B) = 0.15 + 0.85 * 0.15 = 0.2775, PR(A) = 0.15 + 0.85 * 0.2775 = 0.385875, PR(B) = 0.15 + 0.85 * 0.385875 = 0.47799375, PR(A) = 0.15 + 0.85 * 0.47799375 = 0.5562946875, PR(B) = 0.15 + 0.85 * 0.5562946875 = 0.622850484375, and so on. The numbers just keep going up. But will the numbers stop increasing when they get to 1.0?

PageRank: How is it computed? Guess 3: Well, let's see. Let's start the guess at 40 each and do a few cycles: PR(A) = 40, PR(B) = 40. First calculation: PR(A) = 0.15 + 0.85 * 40 = 34.25, PR(B) = 0.15 + 0.85 * 34.25 = 29.1775, PR(A) = 0.15 + 0.85 * 29.1775 = 24.950875, PR(B) = 0.15 + 0.85 * 24.950875 = 21.35824375. Those numbers are heading down alright! It sure looks like the numbers will get to 1.0 and stop.
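The three guesses can be reproduced with a few lines of Python; this is only an illustrative sketch of the update PR(A) = 0.15 + 0.85*PR(B), PR(B) = 0.15 + 0.85*PR(A), not code from the lecture:

```python
# Sketch reproducing the A<->B example above: iterate from different
# starting guesses and watch the values head towards 1.0.
d = 0.85

def iterate(pr_a, pr_b, steps=50):
    for _ in range(steps):
        pr_a = (1 - d) + d * pr_b
        pr_b = (1 - d) + d * pr_a     # uses the fresh PR(A), as in the slide
    return pr_a, pr_b

print(iterate(1.0, 1.0))   # Guess 1: stays at (1.0, 1.0)
print(iterate(0.0, 0.0))   # Guess 2: climbs up towards (1.0, 1.0)
print(iterate(40.0, 40.0)) # Guess 3: comes down towards (1.0, 1.0)
```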

Computing PageRank The PageRank of a page is the sum of the PageRank of the pages that point to it. The problem with this equation is that we do not know the PageRank of the pages that point to P_i. The problem is solved with an iterative procedure: initially every page has the same PageRank, equal to 1/n, and we apply the above equation repeatedly.

Computing PageRank Let r_{k+1}(P_i) be the PageRank of page P_i at iteration k+1: r_{k+1}(P_i) = Σ_{P_j ∈ B(P_i)} r_k(P_j) / |P_j|, where B(P_i) is the set of pages linking to P_i and |P_j| is the number of out-links of P_j. The process starts with r_0(P_i) = 1/n for every page, and continues in the hope that it will eventually converge.
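A hedged sketch of this iteration; the four-page link structure below is invented and need not match the graph in the next slide's figure:

```python
# Sketch of the simple iterative PageRank
#   r_{k+1}(P_i) = sum over pages P_j linking to P_i of r_k(P_j) / |P_j|,
# starting from r_0 = 1/n. The link structure is a made-up 4-page example.
out_links = {"P1": ["P2", "P3"], "P2": ["P3"], "P3": ["P1"], "P4": ["P1", "P3"]}

pages = list(out_links)
n = len(pages)
r = {p: 1.0 / n for p in pages}            # r_0(P_i) = 1/n

for k in range(50):                        # iterate, hoping it converges
    r_next = {p: 0.0 for p in pages}
    for p_j, outs in out_links.items():
        for p_i in outs:
            r_next[p_i] += r[p_j] / len(outs)
    r = r_next

print(r)
```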

Computing PageRank Applying the iterative procedure to the small graph on the left, after a few iterations we obtain the table on the right:

PageRank: Fast computation How many times do we need to repeat the calculation for big networks? That's a difficult question; for a network as large as the World Wide Web it can be many millions of iterations! The damping factor is quite subtle. If it's too high then it takes ages for the numbers to settle, if it's too low then you get repeated overshoot. Take a look at: Boldi, P., Santini, M., and Vigna, S. 2005. PageRank as a function of the damping factor. In Proceedings of the 14th International Conference on World Wide Web (Chiba, Japan, May 10-14, 2005).

PageRank: Example 1 (Figure: four pages A, B, C, D; average PR: 1.00.) PR(A)=1.49, PR(B)=0.78, PR(C)=1.58, PR(D)=0.15. It took about 20 iterations before the network began to settle on these values. Look at page D: it has a PR of 0.15 even though no one is voting for it. So, for page D, no backlinks means the equation looks like this: PR(D) = (1-d) + d * (0) = 0.15. Observation: every page has at least a PR of 0.15 to share out. But this may only be in theory: there are rumours that Google undergoes a post-spidering phase whereby any pages that have no incoming links at all are completely deleted from the index.

PageRank: Example 2 As you'd expect, the home page has the most PR: it has the most incoming links!

PageRank: Example 3 The vote of the Product page has been split evenly between More and the external site. We now value the external Site B equally with our More page.

PageRank: Example 4 That's much better. The Product page has kept three quarters of its vote within our site, unlike the previous example where it was giving away fully half of its vote to the external site! Keeping just this small extra fraction of the vote within our site has had a very nice effect on the Home Page too: a PR of 2.28 compared with just 1.66 in the previous example.

PageRank: Example 4 Observation: increasing the internal links in your site can minimize the damage to your PR when you give away votes by linking to external sites. Principle: If a particular page is highly important, use a hierarchical structure with the important page at the top. Where a group of pages may contain outward links, increase the number of internal links to retain as much PR as possible. Where a group of pages do not contain outward links, the number of internal links in the site has no effect on the site's average PR. You might as well use a link structure that gives the user the best navigational experience.

PageRank: Example 5 Observation: as you add more internal links in your site, the PR will be spread out more evenly between the pages.

PageRank: Example 6 Let's see if we can get 1,000 pages pointing to our home page, but only have one link leaving it. Observation: it doesn't matter how many pages you have in your site, your average PR will always be 1.0 at best. But a hierarchical layout can strongly concentrate votes, and therefore the PR, into the home page!

Pagerank: Issues and Variants How realistic is the random surfer model? What if we modeled the back button? [Fagi00] Surfer behavior sharply skewed towards short paths [Hube98] Search engines, bookmarks & directories make jumps non-random. Biased Surfer Models Weight edge traversal probabilities based on match with topic/query (non-uniform edge selection) Bias jumps to pages on topic (e.g., based on personal bookmarks & categories of interest)

Topic Specific Pagerank The original PageRank: purely based on the hyperlinks of web pages; contents are not considered; a single vector of PageRank is used for all web pages. Topic-Specific PageRank: for each topic, a vector of PageRank is created; each page has several PageRank values, one for each topic. Conceptually, we use a random surfer who teleports, with say 10% probability, using the following rule: select a category (say, one of the 16 top-level ODP categories) based on a query- and user-specific distribution over the categories, then teleport to a page uniformly at random within the chosen category.

Topic Specific Pagerank PR(A) = (1-d)v + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)) Predefined personalization vectors v bias the jump based on the user query; v prefers the topics the user is interested in. T. Haveliwala, Topic-Sensitive PageRank, WWW 2002
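A rough sketch of PageRank with a personalization vector v; the graph, the choice of on-topic pages, and d = 0.85 are all invented for illustration:

```python
# Sketch of PageRank with a personalization (teleport) vector v:
#   r <- (1-d) * v + d * (sum of r(P_j)/C(P_j) over in-linking pages P_j)
# Pages 0 and 1 are pretended to belong to the chosen topic, so v teleports
# only to them; the whole setup is an invented example.
import numpy as np

out_links = {0: [1, 2], 1: [2, 3], 2: [0], 3: [0, 2]}
n, d = 4, 0.85

v = np.array([0.5, 0.5, 0.0, 0.0])          # teleport only to on-topic pages
r = np.full(n, 1.0 / n)

for _ in range(100):
    r_next = (1 - d) * v
    for j, outs in out_links.items():
        for i in outs:
            r_next[i] += d * r[j] / len(outs)
    r = r_next

print(r.round(3))                           # topic-biased PageRank scores
```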

Interpretation Sports 10% Sports teleportation

Interpretation Health 10% Health teleportation

Topic Specific Pagerank [Have02] Implementation: offline, compute pagerank distributions with respect to individual categories. Query-independent model as before. Each page has multiple pagerank scores, one for each ODP category, with teleportation only to that category.

Personalized PageRank PR(A) = (1-d)v + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)) Input: Web graph W and influence vector v, where v maps each page to a degree of influence. Output: rank vector r, where r maps each page to its importance with respect to v; r = PR(W, v).

Interpretation of Composite Score For a set of personalization vectors {v_j}: Σ_j [w_j · PR(W, v_j)] = PR(W, Σ_j [w_j · v_j]). A weighted sum of rank vectors itself forms a valid rank vector, because PR() is linear with respect to v.
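This linearity can be checked numerically. In the sketch below, pr(v) solves the PageRank equation in closed form, r = (1-d)(I - d·M^T)^(-1) v, for an invented three-page graph; the personalization vectors mimic a "sports" and a "health" topic:

```python
# Numerical check that PR(W, v) is linear in the personalization vector v:
#   w1*PR(W, v1) + w2*PR(W, v2) == PR(W, w1*v1 + w2*v2)
# M is the (row-stochastic) out-link matrix of an invented 3-page graph.
import numpy as np

M = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
d = 0.85

def pr(v):
    # Closed-form solution of r = (1-d) v + d M^T r.
    return (1 - d) * np.linalg.solve(np.eye(3) - d * M.T, v)

v_sports = np.array([1.0, 0.0, 0.0])
v_health = np.array([0.0, 0.0, 1.0])

lhs = 0.9 * pr(v_sports) + 0.1 * pr(v_health)
rhs = pr(0.9 * v_sports + 0.1 * v_health)
print(np.allclose(lhs, rhs))                 # True
```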

Interpretation pr = 0.9 · PR_sports + 0.1 · PR_health gives you: 90% sports teleportation, 10% health teleportation.

Hubs and Authorities Authority is not necessarily transferred directly between authorities. Pages have a double identity: a hub identity and an authority identity. Good hubs point to good authorities; good authorities are pointed to by good hubs. Jon Kleinberg, Authoritative Sources in a Hyperlinked Environment, Journal of the ACM 46(5): 604-632 (1999)

Hyperlink-Induced Topic Search (HITS) In response to a query, instead of an ordered list of pages each meeting the query, find two sets of inter-related pages: Hub pages are good lists of links on a subject, e.g., Bob's list of cancer-related links. Authority pages occur recurrently on good hubs for the subject. Best suited for broad-topic queries rather than for page-finding queries. Gets at a broader slice of common opinion.

Hubs and Authorities Thus, a good hub page for a topic points to many authoritative pages for that topic. A good authority page for a topic is pointed to by many good hubs for that topic. Circular definition - will turn this into an iterative computation.

High-level scheme Extract from the web a base set of pages that could be good hubs or authorities. From these, identify a small set of top hub and authority pages; iterative algorithm.

Base set Given text query (say browser), use a text index to get all pages containing browser. Call this the root set of pages. Add in any page that either points to a page in the root set, or is pointed to by a page in the root set. Call this the base set.

Query dependent input (figure sequence): start from the Root Set, add the pages that point into it (IN) and the pages it points out to (OUT), yielding the Base Set.

Assembling the base set [Klei98] Root set typically 200-1000 nodes. Base set may have up to 5000 nodes. How do you find the base set nodes? Follow out-links by parsing root set pages. Get in-links (and out-links) from a connectivity server. (Actually, it suffices to text-index strings of the form href="URL" to get in-links to URL.)
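A very rough sketch of base-set assembly; get_in_links and get_out_links are hypothetical helpers standing in for a connectivity-server lookup, not real APIs:

```python
# Sketch of base-set assembly: start from the root set returned by the text
# query and add pages linked to or from it. The two helper functions are
# hypothetical stubs standing in for a connectivity-server lookup.
def get_out_links(page):
    return []    # stub: pages that `page` links to

def get_in_links(page):
    return []    # stub: pages that link to `page`

def build_base_set(root_set, max_size=5000):
    base_set = set(root_set)
    for page in root_set:
        base_set.update(get_out_links(page))   # follow out-links
        base_set.update(get_in_links(page))    # add in-linking pages
        if len(base_set) >= max_size:
            break
    return base_set
```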

Distilling hubs and authorities Compute, for each page x in the base set, a hub score h(x) and an authority score a(x). Initialize: for all x, h(x) ← 1; a(x) ← 1. Iteratively update all h(x), a(x). After the iterations, output pages with the highest h() scores as top hubs and the highest a() scores as top authorities.

Iterative update Repeat the following updates, for all x: h(x) ← Σ_{x→y} a(y), a(x) ← Σ_{y→x} h(y).

HITS Algorithm Initialize all weights to 1. Repeat until convergence: O operation: hubs collect the weight of the authorities, h_i = Σ_{j: i→j} a_j. I operation: authorities collect the weight of the hubs, a_i = Σ_{j: j→i} h_j. Normalize weights under some norm.
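An illustrative sketch of the HITS iteration on an invented four-page base set (my own sketch under the O/I update rules above, not code from the lecture):

```python
# Sketch of the HITS iteration on a toy graph (invented example):
#   O step: h_i = sum of a_j over edges i -> j
#   I step: a_i = sum of h_j over edges j -> i
# followed by normalization under the 2-norm.
import numpy as np

# Adjacency matrix of a small made-up base set: A[i, j] = 1 if i links to j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n = A.shape[0]
h = np.ones(n)
a = np.ones(n)

for _ in range(50):                 # in practice ~5 iterations are enough
    a = A.T @ h                     # I operation: authorities collect hub weight
    h = A @ a                       # O operation: hubs collect authority weight
    a /= np.linalg.norm(a)          # normalize so the values do not blow up
    h /= np.linalg.norm(h)

print("authorities:", a.round(3))
print("hubs:", h.round(3))
```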

Scaling To prevent the h() and a() values from getting too big, can scale down after each iteration. Scaling factor doesn t really matter: we only care about the relative values of the scores.

How many iterations? Claim: relative values of scores will converge after a few iterations: in fact, suitably scaled, h() and a() scores settle into a steady state! proof of this comes later. We only require the relative orders of the h() and a() scores - not their absolute values. In practice, ~5 iterations get you close to stability.

Japan Elementary Schools (example result) Hubs: schools, LINK Page-13, 100 Schools Home Pages (English), K-12 from Japan 10/...rnet and Education ), http://www...iglobe.ne.jp/~ikesan, Koulutus ja oppilaitokset, TOYODA HOMEPAGE, Education, Cay's Homepage (Japanese), UNIVERSITY, DRAGON97-TOP, plus several Japanese-language school pages whose titles are garbled in the original. Authorities: The American School in Japan, The Link Page, Kids' Space, KEIMEI GAKUEN Home Page (Japanese), Shiranuma Home Page, fuzoku-es.fukui-u.ac.jp, welcome to Miasa E&J school, http://www...p/~m_maru/index.html, fukui haruyama-es HomePage, Torisu primary school, goo, Yakumo Elementary (Hokkaido, Japan), FUZOKU Home Page, Kamishibun Elementary School, plus several Japanese-language pages whose titles are garbled in the original.

Things to note Pulled together good pages regardless of language of page content. Use only link analysis after base set assembled iterative scoring is query-independent. Iterative computation after text index retrieval - significant overhead.

Proof of convergence n × n adjacency matrix A: each of the n pages in the base set has a row and column in the matrix. Entry A_ij = 1 if page i links to page j, else 0. Example with three pages 1, 2, 3: row 1 = (0 1 0), row 2 = (1 1 1), row 3 = (1 0 0).

Hub/authority vectors View the hub scores h() and the authority scores a() as vectors with n components. Recall the iterative updates: h(x) ← Σ_{x→y} a(y), a(x) ← Σ_{y→x} h(y).

Rewrite in matrix form h = Aa and a = A^T h. Recall A^T is the transpose of A. Substituting, h = AA^T h and a = A^T Aa. Thus, h is an eigenvector of AA^T and a is an eigenvector of A^T A. Further, our algorithm is a particular, known algorithm for computing eigenvectors: the power iteration method. Guaranteed to converge.
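This can be checked numerically on the three-page example above; the sketch below compares the power-iteration hub vector with the principal eigenvector of AA^T computed by numpy (an illustration, not part of the original slides):

```python
# Check on the 3-page example: the (suitably scaled) hub vector is the
# principal eigenvector of A A^T.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 1, 1],
              [1, 0, 0]], dtype=float)

h = np.ones(3)
for _ in range(100):                      # power iteration: h <- A A^T h
    h = A @ (A.T @ h)
    h /= np.linalg.norm(h)

eigvals, eigvecs = np.linalg.eigh(A @ A.T)
principal = eigvecs[:, np.argmax(eigvals)]
principal /= np.sign(principal.sum())     # fix the sign for comparison
print(np.allclose(h, principal))          # True, up to numerical tolerance
```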

Issues Topic Drift: off-topic pages can cause off-topic authorities to be returned, e.g., the neighborhood graph can be about a super topic. Mutually Reinforcing Affiliates: affiliated pages/sites can boost each other's scores; linkage between affiliated pages is not a useful signal.

Resources IIR Chap 21 http://www2004.org/proceedings/docs/1p309.pdf http://www2004.org/proceedings/docs/1p595.pdf http://www2003.org/cdrom/papers/refereed/p270/kamvar-270-xhtml/index.html http://www2003.org/cdrom/papers/refereed/p641/xhtml/p641-mccurley.html