
Competition over popularity in social networks

Eitan Altman, May 2013. Competition over popularity in social networks. Cultural, social, and artistic reasons can make content a potential success; we are interested in understanding how information technology can contribute to the dissemination of content.

Presentation Transcript


  1. Eitan Altman, May 2013. Competition over popularity in social networks

  2. INTRODUCTION: What makes content popular? Cultural, social, and artistic reasons can make content a potential success. We are interested in understanding how information technology can contribute to the dissemination of content.

  3. Questions we wish to answer: Who are the actors related to the dissemination of content? What are the tools for disseminating content, and how efficient are they? Can data analysis be used to understand why a given content is successful? When and how much should we invest in promoting content?

  4. Outline of talk: the actors and their strategic choices (zoom: in what content to specialize); tools for accelerating dissemination (zoom: analyzing the role of recommendation lists); dissemination models; dynamic game models for competition over popularity, with tools for the solution and results. We start with a classification of models for content dissemination.

  5. What is there in common between the following videos? The most popular video, with more than 1.5 billion viewers on YouTube.

  6. A popular music video

  7. What is common? What is different? Both exhibit viral behavior; the difference is in the size of the potentially interested audience.

  8. Deterministic epidemic models: let x(t) be the fraction of infected nodes, evolving as dx/dt = k x (1 - x). Separating variables and integrating, we get log(x/(1 - x)) = kt + const; hence, finally, x(t) = x(0) e^{kt} / (1 - x(0) + x(0) e^{kt}).

  9. Example curves of x(t) for k = 1 and k = 3, with x(0) = 0.0001, 0.01, 0.3.
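
A minimal Python sketch (not from the slides) evaluating the closed-form logistic solution above at the parameter values quoted on this slide:

```python
import numpy as np

def logistic_fraction(t, k, x0):
    """Closed-form solution x(t) of dx/dt = k*x*(1-x) with x(0) = x0."""
    return x0 * np.exp(k * t) / (1.0 - x0 + x0 * np.exp(k * t))

t = np.linspace(0.0, 20.0, 201)
for k in (1.0, 3.0):
    for x0 in (0.0001, 0.01, 0.3):
        x = logistic_fraction(t, k, x0)
        # S-shaped (viral take-off); every curve converges to 1.
        print(f"k={k}, x0={x0}: x(10)={x[100]:.4f}, x(20)={x[-1]:.4f}")
```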

  10. An unpopular video with many views.

  11. "President Barack Obama 2009 Inauguration and Address": views over 3 years. Concave growth or epidemic?

  12. Propagation models without virality, with a maximum population size. Consider the model dx/dt = M(1 - x): a non-infected node becomes infected at a constant rate M, and an infected node does not infect others. This gives x(t) = 1 - (1 - x(0)) exp(-Mt), a negative exponential model that converges to a constant.

  13. Curves of x(t) for different values of x(0); all converge to 1.
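
A matching sketch for the non-viral model of the previous slide, showing concave growth that converges to 1 (the rate value M = 0.5 is an arbitrary illustration):

```python
import numpy as np

def non_viral_fraction(t, m, x0):
    """Closed-form solution x(t) of dx/dt = m*(1-x) with x(0) = x0."""
    return 1.0 - (1.0 - x0) * np.exp(-m * t)

t = np.linspace(0.0, 10.0, 101)
for x0 in (0.0001, 0.01, 0.3):
    x = non_viral_fraction(t, m=0.5, x0=x0)
    # Concave growth, no take-off phase; every curve converges to 1.
    print(f"x0={x0}: x(2)={x[20]:.4f}, x(10)={x[-1]:.4f}")
```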

  14. Decision making in social networks. The decision makers involved: social network provider (SNP), content provider (CP), content creators (CCr), and consumers of content (CoCo). The goal of the SNP, CP, and CCr is to maximize the visibility of content: higher visibility (more views) allows them to receive more advertising money. The content itself can be an advertisement that the CCr wishes to be visible.

  15. Actors and actions. SNP: what type of services to offer. CP: what type of content to specialize in. CCr: has the actions made available by the SNP (share, like, embed). CoCo: decides what to consume based on the available information (recommendation lists).

  16. A static game problem: R resources (e.g., content types) and M players. Player i pays a cost C(j,i) to associate with resource j; C(j,i) depends on the number n(j) of players connected to j and is nondecreasing in n(j). Application: each of M content providers has to decide in which type of content to specialize.

  17. Solution: Map to Crowding Games
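
To make the mapping concrete, a small best-response sketch for the unsplittable specialization game viewed as a crowding/congestion game; the cost function, player count, and starting assignment are invented for illustration and are not from the talk:

```python
def best_response_dynamics(num_players, num_resources, cost, assignment):
    """Iterate unilateral best responses until no player wants to switch.

    cost(j, n) is nondecreasing in n, the number of players on resource j."""
    changed = True
    while changed:
        changed = False
        for i in range(num_players):
            counts = [assignment.count(j) for j in range(num_resources)]
            current = assignment[i]
            # Switching to j adds the player itself to j's load.
            best = min(range(num_resources),
                       key=lambda j: cost(j, counts[j] + (0 if j == current else 1)))
            if cost(best, counts[best] + (0 if best == current else 1)) \
                    < cost(current, counts[current]):
                assignment[i] = best
                changed = True
    return assignment

# Illustrative cost: resource j has per-unit cost j+1, linear in congestion.
eq = best_response_dynamics(5, 3, lambda j, n: (j + 1) * n, [0, 0, 0, 0, 0])
print(eq)  # a pure Nash equilibrium (such games always have one)
```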

  18. 2. The splittable case: a CP can diversify its content. This maps to splittable routing games, via [ORS]. The utility is a decreasing function of the total amount of competing content.

  19. The basic results of routing games need to be revised for this setting.

  20. YouTube data for recommendations. Each video has a recommendation list: a set of recommended videos. The size N of the list depends on the screen size. Define a weighted recommendation graph: the nodes are videos; the weight of a node is, e.g., its number of views or its age; there is a directed link from A to B if B is in the recommendation list of A.
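
A minimal sketch of the recommendation graph just described, using plain dictionaries; the video IDs, view counts, and lists are invented placeholders:

```python
# Directed recommendation graph: an edge A -> B means B appears in A's
# recommendation list (the list length N depends on the screen size).
views = {"vidA": 1_500_000, "vidB": 40_000, "vidC": 900, "vidD": 12_000}
recommends = {
    "vidA": ["vidB", "vidD"],
    "vidB": ["vidA", "vidC"],
    "vidC": ["vidB"],
    "vidD": ["vidA", "vidB"],
}

def avg_recommended_views(video):
    """Average view count over the videos in `video`'s recommendation list."""
    lst = recommends.get(video, [])
    return sum(views[v] for v in lst) / len(lst) if lst else 0.0

for v in views:
    print(v, views[v], avg_recommended_views(v))
```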

  21. Measurements and curve fitting. We take 1000 random videos and draw a curve with X = the number of views of a video and Y = the average number of views over its recommendation list. This does not give a good fit.

  22. The log of the number of views. Horizontal axis: a function f of the number of views of a video. Vertical axis: the average of f over the view counts of the videos in its recommendation list. Taking f = log gives a good linear fit: Average(log(y)) = a log(x) + b, with a > 1 and b > 0 for N < 5.

  23. Markov analysis. Consider a random walk over the recommendation graph: at time n+1 it visits, uniformly at random, one of the videos recommended by the video visited at time n. The state x(n) is the number of views of the video visited at step n. Assume that x(n) is Markov.

  24. Stability analysis: f (= log) can serve as a Lyapunov function, since E[f(x(n+1)) - f(x(n)) | x(n)] > (a - 1) f(x(n)) + b. For N < 5, since a > 1, the Markov chain is unstable (not positive recurrent): the expected time to return to a given video is infinite. Hence a small screen means bad PageRank.
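
A small simulation sketch of the instability: assuming, purely for illustration, that log x(n+1) equals a·log x(n) + b plus zero-mean noise (consistent with the fitted drift above), the walk drifts toward ever larger view counts when a > 1:

```python
import random, math

def simulate_log_views(a, b, steps=50, x0=100.0, sigma=0.5, seed=0):
    """Random walk on log-views under the illustrative recursion
    E[log x(n+1) | x(n)] = a * log x(n) + b."""
    rng = random.Random(seed)
    log_x = math.log(x0)
    for _ in range(steps):
        log_x = a * log_x + b + rng.gauss(0.0, sigma)
    return log_x

# a > 1 (small screen, N < 5): log-views explode; the walk never returns.
print(simulate_log_views(a=1.1, b=0.2))
# a < 1: the drift is negative for large x, and the walk stays stable.
print(simulate_log_views(a=0.9, b=0.2))
```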

  25. Dynamic game models for popularity. Markov decision processes: we are given a state space, an action space, transition probabilities, and immediate costs/utilities. We define information structures and strategies, and a cost criterion to minimize (or a payoff to maximize) over a subset of policies: V(x,t,u), where x is the initial state, t is the horizon, and u is the policy.

  26. States. The state at time T contains all the information that determines the future evolution for given choices of the control after T. Optimality principle: let V(x,t) be the optimal value starting at time 0 in state x over a horizon t. Then V(x,t) = max E[V(x,s,u) + V(X(s), t-s)]. This is the dynamic programming principle.

  27. Criteria. Total cost (reward): E[ sum_{s<T} r(X(s), A(s)) + g(X(T)) ], where T can be a stopping time, r is the running reward, and g is the final reward. Other criteria: the risk-sensitive cost; sample-path criteria, e.g., the sample-path total cost (without the expectation), denoted R(x,t).

  28. The discrete-time total payoff criterion. The optimality principle implies V(x,t+1) = max_a [ r(x,a) + sum_y P(y|x,a) V(y,t) ]. For the total cost, V(x,t) does not depend on t. For finite state and action spaces, V is the unique solution of the DP equation.
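
A compact value-iteration sketch of this recursion on a toy finite MDP; the states, actions, transition probabilities, and rewards are invented for illustration:

```python
import numpy as np

# Toy MDP: 3 states, 2 actions.
# P[a][x][y] = transition probability, r[x][a] = immediate reward.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],
              [[0.5, 0.5, 0.0], [0.0, 0.6, 0.4], [0.2, 0.0, 0.8]]])
r = np.array([[1.0, 0.5], [0.0, 2.0], [0.3, 0.1]])

def finite_horizon_dp(P, r, horizon):
    """V(x, t+1) = max_a [ r(x, a) + sum_y P(y | x, a) V(y, t) ]."""
    num_states = P.shape[1]
    V = np.zeros(num_states)                    # V(x, 0) = 0
    for _ in range(horizon):
        Q = r + np.einsum("axy,y->xa", P, V)    # Q(x, a)
        V = Q.max(axis=1)
    return V

print(finite_horizon_dp(P, r, horizon=20))
```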

  29. Risk-sensitive cost. Define J(x,t,u) = E_u[exp(-a R(x,t))]. The standard optimality principle does not hold; instead, V(x,t) = max E[V(x,s,u) x V(X(s), t-s)], so we obtain a multiplicative dynamic programming. Dynamic programming transforms optimization over strategies into optimization over actions; in games, a Nash equilibrium over strategies transforms into a set of fixed-point equations: a Nash equilibrium over actions.
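
The multiplicative recursion can be sketched in the same way, on the same illustrative toy MDP as in the previous sketch; alpha is an invented risk-sensitivity parameter:

```python
import numpy as np

# Same illustrative toy MDP as in the previous sketch.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],
              [[0.5, 0.5, 0.0], [0.0, 0.6, 0.4], [0.2, 0.0, 0.8]]])
r = np.array([[1.0, 0.5], [0.0, 2.0], [0.3, 0.1]])

def risk_sensitive_dp(P, r, horizon, alpha):
    """Multiplicative DP: V(x,t+1) = max_a exp(-alpha*r(x,a)) * sum_y P(y|x,a) V(y,t)."""
    V = np.ones(P.shape[1])                     # V(x, 0) = 1
    for _ in range(horizon):
        Q = np.exp(-alpha * r) * np.einsum("axy,y->xa", P, V)
        V = Q.max(axis=1)
    return V

print(risk_sensitive_dp(P, r, horizon=20, alpha=0.1))
```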

  30. Continuous-time control: the Markov case. Assume one can go from state x to any state y in a set S(x). The time T(x,a,y) until a transition to state y occurs, when action a is used, is exponentially distributed with parameter L(x,a,y). Then the next transition from state x occurs at a time T that is the minimum over all y of T(x,a,y); it is exponentially distributed with parameter L(x,a) = sum_y L(x,a,y), and the next transition is to state y with probability L(x,a,y)/L(x,a).

  31. Uniformization. We may view this as having a different exponential timer in each state; we may wish to have a single one. Idea: assume we have rate L(1) in state 1 and rate L(2) > L(1) in state 2, and let p = L(1)/L(2). We now use the same transition rate L(2) in both states, but in state 1 we also allow transitions from state 1 to itself, occurring with probability 1 - p; these are called fictitious transitions. Only a fraction p of the transitions are to other states, and these occur at rate L(2) p = L(1).
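
A minimal numerical sketch of uniformization for a two-state chain (the rates and the chain itself are illustrative, not from the talk):

```python
import numpy as np

# Transition rates out of states 1 and 2 (illustrative values, L1 < L2).
L1, L2 = 2.0, 5.0
p = L1 / L2

# Uniformized chain: both states use the common clock rate L2.
# State 1 keeps only a fraction p of its ticks as real transitions; the
# remaining 1-p are fictitious self-loops, so its effective rate is L2*p = L1.
P_uniform = np.array([
    [1.0 - p, p],    # state 1: self-loop w.p. 1-p, move to state 2 w.p. p
    [1.0,     0.0],  # state 2: always moves to state 1 (as in this toy chain)
])

# Sanity check: the number of ticks spent in state 1 is geometric with
# success probability p, so the real sojourn time has mean (1/p)*(1/L2) = 1/L1.
print(1.0 / p / L2, 1.0 / L1)
```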

  32. Problem 3: competing over the popularity of content. Individuals wish to disseminate content through a social network; their goal is visibility and popularity. The social network provider (SNP) is interested in maximizing the number of downloads and has tools to accelerate the dissemination of popular content. Example: the recommendation graph; the SNP can give priority in the recommendation graph to someone who pays.

  33. Example: YouTube.

  34. Example: YouTube, annotated with three advertised videos (AD 1, AD 2, AD 3).

  35. Example: YouTube, annotated with the advertised videos (AD 1, AD 2, AD 3) and the recommendation graph.

  36. Sharing and embedding: a list containing other ad events.

  37. Snowball epidemic effects. Other acceleration factors: other publishers embed the content; comments and responses increase visibility.

  38. Model. N content creators (the seeds) are the players, and there are M potential destinations. A destination is interested in the first content it becomes aware of. Information on content n arrives at a destination after a time exponentially distributed with parameter λ(n). The goal of a seed is to maximize the number of destinations Xi(T) that hold its content at time T (T large): its dissemination utility.
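
A Monte Carlo sketch of this model: each destination is captured by whichever seed's information arrives first, i.e., the winner of a race of independent exponential clocks; the rates below are illustrative:

```python
import random

def simulate_shares(rates, num_destinations, seed=0):
    """Fraction of destinations whose first-arriving content is seed i's.

    rates[i] is the exponential arrival rate lambda(i) of content i."""
    rng = random.Random(seed)
    wins = [0] * len(rates)
    for _ in range(num_destinations):
        arrival_times = [rng.expovariate(lam) for lam in rates]
        wins[arrival_times.index(min(arrival_times))] += 1
    return [w / num_destinations for w in wins]

# Two seeds with rates 1 and 3: seed 2 captures about 3/4 of the
# destinations, i.e., lambda(i) / sum_j lambda(j).
print(simulate_shares([1.0, 3.0], num_destinations=100_000))
```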

  39. Player n can accelerate its information process by a constant a at a cost c(a). Uniformization: consider the total utility for player i when, at time 0, the system is in state x, each player j takes action aj, and the utility-to-go for player i from the next transition onwards is v(y) when the state after the next transition is y. Define the dissemination utility of player i to be g(xi), and ζi(xi) = g(xi + 1) - g(xi).

  40. We solve the DP fixed-point equation.

  41. For a linear dissemination utility, we can reduce the state space to the number of destinations that hold some content: it becomes 1-dimensional. Solution: formulate M explicit matrix games; the equilibrium of the m-th matrix game is the equilibrium of the original game at state m. If Ci(a) = Gi (a - 1) (linear in a), then the equilibrium policy for player i is a threshold policy (with threshold Gi/λi).

  42. State aggregation. It is possible to aggregate the sets of states S1, S2, …, Sr into single states if the states within each Si are not distinguishable: the same transition probabilities from any x in Si to any Sj, the same immediate rewards/costs for any x in Si, and the same available actions.

  43. The case of no information: this is a differential game with a compact state space.

  44. Results. Again the state space collapses to dimension 1, and the equilibrium at state m is obtained as the equilibrium of the m-th matrix game; now m is a real number. For a linear acceleration cost, we obtain the same threshold policies.

  45. Results. Semi-dynamic case (policies constant in time): explicit expressions for the state evolution and the utility. Taking the sum over the players, we get dx/dt = C(M - x); hence x(t) = M(1 - exp(-Ct)).

  46. The case of no information. Let Xi be the limit of Xi(t) as t goes to infinity. Then, starting from X(0) = 0, we get Xi = Ci/(C1 + … + Cn), where Ci = λ(i) w(i). Assume symmetry.

  47. The Kelly problem: player i chooses w(i), pays g w(i), and earns Ui(M w(i)/(w(1) + … + w(n))). There exists a unique equilibrium, which can be computed via a convex optimization problem.
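
A best-response sketch for this Kelly-type game, with an illustrative concave utility Ui(z) = log(1 + z) and invented prices; none of these values come from the talk, and the grid search is just a simple way to exhibit the equilibrium numerically:

```python
import numpy as np

def payoff(i, w, M, gamma, util):
    """Utility of player i: U_i(M * w_i / sum_j w_j) - gamma_i * w_i."""
    total = w.sum()
    share = M * w[i] / total if total > 0 else 0.0
    return util(share) - gamma[i] * w[i]

def best_response_equilibrium(M, gamma, util, grid, iters=200):
    """Iterated best responses over a bid grid (converges for concave utilities)."""
    n = len(gamma)
    w = np.full(n, grid[len(grid) // 2])
    for _ in range(iters):
        for i in range(n):
            candidates = []
            for b in grid:
                trial = w.copy()
                trial[i] = b
                candidates.append(payoff(i, trial, M, gamma, util))
            w[i] = grid[int(np.argmax(candidates))]
    return w

grid = np.linspace(0.01, 10.0, 500)
print(best_response_equilibrium(M=100.0, gamma=np.array([1.0, 2.0]),
                                util=lambda z: np.log(1.0 + z), grid=grid))
```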

  48. Results. Semi-dynamic case (policies constant in time): explicit expressions for the state evolution and the utility; the state is proportional to …
