Location ranking in real time

Location Ranking

[Figure: Locations map]

In the image above, the Anchor Location generated 30 signals. Conjugate Location 1, located 7 mi northwest of the Anchor, generated 10, and Conjugate Location 2, located 1.5 mi north, generated 8. The probabilistic radius is therefore:

(30 · 0 + 10 · 7 + 8 · 1.5 + …) / (30 + 10 + 8 + …)

The first term in the numerator is 30 · 0, since the distance from the Anchor to itself is zero.
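For concreteness, here is a minimal sketch of that weighted-average computation in Python (the numbers are the ones from the example above; the function and variable names are mine):

```python
# A minimal sketch of the probabilistic-radius computation described above.
def probabilistic_radius(signals, distances_mi):
    """Signal-weighted average distance (in miles) from the Anchor location."""
    weighted_sum = sum(s * d for s, d in zip(signals, distances_mi))
    return weighted_sum / sum(signals)

# Anchor (distance 0), Conjugate 1 at 7 mi, Conjugate 2 at 1.5 mi;
# further conjugate locations would simply be appended to both lists.
print(probabilistic_radius([30, 10, 8], [0.0, 7.0, 1.5]))  # ≈ 1.71 miles
```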

Bear with me, it’s worth it! Next, let us present this data on a visual chart of the probabilistic radii:

[Chart: “location distance” – cumulative distribution of probabilistic radii, one line per day]

Each line on the “location distance” chart represents a single day of data and describes the cumulative distribution function of probabilistic radii. This means that for most location points the device did not move more than 60–100 miles during an hour. The hypothesis is that locations with an extremely high radius generate noise in the system. Locations that have a consistently high radius are considered “Bad”.
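To make the chart construction concrete, here is a small sketch (assuming the per-day data is simply a list of probabilistic radii) of how one such CDF line can be computed:

```python
# One CDF curve per day: the fraction of location points whose probabilistic
# radius is at or below a given value.
import numpy as np

def empirical_cdf(radii_mi):
    """Return sorted radii and the cumulative fraction of points at or below each one."""
    x = np.sort(np.asarray(radii_mi, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Example: one day's probabilistic radii (miles); each day yields one line on the chart.
x, y = empirical_cdf([0.3, 1.7, 4.2, 12.0, 65.0, 310.0])
```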

OK, but how do we define “consistency”? Consistency, for the sake of our discussion, means the event has a high probability of happening in any given time frame.

Another thing we need to define is “high radius”, i.e. the threshold above which a probabilistic radius is said to be high. We shall not give this threshold a numerical value at the moment, and rather just denote it as R. Later we shall show that it is more a continuous interval than just a number.

Let’s call a location whose radius exceeds R on a specific day “Sparse”, and so we are able to formulate our goal in a probabilistic fashion:

The days when we observe signals from the location are called “observations”, and the days when the location is Sparse are called “successes”. Once the proportion of such “successes” reaches some threshold, the location is called “Bad”. This is clearly a Binomial process with probability p – a probability that we shall try to estimate using Bayesian methods. Indeed, for every location ℓ, the probability of location ℓ being Sparse s times out of n observations, given p, is Binomial:

P(s | n, p) = C(n, s) · p^s · (1 - p)^(n - s)
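For readers who prefer code, here is the same Binomial likelihood written with scipy (the counts below are hypothetical):

```python
# Likelihood of observing s "Sparse" days out of n observed days, for a given p.
from scipy.stats import binom

n, s, p = 20, 6, 0.25            # hypothetical observation count, successes and probability
likelihood = binom.pmf(s, n, p)  # equals C(n, s) * p**s * (1 - p)**(n - s)
```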

In a nutshell, Bayesian learning is about having some prior probabilities for events, which are updated as new information becomes available. The updated probabilities are called the “posterior”. It is reasonable that a location which has a probabilistic radius of 300 miles should contribute more to the posterior probability than one having 70 miles, on any given day. Using statistical modelling (Beta-Binomial fitting) we’ve managed to establish the functional relationship between probabilistic radius and “learning rate”. That relationship is defined through the coefficients α and β of the Beta function. So, for every threshold radius R, we have:

[Chart: Alpha & Beta coefficients]

The higher the probabilistic radius is, the fewer “successes” we need to establish that the location is a “Bad” one.

Indeed, we can describe α and β as functions of R; call them α(R) and β(R) respectively.
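The post does not spell out the fitting procedure, but as a hedged illustration, a Beta-Binomial fit for a single threshold R could look like the sketch below: pick the α and β that maximize the Beta-Binomial marginal likelihood of the observed (successes, observations) pairs, then repeat over a grid of R values to obtain α(R) and β(R). All names here are mine, not Ubimo’s.

```python
# Illustrative only: one way to fit alpha and beta for a single threshold R by
# maximizing the Beta-Binomial marginal likelihood of per-location (s_i, n_i) pairs.
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def _neg_log_likelihood(log_params, s, n):
    a, b = np.exp(log_params)  # work in log space to keep alpha, beta positive
    log_comb = gammaln(n + 1) - gammaln(s + 1) - gammaln(n - s + 1)
    return -np.sum(log_comb + betaln(s + a, n - s + b) - betaln(a, b))

def fit_alpha_beta(successes, observations):
    """Fit Beta-Binomial prior parameters from per-location (Sparse days, observed days)."""
    s = np.asarray(successes, dtype=float)
    n = np.asarray(observations, dtype=float)
    result = minimize(_neg_log_likelihood, x0=np.zeros(2), args=(s, n))
    return tuple(np.exp(result.x))  # (alpha, beta) for this threshold R
```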

We then consider a location ℓ that we observed n times (days) in the period T and whose radius was over 100 miles at least once. Define the following:

For every observation i where the radius r_i was over 100 miles, α_i = α(r_i) is the respective coefficient. Then calculate the average ᾱ = (α_1 + … + α_s) / s, where s is the number of times the location had a radius over 100 miles. In the same fashion we get β̄.

Now, according to Bayes, the posterior mean for the probability of location ℓ being Sparse is:

(ᾱ + s) / (ᾱ + β̄ + n)

where s is the number of times the location was observed with a radius over 100 miles, and n is the total number of observations of location ℓ in period T.
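Putting the pieces together, here is a minimal sketch of that posterior update. The functions alpha_of_r and beta_of_r stand in for the fitted α(R) and β(R); their actual fitted forms are not given in the post, so placeholders are used.

```python
# Posterior mean of the probability that a location is Sparse, as described above.

def alpha_of_r(r_mi):
    # Placeholder for the fitted alpha(R); the real functional form is not published.
    return 1.0 + r_mi / 100.0

def beta_of_r(r_mi):
    # Placeholder for the fitted beta(R); the real functional form is not published.
    return 10.0

def posterior_sparse_probability(radii_mi, threshold_mi=100.0):
    """radii_mi: one probabilistic radius per observed day of the location."""
    n = len(radii_mi)                                    # total observations in the period
    sparse = [r for r in radii_mi if r > threshold_mi]   # the "successes"
    s = len(sparse)
    if s == 0:
        return None  # the location never exceeded the threshold in this period
    alpha_bar = sum(alpha_of_r(r) for r in sparse) / s
    beta_bar = sum(beta_of_r(r) for r in sparse) / s
    return (alpha_bar + s) / (alpha_bar + beta_bar + n)
```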

So we now have some locations tagged as “high quality” and others as “low quality”. Using this tagging, we can calculate, for each mobile app, the proportion of signals coming from each location quality type, and rank the data vendors (e.g. the apps and bidstream sources providing the data) according to this metric.
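As a sketch of that last step (the data shapes and names here are hypothetical), the per-app ranking could be computed like this:

```python
# Rank data vendors (apps) by the share of their signals coming from "high quality" locations.
from collections import defaultdict

def rank_vendors(signals, location_quality):
    """signals: iterable of (app_id, location_id) pairs;
    location_quality: dict mapping location_id to "high" or "low"."""
    totals = defaultdict(int)
    high = defaultdict(int)
    for app_id, location_id in signals:
        totals[app_id] += 1
        if location_quality.get(location_id) == "high":
            high[app_id] += 1
    share = {app: high[app] / totals[app] for app in totals}
    return sorted(share.items(), key=lambda item: item[1], reverse=True)
```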

Voila! To conclude, we have seen how, by leveraging machine learning, we put in place a mechanism that ensures we use only the highest quality data for generating actionable insights. The algorithm “cleans” the stream of location data in a self-learning way, taking almost no initial inputs; it both refrains from placing bids on bad locations and serves as a metric for the quality of location data.

Semeon Balagula, Data Scientist at Ubimo
