A Bayesian Look At Mobility Device Transitions in Supply Chain Management

Time and tide wait for no man.

And so it is for mobility in supply chain management. Once primarily the domain of laser barcode scanners and barcode printers (with a smattering of voice technology), mobility in supply chain management is exploding in multiple directions. A key driver in this evolution is the range of new hardware options now available for mobilizing supply chain processes. From ruggedized tablets to smartphones to imaging cameras to the “internet of things”, today’s supply chain managers have multiple options for mobilizing their processes, with the combined goals of increasing productivity, increasing customer satisfaction, and/or lowering cost.

Let’s take a look at this trend, and apply a little Bayes’ analysis to try to quantify its development.

Let’s start with a simple question. What is the probability that new hardware technology (tablets, smartphones, imagers, etc.) significantly displaces the sale and/or deployment of traditional barcode scanning technology in the first half (1H) of 2014? We want to know if it is a SIGNIFICANT cut, and we want to stick to 1H 2014 to increase the relevance.

To use a Bayes’ approach, we need to start by assigning an a priori probability to our question. What do we think is the probability that tablets, imagers, etc., will have a significant negative impact on traditional barcode scanner deployment by 1H 2014? This is going to be a subjective guess, and everyone will likely have their own opinion. For the sake of argument here, let’s assign this a probability of 1/4, or 25%.

Whether you agree with this number or not doesn’t matter.   We’re going to “tweak” it using a Bayesian approach[1], and it is the process of tweaking we want to evaluate.   You can change the numbers to your own heart’s content later.

Given our specific question regarding supply chain mobility, and our assigned a priori probability, we now want to look around for any evidence that might support, or detract from, our guess. This evidence will be used to tweak our probability.[1] So, what kind of evidence can we find regarding the adoption of new mobility devices in supply chain operations? And more specifically, what evidence can we find regarding all these new devices’ impact on the barcode scanning market?

Let’s start with the available media outlets. DCVelocity released an article in Nov. 2013 that describes some specific use cases of tablet and smartphone adoption by J.B. Hunt and others in supply chain operations.[2] Likewise, VDC published a study in Sept. 2013 forecasting a decline in barcode scanner sales through 2017; the basis for this decline in barcoding CAGR emphasized the adoption of imagers and consumer devices.[3] A little digging will turn up these, and a number of other indicators that address our question. But just how valuable is all this Evidence[1] with regard to our prediction?

There are two questions we need to ask to use this evidence in a Bayes’ evaluation. They are:

  1. What is the probability that these journals would report this information in Q3/Q4 of 2013 IF barcode scanners ARE going to fall off a cliff in 1H 2014? And,
  2. What is the probability that they would report this information in Q3/Q4 of 2013 IF barcode scanners are NOT going to be significantly displaced in 1H 2014? In other words, what is the probability these are just some timely releases, BUT there’s not going to be any huge conversion from barcoders to new hardware in 1H 2014?

For the first question, what is the likelihood that DCVelocity and VDC, who make a living by timely reporting and forecasting, would be releasing this information now IF barcode scanners will be significantly displaced in 1H 2014? We should probably expect a very high likelihood of seeing these reports at the end of 2013 if we’re on the cusp of a major technology transition in early 2014.
Actually, we should probably expect to see these, and a lot more. Let’s set this probability very high, say 95%.

Ok, now what about that second question? We’re not suggesting that new hardware devices will have no impact. We’ve been witnessing the adoption of non-barcode devices in supply chain ops for a few years now. What we want to ask, though, is how likely is it that we see these reports come out late in 2013, BUT barcode scanning isn’t really all that negatively impacted in 1H2014? (Any more, say, than it was in 2013.) That’s an interesting question, and bears a little thought. (And btw, it is this thought problem that is really the major benefit of using Bayes’ analysis.) Consider this.

Even if the barcode market is NOT significantly displaced in 1H2014, there is still a pretty high likelihood that we would see these reports.   Why?  Many reasons come to mind, but here are a few.

  • Publish or perish
    • Industry journals and syndicated research organizations have editorial calendars to meet. Therefore, even if barcoding is NOT negatively impacted by a LARGE amount in 1H2014, it’s still a good bet that we’d see these articles published.
  • We’ve been witnessing the evolution of new hardware form factors for several years now.
    • Although interesting as a data point, this isn’t new news.   Tablets have been out for several hardware generations now, and have been available ruggedized for a while.   Imagers have been around for years.  The point being, even if we do NOT see a large negative impact next year, we would probably still see these reports.

Given that, it’s probably safe to assume these reports would be published even if there is no HUGE displacement of devices in 1H2014. Let’s put that likelihood at 75% to suggest it’s still likely, but not as likely as it would be IF we were on the verge of a major swing event.

Okay, now we have a question with an a priori probability. We have some new evidence (we read some news), and we’ve guessed how important, or impactful, this news is given our question. What would Bayes tell us should be our NEW probability for our question?

I’m not going to detail the mathematics here.   You can find that elsewhere.  (See [1] for example.)   But, just running the numbers yields,

P[ Significant decline | Evidence by reporting ] =
     P[ Evidence | Significant decline ] x P[ Significant decline]  / 
       ( P[ Evidence | Significant decline ] x P[ Significant decline] + 
          P[ Evidence | NOT Significant decline ] x P[ NOT Significant decline] ) 

     = (.95 x .25) / ((.95 x .25) + (.75 x .75))
     =   .2375 / ( .2375 + .5625 )
     = .297

Based on our analysis, we now feel that there is a 29.7% likelihood that new hardware technology will significantly displace barcode scanners in 1H2014, up from 25% originally. Not a great big jump, is it? Interesting, but probably not great enough for us to start ringing the fire alarm yet.
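For anyone who wants to run the numbers themselves, here is a minimal R sketch of the update. (The function name bayes_update is my own invention, not from any package.)

# Posterior probability for a binary hypothesis via Bayes' rule.
#   prior     = P[ H ], our a priori guess
#   p_e_h     = P[ Evidence | H ]
#   p_e_not_h = P[ Evidence | NOT H ]
bayes_update <- function(prior, p_e_h, p_e_not_h) {
  (p_e_h * prior) / (p_e_h * prior + p_e_not_h * (1 - prior))
}

bayes_update(prior = 0.25, p_e_h = 0.95, p_e_not_h = 0.75)
# [1] 0.296875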

What if…?

Let’s broaden our question, and now ask, “What if…?”

Say we’re a vendor selling autoID technology and services. What we really want to know is: what would we consider REALLY impactful evidence? In other words, what would we need to see to convince us that barcode devices are about to suffer a SIGNIFICANT displacement in 1H2014? Something more tangible than a[nother] headline like, “Mobility Devices EXPLODING in 2014!!!”

Here, Mr. Bayes lends us a big hand. The answer to our question comes down to looking at the ratio between the probability our evidence occurs IF there is a significant displacement of barcode devices in 1H2014, versus the probability that the evidence would occur if there were NOT going to be a significant displacement in 1H2014. In other words, we would need to see some evidence, or some event, such that,

P[ Evidence occurs given that there IS a significant hardware displacement ]
------------------------------------------------------------------------------
P[ Evidence occurs given that there is NOT a significant hardware displacement ]

was a very BIG number.[1]   And, the bigger the better.
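(This ratio is what statisticians call the Bayes factor.) Here is a quick, purely illustrative R sketch of how the size of that ratio moves our 25% prior; the candidate factor values are made up:

# Posterior odds = prior odds x Bayes factor (the ratio above)
prior_odds   <- 0.25 / 0.75              # our 25% prior, expressed as odds
bayes_factor <- c(1, 2, 5, 16, 100)      # hypothetical evidence strengths
post_odds    <- prior_odds * bayes_factor
post_odds / (1 + post_odds)              # back to probabilities
# [1] 0.250 0.400 0.625 0.842 0.971

Note the jump at a factor of 16; that number comes back below.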

So, what kind of Evidence can we imagine that would fit that bill?

Suppose it comes to light that a major supply chain company has contracted to deploy a significant number of new tablets and imagers to modify their operations. To use the parlance of the media, what if we discover that someone has developed the “killer supply chain app” for tablets and/or imagers, and that some major player has contracted to buy and deploy it? Whatever the app (omnichannel, anyone?), the key is that we find evidence that some player has adopted it, for actual money.

Major players in supply chain management do NOT adopt new technology on a whim.   That is a direct route to the unemployment line.   Big players have planning committees, and strategy initiatives, and lots of checks and balances to ensure any new technology adoptions provide significant benefits, and do so for a looooong time.   Given this, we can now play the following game.

The likelihood that we see a major player announcing barcode hardware displacement early in 2014, IF we’re witnessing a significant displacement of mobility hardware, should be fairly high. In other words, if the adoption of new hardware technology IS taking a drastic uptick, then we probably would expect to see a report of a major player adoption. Let’s say this likelihood is 80%.

But actually, the second question becomes more important. If new mobile device hardware were NOT going to displace barcode devices in 1H2014, would we really expect to see a major player announcing this type of adoption? I would argue that it would NOT be likely. Again, not unless someone wants a quick trip to the unemployment line, OR someone is exceptionally visionary and/or knows something that absolutely nobody else knows. Therefore, let’s assign this likelihood a very small value, say 5%.

Ok, so what does our new probability become?

P[ Significant barcode device decline | Evidence a major player announces new technology deployment ] =
    P[ Evidence | Significant decline ] x P[ Significant decline]  / 
       ( P[ Evidence | Significant decline ] x P[ Significant decline] + 
          P[ Evidence | NOT Significant decline ] x P[ NOT Significant decline] ) 

   = (.8 x .25) / ((.8 x .25) + (.05 x .75))
   =   .2 / ( .2 + .0375)
   = .842

Or, 84.2%! So, given THIS evidence, our probability shoots up from 25% to 84.2%, nearly 3.4 times as high. And that, as they say, is pretty significant.
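Or, reusing the bayes_update sketch from earlier (and note that .80 / .05 is exactly the Bayes factor of 16 from the sketch above):

bayes_update(prior = 0.25, p_e_h = 0.80, p_e_not_h = 0.05)
# [1] 0.8421053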

Conclusion

Applying Bayes’ reasoning to the problem of new mobility adoption in supply chain management provides a method for focusing analysis, and even for generating quantifiable numbers to compare. Yes, it can seem a bit contrived, but I would argue it’s better than just holding up a finger to test the wind, or blindly trusting everything you read in the news. Also, the process of reviewing and considering “evidence” is valuable in facilitating management dialogue. If you are a vendor or other provider in supply chain autoID, I would recommend you go through this exercise yourself, use your own numbers based on your perspective and market position, and debate your own prediction. Again, it’s not the final numbers, or even the probabilities, that are primary. It is the process of analysis, and the dialogue around the assumptions, that holds the value.

Using Bayes’ process here, we’ve developed a couple of useful insights.   We’ve identified how much weight we’re willing to assign to commonly reported information on mobility technology adoption.  We’ve also highlighted what we would WANT to see as evidence to convince us that 1H2014 is the BIG ramp-up for new mobility hardware in supply chain.   To summarize, we don’t find reporting from standard media outlets as important as other market events we could imagine.

What do you think?

Ok, now it’s your turn.   What do you think will be the impact of new mobility devices on the barcode device market in 1H2014?   What Evidence would you consider significant as an indicator in this evolution?   And, what about Bayes’ analysis?   Any ideas on how to make it more useful, or specific to mobility in supply chain management?   Let us know!

Using R’s Text Mining For Competitive Intelligence Gathering In MDM / BYOD

I wanted to analyze one of the vendors in the mobile device management (MDM / BYOD) industry. The goal is to do a textual analysis of a large store of documents available on the web, in an effort to gain insight into the major themes of the vendor’s marketing mix. This is a highlight of my first attempt.

I pulled down a “Vendor Solutions Overview.pdf” document to work through the initial analysis and start the process. I manually converted the document to text because I have not (yet!) figured out how to read a PDF file into R. I used R’s readLines() to read the file in as a vector of strings, and then followed the steps here (http://www.rdatamining.com/examples/text-mining) to convert the vector to a corpus and begin the exploration.[1,2,3]

I did not stem the words, but did try with and without removing stop words. Removing them was definitely best. Removing the vendor’s name as well as the standard list of English stopwords was very useful.

Once I had my corpus, I built the document term matrix[1,2] and reviewed the frequent terms, along with those terms associated with the word “secure”. Insightful, and useful for continued exploration. Finally, I created a wordcloud. As you can see from the wordcloud, the primary themes here are “secure” and “email”, followed by “mobile device management”. The inclusion of “email” was particularly insightful, as it points to a theme beyond the standard MDM / BYOD literature.
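For reference, here is a minimal sketch of the pipeline I followed, assuming the tm and wordcloud packages. The file name and the “vendorname” stopword are placeholders for the real ones.

library(tm)
library(wordcloud)

# Read the manually converted text file as a vector of strings
txt    <- readLines("vendor_solutions_overview.txt")
corpus <- Corpus(VectorSource(txt))

# Cleanup: lower case, strip punctuation and numbers, remove stopwords
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, c(stopwords("english"), "vendorname"))

# Term-document matrix, then term frequencies for the wordcloud
myDtm <- TermDocumentMatrix(corpus)
freq  <- sort(rowSums(as.matrix(myDtm)), decreasing = TRUE)
wordcloud(names(freq), freq, min.freq = 3)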

> findFreqTerms(myDtm, lowfreq=10)
[1] "email" "secure"
> findFreqTerms(myDtm, lowfreq=8)
[1] "device" "email" "mobile" "secure"

Interesting terms associated with “secure” (pulled with the findAssocs() call sketched after this list) included

  • whitelists
  • blacklists
  • container[ized]
    • Here stemming would have been effective as “container”, “containerized”, and “containerize” all came up
  • attachments
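Those associations came from tm’s findAssocs(); the 0.3 correlation limit here is just an illustrative starting point.

findAssocs(myDtm, "secure", 0.3)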

[Figure: MDM vendor wordcloud]

Next step: pull down multiple docs from the same vendor and create a broader database to analyze them all together. That should give a more comprehensive overview of themes and points for competitive positioning.

  1. http://www.rdatamining.com/examples/text-mining
  2. http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf
  3. http://cran.r-project.org/web/packages/tm/tm.pdf

Using Pareto Analysis in R For Channel Partner Management

Managing a large, global reseller channel can be a daunting task.   Partner segmentation, rewards and incentive programs, and a myriad of other details can make it hard to see the signals amongst all the noise.   I’ve developed a simple Pareto analysis using R that is useful for keeping focus on what / who is really driving sales.[1]

To do the analysis, I pull down a current report of all partners’ revenue year-to-date. Note that this could also be segmented by product, or other factors, if desired. But, for the high level overview, I start with just the complete YTD revenue. In the analysis, each partner will become a separate factor. In order to keep the number of factors down to a usable amount, it’s also useful to pull out a subset of the complete partner base to work on. I segment by geography, but you’ll need to use your own intelligence here for your channel. The point is, analyzing a group of 65 partners can be more insightful than a complete list of 1,000, if it is possible to easily create a segment, or subset.

Once you have your list of partners, with their current sales, the next step is to pull them into R.

> head(sdf)
    Reseller Revenue
 1 Partner 1    1432
 2 Partner 2     252
 3 Partner 3    3000
 4 Partner 4     347
 5 Partner 5      52
 6 Partner 6    4028

To use the pareto.chart function in the qcc package,[2,3] we need to convert this to a named vector, with revenue as the values and the Reseller names as the labels. A little manipulation gets us what we want (see the sketch after the output below).

> head(rev)
 Partner 1 Partner 2 Partner 3 Partner 4 Partner 5 Partner 6
      1432       252      3000       347        52      4028   ………….
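That manipulation can be as simple as building a named vector. A sketch, assuming the sdf data frame from above:

rev <- setNames(sdf$Revenue, sdf$Reseller)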

Now I utilize the pareto.chart in the qcc package to do the analysis for me.
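In sketch form (assuming the qcc package is installed):

library(qcc)

# Draw the Pareto chart; the summary table is also returned invisibly
pareto.chart(rev, main = "Partner Pareto")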

[Figure: Partner Pareto chart]

Looking at the chart, I can see directly which resellers comprise the top 80% of my channel revenue.[3]   However, this is easier to see using the text output in R.

Pareto chart analysis for rev
               Percentage Cum.Percent.
   Partner 28  13.837      13.837
   Partner 7   8.017       21.854
   Partner 12  6.017       27.871
   Partner 50  5.131       33.002
   Partner 35  3.662       36.664
   Partner 64  3.336       40.000
   Partner 58  3.309       43.309
   Partner 43  3.300       46.609
   Partner 61  3.293       49.902
   Partner 44  3.271       53.173
   Partner 16  3.228       56.401
   Partner 42  3.222       59.623
   Partner 20  3.133       62.756
   Partner 26  2.748       65.504
   Partner 27  2.612       68.115
   Partner 6   2.576       70.691
   Partner 54  2.474       73.165
   Partner 33  2.292       75.457
   Partner 62  2.155       77.612
   Partner 65  2.142       79.754
   Partner 3   1.918       81.673
   Partner 53  1.820       83.493

Here, I’ve removed a couple of the columns from the output to focus on the percentages. The columns I removed were the revenue for each partner and the cumulative revenue, which will be specific to the individual sample. The cumulative percentage column is the most useful, as it shows what percentage of revenue is contained in the top producing partners. Here, you can see that only ~20 partners (out of 65 total) are generating ~80% of this group’s revenues. Not quite 80:20, but the concept is the same. You can set the cumulative percentage threshold at whatever level is desirable. The point being, this makes it quite easy to determine which of your partners are the top producers, and how much revenue they actually account for.
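Since pareto.chart() returns that summary table (invisibly), pulling out the top producers at any threshold is straightforward. A sketch, using the Cum.Percent. column shown above and 80 as my chosen threshold:

pc  <- pareto.chart(rev, main = "Partner Pareto")
top <- rownames(pc)[pc[, "Cum.Percent."] <= 80]
length(top)   # ~20 of the 65 partners in this sample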

Now that I have a list of who the top producing resellers are for this year, I can play the same game for each individual reseller’s revenue.  That being, I can generate a report showing all the deals for any one reseller, and then use the Pareto analysis to determine which deals, (or customers), were responsible for their top sales of my product.

[Figure: Pareto chart of one partner’s customer revenue]

Pareto chart analysis for ex.partner.rev
   Customer Percentage Cum.Percent.
   K1       17.693       17.693
   J1       14.016       31.709
   I1       12.327       44.036
   H1       12.040       56.076
   G1        7.517       63.593
   F1        6.940       70.533
   E1        5.773       76.306
   D1        4.266       80.572

Here I see the top 8 customers, (or deals), that comprise 80% of this Partner’s revenue for this year.   (Again, I’ve removed the Frequency and Cumulative Frequency columns for brevity.)

At this point, I know who my top producing partners are, and what customers / deals generated most of their sales.   This gives me a way to easily hone in on who is producing, and where, and provides feedback to product development, as well as all our other marketing activities.   This also opens the door to the development of a lot more useful insights, and insightful questions.   Consider the following:

  • What differences, (if any), exist between the top producers and the rest?
    • Are there factors that can be identified that show positive correlation in sales for the top producers versus the rest?
      • Industry, customer type, geography, sales process, etc, etc?
  • How does this same analysis break down by product?
    • Are certain products doing better with certain partners, or customers, than others?
  • Profiling the top deals by Partner
    • What made those top deals happen?   What was the customer value and the ROI?   Can that be replicated across other customers / industries, etc?   Can other deals be “grown” based on this feedback?

Using Pareto analysis in R is a fast, easy way to hone channel partner program management by highlighting where the bulk of my product revenue is coming from.   It then goes further in providing the ground for developing the next set of insights to further understand what is selling via the channel.

One last note (for math geeks only). The sorted revenue from these lists (Partner, or Partner’s Customer) can be fit to an exponential decay curve. This would be useful if you wanted to take this to the next step and develop a more formal mathematical model of your channel sales. For more information, see [4]. I have done this, and it can be insightful. The most important aspect for me was comparing the decay parameter in the y = k*exp(B*x) model (the B[eta] parameter). Different Partners will have different B values based on the steepness of their slope, which is a direct indicator of how many deals they have to do to attain the 80% threshold (or whatever threshold you like to set). Partners with a steeper slope require fewer deals to get to the threshold; those with a shallower decay require more. And, a little calculus will lead to a direct model for calculating the number of deals required for any slope parameter:
x >= ln(0.8*exp(B*TotalDealCount) + 0.2) / B
This offers another way to compare Partners, but I have not found it useful in simple daily analysis.
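For completeness, here is a sketch of that fit in R, using a simple log-linear regression as a stand-in for a full nls() fit. It assumes all revenue values are positive.

# Sorted revenue, with x as the deal (or partner) rank
y <- sort(rev, decreasing = TRUE)
x <- seq_along(y)

# Fit y = k*exp(B*x) by regressing log(y) on x
fit <- lm(log(y) ~ x)
B   <- coef(fit)[["x"]]   # decay parameter (negative for decaying revenue)

# Deals needed to reach 80% of cumulative revenue, per the formula above
N <- length(y)
ceiling(log(0.8 * exp(B * N) + 0.2) / B)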

  1. http://en.m.wikipedia.org/wiki/Seven_Basic_Tools_of_Quality
  2. Scrucca, L. (2004). qcc: an R package for quality control charting and statistical process control. R News 4/1, 11-17.
  3. http://nicoleradziwill.com/r/pareto-charts-75-925-copy.pdf
  4. http://www.physics.pomona.edu/sixideas/labs/LRM/LR13.pdf

Fixing Partner Management At The Field Level…

Reading CRN’s post, “Where The Rubber Meets The Road: How To Fix Field Engagement”.

I’m pleased to say that my company does NOT suffer from most of the maladies described in this article. What we’ve done that alleviates much of what is discussed in the article is to have the partner manager(s) (PAM, CAM, etc.) engage at the strategic level, and leave the pipeline / deal engagement to the field sales team. So, channel management is focused on training, certification, business development, marketing, etc., and the field sales teams are responsible for working with their partner counterparts to prospect, qualify, etc., up to closure.

This separation allows channel management to focus on what the partner wants in order to drive their business, while at the same time keeps the vendor sales team tied in to the deal.  This has the added advantage of alleviating the partner concerns around the vendor coming in to “steal their deal”, since the two teams are working together.

It ain’t perfect, trust me.  But, it does eliminate issues discussed in CRN’s article.


Mr. Pareto And Channel Partner Revenue

I was doing a quick check-up on channel partner revenues year-to-date (YTD). A quick frequency distribution in R produced the following histogram.

[Figure: Distribution of channel partners by revenue]

This appears to be a standard case of the ol’ Pareto Principle applied to channel partner revenue. Effectively, a large portion of partners each generate a little bit of revenue, and more importantly from the management perspective, a few partners (~26%, or 17 of 64) are responsible for most of the revenue (>80%).

Identifying these 17 out of 64 partners is important for rewards and further evaluation. Most importantly, what have these 17 done to be so effective this year, and how can a good channel manager help the other 47 do better next year? (Sharing is caring!)

Mathematically (and for R programming purposes), this is an exhibit of the Pareto PDF, and parameterization could proceed using that model.[1,2,3,4] Note that this also falls under the category of the “newer” long-tail distribution models, and the emerging analysis on long-tailed distributions.[5] Finally, there is significant research and modeling on this topic in Quality Control, where the Pareto chart is listed as one of the “7 basic tools of quality control”.[1] In this case, “quality control” involves revenue management and growth across channel partners.

Next steps here should be to create estimators for the Pareto PDF, and use R’s curve() function to verify the fit and its associated PDF.[4]
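As a sketch of where that could go, the standard maximum-likelihood estimators for the Pareto distribution are simple enough to hand-roll, and curve() can overlay the fitted density on the histogram. Here, rev stands for the vector of partner revenues (assumed positive).

# MLE for the Pareto distribution: xm = min(x), alpha = n / sum(log(x / xm))
xm    <- min(rev)
alpha <- length(rev) / sum(log(rev / xm))

# Pareto PDF, overlaid on the revenue histogram
dpareto <- function(x, xm, alpha) alpha * xm^alpha / x^(alpha + 1)
hist(rev, freq = FALSE, main = "Distribution of channel partners by revenue")
curve(dpareto(x, xm, alpha), from = xm, to = max(rev), add = TRUE)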

  1. http://whatis.techtarget.com/definition/Pareto-chart-Pareto-distribution-diagram
  2. http://betterexplained.com/articles/understanding-the-pareto-principle-the-8020-rule/
  3. http://en.m.wikipedia.org/wiki/Pareto_principle
  4. http://en.m.wikipedia.org/wiki/Pareto_distribution
  5. http://en.m.wikipedia.org/wiki/Long_tail