Big Data analysis

Counting the beat


With numbers, leverage matters more than materiality.
‘Data-driven decision making’ is only better if you collect and merge the right kinds of data.
Sometimes you need a stand-alone crystal ball, not a fully-integrated computer system.
Absolute value changes and percentage changes are like heads and tails on a coin: you need both to get the full value.
The plural of anecdote isn’t data.
Accountants count the cost when strategists fail to differentiate themselves.
Behind most good text and noble speeches are great numbers.


Big Data versus Design

Big Data and Big Data analytics are highly topical in business writing at present. Start-up companies and multi-national corporates alike try to exploit the data they generate to:

  • gain customer insight,
  • support a plan for blockbuster innovation,
  • study history,
  • run forecasting simulations,
  • do virtual prototyping.

In government, senior policy makers talk about developing evidence-based policy (to excuse inaction?) while waiting for the perfect set of evidence (data) to come along.

It may be that Big Data analytics gets somewhat over-hyped as the engine of progress, and that design (for product innovation, or for devising novel solutions to new business problems where history doesn’t show us the answer) is at least as important. Some examples of history not showing us the answer? Evolution generally (biological species, synthetic biology, business strategy) and the emergence of artificial intelligence in systems.

Design of course relies on a group of human techniques, including discussion, brainstorming, imagination, intuition, reverse-engineering, provocations, thought leadership and lateral thinking. In the main, they don’t sound very business-like, but try delivering significant business innovation without them!

In conclusion, exploiting Big Data is necessary. Encouraging great design, aided by Big Data, is sufficient.

Jobs, AI and business models

I just read a really interesting article on McKinsey’s website, ‘Artificial intelligence meets the C-suite’, in which some leading business academics discuss the implications of rapidly advancing artificial intelligence for conventional organisational structures run by senior executives. http://www.mckinsey.com/Insights/Strategy/Artificial_intelligence_meets_the_C-suite?cid=mckq50-eml-alt-mkq-mck-oth-1409

Rather than review the article, here are some follow-on points to consider.

  1. In a future world influenced, if not dominated, by AI and hyper-competition, will the strategic goal of capturing ‘sustainable competitive advantage’ become one of ‘maintaining competitive advantage’, with advantage mostly gained by using data and cutting-edge analytical techniques?
  2. Will most future companies become more like MI5/MI6, with gathering and analysing data comprising most of the work, and then acting in very specific ways once insight is gained?
  3. With the rise of AI, will a growth job for human managers be to spend ever more time making judgements about whether to develop & deploy staff, versus commissioning AI to create/deliver products & services?
  4. Will next-generation business process reengineering (BPR) instead become AI BPR?
  5. With the rise of AI, will ‘efficiency in limited-scope environments’ dominate over ‘inefficiency in wide-scope environments’, causing entrepreneurs to move their business models into that space? Some examples:
  • to base their business model on data expertise (and rapidly go where the data takes them), not (staff) domain expertise,
  • to simplify (value chain) negotiations,
  • to simplify the challenge of motivating & leading staff,
  • to simplify the need to gain political consensus,
  • to balance internal data analysis (on costs, internal resources & activities) with external data analysis (on markets).

Clash of the titans – Big Data and the Internet of Things

Forecasting & trading models appear to become ever more capable. They crunch ever-bigger datasets (the age of Big Data), manage market trades and even shape data-driven public policy.

Meanwhile, the Internet of Things is computerising ever more devices to control the timing of service provision to us, often while the how of the service remains a mystery to us.

At some point, will human choices be sacrificed between these two complexifying forces as they progressively control our world for us?

Even without the rise of high-performance computing, the age of Big Data and the Internet of Things, coalition politics (at both the UK and EU levels) appears to be putting the brakes on implementing effective change. So as parliamentary change management slows down while digital action speeds up, where are these changes taking us as a society?

Will voter apathy rise further, and will we escape en masse to the world of mall shopping, computer games, reality TV, YouTube home videos and sport on the terraces?

Lastly, the irony of the social network might be when people start using it to help make sense of the device social network (the Internet of Things), as its sociability quickly overtakes our own…

Information & Communications Technology

Data has travelled halfway around the world before data integration has got its boots on.

Open the pipes to let the water flow. Open the system interfaces to let the data flow.

Big Data and Human Creativity – the twin elements of modern-day progress.

Data privacy exists if you can directly restrict data’s ability to mingle with other data. The rest is illusion.

Technology spreads rumour, hype and gossip just as fast as it spreads facts. Don’t confuse latest tech with greatest accuracy.

Coaches and players

I recently read something interesting in Nate Silver’s book ‘The Signal and the Noise: The Art and Science of Prediction’ about (elite chess) players taking the best of three different computer models to win the game. For them, the task was less about being a player and more about coaching the best contributions from the models, which they could then use.

Moving from chess to university research, is this a glimpse of how future university research will be done (develop multiple models that analyse the same vast dataset, then select from them individually or in combination)? One implication is that demand for big data analysis in this sector will explode as models proliferate like virus mutations.
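
To make the ‘coach’ idea concrete, here is a minimal sketch of that workflow: fit several candidate models on the same data, score each on a held-out split, then select the best or blend them. The synthetic dataset and the particular models (scikit-learn ones) are my own illustrative assumptions, not anything from Silver’s book.

```python
# Sketch: 'coaching' several models on the same dataset, then selecting/blending.
# Dataset and model choices are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "ridge": Ridge(alpha=1.0),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "boosting": GradientBoostingRegressor(random_state=0),
}

# 'Coach' each model, then judge it on data none of them was trained on.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = r2_score(y_val, model.predict(X_val))

best = max(scores, key=scores.get)
print("validation R^2 by model:", scores)
print("selected model:", best)

# A simple combination: average all three models' predictions,
# which can sometimes beat the single best model.
blend = np.mean([m.predict(X_val) for m in candidates.values()], axis=0)
print("blended R^2:", r2_score(y_val, blend))
```

The design point worth noting is that selection happens on data none of the models saw in training; picking the ‘best’ model on its own training data rewards overfitting, not skill.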

We’re already living in the age of ‘Big Data’ analysis, with researchers crunching massive datasets to uncover relationships and test out their theories. At the same time, statistical theory continues to remind us that correlation isn’t the same thing as causation.

So although the historical data is real (or as real as we can get it, using our best available technology to capture it), how much of the resulting output is ‘real’ because of the equations alone, versus because of the equations plus the quality of the programming code? To elaborate: even if a researcher does (unknowingly) formulate the perfect, lengthy set of equations to model something observable, how much is inadvertently ‘lost in translation’ by the data-analysis coders? On a related note, perhaps our rate of innovation throughout history has been faster than we realised; it’s just that our rate of proof of concept has been slow, since people lacked good tools to test the theories.
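
As a hypothetical illustration of that ‘lost in translation’ risk (not drawn from any real study), here is one textbook equation, the variance, coded in two algebraically identical ways. When the mean is large relative to the spread, the one-pass shortcut quietly loses almost all of its accuracy.

```python
# The same variance equation coded two ways. The one-pass shortcut
# E[x^2] - E[x]^2 is algebraically identical to the two-pass form,
# but suffers catastrophic cancellation when the mean is large.
import numpy as np

x = np.random.default_rng(0).normal(loc=1e8, scale=1.0, size=100_000)

naive = np.mean(x**2) - np.mean(x)**2   # catastrophic cancellation
stable = np.mean((x - np.mean(x))**2)   # numerically stable two-pass form

print("naive one-pass variance: ", naive)   # can be wildly wrong, even zero or negative
print("stable two-pass variance:", stable)  # close to the true value of 1.0
```

The equations were ‘perfect’ in both cases; only the translation into code differed.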

Finally, should we take the view that although correlation isn’t causation, various clusters of correlations can be modelled, with the best cluster acting as a proxy for causation?
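
On synthetic data, that idea might look something like the sketch below: group variables into clusters by their mutual correlation, then rank each cluster by its average correlation with the outcome. Every variable, threshold and clustering choice here is an illustrative assumption, and the winning cluster is a proxy for, not proof of, causation.

```python
# Sketch: correlation clusters as a proxy for causation, on synthetic data.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n = 1000
driver = rng.normal(size=n)  # a hidden common 'cause'
df = pd.DataFrame({
    "x1": driver + rng.normal(scale=0.3, size=n),  # cluster tied to the driver
    "x2": driver + rng.normal(scale=0.3, size=n),
    "x3": rng.normal(size=n),                      # unrelated noise
    "x4": rng.normal(size=n),
})
outcome = pd.Series(2 * driver + rng.normal(size=n))

# Hierarchical clustering on correlation distance (1 - |corr|).
dist = 1 - df.corr().abs()
labels = fcluster(linkage(squareform(dist.values, checks=False), method="average"),
                  t=0.5, criterion="distance")

# Rank each cluster by its mean |correlation| with the outcome.
strength = {}
for k in set(labels):
    cols = [c for c, lab in zip(df.columns, labels) if lab == k]
    strength[tuple(cols)] = df[cols].corrwith(outcome).abs().mean()

best = max(strength, key=strength.get)
print("cluster strengths:", strength)
print("best cluster (a proxy for, not proof of, causation):", best)
```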

Why does this matter? Apparently, various published academic research results are ‘false positives’, i.e. hard for an independent set of researchers to repeat with identical results. The more this happens, the more it starts to debase all research findings, at least in the eyes of research-grant funders, who grow more sceptical about what they’re really funding. Furthermore, where those grant funders get their funding from fundraising activity (charities, biotech companies issuing shares, research councils asking central government for more funding), the upstream donors also become increasingly sceptical.

If, instead, leading researchers were more honest in their published articles (supported by the university establishment and its incentive structures) about declaring the best correlation-cluster model found as a proxy for real causation, perhaps society’s expectations of researchers (as coaches coaxing out approximations, not lab boffins uncovering ultimate scientific truth) would become more realistic.