Industry wisdom tells us “don’t boil the ocean” – “one bite at a time” – “a journey of a thousand miles begins with a single step.”
And we get it. IBM estimates that 2.5 billion gigabytes of data are created every day. Major corporations control hundreds of petabytes – little of which has business value, most of which may become subject to a discovery request, and all of which is damaging if lost or stolen.
Faced with this landscape, it’s hard to do anything except think about what is in front of you. That said, oceans are warming (not boiling, yet), Americans eat their body weight in meat every year, and I drove 3,456 miles over Christmas.
With this in mind, we are asking a question: what would it take to process and classify every gigabyte of data in your organization?
Here are some ideas on what it might take:
· A cost structure in which the marginal cost of processing each additional gigabyte decreases
· Technology that reliably handles many (thousands of) different file and data types
· The right kind of team, and the right kind of workflow
· A dashboard for tracking utilization and capacity at a granular level
· A classification schedule that can be streamlined – and a classification engine that works
· Leadership that can drive utilization through advocacy, adoption, and persistence
In a world where it costs hundreds of dollars to process, analyze, and review every gigabyte of data, big data means big pain.
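To see why the cost structure matters so much at this scale, here is a back-of-envelope sketch. All of the figures are hypothetical assumptions for illustration (a $100-per-gigabyte flat rate, a 100-petabyte corpus, and a pricing curve where the rate halves as volume tiers widen), not real vendor pricing:

```python
# Back-of-envelope sketch of per-gigabyte economics at scale.
# All figures are hypothetical assumptions, not actual pricing.

CORPUS_GB = 100 * 1_000_000  # 100 petabytes, expressed in gigabytes

# Flat-rate model: every gigabyte costs the same to process and review.
FLAT_RATE = 100.0  # assumed $100 per GB
flat_total = CORPUS_GB * FLAT_RATE

def tiered_total(gb, base_rate=100.0, tier_size=10_000, decay=0.5):
    """Decreasing-marginal-cost model: sum the cost tranche by
    tranche, halving the per-GB rate at each tier while the tiers
    themselves widen tenfold as volume grows."""
    total, rate, remaining = 0.0, base_rate, gb
    while remaining > 0:
        chunk = min(tier_size, remaining)
        total += chunk * rate
        rate *= decay       # each successive tranche is cheaper
        tier_size *= 10     # and each tier covers more volume
        remaining -= chunk
    return total
```

Under these assumed numbers, the flat-rate model prices the corpus at $10 billion, while the decreasing-marginal-cost curve lands more than an order of magnitude lower. The exact figures are made up; the point is that at petabyte scale, the shape of the cost curve dominates everything else.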
But that doesn’t have to be the world we live in.