On Tuesday, March 19th, Alan Jacobs posted a technology article for The Atlantic titled “Jobs of the Future: A Skeptic's Response.” In it, he voices doubt that the skill set promoted by the internet and social networking will usher in a new wave of future employment:
‘Where are these jobs that will require such rapid "searching, browsing, assessing quality, and synthesizing the vast quantities of information"? We don't need those skills to drive a truck or manage company accounts or sell clothes or do IT customer service or write novels or write code or give inoculations to patients or teach seven-year-olds how to read ... so what do, or what will, we need them for? And how many of us will need them?’
We might not need those skills to drive a truck, but if you are responsible for a whole fleet of trucks, you may need to search a database that tracks every truck’s location and cargo. Online clothing retailers have huge customer databases that they can use to target promotions at the customers they think will be most responsive. A typical call to IT tech support goes to a first-tier agent, whose first task is to request your information so they can find you in a database. Then, when it comes down to the actual support, if they don’t know how to fix your problem, they search through forums, help documents, and bug databases to try to find an answer.
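To make that fleet example concrete, here’s a minimal sketch of the kind of search a fleet manager might run. The schema, truck records, and query are all invented for illustration:

```python
import sqlite3

# Build a tiny in-memory fleet database (hypothetical schema and data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trucks (id INTEGER, location TEXT, cargo TEXT)")
conn.executemany(
    "INSERT INTO trucks VALUES (?, ?, ?)",
    [(1, "Denver", "produce"), (2, "Omaha", "electronics"), (3, "Denver", "lumber")],
)

# The fleet manager's 'searching' skill, expressed as a query:
# which trucks are currently in Denver, and what are they carrying?
rows = conn.execute(
    "SELECT id, cargo FROM trucks WHERE location = ?", ("Denver",)
).fetchall()
print(rows)  # [(1, 'produce'), (3, 'lumber')]
```

Trivial at this scale, but the same skill applied to thousands of trucks is exactly the kind of information work Jacobs is skeptical about.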
In my own field of bioinformatics, the trend toward “searching, browsing, assessing quality, and synthesizing the vast quantities of information” is driving a shift of focus (more on that in a future blog post) that is real science, not science fiction. DNA sequencing machines have advanced so rapidly that their progress has broken Moore’s Law, and we can now sequence an entire human genome in a ‘matter of weeks’.
All of these facts point to a single trend: our ability to produce data is outstripping our ability to understand it. In fact, the need to make sense of these mountains of information is so great that it’s given rise to one of the hottest interdisciplinary fields on the market: data mining and predictive analytics.
Data mining, loosely defined, is the process of analyzing data and shaping it into useful information. But how does a data miner know which piece of data matters and which doesn’t? They must cleave through those mountains of information in order to find data diamonds, and that requires the skills that Alan Jacobs questions.
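As a toy illustration of shaping data into useful information, here’s a sketch of frequent-pair mining over shopping baskets. Everything in it, the baskets, the products, and the threshold, is made up; real data miners work at vastly larger scale with more sophisticated algorithms:

```python
from collections import Counter
from itertools import combinations

# Invented shopping baskets: which products are bought together?
baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"butter", "jam", "milk"},
    {"bread", "butter", "milk"},
]

# Count how often each product pair co-occurs in a basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Keep only the pairs seen in at least 3 baskets -- the 'diamonds'.
frequent = {pair: n for pair, n in pair_counts.items() if n >= 3}
print(frequent)  # {('bread', 'butter'): 3}
```

The filtering step is the point: most of the pairs are noise, and the miner’s job is knowing which threshold separates the diamonds from the rubble.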
Later in his article, he calls on employers of the future to seek individuals capable of slow, patient, and careful thought. Rest assured, these skills will not fall by the wayside. Let’s set aside the fact that we’ll always need this kind of thought to drive innovation. This kind of ‘deep thought’ (and no, I don’t mean the Jack Handey variety) is the next logical step after data mining, and it is called predictive analytics. Using a statistical skill set, analysts try to predict future trends in everything from business forecasting to political unrest.
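A minimal sketch of the statistical flavor, using an invented series of quarterly sales figures: fit a least-squares trend line to the past observations and extrapolate the next point. Real predictive analytics uses far richer models, but the core idea is the same:

```python
# Hypothetical quarterly sales figures (in thousands of dollars).
quarters = [1, 2, 3, 4, 5]
sales = [10.0, 12.0, 13.0, 15.0, 16.0]

# Ordinary least-squares fit of a line through the data.
n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(sales) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, sales))
    / sum((x - mean_x) ** 2 for x in quarters)
)
intercept = mean_y - slope * mean_x

# Extrapolate the trend to quarter 6.
forecast = slope * 6 + intercept
print(round(forecast, 2))  # 17.7
```

The deep thinking comes in deciding whether a straight line is even the right model, which is precisely the slow, patient work Jacobs wants employers to value.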
In the field of bioinformatics, we produce more genetic sequence data than we can analyze. The fact is that before we can do any deep thinking about this data, we have to mine out the really difficult problems, separating the wheat from the chaff so we can turn what remains into diamonds. I suspect this trend applies to just about any field that collects data, and what field in this century doesn’t? As the cliché goes, knowledge is power, and this is one cliché that proves truer day by day.
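For instance, here is a hypothetical, heavily simplified version of the kind of filtering that happens before any deep analysis of sequence data: keep only the reads whose average quality score clears a cutoff, so the downstream thinking works on wheat rather than chaff. The reads, scores, and cutoff are all invented:

```python
# Invented sequencing reads paired with per-base quality scores
# (higher is better, in the spirit of Phred scores).
reads = [
    ("ACGTACGT", [30, 32, 28, 31, 29, 33, 30, 31]),
    ("TTGACCAA", [12, 10, 15, 11, 9, 14, 13, 12]),
    ("GGCATTCG", [25, 27, 26, 24, 28, 25, 27, 26]),
]

MIN_MEAN_QUALITY = 20

# Discard reads whose mean quality falls below the cutoff.
kept = [
    seq for seq, quals in reads
    if sum(quals) / len(quals) >= MIN_MEAN_QUALITY
]
print(kept)  # ['ACGTACGT', 'GGCATTCG']
```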
So what do jobs of the future look like? Yes, we’ll still need truck drivers, teachers, and novelists. I won’t argue that their core competencies will include data searching, though they can all benefit from these skills. For the growing workforce that deals with data, however, these skills are mandatory, and of those people, it will be the deep thinkers who make that data precious.