There's Always Been Too Much Information

We continue to hear claims that the flood of information can be brought under control by AI and machine learning solutions.

I can see at least three impossibilities in this plan, each borne out so far by the empirical record:

  1. The idea that too much information is a recent trend is easily refuted and historically short-sighted.

  2. The idea that we would trust AI and machine learning to control what we receive and know is impractical and potentially dangerous.

  3. The assumption that information from AI and machine learning would be a substitute and wouldn’t simply represent more information is illogical.

To the first point, I remember reading through some of the archives of a journal from the mid-19th century, and coming across an article in which an outraged and exhausted physician was decrying the sheer avalanche of information he was facing at the time. It was striking that the complaints went back that far, and that we were adding these 150-year-old complaints to the digital information ecosystem, compounding the very problem this long-deceased physician had found so frustrating.

Feeling overwhelmed by information is an ancient problem. What do people do when they’re overwhelmed by information? They either give up, or they stack and sort. Think back to the library at Alexandria, and you realize that information availability has long exceeded any individual’s ability to consume it. While the problem has become more acute and magnified over time, it is still the same problem. It’s not even clear that the scale of non-transient information relative to the population is far greater than it was back in the days of Alexandria’s library.

Now we come to the belief that machines can be trusted. Let’s walk through that idea. If AI and machine learning tools are processing and synthesizing information, they will be making new claims and connections, which we will need to test and validate. It would be foolish to trust these assertions without question. There is already a Wikipedia entry entitled “Death by GPS,” outlining the fatalities that have resulted from people following their GPS directions to their doom. At this point, computer code and algorithms are prone to pursuing unanticipated paths and making connections that are still bewildering and inhuman. Even if AI is 50%+ accurate or reliable or useful, we won’t know which 50% we’re dealing with, so we’ll have to check everything.
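The “which 50%” point can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical numbers and a deliberately simple assumption (errors spread uniformly across outputs, so none can be identified in advance) to show why, at any meaningful error rate, anything short of checking everything leaves errors in circulation:

```python
# A minimal sketch of the verification problem described above.
# All numbers are hypothetical illustrations, not measurements.

def expected_undetected_errors(n_outputs: int, accuracy: float,
                               fraction_checked: float) -> float:
    """Expected number of wrong AI outputs that slip through unchecked.

    Assumes errors are uniformly distributed across outputs -- we
    cannot tell in advance which outputs are the wrong ones.
    """
    error_rate = 1.0 - accuracy
    unchecked = 1.0 - fraction_checked
    return n_outputs * error_rate * unchecked

# A system that is right half the time leaves a large residue of
# errors unless every output is checked.
print(expected_undetected_errors(1000, accuracy=0.5, fraction_checked=0.0))   # 500.0
print(expected_undetected_errors(1000, accuracy=0.5, fraction_checked=0.75))  # 125.0
print(expected_undetected_errors(1000, accuracy=0.5, fraction_checked=1.0))   # 0.0
```

The only way to drive the residue to zero in this toy model is to set the checking fraction to 1, which is the essay’s point: verification work scales with the whole output, not just the wrong part.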

Checking the outputs of AI and machine learning will add to the amount of information we have. Some of that checking will feed back into the AI and machine learning, helping to refine it, but the net result will certainly be more information. More fundamentally, AI and machine learning (and whatever comes next) will need inputs, which means we will want more unprocessed information, or at the very least the same amount, still growing at the rate it’s growing now.

So, AI and machine learning won’t solve the “too much information” problem; they will only add to it. Which raises the question: Is “too much information” really a problem? Do we really want a world where we’re spoon-fed answers by authoritative AI and machines?

The collapse of local newspapers is an example of the instability and problems that can be created when we have too little information. Who knows what some city councils are doing? Who knows what some local tax authorities are planning? Who knows what’s going on in local jails and detention centers? We don’t have enough information about many of these things now.

AI and machine learning represent a new branch of humans pursuing sufficient information to continue their dominance of the planet and to plan their futures so they can defeat pain and death. Those are the real pursuits of humanity. Eliminating information is not part of this. In fact, we may be so far from our goal because we still don’t have the information we need or the best ways to manage it.

There isn’t too much information. We may need more and better information. As usual, it seems the real problems are more subtle than solution architects and strategists are willing to believe.