Automation is just one facet of the broader spectrum of AI and machine intelligence. Yes, it's going to affect us all (it already is, with the increasing emergence of intelligent agents and bots), but I think there is a far deeper issue here that – at least for the majority of people who haven't become immersed in the "AI" meme – is going largely unnoticed. I mean the very nature of human knowledge and how we understand the world. Machines are now doing things that – quite simply – we don't understand, and probably never will.

I think most of us are familiar with the DIKW model (an over-simplification if ever there was one), but if you subscribe to this relationship between data, information, knowledge and wisdom, I think the top layers – knowledge and wisdom – are getting compressed by our growing dependence on the bottom two layers – data and information. What will the DIKW model look like in 20 years' time? I'm thinking barely perceptible "K" and "W" layers!

If you think this is a rather outrageous prediction, I recommend reading this article from David Weinberger, who looks at how machines are rapidly outstripping our puny human ability to understand them. And it seems we're quite content with this situation: being fairly lazy by nature, we're more than happy to let them make complex decisions for us. We just need to feed them the data – and there's plenty of that about!

This quote from the piece probably best sums it up:

As long as our computer models instantiated our own ideas, we could preserve the illusion that the world works the way our knowledge – and our models – do. Once computers started to make their own models, and those models surpassed our mental capacity, we lost that comforting assumption. Our machines have made obvious our epistemological limitations, and by providing a corrective, have revealed a truth about the universe.

The world didn't happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.

We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

Should we be worried? I think so – do you?

The post The depreciating value of human knowledge appeared first on Communities & Collaboration.

Original source – Steve Dale online
