I wrote and rewrote this article. Frankly, writing about ChatGPT seemed like an intelligent exercise at first; it made me question things. And then, faced with the litany, the deluge of commentary, it began to seem completely pointless. Especially since over time the feedback has been: it can produce a mass of correct answers, but insipid ones, just like the flow of low-quality content flooding our media. Perhaps, to invoke Nassim Nicholas Taleb’s concept, this artificial intelligence has a major flaw: it has no skin in the game; none of its answers commits it to anything.

However, I’m revisiting this article to highlight the difference between knowing, wanting, and being able to, a distinction that seems to emerge strikingly around ChatGPT, and one that echoes the civilizational challenges we’re living through.

So yes, everyone is talking about it, about the ChatGPT artificial intelligence. I haven’t tried it, and I won’t try it, as a matter of conviction (Peter Thiel and Elon Musk as initial investors, an open-source ambition quickly forgotten and closed off, investment from Microsoft).

But I’m not blind: it’s going to have a major impact, particularly on my work as a manager, leader, and coach.

Knowing

In my opinion, what does this artificial intelligence demonstrate?

That knowing is often not the issue. I have no doubt that it can tell us, in sufficient detail, how to organize, reorganize, and communicate in this or that organization. That it can model complicated and complex systems and tell us the overall impact of each small change, its repercussions, or the speed at which habits, knowledge, and so on propagate. It would bring out all the dynamics of dependencies, couplings, and the like.

It could allow us to run simulations in complex systems in real time, at the pace of emergence.

But knowing has never fundamentally been the main concern of management, leadership, or coaching.

We’ve been observing for many years that organizations, associations, and companies, for the most part, move more slowly than our knowledge of how to organize, how to manage, and so on. And that’s normal.

Wanting

But do we want what we know? And do we know what we want? (You have two hours).

Until now, what has slowed organizations down is not knowing; it is clarity in wanting.

For an artificial intelligence to answer us, we still need to ask it the real questions that plague us, and feed it the “real” information.

We can already see that talking about “real” information is a problem in itself. Is what is true for one person true for another?

Are we ready to tell it: OK, propose a reorganization, but don’t forget my career plan or my compensation? Do we want to explain to the artificial intelligence that Mireille wants Jean’s position, but that Cyril is lying in wait to take it unless Antoine fails his current project? And that is the point of view of one person in the system; another might say something else.

This raises plenty of politico-philosophical questions: I want to reorganize, but for what? For value? Then let’s define value. Value at the level of the individual, the team, the organization, a set of organizations, a civilization? The answers will take completely different forms.

The artificial intelligence answers a question. A question seeks to resolve something.

The answers proposed to resolve a question seem to me necessarily political.

To come back down to earth: in our coaching or in management, the problem is generally not knowing, but clarifying what we want, why we want it, and where we wish to go. Am I stating the obvious? Yes, but I’m pointing out that artificial intelligence won’t be able to solve this for us.

Being able to

More obviously, wanting is not necessarily being able to.

Here we find all the usual issues: change management, communication, emergence, adaptation, and so on, so I won’t dwell on the question. I’ll just remind you once again that wanting is not being able to. And as we’ve seen, knowing is neither wanting nor, therefore, being able to.

Politics

So an artificial intelligence gives us answers without grounding, or theoretical answers (and I like theory): very well. That it can probably pass coaching certifications with flying colors: so much the better. That its proposals commit it to nothing (no skin in the game) and that it spouts Gandhi quotes of the highest order: even better. A whole crowd of coaches and managers may indeed have cause to worry.

For now, what it highlights is that its answers are necessarily the interpretation of a vision of civilization. We should ask ourselves: which vision? Which civilization?

But isn’t that always the question we ultimately ask ourselves?