It all started with a small discomfort. I felt less confident than usual, less bold; I questioned my decisions more often and aimed lower. Usually, I’d rather push myself a bit than finish something feeling like I could have done more.
My decisions started to feel like an internal struggle between what appeared to be the logical step and what would match my personality. This, of course, led to nothing good. After all, how could I own a decision that doesn’t feel like me at all?
At that time, I was entering new domains, extending my circles a lot, and using a new tool, AI, to help me navigate the new waters. I had a hunch that my discomfort originated from my use of the new tool; after all, it was the only new element in my approach.
To test my theory, I switched to a different provider. The other model gave significantly bolder suggestions, which aligned much better with my personality. It didn’t take long for the feeling that I wasn’t myself anymore to go away.
Looking back, my mistake was accepting what the AI suggested on topics I wasn’t familiar with; I let it make the decisions for me. Instead, I should have had it outline the reasoning, thought it through critically, as I usually do, and made the decision myself.
Of course, it’s easy to be wise in hindsight, but at the time, I had no idea what the consequences of my actions could be. I caught it early and managed to fix it, but it is very clear where it would have led if left unchecked.
Had I kept outsourcing my decisions like that, they would have felt increasingly foreign. I would have kept accepting the AI’s suggestions and trying to apply them. As I went deeper and deeper into a topic, my ability to follow the decision-making process and reason about the logic behind it would have counted for nothing, since I lacked the necessary understanding of the topic at hand.
Lacking that understanding, if anything went wrong, I’d have had no option but to assume I hadn’t applied the suggestion well enough. At that point, I could not have stopped using AI and started making my own decisions, because I would have been deep into an unfamiliar topic. My only options would have been to keep going and keep trusting the AI, in other words to let the whirlpool keep sucking me in, or to accept the failure and start over, if I still had that option.
Even if I had finished the project successfully by relying on external judgment, it would have come at a huge price: I would not have been in control. Building something is usually not the hardest part. The biggest challenge is maintaining it without losing sight of what defines it. How could I ever be in charge of something I don’t understand? How could I respond to changes, problems, crossroads, or external doubt? I could only ask the AI, but at that point, is it still the tool and I the user, or have the roles already switched? What exactly would my value be, what would I bring to the table? The bitter truth is that I would no longer be the user, and I would bring no value.
But taking the loss and starting over is not free either. It would have cost me money, time, a delay in my career, and the missed opportunity to learn a new domain. The last one is easy to overlook, but it might be the most important.
As my confidence shrank, so would my ability to initiate, take charge, and grasp opportunities as they arose. I would have no confidence to question decisions, question the status quo, or take something and try to improve it. Inevitably, my results would eventually match my low confidence. This could turn into a spiral in which low confidence produces subpar outcomes, which further lowers confidence.
That is how I’d react, at least, but there is another side to this coin. Depending on personality, some people would feel they own the AI’s success, and it would inflate their ego. The less they know about the topic, the more inflated their ego would get. This would lead to more decisions made this way, bigger decisions, and much bolder, riskier ones. After all, decisions made without truly understanding the domain or the problem are impossible to own, maintain, and build upon.
But the consequences don’t stop at the individual level. If I waver because of low confidence, I lose my authority and authenticity as a leader. A leader is a stable point everyone can turn to for guidance, someone who can always point out a goal or direction to work towards. When I cannot do that as a leader, everyone around me suffers.
On an organizational scale, if a system is created relying on AI for decision making, we’ll end up with a system nobody really understands; no-one will know its history or the logic that shaped it. Instead of a strategic asset, we’ll end up with a black box, something we cannot truly rely on.
It would be akin to a group of amateurs building a house from a subpar blueprint. If the blueprint showed no foundation, because it was forgotten or felt too obvious to draw, no-one would question it, and the house would be built without one. If dependencies such as water pipes or electricity were not considered on the blueprint, they would have to be bolted on afterwards. When the house falls apart, nobody will know why, or how to fix it.
This story was about AI, but taking a step back, it could have been about any other tool. It could have been about over-reliance on consultants or overuse of frameworks; the mechanism would be exactly the same. The point of tools is to make our job easier, better, more precise, or even possible at all. The sharper the tool, the bigger its effects, good or bad, and AI is the sharpest tool we’ve had in a long time. It’s worth handling it accordingly.