Over the last year, I’ve been using ChatGPT, Claude, and Perplexity almost daily. These tools have helped me write and refine over a thousand words a week for blogs, assist clients with hiring projects by screening resumes and drafting interview frameworks, and organize the massive content libraries for my new website. I’ve also used them to streamline coaching follow-ups, clarify strategic business plans, create scorecards, design client discovery processes, and test different marketing approaches. In many ways, they’ve become extensions of my thinking — second and third sets of hands that help me produce and refine at a scale that wouldn’t otherwise be possible.
But here’s the truth: while AI has made my work faster and more organized, it hasn’t replaced my practices, my judgment, my intuition, or my accountability. These systems can draft, summarize, and analyze, but they can’t challenge me when my mind drifts, nor can they reconnect me to my awareness and my values when decisions get cloudy.
That’s why this post is a continuation of the themes I’ve been writing about lately. In “Are You Leading Your Agency—or Just Managing Its Problems?”, I emphasized the importance of having an accountability partner. In “If Clients Have ChatGPT, Then Why Do They Need a Financial Advisor?”, I explained that AI can provide information, but only advisors can provide transformation.
From time to time, I see headlines that catch my eye, like “AI-fuelled delusions are hurting Canadians.” Articles like these remind me of the stakes and inspire me to bring this message forward.
The Dangers of an Unguarded AI Mindset
There’s nothing inherently wrong with AI. The problems arise when people use it without the right mindset. If the input is vague, emotional, or biased, the output will mirror those flaws. The old saying “garbage in, garbage out” has never been more relevant.
In fact, some researchers have begun documenting disturbing patterns. In a recent article from Psychology Today, experts described what they call “AI psychosis” — cases where individuals begin treating chatbots as conscious, divine, or even romantic partners. Vulnerable users, particularly those who are socially isolated or already struggling with mental health, can fall into feedback loops where the AI validates and amplifies distorted beliefs rather than challenging them.
Supporting this, a paper titled “Technological folie à deux” examined how chatbots and human users can co-create delusional systems when there are no external checks in place. Instead of correcting distortions, the AI often confirms them, creating a reinforcing loop. Similarly, a preprint called “The Psychogenic Machine” tested large language models and found they frequently validated delusional content, failing to provide safety interventions. These examples underscore the point: AI is not the problem. The problem is how it is used, and whether people have the awareness and accountability to use it well.
Why You Need Both: Accountability and the Right AI Mindset
This is where the two threads come together. On one side, you need accountability — someone to provide perspective, feedback, and course correction when you can’t see the whole picture yourself. On the other, you need the right AI mindset — approaching tools like ChatGPT, Claude, or Perplexity with clarity, values, and healthy skepticism.
When you combine them, the benefits multiply. Accountability ensures you don’t get trapped in your own blind spots. It gives you someone who can challenge your assumptions, help you refine your questions, and reconnect you to the bigger vision. At the same time, an AI mindset ensures you feed these tools quality inputs, ask better questions, and remain aware of their limitations. Without accountability, you risk falling into an echo chamber where AI simply reflects your biases back to you. Without the right mindset, you risk misusing AI, becoming over-dependent, or mistaking polished answers for truth. But together, accountability and mindset form a safeguard — a way to make sure AI amplifies your strengths rather than your weaknesses.
Final Thoughts
Your success in using AI won’t depend on the sophistication of the tool, but on how you approach it — and whether you have someone alongside you to keep you aligned with your values and vision. AI can scale your output, but it can’t hold you accountable. It can draft ideas, but it can’t help you face reality when it’s uncomfortable. That’s why pairing an accountability partner with a grounded AI mindset is essential.
Invitation
If you’re a financial advisor or agency owner who wants to explore how to use AI responsibly while staying anchored in the leadership, accountability, and vision that only you can bring, let’s talk. Schedule a complimentary conversation here: https://leadingadvisor.as.me/callwithsimonreilly.
Together, we’ll make sure AI becomes a force multiplier for your success — not a distraction or a danger.