AI coach update

A few months ago, I decided to run an experiment. I wasn’t just going to use AI to support my marathon training - I was going to treat it as my actual coach. The goal was simple but ambitious: break three hours in the marathon. I wanted to see whether AI could genuinely guide me there, not just provide suggestions along the way.

At the start, everything felt surprisingly natural. I followed the sessions, reported back after each run, and adjusted based on the feedback I was getting. There was a rhythm to it, almost like a real coaching relationship. Over time, the program started to feel tailored, like it understood how I trained, how I pushed, and where I tended to get things slightly wrong.

Then curiosity got the better of me.

I’d been hearing a lot about another AI platform. It was being talked about as more advanced, more capable - supposedly a step up. So I decided to test it. I took the same questions and the same goal and started over on the new platform to see if the experience would be better.

It didn’t take long to realise something was off.

On the surface, the new model did exactly what you would expect. It gave me a clean, structured weekly plan. Efficient, logical, easy to follow. But something about it felt disconnected. It didn’t feel like it knew me, because in reality, it didn’t.

Within a couple of days, I switched back.

At first, I was disappointed in myself for not sticking it out longer, but the more I reflected on it, the more I realised the lesson wasn’t about which AI was “better.” It was about what actually makes coaching effective in the first place.

The biggest difference between the two experiences wasn’t the quality of information. Both models had access to the same knowledge about training, pacing, and performance. The difference was context. One had spent months learning how I trained, how I responded to sessions, how I interpreted effort and fatigue. The other was starting from zero.

That gap was impossible to ignore.

It made me realise that coaching, whether it’s human or AI, isn’t just about delivering the right program. It’s about understanding the person following it. Over time, that understanding becomes an advantage. It allows adjustments that feel subtle but are actually incredibly important - holding back when needed, pushing at the right moments, and shaping the plan around how someone actually behaves, not just how they should behave.

Once I switched back, that sense of continuity returned almost immediately. The sessions felt connected again, building on what had already been done rather than starting fresh. After each run, I’d share how it went, get feedback, and sometimes see the next session tweaked based on that conversation. It wasn’t static - it was evolving.

One thing I did notice, though, is that AI tends to be consistently positive. Even when I didn’t execute a session perfectly, there was always a way to frame it as useful or productive. That positivity is helpful most of the time, but there were also moments when it pushed back and told me to rein it in. Those moments stood out, probably because they were less frequent, and they carried weight when they came.

So the obvious question is whether it’s actually working.

From a data perspective, there are some encouraging signs. Over the past few months, I’ve built up consistent volume in a way I haven’t before, logging multiple consecutive months above 180 kilometres. More importantly, my long runs are starting to feel easier at the same or slightly faster paces, with lower heart rates. That suggests that something is improving beneath the surface, particularly in terms of efficiency.

Of course, it’s fair to argue that I might have improved anyway just by running more. That’s probably true to an extent. But what stands out to me is that it wasn’t just about increasing volume - it was about doing it at the right intensity. Having guidance on when to slow down, when to hold back, and when to push has made it easier to stay consistent without burning out.

The real takeaway for me, though, goes beyond whether AI can produce a good training plan.

It’s made me rethink what coaching actually is.

We often assume that the value of a coach sits in their knowledge - their ability to design sessions or build programs. But what this experience highlighted is that the real value sits in the relationship built over time. The longer that relationship exists, the more context the coach has, and the more effective their guidance becomes.

That applies just as much to AI as it does to a human.

In that sense, switching coaches - whether it’s a person or a platform - comes at a cost. You’re not just changing the plan, you’re resetting the understanding that sits behind it. That’s why stability in coaching is probably more important than most people realise, and why the best coaches tend to keep their athletes for long periods of time.

As for whether AI can replace a human coach, I don’t think it’s a simple yes or no.

What I do think is that it’s closed the gap more than most people would expect. It can provide structure, consistency, feedback, and a level of personalisation that feels surprisingly close to real coaching. It’s not perfect, and there are still elements of human coaching that are hard to replicate, but it’s no longer just a novelty.

In two weeks, I’ll get a clearer answer when I line up at the Newcastle Marathon. My current personal best sits at 3 hours and 28 minutes, and the goal is to take a significant step toward that sub-3 target.

That’s where the theory gets tested.

Regardless of the result, one thing is already clear. AI didn’t do the work for me, but it did provide something that every athlete needs - structure, feedback, and consistency.

And that might be closer to great coaching than most people think.
