I'm Halfway Through Udacity's AI-Powered Software Engineer Course — Here's What's Already Shifted

I've been a software engineer for over 10 years. I've done TDD, I know design patterns, and I've shipped production code in many setups. So when I enrolled in Udacity's AI-Powered Software Engineer Nanodegree, it wasn't because I felt like a beginner.

I enrolled because I felt like something was changing faster than I could keep up with, and I wanted a structured way to think about it.

I'm still in the middle of it. But two modules in, one thing has already stuck with me hard enough that I had to write it down.

What I Thought I Knew About TDD

I've been practicing Test-Driven Development, and I thought I understood it. In fact, I did, in a world where I was the one writing the code. But then I added AI to the loop. Here's what changes when you pair TDD with AI code generation: the speed of code production becomes almost violent.

You describe what you want. Claude writes a working implementation in seconds. It feels like productivity on steroids.

And that's exactly where the danger is.

When AI can generate code faster than you can review it, you lose something critical: the design conversation. Without tests in place first, you end up reviewing implementation details instead of guiding outcomes. You become a code reviewer of decisions you didn't make. The AI is driving. You're in the passenger seat, approving turns.

I believe TDD fixes this.

TDD with AI Isn't About Correctness Anymore. It's About Control.

The traditional argument for TDD is that tests catch bugs and force you to think about the API before you write implementation. That's still true. But with AI, there's a more fundamental reason to write tests first: Tests are the specification you hand to the AI. They define what you actually want.

When you write the test before asking the AI to implement anything, you're not just checking future behavior. You're designing it.

You're forcing yourself to answer: what should this do, in concrete, runnable terms? The AI then works within that contract. It can generate as fast as it wants, but it's generating toward your intention, not its own interpretation of your vague prompt.

Without that contract, you get code that works, but may not be the code you needed. And at AI speed, you can accumulate a lot of that before you notice the design has gone sideways.
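To make "tests are the specification" concrete, here is a minimal sketch using a hypothetical `truncate_words` helper (the function name and its behavior are illustrative, not from the course). The assertions are the contract: without them, a prompt like "truncate this text" leaves character limits vs. word limits, and whether the ellipsis counts toward the limit, up to the AI's interpretation.

```python
def truncate_words(text: str, max_words: int) -> str:
    """Keep at most max_words words, appending '...' only when truncating.

    A minimal implementation of the kind an AI assistant might generate
    once the assertions below exist as its contract.
    """
    words = text.split()
    if len(words) <= max_words:
        return text
    return " ".join(words[:max_words]) + "..."


# The spec, written first. Each assertion is a design decision made
# explicit before any implementation exists:
assert truncate_words("one two three", 5) == "one two three"  # no-op case
assert truncate_words("one two three", 2) == "one two..."     # truncation
assert truncate_words("", 3) == ""                            # empty input
```

Any implementation that passes these assertions is acceptable; any that doesn't is rejected mechanically, not in a review argument.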

What This Looks Like in Practice

Before the course reframed this for me, my AI workflow looked like:

  1. Describe the feature to Claude
  2. Review and tweak the generated code
  3. Write tests afterward to lock in the behavior

It felt efficient. It was actually backwards.

Now it looks like:

  1. Write the test that describes the behavior I want
  2. Let the AI implement against that test
  3. Refactor together — me on the design, AI on the implementation
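The three steps above can be sketched in code. The feature here (a cart total with a percentage discount) is a made-up example, not something from the course; the point is the ordering: the test exists before the class it tests.

```python
# Step 1: write the test that describes the behavior I want.
# This is the design conversation, in runnable form.
def test_total_applies_percentage_discount():
    cart = Cart()
    cart.add("widget", price=10.0, qty=3)
    assert cart.total(discount_pct=10) == 27.0


# Step 2: let the AI implement against that test. A minimal version:
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))

    def total(self, discount_pct=0):
        subtotal = sum(price * qty for _, price, qty in self.items)
        # Round to cents so float arithmetic can't leak into the contract.
        return round(subtotal * (1 - discount_pct / 100), 2)


# Step 3: refactor together. The test stays green while names and
# structure improve, so the AI can restructure freely.
test_total_applies_percentage_discount()
```

Notice that the test mentions nothing about how the discount is computed internally; that is the part the AI is free to change during refactoring.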

The difference isn't just the process. It's about who's driving. In the first flow, AI is making design decisions, and I'm approving them. In the second, I'm making design decisions, and AI is executing them. That distinction matters enormously at scale.

For me, TDD isn't a safety net for bad code anymore. It's a steering wheel.

Where I'm At

I'm still working through the course. There's still a module ahead called "Vibe Engineering" that covers better prompting and engineering oversight of AI output.

When I finish, I'll write a proper retrospective. But I didn't want to wait until the end to share this shift, because it's already impacting my day-to-day work.

If you're an engineer who's been "vibe coding" with AI and wondering why things feel slightly out of control, this might be why.