
Building Bear Decisions: A 20-Year Code Comeback Story

It had been 20 years since I'd written real code. Well... beyond those Udemy courses you buy, spend an hour getting into, then never reopen. (Kudos to those who actually make the hustle work!)
I’d been toying with the idea of building software for years. But I kept hitting the same wall: spend months relearning to code—or hire developers to do the hard yards. As someone who'd spent the last 7 years being development-adjacent in product management, I knew enough to be dangerous—but not enough to actually build anything.
My AI Background (Or Lack Thereof)
I wasn't completely new to AI. When ChatGPT launched, I was in it on day 2 - as excited as everyone else. But working at a large company meant strict governance around what I could use and what data I could expose—which makes sense when you're dealing with PII and sensitive information.
I'd found workarounds: anonymized survey data for sentiment analysis, comparing GPT results against my own analysis, turning meeting notes into summaries, and cross-referencing with tools like Gong (a call analytics tool) to spot gaps in my perspective. I'd even used v0 to build test webpages, though nothing ever made it to production.
But it was all peripheral work. I'd seen what developers had built, but it felt like a world away from my capabilities.
The Breakthrough
Then I got some interview homework that changed my perspective entirely: turn a bunch of customer emails and a spreadsheet into a system that checked stock, provided invoices (or apologies if out of stock), answered product queries, and escalated complex requests to humans.
What amazed me wasn't just that I could do it—it was how quickly I picked it up. Here I was, someone who'd last touched Python 2 and MATLAB 20 years ago, and suddenly I was building functional, useful systems.
That's when the lightbulb went off: maybe I could actually build something myself.
AI had turned coding from "impossible after 20 years away" into "actually doable." Within 3 weeks of working this way, I'd built functional prototypes that would have been out of reach a month earlier. That filled me with false confidence about how quickly I'd ship software. But as I'd soon learn, there's a massive difference between "doable" and "done right."
✅ The Real Lessons: What Actually Works
🎯 Context Is Everything
The biggest mistake I see people make is treating AI like Google—asking vague questions and expecting good results. AI needs context. Detailed context.
Don't just say "fix this function." Show it the function, explain what it's supposed to do, provide the error message, and include the surrounding code. I learned to paste entire files into prompts when necessary, not just expect the AI to "find the right element."
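Here's a minimal sketch of what that looks like in practice. Everything in it (the file path, the function, the error) is a hypothetical example, not from my codebase:

```python
# A hypothetical context-rich prompt: the whole file, the goal, and the
# exact error, rather than a bare "fix this function".
from pathlib import Path

source = Path("billing/discounts.py").read_text()  # paste the entire file, not a snippet
error = "TypeError: unsupported operand type(s) for +: 'Decimal' and 'float'"

prompt = f"""apply_discount() should return the order total after
percentage discounts, rounded to two decimal places.

It currently fails with:
{error}

Here is the entire file for context:
{source}

Explain the cause, then fix apply_discount() only. Change nothing else."""
```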
📋 PRDs Become Your Secret Weapon
As a Product Manager, I was used to writing Product Requirements Documents. But with AI, they became even more critical. A well-written PRD doesn't just help you organize your thoughts—it gives the AI the context it needs to understand what you're actually trying to build.
The difference between "add a login system" and a detailed PRD explaining user flows, edge cases, and business logic is the difference between getting generic code and getting something that actually solves your problem.
Breaking these down into implementation plans and then into code helped me get even more value from AI.
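To make that concrete, here's the shape of the difference. The feature and details below are invented for illustration, not an excerpt from my actual PRD:

```text
Vague:     "Add a login system."

PRD-style:
  Feature:  Email/password login
  Users:    Returning customers resuming a saved session
  Flow:     Login page -> validate credentials -> redirect to last-opened page
  Edge cases:
    - 5 wrong passwords  -> lock account for 15 min, email the user
    - Unverified email   -> block login, offer to resend verification
  Out of scope: SSO, 2FA (phase 2)
```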
💡 The Power of Few-Shot Learning
AI learns from examples. Instead of just describing what you want, show it.
Especially with math or complex logic, I'd provide 2-3 examples of inputs and expected outputs. "Here's what this function should do with these inputs, and here's what it should return." The results were dramatically better than just trying to explain the logic in words.
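For example, a few-shot prompt for a (hypothetical) shipping calculator might look like this, with the examples doing most of the explaining:

```python
# A hypothetical few-shot prompt: show inputs and expected outputs
# instead of describing the tier logic in words.
prompt = """Write a Python function tiered_shipping(weight_kg) -> float.

Inputs and expected outputs:
  tiered_shipping(0.4)  -> 4.99   # under 0.5 kg: flat rate
  tiered_shipping(2.0)  -> 7.50   # 0.5 to 5 kg: mid tier
  tiered_shipping(12.0) -> 15.00  # over 5 kg: heavy tier

Match these examples exactly. If a boundary is ambiguous, ask me first."""
```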
❓ Force AI to Ask Questions
Here's a game-changer: explicitly prompt AI to ask clarifying questions instead of making assumptions.
"Before you write any code, ask me 3 questions about what this feature should do."
This simple addition prevented countless hours of the AI going down the wrong path on assumptions about my intent. Skip this step and you'll waste both time and money.
⚖️ Pit AIs Against Each Other
When I got stuck, I'd take Cursor's solution to ChatGPT and ask: "What do you think about this approach?" or I might ask ChatGPT to come up with an approach independently then compare the two.
Different models have different strengths. Claude might catch security issues that ChatGPT misses. GPT-4 might suggest a more elegant solution than what Cursor initially provides. Using them as a checks-and-balances system dramatically improved my code quality.
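Copy-pasting between tools works fine, but if you do it often, the same cross-check is easy to script. Here's a rough sketch using the OpenAI and Anthropic Python SDKs; the model names are placeholders, so swap in whatever you have access to:

```python
# A sketch of the "second opinion" loop: two models critique the same solution.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

def second_opinion(goal: str, solution: str) -> str:
    review = (
        f"Goal: {goal}\n\n"
        f"Proposed solution:\n{solution}\n\n"
        "Critique it: correctness, security issues, and simpler alternatives."
    )
    # First reviewer: an OpenAI chat model (model name is a placeholder).
    gpt = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": review}],
    )
    # Second reviewer: a Claude model (model name is a placeholder).
    claude = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{"role": "user", "content": review}],
    )
    return gpt.choices[0].message.content + "\n\n---\n\n" + claude.content[0].text
```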
🔧 The Right Tool for the Right Job
Not all AI models are created equal, and what works well for one task might be terrible for another. You'll need to experiment to find what works for your specific needs and workflow.
The key insight: just because a model excels at creative writing doesn't mean it's good at debugging code. Performance in one area doesn't translate to others.
Don't get caught up in the endless Twitter threads scoring different models—they're often outdated by the time you read them. Focus on testing what works for your actual use cases.
🧠 The Mindset Shift
🚀 From Feature Planning to Feature Execution
AI fundamentally changed my relationship with scope. Features that would have taken weeks to specify and months to build could now be prototyped in hours.
"Maybe I'll add a documentation site" went from a weeks-long project to a single bullet point on my to-do list. When I actually needed it, I built and deployed it in a couple of hours.
This isn't just about speed—it's about Just-in-Time development. You can leave ideas as one-liners for longer, then execute them rapidly when the moment is right.
🔍 The Domain Knowledge Reality Check
Here's what everyone misses: AI can write code, but it can't (yet) understand your intent, your domain knowledge, or the real problem behind a feature request.
You still need to do the hard work of understanding your users, defining the problem, and designing the solution. But once you've done that thinking, AI can help you execute it exponentially faster.
⚠️ The Reality Check
🌪️ When AI Gets Carried Away
AI loves the belt-and-braces approach. Ask it to fix one thing, and it'll add three fallbacks, two error handlers, and a logging system you didn't ask for.
Recent example: I was fixing a logging issue in Edge, and the AI went into overdrive adding fallback after fallback. I had to stop it, remind it of the original issue, and re-prompt it to fix just the issue at hand, in one way.
You need to curtail this tendency, or unwinding things later becomes a nightmare.
❌ When Things Go Completely Wrong
Not everything worked. I once spent two days trying to export dashboard visualizations as images—a simple UX improvement to save users from manually screenshotting their results.
I went through 8 different proposed methods from Claude, Gemini, and ChatGPT. Each one confidently explained how to work around Microsoft's API limitations. Each one failed.
Finally, after two days and countless tokens, ChatGPT admitted: "This probably isn't possible with the current Office APIs."
The lesson? AI has a "yes bias." It wants to help so badly that it'll propose solutions that don't actually work rather than admit upfront that something might be impossible. This wastes time and money.
The solution: when tackling something complex, explicitly ask "Is this actually possible?" before diving into implementation, and save yourself the frustration in the first place.
💬 The Importance of Effective Prompts
Building an effective prompt library became crucial. Here are some of my go-to patterns:
For exploration: "I want you to fully explore and understand the existing codebase. Don't write code yet—just deeply understand what's currently happening."
For planning: "Spend at least 10 minutes deeply reasoning about how a world-class engineer would approach solving this. Generate ideas, critique them, refine your thinking, and then propose an excellent final plan. I'll approve or request changes."
For clarification: "Before you implement this, ask me 3 questions about what this feature should do and how it should behave."
For execution: "Implement this perfectly."
For motivation (yes, really): "-10 points to Cursor. If you lose too many points for not listening I will stop using you."
For restraint: "Remember: Only do minimal changes. Do not get caught up on trying to solve current errors and warnings within the code. Keep comments etc. intact so that I can track what is supposed to happen throughout the file."
🎯 The Results
Five months and 1000+ commits later, I'm shipping Bear Decisions. Not because I became a coding expert, but because I learned how to work with AI as a thinking partner, not just a code generator.
The transformation isn't just personal—it's about what becomes possible. Features that would have taken a team weeks can now be prototyped in hours. Ideas that would have died in the "too hard" pile can be tested and iterated on.
But the fundamental product work remains the same: understanding your users, solving real problems, and making decisions about what to build and why.
(And imposter syndrome remains strong - so there's at least one constant!)
📚 The Lessons for the Development-Curious
If you're considering the AI-assisted development path, here's what I'd tell you:
- Invest in context - detailed prompts with code samples get better results.
- Write better PRDs - they're now how the AI understands what you're building, not just your team.
- Use few-shot learning - show examples rather than just explaining the logic.
- Force clarifying questions - make the AI ask before implementing to avoid bad assumptions.
- Maintain healthy skepticism - AI is powerful but not infallible. Read, understand, and question everything it produces. Just because code runs doesn't mean it's correct or secure.
- Pit AIs against each other - use different models as checks and balances. Each has different strengths, and they'll catch different issues.
- Focus on the product, not the code - AI can make you a faster executor, but you still need to know what you're executing and why. Building something fast that solves the wrong problem is a waste of time. Deciding what to build is the most important part of the process.
The future of building software isn't about becoming an expert programmer—it's about becoming an expert problem-solver who can leverage AI to bring solutions to life.
Five months ago, I hadn't yet built a simple web form. Today, I'm shipping software that helps people make better decisions. That transformation is available to anyone willing to learn how to work with AI as a thinking partner.
Keep your eyes open for new tools and approaches. Be willing to test them. And remember: the best solutions often come from people who understand the problem deeply, not necessarily those who can code the best (or who can persist through a Udemy course).
The question isn't whether AI will change how software gets built—it's whether you'll be part of that change.
Want to be among the first to try Bear Decisions? Join our waitlist for early access and updates. I'd love to hear about your own experiences building with AI—what's worked, what hasn't, and what you've learned along the way.
Drop me a line at mark@babybearanalytics.com or connect with me on LinkedIn. I read every message.