AI: How to Use It and How Not to Use It
There's a seductive lie circulating through social media and tech communities right now. It goes something like this: Tell an AI your idea, and it will create your portfolio for you. Tell it you want a blog post, and boom—published. Tell it you want a GitHub project, and suddenly you have a portfolio piece.
This is nonsense. And worse, it's destroying the credibility of people who believe it.
I built my career as a Senior Database Administrator over years of grinding through performance issues, debugging production incidents, and learning PostgreSQL inside and out. Every skill I claim on LinkedIn, every project in my GitHub, every technical post I write—these represent real understanding earned through real work. When I transitioned into AI-Database Architecture, I did the work to earn that transition. No AI wrote those years for me.
But I also use AI. Every single day. Perplexity and Microsoft Copilot for research. GitHub Copilot for coding. I use them to accelerate what I can already do, to explore ideas faster, to refine and polish work. The difference between me and someone churning out AI-generated blog posts isn't that I'm anti-AI. It's that I understand what AI is actually good for—and what it absolutely shouldn't do.
The Seduction of the Shortcut
AI tools are getting scary good at what they do. You can prompt ChatGPT, Perplexity, or Claude with "write me a blog post about database optimization" and get something publishable in seconds. It looks professional. It reads well. It might even rank on Google. And here's the trap: because it reads well, it feels like a shortcut to credibility.
It's not. It's the opposite.
Your portfolio—whether it's a blog, a GitHub repo, or a LinkedIn post—exists for one reason: to prove you can do the work. When hiring managers or collaborators look at your body of work, they're asking: Does this person actually understand this thing, or did they just ask an AI?
If you can't distinguish between the two answers yourself, your audience won't either. But they will eventually notice something's off.
The Authenticity Crisis
This isn't theoretical. Recruiters are now trained to spot AI-generated portfolios. They check metadata. They examine writing patterns. They look for inconsistencies between your LinkedIn, GitHub, and personal site. When portfolios are 100% machine-generated, the patterns show. The voice is generic. The examples don't quite make sense in real-world context. The depth is surface-level.
And here's what should worry you most: if employers can detect it, so can the people reading your content.
I agree completely with Alberto Chierici's point that went around LinkedIn recently. Most people hyping the latest AI tool haven't built anything real with it. They've just asked it to make something that looks like it was built. There's a massive difference between using a tool to amplify your capability and using a tool to fake capability you don't have.
The people who are actually shipping products, solving real problems, and building things that last? They understand their architecture inside and out. They use AI to accelerate that understanding, not to bypass it.
Here's How I Actually Use AI
Perplexity and Microsoft Copilot for Research: I come to Perplexity and Microsoft Copilot with a specific research question or problem I'm thinking through. They synthesize current information, give me multiple perspectives, and save me hours of search engine clicking. But I'm the architect. I read the results critically. I verify claims I'll use in my work. I synthesize it into something that reflects my actual perspective.
GitHub Copilot for Coding: Copilot suggests code. Sometimes it's exactly what I need and saves me typing. Sometimes it's wrong and I fix it. Sometimes it sparks an idea I hadn't considered. But I review every single suggestion. I understand what it's doing. If I commit it, I'm taking responsibility for it. I'm not copying and pasting and hoping it works. I'm collaborating with the tool because I know enough to know whether its suggestion is correct.
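To make that concrete, here's a hypothetical sketch of the kind of plausible-looking suggestion I mean. The table, columns, and function names are invented for illustration, not taken from a real Copilot session:

```python
# Hypothetical illustration: a suggestion that "works" but shouldn't ship,
# next to the reviewed version. Schema and names are made up.

import psycopg2

def get_user_orders_unsafe(conn, user_email):
    # What an autocomplete might plausibly offer: string interpolation.
    # It runs, it returns the right rows in testing, and it's an SQL
    # injection hole the moment user_email comes from outside.
    with conn.cursor() as cur:
        cur.execute(f"SELECT id, total FROM orders WHERE email = '{user_email}'")
        return cur.fetchall()

def get_user_orders(conn, user_email):
    # The reviewed version: a parameterized query. The driver sends the
    # value separately from the SQL text, so user input is never parsed
    # as SQL by the database.
    with conn.cursor() as cur:
        cur.execute("SELECT id, total FROM orders WHERE email = %s",
                    (user_email,))
        return cur.fetchall()
```

Both functions return identical results on clean input. Only someone who knows why the second one is different will catch the first one in review. That's the whole point.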
This Post: I wrote this post myself. The arguments are mine. The examples are mine. The perspective comes from years of DBA work and the experience of actually building open-source tools and transitioning careers. I used Perplexity and Microsoft Copilot to research current perspectives on AI ethics and portfolio credibility, and to help refine some passages and catch places where I wasn't as clear as I could be.
But the original ideas? Mine. The decision about what to include and exclude? Mine. The voice? Mine. The accountability? Mine.
That's the difference.
The Framework: Understand, Refine, Own
If you're going to use AI responsibly, here's the framework I follow:
Understand First: Don't ask an AI to generate content about something you don't understand at least somewhat. You don't have to be an expert, but you have to know enough to evaluate whether the AI's output is correct. If you're outsourcing the thinking, you're outsourcing credibility.
Use AI to Accelerate, Not Replace: Let AI handle research synthesis, rough drafts, refining language, catching typos, suggesting approaches you hadn't considered. Don't let it be the author. You're the author. The AI is the assistant.
Refine with a Critical Eye: Everything an AI generates needs your review. Is it accurate? Does it represent your actual perspective? Does it match your voice? Would you be comfortable defending it to a hiring manager or a technical peer? If the answer is "well, the AI wrote it, not me," that's your sign that something's wrong.
Own It: If your name is on it, you own it. Not just credit—responsibility. You're responsible for accuracy. You're responsible for the claims. You're responsible for whether it reflects actual capability or fake capability. You're the one who has to live with the consequences if someone calls you on it.
Why This Matters for Your Brand
The technical industry is ruthless about authenticity. Senior engineers can tell when someone actually understands their domain versus when they've copy-pasted from Stack Overflow or ChatGPT. It takes about 15 minutes in a technical conversation to figure out the difference. And once someone figures it out, your credibility doesn't recover easily.
Your blog, your GitHub, your portfolio—these are your word. They're your guarantee that you can do the work, that you understand the problems, that you've actually built things. When you let an AI write them without your real input, you're handing over your brand to a tool that doesn't have your reputation at stake.
I'm building my brand as someone who understands databases, understands AI, and knows how to integrate the two. Not because I had an AI write about those things. But because I've spent years learning technology, building things with it, and now sharing what I've learned. That's what makes the content worth reading.
The Uncomfortable Truth
Here's what I think people don't want to hear: Using AI to create your portfolio is the easy way. And the easy way is obvious.
Real portfolio work is harder. It requires you to think through problems yourself. It requires you to wrestle with ideas until they make sense. It requires you to have actual opinions, back them with actual understanding, and be willing to defend them.
But that's also what makes it valuable.
When I see someone's GitHub repo with actual commits and real problem-solving, I know they shipped something. When I read a technical blog post that shows someone working through a complex problem, I know they experienced that problem and figured it out. When I see someone in a LinkedIn post explain a technical concept with real depth and practical examples, I know they know that domain.
That's the portfolio that stands out. That's the portfolio that leads to conversations with people who matter. That's the portfolio that actually builds your career.
Use AI. Just Know What You're Using It For.
I'm not saying don't use AI. I'm saying use it with intention and honesty.
Use it to research faster. Use it to refine your writing. Use it to explore code approaches. Use it as a sounding board for ideas. Use it to learn things you don't know yet. Use it to accelerate what you're already capable of doing.
But don't use it to fake capability you don't have. Don't use it to outsource your thinking. Don't use it to create your portfolio without your fingerprints all over it.
Because the moment someone technical reads your work and asks you to explain your thinking, the difference between "I built this" and "I asked an AI to build this" becomes immediately obvious.
And that moment will define whether you're someone who uses AI as a tool or someone who uses it as a crutch.
The choice is yours. And the consequences are yours too.
If you're transitioning into AI-focused roles or building your technical brand, this distinction matters more than you think. Employers aren't just evaluating whether you know AI. They're evaluating whether you actually understand the problems you're solving.
That understanding can't be AI-generated. It has to be earned.