Why Your AI Coding Assistant Keeps Generating Security Vulnerabilities
Last month I ran a security audit on a codebase where AI generated roughly 60% of the code. I found 23 security vulnerabilities. Twenty of them were in AI-generated code. That's not a coincidence. It's a pattern I've seen in every AI-heavy codebase I've reviewed.
Here's the contrarian take most security folks won't say out loud: AI coding assistants are systematically biased toward insecure code. Not because they're poorly built. Because they're optimized for functionality, not security. The training data rewards code that works. It doesn't reward code that resists attack.
The 6 Security Antipatterns AI Generates
After reviewing 14 codebases, I've cataloged the specific vulnerability patterns that AI generates most frequently. These aren't theoretical. Every one of these appeared in production code.
Antipattern 1: Missing Authentication Checks
AI generates endpoint handlers that process requests without verifying the user has permission. It happens because most code examples in training data skip auth for brevity.
```ts
// What AI generates (vulnerable):
app.get("/api/users/:id/billing", async (req, res) => {
  const billing = await getBillingInfo(req.params.id);
  res.json(billing);
});

// What it should generate:
app.get("/api/users/:id/billing", authMiddleware, async (req, res) => {
  if (req.user.id !== req.params.id && !req.user.isAdmin) {
    return res.status(403).json({ error: "Forbidden" });
  }
  const billing = await getBillingInfo(req.params.id);
  res.json(billing);
});
```

In one audit, I found 11 API endpoints that AI generated without authentication. Three of them exposed billing data. The team had auth middleware available. The AI just didn't use it.
Antipattern 2: SQL Injection via String Concatenation
Even in 2026, AI generates SQL with string interpolation. It does it less often than a year ago, but it still does it, especially for complex queries where the ORM syntax is less common in training data.
```ts
// AI-generated (vulnerable):
const query = `SELECT * FROM orders WHERE status = '${status}'
  AND user_id = '${userId}'`;
const results = await db.raw(query);

// Safe version:
const results = await db("orders")
  .where({ status, user_id: userId });
```
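When a query really is too complex for the query builder, parameterized raw queries still keep user input out of the SQL string. A minimal sketch assuming the db object above is a Knex instance:

```ts
// Placeholders are filled from the bindings array by the driver,
// so status and userId never become part of the SQL text itself.
const results = await db.raw(
  "SELECT * FROM orders WHERE status = ? AND user_id = ?",
  [status, userId]
);
```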
Antipattern 3: Overly Permissive CORS

This one appears in almost every AI-generated API setup. AI defaults to allowing everything because that's what works during development.
```ts
// AI-generated (vulnerable):
app.use(cors({ origin: "*" }));

// What production needs:
app.use(cors({
  origin: process.env.ALLOWED_ORIGINS?.split(",") || [],
  credentials: true,
  methods: ["GET", "POST", "PUT", "DELETE"],
}));
```

Antipattern 4: Hardcoded Secrets and Keys
AI generates code with placeholder secrets that look real enough to pass casual review. I've found API keys, JWT secrets, and database passwords hardcoded in AI-generated code that made it to production.
```ts
// AI-generated (vulnerable):
const JWT_SECRET = "your-secret-key-here";

// Also AI-generated (still vulnerable, just sneakier):
const JWT_SECRET = process.env.JWT_SECRET || "fallback-secret-key";
// That fallback means the app works without the env var set.
// In production, if the env var is missing, you're running
// with a guessable secret.
```

The fallback pattern is particularly dangerous because it passes code review. The reviewer sees the process.env reference and assumes it's secure.
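The fix is the fail-fast rule from the SHIELD prefix below: no fallback, crash at startup if the secret is missing. A minimal sketch (the error message wording is mine):

```ts
// Fail fast at boot: a crash on a missing env var beats running
// production with a guessable secret.
const JWT_SECRET = process.env.JWT_SECRET;
if (!JWT_SECRET) {
  throw new Error("JWT_SECRET environment variable is required");
}
```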
Antipattern 5: Missing Input Validation
AI generates handlers that trust user input implicitly. No length checks, no type validation, no sanitization. It processes whatever comes in.
```ts
// AI-generated (vulnerable):
app.post("/api/comments", async (req, res) => {
  const { content, postId } = req.body;
  const comment = await prisma.comment.create({
    data: { content, postId, userId: req.user.id },
  });
  res.json(comment);
});

// Secure version with validation:
import { z } from "zod";

const CommentSchema = z.object({
  content: z.string().min(1).max(5000).trim(),
  postId: z.string().uuid(),
});

app.post("/api/comments", async (req, res) => {
  const result = CommentSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({ errors: result.error.issues });
  }
  const comment = await prisma.comment.create({
    data: {
      content: result.data.content,
      postId: result.data.postId,
      userId: req.user.id,
    },
  });
  res.json(comment);
});
```

Antipattern 6: Information Leakage in Error Responses
AI catches errors and sends the raw error message to the client. Stack traces, database column names, and internal system details leak through error responses.
```ts
// AI-generated (leaks info):
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).json({
    error: err.message,
    stack: process.env.NODE_ENV === "development" ? err.stack : undefined,
  });
});
// That NODE_ENV check? AI doesn't know if it's set in production.
```
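What it should do instead: log the details server-side and return a generic message. A minimal sketch assuming an Express error handler (the client-facing message wording is mine):

```ts
// Full error details stay in the server logs; the client only ever
// sees a generic message with no stack traces or internals.
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: "Internal server error" });
});
```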
The Security Prompt Framework

I've developed a prompting pattern that reduces AI security vulnerabilities by roughly 80%. I call it the SHIELD prompt prefix.
Before any AI code generation that touches user data or API endpoints, prepend this context:
Security requirements for this code:
- All endpoints must use [your auth middleware name]
- All user input must be validated with Zod schemas
- No string concatenation in database queries
- No hardcoded secrets (fail if env var is missing, no fallbacks)
- Error responses must not expose internal details
- CORS must be restricted to ALLOWED_ORIGINS env var
- All file uploads must validate type and size
Sounds simple. But it works because it flips the AI's default from "make it work" to "make it work securely." The 80% reduction came from tracking vulnerabilities before and after implementing this prompt prefix across a team of 9 engineers over 3 months.
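If your team drives code generation through scripts or a shared tool rather than ad hoc chat, the prefix can live in one place so nobody forgets it. A hypothetical helper (withShield and the middleware name requireAuth are illustrative, not part of the framework itself):

```ts
// Keep the SHIELD prefix in one place and prepend it to every prompt
// that generates code touching user data or API endpoints.
const SHIELD_PREFIX = `Security requirements for this code:
- All endpoints must use requireAuth middleware
- All user input must be validated with Zod schemas
- No string concatenation in database queries
- No hardcoded secrets (fail if env var is missing, no fallbacks)
- Error responses must not expose internal details
- CORS must be restricted to ALLOWED_ORIGINS env var
- All file uploads must validate type and size`;

function withShield(prompt: string): string {
  return `${SHIELD_PREFIX}\n\n${prompt}`;
}
```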
The AI Security Checklist
Every PR with AI-generated code should pass this checklist:
| Check | What to Look For |
|---|---|
| Auth | Every endpoint has auth middleware. Every data access checks ownership. |
| Input | Every user input has a Zod schema or equivalent validation. |
| Queries | No string interpolation in SQL. Parameterized queries only. |
| Secrets | No hardcoded values. No fallbacks for sensitive env vars. |
| Errors | Error responses return generic messages. Details logged server-side only. |
| CORS | Explicit origin allowlist. No wildcards in production config. |
| Headers | Security headers set (CSP, X-Frame-Options, etc.). |
| Dependencies | New packages checked for known vulnerabilities. |
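For the headers row, one common way to cover the basics in an Express app is the helmet middleware; a minimal sketch (helmet is a suggestion here, not something the checklist mandates):

```ts
import helmet from "helmet";

// Recent helmet versions set Content-Security-Policy, X-Frame-Options,
// and several other security headers with sensible defaults.
app.use(helmet());
```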
Automate What You Can
Don't rely on human reviewers to catch security issues. Automate the checks that can be automated:
```bash
# Add to your CI pipeline
npx eslint --rule 'no-restricted-syntax: [error,
  {selector: "TemplateLiteral[parent.type=TaggedTemplateExpression]",
  message: "No template literals in SQL queries"}]' src/

# Scan for hardcoded secrets
npx secretlint "src/**/*.ts"

# Check for known vulnerabilities in dependencies
npm audit --audit-level=high
```

The teams that treat AI security as an automation problem rather than a review problem have 5x fewer vulnerabilities reaching production. Humans miss things under deadline pressure. Automated scans don't.
The Bottom Line
AI doesn't generate insecure code because it's stupid. It generates insecure code because security is a constraint, and AI optimizes for the primary objective: working code. You need to make security part of the prompt context, part of the automated checks, and part of the review checklist.
Stop trusting AI-generated code with your users' data. Verify every endpoint, validate every input, and automate every security check you can. The 30 minutes you spend setting up security automation will save you from the breach that takes months to recover from.