Over the last year, I've reviewed about a dozen projects where AI was used heavily in development. These were real products with real users, built fast and shipped to production.
Most worked fine. Some had interesting patterns that kept showing up across different codebases. These aren't unique to any one team or project; they're just common things that happen when AI writes code and humans skip the review step.
Here's what I noticed: AI is exceptional at writing code quickly. It's less aware of context, performance implications, and long-term maintainability. That's not a criticism of AI; it's just where the technology is right now.
Below are some patterns I saw repeatedly. Some affected user experience. Some created performance issues. Some were just things that made maintenance harder.
If you're using AI to build (like I am), this might be useful. Every pattern here is avoidable once you know what to look for.
A Note on "Frontend is Dead"
I keep seeing people say frontend development is dead because AI can convert Figma to code.
I agree the landscape has changed, but not in the way people think. The bar for junior frontend developers has just gone up.
Previously, you could get a junior role knowing just how to convert designs to code, then learn performance, accessibility, cross-browser compatibility, and maintainability on the job. That path is harder now. Companies expect you to know this stuff upfront because AI already handles the basic conversion.
But here's the thing: AI is actually great for learning all of this. It's an incredible brainstorming partner. It knows performance patterns, accessibility guidelines, and best practices. You just have to ask the right questions and actually review what it gives you.
The future for junior frontend devs isn't bleak. The criteria have just shifted. You need to learn the engineering fundamentals faster, but AI can help you do exactly that.
Category 1: Performance Killers
1. Infinite API Loops
What it looks like: API calls triggering on every render, creating request loops that hammer your backend.
Why AI does this: It doesn't fully understand React's useEffect dependencies. It might create an effect that updates state, which triggers a re-render, which runs the effect again.
How to catch it: Open your browser's Network tab. If you see dozens of identical requests firing continuously, you have a loop.
The fix:
- Add proper dependency arrays to useEffect
- Move API calls outside of render logic
- Use proper state management
// Bad - AI might generate this
useEffect(() => {
  fetchData().then(data => setData(data));
}); // Missing dependency array = runs every render

// Good
useEffect(() => {
  fetchData().then(data => setData(data));
}, []); // Runs once on mount
2. No Debounce on Search
What it looks like: A search bar that calls your API on every single keystroke.
Why AI does this: It implements the most obvious solution (user types, trigger search). It doesn't consider that typing "p-r-o-d-u-c-t" fires seven API calls when it should be one.
How to catch it: Type in the search bar while watching the Network tab. If it lights up with requests on every character, you've found the issue.
The fix: Add debouncing (300–500ms delay) before making the API call.
// Use a debounce utility; in React, memoize it so the
// debounced function isn't recreated on every render
import { useMemo } from 'react';
import { debounce } from 'lodash';

const debouncedSearch = useMemo(
  () => debounce((query) => searchAPI(query), 300),
  []
);
Real impact: One project I worked on was making 50+ search requests per second during peak usage. Debouncing reduced it to 2–3 requests per second.
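Under the hood, a debounce is just a timer that resets on every call. Here's a minimal sketch (lodash's debounce is the battle-tested version, with extra options like leading/trailing edges and cancellation):

```javascript
// Minimal debounce: each call cancels the pending timer and
// starts a new one, so only the last call in a burst fires.
function debounce(fn, waitMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Typing "pro" quickly results in a single search for "pro"
// once the user pauses for waitMs.
```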
3. Over-fetching Data Instead of Designing Proper APIs
What it looks like: A dashboard that calls multiple APIs to fetch full datasets just to calculate simple numbers like totals and counts.
Example:
- Call /api/users → count users in frontend
- Call /api/posts → count posts in frontend
- Call /api/orders → sum revenue in frontend
All just to display:
Total Users: 12,430
Total Posts: 98,201
Revenue: $143,200
Why AI does this: AI treats the backend as a raw data source. It focuses on "how do I get the data?" instead of "what data does the UI actually need?" So it pulls everything and processes it on the frontend, because that's the simplest path to working code.
Why this is a problem:
- Large unnecessary payloads
- Slower page load
- More memory usage in the browser
- Higher backend load
- Worse performance on mobile networks
The real fix (frontend responsibility): Frontend engineers should not silently accept inefficient APIs.
Instead:
- Communicate the UI requirements clearly to the backend developer
- Explain what the screen actually needs (for example: totals, counts, aggregates)
- Ask for purpose-built endpoints designed for the UI
For example: "This dashboard only needs total users, total posts, and total revenue. Can we expose a /dashboard/stats endpoint that returns just these numbers?"
Good API design is a collaboration, not something the frontend or backend should solve in isolation.
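To make the contrast concrete, here's a sketch of the aggregation that belongs on the server. The endpoint and field names are illustrative, not from any real project:

```javascript
// Server-side sketch: compute the three numbers the dashboard
// needs and return only those, instead of shipping full datasets.
function dashboardStats(users, posts, orders) {
  return {
    totalUsers: users.length,
    totalPosts: posts.length,
    revenue: orders.reduce((sum, o) => sum + o.total, 0),
  };
}

// A hypothetical GET /dashboard/stats handler returns just
// { totalUsers, totalPosts, revenue } - a few bytes, not megabytes.
```

The frontend then makes one small request instead of three large ones.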
4. Massive Uncompressed Images
What it looks like: Multi-megabyte PNG or JPG files being used for simple backgrounds or decorative images.
Why AI does this: It uses whatever images are provided in the project without considering file size or format optimization.
How to catch it: Slow page loads, especially on mobile. Check image file sizes in DevTools Network tab.
The fix:
- Use WebP for photos and complex images (70–80% smaller than PNG/JPG)
- Use SVG for logos, icons, and simple graphics
- Compress all images before adding them to the project
- Fall back to PNG/JPG only where WebP isn't supported
Real impact: One signup flow had a 4MB background image. Users on slower connections saw a visible load delay. Switching to compressed WebP (200KB) eliminated the lag.
5. No Lazy Loading on Images
What it looks like: Every image on the page loads immediately, even ones the user can't see without scrolling.
Why AI does this: It doesn't consider viewport optimization or prioritize what's visible first.
How to catch it: Slow initial page load despite only a few images being visible. Check the Network tab - you'll see all images loading at once.
The fix: Add loading="lazy" to images outside the initial viewport.
<img src="below-fold-image.jpg" loading="lazy" alt="Description" />
Real impact: A landing page loaded 20 images on initial render. Only 3 were visible. Lazy loading the rest improved Time to Interactive by 2 seconds.
6. Loading 100 Items, Showing 4
What it looks like: API returns 100 folders, but only 4 are displayed in the UI.
Why AI does this: It fetches all data upfront and handles display filtering on the frontend, which is the simplest implementation.
How to catch it: Check the Network tab response size versus what's actually rendered. Large payloads with small displays are a red flag.
The fix:
- Implement pagination on the backend
- Use infinite scroll with batch loading
- Add server-side filtering
Real impact: One project loaded every user's files (sometimes hundreds) but only showed 4 at a time. Switching to paginated loading reduced initial API payload from 2MB to 50KB.
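On the request side, pagination is just a matter of asking for one page at a time. A minimal sketch (the `page` and `limit` parameter names are a common convention, but your backend may use different ones):

```javascript
// Build a paginated request URL so the server returns one
// page of items instead of the entire collection.
function buildPageUrl(base, page, pageSize) {
  const params = new URLSearchParams({
    page: String(page),
    limit: String(pageSize),
  });
  return `${base}?${params.toString()}`;
}

// buildPageUrl('/api/files', 1, 4) → '/api/files?page=1&limit=4'
```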
7. No Table Virtualization for Large Datasets
What it looks like: A table that renders 1000+ rows directly in the DOM, causing laggy scrolling and slow interactions.
Why AI does this: It doesn't understand the performance cost of rendering large lists. It implements a basic map over data without considering DOM size.
How to catch it: Laggy scrolling, high memory usage, slow initial render on tables with many rows.
The fix: Use virtualization libraries that only render visible rows:
- react-window
- react-virtualized
- TanStack Virtual
import { useVirtualizer } from '@tanstack/react-virtual';
// Only renders visible rows, massive performance gain
Real impact: One admin dashboard with 2000 table rows became unusable after ~500 rows. Adding virtualization made it smooth even with 10,000 rows.
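The core idea these libraries implement can be sketched in a few lines. This is a simplification assuming fixed row heights, not any library's actual API:

```javascript
// Given the scroll position, only rows in [start, end) need to
// exist in the DOM; everything else is replaced by empty space.
function visibleRange(scrollTop, rowHeight, viewportHeight, totalRows, overscan = 3) {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { start, end };
}

// With 40px rows and a 400px viewport, only ~10 rows (plus
// overscan) are rendered, no matter how many rows exist.
```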
Category 2: Code Quality Issues
8. 2000+ Line Single Files
What it looks like: A single component file containing everything (API calls, business logic, multiple components, state management) stretching thousands of lines.
Why AI does this: It has no concept of "this file is getting too large." It keeps adding to what you tell it to modify.
How to catch it: If you need to scroll for 30+ seconds to reach the end of a file, it's too big.
The fix:
- Split components into separate files
- Extract API calls into services
- Move business logic into custom hooks
- Separate UI components from container components
Real impact: One page component was 2,800 lines. Breaking it into 12 smaller, focused files made it actually maintainable.
9. Inconsistent Naming Conventions
What it looks like: camelCase, snake_case, PascalCase, and kebab-case all mixed together in the same codebase.
Why AI does this: It's trained on millions of codebases with different conventions. Without explicit guidelines, it mixes styles.
How to catch it: Your code "looks wrong" even if it works. Hard to search for functions because naming isn't predictable.
The fix:
- Establish naming conventions in your project
- Use ESLint rules to enforce them
- Document conventions in your README
Standard conventions:
- PascalCase for components: UserProfile
- camelCase for functions and variables: getUserData
- UPPER_CASE for constants: API_BASE_URL
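If the project uses TypeScript, the @typescript-eslint naming-convention rule can enforce these automatically. A sketch of the relevant config fragment (adapt the selectors and formats to your project):

```javascript
// ESLint config fragment (sketch) enforcing the conventions above.
// PascalCase is allowed for variables/functions to cover React components.
module.exports = {
  rules: {
    '@typescript-eslint/naming-convention': [
      'error',
      { selector: 'variable', format: ['camelCase', 'PascalCase', 'UPPER_CASE'] },
      { selector: 'function', format: ['camelCase', 'PascalCase'] },
      { selector: 'typeLike', format: ['PascalCase'] },
    ],
  },
};
```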
10. Repeated Utility Functions Across Files
What it looks like: The same helper function (like date formatting or string manipulation) written in 5 different files with slight variations.
Why AI does this: It doesn't search your existing codebase before writing code. When you ask for a feature, it writes everything it needs from scratch.
How to catch it: Similar function names appearing across multiple files. Code that "feels" duplicated.
The fix:
- Create a lib/utils or helpers directory
- Centralize common functions
- Import them wherever needed
- Follow DRY (Don't Repeat Yourself) principle
// Instead of this in 5 files:
const formatDate = (date) => { /* ... */ }
// Do this once in lib/utils.ts:
export const formatDate = (date) => { /* ... */ }
// Then import everywhere:
import { formatDate } from '@/lib/utils';
11. No Folder Structure in Growing Projects
What it looks like: All components dumped directly in the pages directory or a flat components folder, with no organization as the project grows.
Why AI does this: It puts files wherever you tell it, with no consideration for future maintainability or logical grouping.
How to catch it: Difficulty finding files. A components folder with 50+ files in a flat list.
The fix: Implement feature-based or domain-based folder structure.
src/
  features/
    auth/
      components/
      hooks/
      api/
    dashboard/
      components/
      hooks/
      api/
  shared/
    components/
    utils/
Real impact: One project had 60+ components in a single folder. Reorganizing by feature cut file-search time significantly.
Category 3: UX Breaks
12. Non-functional Buttons and Broken Routes
What it looks like: Buttons that don't do anything, or navigation links that go to wrong pages or 404s.
Why AI does this: It implements the UI without thoroughly testing the navigation flow. It might create placeholder functions that were never filled in.
How to catch it: Click through every interactive element in your app. Seriously, just click everything.
The fix:
- Manual testing of all user flows
- Add route guards and error boundaries
- Implement proper error states for failed navigation
Real impact: One app I reviewed had 8 buttons that literally did nothing (onClick handlers pointed to empty functions). Users were clicking and wondering if the app was broken.
13. Non-SEO-Friendly URLs
What it looks like: Using query parameters instead of proper route segments for navigation.
Bad: /home?page=about
Good: /about
Why AI does this: Query params are easier to implement and work functionally. AI doesn't prioritize SEO unless explicitly told.
How to catch it: Check your URL structure. If every page is /?page=something, you have this issue.
The fix: Use proper route-based navigation with frameworks like Next.js App Router or React Router.
// Instead of /?page=product&id=123
// Use /products/123
Real impact: One landing site had all pages as query params. After switching to proper routes, organic search traffic increased 40% within 2 months.
14. Design Inconsistencies (Colors, Fonts, Spacing)
What it looks like: Colors and font sizes that don't match the Figma design. Random spacing. Inconsistent visual hierarchy.
Why AI does this: It approximates designs rather than implementing them precisely. It might pick "close enough" colors or sizes if exact values aren't specified.
How to catch it: Side-by-side comparison with Figma. Use a design QA tool or browser extension.
The fix:
- Extract design tokens from Figma (colors, typography, spacing)
- Create a theme configuration file
- Reference tokens throughout the code, never hard-code values
// Bad
<h1 style={{ color: '#1a1a1a', fontSize: '32px' }}>Title</h1>
// Good - using design tokens
<h1 className="text-primary text-3xl">Title</h1>
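A theme configuration can be as simple as one shared module. The values below are illustrative; the point is to extract the real ones from Figma once and reference them everywhere:

```javascript
// A minimal design-token module (values are placeholders -
// replace them with the exact values from your Figma file).
const tokens = {
  color: { text: '#1a1a1a', primary: '#2563eb' },
  fontSize: { h1: '2rem', body: '1rem' },
  space: { sm: '0.5rem', md: '1rem', lg: '2rem' },
};

// Components read from tokens instead of hard-coding values,
// so a design change becomes a one-line edit.
```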
Category 4: Accessibility & Standards
15. Divs in Email Templates Instead of Tables
What it looks like: Email templates using modern div-based flexbox/grid layouts.
Why AI does this: It uses modern web standards that work great in browsers, but email clients (especially Outlook) don't support them.
How to catch it: Send test emails to multiple clients (Gmail, Outlook, Apple Mail). Layouts break in some of them.
The fix: Use table-based layouts for emails. Yes, it's 2026. Yes, tables are still required for emails.
<!-- Email templates need table-based layouts -->
<table width="100%" cellpadding="0" cellspacing="0">
  <tr>
    <td>Content here</td>
  </tr>
</table>
Real impact: One transactional email looked perfect in Gmail but was completely broken in Outlook (40% of recipients). Switching to tables fixed it.
16. Wrong or Missing Meta Tags
What it looks like: Incorrect site names, missing Open Graph tags, copy-pasted meta descriptions from template projects.
Why AI does this: It copies from templates or previous examples without updating project-specific information.
How to catch it: View page source. Check what appears when you share links on social media.
The fix: Audit and update all meta tags before launch.
<meta name="description" content="Actual project description" />
<meta property="og:title" content="Your actual site name" />
<meta property="og:image" content="Your actual social image" />
Real impact: One client's new site was sharing links with their old company name in the preview. Took weeks to notice.
Category 5: Best Practices Ignored
17. Inconsistent Use of Relative and Absolute CSS Units
What it looks like: Random mixing of absolute units (px) and relative units (rem, em, %) with no clear pattern.
Why AI does this: It has no strong opinion on when to use which unit unless explicitly guided, so it copies whatever style appears in its examples.
How to catch it: Inspect element styles. You will see font-size: 16px in one place and font-size: 1rem in another.
The fix: Follow this rule:
Use relative units for:
- font sizes (rem, em)
- spacing (margin, padding)
- layout dimensions where possible (%, vw, vh, rem)
These respect user accessibility settings and scale better across devices.
Use absolute units for:
- borders
- shadows
- small decorative elements like icons
/* Good practice */
.text { font-size: 1rem; } /* Scales with browser settings */
.container { padding: 2rem; } /* Scales proportionally */
.card { width: 60%; }
.border { border: 1px solid; } /* Fixed, doesn't need to scale */
Real impact: One app had accessibility issues because text sizes were in pixels; users who increased their browser font size saw no change.
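The conversion is simple: rem is relative to the root font size (16px by default), which is exactly why it respects user settings while px does not. A quick helper for migrating pixel values:

```javascript
// Convert a design's px value to rem, assuming the default
// 16px root font size (users can change it; px can't follow).
function pxToRem(px, rootFontSize = 16) {
  return `${px / rootFontSize}rem`;
}

// pxToRem(32) → '2rem'; pxToRem(12) → '0.75rem'
```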
18. Poor Form Validation with useState
What it looks like: Complex forms managed with multiple useState hooks, messy validation logic, and unnecessary re-renders.
Why AI does this: useState is the simplest solution for managing form state, so that's what it reaches for.
How to catch it: Form code is verbose and hard to follow. Performance issues with many form fields.
The fix: Use dedicated form libraries like react-hook-form with schema validation (Zod or Yup).
// Instead of this mess:
const [name, setName] = useState('');
const [email, setEmail] = useState('');
const [errors, setErrors] = useState({});
// ... 20 more lines of validation logic

// Use this:
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';

const schema = z.object({
  name: z.string().min(1),
  email: z.string().email(),
});

const { register, handleSubmit } = useForm({
  resolver: zodResolver(schema),
});
Real impact: One form with 15 fields had 200+ lines of state management. Migrating to react-hook-form cut it to 50 lines and eliminated re-render issues.
19. Wrong Icon Library (Not Matching Designer's Choice)
What it looks like: Icons from different libraries mixed throughout the app, creating visual inconsistency.
Why AI does this: It defaults to icon libraries it knows best (usually lucide-react), regardless of what your designer actually used in Figma.
How to catch it: Icons look slightly different in style and weight across the app. Designer notices immediately.
The fix:
- Ask your designer which icon library they used
- Specify it explicitly when working with AI
- Enforce it in code reviews
Real impact: One project mixed Heroicons, Lucide, and Font Awesome. Looked amateur. Standardizing on one library immediately improved visual consistency.
Conclusion
AI has fundamentally changed how fast we can build. What used to take days now takes hours.
But AI doesn't understand your performance requirements, your design system, your backend architecture, whether something will scale to 10,000 users, or if code is maintainable six months from now.
That's still our job as developers. And honestly, that's the interesting part. The repetitive stuff gets automated. The thinking stays with us.
Use AI as a brainstorming partner. It knows a lot about performance optimization, accessibility patterns, and best practices. Ask it questions. Let it generate code. Then review, refine, and make it production-ready.
Every pattern in this list is preventable. Build a review habit. Know what to look for. Test before you ship.
AI accelerates development. Human judgment makes it actually work.