Adding AI to Your Website (Without the Hype)
The Problem with “AI-Powered”
Every week there's a new pitch: AI chatbots, AI-generated landing pages, AI-written blog posts, AI everything. Most of it is noise. Slapping a chatbot on your homepage doesn't make your site smarter. It makes it slower, more expensive, and often more annoying for the people trying to use it.
But there are places where AI genuinely helps. The key is knowing the difference between “this is a good use of AI” and “this would be simpler with an if-statement.” In our experience building production websites, the best AI integrations share three traits: they solve a problem that's hard to solve with rules alone, they fail gracefully when the model is wrong or unavailable, and they stay invisible to the user.
Where AI Actually Helps
After building and maintaining dozens of production sites, we've found a handful of use cases where AI delivers real value — not as a gimmick, but as infrastructure:
Spam and Abuse Detection
This is the use case we keep coming back to. Every website with a contact form, comment section, or booking system gets spam. Traditional defenses (CAPTCHAs, keyword blocklists, rate limiting) work up to a point. But modern spam is sophisticated. It uses proper grammar, avoids obvious keywords, and often looks like a real inquiry at first glance.
An LLM can read a form submission the way a human would: understanding context, detecting incoherence, and spotting patterns that no regex will catch. A message that says “I am interested in your services please contact me at this definitely-not-spam link” passes every keyword filter but fails the common-sense test that an LLM applies naturally.
Content Moderation
If your site has user-generated content (reviews, comments, forum posts), AI moderation catches nuanced violations that keyword filters miss. Sarcasm, coded language, and context-dependent toxicity are hard to detect with rules. An LLM understands that “great job breaking everything again” isn't a compliment.
Intelligent Search
Traditional site search matches keywords. AI-powered search understands intent. A visitor searching for “how to make my site faster” should find your article about CDNs, even if the word “faster” never appears in it. Semantic search using embeddings is one of the most practical AI upgrades for content-heavy sites.
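At its core, semantic search is vector comparison: embed the query, embed each page, and rank by similarity. A minimal sketch — the toy vectors below stand in for output from a real embedding model, and the titles are invented for illustration:

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank documents by similarity to the query vector. The top hit can match
// on meaning even when it shares no keywords with the query text.
function rank(query: number[], docs: { title: string; vec: number[] }[]) {
  return [...docs].sort(
    (x, y) => cosineSimilarity(query, y.vec) - cosineSimilarity(query, x.vec)
  );
}
```

In production you'd precompute the document vectors at build time and only embed the query per search, so the per-request cost is a single embedding call plus a few thousand dot products.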
Personalization That Isn't Creepy
AI can tailor what visitors see based on behavior without tracking them across the internet. A returning visitor who previously browsed your pricing page might see a different call-to-action than a first-time visitor from a blog post. The line between helpful and invasive is thin, though. This only works when it feels natural, not surveilled.
How We Built It: A Three-Layer Spam Pipeline
Rather than theorize, here's exactly how we handle form submissions on this site. Our consultation form uses a three-layer validation pipeline where AI is the last resort, not the first line of defense.
Layer 0: Bot Detection (Free, Instant)
Before any validation logic runs, we catch the obvious bots. Two techniques, both invisible to real users:
- Honeypot field. We include a hidden form field (visually invisible, marked aria-hidden, with tabIndex={-1}) that a human will never see or fill in. Bots scraping the page fill every field, so a non-empty honeypot means instant rejection. Cost: zero. Effectiveness against dumb bots: near 100%.
- Time-based check. We record when the form loads. If a submission arrives in under three seconds, no human filled that out. It's a bot blasting through forms at machine speed. This single check eliminates a surprising volume of automated submissions.
Layer 1: Deterministic Rules (Free, Sub-Millisecond)
Submissions that pass the bot check hit a set of hard-coded rules. These are fast, predictable, and never cost a cent:
- Format validation. Email must be a valid format. Name, company, and message must be within length limits. Required fields must be present.
- Spam keyword patterns. We maintain a short blocklist of terms that are never part of a legitimate business inquiry: pharmaceuticals, gambling, and crypto giveaway scams.
- URL density. A message with more than three URLs is almost certainly spam. Legitimate consultation requests rarely contain links at all.
- HTML injection. Messages containing HTML tags are stripped and flagged. No one writing a real inquiry includes script tags.
This layer catches the majority of spam. It's the workhorse of the pipeline. Every check is a simple string operation with no network calls, no latency, and no cost.
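A sketch of what such a rule layer can look like — the specific limits and patterns here are illustrative stand-ins, not our production blocklist:

```typescript
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
const SPAM_PATTERNS = [/viagra/i, /casino/i, /crypto\s+giveaway/i];
const MAX_URLS = 3;
const MAX_MESSAGE_LENGTH = 5_000;

function passesRules(email: string, message: string): boolean {
  // Format validation: valid email, message within length limits.
  if (!EMAIL_RE.test(email)) return false;
  if (message.length === 0 || message.length > MAX_MESSAGE_LENGTH) return false;
  // Keyword blocklist: terms never found in a legitimate inquiry.
  if (SPAM_PATTERNS.some((p) => p.test(message))) return false;
  // URL density: more than three links is almost certainly spam.
  const urls = message.match(/https?:\/\/\S+/g) ?? [];
  if (urls.length > MAX_URLS) return false;
  // HTML injection: reject raw tags (the real pipeline strips and flags).
  if (/<[a-z][\s\S]*?>/i.test(message)) return false;
  return true;
}
```

Every check is a string operation on data already in memory — no network call, no model, no meaningful latency.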
Layer 2: AI Analysis (Cheap, ~1 Second)
Only submissions that pass both prior layers reach the LLM. This is important: by the time a message gets to the AI, it's already passed basic bot detection and format checks. The AI only needs to answer one question: does this read like a real business inquiry?
We send the submission to an LLM with a system prompt that tells it to act as a spam detection assistant. The prompt is deliberately permissive, erring on the side of allowing borderline submissions through rather than blocking potential leads. The model returns a minimal JSON verdict — legitimate: true or false — with an optional reason field.
The critical design decision: if the AI is unavailable or times out, the submission is accepted. We set a five-second timeout. If the model is down, slow, or returns garbage, the submission goes through. A missed spam message is annoying; a lost lead is worse. The AI is an enhancement, not a gate.
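Here's a sketch of that fail-open pattern. The model call is abstracted behind a `classifyWithLLM` callback (a stand-in for whatever API you use); the part that matters is the race against the timeout and the catch-all that accepts on failure:

```typescript
const AI_TIMEOUT_MS = 5_000;

async function isLegitimate(
  message: string,
  classifyWithLLM: (msg: string) => Promise<{ legitimate: boolean }>
): Promise<boolean> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("AI timeout")), AI_TIMEOUT_MS);
  });
  try {
    // Whichever settles first wins: the model's verdict or the timeout.
    const verdict = await Promise.race([classifyWithLLM(message), timeout]);
    return verdict.legitimate;
  } catch {
    // Model down, slow, or returned garbage: accept the submission.
    // A missed spam message is annoying; a lost lead is worse.
    return true;
  } finally {
    clearTimeout(timer);
  }
}
```

Note that every failure mode — network error, timeout, malformed response — collapses into the same branch: accept. The AI can only ever subtract spam, never block the pipeline.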
Why This Order Matters
The pipeline is ordered by cost and speed. Layer 0 is free and instant. Layer 1 is free and sub-millisecond. Layer 2 costs a fraction of a cent and takes about a second. If you reversed the order and sent every submission to an LLM first, you'd pay for every bot, every spam blast, every empty form submission. You'd also add a second of latency to every submission, legitimate or not.
In practice, Layer 0 catches roughly 40% of unwanted submissions. Layer 1 catches another 50%. The AI only evaluates the remaining 10%, the ambiguous messages that actually need judgment. That means the AI processes a handful of submissions per day instead of hundreds, keeping costs negligible.
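Running the numbers makes the ordering concrete. The volumes and per-call price below are assumptions for illustration, not measurements from our site:

```typescript
// Back-of-envelope cost model for AI-last vs. AI-first ordering.
const dailySubmissions = 200;   // assumed total submissions per day
const aiShare = 0.10;           // only ~10% survive Layers 0 and 1
const costPerCall = 0.002;      // assumed dollars per LLM call

const reachAI = dailySubmissions * aiShare;        // submissions the AI sees
const aiLastCost = reachAI * costPerCall;          // pay for ~20 calls
const aiFirstCost = dailySubmissions * costPerCall; // pay for all 200
```

With these assumptions, AI-last costs a tenth of AI-first — and, just as important, only the ambiguous 10% of submitters ever wait the extra second for a model response.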
The Mistakes We See
Working with clients who want to “add AI” to their websites, we see the same patterns:
Using AI Where Rules Work Fine
If you can write down the logic in an if-statement, you don't need AI. Email validation, required field checks, date formatting — these are deterministic problems with deterministic solutions. An LLM will get these right most of the time, but “most of the time” isn't good enough when “all of the time” is achievable with three lines of code.
Making AI the Single Point of Failure
AI models go down. APIs have outages. Response times spike. If your form stops accepting submissions because the AI layer is unavailable, you've turned an enhancement into a liability. Always design AI integrations with a fallback: if the model is unreachable, what happens? The answer should never be “nothing works.”
Adding AI for the Marketing Bullet Point
“AI-powered” in a feature list doesn't impress anyone who's been paying attention. Users don't care whether your spam filter uses regex or GPT-4. They care that they don't get spam. The technology should be invisible. If your users notice the AI, something has probably gone wrong.
When to Add AI to Your Site
Here's a simple framework for deciding whether an AI integration is worth building:
- Can you solve it with rules? If yes, use rules. They're faster, cheaper, more predictable, and easier to debug. Only reach for AI when the problem genuinely requires judgment or understanding of natural language.
- What happens when the AI is wrong? Every model hallucinates, misclassifies, or times out eventually. If the cost of a wrong answer is high (blocking a real customer, showing incorrect prices, giving bad medical advice), you need human review in the loop. If the cost is low (letting a borderline spam message through), AI-with-fallback is fine.
- Is the cost proportional to the value? AI API calls have real costs. For a consultation form that gets 20 submissions a day, paying fractions of a cent per AI check is negligible. For a high-traffic forum with thousands of posts per hour, the same per-request cost could add up fast. Run the numbers.
- Does it degrade gracefully? Your site should work without the AI layer. If you can't toggle the AI off and still have a functional (if slightly less smart) system, your architecture is too tightly coupled to a service you don't control.
Practical Takeaways
- Layer your defenses. Put cheap, fast checks first. Use AI as the final layer for ambiguous cases, not as a replacement for basic validation.
- Fail open when appropriate. For non-critical AI checks (spam detection, content suggestions), let the submission through if the model is unavailable. A false negative is usually cheaper than a lost user.
- Keep the AI invisible. The best AI integrations are ones users never notice. They don't see a chatbot; they see a form that just works, a search that understands them, a site that feels smart without being showy.
- Start with one use case. Don't try to AI-everything at once. Pick the problem where rules are clearly failing (usually spam or search) and build a disciplined integration for that. Expand from there.
- Monitor and iterate. Log what the AI catches that rules didn't. If the AI is flagging submissions that the deterministic layer should have caught, add a new rule. Over time, your rule layer gets smarter and the AI handles less.
AI on the web is most powerful when it's boring. Not as a flashy chatbot or a “talk to our AI” landing page gimmick, but as quiet infrastructure: catching spam humans would miss, surfacing the right content at the right time, and making your site work better without anyone knowing why. That's the kind of AI worth building.
Want to understand the infrastructure that makes all of this possible? Start with how DNS routes your visitors and how CDNs keep your site fast. No amount of AI matters if your page takes five seconds to load.
Need help with your infrastructure?
Whether it's DNS, deployment, or full-stack architecture — Code43 can help you get it right.
Book a Consultation