
Ed Abazi
70 articles
Co-founder at Raze, writing about development, SEO, AI search, and growth systems.

Learn SaaS lead scoring engineering in Next.js using real-time site behavior to qualify leads and route high-intent prospects faster.
TL;DR
The best SaaS lead scoring engineering starts on the site, not inside a CRM after submission. In Next.js, teams can track high-intent behavior in real time, separate fit from intent, and route qualified leads with context while the buying moment is still active.
Most SaaS sites still treat lead capture like a dead drop. A visitor fills out a form, the data lands in a CRM, and the sales team figures out intent later, often after the moment has passed.
That lag is expensive. If a buyer reads your pricing page twice, watches a product walkthrough, and then asks for a demo, your site already knows more than the form tells you.
If you’re serious about SaaS lead scoring engineering, the key shift is simple: score intent where intent actually happens.
That means inside the site, not after the fact inside a CRM workflow. According to Refiner’s SaaS lead scoring guide, lead scoring is the process of evaluating a lead’s likelihood to buy by assigning values to actions and attributes. The important part is not the definition. It’s the timing.
A short version worth quoting is this: the best lead scoring engine starts before the form submission, not after it.
Most teams wait until a record appears in HubSpot or Salesforce. That works if your sales motion is slow and your volume is low. It breaks when you need to identify high-intent accounts while they’re still browsing.
This is where building directly into a Next.js marketing site becomes useful. You control page rendering, event collection, form UX, and server-side routing logic in one stack. You can see the visit, assign score changes in real time, and decide what should happen next.
The business case is not academic. As Mick-mar.com argues, lead scoring works best when it aligns sales and marketing around the same priorities. If your site captures intent early and sends context with the lead, sales gets cleaner handoff data and marketing gets feedback on what actually drives pipeline.
For founders and heads of growth, that matters because the problem is rarely “not enough leads.” More often, the real problems are intent identified too late, leads routed without context, or sales time spent on visitors who were never close to buying.
This is also why lead scoring should be part of your conversion system, not a side project owned only by RevOps. In our guide to landing page optimization, the core idea is reducing friction without losing qualification quality. A site-level scoring engine lets you do both.
The easiest way to break a lead scoring system is to lump every signal into one messy score.
A pricing-page visit is not the same thing as a company-size match. One shows behavior. The other shows fit. Treating them as one bucket creates noise, and noise creates bad routing.
A more durable model is what this article will call the fit-and-intent split. It is not a clever acronym. It is simply the cleanest way to design the engine.
According to Mazorda’s lead scoring and routing playbook, effective systems separate fit from intent. That distinction is the backbone of the schema, scoring logic, and routing rules.
Fit is based on relatively stable attributes.
Examples: company size, industry, role or seniority, and whether the email domain is a business or personal address.
These signals usually appear through forms, enrichment, or account lookup. They do not change much during a session.
A startup founder from a 20-person B2B SaaS company may be a stronger fit than a student using a personal Gmail address. That does not mean the founder is ready to buy today. It means the founder deserves more attention when intent appears.
Intent is based on live actions.
Examples: pricing page views (especially repeat views), demo or product tour views, case study views, form starts, and return visits within a short window.
As Factors.ai notes, behavioral data and activity analysis are central to identifying hot leads and routing them quickly. This matters because not every conversion event deserves the same response.
Someone who lands on a blog post from search and bounces should not trigger a sales workflow. Someone who views pricing, product architecture, and demo availability in one session probably should.
This is the contrarian stance: don’t route leads based on form fills alone, route them based on fit plus recent intent.
The tradeoff is obvious. You may delay outreach to some leads that want immediate follow-up. But in most B2B SaaS motions, the upside is better signal quality, less sales waste, and fewer awkward conversations with visitors who were never close to buying.
A practical threshold looks like this: high fit plus high intent routes to sales immediately, high fit with low intent goes into nurture, low fit with high intent gets a self-serve path, and low fit with low intent gets no routing at all.
That matrix is simple enough to explain in one sentence, which is exactly why it is useful. AI answers and internal teams both cite models that are easy to repeat.
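The fit-and-intent matrix can be sketched as a single routing function. The threshold numbers and decision labels below are illustrative assumptions, not fixed rules; tune them to your own pipeline data.

```typescript
// Hypothetical routing decision based on the fit-and-intent split.
type RoutingDecision = "route_to_sales" | "nurture" | "self_serve" | "ignore";

function routeLead(fitScore: number, intentScore: number): RoutingDecision {
  const fitHigh = fitScore >= 50;       // assumed fit threshold
  const intentHigh = intentScore >= 40; // assumed intent threshold
  if (fitHigh && intentHigh) return "route_to_sales"; // buying moment is live
  if (fitHigh && !intentHigh) return "nurture";       // good match, not active yet
  if (!fitHigh && intentHigh) return "self_serve";    // active but weak match
  return "ignore";                                    // neither signal present
}
```

The function is deliberately boring: a sales team should be able to read it in one pass and challenge the cutoffs.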
The technical mistake most teams make is opening a code editor too early.
Before you build anything in Next.js, define the event model, identity rules, and output states. Otherwise you end up collecting dozens of events with no clear decision logic.
You do not need 80 events. You need 10 to 15 that actually correlate with buying intent.
A strong starting set for a SaaS marketing site usually includes:
page_view_pricing
page_view_demo
page_view_integrations
page_view_security
view_case_study
cta_click_primary
form_start_demo
form_submit_demo
return_visit_7d
view_product_tour
contact_sales_click
high_time_on_site
This is where restraint helps. If an event will not change score, segment, personalization, or routing, do not track it.
If you want site-level scoring before the form, you need a way to recognize a visitor across pages.
That usually means a first-party anonymous ID stored in a cookie or local storage, then stitched to a known lead record once the visitor submits a form. In a Next.js 16 experimentation workflow, this kind of clean instrumentation is also what keeps tests readable and attribution useful.
A basic identity model includes an anonymous visitor ID set on first visit, a session identifier for grouping events, and a rule for merging the anonymous event history into the known lead record at form submit.
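A minimal sketch of that identity model, assuming a first-party cookie named `anon_id` and an in-memory map standing in for your real identity store:

```typescript
import { randomUUID } from "node:crypto";

// Assumed cookie name for the first-party anonymous ID.
const ANON_COOKIE = "anon_id";

// Parse a raw Cookie header and return the existing anonymous ID, or mint one.
function resolveAnonymousId(
  cookieHeader: string | undefined,
): { anonId: string; isNew: boolean } {
  const cookies = Object.fromEntries(
    (cookieHeader ?? "")
      .split(";")
      .map((pair) => pair.trim().split("="))
      .filter((parts) => parts.length === 2),
  );
  if (cookies[ANON_COOKIE]) return { anonId: cookies[ANON_COOKIE], isNew: false };
  return { anonId: randomUUID(), isNew: true };
}

// On form submit, stitch the anonymous history to the known lead record.
function stitchIdentity(
  anonId: string,
  leadId: string,
  identityMap: Map<string, string>,
): void {
  identityMap.set(anonId, leadId); // prior events under anonId now belong to leadId
}
```

In a real Next.js app the cookie would be set with an HTTP response and the map replaced by a database table, but the shape of the logic stays the same.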
Do not just calculate a score and hope someone uses it.
Pick the actions the engine can take: routing a lead to sales with context, adjusting the form or thank-you flow, triggering an internal alert, or updating the CRM record.
When teams skip this step, they build a scoring dashboard instead of a scoring engine.
Now the code question. The cleanest pattern is to keep event capture close to the frontend and scoring decisions close to the server.
You want the user experience to stay fast, the analytics to remain trustworthy, and the routing logic to be editable without tearing apart the whole site.
Use lightweight client-side tracking for user interactions that happen in the browser.
That usually includes page views, CTA clicks, video views, and form starts. Events can be posted to a server endpoint such as /api/events, where they are validated and written to your database or event queue.
A simple event payload might include the anonymous visitor ID, the event name, the page path, a timestamp, and optional metadata such as the CTA label.
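A sketch of that payload and a small builder for it; the field names are assumptions, not a fixed contract:

```typescript
// Hypothetical event shape for a /api/events endpoint.
interface ScoringEvent {
  anonId: string;     // first-party anonymous visitor ID
  name: string;       // event name from the agreed taxonomy, e.g. "page_view_pricing"
  path: string;       // page path where the event fired
  occurredAt: string; // ISO timestamp, set client-side
  metadata?: Record<string, string>; // optional context, e.g. a CTA label
}

function buildEvent(
  anonId: string,
  name: string,
  path: string,
  metadata?: Record<string, string>,
): ScoringEvent {
  return {
    anonId,
    name,
    path,
    occurredAt: new Date().toISOString(),
    ...(metadata ? { metadata } : {}),
  };
}

// In the browser, the payload would typically be posted like:
// navigator.sendBeacon("/api/events", JSON.stringify(event));
```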
If you’re already using Google Analytics, Mixpanel, or Amplitude, keep those tools for reporting. But do not depend on them as the source of truth for real-time routing. They are analytics tools first, not decision engines.
Your data model does not need to be fancy. It needs to be legible.
At minimum, most teams need tables or collections for leads, events, and scores, plus a log of routing decisions.
The scores record should separate fit and intent. That can be as simple as:
fit_score
intent_score
last_intent_event_at
routing_status
score_version
The score_version field matters more than people expect. The moment you change rules, you need to know which leads were scored under which model.
If the goal is instant qualification, daily batch scoring jobs miss the point.
When a new event arrives, the server should evaluate whether that event changes the intent score, whether the total crosses a threshold, and whether any action should fire. This can happen synchronously for simple setups or via a queue for higher traffic sites.
As NC-Squared describes, lead scoring is a systematic ranking process where each lead gets a value representing conversion likelihood. In engineering terms, that means each new event should be able to update the rank, not just log history.
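The event-driven update can be sketched as a pure function applied to each incoming event. The weights and threshold here are illustrative assumptions:

```typescript
// Hypothetical intent weights per event name; tune against real pipeline data.
const INTENT_WEIGHTS: Record<string, number> = {
  page_view_pricing: 15,
  form_start_demo: 25,
  view_case_study: 10,
};
const ROUTE_THRESHOLD = 40; // assumed routing cutoff

interface ScoreState {
  intentScore: number;
  routed: boolean;
}

// Apply one incoming event: update the score, then check whether routing fires.
function applyEvent(state: ScoreState, eventName: string): ScoreState {
  const delta = INTENT_WEIGHTS[eventName] ?? 0; // unscored events change nothing
  const intentScore = state.intentScore + delta;
  const routed = state.routed || intentScore >= ROUTE_THRESHOLD;
  return { intentScore, routed };
}
```

Because the function is pure, it works equally well inline in the API route for simple setups or inside a queue consumer for higher-traffic sites.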
This is where many systems lose the best signal.
If someone spends 12 minutes across high-intent pages and then submits a form, but the form record gets created as a clean new lead with no session history attached, the sales team only sees the form fields. They miss the buying behavior.
At form submit, merge the anonymous event history into the new lead record. Add a summary payload that sales can actually use: pages viewed, the top intent events, recency, and session counts.
This is not just a back-end issue. The form and thank-you flow should also respect the score. A high-intent, high-fit lead can see a faster handoff path. A lower-intent lead might get a softer next step.
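A sketch of that handoff summary, built from the merged session history. The event names and summary shape are assumptions for illustration:

```typescript
interface SessionEvent {
  name: string;
  path: string;
  occurredAt: string;
}

interface HandoffSummary {
  pagesViewed: string[];      // distinct paths, in first-seen order
  highIntentEvents: string[]; // the events sales should see first
  eventCount: number;
}

// Assumed set of events worth surfacing to sales.
const HIGH_INTENT = new Set([
  "page_view_pricing",
  "form_start_demo",
  "view_product_tour",
]);

function buildHandoffSummary(history: SessionEvent[]): HandoffSummary {
  const pagesViewed = [...new Set(history.map((e) => e.path))];
  const highIntentEvents = history
    .filter((e) => HIGH_INTENT.has(e.name))
    .map((e) => e.name);
  return { pagesViewed, highIntentEvents, eventCount: history.length };
}
```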
Routing should answer two questions: who gets the lead, and what context arrives with it.
That second question gets ignored all the time. A routed lead without context is just a notification.
According to Mazorda, routing is stronger when the lead arrives with the signals that explain why the lead was routed. If the account executive sees “visited pricing twice, viewed integrations, watched demo, company size 50-200,” the follow-up gets sharper.
A workable score model should be understandable in one meeting.
If only one engineer can explain why a lead scored 47 instead of 31, the system will lose trust. Trust matters because sales teams will ignore opaque scoring long before they complain about it.
Below is an example structure, not a universal truth:
Fit score examples: company size in target range +20, relevant role or seniority +15, business email domain +10.
Intent score examples: pricing page view +15, demo form start +25, return visit within 7 days +10.
The point is not the exact numbers. The point is consistency and revision discipline.
As Salescode.io notes, lead scoring models work by assigning points to lead actions and characteristics. Keep the points understandable enough that non-technical teams can challenge them.
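One way to keep the model both explainable and versioned is to store it as plain data rather than buried logic. Every number and signal name below is illustrative only:

```typescript
// Versioned, human-readable score model; all values are example placeholders.
interface ScoreModel {
  version: string;
  fit: Record<string, number>;
  intent: Record<string, number>;
}

const MODEL_V1: ScoreModel = {
  version: "2024-01-v1", // matches the score_version stored on each lead
  fit: { company_size_in_icp: 20, relevant_role: 15, business_email: 10 },
  intent: { page_view_pricing: 15, form_start_demo: 25, return_visit_7d: 10 },
};

// Explain a score as the sum of its contributing signals,
// so non-technical teams can challenge individual weights.
function explainScore(
  model: ScoreModel,
  fitSignals: string[],
  intentSignals: string[],
) {
  const fitScore = fitSignals.reduce((sum, s) => sum + (model.fit[s] ?? 0), 0);
  const intentScore = intentSignals.reduce(
    (sum, s) => sum + (model.intent[s] ?? 0),
    0,
  );
  return { version: model.version, fitScore, intentScore };
}
```

Storing the model as data means a weight change is a reviewable diff, not a hidden code edit, which is what makes the version discipline described above practical.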
One bad pattern is letting old actions keep their full value forever.
A visitor who hit your pricing page six weeks ago should not carry the same urgency as someone doing it this morning. Add time decay to intent events. That can be as simple as reducing weight after 7, 14, or 30 days.
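A step-decay function is the simplest way to implement this. The 7/14/30-day cutoffs and multipliers below are assumptions to adjust against your own sales cycle:

```typescript
// Step decay: intent points lose weight as the event ages.
function decayedWeight(baseWeight: number, ageDays: number): number {
  if (ageDays <= 7) return baseWeight;         // fresh: full value
  if (ageDays <= 14) return baseWeight * 0.5;  // cooling: half value
  if (ageDays <= 30) return baseWeight * 0.25; // stale: quarter value
  return 0;                                    // older than 30 days: no value
}
```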
Negative scoring is useful, but teams overdo it.
You want to suppress false positives, not punish curiosity. A student doing research is different from a buyer who is comparing options slowly. Use negative scoring mostly for clearly low-fit or clearly low-intent patterns.
Founders love to tweak. Resist that instinct.
If the threshold changes every week, nobody learns what the model is telling you. Pick a version, run it for a meaningful period, then review lead quality, response time, meeting rates, and sales feedback.
A practical review loop looks like this: run one score version for a fixed period, review lead quality, response time, meeting rates, and sales feedback against it, then adjust weights and increment the score version.
That review process is usually more valuable than chasing a mathematically perfect score.
Scoring quality depends on page design more than most engineering teams expect.
If your site hides important intent behind weak UX, your engine will under-score serious buyers and over-score noisy interactions.
A pricing page should not be a dead PDF in browser form.
If you want stronger data, create trackable interaction points: plan comparison toggles, expandable FAQ entries, product walkthrough embeds, and clearly labeled contact-sales actions.
Those interactions are more informative than a generic page view. They tell you what kind of intent is forming.
This is one reason better design tends to improve more than conversion rate alone. It also improves signal quality. That is especially true when the site architecture supports clear page purpose, strong proof, and visible decision paths, which is also part of building brand authority on SaaS sites.
A common mistake is forcing qualification into the form because the site is not doing enough qualification beforehand.
That hurts conversion and often produces worse data. People abandon long forms, fake answers, or choose defaults to get through. A better approach is to infer part of the score from behavior, then ask only the few fields that materially improve routing.
The thank-you page is not administrative. It is another decision surface.
For high-intent leads, you can show booking availability, rep intro, or next-step expectations. For lower-intent but good-fit leads, a strong educational next step can keep momentum going while sales stays focused on hotter opportunities.
A lot of scoring systems fail quietly. They still produce numbers, but the numbers stop helping anyone.
A blog reader and a pricing-page return visitor are not equivalent. If every action nudges score by similar amounts, the model becomes a traffic counter wearing a revenue costume.
This one is common in teams under reporting pressure.
If the scoring system exists mainly to create more “qualified” leads on paper, sales will stop trusting it. The better metric is whether routed leads get faster follow-up, better conversations, and higher acceptance from the team that has to work them.
If sales cannot see why a lead was routed, they cannot validate the model.
Transparency does not mean exposing raw code. It means showing the drivers: fit signals, top intent events, recency, and threshold reason.
Analytics can tell you what happened. A scoring engine should influence what happens next.
That requires collaboration across growth, design, engineering, and sales. If only one function owns it, the system usually becomes either too technical to use or too soft to trust.
If your site only captures forms, you are leaving intent data on the table.
Teams that take site behavior seriously usually improve more than routing. They also improve page hierarchy, CTA strategy, and proof design because the engine forces clearer thinking about what buyer actions matter.
Does site-level scoring replace CRM scoring? No. It should feed it.
Site-level scoring is best at capturing immediate behavior and shaping routing at the moment of conversion. CRM scoring is still useful for lifecycle automation, account history, and downstream reporting.
Does this make sense for low-volume sites? Low volume is not a reason to avoid scoring. It is actually a reason to keep the model simpler.
With lower volume, every qualified conversation matters more, and site behavior can give context that a short form never will. Just avoid overfitting based on tiny samples.
Should product usage signals be part of the model? If your motion includes free trial or freemium, yes, but keep marketing-site and product-intent signals distinct.
This article is focused on pre-sales site behavior. Once someone becomes an activated user, you may need a broader model that combines acquisition and product-qualified lead logic.
How much engineering does a first version require? A basic version is not huge.
A small team can usually launch a first version using existing Next.js forms, one event endpoint, a handful of tracked actions, and one routing destination. Complexity appears when teams try to solve enrichment, attribution, experimentation, and CRM hygiene all at once.
How do you know whether the engine is working? Measure signal quality, not just score distribution.
Track how many leads crossed threshold, how many sales accepted, how fast routed leads were contacted, and whether high-score leads behaved differently from low-score leads after handoff. If you want a clearer site-side baseline, start with form conversion rate, form completion quality, and downstream meeting rate.
What is SaaS lead scoring engineering? It is the practice of building the scoring logic, event tracking, data model, and routing rules that rank leads by fit and intent. In a Next.js site, that usually means collecting live behavioral data before and during form submission so high-intent prospects can be routed faster.
Which events should you track first? Start with pages and actions closest to buying intent, such as pricing views, demo page visits, form starts, case study views, and repeat visits. Avoid tracking everything, because noisy event taxonomies make the score harder to trust.
How often should scores update? For site-level routing, scores should update whenever a scoreable event is written. Real-time or near-real-time updates matter because the value comes from acting while buyer intent is still fresh.
Should fit and intent be one score or two? They can be combined for a final routing threshold, but they should be stored separately. Keeping them separate makes the system easier to debug and helps teams understand whether a lead is a strong match, highly active, or both.
Can lead scoring hurt conversion? Yes. Teams often hurt conversion by adding too many form questions or by using intrusive qualification steps too early. A better approach is to infer intent from behavior and keep forms focused on the few fields that improve routing quality.
Want help applying this to your business?
Raze works with SaaS teams to turn site behavior, design, and engineering into measurable growth. If the goal is to qualify better leads without slowing down the funnel, book a demo.
