
Build vs Buy: Replacing a $300/Month Reviews Tool

Ayushya Patel, SDE-1
Mar 14, 2026 · 15 min read

TL;DR

  • A D2C skincare brand was paying $300/month for Fera Reviews. API responses were sluggish. Custom sorting was impossible. The admin panel made daily moderation painful.
  • We built a custom reviews system on AWS Lambda + PostgreSQL in one week of coding. Four purpose-built Lambda functions, Prisma ORM, Zod validation, S3 media uploads, Terraform infrastructure.
  • A detailed PRD turned Claude Code from a tool that guesses into a tool that executes. AI handled 60-70% of the pattern-heavy backend work. The PRD took two days; the build took five.
  • Monthly cost dropped from $300 to $40-60 (80% reduction). API responses now come back in under 100ms.
  • Migrated 27,000 existing reviews with zero data loss using a reusable CSV import pipeline.
  • The client now controls their own sorting logic, moderation workflows, and data. No more feature requests into the void.

A D2C skincare brand on Shopify had 27,000+ product reviews running through Fera, a popular third-party reviews SaaS. $300/month. Embeddable widgets, star ratings, basic admin panel. It worked fine until it didn't.

Reviews aren't a secondary feature for skincare. Customers want to see real results on real skin before they commit to a purchase. The reviews section is a core conversion driver, and the tool powering it was holding the brand back.

We replaced the entire system in one week. Custom-built on AWS Lambda and PostgreSQL. 80% cheaper. Sub-100ms responses. Fully owned.

Here's the breakdown.


Five Problems That Forced Our Hand

Sluggish API responses. Review submissions and page loads went through Fera's API, and the lag was noticeable. On mobile, where most of the brand's traffic comes from, customers would tap "submit" and wait, not sure if their review actually went through. That kind of hesitation kills conversions.

An admin panel that slowed down the team. The brand's team moderates reviews every single day. Loading lists, searching, filtering: everything in Fera's dashboard felt heavier than it should. A task that should take minutes dragged on.

No custom sorting logic. The brand had a specific (and smart) requirement: reviews on the product page should surface five-star reviews first, then reviews with written descriptions, then reviews with images. This priority-based, conversion-optimized sorting was simply not possible with Fera's configuration.
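That priority-based sort is a small amount of code once you own the system. A minimal sketch in TypeScript, with an illustrative review shape (field names here are hypothetical, not the production schema):

```typescript
// Hypothetical shape of a review record; field names are illustrative.
interface Review {
  rating: number;        // 1-5 stars
  body: string | null;   // written description, if any
  mediaCount: number;    // number of attached images
  createdAt: number;     // epoch millis, used as a tiebreaker
}

// Priority tiers: five-star reviews first, then reviews with written
// descriptions, then reviews with images; newer reviews win within a tier.
function priorityTier(r: Review): number {
  if (r.rating === 5) return 3;
  if (r.body && r.body.trim().length > 0) return 2;
  if (r.mediaCount > 0) return 1;
  return 0;
}

function sortForProductPage(reviews: Review[]): Review[] {
  return [...reviews].sort(
    (a, b) => priorityTier(b) - priorityTier(a) || b.createdAt - a.createdAt
  );
}
```

In practice the same ordering can also be pushed down into the SQL query, but the tier logic is identical either way.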

A ceiling on every feature request. Review summaries, product grouping for variant reviews, custom admin workflows: every request got the same answer. "Not on our roadmap."

$300/month for a tool fighting against them. That's $3,600 annually for a widget they couldn't customize, an API that added latency to their storefront, and data locked inside someone else's system.


The Build-vs-Buy Calculus

Standard advice for growing startups: buy. Don't reinvent the wheel. Focus on your core product.

But the client had genuinely outgrown Fera. The limitations weren't edge cases. They were blocking the daily workflow of the team and hurting the storefront experience for every visitor.

We evaluated the situation through three lenses:

| Factor | Fera (Stay) | Custom Build |
| --- | --- | --- |
| Monthly cost | $300 | $40-60 |
| API response time | Sluggish, third-party dependency | < 100ms (own infra) |
| Custom sort logic | Not supported | Full control |
| Data ownership | Locked in vendor | 100% self-owned |
| Admin panel UX | Slow, generic | Built for our exact workflow |
| Time to implement | Already running | 1 week build + 1 week PRD |
| Long-term flexibility | Feature requests | Ship whatever we need |

Cost economics. Fera was $300/month. A custom AWS Lambda + PostgreSQL system would run $40-60/month. That's an 80% reduction. The build cost would pay for itself in the first month.

Performance requirements. The brand needed fast API responses, both for the storefront and the admin panel. Fera's latency was a side-effect of routing through a third-party API on every request. No amount of caching or client-side workarounds on our end could fix that.

Customization ceiling. The priority-based sorting requirement alone justified the build. Looking ahead, the brand wanted bulk import/export, custom moderation workflows, and eventually AI-powered review summaries. With Fera, each of these would require either a feature request (hope they build it) or a hacky workaround (hope it doesn't break). With a custom system, every feature is a pull request away.

The math was clear. We decided to build.


How AI Turned a Month-Long Build into One Week

A custom reviews system with four Lambda functions, a PostgreSQL database, Shopify webhook integration, a media upload pipeline, and a 27,000-row migration. A few years ago, this would have scoped to a month of engineering work. We did it in one week of coding.

That wasn't because AI wrote the code for us. It was because we structured the work so AI could execute against a clear spec instead of filling in blanks on its own.

The PRD as an AI Contract

We spent the first two days writing a comprehensive Product Requirements Document that specified every detail: the three-layer architecture, database schema with exact column types and constraints, API routes with request/response shapes, middleware patterns, error handling conventions, naming standards, and the exact typing patterns we'd follow.

This wasn't a waterfall document that nobody reads. It was a direct input for Claude Code. When an AI coding agent has a spec this precise, it stops improvising. It stops making assumptions about your schema. It stops inventing its own naming conventions halfway through. It just builds what you told it to build.

What We Fed Claude Code

Every coding session started with the same context: the PRD, plus the existing codebase. Claude Code could see the patterns already established and extend them consistently.

Database layer. The PRD specified every Prisma model, every relation, every index. When we asked Claude Code to build the reviews domain, it didn't have to guess whether productId was a string or an integer, whether soft deletes used a deletedAt timestamp or a boolean flag, or how to handle the junction table for review-to-media relationships. All of that was in the spec. The generated Prisma schema matched our design on the first pass.

API routes. Each endpoint was specified with its HTTP method, path, request body shape (Zod schema), response shape, error codes, and which middleware it needed (auth, validation, rate limiting). Claude Code turned these specs into Lambda handler code that followed the exact pattern of handlers we'd already written. Consistent file structure, consistent error handling, consistent response formatting across all four Lambda functions.
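That shared handler pattern is worth seeing concretely. A minimal sketch of the consistent error-handling and response-formatting wrapper, with local stand-in types so it's self-contained (in the real project the event/result types come from the Lambda type definitions, and the validation step is a Zod schema from the PRD; all names here are illustrative):

```typescript
// Local stand-ins for the API Gateway event/result types.
interface ApiEvent { body: string | null }
interface ApiResult { statusCode: number; headers: Record<string, string>; body: string }

// One response formatter shared by every handler.
const json = (statusCode: number, data: unknown): ApiResult => ({
  statusCode,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(data),
});

// A wrapper that gives every endpoint the same error envelope, so all four
// Lambdas format failures identically.
const withErrorHandling =
  (fn: (event: ApiEvent) => Promise<ApiResult>) =>
  async (event: ApiEvent): Promise<ApiResult> => {
    try {
      return await fn(event);
    } catch (err) {
      const message = err instanceof Error ? err.message : "Internal error";
      return json(500, { error: message });
    }
  };

// Example endpoint: parse, validate, respond.
const createReview = withErrorHandling(async (event) => {
  const input = JSON.parse(event.body ?? "{}");
  if (typeof input.rating !== "number" || input.rating < 1 || input.rating > 5) {
    return json(400, { error: "rating must be 1-5" });
  }
  return json(201, { ok: true, rating: input.rating });
});
```

Because the pattern is this mechanical, Claude Code could stamp out dozens of endpoints from the PRD's route specs without drifting in style.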

Business logic. The tricky parts like incremental rating aggregation, Shopify metafield sync, and the priority-based sorting algorithm were described in the PRD with enough detail that Claude Code could implement them correctly. We still reviewed every line. But the review was "does this match the spec?" not "what is this trying to do?"

Where AI Saved the Most Time

The biggest wins weren't in complex logic. They were in the repetitive, pattern-heavy work that makes up 60-70% of any backend build:

  • CRUD operations across five domains (reviews, products, customers, ratings, media). Same pattern, different fields. Claude Code generated all five with consistent error handling, pagination, and filtering.
  • Zod validation schemas for every API endpoint. Tedious to write by hand, trivial for AI when the request/response shapes are already specified.
  • Prisma query builders with correct includes, selects, and where clauses. Once Claude Code saw the pattern from the first domain, it replicated it across the rest.
  • Terraform infrastructure code. Lambda functions, API Gateway routes, RDS configuration, S3 buckets, IAM policies. Standard patterns that AI handles well.

Where AI Didn't Help

AI didn't make the architectural decisions. It didn't decide on the three-layer architecture, or that we needed four Lambdas instead of one, or that rating aggregation should be incremental. Those decisions came from the PRD, which came from engineering judgment and experience with serverless systems.

AI also struggled with the migration pipeline. The edge cases in Fera's data export (duplicate customers, orphaned media references, inconsistent date formats) required human debugging. We'd find a failing row, figure out why, fix the transform logic, and re-run. Claude Code helped write the transform functions, but diagnosing the data issues was on us.

The Multiplier Effect

The PRD took two days. The build took five. Without AI, that five days of coding would have been three to four weeks.

The takeaway is simple: the better your spec, the more useful AI becomes. A vague requirement gets you vague code that you'll rewrite anyway. A precise requirement gets you working code on the first pass.


Architecture: Three Layers, Four Lambdas

| Component | Choice | Why |
| --- | --- | --- |
| Compute | AWS Lambda (4 functions) | Pay-per-invocation, zero server management |
| Database | PostgreSQL 17 via RDS Proxy | Connection pooling for serverless, battle-tested |
| ORM | Prisma | Type-safe queries, auto-generated client |
| Validation | Zod | Runtime schema validation with TypeScript inference |
| Media | S3 presigned uploads + CloudFront CDN | Direct client uploads, global edge caching |
| Bundler | esbuild | Incremental builds with SHA-256 hash manifest |
| Infrastructure | Terraform | Fully codified, reproducible deployments |

We designed a clean three-layer architecture that separates concerns without over-engineering.

Presentation Layer (Lambdas): Route matching, middleware wiring, orchestration. Four Lambda functions handle admin, auth, public storefront, and sync/webhooks respectively.

Business Layer (Domains): Core business logic, Prisma queries, input validation, response formatting. Each domain (reviews, products, customers, ratings, media) is self-contained.

Integration Layer (Infrastructure): External system connectors for the database, Shopify APIs, and S3. Isolated so swapping a provider never touches business logic.

Strict dependency rule: each layer only imports from layers below it. Never upward.

Why Four Lambdas, Not One

Instead of one monolithic Lambda handling everything, we split by responsibility:

| Function | Purpose | Timeout |
| --- | --- | --- |
| Admin | Moderation, dashboard queries | 30 seconds |
| Auth | Login, session management | 10 seconds |
| Reviews | Public storefront display and submission | 10 seconds |
| Sync | Shopify webhooks, bulk operations | 15 minutes |

The admin Lambda gets 30 seconds because complex dashboard queries sometimes need it. The sync Lambda gets 15 minutes for bulk imports and webhook processing. The customer-facing Reviews Lambda stays lean: fast cold starts, tight timeout, minimal memory allocation.

This split keeps cold starts fast where customers feel them while giving background jobs all the headroom they need.


How We Got Sub-100ms Responses

Three problems needed solving to hit that number. Each one shaped a design decision.

Problem: Rating recalculation gets slower as reviews grow. Most review systems run COUNT/AVG queries across all reviews every time a review status changes. At 27,000 reviews, that query gets expensive. Our solution: atomic increments and decrements on a pre-aggregated rating table. When a review is approved, the star count increments by one. When rejected, it decrements. O(1) performance whether the product has 10 reviews or 10,000.
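The aggregate update reduces to pure arithmetic. A sketch of the O(1) state transition (field names are illustrative; in the real system this maps to atomic increment/decrement updates on the rating table, which Prisma supports natively):

```typescript
// Pre-aggregated rating state for one product.
interface RatingAggregate {
  counts: [number, number, number, number, number]; // index 0 => 1 star
  total: number; // total approved reviews
  sum: number;   // sum of all approved star values
}

// O(1) update applied when a review's moderation status changes, instead of
// re-running COUNT/AVG over every review for the product.
function applyStatusChange(
  agg: RatingAggregate,
  rating: 1 | 2 | 3 | 4 | 5,
  change: "approved" | "rejected"
): RatingAggregate {
  const delta = change === "approved" ? 1 : -1;
  const counts = [...agg.counts] as RatingAggregate["counts"];
  counts[rating - 1] += delta;
  return { counts, total: agg.total + delta, sum: agg.sum + rating * delta };
}

// The average is always derivable without touching the reviews table.
const average = (agg: RatingAggregate): number =>
  agg.total === 0 ? 0 : agg.sum / agg.total;
```

The same two counters (total, sum) also give you the star-distribution histogram for free, since the per-star counts are maintained alongside them.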

Problem: Shopify product pages show stale ratings. Batch syncing ratings on a cron job means the storefront is always behind. We push ratings to Shopify metafields inline, in the same request that approves the review. The product page reflects the new rating within seconds. For grouped products (variants sharing reviews), all products in the group receive the combined weighted average automatically.
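The combined rating for a group is a weighted average over the per-product aggregates. A minimal sketch (names are illustrative):

```typescript
// Per-product rating summary; the group rating weights each product's
// average by how many reviews back it.
interface ProductRating { reviewCount: number; average: number }

function groupWeightedAverage(ratings: ProductRating[]): ProductRating {
  const reviewCount = ratings.reduce((n, r) => n + r.reviewCount, 0);
  const weightedSum = ratings.reduce((s, r) => s + r.average * r.reviewCount, 0);
  return {
    reviewCount,
    average: reviewCount === 0 ? 0 : weightedSum / reviewCount,
  };
}
```

Every product in the group gets this combined value pushed to its metafields, so a variant with three reviews displays the same rating as its sibling with three thousand.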

Problem: N+1 queries hiding in every domain. Shopify ID resolution, customer lookups, media attachment fetches. Each domain had patterns that would silently fan out into dozens of database calls. We audited every query path: batched Shopify ID resolution into single queries, replaced find-then-create with upserts, swapped full entity fetches for lightweight existence checks where only validation was needed. These add up. They're a key reason the API consistently responds in under 100 milliseconds.
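The batching fix follows one shape everywhere: collect the IDs, deduplicate, fetch once. A generic sketch, where `fetchMany` stands in for a single Prisma `findMany({ where: { id: { in: ids } } })` call (names are illustrative):

```typescript
// Replace a per-row lookup loop (N queries) with one deduplicated batch
// fetch (1 query). Callers index into the returned Map instead of awaiting
// a query per row.
async function batchResolve<T>(
  ids: string[],
  fetchMany: (ids: string[]) => Promise<Map<string, T>>
): Promise<Map<string, T>> {
  const unique = [...new Set(ids)];
  return unique.length === 0 ? new Map() : fetchMany(unique);
}
```

The discipline is in the call sites: any loop body that awaits a query is a candidate, and the audit was mostly about finding those loops.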


Migrating 27,000 Reviews Without Losing One

The scariest part of replacing any platform is the migration. The brand had 27,000 reviews accumulated over years. Every single one had to come across cleanly: ratings, text, customer information, media attachments, timestamps.

We built a dedicated CSV import pipeline directly into the system.

Export: Extracted all reviews from Fera as CSV, including product mappings, customer details, and media URLs.

Transform: Mapped Fera's data format to our schema, handling edge cases like missing fields, duplicate customers, and orphaned media references.

Load: The bulk import endpoint processes each row independently. If row 5,000 has a data issue, it doesn't block the other 26,999. Partial success by design, not by accident.
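The partial-success loop is the core of that design. A sketch of the row-independent importer, where `importRow` stands in for the real per-row transform-and-insert (names are illustrative):

```typescript
// Row-independent bulk import: a bad row records a failure instead of
// aborting the run, so one malformed record can't block the other 26,999.
interface ImportResult {
  imported: number;
  failures: { row: number; reason: string }[];
}

async function bulkImport<Row>(
  rows: Row[],
  importRow: (row: Row) => Promise<void>
): Promise<ImportResult> {
  const result: ImportResult = { imported: 0, failures: [] };
  for (const [i, row] of rows.entries()) {
    try {
      await importRow(row);
      result.imported += 1;
    } catch (err) {
      result.failures.push({
        row: i + 1, // 1-indexed, matching the CSV
        reason: err instanceof Error ? err.message : String(err),
      });
    }
  }
  return result;
}
```

The failure list is what made the migration iterable: find a failing row, fix the transform, re-run just the failures.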

Verify: Automated reconciliation confirmed every review, rating, and media attachment landed correctly. Rating aggregations were recalculated from ground truth and synced to Shopify metafields.

The import pipeline wasn't a throwaway migration script. It's a permanent admin feature. The brand can bulk-import reviews from any source in the future: a different SaaS tool, a CSV from a marketing campaign, reviews collected at a physical event.


Results

| Metric | Before (Fera) | After (Custom) |
| --- | --- | --- |
| Monthly cost | $300 | $40-60 (80% savings) |
| API response time | Sluggish (third-party overhead) | < 100ms |
| Custom sort logic | Not possible | Fully implemented |
| Admin panel speed | Slow, laggy | Instant |
| Data ownership | Locked in Fera | Own database, own rules |
| Build time | N/A | 1 week |
| Reviews migrated | N/A | 27,000+ |
| Bulk import/export | Limited | Full CSV pipeline |

The brand's operations team went from dreading daily review moderation to finishing it in minutes. The admin panel loads instantly. Search and filtering are snappy. The custom sorting logic surfaces the most compelling reviews first, which helps conversion without any manual curation.

"Great work on it. We can replace many third-party services with custom services like this."

That line from the client mattered more to us than any benchmark.


What We Got Wrong

The "one week" framing is incomplete. The build took one week. The PRD, architecture decisions, and migration planning took another week before that. "One week of coding" is accurate. "One week total" is not.

Lambda cold starts are real. For the customer-facing Reviews Lambda, cold starts occasionally push first-request latency to 1-2 seconds. Provisioned concurrency would fix this, but we didn't configure it in v1 to keep costs low. For a storefront where most page loads hit warm functions, this was an acceptable tradeoff. It wouldn't be for an API with bursty, infrequent traffic.

PRD-first development isn't always the right call. We spent 40% of the total timeline writing a spec before touching code. It paid off because the requirements were well-defined and the scope was clear. On a different project with fuzzier requirements, that spec might need heavy revision mid-build, and the wasted effort adds up.

We haven't built the features we said were "next." AI review summaries, advanced spam protection, role-based admin access: the architecture supports all of it. None of it exists yet. Knowing you can build something and actually shipping it are different things.


When Build Beats Buy

This wasn't about proving SaaS tools are bad. Fera works fine for brands at a certain scale. The problem was staying on a tool after outgrowing it.

Three signals told us the client had crossed that line:

The tool's limitations blocked daily workflows. Not edge cases. The core moderation task the team does every single day was slower than it needed to be.

The performance gap was structural. Every request routed through a third-party API. That overhead was baked into the architecture, not something we could optimize around.

The customization ceiling was permanent. The features the brand needed weren't bugs to fix. They were product decisions Fera had made differently.

When all three are true at the same time, the calculus flips. The cost of building is bounded (we scoped it to one week of development). The cost of staying is open-ended: limited features, someone else's priorities, data you don't own, and $3,600/year on top.


What's Next

The system is live and running well. On the roadmap:

  • AI-powered review summaries so customers get the gist without scrolling through dozens of reviews
  • Advanced spam protection with rate limiting and duplicate detection on the public submission endpoint
  • Customer sync with periodic enrichment from Shopify's Admin API for verified purchase badges and customer location data
  • Role-based admin access with granular permissions for moderators vs. administrators

We'll build these on our own timeline, to our own spec. That's the whole point of owning the system.

Tags: build vs buy, ecommerce, shopify, d2c, claude code