When most startups were spinning up servers, Enterpret went all-in on serverless. Today, that architecture powers a massive AI platform for customer-feedback analysis, helping companies like Canva, Atlassian, Perplexity, and Notion stay close to their users.

How Enterpret built a scalable AI platform with just two engineers

2025/12/01 21:26

When most early-stage SaaS startups were provisioning servers and planning Kubernetes clusters, Enterpret quietly went the other way. The team, barely three people at the time, decided to build an enterprise-grade AI feedback platform primarily on AWS Lambda.

It was a contrarian bet. Five years later, it remains one of the foundational decisions that shaped the company’s architecture, culture and speed.

A constraint-driven beginning

In the earliest days, Enterpret needed to ingest immense volumes of customer feedback data — bursts of text, context, and metadata coming in waves whenever a client synced historical data or when a public event went viral. The heavy compute sat on ingestion and enrichment; the actual user-facing queries were comparatively light.

Capital was scarce, engineering capacity even more so. “We didn’t have the luxury of always-on compute. Maintaining clusters wasn’t realistic with two engineers and an intern,” Chief Architect Anshal Dwivedi recalls.

Lambda offered something traditional compute couldn’t: elasticity without cost drag. You paid only when something ran. Idle was free.

Enterpret launched with eight microservices and around 35 Lambda functions: a small surface area that was fast to evolve. It allowed the team to move with urgency without burning runway on infrastructure.

What made the decision notable wasn’t the early commitment to serverless; it was how deliberately the team engineered an exit ramp. If workloads ever demanded something more persistent, migrating to ECS would require little more than swapping a deployment wrapper. The business logic would remain untouched.
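To make that idea concrete, here is a minimal sketch of what such an exit ramp can look like in Go: the business logic lives in one plain function, and Lambda or ECS is just a thin wrapper selected at startup. The function names and the DEPLOY_TARGET variable are illustrative assumptions, not Enterpret's actual code.

```go
// Sketch: the same business logic exposed through two thin entrypoints.
// analyzeFeedback and DEPLOY_TARGET are hypothetical names.
package main

import (
	"context"
	"io"
	"net/http"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// analyzeFeedback is the business logic; it knows nothing about Lambda or ECS.
func analyzeFeedback(ctx context.Context, payload []byte) ([]byte, error) {
	// ... enrichment, classification, persistence ...
	return []byte(`{"status":"ok"}`), nil
}

// Lambda wrapper: translates an API Gateway event into the plain call.
func lambdaHandler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	out, err := analyzeFeedback(ctx, []byte(req.Body))
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, err
	}
	return events.APIGatewayProxyResponse{StatusCode: 200, Body: string(out)}, nil
}

// ECS wrapper: the same call behind a plain HTTP server.
func httpHandler(w http.ResponseWriter, r *http.Request) {
	buf, _ := io.ReadAll(r.Body) // error handling elided for brevity
	out, err := analyzeFeedback(r.Context(), buf)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write(out)
}

func main() {
	if os.Getenv("DEPLOY_TARGET") == "ecs" {
		http.HandleFunc("/analyze", httpHandler)
		http.ListenAndServe(":8080", nil)
		return
	}
	lambda.Start(lambdaHandler)
}
```

Because only main and the two wrappers know about the runtime, moving a service between Lambda and ECS is a deployment change, not a rewrite.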

That foresight would turn out to be one of the most important choices the team made.

The monorepo that kept the system coherent

As the product footprint expanded, Enterpret faced a new problem: managing growth without splintering the codebase. Its response was another decision that goes against conventional startup advice — a single Go monorepo for every backend microservice, shared library, and infrastructure configuration.

Rather than chaos, it delivered consistency.

A model change could be made once, reviewed once, and deployed everywhere. Error codes, logging formats, and tracing standards remained uniform across services — a blessing in a distributed system where debugging normally involves spelunking across repos and log streams.
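As an illustration of what that uniformity can look like in practice, here is a sketch of a shared error package that every service in a Go monorepo could import. The package name, codes, and fields are hypothetical, not Enterpret's internals.

```go
// Sketch: one shared package imported by every service, so error codes
// and log fields stay uniform across the monorepo.
package apperr

import "fmt"

// Code is a stable, machine-readable error code shared by all services.
type Code string

const (
	CodeNotFound  Code = "NOT_FOUND"
	CodeRateLimit Code = "RATE_LIMITED"
	CodeUpstream  Code = "UPSTREAM_FAILURE"
)

// Error carries the code plus a human-readable message; because every
// service returns this type across RPC boundaries, dashboards and alerts
// can group failures consistently.
type Error struct {
	Code    Code   `json:"code"`
	Message string `json:"message"`
	Service string `json:"service"`
}

func (e *Error) Error() string {
	return fmt.Sprintf("%s [%s]: %s", e.Service, e.Code, e.Message)
}

// New is the single constructor used everywhere; changing its shape is
// one pull request that the whole monorepo picks up at once.
func New(service string, code Code, msg string) *Error {
	return &Error{Code: code, Message: msg, Service: service}
}
```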

Refactoring became routine, not risky. IDE-level type-checking guarded against silent breakage. Deployments stayed predictable.

That same monorepo now houses 26 services, up from the original eight. Deployments happen several times a week, with the team moving quickly because the underlying structure never fractured.

A lightweight RPC layer that still holds up

Very early on, the team ran into a limitation: AWS API Gateway didn’t support gRPC natively, yet Enterpret needed a compact, binary-first communication layer suited for Lambda.

The typical path would have involved workarounds or adopting heavier frameworks. Instead, they built a lean RPC abstraction that supported multiple encodings — protobuf over HTTP for efficiency, JSON for flexibility, and compatibility with gRPC downstream.

It took a few days to shape, not months. Yet it remains the backbone of service communication even now. Compression, distributed tracing, metrics, and client generation were layered on without touching individual services — the compounding effect the team now optimizes for deliberately.
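A lean RPC layer of this kind can be sketched as a small codec interface plus composable middleware. The sketch below illustrates the pattern, not Enterpret's implementation; every identifier in it is assumed.

```go
// Sketch of a lean RPC layer: a Codec interface with JSON and protobuf
// implementations, plus middleware that can be layered on once and
// inherited by every service without touching handler code.
package rpc

import (
	"encoding/json"
	"net/http"

	"google.golang.org/protobuf/proto"
)

// Codec abstracts the wire format so a service handler never cares
// whether the caller sent JSON or protobuf.
type Codec interface {
	ContentType() string
	Marshal(v any) ([]byte, error)
	Unmarshal(data []byte, v any) error
}

type jsonCodec struct{}

func (jsonCodec) ContentType() string             { return "application/json" }
func (jsonCodec) Marshal(v any) ([]byte, error)   { return json.Marshal(v) }
func (jsonCodec) Unmarshal(d []byte, v any) error { return json.Unmarshal(d, v) }

type protoCodec struct{}

// The proto codec assumes callers pass proto.Message values.
func (protoCodec) ContentType() string             { return "application/x-protobuf" }
func (protoCodec) Marshal(v any) ([]byte, error)   { return proto.Marshal(v.(proto.Message)) }
func (protoCodec) Unmarshal(d []byte, v any) error { return proto.Unmarshal(d, v.(proto.Message)) }

// Middleware wraps a handler; compression, tracing, and metrics can be
// added here and every service inherits them.
type Middleware func(http.Handler) http.Handler

// Chain applies middleware outermost-first.
func Chain(h http.Handler, mw ...Middleware) http.Handler {
	for i := len(mw) - 1; i >= 0; i-- {
		h = mw[i](h)
	}
	return h
}
```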

When Lambda stopped being the right answer

Growth eventually revealed the limits of serverless.

Frontend analytics surfaced the first crack: cold starts added noticeable latency when dashboards fired dozens of parallel queries. Provisioned concurrency would have reduced the lag, but not without making the system expensive to run. Migrating those workloads to ECS brought the P95 latency down, and costs with it.

Long-running jobs followed. Lambda’s 15-minute cap worked for most async tasks, but report generation and exports needed more breathing room. Enterpret turned to AWS Batch backed by spot instances, achieving the same flexibility at a fraction of the cost.
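Handing such a job to AWS Batch from Go is a small amount of glue. The sketch below uses aws-sdk-go-v2; the queue and job-definition names are placeholders, not Enterpret's real resources.

```go
// Minimal sketch of handing a long-running export off to AWS Batch
// instead of Lambda, using aws-sdk-go-v2.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/batch"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	client := batch.NewFromConfig(cfg)

	// The job queue would be backed by a spot-instance compute environment,
	// which is where the cost savings come from.
	out, err := client.SubmitJob(ctx, &batch.SubmitJobInput{
		JobName:       aws.String("feedback-export-2024-01"),
		JobQueue:      aws.String("spot-batch-queue"),    // placeholder
		JobDefinition: aws.String("report-export-job:1"), // placeholder
	})
	if err != nil {
		log.Fatalf("submit batch job: %v", err)
	}
	log.Printf("submitted job %s", aws.ToString(out.JobId))
}
```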

There were other restrictions too, such as Lambda’s 6MB payload cap and API Gateway’s 29-second timeout. The team routed around these with S3-based response offloading and request batching, but the lesson was clear: the right tool changes over time.
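S3-based response offloading, for instance, can be as simple as writing the oversized result to a bucket and returning a presigned URL. The sketch below, using aws-sdk-go-v2, shows the general shape; the bucket and key are placeholders, and a real service would reuse a long-lived client rather than loading config on every call.

```go
// Sketch of S3-based response offloading for payloads above Lambda's
// 6MB synchronous limit: store the result in S3 and hand back a short
// presigned URL instead of the body itself.
package main

import (
	"bytes"
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// offloadResponse uploads a large result and returns a presigned GET URL
// the caller can fetch directly, keeping the Lambda response tiny.
func offloadResponse(ctx context.Context, bucket, key string, payload []byte) (string, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return "", fmt.Errorf("load AWS config: %w", err)
	}
	client := s3.NewFromConfig(cfg)

	if _, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(payload),
	}); err != nil {
		return "", fmt.Errorf("upload response: %w", err)
	}

	presigner := s3.NewPresignClient(client)
	req, err := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	}, s3.WithPresignExpires(15*time.Minute))
	if err != nil {
		return "", fmt.Errorf("presign response: %w", err)
	}
	return req.URL, nil
}
```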

Because of how the team architected the system, migration was rarely a rewrite. Often, it was an hour.

Cost discipline as philosophy

In an early, capital-constrained phase, cost is not a metric but a survival constraint. Enterpret audited everything: memory allocation, idle compute, cold starts, cross-service chatter. Many Lambda functions still run on 128MB, made possible by Go’s efficiency.

At one point, a CloudWatch bill eclipsed total compute spend. It prompted stricter observability hygiene, alerting thresholds, billing reviews and architecture choices rooted not in idealism but in operational reality.

The discipline stuck.

The playbook Enterpret now gives others

Looking back, Dwivedi says the company would make the same choices again. Serverless gave the team speed, cost control, and focus when it needed them most. The monorepo, the RPC abstraction, the migration-ready design: all of it would stay the same.

But the company would be more cautious about force-fitting workloads that don't belong on Lambda. Earlier, one of its data collection services required long-running execution, so the team stitched it together with AWS Step Functions and checkpointing logic to bypass the timeout. It worked, but maintaining it was painful. AWS Batch would have been the right call from day one.

His advice to other engineering teams boils down to a few principles:

Keep infrastructure dead simple. Enterpret didn't host a single piece of infrastructure itself for four years. Managed services and boring technology beat clever solutions every time. "The startups that survive aren't the ones with clever infrastructure; they're the ones that stayed focused on their product while the cloud did the heavy lifting," Dwivedi notes.

Be ruthless about cost. It directly impacts runway. Set spending alerts, review bills weekly, question every line item. Small leaks compound into hemorrhages.

Design for horizontal scale from day one. The perceived effort gap between "quick-and-dirty" and "scalable" is often an illusion. A few good abstractions and clear service boundaries take marginally more time upfront but save you from rewrites later.

Don't chase cloud agnosticism too early. Enterpret committed fully to AWS. When you constrain yourself to what works everywhere, you're optimizing for the lowest common denominator. You get better systems by embracing what your cloud does best, not what every cloud does adequately.

Five years on, the architecture still holds.

The journey continues

Today, Enterpret processes hundreds of millions of customer feedback records. Many of the systems the team wrote in the first year are still running, and not just running but thriving. They've evolved, scaled, and adapted because the team built for that compounding effect early and stuck with it.

The company is now building agentic architectures, pushing into new territories of what AI can do with customer feedback. The landscape keeps evolving, and the team is still learning what works.

"Some patterns from our serverless journey translate beautifully. Others need rethinking entirely," shared Dwivedi.

The lesson isn't that serverless is the answer for everyone. It's that small, thoughtful decisions compound over time. Design systems that evolve rather than expand. Choose clarity over cleverness. And when you hit the limits of a technology, migrate; don't rewrite.

This is just a glimpse of what Enterpret builds. Read more on their engineering blog.
