
Scott Dylan: Edge Computing and AI — Why the Cloud Isn’t Always the Answer

2026/03/15 16:30
6 min read

There is a narrative in technology that assumes centralisation: data moves to the cloud, computation happens there, results come back. For cloud infrastructure companies, this narrative is convenient. For companies actually building AI systems that need to work reliably, quickly, and securely in real-world environments, this narrative is increasingly limiting.

Edge computing — the shift of computational capability closer to where data is generated and where decisions need to be made — is not new. What is new is its urgency. The combination of real-time AI requirements, privacy regulation, network bandwidth constraints, and the emerging complexity of IoT and autonomous systems is making edge computation not a niche architectural choice but a central requirement for entire categories of application.

I have been watching this shift closely through NexaTech Ventures because it represents one of the most significant architectural transitions in technology infrastructure since the move to cloud computing itself.

Where Cloud Architecture Breaks Down

Cloud computing was built on an assumption that proved correct for the first two decades of the internet: it is cheaper to ship data to central compute resources than to distribute computation across the network. For most web applications — search, social media, e-commerce — this remains true. But for an expanding set of applications, the assumption is breaking down.

Consider autonomous vehicles. A self-driving car makes safety-critical decisions in milliseconds based on sensor data. Sending raw sensor data to a distant cloud service, waiting for a response, and receiving the decision back is not only inefficient; it is fundamentally unworkable. The latency is unacceptable and the reliability requirement cannot be met. The computation must happen on the vehicle itself, in real time, using local processing.
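A rough latency budget makes the point concrete. The figures below — vehicle speed, a cloud round-trip time, and an on-device inference time — are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope latency budget for a vehicle at motorway speed.
# All numbers are illustrative assumptions.
speed_kmh = 100.0
speed_ms = speed_kmh * 1000 / 3600   # metres per second, ~27.8

cloud_round_trip_s = 0.100           # assume 100 ms to a distant region and back
local_inference_s = 0.010            # assume 10 ms for an on-device model

distance_cloud = speed_ms * cloud_round_trip_s
distance_local = speed_ms * local_inference_s

print(f"Distance travelled waiting on the cloud: {distance_cloud:.2f} m")
print(f"Distance travelled during local inference: {distance_local:.2f} m")
```

Under these assumptions the vehicle covers nearly three metres before a cloud response could even arrive — and that is before accounting for jitter or dropped connections, which a safety-critical system cannot tolerate at all.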

Or consider privacy-regulated applications in healthcare or financial services. GDPR and similar regulations increasingly require that sensitive personal data be processed in specific jurisdictions and under specific security controls. Streaming medical data or financial transaction details to a cloud service in another country, even for legitimate analysis, creates compliance complications that make centralised processing legally and operationally risky.

Or consider manufacturing on the factory floor. A manufacturing facility generating terabytes of sensor data from production equipment cannot realistically stream all of it to a cloud service for analysis. The bandwidth cost is prohibitive, the latency for real-time process adjustments is unacceptable, and the operational resilience risk is too high. The computation needs to happen locally.

These are not edge cases. These are core categories of emerging application. And cloud computing architecture, by design, is poorly suited to all of them.

The Technical Shift Required

Edge AI requires a different technical architecture from cloud-based AI. The machine learning models need to be smaller, more efficient, and optimised for resource-constrained devices. The inference pipelines need to be robust to intermittent network connectivity. The security model needs to work for distributed systems rather than centralised data centres. The update and versioning mechanisms need to push changes to thousands or millions of edge devices efficiently and securely.
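One of these requirements — robustness to intermittent connectivity — is commonly met with a store-and-forward pattern: infer locally and immediately, buffer results, and sync opportunistically when a link is available. The sketch below is a minimal illustration of that pattern; the `infer` and `upload` callables are placeholders, not any particular platform's API:

```python
from collections import deque

class EdgeInferencePipeline:
    """Run inference locally; buffer results until the uplink is available."""

    def __init__(self, infer, upload):
        self.infer = infer        # local model, always available
        self.upload = upload      # returns True on success, False if offline
        self.outbox = deque()     # results awaiting sync

    def process(self, sample):
        result = self.infer(sample)   # decision is made locally, immediately
        self.outbox.append(result)
        self.flush()                  # opportunistic sync
        return result

    def flush(self):
        while self.outbox:
            if not self.upload(self.outbox[0]):
                break                 # link down: keep buffering
            self.outbox.popleft()

# Simulated usage: the uplink fails three times, then recovers.
link = iter([False, False, False])
pipe = EdgeInferencePipeline(infer=lambda x: x * 2,
                             upload=lambda r: next(link, True))
for s in [1, 2, 3]:
    pipe.process(s)
print(len(pipe.outbox))   # results buffered while offline
pipe.flush()              # link restored; buffer drains
print(len(pipe.outbox))
```

The essential property is that the decision path never blocks on the network: inference completes and returns whether or not the uplink is alive, and synchronisation is strictly best-effort.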

These are hard problems, and they require different approaches than cloud AI development. The companies solving them are not cloud computing companies; they are new companies building edge-optimised AI infrastructure.

Several technical trends are converging to make this transition possible. Model compression and quantisation techniques are improving rapidly, allowing sophisticated AI models to run on edge devices with fractional compute resources. Specialised hardware — TPUs, NPUs, and other AI accelerators — is becoming available in edge devices, providing the computational capability necessary. Open standards for edge deployment are emerging, breaking lock-in to proprietary platforms.
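As a toy illustration of the quantisation idea mentioned above, the sketch below maps 32-bit float weights to 8-bit integers using a single scale factor (symmetric, per-tensor quantisation). Production toolchains are far more sophisticated — per-channel scales, calibration, quantisation-aware training — but the 4x memory reduction in exchange for bounded rounding error is the essence:

```python
def quantise_int8(weights):
    """Symmetric per-tensor quantisation: float32 -> int8 plus one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantise(q, scale):
    return [v * scale for v in q]

weights = [0.4213, -1.27, 0.0517, 0.893, -0.331]
q, scale = quantise_int8(weights)
restored = dequantise(q, scale)

# Each int8 weight needs 1 byte instead of 4: a 4x size reduction,
# at the cost of a rounding error bounded by half the scale per weight.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(f"max reconstruction error: {max_err:.4f}")
```

Even this naive scheme keeps every weight within half a quantisation step of its original value, which is why compressed models often lose little accuracy while fitting comfortably on edge hardware.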

At NexaTech Ventures, we are backing companies in three categories within edge AI infrastructure. First, model optimisation and deployment platforms that take large AI models and compress them for edge execution. Second, edge inference engines optimised for low-latency, distributed execution. Third, edge orchestration systems that manage deployment, updates, and monitoring of AI workloads across distributed edge infrastructure.

Where Europe Is Positioned

Europe’s infrastructure advantage in edge computing is subtle but real. The continent has invested heavily in telecom infrastructure and 5G deployment, which provides the network capacity and low-latency connectivity necessary for edge computing. European data protection regulation, far from being a handicap, is driving demand for edge computing solutions that keep sensitive data local.

More importantly, European manufacturing, automotive, and industrial sectors are driving genuine demand for edge AI. German automotive companies need edge AI for autonomous vehicles. Italian manufacturers need edge compute for precision manufacturing. Dutch agriculture needs edge AI for precision farming systems. This creates a virtuous cycle where demand drives investment in edge AI infrastructure, which attracts talent and capital, which improves the capability of the technology, which drives further adoption.

The American edge computing narrative is currently dominated by cloud companies attempting to extend their platforms to the edge. AWS, Google Cloud, and Azure are all offering edge services. But these are fundamentally cloud-centric architectures with edge tacked on. The transformative edge AI architecture is being built by companies that start with the assumption that computation happens at the edge and cloud is the exception, not the rule.

The Investment Case

Edge computing and edge AI represent a structural shift in how software is deployed and run. It is not a temporary trend or a niche market. It is a fundamental architectural transition driven by real technical requirements that cloud computing cannot satisfy.

The investment opportunity sits in multiple layers. At the infrastructure layer, companies building edge-optimised AI platforms and deployment tools are creating durable competitive advantage. At the application layer, companies that are rearchitecting their software for edge execution — autonomous vehicles, industrial systems, healthcare devices — will achieve performance and reliability advantages that will be difficult to displace.

At NexaTech Ventures, we look for edge AI companies that understand both the technical requirements and the operational challenges. The best companies do not just optimise algorithms; they build complete systems for edge deployment, including monitoring, security, update management, and operational support.

The shift from centralised cloud to distributed edge computing represents the most significant infrastructure transition in technology since the migration to cloud. The companies that position themselves early in this transition will build substantial and defensible businesses.

Scott Dylan is the Founder of NexaTech Ventures. He writes on technology infrastructure, AI, and deep tech investment. Read more at scottdylan.com.

