While llms.txt helps AI read the web and APIs help them connect, neither solves the infinite customization found in the economically important tasks in enterprise software. The real solution lies in computer-use agents that operate at the pixel level, learning from human demonstrations to navigate screens directly. This approach bypasses brittle connectors, allowing AI to handle complex workflows while humans remain in the loop for critical verification.

The Screen Is the API

"Why not just use llms.txt to understand the page?"

My friend was watching an AI agent work through a complex enterprise workflow. Clicking through menus, filling forms, handling the kind of nested configuration screens that were the definition of scope creep.

It was a reasonable question. Everyone is excited about llms.txt right now. A simple text file that tells AI systems what your website contains. Finally, the thinking goes, we have a standardized way for machines, LLMs in particular, to understand the web.

But my friend was confusing two very different problems. Reading is not doing.

The web did not become useful when machines learned to read it. It became useful when machines learned to act on it. And right now, reading only gets us so far; the focus has to shift to doing.

Reading Isn’t the Hard Part

Let me be clear about what llms.txt actually does. It is a curated map for LLM inference. A structured way for language models to understand what exists on a website and where to find it. 
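For readers who have not seen one, an llms.txt file is just markdown served at the site root: a title, a one-line summary, and sections of links the site owner wants models to prioritize. The shape below follows the published proposal; the site and URLs are invented for illustration.

```markdown
# Acme Analytics

> Acme Analytics is a reporting platform. This file points language models at the pages most worth reading.

## Docs

- [Quickstart](https://acme.example/docs/quickstart.md): create a workspace and run a first report
- [API reference](https://acme.example/docs/api.md): endpoints, authentication, rate limits

## Optional

- [Changelog](https://acme.example/changelog.md): release history
```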

This is useful for bringing information to an LLM. But it is not a control mechanism. It does not let AI systems actually do anything. The gap between reading and acting is where the real work begins.

The Action Space

When people talk about AI automation, they usually mean APIs. Expose endpoints, let the AI call them, and you have automation. Simple.

Except it is not simple at all.

APIs expose only what developers choose to expose. They represent a curated subset of functionality that someone decided was worth the engineering effort to formalize. And in enterprise software, that subset is usually tiny compared to what users actually need to do.

Then came MCP, the Model Context Protocol. MCP tries to solve the connector problem. Instead of every AI system needing custom integrations with every application, you build one MCP connector and any MCP-compatible AI can use it.

This is an improvement. It solves the M×N problem where M AI systems need to integrate with N applications. But it assumes someone builds the connector in the first place.

Building these connectors is still hard. It requires understanding both the application and the MCP protocol. Most enterprise software will never get proper MCP support because the economics, I believe, are hard to justify.

Attempts to automate API-to-MCP conversion have become popular, but they mostly produce brittle, low-level tools. As Han Lee and others point out, REST APIs are designed around nouns (resources with GET/PUT/POST/DELETE), while MCP works best when tools are verbs (deleteRow, createTask). Auto-wrapping one into the other hides that mismatch instead of solving it.
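To see the mismatch, compare the shapes. A REST API exposes nouns like /tasks/{id} and leaves the verbs implicit in HTTP methods; an MCP connector is most useful when it hands the model explicit verbs. Here is a minimal sketch using the official MCP Python SDK's FastMCP helper; the task tools and the in-memory store standing in for a real backend are hypothetical.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tasks")

# Hypothetical in-memory store standing in for the real application backend.
_TASKS: dict[int, dict] = {}

@mcp.tool()
def create_task(title: str, assignee: str) -> int:
    """Create a task and return its ID."""
    task_id = len(_TASKS) + 1
    _TASKS[task_id] = {"title": title, "assignee": assignee}
    return task_id

@mcp.tool()
def delete_task(task_id: int) -> bool:
    """Delete a task; returns True if it existed."""
    return _TASKS.pop(task_id, None) is not None

if __name__ == "__main__":
    mcp.run()
```

Auto-generating something like this from a REST spec tends to produce one thin tool per endpoint, which pushes the burden of composing them back onto the model.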

The M×N×P Problem

There is a deeper issue that neither APIs nor MCP can address. Call it the P variable: interface diversity.

P represents the number of unique ways the same software can be configured. And in enterprise software, P grows to enormous scale.

Consider SAP. A single SAP S/4HANA server contains tens of thousands of customizing tables. Every implementation is different. Every organization has its own approval chains, its own business rules, its own custom ABAP developments.

Here is a concrete example. Take something supposedly simple: a purchase order approval workflow. In a real SAP implementation, this involves parallel approval processes with all-or-nothing requirements. Custom rules like auto-approve if a contract covers the full purchase order amount. Multi-level approval chains where limits are maintained in custom tables. Dynamic role assignment based on cost center responsibility.

None of this is standard.

The approval chain requires both the Department Manager and Finance Department to approve simultaneously. Either rejection kills the whole workflow. 

Then come the rules. If the purchase order references a contract and the totals match, auto-approve. Otherwise, check approval limits in a custom table. If the first approver lacks sufficient authority, cascade to the next level.

And the approvers themselves? Assigned dynamically. Sometimes it is the Manager of Workflow Initiator. Sometimes the Cost Center Responsible. Sometimes specific users pulled from yet another custom table.
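To make the customization concrete, here is a rough sketch of that routing logic written out. Every name in it is invented, and it leaves out the parallel all-or-nothing step entirely; in a real implementation the limits live in custom tables and the rules in custom ABAP, which is exactly why no generic connector knows about them.

```python
# Hypothetical approval-routing sketch. Limits, roles, and resolution rules
# are invented stand-ins for a customer's custom tables and code.
APPROVAL_LIMITS = [("DEPARTMENT_MANAGER", 10_000), ("FINANCE_HEAD", 100_000), ("CFO", None)]

def resolve_role(role: str, cost_center: str) -> str:
    # Dynamic assignment: e.g. whoever is responsible for this cost center.
    return f"{role}/{cost_center}"

def approvers_for(amount: float, contract_covered: float, cost_center: str) -> list[str]:
    # Auto-approve when a referenced contract covers the full PO amount.
    if contract_covered >= amount:
        return []
    # Otherwise cascade through the limit table until someone has authority.
    chain = []
    for role, limit in APPROVAL_LIMITS:
        chain.append(resolve_role(role, cost_center))
        if limit is None or amount <= limit:
            break
    return chain
```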

This is one workflow in one module. 

It requires consultants with deep domain knowledge to implement because the out-of-the-box logic is too simple for how real organizations actually work.

This is the M×N×P problem. Even if you solved M×N with perfect connector protocols (like MCP), you would still face the reality that every enterprise implementation is effectively a unique interface.

Computer-Use as the Universal Layer

There is one interface that is universal: the screen.

Computer-use agents operate at the pixel level. They see what humans see. They click where humans click. They navigate the same menus and fill out the same forms.

This sounds crude compared to elegant API calls. But it has one massive advantage: it works with everything. No connector required. No API exposure decisions. No MCP protocol adoption. If a human can do it, a computer-use agent can learn to do it.
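Mechanically, the loop is not exotic. Here is a minimal sketch of one step, using pyautogui for the screen and input side; propose_action is a placeholder for whatever vision-language policy actually drives the agent.

```python
import time
import pyautogui  # screenshots plus synthetic mouse and keyboard input

def propose_action(screenshot) -> dict:
    """Placeholder policy: a real agent sends the image to a model and gets an action back."""
    return {"type": "click", "x": 640, "y": 360}

def run_step() -> None:
    screenshot = pyautogui.screenshot()              # see what a human sees
    action = propose_action(screenshot)              # decide what a human would do
    if action["type"] == "click":
        pyautogui.click(action["x"], action["y"])    # click where a human would click
    elif action["type"] == "type":
        pyautogui.typewrite(action["text"], interval=0.05)
    time.sleep(0.5)                                  # let the UI settle before the next observation
```

Everything hard lives inside propose_action, which is why demonstrations matter so much.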

The question is whether computer-use works well enough for production use. And here the research is early but encouraging.

The Demonstration Effect

The SCUBA benchmark tests AI agents on real Salesforce CRM workflows. In zero-shot settings, meaning no task-specific training, open-source models achieved less than 5% success rates. Even strong models that perform well on generic desktop benchmarks failed catastrophically when confronted with actual enterprise software.

But with demonstrations, meaning examples of humans completing the workflows, success rates jumped to 50%. At the same time, agent completion time and cost dropped by 13% and 16%, respectively.

General capability is not enough. You need specific training on specific workflows.

Data Efficiency

In my experience, collecting computer-use trajectories is painful. Domain experts rarely understand what actually challenges a model. The infrastructure stacks on top of brittle web environments. Building those environments is pure tedium. When every example costs this much, data efficiency stops being nice-to-have.

Which is why the PC Agent-E research matters. Trained on just 312 trajectories, the model achieved a 141% improvement over the base model.

312 examples. Not millions. Not even thousands. A few hundred carefully chosen demonstrations of the exact workflows that need automating.

The model outperformed Claude 3.7 Sonnet with extended thinking on the WindowsAgentArena benchmark. And it generalized well to different operating systems, suggesting the learned behaviors were not brittle.

The economics of enterprise AI automation are simple: you do not need massive datasets. You need the right datasets from the right workflows.

The Honest Trade-Off

Now for the uncomfortable part. Generalization is necessary but not sufficient for high-stakes operations.

The same research that shows promising results also reveals gaps. Some agents that perform well on generic benchmarks like OSWorld achieve less than 5% success on specialized enterprise environments. Despite advances, today's RL systems struggle to generalize beyond narrow training contexts.

The sim-to-real gap persists. An agent that performs flawlessly in simulation may fail in production due to unmodeled variables. 

For high-volume, repetitive workflows like expense approvals, CRM updates, and standard procurement, trained computer-use agents are approaching production readiness. The error rate is acceptable because any single mistake is recoverable.

For one-off, high-stakes operations like schema migrations, financial reconciliations, and compliance configurations, the calculus is different. A database configuration error can cost millions. A compliance failure can trigger regulatory action.

The honest answer is that computer-use can handle navigation and execution for these tasks, but humans must remain at verification checkpoints. The agent does the clicking. The human confirms the consequences.

This is not a failure of the technology. It is appropriate risk management. And it still represents an enormous productivity gain. Navigating to the right screen, filling in the right fields, and preparing the right configurations is most of the work. Human verification at critical decision points is the remaining essential piece. At least for now.

Down The Middle: Agents and Humans

The path forward is not pure automation or pure human control. It is hybrid workflows where computer-use agents handle the interface complexity while humans handle the judgment calls. Human-in-the-loop is already the norm for production AI agents.

This requires new infrastructure. You need training pipelines for enterprise-specific demonstrations. You need simulation environments that match production configurations. You need checkpoint mechanisms that pause for human review at appropriate moments. Companies like Applied Compute, Theta, Osmosis, and Scale AI are starting to build this infrastructure.
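The checkpoint piece, at least, needs no exotic machinery. Here is a sketch of a pause-for-review gate; in practice the approval would arrive through a ticket, a chat message, or a review UI rather than stdin, and all names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Checkpoint:
    description: str      # what the agent is about to commit
    staged_changes: dict  # fields and values already filled in on screen

def request_human_approval(cp: Checkpoint) -> bool:
    """Block until a reviewer approves or rejects the staged action."""
    print(f"Agent staged: {cp.description}")
    for field, value in cp.staged_changes.items():
        print(f"  {field}: {value}")
    return input("Approve final submit? [y/N] ").strip().lower() == "y"

def complete_workflow(cp: Checkpoint, submit: Callable[[], None]) -> None:
    # The agent has already navigated and filled everything in; it stops here.
    if request_human_approval(cp):
        submit()  # only now does the agent click the final confirm button
    else:
        print("Rejected: roll back and escalate to a human operator.")
```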

But the hard technical problem, making computers reliably operate arbitrary interfaces, is being solved. The remaining problems are organizational and economic. Those problems have a tendency to get solved when the benefits are large enough.

The best agents still fail on most real enterprise tasks. But a few years ago they could barely hit a single submit button. The screen is the only universal interface. That's where the work should go.

