Kalshi crosses billion-dollar mark as DC’s legal dust begins to settle

2025/06/26 02:22

With a courtroom battle barely in the rearview, Kalshi is reportedly raising over $100 million at a valuation topping $1 billion. The timing suggests a calculated bet: that regulated prediction markets are finally finding legal and institutional footing.

On June 25, Bloomberg reported that Kalshi, the federally regulated prediction market, is raising over $100 million in a funding round led by crypto investment giant Paradigm. The deal would push its valuation above $1 billion and put it in the same league as its unregulated competitor, Polymarket, which is also rumored to be aiming for unicorn status with a fresh $200 million capital injection.

Kalshi’s raise comes just weeks after the Commodity Futures Trading Commission abandoned its legal fight to block Kalshi from offering political event contracts, effectively greenlighting a market that lets users bet on election outcomes under U.S. oversight.
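For readers unfamiliar with event contracts, the underlying mechanics are simple: a binary contract trades between $0 and $1 and settles at $1 if the event occurs, so its price doubles as a crowd-implied probability. The sketch below is a minimal illustration of that arithmetic only; the prices and position sizes are hypothetical, fees are ignored, and nothing here reflects Kalshi's actual contract specifications or fee schedule.

```python
# Minimal sketch of binary event-contract arithmetic.
# All numbers are hypothetical illustrations, not real Kalshi quotes or fees.

def implied_probability(yes_price: float) -> float:
    """A YES contract that settles at $1.00 implies P(event) roughly equal to its price."""
    return yes_price / 1.00

def settlement_pnl(contracts: int, yes_price: float, event_occurred: bool) -> float:
    """Profit or loss on a YES position held to settlement (fees ignored)."""
    cost = contracts * yes_price
    settlement = contracts * 1.00 if event_occurred else 0.0
    return settlement - cost

if __name__ == "__main__":
    price = 0.63  # hypothetical YES price of $0.63
    print(f"Implied probability: {implied_probability(price):.0%}")
    print(f"P&L if event occurs, 100 contracts: ${settlement_pnl(100, price, True):.2f}")
    print(f"P&L if it does not,  100 contracts: ${settlement_pnl(100, price, False):.2f}")
```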

As legal clouds lift, Kalshi turns to growth and distinction

The CFTC’s recent surrender in its case against Kalshi marks a turning point. For months, the agency argued that political betting threatened market integrity, but Judge Jia Cobb’s September ruling, later upheld, found the CFTC overstepped its authority.

The agency’s abrupt withdrawal in May, without explanation, suggests regulators may be shifting tactics rather than conceding entirely. Advocacy groups like Better Markets warn the precedent could invite manipulation and distort election integrity, but for investors, it signals a rare alignment: a crypto-native business model operating within U.S. law.

While Kalshi has not publicly detailed how the capital will be deployed, the company is likely looking to expand its footprint ahead of the 2026 midterms and further develop its exchange infrastructure while scaling its compliance architecture.

The CFTC retreat effectively removed one of the biggest obstacles to Kalshi’s long-term operation inside the U.S., and the company is keen to set precedents for how risk, opinion, and information might be traded legally in the open.

By contrast, Polymarket, Kalshi’s closest competitor, continues to operate in murkier waters.

Regulation vs. rebellion: the billion-dollar split in prediction markets

Polymarket is nearing a $200 million raise at a comparable valuation, per The Information. Despite being barred from serving U.S. users, the platform has thrived, processing $3.2 billion in election bets in 2024 alone.

Its integration with X embeds real-time prediction data into social feeds, blurring the line between gambling and crowd-sourced forecasting.

But Polymarket’s success comes with risks. Former CFTC chair Rostin Behnam repeatedly singled out offshore platforms “providing exposure to U.S. customers,” a thinly veiled reference to the market’s VPN-reliant user base.

While backers like Peter Thiel’s Founders Fund and Vitalik Buterin bet on its censorship-resistant model, the looming question is whether regulators will tolerate its growth or clamp down harder.
