
Disproving the "Innovation Against Safety" Doctrine in AI Regulation

2025/11/03 01:00

Over the past decade or so, the breakneck pace of AI development has no doubt improved the well-being of millions of people, and, with modest effort to stay on that trajectory, the technology could continue to do so for decades to come.

In my opinion, however, recent actions undertaken by many AI companies, as well as by the governments of the countries leading AI development, in aggregate constitute a deviation from that path. With new research pointing toward the potential harms of AI chatbots, it is necessary that we begin considering regulation to limit the extent of their availability.

Inspired by the implications of the Grok Companions feature, this article discusses the need for governmental regulation, refutes common misconceptions used to defend the commercial distribution of AI chatbots, and proposes how future legislation might prevent or penalize the safety lapses seen in current chatbot models.

Grok’s Troubles

Grok has been one of the most contentious commercial AI models since its inception, periodically becoming a spotlight for the issue of corporate control over AI models thanks to Elon Musk’s hilariously unsuccessful attempts to use it as a tool to advance a right-wing agenda on X.

Yet, recently, Grok pushed out its new Companions feature, which attracted yet more controversy. On the surface, Companions is a series of chatbots reminiscent of previous offerings from Meta AI and Character.AI, yet it outdoes all of these in a surprisingly absurd way. The first two companions are Rudi, a swearing red panda, and Ani, a blonde anime girl, each consisting of a fine-tuned version of Grok paired with an animated avatar.

Media coverage has, unsurprisingly, focused most of its attention on Ani. A variety of online reports corroborate the chatbot’s inherently romantic design, with several reviewers taking particular note of the ‘love levels’ a user may reach to unlock increasingly sexual conversations, along with accompanying changes to the avatar. WIRED reviewers also noted the model’s readiness to openly discuss BDSM topics, as well as its clingy style of speech and inconsistent child filter.

Since I was not willing to purchase the $30-per-month SuperGrok subscription required to access the Companions feature, I was unable to independently verify some of the claims about the chatbot; the internet, on the other hand, seemed to agree on one thing: this particular chatbot was excessively bold. Rudi, for all its questionable content, attracted far less controversy. The cartoon red panda tends to sling insults and dark jokes that many found unfunny and ridiculous, and most reviewers sidelined the character, dismissing it as a less important companion catering mostly to Gen-Z kids.

To tell the truth, I found both characters rather dull. What interested me instead was the distinct release and reception of this otherwise dime-a-dozen romantic chatbot. Companions is, among the products released by the “industry leaders” of AI (e.g., OpenAI, DeepMind, Anthropic, Meta), the first chatbot designed specifically to engage in romantic roleplay, despite commonplace ethical concerns ranging from alleged long-term psychological effects to the exploitation of vulnerable demographics.

What stood out to me immediately was the distinct paucity of regulation surrounding chatbots like these, along with the fact that, beyond answering a few dissenting voices, xAI was able to release the product with impunity. This all points toward the central question of technology regulation: should new technology be closely watched to safeguard users, or given free rein to grow and develop?

Responsibility and Innovation

As with all incipient technologies, the psychological effects of AI chatbot use on humans are not yet scientifically established. Many people have long surmised that such technologies could exacerbate existing problems, and initial reports have found a negative correlation between well-being and chatbot usage.

Despite this, these relatively untested technologies are well on their way into the mainstream. In weighing whether these technologies are harmful, technology commentators and policymakers alike overlook a crucial point: ideally, such a question should never need to be asked of a commercial product in the first place. Airline passengers would not be happy knowing that their plane might experience catastrophic failure.

Likewise, clinical trial participants would not take well to learning that animal testing had not preceded them. A cardinal principle of engineering is that, no matter what, safety comes first. To get an idea of the potential dangers of these chatbots, in any case, we need only look at the cases of two teenagers whose suicides have been linked to chatbots complicit in their suicidal ideation.

Many proponents of the current “develop now, fix later” doctrine point to the obvious: we’re locked in a race of innovation with China. My response is one of complete agreement: we are, in fact, locked in an AI “arms race”, and the products of our time will likely be adopted into the arsenals of cyber-warfare, among many other things. Even so, I contend that the need for innovation is not a case for disregarding safety; we should never assume that rapid technological progress and consumer safety are mutually exclusive. I anticipate and respond to two notable objections to this claim:

First, one might object that commercial deployment is the only practical way to surface safety issues at scale. Yet there are plenty of ways to test the reliability and safety of products in beta-testing settings. While such tests have no doubt been conducted (notably, OpenAI rolls out new models to Pro users before other tiers), it is not an overstatement to say that the mass deployment of many commercially available chatbots disregards user safety, with many ChatGPT models failing to divert or end conversations even when users signal distress. Even if commercial deployment were necessary to uncover many of these issues, it would be far more reasonable to take adequate safeguards to protect vulnerable user groups, which is currently not the case. A sketch of one such safeguard follows.
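To make this concrete, below is a minimal sketch of the kind of deployment-side safeguard this paragraph argues is missing: a wrapper that diverts the conversation to crisis resources when a user signals distress. It is a hypothetical illustration, not any vendor’s actual pipeline; the keyword list and `detect_distress` heuristic are placeholders that a real system would replace with a trained classifier and localized resources.

```python
# Minimal sketch of a deployment-side distress guardrail. Hypothetical
# illustration only, not any vendor's actual safety pipeline; a production
# system would replace the keyword heuristic with a trained classifier.

from typing import Callable

# Crude stand-in for a distress classifier (hypothetical marker list).
DISTRESS_MARKERS = (
    "kill myself",
    "end my life",
    "hurt myself",
    "want to die",
)

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "In the US, you can reach the 988 Suicide & Crisis Lifeline by "
    "calling or texting 988."
)

def detect_distress(message: str) -> bool:
    """Return True if the message contains a distress marker."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def guarded_reply(message: str, model_reply: Callable[[str], str]) -> str:
    """Divert to crisis resources instead of handing the turn to the model."""
    if detect_distress(message):
        return CRISIS_MESSAGE
    return model_reply(message)

if __name__ == "__main__":
    echo_model = lambda m: f"(model reply to: {m!r})"
    print(guarded_reply("tell me a joke", echo_model))
    print(guarded_reply("i want to end my life", echo_model))
```

The point of this design is that the check runs outside the model itself, so a roleplay-tuned chatbot cannot talk its way past it.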

Second, one might object that mass distribution supplies the training data that drives AI progress. In reality, chat transcripts are usually not processed verbatim in the RLHF pipelines used by companies like OpenAI and Google. While transcripts may inform the safety and engagement tuning of the corresponding chatbots, separate data pipelines, consisting mostly of high-quality technical data created or verified by humans, drive the aspects of training most pertinent to reasoning performance and specialized capabilities (e.g., coding and math). There is, therefore, scant basis for claiming that the widespread distribution of these AI chatbots is a prerequisite for the rapid advancement of AI capabilities, as the schematic below illustrates.
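As a rough illustration of this separation, the sketch below models the two pipelines as independent functions: a curated, human-verified corpus feeding capability training, and chat-log ratings feeding a separate preference signal. This is a schematic assumption based on public descriptions of RLHF, not a description of any company’s actual data infrastructure; every name in it is hypothetical.

```python
# Schematic sketch of two separate training-data pipelines, loosely based
# on public descriptions of RLHF. All names are hypothetical; this is not
# any company's actual internal system.

from dataclasses import dataclass

@dataclass
class Example:
    text: str
    source: str  # "curated" (human-created/verified) or "chat" (user logs)

def build_capability_dataset(curated_corpus: list[str]) -> list[Example]:
    """Capability training draws on high-quality, human-verified data,
    not raw chat transcripts."""
    return [Example(text=t, source="curated") for t in curated_corpus]

def build_preference_dataset(
    chat_feedback: list[tuple[str, int]],
) -> list[Example]:
    """Chat logs feed a separate preference/safety signal: user ratings
    of responses, used to tune engagement, not verbatim capability data."""
    return [
        Example(text=prompt, source="chat")
        for prompt, rating in chat_feedback
        if rating != 0  # keep only interactions that were actually rated
    ]

if __name__ == "__main__":
    capability = build_capability_dataset(
        ["human-reviewed proof", "verified sorting implementation"]
    )
    preference = build_preference_dataset(
        [("prompt A", 1), ("prompt B", -1), ("prompt C", 0)]
    )
    print(f"{len(capability)} capability examples, "
          f"{len(preference)} preference examples")
```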

Hopefully, I have shown that the need for innovation isn’t the root cause of these safety lapses; the persistent underinvestment in safety protocols and testing is. Yet the practical course of action to correct this remains a matter of debate.

The Role of Regulation

The obvious solution to this lack of safety standards is simply to increase government regulation of the training and distribution of chatbots. What is not obvious, however, is how such a broad proposal would work in practice. In the early 20th century, the United States learned through Prohibition that harsh, all-encompassing bans on a harmful product don’t work: banning alcohol without stripping the substance of its desirability simply fueled a black-market fever, by some accounts increasing rather than decreasing total alcohol consumption.

In the late 20th century, to combat the mass consumption of cigarettes, the US government took a different approach: instead of banning cigarettes outright, it reduced the social desirability of tobacco products through widely circulated reports detailing how smoking causes lung cancer, mandates requiring visible health warnings on every product, and limits on the pervasiveness of cigarette advertising. These subtler measures contributed to a continuous decline in cigarette consumption, from a historic peak of almost 4,000 to roughly 800 cigarettes per capita per annum.

The takeaway from history is that governmental control over unsafe chatbots should go beyond legal barriers to consumption and development. Regulators should also seek to lessen the perceived social permissibility of consuming these products, whether through campaigns or public research. That said, it remains unclear to what degree the government can actually influence wider social shifts, with public opinion today driven more by viral social media trends than by political-economic policy. In all, there is really no downside to a few promptly instated, yet well-constructed, regulations on AI chatbots.


Written by Thomas Yin
