RISE Act provides AI guardrails but lacks details



Civil liability law is rarely good dinner party conversation, but it can have an enormous impact on how emerging technologies such as artificial intelligence develop.

If liability rules are drawn up badly, they can create barriers to future innovation by exposing entrepreneurs, in this case AI developers, to unnecessary legal risk. So argues US Senator Cynthia Lummis, who last week introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025.

The bill seeks to shield AI developers from civil liability lawsuits so that physicians, lawyers, engineers and other professionals “can understand what AI can and can’t do before relying on it.”

Early responses from sources contacted by Cointelegraph were mostly positive, though some criticized the bill’s limited scope and its shortcomings on transparency standards, and others questioned the liability shields it offers AI developers.

Most characterized the bill as a work in progress rather than a finished document.

Is the RISE Act a “gift” to AI developers?

Hamid Ekbia, a professor at Syracuse University’s Maxwell School of Citizenship and Public Affairs, said the Lummis bill is “timely and needed.” (Lummis has called it the nation’s first piece of “targeted liability reform legislation for professional-grade AI.”)

In Ekbia’s view, however, the bill tilts the scales in favor of AI developers. The RISE Act requires them to publicly disclose model specifications so that professionals can make informed decisions about the AI tools they choose to use, but:

“It places most of the risk burden on knowledgeable professionals, demanding of developers only ‘transparency’ in the form of technical specifications (model cards and specs) while otherwise providing them with broad immunity.”

Not surprisingly, some were quick to describe the bill as a “gift” to AI companies. One online political forum that describes itself as “left of center” noted that “AI companies don’t want to be sued for the failures of their tools, and this bill, if passed, will accomplish that.”

Not everyone agrees. “I wouldn’t call the bill a ‘gift’ to AI companies,” Felix Shipkevich, a law firm principal, told Cointelegraph.

The RISE Act’s proposed immunity provision, as Shipkevich explained it, appears aimed at shielding developers from liability for the unpredictable behavior of large language models, particularly where there is no negligence or intent to cause harm. From a legal perspective, that is a rational approach, he said. He added:

“Without some form of protection, developers could face limitless exposure for outputs they have no practical way of controlling.”

The scope of the proposed legislation is fairly narrow. It focuses on scenarios in which professionals use AI tools while dealing with their clients or patients. A financial adviser might use an AI tool to help develop an investment strategy for an investor, for example, or a radiologist might use AI software to help interpret an X-ray.

Related: Senate passes GENIUS stablecoin bill amid concerns over systemic risk

The RISE Act does not really address cases in which there is no professional intermediary between the AI developer and the end user, such as when chatbots are used as digital companions for minors.

One such civil liability case arose recently, when a teenager committed suicide after engaging with an AI chatbot for months. The deceased’s family said the software had been designed in a way that was not reasonably safe for minors. “Who should be held liable for the loss of life?” Ekbia asked. Such cases are not addressed by the proposed Senate legislation.

“Clear and unified standards are needed so that users, developers and all stakeholders understand the rules of the road and their legal obligations,” Ryan Abbott, professor of law and health sciences at the University of Surrey’s Law School, told Cointelegraph.

But that is not easy to achieve, because AI can create new kinds of potential harm, given the technology’s complexity, opacity and autonomy. The medical field will be particularly challenging when it comes to civil liability, according to Abbott, who holds both medical and law degrees.

For example, doctors have historically outperformed AI software in medical diagnoses, but recent evidence suggests that in certain areas of medical practice, keeping a human in the loop produces “actually worse results than letting AI do all the work,” Abbott explained. “This raises all sorts of interesting liability questions.”

Who will pay compensation when the physician is no longer in the loop? Will malpractice insurance cover it? Maybe not.

The AI Futures Project, a nonprofit research organization, has tentatively endorsed the bill (it was consulted as the bill was being drafted). But executive director Daniel Kokotajlo said the transparency disclosures required of AI developers fall short.

“The public deserves to know what goals, values, agendas, biases, instructions, etc., companies are attempting to give to powerful AI systems,” Kokotajlo said. The bill does not require such transparency and therefore does not go far enough, in his view.

Furthermore, “companies always have the option to accept liability instead of being transparent, so whenever a company wants to do something that the public or regulators wouldn’t like, they can simply opt out,” Kokotajlo said.

The EU’s “rights-based” approach

How does the RISE Act compare with the liability provisions of the European Union’s AI Act of 2023, the first comprehensive regulation of AI by a major regulator?

The EU’s stance on AI liability has been in flux. An EU AI liability directive was first conceived in 2022, but it was withdrawn in February 2025, a move some attributed to AI industry lobbying.

Nevertheless, EU law generally adopts a human-rights-based framework. As noted in a recent UCLA Law Review article, a rights-based approach “emphasizes the empowerment of individuals,” especially end users such as patients, consumers or clients.

A risk-based approach, like that of the Lummis bill, by contrast, builds on processes, documentation and assessment tools. It would focus more on bias detection and mitigation, for example, than on providing affected people with concrete rights.

When Cointelegraph asked Kokotajlo whether a “risk-based” or a “rights-based” approach to civil liability was more appropriate for the US, he replied: “I think the focus should be risk-based and centered on those who create and deploy the technology.”

Related: Crypto users vulnerable as Trump dismantles consumer watchdog

The EU generally takes a more proactive approach to such matters, added Shipkevich. “Their laws require AI developers to show upfront that they are complying with safety and transparency rules.”

Clear standards needed

The Lummis bill will probably require some modifications before it is enacted into law, if it ever is.

“I have a positive view of the RISE Act as long as this proposed legislation is seen as a starting point,” Shipkevich said. “It is reasonable, after all, to provide some protection to developers who are not acting negligently and have no control over how their models are used downstream.” He added:

“If the bill evolves to include actual transparency requirements and risk management obligations, it can lay the foundation for a balanced approach.”

According to Justin Bullock, vice president of policy at Americans for Responsible Innovation (ARI), “the RISE Act puts forward some strong ideas, including federal transparency guidance, a safe harbor with limited scope and clear rules around liability for professional adopters of AI,” though ARI has not endorsed the legislation.

But Bullock also raised concerns about transparency and disclosures, namely about ensuring that the required transparency evaluations are effective. He told Cointelegraph:

“Releasing a model card without a strong third-party audit and risk assessment can lead to a false sense of security.”

Still, all in all, the Lummis bill “is a constructive first step in the dialogue about federal AI transparency requirements,” Bullock said.

Assuming the legislation is passed and signed into law, it would take effect on Dec. 1, 2025.

Magazine: Bitcoin’s invisible tug-of-war between suits and cypherpunks