The article discusses a shift in focus away from broad debates over AI regulation deadlines toward a deeper concern: the way AI is embedded into everyday commercial systems, particularly quoting and contracting platforms. These systems are central to how businesses set prices, create commercial terms, manage risk, and recognize revenue. As AI tools increasingly automate and optimize these core processes, the systems they power may soon be classified as high-risk under regulations like the EU AI Act, which would bring stricter requirements for transparency and oversight.
One major worry is that many organizations think they are prepared for regulation at the policy level but haven't examined how decisions are actually made inside their operational systems. Quoting and contracting tools touch multiple business functions, including sales, legal, finance and compliance, and layering AI across those functions can blur who is responsible for a given outcome. Companies often struggle to explain how AI influences pricing decisions or the generation of contract terms, and the result is fragmented accountability even though these decisions carry significant commercial and regulatory weight.
The article argues that traditional governance, framed as policies sitting outside systems, won't be enough. Instead, oversight must be integrated into the systems where decisions happen. In practice, this means AI-enabled configure-price-quote (CPQ) and contract lifecycle management (CLM) tools that produce clear audit trails and expose the logic behind each decision. With those structures in place, teams across functions can trace how the AI arrived at a recommendation, sustain ongoing human oversight, and intervene before small errors escalate into larger risks.
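To make the idea of an embedded audit trail concrete, here is a minimal sketch in Python of the kind of record an AI-assisted quoting step might emit. The class, function, and field names (QuoteDecisionRecord, approve_quote, suggested_discount_pct, and so on) are hypothetical illustrations, not taken from any particular CPQ or CLM product.

```python
# Illustrative sketch only: one auditable record per AI-assisted quote,
# capturing what the model proposed, why, and who signed off or overrode it.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class QuoteDecisionRecord:
    """What the model suggested, the inputs it saw, and the human decision."""
    quote_id: str
    model_version: str                 # which model or prompt produced the suggestion
    inputs: dict                       # features the model saw (customer tier, volume, region)
    suggested_discount_pct: float      # the AI's proposed outcome
    rationale: str                     # explanation surfaced to the reviewer
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewer: Optional[str] = None     # filled in when a human approves or overrides
    final_discount_pct: Optional[float] = None
    override_reason: Optional[str] = None


def approve_quote(record: QuoteDecisionRecord, reviewer: str,
                  final_discount_pct: float, override_reason: str = "") -> QuoteDecisionRecord:
    """Record the human decision alongside the AI suggestion in the same trail."""
    if final_discount_pct != record.suggested_discount_pct and not override_reason:
        raise ValueError("An override must state why the AI suggestion was changed.")
    record.reviewer = reviewer
    record.final_discount_pct = final_discount_pct
    record.override_reason = override_reason or None
    return record


if __name__ == "__main__":
    rec = QuoteDecisionRecord(
        quote_id="Q-1042",
        model_version="pricing-model-2024-06",
        inputs={"customer_tier": "enterprise", "volume": 500, "region": "EU"},
        suggested_discount_pct=12.0,
        rationale="High volume, competitive EU segment; within standard discount band.",
    )
    # The reviewer tightens the discount and must document the deviation.
    approve_quote(rec, reviewer="j.doe@finance", final_discount_pct=10.0,
                  override_reason="Margin floor for this product line.")
    print(rec)
```

The design point this sketch is meant to illustrate is that the AI suggestion and the human decision live in the same record, so any override is documented at the moment it happens rather than reconstructed later.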
Finally, the piece suggests that the ongoing debate about regulatory timelines should be seen as an opportunity to strengthen internal readiness. Organizations need clear ownership of AI‑driven decisions across departments, agreed standards for documenting and reviewing those decisions, and practical processes for continuous oversight. Ultimately, gaining customers’ and regulators’ trust will depend on demonstrating that commercial decisions powered by AI are fair, consistent and compliant — with quoting and contracting systems serving as an early proving ground for real organizational maturity in AI governance.