By Joe Z, co-founder of DeAgentAI
Today’s artificial intelligence systems aren’t just applications; they’re becoming essential infrastructure in sectors ranging from finance and healthcare to transportation and education. Yet most AI models remain black boxes, and only a minority of people trust them. A 2025 global survey covering 48,000 respondents across 47 countries found that 66 % of people use AI regularly, yet only 46 % are willing to trust AI systems[14]. Seventy percent believe national and international regulation is needed[15]. More than half of respondents who use AI at work (56 %) report having made mistakes because of it[16]. These figures reveal a paradox: AI adoption is high, but confidence is low.
Trust cannot be manufactured through branding; it is built on transparency and accountability. When AI systems make decisions, whether to approve a loan, recommend medical treatment or trade securities, the affected parties deserve to know how those decisions were reached and to challenge them when they are wrong. The COMPAS recidivism algorithm, widely used in U.S. courts, illustrates what happens in the absence of transparency. An investigation by ProPublica found that the proprietary model wrongly labeled Black defendants as high‑risk almost twice as often as White defendants; Black defendants were 77 % more likely to be flagged as at risk of committing a future violent crime and 45 % more likely to be predicted to commit a future crime of any kind[17]. The model correctly predicted violent recidivism only 20 % of the time[18]. Because the calculations were secret, defendants could not challenge their scores, and courts could not identify the source of the bias[19].
Treating transparency as an optional feature allows such harms to propagate. It also undermines markets. The 2010 Flash Crash showed how hidden algorithms, interacting in unpredictable ways, could briefly erase nearly a trillion dollars in market value within minutes[6]. When the people running the market cannot see the logic of the machines, small data errors can cascade into systemic failures[7].
Transparency must therefore be embedded as infrastructure. This means building systems that record and expose the logic, data sources and decision paths of AI models. It means requiring technical documentation and audit logs by default. The European Union’s AI Act takes steps in this direction. Providers of general‑purpose AI models must keep technical documentation and transparency records and cooperate with regulators[20]. High‑risk systems, such as those used in credit, employment, critical infrastructure and law enforcement, must undergo standardized evaluations, report serious incidents and ensure cybersecurity[21]. National market surveillance authorities can withdraw AI systems that fail to meet these obligations[22], and penalties can reach 7 % of global annual turnover[23]. These measures recognize that transparency is not a luxury; it is a prerequisite for safe deployment.
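To make “audit logs by default” concrete, the sketch below shows, in Python, one way a system could record an automated decision so it can be reconstructed and challenged later. The schema and field names are illustrative assumptions for this article, not a format prescribed by the AI Act or any regulator.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DecisionRecord:
    """One auditable record of an automated decision (illustrative schema)."""
    model_id: str            # which model produced the decision
    model_version: str       # exact version, so the run can be reproduced
    input_sources: list[str] # provenance of the data the decision relied on
    features: dict           # the inputs actually fed to the model
    output: str              # the decision, e.g. "approve" or "deny"
    explanation: str         # human-readable rationale or top contributing factors
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Tamper-evident hash of the record, suitable for an append-only audit log."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Example: log a fictional credit decision so it can later be audited and appealed.
record = DecisionRecord(
    model_id="credit-scoring",
    model_version="2.3.1",
    input_sources=["bureau_report_2025_06", "application_form"],
    features={"income": 52000, "debt_ratio": 0.31},
    output="deny",
    explanation="debt_ratio above policy threshold of 0.30",
)
print(record.fingerprint())
```

Even a minimal record like this gives a decision recipient and an auditor the same starting point: which model ran, on what data, and why it produced the outcome it did.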
However, regulatory compliance should be the floor, not the ceiling. Companies should strive for open and interpretable models whenever possible. When proprietary considerations prevent full disclosure, independent auditors should be allowed to evaluate models under confidentiality agreements. Decision recipients should have access to explanations and the ability to appeal automated decisions. In critical sectors, source code should be escrowed with regulators to enable post‑incident analysis.
Transparency also requires robust data governance. AI systems are only as trustworthy as the data they train on. The Flash Crash was exacerbated by small data errors[24]. Risk‑assessment algorithms amplify historical biases when data reflect discriminatory policing[17]. Addressing these issues demands industry‑wide standards for data quality, documentation of datasets’ provenance, and mechanisms for individuals to correct their data.
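One lightweight way to document provenance, sketched below with illustrative field names rather than any standardized format, is to ship a machine-readable datasheet alongside each training dataset: where the data came from, when it was collected, under what license, what its known limitations are, and a checksum tying the documentation to the exact files used.

```python
from dataclasses import dataclass
from pathlib import Path
import hashlib

@dataclass
class DatasetProvenance:
    """Minimal machine-readable datasheet for a training dataset (illustrative)."""
    name: str
    source: str             # where the data came from (URL, vendor, internal system)
    collected: str          # collection period, e.g. "2019-01 to 2024-06"
    license: str            # terms under which the data may be used
    known_limitations: str  # sampling gaps, historical biases, missing groups
    sha256: str             # checksum binding the datasheet to the exact files

def checksum(path: str) -> str:
    """Hash the dataset file so audits can verify it was not silently changed."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Example: a fictional loan-applications dataset with its documented caveats.
# In practice the checksum would come from checksum("loan_applications_v4.csv").
card = DatasetProvenance(
    name="loan_applications_v4",
    source="internal underwriting system export",
    collected="2019-01 to 2024-06",
    license="internal use only",
    known_limitations="under-represents applicants with no credit history",
    sha256="(checksum of the exported file)",
)
```

Recording known limitations up front is what allows a later auditor, or an affected individual, to ask whether a biased outcome traces back to the data rather than the model.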
Finally, society needs cultural infrastructure: AI literacy education and clear norms of disclosure. People should know when they are interacting with an AI system and have the right to opt out of automated decision making. The global survey found a public mandate for national and international AI regulation[15]; this mandate should translate into enforceable rights to explanation and redress.
The physical infrastructure of the industrial age—roads, bridges, water systems—was built to public standards because society depended on it. AI is becoming equally foundational. Making transparency optional would be like building roads without guardrails. Trustworthy AI requires transparency to be designed into the very architecture of systems, not slapped on as an afterthought. Without it, adoption will stall and the technology’s benefits will be undermined by hidden risks.