Originally published on digitaljournal.com

How are CIOs approaching AI governance issues and an increasingly fragmented patchwork of regulations? To gain insight, Digital Journal spoke with Doug Gilbert (CIO and Chief Digital Officer at Sutherland).
Digital Journal: As a CIO, are you concerned about your ability to manage regulatory and governance issues with AI deployments? Do you see this concern from other IT leaders?
Doug Gilbert: The rapidly evolving AI regulatory landscape is a universal concern for CIOs, and I share that apprehension about staying ahead. Laws like the EU AI Act, California’s AB 2013, the Texas Responsible AI Act, and the Colorado AI Act (effective 2026) create a fragmented environment demanding agile compliance. The challenge lies not just in meeting current rules but in anticipating their global proliferation.
AI ethics isn’t an abstract ideal – it’s a practical necessity for trustworthy, unbiased outputs aligned with business and user needs. Gartner’s 2024 prediction that 30% of generative AI projects will be abandoned by the end of 2025, driven partly by poor data quality, underscores the point. Without proactive governance, including robust master data management (MDM), CIOs risk compliance penalties and reputational damage from unintended consequences, such as biased AI outputs undermining trust.
DJ: If there are concerns about regulatory and governance issues, what’s causing the problem? Do the number of AI regulations already passed and in the works lead to more challenges?
Gilbert: The complexity and fragmentation of AI regulations drive significant governance challenges. A patchwork of requirements (like the EU AI Act’s emphasis on explainability, California’s AB 1008 enhancing CCPA privacy protections, and proposed CPPA automated decision-making rules) creates operational burdens. For example, aligning AI systems with both EU transparency and California privacy mandates required a six-month data pipeline overhaul, with California’s SB 942 imposing $5,000 daily penalties for non-compliance.
A deeper issue is treating ethics as an afterthought rather than a core pillar, akin to cybersecurity. Many organizations view ethics as “do no harm,” neglecting structured processes like bias testing or decision logging. Poor MDM exacerbates this, feeding inconsistent data into AI and amplifying errors. The lesson is clear: without integrating ethics and clean data into governance, CIOs face ongoing compliance struggles and heightened regulatory and reputational risks.
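To make “decision logging” concrete: at its simplest, it means writing an append-only, structured record of every model decision that an auditor can later replay. The following minimal Python sketch illustrates the idea; the function and field names are illustrative assumptions, not a description of Sutherland’s or any particular vendor’s system.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Illustrative only: logger name, fields, and example values are assumptions.
logger = logging.getLogger("ai_decision_audit")
logging.basicConfig(level=logging.INFO)

def log_decision(model_name, model_version, inputs, output, explanation=None):
    """Record one AI decision as a structured, append-only audit entry."""
    entry = {
        "decision_id": str(uuid.uuid4()),            # unique key for later audits
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,              # ties the decision to a model release
        "inputs": inputs,                            # what the model saw
        "output": output,                            # what it decided
        "explanation": explanation,                  # optional rationale for regulators
    }
    logger.info(json.dumps(entry))
    return entry["decision_id"]

# Example: log a hypothetical loan-approval decision.
log_decision("credit_scorer", "2.3.1",
             {"income": 54000, "tenure_months": 18},
             output="approved",
             explanation="score 0.82 above threshold 0.70")
```

In practice such entries would land in tamper-evident storage rather than a console log, but the essential discipline is the same: every automated decision leaves a record that ties inputs, outputs, and model version together.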
DJ: What’s the solution to regulatory compliance and governance issues?
Gilbert: The solution is embedding ethics and compliance into the AI lifecycle from the outset, aligned with business objectives. For instance, at Sutherland we adapted Google’s AI ethics framework to establish consistent, industry-tailored principles, ensuring flexibility without starting from scratch. Ethical checks are integrated at every stage – vendor selection, data training, and deployment. We rejected a vendor whose AI lacked decision logging, later discovering its outputs reflected biases from flawed data assumptions.
Tools like AI Fairness 360 enable bias testing, and decision logging ensures auditability, helping meet regulations like California’s AB 2013 and the EU AI Act. Robust MDM delivers clean, unified data, critical for compliance and trust. Treating ethics as foundational, like cybersecurity, transforms compliance into a driver of better outcomes. Engaging business leaders to align AI with goals naturally surfaces ethical needs; our stakeholder workshops cut internal resistance to governance measures by over 50%. Proactive ethics enhances trust and ROI while mitigating risks.
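For the bias-testing step, here is a minimal sketch using the open-source AI Fairness 360 (aif360) toolkit Gilbert mentions. The toy data, the “gender” protected attribute, and the “approved” label are assumptions for illustration; a real governance pipeline would run checks like these against production training data and model outputs.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy outcomes: approved=1 is the favorable decision; gender=1 is the
# privileged group. Real data would come from the training pipeline.
df = pd.DataFrame({
    "gender":   [1, 1, 1, 0, 0, 0, 1, 0],
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0
# suggest favorable outcomes are distributed evenly across groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A common screening heuristic, the four-fifths rule, flags a disparate impact below 0.8 for human review; thresholds and remediation steps would be set by each organization’s governance policy.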
About The Interviewer: Dr. Tim Sandle is Digital Journal’s Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
