The business benefits of artificial intelligence — like enhanced customer experience, greater efficiency, and better risk management — are now part of many digital strategies. But when it comes to securing AI systems, many organizations are still playing catch-up.
“People are trying to figure out how best to use AI, but few are thinking about the security risks that come with it from day one,” said Keri Pearlson, a senior lecturer and principal research scientist at MIT Sloan. “That’s the big problem right now.”
To help close that gap, Pearlson and Nelson Novaes Neto, an MIT Sloan research affiliate and CTO of Brazil-based C6 Bank, developed a framework to help technical executives and their teams ask the right questions before AI systems are too far along in development to be secured effectively. Their report, “An Executive Guide to Secure-by-Design AI,” condenses hundreds of technical considerations into 10 strategic questions aimed at identifying risks early and aligning AI initiatives with business priorities, ethical standards, and cybersecurity requirements.
“The idea was to give technical executives a structured way to ask important questions early in the AI systems design process to head off problems later,” said Pearlson, who teaches the MIT Sloan Executive Education course “Cybersecurity for Leaders.”
Why AI risk demands a different approach
AI systems aren’t just another layer of enterprise software. Their defining traits — data dependence, continuous learning, and probabilistic outputs — expose them to a new and growing class of cyber threats.
Among the most urgent AI threats, according to the report:
- Evasion and poisoning attacks, where malicious inputs skew outputs or corrupt training data (poisoning is illustrated in the sketch after this list).
- Model theft and inversion, in which attackers steal proprietary systems or reconstruct sensitive information.
- Prompt injections, where attackers manipulate inputs to force systems to leak data or behave maliciously.
- Privacy attacks, which exploit vulnerabilities to steal sensitive information processed by the AI system, breaching data privacy and confidentiality.
- Hallucinations, where systems confidently produce false outputs that erode trust or misinform users.
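To make one of these attack classes concrete, here is a minimal Python sketch of a label-flipping poisoning attack against a toy classifier. It is illustrative only, not drawn from the report, and it assumes scikit-learn and NumPy are installed.

```python
# Illustrative sketch (not from the report): a label-flipping "poisoning"
# attack against a toy classifier, to show how corrupted training data
# degrades a model even when the test data stays clean.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on data where flip_fraction of the labels are flipped,
    then score on clean held-out data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1 labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of training labels poisoned -> "
          f"clean test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Accuracy on clean test data typically falls as the poisoned fraction grows, a small-scale version of the integrity failures the report warns about.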
These risks can’t be patched over later, and traditional IT architecture frameworks, security models, and security standards from NIST and ISO each address part — but not all — of the problem. “None of them live at the intersection of AI, security, and design,” Pearlson said.
Each category in the new AI Secure-by-Design Executive Framework contributes to managing the risks posed by these and other AI attack vectors.
10 questions for executive AI readiness
The framework distills the work of building a complex security architecture into 10 clear, strategic questions. Each question is aligned with a category of AI system development and is intended to guide conversations early in the design process.
“These are simple questions,” Pearlson said. “But they’re action-packed and designed to uncover deep issues that can change the course of a project.” Each question below is aimed at surfacing vulnerabilities, identifying trade-offs, and ensuring that AI security is woven into strategy — not bolted on after deployment.
- Strategic alignment: How can AI initiatives be aligned with our organizational objectives, budget, values, and ethics?
- Risk management: What methodologies can we use to identify, assess, and prioritize AI-specific risks?
- Control implementation: What controls and tools will we implement to mitigate the identified AI risks?
- Policy, standards, and procedure establishment: What policies and procedures will we put in place to ensure data quality, privacy, ethics, and cybersecurity?
- Governance structure: What governance structure will oversee AI project development, deployment, security, and operations?
- Technical feasibility: Is the proposed AI architecture compatible with our existing infrastructure?
- Resource allocation: What level of security effort is required, and how will we allocate resources?
- Performance and security monitoring: What metrics will we use to track AI effectiveness and security? (One candidate metric is sketched after this list.)
- Continuous improvement: What mechanisms will support ongoing monitoring and adaptation of our AI practices?
- Stakeholder engagement: How will we communicate the importance of AI security, privacy, and ethics to foster shared responsibility?
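The monitoring question, in particular, invites a concrete answer. The hypothetical Python sketch below tracks input drift with a population stability index (PSI), one common statistic for spotting when production data no longer resembles the data a model was trained on. The metric choice and the conventional 0.2 alert threshold are assumptions for illustration, not recommendations from the report.

```python
# Illustrative sketch (an assumption, not from the report): tracking input
# drift with a population stability index (PSI). Values above roughly 0.2
# are conventionally treated as a sign the input distribution has shifted.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare a feature's current distribution against its baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, clipping to avoid division by zero.
    p = np.clip(p / p.sum(), 1e-6, None)
    q = np.clip(q / q.sum(), 1e-6, None)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)     # data the model was trained on
drifted = rng.normal(0.5, 1.2, 10_000)  # shifted production traffic
print(f"PSI, no drift:   {population_stability_index(baseline, baseline):.3f}")
print(f"PSI, with drift: {population_stability_index(baseline, drifted):.3f}")
```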
Putting the framework to work at C6 Bank
To evaluate the framework in practice, the researchers piloted it with C6 Bank, a digital bank headquartered in São Paulo. With no physical branches and more than 30 million customers, C6 has adopted a tech-forward approach, integrating AI across customer service, fraud detection, and back-end operations.
With that level of innovation, however, comes caution. The leadership team recognized early on that existing frameworks didn’t fully address the architectural and security demands of modern AI systems. In applying the secure-by-design framework, C6 executives worked through each of the 10 questions — an exercise that surfaced 19 critical design considerations, from the need for model-agnostic infrastructure to new governance models tailored to AI’s unique risks.
That exercise helped the organization develop a four-part platform separating experimental AI efforts from the production-grade systems that interface directly with customers, giving C6 room to innovate without compromising trust. It also improved resource planning.
Ultimately, C6 built internal tools, governance processes, and best-practice guides tailored to AI. Its legal and compliance teams used the framework as a springboard to draft an AI-specific manual that outlines expectations and regulatory risks, helping to build stakeholder confidence in AI initiatives.
A smarter starting point for AI
As organizations race to embed AI across their operations, the mandate is clear: Design secure systems from the start or risk embedding vulnerabilities that will erode trust later.
The framework doesn’t eliminate all AI risk, Pearlson said, but it does provide a practical foundation that will prompt better questions, clearer decisions, and more resilient designs from the top down.
“What I hope is that this helps other organizations ask smarter questions earlier so they can avoid the mistakes that happen when security is an afterthought,” she said. “The most powerful thing about these 10 questions is that they force you to think ahead.”