Using AI shouldn’t stop us from thinking.
We all agree that "we should use our brain" and discern when AI is appropriate and when it’s not. However, there's a subtler issue at play here.
We tend to suspend our judgement when presented with AI’s capabilities.
And... vendors know this, often communicating only the strengths without discussing the downsides of their AI products.
The result: people have unrealistic expectations that AI can’t meet. They buy a tool thinking it will solve all their problems and end up disappointed.
As pointed out by the blog Eight2Late, vendors should focus on:
- Not overselling AI’s abilities
- Being transparent about product limitations
- Showing customers how to engage with AI meaningfully (augment, not replace)
This is particularly relevant in information security and ISO 27001 preparation. In the past year, I’ve seen many AI tools promising "compliance with the standard."
Those familiar with ISO 27001 know that AI can assist, but no tool can guarantee compliance, especially when human actions are required: completing action plans, management reviews, etc.
I understand the urge to "oversell" abilities. Competition and the excitement of offering a new solution can drive this.
But ultimately, it’s a losing strategy because undelivered promises result in disappointed customers.
Additionally, in crucial areas like information security, being transparent about product limitations is vital. Documenting these limitations on product pages and within the product itself can make a difference. Providing user guidance on where to "keep using your brain" is essential too.
In short, we’re still figuring out AI's role in our professional lives. Marketing often inflates this role, which doesn’t always benefit the buyer.
When it comes to information security, here’s what I now look for, both as a buyer and a maker:
- Transparency about AI’s limitations
- Clear warnings about areas where AI can't be trusted (where it's likely to produce wrong information)
- Encouraging critical engagement, not blind trust
- Recommendations for additional knowledge sources beyond AI, like expert-led courses
- Avoiding hype; AI is just a tool, not a thinking entity
- Highlighting dangers and risks associated with using the AI system
These steps can make AI use more ethical in security.
Now you might be asking: how does the ISO 27001 Copilot fit into this vision?
The ISO 27001 Copilot is designed with these principles in mind. It ensures AI serves as a valuable tool without overshadowing human expertise.
- Transparency on Limitations: The ISO 27001 Copilot makes clear what it can and cannot do. It assists with documentation, provides guidance, and streamlines processes but doesn’t replace human judgment and decision-making.
- Encouraging Critical Engagement: The Copilot encourages users to critically engage with the information it provides. We made it clear in the user interface that the assistant can make mistakes. We also published a series of resources highlighting common mistakes made during the ISO 27001 implementation process. We believe these educational efforts emphasize the importance of human oversight and validation.
- Complementary Knowledge Sources: The Copilot often recommends additional resources like expert-led courses or official ISO documentation, ensuring users have a comprehensive understanding of ISO 27001. It recognizes that AI is a tool to augment human expertise, not a standalone solution.
- Avoiding Hype: The ISO 27001 Copilot avoids presenting itself as a magical solution (yes, we realize that promoting it in this very post comes close). We emphasize that the ISO 27001 Copilot is only an assistant—processing data and providing structured guidance based on trained patterns. Users remain the decision-makers; they own their ISMS.
- Highlighting Dangers: We're transparent about the potential risks associated with using AI. We invite our users to follow principles for interacting safely with AI systems, and we maintain a trust center to help customers understand the risks associated with using the ISO 27001 Copilot.
By adhering to these principles, the ISO 27001 Copilot aligns with a more ethical and realistic approach to AI in information security. It helps users navigate the complexities of ISO 27001 compliance while ensuring they remain in control, making informed decisions based on a combination of AI assistance and their expertise.
We hope you'll appreciate this clarification.