According to IBM and AWS, the generational AI innovation race is leading to security gaps

7 Min Read

Discover how companies are integrating AI into production responsibly. This invitation-only event in SF explores the intersection of technology and business. Find out how you can attend here.


What will it take to secure generative AI?

This is evident from a new study published today by IBM And Amazon Web Services (AWS) There is no simple ‘silver bullet’ solution to securing gen AI, especially now. The report is based on a study conducted by the IBM Institute for Business Value, which surveyed leading executives from U.S. organizations. While gen AI is a top initiative for many, the survey shows that there is a lot of enthusiasm for security. 82% of C-suite leaders stated that safe and reliable AI is essential to business success.

That said, there is a dichotomy in the results and a difference from what actually happens in the real world. The report shows that organizations are only securing 24% of their current generative AI projects. IBM isn’t the only company with a report raising security concerns. PwC recently reported that 77% of CEOs are concerned about the cybersecurity risks of AI.

It’s no coincidence that IBM is working with AWS on several approaches to help improve that situation in the future. Today, IBM also announced the IBM X-Force Red Testing Service for AI to further advance generative AI security.

“In all the customer conversations I’ve had, I see leaders being pulled in different directions,” Dimple Ahluwalia, Global Senior Partner for Cybersecurity Services at IBM Consulting, told VentureBeat. “They are certainly feeling pressure from both their internal and external stakeholders to innovate using gen AI, but for some of them that means security becomes an afterthought.”

Innovation or safety? Gen AI implementations usually choose just one

While it may seem like having security in place is common sense for any type of technology implementation, the reality is that this is not always the case.

The report shows that for 69% of organizations, innovation takes priority over security. Ahluwalia noted that organizations have not yet fully embedded security across all industries. The report also makes clear that business leaders understand the importance of security and are addressing that issue to make generation AI production deployments more successful.

“People are so excited that they’re rushing to see if they can make productivity gains, see if they can see how they can be more competitive,” she said.

Ahluwalia said the same thing happened in the early years of the cloud, when every conversation had to include a discussion about moving workloads to the cloud, often without proper security oversight.

“That’s what’s happening now with generational AI, everyone is feeling compelled and rushing to get to it,” Ahluwalia said. “The plans have not been carefully thought through and as a result I think safety will also suffer.”

Guardrails and policies are the keys to gen-AI security

How can and should organizations improve?

See also  Rishi Sunak is appealing to voters to entrust Britain's security to the Tories

The report recommends that organizations should start with governance to build trust in AI generation. This includes establishing policies, processes and controls that are aligned with business objectives. 81% say generative AI requires new security management models.

Once governance is established, strategies can focus on securing the entire AI pipeline using the tools and controls available. Collaboration between security, technology and business teams is needed. There may also be benefit in leveraging technology partners’ expertise in strategy, training, cost accountability and compliance navigation.

How IBM X-Force Red Testing Service for AI fits into this

In addition to guardrails and governance, there is also a need for validation and testing.

The new Testing Service for AI from IBM X-Force Red is IBM’s first testing service specifically tailored for AI. The new service brings together an interdisciplinary team of experts in the fields of penetration testing, AI systems and data science. The service will also draw on the expertise of IBM Research, which developed the Adversarial Robustness Toolbox (ART).

The concept of a “red team” in security generally means that there is a group that takes an adversarial approach to proactively attacking resources to help discover where gaps exist.

Chris Thompson, Global Head of of the models themselves. According to him, there has been no traditional Red Team focus on stealth and evasion to date. Rather, the focus has been on getting models to do something they shouldn’t do, such as producing malicious content or accessing sensitive RAG datasets.

“Attacks against the AI ​​app generation itself are very similar to traditional application security attacks, but with a new twist and an expanded attack surface,” Thompson said.

See also  AceCryptor Attacks on the Rise in Europe – Week in Security with Tony Anscombe

At this point in 2024, he noted that IBM is seeing more convergence with what is considered true red teaming. The approach IBM is taking is to look at the broader attack paths into AI generation. The four areas of AI red teaming that IBM has developed services around include: AI platforms, the pipeline used to tune and train the models (MLSecOps), the production environment on which the AI ​​generation applications run, and the generation of AI applications themselves.

“In line with traditional red teaming, we are also focusing on missed detection opportunities and reducing the time it takes to detect potential advanced threat actors that successfully target these new AI solutions,” said Thompson.

Share This Article
Leave a comment

Leave a Reply

Your email address will not be published. Required fields are marked *