BigID's Data Leader Series: Week 3 - Data Security Protection with AI
June 4th Session

By: Sara Diaz, Business Analyst Intern at BigID

After a week-long hiatus, BigID jumped back into the dataverse with the third installment of the Data Leader Series: AI Fundamentals for Business Executives. Stephen Gatchell, Sr. Director of Data Advisory, and Shalonda Willis, Sr. Demand Generation Manager, launched us into our current session, “Data Security Protection with AI.”

The event commenced with an engaging fireside chat in which Gatchell moderated a conversation with Nimrod Vax, Head of Product and co-founder of BigID. Vax discussed the widespread adoption of generative AI in organizations, noting that "over 70% of organizations are already using generative AI across the board." He highlighted the urgency with which businesses are implementing AI technologies, such as ChatGPT and Microsoft Copilot, without waiting for security organizations to complete risk assessments. Vax pointed out significant concerns regarding data exfiltration and insider threats due to the extensive access provided by tools like Copilot, which can lead to over-privileged access issues.

Vax also emphasized the need for managing the "risk posture" of AI applications, ensuring compliance with frameworks such as NIST, and performing thorough AI risk assessments. He showcased BigID's role in the AI space, stating, "BigID has always been at the frontier of leveraging AI," and detailed their innovations in data classification and discovery using generative AI. He assured that BigID's solutions are designed to comply with privacy regulations and maintain data security, stating, "We do it in a way that doesn't violate your privacy regulations and privacy obligations. We don't share data among tenants. We don't train on your data," highlighting BigID’s commitment to high standards of data privacy and security.

"BigID has always been at the frontier of leveraging AI.”- Nimrod Vax, Head of Product and co-founder of BigID

Stephen introduced the Global Report on Generative AI, providing key insights into the current landscape of AI adoption. He emphasized the comprehensive nature of the report, which included 327 interviews with IT decision-makers across various industries and regions. Stephen highlighted the multifaceted focus of the report, noting, "How do we balance the innovation of AI with managing the regulatory and the security and privacy issues?" He noted that despite some uncertainties about the impact of AI, the majority of companies believe in its positive potential. "If your company is not doing AI, it seems like most companies are going to do it," he concluded, underlining the inevitability of AI integration.

Following Stephen's overview, Alex Bulis, VP of Education and Community at BigID, demonstrated the platform's capabilities in redacting sensitive data from generative AI responses. Alex remarked, "AI needs a lot of data to do its job," highlighting the risks associated with sensitive information being inadvertently included in AI-generated outputs. He showcased BigID's chat function, designed to ensure secure and compliant data handling. Alex explained, "If that data holds something sensitive, we can actually see a big problem here." Using a hypothetical company, Workstream, Alex demonstrated how seemingly innocent AI interactions could reveal sensitive information like usernames, emails, and credit card numbers. He emphasized the importance of secure AI practices, stating, "This data can also hold some very sensitive information," and provided a live example of how BigID's system can redact such information to prevent data breaches.
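The kind of redaction Alex demonstrated can be sketched roughly as follows. This is a minimal illustration using simple regular expressions; the pattern names and patterns here are assumptions for the example, and a production classifier like BigID's relies on far richer detection than regex matching:

```python
import re

# Hypothetical detection patterns for this sketch only; real sensitive-data
# classification goes well beyond regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive value with a [REDACTED:<type>] tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

# An AI-generated response would be passed through redact() before it is
# shown to the user, so sensitive values never leave the system.
print(redact("Contact jane@example.com, card 4111 1111 1111 1111"))
```

The key design point the demo illustrated is that redaction sits between the model and the user, so even if sensitive data reaches the model's output, it is scrubbed before display.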

During the session, participants engaged in poll questions to identify the main security challenges in implementing AI. The majority, 57%, cited ensuring data privacy and protection as the top concern, followed by addressing compliance with industry-specific regulations at 17%. Securing AI models and algorithms from exploitation was noted by 13%, while managing access controls and permissions effectively garnered 9%, and guarding against adversarial attacks and vulnerabilities was highlighted by 4%. Another poll revealed that 40% of respondents agreed their security teams worked collaboratively across privacy, governance, and data, with 20% strongly agreeing. To lighten the mood, Business Analyst Intern Sara Diaz hosted a fun AI Undercover game, where participants distinguished AI-generated music covers from those created by humans.

Shay Azulay, Senior Director of Product Management at BigID, outlined key insights and actions for organizations preparing to adopt AI. "First and foremost, preparation and then also governance," Shay emphasized, highlighting the need for a strong foundation and robust governance frameworks. He reiterated, "Data is the fuel for AI," stressing the importance of data quality and accessibility. "What is the data that is accessible to AI models and who can access this data?" Shay questioned, underlining the critical role of data infrastructure in AI adoption. Moreover, he emphasized the importance of proactive risk management, stating, "Proactive risk management is paramount."

"Proactive risk management is paramount."- Shay Azulay, Sr Director of Product Management at BigID

Azulay further illustrated various real-world applications of AI, emphasizing its tangible benefits and challenges. "Many companies use AI-powered chatbots to handle customer inquiries 24/7," he explained, noting that for such use cases, generative AI is ideal. Additionally, he highlighted predictive analytics, where deep learning excels, aiding in predicting customer behavior and market trends. "AI isn't science fiction; it's powering real-world business results today," Shay emphasized, inviting organizations to leverage AI for their specific use cases.

Addressing challenges associated with AI adoption, Shay highlighted talent and skills gaps and ethical and regulatory concerns. "Recruiting and retaining qualified AI professionals can be challenging," he noted, emphasizing the need for ethical considerations and responsible use of personal data. Shay emphasized the importance of data quality in machine learning models, stating, "The higher the quality of the data, the better will the machine learning model be." He stressed the importance of thorough risk and compliance assessments and implementing security controls around data used by AI to mitigate risks effectively.

Maor Pichadze, Director of AI Product Development at BigID, delved into the technology BigID offers to ensure robust data management. "We have to ensure robust data," Maor explained, highlighting BigID's unique capability to discover structured data across various enterprise platforms. He emphasized their focus on identifying AI assets and associated datasets to understand potentially sensitive information. "That is something that we plan to help the organizations to know their AI shadow AI assets," Maor stated, underscoring BigID's commitment to providing visibility into AI-related data.

Maor discussed BigID's ability to handle datasets securely, including data synthesis to ensure sensitive information protection. "We have the ability to handle the dataset... to make sure that no sensitive information such as personal information is on that dataset," he assured. He emphasized the importance of safeguarding data within AI models, especially considering the vast amount of data used in training. "In order to safeguard it, you need to make sure that the data is safe. That's what we are aiming for," Maor concluded, highlighting BigID's commitment to data security and privacy in AI applications.

"In order to safeguard it, you need to make sure that the data is safe. That's what we [BigID] are aiming for.”- Maor Pichadze, Director of AI Product Development at BigID

Kyle Kurdziolek, Director of Cloud Security at BigID, highlighted the evolving security landscape with the emergence of AI, stating, "This emergence of AI really brought in different attacks that we needed to account for as a security team." He emphasized the importance of understanding data ingested within AI systems to mitigate risks associated with evasion, privacy breaches, and poisoning attacks. Kyle emphasized the need for fundamental security measures and a defense-in-depth approach to protect AI systems: "So we have various different mechanisms where, when you look at the fundamentals of how you want to secure a system, you can take that same practice and just pivot to how this is gonna be applying to the AI itself."

Kyle stressed the importance of extending existing security practices to AI systems, stating, "From compliance, architecture, cloud security, vulnerability management, and incident response, we have to extend and enhance a lot of the different things that we have already been doing for the past however many years into now a new area." He emphasized the need to adapt quickly to account for AI-specific security concerns and to align security measures with business objectives. Kyle emphasized leveraging proven security fundamentals to address the challenges posed by emerging technologies like AI: "I want to create this story of how AI itself is not as challenging as what a lot of companies seem to be, but it's just more so information overload and taking a deep breath to understand how can we apply the fundamentals that have been tested, tried and true for the past however many years to today with today's emerging technologies."

"...AI itself is not as challenging as what a lot of companies seem to be, but it's information overload. [Let's] understand how can we apply the fundamentals that have been tested, try and true for the past to today with today's emerging technologies."- Kyle Kurdziolek, Director of Cloud Security at BigID

The session concluded with Robin Sutara, Field Chief Data Strategy Officer at Databricks and former CDO at Microsoft, who emphasized the urgency for organizations to address generative AI, stating, "This is probably the number one topic on most board-level conversations at this point." She noted that while companies recognize the importance of leveraging generative AI for competitive advantage, many are uncertain about how to proceed. Robin highlighted common questions from boardrooms regarding the market trends, competitive landscape, and risk management associated with generative AI adoption:

"So when I sit in these boardrooms, the three questions I get asked the most often are, where's the market going in relation to generative AI? Where are my competitors when it comes to generative AI? And then where are we in comparison to our competitors?" She stressed the need to balance innovation with risk mitigation to ensure successful adoption: "Really what they're asking is how do we think about generative AI and balance the risk that it poses to our organization versus the pace of innovation that we know we need to take advantage of generative AI in a way for our organization."

Robin also discussed the challenges and considerations surrounding the governance of generative AI, stating, "The other issue... driving CISOs and legal officers crazy, is the governance, the limited amount of governance that you have around generative AI." She emphasized the importance of organizations owning their data and models while ensuring privacy and compliance with regulatory requirements: "The first is, we definitely think that you should own the data and the models, and how do we make sure that we have the right governance in place to allow you to have privacy across all of that?" Robin highlighted the need for comprehensive control and privacy measures to preserve each organization's unique value contribution: "If everybody's using ChatGPT, then nobody really has a differentiation, right?... We wanna make sure that you have complete capability across those ecosystems."

"If everybody's using ChatGPT, then nobody really has a differentiation, right?... We want to make sure that you have complete capability across those ecosystems."- Robin Sutara, Field Chief Data Strategy Officer at Databricks

At BigID, we're dedicated to supporting organizations in managing their data effectively, ensuring it's used ethically, securely, and compliantly across its lifecycle. If you're eager to delve deeper into topics like privacy, security, strategy, and governance in the realm of AI and data, explore our workshops and webinars available on the BigID University page and BigID’s website.

Be sure to watch for our next session in the AI Fundamentals for Business Executives series, "Global Privacy Regulations & Upcoming Changes with AI Governance," where we delve into upcoming privacy regulations in the US and globally and how they will affect your role in governing and protecting data.
