The AI Data Gold Rush: How General Counsels Can Lead With Integrity and Insight

Discover how general counsels can navigate the challenges of the AI gold rush. Learn to balance innovation with integrity, mitigate risks, and lead proactive conversations on AI.

Authors

  • Danielle Sheer

    Chief Trust Officer

    Commvault

Privacy & Cybersecurity

If generative AI is a gold rush, customer data is the mother lode everyone is scrambling to mine. With immense pressure to innovate quickly and a fear of lagging behind the competition, companies have never found the allure of vast customer data so enticing.

Yet, like any rush, this one is fraught with practical and legal challenges, and chasing the hype can lead to hasty, reputationally damaging decisions. Balancing the race for innovation with responsibility and transparency is crucial for every tech company — and as general counsel, you play a key role.

In my role as Chief Trust Officer at global data protection company Commvault, I’m well aware of the tension between leveraging next-gen AI technology and protecting customer information. I’m excited about AI’s potential to automate low-level tasks, spot cybersecurity risks, and more. But I’m also here to offer some thoughts on how GCs can guide AI innovation thoughtfully.


Key Takeaways:

  • The rush to harness customer data for generative AI brings significant practical and ethical challenges that most companies are not yet prepared for, including privacy concerns from data collection, cybersecurity threats, and regulatory non-compliance.

  • Tech companies should balance innovation with integrity, ensuring AI initiatives are aligned with solving real customer problems rather than chasing trends.

  • GCs can play an integral, proactive role in this effort by asking critical questions, facilitating essential conversations, being an advocate for the customer, and bringing teams back to the key question of “What are we doing, and why?”


First, recognize that each of your C-suite colleagues has their own priorities when it comes to driving AI innovation. Your Head of Product wants to unveil cutting-edge tech, while your CMO is focused on keeping the company relevant and in the spotlight. Your Head of Sales needs to reassure prospects either that your AI capabilities match those of competitors or that your products aren’t mining their data, and your CEO must satisfy investor demands.

GCs, of course, need to understand and mitigate the risks that come with these pushes for innovation, of which there are many:

  • The extensive data collection and mining required for generative AI models raise serious privacy concerns.

  • AI models can perpetuate and amplify biases, leading to discriminatory outcomes and potential legal liabilities.

  • Without robust security measures, integrating AI can expose your company to cyber threats.

  • Hastily deploying AI solutions without considering the evolving regulatory landscape can result in non-compliance and legal repercussions.

Sounding the alarms on potential risks rarely gets tech lawyers far, so take a more proactive role. How?

Ask the questions no one is taking the time to ask.

Facilitate the conversations no one is having.

And, most importantly, ensure AI initiatives are grounded in solving real problems rather than chasing trends.

This might be as simple as steering boardroom conversations back to basics. When the question "How do we use more AI?" is inevitably posed, reframe it: "What customer problems are we trying to solve, and what are the best tools to solve them?"

Asking questions across the business can bring issues and solutions to light.

Engage with product and engineering teams and discuss the AI developments they’re considering. Are they aligned with the company’s goals and values? Ask your customers’ opinions. What are customers asking for most? Do these solutions solve those problems or merely chase the AI hype?

When AI terminology is thrown around in marketing materials, dig deeper. What do those buzzwords actually mean? Do you understand them when you read the collateral? Does that language represent what’s happening across the business? Does it truly resonate with customers?

Be an advocate for customers, and ask questions around transparency and accountability. What customer data is being used in AI development? If you were in the customers’ shoes (and truly understood every word of your T&Cs), how would you feel about your information being used? Do your company’s practices express care and consideration, or are you stretching trust?

Ultimately, these questions are trying to help everyone understand: “What are we doing, and why?” Bringing the conversation back to your customers’ real problems and the tools that best address them — AI or otherwise — drives the type of innovation that’s truly meaningful to your business.

At Commvault, our business is built on deep customer trust, so we’ve taken the “slow and steady wins the race” approach. We started by creating a cross-functional AI governance council, and the first deliverable was outlining our own governance principles. Once we agreed on a set of working principles, we created a policy and process for including AI technology both in our product development lifecycle and in our corporate enterprise IT programs.

In our case, in every single instance of generative AI use, a briefing document (like an enhanced data map) is completed to holistically understand the use, impact, access, and alignment with governance principles. These briefing documents are shared with the AI governance council and the management team to keep everyone on the same page and instill a sense of integrity in the process of leveraging generative AI.

It’s tempting to jump on the AI bandwagon out of fear of missing out. But the real casualties of this gold rush will be those companies that sacrifice sustainable, ethical practices for quick wins, only to face customer mistrust and regulatory backlash. The true winners will be the ones who adopt AI thoughtfully, prioritizing what matters most: solving customer problems and earning trust and loyalty.

What other advice do top GCs have on navigating the legal and ethical landscape of AI? Apply for membership to The L Suite and join the conversation on our Braintrust.


About The L Suite

Called “the gold standard for legal peer groups” and “one of the best professional growth investments an in-house attorney can make,” The L Suite is an invitation-only community for in-house legal executives. Over 2,000 members have access to 300+ world-class events per year, a robust online platform where leaders ask and answer pressing questions and share exclusive resources, and industry- and location-based salary survey data.

For more information, visit lsuite.co.