
How Do Legal Teams Address AI Risks?

  • Writer: Giuliana Bruni
  • Mar 27
  • 2 min read

The adoption of AI technologies has transformed industries, delivering notable gains in efficiency and productivity. This rapid adoption, however, also presents significant legal risks that organisations must navigate, including issues related to intellectual property, data privacy, and compliance with licensing agreements. Legal operations teams confronting these challenges must identify specific pitfalls, such as potential copyright infringement when using AI-generated code, and establish frameworks for compliance.


One of the foremost legal risks associated with AI technologies is the inadvertent use of open source code without proper attribution. Recent research published in arXiv:2408.02487 [cs.SE] underscores the gravity of this issue, finding that between 0.88% and 2.01% of the code produced by top-performing large language models (LLMs) is strikingly similar to existing open source implementations. These figures highlight the potential for copyright infringement and other legal challenges.


A more recent study using the SCANOSS platform expands on these findings: evaluated against a broader open source dataset, approximately 30% of AI-generated code exhibits at least 10% similarity to existing open source implementations, and 1% remains similar even at the more stringent 30% threshold (Goni, 2025). This reinforces the concern that AI-generated code can incorporate licensed material without proper attribution, posing significant legal risk.
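To make these similarity thresholds concrete, the toy sketch below computes Jaccard similarity over token shingles between two code snippets. This is purely illustrative: it is not the fingerprinting technique used by SCANOSS or the cited studies, and the snippets and the 10%/30% cut-offs are chosen only to mirror the figures discussed above.

```python
# Toy illustration of code similarity; NOT the matching method used by
# SCANOSS or the cited studies, which rely on their own fingerprinting.
import re

def shingles(code: str, k: int = 5) -> set[tuple[str, ...]]:
    """Tokenise code and return the set of k-token shingles."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def similarity(generated: str, reference: str) -> float:
    """Jaccard similarity between two snippets' shingle sets (0.0 to 1.0)."""
    a, b = shingles(generated), shingles(reference)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical snippets: an "AI-generated" function and an OSS original.
ai_snippet = "def add(a, b):\n    return a + b"
oss_snippet = "def plus(a, b):\n    return a + b"

score = similarity(ai_snippet, oss_snippet)
if score >= 0.30:    # mirrors the stricter 30% threshold above
    print(f"High similarity ({score:.0%}): flag for legal review")
elif score >= 0.10:  # mirrors the 10% threshold above
    print(f"Notable similarity ({score:.0%}): investigate provenance")
else:
    print(f"Similarity {score:.0%}: below review thresholds")
```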


To mitigate these risks, legal operations teams typically follow industry standards such as ISO/IEC 5230 (OpenChain), which provides a framework for open source license compliance. By adopting such a standard, organisations can establish structured processes for managing their open source components and ensure that they remain compliant with licensing obligations. This not only minimises legal exposure but also builds stakeholder trust by demonstrating a commitment to best practices in risk management.


Furthermore, legal operations teams should develop clear policies and procedures governing the use of AI technologies within their organisations. This includes regularly reviewing codebases to identify and document software dependencies, verifying open source license obligations, and establishing protocols for responding to detected compliance issues. A structured approach to risk management is critical for navigating the complexities of generative AI and compliance.
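As a minimal illustration of what such a protocol might automate, the sketch below checks a hypothetical dependency inventory against an organisation's license policy. The component names, licenses, and policy categories are invented for the example; in practice the inventory would come from an SCA scan.

```python
# Minimal sketch of a license-policy gate; all data here is hypothetical.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}      # pre-approved licenses
REVIEW_REQUIRED = {"GPL-3.0-only", "LGPL-3.0-only"}  # escalate to legal

# Hypothetical inventory: component name -> declared SPDX license ID.
dependencies = {
    "http-client-lib": "Apache-2.0",
    "pdf-render-lib": "GPL-3.0-only",
    "mystery-lib": "NOASSERTION",
}

for component, license_id in dependencies.items():
    if license_id in ALLOWED:
        status = "compliant"
    elif license_id in REVIEW_REQUIRED:
        status = "needs legal review"
    else:
        status = "unknown license: document and investigate"
    print(f"{component}: {license_id} -> {status}")
```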


Modern Software Composition Analysis (SCA) tools play a vital role in this process. By leveraging solutions like SCANOSS, which is the first open source SCA platform with AI plagiarism detection, legal teams can identify undeclared open source software dependencies, detect AI-generated code resembling open source implementations, and maintain an accurate Software Bill of Materials (SBOM). 
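For a concrete picture of the SBOM side, the sketch below writes a minimal SBOM in the CycloneDX JSON format, one of the widely used SBOM standards. The component entry is hypothetical; in practice an SCA tool such as SCANOSS would populate the SBOM from actual scan results rather than it being written by hand.

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal CycloneDX 1.5 SBOM skeleton; the component entry is hypothetical
# and would normally be produced by an SCA scan, not written by hand.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
    "components": [
        {
            "type": "library",
            "name": "http-client-lib",                  # hypothetical component
            "version": "2.32.0",
            "purl": "pkg:pypi/http-client-lib@2.32.0",  # hypothetical purl
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        }
    ],
}

with open("sbom.cdx.json", "w") as fh:
    json.dump(sbom, fh, indent=2)
print("Wrote sbom.cdx.json with", len(sbom["components"]), "component(s)")
```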

Adopt SCANOSS today

Get complete visibility and control over your open source.
