
Generative AI refers to algorithms capable of generating new content, such as text, images and music, by learning patterns from existing data. Popular examples include large language models (“LLMs”) such as ChatGPT, and image generators such as DALL-E.
Traditional AI focuses on classification and decision-making based on existing data, whereas generative AI creates new content by learning the patterns in its training data.
Generative AI poses new challenges around intellectual property, privacy, content liability and data ownership. Concerns also include bias, misuse and security risks arising from AI’s ability to generate synthetic content that can appear authentic.
Recent lawsuits have targeted organisations for alleged copyright infringement due to the unauthorised use of copyrighted works in training datasets. There have also been privacy-related claims over misuse of personal data and the inappropriate deployment of generative AI tools.
Regulators globally are increasingly focused on establishing frameworks for AI accountability and transparency; the EU AI Act, discussed below, is a leading example. Key ethical issues include bias, misuse and the risk that synthetic content is mistaken for authentic material, and businesses should be aware of emerging requirements around transparency, risk management and the scrutiny of training data.
Generative AI’s outputs raise questions about ownership. Where AI-generated works draw on training data owned by third parties, issues of copyright infringement and originality come to the fore. Recent legal cases are testing whether AI-generated content can be copyrighted and who holds the rights to it.
To safeguard their IP, businesses should understand how the generative AI tools they use were trained, address ownership of AI-generated outputs in their contracts, and monitor for unauthorised use of their copyrighted works.
Human oversight remains crucial to ensuring generative AI outputs are accurate, ethical and legally compliant. Businesses are advised to implement review processes, especially in high-risk areas such as content creation, marketing and automated decision-making.
The EU AI Act introduces stringent compliance obligations for high-risk AI applications, which can include generative AI systems. These obligations entail risk management measures, transparency requirements and enhanced scrutiny of the training data used.
Businesses should establish cross-functional AI governance teams, conduct regular compliance audits, and stay abreast of evolving regulatory standards. Legal teams should focus on risk assessments and scenario planning for anticipated regulation.