I’m a marketer, not a lawyer—though I’ve been recognized for some tough mediation and negotiation skills by my three sons. Still, don’t take what I write below as legal advice.
With that disclaimer out of the way, I want to introduce you to some of the legal implications of using generative AI in your marketing. If you are using a generative AI tool such as ChatGPT, Copilot, Gemini, Claude, Nano Banana, Midjourney, Firefly, or a similar service to produce text, images, or video, you need to understand a few legal basics that govern your rights as a creator.
1. If a Machine Created It, You Don’t Own It
Say you need an image for an ad and you don’t want to spend thousands on a custom photoshoot. You type a few thoughtful sentences into your image creation LLM of choice and, voilà, you have an image you are happy with. Great!
Now, you may think that because you came up with the brilliant prompt that produced this image, it is rightfully yours and nobody else should be entitled to use it. But you would be mistaken. In March 2026, the Supreme Court declined to take up the case Thaler v. Perlmutter, leaving in place a circuit court ruling that a copyrightable piece of art must be created by a human being. Your fabulous image is free for anyone to copy.
Of course, this rule applies to written material, too. If you use an LLM to produce a piece of writing and make few if any changes, it is in the public domain. As I’ve discussed before, this is a poor practice for any Visible Expert who wants to have a unique point of view.
2. If You Significantly Modify It, You Can Protect It
If, however, you make significant changes to a piece of creative that’s generated by an LLM, you are likely to be able to protect it under US copyright law. Your changes have to be consequential—minimal edits don’t offer these protections. Exactly where this threshold lies, unfortunately, is a gray area that has not been clearly defined in the courts. But if the end product has been largely shaped by you, you are probably in the clear.
If you are concerned about someone contesting your ownership, or if you are planning to register the work with the US Copyright Office, you will need to be able to prove that you made changes. Again, what constitutes proof is a little unclear, so the more documentation you can assemble—prompts, drafts, edit histories—the better. Of course, this is far easier to do if you know ahead of time that you may need to defend the work.
3. You Can’t See What Created the Creative
Another category of risk arises because it can be very difficult, if not impossible, to know what inputs were used to generate the AI output. By inputs, I don’t mean your prompts. I mean the data the LLM was trained on.
You have no idea whose intellectual property you may be infringing upon. This is particularly true of AI-generated images and video. Most recent text chatbots, at least, provide sources for some of their information. But when you generate an image with Midjourney, who knows what artist it may be mimicking? This situation could potentially expose a firm to litigation. Other services, such as Getty and Adobe Firefly, use only licensed material in their training sets—providing creatives with a low-risk way to produce unique images. However, in our experience these tools often deliver less satisfying results than others.
Currently, numerous lawsuits are working their way through the courts to determine what constitutes “fair use” and whether data scraped from the internet must be licensed by the LLM providers. Stay tuned!
4. Don’t Expose Your Data
Did you know that your free or even $20-per-month LLM account may be sweeping everything you give it into its training data? If your firm is using an LLM, you will need an enterprise account that erects a wall between your team and the training pipeline. Unless you don’t mind your firm’s sensitive information becoming publicly available on the internet, you must make this investment. And if you use AI tools on client work, you simply have no choice.*
Make sure that your entire team understands the importance of using only these protected AI tools on sensitive data. Your investment is only as valuable as the people you entrust with it. And if you use external contractors, make sure you have appropriate controls and policies in place.
Let me be very clear that I am a proponent of AI tools. But when you use them, you need to understand that much is still in limbo. The chances of a lawsuit are probably small, but not infinitesimal—especially for large firms that might make juicy targets. Protect yourself with documentation, strong policies, and secure software. Also, talk to your insurance broker about what coverage you might need to protect your firm from AI-related claims. Because this is largely uncharted territory, not all insurance companies cover this type of claim.
AI offers a wealth of opportunities to marketers. But with all those pearls come some perils. If you open your eyes to the possibilities and plan for the risks, you’ll be just fine.
*Some paid, lower-tier, non-enterprise plans allow users to opt in to a no-training environment. But in an organization of any size, that’s not a practical or safe solution.
