AI is becoming harder to ignore in professional services, and not in the fun “wow, productivity” way. It’s in the newsfeed‑choking caricature “art” that makes everyone snort once and then quietly die inside. It’s in the long, weird call‑to‑action copy. It’s in the emoji confetti. It’s in a LinkedIn feed that increasingly reads like it was written by the same beige robot with a ring light and a growth mindset.
And it’s a problem, not because I’m anti‑technology, but because so much of what’s being produced is slop.
The first ethical question: is it worth the cost?
Here’s the thing about “free” AI: it isn’t free. It’s just that you’re not the one paying the bill. But the cost exists anyway—energy, computing power, water, and all the infrastructure that lets you type a prompt and receive an answer that sounds intelligent.
I’m not suggesting you shouldn’t use AI. Used properly (paired with thought, strategic prompt design, internal know‑how, and human research), it can superpower your practice and your prospecting. But “can” isn’t the same as “should,” and if the output is a disposable caricature, generic copy, an AI headshot, or yet another post that says “Here are 5 things you NEED to know,” then I’m going to ask: was that worth the resources you just burned?
The second ethical question: who paid for the training?
There’s also a social cost we like to politely sidestep: large language models are trained on material they didn’t “earn” in any meaningful way. That material includes labour and creative work scraped, repackaged, and statistically remixed into outputs that look frictionless.
When you use AI, you’re participating in an extractive supply chain of data and labour. That doesn’t mean you’re a villain for opening the tool. It means you don’t get to pretend it’s morally weightless, especially when you’re using it to generate junk. If you want an ethical posture here, consider what’s being extracted and why you’re extracting it.
The third ethical question: what are you signalling about your standards?
Now for the part that gives the legal community the ick: branding is ethics, too.
If you put AI‑generated “art,” “photography,” or boilerplate thought‑leadership into the world, you are making a claim about your rigour. You are telling clients (and referrers, and counsel, and potential hires) what you think good work looks like, what you think “professional” means, and what you’re willing to sign your name to.
Most audiences recognise the difference between a premium brand and a cottage‑industry one. We all know the gap between Mecca Cosmetica and a templated logo on a market stall banner. Neither is “wrong.” They’re just different offers, at different price points, with different client expectations.
Lawyers, however, don’t get to pretend they’re a bargain stall while charging like Mecca. If you want to be viewed as strategic, careful, and high‑value, your outputs have to match. “Cheap methodology” is incompatible with “premium expertise.”
Why unconsidered use of AI damages lawyers
Lawyers are selling strategy and clarity under uncertainty. We’re selling risk management, persuasion, and trust. That means the bar for our profession is higher.
AI slop doesn’t just look tacky; it suggests your relationship with the technology is immature. It suggests you’re chasing volume and that you don’t know when to use tools and when to use thinking. And it quietly signals something else: that you’re probably still charging by the hour.

Newlaw practitioners already understand this tension. As generational change reshapes who buys legal services, “time” becomes a less persuasive proxy for value. The market is moving, and if your marketing screams “I used a shortcut,” don’t be surprised when prospective clients treat you like a commodity.

