How to Avoid Publishing Scams and Choose the Right Publishing Path for Your Book
Episode 50 · 24th February 2026 · Self Publishing for Professionals · Write For You
Duration: 00:24:13


Shownotes

In this Book Blueprint episode, I coach AJ Tyler, author of "Immigrant Men," which hit number one in two Amazon categories, and the debut novel "Death and Short," through critical publishing challenges.

AJ opens with a pressing issue facing newly published authors: floods of scam solicitations from fake reviewers and fraudulent marketing services. I advise verifying any unsolicited offer through national writers' associations, checking LinkedIn to confirm company affiliations, and requesting author references, since these scammers easily fabricate contact information.

We tackle publishing options as AJ asks how to choose between self-publishing, hybrid, and traditional publishing. Publishers often want to mainstream your book, changing your voice to fit their vision. I emphasize that 90% of readers never notice the publishing company; they only care about quality content, professional editing, good flow, and valuable information or compelling plots with well-developed characters.

For marketing, I recommend reverse engineering by determining your budget first, then building campaigns around platforms you already use and content formats you enjoy. 

Start marketing six months ahead, recruit beta readers, and leverage Substack's author community plus BookTube and BookTok. I warn against paying someone to get you on podcasts; those offers are typically scams charging thousands of dollars for two or three placements. Book awards are worth the investment if you're building an author career or writing a series, especially in saturated fiction markets where awards provide crucial social proof.

Until next time, keep writing, dreaming, and creating—your book is waiting to be born!

Podcast Resources

Start your author journey today by booking a Discovery Call

Unsure where to start when it comes to writing your book? Download your Book Clarity Blueprint today!

Connect with AJ Tyler at his author website

Disclaimer: The information in this podcast is for educational and informational purposes only. The content shared by the host, guests, and any affiliates is not intended to substitute for professional legal or financial advice or any professional advice specific to your situation. Always seek the advice of a qualified professional with any questions you may have.

The opinions expressed on the show by the host or guests are those of the individuals and do not necessarily reflect the views of Unicorn Publishing Company. Unicorn Publishing Company, the host, guests, and affiliates are not responsible or liable for any decisions made by listeners or actions taken based on the information discussed in this podcast. By listening to this podcast, you acknowledge and agree to release Unicorn Publishing Company, the host, affiliates, and guests from any liability.


Transcripts

[MUSIC PLAYING]

LYNN: Hi, friends and future authors. This is Lynn Liquity Smarges back for another episode of Publishing for Professionals, where today we have an awesome guest expert on. I refer him out all the time. And before we jump into our guest expert episode today, I'm going to give a big welcome back to my returning listeners. Thanks again for joining me for another episode of Publishing for Professionals. And welcome to my new listeners. I'm so glad you stopped by to check out my podcast. Please make sure to hit the follow button wherever you're at, if that's on an audio player, or hit the subscribe button if you're watching Publishing for Professionals on YouTube so every week you can get it dropped straight into your player.

Today, I have an incredible conversation lined up for you. I have Gordon Firemark with me today. And he is an intellectual property entertainment and media lawyer from LA, where he helps all sorts of creators across media protect their work, their businesses, their families with smart legal strategies. He is a producer and host of two podcasts: Entertainment Law Update, which is a monthly legal news roundup for entertainment industry professionals, and Legit Podcast Pro, which features weekly legal strategy tips for the creator economy. He's also an entrepreneur, educator, and occasional theater producer. And you can also check him out on Substack, where I follow him there. Gordon, welcome to Publishing for Professionals.

GORDON: Well, hi, Liquity. It's great to see you.

LYNN: [...] As of this recording, it's November of [...], and when it comes to AI and publishing, the legal landscape still feels like the Wild West.

GORDON: That was going to be what I said. The Wild West is very true. There are currently 50 different lawsuits going on against AI companies right now. And almost all of them deal with training on pre-existing copyrighted material. So you've got Getty Images, you've got the Book Publishing Association, you've got individual authors suing over how these systems were trained. One of the AI companies, Anthropic, just agreed to a $1.5 billion settlement with a group of authors. So there's a lot going on in this space.

What that typically means, though, is that there aren't a lot of answers yet, because the 50 lawsuits could theoretically have 50 different outcomes that would muddy the waters even further. I think it's pretty clear to say the direction things seem to be going in is that merely training on data that's out there that's found on the internet probably isn't copyright infringement. But if the trained system then outputs stuff that is substantially similar to an existing copyrighted work, that can be copyright infringement.

And for authors, we can't hold the system responsible. The machine doesn't have a bank account that we can go after for damages. So it's the person who publishes the material, the company, or the person who publishes the material. If it's substantially similar to a pre-existing copyrighted work, then that's going to be copyright infringement, regardless of how it sprung into existence.

So that's point number one. Where are the legal ramifications? There's lots of them. First off, the systems sometimes hallucinate. So if you're doing factual-based material, you need to fact check really carefully. Because if you get something wrong, well, if it's about a person, you could be defaming that person with false information. There was a case a couple years ago now involving an AI-generated article. The journalist was using an AI to write an article that was about a particular person. Well, it turns out there was another person with the same name and so the AI made the assumption about the other person who also was convicted of a felony. So now we're talking about this person in Florida, and they're telling it like he's the person who was convicted in Colorado or whatever it was. So that's a big risk.

You've got to fact check the heck out of things, especially if you're using AI. And really just being good journalists in the way we do our writing, whether you're writing in journalist form or not, I think you have to think like one and make sure that you can back up everything that the AI puts out there for you.

LYNN: One of the things I know one of my friends came across, he works for three of the Big Five publishing companies, and he does copy and proof editing. Someone had written a book, and it wasn't through the Big Five, it was through somebody else. But anyway, they had eight quotes, and they clearly took all of them from AI, and none of them were correct. Some of them were the wrong author. Some of them were completely incorrect words altogether.

So I see people all the time, and they're like, oh, I'm going to use AI to research. I would tell people that's the completely wrong thing to do. Would you agree with that? Would you say don't, if you want to use AI to research, kind of use it like Wikipedia as a base, but then actually go and verify the information, or just not use AI to research at all, because it's a waste of your time?

GORDON: Well, no. I think it's a valid tool. And if you think of it as a tool and not as a replacement for your own real mental effort, I think that's fine. And again, as a research tool, if you're doing research well, you're going to turn up stuff that turns out not to be accurate or not to be true, or that needs more explanation for the nuance to be understood. So go ahead, use the tool. And I love the tools. I use them a lot. But I try to make sure that either I already know what it's saying is true, or I verify.

And sometimes, you shouldn't even trust yourself on that. You may think you know. But that quote sounds accurate. Well, yes and no. You can, A, by abbreviating a quote, you could change its meaning. Or the system could just get it wrong. And the wording isn't quite what the original person said. And so misquoting and misrepresenting things, especially if you're representing it as a quote, can be dangerous.

But what's the old Shakespearean thing about first kill all the lawyers? That's only part of the sentence in the play. And honestly, I don't know what the rest of it is. But it's a very different meaning. It was something like, if you want to have an authoritarian state, first kill all the lawyers. That's King Lear. And so it changes the meaning dramatically to use just the shortened part of it. And the AIs are really good at capturing the concept, but not the precise wording. In fact, they're programmed to not duplicate the precise wording of an original quote. They're trained to paraphrase, essentially, so that they aren't infringing copyright left and right.

LYNN: Right. I know one of the things AI does not do is understand context. Like you were saying, it will literally spit out what has been fed into it. But it also depends on the quality of what's being fed into it, and how well that material is categorized, along with the context. And the other thing working against us is our own brains, because our brains love to fill in stuff even when we don't know, because they want to be sure that we know. So even if we're not sure, our brain says, yeah, that's it, you got it. And between the AI not being right and our own brains trying to fill in context, there's a huge space where we can get into some really murky waters, legally speaking, if we publish that information.

GORDON: Well, yeah. And also, just how far we trust the systems to get things right. I mean, if the training set is nothing but nursery rhymes and fairy tales, then we're going to have AI thinking that geese lay golden eggs, and beanstalks grow to where the giants live. And the moon is made of green cheese, though that's another thing. So garbage in, garbage out is true of any computer system, really, any data system at this point. So you have to trust that the data is either accurate or just voluminous enough to weed out the bits that aren't quite correct.

LYNN: Got it. So what would you recommend for new authors and repeat authors who are listening to this podcast? What would you recommend are safe ways to use AI as a tool when they're writing their book? What would you recommend or ways you can generally use it? Obviously, you want to use it with care, but what are some ways you would suggest?

GORDON: Well, I mean, if you're writing fiction, I think it can be a great tool for character development. Building out what would this character do next, how would this character respond when this other character does this, that, and the other thing. For nonfiction, I think you don't let it write the whole body of the text. It's great to use it as an outlining tool, maybe as a structuring tool to say, well, let's lay a little more of the hero's journey over this particular aspect of things, or let's adopt this kind of a framework for analyzing this part of the problem or something like that. But then you have to get in there and actually do your own words, or the very tedious part of having it generate sentence by sentence rather than paragraph or page by page.

LYNN: No, I know one of the things that some of my clients, well, not even some of my clients, anyway, I know people have used AI for is they're like, OK, I like this paragraph, but there's something missing. I need to dress it up. And they wrote the original paragraph, but then they put it in AI and say, can you please write this in a more blah, blah, blah style? So I mean, I'm guessing that is obviously not copyright infringement because you're taking your own words and putting it in AI and asking it to basically dress it up.

GORDON: Yeah. Again, using it as a proofreader, as a copy editor, those kinds of things, that's a reasonable, I think a reasonable way to use these tools. Again, if its output is accurate and does what it's supposed to be doing. There's another component of this that we have to talk about, and that is that when you use an AI tool to create a work, you can't own it. The copyright law has been very clear for about a century now that if the author of a work isn't a human, then it is not entitled to copyright protection.

We have cases involving paintbrushes being held by an elephant's trunk, and it paints a picture, and nobody can claim to own the image. Likewise, there was a case a few years ago involving monkey selfies where the photographer left the camera out for these particular primates to do something, and they were taking selfies. So it was really, they're kind of funny pictures.

LYNN: I saw that video. It's hysterical.

GORDON: And the courts have been very clear that that's not entitled to copyright protection. So when a case came along where a man had his AI system generate a full work, the Copyright Office said no. He's a computer science guy, and he has now petitioned the Supreme Court to review the decision that said he's not entitled to a copyright. I don't think they're going to take the case, but if they do, it'll be in the next year, and we'll know more after that. That's the nature of the law: things advance and proceed.

Now there have been a few situations where the human author has been able to show, well, I put enough of my own creative effort into this to at least be entitled to claim copyright in those elements that I contributed. So in one case, it was actually an image called "A Single Slice of American Cheese" is the title of the image. And there was enough interim prompting to change this part to that color, change this shape to that, those kinds of things that really made it a human generated work assisted by a tool.

LYNN: So OK, great. Well, we're going to take a break right here. But I've got like three or four questions for you when you come back, Gordon. This has been really great.

OK, my friends. So we're going to take a quick break, hang tight, and we're going to come back with Gordon and more on what is legal and what is not legal with AI.

[MUSIC PLAYING]

Friends, book a discovery call today. And remember to sign up for the next virtual writing retreat that is coming up probably in January.

So we're here today with Gordon Firemark. He is the podcast lawyer. If you are in the entertainment industry and you are wondering if something is legal or you're not sure you want a contract to look over, Gordon is your guy to call. So before the break, we were talking about AI and how it applies to publishing. So Gordon, I have a couple of questions for you based on what we talked about on the first half.

One is Amazon, when you are publishing on Amazon, it has a checkbox where it says, if you used AI to create your material for your book, you need to check that box. So what are the implications if someone does use AI but they don't check that box on Amazon? And is that something that you're aware of?

GORDON: Well, I mean, Amazon could certainly delist the book. They could say you violated their terms of service, that kind of thing. You could be accused of perpetrating some kind of a fraud if you don't disclose the nature of the work. I haven't actually read the Amazon terms of service to know exactly what they're getting at with that. But transparency and honesty is always important.

And a lot of times, when you make a deal with a publisher, the publisher wants you to promise up and down that this is original work with you and that there's no infringement, no violation of anybody's rights. And honestly, if you've used AI to do it, you can't say with real certainty that that's the case. So that's your sort of risk in using the AI tools.

We were talking about the ownership issues and how an author can't own an entirely AI-generated work. If there's enough creative input from a human, then yes, you can. The other thing you need to know is when you use AI tools and you do go to register your copyrights, the law says you're supposed to disclaim anything that isn't original to you. So you're supposed to make sure that the registration, the system understands that, oh, where I quoted from this article, that's not mine, where I used this other source material or it's a translation from something. You have to disclose that there was source material and disclaim ownership of the non-original stuff.

And if you don't do that, you can be accused of defrauding the copyright office and can invalidate your copyright altogether. And it could actually lead to damages and penalties if you try to sue somebody who infringed what you think is your work when, in fact, it wasn't really your work in the first place.

LYNN: OK, so that brings me to another question because I see these advertisements pretty often and it drives me crazy as a ghostwriter and a professional editor. But you see these advertisements where people come on and they're like, you can make six figures, seven figures having AI write your book and then you publish it and you can have AI write your book in one day and publish it. But if you're having AI write your book like you just said, you don't actually own that material. So if you don't own the copyright, then you can't legally make money off that book.

GORDON: Well, you can make money if people will pay you for it. It's just a question of whether you have a basis for stopping somebody else from also circulating, distributing, copying, making derivatives based on that book that isn't yours in the first place. You can't claim copyright infringement, but as long as people will buy it, there's lots of examples of books that are in the public domain that somebody has republished in some form. And I'm thinking of Napoleon Hill's "Think and Grow Rich" is an example where anybody wants to, they can take it and put the text into a system and reformat things a little bit and publish their own edition. And that's perfectly legit. That's what the public domain is all about.

LYNN: So what about when someone uses AI to directly write a book, and they're not just using public domain material, they're literally copying and pasting the output? Because I see book programs that do exactly this. I wrote back to one of them and said, well, isn't that copyright infringement if I use AI for my book? And they said, oh, it actually only writes the first paragraph of each chapter. So people are signing up for this thinking, oh, AI is going to write my book. I'm going to write a book in a day, publish it, and make six figures from it, which we all know does not work that way. But even with just that first paragraph, AI is obviously still involved, and the person would still have to write the rest of the chapter.

So what would you recommend, especially for new authors who are using AI? What should they avoid altogether? We talked about obviously not copying and pasting, but what are some other legally gray areas in using AI as a tool that you would recommend new authors stay away from, because they could potentially be harmful or lead to a lawsuit?

GORDON: Well, a little bit of the old white guy talking here now. I'm just going to say, maybe I'm a little bit of a fuddy-duddy about this kind of thing. My son is a high school senior, and he's applying to colleges. He wants to be an author. He wants to be a novelist. And he doesn't sit down and write. I mean, as much as I encourage him, hey, keep a journal. You're going on this trip. Write something down every day, those kinds of things.

But the bottom line, my advice to authors is there is nothing for it but sitting down and writing. Making it a habit. Stephen King, I think, says he has to write a certain number of words a day before he can do anything else. And that's just his discipline. The fact of it is, don't rely on the tool as a crutch. I mean, if you need it for an outline, if you need to make sure you're not missing a key point or something like that, then great. But sit down and write it yourself.

And the act of writing and rereading and rewriting and reorganizing, that's what authorship really is. And yes, in the mainstream publishing world, you have editors, copy editors, story editors, all those kinds of folks coming into the equation. But ultimately, they're there to guide the human author in what, I think, ought to be a very human process. In the nonfiction world, I would say maybe it's a little less critical. But I think what I will do for my next book is probably use an AI tool like ChatGPT. After all, it's called Chat. I would have it prompt me and ask me questions that I then give my answers to, and then let it digest and synthesize that material into a more structured format as a first draft of a book. But it's still my words coming out of my mouth. And the systems are really quite good at prompting you to elaborate on a point or explain something differently. So I think that's interesting.

LYNN: So I have another question for you based on that. So one of the things that's really big in the publishing world now is some editors refuse to edit anything that is written with AI at all. And so if there was an editor who took someone's work and they edited it and they didn't know they used AI to write it and that book got involved in a lawsuit, would that editor have any type of responsibility in that lawsuit? Could they get sued? Like they edited a book someone gave them if they did not disclose it was written with AI?

GORDON: Yeah. I mean, failing to disclose, if you're dealing with a publisher, yeah, I mean, if they have made a policy decision not to do this, it could hurt their reputation for something to come out that is not original with the author they're attributing it to. So you could actually be tarnishing their brand by duping them into publishing a book that isn't really yours.

LYNN: So what about from the freelance editor standpoint? So I'm a freelance editor. And I tell people, I'm not going to edit copy that has been AI generated. And so that person's like, no, no, no. Not AI generated. And I edit their copy. And then they end up getting sued down the line for using AI generated copy that's copyright infringement. Would I as a freelance editor be sued as a secondary party on that because I participated in writing and publishing that book as an editor? Or is that more of a gray murky area?

GORDON: I don't think it's a very gray area. Technically, yes, you could be considered a contributory infringer of some sort. I'm sure there is some precedent for it, but I don't think there's a lot of precedent for suing the editor who unknowingly edited AI-created work, or infringing work of any kind. If you recommended using a passage from another work, and it turned out that that's where the infringement lawsuit came from, then yes, I think you could be held liable: when the author gets sued, they turn around and sue you.

But I don't think there's too many scenarios where the lawyers are going to go after the editor who was really just sort of a gun for hire unless there's evidence that they contributed the infringing material somehow themselves. Publishers would be held liable. And if the editor is an employee of the publisher, well, then maybe they lose their job over it. But in the ordinary world, I wouldn't expect plaintiffs to go after editors. The editor is not making a name for themselves on the basis of this book that came out. That's the author. So go after the author.

LYNN: So say someone is using AI as a tool, but they want to double-check that the content they're using is not plagiarized, and they put it on a platform such as Grammarly and run the plagiarism checker. I know the standard in the industry is 5%, right? Because it's very hard to get anything to zero. So say it comes up as 10%, and you rewrite it in Grammarly until the plagiarism checker says 5% or under. Is that a good way to make sure your work is not plagiarized? Or do you have another tool or another way you like to check for plagiarism that's better?

GORDON: I don't know that there, I don't know what tool is good and what's not. I will say that one thing to be aware of when you're using a tool like Grammarly is that when you upload the work, you are training the AIs. You're giving it more of a data set. So that may be something we want to think twice about. On the other hand, we want these tools to help us make sure we're not ripping anybody off or whatever, the plagiarism thing.

The thing about it is the law isn't as clear or specific. And there's debate about whether the law should always be crystal clear and black-letter, bright lines between things. But the fact of it is in copyright law, the standard for whether something is copyright infringement is substantial similarity. Well, what does that mean? It's usually in the eye of the beholder, the jury. So there's definitely some personal bias in there. It's not just black and white like fact issues. And sometimes what's substantially similar is not necessarily the exact wording, but the overall impression that something creates.

Facts and ideas are not something that can be protected under copyright law, but stylistic things and the particulars of how we express something can be. So the real question is, is what's been copied protectable under copyright law in the first place? And if so, is this new thing substantially similar to that original? And then you get into, is there some excuse? Is it fair use or something like that? But yeah, I mean, it is a nuanced question. What's substantially similar? And different juries will come up with different answers.
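The percentage scores Lynn mentions can be made concrete with a toy example. Commercial checkers like Grammarly's are proprietary and far more sophisticated, but a minimal sketch of the idea is to count what fraction of a draft's word n-grams also appear in a source text:

```python
def word_ngrams(text, n=5):
    """All consecutive n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_percent(draft, source, n=5):
    """Percent of the draft's word n-grams that also appear in the source."""
    d = word_ngrams(draft, n)
    s = word_ngrams(source, n)
    if not d:
        return 0.0
    return 100.0 * len(d & s) / len(d)
```

A verbatim copy scores 100%, unrelated text scores 0%, and a partially reused passage lands somewhere in between. Note this only catches exact reuse; as Gordon points out, legal "substantial similarity" can turn on overall impression rather than exact wording, which no percentage score captures.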

LYNN: And so I know there's been a lot of things going on with AI, what you can and can't do. Is there a good website? Obviously, we can go to your website and check out something. I know you've got great information on there. But is there any type of government website or official website where you can go and check out, what are the current laws on AI? Or no, I'm assuming no.

GORDON: Well, again, we're not far enough into the litigation of these cases to have definitive answers. We have some idea that training may be considered fair use, in much the same way humans train themselves to write: by reading other people's work, getting ideas, and getting a feel for structures. There is a website I follow that is tracking all these different lawsuits. It's called ChatGPTIsEatingTheWorld.com, and it's a great place to go for the latest updates on what's happening in these various cases. You probably need a little legal training to understand it fully, but I think it'll get you started in the right direction.

LYNN: Awesome. For people who are listening, I'll definitely throw that in the show notes. That's awesome.

OK, so Gordon, this has been fabulous to get a little more clarity on AI, because we can never be 100% clear. But this is really a great start, I know, for a lot of people who are listening. So I know you have people who are going to want to know more about you and where they can find you. I know you have two awesome online courses for entrepreneurs. One is the Easy Legal for Entrepreneurs. And then you also have one for podcasters. Obviously, people can sign up for both if they're an entrepreneur and podcaster. And that's easylegalforpodcasters.com. So tell us about your two courses and who is a good fit for these.

GORDON: OK, well, first of all, don't buy them both. If you are primarily an online content creator, like a YouTuber or a podcaster, Easy Legal for Podcasters is the one to get. If you are more in the online course, digital business kind of space, then Easy Legal for Digital Entrepreneurs is the one. Everything, all my stuff is available if you go to GordonFiremark.com. That's the easiest way to find that. My law firm blog and stuff is at Firemark.com. But everything else is GordonFiremark.com.

What these programs are is basically a system for getting protections in place to secure your intellectual property and your ownership of whatever the enterprise is. Who owns the show if you've got a co-host, those kinds of issues. So systems for onboarding your talent, onboarding your team, for protecting your copyrights, your trademarks, your brands, and lots of forms and templates to help you do that the fast, easy, affordable way. Easy Legal for Podcasters or Entrepreneurs.

LYNN: And we all know it is much easier and way less stressful and time-consuming to front load all of those things than to find yourself on the back end of it in a legal battle.

GORDON: Well, I'll say the biggest mistake people make in business and in the law is waiting too long. Thinking they can wait, that they don't need to worry about this, that they're too small to be on the radar, until they're not. And you have to ask: do you want to stay small? If you want to grow, get these protections in place so you can sleep well at night, confident that you can take the big, bold actions that drive real success because you've got the legal stuff squared away.

LYNN: That is awesome. Great. No, thank you so much for coming on today, Gordon. I know my listeners got a lot of great value out of listening to this conversation today.

GORDON: That's been my pleasure. It's always great to see you, Liquity.

LYNN: Always great to see you too.

All right, my friends, remember to hit that follow and subscribe button so you get next week's show dropped straight into your podcast player. And remember, until next week, this is Lynn Liquity, reminding you to keep writing, keep dreaming, and keep creating. Your book is waiting to be born.

[MUSIC PLAYING]
