
Brilliant Ideas
If you've ever caught yourself thinking, “What if this idea could actually work?”, you're in the right place. This is the podcast where I chat with solopreneurs who've taken their ideas from “hmm, what if?” to “wow, look at this!” and turned them into successful courses, memberships, and eBooks.
I'm your host, Alyssa Bellisario—a Professor turned Digital Product Strategist. I help you break down your brilliant ideas into profitable courses, memberships, and eBooks, while teaching you how to build automation funnels that can scale your business to consistent $20K months with a lot less stress. Tune in, get inspired, and see why hundreds of solopreneurs trust me for expert guidance on everything from digital products, AI, curriculum designing, list building, selling strategies, sales funnels, automations, and launch tactics that drive results.
Whether you're at the beginning stages of creating your course, membership, or eBook, or are looking to take your business to the next level, each episode is designed to help you take immediate action and guide you toward your next step.
Brilliant Ideas
#24: The Dark Side of Social Media: AI, Bots, and Shadow Banning with Tim O'Hearn
Tim takes us behind the social media curtain to reveal uncomfortable truths about what's really happening in our feeds. Having operated in the gray areas of Instagram growth, he shares startling insights: up to 40% of social media traffic might be fake, strategically engineered through automated engagement tactics designed to trigger reciprocation. This isn't speculation – it's based on firsthand experience running services that manipulated algorithms for follower growth.
The conversation ventures into the increasingly credible "Dead Internet Theory" – the notion that most online content isn't created by humans anymore but by AI and bots. What makes this particularly fascinating is how quickly we've moved from theory to reality. When first proposed, AI-generated content wasn't widely available. Today, distinguishing between human and AI-created text or images has become nearly impossible for the average user. Tim shares a personal encounter with a Reddit karma-farming bot that offered plausible but nonsensical responses to his research questions, highlighting how pervasive this issue has become.
For business owners, these revelations create strategic dilemmas. The more businesses rely on AI for content creation, the more they contribute to the very problem they might criticize. Meanwhile, shadow banning – when platforms limit content reach without notification – presents another layer of risk. Tim traces this practice back to early internet forums, explaining how platforms use it to avoid the support overload that comes with explicit bans. His practical advice? Build an email list as insurance against unpredictable platform changes. Unlike social followers, your email subscribers remain accessible regardless of algorithm shifts or account restrictions.
Ready to see social media through new eyes? Grab Tim's book "Framed: A Villain's Perspective on Social Media" for an even deeper dive into these topics from someone who's operated on both sides of the digital divide.
Connect with Tim:
Send me a text if you loved this episode!
Rate, Review, & Follow on Apple Podcasts
Your feedback helps me reach more solopreneurs like you.
It’s super easy—just click here, scroll to the bottom, tap those five stars, and hit “Write a Review.” I’d love to know what resonated most with you in this episode!
And don’t forget to hit that follow button if you haven’t already! There’s plenty more coming your way—practical tips, inspiring stories, and tools to help you grow a business that makes a real difference. You won’t want to miss out!
Let's Connect on Instagram
alyssabellisario.com
Search your favorite episodes HERE
This podcast is produced, mixed, and edited by Cardinal Studio. For more
information about how to start your own podcast, please visit www.cardinalstudio.co
or e-mail mike@cardinalstudio.co
Because, when I was a kid, safety on the internet meant something totally different than what it means today.
Alyssa:Welcome to Brilliant Ideas, the podcast that takes you behind the scenes of some of the most inspiring digital products created by solopreneurs just like you. I'm your host, Alyssa, a digital product strategist who helps subject matter experts grow their business with online courses, memberships, coaching programs and eBooks. If you're a solopreneur with dreams of packaging your expertise into a profitable digital product, then this is the podcast for you. Expect honest conversations about how they started, the obstacles they overcame, lessons learned the hard way, and guests who faced the same fears, doubts and challenges you're experiencing, from unexpected surprises to breakthrough moments and everything in between. Tune in, get inspired and let's spark your next big, brilliant idea.
Alyssa:Welcome back to the Brilliant Ideas Podcast, and if you're new here, I'm Alyssa, and this is the podcast where we dive into the real stories behind big ideas, smart strategies and the things no one tells you about when you're building a digital product business. Today we're talking about something we all use every day: social media. We're diving into the hidden side, the part that most people don't see, or don't want you to see. My guest today is Tim. He's the author of the book Framed: A Villain's Perspective on Social Media. If you've ever questioned what's really going on behind your feed, or if you just want to know why people are still buying fake followers, this episode is for you. Let's get into it. Welcome to the show, Tim.
Tim:Thanks so much for having me on today.
Alyssa:Yeah, and I'm really excited about this topic, because it's one of those things where we think of social media as people posting, engaging and sharing content. But the more we dig, the more we realize there's this whole unseen layer shaping what we see, what goes viral and even how we interact. It's not really just about influencers, ads or algorithms, I think there's a whole other conversation about that, but it's also about how much of what we see online is actually real versus how much is carefully engineered behind the scenes. So my question for you is: how much of what we see on social media is fake versus real? Can you break that down?
Tim:Sure, I think it's really cool to start off with a question like that, because it requires challenging a lot of what we see and believe every day. Regardless of somebody's experience with social media, they're probably on it, and they probably have usage patterns and opinions that are quite well developed. You know, for my friends or for me, we've been on social media for 15 years or more, so we're really ingrained. We take a lot of what we see at face value, perhaps, but the question remains: what is real, what is kind of real, and what is maybe, you know, algorithmically augmented? So when I put everything together to write my book, I was thinking about the behavior I was partaking in, which was algorithmic fake engagement on Instagram.
Tim:So what we were doing was getting people more followers by logging into their accounts and then shooting across DMs, comments and likes, with the idea that they would be reciprocated, and the rate of return, or the rate of reciprocation, was right around 10% or 15%. So the idea was, if you send 100 follows per day, you might expect to get 10% of those back, or 10 follows. If you scale that up in an automated fashion across days, weeks and months, that could actually be pretty decent follower growth for a new account, for someone who's trying to get to their first 1K or first 2K followers, especially on Instagram. So really we had to extrapolate that and think: if we're doing this, and we're offering this service, and we're one of the smaller players, how much overall might be fake on these platforms? And the number we got to by the late 2010s, about 2018 or 2019, was that as much as 40% of all traffic might actually have been fake. It's just very hard for us to see that without being insiders at a company like Meta.
Alyssa:Wow, that's kind of unsettling to think about, right? Like, how much of what we see online, we don't really know those numbers, because they're not shown to us, whether it's bot engagement or fake followers. But that actually ties into something bigger, which you know a lot about, which is this dead internet theory. I know a little bit about this. For my listeners who haven't heard of it, the dead internet theory is the idea that most of the content we see online isn't actually created by real people anymore, but generated by AI and bots. What's interesting is, when you look at how much AI-generated content is already out there, it makes you wonder how much of this theory is actually true. I'm curious, what is your take on this? Is the dead internet theory real, or is it just another internet myth? Because, I don't know, I kind of think it's true.
Tim:Yeah, more and more people are coming around to say maybe it's less of a theory and it's more being proven. It's just to what extent is it true, rather than is this a myth? What I find funny is when it was first proposed, ai generated content was not commercially available, was not publicly available. So when I first started writing essays, thinking about you know what is the future of the internet, what is this? I called it late stage Instagram, like what's going to happen once we become so profit driven that we might have bots running the show. This was before we could conceptualize how much content could be created by a computer, so that was maybe four years ago. Today it's totally different, where most pictures in your feed could be fake and the majority of text you see on Reddit. You probably wouldn't be able to tell real versus fake. So that's been a really concerning topic that I've had to address head on, even while I was doing research for my book.
Tim:So I published my book Framed: A Villain's Perspective on Social Media in late February, and while I was doing some of the very, very last research for the book, I reached out to people on Reddit and I said, hey, there's this part of the early internet that isn't really well captured anywhere. Do you remember what happened on MySpace in 2007? Do you remember some of these experiences? One of the responses I was met with was actually provided by a bot, specifically a karma-farming bot that was going to newly posted threads that might have had the potential to go viral and posting plausible but ultimately nonsensical responses. And it was only obvious there because we were talking about MySpace in 2006 and 2007, which a lot of the AI had been poorly trained on.
Tim:So when we think about the dead internet theory, now I've had a personal experience with it. You've probably had personal experiences with it too, asking, is everybody just a shill now? Is any of this real? Of course, we are not going to get to a point where no one on the internet is real; you're real and I'm real, and I think we can be pretty confident in that. And of course, you have what I call the intimate network, which is people who you interact with a lot because you also know them in real life, or you've gotten some other proof that the identity matches a human being. But as we go further and further and we get more people on platforms such as Reddit, I think the chances of this being true are much higher, and I would say the capacity for feeds to be manipulated by fake content is a lot larger.
Alyssa:Yeah, and so what do we do with that information? Like, for businesses who are on there selling their services and their products, where do we go from here? Because if what we're selling and posting on Instagram isn't getting as much real engagement, it could become a real problem, right?
Tim:Yeah, for sure. And it's a balancing act, especially for businesses, because now we see a lot of businesses using AI to write their headlines, using AI to write their call to action, and then thinking, oh, you know what, I'll use AI to publish this picture because I don't really have original content. You have this weird state where, even though there is a human making the decisions, it could just as easily be a bot making the decision to use AI for advertising. So, as a human who's using AI to generate headlines, you're one step closer to the dead internet theory, to making it true, or at least to creating this state where a new person on the internet will see your content. I mean, imagine somebody logs into Facebook now, or logs into Reddit now, for the very first time. It's plausible that a lot of what they see is at least AI augmented, meaning somebody used AI to help them write, to help them with conversions, or to help them with the content in general. So I think what a business owner can do is ideally not use AI at all, which is getting harder and harder. But as far as enforcement, it takes, I think, large-scale agreement that we don't want bots doing certain things, and it has to be an agreement between the platforms, which of course have their own decisions to make regarding that, and the platform users. So if you're a user on Facebook and you've worked really hard to get, say, 5,000 fans, you're going to be annoyed if someone is quickly gathering fans because they're using a bot. So you would naturally say, hey, I support enforcement against bots and against people using AI-generated content.
It gets much more difficult when the platform has to decide: well, if I ban everyone who's ever used AI and I restrict them from using this, where are my users? Where are my customers? The platform has this tough balancing act where, if they enforce things too much, they might wrongly flag your content as being AI generated, which is really, really tough. So I think the ultimate enforcement action is probably to say, if we are in agreement that bots are bad, and especially bots using AI to seem human are bad, the only solution is taking verification to the extreme. So think of what we saw with Twitter in 2009.
Tim:When they introduced the blue checkmark, it was actually an experiment. It wasn't like oh hey, this is going to be the main mark of authority on the Internet for the rest of history. That's what it became, but that wasn't their intention. In fact, it was only to prevent impersonation for individuals. Businesses couldn't get the blue checkmark at all in 2009. It was not until later.
Tim:So we have to take that to the extreme. Blue checkmarks are now much more commoditized and available to almost anyone on these platforms. I've even submitted my ID, but when I log in, do we know that it's actually me? The extreme is ongoing verification, which is every time you log in, or at some regular interval, you would be prompted for a fingerprint or a face scan or something to ensure that it's actually you using your account. This is extremely expensive to implement, and nobody wants to be the first one. But the example I gave recently on a podcast is that there are Uber drivers using other people's identities to give rides. So if there's one place where we should probably start trialing this, it's rideshare, and then, beyond rideshare, it's probably delivery apps, because in New York there's a huge black market for Grubhub accounts. And then, of course, that trickles down to, well, is there a black market for Reddit accounts, Twitter accounts, LinkedIn accounts? The answer is yes, but we should probably start where there are actual human safety concerns.
Alyssa:Wow, yeah. And I also think it's up to the platforms to start talking about this and enforcing it, you know, that businesses should not be using AI to create their content for them. But it just seems like that's where everything's going. Like, if you're not using AI in some capacity in your business, then you're kind of an outcast on a deserted island. Like when I've said, you know, when you create a course, for example, you don't want AI to be creating the course for you. It can help you, assist you, but ultimately you're the expert here. You have to be the one to create the course outline and to think about all those things. A bot is helpful, but I just find we're losing that kind of personal touch, you know. It just seems like we're going in that direction, and it's really hard to see where it ends.
Alyssa:And I find a lot of business owners are now relying on it so much. Like, bots have become their business, their assistant, their content creator, or where they get their ideas. I had a client who said to me recently, she's like, I feel stupid now. I feel like I can't think for myself anymore because I just ask ChatGPT to help me with my ideas. I rely on AI so much for my business, to create my ideas and my content, that I can't think for myself anymore. It's crazy how quickly we've come to rely on bots to do everything for us.
Alyssa:And you know, I'm kind of going down a rabbit hole here, but I'm curious about the content that, yes, okay, we do see on Instagram, great, but also about the content that we don't see. So I'm referring to things like shadow banning here. There's a lot of mixed information online; some say it doesn't exist, and then others say they've experienced a shadow ban themselves and share some context around it, and it makes you question: is it real or is it not? So I want to ask you: what exactly is a shadow ban? And is it a real thing, or is it just something people claim when their engagement drops, to draw more attention to their accounts?
Tim:Yeah, there's definitely a spectrum there, and I think it's a really important point now because the concept of a shadow ban is being used in perhaps dangerous ways. So I would expand it and address the spectrum by saying we're also talking about algorithmic interference, which is saying we have a certain set of beliefs about how a feed algorithm will post and share and expand the reach of our content, and a lot of times there can be manual overrides that are completely contrary to what a user expects, that seriously inhibit reach. A lot of that is what people now call a shadow ban. In my book I actually found that the best historical research for the concept of shadow banning is Urban Dictionary. It sounds ridiculous, but this website has been around since the dawn of time and basically nothing ever gets deleted. So you can see how shadow banning was first suggested around 2007 or 2008 on online forums, things that gamers were using, and the examples that were given were very early-internet, gamer driven. So they're saying, oh, this guy was flaming. We don't even know what the term flaming means anymore. It means that you were starting a flame war, which was a type of trolling, which we now call toxic, or there's other terms we use today, but this is an ancient term as far as internet history is concerned. Then you see more definitions popping up where they're talking about Twitter, and then the political aspect, which has been really relevant in the United States. They start saying, hey, it might be one side of the spectrum versus the other, and the platform is aligned in one way, silencing voices on the other side. And we see that morph out, where more recent examples do concern platforms like Meta.
Tim:So what is a shadow ban? It is essentially a ban, or an effective ban in that your reach is zero or close to zero, while the user has no obvious indication that their reach has been restricted. Why would platforms even bother doing this? It's a control system that a user has to uncover on their own, because they're not getting as much reach as they would expect. The difference is that if you log into your phone on whatever app and you see a ban message, you're going to freak out. It's going to be a very, very chaotic, stressful situation.
Tim:I've suffered so many bans I've lost count, because I was breaking the rules industriously for years. But for other people, if it's their business and they only have one or two accounts, losing that account access is really, really awful. If you look at r/Instagram on Reddit, the entire feed is people complaining about their accounts having been banned, because it's emotional. If your life is tied to your account, it's emotional. So I propose that shadow banning is the response to the insane support load that's generated by actual bans. It's saying, hey, we don't actually have to ban this person. Let's just make them think they're still a part of the conversation, but they're not. When they realize this, in a couple of weeks or a month, maybe we'll lift the ban. It can be used as a temporary ban, but it also won't immediately clue the user in to the fact that they've been restricted, and I think that's the real usefulness of it for these platforms and why it's proliferated.
Alyssa:That's really stressful for a small business owner.
Tim:If that happens, yeah. It's everything.
Alyssa:Yeah, I mean, I feel like if that happened to me... I don't have, you know, thousands of followers, but I'm still emotionally tied to my Instagram account. That would be awful if that happened to me. And the thing is, too, there's no explanation. I find that if it does happen, there should be a clear explanation as to what rule you broke and why, and then how you get it back, and that all comes, you know, from customer support and all those things. But beyond that, it's something to think about: the types of content we're creating for our businesses, what we can and cannot say, and what is appropriate and what is not, you know.
Alyssa:And so I feel like this ties really nicely into something I always like to end with on my podcast. It's called the Brilliant Bites of the Week, and I created it because I wanted my listeners to feel like, okay, if I were listening to this episode, what could I do right now for my account? What strategy could I implement right away that's helpful and could give me maybe a different perspective on social media? So what advice, insight or takeaway can you leave that they can use right away? That would be helpful.
Tim:Sure, and I think it's really close to what we're addressing with shadow bans. There's so much to say on this that it's actually two chapters in my book, named Safeguarding and Shadow Bans, Part 1 and Part 2. It talks about the moving of the goalposts as far as what is appropriate and what internet safety actually means, because when I was a kid, safety on the internet meant something totally different than what it means today, and what's appropriate to say on a platform, the conversations people are having, the words that you can and can't use, are totally different. For anybody looking for that advice, or, you know, a way to avoid a lot of this, the key is to think about how to construct an audience in a way that's, I would say, organic, or even agnostic of the platforms.
Tim:So if your account gets banned, if there's some enforcement action, you don't own your followers forever.
Tim:You can't just export them and take them somewhere else.
Tim:If you haven't already exposed them to your other channels, you're out of luck. Maybe some of them will find you, maybe they follow you on YouTube or your website. But my advice is to understand this or to maybe avoid participating in the crazy moving of the goalposts is to start a mailing list, and I know it's such simple advice, but it's like if you lose your Instagram, your TikTok I still get people messaging me about losing TikToks. I never even operated on TikTok, but it's such a huge business segment for certain people that I say you could have mitigated this if you were funneling people to a mailing list, because in that case, you do own it and it's however you interface with email. No single platform can control it. So I think that's a really interesting thing for those who maybe don't have as much of they don't care as much about going deep into the nature of why social media is what it is, and it'll actually help them funnel the right customers to the right channels, depending on what they're posting and what they're selling.
Alyssa:Yeah, I agree with that. Email lists all the way, because you just don't know; when a platform goes down and everyone freaks out, you're left wondering, yeah, I should have built an email list. So that's great advice. And I think we've only scratched the surface of how much social media is really shaping what we see, believe and engage with for hours. So I just want to thank you for pulling back the curtain on this and giving us a deeper look at a side of the internet that most people do not talk about. I've had other guests on here who've talked about social media, but more in the way of influencing, growing your social media content, shares and all those things, so you give a really different perspective, which is really good. Now, before we wrap up, I know that you dive even deeper into all of this in your book Framed: A Villain's Perspective on Social Media. Can you share a little bit about what readers can expect from that?
Tim:Sure. Framed is a really exciting book, and I only created it because the existing books were not interesting to me. I would read them and say, yeah, there are some good points here, but this is written by somebody who is too far away from what's actually going on. It was never an insider; it was usually some wealthy, older person who had come up through tech or through the journalism space, and they cited all their sources, but nobody was coming out and saying, here's what I did.
Tim:As a villain in the space, you get just about as close to an insider as you can, because you're knocking against the wall that they created. You're the one breaking the rules, and it's really this push and pull versus the business itself. So what Framed is, is equal parts memoir and technological essay collection, and it's really a book of interest for anyone who is on social media, whether as a user or a business owner, who's looking to get just a little bit deeper. It definitely goes deeper than this conversation; I think we just scratched the surface. Framed is out now in paperback. It's about 440 pages of purely original content, and I think it'll have a big impact on how we look at social media going forward.
Alyssa:And there's also a link in the show notes where you can grab a copy. I just want to thank you, Tim, for being a guest today.
Tim:Yeah, thanks, Alyssa.
Alyssa:No worries. For everyone listening, if you've got thoughts on this one, send me a DM on Instagram. I'd love to hear your opinion. Thanks for hanging out with us, and I'll catch you next time with another brilliant idea. Thanks for tuning into this episode of Brilliant Ideas. If you love the show, be sure to leave a review and follow me on Instagram for even more insider tips and inspiration. Ready to bring your next big, brilliant idea to life? Visit alyssabellisario.com for resources, guidance and everything you need to start creating something amazing.