Medium has recently updated its content guidelines to explicitly ban AI-generated content, regardless of whether it is labeled as such. This policy shift is a significant development in the digital publishing landscape, highlighting growing concern over the integrity of content on platforms renowned for their strong SEO performance and visibility on Google.
Medium’s decision reflects a broader trend among online platforms grappling with AI-generated spam, which not only dilutes content quality but also exploits these platforms’ ability to rank well in search results. Google’s algorithm has been instrumental in boosting the visibility of sites featuring user-generated content, aiming to surface diverse voices and experiences from “hidden gems” across the internet. However, this has inadvertently made platforms like Medium targets for spam, flooded with low-quality AI-written articles and affiliate marketing links.
Hints For The Future
The move by Medium may well be a harbinger of changes across the industry, as other platforms adopt similar measures to safeguard their content ecosystems. Beyond the immediate impact on content policy, this development hints at a potential societal shift. There is growing discourse around the implications of AI in everyday life, particularly concerning job displacement and the authenticity of digital interactions. That discourse could harden into significant public pushback against AI technologies, possibly evolving into social or political movements advocating for content that is verifiably human-created.
Such developments suggest a future where digital platforms not only need to manage how content is created and labeled but also how they can maintain public trust in an era increasingly dominated by artificial intelligence. This could lead to more stringent regulations and perhaps a new focus on transparency and authenticity in digital content creation.
March 2024 Core Update
Google’s March 2024 Core Update has brought significant changes to the search visibility landscape, particularly for user-generated content platforms such as Medium, Substack, and LinkedIn Pulse. While much attention has focused on the visibility gains of sites like Reddit and Quora, not all user-generated content platforms have fared as well: Medium, Substack, and LinkedIn Pulse have all been affected negatively. Notably, the decline in search visibility is not uniform across the three; Medium and Substack have seen domain-wide impacts, whereas LinkedIn has only experienced changes within its Pulse subfolder. This selective targeting reflects an increased sophistication in Google’s handling of “Parasite SEO,” where sections of websites that host affiliate or sponsored content are penalised to preserve the integrity of search results.
Interestingly, this strategy aligns with recent industry discussions, particularly remarks made by Google’s Gary Illyes, highlighting ongoing efforts to enhance search quality and user experience by penalising manipulative SEO practices. The update’s focus appears sharply on subfolders of major publishing sites rich in commercial content, such as “best vacuum cleaners in 2024” or daily horoscope articles, which are often not directly related to the core themes of the publishing sites.
Despite the punitive measures on specific content categories, the core news sections of these sites remain largely unaffected. This distinction underscores Google’s commitment to promoting high-quality, informative content while clamping down on sections that compromise on content quality for commercial gains.
Site Reputation Abuse Update
This trend of targeted adjustments by Google sets a clear precedent for content creators and SEO professionals: maintaining high-quality content that genuinely adds value to readers is paramount. It also signals to digital marketers that manipulative tactics are likely to face harsher penalties as the algorithmic guidelines evolve and updates become more frequent and refined. SEOs have been given ample time to prepare for the “Site Reputation Abuse” update, which is scheduled to roll out on 5th May 2024.
Conclusion
The recent updates and trends within digital publishing, notably Medium’s ban on AI-generated content and the selective targeting by Google’s March Core Update, signal a pivotal shift in how content is managed and perceived on major online platforms. These changes underscore a concerted effort to enhance the quality and authenticity of digital content. As Google refines its algorithms to penalise manipulative SEO practices and promote high-quality user-generated content, platforms like Medium are preemptively setting standards to combat AI spam and preserve content integrity. SEO agencies and marketers in general must sit up and take note or risk having the rug pulled out from under them in the future.
This proactive stance by content platforms is reflective of a broader societal awareness and skepticism regarding the role of AI in content creation. It suggests a potential future where public trust hinges on transparency about the origins of digital content and a clear distinction between human and AI-generated materials. This movement could lead to more rigorous content policies across the web and possibly ignite a societal pushback against the pervasive influence of AI in professional and creative realms, emphasising the value of human touch in digital interactions. We’ve seen various communities react in different ways to AI-generated assets; AI-generated graphics and imagery, for example, have been perceived as taking work away from graphic artists and illustrators. It’s clear that the future of AI in digital content creation won’t be straightforward.
If you’d like to learn more about AI in marketing or any of our other services, please get in touch.