
When was the last time you came up with an idea without using ChatGPT, searching Google, or skimming the internet for inspiration? To many people, doing anything unassisted now seems almost exhausting. Artificial intelligence (AI) technologies have become a seamless part of our lives, and the growth of automation, cognitive offloading, and algorithmic influence has bred a culture of mental dependency. AI is no longer just a technology; it is a trillion-dollar enterprise led by billionaires who have recognized the enormous financial potential of controlling not only what people purchase but also what they think. While most of us are busy asking ChatGPT for help with projects or captions, the most influential people in the tech industry are busy shaping the direction of cognition itself. Elon Musk, Sam Altman, Sundar Pichai, and Mark Zuckerberg frequently talk about “democratizing intelligence.” Beneath the flashy advertising, however, lies a starker reality: AI is a dependency-based industry. The objective is to keep us engaged, not to empower us.
The Convenience Trap
Convenience is AI’s main selling point, but it carries a cognitive cost. As the technology advances, humans are becoming increasingly complacent, and our independence is the hidden price of that ease. Psychologists call the practice of outsourcing thought processes to technology cognitive offloading (Sachdeva, 2022). Each time we let a bot complete our ideas, we train our minds to rely on computers rather than ourselves. Students are not the only ones doing it. Engineers, doctors, and lawyers increasingly trust AI-generated suggestions, even when the algorithms are wrong (Coiera et al., 2018). In a world where speed matters more than accuracy, AI has made critical thinking a luxury. Naturally, this is not a flaw; it is a business model. The richest AI investors, like Bezos and Musk, have spent billions building ecosystems designed to keep us mentally dependent. According to Beamer (2024), AI has ushered in an “era of cognitive laziness” in which individuals prioritize speedy responses over curiosity. Writing in The New York Times, Chomsky (2023) argues that ChatGPT “mimics intelligence but lacks understanding,” and when people depend on these systems without question, that lack of understanding becomes their own. Users frequently take AI’s formal language or authoritative tone as fact rather than exercising their judgement, which flattens intellectual depth. Consider this: the less thought we put in, the more information we produce for them. Every question we pose and every prompt we type becomes valuable training material. Sam Altman of OpenAI once called data “the most valuable resource of the century.” What he didn’t say explicitly is that your data, the remnants of your cognitive process, is what drives their earnings.
The Decline of Creativity
The ability to think critically and creatively is declining rapidly as people grow more reliant on AI. Creativity used to be human and chaotic; it required failure, passion, and patience. Now? It’s a subscription. The internet is overflowing with AI-generated stories, art, and melodies. Need a poem? ChatGPT or Gemini will generate one in seconds. But the effort that gives art its significance is lost in the process. While AI may mimic creativity, researchers caution that it cannot reproduce genuine originality or emotional subtlety. Investors nevertheless continue to promote the false notion that it can (Byrge et al., 2025).
Journalists and designers today use AI to plan, brainstorm, and even execute their work. It seems efficient, but there is a catch: that loss of creative confidence benefits those investing in AI, and it is not an accident. AI systems are designed to reward consistency (Newton & Roose, 2025). The algorithms favor outputs that are popular, dependable, and efficient. Much internet content has a déjà vu feeling because it shares the same recycled design, optimized for engagement. Replication trends; authenticity does not. The billionaire investors driving this change, many of whom have no artistic background, see creativity as data. Stories, songs, and paintings become algorithmic training inputs. It is creativity dehumanized and turned into raw material for profit. “AI resembles intelligence but lacks understanding,” as Chomsky (2023) cautioned. The industries it currently supports may be characterized the same way.
Humans, by contrast, use emotion and intent to generate meaning. When we substitute mechanical synthesis for creative effort, we lose the complexity and sensitivity that make art and philosophy truly human (Chomsky, 2023). Even professionals in the creative industries feel this alarm.

What AI Wants Us to Think:
More insidious than cognitive sloth or creative decline is AI’s ability to shape what we think. Each time a user opens an online platform, algorithms determine what they like, what they ignore, and what they see next. At first it feels personal; in reality, it is predictive. Algorithms evaluate users’ viewing patterns, forecast their preferences, and serve them material consistent with their past behavior. Over time, this produces a digital mirror that reflects our existing opinions back at us while filtering out opposing viewpoints. Pariser (2012) coined the phrase “filter bubble” to describe this situation, cautioning that personalization lays the groundwork for intellectual isolation by limiting exposure to different viewpoints, an effect amplified in AI-driven media. Unlike human editors, algorithms are committed solely to engagement, not to truth or balance. Because businesses prioritize profit and user retention, AI systems are programmed to maximize emotional reactions rather than cognitive growth (Sharma, 2024). It’s capitalism, not an elaborate scheme.
Users may believe they are making well-informed decisions when, in fact, AI curation is quietly steering them. The issue is not limited to social media: recommendation algorithms in news, commerce, and educational platforms shape users’ perspectives as well (Sachdeva, 2022). Because AI appears morally neutral, it can reinforce prejudices without being held accountable. “AI may be a danger to humanity, but it could also transform it,” Longstaff (2024) argues, depending on how intentionally we choose to use it. If we do not challenge the ideological sway of AI systems, we risk becoming complicit in our own brainwashing.
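To make the mechanism concrete, here is a deliberately simplified sketch, not any real platform’s system, of how engagement-driven ranking narrows a feed. A toy recommender scores items by how similar their topic is to the user’s click history; because each round’s top result feeds back into that history, the feed collapses toward whatever the user clicked first.

```python
from collections import Counter

def recommend(items, history, k=3):
    """Rank items by how often their topic appears in the click history.

    This is the engagement-maximizing step: items resembling past
    clicks score highest, so unfamiliar topics sink out of the feed.
    """
    topic_counts = Counter(item["topic"] for item in history)
    return sorted(items, key=lambda it: topic_counts[it["topic"]],
                  reverse=True)[:k]

# A small catalogue of content items, each tagged with a topic.
items = [{"id": i, "topic": t} for i, t in enumerate(
    ["politics", "sports", "politics", "music", "sports", "politics"])]

history = [items[0]]   # the user's first click happens to be "politics"
for _ in range(3):     # each round, the user clicks the top recommendation
    feed = recommend(items, history)
    history.append(feed[0])

# Every subsequent click reinforces the original topic: a filter bubble.
topics_seen = {item["topic"] for item in history}
print(topics_seen)
```

The feedback loop, not any single ranking decision, is what produces the bubble: the recommender is only ever “personalizing,” yet the user never sees sports or music again.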
The Billionaire Ethics of AI: The Moral Blind Spot
AI’s influence on cognition, creativity, and culture goes far beyond the individual. Political structures deteriorate when people stop thinking critically as a collective. In politics, AI-driven disinformation can sway voter behavior by exploiting emotive algorithms and automation bias (Beamer, 2024). In education, it can standardize thinking to the point that originality is treated as an exception. Professional decision-making in industries like healthcare, banking, and law is similarly affected by cognitive offloading: automation bias can lead experts to overlook mistakes and trust machine recommendations without sufficient verification (Coiera et al., 2018). As this complacency spreads across industries, ceding crucial oversight to algorithmic authority, it could produce widespread systemic failures. Philosophically, the development of AI calls into question what it means to be human.
According to Longstaff (2024), moral and intellectual consciousness has always been what distinguishes our species. The wealthiest people in technology enjoy talking about ethics, mostly because they want to define it. Musk warns about the existential risks of AI while building his own AI business, xAI, to “save humanity” (Milmo, 2023). Altman preaches about “being in line with human values” while actively pursuing profit through OpenAI’s business ties (Russell, 2025). It is billionaire ethics: it always comes back to them.
Yet as AI becomes more integrated into daily life, humans risk sacrificing that intricacy for convenience. Longstaff believes the difficulty lies not in making AI more human, but in keeping humans from becoming more artificial.
Wake-Up Call:
Despite these warnings, the future doesn’t have to be bleak. AI is a mirror of human purpose; it is not intrinsically dangerous. The problem is how we use it. So how do we respond? First, we start thinking again, really thinking, not just absorbing or reacting to information online. AI should be treated not as a sacred text but as a hammer: a tool, not a teacher. Its most beneficial application is as a working catalyst, something that generates ideas rather than brings them to completion (Byrge et al., 2025). That means bringing effort and experimentation back into the creative process rather than settling for algorithmic gloss. Education, too, must shift from teaching students how to use AI to explaining why it works and who benefits from it. Alongside technical training, we need advocacy for AI literacy: understanding ownership, bias, and manipulation should be as fundamental as understanding code (Wang et al., 2025). Innovation can then be reimagined as a dialogue between person and machine rather than a task delegated to one.
Artificial intelligence should serve as a means of creative inspiration rather than convergent production: a place where creativity begins, not a destination (Byrge et al., 2025). People must therefore deliberately return to reflective habits of mind in order to restore the balance between technological assistance and intellectual independence.
Those who use digital media must actively resist algorithmic captivity. Pariser (2012) advises purposefully seeking out different viewpoints to break the “filter bubble.” In practice, this means drawing on a variety of information sources, curating content manually, and treating internet suggestions that seem overly tailored with skepticism. Programmers and legislators, likewise, have an ethical responsibility to create AI systems that encourage critical engagement rather than passive consumption. Ethical AI requires openness and accountability from all parties involved (Sharma, 2024). Without such oversight, the decline of human cognition could become permanent.
Conclusion:
AI has given humanity incredible powers, but it has also made thinking optional. The more we depend on machines to make judgements, the more we surrender the very qualities that make people unique: reasoning, creativity, and moral judgement. The effects of this reliance are already visible in both the newsroom and the classroom.
The risk is not that AI will become more intelligent than humans, but that people will stop trying to use their own intelligence. “The false promise of AI is that it can think for us,” warns Chomsky (2023). No machine can ever fully replicate human comprehension, emotion, or moral complexity unless we voluntarily relinquish them. Regaining our capacity for independent thought may be the most significant thing we can do in the age of artificial intelligence.
By Ashlee Perry: 21409400
References:
Beamer, T. (2024). The Harmful Effects of Artificial Intelligence (AI) and Projected Future Implications. Tech Business News. https://www.techbusinessnews.com.au/blog/the-harmful-effects-of-artificial-intelligence-ai-and-projected-future-implications/
Byrge, C., Guzik, E., & Gilde (2025). Artificial Intelligence and the Creative Process: Does AI-Creativity Extend Beyond Divergent Thinking? Journal of Creativity. https://doi.org/10.1016/j.yjoc.2025.100105
Chomsky, N. (2023). The False Promise of ChatGPT. The New York Times. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Coiera, E., Lyell, D., & Magrabi, F. (2018). The Effect of Cognitive Load and Task Complexity on Automation Bias in Electronic Prescribing. Human Factors, 60(7), 1008-1021. https://doi.org/10.1177/0018720818781224
Daria, B., Irina, S., Novikova, E., & Vladimir, M. (2025). Analysis of the Advantages and Disadvantages of Distance Education in the Context of the Accelerated Digital Transformation of Higher Education. Sustainability, 17(10), 4487. https://doi.org/10.3390/su17104487
Milmo, D. (2023). Elon Musk Launches AI Startup and Warns of a “Terminator Future.” The Guardian. https://www.theguardian.com/technology/2023/jul/13/elon-musk-launches-xai-startup-pro-humanity-terminator-future
Newton, C. & Roose, K. (2025). Everyone is Using A.I. for Everything? Is That Bad? The New York Times Magazine. https://www.nytimes.com/2025/06/16/magazine/using-ai-hard-fork.html
Pariser, E. (2012). The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think. Penguin Books.
Russell, M. (2025). The Ministers of Thought. Business Insider. https://www.businessinsider.com/mark-zuckerberg-sam-altman-minister-of-thought-meta-openai-2025-10
Sachdeva, C. (2022). Causes and Consequences of Cognitive Offloading. University College London. https://discovery.ucl.ac.uk/id/eprint/10164146/2/Final_thesis.pdf
Sharma, S. (2024). Benefits or Concerns of AI: A Multistakeholder Responsibility. Futures. https://doi.org/10.1016/j.futures.2024.103328
Wang, K., Cui, W., & Yuan, X. (2025). Artificial Intelligence in Higher Education: The Impact of Need Satisfaction on Artificial Intelligence Literacy Mediated by Self-Regulated Learning Strategies. Behavioral Sciences, 15(2), 165. https://doi.org/10.3390/bs15020165