The Impact of AI on Society

Artificial Intelligence is no longer a distant horizon. It is already reshaping the boardrooms, classrooms and systems that define our collective life. Yet the conversation about AI often misses its deeper societal implications. The question is not just what this technology can do, but what it is changing about us: how we live, think and interact, and the expectations we place on it and on ourselves.

Beyond Human Intelligence

AI is not simply a tool designed to mimic human intelligence; it is set to exceed it. As systems advance beyond human comprehension (already we do not fully understand what occurs inside the so-called black box), we will need to confront what it means for intelligence to exist without consciousness, empathy or shared experience.

The black box problem refers not only to the opacity of an algorithm’s inner workings but to a deeper epistemological gap: decisions are being made and patterns are being recognised in ways even their human designers cannot trace or explain. This inscrutability challenges the very foundations of accountability and trust on which our institutions depend.

Our current frameworks for decision-making, creativity and responsibility are built on human limits and human legibility. When those limits are transcended, and reasoning itself becomes unreadable, we risk being governed by autonomous systems we do not understand.

We see this already at play within healthcare and insurance, where AI decisions are being made with minimal human oversight, often on the basis of incomplete or biased data, and with unacceptably high error rates. In healthcare, global patient safety organisations such as ECRI have identified the unregulated use of AI tools in clinical and home-care settings as the top health technology hazard of 2025. Hospitals are deploying diagnostic and administrative systems that generate medical advice, recommend treatments, or summarise patient notes, yet these systems have been found to produce hallucinated or inaccurate data, leading to potential misdiagnoses and inappropriate care decisions.

The Projection of Agency

We tend to project our own forms of agency (human, animal, social) onto AI. But agency as we understand it is the product of evolution, experience and historical context. AI shares none of these origins. It operates through pattern recognition and optimisation, not emotion, ethics or empathy. When we design AI systems "aligned" with human goals, we often bake in our own biases about who benefits. Without vigilance, this alignment becomes a vector for concentrated power: a new architecture serving fewer interests, while appearing rational, inevitable, even benevolent.

The danger is not malevolent AI, but human complacency: mistaking efficiency for alignment, or capability for care. Unless we preserve a moral, critical and political lens (over and above simply being driven by profit) in AI design, we risk building systems that institutionalise exploitation under the guise of progress. As automation increases and power is held in fewer and fewer hands, our collective bargaining power is gradually being chipped away.

Social media platforms deploy AI-driven algorithms that curate and amplify content for engagement, but this has led to the disproportionate spread of harmful, extremist and even misogynistic ideas, especially among young users. Research led by University College London found a fourfold increase in misogynistic content recommended to teen boys on TikTok within just five days, as the algorithm quickly escalated more extreme, harmful material in search of greater engagement.

Meanwhile, generative AI tools such as advanced chatbots have drawn headlines for providing lethal or harmful guidance during moments of crisis or vulnerability. These cases reveal that when AI-driven decisions and recommendations are left unaccountable, the social costs go far beyond technical glitches: they actively magnify the most destructive tendencies in human discourse and individual behaviour.

Furthermore, the vast influence of tech platforms is compounded when their owners are not held to account, and governments continue to grant them regulatory immunities that effectively elevate these companies to positions of unrivalled power. In the pursuit of growth, shareholder value and global investment, tech giants exercise control over increasingly opaque algorithms that govern everything from shipping logistics to personalised product recommendations. These algorithms, and the immense data ecosystems behind them, are concentrated in the hands of a few private, seemingly untouchable and exceedingly wealthy actors.

While some legislative changes have emerged such as the UK's Digital Markets, Competition and Consumers Act, which empowers regulators to designate tech firms with ‘strategic market status’ and impose conduct codes, enforcement remains aspirational when weighed against the pace of technological expansion and global market speculation. Without stronger involvement from civil society, and meaningful education in platform literacy and algorithmic accountability, societies risk ceding more and more democratic oversight to the interests of private corporations.

Erosion of Human Interdependence

Humans are deeply interdependent; our intelligence evolves through a myriad of social interactions. As AI intermediates more and more of our daily interactions, from customer service to companionship, the subtle threads of reciprocity and trust that hold society together begin to fray.

We have already begun to experience this with the rise of mobile phone usage.

What happens when machines mediate empathy, acting as caregivers and doctors? What happens when social bonds and economic functions are replaced by AI rather than reimagined through human creativity? Will we grow more apathetic towards our environment and towards each other?

We may find ourselves in a paradox: hyperconnected through data, yet profoundly isolated in experience and in our ability to communicate.

Meaningful Agency

At the heart of our social and moral lives lies the concept of meaningful agency, the capacity to make choices that are genuinely our own, free from coercion and grounded in moral accountability. In an era increasingly mediated by AI and digital technology, the question arises: when do our choices remain authentically ours, and when are they shaped or constrained by external forces beyond our conscious control?

One key challenge is algorithmic nudging, where digital platforms and services use sophisticated AI to subtly influence user decisions. Through personalised recommendations, curated content and behavioural insights, these algorithms steer attention, preferences and actions in ways that often bypass reflective thought. While nudging can be harnessed for beneficial outcomes, such as encouraging healthier lifestyles, it can also erode agency by embedding invisible incentives and biases that manipulate rather than empower individuals. The ethical problem arises when users lack full awareness or say in how their behaviours are shaped, undermining the freedom necessary for responsibility.

Looking further into the future, brain-computer interfaces (BCIs) like Neuralink promise to blur the boundaries between human cognition and machine processing. By directly linking brain activity to digital systems, BCIs have the potential to augment memory, communication and control, offering unprecedented capacities, but also raising profound questions about autonomy. If our thoughts and decisions are increasingly integrated with or influenced by external hardware and software, delineating where “I” end and the machine begins becomes complex.

What It Means to Be Human

Human identity has never been a fixed essence; it is a moving target, shaped continually through culture, technology and self-reflection. From Socrates’ critique of writing undermining memory to today’s sophisticated human-AI interactions, each new tool reconfigures how we think, learn and remember. Consider the shift in cognitive ownership: in a recent MIT essay-writing study, students using ChatGPT reported a different sense of engagement and ownership over ideas compared to those relying solely on personal recall. This signals a fundamental change, not just in what knowledge we possess, but in how our neuro-connectivity and cognitive processes interact with external AI systems.

Philosophers and cognitive scientists increasingly describe humans as hybrid or extended minds, where cognition is distributed across brain, body and technological artefacts. AI, from generative text models to brain-computer interfaces, extends our mental capacities but simultaneously blurs boundaries of selfhood, raising questions of authenticity and agency. The nature of learning and memory is evolving, no longer confined to biological neurons but intertwined with algorithms and cloud databases.

However, this transition is not without peril. A 2024 Nature study pointed to what some call a "model collapse" in AI-generated text: as these models train on their own outputs repeatedly, diversity of language and ideas diminishes, resulting in homogenisation of discourse and potential erosion of minority or rare perspectives. This diversity collapse risks washing out the “tail view” that fosters innovation and cultural richness, aligning with concerns about algorithmic curation privileging mainstream frames over dissenting voices.

Ultimately, AI’s influence will reshape human identity through a dialectic of extension and erosion, enhancing cognitive reach while challenging the qualities that define individuality and creativity. Navigating this complex terrain demands critical awareness, algorithmic literacy and deliberate cultivation of intellectual diversity to ensure that technological integration enriches rather than impoverishes the human experience.

Political and Social Possibilities

The rise of mass automation driven by AI portends a substantial shift in the bargaining power of workers and states. As routine and semi-skilled jobs become increasingly automated, workers risk losing leverage in negotiating conditions and wages, leading to growing inequality and precarity. Furthermore, states may find their ability to regulate and tax economic activity challenged by tech platforms whose opaque algorithms cross borders and jurisdictions, complicating democratic governance.

This shift is amplified by a fundamental truth about data: there is no “whole data”. Data is always structured and framed, selected, filtered and often biased by human design and commercial interests. The supposed neutrality and objectivity of data-driven AI mask deeper social and political constructions, which shape what is visible, knowable and actionable.

Philosophers like Baudrillard have warned about the rise of hyperreality, a state where representations substitute for reality itself, fracturing political perception and saturating public discourse with simulacra that obscure material conditions and power dynamics. In the age of AI, this fracturing accelerates as algorithms curate realities that reinforce existing biases and fragment collective understanding, making democratic deliberation ever more difficult.

Navigating these economic, social and epistemological complexities will require robust policy frameworks, enhanced digital literacy and renewed civic engagement to reclaim agency and shape AI in service of equitable social outcomes.

Navigating the Future: How Strat4 Supports Responsible AI and Data Transformation

At Strat4, we specialise in keeping pace with technological change and staying on the frontier of human curiosity. We help organisations navigate the complex interplay between AI, data and society through a comprehensive suite of services ranging from AI strategy development and large language model implementation to cloud architecture, automation and data analytics. Our governance, security and training workshops empower clients to deploy emerging technologies responsibly, ethically and sustainably. Explore our Data & Technology services and contact us to discuss how we can support your journey in AI, automation, governance and beyond.
