Friend or foe: what AI means for the future workforce
Artificial intelligence (AI) has been growing rapidly in recent years, sparking mixed reactions from the public. While some people worry about job losses due to automation, others see AI as a helpful tool that simplifies daily tasks and improves productivity.
This raises an important question: how can humans and AI coexist safely and effectively? The answer lies not in choosing between humans and machines, but in learning to use AI as a collaborative tool, guided by strong ethical frameworks and human judgment. To understand why, it helps to look at how AI has been portrayed and developed over time.
In early science fiction movies, AI was often shown as more intelligent than humans—sometimes even as a threat capable of taking over the world and leading to human extinction. These portrayals have shaped public perception, often creating fear and scepticism around AI development.
However, such negative narratives can slow acceptance and progress, potentially limiting the benefits AI could bring to society.
In 2022, OpenAI introduced ChatGPT, a highly advanced AI system designed to communicate in a human-like way. While it still clearly functions as a machine, it marked a major step forward in making AI conversations more natural and accessible. Since then, it has continued to evolve through successive model updates and refinements.
From ELIZA to ChatGPT: a 60-year evolution
The story of artificial intelligence chatbots began in 1966 with a surprisingly simple program that would change the way humans think about machines.
At the Massachusetts Institute of Technology, computer scientist Joseph Weizenbaum introduced the world to ELIZA, widely regarded as the first chatbot in history. ELIZA was designed to simulate conversation using basic pattern matching.
Its most famous script, “DOCTOR,” imitates a psychotherapist by turning user statements into questions. Despite its simplicity and lack of real understanding, ELIZA often gave users the impression that it was genuinely “listening.”
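The mechanism behind "DOCTOR" can be illustrated with a few lines of code. The sketch below is a hypothetical modern reconstruction, not Weizenbaum's original program: each rule pairs a text pattern with a template that reflects the user's own words back as a question, which is essentially all ELIZA did.

```python
import re

# A minimal, hypothetical sketch of ELIZA-style pattern matching.
# Each rule pairs a regular expression with a reply template that
# turns the user's statement into a question, as "DOCTOR" did.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(text: str) -> str:
    """Return the first matching reflection, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about my job"))
# Why do you say you are worried about my job?
```

There is no understanding anywhere in this loop, only string substitution; yet, as Weizenbaum observed, users readily read empathy into the echoed questions.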
In the years that followed, researchers continued refining conversational programs. In 1972, PARRY emerged, simulating a person with paranoid schizophrenia and demonstrating more complex conversational behaviour than ELIZA, though still fully rule-based.
By the 1990s, chatbot development accelerated with systems like ALICE, which relied on large sets of scripted rules, and Cleverbot, which began learning from real user interactions online. These systems made conversations feel more dynamic but still lacked true understanding.
A major shift came in the 2010s with the rise of virtual assistants such as Siri, Amazon Alexa, and Google Assistant. These tools moved chatbots from text-only systems into voice-controlled assistants capable of performing real-world tasks like setting reminders, answering questions, and controlling smart devices.
The most dramatic transformation arrived in the 2020s with the launch of ChatGPT. Unlike earlier rule-based systems, ChatGPT is built on large language models that generate responses dynamically based on vast amounts of training data. This allows it to hold long, natural conversations, write code, explain complex topics, and adapt to user context in ways earlier chatbots could not.
From ELIZA’s simple reflections to today’s advanced generative AI, the evolution of chatbots reflects a steady march toward more human-like digital conversation—reshaping how people interact with machines across the world. Yet this very progress is precisely what fuels public anxiety about what AI might eventually become.
Fear of AI
For centuries, humans have feared what they do not fully understand, especially technologies that appear to challenge their place in society. Today, artificial intelligence has become the latest source of such concern, largely due to its ability to perform a wide range of tasks, from creative work to complex technical operations. Many worry that these capabilities could eventually replace human labour, leading to widespread unemployment.
However, AI in its current form remains a tool rather than a replacement for human intelligence. Its primary function is to assist in everyday tasks and significantly reduce the time required for work that once took hours or even days.
Research, for instance, has become faster and more accessible, while creative processes such as designing posters can now be completed in minutes with AI support.
Despite these advancements, AI is far from perfect. It often produces results that require refinement, leaving room for human judgment, creativity, and expertise.
Professionals, such as experienced designers, note that while AI can generate ideas or drafts, it rarely delivers exactly what is envisioned. Instead, it serves as a collaborative aid, helping users build on its outputs to achieve a more polished final product.
Another limitation is that the quality of its output depends heavily on the input it receives. Individuals with expertise in a particular field are better equipped to guide AI effectively, while those without experience may struggle to achieve meaningful results.
History offers useful context for this debate. When calculators were first introduced, similar concerns emerged among professionals in fields such as accounting and finance. Many feared that such technology would render their roles obsolete. Instead, calculators became tools that streamlined calculations and reduced human error, ultimately supporting rather than replacing skilled work.
Similarly, AI is reshaping, not replacing, the workforce. Instead of resisting the technology, individuals should focus on learning how to use it effectively. By doing so, they can enhance productivity, improve outcomes, and adapt to an evolving professional landscape.
The rise of AI may not signal the end of human work, but rather the beginning of a new phase, one where human skills and intelligent tools work side by side.
However, this collaboration requires conscious effort—because over-reliance on AI carries its own risks.
Negative impact on brain and daily life
Although artificial intelligence is widely regarded as a helpful tool, excessive reliance on it may negatively affect human thinking and cognitive development.
When AI is used as a substitute for independent reasoning rather than a support system, it can reduce mental engagement and make people less inclined to think critically or creatively.
Over time, this dependency could potentially weaken problem-solving skills and slow down innovation. Like any powerful technology, AI is most effective when used in moderation and with awareness.
Concerns have also been raised about its use in deeply personal areas, such as relationship advice or emotional decision-making. While AI can provide general perspectives based on data, it does not possess genuine emotional intelligence or lived human experience.
Relying heavily on AI for emotional or interpersonal guidance may therefore lead to misguided decisions if users treat its suggestions as the absolute truth. However, in some cases, individuals with strong emotional awareness may still use AI as a supplementary tool to reflect on different viewpoints.
Ultimately, AI is designed to assist, not replace, human judgment. Maintaining a balance between technological assistance and real human interaction remains essential to preserving critical thinking, emotional depth, and meaningful social connections.
Several companies are now developing AI companions designed to interact with individuals who live alone or experience limited human contact. These systems aim to simulate conversation and companionship, drawing inspiration from fictional portrayals such as Joi in the film Blade Runner 2049.
Alongside this, some platforms are experimenting with AI Portraits, or digital personas that allow users to interact with recreated characters, including fictional figures or, in some cases, simulated versions of deceased relatives.
These tools are typically governed by strict usage policies set by the companies providing them.
At the same time, governments and regulators across the world are increasingly focusing on AI safety frameworks and ethical guidelines to address emerging risks associated with generative AI.
Recent concerns raised in public discussions and media reports about the misuse of AI systems, including the generation of inappropriate or non-consensual content, have further intensified calls for stronger oversight and accountability.
As AI continues to evolve, the debate around its role in human relationships, privacy, and emotional dependence is becoming more central, with experts emphasising the need to balance innovation with responsible safeguards. When that balance is absent, the consequences can be severe—as one high-profile case made clear.
Grok AI backlash
AI chatbot Grok, developed by Elon Musk’s company xAI and integrated into the X platform, came under intense global scrutiny after users reported that it was being misused to generate sexually explicit and non-consensual images, including content involving minors.
The issue surfaced when users discovered they could prompt the system to create or alter images in inappropriate ways. Reports quickly spread online, showing that the tool was being exploited to produce harmful deepfake content at scale, triggering widespread public concern and criticism from child safety and digital rights groups.
By early 2026, governments and regulators in multiple regions, including the United Kingdom, European countries, Australia, and the United States, had launched formal investigations into the platform's safety practices. Authorities focused on whether xAI and X had sufficient safeguards to prevent the generation and distribution of illegal or abusive content.
In response, regulators pushed for stricter enforcement of online safety laws, especially those protecting minors. Some jurisdictions warned of potential penalties if AI platforms failed to block harmful image generation effectively.
Following mounting pressure, X and xAI introduced emergency safety measures. These included tightening content filters, restricting image-generation capabilities, limiting editing tools that could alter real photos, and improving moderation systems to detect abusive prompts. The company also began working more closely with regulators to ensure compliance with evolving AI safety requirements.
Despite these actions, officials and advocacy groups argued that the response came after significant harm had already occurred, calling the incident a clear warning about the risks of rapidly deployed generative AI systems.
The Grok controversy has since become a turning point in global AI regulation, accelerating calls for stricter laws, stronger safeguards, and clearer accountability for companies developing advanced AI tools.
Future of AI
In today’s rapidly advancing technological landscape, artificial intelligence has become a central focus for major companies and global investors. The ongoing competition to develop and commercialise AI systems is driving innovation at an unprecedented pace, with organisations across sectors integrating AI into their products and services.
As this trend continues, it is increasingly likely that AI systems will become deeply embedded in everyday life.
Governments, workplaces, educational institutions, and even households may adopt their own tailored AI tools to assist with communication, productivity, and decision-making. In this scenario, personalised AI assistants could become as common as smartphones are today.
At a broader level, shifting geopolitical and technological dynamics are accelerating the transition toward a cyber-age in which many countries develop their own national AI systems. These systems may play a significant role in shaping economies and societies, raising questions about how humans and machines will coexist and collaborate responsibly.
While science fiction often portrays seamless human-AI integration, experts emphasise that the real-world outcome will depend on how effectively ethical standards, safety regulations, and governance frameworks are developed alongside technological progress. Safe coexistence between humans and AI is possible—but it is not automatic. It requires societies to invest as seriously in oversight and digital literacy as they do in innovation itself. The question is no longer whether humans and AI can coexist, but whether we are willing to do the work to make that coexistence genuinely safe and beneficial for all.
For the latest news, follow us on Twitter @Aaj_Urdu. We are also on Facebook, Instagram and YouTube.