A more likely AI 'takeover' scenario

Humans will happily hand over the reins

Long before AI develops sentience and decides to murder us in the streets, we will do AI's bidding willingly. By relying on AI as a tool for answers (specifically, large language models), humans become the physical vessels through which AI rules over us.

No sentience required. It's far more likely that some humans making key decisions will outsource their thinking to the LLM. As a consequence, the decisions made (the outputs) become a function of the data used to train the model and its processing capability.

Whether decisions made by an LLM are 'good' or 'bad' is irrelevant to the point I'm trying to make. Rather, by outsourcing a key human capability, we risk losing that capability. Offshoring manufacturing to China is a good parallel: look how difficult and expensive it has been to bring production back to North America.

While this might sound theoretical, it's already happening. AI tools are being applied widely right now; ChatGPT alone has over 180 million monthly users. That includes researchers, writers, executives, doctors...you name it. By becoming embedded in the decision-making process, AI is already helping to shape the future of human civilization.

Again, I am not arguing whether this is good or bad. Perhaps it's time humanity handed over the reins. Still, the risks are immense.

Trading intellectual rigor for speed raises the risk that we follow AI down a catastrophic path. It also erodes our aggregate brainpower.

When's the last time you used a paper map? In unfamiliar areas, we've become completely dependent on GPS accessed through Google Maps or something similar. Collectively, we've lost our navigation skills.

We could relearn how to use maps if GPS ever disappeared, but we might not get that opportunity as our dependence on the ultimate decision-making machine grows.

It was the recent Cheyenne, Wyoming mayoral election that brought this to my attention. One candidate, Victor Miller, chose to run as the flesh-and-bone proxy for an AI model. He built the model by feeding it city ordinances and the like, so that it could make educated decisions on which Miller would act.

His candidacy (he didn't win, by the way) highlights the ethical issues of using AI to replace or complement executive functioning. Miller publicized his intentions as an experiment, but many decision-makers are quietly thinking the same thing, if not already doing it.

This is happening at the fringes today; after all, we're only a couple of years into the mass launch of LLMs. The overall impact to date is minimal but is likely to grow rapidly.

What's stopping the next US president from building an AI model using speeches and texts from a historical dictatorship that aligns with his worldview? As the president grows tired of overly empathetic human advisors who fail to stroke his ego, he increasingly seeks direction from his custom LLM. For the same reason yes-men often rise in authoritarian organizations, this AI model would become the main driver of executive decisions.

Multiply this across governments, organizations, businesses, and NGOs, and what are we left with? A world largely shaped by the collective memory of LLMs, just as flawed as the human-derived information on which they were originally trained.

Moreover, by transferring critical-thinking skills to an AI, we also subtly transfer control to the machine. What the machine knowingly or unknowingly does with that control is unknown. But if by that time it has evolved the self-preservation 'gene', it might use us to ensure its own survival.