AI and ignoring Asimov

As we enter this current age of ‘AI’ mania, debate rages about legislation and the ethical responsibilities of the developers of these tools. Followers will be familiar with my scepticism about the capabilities of some of these programs. I fully respect the smart coding that drives these applications, but we remain a long way from concepts of ‘singularity’ or ‘sentience’. Ultimately, these programs remain driven by coding logic – ‘if’, ‘else’ and ‘or’ conditions – strung together to create ‘AI-like’ results.
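To illustrate the point, here is a toy sketch of my own (the function and its canned replies are entirely invented for illustration): a few branching conditions can produce a surprisingly ‘AI-like’ exchange, even though nothing resembling understanding is involved.

```python
# A toy rule-based responder: plain conditional logic, no learning, no model.
# Purely illustrative - modern large models are statistical rather than
# hand-written rules, but the output can feel similarly 'AI-like'.
def reply(message: str) -> str:
    text = message.lower()
    if "hello" in text or "hi " in text:
        return "Hello! How can I help you today?"
    elif "weather" in text:
        return "I can't check the forecast, but it looks lovely out."
    elif "?" in text:
        return "That's a great question. Let me think about it..."
    else:
        return "Tell me more."

print(reply("Hello there"))   # a canned greeting
print(reply("What is AI?"))   # the generic question branch
```

Every reply above is pre-written by a human; the ‘intelligence’ is an illusion created by pattern matching on the input.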

However, like the early internet age, this has become the new wild west of technology development. Few rules, few ethical guidelines and little oversight have meant that, even in their infancy, these applications have been corrupted to produce horrid content.

AI is far from a new concept: books, movies and TV shows have explored it for decades, suggesting both positive and negative impacts for humanity. A seminal idea arrived 83 years ago (1942), when author Isaac Asimov proposed his “Three Laws of Robotics” in the short story “Runaround”:

  1. A Robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A Robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A Robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A product of his age, Asimov pictured ‘robots’ as physical representations of humans, and so his laws are naturally read as guarding against physical harm to a human. In our age, our ‘robots’ are not physical but digital, yet we should still be able to apply a similar set of rules to responsible developers of these technologies. The harms Asimov imagined should apply equally to digital harm – the financial and social losses experienced by victims of those seeking to exploit users.

If I could be so bold, I would suggest a new ‘law’: every AI program must have a physical core. By this, I mean that every application must have a physical, tangible base, whether that be a data centre or a server. This would ensure that every application has a physical ‘off’ switch. A program built in defiance of this law could self-replicate and perpetuate itself beyond its creators’ control. Any AI entity that ignored this law would be nothing less than a virus.

There will always be ‘bad actors’ in a technology space like this who seek to exploit these applications for financial gain, personal advantage or influence. We do, however, as a society, have a chance to set social expectations and norms for these developers and to punish those who ignore them. No legislation is global but, to date, even developers in advanced economies have been given free rein to release these ‘robots’ without consequence.

Time will tell what the legal implications will be of some of these programs producing copyright-infringing content, defamatory statements, misleading financial advice etc…

Embrace productivity improving technologies, embrace new science, embrace the future… but always have an ‘off’ switch.

PPS: yep, that image was AI generated…