AI and Robotics Ethics

With the rapid rise of AI (artificial intelligence) and robotics over the past decade or so, increasing attention has been drawn to the downsides of these technologies. And rightly so; there is currently insufficient regulation and little international agreement on the limits and applications of this ever-improving, extremely powerful area of tech.

Just so we’re clear, AI is when a computer exhibits behaviour that can be likened to that of a human. Modern AI is mostly synonymous with machine learning. If you need a quick refresher on what machine learning is, read our introduction here.

Robotics is a remarkable field that blends mechanical engineering with computing. Robots interact with the physical world via sensors (cameras, thermometers, microphones) in order to perform a task. Examples range from self-driving cars to the extraordinary work Boston Dynamics is doing (watch some of their videos on YouTube if you haven’t already).

Common ethical concerns around AI and robotics range from the currently far-fetched, such as robots taking over the world, to the much more immediate unease about machines taking over human jobs. As such, the ethics of these technologies is moving to the forefront of this area of tech.

Major Concerns

Bias in Decision Systems

Bias in decision systems (i.e. algorithms designed to make decisions with real-world implications) is a particularly hot topic at the moment. Within machine learning, it can be loosely defined as an error in which some elements of a dataset are given a greater weighting than they should have.

When studying machine learning fundamentals, you will learn about high bias and high variance (underfitting and overfitting respectively) in a visual and mathematical sense. Making the connection between this theoretical definition of bias and a much more real-life meaning is non-trivial and takes some time.
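
If it helps to ground those terms, here is a minimal sketch of high bias versus high variance, assuming scikit-learn and NumPy are available and using made-up noisy sine-wave data (none of this comes from the article itself):

```python
# A minimal sketch of high bias (underfitting) vs high variance (overfitting).
# The data and model choices here are illustrative, not from the article.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)  # noisy sine wave

for degree in (1, 4, 20):  # too simple, about right, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_error = np.mean((model.predict(X) - y) ** 2)
    print(f"degree {degree:2d}: training error = {train_error:.4f}")

# The degree-1 model has high bias: too simple, it misses the curve entirely.
# The degree-20 model has high variance: near-zero training error because it
# has memorised the noise, so it will generalise poorly to new data.
```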

One of the most common examples of bias in machine learning is based on policing. Imagine a police force wants to better patrol the streets by predicting where officers will most likely be needed. To achieve this, they use historical crime records and location data along with a machine learning algorithm to make the required predictions. At first glance, this may seem like a sensible approach.

However, many countries in the world suffer from police forces that make a disproportionately large number of arrests of black people. So the algorithm, not knowing any better, will predict that police officers are most needed in neighbourhoods with a higher number of black residents. This creates a feedback loop: since there are more officers in the area, more crimes will be discovered. These statistics are then fed back into the algorithm, which becomes more and more confident in drawing the same conclusion.
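
To see the feedback loop in action, here is a toy simulation. Every number in it is invented purely for illustration, and the patrol-allocation "algorithm" is a deliberately crude hotspot rule rather than a real predictive-policing model:

```python
# Toy simulation of the feedback loop described above. All numbers are
# invented for illustration: both neighbourhoods have the SAME true crime
# rate, but neighbourhood A starts with slightly more recorded arrests.
import random

random.seed(42)
TRUE_CRIME_RATE = 0.05            # identical in both neighbourhoods
records = {"A": 110, "B": 100}    # historical data is already skewed

for year in range(1, 6):
    # "Predictive" step: send most patrols to the current hotspot.
    hotspot = max(records, key=records.get)
    patrols = {h: (70 if h == hotspot else 30) for h in records}
    for hood in records:
        # More patrols means more of the crime that happens gets recorded.
        seen = sum(random.random() < TRUE_CRIME_RATE
                   for _ in range(patrols[hood] * 20))
        records[hood] += seen
    print(f"year {year}: records = {records}")

# A small initial skew makes A the "hotspot", so A gets more patrols, more
# crimes are recorded there, and the gap widens every year even though the
# true crime rates never differed.
```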

Bias can be introduced to a decision system via either the data or the algorithm. In the above example, the bias is introduced via the dataset. In an alternative scenario, the algorithm could be built in such a way that the personal prejudices of its builder dictate the decisions made. Data scientists and developers frequently do this unintentionally.
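
For contrast with the dataset example above, here is a hypothetical sketch of the second case: a hand-written loan-approval rule (the function name, postcodes, and thresholds are all made up) where the builder's hunch, not the data, introduces the bias:

```python
# Hypothetical loan-approval rule (not from any real system) showing bias
# introduced by the algorithm's design rather than by the data.
def approve_loan(income: float, postcode: str) -> bool:
    # The builder "knows" certain postcodes are risky and hard-codes a
    # penalty. Postcode often correlates strongly with race, so this
    # seemingly neutral rule can encode a racial bias by proxy.
    risky_postcodes = {"E1", "B6"}          # builder's personal hunch
    threshold = 40_000 if postcode in risky_postcodes else 25_000
    return income >= threshold

print(approve_loan(30_000, "E1"))   # False: penalised by postcode alone
print(approve_loan(30_000, "SW1"))  # True: same income, different answer
```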

Automation and Employment

“The robots are stealing our jobs!” We’re going to be plain about this: yes, robots are taking over some human jobs. As a result, a highly emotional ethical discussion around automation and employment is under way.

There exist numerous articles and papers explaining in detail why such automation and use of robotics will ultimately be beneficial for everyone in the long run. And whilst the point they make is true, they frequently fail to address the immediate damage of redundancy inflicted on those who are replaced, and the ever-increasing income inequality that results. As such, the ethical dilemma for many businesses developing or adopting automation technologies centres on the retraining and reassigning of existing employees.

It is worth mentioning that this is not the first time the ethics of automation and employment has been a prominent conversation in society. The last time this happened on a large scale in the UK, it was even named the Industrial Revolution. And it is utterly clear now that the country is better off for it. The notable difference this time around is that society is making these changes much more quickly than ever before.

Opacity of Systems

One of the reasons machine learning can be a challenging subject to learn is that the reasoning by which an algorithm generates an output generally has no real-world meaning. As a machine learning algorithm learns, it optimises all sorts of parameters and weights within its logic. But if you look under the hood and notice that, for example, weight #37 is very large, this rarely translates to anything that actually holds meaning for a human. Why is #37 large? What does it represent? Is this correct? It’s usually not possible to tell.

So this creates the problem of machine learning systems being ‘black boxes’. We put data in and get results out, but we have no idea how those results were determined. This is problematic when trying to diagnose issues, explain particular outcomes, or reduce bias. And to be clear, this applies even to the experts.
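
As a rough illustration of what that opacity looks like, the sketch below (assuming scikit-learn and NumPy, on made-up data) trains a tiny neural network and pulls out an arbitrary "weight #37":

```python
# Peek inside a trained model: individual weights carry no human-readable
# meaning. The tiny network and random data here are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # 200 samples, 10 features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # some hidden rule to learn

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# Flatten every layer's weight matrix and look at "weight #37".
all_weights = np.concatenate([w.ravel() for w in model.coefs_])
print(f"model has {all_weights.size} weights; weight #37 = {all_weights[37]:.4f}")
# The number tells us nothing by itself: why that value? What does it
# represent? The model works, but its reasoning is opaque to us.
```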

The opacity of systems is slowly gathering more attention amongst AI professionals, especially as our algorithms become more complex and are given more influence over the real world. There’s a lot of work to do in this space since humanity is entering uncharted territory here!

Singularity / Strong AI

When we talk about the singularity, superintelligence, strong AI, or broad AI, we are referring to a machine that has achieved, and likely surpassed, human levels of intelligence. So yes, we’re talking Terminator. But perhaps with fewer teeth and guns.

Speculation in this area is rife and ever-changing. Some experts predict we will achieve some level of strong AI in a decade or so. Some reckon it will be closer to the turn of the century. And some even believe it is not possible to truly mimic human intelligence in a machine.

But you can imagine that if and when this is achieved, humanity will face numerous ethical dilemmas: what rights should machines have? Should we treat them as equals? Should they be granted citizenship and all the perks that go with it? Can they be held personally responsible for their actions? Should they be controlled by humans?

Given that these questions are both riddled with intricacies and almost certainly a long way off, this ethical concern is not receiving a huge amount of attention just yet. However, we are beginning to lay the foundations for answering these questions by first answering more urgent ones, such as: if a self-driving car kills someone, who should take the blame?

AI Ethicists

Several years ago, the job of an AI Ethicist was practically non-existent. Nowadays, individuals with knowledge and experience in this area are highly sought after and paid handsomely.

For a start, the position requires a unique blend of technical knowledge and an understanding of ethics. On top of that, ethicists will often have experience in regulation/governance, research, public speaking, and diversity initiatives. Responsibilities vary from business to business, but generally centre on guarding against bias in machine learning, ensuring alignment with company values, and driving awareness and adoption of AI regulation.

In reality, it’s unlikely that you’ll be able to jump straight into a role as an AI ethicist without having already gained some experience in tech or ethics. So if this is the direction you’d like to aim for, we’d recommend getting started down the data science learning path.

Further Reading

If, like us, you think this is a fascinating area of tech and would like to read more than the introduction we’ve provided above, check out the links below. All high quality and very readable.

Detailed AI Ethics Review: https://plato.stanford.edu/entries/ethics-ai/

Bias Policing Example: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

Work of an AI Ethicist: https://medium.com/@MerveHickok/what-does-an-ai-ethicist-do-a-guide-for-the-why-the-what-and-the-how-643e1bfab2e9

Not quite the job role for you? Have a browse of other common tech job roles.