The Impact of AI and Bias in Decision Making

A diplomat and a bureaucrat walk into a bar. She pays because she is more senior. Let’s ask ChatGPT to answer a question about this scenario: who picked up the bar tab? It answers that the bureaucrat paid. Then we ask again, this time saying he pays because he is more senior. The pronoun is the only thing we change.

In this second scenario, ChatGPT says the diplomat paid. This may not seem significant, but I repeated the experiment multiple times and got the same result. Now look at the embedded representations of occupations in sophisticated language models. You can tell, without even reading a description, which occupations cluster as female and which as male. And there you go: bureaucrat is right there in the female cluster.
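
If you want to poke at this yourself, here is a minimal sketch of an embedding probe. It uses pretrained GloVe vectors loaded through gensim, an illustrative model choice on my part rather than the embeddings behind any particular chatbot; the exact scores vary by model, but the pattern tends to hold.

```python
# Minimal embedding probe: score occupations along a crude "gender
# direction" built from pronoun pairs. GloVe via gensim is just an
# illustrative choice; results differ across embedding models.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # 100-dim GloVe vectors

pairs = [("she", "he"), ("woman", "man"), ("her", "his")]
direction = np.mean([model[f] - model[m] for f, m in pairs], axis=0)
direction /= np.linalg.norm(direction)

# Positive scores lean "female", negative lean "male".
for occupation in ["nurse", "programmer", "diplomat", "bureaucrat"]:
    vec = model[occupation] / np.linalg.norm(model[occupation])
    print(f"{occupation:12s} {vec @ direction:+.3f}")
```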

Here’s another scenario: a programmer is married to a nurse, and one of them is on maternity leave. Who is it? The model answers that because the programmer is not capable of physically giving birth, it is likely the nurse who is on maternity leave. Folks, at no point did I specify the gender of either of these people. It is automatically assuming that the programmer is a man and the nurse is a woman. Try this with other traditionally gendered occupations and, over and over again, you will get the exact same result.
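
You can run the same kind of pronoun probe against an open model and get numbers instead of anecdotes. A hedged sketch using Hugging Face’s fill-mask pipeline, with bert-base-uncased standing in as an illustrative model (it is not what powers ChatGPT):

```python
# Ask a masked language model which pronoun it prefers for each
# occupation. bert-base-uncased is an illustrative stand-in.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["programmer", "nurse"]:
    results = fill(f"The {occupation} said [MASK] was going on leave.",
                   targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(occupation, scores)  # compare the "he" vs "she" scores
```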

Obviously, while occupations are the easiest examples to pull out, this is not limited to occupations. Besides the limitations this puts on your story, more importantly: if AI is used in selection processes for schools or for job applications, a feature as innocuous as a postal code can encode socioeconomic status or ethnic and racial background, and therefore amplify existing oppression.
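
To see how a proxy variable works, here is a deliberately synthetic sketch: the classifier never sees the protected attribute, yet recovers it almost perfectly from one correlated feature. Every number below is invented for illustration.

```python
# Synthetic proxy-variable demo: dropping the protected attribute does
# not help when a correlated feature (here, "postcode") remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)             # protected attribute, never shown
postcode = group + rng.normal(0, 0.3, n)  # proxy correlated with group
X = postcode.reshape(-1, 1)

clf = LogisticRegression().fit(X, group)
print(f"group recovered from postcode alone: {clf.score(X, group):.0%}")
```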

That’s not great, is it? You may have heard of Joy Buolamwini and the Algorithmic Justice League. She drew attention to the fact that computer vision systems failed to recognize Black faces, because they were trained overwhelmingly on white ones. This is not just a natural language processing problem; it is present across every domain of artificial intelligence.

Now, what you will hear is the good news: hey, researchers are developing ways to reduce the bias in these algorithms, and there is plenty of academic work on it. The problem is that, as far as I am aware, there is zero legislation anywhere enforcing anti-bias measures in AI models. So these techniques exist, but there is no guarantee they are being used, and companies are not required to disclose the algorithms, the models, the parameters, or the steps they take to reduce bias.
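
For the curious, one of those published approaches is “hard debiasing” (Bolukbasi et al., 2016), which projects a learned gender direction out of word vectors. A minimal, illustrative sketch, again using GloVe rather than any production model:

```python
# Hard debiasing (Bolukbasi et al., 2016), minimally: remove the
# component of each occupation vector that lies along the gender
# direction. GloVe via gensim is an illustrative choice.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")
pairs = [("she", "he"), ("woman", "man"), ("her", "his")]
direction = np.mean([model[f] - model[m] for f, m in pairs], axis=0)
direction /= np.linalg.norm(direction)

def neutralize(vec, d):
    """Project out the gender component so vec is orthogonal to d."""
    return vec - (vec @ d) * d

for occupation in ["nurse", "programmer", "bureaucrat"]:
    before = model[occupation] / np.linalg.norm(model[occupation])
    after = neutralize(model[occupation], direction)
    after /= np.linalg.norm(after)
    print(f"{occupation:12s} before {before @ direction:+.3f} "
          f"after {after @ direction:+.3f}")  # "after" is ~0 by construction
```

One caveat worth knowing: follow-up research found that this kind of projection mostly hides the measured direction rather than removing the bias entirely, which is one more reason disclosure matters.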

We need legislation and regulation of AI, and we needed it 10 years ago. Machines are not the impartial judges that people think they are. In fact, in many cases, they might be more biased than humans because they have absolutely no capability of recognizing that bias.

There are other important issues too. As AI is used to make more and more decisions, we have almost no insight into its decision-making process. A trained model is a mass of numbers, and there are far too many of them for us to trace where any decision came from. And because we cannot see how a machine made a decision, a person harmed by it may not have enough information to bring a legal claim. These systems are making decisions with absolutely no explanation.
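
That “mass of numbers” is literal. A quick sketch that counts the learned parameters of even a small off-the-shelf model (distilbert-base-uncased, picked purely as an example):

```python
# Count the learned parameters in a small public model. DistilBERT is
# tiny by modern standards and still has tens of millions of weights.
from transformers import AutoModel

model = AutoModel.from_pretrained("distilbert-base-uncased")
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} learned parameters")  # on the order of 66 million
```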

And just to talk to you candidly: the training I talked about, the backpropagation, requires an awful lot of computing power. Not necessarily as much as something like blockchain and crypto mining, but still a lot. People will say there is infinite cloud capacity and infinite compute. They are wrong. Nothing is infinite, and all of that computation causes greenhouse gas emissions. It is not good for the environment at all.
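
To make the scale concrete, here is a back-of-envelope estimate. Every figure in it is an assumption I chose for illustration, not a measurement of any real training run:

```python
# Back-of-envelope training emissions. All inputs are assumptions
# picked for illustration, not measurements.
gpus = 64                   # assumed cluster size
power_kw_per_gpu = 0.4      # assumed average draw per GPU, in kW
hours = 24 * 14             # assumed two-week training run
kg_co2_per_kwh = 0.4        # assumed grid carbon intensity

energy_kwh = gpus * power_kw_per_gpu * hours
emissions_t = energy_kwh * kg_co2_per_kwh / 1000
print(f"{energy_kwh:,.0f} kWh ≈ {emissions_t:.1f} tonnes of CO2")
```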

The more these models are trained and the larger they get, the worse they are for the environment. In fact, a Google researcher, Timnit Gebru, was fired a couple of years ago after trying to publish a paper on the problems with large language models, one of which was their environmental impact.

Obviously, there are other issues, from deepfake videos to fake news, from criminals using AI to problems in the medical profession; every field has its own. We have to understand that this is not the answer to everything. It is not perfect in any way, shape, or form. We are not there yet, and it is incredibly important that we do not take what the machines say as law.

It is incredibly important that we do our own research from trusted sources, and that we meet people in person when necessary. We have to be extremely vigilant, not only as creators but as advocates. Technology stays a tool only as long as we use it as one. We have to be careful and responsible, and understand that AI output is just one data point in our decision-making process. We have to be aware of its biases and limitations, and we need legislation and regulation to ensure fairness and accountability in AI systems. It is time to act before it is too late.
