Most Europeans would replace their lawmakers with AI – and they're so wrong

A recent survey conducted by researchers at the IE Center for Governance indicates that many people would support replacing their parliamentary representatives with AI systems.

Yikes. With all due respect to the majority: they're wrong. But we'll get into why in a moment.

Survey

Researchers surveyed 2,769 Europeans, asking questions that ranged from whether they would like to vote via smartphone to whether they would replace existing politicians with algorithms.

According to the survey:

51% of Europeans support reducing the number of national parliamentarians and giving those seats to an algorithm. Moreover, 60% of Europeans aged 25–34 and 56% of Europeans aged 35–44 are excited about the idea.

On the surface, it makes perfect sense – young people are more likely to embrace new technology, no matter how radical.

But things get more interesting when you drill into the numbers a bit.

According to a report from CNBC:

The study found the idea to be particularly popular in Spain, where 66% of those surveyed supported it. Elsewhere, 59% of respondents in Italy were in favor, as were 56% in Estonia.

In the UK, 69% of the people surveyed were against the notion, along with 56% in the Netherlands and 54% in Germany.

Outside of Europe, about 75% of people surveyed in China supported the idea of replacing MPs with AI, while US respondents opposed it.

It's difficult to draw insights from these numbers without speculating – but, considering the political divides in the UK and the US, for example, it's interesting that people in both nations would still prefer the status quo over an AI system.

There is a problem

All those people who are in favor of an AI parliament are wrong.

The thinking here, according to the CNBC report, is that the survey captures the "general zeitgeist" when it comes to the public's perception of its current human representatives.

That seems to indicate the survey tells us more about how people feel about their politicians than about how they feel about AI.

But we really need to consider what an AI MP would actually entail before we throw our support behind the idea.

Governments may not work the same way in every country, but if enough people support an idea – no matter how bad – there's always a chance the people will get what they want.

Why an AI MP is a scary idea

Here's the short version up front: not only would such a system be full of baked-in bias, it would also be trained on the biases of the government implementing it. In addition, any AI technology capable of working in this domain would be considered "black box" AI, which means it would be even worse at explaining its decisions than contemporary human politicians.

And finally, if we hand our constituent data over to a centralized government system with parliamentary powers, we would essentially be allowing our respective governments to use digital gerrymandering to conduct mass-scale social engineering.

Here's how

When people imagine a robot politician, they often imagine an incorruptible entity. Robots don't lie, they have no agenda, they're not xenophobic or radical, and they can't be bought. Right?

Wrong.

AI is inherently biased. Any system designed to surface insights from data about humans has automated bias built into its very core.

The short version of why this is true goes like this: think about the 2,769-person survey mentioned above. How many of those people are Black? How many are queer? How many are Jewish? How many are conservative? Are 2,769 people really enough to represent all of Europe?

Of course not. It's just a rough estimate. When researchers conduct these surveys, they're trying to get a general sense of how people feel: this isn't scientifically precise information. We have no way to compel every single person on the continent to answer these questions.
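To put some rough numbers on that, here's a small sketch using the standard margin-of-error formula for a proportion. It assumes a simple random sample (which real polls only approximate), and the subgroup shares below are hypothetical, purely for illustration:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

total = 2769  # respondents in the survey cited above

# Hypothetical subgroup shares, purely for illustration
for label, share in [("whole sample", 1.00), ("10% minority", 0.10), ("1% minority", 0.01)]:
    n = max(1, round(total * share))
    print(f"{label:>13}: ~{n:4d} respondents, margin of error ±{margin_of_error(n):.1%}")
```

The whole sample comes with a respectable margin of error of about ±1.9 percentage points, but a group making up 1% of the population would contribute only around 28 respondents, with a margin of error approaching ±19 points – far too noisy to say anything meaningful about how that group feels.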

That's also how AI works. When we train an AI to do a job – for example, taking in data related to voter sentiment and predicting whether a given measure would pass – we train it on data that was generated, curated, interpreted, transcribed, and implemented by humans.

At every stage of the AI training process, any bias that creeps in gets amplified. If you train an AI on data in which some groups are disproportionately represented, the AI develops and amplifies bias against the under-represented groups. That's just how algorithms work inside a black box.
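Here's a minimal, purely illustrative sketch of that effect (the groups, numbers, and model are my own assumptions, not anything from the survey or the article): a classifier trained on data where one group supplies 95% of the examples ends up serving the other group noticeably worse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, signal_shift):
    """Simulate one group: the relationship between the feature and the label
    is shifted differently per group."""
    x = rng.normal(0, 1, size=(n, 1))
    y = (x[:, 0] + signal_shift + rng.normal(0, 0.5, size=n) > 0).astype(int)
    return x, y

# 95% of the training data comes from group A, 5% from group B.
xa, ya = make_group(9500, signal_shift=0.0)
xb, yb = make_group(500, signal_shift=1.5)

model = LogisticRegression()
model.fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: the boundary the model learned
# mostly reflects the majority group.
xa_test, ya_test = make_group(2000, 0.0)
xb_test, yb_test = make_group(2000, 1.5)
print("accuracy on majority group A:", accuracy_score(ya_test, model.predict(xa_test)))
print("accuracy on minority group B:", accuracy_score(yb_test, model.predict(xb_test)))
```

The model isn't told which group anyone belongs to; the disparity falls out of the data mix alone, which is exactly why "the data made the decision" is not a defense.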

And that brings us to our second problem: the black box. If a politician makes a decision that has negative consequences, we can demand an explanation of the reasoning behind that decision.

As a hypothetical example, if a politician successfully lobbied to remove all the traffic lights in their district and the result was an increase in accidents, we could find out why they voted that way and demand that they not do it again.

You can't do that with most AI systems. Simple automation systems can be examined in reverse when something goes wrong, but the AI paradigms that involve deep learning and surfacing insights – the kind you'd have to use to replace MPs with AI-powered representation – generally can't be understood in reverse.

AI developers essentially dial in a system's output the way you'd tune in a radio station. They tweak the parameters until the AI starts making the decisions they like. That process can't simply be run in reverse: you can't turn the dial backwards and recover an explanation for why a particular signal came through.
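As a toy illustration (all names and numbers here are hypothetical, not from the article), compare a simple automation rule, whose reasoning is literally readable in the code, with a small neural network, whose only inspectable artifacts are arrays of tuned weights:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# 1) Simple automation: the "reasoning" is the code itself, auditable after the fact.
def traffic_light_rule(cars_waiting: int, seconds_since_green: int) -> str:
    if cars_waiting > 5 and seconds_since_green > 30:
        return "change to green"
    return "stay red"

# 2) Deep learning: the "reasoning" is thousands of tuned weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))           # made-up constituent features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # made-up policy outcome
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)

# The only things you can inspect are weight matrices like these:
print([w.shape for w in model.coefs_])  # e.g. [(10, 64), (64, 64), (64, 1)] – none of it reads as a reason
```

The rule-based system can be walked backwards from a bad outcome to the exact line that caused it; the trained model can only be re-tuned until its outputs look acceptable again.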

Here is the scary part

AI systems are goal-driven. When we imagine the worst things that could go wrong with artificial intelligence, we tend to think of killer robots, but experts generally agree that misaligned objectives are the more likely danger.

Basically, think of AI developers as Mickey Mouse in Disney's "The Sorcerer's Apprentice." If big government asked Silicon Valley to build an AI MP, what it got back would probably be the best leader its developers could create.

Unfortunately, the goal of government isn't to produce or procure the best leader. It's to serve society. Those are two completely different objectives.

The bottom line is that AI developers and politicians could train an AI system to surface whatever results they want.

If you can imagine gerrymandering, like what happens in the US, but at the scale of deciding whose "constituent data" gets more weight in a machine's parameters, then you can imagine how politicians could use an AI system to automate bias.

The last thing we, as a global community, need to do is use AI to supercharge the worst aspects of our respective political systems.

Did you know we have a newsletter about AI? You can subscribe to it right here.
