Artificial Intelligence

A.I. is good. Too much A.I. is bad.

Relax. We are not talking about Skynet or the Deus Ex Machina from The Matrix. That level of artificial intelligence (A.I.) is decades away at least. But then again, if we follow The Matrix’s logic and the best A.I. can do is use human beings as batteries, then we may already be at that level of A.I. advancement.

The Matrix A.I. took over the world to turn us into batteries [Warner Bros]

A.I. and bots are good. They are more efficient at tasks and less prone to errors than squishy humans. Anywhere automated bots and A.I. can be used, they should be, and the world would be a much better place. Right? Well, no.

Do not let them take our jobs

This is perhaps the major concern when it comes to automation. Automating everything will make a lot of jobs obsolete, driving up poverty on a global scale. This will come back to bite the very corporations doing the automating, as less income for the public translates to less money spent on their products and services. To put it into perspective, there are more than 120,000 mini bus taxis in South Africa. Let us say each driver supports two other people on his income. That’s 360,000 people, drivers included, left with no reliable income should all vehicles become autonomous.
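The arithmetic behind that estimate is easy to check. A quick sketch, assuming the figures above (120,000 drivers, two dependants each, with the driver himself as the third affected person per household):

```python
# Back-of-the-envelope estimate of people affected by automating
# away mini bus taxi driving. Figures are the article's assumptions.
DRIVERS = 120_000          # mini bus taxis in South Africa (article's figure)
DEPENDANTS_PER_DRIVER = 2  # assumed people supported by each driver's income

# Each household loses its income: the driver plus their dependants.
affected = DRIVERS * (1 + DEPENDANTS_PER_DRIVER)
print(affected)  # 360000
```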

This will be the first point in our argument. Too much A.I. is bad.

Somebody take away Google’s A.I. privileges

When software engineers at Google asked what should be automated and what should not, the owners said YES! The engineers then went on to create some of the worst A.I. implementations on the planet. Google search is good, except when it is not, and then it is bad. That is where it should end for them: focus on improving the search A.I. and employ people for the rest.

Google being Google, they then put A.I. in everything. YouTube is regulated by terrible A.I. that is bad at detecting spam and copyright infringement. Then there is the non-existent customer service at Google. When you have a problem with a Google product, do not waste your time dancing around with the bots. Head straight to social media and cause enough of an uproar to get the attention of real human beings at Google. That is how bad the A.I. situation there is. They overdid it, and it is making everyone except their shareholders miserable. Too much A.I. is bad.

A.I. in defense and privacy protection

Computational power is increasing at an alarming rate, so much so that it makes sense to offload defense systems management and privacy protection to A.I. However, the same resources are available to malicious actors, leaving us in an arms race: as each side advances, so does the other. It gets to a point where humans cannot keep up with tracking and eliminating threats, so even more defense systems become automated. Where do we draw the line and say we have given these bots too much power? There needs to be a line.

It is all software and software is buggy

Someday, humanity may become advanced enough to create Ultron. For those who live under a rock, Ultron is an A.I. created by human beings with some space magic mixed in. That A.I. then went on to design and create an advanced robot that fell in love and started wearing sweaters. I digress. The point here is that any product created by a human being will always have design flaws. Software will always be buggy. Some of you will defend the A.I. and say it is more than just software: we feed it large amounts of data so that it can make the best decisions. But it is still making decisions based on parameters defined by a human being. The error factor will always be present. Ultron decided to make the world a better place by destroying humanity. Too much trust in A.I. is bad.

Who wins, the pros or the cons?

There is no denying that automation and bots are more efficient and reliable. This, however, depends on the task. We could use clinical datasets to train an A.I. that prescribes medication based on symptoms presented by patients in hospitals. The risk, however, is very high: the A.I. may prescribe the wrong medication for conditions not covered in the datasets. Another example is self-driving cars. On second thought, no need to discuss this one. Have you seen those videos of Teslas going crazy?

We can agree that a blanket approach to automation is not good. Let Google use their A.I. to police their platforms, but instead of issuing bans that destroy people’s livelihoods, flag content for human review. Ease automation in gradually and allow people’s skills to adapt to the new systems to prevent shock. Let a human being stay close to the bots, regularly evaluating and monitoring them in each application in each field or industry. The last thing we want is for the A.I. on a nuclear-armed drone to go berserk.