AI is a strong-link problem. Its applications are not always.

What should we control?

Guns, sex, drugs, driving and abortion are some of the most debated topics in terms of whether we as a society should exercise more or less control over their access and function, giving rise to discussions that sometimes tear apart the closest of bonds. What if there were a more measurable way to tell whether it is in our favour to move the control bar in either direction?

One effective way to classify such problems is to assess what the outliers to normal behaviour end up costing us. Artistic freedom, for example, could be considered a strong-link problem, as the craziest and most non-conforming works that come out of art usually only impact us positively. So is the case with agriculture: we all stand to gain from some farming group coming up with a strange, innovative way of making better produce.

Science too would be a strong-link problem, as it only helps us to have more good ideas representing a wider range of the spectrum of thought than a filtered few that closely comply with existing work. We all stand to gain from a scientist who may have failed academically but has brilliant insights into things like physics and biology, built up over a lifetime of experience. Not only do the positive outliers help us enormously, the negative outliers are often simply forgotten and scrubbed away over time. However, the same can't be said about things like aviation, medicine and defence.

None of us wants to hear about a rogue rocket blowing up over the Eiffel Tower, or wants an idiotic YouTuber jumping out of their craft to shoot a fake engine-failure video for their fans. We expect the medicines prescribed to us to be well researched, compliant with good manufacturing practices and as precise for our needs as possible. We use regulations to make sure the outcomes are strictly controlled, because here the outlier outcomes tend to harm us more than benefit us. Yes, flying low and dangerously over populated areas may get a flight to its destination faster and at a more economical rate, but it is just not worth the risk.

Sorting the characteristics

We can sort the questions of control we are faced with into these two categories by looking at some core behaviours:

For strong-link problems:

  1. The events outside normal are more desirable.

  2. The harmful events are small and limited in time.

  3. We want to improve the chances for beneficial events as a priority.

  4. Risks are fully worth it.


For weak-link problems:

  1. The events outside normal are less desirable.

  2. The few good results will be ignored over time.

  3. We want to reduce the chances of unfavourable events as a priority.

  4. Risks are not worth it.

Before we judge: could AIs ever really be dangerous?

To those hoping that an AI takeover will give them a chance at a battle royale out of our much-loved science fiction and its poor, often incompetent adaptations (yes, I am looking at you, I, Robot): sorry to disappoint. Still, nerds will be nerds about the things we love, and for those who love tracking how culture, economics and science evolve, the next few years are going to be exciting thanks to the impact of AI.

The reason AI would never, in the purest sense of the words, "take over the world" is that at its core, in its deepest soul, it is just a mathematical function calculating a minimum or maximum value based on the importance it gives to parts of its input. Technically it is just a calculator doing its job.
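To make that claim concrete, here is a toy sketch of the kind of function at the heart of neural networks: a weighted sum of inputs, where the weights encode how much importance each part of the input gets, squashed through a simple curve. All the numbers here are invented purely for illustration.

```python
import math

# A single artificial "neuron", the basic unit of neural networks.
def neuron(inputs, weights, bias):
    # The "importance" given to each part of the input is its weight.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash the weighted sum into the range (0, 1).
    return 1 / (1 + math.exp(-total))

print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ~0.668
```

However many millions of such units a model stacks together, the whole thing remains exactly this: arithmetic on inputs, nothing more.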

Some things we firmly know (irrespective of imaginative arguments raised from irrelevant correlations) about how we will be dealing with AIs are the following:

  1. AI would never need to be given personhood (as under law).

  2. It will not replace artists or any skilled professionals.

  3. There shall be no Skynets taking over the world.

The first is because, irrespective of what an AI does or says, it is a program written and run on a machine, computing numbers based on the instructions it was programmed with, just like any other program. In other words, just because your washing machine starts telling you it wants to run for president, you don't have to take it seriously. You instead call the technician to fix the thing.

AI can't replace artists or skilled professionals because it does not do anything artistic, nor does it do what a skilled professional does. It is an absurdity, which some people sadly believe, that artists just blend other artworks together to make something new. An artist's contribution runs from the choice of medium to a choice of colour shaped by a recent trauma they experienced. Nor can AI replace an experienced doctor who can tell a patient is sick from the way they walked in, even when they never asked to be examined for a certain disease.

The idea that art, and most of the professional work we do, is just the random application of procedures from past experience is deeply flawed. Art is art because it carries the considerations of the mind that creates it, not because it looks beautiful. And real professional work is not the paperwork we often do, but the smart decisions we make about what goes into the papers so that the world doesn't break around us, a lot of the time using wisdom gained from things entirely unrelated to work.

In other words, we can get AI to make patterns like Picasso's, but it would never paint Guernica. You could train an AI on all of Alexander Fleming's work, but it would never discover penicillin. To true artists and professionals, however, AI is going to become the greatest of companions. It will boost their productivity and precision. Scientists will find greater breakthroughs, artists will research deeper and invent mind-melting approaches, and engineers will design things incomprehensible even to our own generation's minds.

Every AI (yes, all kinds of them) is designed to look at a certain type of data and figure out which patterns within it produce the expected outcomes at the output. This makes them amazing at specialising in very specific tasks; even ChatGPT is just trained to be an amazing assistant that fetches whatever information you need in a much more personalised manner. A possible Skynet would need a huge set of skills.

Some of them being:

  1. Manipulate people and interpret their language.

  2. Correlate real-world systems from end to end.

  3. Interpret video and audio without errors.

  4. Predict human strategy effectively.

Just four of over a hundred. It would be a huge effort to achieve even some of these in a general context, let alone an artificial general intelligence. Further, just because an AI can generate a command that says "launch the nuke" does not mean we should connect that output to the terminal of the ballistic missile controls. As humans have often proved to be threat enough even without AIs at hand, almost all these risks are already well defended against.

What any country, or any of us, should be worried about, however, is how AIs are going to be used. Enemy states will make use of them to figure out strategies and attack vectors we could never have imagined. Rogue players may design a few capable of widespread cyberattacks. And a few other good movie plots. But as in most of these possible movies, the scenarios would end pretty badly for them, as sadly a few can't outthink a large population for too long. We have AIs, and better AI scientists too, on our side.

It best serves strong links to be free and open

This week the world woke up to the disturbing news that Dr. Geoffrey Hinton, often dubbed the godfather of the AI research we currently enjoy, had quit his job at Google with an ominous and disturbing warning.

“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,”

and…

“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”

As shaking as this may sound coming from one of the greatest minds behind the foundations of our work on artificial neural networks, I would try to see it as a deep lack of trust in humans handling something so powerful, rather than an actual fear of AI doing something harmful. Something that is apt and important to consider.

The reason I hold this view, rather than assuming it to be an actual warning about AIs being dangerous, is that I believe even the first person who managed to tame and create fire would, at some point in his life, have looked out and wished he had never done what he did, as he watched it go out of control or be used as a weapon.

Great creations have always had this way of scaring their own creators, and most creators are humbled by what they have made. To all those who are still creating magic that we as a species might not be mature enough to process or handle, I would like to quote Kevin Kelly, founding executive editor of Wired.

“Over the long term, the future is decided by optimists.”
Kevin Kelly - @kevin2kelly, Apr 25, 2014

No species is ever ready to use a new, high-potential tool until, of course, the tool lands in its hands. The route it takes from there may have rough edges, but in the grand scheme of things the knowledge only helps us. We have had this fear for almost all the tech we use today on a daily basis.

The fundamental nature of natural knowledge

We might need to step out of the world view that AI, like the atomic bomb, was built within a project, has a complex architecture, and would never have been created if not for the work of a select few.

AI, in other words, is applied math in its most beautiful form. In a sense it is just a mathematical construct that any species which evolves sufficient intelligence will, without fail, bump into in its quest for the natural order within data through mathematics. It is an inevitable step of evolution which can't be blocked, because all that is there is brilliant math put to work, and who should be denied that freedom?
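A toy illustration of that quest for order within data, with every number and the model itself invented purely for illustration: "learning" is just arithmetic nudging a value until the function's outputs match the pattern hidden in the data.

```python
# Toy "training" loop: adjust one number, w, until y ~= w * x
# matches the data. Data and learning rate are made up.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) with y = 2x
w = 0.0  # the single pattern the model can learn

for _ in range(200):
    for x, y in data:
        error = w * x - y        # how far the current guess is off
        w -= 0.01 * error * x    # nudge w to shrink the error

print(round(w, 2))  # prints 2.0 -- the pattern recovered from the data
```

There is no project-scale secret in those few lines: the order in the data was there all along, and arithmetic was always going to find it.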

Separating the system and the world

While AI absolutely qualifies as a strong-link problem, which should be allowed to be researched and figured out freely, it is critical that we clearly understand the difference between constructing it and applying it in the real world.

Consider the tragic, nonsensical story of how Tesla's Autopilot ended up killing more than a dozen people. What seems to have made the situation worse is the cut-off mechanism constructed to avoid liability in case of accidents, which allowed the system to cut off and hand controls back to the human once it detected that everything had gone to hell.

While it is one thing to create an AI, its widespread use within any sector of public importance needs to be done with the utmost care, and must at all costs avoid incompetent stunts designed to look smart while people's lives, livelihoods or futures are put at risk.

This shows that AI should be dealt with like any other technical component in the respective field. Just as no aeronautical administration will sign off on an engine that may at any moment burst into flames, and no automobile law would let companies use brakes that turn off at their whim, no AI that has not been stringently tested and found compliant with the safety conditions set for the related sector should be put in widespread contact with the public.

The true danger is consolidation of AI capability

In my opinion, the greatest danger that looms over the world in terms of AI is not its capabilities of any kind, or its biases of any nature, but the very real probability that a few may consolidate its power while others are kept in the dark.

It only helps us to know any technology better: to know how best to use it to our advantage, and how to defend against it when it is used as a weapon.

If, in time, a few corporations or a select few countries end up holding the best of the AI technology out there, it could pose a great threat to all of us. The best defence against such a possibility is to democratise the field and make the best knowledge freely and openly available to all who want to learn it.

It is critical that we as a society know how important this technology is, and how valuable any knowledge of it will be as we see more and more of it in use. Like the souls who watched the first of them taming fire, we are just looking at the start of a great revolution that will free us to live more.

To fear is common, common for us as a species too. Yet we will soon see how amazing we can be, for all the faults we ascribe to ourselves. Very few generations have ever had the fortune to stand at the foothill of something this huge. We might fall, but we will prove ourselves worthy of it in time.

The people who end up seeking control and regulation over strong-link problems are to be looked upon with caution. Because usually it is that very control, and the consolidation it enables, that is the real danger.
