A New AI Threat
By Brian Simpson

Nation First has made an interesting point worth noting here about the AI threat. While it can be argued that AI has not yet surpassed human intelligence, researchers are attempting to build AI using human brain tissue. Work on this Dr Frankenstein idea is occurring in Australia with complete disregard for the moral consequences, as now seems to be standard in AI research. The justification is that we are in a new type of arms race with China and Russia, who do not care a jot about morality, so unless the West does this research, it will be done by its enemies, who will use it against the West.

I call this the AI slippery slope, and a very slippery one at that. It seems a real and present danger with no easy solution, since it is a fact that communist China and Russia will not be constrained by moral or theological considerations. This situation is captured by the parable of the tribes:

https://link.springer.com/article/10.1007/BF02277232#:~:text=The%20parable%20of%20the%20tribes%20offers%20a%20theory%20of%20social,among%20the%20many%20cultural%20options

“The parable of the tribes offers a theory of social evolution to explain why civilization has developed as it has, in particular, why its major transformations of human life have not better served human needs. It challenges the commonsense view that people have freely chosen among the many cultural options. Another selective process has operated, one not under human control and not a function of human nature. Before civilization, all life was governed by a complex, biologically-evolved order. For a creature to develop culture to the point that it can invent its way of life appears to offer freedom, but this freedom is a trap. For what is freedom for any single society is anarchy in an interactive system of such societies. Anarchy — unprecedented in the history of life — makes inevitable a struggle for power amongst societies. This ceaseless competition, combined with open-ended possibilities for cultural innovation, inevitably drives social evolution in an unchosen direction: ways of life that do not confer sufficient power, regardless of how humane intrinsically, are eliminated, while the ways of power are inexorably spread throughout the system.”

https://nationfirst.substack.com/p/machines-wont-end-humanity-but-this

“The past few years have seen some truly incredible advancements in artificial intelligence (A.I.) technology.

Even science fiction writers from a mere decade ago could not have envisioned some of the breakthroughs that have been made recently.

This, of course, has led to fear that machines will be the end of humanity.

However, that simply cannot be the case as conventional A.I. — no matter how advanced it becomes — cannot replicate the ingenuity of the human mind.

Machine A.I. works on patterns and probabilities.

When confronted with a new problem, it draws on existing data to identify what is likely the answer.

Moreover, A.I. might use logic, but logic is not the same as commonsense.

For instance, logic would dictate that a tomato being a fruit would go well in a fruit salad.

Commonsense, however, tells us that would be a terrible idea.

Intuition is something that cannot be replicated by 1s and 0s.

It is a distinctly human trait that A.I. cannot emulate.

Also, the human brain is simply far, far more energy-efficient.

To even achieve the simple ability to ‘talk’ in a sensible way requires A.I. to harvest and process immense quantities of data, which in turn, requires vast quantities of energy.

We, on the other hand, can learn to communicate intuitively.

Our ability to make meaningful information from nothing will always place us at an advantage compared to machines, which are reliant on existing databases to create new information.

So, what exactly could be the real threat to humanity? In fact, it is something being worked on in Australia.

Researchers here are trying to play God by doing the unthinkable: attempting to create a new form of artificial intelligence that makes use of human brain tissue to overcome these limitations.

Such an A.I. has the potential to develop true consciousness and when it does, what would it think of its creator?

If history has shown anything, it is that it is mankind’s primal nature to dominate those it sees as inferior.

This dark trait was only constrained by self-imposed moral systems, and especially the Christian faith with its golden rule that was taught by Christ Himself:

… Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind.
This is the first and great commandment.
And the second is like unto it, Thou shalt love thy neighbour as thyself.

— Matthew 22:34-40

Will this A.I. believe or hold any such morals?

Or will it combine the cold logic of machines with the ambition and learning adaptability of a human being to overthrow and eliminate competition… us?

Because this new A.I. could do everything we are capable of, only better and more efficiently, sooner or later, once advanced enough, it will come to ask questions about the redundancy of humanity.

Once it does, what will it think or plot?

Such an A.I. would be a dangerous adversary, able to think like us but also holding all the advanced capabilities of a machine.

But the scientists doing this work, and more importantly their billionaire funders, seem too short-sighted to think of such implications.

To them, this is just a potential avenue to make themselves even richer.

But they that will be rich fall into temptation and a snare, and into many foolish and hurtful lusts, which drown men in destruction and perdition. For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows.

— 1 Timothy 6:9-10

From both a moral standpoint and a rational perspective, this is research we should oppose at all costs.”

Monday, 29 April 2024