“Before we work on artificial intelligence,
why don’t we do something about natural stupidity?”
This bit of doggerel appeared on the old computer machine the other day and got me thinking some serious thoughts. Thoughts like: just because the new tools of our technology-driven world allow us to do things not previously possible, should we?
After some considerable thought, my answer is, “No. Not always.”
While humankind traditionally embraces discovery and new ways of living, in some cases we’ve found ourselves with the ability to “do” without asking, “Now that we can, should we? Do we understand what we’re doing?”
The current ability to keep brain-dead and other near-terminal bodies alive is one such instance in which technology has made something possible but we’ve not yet developed solid ethics to answer the question “Should we?”
Or, we can clone animals. This naturally leads to the query, “What about humans?” To which we should immediately further ask, “Should we?”
Scientists at MIT have disclosed they’ve bred lab rats that age in reverse! Now, we’ll see public clamor to try it on humans. Is someone - anyone - going to quickly and loudly ask, “Should we?”
Not many folks remember Josef Mengele - Doctor Mengele. He was assigned to the Auschwitz concentration camp in 1943, where he took part in the selections that sent hundreds of thousands of Jews to their deaths. But his real Nazi “fame” came from his ghastly experiments to develop a “master race.” Pictures, published after World War II, showed the depth of depravity of his “work.” It’s still impossible to understand why someone - anyone - would have subjected thousands to such brutality.
Mengele told the Nazi hierarchy he could create the “master race” through medically inhumane means. But no one asked, “Should we?”
Extreme case? Yes. Definitely. But it brutally teaches us that we - as humans - are capable of trekking off into uncharted domains without developing the ethics necessary to deal with such issues and without asking “Should we?”
Now, we’re faced with “artificial intelligence.” Developers are boldly touting what it will do for us - how it will change our lives for the better - how it will advance our “civilization.”
As a nation - as a world - we should just as loudly be asking “Yes, we can, but should we?” I’ve not heard of anyone or any serious intellectual body raising the issue of developing concomitant ethics - rules - standards - limitations - before we dash headlong into this new “computerized world.”
A.I. is not necessarily something to be afraid of. But the capabilities of this astounding science are so immense we need to figure out what we’re going to do with it - how we’re going to use it - what aspects of it we should pursue - what limitations (and how many and for what) we should adopt.
In a very real sense, A.I. amounts to humans turning over many “duties” that have traditionally been our responsibilities - our tasks - our ways of living - to inanimate, but very intelligent, machines.
Is anyone truly asking which responsibilities, which tasks, which real-world ethics we should apply? Is anyone looking at limitations - what failsafe protections we need to develop before we rush into this new “Utopia”?
I don’t hang around the MIT labs on a regular basis. Nor am I on the mailing lists of other major research institutions. So, some of what’s happening in these “lofty towers” may be getting past me.
But I’d like to think that, as we consider what to do with this new science, someone - many someones - are wrestling with the simple but world-changing question: “We can, but should we?”