I very often see false choices about paths forward. One of the most common ones is the question about the future of humanity regarding AI, robotics, computers, cloning, vaccines, genetic engineering, you name it. Any new area of technology with tremendous potential is always a multi-edged sword, and someone will always ask the question of whether we should do it. I think that all the morality questions are good and worthwhile, but also problematic. So I am introducing, hopefully not for the first time, a structure for dealing with such issues:
- If we don’t do it and they do, what happens?
- What do we want to happen?
- What then do we do?
If we don’t do it and they do, what happens?
Let’s not imagine that we will all agree and then not do something that could have a dramatic effect on the world. Sure, we have had nuclear, chemical, and biological weapons treaties that have slowed the development of such weapons and limited their further proliferation to only the rogue states that are crazy enough to actually start a nuclear war. But it’s pretty hard to make a large-scale nuclear weapon: the material is hard to get and refine, the engineering is non-trivial, and even if I give you a legitimate design, it’s still not that simple to implement.
Who are the “we” and “they”? In the cyber arena, we and they are anyone, anywhere, any time. The cat is out of the bag before it is grown up enough to notice. AI is not new, generative AI is not new, large language models are not new, but our global social recognition of them is new, and now we are afraid. Too late.
So if they do and we do not, I think we become subject to their lording it over us. Whatever the technology is, if you are not competitive, you are either wise because it doesn’t work, or a fool because you lose to the asymmetric advantage. Unless you have your own such advantage to counterbalance it. And don’t cry “poor” because most of it is very affordable.
Of course you can steal to catch up, then try to stay up or ahead. And when I say steal, I mean take without permission or compensation. You can buy it from people who stole it, of course, and you can pay for it in one form or another. In fact, you will likely have to regardless. The further behind you are, the more you will have to pay for it. And you might not like the price.
What do we want to happen?
Now that we know what happens (we all get subjugated or pay a heavy price for being wrong), we should reasonably ask ourselves what we want to happen. So suppose the cyber overlords, or the clones, or the genetically engineered superior race, or the irradiated mutants, or AI, or whoever or whatever takes over. Is that a good thing or a bad thing? And for whom?
If you are a futurist, which to some extent we all are, you think about what the future will look like and how you imagine it might be better or worse. If you believe it inevitable, you seek the inevitability curve, the inevitable arc of history toward whatever end you have in mind.
What then do we do?
One way or another, once you have looked at the futures in front of you and us, it’s time to make a decision. Is there anything you can do about it? Can you get what you want? What about all the other people and what they want? What level of conflict are you willing to accept to get your point of view adopted? Or do you care? Are you too young to imagine it, too old to care?
Almost everyone will ultimately do little or nothing of consequence about these so-called big issues. Until and unless these big issues fall on them and start dragging them down, most people will not consider them, will not act on anything they might consider, and many are resigned to the reasonable conclusion that there is nothing they can do about it.
All of this gets back to the old question:
If you could kill Hitler when [name a time or event], would you?
A standard answer 50 years ago would be YES for 95% or more of the world population.
A standard answer 100 years ago would be “Who is Hitler and why would I want to kill him?”
But here we are now
Looking back on it, it’s easy to say woulda coulda shoulda. But we are only permitted to look forward, and our predictions are apparently inadequate to convince us to do the right thing unless we are lunatics. Which is to say, it would have taken a lunatic to kill Hitler in 1929, and it would take a lunatic to predict who to kill today against the bad things of 2029.
Of course, the closer we come, the more we might reasonably say we shoulda when we coulda, and we woulda if we knew.
So here we are now, and the AI cat is out of the bag, as is the genetic engineering cat, the nuclear cat, the other cats, and we are now left trying to herd them.
But where do we want them to go? Away might be the desired answer, but it’s not likely to work. We can barely outlaw murder effectively, and in doing so we have not eliminated the problem. We just let people kill other people for the most part, and if we can find them, we pay for food and housing and health care for the rest of their lives to keep them away from us.
It’s still illegal to open or close an umbrella in the presence of a horse in New York City, play dominoes on Sunday in Alabama, whisper in someone’s ear while they are moose hunting in Alaska, drive more than 2,000 sheep down Hollywood Blvd in California, and so forth. Making something illegal is not likely to solve the problem, and even the cannabis trade survived prohibition.
I started this article with a simple proposition: that we should start asking the right questions.
Of course that implies that the questions others are asking are somehow the wrong ones.
Sorry about that.
My real point is that it’s a competitive world, and in order to have an actual effect, we need to make decisions based on the realities we face, not the realities we would like to be facing. By the time each person who decided to kill Hitler came to that conclusion, it was too late. And in the end, the only person who could and did was Hitler himself. It’s already too late to kill AI, but you can position yourself to lose the future by shunning it. So act now or suffer the consequences… figure out what you want and how to get it.