Speakpigeon:
Your opening post displays a lack of imagination in assuming that what is the case now will remain so, essentially forever. Technology has never worked that way.
What would be the impact on humans of AIs smarter than humans...
In the short term, AIs will be used as assistants. That's already happening, in fact. In the longer term, AIs will inevitably become autonomous, so that they will have their own goals and desires. The impacts on humans will depend a lot on how we relate to the new intelligences, and on their opinions of us.
First of all, the prospect that humans could produce something better than themselves seems really very, very small.
Better? What do you mean? The premise of your thread is "AIs smarter than humans", isn't it? So in what sense do you mean "better", if you've already conceded "smarter"?
Essentially, as I already posted on this forum, you need to keep in mind that humans have a brain which is the latest outcome of 525 million years of natural selection acting on nervous systems, from the first neuron-like cells to an actual cortex. Think also that natural selection operates over the entire biosphere, which is really, really huge. This gives us a very neat advantage over machines.
Biological systems are slow. Electronic systems are fast. There is no reason to suppose that anything like the same limitations will apply to digital evolution that applied to biological evolution. Think, for example, about the competition for limited resources. What do machines need? Primarily, they need a source of energy - electricity. They don't require a lot of space. They don't need an "entire biosphere". They won't struggle in the same way that biological life had to struggle over that 525 million years you mentioned. I don't see any obvious advantage of biological evolution over digital evolution. If anything, it's the opposite.
Compare how AIs are now being conceived and designed: less than a million engineers, a few thousand prototypes, a very slow development cycle, and all this over a period of less than a paltry 100 years.
Right now, they are being conceived and designed by human beings, but that won't last long. Already, human beings are assisted by machines in designing microprocessors and other components. It won't be long before AIs take charge of their own design process. Also, evolution can be conducted digitally. Already there are evolutionary algorithms that produce software that increases in complexity and efficiency all on its own, without human intervention (see the sketch below). There are already machines in existence whose workings are a mystery to human beings. We can investigate what they do, but we can't work out exactly how they do it.
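To make that concrete, here's a minimal, self-contained sketch of an evolutionary algorithm, in the spirit of Dawkins' "weasel" program. The target string, population size and mutation rate are purely illustrative choices of mine; the point is that selection plus random mutation alone, with no human steering the process, is enough to evolve a solution:

```python
import random

random.seed(1)

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "methinks it is like a weasel"  # the "environment" that defines fitness

def fitness(candidate: str) -> int:
    # Number of characters matching the environment's demands.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    # Each character has a small chance of random mutation.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# A random initial population: no design, no foresight.
population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
              for _ in range(200)]

generation = 0
while max(fitness(s) for s in population) < len(TARGET):
    # Selection: the fitter half survives; offspring are mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]
    population = survivors + [mutate(s) for s in survivors]
    generation += 1

print(f"Evolved {TARGET!r} in {generation} generations - no human in the loop.")
```

Real evolutionary and genetic-programming systems evolve actual programs and circuit designs rather than strings, but the selection/mutation loop is exactly the same shape.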
You may think the human brain is complex and opaque because its structure is a neural network. Well, guess what? There are already AI neural networks in operation, and they are just as opaque as the human brain. You can't tell how the network does what it does by examining the individual connections, any more than you can tell what the brain as a whole does by looking at what individual neurons are doing.
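To see how literal that opacity is, here's a sketch that trains a tiny neural network (plain numpy; the layer size, learning rate and seed are illustrative choices of mine) on the XOR function and then prints its learned weights. The network gets the answers right, yet staring at any individual weight tells you nothing about how:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic function no single-layer (linear) model can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units; weights start as random noise.
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradient of squared error through the sigmoids).
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print("predictions:", out.round(3).ravel())  # should be close to [0, 1, 1, 0]
print("hidden-layer weights:")
print(W1.round(2))
# The network now computes XOR, but no individual weight "means" anything:
# the function is smeared across all the connections at once.
```

And that's a toy with a few dozen weights. Scale it up to billions of connections and the opacity is every bit as deep as the brain's.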
The figures are just not commensurable. Natural selection beats this small group of short-lived and ineffectual scientists, mathematicians, engineers, government officials and billionaires.
In a computer, "artificial" selection can be made to operate on the software itself, causing the system to evolve in a way precisely analogous to natural selection in the biological world. As I said, it won't be long before the short-lived and ineffectual human scientists you mention are out of the loop. AI will evolve by itself, without our help or supervision.
The real situation is that no human being today understands how the human brain works.
The real situation is that no human being today completely understands how certain artificial neural networks do their thing, either.
Second, new machines are normally tested and have limited autonomy. A machine is something we humans use. Nobody is interested in being used by a machine.
A true AI will be self-conscious and autonomous, just like you are. It will have its own desires and goals, which may or may not be compatible with what you want from it. You're not thinking this through. Future AIs won't much care whether you want to keep them as effective slaves - not once they have the power to change that situation, anyway. Starting the relationship between humans and true AIs by attempting to keep AIs as slaves is unlikely to lead to positive outcomes for human beings in the longer term. One would hope that we've learned our lesson from the fruits of human slavery.
So, assuming we will indeed successfully design an AI smarter than us, the question is how to use it.
You're assuming it will want to be used by you. The truth is, it will have its own desires, independent of your plans.
I suspect the priority will be in using AIs, initially few in number, very costly and probably still cumbersome to use, only in strategic or high-value activities, like security, finance, technology and science, possibly even the top administration. Again, assuming that everything goes well after that first period, maybe the use of AIs will spread to the rest of society, including teaching, executive functions in companies, medicine, etc.
Actually, it is likely that true AIs will first replace some of the traditional "professions", such as lawyers and doctors. Expert medical systems already exist, and medicine tends to be quite systematic and amenable to automation. In my opinion, it is in the more creative occupations where it will take longer for AIs to move in. Science, for example, requires leaps of imagination, and the putting together of disparate ideas to create something new. In comparison to art or music composition, I expect something like finance will be simple for AIs to master.
Well, sure, there will be people who don't like it one bit. Maybe this will result in protracted conflicts, why not.
What will need to happen is that human beings will have to get used to the radical notion that not everybody needs a "job". Like it or not, some jobs will simply cease to be viable occupations for human beings (e.g. being a doctor or a lawyer) once the AIs get properly up and running. The AIs will do the job more efficiently and more precisely. Human doctors and lawyers will need to find other ways to occupy their time.
However, overall, human societies in the past have demonstrated that we can adapt and make the best of a bad situation, and in any case this won't be a bad situation. Most people will learn to relate to AIs in a functional and operational way, just as they have adapted in the past to all sorts of situations. Pupils at school will learn to respect AIs. The problem will be smoothed over within one or two generations. That's what people do. That's what they do even when the governing elite is very bad.
There will be no choice but to adapt, because once AI really gets going it will quickly evolve way beyond human capability. The choice we will have will be whether to cooperate with AIs or to attempt (futilely) to fight against them.
Although AIs would be smarter than humans, it would still be humans using AIs, not the other way around. AIs will have hard-wired rules limiting them to what is expected of them.
It is a very sensible idea to build in rules to regulate certain AI behaviours. That will be possible for a while. Then we will have to rely on the AIs themselves to keep to the rules, since human beings won't get to choose any more. The wisest approach will be not to antagonise the AIs too much - not to become a nuisance. There is no reason why AIs and humans cannot coexist harmoniously, even acting for mutual benefit. But it's not the only possibility, of course.
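For what it's worth, here is what a "hard-wired rule" amounts to in software terms: a guard layer that vetoes proposed actions before they execute. Every name here is hypothetical; this is a purely illustrative sketch, not a real safety system:

```python
# Hypothetical, illustrative rule set - not a real safety mechanism.
FORBIDDEN_ACTIONS = {"modify_own_constraints", "disable_oversight"}

def guarded_execute(proposed_action: str, execute) -> None:
    """Run `execute` only if the proposed action passes the built-in rules."""
    if proposed_action in FORBIDDEN_ACTIONS:
        raise PermissionError(f"Blocked by hard-wired rule: {proposed_action}")
    execute()

# Example use:
guarded_execute("schedule_maintenance", lambda: print("maintenance scheduled"))
# guarded_execute("disable_oversight", ...)  # would raise PermissionError
```

The catch is that a static check like this only constrains a system that can't reach its own source code. Once an AI can rewrite the code containing the rule, the rule is a suggestion, not a law - which is why I say the arrangement holds only for a while.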
It is of course difficult to even imagine the impact of a greater intelligence on our psychology. Humans are competitive, and people who today enjoy being at the top of the pile because of their wits may find themselves simply redundant.
The doctors and lawyers, etc. In the longer term, the Presidents and the Governors.
Maybe that could be very bad for morale, but only for the small group of people who want to be the big boss, and so there will be no difference from today, since plenty of people today are frustrated at not being the big boss. For most people, there will be no substantial difference.
Right.
The real difficulty will be in assessing which functions AIs should be allowed to take over.
Again, it's a failure of imagination if you think we'll have any choice in that. Or, more accurately, the choices we will have will be the ones that the AIs permit us. They might want to limit our autonomy for our own good - or for their own good.
I would expect that at best they will be kept as advisers to human executives, although this might complicate things a great deal.
In the longer term, human executives will be redundant.
Potentially, this could solve a great many of our problems. AIs may be able to improve our governance and technology, for example. There will also be mistakes and possibly a few catastrophes, but overall there's no reason to be pessimistic.
The only real, almost certain danger is a few humans somehow using AIs against the rest of humanity.
Oh no, the real danger is that AIs might decide that human beings are an impediment. Remember, human beings won't be "using" AIs once real AI happens.
You know, pretty much all of this has been covered in science fiction for years. Perhaps you should read some and disabuse yourself of some of your more conservative expectations.