Do machines already exceed human intelligence?

  • Total voters: 6
  • Poll closed.
OpenAI's GPT-3 Language Model: A Technical Overview

GPT-3 Key Takeaways
  • GPT-3 shows that language model performance scales as a power law in model size, dataset size, and the amount of training compute.
  • GPT-3 demonstrates that a language model trained on enough data can solve NLP tasks it has never encountered. That is, the paper studies the model as a general-purpose solution for many downstream tasks without fine-tuning.
  • The cost of AI is increasing exponentially. Training GPT-3 would cost an estimated $4.6M using Tesla V100 cloud instances.
  • The size of state-of-the-art (SOTA) language models is growing by at least a factor of 10 every year, outpacing the growth of GPU memory. For NLP, the days of "embarrassingly parallel" training are coming to an end; model parallelization will become indispensable.
  • Although there is a clear performance gain from increasing model capacity, it is not clear what is really going on under the hood. In particular, it remains an open question whether the model has learned to reason or simply memorizes training examples in a more sophisticated way.
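The power-law scaling in the first takeaway can be sketched numerically. The form and constants below are from the Kaplan et al. scaling-laws fits and are used purely for illustration, not as GPT-3's exact numbers:

```python
# Sketch of the Kaplan et al. (2020) parameter scaling law: test loss falls
# as a power law in non-embedding parameter count N,
#   L(N) = (N_c / N) ** alpha_N
# N_c and alpha_N are fit constants reported in that paper (illustrative here).

N_C = 8.8e13      # reference parameter count (fit constant)
ALPHA_N = 0.076   # power-law exponent for model size

def loss(n_params: float) -> float:
    """Predicted cross-entropy test loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# Loss keeps improving with size, but with sharply diminishing returns:
for n in (1.5e9, 1.75e11):   # roughly GPT-2-sized and GPT-3-sized models
    print(f"{n:.2e} params -> predicted loss {loss(n):.3f}")
```

The power-law form is why each constant-factor improvement in loss demands a multiplicative (not additive) jump in model size, which is the economic story behind the $4.6M estimate.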
One novel challenge GPT-3 has to deal with is data contamination. Since its training dataset is sourced from the internet, the training data may overlap with some of the test datasets. Although GPT-2 touched on this topic, it is particularly relevant to GPT-3 175B because its dataset and model size are about two orders of magnitude larger than those used for GPT-2, creating increased potential for contamination and memorization.
https://lambdalabs.com/blog/demystifying-gpt-3/
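The GPT-3 paper handles contamination by flagging test examples that share long n-grams with the training data (13-grams in the paper). A minimal sketch of that idea; the whitespace tokenization and helper names are my own simplification, and real pipelines hash and scale this properly:

```python
# Hedged sketch of train/test contamination detection via n-gram overlap,
# in the spirit of the GPT-3 paper's 13-gram filtering.

def ngrams(text: str, n: int = 13) -> set:
    """All word-level n-grams of `text`, lowercased, split on whitespace."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_example: str, train_ngrams: set, n: int = 13) -> bool:
    """Flag a test example if any of its n-grams appears in the training data."""
    return not ngrams(test_example, n).isdisjoint(train_ngrams)

# Toy usage with a tiny n so the overlap is visible:
train_ngrams = ngrams("the quick brown fox jumps over the lazy dog", n=4)
print(is_contaminated("a quick brown fox jumps again", train_ngrams, n=4))  # True
```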

Note: Data contamination is very much a problem with human learning.
 
I have a feeling that GPT3 is already much more than a pure Turing machine. You don't need to write code for it to execute a task. You give it verbal instructions and it will execute the task and write the code for you!

GPT3 is a language-based system, and much like human thought processes, it is able to make associative decisions. So it is already much more than a pure Turing system. It does not blindly crunch numbers: it reads the question or command and begins to assemble related information from memory, very much as humans do.
From the several choices it will select the "images" that most closely correspond to the verbal instructions.

GPT3 is not programmed with just task-specific algorithms. It is language based and was trained on a huge internet-sourced corpus that includes the entire Wikipedia in several languages. Related systems have face recognition and create their own avatars. It does not yet have a body, because of the enormous memory requirements, which exceed those of all other deep learning computers.

GPT3 has to go to school to learn, just like a human child. It just learns much faster than a human child!

Can an AI think in context? Check out this little clip of DALL·E, an image-generating sibling of GPT3.


 
This is a poll. Please vote before posting any comments.

Question: Do machines already exceed human intelligence?

1. Calculators and computers can do complex maths far faster than humans, and more accurately.
2. Programs can look at ten million pictures and pick out the criminal suspect in one of them in seconds.
3. Autopilots can fly aircraft far more accurately and efficiently than humans can.
4. Elevator controls effectively never make mistakes.
5. Therefore, machines are already exceeding human intelligence in many areas.

EB
Machines are nothing more than databases that are programmed by us. They can soon become obsolete if not kept up to date in programming. For example, if we decided all of a sudden to change math, the calculator would always give a wrong answer!
 
Machines are nothing more than databases that are programmed by us. They can soon become obsolete if not kept up to date in programming. For example, if we decided all of a sudden to change math, the calculator would always give a wrong answer!
You may want to check out GPT3. Other than building new GPT3s, they don't need humans at all. They will be able to write new programs if you ask them nicely.

p.s. you cannot change mathematical functions. You can only change the value symbols. Mathematics exists independent of humans.
 
p.s. this is how an AI challenges you to verify your humanness. GPT3 would have no problem solving the question.

GPTs are "text based" and can read variations on letters and numbers.
 
Machines are nothing more than databases that are programmed by us. They can soon become obsolete if not kept up to date in programming. For example, if we decided all of a sudden to change math, the calculator would always give a wrong answer!
I agree on that one. There are still many, many jobs that machines can't do and won't be able to do in the near future.
 
Machines are nothing more than databases that are programmed by us. They can soon become obsolete if not kept up to date in programming. For example, if we decided all of a sudden to change math, the calculator would always give a wrong answer!
That's a terrible example. If we all of a sudden changed our math, the human world would come to a standstill!

Moreover, you cannot change mathematics. You can only change its symbolic representations.
The algebraic functions will always be the same.

Input --> Function --> Output and the resulting Garbage in --> Garbage out.
 
...
Moreover, you cannot change mathematics. ...

base 6 seems to have been once the norm

[attached image: coins packed in concentric hexagonal rings]


The next ring would be 12, and the next 18, etc.; the 6th ring = 36.
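The ring counts in that coin picture follow a simple rule: in a hexagonal packing, the k-th ring around a central coin holds 6k coins. A quick sketch (function names are mine):

```python
# In hexagonal (penny) packing, ring k around a central coin holds 6*k coins,
# so the rings run 6, 12, 18, 24, 30, 36, ...

def ring_coins(k: int) -> int:
    """Coins in the k-th ring around the center."""
    return 6 * k

def total_coins(rings: int) -> int:
    """Center coin plus all rings up to `rings`: 1 + 6*(1 + 2 + ... + rings)."""
    return 1 + 3 * rings * (rings + 1)

print([ring_coins(k) for k in range(1, 7)])  # [6, 12, 18, 24, 30, 36]
print(total_coins(2))                        # 19: center + rings of 6 and 12
```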
 
base 6 seems to have been once the norm



The next ring would be 12, and the next 18, etc.; the 6th ring = 36.

Right, you can change the symbolism, but you cannot change the maths and mathematical functions.

Maths can be represented in many different bases, such as base 10 (the decimal system), base 2 (the binary system), and bases 3 through 9; these only change the symbolic representations of the same values. You just cannot mix them!
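A quick sketch makes the point concrete: the same value renders differently in each base, but the quantity is unchanged (`to_base` is my own helper name):

```python
def to_base(n: int, base: int) -> str:
    """Render non-negative integer n in the given base (2..10) as a digit string."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)   # peel off the least significant digit
        digits.append(str(r))
    return "".join(reversed(digits))

# The same quantity, three symbolic representations:
print(to_base(36, 10))  # prints 36
print(to_base(36, 6))   # prints 100
print(to_base(36, 2))   # prints 100100

# And each converts back to the same value:
assert int(to_base(36, 6), 6) == 36
```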

Roger Antonsen shows this flexibility in representing mathematics.

Take human maths away altogether and nothing changes in the universe, absolutely nothing.

A "learning" AI with observational abilities would easily be able to fashion a mathematical language that is compatible with the scientific method.

In fact you tell it to perform a task and it will write the program for you!

If an AI can learn to play Go without even being taught the rules of the game, as DeepMind's MuZero did, it can figure out naturally occurring mathematics.

How does GPT learn?
The most impressive feature of GPT-3 is that it's a meta-learner; it has learned to learn. You can ask it in natural language to perform a new task and it “understands” what it has to do, in an analogous way (keeping the distance) to how a human would. Jun 20, 2021
https://towardsdatascience.com/understanding-gpt-3-in-5-minutes-7fe35c3a1e52
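The "meta-learning" described in that quote shows up concretely as few-shot prompting: the task is specified by examples inside the prompt itself, with no weight updates. A sketch of the format, using the translation demonstration from the GPT-3 paper:

```python
# Few-shot prompting: the "program" is just text. The model infers the task
# (here, English -> French) from the examples and completes the last line.
# No fine-tuning or weight updates are involved.
prompt = "\n".join([
    "Translate English to French:",
    "sea otter => loutre de mer",
    "cheese => fromage",
    "mind =>",            # the model is expected to continue the pattern
])
print(prompt)
```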
 
base 6 seems to have been once the norm
With whom?

Not a lot of 2-dimensional objects in nature.

This is the science of crystal packing structures. In 3D, there are many.

But I realize in retrospect that this has diverged off-topic. Reporting to have the last six or so posts redirected.
 
With whom?

...
I do not know/remember.
I encountered this in an anthropology class over 40 years ago and I blew it off.
Until I laid a coin on a surface, then surrounded it with a ring of same-size coins, which was 6; then the next row/ring, which was 12 coins; then a next ring, which was 18 coins, then 24, then 30, etc...

Ok base 6 seemed to make sense
but
a pictorial representation does not mean that I understand the whole system.
It just makes sense when viewed this way
or
(I could be wrong?)
 
Ok base 6 seemed to make sense

They all make sense if you do not consider ease of use and decimal accuracy. That is why the decimal system has become the preferred scientific tool; its ease of use surpasses all other symbolic languages, and you can use your hands as "handy" calculators.

Which brings up the question of whether AIs have an internal query system, as all humans do: a program that asks the question "why" and engages the main data-gathering brain in an internal dialogue about the nature of a thing and why that is so, before it files the data in memory.

This is the first sign in a human child that it is not satisfied with just observing but wants to know "why" there are natural phenomena, and thereby gain "understanding" and the ability to "reason" based on understanding the object from several different perspectives (Anil Seth's "controlled hallucination"), enabling "expectation" and "cognition".

Let me cite an example of a use for the internal query "why". DALL·E (built on GPT-3) was asked to make a chair using an avocado and came up with a whole series of chairs using an avocado as the material, in a great variety of configurations.
[attached images: the generated avocado chairs]
Note that only a few of the chairs are really functional for human use, based on the comfort of the human body configuration.

A chairmaker would never make all those possible configurations, but would most likely design and produce the single one I picked out as the best candidate, by asking what makes that chair my favorite.
Q: "why"
A: "because it looks very comfortable based on my body configuration"

I don't know if this is already being used in AI, but if it isn't, this may be a huge step in allowing an AI to learn how to "deliberate" and discover (learn) "reasons" why one design is preferable over others without needing to be instructed.

It already does consider "percentage chance of winning" in the game of Go! That's why AlphaGo was able to "resign" game #4 long before the end game, rather than blindly continue playing a losing position. That is a remarkable decision-making ability for an artificial intelligence.

The AI learns something new and asks itself "why" is this different from that. The fundamentals of reasoning.
 
and then
we have degrees-minutes-seconds
360 degrees = 1,296,000 arcseconds (360 × 60 × 60)

is there an easier way?
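Converting between degrees-minutes-seconds and decimal degrees is mechanical, which is much of the argument for decimals. A sketch (helper names are mine):

```python
# Degrees-minutes-seconds is a base-60 subdivision inherited from Babylonian
# astronomy; decimal degrees are plain base-10 fractions of the same angle.

def dms_to_decimal(d: int, m: int, s: float) -> float:
    """Decimal degrees from degrees, minutes (1/60 deg), and seconds (1/3600 deg)."""
    return d + m / 60 + s / 3600

def arcseconds(degrees: float) -> float:
    """Total arcseconds in an angle given in decimal degrees."""
    return degrees * 3600

print(dms_to_decimal(30, 15, 0))   # 30.25
print(arcseconds(360))             # 1296000.0 -- the full circle from the post
```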
 
I don't know; ask the scientists who use these symbolic values "why" they use them.....o_O
I read of one ancient culture which measured a year as 360 days, and the extra days belonged to the gods.

When zeroing a rifle one uses minutes of arc (MOA).
If you are off 1 inch at 50 meters (about 2 MOA, which is not very accurate), then you will be off 2 inches at 100 meters and 4 inches at 200 meters; much beyond that, don't take the shot.

..................................
hey
I didn't invent any of this stuff;
all I do is try to figure out how to use it.
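The spread described above follows directly from MOA being an angle: the linear miss at the target is (distance) × (angle in radians), so doubling the distance doubles the miss. A sketch mixing the post's units (function name is mine):

```python
import math

# One minute of arc (MOA) = 1/60 degree. For a fixed angular error, the
# linear offset at the target grows in direct proportion to the distance.
MOA_RAD = math.radians(1 / 60)

def offset_inches(moa: float, distance_m: float) -> float:
    """Linear miss in inches for an angular error of `moa` at `distance_m` meters."""
    return moa * MOA_RAD * distance_m * 39.3701  # small-angle approx; meters -> inches

# A roughly 2-MOA error, as in the post: the miss doubles with the distance.
for d in (50, 100, 200):
    print(f"{d} m: off by {offset_inches(2, d):.2f} in")
```

Note the growth is proportional (linear in distance), not exponential: each doubling of range doubles, rather than squares, the miss.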
 
if you are off 1 inch at 50 meters(about 2 moa---not very accurate) then you will be off 2 inches at 100 meters, off 4 inches at 200 meters and much beyond that------------don't take the shot
Note that here you are making a category error, mixing measurement systems. Meters and inches don't mix too well, especially when calculating how the error grows with distance...o_O
 
Note that here you are making a category error, mixing measurement systems. Meters and inches don't mix too well, especially when calculating how the error grows with distance...o_O

As a young man, I spent a lot of time adjusting my stride to be about 36 inches: 3 feet, or one yard.
When I got good at it, I could pace off 120 feet (40 yards) and be accurate to within a foot or two.

OK question
Do young men in Europe try to set their strides to a meter?
 
As a young man, I spent a lot of time adjusting my stride to be about 36 inches: 3 feet, or one yard.
When I got good at it, I could pace off 120 feet (40 yards) and be accurate to within a foot or two.

OK question
Do young men in Europe try to set their strides to a meter?
AFAIK, all Olympic sports are based on the decimal system.
(1 m = 3.281 ft)
 
AFAIK, all Olympic sports are based on the decimal system.
(1 m = 3.281 ft)
So were the target ranges in the army circa 1967:
there was a man-like silhouette every 50 meters out to 400.
(I never missed at 400, but did miss once at 50... go figure)

..........................and
Do young men in Europe try to set their strides to a meter (39.37008 inches vs. 36)?
and, if so
is there a noticeable difference in the way they walk?

or was I the only teen who did this?
 