river said: ↑
In the program " The Unexplained " , Hosted by William Shatner , they delve into Genius ( S3 , Ep9 ) . If we did more of this research upon our Brain and what it is capable of , Humans can evolve intellectually enormously .
What do you think I am doing?
Really? By presenting the current state of AI in the new GPT-3 and the soon-to-be-released GPT-4?

The opposite.
GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3
Are there any limits to large neural networks?
https://towardsdatascience.com/gpt-...arameters-500x-the-size-of-gpt-3-582b98d82253
Wisdom.
"Learning" AIs can learn wisdom from published philosophy on the internet. In the examples I posted there are several instances where Lita displays great wisdom. Play the videos and be amazed.
Yesss they can! They are "learning" AI and can learn to adopt new logical information.

Sure, but that can produce a mindset, one that AI can't break out of.
100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.
GPT-4 will have as many parameters as the brain has synapses.
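To put those magnitudes side by side, here is a back-of-the-envelope comparison in Python. The figures are the rough estimates quoted above (and the rumoured GPT-4 size from the linked article), not precise measurements; note the exact ratio is ~571x, which the "500x" headline rounds down:

```python
# Rough scale comparison using the order-of-magnitude figures quoted above.
gpt3_params = 175e9             # GPT-3: ~175 billion trainable parameters
gpt4_params_rumoured = 100e12   # rumoured GPT-4 size: ~100 trillion parameters
brain_neurons = 90e9            # human brain: ~80-100 billion neurons (midpoint)
brain_synapses = 100e12         # human brain: ~100 trillion synapses

print(f"Rumoured GPT-4 vs GPT-3: {gpt4_params_rumoured / gpt3_params:.0f}x")        # ~571x
print(f"GPT-3 params vs brain neurons: {gpt3_params / brain_neurons:.1f}x")         # ~1.9x
print(f"Rumoured GPT-4 vs brain synapses: {gpt4_params_rumoured / brain_synapses:.0f}x")  # ~1x
```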
https://towardsdatascience.com/gpt-...arameters-500x-the-size-of-gpt-3-582b98d82253

The sheer size of such a neural network could entail qualitative leaps from GPT-3 we can only imagine. We may not be able to even test the full potential of the system with current prompting methods.
Yesss they can! They are "learning" AI and can learn to adopt new logical information.
You are seriously underestimating the new generation of learning AIs. They can learn 1000x faster than human children.
What can we expect from GPT-4?
https://towardsdatascience.com/gpt-...arameters-500x-the-size-of-gpt-3-582b98d82253
Logical information? Is that so? And who or what program decides this?

Its internal value algorithms.
In the end, mathematics and mathematicians create AI. Hence they see the Universe as a bunch of numbers only; hence the computer as the essence of the Universe, the Hologram Universe. Such nonsense; they are wrong. The Universe is physical before it is numbers.

Physical values have numbers.
AI will feel electronics. Life will feel life, and the periodic table.

AI will feel electronic dynamics.
Simply put, language models are statistical tools to predict the next word(s) in a sequence. In other words, language models are probability distributions over sequences of words (a minimal sketch of next-word prediction follows the list below). Language models have many applications, like:
- Part of Speech (PoS) Tagging
- Machine Translation
- Text Classification
- Speech Recognition
- Information Retrieval
- News Article Generation
- Question Answering, etc.
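To make "probability distribution over a sequence of words" concrete, here is a minimal sketch of a bigram language model in Python. The toy corpus is invented for illustration; real models like GPT-3 estimate these probabilities with a neural network rather than a count table:

```python
from collections import Counter, defaultdict

# Minimal bigram language model: estimate P(next_word | current_word) from counts.
corpus = "the cat sat on the mat the cat ate the fish".split()  # toy corpus

bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1

def next_word_distribution(word):
    """Return the conditional distribution P(next | word) as a dict."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The same idea scales up: a large language model is just a far more expressive way of assigning a probability to each possible next token.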
A popular encoding method used in NLP is Word2Vec, which was developed in 2013. The real boost to language models came in 2017 with the arrival of the "transformer". You can read more about "attention" and "transformer" in the paper in which it was proposed.
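The heart of the transformer is the scaled dot-product attention equation from that 2017 paper. Here is a minimal NumPy sketch of just that equation, with illustrative shapes and random inputs:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted average of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # one value vector per key
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```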
The first thing that overwhelms about GPT-3 is its sheer number of trainable parameters, 10x more than any previous model out there.
In general, the more parameters a model has, the more data is required to train it. According to its creators, the OpenAI GPT-3 model was trained on about 45 TB of text data from multiple sources, including Wikipedia and books. The multiple datasets used to train the model are shown below:
more......
Common Sense Reasoning

Three datasets were considered for this task. The first dataset, PhysicalQA (PIQA), asks common sense questions about how the physical world works and is intended as a probe of grounded understanding of the world. "GPT-3 achieves 81.0% accuracy zero-shot, 80.5% accuracy one-shot, and 82.8% accuracy few-shot (the last measured on PIQA's test server). This compares favourably to the 79.4% accuracy of the prior state-of-the-art, a fine-tuned RoBERTa."
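"Zero-shot", "one-shot", and "few-shot" refer to how many solved examples are placed in the prompt before the real question; the model's weights are never updated. Here is a sketch of how such a few-shot prompt might be assembled for a PIQA-style question (the example questions and answers are invented for illustration, not taken from PIQA):

```python
# Build a few-shot prompt for a PIQA-style physical-common-sense question.
# k solved examples go in front of the real question; no gradient updates.
examples = [  # invented examples for illustration
    ("How do you keep a door from swinging shut?", "Wedge a doorstop under it."),
    ("How do you cool soup down quickly?", "Stir it so hot liquid reaches the surface."),
]
question = "How do you open a paint can without a dedicated opener?"

prompt = ""
for q, a in examples:          # k = 2 here ("few-shot"); k = 0 would be zero-shot
    prompt += f"Q: {q}\nA: {a}\n\n"
prompt += f"Q: {question}\nA:"
print(prompt)
```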
more...

There are a few more results mentioned in the paper for tasks like reading comprehension, SuperGLUE, NLI, and synthetic and qualitative tasks (arithmetic, word scrambling and manipulation, SAT analogies, news article generation, learning and using novel words, correcting English grammar). Let's pick the most interesting task: News Article Generation.
An update on the evolving world of AI
and
AI Poet
GPT-3 haiku generation
About AI Poet
Generate haikus from the perspective of a sentient robot.
Example: "human life. so sad. a blink of the cosmic eye. they die like pixels on my screen."
And a major work: https://thenextweb.com/news/this-bi...edelic-visual-interpretations-of-famous-poems
This could be what changes everything. AI creating original poetry through its analysis of algorithms. Original poetry. Then why not original books, and plays, and operas. Possibilities are endless.

AI is already doing all that.
I’m not being flip, my mind is a little blown right now.
"Poisons represent the meanings of resistant Umwelt-formations of other organisms. This is the case when a plant produces toxins in order to make its own Umwelt immune to the effects of the poisons of its enemies. The resistant Umwelt is immune to the meanings of the poisons that it produces. In other words, the poison is a remedy that does not change the meaning of the poison-producing organism, but rather protects it from the meanings of other organisms. In the Umwelt of the poison-producing plant, poisons are not harmful.
Poisons also demonstrate the immanentisation of meaning in its ‘minimal form’, that is, the realisation of the ‘logical possibility’ of a new kind of resistance to the tolerance of meaning. This occurs when an animal responds to the poison, not by building up resistance, but by immanently changing its form to produce a resistance in its Umwelt. The animal’s body takes on a meaning of resistance to the poison.
An example of such immanentised meaning is the metamorphosis of the monarch butterfly. The butterfly is poisonous to predators, and in the larval stage, feeds on milkweed plants. This Umwelt-form of the butterfly has developed resistance to the plant poison, so it must immanently change its form when it matures in order to continue to resist."

https://ignota.org/blogs/news/the-poison-path-1