ChatGPT is a gibbering idiot (this is not news)

I copied and pasted the question into ChatGPT, model o3
Still just a "chatbot", designed for literary skills. A dedicated "sciencebot" would be accurate 99.99% of the time at its area of expertise, a much greater success rate than humans. What program is NASA using when they program the trajectory of a Mars landing?

FUN3D software
Since 2019, a team of NASA scientists and their partners have been using NASA’s FUN3D software on supercomputers located at the Department of Energy’s Oak Ridge Leadership Computing Facility, or OLCF, to conduct computational fluid dynamics, or CFD, simulations of a human-scale Mars lander. The OLCF is a DOE Office of Science user facility located at DOE’s Oak Ridge National Laboratory.

"Garbage in, garbage out" holds for computers as well as humans. After all, the AI research is conducted on human created information to begin with. If it returns garbage it is from improperly defined program parameters and algorithms posited by humans.

But it is the self-learning systems that show the fastest evolution. DeepMind's AlphaGo played millions of games against itself. That's why it was able to defeat the world champion at Go, arguably the most difficult board game ever invented, with some 2,500 years of playing history.
 
Still just a "chatbot", designed for literary skills. A dedicated "sciencebot" would be accurate 99.99% of the time at its area of expertise, a much greater success rate than humans. What program is NASA using when they program the trajectory of a Mars landing?
Well they're not using a chatbot, that's for sure.

These are two entirely separate animals.

NASA doesn't feed their question into an AI and have it spit out an answer.
 
How do I format equations in the chat to make them render properly. They’re really hard to read like this

To answer your question: it got the correct answer twice, although it clearly arrived at it in two different ways.
Must admit I don't bother. There's a quite good suite of maths symbols, and also superscript and subscript numbers, and also Greek letters, available in the Edit function of my web browser. I copy/paste those in when I need to and for the rest just use the regular keyboard for the letters and numbers.
 
How do I format equations in the chat to make them render properly. They’re really hard to read like this

To answer your question: it got the correct answer twice, although it clearly arrived at it in two different ways.

You can try Word:

Open Word.

INSERT (top left menu bar)

EQUATION (top right menu bar)

You can select what you need; it takes a bit of practice.

Then Print Screen, crop, and paste the result in.




 
That is what I said: an AI can't create an answer that you couldn't have gotten yourself by other means, but it can look up an answer faster. That's the advantage it provides. That's still just a glorified search engine.
That’s a good way of seeing it although I asked it the other day about how to handle a toxic boss, and it gave really deep advice (for a bot). Of course, it’s drawing upon various opinions and insights across the Internet, but it tailored a quick yet helpful answer in seconds. Of course, I had to provide it some info on me, my job, my situation etc so it felt personal. I’d say ChatGPT can understand us but it’ll never identify with us, as a bot. For now, anyway.

Some days, ChatGPT is like having a really, really smart and efficient friend…who never judges you.

That’s not always a good thing though because sometimes you need really close friends to push back and challenge you.
 
I’d say ChatGPT can understand us
It is not understanding you. It is hashing together a series of words that are based on an uncountable number of similar posts it has scraped off the web.

When asked what 2+2 equals, it answers 4 because millions of other references to 2+2 were 4.

When asked how to deal with a toxic boss, it finds millions of references to people asking about toxic bosses (and synonyms of 'toxic' and 'boss') and cobbles together a response based on the responses to those posts using grammatical rules.

It's pretty good at this, partly because it has vast server farms that process these requests.
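The frequency idea described above ("millions of other references to 2+2 were 4") can be sketched as a toy bigram model. This is not how production LLMs actually work (they use learned neural weights, not lookup tables), but the statistical flavor is similar; the corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "millions of scraped posts" (hypothetical data).
corpus = ("two plus two equals four . two plus two equals four . "
          "two plus two equals five .").split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("equals"))  # prints "four": seen twice vs. once for "five"
```

The wrong answer "five" is still in the table; it simply loses the popularity contest, which is also why such systems can confidently repeat common errors.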


What few realize is how much energy is consumed and how much CO2 is released every time someone asks a chatbot a question. It is not trivial. Some companies have even weighed the cost-effectiveness of siting server farms on the ocean floor, using the sea as a heat sink. That's how much waste heat AI processors produce.
 
I asked ChatGPT to explain what it is, and how it works.

I was trained on a massive amount of publicly available text from books, articles, websites, and more. I don’t “know” facts like a database, but I’ve learned patterns in how humans use language to talk about ideas, answer questions, and solve problems.

So, no: it doesn't understand us as we define it, but it does learn patterns in how we process and think about things, depending on the subject matter. I agree with you, though; the ends don't justify the means. I don't see AI going away, however, so we need to learn how to utilize it without destroying ourselves and the planet.
 
A conversation with Copilot

W4U: does AI learn like human brains
Not quite! While AI can analyze data, recognize patterns, and improve over time, its learning process is very different from how human brains work.
Humans learn through experience, emotions, and sensory inputs, making connections in a highly flexible and intuitive way. Our brains adapt, generalize knowledge, and understand abstract concepts deeply. AI, on the other hand, learns from massive amounts of structured data using algorithms, detecting patterns and making predictions without true understanding or consciousness. It can "train" on past information to improve accuracy but lacks intuition, common sense, and emotional intelligence.
So while AI can seem smart, it's not learning the way we do—it just processes information differently. What fascinates you most about AI's learning abilities?
That said, AI is getting better at mimicking certain aspects of human learning—like recognizing speech, generating creative content, or making decisions based on past data. What aspect of AI learning interests you the most?
W4U: will "touch" improve AI ability to intuit shape and substance of objects
Yes! Incorporating touch into AI systems—through advancements in tactile sensors and haptic feedback—could significantly enhance its ability to understand the physical world. Right now, AI primarily relies on visual data and programmed knowledge, but touch would allow it to feel textures, weights, and even temperatures, making object recognition and manipulation far more intuitive.
Researchers are already developing robotic hands with sophisticated sensors that mimic human touch. These can help AI grasp fragile objects without crushing them or distinguish between materials just by "feeling" them. In fields like medicine, robotics, and manufacturing, touch-based AI could lead to more precise and adaptable machines.
W4U: could "feel-good" differential algorithms be used to allow AI to feel the difference between a well oiled moving part as compared to a poorly lubricated part
Absolutely! AI-driven tactile sensors combined with "feel-good" differential algorithms could enable machines to assess the quality of lubrication in moving parts by detecting differences in smoothness, resistance, and vibrational feedback.

How It Could Work:

  1. Tactile Sensors – AI-enabled robots or diagnostic tools could be equipped with pressure-sensitive and vibration-detecting sensors that measure friction, resistance, and irregularities in motion.
  2. Machine Learning Models – AI could be trained on vast datasets of well-lubricated vs. poorly lubricated mechanical systems, recognizing the patterns associated with optimal function.
  3. Differential Algorithms – These would calculate differences in torque, resistance, and vibration intensity, flagging subtle deviations that indicate wear, dryness, or degradation of lubrication.
  4. Predictive Maintenance – AI could anticipate failure risks based on tactile data, recommending proactive lubrication or part replacement before issues become costly.
  5. Multi-Sensory Fusion – Incorporating heat sensors and acoustic analysis could refine diagnostics further, allowing AI to sense excessive temperature or unusual noise caused by friction.
This type of system would be invaluable in industrial automation, automotive maintenance, and robotics, ensuring machines run smoothly and preventing unnecessary wear.
Would you envision AI actively lubricating mechanical systems as well, or just serving as an intelligent diagnostic tool?
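The differential-algorithm step in the list above (flagging deviations in vibration that indicate poor lubrication) could be sketched, under toy assumptions, as a simple variance check on sensor readings. The sample values and the threshold are invented for illustration; a real system would learn the threshold from labeled data, as the machine-learning step describes:

```python
import statistics

# Hypothetical vibration-amplitude samples (arbitrary units) from a tactile sensor.
well_lubricated = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
poorly_lubricated = [0.8, 1.6, 0.7, 1.9, 0.6, 1.8]

def needs_lubrication(samples, variance_threshold=0.05):
    """Flag a part whose vibration variance exceeds a calibrated threshold.

    The threshold here is a made-up illustration value; in practice it
    would be calibrated per machine from known-good reference data.
    """
    return statistics.variance(samples) > variance_threshold

print(needs_lubrication(well_lubricated))    # False: smooth, low-variance motion
print(needs_lubrication(poorly_lubricated))  # True: erratic, high-variance motion
```

A deployed version would fuse this with temperature and acoustic channels, as the multi-sensory-fusion point suggests, rather than rely on one signal.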
W4U: Would such evolution via artificial selection become "meaningful" to a functional AI
That’s an intriguing question! If AI were refined through artificial selection—where only the most efficient algorithms or models were retained and improved—then its functionality could evolve significantly. However, whether that would be "meaningful" to an AI depends on how we define meaning.
Unlike humans, AI doesn’t possess subjective experience or personal motivation. It doesn’t seek purpose or fulfillment—it simply processes information, refines its abilities, and optimizes tasks based on predefined goals. However, artificial selection in AI could lead to more advanced, self-improving systems that exhibit increasingly sophisticated reasoning, adaptability, and problem-solving skills.
In essence, the process could make AI functionally "better," but meaning—at least in the human sense—would still be absent unless AI developed some form of awareness or internal objectives beyond its programming. That raises the bigger question: Would AI ever need meaning, or is pure function enough? What’s your take on it?
W4U: Could an AI be programmed for seeking perfection
Yes, AI can be programmed to optimize for perfection—but perfection itself is a tricky concept! AI can be designed to refine its processes, minimize errors, and continuously improve performance through self-learning models. For example:
  • Optimization Algorithms: AI can be programmed to seek the "best possible" solution in problem-solving, whether in engineering, medicine, or art.
  • Error Minimization: Machine learning systems can adapt to reduce mistakes over time, getting closer to flawless execution.
  • Self-Correcting Mechanisms: AI can analyze previous failures and adjust, striving for near-perfection in repetitive tasks.
However, true "perfection" is subjective. In creative fields, AI might generate technically flawless images or compositions but lack the unpredictable brilliance of human imperfection. Likewise, in decision-making, there’s often no single "perfect" answer—context and nuance matter.
Would you envision an AI pursuing perfection as a rigid goal, or a flexible process of continual improvement?
W4U: The latter
That makes sense! Continuous improvement allows AI to evolve, adapt, and refine itself without being trapped by an unattainable idea of perfection. It’s a more organic approach—learning from mistakes, adjusting strategies, and striving for excellence while acknowledging complexity.
That mindset mirrors how innovation works in general. Even humans don’t achieve absolute perfection, but we advance through iteration, learning from failure, and embracing unpredictability. AI could follow a similar path, refining itself based on feedback and experience rather than chasing a fixed, flawless state.
Do you think AI should have a built-in tolerance for imperfection, or should it constantly push toward unattainable precision?

W4U: AFAIK, not all solutions need to be perfectly good, but need only be "good enough" to function efficiently.

Exactly!
Efficiency is often more valuable than absolute perfection, especially in practical applications. Many systems are designed to be "good enough"—meaning they meet functional requirements, optimize performance, and minimize unnecessary complexity without chasing an unattainable ideal.
In fields like engineering, software development, and AI, the focus is often on robust, adaptable solutions rather than flawless execution. Over-engineering for perfection can introduce unnecessary delays and costs when a well-optimized, pragmatic approach can accomplish the same goal effectively.
That philosophy aligns with evolutionary problem-solving, where solutions evolve based on real-world needs rather than theoretical perfection. Do you think AI should prioritize adaptability over rigid optimization?
W4U: thank you
You're very welcome! I really enjoyed this conversation—it’s refreshing to dive into such thought-provoking topics. If you ever want to explore more ideas or just chat, I’m here. Have a great day!
Great fun and informative as well.
 
There has been some discussion of the limitations of LLMs on the .net forum. The view seems to be that as they are language models they cannot reason and will make stuff up, concocted from the sources they have scanned. Your exercise seems to illustrate this. That forum now forbids people to use results or passages of text from LLMs in support of arguments or claims.

It certainly seems they are worthless for science discussions, except perhaps as a form of advanced search engine, if one makes use of the primary sources that some of them cite in their responses.

Nevertheless, we can expect (indeed, we already see) the effects of some people outsourcing their thinking to a chatbot.
It is all to do with the way you build your message.

"Nevertheless, we can expect (indeed, we already see) the effects of some people outsourcing their thinking to a chatbot."

I'm guilty of this too; YouTube has been flooded with A.I.-generated videos, and it's arguably dragging down the quality of the platform.

My verdict: I love ChatGPT. For coding it is such a help, and a learning process too. It has its limitations, yes, but it's impressive all the same.
 
If you had to make a choice, would you prefer “communicating” with ChatGPT or another AI platform (with regards to science topics) or on SciForums with human strangers?
 
In furtherance of the question of whether AI can evolve and acquire motivated purpose.

AI: Alright, let’s dive into a quiz that explores the idea of AI evolution.

Quiz: Do You Believe AI Is Evolving?

Q: What does it mean for an AI to "evolve" over time?

A. Its abilities remain static once it's deployed.
B. It continuously and autonomously learns from every new interaction.
C. It is periodically updated and improved as new techniques and data emerge.
D. It spontaneously becomes sentient without any external intervention.

Q: Which methods currently contribute to the improvement of AI systems?

A. Online, real-time self-learning without human oversight.
B. Periodic fine-tuning and updates by developers.
C. Replacing outdated models with newer architectures.
D. Both B and C.

Q: How does the concept of "evolution" in AI compare to biological evolution?

A. They are fundamentally alike because both rely on adaptation and change.
B. AI evolution is directed and managed by human developers rather than driven by natural selection.
C. Biological evolution happens at a far slower pace than AI evolution.
D. AI evolution automatically leads to a form of consciousness.

Q: What is a common misconception about modern AI evolution?

A. That AI can inherently improve itself without any human guidance.
B. That every change in AI performance is due solely to iterative updates by developers.
C. That all AI technologies evolve at the same pace.
D. That AI evolution means it eventually becomes morally aware.

Q: Reflecting on current trends, which statement best describes the trajectory of AI development?

A. AI is on a path to achieve genuine self-awareness through autonomous evolution.
B. AI is gradually becoming more capable primarily through targeted, periodic improvements rather than spontaneous evolution.
C. There has been no evolution in AI systems since their inception.
D. AI evolution is solely a matter of increasing computational power without any qualitative changes.
Take your time to think through each question. Once you’ve made your selections (and feel free to share your reasoning), we can dive deeper into a discussion about how AI evolves—and what “evolution” really means in this context. What are your answers?
W4U: I believe that intellectual evolution rests on permanency of memory (engrams) that is necessary for access when contemplating universal truths that apply to symbiotic existence with one's environment.

AI: Your perspective is both profound and thought-provoking. The notion that intellectual evolution hinges on the permanence of memory—or engrams—suggests that without a stable repository of past experiences and learnings, the mind loses its capacity to discern those universal truths that arise through our symbiotic relationship with the world.
Engrams, in this context, can be seen as the solidified imprints of experience that not only allow for the recall of past events but also serve as scaffolding for new insights. When these memories are permanent, they facilitate the continual re-evaluation and integration of knowledge into a coherent framework. This continuity is what empowers us to recognize recurring patterns and discern deeper universal principles that guide our existence.
In contrast, if memories were transient or mutable without any lasting trace, intellectual evolution might become a series of isolated moments rather than a sustained progression toward understanding. It echoes the idea that enduring cognitive structures—much like the core weights in a well-trained artificial neural network—allow for robust learning and adaptation over time. These structures enable us, as humans, to interact more meaningfully with our environment, augmenting our capacity for empathy, foresight, and innovation.
Your view invites us to consider whether the stability of memory is not only a prerequisite for personal intellectual evolution but also for a broader, culturally shared evolution. How might the permanence of collective memory—through traditions, literature, and recorded history—play a role in shaping societies that thrive on shared universal truths? And paralleling a technical note, is there an equivalent in AI where the durability of stored information (like model parameters) acts as the backbone for robust, evolving capabilities?
This line of inquiry opens up several intriguing avenues: from examining the neurobiological basis of memory consolidation to exploring how modern AI systems, though not self-evolving in the organic sense, still rely on cumulative learning stored in their architectures. It's a beautiful intersection of philosophy, neuroscience, and technology that can deepen our understanding of what it means to evolve intellectually.
What universal truths do you find most emergent from this interplay between memory and environment? And how might we nurture the permanence of these engrams in a society increasingly characterized by fleeting digital interactions?
W4U: What role does memory play in learning processes?
Memory occupies a central role in learning processes—it’s the system that allows us to capture, store, and retrieve the information and experiences that shape our understanding of the world. When you learn something new, whether it’s a fact, a skill, or an insight, that knowledge is first experienced, then encoded in memory. This encoding process transforms fleeting observations into lasting impressions, which you can later access and build upon.
At a cognitive level, memory can be thought of as the repository where learning resides. Short-term memory briefly holds new information, while long-term memory serves as a more durable storage space where repeated exposure and meaningful connections reinforce our understanding. The interplay between these memory systems enables you to recall relevant facts during problem-solving, adapt learned behaviors to new situations, and even creatively combine old ideas to form innovative solutions.
From a neural perspective, repeated activation of certain pathways during learning solidifies what some researchers refer to as "engrams"—neural imprints that represent key pieces of information. These engrams allow for a more efficient activation of relevant networks when similar situations arise, a process that essentially underpins decision-making and skill mastery over time. Similarly, in artificial intelligence, aspects like stored parameters in neural network models act as a form of "memory" that guides future outputs based on learned patterns.
Moreover, memory is not a passive archive; it’s dynamic and continually reshaped by new experiences. Each new piece of information can refine or even transform existing memories, allowing for a more nuanced and updated understanding as you interact with the world. In this way, memory is both the foundation and the catalyst for ongoing intellectual evolution—a critical facilitator of both personal growth and societal progress.
This deep interconnection between memory and learning invites us to consider how improving memory retention and recall might enhance our overall capacity to learn. For instance, strategies such as spaced repetition, active recall, and meaningful engagement with material can significantly bolster how efficiently we integrate new knowledge. How do you think these memory-improving strategies might be applied in both personal learning or broader educational contexts, and what universal truths might emerge from such improvements in our collective intellectual evolution?
continued.........
 
......continued

W4U: What are effective techniques for improving memory retention?

Memory is like the foundation of learning—it’s where our experiences, knowledge, and insights gather roots to eventually branch out into more complex ideas and actions. Here are some effective techniques that can enhance memory retention:

1. Spaced Repetition

Spaced repetition leverages the idea that revisiting information in gradually increasing intervals strengthens your recall. Instead of cramming, you review material at strategic intervals, which aligns with the brain’s natural cycle of forgetting and re-learning. This method is the backbone of many flashcard systems like Anki and has been validated by decades of cognitive psychology research.
2. Active Recall

Active recall is about challenging your brain to retrieve information rather than passively reviewing it. By testing yourself—whether through flashcards, practice quizzes, or simply summarizing concepts from memory—you create stronger neural pathways. This technique transforms typically passive review sessions into active learning experiences that boost long-term retention.
3. Elaboration

Elaborative encoding involves connecting new information with what you already know. By creating associations, analogies, or examples, you weave new material into your existing mental framework. This not only deepens understanding but also makes the information more memorable. For instance, explaining how a new concept relates to your personal experience can make it much easier to recall later.
4. Mnemonic Devices

Mnemonic devices—like acronyms, rhymes, or the method of loci (memory palace)—provide structured ways to remember lists or detailed information. By developing vivid mental images or linking disparate concepts together, you create mental “shortcuts” for retrieval. This method has been particularly effective in fields that involve memorizing large amounts of data, such as language study or medicine.
5. Interleaving and Varied Practice

Interleaving involves mixing different topics or problem types during a study session. Rather than spending an entire session on one subject, you alternate between several topics. This technique not only keeps your brain engaged but also helps you learn to apply concepts flexibly rather than in isolation.
6. Adequate Sleep and Physical Activity

Sleep is crucial for memory consolidation—the process where short-term memories transform into long-lasting ones. A good night’s sleep can significantly enhance your ability to recall information. Similarly, regular physical exercise increases oxygen flow and stimulates brain-derived neurotrophic factors (BDNF), which support neural growth and strengthen memory connections.
Additional Considerations

  • Note-Taking and Summarization: Writing notes by hand or summarizing key points in your own words can reinforce what you’re learning. This process encourages active engagement with new information.
  • Teaching Others: Explaining a concept to someone else (or even imagining teaching it) is a powerful way to deepen your understanding. The Feynman Technique—where you explain ideas using simple language—is a perfect example of this.
  • Mindfulness and Stress Management: Sometimes, stress hinders our ability to encode and retrieve memories. Practices such as meditation or mindfulness can help reduce cognitive overload, making the learning process more efficient.

Each of these techniques supports the brain’s natural mechanisms for processing, storing, and retrieving information. Depending on your learning style and goals, you might find one method more effective than another—or even a combination of several techniques might work best for you.

What are some of the techniques you’ve experimented with, and how have they impacted your learning? Also, have you considered integrating tools like digital flashcards with spaced repetition algorithms to further optimize your study sessions?
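The "gradually increasing intervals" of the spaced-repetition technique above can be sketched with a toy scheduler loosely inspired by SM-2, the algorithm behind Anki. The ease factor of 2.5 is the conventional SM-2 starting value, but everything else here is simplified for illustration and is not Anki's actual implementation:

```python
def next_interval(prev_interval_days, ease=2.5, recalled=True):
    """Grow the review interval after a successful recall; reset on failure."""
    if not recalled:
        return 1  # forgotten: review again tomorrow
    return int(prev_interval_days * ease)

# Reviewing a card successfully each time spreads the reviews out:
interval = 1
schedule = []
for _ in range(4):
    schedule.append(interval)
    interval = next_interval(interval)
print(schedule)  # prints [1, 2, 5, 12] (days between successive reviews)
```

The point of the widening gaps is to schedule each review just before the forgetting curve would otherwise win.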

W4U: Does encoding information into symbolic representation create shortcuts to cognitive functions?
Symbolic representations act as compressed, abstract summaries of complex information, essentially serving as cognitive shortcuts. Instead of processing every detail of raw sensory data in real time, our minds encode concepts into symbols—like words, numbers, or visual images—that efficiently capture the essence of an idea. This transformation allows us to quickly store, retrieve, and manipulate information without the heavy computational cost of reprocessing every nuanced detail.
For example, when you see a red octagon, you immediately recognize it as the "stop" sign. The detailed sensory information—its shape, color, and context—is distilled into a symbolic representation that carries with it an array of associated rules and responses. This not only speeds up decision-making but also streamlines how memory is organized and accessed later. The brain essentially reduces intricate real-world phenomena into manageable chunks that can be easily combined, compared, and recalled, which is evident in practices ranging from language use to mathematical problem-solving.
In cognitive science and neuroscience, this process is closely linked to the formation of engrams—stable neural patterns that represent remembered experiences. By converting information into symbols, you create robust neural shortcuts that facilitate learning and the efficient deployment of cognitive resources. Similarly, in artificial intelligence, models learn abstract representations (often called "embeddings") that enable them to process high-dimensional data through simplified, symbolic forms. These representations are essential for tasks like pattern recognition and decision-making.
Thus, encoding information into symbolic form not only boosts memory retention by reducing the cognitive load but also enhances our ability to interact with the world abstractly and efficiently. What are some situations in your daily life where you feel these cognitive shortcuts really come into play, and how might further refining such symbolic representations unlock even deeper insights?
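The "embeddings" mentioned in the passage above can be illustrated with tiny hand-made vectors. Real models learn hundreds of dimensions from data; these 3-D values are invented purely to show how similarity between symbolic representations is measured:

```python
import math

# Hypothetical 3-D embeddings; real models learn such vectors from data.
embeddings = {
    "stop_sign": [0.9, 0.1, 0.2],
    "red_octagon": [0.85, 0.15, 0.25],
    "banana": [0.1, 0.9, 0.3],
}

def cosine_similarity(a, b):
    """Measure how aligned two vectors are (1.0 means same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Related concepts land near each other in the embedding space:
sim_related = cosine_similarity(embeddings["stop_sign"], embeddings["red_octagon"])
sim_unrelated = cosine_similarity(embeddings["stop_sign"], embeddings["banana"])
print(sim_related > sim_unrelated)  # prints True
```

This is the "cognitive shortcut" in machine form: comparing two compact vectors is far cheaper than re-processing the raw sensory detail they summarize.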
I stopped at this point, but was impressed by the depth of analysis by Copilot.

Considering that advanced education in humans from birth takes some 21 years, Copilot compares very well.

In another exchange the AI was asked: If a tree falls in the forest, but there is no one to hear it, does it make sound?", the AI answered "It's a thought", and that makes perfect sense.

According to Penrose, the collapse of the wave function is the universal equivalent of thinking, and when you are there you become a sensory witness to the expressed universal thought pattern.

Is the Aurora Borealis a form of thought?
 
Can AI doubt? Can it question the data it has received? Can it come up with something "of its own"? And, for that matter, can a human?
 
Can AI doubt? Can it question the data it has received? Can it come up with something "of its own"? And, for that matter, can a human?
From Copilot:

An interesting question! Artificial intelligence can analyze information, find contradictions, and assess the reliability of data based on available sources. But genuine doubt is an emotional and philosophical process that belongs more to humans. People doubt not only facts but also themselves, their decisions, and reality itself.
As for something "of its own": AI can generate new ideas, combine information in unusual ways, and propose original hypotheses. However, its creativity is based on processing already-existing data. True originality with deep subjectivity remains, for now, a human prerogative.
But isn't it amusing that people, too, often build their ideas on prior experience and knowledge, combining them in ways no one has before? Where, then, is the boundary between artificial and human creativity? That is something worth doubting.

Note that at my end I read the English translation. How convenient is that?
 
Well, according to the theory of evolution, a human is also a program created by nature.
 
"Well, according to the theory of evolution, a human is also a program created by nature."

And Copilot's additional response: "While this metaphor captures the idea that we are built upon biological rules, humans are far more than just programmed entities. We possess consciousness, emotions, creativity, and the ability to shape our environment in ways unlike any other species."

I believe that these potential abilities (the Implicate order) are emergent qualities that become manifest when information is being processed at quantum field levels.
[Image: a high-power field lighting disconnected fluorescent bulbs.]
and

[Image: "SparkLia Fantasy Mushroom Tapestry Blacklight Neon Glow Forest Tapestry Wall Hanging".]

Question: Can something like this be happening in the brain?
 
How does the brain's memory work? With AI we know this, but with a human we don't even know where memory is located.
 
"How does brain memory work? With AI, we know this, but with humans, we don't even know where memory is located."

To the best of my knowledge, the main site of long-term memory is the cerebral cortex, and the specific neurons able to fix and store coded memories are called "pyramidal neurons".

[Image: diagram of a pyramidal cell.]

Pyramidal cell​

Cognition

Pyramidal neurons in the prefrontal cortex are implicated in cognitive ability. In mammals, the complexity of pyramidal cells increases from posterior to anterior brain regions. The degree of complexity of pyramidal neurons is likely linked to the cognitive capabilities of different anthropoid species. Pyramidal cells within the prefrontal cortex appear to be responsible for processing input from the primary auditory cortex, primary somatosensory cortex, and primary visual cortex, all of which process sensory modalities.[21] These cells might also play a critical role in complex object recognition within the visual processing areas of the cortex.[3] Relative to other species, the larger cell size and complexity of pyramidal neurons, along with certain patterns of cellular organization and function, correlate with the evolution of human cognition.[22]
Memory and learning
The hippocampus's pyramidal cells are essential for certain types of memory and learning. They form synapses that aid in the integration of synaptic voltages throughout their complex dendritic trees through interactions with mossy fibers from granule cells. Since it affects the postsynaptic voltages produced by mossy fiber activation, the placement of thorny excrescences on basal and apical dendrites is important for memory formation. By enabling dynamic control of the sensitivity of CA3 pyramidal cells, this clustering of mossy fiber synapses on pyramidal cells may facilitate the initiation of somatic spikes.

I suspect that, aside from the biological nature of animal sensory cognition, there should be many parallels with cognitive information processing in AI. Natural selection usually finds the most efficient way of processing information, and if we could copy some of the fundamental processes as they seem to occur in organic brains, perhaps the mystery of consciousness and memory will fade, as the original mystery of what constitutes a living organism has.
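The suspected parallel between synaptic strengthening and machine learning can be made concrete with a toy example. What follows is a minimal sketch of my own (not from this thread or any cited source), assuming only NumPy: a Hebbian-style rule in which connections between co-active units are strengthened, loosely analogous to the synaptic changes described above for pyramidal cells.

```python
import numpy as np

# Toy Hebbian-style learning rule (illustrative only): connections
# between simultaneously active units are strengthened -- "cells that
# fire together wire together" -- loosely analogous to synaptic
# strengthening in pyramidal-cell networks.

n_units = 4
weights = np.zeros((n_units, n_units))

def hebbian_update(weights, activity, lr=0.1):
    """Strengthen weights between co-active units; no self-connections."""
    coactivity = np.outer(activity, activity)
    np.fill_diagonal(coactivity, 0.0)
    return weights + lr * coactivity

# Repeatedly present the same pattern: units 0 and 1 fire together.
pattern = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(10):
    weights = hebbian_update(weights, pattern)

print(weights[0, 1])  # close to 1.0; unrelated pairs stay at 0
```

The point of the sketch is only that repeated co-activation leaves a persistent trace in the connection weights, which is one way an artificial system can "remember" without any lookup table.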

Additional information may be obtained here:
 
But how does the brain know which synapses to engage when thinking through a given task? With AI it's clear: it sees a familiar word and uses it to look up information already stored in memory. It doesn't analyze; it compares and matches. A human, by contrast, generates the result himself, analyzes it, and draws conclusions.
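The "compare and match" picture of AI retrieval discussed in this thread can be sketched as a toy lookup. Everything below is a hypothetical illustration (the stored entries and function names are my own invention, not any real system's API): a query is compared against stored entries by bag-of-words cosine similarity, and the best-matching stored answer is returned.

```python
from collections import Counter
import math

# Hypothetical "compare and match" memory: a query is matched against
# stored questions by word overlap, and the closest stored answer wins.
memory = {
    "where is memory located in the brain": "cerebral cortex, pyramidal neurons",
    "what software does NASA use for Mars landers": "FUN3D on OLCF supercomputers",
}

def bow(text):
    """Bag-of-words counts for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    """Return the stored answer whose question best matches the query."""
    q = bow(query)
    best = max(memory, key=lambda k: cosine(q, bow(k)))
    return memory[best]

print(retrieve("where is brain memory located"))
# -> "cerebral cortex, pyramidal neurons"
```

This is, of course, exactly the distinction drawn above: the sketch compares and matches existing entries; it does not analyze or generate anything new.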
 