Can AI learn to understand humor?

It may be British slang. An ocean-going vessel, e.g. a tugboat, is one designed for rough seas and is much bigger and more robust than the average tug. Hence "ocean-going" has become a word denoting a very serious, large-scale example of whatever it is.


So I am mixing my metaphors, absurdly, for the sake of humour, and thereby providing evidence that I am not an AI bot. ;)
Interesting. I could be wrong, but I don't think I've ever encountered Americans using that outside of specific, appropriate contexts.

BTW David Lynch just died. Hopefully, that doesn't alter the tone too much.
 
I'm not even going to bother looking up "ocean-going"; rather, I'm just going to ponder the possible meaning or meanings of that one for a while.


Edit: Is it some sort of weird reference to Moby Dick perhaps?
It might be a phallus-y to infer any subtext in the great white whale's name.

BTW David Lynch just died
Really sorry to hear that. I admired his work. And I'm pretty sure no AI could understand his surrealism - there is no algorithm for David Lynch.
 
“…antisymmetric tensors…”?

No, not just confusion with tension. Ocean-going ballocks is what it was.

I had Gawdzilla on Ignore so am not a good person to ask about him. Though I see he made a brief - and characteristically unilluminating - appearance on the .net site recently. He didn't even manage to get in any references to Purdue, so I don’t know what he was doing there. :biggrin:
He is actually a good guy, pretty smart. Terse perhaps.
 
Really sorry to hear that. I admired his work. And I'm pretty sure no AI could understand his surrealism - there is no algorithm for David Lynch.
Yeah, I think humor and surrealism possess certain qualities that are both difficult to teach and to learn. A sort of intentional, willful incongruence--but taken only so far. If taken too far, or not far enough, it simply does not work. A bit like the uncanny valley.

You can find plenty of examples--or create your own--of AI generated images as responses to purportedly incongruous prompts, but they never seem to "work". They either come across as too deliberate or as kind of half-assed and missing the point.

With the evolution from drum machines to AI that generates drum patterns, the various methods introduced over the years to impart a "human" quality or feel are instructive. Take random variance of tempo, or random insertion of ghost notes and accents, as an example: studies of actual drumming behavior have revealed that such randomness is typically a patterned sort of randomness--unless, of course, you focus upon bad drummers, in which case it's just random and sounds like crap.
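To make the difference concrete, here is a rough Python sketch of the two approaches. The constants and the drift model are invented purely for illustration, not taken from any drumming study: independent jitter per hit versus a slow, mean-reverting drift that wanders off the grid and gets pulled back, which is one simple way to fake "patterned randomness".

```python
import random

# "Humanize" 16 hi-hat hits with timing offsets in milliseconds.
# Independent noise per hit just sounds unsteady; a random walk that is
# gently pulled back toward the grid drifts in a patterned way, more
# like a player pushing and pulling against the beat.
# All constants are arbitrary illustrations.

def white_noise_offsets(n, spread=8.0):
    """Unpatterned randomness: each hit jittered independently."""
    return [random.gauss(0.0, spread) for _ in range(n)]

def drifting_offsets(n, step=2.0, pull=0.3):
    """Patterned randomness: a mean-reverting random walk."""
    offsets, current = [], 0.0
    for _ in range(n):
        current += random.gauss(0.0, step)  # small random step
        current -= pull * current           # drift back toward the grid
        offsets.append(current)
    return offsets

print("random :", [round(x, 1) for x in white_noise_offsets(16)])
print("pattern:", [round(x, 1) for x in drifting_offsets(16)])
```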
 
Do you mean tense? ;)

I’ve never known him make a comment that added to the sum of human knowledge. Pointless one-liners seem to be his thing. But you are more patient than I.
That's his style, I suppose. My patience is finite, like everyone else's, but I make an exception for dishonest posters.
There my patience is wafer-thin.
Luckily, not many of those!
 
OK, back to some RW developments. Shall we return to the topic?

AI with consciousness - but pain-free?

Researchers at the University of Bern have developed a new model for the emergence of consciousness. The model suggests that one day artificial agents, i.e. systems which imitate human thought, could gain consciousness. To regulate our interaction with such systems, the researchers propose an agreement between humans and machines: the ‘Human-AI Deal’.
Over the past decade, the capabilities of artificial agents have increased rapidly. Today, they can solve complex problems, learn languages and are capable of self-improvement. In other words, they exhibit broadly intelligent behaviour. However, whether intelligent artificial agents are in principle capable of developing consciousness remains highly controversial. Depending on the answer, there would be far-reaching consequences for morality and human self-understanding.
Inextricably linked to the possibility of artificial consciousness is the question of how consciousness arises in the human brain. Researchers from the Computational Neuroscience Group at the Department of Physiology of the University of Bern and the University of Amsterdam have now proposed new answers to both issues. Their work has been published in “AI and Ethics”.

Is consciousness required to experience "discomfort" or "reward"?
Instead of starting with the brain, should we not begin with a much simpler example, such as the ability of the slime mold to make choices?
A slime mold is not conscious, yet it can decide between "bad-tasting" foods and "delicious" foods. Slime molds love oats, and when presented with a choice they will always "choose" the oats. What does this ability to choose indicate? An innate form of preference?
And how could this be related to other choices between good and bad, comfort and discomfort, humor and slander?

What is the difference between a slime mold and an AI? Is there something about biochemistry that gives it additional survival-oriented properties?
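For what it's worth, the bare "choosing" behavior is trivially programmable; the open question is whether anything beyond this is going on in the mold. A minimal sketch, with valence numbers made up purely for illustration:

```python
# "Preference" as a programmable valence function. The numbers are
# hypothetical illustrations, not measured data.
FOOD_VALENCE = {
    "oats": 0.9,   # slime molds reliably move toward oats
    "agar": 0.1,   # plain substrate, low value
    "salt": -0.5,  # aversive
}

def choose(options):
    """Pick the option with the highest valence - a 'choice' with no
    awareness behind it, just a lookup and a comparison."""
    return max(options, key=lambda food: FOOD_VALENCE.get(food, 0.0))

print(choose(["agar", "oats"]))  # -> oats, every time
```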
 
I asked DeepSeek if it understood humor...

Yes, I can understand and even generate humor to some extent! While I don't experience emotions or "get" jokes in the way humans do, I’ve been trained on a wide range of humorous content, so I can recognize patterns in jokes, puns, wordplay, and other forms of humor. I can also create jokes or witty responses based on context.

For example, here's a classic pun:
Why don’t skeletons fight each other?
Because they don’t have the guts!


If you have a joke or want me to come up with something funny, feel free to ask!
 
Saw a fascinating YT vid about ChatGPT, and why it can't produce an image of a "full glass of wine". No matter how you tell it that the image it is showing you is not "full", and no matter how much it accepts that the image it produced is not a "full to the brim" glass of wine, it just can't produce the right image. This is because it has no direct reference for a "full glass of wine" (i.e. up to the top of the glass) and doesn't understand the basics of how liquid fills a glass.
Fascinating video:

So, if it can't produce an image of a full glass of wine, what hope does it have for understanding humour? It can regurgitate examples of what it has heard, and possibly even make slight adjustments and changes to existing joke structures, but it will never understand. At least not yet. There may be breakthroughs, or sheer complexity that produces a more general AI, but, for now, it at best has the appearance of intelligence without the understanding.
 
So, if it can't produce an image of a full glass of wine, what hope does it have for understanding humour?
Perhaps it has been trained by the French or Italians, and simply cannot compute such a barbarous concept as filling a glass with wine right to the top. :)
 
So, if it can't produce an image of a full glass of wine, what hope does it have for understanding humour?
That's because it has never drunk with Russians. Here we would have taught it to drink vodka from a faceted glass, to chase it with a pickled cucumber, and to nurse the hangover the next morning. A joke on the topic: in the evening, a German writes in his diary, "Today I drank with Russians; I almost died." The morning entry reads, "I wish I had died yesterday."
 
So, if it can't produce an image of a full glass of wine, what hope does it have for understanding humour?
It produced a full glass of wine as per haute cuisine: wine never gets served to the brim, since that would not allow for swirling.
The highest fill level I found was still not to the brim, but it already looked too full to me.
[attached image: the generated glass of wine]
Have you asked it to create a "funny" full glass of wine? That would be a true test from a direct request.
 
LLM-type AI is really not suited to conceptual modeling, which is the cognitive ability needed for such understanding. It is essentially a stochastic parrot and cannot deeply learn concepts by modeling the world out there. Absent that ability, it has no way to really grasp meaning. Human brains are meaning machines - we grasp not just syntax and statistical patterns, we grasp what things mean in a bigger-picture sort of way. As Penrose and many others have pointed out, a lot of human cognition is not really algorithmic.

An AI might be able to construct a joke, but it won't laugh, not really. As the phrase goes, there is no there there.

As someone (John Searle, maybe?) pointed out, by the time you've gone to the trouble of creating an artificial neural network that can faithfully reproduce what a biological brain does - one that can actually experience qualia, full consciousness, and an intuitive understanding of a world it can interact with - you have really just made an artificial biological brain, something we can already make with about 10-30 minutes of unskilled labor, followed by nine months of huge grocery bills.
 
...by the time you've gone to the trouble of creating an artificial neural network that can faithfully reproduce what a biological brain does, you have really just made an artificial biological brain, something we can already make with about 10-30 minutes of unskilled labor, followed by nine months of huge grocery bills.
You reason like Professor Preobrazhensky from Bulgakov's "Heart of a Dog". Has anyone in the West heard of this brilliant Russian writer?
 
You reason like Professor Preobrazhensky from Bulgakov's "Heart of a Dog". Has anyone in the West heard of this brilliant Russian writer?
Yes. His novel, The Master and Margarita, is a brilliant satire. I will look for "Heart of a Dog." Though I doubt I am similar to a professor who performs such experiments. I hope there was not some translation error in my previous post.
 
An AI might be able to construct a joke, but it won't laugh, not really. As the phrase goes, there is no there there.
I asked, and the AI said that it understood the concept of humor as a form of misdirection and the result of an unexpected "punchline".

If an AI is programmed to select the next "best" word in a sentence, is that not a form of "expectation" which allows for modification in context?

According to Anil Seth, this is very similar to what the brain does.
The brain constantly makes predictions. It uses our knowledge to predict the action that will occur and shapes our expectations based on past experiences and context.
Is that not a programmable equation?
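In a crude sense it already is. A toy sketch of the next-word idea: score candidate continuations, turn the scores into probabilities with a softmax, and treat the ranking as an "expectation". The vocabulary and logit values below are invented; a real model scores tens of thousands of tokens with a learned network:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

context = "Why did the chicken cross the"
candidates = ["road", "street", "punchline", "ocean"]
logits = [4.2, 2.1, 0.3, -1.0]  # hypothetical scores from a trained model

print(f'Context: "{context} ..."')
for word, p in sorted(zip(candidates, softmax(logits)), key=lambda x: -x[1]):
    print(f"  {word}: {p:.3f}")

# On this picture, a punchline is a coherent but low-probability
# continuation - the machine's analogue of a violated expectation.
```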
[attached image: the checker-shadow illusion] The regions marked A and B are the same shade of gray!

But regardless of how hard you try to see these squares as the same shade, your brain won't let you.

Is that a problem of hard-wired "intuition"?
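One way to see how hard-wired it is: a machine with no visual "intuition" can check the claim directly by reading raw pixels. A sketch using Pillow, where the filename and coordinates are placeholders that would need pointing at the actual illusion image:

```python
from PIL import Image  # pip install pillow

# Sample one pixel inside each marked square of the checker-shadow
# illusion and compare the raw values. Filename and coordinates are
# hypothetical - adjust them to the real image.
img = Image.open("checker_shadow.png").convert("L")  # 8-bit grayscale

a = img.getpixel((120, 140))  # somewhere inside square A
b = img.getpixel((180, 220))  # somewhere inside square B

print(f"A = {a}, B = {b}, identical: {a == b}")
# The machine reports the values as equal; the visual system, busy
# compensating for the cast shadow, insists they differ.
```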
 
Yes. His novel, The Master and Margarita, is a brilliant satire. I will look for "Heart of a Dog." Though I doubt I am similar to a professor who performs such experiments. I hope there was not some translation error in my previous post.
"Мастер и Маргарита" довольно таки сложное для понимания. Чтобы его хорошо понимать нужно достаточно хорошо знать христианство. Булгаков был сыном профессора в Духовной академии.
 
Yes. His novel, The Master and Margarita, is a brilliant satire. I will look for "Heart of a Dog." Though I doubt I am similar to a professor who performs such experiments. I hope there was not some translation error in my previous post.
"Собачье сердце" есть ещё фильм, но он на русском.
 
"Why did the chicken cross the road?"

It was in the way.

Why was there no crosswalk?
 