Can AI learn to understand humor?

It may be British slang. An ocean-going vessel, e.g. an ocean-going tugboat, is one designed for rough seas and is much bigger and more robust than the average tug. Hence "ocean-going" has come to denote a very serious, large-scale example of whatever it is.


So I am mixing my metaphors, absurdly, for the sake of humour, and thereby providing evidence that I am not an AI bot. ;)
Interesting. I could be wrong, but I don't think I've ever encountered Americans using that outside of specific, appropriate contexts.

BTW David Lynch just died. Hopefully, that doesn't alter the tone too much.
 
I'm not even going to bother looking up "ocean-going"; rather, I'm just going to ponder the possible meaning or meanings of that one for a while.


Edit: Is it some sort of weird reference to Moby Dick perhaps?
It might be a phallus-y to infer any subtext in the great white whale's name.

BTW David Lynch just died
Really sorry to hear that. I admired his work. And I'm pretty sure no AI could understand his surrealism - there is no algorithm for David Lynch.
 
“…antisymmetric tensors…”?

No, not just confusion with tension. Ocean-going bollocks is what it was.

I had Gawdzilla on Ignore so am not a good person to ask about him. Though I see he made a brief - and characteristically unilluminating - appearance on the .net site recently. He didn't even manage to get in any references to Purdue, so I don’t know what he was doing there. :biggrin:
He is actually a good guy, pretty smart. Terse, perhaps.
 
Really sorry to hear that. I admired his work. And I'm pretty sure no AI could understand his surrealism - there is no algorithm for David Lynch.
Yeah, I think humor and surrealism possess certain qualities that are both difficult to teach and to learn. A sort of intentional, willful incongruence--but taken only so far. If taken too far, or not far enough, it simply does not work. A bit like the uncanny valley.

You can find plenty of examples--or create your own--of AI generated images as responses to purportedly incongruous prompts, but they never seem to "work". They either come across as too deliberate or as kind of half-assed and missing the point.

With the evolution of drum machines to AI which creates drum patterns, the various methods introduced over the years to impart the "human" quality or feel are instructive. Take random variance of tempo or random insertion of ghost notes and accents, as an example: studies of actual drumming behaviors have revealed that said randomness is typically a patterned sort of randomness--unless, of course, you focus upon bad drummers, in which case it's just random and sounds like crap.
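As a very rough sketch of that point (Python; every number below is invented, and a simple AR(1) drift is only a crude stand-in for the 1/f-type correlations those drumming studies actually report), compare jitter with no memory against jitter that remembers its last value:

import random

BPM = 120
STEP_MS = 60_000 / BPM / 4    # sixteenth-note grid at 120 BPM
SIGMA_MS = 6.0                # jitter size in milliseconds (arbitrary)
RHO = 0.9                     # how much each offset remembers the last (arbitrary)

def white_jitter(n):
    # Independent deviations: no memory at all. This is the "bad
    # drummer" -- pure randomness that just sounds sloppy.
    return [random.gauss(0, SIGMA_MS) for _ in range(n)]

def correlated_jitter(n):
    # AR(1) deviations: each offset carries part of the previous one,
    # so the timing drifts and recovers rather than twitching.
    out, prev = [], 0.0
    innovation = SIGMA_MS * (1 - RHO ** 2) ** 0.5
    for _ in range(n):
        prev = RHO * prev + random.gauss(0, innovation)
        out.append(prev)
    return out

def humanize(jitter):
    # Nominal grid time plus the deviation for each hit.
    return [round(i * STEP_MS + d, 1) for i, d in enumerate(jitter)]

print("white:     ", humanize(white_jitter(8)))
print("correlated:", humanize(correlated_jitter(8)))

Both streams have the same overall spread; only the structure of the randomness differs, which is the whole point.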
 
Do you mean tense? ;)

I’ve never known him to make a comment that added to the sum of human knowledge. Pointless one-liners seem to be his thing. But you are more patient than I am.
That's his style, I suppose. My patience is finite, like everyone else's, and I make an exception for dishonest posters.
There my patience is wafer-thin.
Luckily, not many of those!
 
OK, back to some RW developments. Shall we return to the topic?

AI with consciousness - but pain-free?

Researchers at the University of Bern have developed a new model for the emergence of consciousness. The model suggests that one day artificial agents, i.e. systems which imitate human thought, could gain consciousness. To regulate our interaction with such systems, the researchers propose an agreement between humans and machines: the ‘Human-AI Deal’.
Over the past decade, the capabilities of artificial agents have increased rapidly. Today, they can solve complex problems, learn languages and are capable of self-improvement. In other words, they exhibit broadly intelligent behaviour. However, whether intelligent artificial agents are in principle capable of developing consciousness remains highly controversial. Depending on the answer, there would be far-reaching consequences for morality and human self-understanding.
Inextricably linked to the possibility of artificial consciousness is the question of how consciousness arises in the human brain. Researchers from the Computational Neuroscience Group at the Department of Physiology of the University of Bern and the University of Amsterdam have now proposed new answers to both issues. Their work has been published in “AI and Ethics”.

Is consciousness required to experience "discomfort" or "reward"?
Instead of starting with the brain, should we not begin with a much simpler example, such as the ability of the slime mold to make choices?
A slime mold is not conscious, yet it can make decisions between "bad-tasting" foods and "delicious" foods. Slime molds love oats and, when presented with a choice, will always "choose" the oats. What does this ability to choose indicate? An innate form of preference?
And how could this be related to other choices between good and bad, comfort and discomfort, humor and slander?

What is the difference between a slime mold and an AI? Is there something about biochemistry that gives it additional survival-oriented properties?
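To make that concrete with a toy (Python; the foods and the valence numbers are pure invention, not biology), a fixed lookup table already produces consistent "choices" with nothing like experience behind them:

# Hard-wired "tastes" -- invented numbers, not measured biology.
VALENCE = {"oats": 1.0, "agar": 0.2, "salt": -0.5}

def choose(options):
    # Pick whichever option has the highest built-in valence,
    # the way a slime mold reliably grows toward the oats.
    return max(options, key=VALENCE.get)

print(choose(["agar", "oats"]))   # "oats", every time

If a slime mold's preference is no more mysterious than that, the interesting question is what, if anything, biochemistry adds beyond the lookup.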
 
I asked DeepSeek if it understood humor...

Yes, I can understand and even generate humor to some extent! While I don't experience emotions or "get" jokes in the way humans do, I’ve been trained on a wide range of humorous content, so I can recognize patterns in jokes, puns, wordplay, and other forms of humor. I can also create jokes or witty responses based on context.

For example, here's a classic pun:
Why don’t skeletons fight each other?
Because they don’t have the guts!


If you have a joke or want me to come up with something funny, feel free to ask!
 