
To What Extent Can AI Derive/Interpret Meaning?

Ex-User (13503)

Well-Known Member
Local time
Today 10:54 AM
Joined
Aug 20, 2016
Messages
575
How likely is an AI to correctly guess the meaning of a novel symbol? If that's possible, then what about symbols that have multiple and/or layered meanings? And what types of meaning? I suppose the act of producing meaning should be on the table as well. How could that be accomplished by AI?

I'm sure there's been work done on this, but I don't know where to start because it's not my field.

Pre-emptive thanks for any replies.
 

Cognisant

Prolific Member
Local time
Yesterday 10:54 PM
Joined
Dec 12, 2009
Messages
10,593
How likely is an AI to correctly guess the meaning of a novel symbol?
How likely are you?

Any kind of image recognition software is going to need to be trained before it's capable of doing anything, and we're no exception. The difference is that human infants are born able to recognize faces (despite having no idea what a face is or why it's important), because this "training" has occurred over many generations of infants that were more likely to survive if they established an emotional bond with their carers.

Indeed it's a misnomer to say a newborn can "recognize" faces: it's not recollecting any kind of memory, it's biologically hardwired to smile at any face-like thing it sees. This kind of predetermined self-bootstrapping is incredibly powerful because it lets us take shortcuts in the learning process. Faces are important, and by being predisposed to focus on them a human infant learns how to read expressions MUCH faster than an AI with equivalent processing power.

And there's no reason to think this is limited to infants...
(Isaac Arthur, "Machine Rebellion", 9:02-9:58)

Currently AI can do any one specific thing a human can do, but making an AI that can do everything a human can do is impractical. And even though an AI can learn anything a human can learn, an AI doesn't have that genetic heritage of instinctual short-cuts helping it.

Humans are terrifyingly sophisticated machines.
 

Black Rose

An unbreakable bond
Local time
Today 3:54 AM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
Humans have a connection from the vision center to the language center that other apes do not have. And since humans have symbols imposed on them from the outside, what is actually happening is what Noam Chomsky calls universal grammar, which is a hierarchical understanding of phrases together (the vision-language connection). When talking to infants, language learning is all about acquisition. The hierarchy needs to be filled in, so any new symbol that is understood comes from the context it is used in, a new way of filling in a hierarchy that has already been acquired.

Since it is only a matter of filling in a hierarchy with context, we could make an A.I. like this and it would learn new symbols in the way specified. Other cognitive problems are amenable to the same process.
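A minimal sketch of that "fill in the hierarchy from context" idea, using nothing but co-occurrence counts. The tiny corpus, the nonsense symbol "blorp", and the overlap score are all invented for illustration:

```python
# Guess what a novel symbol means purely from the contexts it appears in.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the blorp sat on the mat",   # "blorp" is the novel symbol
]

def context_counts(word, sentences, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def overlap(a, b):
    """Crude similarity: shared context words, counted conservatively."""
    return sum(min(a[w], b[w]) for w in set(a) & set(b))

novel = context_counts("blorp", corpus)
for known in ("cat", "dog", "mat"):
    print(known, overlap(novel, context_counts(known, corpus)))
# "blorp" scores highest against "cat" and "dog": its context slots it into
# the same place in the hierarchy, which is all this toy model can know.
```

Real systems use dense embeddings rather than raw counts, but the principle that a new symbol inherits meaning from the company it keeps is the same.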

(for intrinsic motivation A.I. needs a limbic system to mediate reinforcement learning)
 

Ex-User (13503)

Well-Known Member
Local time
Today 10:54 AM
Joined
Aug 20, 2016
Messages
575
I expect one approach would be for an AI to identify the patterns and structures in known languages and look for them in a novel presentation to identify meaning therein. I'd also expect such patterns to reflect how linguistic information is processed in the human brain, which would render meaningless anything introduced by the private language argument. Chomsky looks interesting enough.
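A very crude first step toward that, sketched under invented assumptions (tiny made-up reference samples and letter frequencies only; real work would use n-gram and syntactic statistics over large corpora):

```python
# Compare the letter-frequency profile of an unknown string against profiles
# built from reference samples of known languages.
from collections import Counter

def letter_profile(text):
    letters = [c for c in text.lower() if c.isalpha()]
    total = len(letters) or 1
    return {c: n / total for c, n in Counter(letters).items()}

def distance(p, q):
    # sum of absolute frequency differences over the union of letters
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

references = {
    "english-ish": letter_profile("the quick brown fox jumps over the lazy dog and then sleeps"),
    "latin-ish": letter_profile("domine miserere nobis et lux perpetua luceat eis"),
}

unknown = "dona nobis pacem domine"
scores = {name: distance(letter_profile(unknown), prof) for name, prof in references.items()}
print(min(scores, key=scores.get), scores)
# With samples this small the margins are thin, but the unknown string lands
# nearer the latin-ish profile. Matching richer structure (n-grams, word order,
# syntax) is where recognisable linguistic patterns really start to show.
```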

So then, in order to find, say, a novel cryptographic technique, we'd need to examine modes of communication (or interpreting the world) that transcend or circumvent linguistics. Pareidolia and instinct seem like good examples to start with, but what are they, really? Within the Jungian paradigm, Ni? Other paradigms? Calling it genetic just seems like a massive overgeneralization.
How likely are you?
More likely than most. But why? If you were to randomly draw something, ascribe a meaning to it, and dare me to guess it, I'd obviously fail, but some factors in its creation or design surely correlate with its meaning, such as how the way one writes a letter G points to meaning in graphology.
 

Minuend

pat pat
Local time
Today 11:54 AM
Joined
Jan 1, 2009
Messages
4,142
I'm not updated on AI at all, so ignore this unless you're extremely bored.

I think AIs will be able to do this easily, if they aren't already able to. Human intelligence and understanding are... overrated. We can program and create AI with a larger library of knowledge, and that will have a significant impact. Most people would have little or no understanding of a symbol already; if an AI has all the knowledge of or surrounding the topic... well....

I can very easily see AI being superior in a lot of fields, as humans usually need a lot of dedication to reach a competent level. AIs can easily use a vast database of knowledge and calculate. Humans have huge biases toward knowledge or perspectives they have recently experienced. They see things in terms of how they view the world.

I think we're also getting closer to uploading human brains, which will create a lot of new problems. https://en.wikipedia.org/wiki/Connectome
 

Ex-User (9086)

Prolific Member
Local time
Today 10:54 AM
Joined
Nov 21, 2013
Messages
4,758
What kind of symbols are we talking? Can you give examples?

We don't have general AI, but a neural network or an expert system can be trained to recognise any symbol in terms of already-known symbols, which isn't any less than what a human would be capable of when presented with a novel input.

Depending on its training a system can order graphical, sound or other input and assign it to categories. It may learn to store related inputs closer and give them more connections.
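A toy version of "assign inputs to categories and store related inputs closer", using plain k-means on made-up 2-D feature vectors; the data and the choice of two clusters are invented for illustration:

```python
# Cluster unlabeled feature vectors so that related inputs end up together.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                            + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # move each centroid to the mean of the points assigned to it
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# two loose blobs of "inputs"; the algorithm never learns what either blob is
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(points, k=2)
print(clusters)
# related inputs land in the same cluster, i.e. they are "stored closer
# together", but nothing here says what either category means
```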

Current machine learning systems can be trained to find patterns in the training data. The limitation of contemporary systems is that they cannot reliably switch their mode of operation from probabilistic logic to two-valued logic. Any amount of training data that is similar but has a different result will tend to destroy the value network.

For example a system can be taught to compose sentences and will get 99% of the words right, but the grammatical order will be nonsense, or it can make grammatically reasonable sentences that are too restricted in content or message. It can try to hold a conversation, but will tend to lose track of a topic, or will stick to one topic too much, etc. If it knows enough to have a good response to anything, it won't be specialised enough to consistently choose the specific response that is required, as the bias from its wide domain will cause it to diverge.
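A minimal illustration of that drift, assuming nothing more than bigram statistics over an invented training text; each generated word looks only one word back, so local fluency survives while the topic does not:

```python
# A bigram Markov chain: locally plausible word pairs, no global topic.
import random

text = ("the telescope can view the stars the spaceship can fly to the moon "
        "the stars are far away the moon is near the telescope is on a hill")

words = text.split()
bigrams = {}
for a, b in zip(words, words[1:]):
    bigrams.setdefault(a, []).append(b)

random.seed(1)
word = "the"
out = [word]
for _ in range(12):
    word = random.choice(bigrams.get(word, ["the"]))
    out.append(word)
print(" ".join(out))
# Every adjacent pair in the output occurred in the training text, so it reads
# as locally fluent, yet nothing ties the whole string to one topic or message.
```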

If you teach the system "a spaceship is great because you can use it to fly closer to (view) the stars" and "a telescope is great because you can use it to view the stars", and it didn't recognise that a spaceship is a vehicle and didn't already connect movement verbs to vehicles, then after it learns "a spaceship can be used to fly around the moon" it will tend to make the mistake of saying "a telescope can be used to fly around the moon".

If it made a special node "vehicles" in its network, it can instead make the mistake of composing the sentence "a jet can be used to fly around the moon". It is actually quite unrealistic to expect a network to learn such a pure node as "vehicles" from random data. A far more likely error is that the network will make a node "transport" or "transport and kittens", where it will store horses, cars, aircraft, spaceships and sci-fi teleporters, and will be too biased by My Little Pony, cat pictures or other random crap to refine the transport node into machine, space, air, animal, ground, fantasy and so on.
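The failure mode in the last two paragraphs can be reproduced with a toy model that tracks nothing but which objects co-occur with which verb phrases; the facts and the transfer rule below are invented for illustration:

```python
# Over-generalisation from shared context, with no notion of "vehicle" vs "instrument".
from collections import defaultdict

facts = [
    ("spaceship", "fly closer to the stars"),
    ("spaceship", "view the stars"),
    ("telescope", "view the stars"),
    ("spaceship", "fly around the moon"),
]

uses = defaultdict(set)
for obj, action in facts:
    uses[obj].add(action)

def guess_new_uses(target):
    """Transfer actions from any object that shares at least one use with `target`."""
    guesses = set()
    for other, actions in uses.items():
        if other != target and uses[target] & actions:
            guesses |= actions - uses[target]
    return guesses

print(guess_new_uses("telescope"))
# -> includes "fly around the moon": similarity of context is the only signal
#    the model has, and here it points the wrong way.
```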


Simply put, the learning algorithms create networks that are too environment-specific; exposing them to more environments only makes them less accurate with respect to each specific area. This suggests that creating networks is likely part, not the whole, of what is required to make more generalised solutions, though with greater supervision, data selection and value assignment there is good room for improvement of this technology.
 

QuickTwist

Spiritual "Woo"
Local time
Today 4:54 AM
Joined
Jan 24, 2013
Messages
7,182
Location
...
Currently AI can do any one specific thing a human can do, but making an AI that can do everything a human can do is impractical. And even though an AI can learn anything a human can learn, an AI doesn't have that genetic heritage of instinctual short-cuts helping it.

I'm throwing my hat in with this answer.

Humans are still way more complicated than any AI.
 

Cognisant

Prolific Member
Local time
Yesterday 10:54 PM
Joined
Dec 12, 2009
Messages
10,593
Absolutely, and that's essentially the problem: it's not that we can't make thinking machines, but rather that the standards we're holding them to (the capabilities of the average human) are absurdly high.
 

Pizzabeak

Banned
Local time
Today 2:54 AM
Joined
Jan 24, 2012
Messages
2,667
Pretty likely, given the algorithm. It would just associate and branch off. That's why they say the biological organism is like a computer itself, with the motherboard being like a brain. You can't tell yourself the meaning you derive. You only know so much as you're allowed to know, given the tools you've received to use a system. Some A.I. would suck more than others until the ultimate one has been developed. That's like making a security bot and telling it not to detect objects on its event based photon pixel axis grid board space. This question isn't deep, and A.I. can know what it means to be king at chess. So they could get so cognizant they'll try to overtake humanity to kill us all before we can enter the supercomputer itself.
 

Ex-User (14663)

Prolific Member
Local time
Today 10:54 AM
Joined
Jun 7, 2017
Messages
2,939
Depends on what we mean by "meaning" I guess. If you have a sentence like

"It felt like grass under my feet"

you can easily make an AI which can tell you roughly what this sentence is communicating: the subject "I" is referring to some feeling in their feet etc. But in terms of qualia, the machine itself will not have a clue what this sentence "means", since, in order to have that knowledge, you need an experience of having walked on grass – in particular the human experience of walking on grass.
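A minimal sketch of the "roughly what the sentence is communicating" part, using spaCy's dependency parse (assuming spaCy is installed and its small English model `en_core_web_sm` has been downloaded). The parse exposes who/what structure, not the felt quality:

```python
# Print each token's grammatical role and what it attaches to.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("It felt like grass under my feet")

for token in doc:
    print(f"{token.text:6} {token.dep_:10} -> {token.head.text}")

# From relations like these a program can work out that "felt" is the main verb,
# that "grass" is what the feeling is compared to, and that the feet are the
# speaker's ("my"), but nothing in it encodes what walking on grass feels like.
```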

So far we don't know
1) whether you can make a machine have qualia
2) whether cognition corresponds to computation

both of which need to hold in order to claim that machines can interpret meaning the same way we do.
 

aiyanah

_aded
Local time
Today 11:54 AM
Joined
Oct 7, 2018
Messages
233
slim to none.
without death as a marked end-point i cannot see how an AI can derive meaning from anything.
function, yes, the AI can define anything's function, but not meaning; might require neurochemistry for that.

you know what, maybe if it looks like the MAGI system from NGE it'll be able to derive meaning, or the "plugged in" humans from neal asher's novels.
 

Artsu Tharaz

The Lamb
Local time
Today 7:54 PM
Joined
Dec 12, 2010
Messages
3,134
I think that anything covered by Jungian type, which includes meaning, can be made functional in an AI thingo.

It's what comes after that that would be rather... difficult.
 

Ex-User (14663)

Prolific Member
Local time
Today 10:54 AM
Joined
Jun 7, 2017
Messages
2,939
Just read a very good article on this

[...] it’s a very different task to recognize items humans have labeled with the letters f l o w e r compared to understanding what a human thinks of when they see a flower. To a human being, a flower is the often visually appealing part of a plant that is attempting to reproduce. Some people like being given flowers as a gesture of affection or care. An artificial intelligence whose entire existence consists of just a playing field of meaningless shapes and colours (that we call flowers) has never been given the chance to learn anything about humanity, the universe, or biology. Therefore it also can’t have any idea of what these concepts are.

Of course we can try to make it have an idea. We can train an artificial intelligence to recognize the words “humanity”, “the universe” and “biology” and produce sentences like “I know these things” when asked about them, but they are still essentially meaningless to the AI itself. It’s like teaching a parrot to repeat sentences that describe quantum physics; it doesn’t make the parrot a physicist. In many ways, parrots are way smarter than any AI today. In AI research, we really have no actual idea of how to teach an AI to understand complex human-world issues yet.
https://liljat.fi/2017/11/humanoid-robot-sophia-sad-hoax-harms-ai-research/
 

Pizzabeak

Banned
Local time
Today 2:54 AM
Joined
Jan 24, 2012
Messages
2,667
If you teach the system "a spaceship is great because you can use it to fly closer to (view) the stars" and "a telescope is great because you can use it to view the stars", and it didn't recognise that a spaceship is a vehicle and didn't already connect movement verbs to vehicles, then after it learns "a spaceship can be used to fly around the moon" it will tend to make the mistake of saying "a telescope can be used to fly around the moon".
This is one of the best analogies I've seen so far from this angle. You wouldn't even teach it that, because that's not what spaceships are for. So in some instances it wouldn't create a node there, or would just associate that with other networked pattern connections, to define it more and differentiate it. The engineers working at labs such as NASA would then be able to change the programs to learn it better if it can't get it right with the machine learning they gave it. So it could depend on the first definition or insight of it that it formed. It would already have those, and that'd be a common programming error video games make: if you were making one and testing it and couldn't get it to work right or how you want, that's when you'd find bugs that need patches. It wouldn't be a glitch.


If it made a special node "vehicles" in its network, it can instead make the mistake of composing the sentence "a jet can be used to fly around the moon". It is actually quite unrealistic to expect a network to learn such a pure node as "vehicles" from random data. A far more likely error is that the network will make a node "transport" or "transport and kittens", where it will store horses, cars, aircraft, spaceships and sci-fi teleporters, and will be too biased by My Little Pony, cat pictures or other random crap to refine the transport node into machine, space, air, animal, ground, fantasy and so on.
It isn't that cut and dried. Or that literal.
Modern AI, including Kurzweil's concept of the singularity, would likely be more complex than that, and more realistic. Its learning capabilities would be only mildly impressive, if not at all what would be expected. It should be able to learn those fast and understand that aircraft aren't for space, so those definitions aren't that new and don't require much workaround to compartmentalize or fix. For example, you could have an aircraft transporter explode and the diagnostics won't always automatically know the cause. Most coding languages aren't that different, and when you learn one you can learn the others faster because they're similar. So you'd have to give it a "meaning" category, where it could take and compile all the necessary components of information into a main cloud to store and define as important. So electricity or static isn't all-important in neuronal connections or for computer chips. The only difference is they don't have a similar component, cerebrospinal fluid, necessary for their bit-processing mechanics.
 

TransientMoment

_ _ , - _ , _ -
Local time
Today 4:54 AM
Joined
Aug 30, 2017
Messages
100
So far we don't know
1) whether you can make a machine have qualia
2) whether cognition corresponds to computation
I think these can be answered.
1) Currently, humans only have the capability of creating machines that manipulate the physical world - protons, neutrons, electrons, and such particles. The motion of these things on their own is meaningless, which is evidence to me that qualia must supersede the physical. If so, then a machine cannot be made that has qualia.
Adding to this, current machines aren't designed in any way to "have" qualia anyways: Computers are merely simulators. As a favorite quote of mine goes, "You can simulate water in a computer, but you can't make it wet."

2) I'm not sure what you mean by cognition, but I think you're referring to the ability to possess an "idea" - one by which you have the qualia "within the mind" rather than through some "external" senses/stimuli. Am I right?
In that case, I think it's quite evident that computation *alone* does not result in cognition. For current computers, what happens is merely the shuffling around of electrons. The locations (and motions) of these electrons are meaningless. (After all, we can abstract this machine by noting that it is designed to have specific paths of travel for electrons, but that these paths are independent of the actual hardware or tangible manifestation. Like making figure 8s with clouds in the sky, they could be made bigger, smaller, elongated, and morphed in ugly ways that ensure that any "special" arrangement that might otherwise result in cognizance doesn't happen, even though the computational ability of the computer is retained. In other words, I can design a circuit board any way I want as long as the electrons go to the correct parts.)
In fact, we can go on to say that the entirety of the computer and all its operations are perfectly meaningless. They only possess meaning when something is projected onto their monitors and then examined by some external viewer. This is not because somehow meaning can only be obtained through an external viewer, but because the fundamental design of the machine (the computer) was only meant to be meaningful in its interaction with (that is, in its conveying of information to) human beings.
Like I said, computers are merely simulators. AI will never transcend the limitations of the machine in which it runs, so unless it is reproduced in a "better" machine, it won't be any better.

Now obviously, if someone believes that the physical world is all that exists, then these questions form a concerning dilemma. But, at least in my opinion, you have to really want that conclusion (that is, the conclusion that the world is only physical) to ignore the fact that the motion of electrons is completely meaningless on its own.
But Want is a powerful thing, no?

My implied stance here is that meaning means correlating one experience with another and ultimately back to some "experience" - a word I interpret as relating to qualia.
 