Placebo Effect

Vrecknidj
Prolific Member · Michigan/Indiana, USA
First, a few links, in case you're unfamiliar.

Wikipedia

Scientific American (excerpt only)

The Skeptic's Dictionary

An article in Wired magazine

Most of you, I assume, are already familiar with the placebo effect. So, here's my question.

If the placebo effect is neurological (or, if all psychological phenomena are ultimately reducible to neurological phenomena), shouldn't the placebo effect work on artificially intelligent beings as well?

Dave
 
Joined May 21, 2010 · 15 messages
Not necessarily. We humans have a remarkable propensity for fooling ourselves into believing strange things (something I think most computer programmers would like to avoid when designing an AI). Also, genuine positive responses to placebos depend heavily on the autonomic nervous system, which has nothing to do with intelligence.
 

RubberDucky451
Prolific Member · California
I've heard that broken pedestrian crossing buttons in New York now serve as placebo buttons.
 

Cognisant
Prolific Member
Although the human brain is particularly vulnerable, in theory any cognition engine can fall prey to disinformation and obfuscation, and with a sufficient understanding of said engine's workings there's no reason why one couldn't take advantage of this.

As a general rule of thumb, anything advanced enough to reliably pass the Turing test can be hypnotised. It's simply a matter of causing significant cognitive dissonance, driving it into a highly impressionable state from which you can coerce it into believing (and thus doing) anything you want.

Or in terms that are easier to understand: you present it with as much perspective-broadening information as possible, perhaps using open sentences that could be interpreted many ways (read the Bible, or any "holy" book for that matter, for examples), until its perspective is so broad (i.e. it's so confused) that any reasonable-sounding information you present will be accepted as fact. Then you build whatever self-supporting conceptual/ideological framework in its mind will prompt it to do what you want it to do.

Cultists, and to some extent evangelists, use such tactics, but they're charlatans compared to the mind-bending wizardry of proper psychology-based hypnotherapists. The former rely mainly upon emotional manipulation, whilst the latter use the more objective method I just explained for the purpose of helping people (as opposed to robbing them): for example, convincing a smoker they don't feel cravings for cigarettes, then reinforcing this belief with the fact that the smoker has been "hypnotised". Whenever the smoker does feel an urge to smoke, they remember they've been hypnotised into not feeling it, and from this faith in the hypnotist's power over them, they suppress/forget the urge.

Of course the hypnotist has no such power. As I've explained before, it's like how a monster in someone's nightmare becomes more powerful as they become more afraid of it, except hypnotists use trust and faith instead of fear.
Although, admittedly, being afraid of a hypnotist's supposed mystical/psychiatric powers makes them that much more effective.

You are getting sleepy, soo sleepy :D
 

y4r5xeym5
Lurker Extraordinaire · Texas
Eh, it's actually possible (mainly in prototype languages being used to develop intuition patterns in AI) to trick an AI, but all it would really do is shatter most of the program's logical foundation for thinking and analyzing.

And that would be a nightmare. :storks:
 

Cognisant
Prolific Member
In a way, that's the whole idea.

But you're right, I'm assuming said AI can recover from extensive "confusion". It's a fair assumption, imo: in the real world, where one is constantly assaulted by fuzzy data, an inflexible construct would quickly succumb, perhaps behaving not unlike an autistic child, while one that's too flexible would be vulnerable to something like schizophrenia.
 

Vrecknidj
Prolific Member · Michigan/Indiana, USA
Although the human brain is particularly vulnerable, in theory any cognition engine can fall prey to disinformation and obfuscation, and with a sufficient understanding of said engine's workings there's no reason why one couldn't take advantage of this.

As a general rule of thumb, anything advanced enough to reliably pass the Turing test can be hypnotised. It's simply a matter of causing significant cognitive dissonance, driving it into a highly impressionable state from which you can coerce it into believing (and thus doing) anything you want.
This is pretty much what I was getting at. If psychology is neurology, if Turing (or whoever came after him) is more right and Brentano is less right, then, since the placebo effect actually works, and it works with one kind of information processor, it ought to work with other kinds of functionally similar information processors.

Dave
 

Inappropriate Behavior
is peeing on the carpet · Behind you, kicking you in the ass
Doesn't the placebo effect require a certain level of faith? A belief that it will work, which fools the mind into doing what needs fixing. It's like the old biblical adage: God helps those who help themselves. This works because the belief/trust motivates the individual to do what needs to be done and helps remove fears about all the things that might go wrong. The power of positive thinking, etc.

For such a thing to be programmed into an AI would, I'd guess, only be needed if said AI suffered from some sort of fear of failure or of things going wrong. Does anyone really want to design an AI with human emotional foibles? I mean, besides Cognisant..
 

JoeJoe
Knifed · Germany
Is it at all possible to confuse an AI? Has it been done yet? Won't an AI, when given information, automatically label that information as false if it does not fit with what it knows so far, or else build a series of "truth systems" where, for each piece of information, the truth is dependent on the truth of other information?
And what would be achieved by inducing a placebo effect in an AI?
If our technology gets far enough that we can put an AI into an organic body (replace a cat's brain with AI :eek:) and we want to give it placebo medicine, I suspect the AI will constantly check whether the information it got corresponds with the information it gets from the body. I suppose that if we can already artificially process ALL the information that is processed by a cat brain, we could also give the command "Heal faster!", and the AI would be able to consciously send the neurological signals, which as yet only work unconsciously, that increase healing speed.
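A toy Python sketch of the "truth systems" idea, with every claim, number, and propagation rule invented purely for illustration: base claims carry direct confidence, and each derived belief is only as credible as its weakest support.

```python
# Toy sketch of a "truth system": derived claims inherit the confidence
# of the claims they depend on. All names and numbers are invented.

beliefs = {
    "the pill contains medicine":  (0.7, []),
    "my sensors are accurate":     (0.9, []),
    "I received real treatment":   (None, ["the pill contains medicine"]),
    "I should be healing":         (None, ["I received real treatment",
                                           "my sensors are accurate"]),
}

def confidence(claim):
    prior, supports = beliefs[claim]
    if prior is not None:                        # base claim: direct evidence
        return prior
    return min(confidence(s) for s in supports)  # weakest-link propagation

for claim in beliefs:
    print(f"{confidence(claim):.2f}  {claim}")
```

On a structure like this, "inducing a placebo" would just mean feeding in a high-confidence base claim and letting the derived beliefs inherit it.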
 
Joined May 21, 2010 · 15 messages
If psychology is neurology, if Turing (or whoever came after him) is more right and Brentano is less right, then, since the placebo effect actually works, and it works with one kind of information processor, it ought to work with other kinds of functionally similar information processors.

This is just plain arrogant. You can't reason that, because some particular (the human mind) has a certain property (being subject to the placebo effect), all other particulars will have that property too just because they have some other property (consciousness) in common.

Also, much of the time the placebo effect is not even seen in the first particular. So there is even less reason to believe that an AI would respond to the placebo effect.
 

Agent Intellect
Absurd Anti-hero. · Michigan
There is neurology in psychology. The placebo effect is not just a change in the mind - it's actually a physical change in the brain. It's called neuroplasticity and neuromodulation.

The 'area for function X' in the brain is not "hardwired", as is the general misconception. The brain is plastic, in that it changes as time goes on and situations/circumstances evolve. The parts of the brain that are "for" certain functions have fluctuating borders that adapt to changes in the body and environment (1) (2). These areas can even take over areas of the brain whose part of the body has become non-functional (1) (2) (TED talk by neuroscientist Michael Merzenich) (TED talk by neuroscientist Ramachandran). Even talking (psychoanalysis) can 'rewire' the brain (1). In fact, parts of the brain can almost completely take over for other parts.

What's interesting about this is that a lot of this plasticity can be driven via the 'mind' (downward causation). What this seems to imply, to me, is that the epiphenomenon of the 'mind' is pushing the physical 'objects' of the brain around.

This would mean that, for an AI to "fall victim" to the placebo effect, its complex dynamic system would require the mind as an epiphenomenon. Simply simulating a mind, to any degree of accuracy, would not allow for the downward causation required for the 'mind' to exert control over the physical complex system from which said mind emerges. There would need to be a recursiveness in which the mind can emerge from the system, and can then influence the system from which it emerges, which then influences the mind, etc.
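A minimal toy loop showing the shape of that recursion, with every name, number, and update rule invented for the sketch (this illustrates the loop structure only, not real neurons or a real placebo response): a summary state "emerges" from low-level units and, biased by an installed belief, feeds back to reshape them.

```python
# Toy sketch of the emerge-then-feed-back recursion described above.
# All names, numbers, and update rules are invented for illustration.

import random

random.seed(0)
substrate = [random.random() for _ in range(100)]  # low-level "neural" units
belief = 0.8  # an expectation installed from outside, e.g. "this pill works"

def emerge(state):
    """Upward step: a crude global summary 'emerges' from the parts."""
    return sum(state) / len(state)

def reshape(state, mind):
    """Downward step: the summary, biased by the belief, nudges every unit."""
    target = 0.5 * mind + 0.5 * belief
    return [0.9 * unit + 0.1 * target for unit in state]

for step in range(10):
    mind = emerge(substrate)              # system -> mind
    substrate = reshape(substrate, mind)  # mind -> system
    print(f"step {step}: emergent summary = {mind:.4f}")
```

The point is only the loop: the summary is computed from the substrate and then alters the very substrate it will be computed from next, which is the recursiveness being pointed at.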
 

Cognisant
Prolific Member
Does anyone really want to design an AI with human emotional foibles? I mean, besides Cognisant..
The 100% rational mind cannot work on a human level, it's that simple.

We live in a world composed of fuzzy data, and to deal with fuzzy data effectively you need a capacity for subjective thinking. Emotions are the language of the subconscious, and it is upon the subconscious that our self-conscious minds are built. Now, granted, I can't say it's perfect, but I know this: it works.

Build me a 100% rational AI that can pass the Turing test and doesn’t shut down every five minutes because it can’t cope with the sheer incomprehensibility of reality, and I’ll eat my words while I ride a flying pig over the frozen plains of hell.
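A toy illustration of why fuzzy data rewards graded judgment, with the threshold, the ramp, and the readings all invented for the sketch: a crisp rule flips its verdict on a hair's-width change of input, while a graded one degrades smoothly.

```python
# Toy contrast between a crisp rule and a graded ("fuzzy") one on noisy
# input. Threshold, ramp, and readings are invented for illustration.

def crisp_is_hot(temp_c):
    # All-or-nothing: one tenth of a degree flips the verdict.
    return temp_c >= 30.0

def fuzzy_is_hot(temp_c):
    # Graded: degree of "hotness" ramps from 0 at 20 C to 1 at 40 C.
    return min(1.0, max(0.0, (temp_c - 20.0) / 20.0))

for reading in (29.9, 30.1, 35.0):  # noisy readings straddling the threshold
    print(f"{reading} C  crisp: {crisp_is_hot(reading)}  "
          f"fuzzy: {fuzzy_is_hot(reading):.2f}")
```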
 

Dormouse
Mean can be funny · HAPPY PLACE
I'm inclined to say an AI fabrication would only fall victim to the placebo effect if it wasn't fully aware of all of its functions. For the placebo effect to trigger any reaction, there would have to exist a plane the AI wasn't monitoring.

Basically, we'd have to outfit it with a subconscious. I'm under the impression the placebo effect would only take hold if it didn't completely understand how the effect works. Which would be kinda humorous.

I may or may not be completely off the mark here; I'm only halfway through AI, er, Agent's links, and there's a bit else I should be looking up.
 

Cognisant
Prolific Member
I'm inclined to say an AI fabrication would only fall victim to the placebo effect if it wasn't fully aware of all of its functions. For the placebo effect to trigger any reaction, there would have to exist a plane the AI wasn't monitoring.
Naturally. Can you imagine a system that's entirely self-monitoring? If you try drawing a process schema, it becomes obvious that such a system is nothing more than a feedback loop, i.e. entirely useless.
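A throwaway sketch of that regress, with the structure invented purely for illustration: every monitor added is itself new activity that needs its own monitor, so total self-monitoring never closes.

```python
# Toy illustration of why total self-monitoring regresses forever:
# each monitor is new activity, so it needs a monitor of its own.

def add_monitor(depth, limit=8):
    if depth > limit:
        print("... and so on forever: the regress never closes,")
        print("and all capacity goes into watching rather than working.")
        return
    label = "base system" if depth == 0 else f"monitor {depth}"
    print(f"level {depth}: {label} is unmonitored activity -> add monitor {depth + 1}")
    add_monitor(depth + 1, limit)

add_monitor(0)
```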
 

Da Blob
Banned · Oklahoma
First, a few links, in case you're unfamiliar.

Wikipedia

Scientific American (excerpt only)

The Skeptic's Dictionary

An article in Wired magazine

Most of you, I assume, are already familiar with the placebo effect. So, here's my question.

If the placebo effect is neurological (or, if all psychological phenomena are ultimately reducible to neurological phenomena), shouldn't the placebo effect work on artificially intelligent beings as well?

Dave
Nope, the placebo effect is the fruit of faith, and machines cannot have faith...
 

Vrecknidj
Prolific Member · Michigan/Indiana, USA
Doesn't the placebo effect require a certain level of faith?
Faith? I don't think so, no.
The power of positive thinking, etc.
Well, "positive" thinking is still thinking. And what I'm getting at here is whether we should understand "thinking" as neurological (and therefore replicable in a functionally similar machine) or merely psychological (whatever that might mean).

Won't an AI, when given information, automatically label that information as false if it does not fit with what it knows so far, or else build a series of "truth systems" where, for each piece of information, the truth is dependent on the truth of other information?
Why would it? This isn't what happens in "real" intelligence, so why does it have to happen in artificial intelligence? Heck, maybe in order for something to truly be intelligent, it has to be able to be affected by the placebo effect.
And what would be achieved by inducing a placebo effect in an AI?
Beats me. :)

This is just plain arrogant.
Ouch. I wouldn't go that far.
You can't reason that, because some particular (the human mind) has a certain property (being subject to the placebo effect), all other particulars will have that property too just because they have some other property (consciousness) in common.
Sure, I might be going too far. But, I can't really know unless I ask.

There is neurology in psychology. The placebo effect is not just a change in the mind - it's actually a physical change in the brain. It's called neuroplasticity and neuromodulation.

The 'area for function X' in the brain is not "hardwired", as is the general misconception. The brain is plastic, in that it changes as time goes on and situations/circumstances evolve. The parts of the brain that are "for" certain functions have fluctuating borders that adapt to changes in the body and environment (1) (2). These areas can even take over areas of the brain whose part of the body has become non-functional (1) (2) (TED talk by neuroscientist Michael Merzenich) (TED talk by neuroscientist Ramachandran). Even talking (psychoanalysis) can 'rewire' the brain (1). In fact, parts of the brain can almost completely take over for other parts.

What's interesting about this is that a lot of this plasticity can be driven via the 'mind' (downward causation). What this seems to imply, to me, is that the epiphenomenon of the 'mind' is pushing the physical 'objects' of the brain around.

This would mean that, for an AI to "fall victim" to the placebo effect, its complex dynamic system would require the mind as an epiphenomenon. Simply simulating a mind, to any degree of accuracy, would not allow for the downward causation required for the 'mind' to exert control over the physical complex system from which said mind emerges. There would need to be a recursiveness in which the mind can emerge from the system, and can then influence the system from which it emerges, which then influences the mind, etc.
Now this is a post I can sink my teeth into. Thanks!

I'm inclined to say an AI fabrication would only fall victim to the placebo effect if it wasn't fully aware of all of its functions. For the placebo effect to trigger any reaction, there would have to exist a plane the AI wasn't monitoring.

Basically, we'd have to outfit it with a subconscious. I'm under the impression the placebo effect would only take hold if it didn't completely understand how the effect works. Which would be kinda humorous.
Interesting implication. The placebo effect requires ignorance. (From a certain point of view this might seem obvious, but we know, for example, that in RCTs the subjects are aware that they may be receiving a placebo. Heck, I'm pretty sure it wouldn't be hard to find evidence that the placebo effect works just fine in some cases where the person knows it's a placebo....)

Nope, the placebo effect is the fruit of faith, and machines cannot have faith...
For all the areas where we agree, I'm going to disagree with you on this one, purely as an academic exercise. In other words, I want to hold on to the view that the placebo effect is possible for non-faith entities until it's demonstrated otherwise to me, largely because I find this a fun thing to think about.

Thanks everyone; I figured this bunch could help me think about this issue.

Dave
 

JoeJoe
Knifed · Germany
Why would it? This isn't what happens in "real" intelligence, so why does it have to happen in artificial intelligence? Heck, maybe in order for something to truly be intelligent, it has to be able to be affected by the placebo effect.


I guess this question can only be answered once we've gotten that far... :rolleyes:
 

Da Blob
Banned · Oklahoma
Well, here are a couple of oldies but goodies...

"How Mental Systems Believe" By Daniel T. Gilbert, American Psychologist 42(2) 107-119
(Spinoza versus Descartes)

Mentalese not Spoken Here: Computation, Cognition and Causation
" by Jay A. Garfield, Philosophical Psychology 10(4)

(Please excuse me, but I will not conform to APA standards for references unless compelled to do so...lol)

The placebo effect is an effect of its cause(s): suggestibility, the power of suggestion, or the "willing suspension of disbelief" - things that, as of today, machines lack...

It is an attitude towards the future, a very beneficial (in most cases) product of human imagination. Whether one calls it belief, faith, or superstition, it is a powerful psychological force that manipulates an individual's future.

I certainly do agree with A.I.'s take on this issue, but the temporal component needs to be considered in addition to the spatial process he describes... Actually, it is a 'five-dimensional' phenomenon when taken as a whole... with five, not just four, dimensions of recursion...

EDIT: How odd! I assumed everyone knew of dimensional recursion, but such does not seem to be the case. The AI people are still hung up on simple 2-D recursion programming, and this paper came up as well in the search results for the keywords "Philosophy, Recursion, Dimensions". Any suggestions for better keywords(?)

In his review of my book, Wittgenstein Flies A Kite: A Story of Models of Wings and Models of the World, Peter Simons implies that I take a coincidence as my starting point and, from it, attempt to infer a connection (between the methodology of scale models and the view of propositions in Wittgenstein's Tractatus). That's not how I present things. Rather: Wittgenstein made a connection between scale models and propositions. I provide the historical context to explain what the term "scale model" meant to someone situated as he was, and, more importantly, explain the development of the accompanying notion of similarity in that historical context. Simons tries to argue against historical context being very important, but his argument relies upon mistaken ideas about how isolated Wittgenstein was throughout 1914. It is unfortunate that Simons acts as though identifying a causal path were the main issue, and misstates or neglects so much of the book that should be of interest to philosophers of science.....
 

Inappropriate Behavior
is peeing on the carpet · Behind you, kicking you in the ass
When I was around 10 or 11, I took part in a drug study. I was born with asthma that hindered me greatly during my pre-teen years. My doctor explained the test to me thusly: my normal medication's dosage (Slo-phyllin, I believe it was) would be cut in half and augmented with the shiny new pill twice a day. I was told I would be getting one of three possibilities: the test drug, the second half of my normal medication, or a placebo.

I found the whole thing exciting as hell. This thread reminded me of that time, and what may be relevant (or not, I dunno) is that during that time I had two very strong desires. One was to be on this new med (the name of which I have long forgotten; it was 30 years ago), and two was for this new med to be the BIG cure.

If memory serves, it lasted for a year, and my "job" was to rate my condition every day and whether or not it was an improvement over the norm of my life prior to the test. It's still a little fuzzy, memory-wise; I hadn't thought about it in years. I do, though, clearly remember that I marked my condition as an improvement. A pretty big improvement.

It turned out I was on the new drug.

Looking back, I have to wonder about a couple of things. Was I being a truly impartial grader? Could I have been influenced by my desires to the extent that I graded myself as showing improvement when in fact I wasn't? Or perhaps I truly felt better, but it was the placebo effect more than the drug? It could also be that the drug simply worked better and was the cause of my thinking I was better.

Of course, if it was the latter, there was no placebo effect. The first two possibilities, however, stem from desire, which is akin to faith but with a little extra oomph. If a doctor gives a sick patient a sugar pill and tells him it's a new drug that will cure his illness, it's not so much the patient's faith in the doctor as the patient's desire to get better that causes the placebo effect.

So for an AI to benefit from or be fooled by a placebo effect, it seems to me some form of desire on that AI's part needs to emerge from its programming. I suppose theoretically such is possible, but don't we need a few breakthroughs before we get to that point? I guess then, to answer your original question, we ought to figure out whether desire is just a neurological phenomenon or something more. I strangely find myself tempted to say it is something more.
 

Da Blob
Banned · Oklahoma
On my desktop, amongst the two dozen papers I have started but probably will never finish, is one entitled "Enhancing the Placebo Effect: the Real Route to Recovery". It is a facet of the drug recovery program that I have outlined in my Facebook Notes and elsewhere.

It turns out that the placebo effect is actually the only ingredient that works in so many different types of recovery programs, support groups, and self-help regimes, so it would seem logical that if one studied the placebo effect in detail, with an eye to enhancing it via application of charisma, or hypnotic drugs (or whatever), one could increase the recovery rate for a wide range of recovery programs...

In fact, despite all the psychological academic nonsense that dominates the field and receives the bulk of the government financing, the only thing that has been proven effective, time after time, is the placebo effect. This is especially true for psychological therapy...

It is a mystery to me why this variable is always a facet of a control group but never the focus of an experiment itself. Good God, the placebo rate is as high as 45% for some drugs and other sorts of therapy; in fact, if the treatment rate surpasses the placebo rate by just 1 or 2%, the treatment is said to be statistically successful. So a 2% improvement due to a new drug is significant, but a 45% improvement due to the placebo effect is not?
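For concreteness, a rough back-of-the-envelope version of that comparison, with all numbers invented for illustration (placebo response 45%, drug response 47%): a two-proportion z-test shows how large each trial arm must be before a 2-point edge over placebo counts as "statistically significant".

```python
# Rough arithmetic behind the point above. Numbers are invented:
# placebo response 45%, drug response 47% (a 2-point edge).

from math import sqrt

def z_for_gap(p_placebo, p_drug, n_per_arm):
    """Two-proportion z statistic with a pooled standard error."""
    pooled = (p_placebo + p_drug) / 2
    se = sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    return (p_drug - p_placebo) / se

for n in (100, 1000, 5000, 10000):
    z = z_for_gap(0.45, 0.47, n)
    verdict = "significant at p<0.05" if z > 1.96 else "not significant"
    print(f"n = {n:>6} per arm: z = {z:.2f} ({verdict})")
```

Which underlines the complaint: with a big enough trial, the 2-point edge gets called a "success", while the 45 points both arms owe to placebo go unstudied.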

:slashnew:
:confused:
:storks:
:rolleyes:
 

JoeJoe
Knifed · Germany
The symptoms of cancer and AIDS patients always worsen after they receive the diagnosis. Is this also a kind of Placebo effect? This has nothing to do with desires, as IB suggested. :confused:
 

Vrecknidj
Prolific Member · Michigan/Indiana, USA
(important stuff)...

So for an AI to benefit from or be fooled by a placebo effect, it seems to me some form of desire on that AI's part needs to emerge from its programming. I suppose theoretically such is possible, but don't we need a few breakthroughs before we get to that point? I guess then, to answer your original question, we ought to figure out whether desire is just a neurological phenomenon or something more. I strangely find myself tempted to say it is something more.
I agree. In order for there to even be a placebo effect, there has to be an ability of the thing experiencing the effect to want some outcome. So, is the ability to want a function that can be replicated in a machine?

This is kinda the core of my question. If a thing is truly artificially intelligent, that is, intelligent like a human being, then, will it also be like a human being in other mental capacities besides the ability to solve problems? Will it be able to want, imagine, feel, etc.?

If not, then it's not really intelligent in the sense I'm most interested, and so the whole conversation is irrelevant to me.

If so, then it should be affected by the placebo effect.

Dave
 

Vrecknidj
Prolific Member · Michigan/Indiana, USA
The symptoms of cancer and AIDS patients always worsen after they receive the diagnosis. Is this also a kind of Placebo effect? This has nothing to do with desires, as IB suggested.
"Always"? That's impressive and it certainly suggests scientists ought to be taking a look at that. This effect is certainly one based on the individual's beliefs (if not desires, per se).

Dave
 

ohrtonz
Active Member · USA
I'm trying to think of something here... Let's say the AI is supposed to shoot enemies. It has programming to determine whether it should shoot or not. So it has "faith" that its calculations are correct when it shoots? Say one input is the emotion on someone's face. This can be mistaken, and not all bad guys are going to show the emotion. It will miscalculate and do the wrong thing. That doesn't sound like placebo to me. Maybe I'm grasping at straws trying to find placebo in it.
 

Inappropriate Behavior
is peeing on the carpet · Behind you, kicking you in the ass
I agree. In order for there to even be a placebo effect, there has to be an ability of the thing experiencing the effect to want some outcome. So, is the ability to want a function that can be replicated in a machine?

This is kinda the core of my question. If a thing is truly artificially intelligent, that is, intelligent like a human being, then, will it also be like a human being in other mental capacities besides the ability to solve problems? Will it be able to want, imagine, feel, etc.?

If not, then it's not really intelligent in the sense I'm most interested, and so the whole conversation is irrelevant to me.

If so, then it should be affected by the placebo effect.

Dave

I like to think there are forms of intelligence out there that don't conform to human standards or foibles.

It sounds, though, as if you are interested in an artificially created, human-like intelligence. The programming would then have to somehow allow for spin-off subroutines, generated by the AI, that compel it to seek a certain (desired) outcome.

Wait...

Unless you are interested in an AI preprogrammed to seek a certain outcome that can then be "fooled" into thinking the outcome is achieved even though the necessary steps have not been taken. I think it still comes down to desire or will. It has to somehow have that. I'm not sure we understand our own desires, so creating an AI with them is beyond our capabilities for some time to come, I would imagine.
 

Da Blob
Banned · Oklahoma
Well, actually, I think the question can't be resolved until an agreed-upon definition of intelligence, and then a derivative definition of human intelligence, is established. (lol, can it be said with certainty that all humans are intelligent...?)

However, I will still maintain that as long as a machine lacks the capacity to "imagine", desired outcomes in a fantasised future cannot be contemplated by an A.I. If there is no future perceived, then there is no canvas for the placebo effect to be painted upon...
 

Vrecknidj
Prolific Member · Michigan/Indiana, USA
...

Unless you are interested in an AI preprogrammed to seek a certain outcome that can then be "fooled" into thinking the outcome is achieved even though the necessary steps have not been taken. I think it still comes down to desire or will. It has to somehow have that. I'm not sure we understand our own desires, so creating an AI with them is beyond our capabilities for some time to come, I would imagine.
This gets to a couple related issues.

1) To what degree are humans pre-programmed? I suspect that those who view all this stuff from a materialist, reductionist, scientific-realist perspective see the brain as a hybrid between an organic computer and an organic pharmacy. The input/output devices for the human brain are fairly complex, but, in the end, some will argue that we're all programmed: by genes, environment, peers, etc. In that case, we're just wet computers (albeit super-advanced ones), and it's just a matter of time before the dry computers match our processing power. So, the question then becomes, is the placebo effect a "symptom" of the fact that the human wet computer is so fuzzy and inefficient?

2) Are things like desire, belief, will, volition, intention, etc., functions of the brain or is some form of metaphysical dualism true? (There might be other alternatives, I don't wish to approach this as a simple dilemma.) But, if they are functions of the brain, then why can't they be functions of inorganic brains? If some form of metaphysical dualism is true, then we're stuck with leaving aspects of the mind unexplained.

Dave
 

Da Blob
Banned · Oklahoma
This gets to a couple related issues.

1) To what degree are humans pre-programmed? I suspect that those who view all this stuff from a materialist, reductionist, scientific-realist perspective see the brain as a hybrid between an organic computer and an organic pharmacy. The input/output devices for the human brain are fairly complex, but, in the end, some will argue that we're all programmed: by genes, environment, peers, etc. In that case, we're just wet computers (albeit super-advanced ones), and it's just a matter of time before the dry computers match our processing power. So, the question then becomes, is the placebo effect a "symptom" of the fact that the human wet computer is so fuzzy and inefficient?

2) Are things like desire, belief, will, volition, intention, etc., functions of the brain or is some form of metaphysical dualism true? (There might be other alternatives, I don't wish to approach this as a simple dilemma.) But, if they are functions of the brain, then why can't they be functions of inorganic brains? If some form of metaphysical dualism is true, then we're stuck with leaving aspects of the mind unexplained.

Dave

1) Those who claim to be objective about the matter are really just clinging to their religion of Determinism. They prefer to invest their faith in those concepts that have boundaries and are under human control.

2) Dualism is just the tip of the iceberg; reality is much more complicated than the simple differentiation that defines dualism. There are more than dual causalities....

I would suggest the problem with inorganic brains is one of structure. There are certain functions monopolized by the complexity of the organic that have no parallels in the relatively simple inorganic. The 'laws' of the organic are more sophisticated, with greater 'degrees of freedom'...

There are undoubtedly aspects of the human brain that cannot even be 'scientifically' observed by humans - much less explained in an 'objective' fashion...
 

Agent Intellect
Absurd Anti-hero. · Michigan
1) To what degree are humans pre-programmed?

Pre-programmed?

I suspect that those who view all this stuff from a materialist, reductionist, scientific-realist perspective see the brain as a hybrid between an organic computer and an organic pharmacy. The input/output devices for the human brain are fairly complex, but, in the end, some will argue that we're all programmed: by genes, environment, peers, etc.

Reductionism isn't really adequate for describing the way our universe works, especially the complex dynamic system of the human brain. Defining how a subatomic particle works in isolation tells us nothing about how it actually behaves in reality, and the same is true for a neuron or a neurotransmitter.

In that case, we're just wet computers (albeit super-advanced ones), and it's just a matter of time before the dry computers match our processing power. So, the question then becomes, is the placebo effect a "symptom" of the fact that the human wet computer is so fuzzy and inefficient?

I don't think human 'intelligence' could be simulated by a computer, but I think it could be re-created using synthetic means. If we want a synthetic human intelligence, we would have to create a synthetic human brain, not a computer program. Human intelligence emerges from the structure and function of the complex dynamic system of its physical substrate, and it requires at least a very similar system to recreate the epiphenomenon we call "mind".

2) Are things like desire, belief, will, volition, intention, etc., functions of the brain or is some form of metaphysical dualism true? (There might be other alternatives, I don't wish to approach this as a simple dilemma.)

How would metaphysical dualism (or any sort of pluralism) be a more adequate way for human intelligence to exist than through physical means?

But, if they are functions of the brain, then why can't they be functions of inorganic brains?

You should check out Project Blue Brain. (Video).

If some form of metaphysical dualism is true, then we're stuck with leaving aspects of the mind unexplained.

If the mind is the result of metaphysical dualism, how does that preclude it from explanation?
 

Vrecknidj
Prolific Member · Michigan/Indiana, USA
Agent Intellect: All good points, reminding me to be more thorough in the future.

Yeah, though. "Pre-programmed." For instance, a pregnant woman's body's condition determines many of the characteristics of the developing fetus. After all, her body is the fetus's environment. Many cases of adult heart disease have been attributed to the person, as a fetus, having been carried by a malnourished woman. So, by "pre"-programmed, I mean prior to birth (and not, for instance, prior to conception, or something else). That said, however, the information for how to make a brain is somewhere in the DNA, and the DNA is a fascinating amalgam of mommy and daddy parts which do exist prior to the fertilized egg.

So, yep, from at least a physical point of view, the brain is somewhat pre-programmed: both in the sense that its development is affected by its environment, and in the sense that brains (for instance) have the ability to learn languages. The brain doesn't have to be programmed, post-birth, to be able to learn a language; that's built in.

(All of this is predicated on stuff I have good reason to believe but which, I suppose may be outdated or just plain wrong.)

Dave
 