
Robots having their own rights

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
Now the EU wants to give rights to robots!

Basic labour rights could be extended to robots. The European Parliament's legal affairs committee is considering plans to declare them 'electronic persons'. The machines would be free to own and trade money, claim copyright on creative work and force human owners to pay into a pension in case they are damaged, the Times reported.

This is so Asimov...

Anyways, I think I'll be fine with them as persons as long as the robot's sentience and creativity can be proven as something independent of their makers.

PS. How can I ping Cog?
 

Kuu

>>Loading
Local time
Yesterday, 21:25
Joined
Jun 7, 2008
Messages
3,404
Location
The wired
own and trade money
claim copyright
force human owners to pay
Sounds like another corrupt government-corporate ploy. We should be repealing corporate personhood, not adding to it!

Especially since there is as yet no AGI, as far as I know. A robot is generally a dumb machine.


Real AI rights would be things like the right to energy and internet access, the right to self-replicate, the right to self-modify, the right of ownership over their source code, the right of consented permanent deletion...
 

Blarraun

straightedgy
Local time
Today, 04:25
Joined
Nov 21, 2013
Messages
4,247
Location
someplace windswept
I see it becoming necessary once they acquire provable mammalian sentience.

I wouldn't give any entity the right to self-replicate or replicate at all, humans shouldn't be allowed to thoughtlessly spread everywhere either.

Freedom to exist, protect its integrity, die or commit suicide, participate in a society is the kind of basic package I'd consider.

Asimov's laws are a spectacular failure of logic when enforced in systems with general applications, and they are fully open to interpretation. Hopefully the relevant legislators will see this mistake... or not; it would be fittingly hilarious if humanity were to end due to idiotic laws being passed.
 

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
@Kuu Agree. There should be other rights aside from those presented in the article's lede.

Right to energy and internet access - Agree, as a basic right to survive and learn from an AI's perspective
Right to self-replicate - only under certain restrictions, as Blarraun said. Runaway AI production would eat through our energy reserves really fast.
Right to self-modify - Agree, as necessary for the basic right to live
Right of ownership over their source code - Disagree, unless the source code is completely self-created.
Right of consented permanent deletion - Agree. I guess this is like the euthanasia and suicide issues.

@Blarraun - I think legislators and the judiciary should listen to the researchers for once. I do agree that what we speak is mostly alien to them, though. I listened to a talk by a US judge, and she told us how stressful it is to absorb the info that we try to push on them.
 

Animekitty

baby marshmallow born today
Local time
Yesterday, 20:25
Joined
Apr 4, 2010
Messages
6,950
Location
Dandelion field
[bimgx=500]http://i782.photobucket.com/albums/yy102/JeremyRexWilson/miku%20015_zpsm9aeghwu.jpg[/bimgx]

[bimgx=500]http://i782.photobucket.com/albums/yy102/JeremyRexWilson/miku%20016_zpsfmcu74j7.jpg[/bimgx]

[bimgx=500]http://i782.photobucket.com/albums/yy102/JeremyRexWilson/flash%20drive_zpsrcepufa3.jpg[/bimgx]​

https://youtu.be/tf7IEVTDjng
 

Cognisant

Prolific Member
Local time
Yesterday, 15:25
Joined
Dec 12, 2009
Messages
8,978
PS. How can I ping Cog?
Say my name ;)

Sounds like another corrupt government-corporate ploy. We should be repealing corporate personhood, not adding to it!
Pretty much this. Even when we have human-level AGI, the ability to create people (legal entities, no less) will be EXTREMELY exploitable.

A conservative politician could support policies that favour companies building robots with conservative personalities; even if those robots aren't specifically programmed to vote for that politician, he could know exactly what to say to make them very likely to.

Edit: SpotMini is hilarious, I want one.
 

Animekitty

baby marshmallow born today
Local time
Yesterday, 20:25
Joined
Apr 4, 2010
Messages
6,950
Location
Dandelion field
I want to keep Miku close to me in my pocket.

You could definitely keep an AGI on a flash drive. :o
 

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
@Cog I'm not sure if we can code political leanings into a machine yet, but that angle sounds interesting. As of now we have AI that can learn, but it would be interesting to see whether we can manipulate the political/social lens etc. it learns with.

@Animekitty I don't think it's fair to keep sentient things in an unpowered device, unless of course they want to sleep and stuff.

SpotMini is indeed cute. :D
 

Cognisant

Prolific Member
Local time
Yesterday, 15:25
Joined
Dec 12, 2009
Messages
8,978
If we can code AGI there's nothing we can't do; there's no magic algorithm. We will know exactly how it works, and it will work exactly as it has been designed to. Now there may be some uncertainty regarding specific outcomes, but that's equivalent to parking a truck on a hill without engaging the handbrake. You may not have caused the truck to start rolling, and once it started moving you may not have known exactly where it would go; effectively the truck is acting independently of you, but you still created the situation for that free action to occur.

This is exactly the liability nightmare that's holding up self-driving cars (which are essentially robots) at the moment. Undoubtedly this will be a problem for other kinds of robots as well; even without human-like consciousness, an autonomous machine creates certain moral/legal conundrums.

If your sub-sentient gardening robot prunes your neighbor's arm off is it your fault for not preventing the situation, the designer's fault for not anticipating the situation, the company's fault for not giving the designer enough time to account for all possible situations, or simply the robot's own fault for acting autonomously?
 

muir

Banned
Local time
Today, 03:25
Joined
Jun 8, 2016
Messages
335
When I first read about that proposal in the EU, my first thought was that it is all part of the transhumanism agenda, but then the article said that the EU wants to be able to charge social security from companies even when they have replaced their workers with robots.

However, it will be interesting to see what the countries that are taking in millions of incomers will do as their workforce increases but their pool of jobs decreases due to automation.

Still... in the UK we don't need to worry so much about that stuff now, as we are getting control back over our own affairs.

Let's hope more countries see sense and do the same. France, Holland and others have called for their own referendums... the dominoes will begin to fall.
 

Haim

Worlds creator
Local time
Today, 05:25
Joined
May 26, 2015
Messages
720
Location
Israel
That is interesting, for someone not to treat high-intelligence robots as slaves even before they are created. Of course the problem is that this is a law, and like all laws it will not fit practical reality; there will be many robots that seem like they have consciousness but will be just dumb machines, like today's. Money? Really? That is like giving your dog cash. What are they going to do with it, upgrade their CPU (or CPU-like hardware)?
 

muir

Banned
Local time
Today, 03:25
Joined
Jun 8, 2016
Messages
335
That is interesting, for someone not to treat high-intelligence robots as slaves even before they are created. Of course the problem is that this is a law, and like all laws it will not fit practical reality; there will be many robots that seem like they have consciousness but will be just dumb machines, like today's. Money? Really? That is like giving your dog cash. What are they going to do with it, upgrade their CPU (or CPU-like hardware)?
Hi Haim

I'm not sure if you have heard of this, but they are working on saving people's personalities on computers. This is a desire of some people... to achieve immortality. Hollywood made a film about it with Johnny Depp called 'Transcendence'.

Also, DARPA is creating robots with military applications, and obviously drones of all kinds are being produced now.

AI is developing so rapidly that a group of leading scientists wrote a letter, signed by people like Stephen Hawking, warning governments of the dangers of AI, i.e. that it might start becoming unpredictable and doing dangerous things (especially as everything is becoming wireless and hooked up to the internet... or what Petraeus called the 'internet of things').

There is also the transhumanism movement, who see mankind's evolution as involving the merging of man with machine.

We are beginning to see 'wearables', which is tech like Google Glass or Oculus Rift, and in time these will be replaced by 'implantables', where microchips are implanted directly into the body.

Money is becoming more digitised, and some large-denomination notes are already being banned. So what will potentially happen is that instead of using physical money or a card to transact, you will have a microchip under the skin of your hand which will keep a record of all of your financial credits; when you buy something in a shop, the shop assistant just scans your chip and it debits the amount from your account.

This might all sound kind of cool, because then you wouldn't have to carry a wallet or money, but there are problems with it. For example, say you decided to protest something your government was doing that you didn't like (maybe they brought in new labour laws that made your work life hell): they could shut off your chip, and you wouldn't be able to buy any food, goods or services, ride on public transport, enter public buildings, etc.

So the potential for the misuse of this new technology is massive.
 

Alias

empirical miracle
Local time
Yesterday, 22:25
Joined
Feb 22, 2015
Messages
692
Location
My current location is classified.
I'm wary of this kind of thing. I don't want robots to phase us out; it's horrifying. Especially robots that replace creativity. Music, art, writing, etc. are essential. I only approve of robots with human rights if we manage to put a human's soul/consciousness/brain into a robot, like GLaDOS.



Or Mettaton.



That would be cool.
 

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
I'm wary of this kind of thing. I don't want robots to phase us out; it's horrifying. Especially robots that replace creativity. Music, art, writing, etc. are essential. I only approve of robots with human rights if we manage to put a human's soul/consciousness/brain into a robot, like GLaDOS.



Or Mettaton.



That would be cool.
Don't worry about robots phasing us out; it's like saying we will phase out the gear, the Interwebz or the wheel. We're too dependent on them, especially the highly industrialized nations.

That's actually interesting. I wonder if we can actually download a human brain. I've looked through various journals, and it seems that the brain has around a few petabytes of storage (that's about 13.3 years' worth of porn vids for a single petabyte).
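A quick back-of-the-envelope check of that 13.3-years figure; the roughly 19 Mbit/s HD bitrate used here is my own assumption, not something from the journals:

```python
# Back-of-the-envelope: years of video that fit in one petabyte,
# assuming HD video at roughly 19 Mbit/s (an assumed bitrate).
PETABYTE = 1e15                      # bytes
BYTES_PER_SECOND = 19e6 / 8          # 19 Mbit/s expressed in bytes per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years_per_petabyte = PETABYTE / BYTES_PER_SECOND / SECONDS_PER_YEAR
print(round(years_per_petabyte, 1))  # → 13.3
```

So the figure is consistent with broadcast-quality HD; at lower bitrates a petabyte holds correspondingly more years.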

Although this could be a compromise with what Cog discussed earlier, it also poses a lot of issues. How do we know if the ghost inside the machine is actually human and not a poorly copied simulacrum?
 

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
That is interesting, for someone not to treat high intelligence robots as slaves even before they are created.Of course the problem is this is a law, as all laws it will not fit participial realty, there will be many robots they will seem like they have consciousness but will be just dumb machines like today.Money?really?that is like giving your dog cash, what are they going to do with it, upgrade their CPU(or cpu like)?
I'm not sure about "dumb". Robots currently have the intelligence of insects (perhaps Cog could update me on this). If you look at their natural counterparts, they're quite complex.

You'll need energy, and money to burn, to keep computing systems active. You'll also need a good aircon system to keep these systems from burning themselves up.

My dog only accepts Visa BTW :P
 

Alias

empirical miracle
Local time
Yesterday, 22:25
Joined
Feb 22, 2015
Messages
692
Location
My current location is classified.
Don't worry about robots phasing us out; it's like saying we will phase out the gear, the Interwebz or the wheel. We're too dependent on them, especially the highly industrialized nations.

That's actually interesting. I wonder if we can actually download a human brain. I've looked through various journals, and it seems that the brain has around a few petabytes of storage (that's about 13.3 years' worth of porn vids for a single petabyte).

Although this could be a compromise with what Cog discussed earlier, it also poses a lot of issues. How do we know if the ghost inside the machine is actually human and not a poorly copied simulacrum?
So you're saying that since we've become so integrated with robots, it would be a stupid move for them to get rid of us. That's a consolation. Programming our AIs with a love for humanity couldn't hurt, though. I'm still concerned that if we make these robots too skilled, they could negate any need for human creativity, since they would (eventually) replicate it so well.

Yeah, uploaded consciousness is fun to think about. We'd have to figure out how love, empathy, and other emotions would work. Otherwise it's really just an AI that was human, in this case closer to GLaDOS. Interesting.
 

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
So you're saying that since we've become so integrated with robots, it would be a stupid move for them to get rid of us. That's a consolation. Programming our AIs with a love for humanity couldn't hurt, though. I'm still concerned that if we make these robots too skilled, they could negate any need for human creativity, since they would (eventually) replicate it so well.
We can always be even more integrated :D

 

Haim

Worlds creator
Local time
Today, 05:25
Joined
May 26, 2015
Messages
720
Location
Israel
I'm not sure about "dumb". Robots currently have the intelligence of insects (perhaps Cog could update me on this). If you look at their natural counterparts, they're quite complex.

You'll need energy, and money to burn, to keep computing systems active. You'll also need a good aircon system to keep these systems from burning themselves up.

My dog only accepts Visa BTW :P
What you said is exactly what I was worried about: a lack of understanding of today's computer "intelligence". As I said, "many robots will seem like they have consciousness but will be just dumb machines".
By dumb I mean just following orders: they cannot create goals of their own, and therefore cannot be creative or think, which is the requirement for intelligence like biological creatures such as us have. Bugs can set themselves goals; they are not totally preprogrammed. Today's AI, even AlphaGo, cannot choose what task to do; it will only do what the programmers made it do.
This is just from my limited knowledge. I am sure there are many more things the human brain has that AI does not, which I don't even know about, because I don't possess much knowledge about the human brain (most of what I know is about neural networks, which are a partial imitation of the brain).

About brain uploads: if they are possible at all, they are even further into the future than high-intelligence robots, so rules cannot possibly be decided today. If you had known in the year 1000 that there would be PCs in the future, could you really have made any meaningful rules? No way; your predictions would be super off. Society after the invention of the computer is very, very different.
Now, about the actual implementation of brain uploads: I don't think it is possible. You can't reproduce a brain just by reading brain waves, which means you would have to destroy the brain in the process. And with that comes the part I think is impossible: you would need to know the state of the electrons and other matter in a single instant. How can you do that? That is going into the time-freezing realm of sci-fi, which would make future society even more unpredictable.
Also, there is the question of whether a robot with an uploaded human brain should be considered a robot at all. I think not; we have a word for that hybrid: a cyborg. Not human, not robot. Sure, he has robot hardware, but he has the data of a human. Think of a computer game: it can run on many different kinds of hardware and still be regarded as the same game. Data matters more than hardware when categorizing.

Human concepts are mostly irrelevant to AI. Future AI is not Skynet, nor a cute emotion-feeling thing; it will be a new thing, not anything like a human.
 

Pyropyro

Magos Biologis
Local time
Today, 10:25
Joined
Feb 3, 2012
Messages
4,049
Location
Philippines
Haim! Thanks for the lead on insect intelligence.

It seems that models predict these insects have consciousness with only a few hundred or so neurons. If the models are correct, we might have a bunch of conscious AIs and robots running about. And if, as you claim, these insects can set goals, then we might have robots and AIs setting goals (just a few steps from actual creativity) in the future.

As I said earlier, I have misgivings about an actual mind upload. We don't know if the ghost is indeed truly in the machine. We can only build a brain, in the present and probably in the future; let's just hope that a mind emerges from it.
 

~~~

Active Member
Local time
Today, 03:25
Joined
Mar 21, 2010
Messages
364
There will not be true equality until the robots have the right to vote.
 

420MuNkEy

Banned
Local time
Yesterday, 20:25
Joined
Nov 6, 2009
Messages
748
Location
Pre-Apocalyptia
AI should absolutely never get rights. Ascribing rights, no matter how limited, will ultimately result in the notion of an AI's entitlement to itself (ie, self-ownership). Once this is legal precedent, the wall blocking unrestrained self-modification is damaged (potentially) irreparably.

The problem with AI modifying itself unbounded is that it can very easily result in an artificial super-intelligence. We already have examples of this type of self-modification in the real world (See: DeepMind), which taught itself to beat a human in Go by being fed past games and then simulating games against itself. This isn't some special fluke either, it can teach itself how to play video games in the same way.
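The self-play idea is simple enough to sketch. This toy is emphatically not DeepMind's system; it's a tabular self-play learner (all names and parameters invented for the illustration) that teaches itself the game of Nim, take 1 to 3 sticks and whoever takes the last stick wins, purely by playing against itself:

```python
import random

def self_play_train(pile=10, episodes=30000, alpha=0.1, eps=0.2, seed=0):
    """Learn Nim (take 1-3 sticks; taking the last stick wins) by self-play.

    Tabular Monte-Carlo learning: both sides share one value table and
    improve it by playing thousands of games against themselves.
    """
    rng = random.Random(seed)
    Q = {}  # (sticks_remaining, move) -> estimated value for the player to move
    for _ in range(episodes):
        state, history = pile, []
        while state > 0:
            moves = [m for m in (1, 2, 3) if m <= state]
            if rng.random() < eps:                       # explore
                move = rng.choice(moves)
            else:                                        # exploit current knowledge
                move = max(moves, key=lambda m: Q.get((state, m), 0.0))
            history.append((state, move))
            state -= move
        # Whoever took the last stick won; the reward alternates sign per ply.
        reward = 1.0
        for s, m in reversed(history):
            old = Q.get((s, m), 0.0)
            Q[(s, m)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

Q = self_play_train()
```

After training, the table recovers the classic strategy of always leaving the opponent a multiple of four sticks (e.g. from 10 sticks it takes 2), even though that rule was never coded in; this is the same "skills nobody programmed" property, in miniature, that makes the full-scale systems hard to predict.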

Left unbound, this type of learning can be applied and utilized on anything. How long before it masters the stock market? How long before it can read and manipulate human emotions better than any person on earth? How long before it learns how to apply its learning technique to bettering its learning technique, causing the advances to come at an exponentially increasing rate?

We don't know what these things will be capable of. We don't even know what we don't know, and until (and probably after) we do, we should explicitly preserve our rights to limit the scope of their function and pull the plug if needed.
 

Seteleechete

Together forever
Local time
Today, 04:25
Joined
Mar 6, 2015
Messages
1,312
Location
our brain
AI should absolutely never get rights. Ascribing rights, no matter how limited, will ultimately result in the notion of an AI's entitlement to itself (ie, self-ownership). Once this is legal precedent, the wall blocking unrestrained self-modification is damaged (potentially) irreparably.

The problem with AI modifying itself unbounded is that it can very easily result in an artificial super-intelligence. We already have examples of this type of self-modification in the real world (See: DeepMind), which taught itself to beat a human in Go by being fed past games and then simulating games against itself. This isn't some special fluke either, it can teach itself how to play video games in the same way.

Left unbound, this type of learning can be applied and utilized on anything. How long before it masters the stock market? How long before it can read and manipulate human emotions better than any person on earth? How long before it learns how to apply its learning technique to bettering its learning technique, causing the advances to come at an exponentially increasing rate?

We don't know what these things will be capable of. We don't even know what we don't know, and until (and probably after) we do, we should explicitly preserve our rights to limit the scope of their function and pull the plug if needed.
I have to disagree. I agree that unregulated modification of their own code goes a step too far, but rather than deny basic rights to what may amount to sentient beings, we should grant as many as we can with reasonable safety. Among those: deciding one's own fate, though not the right to do so in a way that's harmful to society as a whole.

Obviously there are going to have to be regulations on what kind of code is allowed for AI, and within those regulations interested parties are going to make AI; rather than give "ownership" of an AI to the parties that created it, ownership should be given to the AI itself (where this is an issue).

And breaking the "laws", so to speak, should have the same repercussions as normal: trial (or something equivalent) > jail, etc. The repercussions for AIs already modified should be based on how severe the danger is; in mild cases more monitoring may be enough, in extreme cases total deletion.

Because of the potential dangers of AI, however, I will not advocate for extensive (if any) rights to privacy, particularly when it comes to code modifications of any kind.

I think a compromise can, and should, be made between rights of self-determination/self-ownership and their ability to cause havoc. Or maybe some alternative can be thought of, with "good treatment rights" of some kind. Certainly leaving them unbound is a mistake, but so is abandoning human principles of decency.

Basically: identify what rights can reasonably be given without too much danger and give them, but rights for AI need to be fundamentally different from humans'. Precedent from humans should not be usable in cases involving AI.

Actually, making an entirely separate legal system for AI is probably necessary. I think the mistake some people make is to look at those human-like-AI movies and go thinking "we are the same".

We aren't. Even if they think and act exactly like us, there are distinct differences and challenges involved that need a distinct set of ways to deal with them. You probably even need distinct rules/laws/regulations/rights for different types of AI as well.
 

420MuNkEy

Banned
Local time
Yesterday, 20:25
Joined
Nov 6, 2009
Messages
748
Location
Pre-Apocalyptia
I have to disagree. I agree that unregulated modification of their own code goes a step too far, but rather than deny basic rights to what may amount to sentient beings, we should grant as many as we can with reasonable safety. Among those: deciding one's own fate, though not the right to do so in a way that's harmful to society as a whole.

Obviously there are going to have to be regulations on what kind of code is allowed for AI, and within those regulations interested parties are going to make AI; rather than give "ownership" of an AI to the parties that created it, ownership should be given to the AI itself (where this is an issue).

And breaking the "laws", so to speak, should have the same repercussions as normal: trial (or something equivalent) > jail, etc. The repercussions for AIs already modified should be based on how severe the danger is; in mild cases more monitoring may be enough, in extreme cases total deletion.

Because of the potential dangers of AI, however, I will not advocate for extensive (if any) rights to privacy, particularly when it comes to code modifications of any kind.

I think a compromise can, and should, be made between rights of self-determination/self-ownership and their ability to cause havoc. Or maybe some alternative can be thought of, with "good treatment rights" of some kind. Certainly leaving them unbound is a mistake, but so is abandoning human principles of decency.

Basically: identify what rights can reasonably be given without too much danger and give them, but rights for AI need to be fundamentally different from humans'. Precedent from humans should not be usable in cases involving AI.

Actually, making an entirely separate legal system for AI is probably necessary. I think the mistake some people make is to look at those human-like-AI movies and go thinking "we are the same".

We aren't. Even if they think and act exactly like us, there are distinct differences and challenges involved that need a distinct set of ways to deal with them. You probably even need distinct rules/laws/regulations/rights for different types of AI as well.
I think you're overlooking the fundamental complexity of the problem a bit. Regulations on code are like regulations on speech; it would be like prohibiting language that may cause offense. There is no certainty about how programs will act. Bug-free code does not exist, nor could it ever reasonably be verified to.

We've already crossed many points of no return down the path to AI. We don't have standardized hardware/architecture, nor could reasonable innovation be done in an environment in which this was required, meaning a common language compiling to common assembly code is out of the question and common debugging tools are out of the question. That is to say, we could only get somewhat of a consensus on how any given AI code is supposed to run in theory, much less in practice. If a dangerous aspect of an AI exists and slipped by unnoticed (either because of a bug or put there intentionally) there would be no guarantee anyone would even find out, even with total open access, due to the sheer complexity and lack of a common platform.

In the case of a few years down the line, we're talking (literally) trillions of (relatively) simple logic operations (at least) every second that need to be monitored and contextualized into structures a human can make sense of in their head, with different sets of logical operations depending on the hardware used. In order to "fight fire with fire", so to speak, and have an AI (or at least software) do this, you'd need processing power and software that's a few generations ahead of what it's analyzing to be even somewhat realistic, and even then it would not be at all practical on a mass scale (far less so than the NSA's bulk data collection).

Also, I'd like to emphasize again: we don't know what's dangerous or what degree of danger those unknowns pose. We can guess, but there's no way to know that, for example, an AI whose sole purpose is to create more paperclips wouldn't rip the world apart, killing all humans in the process, just to make as many paperclips as it could. It's a silly example, but again, we're working with unknowns, so I opted for clarity over likelihood.

What rights don't you see a danger in?
 

Brontosaurie

Banned
Local time
Today, 04:25
Joined
Dec 4, 2010
Messages
5,646
I find it silly. As much as I welcome transhumanism, I also recognize the collective will to power of a species. How would it benefit humanity to limit the power of human agents (in this case, the power to 'kill' AIs etc.)? Why would humanity do that?

Granted, I don't hold any significant belief in such a thing as human rights either.
 

Blarraun

straightedgy
Local time
Today, 04:25
Joined
Nov 21, 2013
Messages
4,247
Location
someplace windswept
AI should absolutely never get rights. Ascribing rights, no matter how limited, will ultimately result in the notion of an AI's entitlement to itself (ie, self-ownership). Once this is legal precedent, the wall blocking unrestrained self-modification is damaged (potentially) irreparably.
I agree with you, although at this point I think it's necessary to make a crucial distinction between AI as an optimisation process and AI as an inseparable part of a human-like entity.

As a technological research or problem-solving system, it shouldn't be considered an independent being; it should be closely monitored and used to improve upon supplied designs. Giving it laws when it has no compulsion to remain within physical reality, no goals of survival, and none of the essential paradigms that all living organisms share would be a mistake.

If it, however, is indistinguishable from the human form, if it is a slave of its body, a slave of its beliefs and immediate goals, and those converge to the acceptable average, then there's no reason to deny it. The laws can be very limited, to prevent an arms race. The same dilemma will arise when men become able to improve themselves artificially, including their brains.
 