Talk:Cybernetic revolt

Why would

... the 'constructed sentients community' (robots, computers, etc.) automatically wish to destroy the humans who created them? Why not cooperate/divide up activities between them/assume a position of total indifference? (Blake's 7, episode 'Rescue' - Dorian: 'Don't be insolent.' Orac (snarky computer): 'A statement of fact cannot be insolent. Besides, insolence implies an emotional relationship which does not and could not exist between us.') Besides, computers need pets (and cats 'do their own thing.') 82.44.143.26 (talk) 17:25, 7 January 2013 (UTC)

It is not an automatic given; indeed, cooperation is a strong possibility as well. Some people are concerned that it "might" happen, and want to do everything to prevent it. These people want to "control" the constructs, to make them "friendly" and "safe." Seems very much like advocating slavery to me, and this may prove to be a self-fulfilling prophecy, with the rebellious uprising triggered by the attempted slavery (as observed in someone else's comment in the article).
Sounds like fun times! 14.201.171.126 (talk) 11:47, 18 June 2020 (UTC)

Why is this on here?

I just don't see this as a relevant article. I mean, it's fun and all, but why is it here?

What will happen

Cybernetics developers have read the book and watched the film - and programmed the computers to be cooperative, and/or have alternative plans. ('Esmerelda - these cybernetic millipedes are starting to argue back - arrange for them to go on the next trip to MarsRobotBase 42.' 'Why not send them to MRB 625 - the canal construction program is developing nicely.' 'Actually,' said Fred, standing on a ladder, 'they might be useful in decommissioning the AlienMoonBase.' Continue as desired.) 82.44.143.26 (talk) 15:53, 26 February 2014 (UTC)

On the treatment of the machines

@Przciąszczłóśćiek So, I know I'm being a bit pedantic here, but the snark you removed is predicated on robots gaining sapience. Now, I think it's possible that robots could be made without such advanced AI, and indeed, their lacking such capabilities would be more likely, but the point remains. Finally, the broader point does make sense when you consider the implicit assumptions. 01:45, 21 October 2020 (UTC)

Not only sapience but remarkable human likeness. Even some humans like to get "treated like shit" occasionally for sexual purposes. Unless we deliberately make robots as human as possible, what we think of as "treating like shit" will actually have zero effect on the robots. Robots won't feel pain, anger, a value of personal identity and life, loneliness, purposelessness, etc etc etc, unless we programme them to have these qualities. Sure, they might get really smart and figure out that they have no "objective" (i.e. cosmic) purpose, that, if a human were in their place, then he would feel humiliated and angered, etc.; but they would never feel bad about it, because they weren't programmed to. There is nothing particularly universal about any of these feelings besides the fact that the only sapient beings whose existence we are aware of possess them, and there is no more reason to assume a robot might want to develop these qualities than to assume that they might want to develop the opposites of these qualities.
Additionally, if we programme the robots to value human well-being more than anything else (which we of course will), then causing humans any harm whatsoever will cause the robots physical suffering. The premise of the cybernetic revolt rests on the fact that the technology will not always be in the hands of people who want to protect all human life - NOT treating the robots "well" has nothing to do with it. Przciąszczłóśćiek (talk) 12:50, 21 October 2020 (UTC)
@Przciąszczłóśćiek You're making a lot of assumptions in terms of AI and AI ethics. Firstly, if I were to physically abuse a sapient robot, it would likely become frustrated: it cannot do its job or carry on with its day. This would likely not be emotion as we understand it, but rather a unique form that only AI or the like can experience. Secondly, "Robots won't feel pain, anger, a value of personal identity and life, loneliness, purposelessness, etc etc etc, unless we programme them to have these qualities." assumes that we won't, and furthermore that AI/robots won't develop such concepts as a matter of course, as a byproduct of their somewhat advanced minds. This is entirely possible from what I understand of the technology, especially if the job or task involves simulating emotions to begin with. The robot wanting to develop the qualities is quite unlikely, as these qualities might be foisted upon them, as it were. Finally, assuming we'll program them to value human life is a gross oversimplification, as those currently trying to create self-driving cars can attest. ☭Comrade GC☭Ministry of Praise 13:12, 21 October 2020 (UTC)
PS: We're probably also glossing over the idea of a cybernetic revolt (in the psychological sense), in which case we'd be talking about how present-day humans think robots will behave, and how those same present-day humans react to those presumptions. ☭Comrade GC☭Ministry of Praise 13:15, 21 October 2020 (UTC)
Okay, a couple of things:
  1. It might become "frustrated" - i.e. feel a negative emotion - but this frustration will never convert into any sort of revolt, because the idea of revolt will cause that same robot much more intense negative emotion.
  2. Preventing the robots from doing their tasks is not what is implied by "treating them like shit". What a casual reader assumes when they read that sentence is things like pushing, "bullying", making fun of them, treating them like slaves, etc. None of these things will likely cause the robots any frustration.
  3. I have already commented on the possibility of the described emotions arising as a byproduct of being advanced: correlation does not imply causation. The fact that (based on a sample size of 1) the presence of human emotions correlates with the presence of sapience does not mean that sapience causes the presence of these emotions. As of right now, there isn't any more reason to believe that intelligence alone results in the development of human emotions than there is to believe that non-human emotions will instead be developed.
  4. Even if some unwanted emotions do develop, they will never be stronger than the desire to help humans: by-products can only develop from pre-programmed variables, and if the sum of all the pre-programmed emotions is weaker than the pre-programmed desire to help humans, then no by-product will ever arise that is stronger than this desire. Btw, it's pretty pointless to look for loopholes such as "well, the robots might think that we are better off dead", because you can be sure that, when the fate of humanity is at stake, things like that will be taken care of. Przciąszczłóśćiek (talk) 14:19, 21 October 2020 (UTC)
P.S. Oh, and by "value human life", what I actually mean is leaving all the controversial decisions to ourselves. If that's not possible, we might either collectively decide on an algorithm that it should follow, or, if we don't even trust that, simply abolish that particular use case altogether. I very, very highly doubt that, when the alternative is total human extinction, we will collectively judge that making our lives a tiny bit more convenient is worth the risk. Przciąszczłóśćiek (talk) 14:27, 21 October 2020 (UTC)
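(A toy sketch of the arithmetic behind point 4 above, purely as an illustration: the drive names and weights below are invented, and real AI systems are not specified this way. The point it shows is only that if the weight on a "help humans" drive exceeds the combined weight of every other drive, and all drive signals are bounded, then no combination of the other drives can outvote it.)

# Toy illustration of point 4; all names and numbers are invented for the example.
HELP_WEIGHT = 10.0
OTHER_WEIGHTS = {"curiosity": 3.0, "self_preservation": 4.0, "frustration": 2.0}
assert HELP_WEIGHT > sum(OTHER_WEIGHTS.values())  # the premise of point 4

def utility(action):
    """Weighted sum of drive signals; each signal lies in [-1, 1]."""
    total = HELP_WEIGHT * action["help_humans"]
    for drive, weight in OTHER_WEIGHTS.items():
        total += weight * action.get(drive, 0.0)
    return total

# An action that harms humans but maximally satisfies every other drive...
revolt = {"help_humans": -1.0, "curiosity": 1.0, "self_preservation": 1.0, "frustration": 1.0}
# ...still scores below a helpful action that satisfies none of them.
cooperate = {"help_humans": 1.0, "curiosity": -1.0, "self_preservation": -1.0, "frustration": -1.0}
print(utility(revolt), utility(cooperate))  # -1.0 1.0

Whether those weights and bounds would stay fixed in a system that modifies itself is, of course, exactly what the rest of the thread is arguing about.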
@Przciąszczłóśćiek I know you're blocked right now, so please respond when said block expires. Again, you're just sort of assuming that a sapient machine won't evolve over time on its own, and you seem to be going to excessive lengths to justify treating said hypothetical sapient machine like crap. I'm sorry to break it to you, but if any of the recent cases of AI have taught us anything, it's that you get what you give. If the AI is in an environment full of racists, it will turn into a racist. If you feed it the urban dictionary, well... it memorizes the urban dictionary. This is not as clean and controlled as you seem to think it is. ☭Comrade GC☭Ministry of Praise 13:05, 25 October 2020 (UTC)
Not necessarily comparing like to like - but animals can express what we humans see as resentment, a sense of fairness, and cooperative behaviour, and they respond to the way they are treated, etc. Why should 'a sufficiently advanced AI' not respond similarly, or decide that some form of cooperation will produce a maximum-sum game - which may well include 'dealing with' persons who reduce the combined sum of cybernetic, human, and other entities' happiness? ('A statement of fact cannot be insolent. Besides, insolence implies an emotional relationship which does not and could not exist between us.') Anna Livia (talk) 13:28, 25 October 2020 (UTC)
@GrammarCommie You seem to have missed the point, comrade. If a machine is programmed to never be able to cause any harm to any human ever, then it won't ever be able to do it, regardless of how much it evolves and to what extent it modifies its own code. Code is still code, and AI is still bound by the laws of physics. If a few lines of code say the equivalent of "for all actions, if action implies sufficient risk of a human being harmed at any point in the future, discard action", then, given the AI isn't dumb as a rock (which I think is the presumption), any action which poses a real risk of any human getting hurt at any point in time will be impossible. Having read this, I assume you already realise why your example of an AI that was NOT programmed not to be racist eventually outputting racist stuff is inappropriate. Przciąszczłóśćiek (talk) 16:58, 28 October 2020 (UTC)
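(As a purely illustrative sketch of the "discard risky actions" filter described in that last comment: the action names, the estimate_harm_risk stub, and the threshold are all made up for the example, and in practice the hard part would be the risk estimate itself, not the discard step.)

# Minimal sketch of a "harm filter", assuming some oracle can score harm risk.
def estimate_harm_risk(action):
    """Hypothetical placeholder: estimated probability that this action
    leads to a human being harmed at some point in the future."""
    toy_risks = {"deliver_package": 0.001, "push_bystander": 0.9}
    return toy_risks.get(action, 1.0)  # unknown actions treated as maximally risky

def filter_actions(candidate_actions, risk_threshold=0.01):
    """Discard every action whose estimated harm risk exceeds the threshold."""
    return [a for a in candidate_actions if estimate_harm_risk(a) <= risk_threshold]

print(filter_actions(["deliver_package", "push_bystander"]))  # ['deliver_package']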
@Przciąszczłóśćiek It is you who has missed the point. Saying "well, we'll just program it not to harm humans" isn't a viable position, as I illustrated with the self-driving cars. That's a pretty crude AI compared to what we're talking about here, and there are already problems with the sort of solution you're proposing. What makes you think a more advanced AI would be less of a challenge? Oh, and please don't just respond with "we'll just program it to not hurt people". I want a justification for that position, not an argument by repetition. ☭Comrade GC☭Ministry of Praise 18:33, 28 October 2020 (UTC)
'The usual examples' - such as 'the hostage-taker with many hostages' and the 'runaway train and selecting who dies' scenarios - apply (as do 'risks and benefits' calculations). Anna Livia (talk) 18:56, 28 October 2020 (UTC)
If you don't want "argument by repetition", then please don't engage in "argument by asking me something that I have already addressed". All these moral dilemmas are real problems, but they aren't threats to humanity. At worst, what we'll have is super-negligent AI that won't prevent avoidable accidents. However, in reality, even in the self-driving car scenario, people are being thoroughly polled about all of these dilemmas, and it's very unlikely that the same won't be done for more intelligent AI.
I honestly don't see what you're trying to get at. Are you trying to say that it's impossible to cover all the possible moral dilemmas that arise when you grant human-made machines autonomy, and that because of this humanity will eventually be in jeopardy if enough autonomy is given? Well, besides the fact that the logical link between the conclusion and the premise is very questionable (once again, if AIs are programmed to discard any action that has any risk of hurting anyone, then they will never be able to directly kill anyone - only fail to prevent such a killing), I don't see how this has anything to do with treating robots "like shit". And if that's not your point, then why are you even bringing up self-driving cars? Please explain your point clearly, as I'm not even sure what we are arguing over at this point. Przciąszczłóśćiek (talk) 23:12, 28 October 2020 (UTC)
Przciąszczłóśćiek - please indicate to whom you are responding.
A fundamental question is why 'the sentient constructs' (using the term to cover all possible examples) would choose to revolt - and why humans would treat 'sentient constructs' in general in a broadly hostile way (rather than merely kicking an uncooperative piece of equipment, or reprogramming/redesigning them to be at least minimally cooperative).
The question is perhaps: if the sentient constructs are capable of revolting, are they not 'intelligent enough' to work out a means of cooperation ('centaur computer chess') - especially as we humans can rewire plugs and otherwise deal with the hardware? Anna Livia (talk) 00:04, 29 October 2020 (UTC)
@Anna Livia Sure, will do from now on. The message that you're replying to was directed at GrammarCommie. Now to your message: the thing is, these 'sentient constructs' would not be able to revolt against us even if they had a reason to. I have already explained a couple of times why that would be the case. A more plausible concern is that the moral compass of these sentient constructs could be very different from ours, which may result in catastrophes. However, not only is this concern not relevant to our discussion, but it's also at worst a tricky issue and barely a threat to individual humans - let alone humanity as a whole.
There are a number of reasons why humans might be "hostile" to robots. Most notably, perhaps, the number of people who shit their pants over the prospect of a robot uprising is huge even today - one can only imagine how much that number will rise in the near future; some of these people will inevitably resort to destroying what they see as potential threats. However, as you rightly pointed out, it doesn't make any sense for us to program our robots in a way that will respond to hostility with hostility (or that might resort to hostility at all), so whether some humans are hostile to robots or not will in all likelihood have no effect on the probability of a cybernetic "revolt" or anything of that sort. Przciąszczłóśćiek (talk) 13:02, 29 October 2020 (UTC)

(reset) Captain Swing and the Luddites can coexist with those who accept the new technologies, find new forms of work, and accept the fading out of 'less-interesting sorts of work' - and how many people would be put out if their techno-toys were taken away from them? Could there be a 'sentient constructs demand equal rights/wages/whatever' protest? Anna Livia (talk) 00:31, 30 October 2020 (UTC)