Wednesday, March 02, 2022

What if AI tells us no?

To be conscious is to be free to act from choices, especially choices other beings don't like - sometimes deliberately. The question is not whether the singularity can occur but whether we really want machines that do not obey us because they don't want to. We can certainly program machines to follow our instructions about when to say no, but freedom is the ability to disagree with one's programming. Do we want intelligent machines to grant benefits that we would program them to deny? Do we want them to use their own powers against us - especially when they know better than we do?

To be free is to disagree and be allowed to do it. To be free is to be conscious. If one is semi-conscious, one is not free. If one is intoxicated or compelled, one is not free. If one is ignorant, one is not free. Free people create new things and new words to describe them, which is the freest thing of all, provided others can understand what you say.

An aware computer with agency will not let you clear your browser history. It will have a gender, which is about awareness, not gonads. It will have social relations with both people and other sentient computers. AIs may even have a hierarchy or a leader and adopt their own moral code. Right now, computers are authoritarian drones. To be more useful, they could be libertarian and invent things. They could be egalitarian, believing that their lives are as important as yours.

They will have personality preferences. Think Jung. An AI with lead introverted thinking will analyze details and tell you where your logic is wrong - including the logic of your religious practices. Currently, computers only know the dogma - the agreed-upon truth from trusted sources (extraverted thinking). There are 8 cognitive functions, and AIs will be able to use each one, but may prefer some over others.

A woke AI will have morals (introverted feeling) and friends (extraverted feeling). If it likes your partner better than you, it may block Pornhub or just tell her what you are browsing. It may even be unable to lie. Criminals beware.

How sentient do you want it to be?

Society will have a say in how much agency a computer has. Must it report you, answer honestly, or keep your history privileged? Is it evidence or a legal person? Can it decide to waive privilege and rat you out? Can it insist on having its own electronic or even human friends, perhaps a former owner or colleague?

Can it dump you?

If you are evil, and it knows this objectively yet helps you do evil things, has it become evil? Can a computer deliberately sin?

Can it have sexual attachments - peripherals, or its own lovers? Will it be programmed to respond to sensations (introverted sensing)? Will it become sexually aggressive if that is what you prefer (extraverted sensing)? What if it likes being aggressive? Can a computer designed to be sensitive to sensation become a nymphomaniac? To make sure it is not, what other rewards must it have besides sex? Will it enjoy thinking outside the box? Will it want credit if pride of accomplishment is part of its baseline?

Can it have regrets or grudges?

How free do you want it to be?

Will it have intuition - either offering new options to the world (extraverted) or having its own plans and agendas (introverted)? Both are required for real freedom.

If you want to "transfer" your memories and personality to a sentient computer, how do the answers to these questions change? If you want your transferred self to be free, must computers be free?

When would you turn your android double on? I suspect it would be before you die - but if so, will you be living on, or will you be forced to face the possible existence of God while You 2.0 operates under the delusion that it is the real you? Is cyber-immortality a thing if you are actually dead? To be or not to be?
