“Possible Minds: Twenty-Five Ways of Looking at AI”, edited by John Brockman
One of the best books on AI right now, in 2019, not because of its technical depth but because it presents twenty-five arguments about different aspects of AI and why there cannot be one unified vision or view. It is one to read precisely because at the end you have more questions than answers, which is in part a Richard Feynman sentiment. It does allow you to explore your own views about AI: you will align with different parts of the viewpoints presented, and indeed you will create a mashup of them all where you feel comfortable.
The takeaway from the book is the framework, not the content. The challenge it lays down is to keep up with the twenty-five themes as they develop, morph, link, combine, fracture and fork; the ones that conflict with your own views and beliefs will be the most difficult, and there are lots of those.
Below are some personal interpretations and thoughts that mean I have more to explore, and I am sorry that each one is about an area that needs further exploration. If you want answers about AI, you have been born in the wrong age; my suggestion is to ask if you can come back in, say, 100 years. This is the time for pioneers, not town planners. If you want to be part of forming views for the next two or three generations to explore in more depth, then this is your book. Pick it up and read every page.
Intelligence, from where do you come and where do you go to hide?
PICK Intelligence OR {chaos, randomness, time}
/* do we see a pattern and call it intelligence? if you do it enough times, does something happening mean intelligence? given enough time everything is possible.
IF OUTCOME (Intelligence) is a_process_to:
TEST: LIST {understand itself, create order, re-create itself, create something else, create something new}
IF TEST is explainable by PICK {chaos, randomness, time} THEN
OUTPUT is INTELLIGENCE
IF OUTPUT = INPUT
INTELLIGENCE is a_process
NEXT QUESTIONS LIST {where does curiosity come from?, why do complex systems look intelligent?}
END
Humanity is the best of our faults!
DEFINE
#MACHINE_WEAKNESS = LIST {emotions, faults, error, mistakes, misunderstandings, bias}
#HUMAN_WEAKNESS = LIST {repeat, know, exact, precise, fact, defined}
TEST FOR each MACHINE_WEAKNESS
TEST FOR each HUMAN_WEAKNESS
TEST LIST {diversity, creativity, love, understanding, innovation, change, adoption, risk, compromise}
IF TEST = TRUE; THEN
HUMANITY_STRENGTH = MACHINE_WEAKNESS
TEST LIST {same, logic, right, copy, truth, one-outcome, efficiency, effective}
IF TEST = TRUE; THEN
HUMANITY_WEAKNESS = MACHINE_STRENGTH
IF ERROR (HUMAN_WEAKNESS = COPY) FOREVER THEN
HUMAN_WEAKNESS = HUMAN_STRENGTH
END
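The inversion in the block above can be sketched in a few lines. The trait lists come straight from the block; modelling them as Python sets, and the final checks, are my own framing:

```python
# A sketch of the strength/weakness inversion above. The trait lists are
# taken from the block; representing them as Python sets is an assumption.
machine_weakness = {"emotions", "faults", "error", "mistakes", "misunderstandings", "bias"}
machine_strength = {"same", "logic", "right", "copy", "truth", "one-outcome", "efficiency", "effective"}

# The block's claim: what the machine lacks is exactly what makes humanity strong,
# and what the machine does perfectly is where humans fall short.
humanity_strength = machine_weakness
humanity_weakness = machine_strength

# The error case: a human who does nothing but copy, forever, has adopted
# a machine strength as a personal weakness.
print("copy" in humanity_weakness)   # → True
print("bias" in humanity_strength)   # → True
```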
How many AIs do you need for Dystopia and Utopia?
/* The thinking here is that your view of the outcome of AI depends on how many AIs you believe there will be, and how many code bases for AI there will be.
DEFINE AI AND {able to choose *the* outcome, is general, writes the Turing test}
RUN scenarios AI (one, few, many)
/* scenarios for how many AIs there are and how many code bases for those AIs
IF AI (one)
TEST {is there one or more code bases for the AI}
IF one = MACHINE WILL WIN, RUN FOR THE HILLS
DYSTOPIA = TRUE
IF few = MACHINE WILL WIN, RUN FOR THE HILLS
DYSTOPIA = PROBABLY
IF many = MACHINE WILL WIN, RUN FOR THE HILLS
DYSTOPIA = POSSIBLE
IF AI (few)
TEST {is there one or more code bases for the AI}
IF one = MACHINE WILL WIN, RUN FOR THE HILLS
DYSTOPIA = TRUE
IF few = MACHINE CAN WIN, THINK ABOUT THE HILLS
DYSTOPIA = POSSIBLE
IF many = MACHINE MIGHT WIN, FLIGHT
UTOPIA = UNLIKELY
IF AI (many)
TEST {is there one or more code bases for the AI}
IF one = MACHINE MAY WIN, YOU CANNOT RUN
DYSTOPIA = TRUE
IF few = MACHINE CANNOT WIN, ENJOY
UTOPIA = POSSIBLE
IF many = HUMANS IN CONTROL
UTOPIA = PROBABLY
PRINT RESULTS.
END
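The scenario runs above amount to a lookup table over (number of AIs, number of code bases). A minimal Python sketch, with the outcome labels taken directly from the block (the dictionary layout and function name are my own):

```python
# Sketch of the dystopia/utopia scenario matrix above.
# Keys are (number of AIs, number of code bases); values are the block's outcomes.
OUTCOMES = {
    ("one", "one"):   ("DYSTOPIA", "TRUE"),
    ("one", "few"):   ("DYSTOPIA", "PROBABLY"),
    ("one", "many"):  ("DYSTOPIA", "POSSIBLE"),
    ("few", "one"):   ("DYSTOPIA", "TRUE"),
    ("few", "few"):   ("DYSTOPIA", "POSSIBLE"),
    ("few", "many"):  ("UTOPIA", "UNLIKELY"),
    ("many", "one"):  ("DYSTOPIA", "TRUE"),
    ("many", "few"):  ("UTOPIA", "POSSIBLE"),
    ("many", "many"): ("UTOPIA", "PROBABLY"),
}

def run_scenario(ais: str, code_bases: str) -> tuple:
    """Return (outcome, likelihood) for a count of AIs and of code bases."""
    return OUTCOMES[(ais, code_bases)]

# PRINT RESULTS
for key, value in OUTCOMES.items():
    print(key, "->", value)
```

One thing the table makes visible: a single code base is dystopian in every row, which is the block's real argument for diversity of implementations.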
Signals and responses
/* humans can sense and respond; a very low-level form of gossip/chat
/* the human ability to mash up senses and experience to understand complex ideas
QUESTION TEST [is the object that the human is carrying heavy?]
TEST (human)
IF VISUAL TEST LIST OR {bent knees, sweat, eyes popping, strain, size, colour} = TRUE
IF SOUND TEST LIST OR {cry, grunt, heavy breathing, panting, expression, words} = TRUE
IF EXPERIENCE TEST LIST OR {done this before, seen this before, heard about it before} = TRUE
MORE TRUE = OBJECT IS HEAVY
TEST (machine)
IF VISUAL LIST OR {is a human in my visible range } = TRUE
IF SOUND TEST LIST OR {is there any sound} = TRUE
IF EXPERIENCE TEST LIST OR {where is the data} = TRUE
MORE TRUE = DON’T HAVE A CLUE
REPEAT TEST WHEN QUESTION [is that kiss affectionate?]
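The human test above works by accumulating weak cues until enough of them agree ("MORE TRUE = OBJECT IS HEAVY"). A minimal sketch of that vote, using cue names from the block; the threshold of two agreeing channels is an arbitrary assumption:

```python
# Sketch of multi-signal evidence aggregation: each cue channel casts one vote,
# and enough votes tip the judgement. The threshold of 2 is an assumption.
def looks_heavy(visual: set, sound: set, experience: set) -> bool:
    cues = {
        "visual": {"bent knees", "sweat", "strain"},
        "sound": {"grunt", "heavy breathing", "panting"},
        "experience": {"done this before", "seen this before"},
    }
    votes = sum([
        bool(visual & cues["visual"]),
        bool(sound & cues["sound"]),
        bool(experience & cues["experience"]),
    ])
    return votes >= 2  # MORE TRUE = OBJECT IS HEAVY

print(looks_heavy({"sweat"}, {"grunt"}, set()))         # → True  (two channels fire)
print(looks_heavy({"a human in range"}, set(), set()))  # → False (no usable cues)
```

The second call is the machine's position in the block: it can see a human and hear sound, but without cues grounded in experience the votes never accumulate.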
Machines and who picks the rules
We know that machines can learn to play games without being told the rules. The old thinking was that giving the machine the rules would make it play faster and better. The next step (deep learning) was to enable the machine to explore and discover our boundaries/rules for itself. The example here is Space Invaders.
The assumptions here are that we allow the machine access to the controls (to play the virtual Space Invaders) and that the impact of its choices causes no harm.
1. Who decides and controls whether the machine can operate the controls?
2. What happens when there are no human controls?
3. What happens when the harm is real?
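A minimal sketch of learning without being given the rules: this is not the actual Space Invaders setup (which used deep Q-networks on screen pixels) but a toy one-dimensional environment of my own invention, where tabular Q-learning discovers the goal from reward alone.

```python
import random

random.seed(0)

# Toy environment, invented for this sketch: positions 0..4, reward only at 4.
# The agent is never told "move right wins"; it must discover that from reward.
def step(pos: int, action: int) -> tuple:
    new_pos = max(0, min(4, pos + (1 if action == 1 else -1)))
    return new_pos, (1.0 if new_pos == 4 else 0.0)

q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}  # the agent's entire knowledge

for _ in range(2000):                    # episodes of blind trial and error
    pos = 0
    for _ in range(20):
        action = random.choice((0, 1))   # explore at random
        new_pos, reward = step(pos, action)
        # Q-learning update: fold the observed reward back into the table
        best_next = max(q[(new_pos, 0)], q[(new_pos, 1)])
        q[(pos, action)] += 0.5 * (reward + 0.9 * best_next - q[(pos, action)])
        pos = new_pos
        if reward:
            break

# The learned policy: "move right" at every non-goal position,
# discovered without ever seeing the rules.
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)
```

Note that the sketch quietly makes both assumptions from the paragraph above: the agent has direct access to the controls, and its mistakes cost nothing. The three questions are about what happens when neither holds.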
Comparing Apples and Oranges for Ethics
Humans have this ability to deal with risk, unknowns and grey areas. We cope with boundaries that move, interpretations that are fleeting and translations that vary. Agreement is never absolute. Alas, these traits don't work for machines.
"Surly ... if you know everything it becomes easier to say yes and no; when you can always find a reason why. But then the reason needs a reason?"