Last December, the Washington Post published an article titled “The surprising thing Google learned about its employees — and what it means for today’s students.” After years of hiring based on “algorithms to sort for computer science students with top grades from elite science universities,” Google asked an important question: those credentials may be fine for hiring, but do they hold water when one looks at employee success records? Using data on promotions, awards and dismissals since its founding in 1998, Google found just the opposite.
Project Oxygen shocked everyone by concluding that, among the eight most important qualities of Google’s top employees, STEM expertise comes in dead last. The other seven characteristics of success at Google are all soft skills: being a good coach; communicating and listening well; possessing insights into others (including others’ different values and points of view); having empathy toward and being supportive of one’s colleagues; being a good critical thinker and problem solver; and being able to make connections across complex ideas. (Washington Post, December 20, 2017)
A new interest in these “soft skills” has taken the education market by storm. I was reminded of the Post article when I received the results of a recent survey conducted by Adobe. The company had asked users of its Creative Cloud suite, “What are the barriers to teaching creative problem solving in schools today?” Seventy-nine percent replied “lack of time to create.” No argument here. It was the next six reasons that sent shivers down my spine.
- Lack of educator training for new software (77%)
- Lack of access to software in classrooms (73%)
- Lack of student software at home (73%)
- Outdated standardized testing requirements (72%)
- Lack of access to hardware in classrooms (71%)
- Lack of student access to hardware at home (70%)
Our ancestors, since the dawn of time, came up with ideas without the benefit of CPUs and algorithms. When a cave dweller observed that it was easier to push a round stone than a flat one, no Apple computer or Adobe Creative Suite was involved. Today, machines help us design and manufacture safer and more efficient tires, but not a single machine came up with the concept of racing stripes or of using nitrogen instead of ordinary air to maintain stable tire pressure.
Hardware and software are not tools for coming up with ideas; they implement ideas. My favorite example: Dan Bricklin, an MBA student at Harvard, gazed at an NBI word processor and wondered, “Can we digitally manipulate numbers the same way?” Thus was born the electronic spreadsheet.
In 2010, at a TEDx session in Conejo, California, Rudy Poe, co-founder of the ImagineIt Project, opened his talk by asking attendees to look around the room. He then reminded them that everything in that room had, at one time, existed only in someone’s imagination. I know what you’re thinking: “Dr. ESP, we are only at the threshold of artificial intelligence. How can you possibly know what computers and software will be able to do in the future?”
You’re right. I don’t. But I do know there is a difference between artificial intelligence and artificial imagination. Artificial intelligence is defined as:
the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
My smartphone can visually recognize objects, comprehend speech and translate between languages. I would not call it intelligent. The only task listed in this definition that approaches mental agility is decision-making. But a machine’s “brilliance” depends on the range and accuracy of the information it has been fed by humans or has gathered from other sources fed by humans. My phone tells me that the gray organism with the long facial appendage is an “elephant.” It did not come up with the term “elephant,” or with the name of anything else it perceives.
If you really believe what machines can do is “artificial intelligence,” then you have to believe that a human, given the ability to process the same amount of data at an equivalent speed, would make the same decision as a computer in every case; that the dividing line between man and machine is nothing more than capacity and velocity. I am not willing to accept that. Which brings me back to teaching critical thinking. The 79 percent who cited lack of time are correct. First, find time to focus on this skill. Then use those occasions to train students to develop new information rather than rely on what we already know. Emphasize questions, not answers. What else do I need to know before I try to solve this problem? What if the current knowledge base is wrong or not really relevant? What if the problem is misstated? What am I missing?
What are you missing? Above all else, the opportunity to approach and respond to a situation in a way no computer or software package ever could.
For what it’s worth.
Dr. ESP
Right on point. But now we really are behind the “power curve” of artificial intelligence, which has already been institutionalized in deeply disturbing ways. Check the latest on the successor to the Mercers’ “Cambridge Analytica”: https://bellacaledonia.org.uk/2018/03/20/scl-a-very-british-coup/
Insightful and well presented.