More About Alexa and Its AI Siblings

Following up on my previous rant about Alexa in the classroom, here are two good, related articles from Wired on the subject of artificial intelligence that are worth your time to read.

In one, the writer highlights sections of reports to regulators from both Alphabet (Google’s parent) and Microsoft that warn of possible “risk factors” in future products.

From Alphabet:

New products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.

Microsoft was more specific:

AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.

On the other hand, Amazon, in a report to stockholders, is more worried about governments regulating its products than it is about Alexa activating Skynet sometime in the future.

The other post is a long excerpt from a book being published this month called “Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think”.

It covers some pieces of recent history in the development of artificially intelligent products and the difficulty of programming a machine to understand the many ways that humans communicate.

I’m undecided about reading the whole book, but this part of it is worth 15 minutes.


The image is the user interface of HAL, the malfunctioning artificial intelligence from the 1968 film “2001: A Space Odyssey”. It also links to an interesting New York Times story of how the sound of HAL was created.

Hey, Alexa! What Are You Doing In The Classroom?

It’s very hard to escape all the hype around those voice-activated, quasi-AI powered personalities: Amazon’s Alexa, Apple’s Siri, Google’s Assistant.1

And, of course, some people bring up the idea of using them in the classroom.

A couple of weeks ago, I sat in on an ISTE webinar2 by a professor of education who was going to explain how we could use Alexa, what he classified as an “internet of things technology”, for teaching and learning.

Notice his thesis was centered on how to use Alexa with students. Not why.

OK, I can certainly see how there might be a case for hands-free, artificially intelligent devices with certain students, such as those with visual or motor impairments. Maybe even to support students with reading disabilities.

But are these tools that can really help most students learn?

Currently Alexa and her competitors can only answer specific questions, when they aren’t monitoring the room for requests to place an Amazon order. Sometimes getting to those answers takes several attempts (as unintentionally demonstrated in some of the examples) as the human tailors the question format to fit the algorithms inside the box.

(I wonder how students with far less patience than the presenter would react to Alexa’s confusion.)

He also demonstrated some “add-ons” that would allow a teacher to “program” Alexa with what, to my ear, amounted to little more than audible flashcards and quiz-type activities.
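For the curious, here is roughly what one of those “audible flashcard” add-ons amounts to under the hood. This is a hypothetical sketch of my own, not the presenter’s actual add-on: a bare-bones, Lambda-style handler for a custom Alexa skill that answers quiz questions from a hard-coded fact list (the intent and slot names are made up for illustration).

```python
# Hypothetical sketch of an "audible flashcard" Alexa skill handler.
# The fact list, intent, and slot names are invented for illustration.

FLASHCARDS = {
    "capital of france": "Paris",
    "largest planet": "Jupiter",
}

def build_response(speech_text, end_session=True):
    """Wrap plain text in the standard Alexa skill response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context=None):
    """Answer a quiz intent by looking up the spoken 'topic' slot value."""
    request = event.get("request", {})
    if request.get("type") != "IntentRequest":
        return build_response("Ask me a flashcard question.", end_session=False)
    slots = request.get("intent", {}).get("slots", {})
    topic = slots.get("topic", {}).get("value", "").lower()
    answer = FLASHCARDS.get(topic)
    if answer:
        return build_response(f"The answer is {answer}.")
    return build_response("I don't have a card for that.", end_session=False)
```

Which is to say: a dictionary lookup wrapped in a speech envelope. Not much “intelligence,” artificial or otherwise.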

So far, pretty basic stuff. But, when it comes to this supposedly new category of edtech, I have more than a few questions that go beyond how well the algorithm can retrieve facts and quiz kids.

Do we really want to be training our kids how to speak with Alexa (or Siri, or Google)? If we’re going to spend time helping them learn how to frame good questions, wouldn’t it be better if they were working on topics that matter? Topics that might have multiple or open-ended answers?

Instead of two-way artificial conversations with Alexa, how about if the kids learn the art of participating in a meaningful discussion with their peers? Or with other people outside of the classroom?

But if you really want to bring an AI voice into the classroom, why not use it as a great starting point for students to investigate that so-called “intelligence” behind the box?

Let’s do some research into how Siri works. Why does Google Assistant respond in the way it does? Who created the algorithms that sit in the background, and why?

What might Amazon be doing with all the verbal data that Alexa is collecting? What could the company (and others?) learn from just listening to us?

The professor didn’t include any of that in his presentation, or anything related to the legal and ethical issues of putting an always-listening, network-connected device from a third party in a setting with children.

Some people in the chat room brought up COPPA, FERPA, and other privacy issues, but the speaker only addressed questions regarding this complex topic in the final few minutes of the session. As you might expect, he didn’t have any actual answers to these concerns.

Anyway, the bottom line to all this is that we need to treat any suggestion of using Alexa, or any other always-listening device, in the classroom with a great deal of skepticism. The same goes for any other artificially intelligent device, software, or web service used by students.

At this point, there are far too many unanswered questions, including what’s in the algorithms and how the data collected is being used.


I have one of those HomePods by Apple in my house. I agree with the Wirecutter review: it’s a great speaker, especially for music, but Siri is definitely behind Alexa and Google Assistant in its (her?) artificial intelligence. On the other hand, I have more trust in Apple to keep my secrets. :-)

1. I excluded Samsung’s Bixby from that list because I know absolutely no one who has actually used it, despite its release two years ago.

2. You can see the webinar here but you’ll need to have a paid ISTE membership. His slide deck is available to everyone, however.

Alexa: Don’t Screw Up My Kid

Articles about new technologies in the general media usually fall into one of two categories: breathless, this-is-the-coolest-thing-ever puff pieces or those it’s-gonna-kill-you-if-you’re-not-careful apocalyptic warnings. Occasionally writers manage to do both at the same time, but that’s rare.

A recent piece in the Washington Post leans toward that second theme by letting us know right in the headline that millions of kids are being shaped by know-it-all voice assistants. Those would be the little, connected, always-listening boxes like Amazon’s Alexa and Google’s Home that sit unobtrusively on a side table in your home waiting to answer all your questions. Or order another case of toilet paper.

Many parents have been startled and intrigued by the way these disembodied, know-it-all voices are impacting their kids’ behavior, making them more curious but also, at times, far less polite.

Wow. Must be something in a new study to make that claim, right?

But psychologists, technologists and linguists are only beginning to ponder the possible perils of surrounding kids with artificial intelligence, particularly as they traverse important stages of social and language development.


I would say we’re all just beginning to ponder the possibilities, good and bad, of artificial intelligence, for society in general as well as for how it will affect children as they grow.

But are the ways kids interact with these devices any different from technologies of the past?1

Boosters of the technology say kids typically learn to acquire information using the prevailing technology of the moment — from the library card catalogue, to Google, to brief conversations with friendly, all-knowing voices. But what if these gadgets lead children, whose faces are already glued to screens, further away from situations where they learn important interpersonal skills?

I don’t think you need to be a “booster” of any technology to understand that most children, and even some of us old folks, have the remarkable ability to adapt to new tools for acquiring and using information. If you look closely, you might see that many of your students are doing a pretty good job of that already. And those important interpersonal skills? Kids seem to find ways to make those work as well.

Anyway, the writer goes on trying to make his case, adding a few anecdotes from parents, some quotes from a couple of academics, and mentioning a five-year-old study involving 90 children and a robot.

However, in the matter of how children interact with these relatively new, faceless, not-very-intelligent voices-in-a-box, there are a few points he only hints at that need greater emphasis.

First, if your child views Alexa as a “new robot sibling”, then you have some parenting to do. Start by reminding them that it’s only a plastic box with a small computer in it. That computer will respond to a relatively small set of fact-based questions and in that regard is no different from the encyclopedia at the library. And if they have no idea what a library is, unplug Alexa, get in the car and go there now.

Second, this is a wonderful opportunity for both of you to learn something about the whole concept of artificial intelligence. It doesn’t have to get complicated, but the question of how Alexa or Home (or Siri, probably the better known example from popular culture) works is a great jumping off point for investigation and inquiry. Teach your child and you will learn something in the process.

Finally, stop blaming the technology! If a parent buys their child one of these…

Toy giant Mattel recently announced the birth of Aristotle, a home baby monitor launching this summer that “comforts, teaches and entertains” using AI from Microsoft. As children get older, they can ask or answer questions. The company says, “Aristotle was specifically designed to grow up with a child.”

…and then lets it do all the comforting, teaching, and entertaining, the problem is a lack of human intelligence, not the artificial kind.

Disrupting Education

How do we disrupt our education system?

That’s the question asked of Peter Diamandis on a recent podcast2 (jump to the 5:40 mark). Diamandis is not an educator but is considered a “big thinker” as the CEO of the X Prize Foundation and several other out-there ventures. So, he must be a good person to ask about transforming education, right?

He starts off pretty well, by differentiating between the socialization mission of most schools (which has evolved with changes in society over the decades) and the academic learning process, which he says is 150 to 200 years old and “sucks”.

Then he veers way off track.

Diamandis says the solution to fixing classrooms where “half the class is bored, the other half of the class is lost, and even the best teachers can only teach to the median” is to be found in artificial intelligence, AI.

…in the case of education, what I believe is going to happen is that we’re going to develop artificial intelligence systems – AI’s – that are using the very best teaching techniques. Basically an AI can understand a child’s language abilities, their experience, their cognitive capabilities, where they’ve grown up, even know what their experiences are through the days, and give that individual an education that is so personalized, and so perfect for their needs in that moment that you couldn’t buy it. And the beautiful thing about computers and AI is that they can scale at minimal incremental costs.

AI for me is the answer to global dematerializing, demonetizing, democratizing education. We have to separate learning things from actually socialization and being inspired, and so forth. I think humans are going to be a part of that, always will be, but AI is going to be the way I learn something, where an AI can really deliver the information in a way that’s compelling and meaningful.

Of course, Diamandis is equating learning with the content expert-delivered information model that is the norm in most schools, and which he correctly notes has been in place for centuries.

However, his vision of transforming education by essentially removing the human element and replacing it with a set of algorithms makes for a very depressing future.