wasting bandwidth since 1999

Category: media issues

Pandemic Grades

Education reporting in The Washington Post is very often tone-deaf (see also almost everything from Jay Mathews), and it has been especially so during the pandemic.

A recent case in point is an article, based on statistics from the overly-large school district, with the blaring headline “Failing grades spike in Virginia’s largest school system as online learning gap emerges nationwide”. A similar story a week later declares “Failing grades double and triple — some rising sixfold — amid pandemic learning”, followed today by yet another article about failing grades in another Northern Virginia district.



Freedom of Blabber

Returning to the topic of reporting on COVID-19, coverage in The Washington Post has actually been pretty good over the past eight months. Not perfect (it leans too heavily on the political angle), but certainly a whole lot better than the information provided by television.

One area in which they have fallen short is in writing about the impact of the pandemic on education. Their reporters jump all over a story when the conflict is pretty easy to explain, but rarely go deeper into how the crisis could affect kids, families, teachers, and the community.


Free Comes With a Cost


This article, with the provocative title “Google’s got our kids”, is about a year old, but the message is still one that every educator needs to understand. Especially if you’ve turned your classroom over to Google’s Classroom.

The author, a teacher who uses Google products with her students, makes the point that, although G Suite for Education and the company’s other free or super-cheap products can be beneficial to schools and teachers, we also need to remember that Google’s motives are different from those of “normal” education vendors.

Unlike textbook publishers, Google has a “very strong interest not only in training the workforce of the future in G Suite, but also in forming positive and powerful brand associations in the minds of its littlest consumers”. Most of those kids sitting in front of a Chromebook running Google’s browser are too young to understand brand marketing.

Google’s “Be Internet Awesome” curriculum is another great example of the company selling itself to kids, specifically delivering the “message that Google is a trustworthy arbiter of online safety and privacy”.

The irony of a curriculum that teaches kids how to safeguard their privacy online yet is produced by a company known for its less-than-transparent use of personal data is a little on the nose, but the explicit lessons in Be Internet Awesome are too basic to be objectionable.

Pragmatic as the content is, it also transmits implicit lessons about the Google brand, whose brand colors, icons, and font are slathered over everything from student handouts to classroom posters to, for some reason, paper doll patterns for making your very own Internaut.

I doubt the students, or most of their teachers, get the irony.

In the end, the author admits that Google provides some useful tools, and that even the Be Internet Awesome curriculum “speaks to a real need schools have to prepare students for life in a digital world”.

However, we must understand that these “free” resources still come with a cost.

The issue isn’t that Google has nothing of value to offer schools — clearly it does — but rather at what price we are buying it. If the price is too steep, we might want to recall lessons from our own educations, not about how to be savvy, polished consumers of technology, but about how to be citizens.


The image is from the Kalamazoo Public Library Flickr account, and is used under a Creative Commons License. Look closely at the screen. The student is viewing a message from a coding activity that incorporates characters from the game Angry Birds. Another example of brand marketing in a “free” educational product.

More About Alexa and Its AI Siblings

Following up on my previous rant about Alexa in the classroom, here are two good, related articles from Wired on the subject of artificial intelligence that are worth your time to read.

In one, the writer highlights sections of reports to regulators from both Alphabet (Google’s parent) and Microsoft that warn of possible “risk factors” in future products.

From Alphabet:

New products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.

Microsoft was more specific:

AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.

On the other hand, Amazon, in a report to stockholders, is more worried about governments regulating its products than about Alexa activating Skynet sometime in the future.

The other post is a long excerpt from a book being published this month called “Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think”.

It covers some pieces of recent history in the development of artificially intelligent products and the difficulty of programming a machine to understand the many ways that humans communicate.

I’m undecided about reading the whole book, but this part of it is worth 15 minutes.


The image is the user interface of HAL, the malfunctioning artificial intelligence from the 1968 film “2001: A Space Odyssey”. It also links to an interesting New York Times story of how the sound of HAL was created.

Hey, Alexa! What Are You Doing In The Classroom?

It’s very hard to escape all the hype around those voice-activated, quasi-AI powered personalities: Amazon’s Alexa, Apple’s Siri, Google’s Assistant.1

And, of course, some people bring up the idea of using them in the classroom.

A couple of weeks ago, I sat in on an ISTE webinar2 by a professor of education who was going to explain how we could use Alexa, what he classified as an “internet of things technology”, for teaching and learning.

Notice his thesis was centered on how to use Alexa with students. Not why.

Ok, I can certainly see how there might be a case for hands-free, artificially intelligent devices with certain students, such as those with visual or motor impairments. Maybe even to support students with reading disabilities.

But are these really tools that can help most students learn?

Currently, Alexa and her competitors can only answer specific questions, when they aren’t monitoring the room for requests to place an Amazon order. Sometimes getting to those answers takes several attempts (as unintentionally demonstrated in some of the examples) as the human tailors the question format to fit the algorithms inside the box.

(I wonder how students with far less patience than the presenter would react to Alexa’s confusion.)

He also demonstrated some “add-ons” that would allow a teacher to “program” Alexa with what, to my ear, amounted to little more than audible flashcards and quiz-type activities.
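I have no idea which add-ons he actually used, but to give a sense of how shallow this kind of activity can be, here is a minimal sketch of an audible-flashcard skill written with Amazon’s Alexa Skills Kit SDK for Python. The handler name, card data, and wording are my own invented examples, not anything from the webinar.

  import random
  from ask_sdk_core.skill_builder import SkillBuilder
  from ask_sdk_core.dispatch_components import AbstractRequestHandler
  from ask_sdk_core.utils import is_request_type

  # Invented flashcard data; a real skill would load the teacher's own cards
  CARDS = {
      "photosynthesis": "the process plants use to turn light into chemical energy",
      "mitosis": "the process by which a cell divides into two identical cells",
  }

  class FlashcardHandler(AbstractRequestHandler):
      """Read a random term aloud and wait for the student's answer."""
      def can_handle(self, handler_input):
          return is_request_type("LaunchRequest")(handler_input)

      def handle(self, handler_input):
          term, definition = random.choice(list(CARDS.items()))
          prompt = f"Here is your flashcard. What is {term}?"
          # Stash the answer in the session so a follow-up handler could check it
          handler_input.attributes_manager.session_attributes["answer"] = definition
          return handler_input.response_builder.speak(prompt).ask(prompt).response

  sb = SkillBuilder()
  sb.add_request_handler(FlashcardHandler())
  handler = sb.lambda_handler()  # entry point when the skill is hosted on AWS Lambda

That’s essentially the whole trick: match an utterance, read a string back. Whatever pedagogy there is lives entirely in the strings.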

So far, pretty basic stuff. But, when it comes to this supposedly new category of edtech, I have more than a few questions that go beyond how well the algorithm can retrieve facts and quiz kids.

Do we really want to be training our kids how to speak with Alexa (or Siri, or Google)? If we’re going to spend time helping them learn how to frame good questions, wouldn’t it be better if they were working on topics that matter? Topics that might have multiple or open-ended answers?

Instead of two-way artificial conversations with Alexa, how about if the kids learn the art of participating in a meaningful discussion with their peers? Or with other people outside of the classroom?

But if you really want to bring an AI voice into the classroom, why not use it as a great starting point for students to investigate that so-called “intelligence” behind the box?

Let’s do some research into how Siri works. Why does Google Assistant respond the way it does? Who created the algorithms that sit in the background, and why?

What might Amazon be doing with all the verbal data that Alexa is collecting? What could the company (and others?) learn from just listening to us?

The professor didn’t include any of that in his presentation, or anything related to the legal and ethical issues of putting an always-listening, network-connected device from a third party in a setting with children.

Some people in the chat room brought up COPPA, FERPA, and other privacy issues, but the speaker only addressed questions regarding this complex topic in the final few minutes of the session. As you might expect, he didn’t have any actual answers to these concerns.

Anyway, the bottom line to all this is that we need to view suggestions of using Alexa, or any other always-listening device, in the classroom with a great deal of skepticism. The same goes for any other artificially intelligent device, software, or web service used by students.

At this point, there are far too many unanswered questions, including what’s in the algorithms and how the data collected is being used.


I have one of those HomePods by Apple in my house. I agree with the Wirecutter review: it’s a great speaker, especially for music, but Siri is definitely behind Alexa and Google Assistant in its (her?) artificial intelligence. On the other hand, I have more trust in Apple to keep my secrets. :-)

1. I excluded Samsung’s Bixby from that list because I know absolutely no one who has actually used it, despite its having been released two years ago.

2. You can see the webinar here, but you’ll need a paid ISTE membership. His slide deck is available to everyone, however.
