wasting bandwidth since 1999

Tag: information (Page 1 of 3)

World (Information) Domination

Many writers marvel at this age of information. A large and growing collection of the world’s knowledge is now available to anyone with an internet connection. Think of the learning, the transparency, the wisdom.

The reality, of course, is that information is largely filtered through web search engines – mostly Google. And many governments around the world are working to control that filter.

Specifically, they are trying to force Google and other search companies to hide results that they or their citizens find objectionable for one reason or another. Not just in their own countries, but worldwide. The so-called “right to be forgotten”.

The executive director of the Wikimedia Foundation, parent of Wikipedia, is worried that this “creates ugly precedents that could jeopardize the future of our open and free Internet”.

If any country can demand the worldwide removal of search results, vast sections of history, science and culture could disappear from the global Internet. This could infringe on our ability to learn about the history of Tiananmen Square, the potential medical properties of cannabis, the discoveries of Darwin, or unsavoury allegations against the U.S. president-elect.

If every country had the chance to punch memory holes in the Internet, we would swiftly find ourselves with history scrubbed of essential records. Politicians could challenge ugly but accurate charges. Corporations could erase histories of fraud and double-dealing. The implications are unprecedented.

She uses the example of a case before the Canadian Supreme Court in which one company is trying to force Google to hide information about a competitor. But that’s certainly not the only one.

France’s data protection authority is also demanding Google “apply the French balance between privacy and free expression in every country by delisting French right to be forgotten removals for users everywhere”. Other governments in Europe and elsewhere are watching closely.

Here in the US, there are debates over whether we should have a “right to be forgotten” online, similar to the right established for European citizens by the EU’s Court of Justice in 2014. However, be careful what you wish for.

The unintended consequences of “forgetting” history are just now starting to emerge. Like handing a private company the power to censor information. Or giving government agencies and politicians control over the information sources available not just to their own citizens, but to the rest of the world.

Our Own Private Kitchen Strainer

In his book Too Big To Know, David Weinberger makes this excellent observation about information filters and information overload:

First, it’s unavoidably obvious that our old institutions are not up to the task because the task is just too large: How many people would you have to put on your library’s Acquisitions Committee to filter the Web’s trillion pages? We need new filtering techniques that don’t rely on forcing the ocean of information through one little kitchen strainer.

It may be obvious to Weinberger and others that our old expert-based systems for filtering information are no longer adequate, but apparently not to the leadership here in our overly large school district (and, I suspect, elsewhere in the American education system).

We have some very specific kitchen strainers that attempt to inhibit teachers and others from using most digital resources until they have been blessed by the right people. A process that often takes months, discourages most teachers from even making the attempt, and is roundly ignored by many.

Part of that process includes very small teams of specialists who spend a lot of time carefully collecting and analyzing resources for a list of approved instructional products, or writing and editing materials lovingly added to the “curriculum assessment resource tool”, our homemade database of “approved” instructional materials (and magic test generator). Everything, of course, must be filtered through the specific schemes for classifying the knowledge dispensed in the classroom, as established by the district or state.

Although a section was added to that database this year allowing teachers to share materials they’ve created, which is a step in the right direction, it has not been particularly popular. I suspect a large part of that is because the teachers who really want to share their work and ideas have already found much better tools available on the open web.

When presented with a choice, a rigid and very closed environment really won’t appeal to those educators who have already discovered the value of sharing in the world outside their schools.

Our Information Stinks

As a guest writer in the Post’s Answer Sheet blog points out, a good deal of the debate over education reform in the past two decades has centered around two concepts: choice and accountability.

Choice, of course, usually comes back to charter schools and vouchers, and accountability may as well be a synonym for standardized testing, since almost no other idea of what it means to assess student learning seems to be considered.

Neither has done much to improve American education, and both have probably done a great deal of harm by narrowing the discussion of what public schools are and should be. But why have choice and accountability not lived up to their claimed potential?

Critics have a whole host of explanations, some of which are quite compelling, and some of which are burdened by political agendas. But the simplest answer, which also happens to be true, is that both movements are dependent on good information about school quality. And, frankly, our information stinks.

Both of these models, of course, are dependent on accurate information about school quality.  Whether parents have the power or accountability officers do, the central assumption is the same: that we can measure school quality precisely enough to make high-stakes decisions.

As the writer correctly points out, “standardized test scores provide a very narrow picture of what happens inside schools”. As for charter schools and most private schools, they aren’t doing much, if anything, different from the public schools. They are working with a selected group of students whose parents are very motivated.

He concludes with a list of five criteria for rating schools that, while certainly not perfect, would be a much better alternative to test scores.

I especially love number one, which asks how much time students spend on art, music, and other creative activities, and number five, which asks how well the education students received served them five to ten years later.

However, back here in our real world, this is the unfortunate bottom line of current education policy in this country:

Test scores, as many parents and policymakers already know, are misleading. But they aren’t going away. They aren’t going away in state or federal decision-making. And they aren’t going away in the role they play in parental decisions about school choice. In fact, the opposite is happening: test scores are insidiously taking hold in policy discourse and among the public as a perfectly acceptable measure of quality. They aren’t. And, as such, it is our job not only to resist narrow and simplistic measures of educational quality, but to demand access to the data we really need – information that allows us to make thoughtful decisions about our schools.

Thoughtful decisions about our schools. Wouldn’t that be a nice change?

The Case for Daily News

Ok, kiddies, let’s start with a little history lesson.

Not too long ago, many middle class households in this country (like the one in which I grew up) basically got their news twice a day. In the morning, a stack of paper containing a summary of the previous day’s important events (and lots of advertising) was delivered to the house and usually scanned over breakfast.

In the evening, many of those same people gathered with the family to watch a 30-minute summary of the news considered important by the big three television networks, presented by people named Walter, Tom, Peter, or possibly Barbara or Diane.

Today, of course, it’s a well-worn cliche that we live in a 24/7 world of information, with an unknown number of websites and at least three cable television channels, all spitting out an unbroken stream of raw data. 

The problem is that very little of what comes through that stream rises to the level of valuable, or even useful.

Author Theodore Sturgeon said that 90% of everything is crap (aka Sturgeon’s Law). He was responding to critics who derided the low quality of science fiction, arguing that the vast majority of books, movies, consumer goods, and more also fell into that category. He was writing in the 1950s, but more than half a century later, we should probably raise Sturgeon’s valuation of media and products closer to 99%.

Although the newspapers, network news programs, and weekly news magazines of the past century were not perfect, they did serve as an information filter and presented a relatively accurate picture of current events. Even if it did take most of those organizations a long time to catch up with major societal shifts like civil rights and the Vietnam war.

I’m certainly not advocating for returning to a time when a few news outlets determined what we should know and when. Those traditional institutional filters are rapidly falling apart, and as David Weinberger, Clay Shirky, Howard Rheingold, and other smart observers of the trend are saying, we need to develop our own network of filters to help us identify that rare 1% of the data flow that actually provides knowledge, insight, and value.

I simply think there’s a case to be made for that 1970s model of twice-a-day news consumption, especially during major news events like the recent hunt for those responsible for the bombings at the Boston Marathon. While many people watched one or more of the cable news channels or refreshed their browsers at media sites for hours on end, the river of material coming from television and the web was largely a waste and regularly hit 100% crap. It’s actually worse on a “normal” news day.

In addition to creating better filters for ourselves, we also need to do a much better job of helping our kids learn how to filter the flow and separate the small nuggets of useful information from the huge sludge pile of raw data that flows from today’s media (“crap detecting” in Rheingold’s language, by way of Hemingway and Postman). There is no better skill for us to teach our students during the time they spend in our classrooms.

Buyer Beware

As I’ve mentioned in other rants, I speak to many groups on the topic of managing information while on the go and using multiple devices to do it. While each person needs to figure out the process that works best for them, almost everyone now depends on interconnected services and applications that can sync to some kind of storage in the now-legendary cloud.

It turns out those web-based services are not yet to the point of being completely dependable. Case in point: back in March, Google pretty much lopped off one of the cornerstones of the information management process I use and advocate when they announced the shutdown of Reader, their service that is the “cloud” behind (above?) many, if not most, RSS aggregator applications. Which means that millions of us who depended on Reader (plus more than a few software publishers) are looking for alternatives before July 1.
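For anyone wondering what those aggregator apps actually do, the basic loop is pretty simple: poll each feed’s URL, compare the entries against the ones you’ve already seen, and surface whatever is new. Here’s a rough sketch in Python (the feedparser library and the feed URLs are stand-ins for illustration, not my actual setup); the part Reader really handled was keeping that “already seen” state in sync across all your devices and apps.

```python
# Illustrative only: a bare-bones RSS polling loop, not a Reader replacement.
# Assumes the third-party feedparser package (pip install feedparser).
import feedparser

FEEDS = [
    "https://example.com/feed.xml",  # hypothetical feed URLs
    "https://example.org/rss",
]

# Reader's real value was syncing this "already seen" state across devices.
seen_ids = set()

def check_feeds():
    """Fetch each feed and print any entries we haven't seen before."""
    for url in FEEDS:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entry_id = entry.get("id") or entry.get("link")
            if entry_id and entry_id not in seen_ids:
                seen_ids.add(entry_id)
                print(entry.get("title", "(no title)"), "->", entry.get("link", ""))

if __name__ == "__main__":
    check_feeds()
```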

Last week my process potentially took another hit when the developer of Instapaper, another application I depend on every day, posted that he was selling the popular read-later service. Considering how many small web/app companies have disappeared lately because their new owners wanted the people and technologies* but not the product, I had reason to be concerned.

However, there’s a big difference between this announcement and Google’s. Instapaper’s owner was very up front and transparent about the sale. Between posts to his blog and discussions on several podcasts, he made it clear that his first concern was for the users of the service. A core part of the deal was that the development of Instapaper continue.

It remains to be seen if everyone involved follows through on this plan, but the situation illustrates the big difference between Google and this individual developer (other than the fact that one is an 800-pound gorilla).

Google’s business is selling advertising, and its users (and the data they generate) are the product being sold. The shutdown of Reader is one more sign that the leaders of the company have decided anything not generating revenue must be changed or deleted.

Maybe not something to worry about, but certainly something to consider before you begin to rely on a product, service, or app (from Google or any other company) that may disappear on short notice.


*One of the latest examples is Posterous, a simple blogging site that was bought by Twitter in 2012 and shut down a few days ago.
