MOOCs Repurposed!

Dear Commons Community,

Jeffrey J. Selingo, author of MOOC U: Who Is Getting the Most Out of Online Education and Why, had an article yesterday on MOOCs in a special section on education in the New York Times. The title of the piece is “Demystifying the MOOC,” but I believe a better title would have been something like “MOOCs Repurposed.” He covers what is now familiar ground, namely the rise and hype of MOOC technology, the backlash in 2013, and its future direction. I covered some of these same issues earlier this year for University Outlook. Here is an excerpt from the Selingo article:

“The companies that rode to fame on the MOOC wave had visions (and still do) of offering unfettered elite education to the masses and driving down college tuition. But the sweet spot for MOOCs is far less inspirational and compelling. The courses have become an important supplement to classroom learning and a tool for professional development.

They are instruments in what George Siemens, who co-taught the first MOOC, in Canada in 2008, calls “the shadow learning economy,” which happens alongside formal education, much in the way textbooks supplement courses.

That is the success story for massive open online courses as they graduate from the hype cycle’s “trough of disillusionment” into the “slope of enlightenment,” on their way to the “plateau of productivity.”

In essence, MOOC technology will be integrated into blended learning environments that allow for extensive interaction with well-financed course content, developed by MOOC providers and others, that faculty can use as they see fit. Whether MOOC companies can develop this new role into viable financial enterprises remains to be seen.

Tony

Interview with Megan Smith: Chief Technology Officer of the U.S.A.

Dear Commons Community,

The New York Times Magazine features an interview with Megan Smith, Chief Technology Officer of the United States and former Google executive. Her responses include several kernels of wisdom regarding technology. For example (the questions are in bold typeface):

“Your office is working to make large sets of government data public. What’s the hope there? Scientists and universities and the general public can do extraordinary things with it. It could be weather or climate data; it might be data from the Department of the Interior or NASA or water data. Whole industries are being built from things that taxpayers have helped the government know.”

“How did you get onboard early with science and engineering? I went to an inner-city school in Buffalo. We had no money. But our teachers believed in hands-on active learning — there was a mandatory science fair, which was critical. We just had to do this stuff.”

“As a grad student in the ’80s, you helped build a solar car, and today all we have are iPhones that we use mostly for playing Candy Crush. Do you feel as if tech has lost its way? It’s going both ways — look at Tesla and look at where the mainstream car companies are going.

But it’s taking a long time. Technology always takes longer than you think, but it comes.”

I agree with several of Ms. Smith’s comments, namely that in education, active learning and hands-on work are critically important, more so than the test-prep curricula that permeate too many of our public schools. And lasting technologies always take longer; they evolve and don’t have to “disrupt.”

Tony

David Brooks: The Machines are Coming! The Machines are Coming!

Dear Commons Community,

New York Times columnist David Brooks examined artificial intelligence yesterday and concluded that the day of the machines is near. He relies mostly on Kevin Kelly’s recent analysis of the state of artificial intelligence.

“Some days I think nobody knows me as well as Pandora. I create a new music channel around some band or song and Pandora feeds me a series of songs I like just as well. In fact, it often feeds me songs I’d already downloaded onto my phone from iTunes. Either my musical taste is extremely conventional or Pandora is really good at knowing what I like.

In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.

He writes that the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.

This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection, and better algorithms.”

The implications of this are twofold:

“The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power. The Internet is already heralding a new era of centralization. As Astra Taylor points out in her book, “The People’s Platform,” in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them. Gigantic companies like Google swallow up smaller ones. The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head…

The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do. For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math.

On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.”

Brooks concludes by painting two divergent futures:

“In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions. I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.

In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much.

In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.

In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

I don’t use Pandora, but I do use Amazon, from which I get a daily message suggesting books in which I might be interested. Usually I take a quick peek and delete the message, although every once in a while I take a longer look. If Brooks is right, my peeks will grow longer and longer as the machines (A.I.-driven customer systems) become more sophisticated.

There are also ramifications for education. Big data and learning analytics were featured prominently at the recently concluded Online Learning Consortium Annual Conference. Like Pandora’s customers, students are presented with assignments, grades, and other performance indicators every time they log into a course. Faculty and advisers likewise receive alerts about students who are not keeping up with course requirements.
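
To make the idea concrete, here is a minimal sketch in Python of the kind of threshold rule such an alert system might apply. Everything here — the field names, the thresholds, the sample roster — is a hypothetical illustration of the general technique, not the workings of any particular learning management system.

```python
# Hypothetical sketch of a learning-analytics alert rule: flag students
# whose recent activity or performance suggests they are falling behind,
# so advisers can follow up. Field names and thresholds are illustrative.

from dataclasses import dataclass
from datetime import date


@dataclass
class StudentRecord:
    name: str
    last_login: date
    average_grade: float      # on a 0-100 scale
    assignments_missing: int


def needs_alert(s: StudentRecord, today: date,
                max_idle_days: int = 7,
                min_grade: float = 70.0,
                max_missing: int = 2) -> bool:
    """Return True if the student appears to be falling behind."""
    idle_days = (today - s.last_login).days
    return (idle_days > max_idle_days
            or s.average_grade < min_grade
            or s.assignments_missing > max_missing)


# Example: generate the morning list of at-risk students for advisers.
roster = [
    StudentRecord("A. Jones", date(2015, 4, 10), 82.0, 0),
    StudentRecord("B. Smith", date(2015, 3, 28), 65.5, 3),
]
today = date(2015, 4, 14)
at_risk = [s.name for s in roster if needs_alert(s, today)]
print(at_risk)  # ['B. Smith']
```

Real systems layer predictive models on top of rules like this, but the basic pattern — thresholds over login activity, grades, and missed work, surfaced to faculty and advisers — is the same.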

Tony