Google CEO Sundar Pichai:  Artificial Intelligence More Important than Fire or Electricity!

Dear Commons Community,

Sundar Pichai, CEO of Google, was quoted as saying that artificial intelligence is going to have a bigger impact on the world than some of the most ubiquitous innovations in history.  As reported by CNBC:

“AI is one of the most important things humanity is working on. It is more profound than, I don’t know, electricity or fire,” says Pichai, speaking at a town hall event in San Francisco.

A number of very notable tech leaders have made bold statements about the potential of artificial intelligence. Tesla boss Elon Musk says AI is more dangerous than North Korea. Famous physicist Stephen Hawking says AI could be the “worst event in the history of our civilization.” And Y Combinator President Sam Altman likens AI to nuclear fission.

Even in such company, Pichai’s comment seems remarkable. Interviewer and Recode executive editor Kara Swisher stopped Pichai when he made the comment. “Fire? Fire is pretty good,” she retorts.

Pichai sticks by his assertion. “Well, it kills people, too,” Pichai says of fire. “We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too. So my point is, AI is really important, but we have to be concerned about it.”

Indeed, for many, so much about artificial intelligence is unknown and therefore scary. However, Pichai also points out that “it is important to help people understand that they use AI today. AI is just making computers more intelligent and being able to do a wide variety of tasks and we take it for granted whenever something happens and we adopt it,” he says.

“So for example, today, Google can translate across many many languages and people use it billions of times a day. That’s because of AI.

“Or if you … go to Google and search for images of sunset, or if you go to Google photos and search for images of people hugging, we can actually pull together and show pictures of people hugging.

“This is all because of AI. …

And as a tech executive would, Pichai says AI has the potential to make our lives even better in the future.

“AI holds the potential for some of the biggest advances we are going to see. You know whenever I see the news of a young person dying of cancer, you realize AI is going to play a role in solving that in the future, so I think we owe it to make progress,” the Google CEO says.

That being said, it is still important to think about humanity’s future with artificial intelligence, Pichai says. “It is right to be concerned, absolutely, you have to worry about it otherwise you are not going to solve it.”

I agree with Pichai that AI will have a profound impact on humanity.  The question no one can answer is: when will it happen?

Tony

Devin Nunes Letter:  A “Nothingburger”!

Dear Commons Community,

The media has spent the last several days reporting on a politically charged memo written by House Republicans that accused F.B.I. and Justice Department leaders of abusing their surveillance powers to spy on a former Trump campaign adviser suspected of being an agent of Russia.  The memo was released yesterday and reactions have been swift.  Democrats, the F.B.I. and the U.S. Department of Justice have been most vocal in their opposition.  Donald Trump and most Republicans see the memo as a stunning indictment of the F.B.I. and the intelligence community.

The three-and-a-half-page memo, written by Republican congressional aides, criticized information used by law enforcement officials in their application for a warrant to wiretap the former campaign adviser, Carter Page, and named the senior F.B.I. and Justice Department officials who approved the highly classified application.

But it fell well short of making the case promised by some Republicans: that the evidence it contained would cast doubt on the origins of the Russia investigation and possibly undermine the inquiry, which has been taken over by a special counsel, Robert S. Mueller III. The Page warrant is just one aspect of the broader investigation.

Bret Stephens, a New York Times columnist, has labelled the memo “a nothingburger” (see his column below).  Republican Senator John McCain said, “The latest attacks against the FBI and Department of Justice serve no American interests ― no party’s, no President’s, only Putin’s…The American people deserve to know all the facts surrounding Russia’s ongoing efforts to subvert our democracy, which is why Special Counsel Mueller’s investigation must proceed unimpeded. Our nation’s elected officials, including the president, must stop looking at this investigation through the lens of politics and manufacturing political sideshows. If we continue to undermine our own rule of law, we are doing Putin’s job for him.”

A sideshow and “nothingburger” indeed!

Tony

============================================

Devin Nunes’s Nothingburger

Bret Stephens 

Feb. 2, 2018

Gertrude Stein once said of her hometown of Oakland, Calif., “There is no there there.” That about says it for Devin Nunes’s notorious memo, too.

By this I do not mean that Nunes, the California Republican and chairman of the House Intelligence Committee, has uncovered no potential wrongdoing in his three-and-a-half-page memo, which was declassified Friday over vehement objections from senior F.B.I. and Justice Department officials. More about the possible wrongdoing in a moment.

The important questions, however, are:

First, did the F.B.I. have solid reasons to suspect that people in Donald Trump’s campaign had unusual, dangerous and possibly criminal ties to Moscow?

Second, did this suspicion warrant surveillance and investigation by the F.B.I.?

The answers are yes and yes, and nothing in the Nunes memo changes that — except to provide the president with a misleading pretext to fire deputy attorney general Rod Rosenstein and discredit Robert Mueller’s probe.

Let’s review. Paul Manafort, the Trump campaign chairman until August 2016, is credibly alleged to have received $12.7 million in “undisclosed cash payments” from then-Ukrainian President Viktor Yanukovych, a Russian stooge. Had Manafort not been exposed, he might have gone on to occupy a position of trust in the Trump administration, much as Reagan campaign manager Bill Casey wound up running the C.I.A. He would then have been easy prey to Russian blackmail.

George Papadopoulos, the young adviser who pleaded guilty last year to lying to the F.B.I., spent his time on the campaign trying to make overtures to Russia. In May 2016 he blabbed to an Australian diplomat that Moscow had political dirt on Hillary Clinton — information that proved true and was passed on to U.S. intelligence. This was the genesis of an F.B.I. counterintelligence investigation, as the Nunes memo itself admits.

And then there’s Carter Page, the man at the center of the Nunes memo. By turns stupid (his Ph.D. thesis was twice rejected), self-important (he has compared himself to Martin Luther King Jr.), and money-hungry (a suspected Russian agent who tried to recruit him in 2013 was recorded saying he “got hooked on Gazprom”), Page happens also to be highly sympathetic to the Putin regime. The Russian phrase for such characters is polezni durak — useful idiot. No wonder he was invited to give a commencement speech at a Russian university in the summer of 2016. That’s how assets are cultivated in the world of intelligence.

Given the profile and his relative proximity to team Trump, it would have been professionally negligent of the F.B.I. not to keep tabs on him. Yet the bureau only obtained a surveillance warrant after Page had left the campaign and shortly before the election, and it insisted throughout the campaign that Trump was not a target of investigation. How that represents an affront to American democracy is anyone’s guess.

The memo does seem to have uncovered conflicts of interest at the Justice Department, most seriously by then-Associate Deputy Attorney General Bruce Ohr, whose wife was working for Fusion GPS (and thus, by extension, the Clinton campaign) on opposition research on Trump. The memo also claims this relationship was not disclosed to the Foreign Intelligence Surveillance Court when the Justice Department applied for a surveillance warrant on Page.

That’s a significant omission that already seems to have led to Ohr’s demotion, according to Fox News. Then again, the Nunes memo has its own “material omissions,” according to an adamant and enraged F.B.I. Who do you find more credible: Nunes or F.B.I. Director Christopher Wray?

Nor does the Nunes memo claim that the information provided by the F.B.I. to the foreign intelligence court was, in fact, false. The closest it gets is a quote from ex-F.B.I. Director James Comey saying the Steele dossier was “salacious and unverified,” and then noting the anti-Trump bias of various officials involved in the case.

Come again? The Stormy Daniels story is also salacious and almost certainly accurate. “Unverified” is not a synonym for “untrue.” And since when do pundits who make a living from their opinions automatically equate “bias” with dishonesty?

The larger inanity here is the notion that the F.B.I. tried to throw the election to Clinton, when it was the Democrats who complained bitterly at the time that the opposite was true.

 “It has become clear that you possess explosive information about close ties and coordination between Donald Trump, his top advisers and the Russian government,” then Senate Minority Leader Harry Reid angrily wrote James Comey in late October 2016. “The public has a right to know this information.”

Maybe so. But the G-Men kept quiet about their investigations, and Trump won the election. How that represents evidence of a sinister deep-state conspiracy is a question for morons to ponder. As for Devin Nunes, he has, to adapt an old line, produced evidence of a conspiracy so small. In modern parlance we’d call it a nothingburger, but the bun is missing, too.


The Tyranny of Metrics!

Dear Commons Community,

John Wallach, a colleague at Hunter College, posted the article below on a faculty LISTSERV. Written by Catholic University history professor Jerry Z. Muller, this essay critically examines the emphasis on metrics in higher education.

Tony

===============================================

The quest to quantify everything undermines higher education!

By Jerry Z. Muller 

A cultural pattern has become ubiquitous in recent decades, engulfing an ever-widening range of institutions. Now it has come for the university. Call it a meme, a discourse, a paradigm, or a fashion. I call it metric fixation. It affects the way people talk about the world, and thus how they think and how they act. The key components of metric fixation are:

  • the belief that it is possible and desirable to replace judgment, acquired by experience and talent, with numerical indicators based upon standardized data.
  • the belief that making such metrics public assures that institutions are carrying out their purposes.
  • the belief that the best way to motivate people is by attaching rewards and penalties to their measured performance.

These assumptions have been on the march for several decades, and their assumed truth goes marching on.

The pernicious spillover effects became clear to me during my time as chair of the history department at the Catholic University of America. Such a job has many facets: mentoring and hiring; ensuring that necessary courses get taught; maintaining relations with the university administration. Those responsibilities were in addition to my roles as a faculty member: teaching, researching, and keeping up with my field. I was quite satisfied.

Then, things began to change. Like all colleges, Catholic gets evaluated every decade by an accrediting body. For my university, that body is the Middle States Commission on Higher Education. It issued a report that included demands for more metrics on which to base future “assessment” — a buzzword in higher education that usually means more measurement of performance. Soon, I found my time increasingly devoted to answering requests for more and more statistics about the activities of the department, which diverted my time from research, teaching, and mentoring faculty members. New scales for evaluating the achievements of our graduating majors added no useful insights to our previous measuring instrument: grades.

Gathering and processing all this data required the university to hire ever more specialists. Some of their reports were useful; for example, spreadsheets that showed the average grade awarded in each course. But much of the information was of no real use, and read by no one. Yet once the culture of performance-documentation caught on, department chairs found themselves in a data arms-race. I led a required yearlong departmental self-assessment — a useful exercise, as it turned out. But before sending it up the bureaucratic chain, I was urged to add more statistical appendices — because if I didn’t, the report would look less rigorous than that of other departments.

My experience left me wondering about the forces fueling this waste of time and effort. The Middle States Commission operates with a mandate from the Department of Education. Under the leadership of Margaret Spellings, the department had convened a Commission on the Future of Higher Education, which published a report in 2006 emphasizing the need for greater accountability and the gathering of more data, and directing the regional accrediting agencies to make “performance outcomes” the core of their assessment. That mandate filtered down to the Middle States Commission, and from there, ultimately, to me.


Metric fixation, which seems immune to evidence that it frequently doesn’t work, has elements of a cult. Studies that demonstrate its lack of effectiveness are either ignored or met with the claim that what is needed are more data. Metric fixation, which aspires to imitate science, resembles faith.

Not that metrics are always useless or intrinsically pernicious. They can be genuinely useful. But not everything that is important is measurable, and much that is measurable is unimportant. (Or, in the words of the familiar dictum, “Not everything that can be counted counts, and not everything that counts can be counted.”) Universities, like most organizations, have multiple purposes, and those which are measured and rewarded tend to become the focus of attention, at the expense of other essential goals. Similarly, many jobs have multiple facets, and measuring only a few of them creates incentives to neglect the rest. When universities wake up to this fact, they typically add more performance measures. That creates a cascade of data — information that becomes ever less useful — while gathering it sucks up more and more time and resources.

In the process, the nature of academic work is transformed in ways that are often harmful. Like most professionals, academics resent the imposition of goals that may conflict with their professional ethos and judgment, thus lowering morale. And they inevitably become adept at manipulating performance indicators through a variety of methods, many of which are ultimately harmful to the health of a university.

In the attempt to replace judgments of quality with standardized measurement, some rankings, government institutions, and university administrators have adopted as a standard the number of scholarly publications produced by a college’s faculty, and counted these publications using commercial databases. Here is a case where standardizing information can degrade its quality.

The first problem is that these databases are frequently unreliable: Having been designed to measure production in the natural sciences, they often provide distorted information in the humanities and social sciences. In the natural sciences and some of the behavioral sciences, new research is disseminated primarily in the form of articles in peer-reviewed journals. But that is not the case in fields such as history, in which books remain the pre-eminent form of publication, so a measurement of the number of published articles presents a distorted picture. But that is only the beginning of the problem.

When individual faculty members, or whole departments, are judged by the number of publications, whether in the form of articles or books, the incentive is to produce more publications, rather than better ones. Really important books may take many years to research and write. But if the system rewards speed and volume, the result is likely to be a decline in truly significant scholarship. That is what seems to have happened in Britain as a result of its Research Assessment Exercise: a great stream of publications that are both uninteresting and unread. Nor is the problem confined to the humanities. In the sciences as well, evaluation by measured performance favors short-term publication over long-term research capacity.

In academe, as elsewhere, that which gets measured gets gamed. Take impact factors. Once developers recognized that not all articles were of equal significance, they created techniques to measure each article’s impact. That took two forms: counting the number of times the article was cited, and considering the prestige — or impact factor — of the journal in which it was published, a factor determined in turn by the frequency with which articles in the journal are cited. (This method, mind you, cannot distinguish between the following citations: “Jerry Z. Muller’s illuminating and wide-ranging article on the tyranny of metrics effectively slaughters the sacred cows of so many organizations” and “Jerry Z. Muller’s poorly conceived screed deserves to be ignored by all managers and social scientists.” From the point of view of tabulated impact, the two statements are equivalent.)
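The two-part tally Muller describes can be sketched in a few lines. As a rough illustration only (the function name and the citation counts below are invented for the example, not drawn from any real journal), a journal’s conventional two-year impact factor divides the citations received this year to the journal’s previous two years of papers by the number of citable items it published in those years. Note that, exactly as the parenthetical above observes, a glowing citation and a dismissive one each add the same 1 to the numerator:

```python
def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
    """Toy two-year impact factor: citations received this year to papers
    the journal published in the previous two years, divided by the number
    of citable items published in those two years.  Praise and criticism
    count identically -- each citation adds exactly 1."""
    if items_prev_two_years == 0:
        raise ValueError("journal published no citable items")
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 480 citations in 2017 to its 2015-2016 papers,
# of which there were 120 citable items.
print(impact_factor(480, 120))  # prints 4.0
```

The sketch makes the essay’s point concrete: the metric is a pure ratio of counts, with no term anywhere for whether a citation endorses or demolishes the cited work.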


Moreover, in an attempt to raise their citation scores, some scholars formed informal citation circles, the members of which made a point of citing one another’s work as much as possible. Some lower-ranked journals requested that authors include additional citations to articles in the journal, in an attempt to improve its “impact factor.”

What, you might ask, is the alternative to tallying up the number of publications, the times they were cited, and the reach of the journals in which articles are published? Professional judgment. In a department, evaluation of faculty productivity can be done by the chair or by a small committee of colleagues, who, consulting with other faculty members when necessary, draw upon their knowledge of what constitutes significance. In the case of major decisions, such as tenure and promotion, scholars in the candidate’s area of expertise are called upon to provide confidential evaluations, a more elaborate form of peer review.

Citation databases may be of some use in that process, but numbers also require judgment grounded in experience to evaluate their worth. That judgment is precisely what is eliminated by too great a reliance on metrics. As Carl T. Bergstrom, a biologist at the University of Washington, puts it, “All too often, ranking systems are used as a cheap and ineffective method of assessing the productivity of individual scientists. Not only does this practice lead to inaccurate assessment, it lures scientists into pursuing high rankings first and good science second. There is a better way to evaluate the importance of a paper or the research output of an individual scholar: read it.”


Among the strongholds of metrics is the Department of Education, under a succession of presidents, Republican and Democratic. During President Obama’s second term, his Department set out to develop an elaborate “Postsecondary Institution Ratings System.” It was intended to grade all colleges, to disaggregate its data by “gender, race-ethnicity and other variables,” and eventually to tie federal funds to the ratings, which were to focus on access, affordability, and outcomes, including expected earnings upon graduation. The plan ran into opposition from colleges and Congress. In the end, the Department settled on a stripped-down version, the College Scorecard, unveiled in September 2015.

It was the product of good intentions, meant to address real problems in the provision of higher education, especially the extremely spotty record of for-profit institutions offering career-oriented education in fields like automotive repair, culinary arts, or health aides, which had been expanding by leaps and bounds. But in reaction to a genuine problem at the low end of the for-profit sector, the department responded with far-reaching demands that had consequences for all colleges.

What the advocates of greater accountability metrics overlook is how the increasing cost of college is due in part to the expanding cadres of administrators, many of whom are required to comply with government mandates. Reward for measured performance in higher education is touted by its boosters as making universities “more like a business.” But businesses have a built-in restraint on devoting too much time and money to measurement — at some point, it cuts into profits. Ironically, since universities have no such bottom line, government or accrediting agencies or the university’s administrative leadership can extend metrics endlessly. The effect is to increase costs or to divert spending from the doers to the administrators — which usually suits the latter just fine. It is hard to find a university where the ratio of administrators to professors and of administrators to students has not risen astronomically in recent decades. Metric fixation contributes to the mushrooming of administrators.

In the case of the College Scorecard, some of the suggested objectives of the original plan (the Postsecondary Institution Ratings System) were mutually exclusive, while others were simply absurd. The goal of increasing college graduation rates, for example, is at odds with increasing access, since less-advantaged students tend to be not only financially poorer but also worse prepared. The better prepared the student, the more likely she is to graduate on time. It might be possible to admit more economically and academically ill-prepared students and to ensure that more of them graduate; but only at great expense, which is at odds with another goal of the Department of Education: holding down costs.

Another metric that colleges were to supply was the average earnings of students after graduation. Not only is this information expensive to gather and highly unreliable — it is downright distortive. Many of the best students will go on to one or another form of professional education, ensuring that their earnings will be low for at least the time they remain in school. Thus a graduate who proceeds immediately to become a greeter at Walmart would show a higher score than her fellow student who goes on to medical school. But there would be numbers to show, and hence “accountability.”

Even if you leave aside the accuracy and reliability of these metrics, consider the message they convey. Initiatives like the College Scorecard treat higher education in purely economic terms: Its sole concern is return on investment, understood as the relationship between the monetary costs of college and the increase in earnings that a degree will ultimately provide. Those are, of course, legitimate considerations. College costs eat up an increasing percentage of family income or require the student to take on debt; and making a living is among the most important tasks in life.

But it is not the only task in life, and it is an impoverished conception of college that regards it purely in terms of its ability to enhance earnings. If we distinguish training, which is oriented to production and survival, from education, which is oriented to making survival meaningful, then metrics are only about the former.

The sort of lifelong satisfaction that comes from an art-history course that allows you to understand a work of art; or a music course that trains you to listen for the theme and variations of a symphony; or a literature course that heightens your appreciation of poetry; or a biology course that opens your eyes to the wonders of the human body — none of these is captured by the metrics of return on investment. Nor is the fact that college is a place where lifelong friendships are made, often including that most important of friendships, marriage. All of these benefits should be factored in when considering “return on investment”: but because they can’t be quantified, they are ignored.

The hazard of metrics so purely focused on monetary considerations is that, like so many metrics, they influence behavior. Universities at the very top of the rankings already send a huge portion of their graduates into investment banking, consulting, and high-end law firms. Those are honorable professions, but is it really in the best interests of the nation to encourage universities to direct their best and the brightest to choose those careers?

A capitalist society depends on a variety of institutions to provide a counterweight to the market and its focus on monetary gain. To prepare students for their roles as citizens, as friends, and above all to equip them for a life of intellectual richness — those are among the proper roles of college. Conveying marketable skills is a proper role as well. But to subordinate higher education to what can be quantified is to measure with a dangerously crooked yardstick.


Day 3 of the EDUCAUSE Learning Initiative (ELI) Annual Meeting!

Dear Commons Community,

My last day at the EDUCAUSE Learning Initiative (ELI) Annual Meeting was enjoyable.  It started with a panel discussion that I gave with colleagues Chuck Dziuban, Patsy Moskal, Mary Niemiec, and Karen Swan on Higher Education’s Digital Future Is Closer Than We Think!  The feedback was very positive from the one hundred or so attendees in the audience.

I also attended a session given by Eric Fredericksen on a survey/study he conducted of leaders of online education at community colleges. He presented lots of data on their opinions of the issues and future directions of online education at their institutions.  It was well done.

The morning concluded with a general session during which Diane Oblinger presented her view of future technology, specifically robotics and artificial intelligence.  She proposed that in a super-connected world, new sources of expertise are emerging and educators are challenged to re-conceptualize learning and research. She raised the question: in a world in which computers are increasingly capable, how do we prepare students for a new division of labor between people and machines?

The afternoon and evening were spent at meetings with the Online Learning Consortium’s Board of Directors.  Dinner was with Frank Mayadas, his wife Judy, and Ken Hartman.

A long and enjoyable day!

Tony