Contact North | Contact Nord: AI and the Future of Teaching and Learning!

Dear Commons Community,

Contact North | Contact Nord, an open-access resource for Canadians enrolled in or interested in online courses, has an article this morning entitled “AI and the Future of Teaching & Learning.” It is a good summary of several critical aspects of AI as applied to instruction. Its opening sentence mentions “a start-up company [that] recently created a 19-lesson, fully online, three-hour multimedia course in just 10 hours using ChatGPT.” The video above is a brief interview with Alex Londo, the CEO of the start-up. Worth a view.

The entire Contact North | Contact Nord article is below.



Contact North | Contact Nord

AI and the Future of Teaching & Learning

In Minnesota, a start-up company recently created a 19-lesson, fully online, three-hour multimedia course in just 10 hours using ChatGPT, the artificial intelligence tool launched in November 2022.

ChatGPT found images and relevant video materials and developed a quiz to assess learning. The same team is now creating subsequent courses in even less time — just one hour to create a three-hour learning module. Elsewhere, ChatGPT is used to create multimedia webpages that can be quickly inserted into websites, and to write code in Python (and other programming languages) that can be incorporated into apps or web spaces.
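Course-creation workflows like the one described typically wrap a plain-language request in the message format that chat-style AI services expect. A minimal sketch of that prompt-construction step in Python; the function name and the prompt wording are illustrative assumptions, and the actual network call to the service (which requires an account and API key) is deliberately omitted:

```python
def build_quiz_prompt(topic: str, num_questions: int = 5) -> list[dict]:
    """Assemble chat messages asking a model to draft a multiple-choice quiz.

    The role/content message structure is the convention used by chat-style
    AI services; the system and user prompts here are only examples.
    """
    return [
        {"role": "system",
         "content": "You are an instructional designer creating course materials."},
        {"role": "user",
         "content": (f"Write a {num_questions}-question multiple-choice quiz "
                     f"on '{topic}'. Give four options per question and mark "
                     f"the correct answer.")},
    ]
```

In practice, messages built this way would be sent to the service's chat endpoint, and the returned text pasted into the course-authoring tool.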

ChatGPT is one of many similar AI services that use natural language to respond to user questions or requirements. Other AI systems can generate art, video, audio from text, music and simultaneous translation; solve math problems; provide career guidance; or engage in a deeply personal conversation.

In higher education, AI systems and applications can be used to:

  1. Strengthen and automate student support: AI can provide students with instant answers to their questions and concerns. This includes academic support, such as help with coursework or research projects, and more general support such as information about campus resources and services.
  2. Improve course management: AI can help with course management tasks, such as posting announcements and answering frequently asked questions about assignments or exams. It can be used to “nudge” students to complete assignments, log into their learning management system or prepare for an exam.
  3. Increase student engagement: AI can facilitate student engagement in online or hybrid courses, for example, by acting as a discussion moderator or by providing prompts for group discussions.
  4. Provide research assistance: AI can help students with research tasks, such as finding and accessing relevant articles or data sources, reviewing available papers and books, and suggesting readings or videos for review. This could be exceptionally helpful for project-based or work-based learning.
  5. Expand tutoring: AI (especially chatbots) can provide individualized tutoring support, particularly in subjects with limited availability of human tutors. This is already occurring on commercial tutoring sites that offer a combination of chatbots and human tutors to support their registered learners.
  6. Retention and completion: Using AI for “real-time” analytics and data to predict student performance and using these data to focus tutorial or student supports on those students most at risk of dropping out or failing.
  7. Pathway advising: Course choice is a major challenge for students. AI is increasingly being used to provide 24/7 course choice advice, using current student performance data to suggest which “next course” is best for them, given their program profile and career intentions.
  8. Student counselling: A growing number of online counselling systems display not only high levels of empathy with students struggling with stress, anxiety or depression but also high levels of efficacy. An evaluation by the UK’s National Institute for Clinical Excellence (NICE) showed that Velibra — used without therapist guidance alongside usual care — was more effective than usual care alone in people with social anxiety disorder.
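Several of the items above, such as nudging students and retention analytics, reduce at their simplest to rules applied over learning management system activity data. A deliberately simplified sketch of such a rule, with hypothetical field names and an arbitrary seven-day threshold; real early-warning systems layer predictive models trained on past cohorts on top of signals like this:

```python
from datetime import date, timedelta

def students_to_nudge(last_login: dict[str, date],
                      today: date,
                      threshold_days: int = 7) -> list[str]:
    """Flag students whose most recent LMS login is older than the threshold.

    `last_login` maps a student ID to that student's last login date; the
    mapping shape and the 7-day threshold are illustrative assumptions.
    """
    cutoff = today - timedelta(days=threshold_days)
    # Sort so the output is stable regardless of dict ordering.
    return sorted(sid for sid, last in last_login.items() if last < cutoff)
```

A course chatbot could then message each flagged student, which is one concrete form the “nudge” described above can take.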

Not all Good News

There are several pitfalls associated with AI in higher education, including:

  1. Lack of personalized support: AI systems are generally not able to provide personalized support to students based on their individual needs and learning styles. Indeed, the lack of empathy and genuine connection with those the systems are serving is a major criticism of many current AI systems. Although work is under way to add “artificial empathy” to client-facing systems, most do not connect well with their users.
  2. Dependence on technology: Using AI as a support resource depends on technology and Internet access, which we know in Canada is not available to all students. This can create a digital divide, with some students accessing more support resources than others. During the COVID-19 pandemic, this was a very real issue.
  3. Ethical considerations: There are ethical considerations to using chatbots in education, such as the potential for students to become overly reliant on automated support or not fully understand the limitations of the chatbot — that it is not the same as “chatting” with an instructor. The chatbot can only use the data and algorithms available and has no access to intuition, insights about the specific student, the class the student is a part of or the struggles many students have with specific forms of learning. A parallel is the number of automotive accidents (some fatal) caused by the inappropriate use of GPS systems (in the United Kingdom alone, in-car GPS devices have been blamed for an estimated 300,000 accidents).
  4. Limited scope: AI systems are only able to provide support within the scope of their programming and “training.” If a student has a question or concern that falls outside this scope, the system may not be able to provide a helpful response. For example, most AI systems are poor at predicting economic futures. Chatbots and other AI systems have to be trained to respond to questions. In one very specific example, ChatGPT was asked to answer all the questions on the Institute of Chartered Accountants in England and Wales assurance exam. It scored 42% — less than the pass mark of 55%. The system was weak when more nuanced understanding and approaches were required. There were also some wrong answers and questionable mathematics.
  5. Lack of transparency: We cannot easily trace the sources of information or the data pathways used to create responses in an AI system. Nor is it often clear what the algorithmic biases are in analytic systems that predict student success or failure. This lack of transparency is deeply problematic and is an issue many AI developers are working on.
  6. Abuse of AI: Students can use AI systems like ChatGPT to cheat. In fact, New York City schools, concerned about this possibility, have sought to ban it, as have others. The concern about the abuse of AI is real and has led to the development of a new kind of plagiarism detection system that can detect AI-generated materials.

Responsible and Trustworthy AI

For the above reasons, frameworks for the responsible and trustworthy deployment of AI are emerging. Some are supported by major vendors (e.g. IBM, Google and Microsoft) as well as by the OECD. These frameworks require AI deployments in colleges and universities to be:

  • Inclusive – Significant efforts are made to ensure all students have access to and support for their use of AI, rather than AI being in the service of the privileged. To make this effective, the transparency of AI and the exposure of bias within AI systems are essential. The intention should be to make education more accessible to all rather than less so.
  • Empathic and human-centered – Although accuracy and appropriateness of responses are critical, AI systems intended to interact with people should be empathic, warm and genuine. They must be able to respond not just accurately but in a tone and manner that reflects the identity of the user. They must also become increasingly sensitive to user needs.
  • Transparent and explainable – Transparency means enabling people to understand how an AI system is developed, trained, operated, and deployed so users can make more informed choices about the outputs such systems produce. A user needs to understand how the AI came to the conclusion it did: What were its sources of information and how was it trained to use and interpret these sources?
  • Robust, secure and safe – To function, AI systems need access to significant datasets, including personal data about students, their backgrounds, performance and interaction with college or university systems. Such AI systems need to be able to withstand cybersecurity threats and be safe for students and staff to use. Colleges and universities are a target for such attacks.
  • Accountable – This refers to the expectation that organizations or individuals ensure the proper functioning, throughout their lifecycle, of the AI systems they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and that they demonstrate this through their actions and decision-making process (for example, by providing documentation on key decisions or conducting or allowing auditing where justified). AI systems must meet regulatory and legal requirements that all the university or college staff are required to meet — for example, with respect to disabilities and exceptionalities or privacy.

What to Expect in the Next Five Years

The launch of ChatGPT caused a significant stir in higher education, but we have seen nothing yet. As AI becomes more widespread, transparent, responsible and integrated (text + video + art + music + translation all in one place), we can expect more instructors to experiment and explore. Microsoft, which has an exclusive licence to deploy ChatGPT across its systems, intends to invest US$10 billion over the next 3-5 years, integrating it into the Office products used by the vast majority of educational institutions and into its search engine.

New AI systems for assessment are also emerging, enabling automated item generation, real-time assessment of soft and hard skills during a simulation or game, automated grading, and personalized and adaptive assessment. Some of these systems are already integrated into widely available learning management systems (e.g. Examity is integrated into Brightspace at Purdue University in the US) and others will follow.
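Automated grading of short answers can be illustrated, in a deliberately crude form, by keyword coverage. This toy function is not how the systems named above work — they score meaning with trained language models — but it shows the basic shape of turning free text into a score; the function name and keyword list are hypothetical:

```python
def keyword_score(answer: str, keywords: list[str]) -> float:
    """Fraction of expected keywords that appear in a student's answer.

    A toy stand-in for automated grading: it matches surface words only,
    so a paraphrased but correct answer would score poorly.
    """
    words = set(answer.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return hits / len(keywords)
```

The gap between this word-matching sketch and grading actual understanding is exactly why the "limited scope" and transparency concerns above matter for assessment tools.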

We can also expect the more widespread deployment of more empathic and responsive chatbots as tutors, student advisors and counsellors. In the UK, the technology coordinating body for colleges and universities (JISC) is supporting several sophisticated deployments of AI tools and resources: chatbots, question generators, research suggestions for readings and other tools. They are evaluating them for their effectiveness and efficiency. The chatbot ADA is in widespread use.

As more instructors and students experiment with AI, more best practice examples will emerge of the effective use of AI to support teaching and learning. We can expect a flood of new AI tools and examples of effective practice.

President Biden’s State of the Union Speech – Six Takeaways!


Dear Commons Community,

President Joe Biden gave his 2023 State of the Union speech last night before the US Congress. He was both feisty and compassionate. I was happy to see Republican House Speaker Kevin McCarthy applaud, and even stand, for some of Biden’s message. Below are six takeaways courtesy of the Associated Press. They are a good recap of several of the important points made. You can view Biden’s State of the Union at the end of this post. Or if you prefer, the entire text of the State of the Union can be found here.



The Associated Press

Biden speech takeaways: More conciliation than conflict


The State of the Union address tends to have a ritual rhythm. Grand entrance. Applause. Platitudes. Policies. Appeals for Unity, real or imagined.

President Joe Biden checked those boxes, and a few more, during his speech to a joint session of Congress on Tuesday. In part, he seemed to be laying the foundation to run for a second term. “We’ve been sent here to finish the job,” he said.

Biden made calls for unity and tried to emphasize conciliation over conflict, easier to do in this rarefied setting, seemingly impossible to sustain in such divided times.

Takeaways from the president’s State of the Union address:


Biden’s speech almost defiantly ignored the bitter divisions between Republicans and Democrats and his own low standing with the public.

He returned repeatedly to common ground, making the case that both parties can back U.S. factories, new businesses being formed and the funding of 20,000 infrastructure projects. When Biden hit each of these themes, Republican House Speaker Kevin McCarthy politely clapped, even standing to applaud at one point.

It’s a sign that Democrats and Republicans can at least agree to a shared set of goals, even if they have very different views of how to get there.

In the midterm election campaign, Biden warned of Republican extremists. On Tuesday night, he portrayed them as partners in governance during the first two years of his presidency.

But then came a Biden comment that generated boos and hoots from Republicans: Biden said some in the GOP were bent on cutting Social Security and Medicare.

That sparked a raucous back and forth that seemed more in line with the reality of the actual relationship between the parties.


Biden used the speech to highlight his focus on the common man, calling out billionaires who pay lower tax rates than the middle class and airlines that treat their passengers like “suckers.”

It amounted to a dare to Republican lawmakers who increasingly claim to represent blue-collar workers.

“No billionaire should pay a lower tax rate than a school teacher or a firefighter,” Biden said in one of the bigger applause lines of his speech.

The president brought back an idea from last year to put a minimum tax on billionaires so they don’t pay a lower rate than many middle-class households. Biden had pitched a 20% tax on the income and unrealized financial gains of households worth $100 million or more. The administration estimated it would generate $360 billion over 10 years. That would in theory help fund some priorities and possibly reduce the deficit.

But Biden’s tax plan might be more about scoring political points, as he couldn’t get it past West Virginia Democrat Joe Manchin in the Senate last year.

He was straightforward in saying he would stop airlines from charging fees in order to sit families together, saying that children were being treated like luggage. He wants to ban hidden resort fees charged by hotels and penalties charged by cell service providers.

“Americans are tired of being played for suckers,” Biden said.


Biden had been on a winning streak countering the United States’ rising military and economic competitor China.

Then Beijing brazenly floated a spy balloon across the United States, an embarrassing episode for Biden that culminated last weekend with him ordering the Pentagon to shoot the craft out of the sky over the Atlantic Ocean.

The incident has dominated headlines, with some Republicans arguing that it demonstrates Biden has been wobbly on Beijing.

Biden briefly addressed the incident directly: “As we made clear last week, if China threatens our sovereignty, we will act to protect our country. And we did.”

Lost in the noise are the Biden administration’s increasingly aggressive efforts to counter China, like agreements with the Philippines and Japan to adjust or expand the U.S. military presence in those countries.

The balloon drama overshadowed all of that.


Last year’s State of the Union was dramatically shaped by Russia’s invasion of Ukraine, which started days before the address.

At that moment, the chances of Ukraine staying in the fight against a more formidable Russian military seemed slim. Nearly a year later, Ukraine is firmly in the fight.

Biden took a moment to pay tribute to Ukraine, addressing one of his guests, Ambassador Oksana Markarova, as representing “not just her nation but the courage of her people.”

He also applauded Congress for giving Ukraine what it needed to face Russia’s brutal aggression; the United States has already committed nearly $30 billion in security assistance since the start of the war.

In private, administration officials have made clear to Ukrainian officials that Congress’ patience with the cost of the war will have its limits. But with Tuesday’s address, Biden offered an optimistic outlook about the prospects of long-term American support.

“Ambassador, America is united in our support for your country,” Biden said looking toward Markarova in the gallery. “We will stand with you as long as it takes.”


Among Biden’s guests were the parents of Tyre Nichols, the 29-year-old Black man whose beating death at the hands of Memphis, Tennessee, police has reignited a national debate on policing.

Efforts to reduce police excesses have been sharply restricted by resistance in Congress, and there’s little prospect of federal action.

Still, Biden expressed awe at the grace of Nichols’ mother, RowVaughn Wells, who following his death has talked of her son’s “beautiful soul” and her hopeful certainty that “something good will come from this.”

Biden, 80, also acknowledged in plain terms that as a white man he’s enjoyed a privilege that Nichols’ parents — and Black parents writ large — do not.

“Imagine having to worry whether your son or daughter will come home from walking down the street or playing in the park or just driving their car,” he said. “I’ve never had to have the talk with my children — Beau, Hunter and Ashley — that so many Black and Brown families have had with their children.”


Biden uttered the phrase “finish the job” at least a dozen times during his address. It sounded like the makings of a slogan he might employ for a reelection campaign.

But it is highly unlikely he will be able to finish the job on many of the things he referenced, like an assault weapons ban, universal preschool for 3- and 4-year-olds and forcing companies to stop doing stock buybacks.

At least not during this term.

New York State Legislators rip Gov. Kathy Hochul’s plan to expand charter schools!

Dear Commons Community,

New York State senators gathered at City Hall on Friday to protest a plan in Governor Kathy Hochul’s budget to increase the number of charter schools in the city.

The proposed changes, outlined in her budget, would scrap a regional limit on charters issued in the city and revive so-called “zombie” charters — schools that count toward a statewide charter cap but have ceased operations.  As reported by The New York Daily News.

“This is a really flawed proposal,” said Sen. Shelley Mayer, chair of the education committee. “The governor should withdraw it.”

Hochul’s proposal could pave the way for an estimated 100 additional charters citywide, though Hochul would keep a statewide cap at 460 operators. Roughly 275 charter schools currently operate in NYC.

The budget would also invest $34.5 billion in overall education, including a $2.7 billion increase in Foundation Aid, the state’s need-based formula that will be fully funded for the first time ever this year.

“While the governor is giving with one hand fully funding Foundation Aid, she’s yanking the rug out from other kids by diverting money potentially to new charter schools,” said Sen. John Liu, who chairs the NYC education committee. “And that is not right.”

Sen. Robert Jackson, who took legal action as part of the Campaign for Fiscal Equity that gave rise to the state formula, added: “We’ve won that Foundation Aid fight, and we will fight to make sure that all of the children in New York City get a good education.”

The lawmakers were joined by city and union officials, members of the Panel for Educational Policy, and other parent advocates. In total, eight of 63 state senators blasted the proposal on Friday, while other legislators have rebuked it on social media.

City Comptroller Brad Lander said the proposal would be “devastating for the city’s budget,” putting traditional public schools in a bind financially.

“They are going to be faced with this devil’s bargain of like, do I lose my music teacher or my art teacher or have larger class size?” said Lander.

Charter school backers in a pamphlet distributed Friday morning accused the legislators of misinformation and “downright untruths.”

“Instead of creating and perpetuating myths about public charter schools, elected officials should take the time to listen to families in their own communities who are overwhelmingly in favor of charters,” said James Merriman, chief executive officer of the New York City Charter School Center.

“Legislators attempting to block that growth aren’t fighting the so-called ‘charter industry.’ What they are doing is standing in the way of parents simply trying to do what’s right for their kids,” he added.

The Charter School Center said that lifting the cap would allow for schools designed to serve vulnerable children, like students with dyslexia, students impacted by the juvenile justice system or those in the foster care system. Charters are also held accountable by authorizing entities and government agencies, and must prove their value every few years as part of a renewal process, they said.

Despite blowback from the state’s upper house, the governor continued to stand by her proposals.

“Hochul believes every student deserves a quality education, and NYC parents and students deserve the same access to educational options as those in the rest of the state,” said a spokesperson for the governor. “We are proposing common sense fixes that will give New York families more options and opportunities to succeed.”

It will be interesting to see how this evolves over the next several months!


U.S. Treasury Secretary Janet Yellen: ‘You don’t have a recession’ when U.S. unemployment at 53-year low


Dear Commons Community,

U.S. Treasury Secretary Janet Yellen said yesterday she saw a path for avoiding a U.S. recession, with inflation coming down significantly and the economy remaining strong, given the strength of the U.S. labor market.

“You don’t have a recession when you have 500,000 jobs and the lowest unemployment rate in more than 50 years,” Yellen told ABC’s Good Morning America program.

“What I see is a path in which inflation is declining significantly and the economy is remaining strong.”

Yellen said inflation remained too high, but it had been falling for the past six months and could decline significantly given measures adopted by the Biden administration, including steps to reduce the cost of gasoline and prescription drugs.

U.S. Labor Department data released Friday showed job growth accelerated sharply in January, with nonfarm payrolls up by 517,000 jobs and the unemployment rate dropping to a 53-1/2-year low of 3.4%.

The strength in hiring, which occurred despite layoffs in the technology sector, reduced market expectations that the U.S. Federal Reserve was close to pausing its monetary policy tightening cycle.

Yellen told ABC that reducing inflation remained Biden’s top priority, but the U.S. economy was proving “strong and resilient.”

Three separate pieces of legislation – the Inflation Reduction Act, the CHIPS Act and a massive infrastructure law – would all help drive inflation down, along with a price cap imposed on the cost of Russian oil, she said.

Yellen called on Congress to raise the U.S. debt limit, warning that failure to do so would produce “an economic and financial catastrophe.”

“While sometimes we’ve gone up to the wire, it’s something that Congress has always recognized as their responsibility and needs to do again.”

The U.S. government hit its $31.4 trillion debt ceiling last month, prompting the Treasury Department to warn that it may not be able to stave off default past early June.

Republican U.S. House of Representatives Speaker Kevin McCarthy and President Joe Biden met last week for talks on raising the debt limit and have agreed to meet again, but the standoff has unsettled markets.

Yellen makes a lot of sense.  And we hope that unemployment stays low!



Two accused of plotting to disable Baltimore power grid!


Dear Commons Community,

A Maryland woman conspired with a Florida neo-Nazi leader to carry out an attack on several electrical substations in the Baltimore area, officials said yesterday.

The arrest of Sarah Beth Clendaniel, of Baltimore County, was the latest in a series across the country as authorities warn electrical infrastructure could be a vulnerable target for domestic terrorists. It wasn’t immediately clear whether she had a lawyer to speak on her behalf.

There was no evidence the plot was carried out or any record of damage to local substations.  As reported by the Associated Press.

Clendaniel conspired with Brandon Russell, recently arrested in Florida, to disable the power grid by shooting out substations in “sniper attacks,” saying she wanted to “completely destroy this whole city,” according to an unsealed criminal complaint. The complaint also included a photo of a woman authorities identified as Clendaniel wearing tactical gear that bore a swastika and holding a rifle.

U.S. Attorney Erek Barron praised investigators for disrupting hate-fueled violence.

“When we are united, hate cannot win,” he said at a news conference announcing the charges.

Authorities declined to specify how the planned attack was meant to fulfill a racist motive but suggested the defendants wanted to bring attention to their cause.

According to the complaint, Clendaniel was planning to target five substations situated in a “ring” around Baltimore, a majority-Black city mostly surrounded by heavily white suburban areas.

“It would probably permanently completely lay this city to waste if we could do that successfully,” Clendaniel told a confidential informant, according to the complaint. She was living outside the city in surrounding Baltimore County, officials said.

Russell has a long history of ties to racist groups and Nazi beliefs, as well as past plans to attack U.S. infrastructure systems, according to the complaint. It also wasn’t clear Monday whether he had a lawyer.

In recent months, concerns about protecting the country’s power grid have been heightened by attacks, or threatened attacks.

In Washington state, two men were arrested last month on charges that they vandalized substations weeks earlier in attacks that left thousands without power around Christmastime. One suspect told authorities they did it so they could break into a business and steal money.

A gunfire attack in December on substations in central North Carolina caused power outages affecting tens of thousands of customers. Law enforcement officials have said the shooting was targeted, though no arrests have been made. Lawmakers there have proposed legislation to toughen penalties for intentionally damaging utility equipment.

Baltimore Gas and Electric, which controls the local power grid, thanked law enforcement and said Monday that there was no damage to any substations, that service wasn’t disrupted and that there are currently no known threats to facilities.

“The substations are not believed to have been targeted out of any connection to BGE or Exelon, or because of any particular vulnerability,” BGE said in a news release. “We have a long-standing partnership with law enforcement and state and federal regulators of the grid to secure critical infrastructure; this work is even more important now as threats have increased in recent years.”

What a pair!




Faculty Now Including Critiques of ChatGPT in Writing Assignments!


Dear Commons Community,

Across the United States, universities and school districts are scrambling to figure out how to use and respond to chatbots like ChatGPT that can generate humanlike texts and images. But while some are rushing to ban ChatGPT to try to prevent its use as a cheating aid, some faculty are looking to leverage it to spur more critical classroom thinking. They are encouraging their students to question the hype around these rapidly evolving artificial intelligence tools and to consider the technologies’ potential side effects.  As reported by The New York Times.

The aim, these educators say, is to train the next generation of technology creators and consumers in “critical computing.” That is an analytical approach in which understanding how to critique computer algorithms is as important as — or more important than — knowing how to program computers.

The New York City Public Schools are training a cohort of computer science teachers to help their students identify A.I. biases and potential risks. Lessons include discussions on defective facial recognition algorithms that can be much more accurate in identifying white faces than darker-skinned faces. In Illinois, Florida, New York and Virginia, some middle school science and humanities teachers are using an A.I. literacy curriculum developed by researchers at the Scheller Teacher Education Program at the Massachusetts Institute of Technology. One lesson asks students to consider the ethics of powerful A.I. systems, known as “generative adversarial networks,” that can be used to produce fake media content, like realistic videos in which well-known politicians mouth phrases they never actually said.

With generative A.I. technologies proliferating, educators and researchers say understanding such computer algorithms is a crucial skill that students will need to navigate daily life and participate in civics and society.

“It’s important for students to know about how A.I. works because their data is being scraped, their user activity is being used to train these tools,” said Kate Moore, an education researcher at M.I.T. who helped create the A.I. lessons for schools. “Decisions are being made about young people using A.I., whether they know it or not.”

In one of my graduate classes, I have given my students a choice of assignment: write a traditional paper, or write a paper that has been informed (maybe started) by ChatGPT. All of the 25 students in this class are experienced educators, mostly teachers. About half of the students have opted for the latter.

The ChatGPT assignment reads in part as follows:

“You can use the essay produced by ChatGPT as the beginning of your paper. To complete the assignment, you will likely have to write approximately four to five more pages rather than the seven indicated in the original Assignment No. 1. You can use your own discretion as to how many additional pages you need to complete the assignment.

“I would also like you to add one paragraph to this assignment at the end of your paper answering the following as best you can.

How well did you feel ChatGPT assisted you in completing the assignment?

Do you believe that, without using ChatGPT, you could have written a paper that was as good, better, or not as good?

Would you consider allowing students in your own classes to use ChatGPT for essay assignments?

What recommendation do you have, if any, for other teachers or educators in using ChatGPT?”

These papers are due at the beginning of March.  


P.S. Here are two websites suggesting how to use ChatGPT for essay assignments in K-12 and college-level courses.


Powerful Koch Brothers Political Network (Americans for Prosperity) Announces It’s ‘Turning The Page’ on Trump

Koch Political Operation Spends Big on GOP Primaries - EXPOSEDbyCMD

Dear Commons Community,

The powerful network of political groups funded by conservative billionaire Charles Koch and his late brother, David Koch, has announced it’s “turning the page” on Donald Trump and seeking another Republican to back in a 2024 run for the presidency, as reported by The New York Times and The Huffington Post.

“To write a new chapter for our country, we need to turn the page on the past,” Emily Seidel — CEO of the network’s flagship organization, Americans for Prosperity (AFP) — wrote in a memo released yesterday.

“The best thing for the country would be to have a president in 2025 who represents a new chapter. The American people have shown that they’re ready to move on, and so AFP will help them do that,” she added.

AFP doesn’t specifically name Trump, but its perspective on him, and preferring someone “new,” is clear.

A related super PAC — AFP Action — is also “prepared to support a candidate in the Republican presidential primary who can lead our country forward, and who can win,” the memo added. AFP Action spent roughly $80 million in the 2022 election cycle, according to the campaign finance tracking website OpenSecrets.

AFP also stressed that it would no longer sit out primaries but instead will become active in the contests early, following the poor performance in general elections of GOP extremists backed by Trump in various primaries. The memo argued that the Republican Party is “nominating bad candidates who are advocating for things that go against core American principles.”

AFP was founded in 2004 by oil barons and manufacturers Charles and David Koch, who largely funded the right-wing “tea party” movement of the Republican Party. David Koch died in 2019.

Charles Koch has since decried the division in the nation that he sees as harming American life and even its business climate — and claims he wants to help heal the rift.

“Boy did we screw up; what a mess,” he wrote about the blistering partisan divide in his 2020 book, “Believe in People.”

Trump has blasted the brothers — who have supported free trade rather than Trump’s economic nationalism and isolationism — as “globalists” and a “total joke.”

The AFP position would be a significant boost to those seeking a Republican Party without Trump as its leader in 2024.


Eric Trump Cries over Brexit and Dad’s Losses at Scottish Golf Courses!

Trump's Scottish golf resorts to get over $1M in tax relief

Dear Commons Community,

Brexit, hailed by Donald Trump as a “great thing” for Britain and for business, is now being blamed by his son, Eric, for millions of dollars in losses suffered by the former president’s golf courses in Scotland.

Donald Trump often extolled Britain’s pullout from the European Union following the 2016 Brexit vote. He compared the sentiment fueling the move to the feelings of his own isolationist supporters upset about immigration. “They’re angry over borders. They’re angry over people coming into the country,” he said that year. “People want to take their country back.”

Trump also claimed Brexit would help his businesses. “When the pound goes down, more people are coming to [his golf course in] Turnberry,” he predicted then.

But Trump’s Turnberry resort in South Ayrshire, Scotland, has reported a pretax loss for 2021 of £3.7 million, the Financial Times wrote Thursday. Trump International Golf Club Scotland Ltd., the parent of his course in Aberdeenshire, reported an additional loss of around £696,000, according to financial statements published Wednesday.

The news comes amid Brexit’s lasting influence on the flow of workers into the country. Eric Trump, who signed off the financial accounts for Turnberry as director of the company, wrote that the “staffing pool” had been indirectly affected by Brexit, according to the BBC. He said that a “lack of access to European staff for businesses in general … [resulted] in greater demand for the individuals previously available at the resort.”

Consequently, Brexit has “impacted our business as supply chains have been impacted by availability of drivers and staff, reducing deliveries and the availability of certain product lines,” the younger Trump wrote.

In addition, “prices have increased, from additional freight and import duty charges,” he added.

He also partly blamed COVID-19, pointing out that the United Kingdom’s response to the pandemic triggered a three-month closure of the resort at the start of 2021.

Donald Trump bought Turnberry in 2014 for $60 million. He built the Aberdeenshire course after he purchased 1,400 acres of land in 2006.

Turnberry has yet to show a profit in almost a decade of ownership by the Trump family, noted the Financial Times. Aberdeenshire has lost millions.

Another Trump business turned to *&^%!


Questions Raised about China Spy Balloon Shot Down Yesterday!

Dear Commons Community,

The Associated Press has an article this morning recapping what we know — and don’t know — about the Chinese balloon shot down yesterday! The white orb that drifted across U.S. airspace this week and was shot down by the Air Force over the Atlantic on live television triggered a diplomatic maelstrom.


What was it? China insists the balloon was just an errant civilian airship used mainly for meteorological research that went off course due to winds and had only limited “self-steering” capabilities. It also issued a threat of “further actions.”

In a statement after the craft was shot down, China’s Ministry of Foreign Affairs said the use of force by the U.S. was “an obvious overreaction and a serious violation of international practice.”

It added: “China will resolutely uphold the relevant company’s legitimate rights and interests, and at the same time reserving the right to take further actions in response.”

The United States says it was a Chinese spy balloon without a doubt. Its presence prompted Secretary of State Antony Blinken to cancel a weekend trip to China that was aimed at dialing down tensions that were already high between the countries.

The Pentagon says the balloon, which was carrying sensors and surveillance equipment, was maneuverable and showed it could change course. It loitered over sensitive areas of Montana where nuclear warheads are siloed, leading the military to take actions to prevent it from collecting intelligence.

A U.S. Air Force fighter jet shot down the balloon yesterday afternoon off the Carolina coast. Television footage showed a small explosion, followed by the balloon slowly drifting toward the water. An operation is underway to recover the remnants.

A look at what’s known about the balloon — and what isn’t:

The Pentagon and other U.S. officials say it was a Chinese spy balloon — about the size of three school buses — that moved east over America at an altitude of about 60,000 feet (18,300 meters). The U.S. says it was being used for surveillance and intelligence collection, but officials have provided few details.

U.S. defense and military officials said that the balloon entered the U.S. air defense zone north of the Aleutian Islands on Jan. 28 and moved over land across Alaska and into Canadian airspace in the Northwest Territories on Jan. 30. The next day it crossed back into U.S. territory over northern Idaho. U.S. officials spoke on condition of anonymity to discuss the sensitive topic.

The White House said Biden was first briefed on the balloon on Tuesday. The State Department said Blinken and Deputy Secretary Wendy Sherman spoke with China’s senior Washington-based official on Wednesday evening about the matter.

In the first public U.S. statement, Brig. Gen. Pat Ryder, the Pentagon press secretary, said Thursday evening that the balloon was not a military or physical threat — an acknowledgement that it was not carrying weapons. He said that “once the balloon was detected, the U.S. government acted immediately to protect against the collection of sensitive information.”

Even if the balloon was not armed, it posed a risk to the U.S., said retired Army Gen. John Ferrari, a visiting fellow at the American Enterprise Institute. The flight itself, he said, could be used to test America’s ability to detect incoming threats and to find holes in the country’s air defense warning system. It may also have allowed the Chinese to sense electromagnetic emissions that higher-altitude satellites cannot detect, such as low-power radio frequencies that could help them understand how different U.S. weapons systems communicate.

On Wednesday as the balloon loitered over Montana, Biden authorized the military to shoot it down as soon as it was in a location where there would not be undue risk to civilians. Due to its massive size and altitude, the debris field of its sensors and the balloon itself was expected to stretch for miles. So, top military and defense leaders advised Biden not to take it down over land, even when it was over sparsely populated areas.

At 2:39 p.m. Saturday, as the balloon flew in U.S. airspace about 6 nautical miles off the coast of South Carolina, a single F-22 fighter jet from Virginia’s Langley Air Force Base — flying at an altitude of 58,000 feet — fired an AIM-9X Sidewinder into it. The Sidewinder is a short-range missile used by the Navy and Air Force primarily for air-to-air engagements. The missile is about 10 feet long and weighs about 200 pounds.

Live news feeds showed the moment of impact, as the balloon collapsed and began a lengthy fall into the Atlantic.

The F-22 was supported by an array of Air Force and Air National Guard fighter jets and tankers, including F-15s from Massachusetts and tanker aircraft from Oregon, Montana, Massachusetts, South Carolina and North Carolina. All pilots returned safely to base and there were no injuries or other damage on the ground, a senior military official told reporters in a Saturday briefing.

As the deflated balloon was slowly drifting down, U.S. Navy vessels had already moved in, waiting to collect the debris.

The Federal Aviation Administration had temporarily closed airspace over the Carolina coast, including the airports in Myrtle Beach and Charleston, South Carolina, and Wilmington, North Carolina. And the FAA and Coast Guard worked to clear the airspace and water below the balloon.

Once the balloon crashed into the water, U.S. officials said, the debris field stretched at least 7 miles, and was in water 47 feet deep. That depth is shallower than they had anticipated, making it easier to retrieve pieces of the sensor package and other parts that may be salvageable.

Officials said the USS Oscar Austin, a Navy destroyer, the USS Carter Hall, a dock landing ship, and the USS Philippine Sea, a guided missile cruiser, are all part of the recovery effort, and a salvage vessel will arrive in a few days. They said Navy divers will be on hand if needed, along with unmanned vessels that can recover debris and lift it back up to the ships. The FBI will also be present to categorize and assess anything recovered, officials said.

As for intelligence value, the U.S. officials said the balloon’s voyage across the U.S. gave experts several days to analyze it, gather technical data, and learn a lot about what it was doing, how it was doing it and why China may be using things like this. They declined to provide details, but said they expect to learn more as they gather and scrutinize the debris.

We will have to wait and see what our military says!


ChatGPT Is Kicking Off an A.I. Arms Race!

Why Generative AI Like ChatGPT Will Be the Defining Tech of This Year

Dear Commons Community,

Kevin Roose, the technology columnist for The New York Times, has an insightful piece this morning entitled, “How ChatGPT Kicked Off an A.I. Arms Race.” He reviews how OpenAI, the company that developed ChatGPT, got started and how it is now attracting investments and interest from major American companies. Microsoft, Amazon, and Google are taking note of its amazing success: ChatGPT has logged more than 30 million users and gets roughly five million visits a day, making it one of the fastest-growing software products in history. (Instagram, by contrast, took nearly a year to get its first 10 million users.) Baidu, the Chinese tech giant, is preparing to introduce a chatbot similar to ChatGPT in March, according to Reuters. Roose also mentions that a new OpenAI model will arrive later this year and that its abilities may make the current version look “quaint.”


The entire column is below.



The New York Times

“How ChatGPT Kicked Off an A.I. Arms Race”

By Kevin Roose

Feb. 3, 2023

One day in mid-November, workers at OpenAI got an unexpected assignment: Release a chatbot, fast.

The chatbot, an executive announced, would be known as “Chat with GPT-3.5,” and it would be made available free to the public. In two weeks.

The announcement confused some OpenAI employees. All year, the San Francisco artificial intelligence company had been working toward the release of GPT-4, a new A.I. model that was stunningly good at writing essays, solving complex coding problems and more. After months of testing and fine-tuning, GPT-4 was nearly ready. The plan was to release the model in early 2023, along with a few chatbots that would allow users to try it for themselves, according to three people with knowledge of the inner workings of OpenAI.

But OpenAI’s top executives had changed their minds. Some were worried that rival companies might upstage them by releasing their own A.I. chatbots before GPT-4, according to the people with knowledge of OpenAI. And putting something out quickly using an old model, they reasoned, could help them collect feedback to improve the new one.

So they decided to dust off and update an unreleased chatbot that used a souped-up version of GPT-3, the company’s previous language model, which came out in 2020.

Thirteen days later, ChatGPT was born.

In the months since its debut, ChatGPT (the name was, mercifully, shortened) has become a global phenomenon. Millions of people have used it to write poetry, build apps and conduct makeshift therapy sessions. It has been embraced (with mixed results) by news publishers, marketing firms and business leaders. And it has set off a feeding frenzy of investors trying to get in on the next wave of the A.I. boom.

It has also caused controversy. Users have complained that ChatGPT is prone to giving biased or incorrect answers. Some A.I. researchers have accused OpenAI of recklessness. And school districts around the country, including New York City’s, have banned ChatGPT to try to prevent a flood of A.I.-generated homework.

Yet little has been said about ChatGPT’s origins, or the strategy behind it. Inside the company, ChatGPT has been an earthshaking surprise — an overnight sensation whose success has created both opportunities and headaches, according to several current and former OpenAI employees, who requested anonymity because they were not authorized to speak publicly.

An OpenAI spokesman, Niko Felix, declined to comment for this column, and the company also declined to make any employees available for interviews.

Before ChatGPT’s launch, some OpenAI employees were skeptical that the project would succeed. An A.I. chatbot that Meta had released months earlier, BlenderBot, had flopped, and another Meta A.I. project, Galactica, was pulled down after just three days. Some employees, desensitized by daily exposure to state-of-the-art A.I. systems, thought that a chatbot built on a two-year-old A.I. model might seem boring.

But two months after its debut, ChatGPT has more than 30 million users and gets roughly five million visits a day, two people with knowledge of the figures said. That makes it one of the fastest-growing software products in memory. (Instagram, by contrast, took nearly a year to get its first 10 million users.)

The growth has brought challenges. ChatGPT has had frequent outages as it runs out of processing power, and users have found ways around some of the bot’s safety features. The hype surrounding ChatGPT has also annoyed some rivals at bigger tech firms, who have pointed out that its underlying technology isn’t, strictly speaking, all that new.

ChatGPT is also, for now, a money pit. There are no ads, and the average conversation costs the company “single-digit cents” in processing power, according to a post on Twitter by Sam Altman, OpenAI’s chief executive, likely amounting to millions of dollars a week. To offset the costs, the company announced this week that it would begin selling a $20 monthly subscription, known as ChatGPT Plus.

Despite its limitations, ChatGPT’s success has vaulted OpenAI into the ranks of Silicon Valley power players. The company recently reached a $10 billion deal with Microsoft, which plans to incorporate the start-up’s technology into its Bing search engine and other products. Google declared a “code red” in response to ChatGPT, fast-tracking many of its own A.I. products in an attempt to catch up.


Mr. Altman has said his goal at OpenAI is to create what is known as “artificial general intelligence,” or A.G.I., an artificial intelligence that matches human intellect. He has been an outspoken champion of A.I., saying in a recent interview that its benefits for humankind could be “so unbelievably good that it’s hard for me to even imagine.” (He has also said that in a worst-case scenario, A.I. could kill us all.)

As ChatGPT has captured the world’s imagination, Mr. Altman has been put in the rare position of trying to downplay a hit product. He is worried that too much hype for ChatGPT could provoke a regulatory backlash or create inflated expectations for future releases, two people familiar with his views said. On Twitter, he has tried to tamp down excitement, calling ChatGPT “incredibly limited” and warning users that “it’s a mistake to be relying on it for anything important right now.”

He has also discouraged employees from boasting about ChatGPT’s success. In December, days after the company announced that more than a million people had signed up for the service, Greg Brockman, OpenAI’s president, tweeted that it had reached two million users. Mr. Altman asked him to delete the tweet, telling him that advertising such rapid growth was unwise, two people who saw the exchange said.

OpenAI is an unusual company, by Silicon Valley standards. Started in 2015 as a nonprofit research lab by a group of tech leaders including Mr. Altman, Peter Thiel, Reid Hoffman and Elon Musk, it created a for-profit subsidiary in 2019 and struck a $1 billion deal with Microsoft. It has since grown to around 375 employees, according to Mr. Altman — not counting the contractors it pays to train and test its A.I. models in regions like Eastern Europe and Latin America.

From the start, OpenAI has billed itself as a mission-driven organization that wants to ensure that advanced A.I. will be safe and aligned with human values. But in recent years, the company has embraced a more competitive spirit — one that some critics say has come at the expense of its original aims.

Those concerns grew last summer when OpenAI released its DALL-E 2 image-generating software, which turns text prompts into works of digital art. The app was a hit with consumers, but it raised thorny questions about how such powerful tools could be used to cause harm. If creating hyper-realistic images was as simple as typing in a few words, critics asked, wouldn’t pornographers and propagandists have a field day with the technology?

To allay these fears, OpenAI outfitted DALL-E 2 with numerous safeguards and blocked certain words and phrases, such as those related to graphic violence or nudity. It also taught the bot to neutralize certain biases in its training data — such as making sure that when a user asked for a photo of a C.E.O., the results included images of women.

These interventions prevented trouble, but they struck some OpenAI executives as heavy-handed and paternalistic, according to three people with knowledge of their positions. One of them was Mr. Altman, who has said he believes that A.I. chatbots should be personalized to the tastes of the people using them — one user could opt for a stricter, more family-friendly model, while another could choose a looser, edgier version.

OpenAI has taken a less restrictive approach with ChatGPT, giving the bot more license to weigh in on sensitive subjects like politics, sex and religion. Even so, some right-wing conservatives have accused the company of overstepping. “ChatGPT Goes Woke,” read the headline of a National Review article last month, which argued that ChatGPT gave left-wing responses to questions about topics such as drag queens and the 2020 election. (Democrats have also complained about ChatGPT — mainly because they think A.I. should be regulated more heavily.)

As regulators swirl, Mr. Altman is trying to keep ChatGPT above the fray. He flew to Washington last week to meet with lawmakers, explaining the tool’s strengths and weaknesses and clearing up misconceptions about how it works.

Back in Silicon Valley, he is navigating a frenzy of new attention. In addition to the $10 billion Microsoft deal, Mr. Altman has met with top executives at Apple and Google in recent weeks, two people with knowledge of the meetings said. OpenAI also inked a deal with BuzzFeed to use its technology to create A.I.-generated lists and quizzes. (The announcement more than doubled BuzzFeed’s stock price.)

The race is heating up. Baidu, the Chinese tech giant, is preparing to introduce a chatbot similar to ChatGPT in March, according to Reuters. Anthropic, an A.I. company started by former OpenAI employees, is reportedly in talks to raise $300 million in new funding. And Google is racing ahead with more than a dozen A.I. tools.

Then there’s GPT-4, which is still scheduled to come out this year. When it does, its abilities may make ChatGPT look quaint. Or maybe, now that we’re adjusting to a powerful new A.I. tool in our midst, the next one won’t seem so shocking.