March 26, 2026

The Limits of Artificial Intelligence in Professional Military Education | Politics

Why Military Colleges Should Not Go “All AI – All the Time”

As the nation grapples with how to incorporate artificial intelligence (AI) into military education, it must resist the temptation to frame the debate as a binary choice—either banning AI from the classroom or adopting permissive rules that allow students to use the technology as a substitute for genuine reading, writing, research, and reasoning. In my article, “A Guide to Collaborating With AI in the Military Classroom,” I argue for a middle way in which professional military education (PME) strategically adopts AI, while ensuring that students do not become overly dependent on the technology for problem-solving. My thesis is that, in order to partner effectively with a machine, students must first develop substantive knowledge, core skills, and independent judgment. An overly permissive approach to AI will seriously undermine PME’s mission to prepare leaders to fight and win America’s wars.

In a resounding rejection of this middle way, Professor Jim Lacey of Marine Corps University published an article titled “Transitioning Professional Military Education to All AI—All the Time,” in which he dismisses the notion that an overreliance on AI can have a deleterious effect on student learning. According to his account, the rapid adoption of AI across PME is the only responsible way forward. Lacey states that professors who have “opted for obsolescence” hold opinions that are no longer relevant, and that critics who argue that AI might undermine critical thinking will be “pushed into history’s trash bin.”

It is clear that we have substantive disagreements regarding both the potential opportunities and the risks associated with adopting AI in military education. Given our disparate views concerning the way forward—and the potentially serious consequences of pursuing the wrong course of action—I offer this partial refutation of Professor Lacey’s claims in the hope that it will inform decision-makers as they craft the nation’s military education policy.

An AI Policy Should Incorporate All of PME, Not Just the War Colleges

A general approach to the inclusion of AI in a curriculum must account for how the military educates service members differently, depending on students’ capabilities at each stage of their careers. In drafting “A Guide to Collaborating With AI in the Military Classroom,” I put forward three general principles that should be incorporated into curricular design, regardless of whether faculty are teaching at service academies, staff colleges, or war colleges. These principles include the following:

  • Students Are Obligated to Understand the Inherent Fallibility of AI
  • Students Should Be Aware of the Programmer’s Invisible Hand
  • Students Need to Know How to Operate Without the Aid of AI

The purpose of this paper is not to prescribe how to incorporate AI into specific courses, but rather to highlight potential student vulnerabilities and offer suggestions for how they can be managed within a broad curricular framework across PME. Even as AI is incorporated into PME, faculty must ensure that the technology does not supplant student progress in reading, writing, and critical thinking.

In his “All AI—All the Time” rebuttal, Jim Lacey takes issue with my general framework, arguing that PME is not “grade school.” He maintains that students entering PME already know how to read and write, and he doubts that any of them are unaware of AI’s fallibility. In his words:

It would be far better for all concerned if we kept a simple yet often overlooked fact at the forefront of our decision-making process: every war college student is rapidly approaching or already past the age of 40. While all of Woessner’s concerns may be appropriate for high school students, they are senseless for adult professionals. PME is not grade school. Every student entering our hallowed halls knows how to read and write, and our current programs do almost nothing to enhance those skills. Moreover, I doubt that there is a PME student alive who does not know that AIs are fallible and often make stuff up. Thus, no one is “surrendering” their judgment to AIs, now or for some time to come.

Professor Jim Lacey’s formulation of the problem illustrates several points of disagreement concerning how to craft an AI policy for PME. First, while Lacey’s paper purports to define a way forward for all of PME, his examples are drawn largely from senior service colleges. Noting that war college students are typically in their forties, he asserts that an emphasis on reading and writing is “senseless.” Putting aside the question of whether senior service colleges already devote considerable resources to writing instruction, the fact remains that most PME students are not in their forties. The students at Marine Corps University, with whom Professor Lacey works, represent a distinct minority of service members: officers at the tail end of long and distinguished careers. By contrast, the majority of students in PME—those attending the nation’s service academies—are, in fact, recent high school graduates. They would benefit from instruction that emphasizes reading, writing, and critical thinking. To the extent that an overreliance on AI might undermine these foundational skills, instructors should think carefully about incorporating such tools into the classroom. The impact of AI on students attending the nation’s service academies (i.e., the U.S. Naval Academy, the U.S. Military Academy, the U.S. Air Force Academy, the U.S. Coast Guard Academy) should be at the forefront of education policy in PME.

Second, notwithstanding Lacey’s doubt “that there is a PME student alive who does not know that AIs are fallible and often make stuff up,” as I note in my October article, these systems nevertheless have the capacity to generate seemingly plausible claims based on loaded assumptions or faulty premises. Knowing, in the abstract, that a system can make an error does not, in and of itself, help students remain on their guard or give them the tools needed to identify potential mistakes in AI reasoning. These are sophisticated systems with potentially serious cognitive and informational biases. The very fact that Professor Lacey casually dismisses the notion that students might come to defer to AI systems only underscores why PME should proceed with caution. This is a question that has not been thoroughly studied and therefore deserves serious attention.

If the purpose of Jim Lacey’s “All AI—All the Time” article were merely to promote expanded use of AI in the senior service colleges, his aggressive push to incorporate the technology into the JPME curriculum might be easier to justify. A lieutenant colonel earning a master’s degree at Marine Corps University has far more life experience with which to second-guess an AI system than a midshipman attending the United States Naval Academy. Twenty-year veterans of military or interagency service are better equipped to engage with this technology than students straight out of high school. It is essential that, as educators chart a path forward for AI, they do not presume that all PME students are the same. Accordingly, PME must account for the differences between eighteen-year-old Naval Academy students and forty-year-old O-5s attending the Naval War College.

Surrendering to AI in the Classroom Is a Choice, Not an Inevitability

An important component of Jim Lacey’s push for adopting AI in the classroom is his assumption that students already have access to these tools. Why, he asks, try to put the genie back into the bottle? Quoting again from Lacey’s “All AI—All the Time” article:

We are already living in a world where 85% of college students admit to regularly using AI, where high schoolers are writing in The Atlantic about AI “demolishing” their education, and parents are finding AI cheat sheets in their grade schoolers’ laundry. If we can assume that PME students are at least as clever as a grade-schooler, we must also accept that every one of them is, or soon will be, using AI throughout the academic year. Moreover, given the speed at which AI is being integrated into the administrative and warfighting infrastructure of every Service, educators have a duty to ensure that their students are as familiar with AI tools as possible.

As an educator in PME, I can attest to the fact that students are finding ways to incorporate AI into their coursework. This adaptation goes well beyond using AI to summarize readings or outline papers. Some students at the National Defense University, many of whom have long commutes, use online resources to transform book chapters or academic papers into virtual podcasts. They feed course material into an AI program that converts it into virtual, long-form discussions of the major themes in the assigned readings. Putting aside the problem of allowing an AI to act as a constant intermediary between author and reader, students are doing what students have always done. Like rational actors in a classic game-theoretic framework, self-interested students seek to accomplish their assigned tasks at the lowest possible cost. AI is simply the newest tool for utility maximization.

As with any new technology, educators must ask two fundamental questions. First, does reliance on the technology enhance or degrade learning outcomes? Second, if the tool degrades students’ education, what steps, if any, can schools take to discourage its widespread adoption? Just as they did with prior innovations, schools will have to adapt to new tools, restricting access in some cases and surrendering to ubiquitous technology in others. Elementary schools can block the use of pocket calculators during an arithmetic exam but are largely powerless to restrict student access at home.

Looking to how schools adapted to pocket calculators, faculty may not be able to prevent students from using AI at home, but they retain the option of restricting access to the technology in the classroom. Presuming students are rational actors who value good grades, colleges can selectively incentivize the development of vital skills, such as reasoning, by administering assessments (e.g., class discussions, live exercises, or bluebook examinations) that prevent students from accessing AI. To secure good grades, students must become accustomed to working through problems on their own. Once students understand that passing specific courses (e.g., statistics, introductory American government, chemistry, or constitutional law) requires the ability to work independently, they will be compelled to limit their reliance on AI when completing assigned tasks. They may still use AI to help master basic skills, either as a tutor or as a debating partner. In the end, students will be assessed based on what they have learned, not on whether they can write clever prompts.

In advocating a middle way, I have never taken the position that we can or should completely restrict access to AI, nor have I denied that this technology will transform education. The real question is how AI will become part of the curriculum and on what terms. Despite Jim Lacey’s insistence that “full integration of AI into professional military education is inevitable,” there remains considerable debate over how to incorporate these tools into the curriculum. It is not clear from his “All AI—All the Time” approach whether he would embrace any restrictions on the use of AI in the classroom.

PME Should Shun a One-Size-Fits-All Approach to AI in the Classroom

As educators consider how and when to deploy AI in the classroom, they must tailor its application to the unique needs of their students. In most instances, these decisions will not be made by high-level administrators issuing blanket orders either to restrict or to adopt AI. Rather, faculty with specialized expertise must incorporate elements of AI into the curriculum while accounting for differences in venue (e.g., service academy versus war college), subject matter (e.g., statistics, logistics, grand strategy), and application (e.g., training versus education). Logically, how AI should be deployed in the United States Military Academy’s course on Constitutional and Military Law will, of necessity, differ from the Eisenhower School’s use of AI in its course on Global Supply Chain and Logistics.

Take the United States Military Academy’s course on Constitutional and Military Law. Could faculty develop an “All AI—All the Time” strategy for learning constitutional and military law? Absolutely. Students could use AI to summarize the day’s readings, use ChatGPT to formulate responses for in-class discussion, and write clever prompts to answer whatever questions might arise during an exam. With the aid of AI, undergraduates could bypass much of the tedious work associated with learning legal terminology, court precedent, and theories of law—and still earn a good grade. For all its ease and efficiency, a radical integration of AI into the coursework would make students hopelessly dependent on a machine to resolve even rudimentary legal questions. This would ultimately undermine the very purpose of the course.

During the eighteen years I taught courses on legal history, institutional power, and civil liberties at Penn State Harrisburg, my goal was to equip students with the ability to work intuitively through legal questions before consulting authoritative sources. If an executive, citing national security exigencies, were to seize NVIDIA’s AI chip manufacturing facilities, students would immediately view the move with skepticism, recalling the Supreme Court’s ruling on President Truman’s seizure of steel mills (i.e., Youngstown Sheet & Tube Co. v. Sawyer). While many legal questions turn on subtle differences in fact patterns or on a possible reversal of precedent by the Supreme Court, having a working knowledge of course concepts permits students to formulate an informed opinion about a controversy before turning to the views of pundits, politicians, or political operatives. Although I taught my law courses long before the advent of AI, the need to develop independent judgment rooted in key terminology, history, and case law remains the same. Students still benefit from completing assigned readings, briefing cases in class, and resolving vexing legal questions in long-form bluebook examinations.

An updated version of a constitutional law course might include a lesson on the use of AI in legal research or an exercise in which students critique AI-generated legal analysis. For students to engage effectively with AI, educators must create spaces in which they can develop core competencies, foundational knowledge, and critical thinking skills. This would entail placing limits on the use of AI to encourage students to develop the habit of resolving problems on their own. Only after students have developed these core competencies can they engage with AI as a partner rather than simply follow its lead.

In my article, “A Guide to Collaborating With AI in the Military Classroom,” I provided an example of ChatGPT making a subtle but significant error in its application of the Geneva Convention. Having taught the Geneva Convention as part of my legal courses, I immediately spotted the mistake. By citing passage after passage from the text of the convention, I was able to get ChatGPT to acknowledge its error and reverse its erroneous conclusion. Without knowledge of the text of the Geneva Convention and an appreciation for AI’s capacity to make mistakes, I might have accepted its flawed legal analysis at face value. While AI can serve as an effective starting point for research on questions of ethics, law, and policy, it can just as easily go off the rails. Only students who master the fundamentals will have any hope of telling the difference.

The inherent challenges of incorporating AI into a single course such as West Point’s military and constitutional law illustrate why PME cannot adopt a one-size-fits-all approach to incorporating new technology into the curriculum. Every course is different. Every college must tailor its learning objectives to the unique needs of its students. Rather than directing all faculty to incorporate AI into the classroom, educators should engage in thoughtful discussions about how the technology will affect key learning objectives for students at different institutions and across different subject matters. In some instances, AI will be helpful; in others, it will not. Despite Jim Lacey’s call to go “All AI—All the Time,” PME should resist the temptation to open the floodgates and hope for the best.

Faculty Will Continue to Matter, Even in the Age of AI

Perhaps the point on which Professor Jim Lacey and I have the sharpest difference of opinion concerns how the adoption of AI will affect the need for Title 10 faculty. Whereas we both recognize that this innovation will force the military to rethink elements of its education strategy, Lacey goes much further, predicting that AI will eliminate the need for most faculty and render those who refuse to fully embrace the technology superfluous. As he puts it:

In an earlier article, “Peering into the Future of Artificial Intelligence in the Military Classroom,” I stated that AI would not replace Title 10 professors, but that many professors would be replaced by professors who are comfortable with AI tools. I no longer believe that statement to be true. AI advances in just the last six months have made it an existential risk to all but a few Title 10 professors. We are rapidly entering an educational environment where only those who master human-AI teaming are likely to survive.

In considering Jim Lacey’s assertion that most traditional faculty will eventually be driven to extinction, we must resist the temptation to downplay the disruptive nature of AI out of concern for the livelihoods of Title 10 employees. While I have the greatest respect for civilian faculty working in PME, our ongoing service to the nation must be justified in terms of the value we bring to the education of the military. If Lacey is correct that our function as educators can be performed more effectively and at considerably less expense by chatbots, we deserve to go the way of ice harvesters, mechanical typesetters, and telephone switchboard operators. The security of the nation must supersede all considerations of mere tradition or personal job security. Yet, even with this in mind, it is far from certain that traditional faculty stand at the precipice of oblivion.

There is a kernel of truth to Jim Lacey’s assertion that AI will alter the landscape of military higher education, particularly when it comes to learning rudimentary skills such as mathematics, science, and even basic writing. Where there are clearly defined rules and the material lacks subjectivity, AI can be an excellent teaching assistant—not merely because it has memorized procedures for problem-solving, but because it can answer questions and provide impromptu feedback.

For the twenty years that I taught statistics to undergraduates, an inordinate amount of my time was spent creating practice problems, writing quizzes, marking up assignments, and providing written feedback on exams. Because I taught without graduate assistants, there was a practical limit to how many students I could mentor at one time. If I were teaching statistics today, I could streamline most of these tasks and teach twice as many students without falling behind. Indeed, one of AI’s more extraordinary features is that it can answer student questions and give the faculty member individualized feedback on each pupil’s progress. In the context of teaching undergraduate statistics, ChatGPT could easily take the place of a skilled graduate assistant, interacting with students and handling all but the most vexing problems. This has the potential to boost faculty productivity and reduce the number of professors required to cover the assigned material.

As I have never taught physics, chemistry, biology, engineering, accounting, or basic writing, I am loath to speculate about how well these AI “graduate assistants” might perform in other disciplines. At least in theory, the technology could permit colleges and universities to educate more students with fewer faculty members. The productivity gains associated with relying on AI graduate assistants will vary from discipline to discipline. While this may gradually reduce the need for instructors over time, Jim Lacey’s blanket pronouncement that AI will eliminate all but a few Title 10 positions is premature at best.

Where Lacey’s displacement hypothesis most clearly breaks down is at the nation’s senior service institutions, such as National Defense University (NDU). As a JPME school, NDU is charged with preparing officers—typically O-5s and O-6s—for joint duty assignments by developing their ability to think critically, operate collaboratively, and apply joint doctrine to complex operational and strategic problems. Like virtually all other senior service institutions, NDU shuns the “sage on the stage” approach to instruction, in which faculty lecture at students, rendering them passive recipients of received knowledge. Under the seminar model, faculty lead groups of fifteen to twenty students through discussions of course materials organized by topic and supported by extensive readings. More than mere facilitators, faculty serve as mentors, providing general introductions to topics, posing preliminary questions, and helping students work together to think through vexing and sometimes seemingly intractable problems.

By its nature, much of the material covered in JPME is subjective, value-laden, and steeped in uncertainty. There is no single approach to working through a strategic dilemma. The most appropriate course of action depends on how one prioritizes strategic goals, assesses available means, and manages risk. Complicating matters further, much of the instruction at senior service colleges centers on collaboration with fellow professionals to consider problems from different points of view, draw on varying expertise, and manage personalities that, as in the real world, can interfere with problem solving. In short, the types of problems that JPME institutions confront bear little resemblance to the rote, value-neutral, procedure-based problems for which AI is most useful.

While programs like ChatGPT can tutor students in statistics, there is no prospect that they will replace faculty as mentors to experienced military officers or interagency professionals. Far from being driven into extinction, faculty remain indispensable: only humans have the capacity to teach students to evaluate strategic problems independently, thereby instilling the requisite skepticism needed to make effective human–machine collaboration possible.

Conclusion

Despite our differences, there are many points on which Professor Jim Lacey and I appear to agree. First, AI is a revolutionary technology that has the potential to transform professional military education. Schools must embrace the technology lest their curricula fall into obsolescence. Second, students are clever, industrious, and self-interested. It is unrealistic and ultimately self-defeating to attempt to restrict their access to these tools outside the classroom. Faculty need to rethink how they assess learning, if only to acknowledge the facts on the ground. Finally, as technology enthusiasts, Professor Lacey and I both see the potential for AI to turbocharge faculty productivity, giving individual instructors the capacity to accomplish more with increasingly scarce resources. If professors are encouraged to leverage AI, they can free themselves from some of the more mundane tasks associated with teaching, leaving more time to mentor students.

The points on which Professor Jim Lacey and I fundamentally disagree concern the scope and pace of educational reform. His “All AI—All the Time” approach to curricular reorganization understates the problems inherent in incorporating a new technology into military education systems that vary by institution type (e.g., service academies, staff colleges, and war colleges), subject matter (e.g., statistics, logistics, and grand strategy), and purpose (i.e., training versus education). Indeed, even as PME leaders move to modernize instruction, it remains the responsibility of provosts, academic deans, and department heads to craft curricular guidance that incorporates AI into the classroom only where doing so is likely to improve learning outcomes. For some topics, such as constitutional law, AI may add very little to longstanding course learning objectives. In other cases, such as planning and logistics, AI might be essential to helping students solve real-world problems. These decisions must be made on a case-by-case basis, grounded in experience, rather than driven by a desire to rush the technology into the classroom.

Finally, it is noteworthy that part of our disagreement over the pace of reform stems from differing assumptions about the trajectory of this emerging technology. Whereas I recognize that AI systems have markedly improved since I first began to experiment with large language models three years ago, I do not expect the technology, in the foreseeable future, to advance to the point where it can supplant faculty as curricular designers, instructors, or mentors. By contrast, Jim Lacey has gone from publicly extolling the indispensability of Title 10 faculty to asserting that “AI advances in just the last six months have made it an existential risk to all but a few Title 10 professors. We are rapidly entering an educational environment where only those who master human–AI teaming are likely to survive.” As I see it, faculty still matter, including those whose instruction is not tightly integrated with AI systems. AI cannot constitute an existential risk to Title 10 professors until such time as the technology is capable of drafting original PME curricula, leading course discussions, advising students, or managing interpersonal dynamics in seminars. Taken at face value, Lacey’s assertion that most Title 10 faculty face extinction rests on the assumption that the pace of technological change will soon allow AI to replace many of the tasks currently performed by more traditional scholars, instructors, and mentors.

Perhaps he is right, and I am underestimating the powers that the wizards of Silicon Valley are conjuring in their arcane workshops; perhaps those who fail to see the future will, as Lacey claims, be “pushed into history’s trash bin.” However, a remark variously attributed to physicist Niels Bohr and baseball great Yogi Berra provides some perspective: “It’s tough to make predictions, especially about the future.” As we cannot know for certain how this technology will unfold, we should continue to rely on what works—namely, people. Until such time as AI systems can truly supplant humans as intellectuals, mentors, and role models, I will continue to advocate the middle way—to embrace the technology where it can be helpful, while resisting efforts to forcibly integrate artificial intelligence into every classroom. To bet our future on the hope and expectation that AI systems can supplant all but a few technologically savvy faculty strikes me as reckless. The stakes are simply too high to place the future of the force in the hands of machines.


Matthew Woessner, Ph.D., is the dean of faculty and academic programs at the College of International Security Affairs at the National Defense University. He previously served on the faculty at the Army War College and Penn State University, Harrisburg. The views expressed in this article are those of the author and do not necessarily reflect those of National Defense University or the U.S. government.

 
