Guide to AI Policy

28 principles for AI policy that “loves people into being.”

Every AI policy reveals what an institution believes about the people this technology will touch: their dignity, their formation, their freedom, their work, and their future. At its deepest level, AI policy begins with love rightly ordered: love for the people affected by the technology, love for the mission of the institution, and love for the common good that wise governance exists to protect.

AI policy is about people. The student whose application will be scanned, the patient whose chart will be summarized, the employee whose performance will be predicted, the donor whose appeal will be drafted: each one is a face. Not a data point. A face. And a face is the place where ethics and economics begin.

Just yesterday, my friend Michael Lee of the Harvard Human Flourishing Program reminded me that the highest aim of human institutions is “loving people into being.” The phrase names what the best schools, hospitals, and companies have always done. They have not merely served people; they have helped them become more fully themselves. The teacher who saw something the student could not yet see. The leader who built a room inside which a person could grow. The institution that defended a person from its own efficiency.

AI policy is where this love will or will not survive contact with scale.

AI is the most remarkable instrument for progress we have ever held. But an instrument is only as good as the hand that holds it, and the hand is only as good as the heart that moves it. A policy is how an institution moves its hand. It tells the world how it intends to love its people into being.

Below are twenty-eight principles, each anchored in a long human tradition. Read them as expressions of a single commitment: the people this policy will touch are not the second concern after performance optimization. They are the first concern, and the last, and the reason the policy exists at all.

What follows is a summary. If you would like the full guide, write to me. There is no work I would rather do with you.

Why This Framework Integrates Many Traditions

Institutional AI policy is a complex-systems problem. Many traditions, many constituencies, many forms of risk, many forms of harm, all braided together. Lawyers see the contract. Ethicists see the dignity. Engineers see the system. Educators see the student. Each is right; none alone is enough. A policy worth signing must hold many traditions in the same hand.

The principles below represent my integration. They draw on, among other inheritances:

  • Regulatory foundations: NIST AI RMF, EU AI Act, OECD AI Principles, UNESCO Recommendation on the Ethics of AI, GDPR, FERPA, HIPAA
  • Legal and constitutional foundations: Magna Carta, the U.S. Constitution, due process, fiduciary duty, agency law, administrative law, the human rights tradition
  • Philosophical foundations: Aristotle on purpose, Confucius on the rectification of names, Roman law on definitions, virtue ethics, care ethics, systems thinking
  • Theological foundations: Imago Dei, Catholic social teaching on the common good, Jewish teshuvah, Laudato Si', stewardship traditions
  • Professional and research ethics: medical ethics, the Belmont Report, IRB protocols, academic integrity, legal ethics, engineering ethics, accounting ethics
  • Safety and reliability traditions: aviation safety, just culture, product safety, incident reporting, post-market surveillance
  • Security and information governance: zero trust, least privilege, chain of custody, records management, archival stewardship
  • Governance and assurance: corporate governance, board oversight, internal audit, enterprise risk management, insurance, procurement, vendor management
  • Educational and formative traditions: the liberal arts, apprenticeship, John Henry Newman, Maryanne Wolf, the formation of judgment
  • Civic, labor, and inclusion traditions: civil rights, universal design, restorative justice, labor dignity, worker voice, accessibility
  • Communication and public trust traditions: media literacy, verification, correction, institutional transparency
  • Ecological traditions: environmental stewardship, sustainability, intergenerational responsibility

Over the past year, I have read the leading AI policies coming out of universities, hospitals, federal agencies, financial institutions, and major employers. They are competent. They are improving. But three quiet weaknesses run through many of them.

They tend to be written from a few disciplines or traditions, which means they miss the human, formational, and ontological gravity of what AI does to a person and an institution. They tend to be written for compliance rather than for flourishing, so they pass audit but do not move the institution toward what it could become. And they tend to be written in haste, without the deeper traditions that could anchor them, so they age the moment the technology changes.

The principles below are an offering of integrated thinking, from many traditions, for institutions that want their AI policy to be more than a compliance document.

Part I. Core Principles

Part I gathers the structural commitments. Each principle is a condition under which an institution can keep loving people into being at scale.

1. AI Must Serve Institutional Purpose. Capability is never the reason. Mission is. Tradition: Aristotelian teleology; the Preamble to the U.S. Constitution; Magna Carta; fiduciary stewardship; Catholic social teaching on the common good; Protestant vocational theology; the university tradition of ordered inquiry.

2. AI Requires Distinct Governance. Ordinary software is governed by IT. AI is governed by the institution, because AI produces work that looks like judgment. Tradition: NIST AI Risk Management Framework; OECD AI Principles; UNESCO Recommendation on the Ethics of AI; the EU AI Act; administrative law; academic integrity.

3. The Policy Must Name What It Governs. A policy that cannot define what it covers cannot govern what it covers. Tradition: Confucian rectification of names; Roman law of definitions; common-law statutory drafting; canon-law precision in obligation; information governance.

4. Oversight Must Match Risk. Brainstorming an agenda and deciding who gets hired are not the same act. The policy must know the difference. Tradition: EU AI Act risk classification; just-war proportionality; the Basel Accords; clinical-trial protocols; administrative-law proportionality; enterprise risk management.

5. Data Use Must Be Governed Before AI Use Is Permitted. Most AI risk is data risk wearing a new costume. Tradition: GDPR data minimization; FERPA; HIPAA; common-law confidentiality; chain-of-custody doctrine; privacy-by-design; data stewardship.

6. Human Responsibility Cannot Be Delegated to AI. Machines can recommend. They cannot carry conscience. Tradition: Imago Dei; Kantian dignity and the kingdom of ends; care ethics (Gilligan, Noddings); the Universal Declaration of Human Rights; natural justice; due process; the FAA pilot-in-command doctrine.

7. AI Outputs Must Be Treated as Claims Requiring Verification. AI is often fluent, confident, and wrong. Fluency is not evidence. Tradition: The scientific method; common-law evidentiary standards; academic citation; reasoned decision-making; retrieval-augmented generation; peer review.

8. Material AI Involvement Must Be Disclosed. Trust depends on knowing whose mind you are reading. Tradition: Academic integrity; truth-in-advertising; informed consent; GDPR transparency principles; the Rome Call for AI Ethics; fiduciary candor; professional responsibility.

9. Vendor Use Does Not Transfer Institutional Responsibility. You can outsource a tool. You cannot outsource the duty of care. Tradition: Product-liability doctrine; supply-chain ethics; OCC and EBA outsourcing guidance; third-party risk management; fiduciary duty; contract law.

10. AI Literacy Is an Institutional Duty. A policy cannot govern people who do not understand the instruments in their hands. Tradition: Civic education; professional formation; the apprenticeship tradition; UNESCO capacity-building; EU AI Act AI-literacy obligations; organizational learning theory.

11. AI Systems Require Continuing Oversight. What you approved a year ago is not what is running today. Models change while you sleep. Tradition: Cybernetics; aviation incident reporting; clinical post-market surveillance; internal audit; the Deming cycle; complex-systems theory.

12. AI Harm Requires Containment, Correction, and Repair. Good policy describes proper use. Better policy knows what to do when something breaks. Tradition: Restorative justice; Jewish teshuvah; Christian reconciliation; product recall doctrine; medical error disclosure; due process; cybersecurity breach response.

13. AI Policy Must Be Periodically Reauthorized. AI changes faster than most institutional policies. Yours should expire on a calendar, not by accident. Tradition: Legislative sunset clauses; Roman legal review; the Benedictine chapter review; corporate governance audit cycles; regulatory reauthorization.

14. Every AI Principle Must Become an Institutional Practice. A principle that cannot reach Tuesday morning is not yet a principle. It is a wish. Tradition: The Roman maxim that law without sanction is empty; compliance-by-design; internal controls; Deming quality systems; fiduciary accountability.

Part II. Specialized Principles

Part II turns to the particular places where this love is most easily lost: in memory, in voice, in authorship, in access, in the records of consequential decisions. Each principle stands guard at one of those doors.

15. AI Must Strengthen, Not Weaken, Human Formation. A university that lets AI replace learning weakens education. A company that lets AI replace judgment weakens leadership. Tradition: Aristotelian virtue formation; the liberal arts; Maryanne Wolf on deep reading; Hannah Arendt on thinking and responsibility; John Henry Newman's idea of the university.

16. Augmentation Must Be Distinguished from Substitution. Help and replacement differ in kind, not degree. The policy should know which is which. Tradition: Labor dignity and worker-voice traditions; medical physician-extender doctrine; the extended mind thesis; Erik Brynjolfsson on the Turing Trap; Catholic social teaching on work; human-centered design.

17. Agentic AI Requires Heightened Governance. When AI moves from generating words to performing actions, mistakes become events. Tradition: Agency law; the doctrine of delegated authority; cybersecurity least-privilege; power-of-attorney limitations; zero-trust architecture.

18. Institutional Memory Must Be Protected and Properly Grounded. AI can sound like it knows your institution when it does not. The voice it borrows is yours. Tradition: Archival stewardship; academic citation; common-law precedent; records management; chain-of-custody; founder tradition and organizational identity.

19. Synthetic Media and Impersonation Must Be Strictly Governed. A voice can now be lifted from a person and made to speak words she has never said. The institution must guard the faces and voices in its keeping. Tradition: Right of publicity; defamation law; informed consent; anti-fraud doctrine; human dignity; the image-bearing theological tradition; media ethics.

20. External AI Communications Require Institutional Control. Polished language can still be false. A donor appeal generated in seconds can promise what it cannot deliver. Tradition: Truth-in-advertising; public-relations ethics; fiduciary candor; contract law; securities disclosure; donor stewardship; institutional voice.

21. Intellectual Property and Authorship Must Be Protected. AI blurs authorship. The policy must restore the line. Tradition: Copyright law; trade-secret doctrine; academic authorship norms; moral rights; work-made-for-hire doctrine; research integrity.

22. AI Use Must Account for Environmental and Resource Stewardship. Every model has a cost in water, in power, in money. Mission justifies the cost, or it does not. Tradition: Environmental stewardship; Laudato Si'; sustainability governance; fiduciary stewardship; intergenerational responsibility; responsible innovation.

23. AI Adoption Must Preserve Access, Fairness, and Inclusion. Some people are advantaged by AI. Others are quietly removed by it. The policy must see both. Tradition: Disability rights; universal design; civil rights law; the Belmont Report's justice principle; Catholic social teaching on solidarity; inclusive design.

24. AI Governance Must Have Clear Roles and Decision Rights. Governance fails when everyone is interested, but no one is responsible. Tradition: Corporate governance; separation of powers; RACI models; internal controls; university shared governance; board duty of care.

25. AI Use Requires an Intake and Approval Process. Without intake, an institution does not know what it is using. The unknown is the first cost. Tradition: Administrative procedure; procurement governance; clinical trial protocols; regulatory sandboxing; compliance-by-design.

26. AI-Assisted Decisions Must Leave a Record. Institutions cannot learn from decisions they cannot reconstruct. Tradition: The administrative record doctrine; common-law evidentiary practice; audit trails; chain-of-custody; reasoned decision-making.

27. Shadow AI Must Be Managed, Not Ignored. If you forbid AI without providing useful tools, your people will find their own. They already have. Tradition: Shadow IT remediation; BYOD governance; just culture in aviation and medicine; safety culture; institutional ethics of permission and provision.

28. Affected Persons Have a Right to Human Review. When a decision affects a person's life, that person is owed a human face. Tradition: Due process; natural justice; audi alteram partem; GDPR automated-decision protections; restorative justice; human dignity.

Conclusion

Read the list slowly, and a pattern emerges. The newest technology is being asked to answer the oldest questions. What is the purpose of an institution? Whom do we owe? What does it mean to love someone into being inside a system built for scale?

The traditions are still here, still generous, still ready. We are at the beginning of this work, not the middle. We are laying foundations; our successors will build the rooms. What matters is that the foundations are anchored in something older than the technology they govern and oriented toward the people that technology was always meant to serve.

This is shared work. Write to me if you would like the full guide, or a thinking partner as you draft your own. The best AI policies of the next decade will not come from any single mind; they will come from leaders willing to write together, learn from one another, and anchor their work in the traditions that came before. I would be honored to think this through with you.