
Learning Br.AI.n: Wisdom after Intelligence

By Noha El Attar, Professor of Management Practice in Organizational Behavior, and Christina Terra, Professor, Department of Economics.

This project did not start in a corporate boardroom or a formal strategic planning session. It began in a far more ordinary, and therefore more revealing, place: a shared lunch between colleagues. By the time dessert arrived, the talk had drifted toward a question that had been rising in the background of all our work but had not yet been named directly: what is artificial intelligence doing to us?

Not only to our jobs, or to productivity, or to institutional efficiency, but to our minds. To our habits of attention. To the way we remember, judge, imagine, and decide. And, more quietly but more urgently, to our sense of what it means to be human in a world where thinking itself is increasingly shared with machines.

We decided to co-create meaning rather than deliver conclusions, inviting learners directly into the uncertainty during a three-day seminar for 800 bachelor learners across our campuses. This initiative, which we came to call Learning Br.AI.n, came to involve nearly six hundred learners across three continents, supported by a committed team of thirty facilitators and professors.

The Three Pillars of Cognitive Resilience

To help learners navigate the accelerating evolution of artificial intelligence, we organized the program into three interconnected pillars. Each pillar combines scientific understanding with reflective practice, emphasizing how learners should relate to their own cognition.

The First Pillar: Understanding the Machine Within

Before learners can meaningfully decide how to use artificial intelligence, they first need a working model of the biological system they are trying to support: their own brains. Learners are introduced to foundational principles of neuroscience, not to turn them into specialists, but to cultivate literacy about their own cognitive architecture. Contrary to the myth of purely rational learning, neuroscience shows that emotion is not a distraction from cognition; it is a gatekeeper. In an environment saturated with digital stimuli, learning to protect attention becomes a moral and existential act.

The Second Pillar: Augmentation Rather than Replacement

The second pillar addresses artificial intelligence directly. Learners are given an understanding of how AI systems work. The goal is clarity paired with judgment. A central principle guides this pillar: artificial intelligence should augment human thinking, not replace it. Cognitive science is clear that effortful thinking is not a flaw of learning; it is the mechanism through which understanding deepens.

The Third Pillar: Cognitive Traps and Digital Temptations

The third pillar confronts the shadow side of intelligent tools. Powerful technologies are never neutral; they reflect the views of their creators, and they shape habits, incentives, and self-perception. We examine how AI systems can fragment attention, weaken memory formation, and create an illusion of understanding without comprehension.

Educating the Clever Animal

We humans are animals. Highly sophisticated, symbolic, language-driven animals, but animals nonetheless. This evolutionary inheritance explains why human cognition operates through at least two complementary systems: one rational, deliberate, and analytical; the other intuitive, emotional, and fast (Kahneman, 2011). For the past two centuries, modern education has overwhelmingly privileged the rational system. Logic, optimization, abstraction, and control became the dominant measures of intelligence.

Learning Br.AI.n proposes a reframing. What remains distinctly human are not our computational abilities, but our extra-rational capacities: intuition, imagination, empathy, ethical sensitivity, and the ability to hold ambiguity without collapsing into certainty. These capacities do not accelerate systems; they stabilize them.

Two Trajectories: Mirror or Compass

As dialogue with learners deepened, an insight emerged: AI amplifies existing leanings and therefore reveals two radically different trajectories.

The first trajectory is seductive. It aligns with efficiency and personal gain. In this path, AI becomes a mirror of humanity’s unresolved shadows, placing our rational capacities in the service of our animal instincts. Moral theology described these tendencies through the seven deadly sins: pride, greed, lust, envy, gluttony, wrath, and sloth.

Pride appears as the illusion of total control and mastery. Greed manifests as the endless extraction of data and value. Lust becomes addiction to stimulation and novelty. Envy is fed by competitive metrics and constant comparison. Gluttony takes the form of overconsumption of information. Wrath is amplified through outrage-driven systems. Sloth emerges as the quiet delegation of responsibility to machines. In this trajectory, AI does not introduce new vices; it scales existing ones.

The second trajectory is resistant to automation. It asks a different question: what might happen if artificial intelligence frees us not to dominate, but to deepen our humanity? This path leans into the brain’s true comparative advantage: moral judgment, courage, loyalty, justice, care, and the capacity for transcendence. These qualities preserve meaning. AI, in this trajectory, becomes a compass rather than a mirror.

Voices from the Future: Learners’ Reflections

The most compelling dimension of Learning Br.AI.n emerged in learners’ reflections. Across cultures, disciplines, and continents, a shared insight became visible: while knowledge is increasingly automated, character, judgment, and imagination are not. What surprised many educators was not only how thoughtfully learners engaged ethical questions, but how profoundly their sense of possible futures shifted over the course of the program.

For many learners, the ability to imagine their future had long been constrained by inherited narratives of success and feasibility. Their future selves were shaped by what seemed realistically attainable within existing structures: limited time, limited resources, limited reach. Artificial intelligence initially appeared as another force that might narrow those horizons further, increasing competition and rendering human contribution marginal. Yet, as learners explored AI through the lens of augmentation, a different imaginative movement emerged.

Learners reflected on how their sense of personal value evolved. Rather than striving to outperform machines on speed, memory, or efficiency, many began to imagine themselves as trustworthy humans: individuals capable of judgment when rules are insufficient, of discernment when data is ambiguous, and of responsibility when outcomes affect real lives.

One learner wrote that their goal was no longer to be the smartest person in the room, but the one who asks the most honest question. Another observed that in a world of instant answers, wisdom might consist in knowing when not to answer. Learners were not asking to be protected from artificial intelligence; they were asking to be trusted with responsibility.

Three patterns clearly emerged in this reorientation of how learners used imagination to think about their futures.

First, several learners described how AI expanded their sense of scale. One learner who had previously imagined a career limited to local impact began envisioning work that could serve underserved communities across borders. With AI handling translation, data synthesis, and logistical complexity, the learner no longer saw global engagement as the privilege of large institutions alone.

Second, learners in creative fields reported a renewed sense of artistic ambition. Rather than fearing replacement, they imagined AI as a collaborator that could free them from technical constraints and allow deeper exploration of meaning, symbolism, and emotional resonance. One learner reflected that previously they had imagined their creative future as constrained by skill gaps; now they imagined themselves pursuing more ambitious projects precisely because technical barriers had been lowered: not removed, but repositioned.

Third, issues like climate change or systemic inequality had previously felt too complex or overwhelming for individual agency. Through the program, learners imagined futures in which AI-supported analysis could help them understand complexity more clearly, but where moral judgment would determine what to prioritize, whom to protect, and when to act.

Taken together, these reflections revealed a profound shift. Artificial intelligence did not diminish learners’ sense of future possibility. When approached thoughtfully, it expanded it.

Education as a Practice of Freedom

The role of the educator is undergoing a profound transformation. In an age where information is abundant, educators can no longer be defined primarily as providers of knowledge or trainers of rational skills. Instead, the educator becomes a guide who helps learners explore meaning, judgment, and their shared humanity.

Paulo Freire famously described education as a “practice of freedom,” in which knowledge is not deposited into passive learners. For Freire, true education cultivates critical consciousness: the ability to perceive social, ethical, and political contradictions and to act responsibly within them (Freire, 2021). When systems can generate answers instantly, uncritical acceptance becomes the greatest educational risk. From this perspective, Learning Br.AI.n is a living question, one that must remain open if it is to remain honest. Will intelligence, increasingly shaped by machines, be reduced to efficiency, speed, and optimization? Or can it be expanded into wisdom: the capacity to judge well, to care appropriately, and to act responsibly in complex situations?

References

Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Freire, P. (2021). Education for critical consciousness. Bloomsbury Publishing.
