
2026-03-28

Keynote Speech by Nobel Laureate James J. Heckman at the 3rd World Conference on Artificial Consciousness

At the 3rd World Conference on Artificial Consciousness, held in Shenzhen on March 21, 2026, the World Academy of Artificial Consciousness presented certificates to Professor James J. Heckman, recipient of the 2000 Nobel Prize in Economic Sciences, and Professor Seeram Ramakrishna, Foreign Member of the Chinese Academy of Engineering. The conference was themed "Fundamental Theories and Practical Exploration of Artificial Consciousness, and Artificial Intelligence Empowering Proactive Health Medicine."

Certificate presentation at the conference

The following is the manuscript of the keynote speech delivered by James J. Heckman at the 3rd World Conference on Artificial Consciousness. In the talk, Heckman approaches artificial intelligence from the perspective of economics and argues that scientific progress requires more than prediction: it requires explanation, mechanism, and causal structure.

James J. Heckman keynote speech

Thank you very much for the opportunity to speak today. I come to this discussion as an economist. I am not a researcher working directly in artificial intelligence, but I use AI, I study its implications, and I believe it can contribute in a very important way to the scientific study of causality.

Today I would like to discuss three related themes. First, what we mean by causality in economics and in science more broadly. Second, why causal analysis cannot be reduced to conventional statistics alone, even though it is often treated that way. Third, how AI and machine learning can contribute to causal inquiry, especially when we are trying to build, compare, and interpret dynamic structural models.

I will also briefly discuss the use of AI and robots in research settings, particularly in projects involving human development and interaction. My central message is simple: if we want to understand and improve the world, we need more than prediction. We need explanation, mechanism, and causal structure.

Good science seeks to understand the mechanisms that generate outcomes. Whether we are talking about medicine, physics, or economics, the objective is not merely to establish that one variable moves with another. The objective is to understand what produces the outcome, what role different inputs play, and what kinds of interventions can change the course of events.

In that sense, good science is fundamentally causal. It asks what factors generate outcomes, through what channels they operate, and under what conditions interventions succeed or fail. If we really want to change the world in beneficial ways, then we must move beyond surface correlations and describe the underlying mechanisms that govern behavior and outcomes.

One of the key points I want to stress is that causality is not simply a conventional statistical problem. Statistics is useful, and in many settings it is indispensable, but causal reasoning requires more than fitting equations to observed data. It requires us to think about hypothetical worlds, alternative possibilities, and the consequences of interventions that may never have been directly observed in the data.

This means that causal inquiry begins with a conceptual framework. We formulate abstract models of how the world may operate, and then we confront those models with evidence. Data are essential, but data do not define causality by themselves. Causality is a way of thinking about the data, not a property that sits inside the data waiting to be extracted automatically.

At the heart of causal analysis is the exploration of counterfactual worlds. We ask: What would happen if we changed one element while holding other conditions fixed? What would the outcome have been under a different treatment, a different choice, or a different environment? These are thought experiments, but they are disciplined thought experiments. They are the foundation of serious causal reasoning.
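This counterfactual logic is often formalized in the potential-outcomes notation of econometrics. The short simulation below is an illustrative sketch, not part of the talk; the variable names and numbers are invented. It generates both counterfactual outcomes for every unit and then shows what an analyst actually observes: only one potential outcome per unit, which is why the causal effect cannot simply be read off the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Potential outcomes: what each unit WOULD experience in each
# counterfactual world. The per-unit treatment effect is 3.
y0 = rng.normal(loc=10.0, scale=2.0, size=n)  # outcome if untreated
y1 = y0 + 3.0                                 # outcome if treated

# The true average treatment effect uses BOTH counterfactual worlds:
true_ate = (y1 - y0).mean()

# In reality, each unit is observed under only one condition.
d = rng.integers(0, 2, size=n)        # treatment actually received
y_obs = np.where(d == 1, y1, y0)      # the other outcome is never seen

# With random assignment, the difference in observed means recovers
# the effect; y1 - y0 for any single unit remains unobservable.
est = y_obs[d == 1].mean() - y_obs[d == 0].mean()
print(round(true_ate, 2), round(est, 2))
```

The point of the sketch is the missing-data structure: `y1 - y0` is defined for every unit in the model, but the observed data contain only `y_obs` and `d`, so recovering the effect requires an assumption about how `d` was generated.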

This perspective is different from the narrow view that causal analysis is exhausted by field experiments or randomized trials. I am not dismissing such methods. They have an important place. But they are only one part of a broader framework. Proper causal analysis requires credible hypothetical models that are developed conceptually and then tested against real-world evidence. The model and the data must be brought into contact, but we must carefully distinguish between the stage of conceptualizing the mechanism and the stage of estimating it from observed data.

Machine learning can greatly facilitate one important part of causal analysis: the search for plausible explanations. In philosophical language, this is related to abduction, the effort to identify the most plausible explanation among several candidates. In real scientific inquiry, there may be multiple explanations for the same phenomenon. AI can help us organize information, compare patterns, and narrow the set of plausible mechanisms.

This is especially valuable because modern AI allows us to use many forms of information that were previously difficult to integrate: text, newspapers, online posts, contextual records, and many other unstructured sources, alongside standard numerical data. In this sense, AI expands the informational base available to science. It helps us assemble evidence more powerfully and more systematically than before.

At the same time, AI does not replace scientific judgment. The generation of causal hypotheses still requires thought, interpretation, and accumulated knowledge. AI is a tool that can strengthen inquiry; it is not a substitute for disciplined reasoning.

Too much contemporary discussion treats good science as if it were simply the estimation of a treatment effect for a particular outcome. That is too narrow. Scientific models and theories do more than estimate an average effect. They help summarize prior knowledge, synthesize earlier research, interpret current evidence, and place new findings within a wider body of understanding.

This broader perspective matters greatly for personalized education and personalized medicine. Different individuals respond differently to the same intervention. To understand these differences, we need structure. We need models that describe heterogeneity, decision-making, constraints, and evolving information. AI can help us manage the complexity of such data-rich environments, but the scientific aim remains causal understanding, not mere prediction.

A dynamic perspective is essential because people make decisions under incomplete information. At the moment a choice is made, the decision-maker often does not know what will later be revealed. This distinction between ex ante and ex post is critical. A decision that looks foolish after the fact may have been entirely reasonable at the time it was made, given the information then available.

That is why causal analysis must account for information, expectations, and the timing of decisions. It must also account for voluntary participation and self-selection. People do not simply receive treatments passively; they often choose into environments, programs, and relationships based on what they expect to gain, what costs they anticipate, and what they believe will happen next.

Another reason we need richer causal frameworks is that many important outcomes arise through social interaction. Disease spreads through interaction. Skills accumulate through families, schools, and peer groups. Social norms emerge, persist, and disappear through networks of influence. Many standard causal frameworks do not adequately capture these interaction effects.

We also need to distinguish internal validity from broader relevance. Internal validity asks whether we have correctly identified a mechanism in a given setting. That is important. But the broader scientific challenge is whether we understand the mechanism well enough to reason about other settings, other populations, and other interventions. A causal model should help us think beyond the narrow circumstances in which a single dataset was collected.

I find it helpful to distinguish three tasks. The first is model creation: building a coherent conceptual framework and generating counterfactual worlds. This is not merely a statistical exercise. It requires imagination, discipline, prior knowledge, and scientific judgment.

The second task is identification: asking whether, with ideal data, the mechanisms described by the model could in principle be distinguished and recovered. The third task is estimation: using real-world data, with all of its measurement error and limitations, to estimate parameters and test implications. AI and machine learning can be extremely useful for identification and estimation, especially in high-dimensional settings, but they should be understood as operating within a larger scientific architecture.

Randomized controlled trials can be very valuable because random assignment helps break certain dependencies and clarifies some causal comparisons. But randomization is not the only way to do causal analysis, and it is not always the best way for every problem. Depending on the setting, we may learn from measurement systems, latent variable models, factor models, instrumental variation, observational structure, and dynamic information.

The mistake is to elevate any single method into a universal gold standard. Methods are tools. Their value depends on the question being asked, the mechanism being studied, and the evidence available. Sound causal reasoning requires flexibility, not dogma.
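The contrast between self-selection and randomization can be made concrete with a minimal simulation, again an invented illustration rather than anything from the talk. Here a latent trait drives both the outcome and the decision to enroll, so a naive comparison of participants and non-participants confounds the treatment effect with selection; random assignment breaks that dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# A latent trait ("ability") affects both the outcome and enrollment.
ability = rng.normal(size=n)
y0 = 10.0 + 2.0 * ability + rng.normal(size=n)  # untreated outcome
y1 = y0 + 3.0                                   # true effect = 3

# Self-selection: higher-ability people are more likely to choose in,
# so treated and untreated groups differ even before treatment.
d_self = (ability + rng.normal(size=n) > 0).astype(int)
y_self = np.where(d_self == 1, y1, y0)
naive = y_self[d_self == 1].mean() - y_self[d_self == 0].mean()

# Randomization: assignment is independent of ability.
d_rand = rng.integers(0, 2, size=n)
y_rand = np.where(d_rand == 1, y1, y0)
exp_est = y_rand[d_rand == 1].mean() - y_rand[d_rand == 0].mean()

# The naive comparison overstates the effect; the randomized
# comparison recovers it.
print(round(naive, 2), round(exp_est, 2))
```

Under a structural model of the selection process, the bias in the naive comparison is itself an object that can be modeled and corrected, which is exactly why methods beyond randomization, such as selection models and instrumental variation, have a place.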

Let me conclude with an example from research on parent-child interaction. This area is fundamentally dynamic and interactive. Traditional educational evaluation often relies on pencil-and-paper tests or infrequent assessments, but new technologies allow us to observe much more. We can place sensors in the room, collect multisensory measures, and study interaction in real time.

In work underway in Shenzhen and elsewhere, robots equipped with large language models can interact with children while also generating records of what the child is doing. These systems allow us to observe not only the child and the robot, but also the broader environment involving parents, teachers, and other surrounding influences. We still do not fully understand the role of these interactions, but that is precisely why we need dynamic models. AI gives us powerful new measurement and interaction tools, but interpretation still requires causal structure.

The future of AI in science should not be framed simply as better prediction. Its deeper promise lies in helping us organize information, construct plausible mechanisms, and study dynamic causal systems with greater richness and precision. But scientific progress still depends on clear concepts, credible models, and disciplined reasoning about what causes what.

If we want to improve education, medicine, social policy, and human development, then we must build frameworks that combine data with theory, evidence with mechanism, and computation with judgment. That, in my view, is the path toward a more serious and more useful science of causality. Thank you very much.