The Social Impact of AI Research: Lessons from the relAI Ethics Course

The "Social Architect" Blueprint. Image Credit: Generated by Google Nano Banana 2

Artificial Intelligence (AI) researchers typically view their work as precise, objective, and politically neutral. We tell ourselves we are simply optimizing a loss function or improving diagnostic accuracy for heart disease. In my PhD research, for example, I use machine learning to predict outcomes for patients with tricuspid regurgitation, a specific form of heart disease. On the surface, it’s pure math. But the moment that prediction influences who gets a bed or who qualifies for surgery, it stops being math. It becomes a choice. Too often, we hide that choice behind a checklist of GDPR compliance and fairness metrics used to secure publication or avoid litigation. We treat our models as mere math. We forget they are the new infrastructure of human life.

However, the first edition of the relAI ethics course led to a rather uncomfortable realization: There is no such thing as a neutral algorithm. When we build a system that automates a diagnosis, we aren't just assisting a doctor; we are potentially shifting the balance of labor, centralizing institutional control, and reinforcing existing systemic biases that no fairness metric can catch.

Our code doesn't just solve problems; it builds a world. Whether we like it or not, we aren't just researchers anymore. We are social architects.

The Invisible Blueprints

Every architect works from a set of blueprints. In the world of technology, these blueprints are what sociologists call "sociotechnical imaginaries" [1]. They are the shared dreams a community holds about its future: silent scripts that tell us which futures are worth building, and who gets to live in them.
This isn't just theory; it’s reflected in every cent a nation invests in its labs. One society imagines AI as a tool for national efficiency, centralizing every heartbeat and prescription into a state database. Another imagines it as a tool for individual autonomy, keeping data locked on a patient’s local device. Both use the same math, but they build entirely different worlds.
Think about the technology your country has, or doesn't. Is healthcare a data-driven, centralized system designed for total coverage, or a fragmented landscape of private innovation? The difference isn't just the budget; it's the imaginary, the blueprint of what a "good society" looks like.

If these imaginaries are the dreams, then we, the researchers, engineers, and scientists, are the Social Architects.
We are the ones who pour the concrete and write the code. Without us, these visions remain fantasies. With us, they become the tangible arrangements of steel, concrete, and code that dictate how human life actually functions. We do not just deploy models. We build the actual rooms that society has to live in.

We Are Social Architects

To call ourselves "social architects" is to admit a truth we often try to code away: our work is never just math. It is a way of building order in our world.

For years, we have held on to the idea that "real" engineering is restricted to calculations and optimization. We treated our models as neutral tools, ignoring the fact that they are the new infrastructure of human life. But as we explored in the relAI ethics course, technical arrangements are "ways of building order" that settle social issues long before they ever reach a courtroom.
In the 1920s, the urban planner Robert Moses designed low-hanging overpasses on Long Island to deliberately prevent buses from reaching public beaches [2]. Because lower-income residents and people of color relied on those buses, the very height of the bridges enforced a system of segregation. The bridges didn't need a "No Trespassing" sign; the architecture did the discriminating for them.

Today, we are building the digital version of those overpasses. When I code a model for tricuspid regurgitation, I am not just calculating risk; I am deciding who the 'beach' is accessible to, and who might be blocked by a digital overpass they can’t see. Reliable engineering is not purely technical but a complex amalgamation of technology and the people it affects. As the architects of this new reality, we have to look past the clean data and face the complex systems we are changing. We have to ask:

  • Who is the invisible loser? I know who wins with my research, but who is being engineered out of the frame? Does my "discovery" eliminate thousands of jobs or strip agency from the people it aims to serve?
  • Is my model inherently biased? Am I building a bridge that certain demographics simply cannot pass? Does this model reinforce systemic inequality in the name of a higher accuracy rate?
  • How does my model change the power dynamics? Does my model empower the individual, or does it become a tool for "centralized, rigidly hierarchical" control?

Compliance is the Floor, Not the Ceiling

If these blueprints define our world, do the standard safety checks like data privacy or algorithmic fairness still matter?

The answer is: absolutely. But they are the baseline, not the finish line.

Data security, privacy protection, and fairness metrics are the building codes of our digital infrastructure. Just as a physical architect must ensure a skyscraper won’t collapse or catch fire, a social architect must ensure a model doesn't leak private data or discriminate against a protected group. These are the non-negotiables. Without them, we aren't just bad architects; we are dangerous ones.

However, checking a box for GDPR compliance or hitting a fairness metric on a static dataset does not mean the work is done. Fairness metrics are our structural stress tests. They ensure the building won't collapse, but they don't tell us if the building is a home or a prison.

While we ensure our models are fair and our data is secure, we cannot stop there. We must remain vigilant of the sociotechnical imaginaries we are reinforcing. You can optimize a model for 99% accuracy and perfect mathematical fairness, yet still build a system that centralizes power, erodes personal rights, or enforces a vision of "order" that excludes the vulnerable.
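The gap between passing a metric and building a just system can be made concrete. The sketch below, in plain Python with toy data and an illustrative helper of my own naming (`demographic_parity_gap`), shows a model that clears a demographic-parity check perfectly, while the check itself says nothing about the imaginary the system serves:

```python
# A minimal, illustrative sketch: a model can pass a fairness "stress
# test" and still encode a troubling design. The data, group labels,
# and helper name are assumptions for demonstration, not a real
# clinical pipeline.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(selected) / len(selected)
    a, b = sorted(rate)  # assumes exactly two groups
    return abs(rate[a] - rate[b])

# Toy predictions: 1 = "qualifies for surgery"; two demographic groups.
preds  = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.00 -> "fair" by this metric
```

A gap of zero clears the building code, but it cannot tell us whether the system centralizes power or strips agency from patients; that question lives outside the metric.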

We must solve for fairness. But we must never mistake a cleared checklist for a just society.

Our Roles as Social Architects

But recognizing the politics of our blueprints is only the first step; we must now pick up our tools and build differently. The real work begins when we stop optimizing in a vacuum and start consciously building for the world as it actually exists.

As social architects, we are the ones who define the "technical zones" where the future is decided. Here is how we move from compliance to consequence:

  • Prioritize Inclusive Research: Instead of seeking purity in a vacuum, we must value "situated knowledge". Perform a "Site Visit." Don't just download a dataset; talk to the clinicians or patients the data came from. If you are predicting tricuspid regurgitation, as I am, spend a day in the ward. Document the human nuances that the raw numbers miss.

  • Design for Participation: We should build systems that invite, rather than exclude, the voices of the people they impact. Conduct a "Layman’s Audit": present your preliminary results to a non-specialist audience. If a community can’t understand or audit your system, it isn’t a tool; it’s a chore.

  • Challenge Technical Fixes: Resist the urge to always offer AI as a comforting technological fix to structural societal problems. Evaluate the "No-Build" Option. Before training a model, ask if you are solving a data problem or a social one. Sometimes the most ethical thing an architect can do is realize that a building, or an algorithm, isn't the answer.

  • Move Beyond Compliance: Treat data security, privacy, and fairness not just as a checklist of legal liabilities, but as the foundation for broader social justice. Create an "Impact Map". Use your fairness metrics as a starting point, not a finish line, to investigate the holistic impact of your work.

By taking these steps, we ensure that our research actively prescribes a future where equity and human agency are the primary objectives. We move beyond the focus on sums and calculations to embrace a reality where we never lose sight of the humans behind the data.

The blueprint is on your desk. What kind of world are you building?


Note: This post reflects on lessons I learned during the first edition of the relAI ethics course, "AI, Ethics and Society: Opportunities and Challenges." The course was led by Prof. Dr. Ruth Müller (Department of Science, Technology and Society, TUM School of Social Sciences and Technology, Technische Universität München).


About the Author

Valentine Idakwo, MD, MSc is a physician and doctoral researcher specializing in Machine Learning for Valvular Heart Diseases at the LMU University Hospital. As a member of relAI, his work focuses on the intersection of clinical cardiovascular management and reliable AI systems.


References

1. Jasanoff, S., & Kim, S.-H. (2009). Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea. *Minerva*, 47(2), 119–146. https://doi.org/10.1007/s11024-009-9124-4

2. Winner, L. (1980). Do Artifacts Have Politics? *Daedalus*, 109(1), 121–136. https://www.jstor.org/stable/20024652
