For investigative reporter Chikezie Omeje, an opportunity delayed is not an opportunity denied. Omeje was named an MIT Knight Science Journalism Fellow in early 2020, while he was completing a journalism master’s degree in New York City. At the time, he was planning to use the fellowship to study the connection between epigenetics and the developing brain: How do poverty and inequality during early childhood turn on or off certain genes that affect life achievements? Are governments intervening early enough to ensure that the potential of their youngest is not stunted?
But those plans unraveled when MIT’s physical campus closed due to the Covid-19 pandemic. Instead, the native Nigerian, stuck stateside, worked as a freelance reporter and then served as an Africa editor at the Organized Crime and Corruption Reporting Project.
Three years later, Omeje’s Knight Science Journalism Fellowship has finally begun, and he is approaching it with a new lens. He is particularly fascinated by how synthetic biology and artificial intelligence challenge basic assumptions about what it means to be human, and what regulations, if any, could help ensure that they are used for good.
In a conversation this September, Omeje spoke about what he’s looking to explore during his time in Cambridge, and how he plans to approach the technically and ethically complex topics of synthetic biology and artificial intelligence. (The following interview has been condensed and edited for length and clarity.)
Alex Ip: Tell us a little about what you’re working on.
Chikezie Omeje: Right now, I’m looking at synthetic biology and artificial intelligence. These two things have the capacity to change our world in a way that we’ve never thought of or witnessed in human history.
Alan Turing proposed that a computer could be considered “intelligent” if you could interact with it without knowing whether it was a computer or a human. The large language models and chatbots we have now have passed the Turing test. What’s the implication of that?
And then there’s synthetic biology, the capacity to design and produce new organisms. If you’re religious, you believe that God created everything. But now we could even resurrect extinct organisms. These are things we’ve ascribed to a god, but that humans can now do.
AI: You mentioned how through AI and synthetic biology, humans have almost achieved a God-like state. Do you consider yourself religious? How has that influenced your approach?
CO: I’m a secular humanist. I believe that as humans, we don’t need help from anywhere outside our humanity to solve our challenges. That’s why I’m so enthusiastic about what we can achieve. The cluster of technologies around AI and synthetic biology will give us the capacity to change our world in whatever form we want. We know the principles; it’s just a matter of designing the technologies to get us there ASAP.
AI and synthetic biology are challenging our basic assumptions about what it means to be human, asking us to redefine our realities. I just want to understand what is going on here, the stages that we have reached, how this is going to define our future, and — looking at the changes past technologies have brought to the world — where we have opportunities to regulate these new technologies.
Given that they are developing at an alarming rate, you begin to wonder: How do we regulate these technologies? Is it even possible? If you want to regulate, whose regulations are we going to follow? Is it that of China, the U.S., or Nigeria? For the first time, we may not have a unified international framework for regulating these technologies.
AI: What makes AI and synthetic biology so difficult to regulate, unlike past technologies like weapons of mass destruction?
CO: Nuclear weapons weren’t just developed in somebody’s backyard or produced by just one private factory. The Manhattan Project, funded by the U.S. government, assembled the greatest scientists of the time to develop this weapon. The same goes for the other countries that have done it. Countries would come together and say, “We are going to destroy ourselves if we continue to develop these weapons. Let’s pause this and stop other countries from doing that.”
But AI development’s been driven mostly by the private sector. Thousands, if not millions of people, are pushing it at their own pace, trying to develop a different aspect of AI and applying it to different areas. Do you go and say, “Oh, this company, stop building AI tools, or, this other company, stop doing this?” It’s not so centralized that authorities can just say, “Okay, stop.”
AI: Most people cannot obtain weapons-grade uranium, but many have computers in their homes.
CO: Exactly. And they have the software to run their code, the data to train their algorithms. If uranium is mined in a particular country, they can issue a moratorium. But in the case of AI, how do you stop it?
It’s the same with synthetic biology. If somebody wanted to develop an organism to clean up the plastics clogging our waterways, the tools to do so are not that expensive. Anybody who has the knowledge and skill will be able to do this in their houses, release some of these organisms, and start experimenting with them.
If you’ve seen the news, one Chinese scientist claimed to have edited the genomes of two embryos to make them resistant to HIV.
AI: Yes, he’s based in the city next to the one I grew up in!
CO: In this case, we know it because it became news. What if somebody is in a private laboratory synthesizing different DNAs and trying to create exotic organisms? How do you know about those things? That’s the larger question.
The development of AI and synthetic biology is like a car set in motion with no brakes. These are the conversations we should be having, and I think these conversations are already happening in most places around the world.
AI: What would a successful fellowship look like for you? How are you taking advantage of what Cambridge has to offer?
CO: I’m taking “Politics and Policies: What Is the Impact of Data and AI?”, taught by Deborah Hughes Hallett at Harvard. I’m also taking a course in synthetic biology taught by Ron Weiss. It’s not that I’m going to start designing genetic circuits for certain gene expressions. I see this as an opportunity to understand the basics, so that when you talk to the people developing these technologies, you know the kind of questions to ask that will inform your reporting.
At the end of the day, I’m a journalist, not a scientist. I don’t know if I’m in a place to tell you how these technologies will change everything, but I know that they have the potential to change a lot of things. The bottom line is good journalism. Let’s inform people adequately, so they can make decisions about themselves, about their community, and even put pressure on leaders to take certain actions.
Alex Ip is a student in the MIT Graduate Program in Science Writing and editor-in-chief of The Xylom magazine.