Your biggest AI questions, answered

Will AI always give flawed answers? Can we prevent AI from compounding problems from our past? Four experts weigh in.

Illustrations by Muhammad Bagus Prasetyo
By Neel Dhanesha and Charley Locke
October 15, 2024

Can we trust what we see?

[Illustration by Muhammad Bagus Prasetyo: a person taking a photo]

Fred Ritchin has been thinking about the future of the photograph for nearly half a century. He started to notice changes to the medium in 1982, while working as picture editor at the New York Times Magazine; in 1984 he wrote an article for the magazine, “Photography’s New Bag of Tricks,” about the consequences of digital editing technology for contemporary photojournalism. In the decades since, he’s witnessed the shift from the early days of digital photo editing to AI imagery, in which amateurs and professionals alike can instantly generate realistic visuals with digital services.

As AI images become increasingly common, Ritchin believes people need new ways to confirm that they can believe what they see. Of course, AI imagery hasn’t emerged out of thin air. Ritchin traces a through line from contemporary conversations about best practices for AI back to pre-Photoshop debates over whether journalists should disclose that photographs had been altered. In the early days of digital editing, National Geographic was criticized for digitally moving the Pyramids at Giza closer together for its February 1982 cover image. Today National Geographic photographers are required to shoot in RAW format—a setting that produces unprocessed, uncompressed images—and the magazine has a strict policy against photo manipulation.

Ritchin’s view is that editors, publishers, and photojournalists should respond to the challenges of AI by setting clear standards; media and camera companies have begun developing options to automatically embed metadata and cryptographic watermarks in photographs to show when an image was taken and whether it’s been tampered with via digital editing or AI alterations. While Ritchin doesn’t call for rejecting AI entirely, he hopes to reinvent the unique power that photography once held in our personal and political lives. 
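
The details of those provenance tools vary, but the underlying idea is straightforward: record a cryptographic fingerprint of the image when it is captured, then check later whether the file still matches it. The short Python sketch below is only a toy illustration of that check; the key, the sample data, and the function names are invented for the example, and real systems such as content credentials rely on public-key signatures and far richer metadata.

```python
# Toy illustration of tamper detection: "sign" an image's bytes at capture,
# then verify the signature later. This is not any camera maker's actual
# scheme; real provenance standards use public-key signatures and metadata.
import hashlib
import hmac

CAMERA_SECRET_KEY = b"example-key-held-by-the-camera"  # invented for illustration

def sign_image(image_bytes: bytes) -> str:
    """Produce a signature over the image bytes at capture time."""
    return hmac.new(CAMERA_SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def is_untampered(image_bytes: bytes, signature: str) -> bool:
    """Check whether the bytes still match the signature made at capture."""
    expected = hmac.new(CAMERA_SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"...raw pixel data from the camera..."
signature = sign_image(original)

print(is_untampered(original, signature))               # True: unchanged
print(is_untampered(original + b" edited", signature))  # False: any alteration breaks the match
```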

Consider Nick Ut’s 1972 photo of a Vietnamese girl running naked from a napalm strike, taken when a single image could command world attention. “Gen. William Westmoreland tried to say it was a hibachi accident; President Richard Nixon wanted to deny it,” Ritchin says. But the photo “helped to bring the war to a close faster, and a lot of people’s lives were not [lost]. That’s a big deal … But now, you could see that and think, Some 14-year-old in a garage somewhere could have made that; it’s not going to change my vote.”

Do we have to accept that machines are fallible?

[Illustration by Muhammad Bagus Prasetyo: a person on a computer]

In a particularly funny moment, a recent study showed that one of the most popular AI chatbots has been sharing inaccurate coding and computer programming advice. That’s a big issue facing AI right now: these evolving systems can hallucinate, the term for what happens when a model produces a statement that sounds plausible but is completely made up.

This is because generative AI applications such as large language models work, functionally, as prediction programs. When you ask a question, the model draws on patterns in the text it was trained on and predicts the set of words it expects is the desired response. That prediction is followed by another prediction, another set of words it has learned should come next, and so on.
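
A toy sketch can make that loop concrete. The Python below is not how ChatGPT or any real large language model works (real models weigh vast numbers of possibilities learned from enormous amounts of text), but it shows, at miniature scale, the predict-a-word, feed-it-back, predict-again cycle described above. The tiny “training” sentence and every name in it are invented for illustration.

```python
# A toy "prediction program" (not a real language model): it counts which word
# tends to follow which in a tiny training sentence, then generates a reply by
# repeatedly predicting the most likely next word and feeding it back in.
from collections import Counter, defaultdict

training_text = (
    "the cheese slides off the pizza because the cheese is hot "
    "and the pizza is fresh"
)

# Count how often each word follows the one before it.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if never seen."""
    followers = next_word_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Each prediction is appended to the output and used to make the next one.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))
# The output echoes phrases like "the cheese slides off the...": it sounds
# vaguely plausible because it mirrors patterns in the training text, not
# because the program knows anything true about pizza.
```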

But Rayid Ghani, a professor at Carnegie Mellon University’s Machine Learning Department and Heinz College of Information Systems and Public Policy, says that process puts greater emphasis on probability than truth: Most generative AI models have been trained on large swaths of data from across the internet, but nobody has checked the accuracy of those data, nor does the AI understand what is or isn’t a trustworthy source. This is why, for example, we got the notorious goof from Google’s AI that suggested putting glue on pizza to keep the cheese from sliding off; the proposal relied on a years-old Reddit joke.

When humans make mistakes, Ghani says, it’s easy for us to empathize, since we recognize that people aren’t perfect beings. But we expect our machines to be correct. We would never doubt a calculator, for instance. That makes it very hard for us to forgive AI when it gets things wrong. But empathy can be a powerful debugging tool: These are human-made systems, after all. If we take the time to examine not only AI’s processes but also the flawed human processes underlying the datasets it was trained on, we can make the AI better and, hopefully, reflect on our social and cultural biases and work to undo them.

How do we confront the environmental impact?

[Illustration by Muhammad Bagus Prasetyo: a lightbulb planted in the ground]

AI has a water problem—really, an energy problem. The servers that power the AI tools people increasingly use in their daily personal and professional lives generate a significant amount of heat inside data centers, the facilities that give those AI systems the computational support and storage space they need to function. And, as Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside, is quick to note, cooling down those data centers requires an enormous amount of water, similar to the amount used by tens of thousands of city dwellers.

“When you use water for a shower, for example, it can be reused,” says Ren, whose research is focused on how to make AI more socially and environmentally responsible. “When water is evaporated to cool down a data center, it’s gone.” As lawmakers scramble to enact regulations and hold companies responsible for their energy and water use, Ren believes it will be important for us as individuals and as a society to better understand the real cost of asking an application like ChatGPT a question. 

Even before the current AI boom, data centers’ water and energy demands had steadily increased. In 2022, according to Google, its data centers used over five billion gallons of water, 20 percent more than in 2021; Microsoft used 34 percent more water companywide in 2022 than in 2021.  

AI stands only to compound the existing strain that data centers put on global energy grids: the International Energy Agency projects that data centers’ electricity consumption in 2026 could be double what it was in 2022. While the United States is just beginning to examine the environmental costs of data centers, the European Commission pushed forward a regulation in March aimed at increasing transparency from data center operators and, ultimately, reducing fossil fuel dependence and resource waste.

“I explain it in terms my kid understands,” says Ren. “If you ask ChatGPT [3] one question, it uses the same amount of energy as turning on the light—in our home, a four-watt LED bulb—for one hour. If you have a conversation with an AI, like ChatGPT, for 10 to 50 questions and answers, it will consume about 500 milliliters of water, or the size of a standard bottle of water.”
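
Taken at face value, Ren’s figures make for simple back-of-envelope arithmetic. The sketch below only restates his illustrative numbers (roughly 4 watt-hours of electricity per question, and 500 milliliters of water per conversation of 10 to 50 questions) and scales them up to a hypothetical million questions; the scale-up is an illustrative extrapolation, not a measurement.

```python
# Back-of-envelope arithmetic using the illustrative figures Ren cites; these
# are rough estimates, not measurements of any particular ChatGPT query.
ENERGY_PER_QUESTION_WH = 4.0           # a 4-watt LED bulb left on for one hour
WATER_PER_CONVERSATION_ML = 500.0      # one standard bottle of water
QUESTIONS_PER_CONVERSATION = (10, 50)  # low and high end of Ren's range

# Water attributable to a single question, at each end of the range.
water_per_question_low = WATER_PER_CONVERSATION_ML / QUESTIONS_PER_CONVERSATION[1]
water_per_question_high = WATER_PER_CONVERSATION_ML / QUESTIONS_PER_CONVERSATION[0]
print(f"Water per question: {water_per_question_low:.0f}-{water_per_question_high:.0f} mL")

# Scale up to a hypothetical million questions (an illustrative extrapolation).
questions = 1_000_000
energy_kwh = questions * ENERGY_PER_QUESTION_WH / 1000
water_low_m3 = questions * water_per_question_low / 1_000_000    # mL -> cubic meters
water_high_m3 = questions * water_per_question_high / 1_000_000
print(f"Electricity for {questions:,} questions: {energy_kwh:,.0f} kWh")
print(f"Water for {questions:,} questions: {water_low_m3:.0f}-{water_high_m3:.0f} cubic meters")
```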

How can we stop AI from compounding the problems of the past?

[Illustration by Muhammad Bagus Prasetyo: an AI chip between human heads]

When AI hoovers up all the data that humans have created, it becomes a mirror reflecting the stereotypes, racism, and inequities that continue to shape the world, says Nyalleng Moorosi, a senior researcher at the Distributed AI Research Institute. These biases, she explains, are often due to a lack of diversity among the people hired to build AI systems and tools, who rely too much on datasets that prioritize Western ideas of what is and is not valuable information.

The global majority today know what it’s like to have foreign systems foisted upon them, part of the fallout of colonization. Moorosi believes AI has the potential to replicate those systems—prioritizing perspectives and agendas of those in power while marginalizing Indigenous knowledge and cultural values.

The teams hired by tech companies usually have blind spots that they inevitably build into their AI tools. The key to altering the course, Moorosi believes, is to democratize AI: to incorporate the voices of people who speak hundreds of languages and think in thousands of ways that diverge from Eurocentric thought. That means moving AI development from the realm of big tech to the local level, empowering developers and engineers to tailor tools to their communities’ needs and experiences. The resulting systems, Moorosi feels, would be more respectful of their creators’ backgrounds. The South Africa–based Lelapa AI, founded in 2022, recently unveiled a language model that’s now the basis for a chatbot and other innovations catering to users who speak Swahili, Yoruba, Xhosa, Hausa, or Zulu.

“We absolutely have got to interrogate the question of power. We cannot expect the Googlers or the OpenAI people to understand all of us. We cannot ask Silicon Valley to represent all eight billion of us. The best way is for each one of us to build the systems locally,” Moorosi says. “My AI utopia is that people have the access and the audacity to deploy AI to solve their own problems.”

A version of this story appears in the November 2024 issue of National Geographic magazine.