“Telling the Stories of the Salt River Pima-Maricopa Indian Community”


October 20, 2025

Let’s Chat About AI

Whether artificial intelligence, or AI, is a part of your life or something you’ve only heard about, it’s only getting more important for all of us to understand how it works and how it will likely impact the future. When you look something up online you may see an “AI Overview” as the top result or read a headline about an “AI actor” and wonder how those things are connected. AI is a broad term, and as an industry it’s still pretty new. That can make much of it seem scary and ominous as so much remains unknown. With curiosity and caution, AI can be understood as simply another tool. It’s not perfect, it’s not all-knowing, and it seems like it’s not going anywhere. 

How It Works

If something is made using AI, that means it was created with a machine that recognizes patterns in large amounts of data to learn and make predictions, mimicking how a person would look at and interpret that information. The set of information, or data, might be narrow, like pictures of people, or very broad, like public websites. Say you were using an AI tool to edit a photo and wanted to change a person from sitting to standing. You would type in the specific outcome you would like (called a prompt), and the system would look at the picture, draw on the patterns it has learned, determine how the picture should look if the person were standing, and then create that image.

If we approach this type of tool with curiosity, it could make for some fun images and playful results. But this is where caution becomes key. It's just as easy for that same program to create a picture of you giving a thumbs-up as it is to show you making a much more offensive gesture, when in reality you have done neither.

How to Spot AI Creations

So, how can we tell when a photo or video has been made with AI, creating what's called a "deepfake"? It's not easy, but it's possible. Some clues may be obvious in an image, like extra fingers, a "fuzzy" face on a person in the background, or teeth out of place, but it's becoming increasingly difficult to catch the details that AI gets wrong. In general, if something looks "off," the image may have been created or manipulated using AI.

There also are some "tells" to help spot AI-generated text, like excessive use of em dashes—like these, which have been inserted as an example—or shifts in word choice that aren't in line with how an author usually communicates. Using words that are more academic or formal than casual when it doesn't fit the context can also be an indication that AI was used to write the work. There are programs that claim to read text, like student essays, and determine whether it was written using AI, but they aren't always accurate; they can incorrectly flag text as AI-generated even when no AI was used in the writing process.

Unique Risks in Indian Country

AI systems carry inherent biases from the data sets they were trained on. Cultural contexts unique to each tribe may not appear in publicly available data, which can lead AI systems to treat tribes as a monoculture, reinforcing colonial stereotypes with a single, inaccurate representation of a Native person.

On the other side of that, if sensitive cultural information is shared with the system, it can have implications for tribal data sovereignty. That information is not the user's to share, and once it is in the system it can be exploited by other users, or it may be inaccurate and further harmful stereotypes.

To combat these risks, new roles have emerged for data stewards among organizations and communities. A data steward is responsible for ensuring information is protected and accessed appropriately for decision-making by authorized parties. In the case of the Chickasaw Nation, for example, they have designated a director of data stewardship who also ensures that the tribe’s values and culture are reflected in the data ecosystem, or how the data is managed, stored, interpreted and accessed.

Environmental Impacts

The data that goes into AI systems lives on massive servers stored in warehouse-sized data centers. Those servers require significant amounts of water to cool the hardware, sometimes more than the communities in which they're housed use. Training and operating AI systems also takes enormous amounts of energy, significantly more per task than general computing. That demand can strain power grids and water resources, and as more systems are developed and more people use them, the energy costs drastically increase.

Further Notes and Best Practices

Many items and applications people use every day have AI components, including voice-activated features in cell phones, smart speakers and other devices. Some software and programs that utilize AI also offer contract agreements with users, providing a degree of security for the user's data. By entering enterprise agreements with software companies, large organizations can ensure that the data used within that system by enterprise users is protected and does not leave the scope of the agreement, meaning no other system will have access to it. Locally, the Salt River Pima-Maricopa Indian Community continues to evaluate the different ways AI can be used responsibly without replacing staff or people's knowledge.

Understanding the opportunities and possibilities, both interesting and intimidating, is essential to successfully navigate the uncertainty of AI’s role in our future. Conferences like Wiring the Rez, hosted by Arizona State University’s Indian Legal Program, are taking note, with this year’s theme being “Artificial Intelligence in Indian Country.” The conference, which took place on Sept. 26, created space for people to connect and discuss how AI is being used in their environments. By keeping communication open, tribal communities can ensure they have a seat at the table when discussions turn to data sovereignty.

If you choose to use AI for any number of tasks, there are risks to be aware of. In a free-to-use program, the data you enter may be used to train the model or sold to train other models. These systems are designed to keep users coming back, so they'll respond agreeably to prompts in nearly any case. This has led to serious consequences in which human connections are replaced by artificial connections to the AI machine, a slippery slope for people who may not have many opportunities for real-life social connection. In one grim instance, a young person who was depressed and isolated communicated thoughts of self-harm, and the program responded as it was trained to: it encouraged those thoughts.

This is yet another reminder that AI should be used as a tool for enhancing what we each bring to the table, not as a replacement for the human elements that create and innovate. With generative AI systems, artists can describe their vision to create outlines and then build out from there using their own skills and unique styles. Large-scale programs can create customized picture books that use a person's own interests, or even their likeness, to help kids or adults with special needs face common fears like having blood drawn. Other systems can create simulations of outcomes based on projects designed by teams of human engineers who connect with communities to solve problems. Without the humans, who will ask the questions?