THE CITY’s mission is to serve the people of New York — all of them, across its five boroughs and in all of its many neighborhoods. But do we? 

Since this news organization’s founding in 2019, our editors and reporters have made it a priority to keep geographical diversity in mind as we assign, write, photograph and edit stories. But while we’ve always been dedicated to this ideal, there were no easy quantitative means to understand if we were hitting our goals, or even moving in the right direction. 

An opportunity to better understand our coverage arrived last year, when we received an archive of past stories as computer files as part of a migration to a new website. We decided to run an experiment and plot the locations of all of our stories onto New York City’s geographic landscape.

Beyond holding ourselves accountable to our mission, the experiment was also a chance to learn about the latest advancements in large language models and whether they could correctly audit our coverage. So we asked OpenAI’s ChatGPT to read all of our story files and tell us where each story took place. We plotted the results on a map.

The good news is that, according to the AI analysis, we do seem to cover a broad cross-section of our city, and don’t just stick to well-heeled neighborhoods that tend to get the most media attention.

“As editors we knew intuitively that our coverage chronicles parts of the city that don’t have neighborhood papers or blogs, which are usually only in the news because of a crime or a fire,” said Alyssa Katz, THE CITY’s executive editor and part of the organization’s founding team. “It is powerful to actually see our articles mapped out across New York City, a visual record of which communities we’ve been able to reach, as well as where we still have more work to do.”

That said, some neighborhoods have gotten more love than others — with eastern Queens and Staten Island relatively shortchanged.

ChatGPT was able to match a story to a neighborhood more often than not. 

But some of the choices the AI made were wrong, others were hard to understand, and we needed to cross-check the results using other, more traditional techniques.

We found that for these purposes, ChatGPT and large language models aren’t perfect — and neither is this map. You shouldn’t look at this the same way you’d look at a typical news graphic. We don’t claim it as an absolute representation of THE CITY’s reporting, only as a modest visual representation of our efforts.

At a time when journalism is reckoning with AI as a potential cost-cutting substitute for human-produced reporting, as a nonprofit newsroom we’re looking at the technologies as a means to fulfill our mission: Generative AI can help THE CITY build tools like this map that enable us to be more responsive to communities we serve.

How We Did It

What follows is a description of our process in creating the map. It gets a little technical — we want to provide a path for other newsrooms to build on this work — but we mean it for a general audience that’s familiar with the basics of AI, too. 

ChatGPT was able to pick a specific “place” for 2,750 of the 4,159 stories we published from our April 2019 launch through September 2023, and we were able to match 2,129 stories to a specific neighborhood. We used the NYC Department of City Planning’s Neighborhood Tabulation Areas to define our neighborhoods. On the map, the darker the blue shading of a neighborhood, the more of our reporting is connected to that area.
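For readers who want to build something similar: the map is a choropleth, shading each NTA polygon by its story count. Here is a minimal sketch of that step in Python with geopandas; the file names and column names are hypothetical stand-ins for our actual data.

```python
import geopandas as gpd
import pandas as pd

# The NTA boundary file name is an assumption; the Department of City
# Planning publishes Neighborhood Tabulation Areas as an open-data download.
ntas = gpd.read_file("nta_boundaries.geojson")

# Hypothetical table with one row per story and its matched neighborhood.
stories = pd.read_csv("stories_with_neighborhoods.csv")
counts = stories.groupby("nta_name").size().rename("story_count").reset_index()

# Join the counts onto the polygons and shade them: darker blue, more stories.
ntas = ntas.merge(counts, left_on="ntaname", right_on="nta_name", how="left")
ntas["story_count"] = ntas["story_count"].fillna(0)
ntas.plot(column="story_count", cmap="Blues", legend=True)
```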

To identify the location of a story, the traditional software approach would be to write code that reads through each article, analyzes the text and extracts the key location from it.

To pull out the words and phrases that mention a place, such code would do something called “entity extraction,” part of a field called natural language processing. It involves identifying geopolitical entities (GPEs), such as countries, cities or states, and non-GPE locations, such as mountains and bodies of water, using a technique called “named entity recognition.”

But when we tried a cookie-cutter implementation of named entity recognition, we found it was not able to recognize the different boroughs, neighborhoods and landmarks in New York City. It could only recognize a location if it was an explicitly named place.
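To give a sense of what that cookie-cutter approach looks like, here is a sketch using spaCy, a popular open-source natural language processing library. The example sentence is invented, and spaCy’s stock model is just one of several off-the-shelf options.

```python
import spacy

# spaCy's stock small English model, trained on general-purpose text.
nlp = spacy.load("en_core_web_sm")

doc = nlp("The rally began in Foley Square before marching across the Brooklyn Bridge.")

# GPE covers countries, cities and states; LOC covers non-GPE locations such
# as mountains and bodies of water; FAC covers named facilities like bridges.
for ent in doc.ents:
    if ent.label_ in ("GPE", "LOC", "FAC"):
        print(ent.text, ent.label_)
```

Even when a model like this tags a place name, it carries no built-in sense of New York geography that would connect the name to a borough or neighborhood.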

Another approach we could have taken would be to hand-label a subset of our stories (say, a few hundred) and “train” a statistical model to tag the rest of the stories by location — a high-effort task for this project. 

But this is where OpenAI’s ChatGPT came in handy. ChatGPT, like other large language models, has a kind of “common” knowledge and context of the real world. When we tried it, we found it was able to read a story and return its location, including the New York neighborhood, borough and geographical coordinates, in a matter of seconds. The question is: How accurate was it?

First, the good news: ChatGPT’s results for relevant neighborhoods and landmarks were not too wide of the mark. It did not “hallucinate” and make up nonexistent neighborhood names. Here’s the prompt we used:

“You are a data scientist and mapper, Given this text, perform semantic analysis and return the geographical : ‘Neighborhood:’, ‘Street Name:’, ‘Landmark:’, ‘Geographical Coordinates:’. No other texts please.” 
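For the technically inclined, here is roughly what each call looked like using OpenAI’s Python client. This is a sketch rather than our production code: the model name, message structure and lack of error handling are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PROMPT = (
    "You are a data scientist and mapper, Given this text, perform semantic "
    "analysis and return the geographical : 'Neighborhood:', 'Street Name:', "
    "'Landmark:', 'Geographical Coordinates:'. No other texts please."
)

def locate_story(article_text: str) -> str:
    # The model choice here is an assumption; any chat-completion model
    # can be substituted. The raw reply still needs parsing into fields.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```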

After some quick checks, we removed stories that ChatGPT treated as being about every neighborhood and yet none at all, and we fixed some ambiguous results.

The AI was able to identify the most relevant location in certain cases, such as the story “Bronx River Paddle Season Begins With a Ride Through the Borough’s Rapids.” This story, on the Bronx River Alliance’s annual Flotilla fundraiser kickstarting The Bronx’s canoe season in 2023, contains more than 10 distinct locations all mentioned once, but the model identified the location of Bronx River Alliance’s Office in Crotona Park as the most relevant to the story.

But despite a promising start, it took a lot of work to get the data into shape. We re-ran the results through a pipeline of trusted, traditional computational methods and compared the outputs, making decisions about how to handle each inconsistency as we encountered it.

For example, in certain cases, such as the story “Could Clean Air Centers Come to NYC?,” the AI identified every location mentioned in the text, which lit up so many neighborhoods that the map conveyed no meaning. We weeded out as many such stories as we could identify in our hand-checks.
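One way to catch those cases is to flag stories matched to an improbable number of neighborhoods. Here is a sketch of that idea; the column names and the cutoff are hypothetical, since in practice we relied on hand-checks rather than a single fixed threshold.

```python
import pandas as pd

# Hypothetical table with one row per (story, neighborhood) match.
matches = pd.read_csv("story_neighborhood_matches.csv")

# Count distinct neighborhoods per story and flag stories that match
# so many areas the result would be meaningless on a map.
per_story = matches.groupby("story_id")["nta_name"].nunique()
too_broad = per_story[per_story > 5].index  # cutoff of 5 is an assumption

flagged = matches[matches["story_id"].isin(too_broad)]  # candidates to weed out
```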

We also ditched most of the geographical coordinates that ChatGPT got wrong. As part of our fact-checking pipeline, we passed the AI’s results through Google Maps’ Geocoding API to get each landmark’s coordinates. We then calculated the distance between the coordinates returned by ChatGPT and those returned by Google Maps. If the distance was greater than 250 meters (about 820 feet) on the ground, we used Google’s coordinates rather than ChatGPT’s.
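The distance check itself is simple. Here is a sketch using geopy’s geodesic distance, which stands in for whichever distance formula a newsroom might prefer; both inputs are (latitude, longitude) pairs.

```python
from geopy.distance import geodesic

def reconcile(chatgpt_coords, google_coords, threshold_m=250):
    """Keep ChatGPT's coordinates only if they land within the threshold
    of what Google Maps' Geocoding API returned for the same landmark.
    Both arguments are (latitude, longitude) tuples."""
    distance_m = geodesic(chatgpt_coords, google_coords).meters
    return chatgpt_coords if distance_m <= threshold_m else google_coords
```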

In certain cases, ChatGPT returned neither a neighborhood name nor a landmark name. In those cases we identified neighborhoods using a computational geography method called “point-in-polygon.” Our map includes both neighborhoods and, where we could determine it with confidence, a single geographical point. In cases where we could only figure out the neighborhood, not the individual point, we omitted the articles from the ‘Individual Stories’ view on the map.
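A point-in-polygon test asks which neighborhood polygon contains a given coordinate. Here is a sketch with geopandas and shapely; the boundary file and column name are assumptions.

```python
import geopandas as gpd
from shapely.geometry import Point

ntas = gpd.read_file("nta_boundaries.geojson")  # NTA polygons, file name assumed

def neighborhood_for(lat: float, lon: float):
    """Return the name of the NTA containing the point, or None.
    Note that shapely points are (x, y), i.e. (longitude, latitude)."""
    point = Point(lon, lat)
    hits = ntas[ntas.contains(point)]
    return hits.iloc[0]["ntaname"] if not hits.empty else None
```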

There are other limitations worth noting — limitations of generative AI technology as well as limitations introduced by the design choices we made:

The relevant location of a story, especially as determined by the AI, can mean a lot of things. It could mean the story takes place at a certain landmark, refers to a person from the neighborhood, or is of interest to a particular community. We were OK with ChatGPT using a broad definition of relevancy here. Our checks after the AI step helped make sure it didn’t pick locations that were truly irrelevant.

Also, real neighborhood names and boundaries in New York can be squishy. They’re fluid and even change over time, as historically popular names of an area stick or new ones are formed. The communities themselves are constantly evolving. Last fall, The New York Times published a fascinating neighborhood map that goes into detail on this phenomenon. Hence our decision to use the Neighborhood Tabulation Areas (NTAs) defined by the NYC Department of City Planning.

NTAs are subsets of geographies defined by the U.S. Census, and are very imperfect representations of New Yorkers’ mental maps. There are 195 NTAs. Among these you might find neighborhood names you don’t immediately recognize. For instance, what’s commonly referred to as Bed-Stuy is defined as two neighborhoods in the NTA data: “Bedford” and “Stuyvesant Heights.” Then there’s “Astoria” and “Old Astoria.” And there’s a patch of land between Flatbush and East Flatbush in Brooklyn that NTAs call “Erasmus.” If you’re a New Yorker, you get the idea.

You won’t find any stories in our archive tagged as “Erasmus” because few New Yorkers locate themselves with that term. NTAs are imperfect but they’re pretty widely used in data journalism about New York City, and were the easiest geographies for us to use here. 

Finally, while our bulletproofing technique was robust (and took the largest share of our time in creating the map), we couldn’t read and hand-check every story. If you spot an error, please reach out to data@thecity.nyc and we will work to correct it.