From Curiosity to Impact: The Surdna Foundation AI Journey
The world of philanthropy is increasingly leveraging the potential of Artificial Intelligence (AI) to improve efficiency and scale impact. Starting in 2023, Exchange Design (XD) and the Surdna Foundation partnered to better utilize AI for grantmaking and racial justice initiatives. The Surdna Foundation supports social justice reform, healthy environments, inclusive economies, and thriving cultures across the United States. The foundation works to dismantle the barriers that limit opportunity in order to create more prosperous, culturally enriching, and sustainable communities.
To capture these insights, we interviewed our partners Jonathan Goldberg, the Vice President of Learning & Impact at the Surdna Foundation, and Jeffrey Jimenez-Kurlander, formerly of the Surdna Foundation and currently the Manager, Strategic Learning & Evaluation at the Mellon Foundation. They shared lessons learned from co-creating a customized evidencebase.ai and Slack integration to analyze unstructured data from grant reports and improve decision-making, as well as prototypes for data extraction and compliance tools. They noted challenges such as evolving AI technology, data quality, and securing organizational buy-in, and they also emphasized the importance of starting small and focusing on specific use cases. Looking ahead, they hope AI will enhance knowledge sharing, improve evaluation work, and enable predictive analytics to better serve communities and grantees, but they stress the need to keep human relationships at the forefront of philanthropy.
Key insights include:
- The Surdna Foundation's exploration into AI was initially sparked by a recognition of its potential to transform their work and the world following the public release of ChatGPT. This led to early brainstorming around using AI to analyze unstructured data like grant reports to identify common themes, but it also highlighted the need for a partner to bring the ideas to fruition.
- Surdna adopted a pragmatic approach by starting with a small, functional project: a chatbot to answer staff questions about grantmaking policies. This served as a valuable test case and helped address an immediate need for information accessibility.
- While facing challenges related to the rapid evolution of AI technology, data quality, and initial staff adoption, the Surdna Foundation found success in creating bespoke solutions tailored to its organizational culture, such as integrating the policy bot into its existing Slack workspace.
- Looking ahead, the Surdna team envisions AI playing a significant role in enhancing knowledge and decision-making in philanthropy, going beyond basic efficiency gains to potentially democratize access to data, improve compliance, and advance evaluation through predictive analytics and sentiment analysis. However, they also caution against "gimmicky AI tools" that don't offer real value.
- Key advice for other philanthropic organizations starting their AI journey: secure serious buy-in from leadership and across the foundation (including resource allocation), start with small, well-defined projects to test the waters, and recognize the critical role of data quality.
* Interview edited for clarity and consistency
Let's just dive right in! Could you share a little bit about yourselves and your roles with the Surdna Foundation?
Jonathan Goldberg: I've been at the Surdna Foundation, let's see, as of March 17th it will be 28 years. I've worked in a lot of different roles, leading grants management, information systems, and building out our learning and evaluation practice, and I am a member of the senior leadership team at Surdna. The Surdna Foundation is focused primarily on social and racial justice in the United States. And we do that through several different program areas including inclusive economies, thriving cultures, sustainable environments, and our Andrus Family Fund, which is primarily focused on youth justice. In our roles, Jeff and I work across the foundation with all of our teams. A lot of our work has been on understanding the data that we collect from grantees and others, and then translating that into learning and sharing that learning across the foundation and beyond.
Jeffrey Jimenez-Kurlander: Hi folks, I'm Jeff. I've been at Surdna—June 18th would be seven years. I joined as a program associate. As soon as I had the chance, I joined Jon's team because I love thinking about how we can use data to make strategic decisions around grantmaking and to help our program officers and our leadership team make institutional decisions. I joined Jon's team as a data analyst. I think that was the original title back then. Through the years, I was promoted to a learning and impact officer role, which allowed me to continue this data analytics focus while also looking at evaluation from a qualitative perspective. I also helped steward a grantmaking portfolio with Jon, primarily focused on emerging technology at the intersection of racial justice and social justice.
What were your motivations in reaching out to Exchange Design when starting your AI journey?
Jonathan Goldberg: It's interesting. I'm thinking back on this now, Jeff, and what I recall is having a conversation very early on, right after ChatGPT was released publicly. I remember saying, “I really don't know what this is, but I know it's important and it's going to transform the world, and we've got to be in on this, and we've got to really understand it because it's going to have implications for not only our work but our lives.” So, we started learning how it worked, what it was, and quickly got interested in the way it lifted up insights from unstructured data. We started brainstorming around things like, “We have lots and lots of grant reports that come in. We have metrics that we track. How can we use this tool to look across all of those documents and make sense of them, and identify common themes?” We got way ahead of ourselves and ran into some of the limitations that existed and sort of said, “Okay, the technology is going to get there, but we need to start at a more functional level.” So, we took a step back and looked at more basic ideas to experiment with. I use that term very intentionally—experiment with—because one of the things that I was interested in with our department was to be a place where we could experiment and play and learn and use our vantage point as a way of just trying things and failing and trying and failing and finally, hopefully, succeeding! So that was my motivation.
Jeffrey Jimenez-Kurlander: Yeah. I think it's really similar for me. I think one of the first experiences that I had was—I don't know if you remember this, Jon—but I remember sharing my screen with you and talking to you about this thing called ChatGPT. I asked it to define power building from the perspective of racial equity, something we talk about a lot at Surdna. When we got the response to the prompt from ChatGPT, I think you responded that it defined it better than we did. That's how I knew we had something here that was important, that we needed to deal with, and that we needed to figure out how we were going to bring it into our tech stack. There were two main motivations. First, I really wanted to enhance data-driven decision-making. With numerous grants and initiatives across different program areas, and this is going to be true wherever I go, whether Surdna or the Mellon Foundation, it's a challenge to synthesize all the information that's coming at us swiftly and effectively. AI tools promise to help identify trends, assess impact, and maybe one day, forecast future needs. The second was that we wanted to help free up staff time so that program staff could focus on deeper relationship-building with our grantees. So, by using an AI tool to give program staff a head start on repetitive tasks, or tasks that are important but take up too much time, like producing grant writeups and summarizing grant reports, it could allow our program staff to be out in the field a little bit more.
What did you end up choosing to explore and how did that fit into your experience and your workflows?
Jonathan Goldberg: The first thing we decided to focus on was something that was at a simple level that we could kind of get our arms around, because we were very much in learning mode. We chose to look at our grantmaking policies. It was really designed to solve a pretty simple problem: providing accurate information and answering questions from our staff about our grantmaking policies and procedures. So, it could be anything from providing information about the approval process for a grant, how to make a staff matching grant, how long a grant can be, or any number of things that folks have regular questions about. We have documents that clearly explain all of our policies and procedures, but folks would rarely go to those documents. They would simply go to the source. It was easier for them to reach out to our grants manager, which is not ideal. So we thought, how can we get this information out to people in a self-service model? So we built a bot that lived in our Slack instance where anyone could post questions. We got to the point of building it and refining it, and it was working. We haven't had enough time to really get staff fully into the mode of using it yet, but that is starting to happen. More importantly, it was a test case that we saw as a way of moving into other, more expansive areas that I was really excited about. There were two other things that I was really interested in seeing. One was a way to extract information from our grants management database—in our case, it's Fluxx. How could we take that massive volume of information, which is primarily structured data, and make it more accessible for all of our staff? It would allow folks to get at the data themselves, and I think this is a universal problem in foundations: you'll never get foundation staff who are not technical to go in and figure out how to do searches in that database in a meaningful way, because databases like Fluxx are complex. An AI tool, whether it lived in Slack or was used on its own, makes it much simpler to summarize large datasets. Simple questions like “How many grants did I make last year?” or “What was the average grant size?” would be as simple as asking the question in a prompt. And then there are things that are much more difficult to get using queries in the database, things that are not coded. So, what if you trained a tool on what organizing meant to our foundation, and asked it to identify different sources of organizing work, things that related to a specific campaign, things that over time maybe you would code for but hadn't thought to code for yet? They're not structured in that way, but you could still get at it with this tool. So that was a really exciting project that I was looking forward to.
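(Editor's note: for readers who want to picture the mechanics of a policy bot like the one Jonathan describes, here is a minimal, hypothetical sketch in Python using slack_bolt and an OpenAI-compatible client. It is not the evidencebase.ai and Slack integration Surdna actually uses; the file paths, model name, and prompt wording are illustrative assumptions only.)

```python
# Hypothetical sketch of a Slack policy bot: load grantmaking policy documents,
# then answer staff questions by asking an LLM to respond only from that text.
from pathlib import Path

from openai import OpenAI
from slack_bolt import App

# Placeholder Slack credentials; a real deployment would read these from the environment.
app = App(token="xoxb-...", signing_secret="...")
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Concatenate the policy documents that will ground every answer.
POLICY_TEXT = "\n\n".join(p.read_text() for p in Path("policies").glob("*.txt"))

@app.event("message")
def answer_policy_question(event, say):
    """Answer a staff question posted in Slack using only the loaded policy text."""
    question = event.get("text", "")
    if not question:
        return
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer staff questions using only the policy text below. "
                    "If the answer is not covered, say you don't know.\n\n" + POLICY_TEXT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    say(response.choices[0].message.content)

if __name__ == "__main__":
    app.start(port=3000)  # a production bot would more likely use Socket Mode
```

In practice a production version would retrieve only the most relevant policy passages and run through the foundation's approved vendors, but the shape of the workflow is the same: a question posted in Slack comes back as an answer grounded in the policy documents.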
The other idea that I was really interested in, and that I think had real field-wide implications, was using AI as a compliance and risk analysis tool. First, I wanted to train an AI tool on IRS regulations, and then run each new proposal through the tool to assess whether there were any issues, such as lobbying concerns or nonprofit status. I also wanted to train the tool on proposal language that might raise red flags and create reputational risk or invite scrutiny and attacks from our adversaries. At Surdna we do not have in-house legal counsel, and we are staffed leanly, so an AI tool that could lift up potential problems quickly would allow us to focus our attention where it was most needed. And we had Exchange Design build a prototype that was really impressive. It looked to me like this was going to be a tool that grants managers across the country could make great use of. So those are the things I was hoping to get to before I retired.
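(Editor's note: the compliance idea can be sketched in the same hypothetical spirit: send proposal text to an LLM along with screening criteria and ask for structured flags. The prototype Exchange Design built is not public, so the criteria list, model name, and output format below are illustrative assumptions, not the actual tool.)

```python
# Hypothetical sketch of compliance/risk screening for a grant proposal.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative screening criteria; a real tool would be grounded in IRS guidance
# and the foundation's own risk framework.
SCREENING_CRITERIA = [
    "Possible lobbying activity under IRS rules for private foundations",
    "Questions about the grantee's nonprofit (501(c)(3)) status",
    "Language that could pose reputational risk or invite outside scrutiny",
]

def screen_proposal(proposal_text: str) -> list[dict]:
    """Return a list of {criterion, excerpt, explanation} flags for one proposal."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You review grant proposals for a foundation. Flag any passage "
                    "that may raise one of the criteria below. Respond in JSON as "
                    '{"flags": [{"criterion": ..., "excerpt": ..., "explanation": ...}]}. '
                    "Use an empty list if nothing is flagged.\nCriteria:\n- "
                    + "\n- ".join(SCREENING_CRITERIA)
                ),
            },
            {"role": "user", "content": proposal_text},
        ],
    )
    return json.loads(response.choices[0].message.content)["flags"]

# Hypothetical usage: flags = screen_proposal(open("proposal.txt").read())
```

In such a setup a grants manager would review whatever the tool flags rather than acting on it automatically, keeping the human judgment Jonathan describes at the center of the process.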
What are some of the key challenges you’ve encountered? Are there any success stories you want to share?
Jeffrey Jimenez-Kurlander: So from the challenges perspective, I think some of it has to do with the state of the technology itself and less to do with y'all. And some of it had to do with how we wrote and saved our policy documents on our network. At the beginning, when we were trying to set up the grantmaking policy bot, we were getting a lot of incorrect responses. That was partly because we weren't prompting the bot correctly, and partly because the policies themselves weren't as clear as they needed to be. So that was one challenge we learned from. I think another challenge is just the cultural piece, and I think you all will find this no matter who you're working with. There are a handful of folks who are technologically savvy, but often they won't go outside of our Slack workspace. They just won't do it. So we had to figure out a workaround and get into the Slack workspace, and that was a challenge that turned into a huge success story. The fact that you all are prepared to create bespoke solutions like that is such a value proposition. I would choose y'all over any of the bigger organizations because I know that I can get something that's very specific to my use case and very specific to my culture, and that is super important.
Jonathan Goldberg: So yeah, as you said, Jeff, there were two challenges. One is technical, which is not a huge concern. When we started, the technology would change and grow probably once a week, and you know this better than I do. It's constantly evolving and improving because there's always something new to offer. So that's not a hard one to overcome since Exchange.Design is on top of it. I think the harder challenge to overcome is the place that AI holds within each organization: what priority it holds, because there are a lot of competing priorities – among them time, money, interest, all of those kinds of things. I think it was a real challenge for Jeff and me, often being in sales pitch mode internally to get resources and buy-in, which is a hard way to work.
But that was balanced by the sense of awe we experienced when we saw what was possible. I remember being on a Zoom call in my car on a Friday afternoon when I came up with the compliance tool idea. I think it was the following Monday when you showed me the prototype you built. It was almost shocking to me how good it was. It was brilliant. I mean, it was further along than I had even conceived. So, that's the promise and, I think, the joy of this work.
Where do you see AI in the future of your work? How do you see these types of solutions in your day-to-day tasks?
Jonathan Goldberg: What I'm seeing from some of the vendors in our field thus far isn't actually great. Much of it has revolved around simplistic efficiency tools, like prepopulating sections of grant applications, that may or may not save a nonprofit a few minutes' time. And that allows them to say they have AI built into their system as a sales gimmick. Now, there's nothing wrong with improving efficiency, and I think there probably are a lot of interesting tools being built for individual organizations. But where I think the field should be thinking is more in terms of how this is going to add to our knowledge and decision-making, serving not only foundations' needs but also their grantees' needs. So, can AI help surface the efficacy of the work of a whole swath of nonprofits out there who could learn from each other and work better? That's doable! It's going to require a champion and some money.
Jeffrey Jimenez-Kurlander: If I can give you the optimistic version, where I see AI in my future is on the evaluation side of the work. A lot has to go right for it to get to where I'm hoping it takes us. I really do see AI becoming more deeply embedded in routine processes, from predictive analytics on funding outcomes to real-time sentiment analysis on how communities feel. That's something that I can see and imagine. Eventually we might even see tools helping to coordinate multi-funder collaborations, identifying synergies across foundations, and even doing really cool things like simulating potential impact before we make a large grant. So, the things we want to be able to articulate when we make grants, these tools could help us make that a reality with real depth and detail. However, I deeply believe in the human element, and I think that's always going to remain central to the work. AI will always be able to illuminate patterns and provide efficiency, but philanthropy is always going to be grounded in relationships and context that numbers alone won't provide. Still, I do see this futuristic version that can really help advance evaluation work.
Jonathan Goldberg: Another area where AI could prove useful is projects that involve shared data to improve community services. This might be things like tracking trash pickup times, or crime statistics, or services provided to the LGBTQ community. These projects typically work with a series of nonprofit organizations in a city, and those organizations are expected to report data in real time through a web interface, and that data gets collected and aggregated. And it strikes me that an AI predictive tool would really help decision-makers allocate resources properly. So there's another thing I'm excited about.
What advice would you have to someone at a foundation just starting their AI journey?
Jonathan Goldberg: For me it would be getting serious buy-in from the top of your organization and across your foundation. For us, I think, Jeff and I got excited on our own and just dove in, and then tried to get people excited about it. And there were some folks who were genuinely interested and excited about AI but institutionally it wasn’t a top priority in a way that would be transformative for the foundation’s work. So if you have a big appetite for using AI to change the way the organization works, it's very important to spend the time getting others on board before you start building tools!
Jeffrey Jimenez-Kurlander: And then, tactically speaking, for the first project I think you should always start very small and test the waters. Also, do not underestimate the importance of data quality in this space; make sure your data is in order.