Thought Leadership Archives - data.org
https://data.org/news/category/thought-leadership/

Pathways to Impact: Julian Stillman
https://data.org/news/pathways-to-impact-julian-stillman/
Tue, 07 Oct 2025


Pathways to Impact is a series of conversations with data for social impact leaders exploring their career journeys. Perry Hewitt, Chief Strategy Officer of data.org, spoke with Julian Stillman, the chief of staff at the HOPE Program, a nonprofit advancing economic mobility, workforce development, and environmental sustainability in New York City.

Tell us how you found yourself at the HOPE Program, and any context around the role data and AI have played in the work that you do. 

When I moved from Colombia to the United States in 2017, I was looking for a job where I could help change the world. My first impulse was the United Nations, but someone suggested that I look for something in the nonprofit sector. I was fortunate to land at the Bedford Stuyvesant Restoration Corporation, the first community development corporation in the country. I started as a financial counselor and worked my way up to director. After several years, I had the opportunity to apply for a job as a chief of staff at the HOPE Program, where I could apply my skills to their initiatives in workforce development, with particular emphasis on the green economy. 

The HOPE Program was looking for someone data-oriented. My experience as Director of Program Compliance meant that working with data was part of my everyday responsibilities, ensuring we met reporting and compliance requirements. That background helped me secure the role. 

Once at HOPE, I started to evaluate the data to align with our new strategic plan, Home Of Prosperity and Empowerment, with a focus on growth as we approach our 40th anniversary. When you’re making decisions, you need good data! 

And data leads naturally to AI. The need to develop and deploy AI was something that we started to hear from our partners. Funders were suggesting that we consider AI tools to be more efficient in conducting research for grants and for grant writing. At first, everybody was familiar with only those tools that took notes during a meeting. Learning how to engage with generative AI tools like ChatGPT, Gemini, or Copilot was something new.  

We tried to start to bring it into our daily work, but we were not fully sure what we should do, given our concerns about data privacy. We sought guidance from private companies as well as other nonprofits on how to handle these concerns, and landed on creating a data policy as a first step. From practice to policy, data is the bedrock of a lot of my work here. 


Are you seeing widespread AI adoption in the social sector more broadly? 

Very recently, I attended a meeting with about 15 nonprofits. Before the meeting, they surveyed the group to ask where they stood in their AI journey, and I was surprised that only 7% reported actively using AI in their day-to-day work. This is not a statistically significant sample, but an example of what I see. Everybody’s trying to figure out the right tools to use and, most importantly, the right way to use them without compromising privacy. We understand that transparency is essential, as is privacy for our participants and our community. That’s the context in which we, and many other nonprofits, are operating. 

We realize that change is coming. I remember when the adoption of the internet became widespread, and how much it changed work and the world. AI is going to change and take away a lot of existing behaviors. And create new ones: AI tools and agents are becoming collaborators in daily work. We need to figure out how to become more efficient, and also how to retain the human touch and insight. We’re going to use AI, but we want to be sure that we are doing it the right way.


What problem are you trying to solve at the HOPE Program, and where do data and AI fit in?

The HOPE Program helps New Yorkers build sustainable futures. We serve adults 18 and older: 95% of our participants are from low-income communities; 90% are BIPOC (Black, Indigenous, and people of color). Around 50% of our participants have faced homelessness. We also work with a lot of formerly incarcerated people, a focus that we are known for in this sector.  

One HOPE priority–which I really admire–is that we serve communities that are highly impacted by environmental issues. For example, right now we are working in the South Bronx, where environmental, economic, and social disparities are all high.  

Data helps us figure out how we should move forward to solve the systemic challenges we face. Now we’re putting that data into AI tools to research how it connects to what is happening in the world, and that plays a role in steering our direction.  

Here’s a specific example: the US federal government recently changed its funding priorities. Our task was to figure out how to navigate these rapid changes without an expert on hand. We used AI to review all the executive orders and understand their potential impact on the services our organization provides, particularly the training focused on the environmental, or green, sector.  

Were there any unexpected blockers to your career entry or progression, or your move into this field?  

Language was more of a barrier than I expected. When I applied for that first job in the US, my initial phone interview in English was a big challenge. As someone who enjoys networking and spends a lot of time with people, I was frustrated—but it reminded me to continue working, to improve my professional English, and to build a network here.  

Living in New York has helped. There is a love for immigrants, for people coming from different places around the world. Practically everybody has an accent, and that helped me gain confidence. 

What community of people or resources bolsters your work today? 

I am supported by the community of people we serve.  

The community connection is powerful if you deeply believe in the mission of the organization that you’re working with or working for. I believe in what we do, absolutely. Maybe I’m not facing the same issues and challenges that our community faces, but as an ally, I try to see how my skills can help this community.  

I also have benefited from cohort work with people solving similar problems with data and AI. When I was with the Restoration Corporation, we participated in the data.org Data Maturity Assessment (DMA) cohort funded by Microsoft. We spent six months learning together about different topics and reviewing our results. We had a lot of interesting findings about our data culture, and it helped us shift the organization in the right direction. Almost 15 months later, I asked a former colleague how it’s going with the data culture. Apparently, it has really continued because of this cohort work. That is affirming. 


Which skills—not necessarily related to data and AI—have helped you in your career?

I would say soft skills, which might seem counterintuitive because this is a workforce development organization, and we are training people to become better in their jobs. I always say the only thing that is going to make a difference is the soft skills and how you interact with people. You can learn a tool, you can learn the job responsibilities, but how you interact with others and show respect is critical. So much is about empathy and understanding.  

When the soft skills are missing, it makes a difference. For example, we noticed during COVID how young people started to become very attached to technology, and how they were missing interactions with others. There was a high cost: after COVID, they didn’t know how to be in spaces with other people. Soft skills matter. 

What advice do you have for someone interested in doing this work? What have you seen as differentiators for success?

Focus on what you want and where you believe you can make a difference in the challenges you want to solve. Be unafraid of taking on new opportunities, because if you’re afraid, you might be setting yourself up for failure. Gain some learning from each new opportunity—even a work experience that is not the right path can point you in the direction that is right for you.  

In my view, what really sets people apart today is the ability to bring together analytical, creative, and critical thinking. With so much information out there and AI now part of the process, it comes down to staying curious and asking the right questions. More than ever, we need to be thoughtful, ethical, and discerning about the outputs we get, whether they come from AI or any other advanced technology. 

What do you see emerging as the next big thing in data and AI for social impact?

In my case, I’m very focused on equity in data and AI and would like to see that emphasis become more common. Every time I’m analyzing data, I’m trying to evaluate an equity component, because I believe that it’s the only way to make decisions that are focused on social impact. The same is true with AI, which is becoming a collaborator. 

Increasingly, we are applying data and AI to better understand and improve green metrics: for example, reducing heat in buildings and realizing energy savings. Getting and acting on good metrics is something that we want to expand. We are starting to use AI to estimate specific data points that we are not able to collect directly, so that we can make comparisons. We want to use more data and AI in the coming months to improve not only our participants’ outcomes but also those of their families and communities.  

We are committed to sharing those findings. Sometimes we make decisions based on the data we see, but in the end, we are serving the communities. It is important to share findings and ensure that the community understands your work and the metrics behind it. Their input is valuable. Maybe the community can help you to identify gaps and opportunities: a new metric, a new outcome, or another service that the community needs. 

What’s your don’t-miss daily or weekly read?  

I spend a lot of time in two apps: The New York Times and Masterclass. These keep me up to speed on news and help me continue to work on soft skills. Masterclass helps me explore topics like how to become a better communicator and personal growth. A lot is changing all around us, but we can always work on ourselves. 

About the Author

Perry Hewitt

Chief Strategy Officer

data.org

Chief Strategy Officer Perry Hewitt joined data.org in 2020 with deep experience in both the for-profit and nonprofit sectors. She oversees the global data.org brand and how it connects to partners and funders around the world.


Series

Pathways to Impact

This data.org series interviews leaders in Data Science for Social Impact with a lens of how they got there, as well as the skills and experiences that have fueled their career progression.


5 Key Learnings from Our AI2AI Challenge Awardees
https://data.org/news/5-key-learnings-from-our-ai2ai-challenge-awardees/
Mon, 08 Sep 2025


At the midpoint:

Quipu | Colombia: More than 5,000 people gained access to financial services, and 11,892 interacted with an AI financial assistant bot.

IDinsight | Ethiopia: 125 frontline health extension workers have provided 587 AI-assisted consultations, with 90 percent reporting that the tool has made their work easier and more efficient.

International Rescue Committee (IRC) | Global: 951 people affected by crises, conflict, and disasters received critical information from Signpost AI, including how to access legal assistance, information about job training, and ways to find housing.

Buzzworthy Ventures | India: 450 farmers have received beekeeping support.

Link Health | United States: 4,293 individuals have been screened for benefits access, with more than $968,000 in direct financial support unlocked.

Innovation is everywhere, if you know where to look.

And at data.org, our global search to support and scale innovative uses of data and AI has yielded extraordinary results. 

Our four global innovation challenges since 2020 have spotlighted small organizations with big ideas, as well as established organizations with a hunger to experiment with data and AI in new ways. In December 2024, we introduced the Artificial Intelligence to Accelerate Inclusion (AI2AI) Challenge awardees, in partnership with the Mastercard Center for Inclusive Growth.

Halfway through their grant period, the five organizations already have exciting results to share, having served more than 23,000 people and counting around the world. Collectively, the organizations have also secured $1.4 million in additional funding to continue scaling their data and AI work and deepening their impact. 

The Buzzworthy team, along with local beekeepers in India.

With months still left before the grant period ends, the number of people reached and lives impacted will continue to grow, as will the potential for these solutions to be replicated in other regions and contexts. Here are five key findings at the grant midway point that other social impact organizations may consider in their own work:

  1. AI is a tool to reach underserved and overlooked populations.
    Through field engagement and state-level partnerships in India, Buzzworthy has introduced beekeepers to tools that offer tailored hive management guidance, enhancing the predictability and stability of income from beekeeping as a vital form of income diversification for smallholder farmers.
  2. AI is improving efficiency, accuracy, and decision-making for both organizations and their beneficiaries.
    In pilot locations, survey data from the International Rescue Committee (IRC) shows as much as a 66% efficiency gain since Signpost AI was integrated into local teams’ daily work of providing critical information to those affected by humanitarian challenges.
  3. Awardees are transforming internal data practices and fostering responsible data sharing for broader impact.
    Beyond direct financial inclusion tools for micro-, small- and medium enterprise entrepreneurs, Quipu has begun offering scoring tools and data insights to partner financial institutions in Colombia. These partners are now able to evaluate applicants with little or no credit history by integrating alternative data sources, such as SMS or mobile usage, into their decision-making processes.
  4. Adoption of AI solutions is driving continuous learning and increased confidence among frontline workers.
    Interviews with the Ethiopian Ministry of Health’s health extension workers (HEWs) in the IDinsight and Last Mile Health project found that engagement with the AI-powered call center significantly enhanced their confidence in performing their job well. This professional development has been particularly evident over time, with HEWs increasingly demonstrating the ability to manage cases independently, apply knowledge from previous consultations, and make informed decisions even in complex scenarios.
  5. Now more than ever, organizations must demonstrate trustworthiness to their end beneficiaries.
    Link Health is building trust with communities in Boston and Houston to ensure they are able to access public benefits. In contexts where personal information is particularly sensitive, Link Health is highlighting their responsible data security practices to the communities they serve. 
The Link Health team.
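The alternative-data scoring idea in finding 3 above can be sketched in a few lines. This is a toy illustration only: the feature names, weights, and values below are invented for the example and have no relation to Quipu’s actual model, which the source does not describe in detail.

```python
# Toy illustration of alternative-data scoring: blend signals such as
# mobile-money activity into a score for an applicant with no credit file.
# All feature names, weights, and values are invented for illustration.

WEIGHTS = {
    "months_of_mobile_money_history": 2.0,
    "on_time_utility_payments": 3.0,
    "avg_monthly_topups": 1.0,
}

def alternative_score(applicant: dict) -> float:
    """Weighted sum of alternative-data features (a stand-in for a real model)."""
    return sum(WEIGHTS[f] * applicant.get(f, 0) for f in WEIGHTS)

applicant = {
    "months_of_mobile_money_history": 18,
    "on_time_utility_payments": 11,
    "avg_monthly_topups": 4,
}
print(alternative_score(applicant))  # 2*18 + 3*11 + 1*4 = 73.0
```

In practice, a lender would calibrate such features and weights against repayment outcomes rather than hand-pick them; the point here is only that non-traditional signals can substitute for a missing credit history.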

Accelerating Together 

Through the challenge, we’ve identified and supported bold leaders who are driving innovation and progress locally—and showing what’s possible to social impact practitioners around the globe. 

As a connector, convener, and catalyst, data.org is committed to sharing best practices and helping social impact leaders learn from one another. Our awardees are taking advantage of meaningful opportunities to connect through data.org and Mastercard Center for Inclusive Growth-led events, workshops, and introductions. There’s still much to learn through the duration of the AI2AI Challenge, and we’ll continue to lift up interesting insights and practical approaches.

Field building doesn’t happen in isolation—it happens in community. And together, our community will continue to accelerate social impact using data and AI.

About the Author

Joanne Jan

Manager, Partnerships and Product

data.org

Joanne Jan is the Manager, Partnerships and Product at data.org. In this role, she collaborates with key stakeholders to strengthen social impact organizations’ capacity to use data.


Pathways to Impact: Kevin Teo
https://data.org/news/pathways-to-impact-kevin-teo/
Tue, 02 Sep 2025


Pathways to Impact is a series of conversations with data for social impact leaders exploring their career journeys. Perry Hewitt, Chief Strategy Officer of data.org, spoke with Kevin Teo, Chief Technology Officer and Head of AVPN’s ImpactCollab platform, which aims to facilitate the identification of trustworthy and impactful social impact organizations across Asia, enabling effective philanthropic giving.

You had a wide range of diverse work experiences before pursuing a career in social impact. What led you to this work?

My career started in computer science, so I naturally went into the tech sector, in startups. That spanned a period of eight and a half years across numerous companies—some succeeded, some failed. The transition towards social impact occurred when I had the realization that, despite the energetic and innovative nature of the startup community, for me, it was missing an essential ingredient around purpose. That led me to pursue an alternative path.

I believe that everything happens for a reason. Right around that time, a door opened by way of the Global Leadership Fellows Program with the World Economic Forum. The program provided an opportunity to get engaged in social entrepreneurship. The subcommittee I was part of looked at social entrepreneurship in East and Southeast Asia. The role entailed bringing these passionate and innovative social entrepreneurs into the broader World Economic Forum convenings. Our goal was to enable the cross-pollination of ideas between global CEOs and leaders and the social entrepreneurs driving change on the ground. 

Forging those connections was a hugely invigorating experience. And it was such an exciting time to be doing this work, with Muhammad Yunus just getting recognized for his results in microfinance.

I took all that as a very clear sign to contemplate a career shift toward social impact.


You’ve held many roles at AVPN, and most recently have been leading the ImpactCollab there. What are you working on with data and AI?

At AVPN, we start with the premise that we are essentially surrounded by capital. When we talk about engaging in social impact, it necessarily means deploying that capital to support non-profit organizations to deliver on their mission and to grow.

The challenge is connecting the two worlds: the abundance of capital needs to find a fit with social impact organization (SIO) leaders, who all need capital to support their work. This is a shared need for both nonprofit organizations and social entrepreneurs.

We ask ourselves: how do we best engage that capital? How do we get the people making decisions around capital to take notice of the important work happening in social impact? How do leaders at SIOs, from both data and communication perspectives, gather and present the right information to attract funding?

We have found that in many instances, the SIOs’ case for support is constructed with just one funder in mind, like the Gates Foundation. Hundreds, if not thousands, of other possible, well-resourced supporters would be interested in the nature of an SIO’s work—so we think a lot about how we gather the data to get all the right pieces of information to the right audiences so capital can flow.

Bridging this gap between the organizations that need the resources and the people who actually have the resources is what we’re focused on with ImpactCollab.

Here’s another place we’re seeing a role for data and AI: language. Within the Asian context, we’ve got an additional layer of challenge around languages. In most of Asia, the dominant language is not English: it’s Chinese, Japanese, or something else. So when Asia-based SIOs are trying to fundraise, often they are translating themselves into English with a particular donor profile in mind. That approach impairs communication to all the other potential local supporters who could appreciate the nature of their work.

Next, imagine the people with the wealth being subdivided into different communities of culture and language with all the nuance that entails. 

That’s where LLMs, in particular, can play an interesting and important role. First, they allow a different way to navigate the taxonomy of areas of social impact. For example, in this sector, we sometimes use very specific terms to describe our work. If you’re in climate-related work, you’ll talk about mitigation and adaptation. To the person on the street, those terms are just jargon. That person would say, “I want to do something for the environment,” but they might not frame it in terms of mitigation or adaptation.

An LLM actually allows for natural language to be used, for someone to present an interest and then connect to a potential opportunity to act or support. With technology, you can even layer the language translation layer on top of that. So that’s something we’re looking to build quite extensively into ImpactCollab.
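The taxonomy navigation described above can be sketched with a toy matcher. Here, simple word overlap stands in for the semantic matching an LLM or embedding model would actually provide, and the taxonomy entries are invented examples, not ImpactCollab’s real schema.

```python
# Toy sketch: map a plain-language interest to sector taxonomy terms.
# Word overlap stands in for LLM/embedding-based semantic matching;
# the taxonomy entries below are invented for illustration.

STOPWORDS = {"i", "want", "to", "do", "something", "for", "the", "a", "of"}

TAXONOMY = {
    "climate mitigation": "reduce greenhouse gas emissions environment carbon energy",
    "climate adaptation": "adjust climate change environment flooding resilience",
    "financial inclusion": "access banking credit savings money poverty",
}

def match_interest(interest: str) -> str:
    """Return the taxonomy term whose description best overlaps the query."""
    query = {w for w in interest.lower().split() if w not in STOPWORDS}
    return max(TAXONOMY, key=lambda term: len(query & set(TAXONOMY[term].split())))

print(match_interest("I want to do something for the environment"))
```

A person saying “I want to do something for the environment” lands on a climate term without ever using the words “mitigation” or “adaptation”; swapping the overlap score for embedding similarity, and adding a translation layer on top, gives the multilingual version described in the interview.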

It’s great because a lot of these technologies are actually now available out of the box. With so many AI competitors, we don’t have to build these capabilities from scratch; we’re just tapping into what’s being produced at such a fast pace. We want to ensure we do this work responsibly and also take full advantage of emerging technology to deploy capital to social impact areas where it’s needed most.


Were there any blockers to your career entry or career progression?

I’m both a Stoic and an optimist, so when something crops up in my path, I tend to take that as a sign and a challenge rather than as a barrier.

Over the years, I have embraced the challenge of adapting to new cultures. I grew up in Singapore, went to the UK for my undergraduate degree, and then to the US for grad school. 

I am very grateful for those experiences: diving into what people were like in each region and local community, and learning what motivates them—how they work. I also saw that as a way to develop myself personally, and to see life from different perspectives.

What community of people or resources bolsters your work?

For me, it’s always been the social entrepreneur community. And for me, that’s less a formal affiliation or group, and more my community of friends and colleagues who have the social entrepreneur mindset.

Very different people can embody this mindset, wearing different hats and holding different titles. You can find them in a nonprofit, in government, or in a corporation. My personal affinity is to the group that thinks in a socially entrepreneurial fashion. That’s the group of friends and stakeholders that energizes me, and that I proactively look to cultivate over time.

Thinking longer term, I always want to be active in this space—even as I grow old and retire. I can’t imagine myself doing nothing. I want to be surrounded by people with the social entrepreneur mindset, intent on change. I intentionally cultivate a network of my peers, and also those who are 20 and 30 years younger than I am, to ensure I can stay engaged and connected to this work.

Given the breadth of your education and roles you’ve held, you possess a wide range of skills beyond technical ones. Which skill has offered the greatest return in your career?

I’ve mentioned Stoicism before—it’s a mindset and philosophy that has hugely benefited me: keeping an open mind about different perspectives, especially in times of high stress or pressure.

We at AVPN often find ourselves in those situations because we work among Type A employees who charge ahead. Sometimes, that pressure can build up. But you don’t want it to become a conflict. That can happen without cultivating a more accommodating mindset, where you seek to understand why people said what they did, or why people reacted the way they did.

I rely on the Stoic mindset, which I try to inculcate in the team. It has benefited me tremendously, as I appreciate a broad range of perspectives and approaches. Those different approaches add value to my individual contribution, the organization’s work, and overall, they make us more effective as humans.


What advice do you have for someone new to the field who is interested in doing this work?

I’d suggest first exploring volunteering—dipping their toe into the social impact space, assuming they are holding some other job. Engaging as a volunteer is an excellent introduction to some of the practical realities on the ground and the multifaceted challenges that are typically at play. 

It’s also important for them to reflect upon what makes sense for themselves and the issues that speak to them on a very personal basis. I’ve found that this has to work for each of us, personally, to be sustained on this journey of discovering our own purpose and place in society and life.

To take Singapore as a context, we’ve created a very robust process of getting young people through school and then into jobs. Once they have those first roles, that’s typically when they get to contemplate what it might mean to work at organizations on the ground that serve a broader purpose beyond individual self-interest. It’s then that they engage in a peer group to share experiences and learn together. It doesn’t matter whether you’re a fresh grad, a mid-career person, or even a retiree—finding what compels you is useful to unpack as you begin to work in the social sector.

What is the next big thing you see in data and AI for social impact?

I’ll focus on data and AI enabling a major cultural shift. In the Asian context, where AVPN focuses, there is such diversity of cultures, practices, and languages. The shift I anticipate and hope to see is a greater melding of cultures and mindsets across the region. As I said, so much of this work and communication is translated into an English medium, and as a result, we are seeing only a very thin slice of what is actually happening on the ground. 

Now, with AI, we can facilitate better translation, as well as a much richer understanding of semantics between cultures and practices. We’re able to expand from that very thin slice into a much broader view. You can begin to understand, for example, why Koreans approach social impact the way they do from a historical or possibly even religious perspective. Why do the Chinese do it that way? Why do the Indonesians do it that way?

Today, we are looking at the surface of understanding these differences, because all the work is being converted to English, and that’s how we’re sharing information. But if we are able to leverage tech to go into the true essence of our practices, similarities, and differences, then I think that there is a tremendous opportunity to learn from one another across this region. And then the same would be applicable globally; AI will enable greater understanding and fuel culture change.

It makes me think of a locksmith who tells you, “don’t make a key from a copy of a key—always go back to the master key.” Are you saying that moving everything in and out of English relegates you to an inferior copy or understanding?

Exactly. Because we’ve gone through translation layers over time—translations created and interpreted by people, and that’s inherently imperfect. I really love your key to a key analogy.

What is your don’t-miss daily or weekly read? How do you stay informed and not overwhelmed?

I’m going to go back to Stoicism again. 

I read Meditations by Marcus Aurelius—his reflections from his time as the emperor. I find it provides a valuable perspective beyond the news of the world and of the development sector. These readings help me stay grounded. 


5 Minutes with Fagoroye Ayomide
https://data.org/news/5-minutes-with-fagoroye-ayomide/
Tue, 19 Aug 2025

The Capacity Accelerator Network (CAN) is building a workforce of purpose-driven data and AI practitioners to unlock the power of data for social impact. With experience in both industry and academia, Fagoroye Ayomide has collaborated with international organizations and research groups, contributing to projects aimed at preserving linguistic diversity and improving AI accessibility. Ayomide is a CAN Africa Low-Resource Language Fellow.

In this rapidly evolving AI landscape, what was the “aha moment” when you realized the opportunity and the necessity to train AI on low-resource languages to unlock and accelerate Africa’s AI potential?

My “aha” moment came while searching for a text-to-speech API and discovering that Yoruba was still not supported in Google Cloud Text-to-Speech. For all of Google’s massive technology, it utterly failed in Yoruba. It became evident that language diversity in Africa was not just underrepresented but essentially missing from mainstream AI. I realized that we have to move quickly before we end up with a massive reservoir of cultures invisible to AI. The opportunity for big tech to bridge this gap and support efforts to train AI on low-resource languages is both urgent and transformative. This is not just about saving language heritage but about supporting inclusive innovation in healthcare, education, governance, and beyond. The future of Africa’s AI depends on language equity as a foundation, not an option.

When developing and training responsible AI for African and other low-resource language communities, practitioners must prioritize community-centered data collection, transparent use of models, and long-term benefit sharing.

Fagoroye Ayomide, Product Development and Innovation Lead, NitHub

How does your work with low-resource languages move the needle for data and AI for social impact work? What are some of the biggest challenges you have faced in doing so?

My focus is on developing ethically sourced and linguistically valid speech data for low-resource languages, specifically Yoruba and Hausa. This enables voice tools for different sectors (healthcare, education, citizen engagement, etc.), particularly in underserved communities. One of our most significant challenges is infrastructure: low-resource languages often have no digitized data, no standard orthographies, and variable speaker representation. There are also institutional challenges, such as under-resourced research and low levels of collaboration between technologists and linguists. However, by filling in these gaps, we empower local voices to shape AI as an instrument of inclusion.

What are the diverse, interdisciplinary skills that are required to do this work effectively? Which one surprised you the most?

Effective work in AI for low-resource languages demands a fusion of skills, including machine learning, computational linguistics, cultural anthropology, community organizing, trust building, and ethics. What surprised me most was realizing how important trust building is: engaging language speakers as co-creators, not merely as data providers, is essential to ensuring quality data. It reminded me that the future of AI isn’t just about code and compute; it’s about people. And unless we prioritize people, our models will always remain incomplete.

What key responsible practices should AI practitioners prioritize when developing and training AI systems in African—or other low-resource languages?

When developing and training responsible AI for African and other low-resource language communities, practitioners must prioritize community-centered data collection, transparent use of models, and long-term benefit sharing. Practitioners must adopt practices such as participatory dataset design, multilingual documentation, and culturally sensitive model assessments. Other guardrails include strict consent protocols and preventing models from perpetuating negative stereotypes. Trust from the community is a requirement: without it, communities will not cooperate, and the resulting data will be both ethically and technically flawed. This trust is earned through respect and feedback loops, and by treating speakers not as mere data points but as rights-holders to their data.

Inclusive AI cannot be built in silos. Governments offer policy frameworks, technologists bring tools, NGOs offer a ground-level perspective, and communities provide lived experience.

Fagoroye Ayomide, Product Development and Innovation Lead, NitHub

What is the importance of cross-sector collaborations in building inclusive AI? What advice would you offer to people interested in this work?

Inclusive AI cannot be built in silos. Governments offer policy frameworks, technologists bring tools, NGOs offer a ground-level perspective, and communities provide lived experience. Cross-sector collaboration ensures that AI systems are developed to be linguistically fair, culturally relevant, and scalable. My advice to aspiring AI equity advocates: start locally, stay humble, and collaborate widely. Learn from linguists, community elders, and social scientists. Prioritize impact over novelty, and remember that language is identity. Working in AI language equity is not just a technical challenge but a social justice mission. You must build for, and with, the communities you aim to serve.


5 Minutes with MyKinzi Roy (https://data.org/news/5-minutes-with-mykinzi-roy/, Wed, 09 Jul 2025): MyKinzi Roy talks about how working with a range of AI tools and her background in graphic design support Mississippi AI Collaborative’s clients in creative and strategic ways.

The Mississippi AI Collaborative (MSAIC) is an awardee of data.org and Microsoft’s Generative AI Skills Challenge. Through its ecosystem approach, MSAIC has engaged over 4,000 Mississippians in AI skills training. Its AI Agency program connects AI-trained students with local nonprofits and small businesses to provide hands-on AI training and customized AI solutions. MyKinzi Roy, an AI Agency apprentice and recent graduate of Jackson State University, now serves as Graphic Designer and Brand Director for the Mississippi AI Collaborative.

When did you realize that AI could be more than just a skill, but a way to solve real challenges and contribute meaningfully to Mississippi’s future?

I realized the potential of AI when I first joined the Mississippi AI Collaborative (MSAIC) AI Agency as an apprentice. As we began working with entrepreneurs in the Jackson, Mississippi area, I saw that AI wasn’t just a tech tool; it helped us turn ideas into impact.

When people understand how to use AI, they gain the same creative power and operational efficiency as larger, better-resourced organizations. But access and education are essential, and training and trust-building are just as important as the technology itself.

MyKinzi Roy, Graphic Designer and Brand Director, Mississippi AI Collaborative

Tell us about your work at the AI Agency initiative. What inspired you to be a part of this initiative?

At the MSAIC AI Agency, we help small businesses and startups grow by integrating AI into their workflows. As an apprentice, I was trained on a range of AI tools and brought my background in graphic design to support clients in creative and strategic ways. We taught entrepreneurs how to use ChatGPT to develop business plans, then transformed those plans into investor-ready pitch decks using Gamma. We also introduced them to tools for building and embedding custom chatbots on their websites to improve customer experience. In branding sessions, we used Adobe Express’s generative AI features to help them create logos, define color palettes, and establish a cohesive brand identity. 

What initially drew me to the program was fear—fear of how AI might impact creative work. Social media often painted AI as a threat to artists and designers. But thanks to Dr. Brittany Myburgh’s encouragement, I joined the initiative and quickly saw a different side of AI. I realized it wasn’t replacing creativity; it was expanding it. That shift in perspective helped me overcome my fears and recognize AI’s potential to empower entrepreneurs, especially here in Mississippi. Now, I’m passionate about helping others see AI not as something to fear, but as a tool to amplify their ideas and impact.

Your work connects you directly with small businesses and nonprofits. How has applying AI in these real-world settings deepened your understanding of its social and economic potential?

Working directly with small businesses has completely reshaped how I view AI. It’s not just advanced technology; it’s a practical tool for leveling the playing field. Many of the entrepreneurs we serve at the MSAIC AI Agency have incredible ideas and strong foundations, but they often lack time, staffing, or access to resources. AI helps bridge that gap. We’ve seen firsthand how tools like ChatGPT and Gamma have helped entrepreneurs polish their business ideas, and in some cases, win pitch competitions. That kind of momentum can be the spark that turns a vision into a fully operating business.

One example that stands out is a counseling professional we worked with. We helped her integrate a custom chatbot into her website, which now answers frequently asked questions about her services. This simple solution saved her hours of time each week and made her services more accessible to clients. Experiences like this deepened my belief that digital equity is essential. When people understand how to use AI, they gain the same creative power and operational efficiency as larger, better-resourced organizations. But access and education are essential, and training and trust-building are just as important as the technology itself. This work has shown me that AI, when used with intention, has the power to create real economic and social opportunity.

Through this journey, you have learned to use AI and helped small businesses understand its value. What’s one unexpected thing you’ve learned, and one thing you’ve taught others, that’s helped create a ripple effect beyond your own experience or project?

One unexpected thing I’ve learned throughout this journey is that mindset matters. I thought learning the tools would be hard, but my experience using the technology has enabled me to shift people’s perspectives on AI, letting them know that AI isn’t necessarily here to replace their jobs. If used right, it’s here to help.

Something I’ve taught others is that AI doesn’t have to be hard to work with. When we showed entrepreneurs how to use ChatGPT for tasks like writing social media captions, drafting emails, or even basic prompt engineering, they were amazed! Those “I can do this!” moments unlocked a new level of confidence and potential for the entrepreneurs.

I saw AI as a tool to enhance my workflow and design process, even sometimes help me expand on ideas that probably would’ve taken me days or weeks to think of. The tools allowed me to work faster to create content and communications.

MyKinzi Roy, Graphic Designer and Brand Director, Mississippi AI Collaborative

How has learning about AI helped your career?

Learning about AI has expanded my career. As a graphic designer, I saw AI as a threat to my creativity at first because of what I had seen on social media, but once I started working with the MSAIC, my perspective changed. I saw AI as a tool to enhance my workflow and design process, even sometimes help me expand on ideas that probably would’ve taken me days or weeks to think of. The tools allowed me to work faster to create content and communications. Instead of replacing me, it became like a partner. It’s also helped me beyond my art career, as I’ve gained skills in consulting, digital marketing, and more. 

Through the AI Agency, we’ve helped entrepreneurs build a business while using AI. This opportunity has pushed me out of my comfort zone, helped me grow into a multidisciplinary creative, and bridged the gap between art and technology. AI isn’t here to replace my job and creativity; it’s here to help and empower them. 


Pathways to Impact: Dr. Alister Martin (https://data.org/news/pathways-to-impact-dr-alister-martin/, Wed, 02 Jul 2025)


Pathways to Impact is a series of conversations with data for social impact leaders exploring their career journeys. Perry Hewitt, Chief Strategy Officer of data.org, spoke with Dr. Alister Martin, an ER physician and founder of Link Health, an organization that uses technology—including AI—to help low-income patients enroll in United States federal government assistance programs while they wait in healthcare settings.

Can you tell us about yourself and about your work at the intersection of tech and health?

I’m an ER physician and the founder of Link Health, which is focused on helping eligible patients enroll in cash assistance benefit programs. At Link Health, we’ve unlocked a way to have artificial intelligence help do that for thousands of Americans. 

If you’ve worked in—or even been in—an emergency department, you understand that our healthcare system is not functioning. You can only do so much work as an emergency physician until you begin to think, “There’s gotta be a better way to do this. There’s a better approach than just jumping in the river and trying to save the drowning person.” Eventually, somebody has to go upstream and figure out why all of these people are drowning in the first place. 

My work is really focused on solving this problem; throughout my career, it has taken multiple shapes. For example, it’s been helping people who are in healthcare waiting rooms be able to vote. Through our initiative, Vot-ER, we help people register to vote while they’re waiting to be seen. We believe that through the power of the vote, they can create a healthier and more robust healthcare system.

And that work also includes what we’re doing here with people and technology at Link Health. We’re trying to figure out how to use someone’s time in the waiting room to get them connected to the cash assistance benefit programs that they are already eligible for. The data on who is eligible is all there, but we need to make people aware and get them connected. Through that program, we’re proud to have helped over 3,300 patients enroll in vital federal benefit programs, ranging from rental and cash assistance to contributions toward a child’s 529 college savings account. Altogether, that’s more than $4.4 million in financial support distributed over the past two years. And we’re just getting started.

Our first iteration of Link Health required throwing a lot of humans at the work… but the reality is that we're never going to scale our impact like that.

Dr. Alister Martin, Founder and CEO, Link Health

Were there any unexpected blockers or pivots in your career journey?

I came to medicine from a low-income community. I had never been on the clinical side of a healthcare setting, so I quite frankly didn’t know what I was getting myself into. By the time I was in my third year at Harvard Medical School (HMS), which is when you do your clinical rotations, I felt very disappointed. I think I would use the word “heartbroken” about the way that the healthcare system works.

There were things that I witnessed as an idealistic 23- and 24-year-old that broke my heart, and many of them had to do with the way that we treated patients who were either low-income or uninsured. So I had a decision to make: do I stay in this field and continue committing the harm, or do I head in a different direction? And that’s when I left HMS for two years.

During that time, I went to the Harvard Kennedy School of Government to learn how government works. I don’t think I actually learned how government works; instead, I found more questions to ask. After that, I worked in politics for the Governor of Vermont, and that was an eye-opening experience in learning how to get things done. And then I came back and did my residency at Massachusetts General Hospital; I was committed to medicine, but to doing it a different way. This career detour improved my work as a physician and a changemaker.


You spoke of coming to medicine from a low-income community. How has that been an advantage or a disadvantage in your career?

I think that the great gift of those who live on the margins of society is that they learn how to survive on the margins of society. If you are then put at the center of society, where it’s a resource-rich environment, you can see clearly what those resources can bring. 

You learn on the margins that you have to practice survival-based efficiency, where every decision has to be made with the knowledge that you may not get this chance again. So when you move to a place where there are more resources, it’s like Disney World. You learn to spot the opportunities quickly. I think it’s a real advantage.

Were you able to identify the kind of changes and solutions you felt were needed when you returned to medicine? 

As a 20-something-year-old in medical school, I definitely didn’t understand what the solutions were. I just knew how bad the problem was. I know that there is a really wide health/wealth gap for Medicaid patients that I see here in Massachusetts. With an average income of $22,000 a year for Medicaid enrollees, how are we expecting these patients to be healthy? People are wrestling with the skyrocketing prices for groceries and with putting gas in their tanks. 

I don’t promise to know what the solutions are now–just what some of them could be. 

My experience working at the White House taught me that there is some money available: federal and state programs that many low-income patients are eligible for but aren’t accessing. And that planted the idea for Link Health as one possible solution.

We don’t necessarily have to overhaul the whole system–although one could argue that probably some part of that is necessary. But we can optimize what we have today, using new technologies and existing dollars.

Our first iteration of Link Health required throwing a lot of humans at the work. And we still have a workforce: today, we have almost a hundred certified patient navigators whom we’ve trained to go out in those waiting rooms and enroll people in these programs. But the reality is that we’re never going to scale our impact like that. That’s why the work that we are doing with the AI2AI Challenge award from data.org and the Mastercard Center for Inclusive Growth is so important. If we can leverage artificial intelligence using large language models, we can more effectively blanket clinical spaces with an invitation to check if you are eligible for these programs, and then help you with the process of enrolling. We are learning that a great deal can be done with a very well-written algorithm. With this technology, we don’t need to rely on human intervention alone, but can focus on the places where humans in the loop are critical in aiding patients with their applications.

Your medical degree and your data and AI vision clearly inform your contribution to social impact. Which other skills have offered the greatest return in your work — which abilities have really supercharged your career pathway?

The thing that comes to mind right away is one specific framework I learned during my time at the Kennedy School: adaptive leadership. I have no disclosures here; this framework simply changed how I perceive and practice leadership. Interestingly, it’s taught by a physician, a psychiatrist in fact.

Here’s how I understand it: in medicine, you have all these houses of medicine, these different specialties. There are different ways to be a physician: an ear, nose, and throat doctor; a dermatologist; a rheumatologist, and each of these has their specific practices and ways in which they are a doctor. It took me going to the Kennedy School to realize that leadership is like that, too: there are lots of ways to exercise leadership.

The adaptive leadership framework holds that leadership is not a position: it’s a verb. Leading is an exercise, and in a well-run organization, every person in that organization is empowered to try to do the work of pushing the organization towards solving its real challenges–and not shying away from the reality of the challenges. The framework also taught me how to think politically about coalitions and build partnerships that are mutually beneficial to do the work that the community needs.

If we can leverage artificial intelligence using large language models, we can more effectively blanket clinical spaces with an invitation to check if you are eligible for these programs, and then help you with the process of enrolling.

Dr. Alister Martin, Founder and CEO, Link Health

What advice do you have for someone who is new to the data and AI for social impact field? 

The things that I would share right up front: you have to fall in love with the problem that you’re solving, not the solution that you are creating. You need to deeply understand the contours of the problem and why it exists before you settle on a solution. 

For example, when you try to address a problem, you have to make sure you understand the secondary consequences of your solution. Who stands to lose from you addressing or fixing this problem? Some part of the system currently benefits from the way the problem exists, and you need to understand why that is.

It helps to have an understanding of community organizing. That’s the framework I’m speaking to you from. As a community organizer, you’re not saying, “This is the solution we need to move forward with.” Instead, it’s more like orchestration. The piece of advice I would share about tackling a problem is to shift away from a mindset of “I have the solution, and I need to persuade people to go with it” and toward one of orchestration.

The second piece of advice that I would share is this: you need to be using artificial intelligence yesterday. If you are not, you will be subsumed. It is an incredibly important resource, and I’ll leave it at that. Concretely, we do this through traditional skilling, but we also have watch parties for an hour every other week. For example, I’ll do a session where I am sharing my screen and showing you how I use ChatGPT’s new operator program, or giving an example of how deep research works. It can be as little as 10 or 15 minutes of an example, but it sparks learning and conversation. And then you can unlock the creativity of the team; people will give you ideas on ways to use this tool to maximize your productivity far and beyond what you could come up with alone. I really like the gelling that happens when we share and compare notes.

What’s your don’t-miss read?

The Politico Pulse daily newsletter is very, very good. It goes into the appropriate amount of detail on healthcare legislation.


5 Minutes with Oluwaseun Nifemi (https://data.org/news/5-minutes-with-oluwaseun-nifemi/, Wed, 04 Jun 2025): Oluwaseun Nifemi has been instrumental in advancing AI-driven solutions across sectors such as education, healthcare, digital and financial inclusion, governance, and advocacy.

The Capacity Accelerator Network (CAN) is building a workforce of purpose-driven data and AI practitioners to unlock the power of data for social impact. CAN Africa Language Fellow Oluwaseun Nifemi is advancing purpose-driven AI solutions across sectors and domains through her roles as a senior data scientist at EqualyzAI and a team lead at Data Science Nigeria.

In this rapidly evolving AI landscape, what was the “aha moment” when you realized the opportunity and the necessity to train AI on low-resource languages to unlock and accelerate Africa’s AI potential?

I realized how often low-resource African languages are left out of global natural language processing (NLP) advancements: most machine translation models underperform for these languages, not because the languages are less important, but because the data, infrastructure, and compute are not readily available. This divide doesn’t just limit innovation; it marginalizes millions of people, hindering access to critical sectors like primary health care, education, and agriculture, where AI is needed to bridge the gap.

The “aha moment” for me was realizing that if we are serious about AI being a force for inclusive growth, we can no longer overlook the languages our people in Africa speak daily; supporting them is a developmental imperative. Imagine AI-driven conversational agents that can offer basic medical advice in Hausa to a rural village in Northern Nigeria, bridging the gap created by the shortage of health professionals. We can democratize access to technology by enabling localized solutions that empower communities across the continent.

Projections suggest AI can contribute over $1.2 trillion to Africa’s GDP by 2030, which shows that we have a massive opportunity and an urgent responsibility. The necessity becomes clear: without AI models trained on Africa’s linguistic diversity, the continent risks being left behind in the global revolution. Training AI on low-resource languages is not just about catching up but creating truly inclusive and scalable solutions. The vision of AI that genuinely reflects the continent’s contexts drives my work to help accelerate Africa’s AI future.

The necessity becomes clear: without AI models trained on Africa's linguistic diversity, the continent risks being left behind in the global revolution.

Oluwaseun Nifemi, Lead, Technical Delivery (Consulting & Services), Data Science Nigeria (DSN)

How does your work with low-resource languages move the needle for data and AI for social impact work? What are some of the biggest challenges you have faced in doing so?

Nigeria has over 500 languages, making it one of the most linguistically diverse countries in the world. However, over 90 percent of these languages are considered low-resource in Natural Language Processing (NLP), meaning they lack the digital resources, corpora, computational infrastructure, and datasets needed to build effective language models. And that’s a problem because, without language inclusion, we’re building technology that doesn’t serve everyone. My work focuses on closing that gap by training AI in local African languages and building localized AI solutions to unlock access to critical services in education, healthcare, agriculture, and finance for communities that have historically been left out. When a student in a rural area can learn in their mother tongue or a patient can describe symptoms to a chatbot that understands them, that’s impact.

But it has not been easy. One of the biggest challenges we faced was acquiring locally nuanced datasets. Community-driven data collection, such as crowdsourcing, is promising but slow and resource-intensive. Additionally, limited access to computational infrastructure hinders model training. These barriers slow progress and prevent low-resource communities from accessing effectively trained AI models in their local languages. Despite the hurdles, we’re seeing progress. Our homegrown Equalyz Crowd allows you to collect multi-modal datasets and be incentivized. Through our startup, equalyzAI, we have built a language-inclusive product that drives health, education, and financial inclusion. We move the needle by making inclusion the foundation, not an afterthought, fostering equitable development, preserving cultural heritage, and driving socioeconomic progress.

What are the diverse, interdisciplinary skills that are required to do this work effectively? Which one surprised you the most?

Developing effective low-resource language models that authentically reflect Indigenous communities’ natural conversational style, cultural nuances, and religious contexts requires an interdisciplinary blend of skills. Of course, you need strong technical skills in machine learning, speech recognition, and model optimization, especially for real-time applications like speech-to-text systems. But what often gets overlooked is just how crucial linguistic expertise is, particularly from native speakers who are also trained linguists. Their ability to capture subtle tonal shifts, idiomatic expressions, and grammatical structures is non-negotiable for accuracy in low-resource language processing.

Beyond linguistics and engineering, we also needed cultural and anthropological insight, with ethical data governance, because we’re representing people’s identities, histories, and worldviews. That’s why community engagement is at the center of the process. We’ve had to co-design data collection methods with local communities to build trust and ensure the outputs are validated in context, so that they are meaningful and respectful.

The identity element challenged me to think beyond the algorithm and focus on inclusive, ethical AI development that reflects the people it serves.

What key responsible practices should AI practitioners prioritize when developing and training AI systems in African—or other low-resource languages?

Developing AI for African and other low-resource languages demands responsible practices to ensure ethical and inclusive outcomes. Firstly, I strongly recommend Privacy-by-Design principles and robust consent protocols: prioritizing participant sovereignty and the careful handling of culturally sensitive data is the foundation of responsible AI development. Interdisciplinary teams, including data governance experts and legal compliance specialists, must enforce these guardrails to align with local regulations.

Secondly, it is important to address linguistic biases in training data, which can distort cultural representation and reduce model accuracy. Data collectors should curate diverse datasets and account for dialectal variations to preserve meaning across contexts.

I attest that community trust is foundational. Engaging local communities fosters linguistic authenticity, improves data quality, and builds confidence in AI systems. Transparent collaboration, including co-designing data collection with indigenous stakeholders, ensures models reflect cultural nuances and meet community needs. Communities may resist participation without trust, undermining data integrity and model effectiveness. By prioritizing ethical stewardship and community trust, AI products can drive equitable impact that preserves cultural heritage and drives social progress in low-resource settings.

Beyond linguistics and engineering, we also needed cultural and anthropological insight, with ethical data governance, because we're representing people's identities, histories, and worldviews.

Oluwaseun Nifemi, Lead, Technical Delivery (Consulting & Services), Data Science Nigeria (DSN)

What is the importance of cross-sector collaborations in building inclusive AI? What advice would you offer to people interested in this work?

I advocate for partnerships among AI startups, tech companies, academic institutions, governments, and local communities. This pool of expertise, resources, and perspectives addresses linguistic and cultural gaps in AI systems.

These partnerships minimize challenges like scarce datasets and limited infrastructure by leveraging shared resources, such as community-driven data collection or government-funded computing facilities. They also promote ethical practices, balancing technological advancement and cultural preservation.

I advise those interested in AI language equity to prioritize interdisciplinary learning and community engagement. Gain NLP, linguistics, and ethics skills and develop cultural competence to collaborate effectively with diverse stakeholders. Seek mentorship from experts in low-resource language AI and contribute to open-source projects to build practical experience. Finally, it is important to engage communities actively; their insights are critical for creating relevant, trustworthy AI systems.


The post 5 Minutes with Oluwaseun Nifemi appeared first on data.org.

5 Minutes with Tsosheletso Chidi, Ph.D. https://data.org/news/5-minutes-with-tsosheletso-chidi-ph-d/ Tue, 06 May 2025 18:15:38 +0000 https://data.org/?p=30545 Dr. Tsosheletso Chidi is a linguistic researcher, multilingual writer, poet, and literary curator. Tsosheletso was one of the first Africa Low-Resource Language fellows.

The post 5 Minutes with Tsosheletso Chidi, Ph.D. appeared first on data.org.

The Capacity Accelerator Network (CAN) is building a workforce of purpose-driven data and AI practitioners to unlock the power of data for social impact. CAN Africa Language Fellow Dr. Tsosheletso Chidi is a linguistic researcher, multilingual writer, poet, literary curator, and lecturer in the Department of African Languages and research fellow in the Computer Science Department at the University of Pretoria.

In this rapidly evolving AI landscape, what was the “aha moment” when you realized the opportunity and the necessity to train AI on low-resource languages to unlock and accelerate Africa’s AI potential?

My “aha moment” came when I realised that my intensive background in the cultural and creative sectors actually qualified me for this opportunity. For so long, many of us working in language and the arts believed that AI belonged solely to engineers and data scientists. We excluded ourselves from conversations that deeply affect the futures of our languages and cultures. But then I recognised that our absence was the gap and our inclusion is the opportunity. Working with indigenous African languages, I saw how AI systems often mistranslate, misrepresent, or ignore them entirely. Training AI on these languages isn’t just a technical task — it’s a cultural necessity. Without it, Africa’s digital future risks being shaped by systems trained on foreign values. Inclusive AI can empower communities to define themselves in digital spaces not as data points, but as agents of meaning.

Working with indigenous African languages, I saw how AI systems often mistranslate, misrepresent, or ignore them entirely. Training AI on these languages isn’t just a technical task — it’s a cultural necessity.

Tsosheletso Chidi, Ph.D., Lecturer, Department of African Languages, and Research Fellow, Department of Computer Science, University of Pretoria

How does your work with low-resource languages move the needle for data and AI for social impact work? What are some of the biggest challenges you have faced in doing so?

My work with low-resource African languages advances AI for social impact by centering people, not just data. I come from a literary and linguistic background, and I approach this work by asking: What’s the best way to engage with these languages meaningfully? That question continues to guide me. One of my biggest challenges is holding deep conversations with data scientists and asking hard questions like: Who is this for? My role is making sure African communities are not reduced to data sources, that our cultural nuances are respected, and that this work is not treated as a niche for profit. I see myself as a bridge helping to facilitate relationships between communities and AI practitioners. For me, social impact in AI means ensuring that African languages and the people who speak them are central to the design and purpose of these systems.

What are the diverse, interdisciplinary skills that are required to do this work effectively? Which one surprised you the most?

Linguistic expertise, community engagement, ethical research practices, technical literacy, machine translation, project management, advocacy, and policy awareness are diverse interdisciplinary skills required to do this work effectively. Linguistic and cultural knowledge is foundational, especially when working with indigenous languages that carry deep histories and nuanced meanings. At the same time, you need the technical ability to navigate the language of AI, machine translation, and data ethics — even if you’re not building the models yourself.

The skill that surprised me the most was community engagement. I had underestimated how central it would be to the success of AI projects involving low-resource languages. Building trust, working ethically with people, and communicating across power dynamics are not side tasks — they are the core of the work. Without community participation, even the most accurate models fall flat in impact and relevance. This work doesn’t sit neatly in one discipline. It thrives in the space between them, and that’s where I’ve found my purpose. Being able to connect the dots, sit at multiple tables, and bridge knowledge systems is what allows me to push for more inclusive, culturally grounded AI in Africa.

What key responsible practices should AI practitioners prioritize when developing and training AI systems in African or other low-resource languages?

Key responsible practices include transparency about how data will be used, co-designing projects with language speakers, and ensuring that communities benefit from the tools being developed. AI practitioners must also avoid extractive data collection, where languages are sourced for model training with little regard for who owns, controls, or understands the outcomes. Community trust isn’t just important – it’s essential. Without it, you may get data, but not meaning. Communities need to see themselves reflected in the process, have access to the outputs, and feel respected in how their languages and stories are handled. This is especially true in African contexts where colonial histories have left deep scars around knowledge extraction. Guardrails should include ethical review processes tailored to cultural contexts, open dialogue between technologists and language practitioners, and mechanisms to track and respond to potential harm. Inclusion must be more than representation; it must be active collaboration. Ultimately, AI systems built for low-resource languages will only be sustainable if they are built with the people who speak them.

Communities need to see themselves reflected in the process, have access to the outputs, and feel respected in how their languages and stories are handled. This is especially true in African contexts where colonial histories have left deep scars around knowledge extraction.

Tsosheletso Chidi, Ph.D., Lecturer, Department of African Languages, and Research Fellow, Department of Computer Science, University of Pretoria

What is the importance of cross-sector collaborations in building inclusive AI? What advice would you offer to people interested in this work?

Cross-sector collaboration is essential to building inclusive AI because language equity cannot be solved by one field alone. Technologists bring the tools, but linguists, cultural workers, educators, and communities bring the context. Without that blend, we risk building systems that are technically impressive but socially disconnected. In my work, I have seen how the most meaningful AI projects emerge when people from different sectors come together to listen, challenge assumptions, and co-create new approaches. To those interested in AI language equity, my advice is simple: start where you are, and bring your full skillset. You don’t need to be a coder to matter. You need curiosity, humility, and a deep respect for the languages and people you’re working with. Learn to speak across disciplines. Ask hard questions about ethics, power, and access. And most importantly, remember that inclusion is not just about who’s in the room, but about who gets to shape the outcome.



Charting the AI for Good Landscape – A New Look https://data.org/news/charting-the-ai-for-good-landscape-a-new-look/ Fri, 02 May 2025 15:36:06 +0000 https://data.org/?p=30073 We are revisiting our ‘Charting the Data for Good’ landscape to help folks get a handle on AI for Good today and to reflect on how the field has expanded, diversified, and matured.

The post Charting the AI for Good Landscape – A New Look appeared first on data.org.

More than 50% of nonprofits report that their organization uses generative AI in day-to-day operations. We’ve also seen an explosion of AI tools and investments. 10% of all the AI companies that exist in the US were founded in 2022, and that number has likely grown in subsequent years.  With investors funneling over $300B into AI and machine learning startups, it’s unlikely this trend will reverse any time soon.

Not surprisingly, the conversation about Artificial Intelligence (AI) is now everywhere, spanning from commercial uses such as virtual assistants and consumer AI to public goods, like AI-driven drug discovery and chatbots for education. The dizzying number of new AI programs and initiatives – over 5,000 new tools listed in 2023 on AI directories like TheresAnAI alone – can make the AI landscape challenging to navigate in general, much less for social impact. Luckily, four years ago, we surveyed the Data and AI for Good landscape and mapped out distinct families of initiatives based on their core goals. Today, we are revisiting that landscape to help folks get a handle on AI for Good and to reflect on how the field has expanded, diversified, and matured.

A Quick Look Back

When we last charted the landscape, we found it helpful to categorize “Data/AI for Good” groups by the different philosophical outcomes they aimed to achieve. Some wanted to supercharge nonprofits with data science, while others sought to safeguard society from biased and unethical applications of machine learning. Others concentrated on training and nurturing new data scientists or funding the entire ecosystem. As we explore the landscape again, we see that many of these founding ideas remain consistent, and in this AI revolution, trust, investment, and skilling in data cannot be left behind. Data is the underpinning of AI and will continue to be the critical building block of AI infrastructure. However, there are two main changes we observed:

  • In the first landscape, all the programs we surveyed focused on supporting the field of data and AI for good. We didn’t include many examples of people using data science for good in their work because the list would have been enormous. Today, however, we are still early in our understanding of the ways AI can support social impact. Therefore, we are highlighting some example organizations across domains that are using AI for social impact.
  • In the first landscape, there were not many consumer products for data science and AI that we would mention. Data visualization tools like Tableau or machine learning platforms like AWS SageMaker existed, but they still largely required technologists to run and were intended for technical tasks. In this wave of AI, we have seen a surge of consumer software, from general Large Language Models (LLMs) like ChatGPT to AI-ified programs like Salesforce. In this landscape, we name some examples of off-the-shelf AI software that nonprofits could use to illustrate the difference between off-the-shelf AI tools and custom AI solutions.

The Five Branches of AI for Good

First, let’s discuss the taxonomy we used for categorizing AI for Good programs. In reviewing a wide array of current AI for Good programs, we noticed they tended to fall into five main branches:

  1. Creating More AI for Good: This category includes initiatives that focus on expanding the ecosystem’s ability to build AI solutions for social impact. They often provide one or more of the following:
    1. Data: Open data catalogs like the Humanitarian Data Exchange (HDX) or Radiant Earth Foundation’s MLHub feed nonprofits, researchers, and innovators with clean, relevant datasets that can be used to train AI models for social impact.
    2. Cloud: In order to store all of the data needed for AI, cloud providers like AWS and Google have provided free storage and cloud credits to nonprofits and mission-driven organizations. In addition, very few organizations will have the local infrastructure needed to build and run AI models, so cloud providers like Amazon Web Services or Google Cloud Platform provide a wide breadth of online services that nonprofits can access at free or discounted rates to train and deploy AI.
    3. Compute: Tech companies like Google and NVIDIA offer limited discounts on GPU usage for nonprofits, while initiatives like Empire AI plan to provide GPUs to researchers in New York so they can train and run models more cost-effectively.
    4. Models: Projects like BigScience’s BLOOM or Hugging Face’s Hub Community Models offer open-source AI models ready for fine-tuning by nonprofits and government agencies.
    5. Expertise: Groups like Zindi Africa and Tech to the Rescue connect AI professionals with social-sector organizations that do not have those technical skills in-house.
    6. Guidance and Frameworks: Programs such as OpenAI’s AI Academy or NTEN publish frameworks, tutorials, and practical resources to help nonprofits design, build, and acquire AI solutions to meet their mission.
    7. Funding: Grantmakers like Google.org’s AI Impact Challenge or the Mastercard Center for Inclusive Growth provide financial support to nonprofits, startups, and researchers working on social impact AI projects.
  2. Building Skills to Create More AI for Good: Another group of initiatives invests in capacity development through building skills and knowledge across diverse communities. They often offer workshops, online courses, mentorship programs, and in-person training. Programs like TechChange’s AI and Machine Learning Courses for Social Good, Fruit Punch’s AI for Good Challenges, and Data for Development’s ChatGPT Trainings target everyone from high school students to seasoned professionals looking to pivot into AI for social impact. This educational dimension is critical since nonprofits and grassroots organizations need to grow in-house capacity to design, implement, and maintain AI solutions that truly match their missions.
  3. Ensuring Responsible AI: As AI has become more prevalent, so too have concerns about issues like bias, privacy, and accountability. Indeed, one of the biggest shifts we’ve observed since our initial mapping is the growth of efforts dedicated to governing AI responsibly. Mozilla Foundation’s Trustworthy AI Initiative, IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems, and the research from AI Now Institute represent some of the many groups providing ethical frameworks, research, and policy guidance. These organizations underscore the notion that “for good” cannot merely mean deploying AI anywhere but deploying it in a way that safeguards human rights, fairness, and transparency. They tend to provide frameworks and guidance on ethical practices, with a few groups, like ORCAA Consulting, providing ethical AI audits.
  4. AI Software for Nonprofits’ “Back Office”: Beyond programmatic uses of AI, there is also a growing demand for solutions that streamline nonprofit operations, communications, human resources (HR), fundraising, and data management. Platforms like Textio help reduce bias in job postings, DonorSearch AI offers donor scoring for fundraising, while AI-driven CRMs from Salesforce.org and Keela aim to maximize fundraising efficiency. Although these “back office” tools do not always directly tackle a programmatic challenge (e.g., health, education, or climate), they play a crucial role in saving nonprofits time and money, which can be redirected toward mission-critical work. We wanted to name this set of programs and software as its own branch in the taxonomy to distinguish it from custom AI solutions that nonprofits build for mission delivery and to acknowledge that most nonprofits will experience this type of AI before any other.
  5. Demonstrating AI Use in Key Issue Areas: As mentioned above, AI is so new that we felt it would be useful to highlight a few organizations that are applying AI to their program delivery as examples for the sector. We’ve selected a few examples in categories that data.org focuses on:
    1. Health: Jacaranda Health applies AI-assisted tools in developing healthcare infrastructures, while Google DeepMind’s AlphaFold fosters scientific breakthroughs in protein folding and drug discovery.
    2. Financial Inclusion: Grameen Foundation uses AI to help underbanked folks feel comfortable accessing capital the way they want to, and microloan platforms like Tala analyze mobile phone data to offer small lines of credit to unbanked populations.
    3. Climate: Climate TRACE uses AI to track greenhouse gas emissions globally, SilviaTerra helps optimize forest management, and Earthshot Labs models ecosystem restoration.
    4. Education: Tools like Khan Academy’s Khanmigo offer AI-based personalized learning, while Amira Learning helps young readers practice literacy skills.
    5. Food: Chatbots like Digital Green’s Farmer.Chat provide customized support to farmers, while programs like Zero Waste Zero Hunger use AI to 3D-scan and image discarded food to quantify food waste.
    6. Gender: Initiatives such as Myna Mahila Foundation’s AI Chatbot and Girl Effect’s sexual health chatbots address issues ranging from women’s health to girls’ education barriers.

These targeted projects serve as exemplars, real-world demonstrations that AI can improve social outcomes when properly tailored and thoughtfully implemented. This list highlights a few of the many impactful initiatives in this space.

The Myna Mahila Foundation developed an AI chatbot providing sexual health and family planning information in local languages and dialects and trained 227 workers in prompt engineering. Photo by Myna Mahila Foundation

Trends and Observations

While these five branches capture where AI for Good is today, they also reveal new opportunities and challenges:

Generative AI is on the rise, but don’t forget “traditional” AI/ML

The organizations in our landscape serve a mix of Generative and non-Generative AI needs. That distinction is worth noting – there is still much value in traditional AI/ML approaches, such as prediction and classification. Many of the data and service providers, like AWS, are still necessary for driving innovation. What we observe is that more organizations are trying new Generative AI tools because they are making their way into consumer products and have a low barrier to entry. However, for custom in-house work, more sophisticated, but traditional, methods are needed. This finding aligns with observations from the GivingTuesday AI Readiness assessment of nonprofits, which found that 70% of “high AI use” nonprofits reported using Generative AI, while 30% of that same group reported using AI tools for data organization and interpretation, and 20% for prediction.

Early Prototyping Is Easier, but Implementation is Hard

In our research, we found that many nonprofits can take advantage of some of the new AI software available or AI 101 training to create powerful proof-of-concept AI models for their work. We see many initiatives designed to help nonprofits with this stage, offering them data, training, and models. However, moving from prototype to deployment still poses challenges, from insufficient budget for ongoing maintenance to a lack of dedicated AI staff who can iterate and refine solutions. There are very few, if any, programs or initiatives at this stage that help organizations move past a ChatGPT prototype to a working piece of software. This “implementation gap” highlights a significant need for support services that take an AI solution from demo to an operational reality in the nonprofit context.

Back Office vs. Programmatic AI

As mentioned above, AI in nonprofits often falls into two categories: improving back office processes (like HR and donor management) or driving mission-related outcomes (like tracking deforestation or diagnosing diseases). The skills, tools, and training needed to adopt AI in these two domains can vary significantly. We see a developing divide in the marketplace between solutions built for operational efficiency in common operations and those aimed at a nonprofit’s unique programs. Philanthropy will need to develop ways for nonprofits to get AI support for both types of workflow.

Problem Identification Is Key

Jake Porway and Jenni Warren of Decoded Futures leading an AI workshop for nonprofits and technologists. Photo by Decoded Futures

One assumption many AI for Good programs make is that nonprofit partners already know which problems are ripe for AI intervention. In practice, identifying the right AI problem, where data is available, where impact is feasible, and where technology can truly address the root causes, requires specialized expertise. We need more consultative or discovery-focused services, like Decoded Futures – a program that helps nonprofits and technologists in NYC collaborate to identify AI-driven solutions – in this space to help mission-driven groups pinpoint where AI can add the most value.

The Rise of Responsible AI Awareness

Since our previous landscape analysis, the sector has made real strides in recognizing the importance of responsibility, bias mitigation, and inclusive design in AI. Numerous new initiatives, think tanks and policy groups are shaping robust guidelines to ensure AI serves humanity in equitable ways. This progress is encouraging, yet the field could still benefit from practical mechanisms that hold organizations accountable to principles that can be easily adopted and implemented.

Success Stories Need Common Evaluation Standards

Showcasing success stories is powerful, but many new AI for Good efforts lack clear metrics for measuring outcomes or calculating return on investment, especially in this early stage. Without rigorous and mutually agreed upon evaluation frameworks, it is difficult for funders, nonprofits, and governments to understand which interventions have the greatest social or environmental impact. Consequently, it remains challenging to rally large-scale funding or mobilize broader adoption. The landscape will need to develop more evaluation resources or organizations to move forward.

Where Do We Go From Here?

The rapid acceleration of data and AI for Good has yielded a new wave of thoughtful practitioners and some promising returns, but significant gaps remain. To address these, we recommend the following steps forward:

Educators in remote areas of Greece trained in generative AI by Generative AI Challenge awardee, The Tipping Point, in collaboration with 100mentors.

Filling the Evaluation Gap

It is currently difficult for social sector organizations to assess in advance how much impact AI will have for them, making it hard to justify the investment. More robust evaluation and estimation models could solve this. Philanthropy could fund the creation of these metrics and frameworks. Tech companies could also help by sharing the evaluation frameworks for their models and helping social sector organizations apply them to their work.

Cross-Sector Learning

The AI for Good field needs more robust knowledge-sharing across sectors. Whether it’s government, philanthropy, nonprofits, academia, or industry, each brings unique expertise and data. Encouraging multi-stakeholder forums, hands-on workshops where technologists and nonprofits work together, or peer-learning communities can accelerate solutions that meet real community needs. Tech companies also have a unique role to play in this solution by offering any guidance on best practices, be that in model selection, model use, or ensuring AI alignment and safety.

Increasing Resourcing

There is no doubt that one of the biggest bottlenecks to deploying AI in the social sector is resourcing. Philanthropic funders can increase the supply of flexible and experimental funding so nonprofits can experiment with AI. Technology companies can play a leadership role in providing discounted or pro bono versions of their models and services to nonprofits. They can also create or join programs to donate their technical staff’s time to support nonprofits in identifying and solving their AI problems or setting up the most efficient infrastructure for their needs.

A Continuing Broker Role

data.org’s Accelerate Conference — panel on bringing sectors together to accelerate the field of data and AI for social impact.

Finally, mapping this rapidly evolving landscape is an ongoing challenge. There’s a critical opportunity for neutral brokers, like data.org or other convening bodies, to maintain and update taxonomies of AI for Good. By doing so, they can help nonprofits find trusted partners, highlight responsible AI guidelines, and keep the momentum going for breakthroughs that genuinely change lives.

Since our first deep dive into Data for Good efforts, the field has grown tremendously – with actors moving from building to accelerating. More data, more models, more funding, and more guidance than ever before are flowing into mission-driven AI endeavors. At the same time, more clarity is needed for organizations seeking to navigate this maze, particularly when it comes to problem scoping, program design, long-term implementation, and reporting. We hope this updated view of the AI for Good landscape provides a helpful roadmap for innovators, funders, and practitioners to find each other, collaborate, and continue driving real, transformative impact.

References

AI For Good Impact Report | Division for Inclusive Social Development (DISD). social.desa.un.org/sdn/ai-for-good-impact-report.

“AI for Good Lab – Microsoft Research.” Microsoft Research, 4 Feb. 2025, www.microsoft.com/en-us/research/group/ai-for-good-research-lab.

AscendixTech. “How Many AI Companies Are There in the World?” AscendixTech, 15 Jan. 2025, https://ascendixtech.com/how-many-ai-companies-are-there/

data.org. “Charting the Data for Good Landscape – an Update.” data.org, 25 May 2022, data.org/news/charting-the-data-for-good-landscape-an-update.

Edge Delta. “The Future is Now: AI Startup Statistics in 2024.” Edge Delta, 2024, https://edgedelta.com/company/blog/ai-startup-statistics

Esser, Kat, et al. The Food Value Chain X Cloud Technology: A Landscape Analysis – North America. 2024, pages.awscloud.com/rs/112-TZM-766/images/AWS_Deloitte_Food_Value_Chain_X_Cloud_Technology.pdf.

Generosity AI Working Group. AI Readiness Survey Report 2024. GivingTuesday, 2024, https://ai.givingtuesday.org/ai-readiness-report-2024/#organizational-capacity-and-ai-readiness.

Hall, Brian. “Real-world Gen AI Use Cases From the World’s Leading Organizations.” Google Cloud Blog, 19 Dec. 2024, cloud.google.com/transform/101-real-world-generative-ai-use-cases-from-industry-leaders.

Hinde, Gillian, et al. AI For Impact: Strengthening AI Ecosystems for Social Innovation. report, 2024.

“Meet Five Organisations Using AI for Social Good.” EU About Amazon, 19 Apr. 2024, www.aboutamazon.eu/news/aws/meet-five-organisations-using-ai-for-social-good.

Expert Panel. “How Tech Companies Can Grow Their Social Influence and Impact.” Forbes, 23 May 2024, www.forbes.com/councils/forbestechcouncil/2024/05/22/how-tech-companies-can-grow-their-social-influence-and-impact.

Disclaimer

The AI for Social Good landscape is dynamic and rapidly evolving. The information presented in the report represents a snapshot in time and may not reflect the full scope or future direction of ongoing efforts in this space.

About the Authors

Perry Hewitt

Chief Strategy Officer

data.org

Chief Strategy Officer Perry Hewitt joined data.org in 2020 with deep experience in both the for-profit and nonprofit sectors. She oversees the global data.org brand and how it connects to partners and funders around the world.

Read more

With inputs from Shubhi Vijay and Joanne Jan.


5 Minutes with Nikhila Vijay https://data.org/news/5-minutes-with-nikhila-vijay/ Mon, 27 Jan 2025 17:08:56 +0000 https://data.org/?p=29012 Nikhila Vijay is a research manager in the energy, environment, and climate change space at Abdul Latif Jameel Poverty Action Lab (J-PAL) South Asia.

The post 5 Minutes with Nikhila Vijay appeared first on data.org.

The Capacity Accelerator Network (CAN) is building a workforce of purpose-driven data and AI practitioners to unlock the power of data for social impact. Nikhila Vijay is a research manager in the energy, environment, and climate change space at Abdul Latif Jameel Poverty Action Lab (J-PAL) South Asia. Nikhila was one of the first India Data Capacity Fellows at J-PAL, working with the host organization, Janaagraha, a nonprofit transforming the quality of life in India’s cities and towns.

Tell us about your work with the Capacity Accelerator Network. What impact or outcome are you most excited or encouraged by? How do you measure your success?

I worked with the Research and Insights team at an organization called Janaagraha, a well-known foundation in India that works with governments and citizens to improve the delivery of infrastructure and services in urban areas.

The project I worked on focused on identifying pathways for cleaner energy transitions in household fuel usage among the urban poor in the state of Odisha. My primary responsibility was analyzing a survey dataset of over 5,000 respondents, which provided insights into household fuel usage and behaviour in low-income settlements. The goal was to identify potential cleaner energy sources, with a specific focus on cooking fuels.

Additionally, I worked on developing tools to:

  1. Link cooking fuel usage to health outcomes, one of the key evaluation criteria for cleaner fuels.
  2. Understand the costs associated with transitioning to cleaner fuels.

Given the substantial body of research linking adverse health outcomes to indoor air pollution caused by traditional cooking fuels, I was particularly excited about quantifying the costs of transitioning to cleaner cooking fuels. I was also excited to work with spatial data and to provide a visual map of fuel usage across Odisha.

For me, success meant two things: first, completing the project deliverables to meet my team’s expectations and achieving the outcomes I had envisioned at the outset. Second, and more challenging in the short term, was creating outputs that could be used by relevant stakeholders to inform their decision-making processes.

It is essential to take time to understand specific objectives and activities of the government and other implementation or policy partners, and try to engage with them at each stage of the project.

Nikhila Vijay, Research Manager, The Abdul Latif Jameel Poverty Action Lab (J-PAL)

How has your approach and work evolved based on what you have learned and observed from your colleagues across the CAN network?

As the point person for data analysis on this project, I relied on the CAN network to identify spatial datasets at the district and sub-district levels and to navigate reliable publicly available health datasets. It pushed me to seek help – reaching out to colleagues from other projects, clearly explaining the outputs I wanted to achieve, and leveraging their expertise and contacts to support my work.

From my colleagues at Janaagraha, I learned how to meaningfully integrate different research methods and analyses with inputs from stakeholders across community, government, and industry, creating a cohesive and comprehensive framework for the study. Specifically, I gained valuable experience in co-creating energy pathways with inputs from local community members, and in designing a representative sampling approach in the absence of administrative data.

There can be a disconnect between academia or government institutions and social impact organizations doing the work on the ground. How do you build trust and increase adoption?

From my experience, the following approaches have proven effective:

  1. Involving relevant stakeholders in the design process. It is essential to take time to understand specific objectives and activities of the government and other implementation or policy partners, and try to engage with them at each stage of the project. This could be done by providing regular status updates, incorporating stakeholder feedback, and having a primary point of contact, among other measures. It is not possible to achieve this for every project, though, as it really depends on the scope of work and the nature of your partnerships. 
  2. Recognizing that communication and advocacy are integral to the research process. In many cases, research efforts end with the publication of a paper or presentation. However, building trust and fostering adoption requires actively promoting your research and tailoring communication to meet the needs of each stakeholder. This process demands significant time, resources, and persistent effort but is crucial to ensuring that your work is meaningfully utilized.

It is important to note that despite identifying the best approaches to minimize disconnect, you can still be constrained by the initial theory of change design, funding, organizational capacity, and your own connections with the government and other stakeholders. Transparency about those kinds of limitations can help maintain trust and confidence.

My advice to data practitioners is to accept that there are trade-offs to working in the social impact sector, and not to be discouraged by the bureaucracy and limited resource capacity that are more common in this space.

Nikhila Vijay, Research Manager, The Abdul Latif Jameel Poverty Action Lab (J-PAL)

How is your data-driven work driving impact at the intersection of climate and health? What is the importance of an interdisciplinary approach to data training?

Using the National Family and Health Survey, I explored correlations between household cooking fuel choice and related health impacts caused by indoor air pollution, such as heart disease and respiratory issues in adult women. I also spatially mapped this data at the sub-district and cluster level in Odisha to identify areas where this correlation was strong.
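As a purely illustrative sketch of this kind of analysis (the table, column names, and values below are hypothetical, not actual NFHS variable codes), a fuel-to-health comparison can be summarized with a simple group-by:

```python
import pandas as pd

# Hypothetical NFHS-style household records (illustrative only).
households = pd.DataFrame({
    "district":     ["Khordha", "Khordha", "Puri", "Puri", "Cuttack", "Cuttack"],
    "cooking_fuel": ["wood", "lpg", "wood", "lpg", "wood", "kerosene"],
    "resp_illness": [1, 0, 1, 0, 0, 1],  # 1 = respiratory symptoms reported
})

# Flag "clean" vs. "polluting" fuels, then compare illness rates.
CLEAN_FUELS = {"lpg", "electricity", "biogas"}
households["clean_fuel"] = households["cooking_fuel"].isin(CLEAN_FUELS)

illness_by_fuel = households.groupby("clean_fuel")["resp_illness"].mean()
print(illness_by_fuel)
```

In a real survey analysis, sampling weights and confounders such as income, ventilation, and urban/rural location would need to be accounted for before reading anything causal into such a comparison.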

In the case of my project, it became clear that we needed more robust data to measure these linkages, and that the sample size at smaller geographic units was not representative enough to draw localized insights. 

An interdisciplinary approach to data training is very important as it helps you ask the right questions of your data. If you are a generalist in the data space, you are usually connected with domain experts who can advise you. In the absence of such experts, it is essential to have training in reading policy reports and research papers to help you understand a particular sector or the linkages between two or more sectors. Such training can enrich your insights by prompting you to contextualize your analyses across demographic cuts such as geography, caste, and gender.

What advice do you have for data practitioners as they begin purpose-driven careers? Why should they apply their skills in the social impact sector?

Having worked previously in the corporate sector, I think there is a difference between the private and public sectors in the rigor applied to designing and achieving goals using data. While results- and impact-driven work is the norm in the private sector, it is often less mature in government and the social impact sector. This is why we need skilled data personnel in the social impact sector who can improve service delivery through monitoring and reporting, and who can help develop quantifiable goals and measure the impact of programs so that funds are channeled to the most effective and efficient policies. 

My advice to data practitioners is to accept that there are trade-offs to working in the social impact sector, and not to be discouraged by the bureaucracy and limited resource capacity that are more common in this space. It is important to remember that you are working towards social and economic good in a sector that is meaningful to you, and that your skills are helping to improve livelihoods and address these structural challenges. 


3 Rules to Accelerate AI Inclusion and Impact https://data.org/news/3-rules-to-accelerate-ai-inclusion-and-impact/ Tue, 07 Jan 2025


What does it look like when AI is done right?

Bidart: Respecting our traditions, our cultures, and our way of communication

Nicoll: Eliminating barriers to information

Shukla: Absolute synchrony between agriculture, apiculture, food security – all led by empowered women

Ravinutula: Highest quality primary care and health access in developing countries

Ruxin: A reflection of the people that are designing it and the information that it’s fed

AI is everywhere. In the news. In our social media algorithms. At home and work.  

But who has access to it? How is it being used? And for the benefit of whom?

We tackled those questions with five powerful social sector leaders from around the world at “Shaping the Future of Inclusive Growth,” a webinar to celebrate the awardees of the Artificial Intelligence to Accelerate Inclusion Challenge. The AI2AI Challenge is data.org’s fourth global innovation challenge, made possible with support from the Mastercard Center for Inclusive Growth. 

Here are three rules for designing and deploying AI with intention:

  1. Lead with Local
    Leading with the local context is a pillar of our work at data.org, and was a clear priority in the AI2AI solutions. IDInsight depends on political buy-in and local expertise, scaling an AI-powered call center with more than 40,000 health extension workers providing real-time medical guidance on complex cases. BEEKIND, an initiative of Buzzworthy Ventures, includes humans in the loop for their AI app that troubleshoots beekeeping issues and optimizes hive placements.
  2. Build Trust
    The International Rescue Committee interacts with users at the worst moment of their lives. Their AI tool, Signpost, provides lifesaving information to displaced people across 30 languages, supporting 400 trained people in providing personalized answers in plain language, and with recognizable branding specific to each of the 30 countries in which they operate. Link Health cuts through stigmatizing, overly complex, and duplicative benefit enrollment processes to increase access to critical federal aid benefits – $80 billion of which go unclaimed in the U.S. each year.
  3. Think Outside the (Data) Box
    Strong AI tools rely on strong data. In Latin America, eighty percent of businesses are informal or unbanked, meaning they lack the information banks typically rely on to award loans. Women entrepreneurs are especially at risk of financial exclusion. Quipu set out to collect and analyze other meaningful inputs, like videos of the businesses.

These compelling takeaways from social impact leaders around the world reinforce why data.org seeks to source, support, and help scale innovative approaches that use data and AI to tackle some of the most intractable problems we face. Our global innovation challenges consistently surface groundbreaking, responsibly designed solutions like these, and as a platform for partnerships to democratize data, we can help grow them to reach more people, in more places, across more sectors.

5 Minutes with Alokita Jha https://data.org/news/5-minutes-with-alokita-jha/ Thu, 19 Dec 2024

The Capacity Accelerator Network (CAN) is building a workforce of purpose-driven data and AI practitioners to unlock the power of data for social impact. Alokita Jha is a CAN India Data Fellow at the Abdul Latif Jameel Poverty Action Lab (J-PAL) South Asia, working with the host organization, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), where she is leveraging data for evidence-based policymaking. Alokita also graduated from the first cohort of the Professional Executive Development Program in Data Science for Social Impact at Ashoka University, as part of her CAN training.

Tell us about your work with the Capacity Accelerator Network. What impact or outcome are you most excited or encouraged by? How do you measure your success?

My work with the Capacity Accelerator Network (CAN) focuses on leveraging data science to drive climate and health research through interdisciplinary, data-driven approaches. A significant outcome of this work is translating research findings into actionable insights. One of my key initiatives involved linking climate variability with malnutrition rates and birth outcomes across Indian districts. Using two rounds of nationally representative National Family Health Survey datasets, the project establishes a robust baseline assessment of climate change’s impacts on children’s nutritional outcomes in India.

This research provides a spatial baseline of the health infrastructure’s capacity to deliver essential care for women and children in drought-prone districts. Identifying hotspot areas where health systems need strengthening helps address the projected impacts of climate change effectively.

I measure success by how well my research translates into actionable insights and how these learnings contribute to future projects. Moving forward, I aim to further my career in data-driven policymaking, focusing on sustainable and impactful solutions.

For data practitioners starting their careers, my advice is to align your technical expertise with a clear social purpose. Understand the needs of underserved communities and design solutions that incorporate their priorities and feedback.

Alokita Jha, Data Fellow, The Abdul Latif Jameel Poverty Action Lab (J-PAL)

How has your approach and work evolved based on what you have learned and observed from your colleagues across the CAN network?

Collaboration within the CAN network has profoundly influenced my approach and broadened my perspective. Engaging with colleagues from diverse disciplines has highlighted the importance of adapting global frameworks to regional contexts. 

For instance, insights from the network encouraged me to incorporate additional indicators into climate vulnerability assessments, creating a more comprehensive understanding of how climate change affects health. Initially, my work took a single-lens approach, but collaboration exposed me to innovative datasets and methods, helping me analyze climate and health pathways through multiple lenses. This interdisciplinary mindset has significantly enhanced my ability to generate actionable insights.

The collaborative environment has also enriched my technical expertise in data science, equipping me with innovative methods and practical strategies to tackle real-world challenges. This ongoing exchange of knowledge and capacity-building training sessions have allowed me to continuously improve the impact of my work.

There can be a disconnect between academia or government institutions and social impact organizations doing the work on the ground. How do you build trust and increase adoption?

To bridge the disconnect between academia, government institutions, and social impact organizations, it is crucial to establish a robust evidence base that serves as a shared foundation. Involving all stakeholders—government institutions, academia, and social impact organizations—at every stage of the process is essential, from evidence generation to decision-making and implementation.

Building trust requires transparency and consistent communication. A participatory approach ensures that stakeholders feel valued and are more likely to adopt and sustain proposed solutions. This collaboration not only aligns goals across groups but also enhances the relevance and scalability of interventions, fostering long-term trust and impact.

The social impact sector provides unique opportunities to witness the tangible benefits of your work, whether improving public health systems or addressing climate risks.

Alokita Jha, Data Fellow, The Abdul Latif Jameel Poverty Action Lab (J-PAL)

How is your data-driven work driving impact at the intersection of climate and health? What is the importance of an interdisciplinary approach to data training?

My data-driven work identifies and addresses vulnerabilities at the intersection of climate and health, focusing on the needs of vulnerable communities. By integrating climate data such as rainfall variability and droughts with health indicators like malnutrition rates and maternal health, I identify hotspots and prioritize interventions for regions most at risk.
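As a hedged sketch of this hotspot idea (the district names, indicators, and cutoffs below are invented for illustration, not the project's actual data), merging a climate table with a health table and flagging districts that exceed both thresholds might look like:

```python
import pandas as pd

# Illustrative district-level tables (hypothetical names and values).
climate = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "drought_years": [4, 1, 5, 2],  # droughts in the last decade
})
health = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "stunting_pct": [42.0, 18.5, 39.0, 25.0],  # under-5 stunting rate
})

merged = climate.merge(health, on="district")
# Hotspot = drought-prone AND high malnutrition (example cutoffs).
merged["hotspot"] = (merged["drought_years"] >= 3) & (merged["stunting_pct"] >= 35)
hotspots = merged.loc[merged["hotspot"], "district"].tolist()
print(hotspots)  # districts to prioritize for intervention
```

The same join-and-threshold pattern extends naturally to GIS work: once the flags exist per district, they can be joined to administrative boundary shapes for the kind of spatial visualization described below.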

I believe an interdisciplinary approach is critical to understanding the complexity of climate-health linkages. These interconnected issues require perspectives from various fields to develop nuanced insights and effective solutions. This holistic understanding is pivotal for sustainable interventions.

Interdisciplinary training plays a vital role by equipping practitioners with the ability to analyze complex datasets while understanding their broader societal implications. For instance, training in tools like Geographic Information Systems (GIS) empowers professionals to visualize and act on the intricate connections between climate and health, fostering both technical competence and impactful decision-making.

What advice do you have for data practitioners as they begin purpose-driven careers? Why should they apply their skills in the social impact sector?

For data practitioners starting their careers, my advice is to align your technical expertise with a clear social purpose. Understand the needs of underserved communities and design solutions that incorporate their priorities and feedback.

The social impact sector provides unique opportunities to witness the tangible benefits of your work, whether improving public health systems or addressing climate risks. Applying data for social good allows practitioners to address systemic inequities and contribute to solving urgent societal challenges. The work is deeply fulfilling, offering both a sense of purpose and a chance to make lasting contributions to the public good.

