
Women in AI: Allison Cohen on building responsible AI projects


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Allison Cohen is the senior applied AI projects manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. She works with researchers, social scientists and external partners to deploy socially beneficial AI projects. Cohen’s portfolio of work includes a tool that detects misogyny, an app to identify online activity from suspected human trafficking victims, and an agricultural app to recommend sustainable farming practices in Rwanda.

Previously, Cohen was a co-lead on AI drug discovery at the Global Partnership on Artificial Intelligence, an organization that guides the responsible development and use of AI. She also served as an AI strategy consultant at Deloitte and as a project consultant at the Center for International Digital Policy, an independent Canadian think tank.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

The realization that we could mathematically model everything from recognizing faces to negotiating trade deals changed the way I saw the world, which is what made AI so compelling to me. Ironically, now that I work in AI, I see that we can’t — and in many cases shouldn’t — be capturing these kinds of phenomena with algorithms.

I was exposed to the field while I was completing a master’s in global affairs at the University of Toronto. The program was designed to teach students to navigate the systems affecting the world order — everything from macroeconomics to international law to human psychology. As I learned more about AI, though, I recognized how vital it would become to world politics, and how important it was to educate myself on the topic.

What allowed me to break into the field was an essay-writing competition. For the competition, I wrote a creative writing piece describing how psychedelic drugs would help humans stay competitive in a labor market riddled with AI, and that paper qualified me to attend the St. Gallen Symposium in 2018. My invitation, and subsequent participation in that event, gave me the confidence to continue pursuing my interest in the field.

What work are you most proud of in the AI field?

One of the projects I managed involved building a dataset containing instances of subtle and overt expressions of bias against women.

For this project, staffing and managing a multidisciplinary team of natural language processing experts, linguists and gender studies specialists throughout the entire project life cycle was crucial. It’s something that I’m quite proud of. I learned firsthand why this process is fundamental to building responsible applications, and also why it’s not done enough — it’s hard work! If you can support each of these stakeholders in communicating effectively across disciplines, you can facilitate work that blends decades-long traditions from the social sciences and cutting-edge developments in computer science.

I’m also proud that this project was well received by the community. One of our papers got a spotlight recognition in the socially responsible language modeling workshop at one of the leading AI conferences, NeurIPS. Also, this work inspired a similar interdisciplinary process that was managed by AI Sweden, which adapted the work to fit Swedish notions and expressions of misogyny.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

It’s unfortunate that in such a cutting-edge industry, we’re still seeing problematic gender dynamics. It’s not just adversely affecting women — all of us are losing. I’ve been quite inspired by a concept called “feminist standpoint theory” that I learned about in Sasha Costanza-Chock’s book, “Design Justice.”

The theory claims that marginalized communities, whose knowledge and experiences don’t benefit from the same privileges as others, have an awareness of the world that can bring about fair and inclusive change. Of course, not all marginalized communities are the same, and neither are the experiences of individuals within those communities.

That said, a variety of perspectives from those groups are critical in helping us navigate, challenge and dismantle all kinds of structural challenges and inequities. That’s why a failure to include women can keep the field of AI exclusionary for an even wider swath of the population, reinforcing power dynamics outside of the field as well.

In terms of how I’ve handled a male-dominated industry, I’ve found allies to be quite important. These allies are a product of strong and trusting relationships. For example, I’ve been very fortunate to have friends like Peter Kurzwelly, who’s shared his expertise in podcasting to support me in the creation of a female-led and -centered podcast called “The World We’re Building.” This podcast allows us to elevate the work of even more women and non-binary people in the field of AI.

What advice would you give to women seeking to enter the AI field?

Find an open door. It doesn’t have to be paid, it doesn’t have to be a career and it doesn’t even have to be aligned with your background or experience. If you can find an opening, you can use it to hone your voice in the space and build from there. If you’re volunteering, give it your all — it’ll allow you to stand out and hopefully get paid for your work as soon as possible.

Of course, there’s privilege in being able to volunteer, which I also want to acknowledge.

When I lost my job during the pandemic and unemployment was at an all-time high in Canada, very few companies were looking to hire AI talent, and those that were hiring weren’t looking for global affairs students with eight months’ experience in consulting. While applying for jobs, I began volunteering with an AI ethics organization.

One of the projects I worked on while volunteering was about whether there should be copyright protection for art produced by AI. I reached out to a lawyer at a Canadian AI law firm to better understand the space. She connected me with someone at CIFAR, who connected me with Benjamin Prud’homme, the executive director of Mila’s AI for Humanity Team. It’s amazing to think that through a series of exchanges about AI art, I learned about a career opportunity that has since transformed my life.

What are some of the most pressing issues facing AI as it evolves?

I have three answers to this question that are somewhat interconnected. I think we need to figure out:

  1. How to reconcile the fact that AI is built to be scaled while ensuring that the tools we’re building are adapted to fit local knowledge, experience and needs.
  2. If we’re to build tools that are adapted to the local context, we’re going to need to incorporate anthropologists and sociologists into the AI design process. But there are a plethora of incentive structures and other obstacles preventing meaningful interdisciplinary collaboration. How can we overcome this?
  3. How can we affect the design process even more profoundly than simply incorporating multidisciplinary expertise? Specifically, how can we alter the incentives such that we’re designing tools built for those who need them most urgently rather than for those whose data or business is most profitable?

What are some issues AI users should be aware of?

Labor exploitation is one of the issues that I don’t think gets enough coverage. Many AI models learn from labeled data using supervised learning methods. When a model relies on labeled data, people have to do this tagging (i.e., someone adds the label “cat” to an image of a cat). These people (annotators) are often the subjects of exploitative practices. For models that don’t require the data to be labeled during the training process (as is the case with some generative AI and other foundation models), datasets can still be built exploitatively, in that the developers often neither obtain consent nor provide compensation or credit to the data creators.
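To make the labeling step concrete, here is a minimal, hypothetical sketch of supervised learning on labeled data. The texts, labels, and scikit-learn pipeline are illustrative assumptions rather than anything from Cohen’s projects; each label stands in for work that, in real systems, human annotators perform at vastly larger scale.

```python
# Hypothetical illustration of "labeled data" in supervised learning:
# every training example is paired with a human-provided label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up dataset: texts plus annotator-assigned labels.
texts = [
    "a photo of a cat sleeping on a couch",
    "a dog fetching a ball in the park",
    "a kitten playing with yarn",
    "a puppy chewing on a shoe",
]
labels = ["cat", "dog", "cat", "dog"]  # each label was added by a human annotator

# Train a simple classifier on the labeled examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model generalizes from the annotators' labels to new, unlabeled inputs.
print(model.predict(["a cat chasing a laser pointer"]))
```

The point of the sketch is simply that the model’s behavior is downstream of the annotators’ judgments, which is why the conditions under which that labeling happens matter.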

I would recommend checking out the work of Krystal Kauffman, who I was so glad to see featured in this TechCrunch series. She’s making headway in advocating for annotators’ labor rights, including a living wage, the end to “mass rejection” practices, and engagement practices that align with fundamental human rights (in response to developments like intrusive surveillance).

What is the best way to responsibly build AI?

Folks often look to ethical AI principles in order to claim that their technology is responsible. Unfortunately, ethical reflection can only begin after a number of decisions have already been made, including but not limited to:

  1. What are you building?
  2. How are you building it?
  3. How will it be deployed?

If you wait until after these decisions have been made, you’ll have missed countless opportunities to build responsible technology.

In my experience, the best way to build responsible AI is to be cognizant of — from the earliest stages of your process — how your problem is defined and whose interests it satisfies; how the orientation supports or challenges pre-existing power dynamics; and which communities will be empowered or disempowered through the AI’s use.

If you want to create meaningful solutions, you must navigate these systems of power thoughtfully.

How can investors better push for responsible AI?

Ask about the team’s values. If the values are defined, at least in part, by the local community and there’s a degree of accountability to that community, it’s more likely that the team will incorporate responsible practices.


