Keeping Human Agency in Tech to Minimize Bias

Repeatedly we’ve seen that biases persist when human agency is removed from technology. Recently, for example, an algorithm in the UK assigned biased, inaccurate A-Level grades to students based on socioeconomic standing, upholding a failing status quo and showing how harmful AI can be when left to act on its own. Artificial intelligence, and technology more broadly, should uphold ethical standards and help, not harm, the people it was built to support.

These were some of the main points from a conversation between Rumman Chowdhury (Ph.D., Global Lead for Responsible AI, Accenture Applied Intelligence), Megan Smith (CEO shift7, Former U.S. Chief Technology Officer), and Karen Hao (Senior AI Reporter, MIT Technology Review) during the Ellevate Mobilize Women Week Technology and Bias panel.

Algorithms are not inherently biased; they are taught to be, just as humans are. So how do we build algorithms that operate by ethical standards, don’t impose biased results on society, and preserve the human element in technology? Our AI experts had a few key tips for building ethical and responsible AI.

Diversify the Teams Building Algorithms

One of Megan Smith’s main tips throughout the session: diversify the teams and diversify the agendas. Diverse teams should work on the data sets used to build and train algorithms, which means including both technologists and non-technologists to bring an interdisciplinary approach.

Rumman Chowdhury suggested adding data scientists to the team, especially in fields that address human services. As Chowdhury mentioned, “[The] beautiful thing about data science and AI is you’re solving real problems that affect real humans - you’re adding back the context to what’s taught [in school], which can be narrow, dry, and boring. You’re making things that help people’s lives.”

Data scientists need to know as much about society and humanity as about data and programming, so adding them expands the interdisciplinary makeup of teams producing tech that supports diverse humans. Building diverse teams also means breaking down silos and incentivizing interdisciplinary teams to find ethical issues in what you are building. Most technology jobs offer no incentive to find and fix ethical issues, but companies can change that.

Diversify the Data Sets on Which Algorithms Are Trained

In healthcare, we see medical bias based on gender. Why? Because of the under-representation and lack of diversity among the people on whom medicine is studied and taught. For example, heart attack symptoms differ between women and men, and between white men and Black men, yet the majority of research on symptoms and treatment has been conducted on white men.

We see the same issue in facial recognition technology. It doesn’t work as well as it could because, as Smith noted, the data sets these algorithms are trained on include far too few young women of color. When data sets do not accurately represent the population they are designed to support, algorithms will fall short of learning how to support all people.
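To make this concrete, here is a minimal sketch in Python of the kind of audit a team could run before and after training: measuring how well each demographic group is represented in the data and how accuracy breaks down per group. The group labels, toy data, and threshold below are illustrative assumptions, not anything presented by the panelists.

```python
from collections import Counter

# Hypothetical evaluation records: (demographic_group, prediction_was_correct).
# In practice these would come from your annotated training/evaluation data.
evaluation_results = [
    ("white_men", True), ("white_men", True), ("white_men", True),
    ("white_men", True), ("white_men", False),
    ("young_women_of_color", True), ("young_women_of_color", False),
]

# 1. Representation audit: what share of the data does each group make up?
group_counts = Counter(group for group, _ in evaluation_results)
total = sum(group_counts.values())
for group, count in group_counts.items():
    print(f"{group}: {count}/{total} examples ({count / total:.0%} of data)")

# 2. Per-group accuracy: a single aggregate accuracy number can hide
# large gaps between well-represented and under-represented groups.
MIN_ACCURACY = 0.80  # illustrative threshold, chosen by the team
for group in group_counts:
    outcomes = [correct for g, correct in evaluation_results if g == group]
    accuracy = sum(outcomes) / len(outcomes)
    flag = "  <-- below threshold, collect more data" if accuracy < MIN_ACCURACY else ""
    print(f"{group}: accuracy {accuracy:.0%}{flag}")
```

A check like this won’t fix bias on its own, but it makes under-representation visible early, when adding data is still cheaper than fixing a deployed system.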

Apply Consumer Pressure for Ethical Review

As Chowdhury noted during the MWW2020 session, virtually no technology is so vital that it can’t go through ethical review before being released to the world. Moreover, technologies critical to human development should have ethical review built into the development process to ensure they will actually support the humans for whom they were designed. If not, companies will spend more time fixing the tech than they spent producing it in the first place.

However, unlike in the public sector, there is no access to due process in the private sector when something goes wrong with an algorithm. Big tech companies will always generate revenue and be able to develop technologies quickly, but we can change what they are rewarded for.

What are the levers of change you can pull at these companies as an employee, a consumer, or an investor - what can be done to encourage the ethical use of technology? Change can be driven by regular people and by the companies whose money keeps big tech running. These are economic decisions that impact big tech. Public pressure can push these companies to build algorithms with success metrics that reflect social impact and ethics, such as aligning business performance indicators to sustainable development goals.

It’s critical to keep human agency within the technology development process. As AI has expanded in recent years, we’ve started to personify it, treating algorithms as if they had their own will and mindset. When an algorithm returns a result we disagree with and want to contest, it isn’t acceptable to be unable to explain why, or to say it’s simply what the algorithm decided.

AI is a function of how we build it, who builds it, and what we teach it over time. Humans should take the time and responsibility to train AI on diverse data sets and to build ethical review into design and development, so that technology betters the lives of the people it is built to support, not the other way around.

----------------------------------------------------------------------------------------

Thank you to Accenture - U.S. for sponsoring Mobilize Women Week 2020. Is your company interested in sponsoring MWW2021? Contact corporate@ellevatenetwork.com for more information.



