

Miriam Havelin

Danielle Lim

Julia Forrester

Laurissa Barnes-Roberts

Image: Steelcase

Image: Herman Miller archives
Al Horr, Y., Arif, M., Kaushik, A., Mazroei, A., Katafygiotou, M., & Elsarrag, E. (2016). Occupant productivity and office indoor environment quality: A review of the literature. Building and Environment, 105, 369-389.
Becker, F. D., Gield, B., Gaylin, K., & Sayer, S. (1983). Office design in a community college: Effect on work and communication patterns. Environment and Behavior, 15(6), 699-726.
Bergström, J., Miller, M., & Horneij, E. (2015). Work environment perceptions following relocation to open-plan offices: A twelve-month longitudinal study. Work, 50(2), 221-228.
Bernstein, E. S., & Turban, S. (2018). The impact of the ‘open’ workspace on human collaboration. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1753), 20170239.
Bernstein, E., & Waber, B. (2019). The Truth About Open Offices. Harvard Business Review. Retrieved from https://hbr.org/2019/11/the-truth-about-open-offices
Berry, L. (November 12, 2018). Bürolandschaft: how the way we work has shaped the office. Medium. Retrieved from https://medium.com/interact-software/b%C3%BCrolandschaft-how-the-way-we-work-has-shaped-the-office-e360a53f25e1
Brem, A. (June 28, 2019). Open-plan offices are not inherently bad – you’re probably just using them wrong. The Conversation. Retrieved from https://theconversation.com/open-plan-offices-are-not-inherently-bad-youre-probably-just-using-them-wrong-113689
Brennan, A., Chugh, J. S., & Kline, T. (2002). Traditional versus open office design: A longitudinal field study. Environment and Behavior, 34(3), 279-299.
Brookes, M. J., & Kaplan, A. (1972). The office environment: Space planning and affective behavior. Human Factors, 14(5), 373-391.
Collie, M. (March 29, 2019). ‘It’s not one size fits all’: Why open office plans don’t work for everyone. Global News. Retrieved from https://globalnews.ca/news/5043169/open-co-working-office-space-individual-independent-desks-cubicles/
Dans, E. (May 28, 2019). Why Are We Still Arguing About Open-Plan Offices? Forbes.
Davis, M. C., Leach, D. J., & Clegg, C. W. (2011). The physical environment of the office: Contemporary and emerging issues. International Review of Industrial and Organizational Psychology, 26(1).
Edwards, P. (October 4, 2017). The origins of our open office hellscape. Vox. https://www.vox.com/videos/2017/10/4/16414808/open-offices-history?fbclid=IwAR2GsTAL6He2if0BShExxCLQ7kajV8SjyBwsPG8jRA7L0sci5YxKKkdTKrU
Evans, G. W., & Johnson, D. (2000). Stress and open-office noise. Journal of Applied Psychology, 85(5), 779.
Frontczak, M. (2012). Human comfort and self-estimated performance in relation to indoor environmental parameters and building features.
Ganster, D. C., & Schaubroeck, J. (1991). Work stress and employee health. Journal of Management, 17(2), 235-271.
Gibson, E. (2017). Frank Lloyd Wright designed the Johnson Wax offices as a forest open to the sky. Dezeen. https://www.dezeen.com/2017/06/14/frank-lloyd-wright-johnson-wax-administration-building-headquarters-racine-wisconsin-open-plan-office/
Hedge, A. (1982). The open-plan office: A systematic investigation of employee reactions to their work environment. Environment and Behavior, 14(5), 519-542.
Hongisto, V., & Haapakangas, A. (2008, June). Effect of sound masking on workers in an open office. In Proceedings of Acoustics (Vol. 8, No. 29, pp. 537-542).
James, G. (February 16, 2016). Open Office Plans Are a Lot Less Cost-Effective Than You May Think. LinkedIn Talent Blog. https://business.linkedin.com/talent-solutions/blog/hr/2016/open-office-plans-are-a-lot-less-cost-effective-than-you-may-think
Joseph, J. (2016). Do Open/Collaborative Work Environments Increase, Decrease or Tend To Keep Employee Satisfaction Neutral?
K2 Space. (n.d.). The History of Office Design. https://k2space.co.uk/knowledge/history-of-office-design/
Kalish, A. (n.d.). This Is Why So Many Companies Insist on Open Offices Now. The Muse. https://www.themuse.com/advice/history-of-the-open-offices-exist-cubicles
Konnikova, M. (January 7, 2014). The open office trap. The New Yorker. https://www.newyorker.com/business/currency/the-open-office-trap
Kranzberg, M., & Hannan, M. (n.d.). History of the organization of work. Encyclopedia Britannica. https://www.britannica.com/topic/history-of-work-organization-648000/State-organized-farming#ref67054
Kristal, M. (June 6, 2013). The Living Office—The Action Office for the Digital Age? Metropolis. https://www.metropolismag.com/interiors/workplace-interiors/living-office-action-office-digital-age/
Lohr, S. (August 11, 1997). Cubicles Are Winning War Against Closed Offices. The New York Times. https://archive.nytimes.com/www.nytimes.com/library/cyber/week/081197cube.html
Marans, R. W., & Spreckelmeyer, K. F. (1982). Evaluating open and conventional office design. Environment and Behavior, 14(3), 333-351.
Marmot, A. (May 18, 2015). The future history of the government workplace. Civil Service Blog. https://civilservice.blog.gov.uk/2015/05/18/the-future-history-of-the-government-workplace/
Mee, J. F. (2019). Frederick W. Taylor. Encyclopedia Britannica. https://www.britannica.com/biography/Frederick-W-Taylor#ref963589
Oldham, G. R., & Brass, D. J. (1979). Employee reactions to an open-plan office: A naturally occurring quasi-experiment. Administrative Science Quarterly, 267-284.
Pochepan, J. (February 20, 2019). The open office plan is a disaster. The Chicago Tribune. https://www.chicagotribune.com/business/success/tca-the-open-office-plan-is-backfiring-20180220-story.html
Recinos, A. (February 28, 2017). The Rise, Fall, and Triumphant Return of the Open Plan Office. GOOD & CO. Culture fit. Quantified. https://good.co/blog/rise-fall-triumphant-return-open-plan-office/
Rosen, L., & Samuel, A. (2015). Conquering Digital Distraction. Harvard Business Review. Retrieved from https://hbr.org/2015/06/conquering-digital-distraction
Sarkis, S. (April 28, 2019). Don’t Create An Open Office Space Until You Read This Article. Forbes. https://www.forbes.com/sites/stephaniesarkis/2019/04/28/dont-create-an-open-office-space-until-you-read-this-article/#bdf0b1014699
Saval, N. (April 23, 2014). The Cubicle You Call Hell Was Designed to Set You Free. Wired. https://www.wired.com/2014/04/how-offices-accidentally-became-hellish-cubicle-farms/
Saval, N. (May 9, 2014). A Brief History of the Dreaded Office Cubicle. The Wall Street Journal. https://www.wsj.com/articles/a-brief-history-of-the-dreaded-office-cubicle-1399681972?tesla=y
Schwab, K. (January 15, 2019). Everyone hates open offices. Here’s why they still exist. Fast Company. https://www.fastcompany.com/90285582/everyone-hates-open-plan-offices-heres-why-they-still-exist
Seddigh, A., Berntson, E., Platts, L. G., & Westerlund, H. (2016). Does personality have a different impact on self-rated distraction, job satisfaction, and job performance in different office types? PLoS ONE, 11(5), e0155295.
Seddigh, A., Stenfors, C., Berntsson, E., Bååth, R., Sikström, S., & Westerlund, H. (2015). The association between office design and performance on demanding cognitive tasks. Journal of Environmental Psychology, 42, 172-181.
Sundstrom, E., Burt, R. E., & Kamp, D. (1980). Privacy at work: Architectural correlates of job satisfaction and job performance. Academy of Management Journal, 23(1), 101-117.
Sundstrom, E., Herbert, R. K., & Brown, D. W. (1982). Privacy and communication in an open-plan office: A case study. Environment and Behavior, 14(3), 379-392.
Tank, A. (February 7, 2019). Why It’s Time to Ditch Open Office Plans. Entrepreneur. https://www.entrepreneur.com/article/327142
Taube, A. (October 7, 2014). The Man Who Invented The Cubicle Went To His Grave Hating What His Creation Had Become. Business Insider. https://www.businessinsider.com/cubicle-inventor-propst-hated-creation-2014-10
Virjonen, P., Keränen, J., Helenius, R., Hakala, J., & Hongisto, O. V. (2007). Speech privacy between neighboring workstations in an open office-a laboratory study. Acta Acustica united with Acustica, 93(5), 771-782.
Wineman, J. D. (1982). Office design and evaluation: An overview. Environment and Behavior, 14(3), 271-298.
Witterseh, T., Wyon, D. P., & Clausen, G. (2004). The effects of moderate heat stress and open-plan office noise distraction on SBS symptoms and on the performance of office work. Indoor Air, 14(8), 30-40.
Wood, J., (November 5, 2018). Open-plan offices make workers less collaborative, Harvard study finds. World Economic Forum. https://www.weforum.org/agenda/2018/11/open-plan-offices-make-workers-less-collaborative-harvard-study-finds?fbclid=IwAR3MDe4hJlTRQTPype6SXLqHNgmyt7hbhj8PuRAbeWjLYI3s8D8ETnDzxPc

Image: Herman Miller archives
Bias in Emerging Technology
How to make artificial intelligence more equitable
Our Team
Laurissa Barnes-Roberts, Julia Forrester, Miriam Havelin, and Danielle Lim are Master of Design graduate students in the Strategic Foresight and Innovation program at OCAD University in Toronto, Ontario. With professional experience in design and marketing across the public, private, and non-profit sectors, we aim to provide a comprehensive understanding of the technologies around us. Our research interests include digital communications, misinformation, and equity.
This service design brief and synthesis map were developed as part of the course Understanding Systems and contribute to the Strategic Innovation Lab, OCAD University’s centre for participatory foresight, systemic design, and social innovation (https://slab.ocadu.ca/project/synthesis-maps-gigamaps).
Introduction
Technologies that use artificial intelligence (AI) have become integrated into every part of human life, informing the news people see, the advertisements people are shown, and even the GPS directions people are given. The use of AI is expanding, and the powerful computers and complex algorithms behind these technologies are becoming increasingly advanced as companies rapidly invest in research and development around AI. Soon, AI will be widely used to help diagnose diseases, drive cars, and police neighbourhoods (Hawkins, 2018; Martin, 2019; Walch, 2019). These uses may seem like futuristic fictions, but they already exist in the world (and are gaining momentum).
For over a century now, authors, scientists, mathematicians, and philosophers have been theorizing about machines that could imitate human intelligence (Anyoha, 2017).
The dominant thinking in the 1950s, when the theory began gaining traction, was that human brains and computers were a “species of the same genus” – essentially, they were information processing systems that could “take symbolic information as input, manipulate it according to a set of formal rules, and in so doing… solve problems, formulate judgments, and make decisions” (Crowther-Heyck, 2008; Dick, 2019; Heyck, 2005; Newell & Simon, 1972).
Advances in technology, particularly over the last 30 years, have led to the development of machines with enormous computing and analytical power, and to the emergence and growth of AI.
Today, AI is integrated into every part of daily life. Voice assistants like Siri and Alexa use AI to interpret what a user is saying and respond to requests. Streaming services like Netflix and Spotify, search engines like Google, and social media apps like Facebook and TikTok use AI to collect data about what content users engage with (including advertisements) and then recommend what users may want to see or hear next (Hao, 2018; Marr, 2019).
Navigation aids like Google Maps use traffic data and historical traffic patterns to recommend routes and predict how long it will take to get to a given destination (Lau, 2020). Even personal banking apps use AI to track typical customer behaviours and flag anomalies to detect fraud (Walch, 2020). While this technology has been unquestionably beneficial, it is not as benign as it might seem.
AI builds assumptions based on the patterns it finds, which arguably allows it to “make better decisions than humans because it can take many more factors into account and analyze them in milliseconds” (Gonfalonieri, 2019). Being able to make ‘better’ decisions than humans does not make AI faultless, though. Because of the humans who design them, algorithms are susceptible to bias, which can become embedded in the technology at several points throughout its lifecycle (Silberg & Manyika, 2019).
Given the fallibility of technologies like AI (and the algorithms upon which they rely), the research conducted for this systems analysis was guided by the following research question:
How might we use a systemic approach to explore the AI ecosystem in order to suggest possible interventions to reduce bias and make the technology more equitable?
Using a systems-based approach to analyze the AI technology lifecycle and the ecosystem in which these technologies are embedded, we identified several possible intervention points. This brief will discuss the scope of the problem, outline the components, major stakeholders and actors, and their relationships, and discuss a few of the most influential potential interventions.
This brief is best read in conjunction with the corresponding synthesis map, which visually outlines the contents herein.
Lastly, two case studies have been added to the appendix in order to provide examples of the proposed interventions.
The process of taking the idea for a technology from prototype and funding through to launch is referred to as the development lifecycle. Within the lifecycle, speed to market and producing a minimum viable product are essential for start-ups and small enterprises to secure and retain the funding needed to eventually deploy. This lifecycle was the focus of the lifecycle section of the synthesis map and the basis for many of the causal loops, although the analysis uncovered possible interventions across the levels of the system, from micro to macro.
This is an ecosystem that is typically overwhelmingly male, white, and ‘techno-heroic’ (D’Ignazio & Klein, 2020). Author, game designer, and Georgia Institute of Technology professor Ian Bogost argues that developers “constitute a ‘tribe,’ separated from the general public…by the exclusive culture of computing education and industry” (2019). Combined, these factors create a homogeneous environment in which shared assumptions are more likely to go uninterrogated and unchallenged.
As previously mentioned, the applications for AI are vast and extremely varied. In order to explore the system of bias in AI, strategic generalizations have been made. While specific, individual technologies may vary in terms of their lifecycle, company culture, and funding structures, commonalities exist in the ways in which different types of bias are embedded in these technologies.
This section will focus on the key features of this system, the relationships between them, and the points at which bias is integrated into them.
But what if people learned that diagnostic AI is less accurate for non-male genders (Kaushal et al., 2020)? Or that self-driving cars are less likely to detect pedestrians with darker complexions (Samuel, 2019)? Or that predictive policing is more likely to negatively impact historically marginalized groups, such as BIPOC and LGBTQ+ communities, people living with mental illness, and people who are homeless or of low socioeconomic status (Kenyon, 2020)? Would people be so quick to accept and adopt these technologies, no questions asked? Or would they treat AI as imperfect and fallible, like the people who create it?
In exploring the research question, we dissected the levels of the systems involved to understand which levels offer the most opportunity for intervention. The issue of implicit bias is woven in multiple ways into each level of this system.
Throughout the lifecycle, various activities occur at different levels:
• The micro layer describes what is happening on the frontline and behind the scenes and so is both the most visible and least transparent. It is where the AI is first coded by developers and then first introduced to users or customers.
• The meso layer moves outward to local industry actors and the immediate technology ecosystem, such as the developers and organizations that create the AI.
• The exo layer encompasses a broader ecosystem, which includes government, the AI technology sector, and related industry actors (e.g., health care policy, data security).
• At the macro level, the largest societal forces are at work in the background, including societal values and beliefs, and the pressures and demands of capitalism.
A deeper analysis of the micro level was conducted first, looking at the development lifecycle to explore the different places where bias is embedded in the system and to identify possible leverage points. The micro level is where the algorithms are coded by developers in technology companies that are competing on speed to market, and it is therefore also where implicit bias is introduced into the technology (Elsbach & Stigliani, 2020).
There is no one-size-fits-all office design, and there is unlikely ever to be one. What we are seeing now with the current proliferation of open office spaces is, I would argue, similar to what Frederick Taylor did in the early 20th century: an attempt to formularize human workers. Taylor proposed Scientific Management to enable more efficient manufacturing and clerical work; modern proponents use open offices to push creativity and innovation, as though an open office were the solution to a company’s lack of them. In the end, both designs seek to increase “productivity”, but both unfortunately do so in a manner that causes individuals to experience stress and burnout.
The aspect missing from these equations is the human factor. Humans are complex, multidimensional beings, and in order to realize their full working potential, office spaces need to be designed with humans at the centre. The problem is not the open office per se but the mismatch between individuals and the workspaces they occupy, and the lack of awareness of the human component in designing those spaces. Offices should be designed to match the needs and the work of the humans who occupy them.
When the Action Office was first introduced, before corporate space saving turned it into the modern cubicle, it was designed to be a flexible workspace that integrated humans and their needs. Propst envisioned the workspace with humans at the nexus, exerting control over their immediate surroundings to create the optimal environment. When many of the first modern open offices were introduced, they were vast rooms with individual desks separated by space and other elements to allow for privacy. Some were simply breakout spaces where people would come together to design or discuss creative ideas and then retire to the privacy of their own offices, as in the advertising agencies of the sixties and seventies (Brem, 2019; Collie, 2019). That is a far cry from the open offices of today, with their configurations of desks packed together in large rooms.
Some office environments have always been “open” because of the nature of the work conducted in them; think of a trading floor at a stock exchange, where the need for rapid, integrated communication is at the forefront. Other offices have been and remain “closed” because of the need for individual privacy and concentration. These types of workspaces may be where organizations could look for insights into reconfiguring their own offices.
Endeavoring to create an office that boosts collective intelligence and fosters creativity may be, in and of itself, a noble goal. However, to truly create offices where employees thrive and productivity increases, employers should first research the nature of the work their employees perform, as well as the structures and cultures already present within their organization. “Leaders need to make the call about what collective behaviors should be encouraged or discouraged and how. Their means should include not just the design of workspace configurations and technologies but the design of tasks, roles, and culture as well” (Bernstein & Waber, 2019).
The culture of a workplace is influenced by the design of the space and can affect the individuals working in it; a change in design, however, will not force immediate culture change. Open offices can still have strong hierarchies even though they promote horizontality. More interaction does not mean better interaction. In open office settings especially, co-presence does not equal collaboration; it can in fact have unintended negative social and psychological effects on people forced to work and interact in these environments, increasing the dissatisfaction of the modern worker.
“The goal should be to get the right people interacting with the right richness at the right times” (Bernstein & Waber, 2019). The answer is not the open office.
Our Team
Introduction
Definitions
Overview
Scope & Boundaries
- AI Product Development Lifecycle
- Mapping the System
Stakeholders and Actors
Intervention Strategies
- Equitable Government-Supported Funding
- Diverse Hiring and Company Culture
- Ethics Review Association
- Public Education and Awareness
Discussion
Appendix A: Case Study - COMPAS
Appendix B: Case Study - Apple Card
References
Definitions
In order to discuss the larger system, alignment around the following key definitions is important:
ALGORITHMS
The definitions of an algorithm are also numerous, but at its most basic an algorithm is a set of instructions for achieving a goal (Downy, 2019; Merriam-Webster, n.d.). In computer science, an algorithm is a set of steps or instructions that allows a computer program to accomplish a task (Cambridge Dictionary, n.d.; Khan Academy, n.d.).
Artificial intelligence technologies are often programmed using sequences of large and complex algorithms written by developers and computer scientists. In computer science, algorithms are often referred to as code, and the process of creating them is referred to as coding. The relationship between developers, algorithms, and AI technologies is crucial to understanding bias in this system.
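To make the definition concrete, here is a minimal sketch of our own (the brief itself contains no code) showing an algorithm as a fixed sequence of steps a program follows to accomplish a task, in this case finding the largest number in a list:

```python
# A minimal illustration of an algorithm: a fixed sequence of steps
# a program follows to accomplish a task -- here, finding the largest
# value in a list of numbers.

def find_largest(numbers):
    """Return the largest number in a non-empty list."""
    largest = numbers[0]        # Step 1: assume the first value is the largest
    for value in numbers[1:]:   # Step 2: examine each remaining value
        if value > largest:     # Step 3: keep whichever is larger
            largest = value
    return largest              # Step 4: report the result

print(find_largest([3.2, 7.5, 1.8]))  # prints 7.5
```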
BIAS
A bias is a preference or inclination for or against something (Bias, n.d.). Biases are part of what shapes the human experience. Humans use biases for mental efficiency, to aid in sorting through information in the world and to make decisions more quickly (Vinney, 2018).
Because biases are an integral shorthand for human functioning (Stanborough, 2020), they are not always noticed by the person holding them. These are unconscious or implicit biases (Hauser, 2018). Some biases are negative stereotypes; if they go unexamined, a person can act in alignment with them, even unconsciously, and perpetuate systemic prejudice through discriminatory actions (Buolamwini, 2019).
EMERGING TECHNOLOGIES
Emerging technologies, such as AI, are built by human developers. The choices made around programming algorithms, curating training data sets, and validating the system allow the biases of the developers, the companies for which they work, and the society in which these algorithms operate to become embedded in the AI.
MACHINE LEARNING
Machine learning is a form of AI that enables a system to learn from data, identify patterns, and make decisions without having been explicitly programmed to do so (i.e., without additional human intervention) (Data Science and Machine Learning, 2020; Machine Learning, 2021). This data can take the form of anything that can be digitally stored: text, images, numbers, link clicks, and so on.
Technologies that use machine learning collect as much data as possible about their users so that they can make informed predictions that anticipate those users’ future needs (Hao, 2018).
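As a minimal sketch of this idea (our own illustration, using the open-source scikit-learn library; the brief names no specific tools), the model below is never given an explicit rule. It infers the pattern y = 2x from example data and applies it to an input it has never seen:

```python
# A minimal machine learning sketch, assuming the scikit-learn library
# is installed: the model learns a pattern from data rather than being
# explicitly programmed with a rule.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]  # training inputs
y = [2, 4, 6, 8]          # observed outputs (the pattern is y = 2x)

model = LinearRegression()
model.fit(X, y)  # identify the pattern in the data

print(model.predict([[5]]))  # approximately [10.] -- a learned prediction
```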
Overview
ARTIFICIAL INTELLIGENCE
There are a number of definitions of artificial intelligence, and while most are similar, it is unsurprising that the field has reached no consensus on a single one.
The term artificial intelligence was coined by computer scientist John McCarthy, who defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs” (McCarthy, 2007). This definition focuses on the process and science of developing the technology.
Other definitions, such as “AI refers to any human-like intelligence exhibited by a computer, robot, or other machine…the ability of a computer or machine to mimic the capabilities of the human mind” (IBM, 2020), focus on the characteristics of the completed technology.
Still other definitions combine both, such as this one written for Britannica: “Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from experience” (Copeland, 2020).
Broad categories of artificial intelligence include weak or narrow AI, strong AI, applied AI, and cognitive simulation (Britannica, n.d.; Builtin, n.d.; Frankenfield, 2021). AI is further divided and classified in other ways based on functionality, capability, field, or research technique.
Scope & Boundaries of the System
LIFECYCLE OF A PRODUCT
Central to the understanding of this complex system is the technology itself, that is, a given AI product. The applications of AI are nearly limitless, and the specific lifecycles of AI technologies are therefore also extensive. Assessed at a system level, however, the various lifecycles share commonalities in where systemic bias emerges.
Based on research and inference, a generic AI product lifecycle was developed; it is intended to be broadly applicable to multiple AI products. While the specifics of a given AI product's lifecycle will likely deviate in some ways from this layout, the broader process and, more importantly, the relationships to other components of the system are intended to be similar across technologies.

This lifecycle allows us to connect components of the system along a temporal range.
The points on the lifecycle fall into three broad categories.
1. Early stage. This phase is characterized by ideation and funding. In an established company with internal AI-based technologies, this could be a team pitching a new feature for an existing piece of technology or AI. In a start-up, this could be the raison d’être and funding would come from outside investors. In a small company, this could be a business model pivot or a new feature and could be funded internally or require external funding depending on the company’s size and assets. Part of this stage regularly involves creating a prototype or minimum viable product in order to pitch it.
2. Mid stage. This phase is characterized by team selection, research, further product development, prototyping, and rounds of testing, including beta testing, redevelopment, and redesign. At the end of this phase, a viable product is ready for deployment. In an established company, this could look like team member selection, UX research, deployment of a feature or product among a set of users or clients, testing, QA, and redevelopment. In a small company or start-up, this could look like hiring or outsourcing a team, research and narrowing of scope, and beta testing of the product.
3. Deployment stage. This phase is characterized by the deployment of a given AI product into general use for its intended market. Depending on the product in question, this phase can look very different at different companies, but it is broadly characterized by product launch, sales and marketing, and engagement with stakeholders outside the company. Because many technology companies have growth as part of their business models, this phase also includes plans and intentions for growth, scaling, or expansion.
MAPPING THE SYSTEM
The system map is an important component of this analysis; it explores the prominent stakeholders and their interdependencies and influences upon one another.
This system map is not meant to be an exhaustive list of all the stakeholders in every AI system, but rather a high-level overview of the major stakeholders in general.

The AI industry
Broadly, the AI industry encompasses a number of key actors. Within a given company there are executives and other senior management, developers, designers, and data scientists, as well as other employees. These companies are also engaged with funders and investors, as well as shareholders in some cases.
Generally, these companies have one of two main functions: they can be involved in business-to-business (B2B) AI technology or in business-to-consumer (B2C) technology (though some large technology companies do both). This distinction usually changes the nature of the AI in use.
Commercial Clients
Business-to-business AI companies supply commercial clients with software or hardware that allows them to accomplish their goals. These types of AI technologies are generally implemented as a tool to accomplish an end purpose.
For example, COMPAS is a technology that assigns a score (between 1 and 10) to a given person who has committed a crime. The score is intended to indicate the likelihood that the individual will reoffend (Angwin et al., 2016).
The technology is used by certain judicial systems in the US to help decide sentencing and parole possibilities for offenders. Another example is AI technology used to pre-screen patients for specific diseases. Although still in their early stages, some of these technologies are being implemented through partnerships between hospitals and AI companies (Daley, 2018).
Due to the nature of AI, commercial clients and technology companies regularly work in partnerships when building technology for specific fields such as health, human resources, and policing among others.
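To illustrate how bias can pass straight through a scoring tool of this kind, consider the following entirely hypothetical sketch of a 1-to-10 risk score (COMPAS's actual model is proprietary; the feature names and weights here are invented). If an input such as prior arrests reflects biased policing, the score inherits that bias:

```python
# An entirely hypothetical 1-to-10 risk score in the style described
# above (not COMPAS's actual model). The weights and features are
# invented for illustration: if prior_arrests reflects over-policing
# of certain communities, that bias flows directly into the score.
def risk_score(prior_arrests, age):
    raw = prior_arrests * 1.5 + max(0, 30 - age) * 0.2
    return min(10, max(1, round(raw)))  # clamp to the 1-10 scale

print(risk_score(prior_arrests=4, age=25))  # 7
print(risk_score(prior_arrests=0, age=45))  # 1
```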
Society
Society as a whole is considered one of the stakeholders as there are very few individuals who are exempt from the influence of AI. Whether knowingly or unknowingly, virtually every individual in society has provided data that has been used to train AI algorithms.
User
Within society we have a subdivision of individuals considered users. A user is an individual who uses specific technologies or machines (Merriam-Webster, n.d.). Users provide additional data to technology companies through the devices they use and the products they engage with. The relationship between users and technology companies is layered: while users benefit from the product, they also provide the technology company with revenue in various ways, such as through data sales and ad revenue from engagement on its platforms.
Target Market
Another subset within society that is relevant in this system is the target market, the group of consumers at whom a specific product is aimed (Kenton, 2021). Technology companies, like any other businesses, have target markets where they feel their technology would be most successful (i.e. generate the most sales) and thus direct their marketing efforts towards these groups. Unlike with commercial clients, technology companies rarely collaborate with users or target markets to develop technologies in conjunction with their needs.
Government
The different levels of government legislate and regulate many parts of this system, including the technology and AI industry (to an extent), data usage and collection, and society in general. Governments are also recipients and users of AI technologies; many government bodies work with technology companies to integrate artificial intelligence into government policies, practices, and endeavors. Governments are heavily influenced by society through societal pressures and voting.
Media
The media is a key player in this system. The media communicates to and with society by broadcasting newsworthy content to the public, sometimes informed by whistleblowers and informants, which can have huge consequences.
For example, Google recently came under scrutiny when a secretive contract with the US Department of Defense was revealed in the media. The ensuing firestorm of criticism from within and outside the company led Google not to renew the contract once it ended (Vox, 2021).
Data
Data is a prominent stakeholder in the system. Data is what AI algorithms are trained on, and it therefore directly impacts the final product. Data is collected from society; it is a commodity that can be bought and sold. Thanks to advances in technology, data collection, use, and storage are occurring at an unprecedented rate. AI companies use existing data to train their technologies and collect data on their users in order to have more material with which to train their algorithms.
The interactions between these various stakeholders, as well as the company culture and the culture of the AI and technology industry, are what cause bias to become embedded in systems of AI.
In visualizing the lifecycle of a piece of AI technology, there are multiple places in which bias can inadvertently be brought into the AI. Through research, four key places were identified where bias enters the system. These correspond to the selected intervention strategies, which target the points in the system where bias in AI is most prominent:
1. The first place is funding. “Investors are inherently biased, and intuition alone cannot consistently drive good decisions” (Bueschen, 2015). Investors have many unconscious biases that impact the way they fund potential technologies, including similarity bias, local bias, anchoring bias, and gender bias (Bueschen, 2015).
2. The next place is hiring and company culture. Biases in hiring practices have long been documented; they can exhibit themselves as gender bias, racial bias, and ageism. The culture of an organization, or even an industry, can have an impact on the potential biases built into its technology. There are many levels of culture within the AI industry: individual technology companies have their own cultures, and the industry as a whole has its own, each with its own embedded biases.
3. The third place where bias is built into the system is data collection. As mentioned previously, data is what is used to train an AI algorithm, so when the data pool is biased, the outcome will be biased (see the sketch after this list).
4. Lastly, there is bias in the public perception of technology and technology companies. AI and technology are often perceived as neutral and without bias. This perception leads to the unquestioning use and deployment of AI technologies in arenas of public and commercial life without proper scrutiny or checks and balances.
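As a toy demonstration of point 3 above (the data and group labels are invented), a system that simply learns approval rates from historical decisions reproduces whatever skew those decisions contain:

```python
# A toy demonstration that a biased data pool yields a biased outcome.
# The historical decisions below are invented; group_a was approved far
# more often than group_b, and a naive model learns exactly that skew.
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def learned_approval_rate(group):
    outcomes = [approved for g, approved in historical_decisions if g == group]
    return sum(outcomes) / len(outcomes)

# The skew in the training data becomes the skew in the predictions.
print(learned_approval_rate("group_a"))  # 0.75
print(learned_approval_rate("group_b"))  # 0.25
```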
These identified intervention points, which will be explored in more depth in subsequent sections, correspond with the crucial points of bias which emerge from the exploration of this system, its stakeholders, and their interactions.
The relationships in the AI system are unevenly distributed: some stakeholders play more active roles, while others are acted upon by the system.
Stakeholders and Actors
AI Developers
The AI developers occupy an advantageous position. They are closest to the product, with full access to the proprietary AI, and they can make direct changes to how the AI operates and functions. They also have their own organizational goals to achieve and desire the freedom to operate and innovate.


Government
The government has a similar level of power to the developers; however, it is not as agile, and the pace of innovation outpaces regulation. While AI is rolled out publicly, governments struggle to gain full access to AI information, lack technical capacity, and are required to make well-grounded policies when updating or introducing regulations.
As a result, regulation in this sector is lagging (Mozilla Foundation, 2020). Still, regulatory and financial bodies have had the most pronounced impact on shaping the cycle of innovation (Henton & Held, 2013) and AI development.
Investors (VCs, angel investors)
Investors play a critical role, as they control a large portion of the financing of AI technologies and innovations. Because the industry is driven by commercial incentives and growth, investors' needs are often prioritized by AI developers and companies.
