Data exemplifies good, bad and ugly reflections of our society. While there is no consensus on how to handle data, overlapping themes fall within three main categories: motivation & application, analytical & creative inquiry, and decision & outcomes. The motivation and application phase is concerned with the ‘WHY’ with respect to business, government and/or civic needs. The questions driving the data engagements are key, particularly those that follow the SMART (specific, measurable, attainable, relevant and timely) criteria. The analytical and creative inquiry phase considers the ‘HOW’ — as it relates to answering those aforementioned questions. Lastly, we have the decision and outcomes phase, which focuses on the ‘WHAT’. It’s important to assess what the analytical and creative inquiries reveal and what the impactful next steps are.
The narrative and discussions surrounding fairness in algorithms are troublesome. Framing the goal as avoiding bias, or as achieving fairness in and of technology, is a misnomer. Bias exists in unquantifiable and varying degrees. We, at least in the computer science continuum of data science, are attempting to create computationally aware fairness and bias metrics. But in what situation has everyone agreed that they were treated fairly? “Life isn’t fair” is a common colloquialism. I argue that since we, as a human race, have yet to experience fairness in the physical world, there is no model to represent it in the digital world. We should reframe to accurately portray what scholars want to do — which is to computationally minimize discrimination.
The PAIR Principles
Let’s shift past the neutral state of talking and enact change in every facet of an organization. This growth must happen in two arenas simultaneously — in staffing and in technology. First, we have to conduct an audit of the existing organizational culture. Second, we can set a strategic plan for embedding and sustaining a growth culture. Lastly, we have to repeat the first and second steps every five to six months.
Publications & Invited Presentations
Brandeis Marshall (June 2019). "From Us, To Us: An Inclusivity Architecture", EGG NYC Conference. Dataiku. [YouTube]
Abstract: Time and time again, research proves that diverse teams increase innovation and profits as well as enhance work quality and end products. Yet while the philosophy of diversity in thought continues to gain traction, the full spectrum of diversity — gender, ethnic, racial, class, and sexuality — has yet to be applied. Bridging this gap is where data comes in, as it has led to greater investment in broadening Participation, Access, Inclusion and Representation (PAIR). In this talk, Dr. Marshall will share plausible approaches on how today’s enterprise can construct, support, and sustain inclusivity using data and the power of the PAIR principles.
Thema Monroe-White and Brandeis Marshall (December 2019). "Data Science Intelligence: Mitigating Public Value Failures Using PAIR Principles", Pre-ICIS SIGDSA Symposium on Inspiring Mindset for Innovation with Business Analytics and Data Science, Munich, Germany. [paper]
Thema Monroe-White, Brandeis Marshall & Hugo Contreras-Palacios (February 2021). "Waking up to Marginalization: Public Value Failures in Artificial Intelligence and Data Science". AAAI Workshop on Diversity in Artificial Intelligence Conference: Artificial Intelligence - Diversity, Belonging, Equity, and Inclusion (AIDBEI), virtual. [presentation][publication forthcoming]
Abstract: Data science education is increasingly becoming an integral part of many educational structures, both informal and formal. Much of the attention has been on the application of AI principles and techniques, especially machine learning, natural language processing and predictive analytics. While AI is only one phase in the data science ecosystem, we must embrace a fuller range of job roles that help manage AI algorithms and systems — from the AI innovators and architects (in CS, Math and Statistics) to the AI technicians and specialists (in CS, IT and IS). Also, it’s important that we better understand the current state of the low participation and representation of minoritized groups, which further stifles accessibility and inclusion efforts. However, how we learn and what we learn is highly dependent on who we are as learners. In this paper, we examine demographic disparities by race/ethnicity and gender within the information systems educational infrastructure from an evaluative perspective. More specifically, we adopt intersectional methods and apply the theory of public value failure to identify learning gaps in the fast-growing field of data science. National NCSES datasets of Master’s and Doctoral graduate students in IS, CS, Math and Statistics are used to create an “institutional parity score,” which calculates field-specific representation by race/ethnicity and gender in data science related fields. We conclude by showcasing bias creep, including the situational exclusion of individuals from access to the broader information economy, be it access to technologies and data or access to participation in the data workforce or data-enabled economic activity. Policy recommendations are suggested to curb and reduce this marginalization within information systems and related disciplines.