Assessing CivicScape: the Value of Applying Open Source Approaches Toward Algorithmic Decision System Accountability


Affiliation: AI Now Institute / New York University 

Authors: Genevieve Fried¹, Varoon Mathur², Rashida Richardson³, Jason Schultz⁴, Roel Dobbe⁵

Track: Computer Science 

Introduction

As part of a growing call to make algorithmic decision systems (ADS) more accountable, policymakers, advocates, and technologists are demanding that ADSs be made transparent and amenable to public understanding, scrutiny, and testing. Yet currently, there is no agreed-upon set of best practices for promoting such transparency. Some researchers advocate for technical approaches to algorithmic transparency, proposing interventions to make ADS outputs more “explainable” [4,8]. Other scholars argue that a narrowly technical approach fails to reckon with explanation as a primarily social, rather than technical, concept [7,1], and with ADSs as socio-technical assemblages whose transparency requires a messier social and contextual treatment [5,2]. A core part of this discussion revolves around the communicative aspect of transparency, which raises questions such as how information about an ADS should be presented. Answers to these questions mediate stakeholder understanding and scrutiny of algorithmic systems. 

Amidst this academic discourse, policymakers and advocates have specifically called for vendors and public agencies to publish the source code of ADSs for public access. For example, the initial version of New York City’s “Automated Decision Task Force” bill (Int. No. 1696, 2017) proposed a mandate that any city agency using an algorithm for the purposes of targeting services or imposing penalties on persons “[p]ublish on such agency’s website, the source code of such system.” This approach is often met with skepticism, with concerns about the accessibility of source code for public scrutiny, as well as the ability of source code to convey an understanding of an ADS’s behaviour, particularly when the ADS is built with machine learning methods [6,3]. No scholarship that we could find analyzes how the presentation of source code might inform its level of comprehension for public scrutiny. 

1 AI Now Institute, New York University
2 AI Now Institute, New York University
3 AI Now Institute, New York University
4 New York University School of Law, AI Now Institute
5 AI Now Institute, New York University

To explore these questions in a more concrete context, we conducted an analysis of CivicScape⁷, a company which published the source code of its place-based predictive policing platform on GitHub in 2017 for the stated purpose of enabling public review, understanding, and feedback⁸. CivicScape provided this source code and some related documentation in the form of Jupyter (IPython) notebooks under an open source license. This presented the opportunity to study how the company’s GitHub repository comported with best practices from the open source software community and to evaluate whether and how these efforts enabled public scrutiny. We use this case study to examine the value that successful open source software approaches might contribute to ADS accountability. Specifically, we examine how effective such approaches are in meeting the accountability needs and expectations of ADS stakeholders, such as public agencies and legal advocacy organizations, that advocate on behalf of impacted communities. 

Methods 

From the literature on software development and open source, we collate a series of best practices that open source code should adhere to and that source code repositories should follow when hosting open source code. We then evaluate each best practice through two types of review: static review (reading through the code) and dynamic testing (attempting to run the system and subsequently performing either black-box or white-box testing). This analysis was performed by two junior researchers with respective backgrounds in software development and machine learning. 

Results & Discussion

We find that CivicScape failed to follow best practices, including those for software design and code documentation. The repository is poorly maintained and the source code is bug-ridden. Crucial information, such as the data needed to run CivicScape’s system, is missing, which ultimately prevented us from producing a running instance of the system. As a result, we were unable to reproduce any of CivicScape’s analyses or verify its claims about its system, such as that the system is able to mitigate bias and has superior predictive accuracy to other predictive policing systems. 
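The dynamic-testing step can be approximated with a reproducibility smoke test that tries to execute each published notebook end to end; a notebook whose required data files are absent fails at the data-loading stage. The sketch below shells out to the standard `jupyter nbconvert` command; the notebook file name in the usage note is hypothetical:

```python
import subprocess
from pathlib import Path


def nbconvert_cmd(notebook, timeout=600):
    """Build the jupyter nbconvert invocation that executes a notebook."""
    return [
        "jupyter", "nbconvert", "--to", "notebook", "--execute",
        f"--ExecutePreprocessor.timeout={timeout}",
        "--output", Path(notebook).stem + ".executed.ipynb",
        str(notebook),
    ]


def notebook_runs(notebook):
    """Return True only if every cell executed without raising an error."""
    result = subprocess.run(nbconvert_cmd(notebook), capture_output=True, text=True)
    return result.returncode == 0
```

For example, `notebook_runs("model_training.ipynb")` would report failure as soon as any cell raises, including a missing-file error when the training data is not shipped with the repository.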

After concluding that CivicScape failed to follow the best practices of open-sourcing code, we analyze the value of open-sourcing code in accordance with these best practices in the context of ADS accountability. We find a number of limitations: 1) the presumption that data, if not willingly provided by a developer, is readily available; 2) open source does not provide 

7 https://github.com/CivicScape/CivicScape/. As of August 12, both the CivicScape website and the CivicScape GitHub repository have been taken offline without explanation. We conducted our technical analysis in February 2019 and had most of our paper written before this point. Given this change of status, we decided not to change existing citations and references to the repository, but interested individuals can see our archived version of CivicScape’s repository here: https://tinyurl.com/yyepkk2j
8 Note that not all source code repositories on GitHub are public or under an open source copyright license, but a majority are. See https://help.github.com/en/articles/licensing-a-repository. See also https://www.theregister.co.uk/2013/04/18/github_licensing_study/. 

insight into the policies and practices by which the data used to train or implement an ADS is generated; 3) source code can help identify potential problems with an ADS but does not provide insight into how a system functions in practice, which depends not only on predictive outcomes but on how users understand and are influenced by those outcomes, and on how the decisions they make feed back into the environment from which a system’s data is collected. From this we conclude that open source code is largely necessary but insufficient for ADS accountability. We then provide a number of recommendations that address the problems identified in our study and that are actionable by various stakeholders. 

References

[1] Helen Nissenbaum. 2004. Privacy as Contextual Integrity (Symposium – Technology, Values, and the Justice System). Washington Law Review 79 (2004), 119–158. https://heinonline.org/HOL/P?h=hein.journals/washlr79&i=129
[2] Jakko Kemper and Daan Kolkman. 2018. Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society (June 2018), 1–16. https://doi.org/10.1080/1369118X.2018.1477967
[3] Joshua A. Kroll, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. 2016. Accountable algorithms. U. Pa. L. Rev. 165 (2016), 633.
[4] Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 80–89.
[5] Mike Ananny and Kate Crawford. 2016. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society (2016), 1461444816676645.
[6] New York City Council. 2017. File #: Int 1696-2017. https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0
[7] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (Feb. 2019), 1–38. https://doi.org/10.1016/j.artint.2018.07.007
[8] Zachary C. Lipton. 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490 (2016).