Evaluating Machine Learning Data Practices through a Data Curation Lens

Studies of dataset development in machine learning call for greater attention to the data practices that make model development possible and shape its outcomes. Many argue that adopting theory and practices from the archives and data curation fields can support greater fairness, accountability, and transparency, and more ethical machine learning.

In response, we are examining data practices in machine learning dataset development through the lens of data curation.

In our first paper on this topic, “Machine Learning Data Practices through a Data Curation Lens: An Evaluation Framework”, we evaluated data practices in machine learning *as* data curation practices: we developed an evaluation rubric and applied it to datasets from a sample of NeurIPS papers. The paper was accepted at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2024).

The 12-minute talk is available on YouTube, and the preprint is on arXiv.

In the paper, we develop a framework for evaluating machine learning datasets using data curation concepts and principles, operationalized as a rubric. Through a mixed-methods analysis of evaluation results for 25 ML datasets, we study how feasibly data curation principles can be adopted for machine learning data work in practice and explore how data curation is currently performed. We find that researchers in machine learning, a field that often emphasizes model development, struggle to apply standard data curation principles. Our findings illustrate difficulties at the intersection of the two fields: dimensions whose terms are shared by both fields but carry different meanings, a high degree of interpretive flexibility in adapting concepts without prescriptive restrictions, the difficulty of limiting the depth of data curation expertise needed to apply the rubric, and the challenge of scoping the extent of documentation that dataset creators are responsible for. We propose ways to address these challenges and develop an overall evaluation framework that outlines how data curation concepts and methods can inform machine learning data practices.
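To make rubric-based evaluation concrete, here is a minimal sketch of what scoring a dataset against curation dimensions might look like in code. The dimension names, the 0–2 scale, and the `evaluate` helper are illustrative assumptions for this post, not the actual rubric from the paper.

```python
from dataclasses import dataclass

# Hypothetical rubric dimensions, loosely inspired by common data
# curation concerns; the rubric in the paper differs.
DIMENSIONS = [
    "provenance",     # where the data came from and how it was collected
    "documentation",  # datasheets, licenses, intended use
    "access",         # availability and persistence of the dataset
    "ethics",         # consent, privacy, and review processes
]

@dataclass
class Evaluation:
    dataset: str
    # Assumed scale per dimension: 0 = absent, 1 = partial, 2 = full.
    scores: dict[str, int]

    def total(self) -> int:
        return sum(self.scores.values())

def evaluate(dataset: str, scores: dict[str, int]) -> Evaluation:
    """Check that every rubric dimension received a score."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"{dataset}: unscored dimensions {missing}")
    return Evaluation(dataset, scores)

# Example: scoring one hypothetical dataset against the rubric.
ev = evaluate("ExampleBench", {
    "provenance": 2, "documentation": 1, "access": 2, "ethics": 0,
})
print(ev.total())  # 5 out of a possible 8
```

In practice, per-dimension scores like these would feed the kind of mixed-methods analysis described above, where qualitative notes explain why a dataset scored as it did on each dimension.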