TabularMark: Watermarking Tabular Datasets for Machine Learning
Abstract
Watermarking is broadly utilized to protect ownership of shared data while
preserving data utility. However, existing watermarking methods for tabular
datasets fall short of the desired properties (detectability,
non-intrusiveness, and robustness) and preserve data utility only in terms of
data statistics, ignoring the performance of downstream ML
models trained on the datasets. Can we watermark tabular datasets without
significantly compromising their utility for training ML models while
preventing attackers from training usable ML models on attacked datasets? In
this paper, we propose a hypothesis testing-based watermarking scheme,
TabularMark. Data noise partitioning is used to perturb data during embedding;
it adapts to both numerical and categorical attributes while preserving data
utility. For detection, a custom-threshold one-proportion z-test is employed,
which reliably determines the presence of the watermark.
Experiments on real-world and synthetic datasets demonstrate the superiority of
TabularMark in detectability, non-intrusiveness, and robustness.
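The abstract describes the mechanism only at a high level. The sketch below illustrates the general idea for a single numeric column: noise for selected "key" cells is drawn only from secretly chosen "green" sub-intervals of the perturbation range, and detection applies a custom-threshold one-proportion z-test to the fraction of key-cell deviations that land in green sub-intervals. The function names (embed, detect), the parameters p, k, and seed, and the assumption that the owner retains the original key-cell values are illustrative choices for this sketch, not the paper's exact algorithm.

```python
# Minimal sketch of green-domain embedding and one-proportion z-test detection.
# Not the authors' reference implementation; names and parameters are assumed.
import numpy as np

def _green_domains(rng, p, k):
    """Split [-p, p] into 2k equal unit domains and randomly mark half as green."""
    edges = np.linspace(-p, p, 2 * k + 1)
    idx = rng.permutation(2 * k)[:k]               # indices of the green domains
    return [(edges[i], edges[i + 1]) for i in sorted(idx)]

def embed(values, key_idx, p=1.0, k=50, seed=0):
    """Perturb cells at key_idx with noise sampled only from green domains."""
    rng = np.random.default_rng(seed)              # seed acts as the secret key
    greens = _green_domains(rng, p, k)
    out = values.astype(float)
    for i in key_idx:
        lo, hi = greens[rng.integers(len(greens))]
        out[i] += rng.uniform(lo, hi)
    return out

def detect(values, original, key_idx, p=1.0, k=50, seed=0, z_threshold=3.0):
    """One-proportion z-test: is the fraction of key-cell deviations falling in
    green domains significantly above the 0.5 expected without the watermark?"""
    rng = np.random.default_rng(seed)
    greens = _green_domains(rng, p, k)             # same key -> same domains
    dev = values[key_idx] - original[key_idx]
    hits = sum(any(lo <= d < hi for lo, hi in greens) for d in dev)
    n = len(key_idx)
    z = (hits / n - 0.5) / np.sqrt(0.25 / n)       # H0: hit proportion = 0.5
    return z > z_threshold, z

# Toy usage: embed, then check that the watermark is detected.
rng = np.random.default_rng(1)
col = rng.normal(size=1000)
keys = rng.choice(1000, size=100, replace=False)
watermarked = embed(col, keys, seed=42)
print(detect(watermarked, col, keys, seed=42))     # expect (True, large z)
```

Without the watermark (or after heavy perturbation by an attacker), key-cell deviations fall in green domains only about half the time, so the z-statistic stays below the custom threshold and detection correctly reports no watermark.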
Authors
Zheng Y; Xia H; Pang J; Liu J; Ren K; Chu L; Cao Y; Xiong L