Risk assessment's insensitive toxicity testing may cause it to fail
Journal Articles
Abstract
BACKGROUND: Risk assessment of chemicals and other agents must be accurate to protect health. We analyse the determinants of a sensitive chronic toxicity study, risk assessment's most important test. Manufacturers originally generate the data on a molecule's properties, and if government approval is needed to market it, laws worldwide require the toxicity data to be generated using Test Guidelines (TGs), i.e. the test methods of the Organisation for Economic Co-operation and Development (OECD), or their equivalent. TGs have advantages, but for chronic exposures they test close-to-poisonous doses and have other insensitivities, such as not testing disease latency. This, together with the fact that academic investigators will not be constrained by such artificial methods, has created a de facto total ban on academia's diverse and sensitive toxicity tests in most risk assessment.

OBJECTIVE: To start and sustain a dialogue between regulatory agencies and academic scientists (and, secondarily, industry and NGOs) whose goals would be to (1) agree on the determinants of accurate toxicity tests and (2) implement them (via the OECD).

DISCUSSION: We analyse the quality of the data produced by these two incompatible paradigms, regulatory and academic toxicology; examine the criteria used to designate data quality in risk assessment; and discuss accurate chronic toxicity test methods.

CONCLUSION: Abundant modern experimental methods (and rigorous epidemiology), together with an existing systematic review system, at long last allow academia's toxicity studies to be used in most risk assessments.