Data-Driven, Human-Led: Navigating Toxicology’s Next Chapter

Toxicology is evolving quickly. As technological capabilities and data generation continue to expand, New Approach Methodologies (NAMs) are entering a new phase of development and application. For experts like Grace Patlewicz, Ph.D., and Nigel Greene, Ph.D., this shift is not just expected; it's essential.

Both see a field moving rapidly beyond traditional animal models, which carry inherent limitations, particularly species differences that restrict their relevance to human biology. To achieve safer, more predictive science, toxicology must evolve toward data-rich, human-relevant approaches.

The AI and Data Imperative

Companies working at the forefront of toxicology now generate billions of data points each week. High-throughput screening, advanced cell imaging, and transcriptomics offer immense scientific promise, but they also create a challenge of scale. As Dr. Greene notes, data generation today “far outstrips our ability to understand it.”

Artificial intelligence offers one path forward. When applied thoughtfully, AI can synthesize vast datasets drawn from literature, predictive models, and experimental outputs. In doing so, it frees scientists from repetitive manual tasks and allows them to focus on what matters most: interpretation, context, and decision-making.

Dr. Patlewicz views AI primarily as a facilitator: valuable for extracting, organizing, and connecting information, but not a substitute for scientific expertise. She stresses that tools like large language models should help construct the narrative around the data, not generate the data itself.

Integration Makes It Powerful

But the future of NAMs isn't defined by AI alone; it depends on integration. Dr. Greene highlights the growing potential of combining high-resolution imaging, toxicogenomics, proteomics, and cell-level analysis into cohesive, systems-level assessments. When integrated effectively, these approaches provide a far more complete picture of biological response than any single assay can deliver.

Dr. Patlewicz points to high-throughput metabolomics as a critical area for future development, especially in improving objective assessments like read-across. Systematizing how chemicals are metabolized could significantly strengthen confidence in non-animal methods.

Human Oversight Is Non-Negotiable

Despite the excitement surrounding AI, both experts agree: human oversight remains essential. Algorithms can "hallucinate," and decisions made without understanding their limitations carry serious risk. Dr. Greene warns that AI lacks ethical reasoning, and those developing it often overlook its broader implications. In toxicology, where human health and safety are at stake, that's a risk organizations can't afford.

Responsible application demands more than enthusiasm. It requires what Dr. Patlewicz calls "data literacy": the ability to code, query databases, understand model behavior, and construct reliable workflows. Without these skills, even the best tools can lead to misleading conclusions.

A Balanced Future

In the end, the goal isn't faster science; it's better science. As Dr. Greene puts it plainly: more data isn't always better data. True progress comes from generating the right data, using the right tools, to answer the right questions.

Toxicology’s next chapter will be written by professionals who can navigate complexity without getting lost in it. It needs scientists who use technology to enhance, not replace, judgment. At BlueRidge Life Sciences, we help clients design data-driven strategies that are grounded in scientific rigor and human expertise.

Schedule a complimentary consultation to discuss applying advanced tools—responsibly, effectively, and with purpose.

Frequently Asked Questions (FAQs)

1. Why is AI important in toxicology?

AI enables toxicologists to process and integrate vast amounts of data—from literature, assays, and predictive models—faster than traditional methods allow. This supports quicker decision-making and better prioritization, particularly in complex or data-heavy assessments.

2. Can AI replace human toxicologists?

No. AI is a tool, not a replacement for expert judgment. Human oversight is essential to ensure responsible use, validate outputs, and avoid errors or “hallucinations” that can arise from automated systems.

3. What skills are most important for toxicologists today?

Data literacy is increasingly critical. This includes coding, querying databases, understanding how predictive models work, and knowing how to structure data workflows for reliability and transparency.

4. What’s the value of integrating multiple data types (e.g., toxicogenomics, imaging)?

Integration strengthens predictions. When technologies like high-resolution imaging, transcriptomics, and proteomics are combined, they provide a more complete picture of biological response, far beyond what individual tests can offer.

5. Are there risks in using AI in regulatory submissions?

Yes, if not used responsibly. Regulatory bodies still require clear, validated evidence. AI can support that process, but it must be applied with a full understanding of model limitations, context of use, and data integrity.
