The Current State of AI Skepticism: An Industry Survey

From “If We Build It…” to “Where Does It Actually Help?”

Over the past six months, the center of gravity has shifted. Adoption is rising, governance is getting serious, and skepticism is evolving from “whether to use AI” toward “how, where, and under what controls.” That shift is playing out clearly in eDiscovery, litigation support, and investigations.


  • Corporate legal teams are leaning in. In-house leaders report meaningful year-over-year increases in generative-AI use inside legal departments. (eDiscovery Today by Doug Austin)
  • eDiscovery is the beachhead. To nobody’s great surprise, this year’s State of AI in eDiscovery report shows document review emerging as the top opportunity, with a sharp rise in overall enterprise AI adoption and a simultaneous rise in formal policies that restrict or block open commercial GenAI tools. Translation: use is growing, but so is discipline. (Lighthouse Global)
  • Insurance and malpractice realities are now front and center. July analysis warns that LPL coverage may not clearly encompass AI-assisted work if it strays outside “professional services,” and that verification, disclosure, and policy hygiene are now table stakes. That’s skepticism turning into governance, and it’s timely for service providers who shoulder review-accuracy and data-handling risks. (Reuters)
  • Change management is finally formalizing. The eDiscovery community is talking explicitly about how to move teams from skepticism to trust through training, process redesign, and human-in-the-loop verification, not just tool rollouts. (ACEDS)
  • Workflows are getting sophisticated. We’re seeing more and more examples of hybrid workflows that use GenAI and more traditional TAR/CAL technology to perform QC on one another. The results seem clear: the combination is stronger than either technology on its own.
  • “Conversational AI evidence” is real. Relativity issued guidance this summer explaining how to preserve and analyze potentially discoverable AI-generated chats, prompts, and outputs. Implications for legal holds, collection, and privilege are now concrete eDiscovery questions, not hypotheticals. (Relativity)
  • Workplace-facing rules are clearer in 2025. European commentary this spring highlights prohibitions on workplace emotion-recognition systems under the AI Act, an example of the risk-based constraints that spill into HR disputes, investigations, and compliance matters your discovery teams may touch. (Legal Blogs)
  • Practical governance checklists are mainstream. July guidance for GCs: formalize AI policies, ensure explainability, and stand up monitoring and training programs. These are principles that eDiscovery vendors and managed-review providers can mirror in their own SOPs. (Reuters)
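The hybrid QC pattern mentioned above reduces, in the simplest case, to routing the two systems’ disagreements to human reviewers. Here is a minimal sketch of that idea in Python; the function and label names are illustrative, not any vendor’s API, and a real pipeline would wrap actual GenAI and TAR/CAL model outputs.

```python
# Hypothetical sketch: a GenAI classifier and a TAR/CAL model each label
# documents; agreements pass through, disagreements go to human review.

def cross_qc(doc_ids, genai_labels, tar_labels):
    """Return (agreed, disputed) lists of document IDs.

    genai_labels and tar_labels map doc_id -> "responsive" | "not_responsive".
    """
    agreed, disputed = [], []
    for doc_id in doc_ids:
        if genai_labels[doc_id] == tar_labels[doc_id]:
            agreed.append(doc_id)
        else:
            disputed.append(doc_id)  # human-in-the-loop review queue
    return agreed, disputed

# Example with stand-in labels:
docs = ["D1", "D2", "D3"]
genai = {"D1": "responsive", "D2": "not_responsive", "D3": "responsive"}
tar = {"D1": "responsive", "D2": "responsive", "D3": "responsive"}
agreed, disputed = cross_qc(docs, genai, tar)
print(disputed)  # only D2, where the systems disagree, needs a human call
```

The design point is that neither system auto-resolves the other’s calls; disagreement is a signal for human judgment, which is exactly the verification-first posture the survey data describes.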

  1. Hallucinations and over-reliance
    Even legal-tuned models can produce errors that sound plausible and are therefore hard to detect. The current standard of care in review remains: treat AI as a speed layer, not a truth oracle; verify before you rely. Recent industry pieces reinforce the need for human oversight and defensible workflows. (Reuters)
  2. Cost, ROI, and “good enough” accuracy
    Our January view matched the NYT-profiled concern: cost curves and error rates matter. Six months on, that hasn’t changed, but buyers are getting smarter about scoping AI to high-leverage tasks (summarization, classification, entity extraction) where marginal gains justify the spend and verification is feasible. Lighthouse’s 2025 data points to exactly those targeted uses. (Lucent, Lighthouse Global)
  3. Policy and disclosure gaps
    Carrier and court expectations are converging: disclose when appropriate, protect client data, and document your controls. Providers without written AI policies, audit logs, and QC protocols are taking on avoidable risk. (Reuters)
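A written policy is easier to defend when every AI-assisted decision leaves a trace. As a minimal sketch of the audit-log idea, the record below captures who verified what and whether the human overrode the model; the field names and JSON Lines format are illustrative assumptions, not any vendor’s schema.

```python
import datetime
import json

def log_ai_decision(log_path, doc_id, model, suggestion, reviewer, final_call):
    """Append one JSON Lines audit record for an AI-assisted review decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "doc_id": doc_id,
        "model": model,               # which AI system made the suggestion
        "ai_suggestion": suggestion,  # what the model proposed
        "reviewer": reviewer,         # who verified it
        "final_call": final_call,     # the human-confirmed outcome
        "overridden": suggestion != final_call,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Stand-in example: the reviewer overrides the model's suggestion.
rec = log_ai_decision("audit.jsonl", "D42", "genai-review-v1",
                      "responsive", "jdoe", "not_responsive")
print(rec["overridden"])  # the human override is captured explicitly
```

An append-only log like this is what turns “we verify before we rely” from a slogan into something a carrier or court can actually inspect.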

Where early 2025 was defined by doubt over costs and accuracy, the past six months have paired rising adoption with formal controls, malpractice-risk framing, and change-management playbooks. Skepticism hasn’t disappeared; it has matured into pragmatic governance and verification. AI is earning its place not by replacing judgment but by compressing the time between ingestion and insight. In this environment, the winning posture is neither hype nor refusal but careful, verification-first adoption. That is how incremental change becomes inevitable progress, and how skepticism itself becomes a strength.


  • Lighthouse: State of AI in eDiscovery 2025 (report & newsroom release). (Lighthouse Global)
  • Reuters (July 2025): AI malpractice/insurance risks; GC governance guidance. (Reuters)
  • ACEDS (July 31, 2025): From Skepticism to Trust: A Playbook for AI Change Management in Law Firms. (ACEDS)
  • Relativity Blog (July–Aug 2025): AI evidence guidance; value/measurement insights. (Relativity)
  • Wolters Kluwer (June 16, 2025): From optimism to skepticism: artificial intelligence in work—context on the broader workplace mood. (Legal Blogs)
  • Lucent Discovery (Jan 9, 2025): If We Build It, Will They Come? (Lucent)
