In February 2026, a CA firm in Mumbai submitted a response to a reassessment notice citing "Section 148A(b) of the Income-tax Act, 2025," a section that does not exist in the 2025 Act. The section they meant was Section 279(2). The response was drafted using ChatGPT.
The Assessing Officer flagged the incorrect citation, casting doubt on the entire response. The client's reassessment proceeded unfavourably, and the firm faced an uncomfortable conversation about their research process.
This is not an isolated incident. It illustrates a fundamental problem with using general-purpose AI for legal and tax work.
What "Hallucination" Means in Practice
When AI researchers talk about "hallucinations," they mean the model generating text that sounds correct but is factually wrong. For tax professionals, hallucinations typically manifest as:
Invented Section Numbers: ChatGPT might cite "Section 80CCC(1)(b)" when no such sub-clause exists. It has learned the pattern of Indian tax section numbering and generates plausible-looking but incorrect references.
Wrong Act References: The AI might cite a provision from the CGST Act when the question is about income tax, or reference the old IT Act 1961 section when the 2025 Act is in force.
Fabricated Circulars: Ask ChatGPT about a specific tax treatment and it may reference "CBDT Circular No. 15/2024," which either does not exist or says something entirely different from what the AI claims.
Outdated Information: The AI's training data has a cutoff date. Amendments made after that date are not reflected, but the AI will not tell you this. It will answer with confidence using outdated law.
Why This Matters More Than Speed
A CA's professional reputation rests on accuracy. When a CA cites a section number in a notice response, a tax opinion, or an advisory letter, the recipient (whether it is the client, the Assessing Officer, or a tribunal) expects that citation to be correct.
Professional Liability: An incorrect citation in a statutory filing or notice response can lead to adverse orders. The CA's professional liability insurance may not cover errors caused by unverified AI outputs.
Client Trust: Clients pay CAs for expertise. If a client discovers that a section number cited in their tax advisory was hallucinated by ChatGPT, the damage to trust is significant and hard to repair.
Regulatory Risk: ICAI guidelines require CAs to exercise due diligence. Citing AI-generated section numbers without verification could be seen as a failure of professional duty.
The Scale of the Problem
We tested ChatGPT (GPT-4) with 50 common Indian tax questions in January 2026. The results:
- 38 out of 50 responses contained at least one incorrect section reference
- 12 responses cited circulars or notifications that do not exist
- 22 responses referenced the old IT Act 1961 without noting that the 2025 Act was now in force
- Only 6 responses had all citations correct
This is not a criticism of ChatGPT. It is an excellent general-purpose tool. But it was not built for Indian tax law, and it does not have access to the full text of all Indian tax legislation, circulars, and notifications.
What "Citation Verification" Actually Means
A properly designed tax research tool should:
1. Search the actual source text: retrieve relevant provisions from the legislation itself rather than generating answers from memory
2. Cite specific sections with full text: not just "Section 80C" but the exact sub-section and clause, with the relevant text quoted
3. Verify that the cited section exists: before including a citation in the response, confirm it is a real section in the relevant Act
4. Note the applicable law: specify whether the citation is from the IT Act 1961, IT Act 2025, CGST Act, or a DTAA
5. Flag when information might be outdated: if a circular has been superseded or a provision amended, note this explicitly
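Step 3 above, checking that a cited section actually exists, is the simplest to illustrate. The sketch below is a minimal, hypothetical example: it extracts section references from draft text with a regular expression and checks them against a small hard-coded index. The `KNOWN_SECTIONS` data is invented for illustration; a real system would build this index from the full text of the legislation and would also handle circulars, notifications, and amendments.

```python
import re

# Hypothetical index of valid sections per Act. Invented for illustration;
# a real tool would load this from the actual text of the legislation.
KNOWN_SECTIONS = {
    "IT Act 2025": {"2", "279", "279(2)"},
    "IT Act 1961": {"80C", "148A", "148A(b)"},
}

# Matches references like "Section 80C", "Section 148A(b)", "Section 279(2)"
CITATION_RE = re.compile(r"Section\s+([0-9]+[A-Z]*(?:\([0-9a-z]+\))*)")

def verify_citations(text: str, act: str) -> list[dict]:
    """Extract section citations from draft text and check each
    against the known sections of the specified Act."""
    results = []
    for match in CITATION_RE.finditer(text):
        section = match.group(1)
        results.append({
            "section": section,
            "act": act,
            "exists": section in KNOWN_SECTIONS.get(act, set()),
        })
    return results

draft = "Relief is available under Section 279(2), read with Section 148A(b)."
for r in verify_citations(draft, "IT Act 2025"):
    status = "OK" if r["exists"] else "NOT FOUND in " + r["act"]
    print(r["section"], status)
```

Run against the 2025 Act, this flags "148A(b)" as not found, which is exactly the error in the Mumbai example: a section number carried over from the 1961 Act. The point is not this particular regex or data structure, but that verification is a lookup against source text, not a judgment call left to the language model.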
The Bottom Line
Speed is important, but not at the cost of accuracy. A CA who spends 30 minutes getting the right answer with verified citations is providing more value than one who gets an answer in 30 seconds with fabricated section numbers.
The right approach is not to avoid AI entirely. It is to use AI tools that are purpose-built for tax research, with citation verification built into the core of the system. That is exactly what TaxMarg does.