What is the compliance effect of experiencing a tax audit? Empirical studies typically report a positive effect, while laboratory experiments frequently report a negative effect. We show experimentally that whether a tax audit increases or decreases subsequent compliance hinges on the balance of learning opportunities, misperception of audit risk, and the confounding effect of censoring. After an audit, taxpayers lower their perceived risk of audit, consistent with a bomb-crater effect, when audit selection is exogenous. However, under an endogenous audit rule in which taxpayers can learn to reduce their audit risk by reporting higher income, learning effects outweigh probability misperception, resulting in an increase in post-audit tax compliance. Finally, we show that accounting for censoring effects can, by itself, eliminate the negative post-audit compliance effect frequently observed in laboratory experiments.
Tax audits are an essential instrument in achieving compliance. While the threat of audit may itself alter compliance behavior, what effect, if any, does carrying out an audit have upon the future compliance of its target (the post-audit effect)?
Under conventional assumptions on risk aversion, the seminal Allingham-Sandmo (AS) analysis of tax compliance (Allingham and Sandmo, 1972) predicts a zero post-audit effect for audited taxpayers who do not receive a fine, and a positive post-audit effect for fined taxpayers. The average post-audit effect is therefore predicted to be positive, but potentially difficult to distinguish statistically from zero in samples where a high proportion of audits do not lead to fines. This prediction of a positive average post-audit effect is broadly in line with a growing empirical literature on post-audit effects, but discordant with experimental findings, which are instead consistent with a (weakly) negative average post-audit effect. Such seemingly contradictory findings in the existing evidence base are consequential for tax enforcement policy: audits are costly, so determining how many to conduct, and how best to allocate them, are key policy questions. If, as the experimental evidence suggests, auditing taxpayers reduces their subsequent compliance, this weakens the case for auditing and strengthens the case for alternative compliance measures, e.g., enhanced taxpayer support; tax administrations should then seek to maximize the perceived risk of audit while minimizing the number of audits they actually perform. If, however, auditing increases compliance, as the empirical literature suggests, tax administrations should instead seek to maximize the number of audits they perform.
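The AS prediction can be illustrated with the standard static setup; the notation below follows the textbook convention for the model and is not taken verbatim from this paper.

```latex
% A taxpayer with true income $W$ declares $X \le W$, taxed at rate $\theta$.
% With probability $p$ an audit detects the undeclared income $W - X$,
% which is penalized at rate $\pi > \theta$. The taxpayer solves:
\max_{X}\; \mathbb{E}[U] \;=\; (1-p)\,U\!\left(W - \theta X\right)
  \;+\; p\,U\!\left(W - \theta X - \pi\,(W - X)\right)
```

In this static model, an audit that yields no fine changes none of the taxpayer's parameters, so the predicted post-audit effect is zero; a fine acts as a lump-sum reduction in wealth, and under decreasing absolute risk aversion a poorer taxpayer evades less, yielding a positive post-audit effect.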
The sign and size of the post-audit effect are important ingredients in determining optimal enforcement of the tax system. Prior literature has struggled to agree on the sign of the effect, however, with field studies generally pointing to a positive effect but experimental findings often pointing to a negative one.
As discussed in the Introduction, much of the existing field literature interprets findings of a positive post-audit effect as signifying both (i) the presence of a rational deterrence effect, operating via learning, and (ii) the absence of a bomb-crater effect. From this perspective, the apparent presence of the bomb-crater effect in laboratory outcomes must then be interpreted as evidence of a failure of external validity of laboratory experiments. The perspective suggested by our findings is rather different: evidence, from both the field and the laboratory, simultaneously comprises effects due to learning, probability misperception (driving a bomb-crater effect), and censoring. Only the sizes of these competing effects may differ systematically between the field and the laboratory. This view is supported by the observation that, as we harmonize laboratory conditions towards those in the field (a process that should bring the sizes of these effects closer to those observed in the field), our laboratory findings indeed converge towards those in the field. Moreover, from this alternative vantage point, one might go as far as to argue that, given the systematic differences between laboratory and field settings in the existing literature, it would be more surprising if outcomes from the experimental literature converged with those from field studies than if they diverged.