I have been thinking of a simple way of expressing the power of bringing legal rules from landlord-tenant law, and perhaps law more broadly, into the economics of the private rental housing market and residential tenancies. My work titled "Leases Over Real Property" published in Regional Science and Urban Economics is a longer, more complicated way of thinking about this problem. This blog post serves as an amuse-bouche to that manuscript.
Consider the rule of dependent covenants, the first of the major landlord-tenant rules that emerged at common law to protect tenants as the law of landlord and tenant moved from feudal property law and caveat emptor to contract law and caveat venditor. Rather ironically, the rule of dependent covenants in US landlord-tenant law emerges out of a case from Wisconsin, Pines v. Perssion, 14 Wis. 2d 590, 111 N.W.2d 409 (1961), the same state where Matthew Desmond set the ethnographic work on low-income tenants in his Pulitzer Prize-winning book Evicted: Poverty and Profit in the American City. The rule in Pines makes clear that the tenant's duty to pay rent is contingent on a habitable property.
Now let us explore the simple economics of such a rule.
Let the landlord make an investment in maintenance i at marginal cost c that mitigates the probability p(i) of a defect that relieves the tenant of the duty to pay rent r (i.e., p' < 0 and p'' > 0). Consider now the effect of a binding rent ceiling r. The landlord chooses investment according to
i* ∈ arg max [1–p(i)] · r – ci.
The first order condition is –p'(i) · r = c. It is easy to see that under the assumptions we have made, which are very few, the landlord's optimal level of investment in maintenance i* is increasing in the rent ceiling r, that is,
∂i*/∂r > 0.
Thus, rent control decreases investment because it decreases the payoff associated with the "good" state. I call this the Friedman-meets-Shavell Theorem.
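The comparative static can be checked numerically. Here is a minimal sketch, assuming the hypothetical functional form p(i) = exp(−i), which satisfies p' < 0 and p'' > 0, and unit marginal cost c; the numbers are purely illustrative:

```python
import numpy as np

def optimal_investment(r, c=1.0):
    """Landlord's problem: max_i [1 - p(i)] * r - c * i, with p(i) = exp(-i).

    p(i) = exp(-i) is an assumed functional form for illustration only.
    Solved by grid search; the closed form here is i* = ln(r/c).
    """
    i_grid = np.linspace(0.0, 10.0, 100_001)
    payoff = (1.0 - np.exp(-i_grid)) * r - c * i_grid
    return i_grid[np.argmax(payoff)]

# Optimal maintenance rises with the (controlled) rent r, per the FOC -p'(i) r = c.
investments = [optimal_investment(r) for r in (2.0, 4.0, 8.0)]
```

Tightening the ceiling from r = 8 to r = 2 cuts maintenance: the Friedman-meets-Shavell result in numbers.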
In Foundations of Economic Analysis of Law, Steven Shavell describes the tradition in the law of negligence of exogenous levels of precaution (x*, y*) for the motorist and the pedestrian, set by the court, where if x ≥ x* losses lie where they fall (p. 180). However, the reasonable level of care to minimize losses for a person of type i can be expressed endogenously as the solution to the incentive compatibility (IC) constraint
xᵢ* ∈ arg maxₓ EUᵢ(x,y) (IC)
where Uᵢ is the utility function for a defendant of type i expressed as a function of defendant and plaintiff precaution x and y, respectively, and E is the expectation operator. Precisely, to "take reasonable steps to mitigate losses" is to act rationally relative to the costs and benefits faced by the agent in that scenario. For instance, in Australia, the "reasonable person" standard in tort law is applied contextually, to the person in the shoes of the defendant (see McHale v Watson, [1966] HCA 13 for a case on negligence standards for children), diverging sharply from the objective reasonable or prudent person established by the Court of Common Pleas in the famous 1837 case of Vaughan v Menlove. Thus, to add a negligence standard of "reasonable steps to mitigate" is to impose an additional incentive compatibility constraint in the second-best solution.
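To make the endogeneity concrete, here is a minimal sketch of a type-contingent standard of care, under an assumed expected-cost function a·x + L·p(x) with p(x) = 1/(1+x); the functional form, the parameter names a and L, and the numbers are all hypothetical:

```python
import numpy as np

def reasonable_care(a, L=100.0):
    """Endogenous standard: the x minimizing a*x + L*p(x), with p(x) = 1/(1+x).

    a is the type-specific marginal cost of precaution; L is the harm if the
    accident occurs. Both are illustrative assumptions, solved by grid search.
    The closed form here is x* = sqrt(L/a) - 1.
    """
    x_grid = np.linspace(0.0, 50.0, 500_001)
    total_cost = a * x_grid + L / (1.0 + x_grid)
    return x_grid[np.argmin(total_cost)]

# A type with a high cost of precaution (say, a child, a = 4) owes less care
# than one with a low cost (an adult, a = 1), echoing McHale v Watson.
care_adult = reasonable_care(a=1.0)
care_child = reasonable_care(a=4.0)
```

The "standard" is personalized and continuous in the type parameter, exactly as the IC formulation suggests.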
Why do I write this? Well, mainly because I think contract theory (economics) has something to learn from contract theory (law). It would be too Talmudic of a reading of Bolton and Dewatripont's Contract Theory to assume that an action must be so-called "hidden" or "unobservable" (pp. 131-132) to warrant the inclusion of an incentive constraint in the problem of the second best. Incentive constraints can be used to endogenously model what is rational in the circumstances, something the law of negligence frequently asks of us in determining the appropriate standard of care.
However, and deceptively, exogenous negligence "standards" are not much of a standard at all but rather a rule! Endogenous negligence standards are true standards, in that they are personalized and continuous. The typical "rules versus standards" comparative analysis applies. Rather counterintuitively, the endogenous negligence standard is first-best, but it cannot always be implemented due to imperfect information. Though, if the standard of care were meant to be a true standard, as its language would suggest, then it must be endogenous.
Emir Kamenica’s interpretation of mandatory disclosure under Brady v Maryland (known as Stinchcombe disclosure in Canada) in his 2011 American Economic Review article with Matthew Gentzkow, Bayesian Persuasion, is incorrect. Kamenica writes:
In Brady v. Maryland, the Supreme Court of the United States ruled that a prosecutor violates the Due Process Clause of the Fourteenth Amendment when he fails to disclose material evidence favorable to the accused. Since a prosecutor maximizing convictions would always willingly choose to report any evidence unfavorable to the accused, our assumption that he discloses any evidence, i.e., any signal realization, seems justifiable. (p. 2599)
This is simply a misstatement of the law of disclosure in criminal proceedings: a prosecutor must produce any material evidence to the defense, but a prosecutor need not (and would not) disclose this evidence to the trier of fact, i.e., the judge or jury.
The more germane concept in the law of evidence is the notion of free or full proof, a defining feature of Anglo-American adversarial legal systems relative to their continental counterparts, which states that all relevant evidence should be put before the trier of fact. Alas, Kamenica likely does not have Evidence Law Adrift in his library.
Post Scriptum: Indeed, Kamenica is doubly wrong because residual discretion, the balancing of probative value against the potential for prejudice, is always and everywhere operating on admissibility (see, e.g., Rule 403 of the Federal Rules of Evidence). It's abundantly clear that Kamenica is cherry-picking his authorities...
Eric Posner and Glen Weyl spill much ink in the first chapter of their science-fiction book Radical Markets to convince us that abolishing private property rights would enable infrastructure development. They appeal to the problem of holdout landowners and test their claim by showing a negative cross-country relationship between public infrastructure spending as a percentage of GDP and the strength of property rights.
However, much of infrastructure is private. Governments rarely carry out these activities on their own, for two main reasons. First, large up-front investments pose sizable fiscal burdens, making them economically and politically costly. Second, infrastructure projects are complex construction projects, an area where government is rarely the most efficient provider. Consequently, governments typically engage private sector operators through contracts, sometimes called public-private partnerships (PPPs or P3s), or sometimes just procurement.
Data. Using the World Bank's Private Participation in Infrastructure (PPI) Database, a large database of several thousand private infrastructure projects across the world spanning all major sectors, I identify a robust, positive relationship between property rights and infrastructure project size, as measured by investment and physical capacity. Precisely, I find that a one standard deviation increase in property rights is associated with a 25-50 percent increase in project size. The size and robustness of this positive gradient make it one of the most pronounced features of modern infrastructure project data.
Theory. Besley and Ghatak offer a theory that better fits the data in their 2001 Quarterly Journal of Economics article Government Versus Private Ownership of Public Goods. There are two parties, a government g and a private operator o. Each party makes an investment, collected in the vector I = (i_g, i_o). The gross social benefit from the investments is B(I) = b(i_g, i_o). It is natural to assume that the gross benefit is increasing and concave in each investment, b_{i_j} > 0 and b_{i_j i_j} < 0. Suppose also that the investments are (weak) complements, b_{i_g i_o} ≥ 0. Each party has a valuation for the project, v_g, v_o > 0, respectively. With no contracting problems, they choose investments i_g and i_o to maximize joint surplus
(v_g + v_o) B(I) – i_g – i_o.
Let i_j* denote the joint-surplus-maximizing investment by party j ∈ {g, o}. It solves the following Lindahl-Samuelson-type rule:
(v_g + v_o) b_{i_j}(i_g*, i_o*) = 1 for j ∈ {g, o}.
Now consider a more realistic environment with contracting imperfections: suppose the contract reached between the two parties is incomplete. Precisely, investments in the project cannot be specified ex ante. Once investments are sunk, the parties split surplus according to a Nash bargaining rule in which the government has bargaining weight γ = 1/2 and the private operator 1 – γ. Each party's privately optimal ex ante investment then satisfies
b_{i_j}(i_g, i_o) = 1/(γ_j v_j), where γ_g = γ and γ_o = 1 – γ.
One can see immediately from the fact that γ_j < 1 that 1/(γ v_g) > 1/(v_g + v_o) and 1/[(1–γ) v_o] > 1/(v_g + v_o). Note that this result is not specific to public goods; it is the classic under-investment result in production contracts, as in Grout (1984). Since b_{i_j} > 0 and b_{i_j i_j} < 0, each party's investment falls below i_j* under incomplete contracting. Hence, under-investment by both parties.
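A numerical sketch of the under-investment result, assuming the additively separable benefit b(i_g, i_o) = ln(1 + i_g) + ln(1 + i_o) (so the investments are only weakly complementary) and the illustrative parameters v_g = v_o = 2, γ = 1/2; none of these choices come from Besley and Ghatak's paper:

```python
import numpy as np

# Illustrative parameters (assumptions, not a calibration).
v_g, v_o, gamma = 2.0, 2.0, 0.5

def argmax_investment(weight):
    """Solve max_i weight * ln(1 + i) - i by grid search.

    With b_j(i) = ln(1 + i), the FOC weight/(1 + i) = 1 gives i = weight - 1
    (truncated at zero).
    """
    i_grid = np.linspace(0.0, 10.0, 100_001)
    return i_grid[np.argmax(weight * np.log1p(i_grid) - i_grid)]

# First best: investment is weighted by the joint valuation v_g + v_o.
i_first_best = argmax_investment(v_g + v_o)
# Incomplete contract: party g's FOC uses gamma * v_g, party o's (1 - gamma) * v_o.
i_g_bargain = argmax_investment(gamma * v_g)
i_o_bargain = argmax_investment((1 - gamma) * v_o)
```

With these numbers the first-best investment is 3 while each party's bargaining investment collapses to 0: hold-up in its starkest form.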
Perhaps hold-up is a greater problem than holdouts in the context of private infrastructure.
In his 2010 Journal of Legal Analysis article Costly Screens and Patent Examination, Jonathan Masur argues that costly patenting has benefits: costly screens filter out the worst patents, which have no social value. He writes:
[t]his price barrier [patenting costs] forces potential applicants to draw upon private information about the value of their inventions, information that the patent office is otherwise unable to obtain [...] patent examination is properly understood as a price-setting mechanism. (p. 688)
I write down a simple theoretical framework that nests Masur's (2010) selection effect but generates a competing outside option effect, which deters firms and entrepreneurs from learning the value of their ideas in the first place, resulting in fewer good ideas. Masur's notion of innovation and ideas as private information is wanting as a theory of information production. Whether testing the market with or without a patent is more efficient from a social planner's perspective is unclear. However, market testing serves an important function in product innovation, and high patenting costs may deter firms from testing the market for new varieties about which they are uncertain.
Model. Firms take a draw of their idea quality i, where i ~ F(i) with density f(i), and an idea has revenue potential r(i) that is increasing in its quality (i.e., r'(i) > 0). The expected revenue potential is E[r(i)]. However, the private value of the idea is unknown: to learn i, the firm must pay a fixed patenting cost c > 0. If the idea is of insufficient quality to create revenue potential greater than the patenting cost, the firm abandons the idea, earning 0. Let ī denote the minimum viable idea, defined by r(ī) = c.
Ex ante expected profit is Eπ = ∫_{ī}^∞ [r(i) – c] f(i) di, where ī is the minimum viable idea satisfying r(ī) = c. By Leibniz's rule, expected profit is strictly decreasing in the patenting cost c:
dEπ/dc = –(dī/dc)[r(ī) – c] f(ī) – [1 – F(ī)] = –[1 – F(ī)] < 0,
since r(ī) = c eliminates the boundary term.
Masur (2010) observes the selection effect: E[r(i) | r(i) > c] is increasing in c, so the average quality of patented ideas rises with the patenting cost. He does not observe the outside option effect: expected profit Eπ is decreasing in the patenting cost, which means fewer new varieties or products are tested by firms (the intensive margin) and possibly even less entrepreneurial activity (the extensive margin).
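Both effects can be exhibited in one simulation, assuming for illustration that i ~ Uniform(0, 1) and r(i) = i, so the viability cutoff is simply ī = c:

```python
import numpy as np

rng = np.random.default_rng(0)
ideas = rng.uniform(0.0, 1.0, size=1_000_000)  # idea quality i ~ U(0, 1), assumed

def selection_mean(c):
    """Average quality of patented ideas, E[r(i) | i > c], with r(i) = i."""
    return ideas[ideas > c].mean()

def expected_profit(c):
    """Ex ante expected profit, E[(r(i) - c) * 1{i > c}]."""
    return np.mean(np.where(ideas > c, ideas - c, 0.0))

# Selection effect: average patented-idea quality rises with the cost c.
selection_low, selection_high = selection_mean(0.2), selection_mean(0.6)
# Outside option effect: ex ante expected profit falls with c.
profit_low, profit_high = expected_profit(0.2), expected_profit(0.6)
```

Raising c improves the average patent while shrinking the pie of ideas worth learning about, which is exactly the tension in the text.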
A political science or philosophy student with any modicum of knowledge of what legitimizes authority will know the origins and usage of the Greek word demos in our society: Ancient Greek city-states and their citizens. What one may not know is the Latin synonym for the demos. Wiktionary suggests that populus is the Latin synonym. Indeed, one could find the word in the elementary vocabulary of Chapter III of Wheelock's Latin.
Ginsburg suggested the same to me once at a café in Chicago.
But a lesser known, but equally strong, candidate is cīvis, the Latin for "citizen," found much later in the more advanced vocabulary covered in Chapter XIV of Wheelock.
In Richard Epstein's article Rent Control and the Theory of Efficient Regulation, he advances the idea that rent control constitutes a taking, as it carves the interest in reversion out of the fee simple absolute and transfers it to the tenant. In plain English, the landlord is unable to re-enter the estate. He writes, in more precise legalese:
[r]ent control [...] take[s] part of the landlord's interest in reversion and transfer[s] it to the tenant [...] by compelling the landlord, usually in the context of a lease renewal, to convey an additional term of years at a [controlled price]. (p. 744)
The bundle of rights given to the landlord under the fee simple is diminished under many modern forms of rent control legislation. Rent control, and by this I refer to all forms of rent control and beyond, including second-generation vacancy decontrol and the repeal of no-cause evictions, transforms the leasehold, which is supposed to run for a fixed term, into a life estate – "to X for life" – and thereby may constitute an infringement on private property for which the landlord should receive just compensation.
A recent case out of New York nearly made it all the way to the Supreme Court of the United States. Justice Clarence Thomas wrote in 74 Pinehurst LLC v. New York, 601 U.S. ___ (2024), in a statement respecting the denial of certiorari, that
The petitioners’ complaints primarily contain generalized allegations about their circumstances and injuries. But, to evaluate their as-applied challenges, we must consider whether specific New York City regulations prevent petitioners from evicting actual tenants for particular reasons. Similarly, petitioners’ facial challenges require a clear understanding of how New York City regulations coordinate to completely bar landlords from evicting tenants. The pleadings do not facilitate such an understanding. However, in an appropriate future case, we should grant certiorari to address this important question.
The will of the landlord now includes his tenant:
To T for life at R dollars per month at X per cent growth rate.
Post Scriptum: Ilya Somin has written a paper arguing that zoning regulations violate the Fifth Amendment's Takings Clause as well.
Often we get to run only one or two statistical tests on the data. One coefficient, one standard error. Yes, there are robustness checks, but these are often not grounded in deep theory. How reliable is a single OLS or 2SLS estimate? An answer to this question in the philosophy of science may lie in analytic philosophy, in particular epistemology. If you aren't familiar with contemporary epistemology after justified true belief (i.e., the Gettier problem), you should be. For the purposes of this blog post, you need to know that Nozick's famous response to skeptics and to Gettier, outlined in his so-called "Tracking Theory" of knowledge (I call the key condition the "sensitivity requirement"), goes as follows:
P is true
S believes that P
If it were the case that (¬P), S would not believe that P
If it were the case that P, S would believe that P
As empiricists, we typically attempt to know whether P is true in the data through a statistical test. Falsification and placebo checks, say, by using a control group unaffected by a particular policy or by using data from before a policy is introduced, are natural extensions of this epistemology: they ask whether a particular statistical test would still report P when ¬P is the state of the world.
One study that pioneered this technique is Topalova's 2010 AEJ: Applied paper "Factor Immobility and Regional Impacts of Trade Liberalization: Evidence on Poverty from India." She writes:
"[T]he estimates presented in the previous section may simply be a spurious correlation. To address this concern, I conduct a falsification test of whether changes in poverty or average consumption from 1983 to 1987 are correlated with changes in tariffs from 1987 to 1997. If the tariff drops are correlated with pre-existing trends in poverty and consumption, the coefficients on tariff should be similar to those estimated with the actual pre- and post-reform data." (pp. 17-18)
And, in the end, the placebo estimates are near zero and insignificant. Thus, such sensitivity checks are grounded in epistemology.
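The logic of the falsification check can be sketched in a small simulation, a stylized difference-in-differences with an assumed data-generating process (not Topalova's actual design or data):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
treated = rng.integers(0, 2, size=n).astype(float)

# Two pre-periods and one post-period; the assumed true effect (2.0) hits only post.
y_pre1 = rng.normal(0.0, 1.0, size=n)
y_pre2 = rng.normal(0.0, 1.0, size=n)
y_post = rng.normal(0.0, 1.0, size=n) + 2.0 * treated

def did(y_after, y_before, d):
    """Difference-in-differences: change for treated minus change for control."""
    return (y_after - y_before)[d == 1].mean() - (y_after - y_before)[d == 0].mean()

actual_estimate = did(y_post, y_pre2, treated)   # recovers the true effect
placebo_estimate = did(y_pre2, y_pre1, treated)  # sensitivity check: near zero
```

The placebo estimate is near zero precisely because ¬P holds in the pre-period: the test does not believe P when P is false, which is Nozick's sensitivity condition in regression form.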
The most common key changes are up/down a semitone and to the fifth. These modulations happen to be the minimum and the maximum harmonic distance from the original key. One simply moves to the left and to the right of the clock that is the circle of fifths. This idea falls broadly within the category of harmony within music theory, in particular, modulation.
It might seem obvious, but I can't find any particular source that points this out directly, and so I thought I should make note of it here.
Since time immemorial, law schools across the world have put Hamlet on trial, often with him invoking some sort of insanity defense, for mistakenly killing Polonius in the Queen's closet in Act 3, Scene 4 of Hamlet, Prince of Denmark.
This line of inquiry is second-best. The defense of mental disorder is too sui generis (Winko at para. 168) of a regime to be adapted to the facts of Hamlet, as he likely fails the M'Naghten rules. There is no such “Hamlet syndrome”.
We should instead put Claudius on trial, evaluating Hamlet qua witness.
His 'antic disposition' or 'madness', the former a famous line in the play and the latter referenced 24 times throughout, has much to tell us about the weight of demeanour relative to mistakenness in the way the law of evidence evaluates credibility. To epistemically prejudice Hamlet for his behaviour after losing his father is to fail to track the truth in Hamlet. Though, the Prince doesn't quite meet the standards set out for witnesses in Toohey or Vetrovec. Freud says he suffers from mere melancholia, unlike the serious mental defect in Toohey. Goethe says he is of "a lovely, pure, noble, and most moral nature", unlike the accomplice in Vetrovec.
I dub these mildly unreliable narrators on the stand as “Hamlet witnesses".
Contrasted with Toohey and Vetrovec, the Hamlet witness is much less of a discerning figure. He is, instead, much more commonplace. Despite his nobility and intellect, qua witness Hamlet is actually a very ordinary person going through a rather unorthodox experience. Law might refer to him as the reasonable man, but he is known in literature as the everyman archetype, just like Truman Burbank, Josef K., or Willy Loman. Like Truman, he is "[t]h' observ'd of all observers" (3.1.168); like K., he endures "law's delay" (3.1.82); like Willy, he contemplates whether "to be, or not to be" (3.1.64). As William Hazlitt writes, capturing what Harold Bloom calls Hamlet's "universalism", "it is we who are Hamlet." In that sense, Hamlet qua witness is, as Einstein said of God, "but a reflection of human frailty."
For those reading who don't know, Baby M is a case involving a dispute over the parentage of a child born of a surrogacy contract. The child was awarded to the biological father on a "best interests of the child" analysis, and the surrogacy contract was invalidated on grounds of public policy. I would challenge this ruling. I argue that Baby M shows us that contracts should be enforced not only on utilitarian or efficiency grounds, or purely for promise, but also because contracts exist to insure us against future states of the world, such as marrying late, being infertile, or being gay.
What I will argue is that contracts-as-insurance is already present in the way we conceive of contracts. In the optimal contracting model with a hidden action, we characterize the optimal contract as one that "co-insures" the principal and the agent across the good and bad states of nature. This language emerges in the law as well. The law of options contracts, a promise which meets the requirements for the formation of a contract and limits the promisor's power to revoke an offer, is a manifestation of this principle; options contracts protect us from the risk of price fluctuation. Corbin (1914) famously wrote that "time is of the essence" (p. 662) in the options contract. Indeed, per Black-Scholes, a call option with no expiry date is intrinsically worthless as a contract, as it simply takes the value of the spot price (i.e., no contract), leaving it in want of consideration. The earn-out clause in M&A contracts is one such type of options contract that re-allocates risk.
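The Black-Scholes point can be checked directly: as the expiry date recedes, the value of a call on a non-dividend-paying asset (with r > 0) converges to the spot price itself, so the "contract" adds nothing over simply holding the asset. A sketch with illustrative parameters:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative parameters (assumptions): spot 100, strike 100, r = 5%, vol = 20%.
short_dated = bs_call(100.0, 100.0, 0.05, 0.2, T=1.0)   # well below the spot
no_expiry = bs_call(100.0, 100.0, 0.05, 0.2, T=500.0)   # approaches the spot, 100
```

With a finite term the option is worth strictly less than the asset, which is precisely why "time is of the essence": the expiry date is what gives the promise its content.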
If the contracts-as-insurance theory were so, then efficient breach and expressive theories of contract would yield the incorrect resolution to the case of Baby M. They render contracts as "words, words, mere words" (W. Shakespeare, Troilus and Cressida, Act 5, Scene 3). Perhaps the most famous theory of contract is that advanced by Charles Fried in Contract as Promise. Both insurance and promise theories lend support to contractual enforcement. However, the contracts-as-insurance theory is distinct from Charles Fried's contract-as-promise. I would contend that contract-as-insurance is prior to contract-as-promise: contracts are promises, yes, but we make promises because we want insurance against the world. We contract because we want to co-insure ourselves and the other party against states of nature, just as in the optimal contracting model with a hidden action.
A consequence of the contracts as insurance theory is that it serves as justification for freedom of contract: in perfect markets, all risks can be contracted upon. However, we lack a functioning insurance market for children – thus the need for contractual enforcement. And it's true – one need only look to kinship practices in the developing world to see that children and family are an important source of insurance. The practice most certainly exists outside of the developing world as well. Robert Munsch's 1986 classic children's book Love You Forever conveys the importance of a child's care for the elderly quite nicely. In Death of a Salesman, Willy Loman tragically remarked that his life insurance policy rendered him “worth more dead than alive”; perhaps he forgot that his (less preferred) son Happy told him that he would “retire [him] for life” earlier in the play. The duty may be similar to the kind of divine obligation that Antigone feels towards the proper burial of her brother, Polynices. Or it may be purely for economic reasons, as in The Grapes of Wrath, where the cotton-picking Joad family pools their daily wages to buy food. This should at least illustrate the point that the concept of risk-sharing and co-insurance within the family unit is not restricted to the developing world, though development and family’s insurance may be linked.
Lord Brennan writes in a 2000 article titled The Actuary and The Law that “Lloyd's [has always] insure[d] almost anything [...] (according to one history) house-breaking, highway robbery, death by gin drinking, death of horses and `assurance of female chastity' — of which all but the last are still insurable" (p. 801). And yet, the law of contracts fails to insure against the risk of infertility.
The contract in Baby M should have been enforced.
The Dobbs leak embodies some of Fuller's eight desiderata of law from The Morality of Law, namely that law be relatively stable over time and non-retroactive. The Dobbs decision marks a stark change in the constitutionality of statutes which criminalize late-stage abortions. The leak ensured that the move from the regime in Roe to the one in Dobbs was continuous rather than discrete. The leaker of the Dobbs opinion, conservative or liberal, may deserve a Nobel Peace Prize.
Post Scriptum: The decision itself embodies the desideratum that law be publicly promulgated. In that sense, Dobbs is riddled with inner morality. The holding itself is just a beautiful piece of art:
Held: The Constitution does not confer a right to abortion; Roe and Casey are overruled; and the authority to regulate abortion is returned to the people and their elected representatives.
Find all of the constitutional violations in the opening scene (roughly pp. 1-6, or the first two paragraphs) of Franz Kafka's The Trial.
Feel free to use information contained within the text beyond the recommended pages.
___________________________________________
Hint: there are at least four clear violations.
Answers: sections 8, 9, 10(a), 10(b), 11(a), 11(b) of the Charter. For analysis, see Part I of my podcast linked here, from 16:15 to 39:00.
The automatism defense will always succeed if the cause was external to the accused. Suppose you are holding a knife and a third person knocks your elbow, causing you to stab me. You would have been made into an automaton by an external force. This type of external, non-insane automatism defense results in acquittal, while automatism that is internal, endogenous, or stemming from some disease of the mind will typically fall under s. 16 (NCRMD) or s. 33.1 (extreme intoxication) of the Criminal Code.
In that sense, automatism is kind of like the exogeneity assumption in instrumental variables (IV) regression in econometrics and statistics: Z hits X which causes Y:
Z → X → Y
If Z is endogenous or internal to X and Y, then the automatism defense is not made out. Mental disorder automatism more aptly falls within the ambit of ss. 16 and 33.1.* There is one subtle difference between the law and econometrics: the law says that "Z caused Y" whereas econometricians say "X caused Y", while in fact both are true. The difference is that, for automatism to succeed, Z must be the only channel through which X affects Y (conditional on covariates). In fact, this is exactly the exclusion restriction of the simple instrumental variables regression: we replace the endogenous regressors xᵢ in the matrix X with the instruments zᵢ to construct a new matrix Z.
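The analogy can be made concrete with the simple IV estimator, β̂_IV = (Z′X)⁻¹Z′y. A simulation with assumed data, reading Z as the elbow-knock, X as the act, and Y as the harm:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
z = rng.normal(size=n)                # exogenous shock: the external "elbow knock"
u = rng.normal(size=n)                # unobserved confounder (say, intent)
x = z + u + rng.normal(size=n)        # the act: moved by z but also by u
y = 2.0 * x + u + rng.normal(size=n)  # the harm; assumed true causal effect is 2

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)[1]  # biased upward by u ("intent")
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)[1]   # recovers ~2: z moves y only via x
```

OLS blames the act and the intent together; IV isolates the part of the act attributable to the external shock, which is precisely the part the automatism defense excuses.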
Automatism, rather counterintuitively, negates the voluntariness requirement of the actus reus, as opposed to the mens rea which is negated by ss. 16 and 33.1. I think my analogy of exogenous automatism as IV is an intuitive explainer as to why that is so. See Aristotle's Nicomachean Ethics (Bk. III 1110b).
* Though, see R. v. Rabey, [1980] 2 S.C.R. 513, and R. v. Stone, [1999] 2 S.C.R. 290, for the edge case of psychological blows.
Lazy bioethicists often argue that the indeterminacy of the beginning of life justifies the taking of a life during an abortion. In the law, we sometimes say that abortion statutes are impermissibly vague given this indeterminacy.
A stark line is drawn in our lex crimen (criminal law) between infanticide (ss. 233 and 243 of the Criminal Code) and abortion (s. 251 [repealed]). The SCC wrote in Morgentaler, striking down s. 251, that "the protection of the foetus [...] is a perfectly valid legislative objective" (at p. 181). Justice Fish, more recently in 2013, wrote clearly in Levkovic for a unanimous Court, which upheld s. 243 against a vagueness challenge concerning when during gestation a fetus becomes the body of a child, that "the state’s interest in late-term failed pregnancies is both ascertainable and well established" (at para. 77). The Court emphasizes the clarity of the requirement that the child would have been born alive.
Why is law able to find a distinction which bioethicists cannot? I would suggest that the intermediate value theorem offers a rejoinder.
Intermediate Value Theorem. Suppose that f is continuous on [a, b] where f(a)≠f(b) and μ is between f(a) and f(b). Then f(c)=μ for some c ∈ [a,b].
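The constructive face of the IVT is the bisection method: if f is continuous and μ lies between f(a) and f(b), repeatedly halving the interval must trap a c with f(c) = μ. A minimal sketch (the function and target below are illustrative choices, not anything from the text):

```python
def bisect(f, a, b, mu, tol=1e-10):
    """Find c with f(c) = mu, given f continuous on [a, b] and mu between f(a), f(b)."""
    lo, hi = (a, b) if f(a) <= f(b) else (b, a)  # orient so f(lo) <= mu <= f(hi)
    while abs(hi - lo) > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < mu:
            lo = mid  # invariant preserved: f(lo) < mu
        else:
            hi = mid  # invariant preserved: f(hi) >= mu
    return (lo + hi) / 2.0

# Illustrative: f(x) = x**2 is continuous on [0, 2] with f(0) = 0 < 2 < 4 = f(2),
# so the IVT guarantees some c with f(c) = 2, namely sqrt(2).
root = bisect(lambda x: x * x, 0.0, 2.0, mu=2.0)
```

The algorithm never learns where the crossing is in advance; continuity alone guarantees it exists, which is exactly the argument in the text: we need not know when life begins to know that it begins.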
The idea is this: while it may not be known exactly when life begins, we can be sure that life does, in fact, begin. Law recognizes this and applies it through the penal code to distinguish between abortion and infanticide. If your retort is that you believe life begins very, very late, then you've abandoned the use of calculus and moved into speculation, and perhaps even into a domain where you may accept infanticide. Indeed, Philip K. Dick had a similar intuition when he wrote "The Pre-Persons."
Bad Canadian lawyers often say that we have a so-called "constitutional right to abortion". That is plainly false. First, there are no "constitutional rights to [this or that]" within the justified limits to judicial review of legislation in our constitutional democracy, but only invalid state action, either prima facie or as-applied to the facts. (This is an easy tell for an amateur in constitutional law). Second, we more or less don't have positive rights under s. 7 of the Charter at all (see Gosselin). Finally, Morgentaler was clearly a minimalist holding upon inspection of the split and reading the dissent. The current state of our penal code in Canada reflects an underlying policy preference not to criminalize late-stage abortions, but the medical profession (and the law of some countries) takes a vested interest in the late-stage foetus.
Though, to be sure, the defence of necessity would always apply (as in calculus, the abacus).
I've been compiling a list for first-year law students of ten of the most important legal inventions, which will rear their heads in law school but may not be historically situated:
separation of powers and judicial review
the rule of law
natural law
unwritten law
the common law
the class action
the modern corporation
criminal procedure
the rules of evidence
statutory interpretation
My personal favourite is the principle of proportionality, for de minimis non curat lex and measure for measure. It, along with moral responsibility, builds the foundation for our penal system. Proportionality also shows up in civil procedure, constitutional law, and other aspects of criminal law outside of sentencing.
If a covert, Illuminati-like group were to write laws that subverted the history of our country, would that be the law of the land? Would that be our law, our official story? The answer is most certainly no: such law would not be our shared account of the law’s structure and sources, the account that members of the legal community publicly advance and defend.
Law and economics poses a similarly interesting question for legal theorists. The Manne program in the US educated judges in the wisdom and logic of law and economics of the time, and actually led them to change their decision-making in criminal and competition law cases. This is particularly consequential in the Canadian context, where law and economics is not a widely adopted ideological framework outside of a handful of areas of law such as corporate, competition, securities, and private law. The permeation of law and economics logic and its associated legal movement into other areas of Canadian jurisprudence could be seen as an attempt to subvert our true legal order by imposing legal rules which are not widely accepted or shared among an overlapping group of experts and non-experts. It may produce legal rulings which the public would imagine to be entirely unlawful, such as disproportionate sentencing or unjust tort law.
The official story of orthodox Canadian law in some sense plainly obtains here, in a way that (say) the official story of US law, for example the law and economics movement and originalism, does not. We should be cautious of attempts to invoke law and economics logic to areas which do not historically invite it as part of their official story or shared account. For instance, the protection of private property was explicitly left out of section 7 of the Canadian Charter of Rights and Freedoms, opting instead for the language of "life, liberty, and security of person". To take another example, our jurisprudence on free expression explicitly does not centre itself on the so-called "marketplace of ideas" – the phrase has appeared once (Harper at para. 35) since McLachlin J's (as she then was) dissent in Keegstra (at 803). Finally, it is rather well-settled that the debate on the death penalty need not consider its deterrent effects, as some economists have, but rather its cruelty and unusualness. Canada's promise of substantive equality protects even against statistical discrimination, while formal equality protects against only taste-based discrimination. These are just a few examples.
The short and happy life of law and economics is over. The upshot for the future of law and economics is that, thanks to the so-called empirical revolution, it can ask and answer bigger and broader questions in law beyond deterrence and competition. Law all too often rests on untested empirical regularities. Though, to be sure, empirical results are not to be followed blindly into absurd conclusions.
It is almost comical to now revisit Holmes' assertion that the future of law is one of economics and statistics. I think he is right, but the state-of-the-art techniques in this area are advancing rapidly while the group of experts innovates to produce theory and evidence that coincides with what the public would imagine our law to be. In that sense, insofar as law and economics treats itself as the science of law, it poses a slightly different hypothetical than the Illuminati, or at least I sincerely hope that it does.
I leave you with an example. Take the defense of contributory negligence. Shavell (2004) states that such a rule is only needed under strict liability regimes, and is irrelevant under a regime of negligence (p. 188). And yet, we still have it due to insurance lobbying and so-called efficiency rationales on the theory of moral hazard. Is the defense of contributory negligence a covert, Illuminati-like law handed down by unelected officials?
With recent calls to "defund the police" perhaps it is worth asking what the origins of the modern police are.
A recent study in the AER finds that "in the United States [...] each additional police officer abates approximately 0.1 homicides". The broader literature also finds a small, negative, but statistically insignificant effect of policing on crime. This might seem shockingly small, but the reason that most studies which regress crime on police force size find little to no effect is that they are estimating a weighted average of the marginal treatment effects (MTEs) of policing at current policing levels. One might erroneously conclude that policing does nothing and, on that basis, call for "defunding" the police, but this is because the marginal effect at current levels of policing is close to zero; the average treatment effect (ATE) of policing, which averages the MTEs over all levels, is likely positive (and large!), though we'd have to test our theory to be sure.
Simply assume a structural model of crime in which the crime rate is decreasing in the size of the police force, but the marginal productivity of officers declines as more cops go on the beat. Simply put, at a certain point one extra police officer does almost nothing to abate crime, but rather just eats donuts. This is simply a re-statement of the law of diminishing marginal returns. One functional form that satisfies this is the negative exponential:
crime = k × exp(–λ × police) (Eq. 1)
where k is the crime rate with zero police officers and λ is just a parameter of the model which governs the productivity of police officers in abating crime. The quantities of interest (the notion of a MTE and other estimands as a weighted average of the MTE) are all defined precisely in a famous 2005 Econometrica paper by Heckman and Vytlacil. In fact, in the authors' simulations comparing various estimators "[t]he negative component of the [ordinary least squares] OLS weight leads to a smaller OLS treatment estimate [than the ATE]" (p. 708). This is exactly my postulation for regressions of crime on police force size: they are biased toward zero because of their historical context!
I have a modest proposal: look at the origins of policing. One could simply examine crime rates across cities in England as the modern police are introduced, first in London in 1829 and then rolled out into other cities in England looking to eradicate crime. Data on 19th century crime would likely have to be taken from local newspapers, which will likely have the virtue of reporting on local crime. I won't be doing all of that, at least not today. Nonetheless, it is still interesting to analyze what the estimand actually is from our hypothetical study of the invention of modern police in London, England in the early 19th century.
I can say with some certainty that the marginal effect of policing at the origin is most certainly not zero (or –0.1). To see why, consider the log of (Eq. 1) which gives the structural estimating equation (yes, GLMs are structural models!):
log(crime) = log(k) – λ × police (log of Eq. 1)
which is not linear, but log linear! From what I can tell, the AER paper by Chalfin et al. appears to use the raw number of homicide victims as the outcome variable. While the model in (Eq. 1) is not gospel, it certainly is sensible. If the size of the police force is uncorrelated with the residual factors affecting crime, then an OLS regression will return an estimate of the "structural" parameter λ from the police force production function in (Eq. 1) so long as it is correctly specified. If, on the other hand, (Eq. 1) is incorrectly specified, one could use the rollout of larger and larger police forces across cities in England to non-parametrically estimate the dose-response function of police force size on crime using a series of local average treatment effects (LATEs) to trace out the policing production function from the origin.
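To make the attenuation concrete, here is a minimal simulation of (Eq. 1). All parameter values (k = 100, λ = 0.05) and the observed policing range are illustrative assumptions, not estimates from any study:

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam = 100.0, 0.05  # assumed parameters of (Eq. 1)

# Cities observed only at historically high policing levels
police_hi = rng.uniform(40, 80, 500)
crime_hi = k * np.exp(-lam * police_hi) + rng.normal(0, 0.5, 500)

# OLS slope of crime on police at current levels: small
slope_hi = np.polyfit(police_hi, crime_hi, 1)[0]

# Marginal effect at the origin: derivative of (Eq. 1) at police = 0
me_origin = -lam * k  # -5 crimes per officer in this toy parameterization

# A log-linear regression over the full rollout recovers the structural lambda
police_all = rng.uniform(0, 80, 500)
crime_all = k * np.exp(-lam * police_all) * np.exp(rng.normal(0, 0.05, 500))
lam_hat = -np.polyfit(police_all, np.log(crime_all), 1)[0]
```

The level regression at high policing levels yields a slope an order of magnitude smaller than the marginal effect at the origin, which is exactly the attenuation story above.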
Indeed, the total average treatment effect is the policy-relevant treatment effect (PRTE), the parameter of interest, in evaluating the claim of the impacts of "defunding" the police. The LATE at the origin is also a parameter of high policy-relevance in light of recent proposals to "defund" the police, but for that we must look to modern policing's origins in 1829 London, England.
* For instance, that would be the derivative of (Eq. 1) w.r.t. police evaluated at police = 0. The "structural" parameter is λ, which governs the rate at which crime decreases as additional officers are added.
A recent study of state and federal wiretaps in the US found no relationship between wiretaps and conviction. Indeed, looking at aggregate public data from the Annual Report on the Use of Electronic Surveillance in Canada we can see that the conviction rate is actually lower for cases in which wiretap evidence is adduced at trial from 2003-2011 (difference = –42%, t-stat = –7.13, N = 18).
Almost surely it cannot be that adduced wiretaps lower the probability of conviction. There must be a missing piece to the puzzle.
The answer to the puzzle lies in economics: wiretaps are used on inframarginal cases for which the prosecutor possesses less than proof beyond a reasonable doubt of guilt. Put simply, police use wiretaps on relatively weaker cases. Ordinary least squares estimates fail to recover the causal effect of wiretaps on conviction.
To see this clearly, let case quality q ~ F(q) be a random variable and suppose that type 1 cases (wiretaps) face a lower "prosecution threshold" or "admissibility threshold" than type 2 cases (non-wiretaps), written as θ₁ < θ₂. It then follows that for any greater conviction standard, call it proof beyond reasonable doubt (PBRD), it holds that
P(q > PBRD | q > θ₁) < P(q > PBRD | q > θ₂)
and the conviction rate for wiretap cases is less than that of cases without wiretaps due to a difference in thresholds. Interpreted through the lens of the law, the difference in conviction rate between wiretap and non-wiretap cases could be due to prosecutorial discretion or evidentiary gatekeeping. Perhaps the simplest explanation is that "reasonable and probable grounds" is less than "proof beyond reasonable doubt", and so θ₁ < θ₂ by law.
Wiretaps are used on the inframarginal case, that is, cases of quality q ∈ [θ₁,θ₂], to bring forward those that otherwise could not have been, hence their lower conviction rate. Indeed, the very first time a wiretap is mentioned in the HBO series The Wire, it is an attorney explaining to a judge that "this case needs a wire", which supports my thesis that wiretaps are inframarginal, used on those cases in which there is less than proof beyond reasonable doubt.
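The threshold argument can be verified numerically. A minimal sketch, assuming (purely for illustration) standard-normal case quality and arbitrary thresholds θ₁ < θ₂ < PBRD:

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.normal(0.0, 1.0, 1_000_000)  # latent case quality (assumed distribution)

theta1, theta2, pbrd = -0.5, 0.5, 1.0  # assumed thresholds

# Conviction rates conditional on clearing each prosecution threshold
conv_wiretap = (q[q > theta1] > pbrd).mean()     # P(q > PBRD | q > theta1)
conv_no_wiretap = (q[q > theta2] > pbrd).mean()  # P(q > PBRD | q > theta2)
```

With these numbers the wiretap conviction rate is about 23% against 51% for non-wiretap cases: the gap reflects selection on case quality, not the evidentiary value of the wiretap itself.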
The difference in the conviction rate between cases in which a wiretap is adduced or not disappears following the 2012 decision in R v. Tse, 2012 SCC 16, which struck down s. 184.4 allowing officers to wiretap in exigent circumstances. This led to the introduction of the Response to the Supreme Court of Canada Decision in R. v. Tse Act. One explanation for the absence of inframarginal wiretaps after 2012 is that the Act requires the Minister of Public Safety and Emergency Preparedness and the Attorney General of each province to report on the interceptions of private communications made under section 184.4. The more obvious reason is that wiretaps done under exigent circumstances are not inframarginal, since there is already a high probability of guilt.
It's time to settle the perennial debate:* is a burrito a sandwich? Reasonable people disagree: some judges have said yes; some have said no. Gottlob Frege or Bertrand Russell would have told you that “[t]o every name X there corresponds a cluster of properties φ s.t. A believes φX”. G.E. Moore wrote in Principia Ethica that “[e]verything is what it is, and not another thing.” Saul Kripke would have intervened to clarify that H₂O ≠ water in ordinary language. I won't claim to have settled the historical debate in the philosophy of language and logic, but I think economics has a resolution to these types of semantic debates as they pertain to economic law, such as contract, tax, and international trade disputes.
My answer: it depends on the degree of substitutability¹ between a burrito and a sandwich and the degree of substitutability of similar goods. The cross-price Slutsky equation gives a condition for what economists call “gross substitutes”, a term for when the substitution effect (substitution across goods due to a change in the ratio of prices) exceeds the compensating income effect (more wealth due to lower prices) for the demand of good i, given the Marshallian demand x*(p,w) = arg max u(x) s.t. p · x ≤ w and Hicksian demand h*(p,u) = arg min p · x s.t. u(x) ≥ u, from a change in the price of good j:
∂xᵢ*/∂pⱼ = ∂hᵢ*/∂pⱼ – xⱼ* ∂xᵢ*/∂w > 0 (Gross Substitutes)
suppressing notation for simplicity. This follows from differentiating the identity h*(p,u) = x*(p,e(p,u)) with respect to prices, making use of the generalized chain rule and Shephard's Lemma: x*(p,w) = ∇ₚe(p,u) at w = e(p,u), and then re-arranging. Gross substitutes imply a positive cross-price elasticity 𝜀 > 0 or an elasticity of substitution greater than unity σ > 1. Submodularity of the expenditure function e(p,u) in prices (i.e. decreasing differences) is a sufficient condition for the valuation to satisfy the gross substitutes condition. Anderson, Thisse, and de Palma (1992) write that "all variants are strong gross substitutes under both [CES and logit] demand systems" (p. 136). These authors also documented that the two models are intimately related.
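As a numerical check on the gross substitutes condition, take symmetric CES demand, where the cross-price elasticity works out to (σ − 1)sⱼ and is positive precisely when σ > 1. The functional form and parameter values below are textbook assumptions, not estimates of burrito demand:

```python
import numpy as np

def ces_demand(p, w, sigma):
    """Marshallian demand for symmetric CES utility: x_i = w p_i^-sigma / sum_k p_k^(1-sigma)."""
    p = np.asarray(p, float)
    return w * p**(-sigma) / np.sum(p**(1.0 - sigma))

def cross_price_elasticity(p, w, sigma, i=0, j=1, eps=1e-6):
    """Numerical elasticity of demand x_i with respect to price p_j."""
    p_up = np.array(p, float)
    p_up[j] *= (1 + eps)             # small proportional bump to p_j
    x0 = ces_demand(p, w, sigma)[i]
    x1 = ces_demand(p_up, w, sigma)[i]
    return (x1 - x0) / x0 / eps      # (dx/x) / (dp/p)

# sigma > 1: gross substitutes (positive cross-price elasticity)
e_sub = cross_price_elasticity([1.0, 1.0, 1.0], 100.0, sigma=3.0)
# sigma < 1: gross complements under CES
e_comp = cross_price_elasticity([1.0, 1.0, 1.0], 100.0, sigma=0.5)
```

With three symmetric goods, σ = 3 gives an elasticity of (3 − 1)·⅓ ≈ 0.67 (gross substitutes), while σ = ½ yields a negative one (gross complements).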
If the degree of substitution is close enough to that of the estimate for the substitution between a burrito and another burrito (which in theory is infinity, but in practice is finite), then the contract should be read using the canon of interpretation known as ejusdem generis. If, on the other hand, the degrees of substitutability between the burritos and sandwiches are substantially different, then perhaps expressio unius exclusio alterius is the correct maxim to apply. It is no surprise that the Supreme Court of Canada frequently uses the term "elastic" with respect to definitions.
This economic approach to naming and necessity, one well-known to competition lawyers and economists, can perhaps be applied to contractual and statutory interpretation more widely, such as the interpretation of tax codes, tariff schedules, and contracts. Take a trade agreement which applies a tariff schedule to goods. Disputes over which goods belong to which HS Code are frequent [see here]. Take a tax statute which classifies assets into classes or goods into categories for tax schedules [see here]. The question is the same as with the burrito and the sandwich: it is one of interpretation. What did the parties intend to contract over? In many cases, I contend, it is clearly "close-to-perfect substitutes".
At the time of writing, the present approach to such a legal question is to turn to the dictionary. Ruth Sullivan writes on dictionaries in her text Statutory Interpretation that “[d]ictionaries [...] assist by suggesting the limits of plausible interpretation” (p. 65) and that “the dictionary is [...] used to establish that the interpretation preferred by the court is within the range of plausible meaning” (p. 66). It must always be the case in adopting the plausible meaning rule that the “interpretation that is adopted must be one that the words are capable of bearing” (p. 66). “By fixing the outer limits of meaning, dictionary definitions help to establish the range of plausible meanings a given word may bear.” (p. 66). I see no reason why a table of cross-price elasticities or diversion ratios would not be informative in adjudicating the limits on the ontology of goods in contracts, competition, tax, international trade law, and in statutory interpretation of economic instruments more broadly!
One may also be tempted to turn to literary theory, as Posner himself has analyzed in Law and Literature (2009, Ch. 8). Rather ironically, another answer lies in microeconomic (price) theory. I think Posner would be proud.
* And annihilate that stupid meme.
¹ as measured by something like a cross-price elasticity or a diversion ratio.
Banerjee and Duflo (2014) define credit constraints as the marginal productivity of capital (MPK) greater than the interest rate r (p. 581):
MPK > r
A simple test of the theory is that MPK, as measured by asset turnover (revenue/assets) in balance sheet data, will be equal across legal ownership type (e.g. private, SOE, foreign-owned) if credit is allocated equally among enterprises p, s, and f:
MPKₛ = MPKₚ = MPK_f
I reject the joint test of equality in MPKs across legal ownership types in Vietnamese balance sheet data using a regression with industry fixed effects and standard errors clustered by industry. In 2002, private firms have much higher MPK than foreign- or state-owned enterprises, largely among non-exporters (P < 0.001). However, this difference disappears by year 2010 (P = 0.58).
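A stylized version of the test, for the curious. The data here are simulated (not the Vietnamese balance sheets), the 0.4 MPK "premium" for private firms is built in by assumption, and the regression simply demeans by industry to absorb the fixed effects:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ind = 3000, 20
industry = rng.integers(0, n_ind, n)
owner = rng.integers(0, 3, n)  # 0 = state, 1 = private, 2 = foreign

# Simulated MPK (asset turnover): private firms get a credit-constraint premium
ind_fe = rng.normal(0, 0.3, n_ind)
mpk = 1.0 + 0.4 * (owner == 1) + ind_fe[industry] + rng.normal(0, 0.5, n)

# Within-industry demeaning absorbs the industry fixed effects
def demean(v):
    means = np.bincount(industry, weights=v) / np.bincount(industry)
    return v - means[industry]

y = demean(mpk)
X = np.column_stack([demean((owner == 1).astype(float)),
                     demean((owner == 2).astype(float))])
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # MPK gaps relative to SOEs
```

The regression recovers a private-firm gap near 0.4 and a foreign-firm gap near zero, mirroring the structure of the joint test of MPK equality.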
A key event occurring between 2002 and 2010 is the opening of Vietnam to US market access through the WTO. Perhaps trade can alleviate credit constraints.
Apotex Chairman and CEO Barry Sherman was well-known for coming up with patents for drugs in his sleep. His lawyers often had equally brilliant ideas, and were known for their exceptional litigation tactics, from patents to product liability class actions to civil procedure. It would not be a surprise if Barry were the leading mind behind the firm's legal strategy as well.
Perhaps my favourite litigation technique used by Apotex arises in civil procedure in Apotex Inc. v. Richter Gedeon Vegyeszeti Gyar RT, 2010 ONSC 4070 where Apotex asserted that:
[t]he documents listed in [Schedule A] lack the requisite specificity required to enable a party to unambiguously retrieve specific documents as needed. (at para. 19)
The court invokes the principle of proportionality in discovery under r. 29.2.03 of the Rules of Civil Procedure in assessing what proper disclosure looks like in the mega-trial (at paras. 47-63).
The takeaway? If you can't land your objection to a question in cross-examination on relevance, always go after them on disclosure!
Consider a modern Donoghue v. Stevenson where the plaintiff finds themselves the victim of negligence, say, from a spider ingested through a bottle of Coca-Cola. Assume that a duty of care is owed, the standard of care was not met, the damage was caused by the negligence, and so forth. Suppose several parties breached the standard of care: the manufacturer, the bottler, the distributor, the retailer. On whom should the court place liability?
The conventional answer I get from law students and lawyers is invariably "all of them". But is that right? I think not.
Economics has a divergent answer. The solution in economics is to find the least cost avoider or insurer of the accident and concentrate liability on them. This principle is established in Calabresi's seminal 1961 article Some Thoughts on Risk Distribution and the Law of Torts. Harry Kalven Jr. apparently greeted Calabresi in 1960 with "it's all wrong . . . but I wish I had written an article like that when I was your age!" (to my knowledge, Calabresi wrote the paper when he was a law student).
Perhaps this is a manifestation of the remoteness branch of the test for negligence, sometimes thought to be superfluous, at work. Remoteness refers only to the closeness of a direct and natural causal sequence between the negligent act and the damages. A duty of care may still be found, as the tortfeasor may fall within the spectre of liability due to reasonable foreseeability, but be too remote to be assigned damage.
It is time to revisit the law of partnerships in light of the rise of high payout structures, such as MLPs, YieldCos, and REITs. In Backman v. Canada, 2001 SCC 10 the SCC left the definition of a partnership far too narrow. The Court states the "view to profit" part of the test for a partnership demands that the “intention requires that business [be] carried on with a view to profit which [is] ancillary to the [...] tax minimization objective” (at para. 11). The appellant in Backman asks the Court to apply Texas partnership law where “the criteria of carrying on business in common with a view to profit is irrelevant to the continuing validity of that partnership” (at para. 35).
I argue that more clarification is needed on this issue to accommodate an important asset class known as tax equity, in which the investor is given rights to the losses of the partnership, rather than the residual rights to profits of traditional equity. These partners exist solely for the purpose of writing off losses. This investment vehicle plays a critical role in capital markets for project finance in solar, oil and gas, infrastructure, real estate, entertainment and sports, and other industries where alternative partnership structures are used.
The law of partnerships in Backman requires more clarification on the “view to profit” part of the test. Investors are uncertain whether their tax equity investments will be enforced given the current state of the law. Is a whole partnership's view to profit sufficiently ancillary to the tax minimization objectives of tax equity partners? Must all partners really have a view to profit (para. 41)? Failure to offer clarification on this legal risk to project finance could be retarding Canadian capital investment in major industries. The SCC failed to contemplate a novel financial instrument. This is at odds with the Westminster Principle.
Time, the fourth dimension, is operating everywhere (except for black holes).
Time is also a recurrent theme in Sophoclean tragedy, where of Oedipus the chorus sings 'Time sees all, and Time, in your despite \ Disclosed and punished your unnatural marriage' (vv. 1213-1214). We are told that in 42 A.D. the Emperor Claudius exhorted the Roman Senate to pass a law introducing a summer session into the congested Roman courts; in 1215, King John promised in the famous and enduring clause 40 of the Magna Carta “[t]o no one will we sell, to no one will we refuse or delay, right or justice” [emphasis added]; Shakespeare’s Hamlet lists ‘law’s delay’ (3.1.80) fifth in his seven burdens of man; Dickens's Bleak House features a seemingly interminable estate litigation, Jarndyce v. Jarndyce; Kafka’s early 20th century critique on Austro-Hungarian criminal procedure, The Trial, makes immediate mention of the phenomenon. The list goes on.
Delay could be due to a lack of institutional resources or defence delay, but it could also be due to prosecutorial discretion over time.
Suppose a prosecutor draws a piece of evidence each month, good or bad, taking the value of 0 (bad) or 1 (good). The probability of conviction is the maximum likelihood estimate of the realized binomial draws at time t defined as p̂ₜ = i/t where i is the number of successes in t draws. The prosecutor must choose a time at which to stop trial in order to maximize the probability of convictions. This bears a certain resemblance to the optimal stopping problem known as the Chow-Robbins game. A simple description is as follows:
Given a sequence of IID random variables X₁ ... Xₙ with distribution F how to decide when to stop to maximize the sample average ¹⁄ₙ (X₁ + X₂ + ... + Xₙ) where n is the stopping time?
With a fair coin Xᵢ ~ Bernoulli(½) the prosecutor solves the optimality equation
V(i,t) = max { i/t , ½ V(i+1,t+1) + ½ V(i,t+1) }
Häggström and Wästlund (2012) give solutions for optimal stopping levels βₜ at time t: β₁ = 1, β₂ = 2, β₃ = 3, β₄ = 2, β₅ = 3, β₈ = 2, β₁₀ = 4, β₁₅ = 3, β₂₅ = 5, β₅₀ = 6, β₇₅ = 7, β₉₉ = 9, and β₁₀₀ = 8.
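The optimality equation can be solved by backward induction. A minimal sketch, truncating the horizon at T = 1000 and approximating the continuation value at the horizon by max(i/T, ½), both of which are assumptions of convenience:

```python
# Backward induction on V(i,t) = max(i/t, 0.5*V(i+1,t+1) + 0.5*V(i,t+1)).
# Terminal value max(i/T, 0.5): by the law of large numbers the sample mean
# tends to 1/2, so 1/2 approximates the value of continuing past the horizon.
T = 1000
V = [max(i / T, 0.5) for i in range(T + 1)]
for t in range(T - 1, 0, -1):
    V = [max(i / t, 0.5 * (V[i + 1] + V[i])) for i in range(t + 1)]
# Before the first draw there is no sample mean to stop at, so the value of
# the game is the expected continuation value:
v00 = 0.5 * (V[1] + V[0])  # close to 0.79
```

The value at the outset lands near 0.79, consistent with the conviction-rate prediction discussed below the stopping levels.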
A clear pattern emerges from the solution: the conviction rate will be decreasing over time.¹ That is, short trials stop because the prosecutor quits while they're ahead, and long trials are more uncertain, weaker cases that have not yet produced sufficient evidence to warrant stopping them. As a result, the prosecutor just lets time pass – if they have discretion over time.
A second prediction of the model is that the value function at the outset V(0,0) (i.e. the conviction rate) is expected to be 79%. There are some publicly available reports of the conviction rate in various jurisdictions. The conviction rate in the US is close to 80%. The conviction rate in Canada, however, is around 62%, falling to roughly 50% after excluding guilty pleas. Indeed, a remarkable finding is that the heuristic of stopping as soon as p̂ₜ > ½ is close to optimal, with an expected payoff equal to π/4 ≈ 0.7854. A related 1/e-type stopping rule (familiar from the secretary problem) suggests sampling 1/e ≈ 0.368 of the N items before choosing the next option. This implies that for a trial delay ceiling of N = 18 months, the prosecutor should stop at roughly 200 days, which is the average prosecution length in Canada.
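The back-of-the-envelope above, assuming an average month of 30.44 days:

```python
import math

N_months = 18                        # the assumed delay ceiling
cutoff_months = N_months / math.e    # sample ~1/e of the horizon first
cutoff_days = cutoff_months * 30.44  # roughly 200 days
```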
¹ Despite some non-monotonicity.
In Alberta Health Services v. Johnston (Alberta Health), 2023 ABKB 209 (CanLII) the Alberta Court of King's Bench recognized a new common law tort of harassment. The court outlines the test for the tort at paragraph 107:
A defendant has committed the tort of harassment where he has:
engaged in repeated communications, threats, insults, stalking, or other harassing behaviour in person or through other means;
that he knew or ought to have known was unwelcome;
which impugn the dignity of the plaintiff, would cause a reasonable person to fear for her safety or the safety of her loved ones, or could foreseeably cause emotional distress; and
caused harm.
The court cites feminist legal theory as justification for the tort of harassment, writing that "[t]he historical failure of courts to recognize a tort of harassment is not evidence that such a tort should not exist" (at para. 85). But the tort of harassment has always existed in the natural law, just as the right to be let alone was [see Pavesich v. New England Life Ins. Co., (Ga. 1905)]. Indeed, we criminalize harassment in s. 264 of the Criminal Code. To dismiss new nominate torts such as harassment as "judicial activism" would be a flawed critique. We cannot stand by iniustitia merely because history has permitted it. Indeed, the court argued this in Alberta Health Services at paras. 86-87. The natural law has always informed the law of private wrongs [again, see Pavesich, or Donoghue v Stevenson [1932] AC 562].
The test for new nominate torts was re-stated in Nevsun Resources Ltd. v. Araya, 2020 SCC 5 where the court writes at paragraph 237 that
[237] Three clear rules for when the courts will not recognize a new nominate tort have emerged:
The courts will not recognize a new tort where there are adequate alternative remedies (see, for example, Scalera);
the courts will not recognize a new tort that does not reflect and address a wrong visited by one person upon another (Saskatchewan Wheat Pool, at pp. 224-25); and
the courts will not recognize a new tort where the change wrought upon the legal system would be indeterminate or substantial (Wallace v. United Grain Growers Ltd., [1997] 3 S.C.R. 701, at paras. 76-77).
Put another way, for a proposed nominate tort to be recognized by the courts, at a minimum it must reflect a wrong, be necessary to address that wrong, and be an appropriate subject of judicial consideration. [emphasis added]
Sharpe J.A. famously defended the recognition of a tort of intrusion upon seclusion on the basis that the “facts cry out for a remedy” (Jones v. Tsige at para. 69). The facts cry out for a remedy in many of these cases. What remedy are they speaking of? Harry Kalven Jr. famously critiqued the privacy tort on grounds of inappropriate remedy in tort. However, in addition to money damages, the other obvious remedy is the injunction, which is, I think, what Calabresi and Melamed's "View of the Cathedral" would suggest. Since Kalven Jr.'s critique of the assessment of money damages in the privacy tort in 1966 it has been well-known that what I will call non-pecuniary torts – that is, defamation, the four privacy torts, intentional infliction of emotional distress, and now harassment and family violence – challenge the court in the assessment of a quantum of money damages. Instead, they beckon for another remedy, like the injunction.
Another remedy that could be borrowed from constitutional law is the remedy of inadmissibility of evidence found in subsection (2) of section 24 of the Charter on remedies. Take an example from the privacy tort. Suppose an invasion of privacy occurs where A obtains private records about B which have import for a separate civil legal proceeding. Kalven Jr. fortuitously recognized the possibility of these new privacy invasions in Privacy in the Year 2000, writing at p. 877 that
intrusions will not be limited to government measures in aid of law enforcement or national security. The technology may become a commonplace in the hands of private parties – employers interested in the off-hours activities of employees, competitors interested in one another's integrity and trade secrets, estranged spouses interested in perfecting grounds for divorce, insurance companies interested in the subsequent health of personal-injury claimants they have paid, and the idly curious who are just interested. Thus, by 2000, man's technical inventiveness may, in terms of privacy, have turned the whole community into the equivalent of an army barracks.
which suffices to say that a cause of action of intrusion upon seclusion would be made out. Suppose A attempts to use those private records in a separate legal proceeding involving B, say regarding the execution of a will or a divorce proceeding or an insurance dispute. Inadmissibility of evidence would be an appropriate remedy for the invasion of privacy as opposed to, say, the injunction.
The sky has fallen. Populations in pretrial custody continue to grow after being emptied during the pandemic. Serious offences with harmed victims go unpunished. We live in an obscure point in history where the Supreme Court of Canada in 2016 decided that it knew how long criminal trials across the country, in every category of offence, were going to take: 18 or 30 months. They were wrong. But we've seen this all before. The Court made the same error in Askov in the 1990s that it had to self-correct in Morin, which was the prevailing framework up until Jordan.
It is time to revisit the law of s. 11(b) with more sober calculations of how long trials are actually taking. My contribution to this area is a paper which uses queueing theory to estimate the average time and delay of criminal trials, where our measure of case duration appropriately includes the time from arrest to first appearance, which is not available in the raw data. Even more, it appears that no one has bothered to fit the distribution of service times for criminal trials, so it is almost certain that nobody was properly estimating the average duration of trials, even though the figures needed to do so are readily available in public statistics. It is rather fitting that the Court used no data in the calculation of the Jordan ceilings, just as we might expect from an institution with insufficient competency on the matter.
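To give a flavour of the queueing logic, here is a toy M/M/c calculation using the Erlang C formula. This is an illustration, not the paper's model, and the arrival and service rates are invented:

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Expected wait in queue (Wq) for an M/M/c queue via the Erlang C formula.
    lam: arrival rate of cases; mu: service rate per courtroom; c: courtrooms."""
    a = lam / mu                  # offered load
    rho = a / c                   # utilization; must be < 1 for stability
    assert rho < 1, "queue is unstable"
    tail = (a**c / factorial(c)) / (1 - rho)
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)  # expected delay before service begins

# Toy numbers (assumptions, not the paper's estimates): 10 cases/month arrive,
# each courtroom clears 1.5 cases/month, 8 courtrooms available
w = erlang_c_wait(10.0, 1.5, 8)
```

Delay explodes as utilization ρ approaches one, which is why modest changes in caseload or capacity can produce the backlogs described above.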
You won't hear me say this very often, but in this case I would suggest that we follow the data. I would propose an indexed ceiling for the right to trial within a reasonable time which chains itself to case features and systemic factors in estimating the appropriate time for trial. The index is moving both in time and, at any given point in time, by factors such as the type of offence and the number and seriousness of charges. In the limit, it could be an entire actuarial table or algorithm which calculates the estimated time for trial, where a factor is applied to that number to determine what is, currently and in the circumstances, unreasonable time for trial. This would be a way of reversing Jordan back to a more contextual test, for example the one the US Supreme Court uses in Barker v. Wingo, or the framework in Morin that the SCC previously used up until Jordan.
I think of this as a sort of legal triage. Classification of types allows one to appropriately estimate the expected duration of the visit. Only from that point can one really know what is or is not reasonable in the circumstances. In part, the law of Jordan already concedes this with the distinction over the preliminary inquiry. Why not take it to the limit?
I write a lot about children on this blog: on personalized negligence standards for children, on damages calculations for child victims, on the market for surrogacy and Baby M, on advertising bans for children's toys, on the constitutionality of abortion statutes. My recent paper, co-authored with Daniel E. Gold, published in Journal of Urban Economics, finds that the Canadian provincial Residential Tenancy Acts substantially improved the quality of housing for households with children, for whom we find the incidence of poor housing quality is highest and for whom we find poor housing quality to be least sensitive to changes in the vacancy rate. We find that a child tenant has 30% higher odds of occupying a damaged property than the general tenant population. Additionally, housing quality for households with children does not improve when the rental market becomes more slack, consistent with the idea that these households face higher moving costs relative to households without children, and are thus more likely to engage in litigation rather than move in response to property damage.
I think advocating for children is important first and foremost because these individuals do not have a political voice through the democratic process. Children are a discrete and insular minority. The voting age restrictions found in s. 3 of the Elections Act are at best a so-called justified infringement on the s. 15 right to equality and the s. 3 right to participate in democratic processes through voting. Indeed, a challenge was filed in 2021 challenging Canadian voting age restrictions. Canada also has international obligations to protect the rights of children in the Convention on the Rights of the Child.
The precarious position of children in the market for rental housing suggests that we should have a landlord-tenant law centred around children. Consider the awful possibility of the eviction of a child from a home. The Residential Tenancies Acts are silent on the issue of age and children, but 30% of children 17 and under are tenants. I see this as a failure of the democratic process. The arbitrary or grossly disproportionate eviction of a child could very well be a s. 7 Charter violation, with the State action being the second-generation rent control statutes or no-cause eviction exemptions, such as for renovations. Canada's international obligations require a statutory and constitutional interpretation that protects the rights of children based on the presumption of conformity (the CRC was ratified in 1990, prior to the 2006 RTA and the 2017 Rental Fairness Act).
People are usually surprised to find out that Canada's constitutional protection of equality enshrined in s. 15 of the Charter is founded upon the Aristotelian principle of equality – that is, that likes be treated alike, and those different be treated unequally – something we take to mean substantive equality. The Supreme Court of Canada has had some trouble figuring out exactly what this means. The test changes virtually every 10 years. We've had Andrews, Law, Kapp, Quebec v. A., then Fraser, and now Sharma. We have followed Abella's guidance since the Abella Commission, adopted in Andrews, and then as a jurist in Quebec v. A., Kapp, Taypotat, and, finally, Fraser. We went from dignity to stereotyping and prejudice to arbitrary disadvantage to disproportionate impact. I think we've finally found some resolution. The Court is reticent to admit that it is simply confused, but if it isn't, then it is probably doing something like a minimalist s. 15 in which the test evolves – at least that is what seems to be happening from Fraser to Sharma.
But we've got Sharma now, which I'm increasingly convinced is the best yet, so let's work with it. The facts in Sharma are bad: a 20-year-old Indigenous woman who was behind on rent and facing eviction imported nearly 2 kg of cocaine in exchange for payment. The question of law was whether the mandatory minimum in the Controlled Drugs and Substances Act violated the right to equality because of its disproportionate impact on Indigenous offenders, effectively nullifying the Gladue sentencing principle in s. 718.2(e). Tragically, her challenge failed and she received a sentence close to the mandatory minimum. The Court felt there was a lack of statistical evidence for assessing whether there really was a disproportionate impact in the law's effects.
I would go one step further. Not only does s. 8 of the CDSA violate the s. 15 right to equality; so does the entire criminalization of minor cocaine possession in s. 4 of the CDSA. It is well-known that sexual minorities, like gay men, are much more likely to use hard drugs like cocaine, sometimes as part of a practice called “chemsex”. Indeed, they're even more likely to be criminalized for drug charges. The criminalization of hard drugs like cocaine violates s. 15 of the Charter on the analogous ground of sexual orientation, even if you feel it must be saved under section 1.
The s. 15 jurisprudence is now a game of statistics and economics to find statistical differences among ethnic groups with a state action that perpetuates disadvantage, such as math testing, parts of the Criminal Code, and other areas where there are observational group differences. Oliver Wendell Holmes once said that “[f]or the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” The s. 15 of the future is chock-full of economics and statistics after Fraser and Sharma. Those who ignore such statistics may find themselves the victim of technological unemployment. But we must also remember the ways in which averages lie to us.
I leave you with some home-grown legal realism at the Supreme Court of Canada: female justices at the SCC are 20% more likely to strike down a statute under s. 15 than their male counterparts, even looking within the same case (i.e. upon the inclusion of case fixed effects). And if you think that's an anomaly, then consider that it's not true for any other right under the Charter.
There is no free speech in the natural law. The core of the free speech right is literary, artistic, political, scientific & academic free speech. 'The rest is silence' (Hamlet, 5.2). Edmonton Journal is minimalism hinging on Wilson J's concurring opinion defending the right to privacy. Irwin Toy, Keegstra, Lucas, Butler, Little Sisters, Sharpe. All show deference. The list goes on for 26/38 statutes considered to infringe s. 2(b). Criminal statutes that purportedly infringe the s. 2(b) right are 30% less likely to be struck down and 35% more likely to be saved as a justifiable infringement. The Liberals on the Court don't believe in s. 2(b) as a justification – they're 20% less likely to invoke s. 2(b). The Court is 30% more likely to find that a statute infringes s. 2(b), but equally likely to find it a justified infringement, which should be consistent with your priors upon reading the jurisprudence. The Court thinks like I and many others do: there is no free speech in the natural law.
Milton's Areopagitica was about popery – not hate speech, not defamation, not harmful speech, not paparazzi, not speech outside the so-called core of speech which is in search of truth, upholds democratic processes, and fosters self-expression. Man must speak with virtue and piety to earn the license to speak. The right to freedom of expression Milton wrote about was inextricably linked to the religiosity of the society he lived in, and was not meant for out-of-sample extrapolation. I owe much of my thinking on this issue to Stanley Fish, a Miltonic scholar. The first time I read Areopagitica I thought "well, this is a steaming pile of garbage" – but Milton must be historically and culturally situated, something I only learned long after reading. Milton's conception of tolerable books was limited. And there's just about no chance Milton would have stood for the actual malice standard in defamation [again, the SCC agrees in Hill v. Church of Scientology]. He would have been too devout. He would have retorted: thou shalt not bear false witness against thy neighbour. He would have awarded punitive damages. Check your Miltonic scholars first before quoting Areopagitica. He may, however, have fervently dissented in Little Sisters. Consider this a lesson on how to read Milton's Areopagitica.
Olin Browder wrote in the Michigan Law Review in 1982 that "[T]o reform the first-year law school curriculum the [...] solution is to expand the treatment of the law on landlord and tenant [...] [t]hereby one could embrace all the first-year subjects except [c]riminal, and a good deal more." (p. 1). All is true. The law of the landlord and the tenant was once feudal property law, but evolved into contract law – in particular, caveat venditor, or an implied warranty of habitability. The rule of negligence also rears its head with respect to reasonable steps to mitigate losses. Pepper in a little civil procedure and administrative law and you pretty much have an entire curriculum centred around an applied topic in poverty law. I find that remarkable, not least because I spent my early twenties studying landlord-tenant law.
A word on some of the modern advancements of the law in this area.
Second-Generation Rent Controls. Second-generation rent control is functionally equivalent to inflation-targeting for housing. Exempting new builds from the growth control creates a time inconsistency problem that investors will catch onto. There is a need for a centralized agency that can credibly commit to a stable inflation target for housing without the possibility of reneging, just as with inflation targeting. The power of the Canadian provinces to adjust the target via regulation under the statute ought not be a delegated power, and may even be ultra vires. For instance, the Residential Tenancies Acts (and their variants) in Canadian provinces allow landlords to renegotiate a price increase every twelve months. The provinces place a cap on rent increases each year via regulation under the statute. In 2022, the increase in Ontario was capped at 1.2 per cent. The time inconsistency problem associated with delegating allowable rental increases to provincial governments without commitment, in addition to the application of rent control to older builds, is a possible deterrent of new housing supply.
No-Cause Evictions. It is time for us to re-imagine the tenants' rights movement after Tanudjaja v. Canada (Attorney General), 2014 ONCA 852. First introduced by the Rental Fairness Act of 2017, no-cause evictions allow a landlord to evict a tenant without cause for reasons such as renovations or own-use. These no-cause evictions violate the section 7 guarantee of security of the person. They offend several principles of fundamental justice: they are arbitrary, overbroad, and grossly disproportionate. It is time to strike down no-cause evictions, in particular when the facts call for it due to a disproportionately unfair and unjust eviction, such as that of a young child or in the case of dishonest behaviour by the landlord. The social science evidence is in: evictions are immensely harmful intrusions on physical security. If you thought that no-cause evictions weren't arbitrary, the phrase "no cause" is right in the title. These statutes clearly catch more activity than is needed to protect private property interests. They ought to be more narrowly tailored to prevent the "renoviction" crisis we have seen in Ontario. The limit on the right to security is not “necessary” to further the state objective (Chaoulli at paras. 131-32) and is "inconsistent with the state interest that lies behind the legislation" (Chaoulli at para. 232). Conservatives will argue (as I have in Epstein Was Right) that this is a justified infringement to protect private property interests, since striking it down would transform the leasehold into a life estate, but one should recall that s. 1 of the Charter only applies to "cases arising out of exceptional circumstances, such as natural disasters, the outbreak of war, epidemics, and the like" (Re B.C. Motor Vehicle Act, [1985] 2 S.C.R. 486 at 518).
It is rumoured that some 0.01% of the population have absolute pitch (AP), colloquially known as "perfect pitch": the ability to recognize a pitch and name the note without any reference tone. Those with AP have a larger planum temporale in the auditory cortex. More recently, one study found it present in 4% of musicians. I thought I had perfect pitch when I conjectured that the flatline heart monitor was a C note, but it's actually one half-step down, a B. Close enough. In practice, if you can sing a harmony in key without background music or without hearing a relative pitch, then you have de facto absolute pitch. To be sure, I heard the melody C–D–C–B–A–B–A–G♯ in my head without any reference, which is how I convinced myself and others that I had AP.
Wikipedia says that "absolute pitch may be directly analogous to recognizing colours, phonemes (speech sounds), or other categorical perception of sensory stimuli". Indeed, it was Isaac Newton who first observed the relationship between pitch and colour. This is called synesthesia. I would venture that this is much more common than one might suspect and can be taught relatively quickly with an instructive example. Let us turn to that now.
Consider the harmonic minor scale: ♭3 ♭6 ♯7 (whose fifth mode is Phrygian dominant). It is often thought to be the most evil or demonic sounding scale, resembling that of a Middle-Eastern snake-charmer. One can quite easily associate the colour noir with it. However, it is notoriously difficult to write in because of its awkward ♯7, which distinguishes it from the natural minor. I'm not the first to make note of these darker aspects of minor keys. The interval of the augmented fourth was considered so dissonant that it must have been the work of Lucifer, so the Catholic church appropriately named it Diabolus in musica ("the devil in music"). I often write in harmonic minor simply because I find it can be catchy and idiosyncratic: the raised seventh makes the V-chord major instead of minor. Good musicians, bored of convention, often say that moving outside of the key signature is important for generating new ideas. Joni Mitchell, having absolutely no formal music training, is famous for this. This offers a new soundscape for the brain to encounter. The same is true of good science: it comes from outside the field.
I leave you with the following progression: Em7–Bm7–F♯7–Bm7 played in staccato arpeggios with the harmony G–F♯–E–D played behind it. It has a sort of pop/R&B feel, like something Beyoncé and Jay-Z might collaborate over. Freestyle over it at your own risk.
I appear to have coined the phrase “narrowly tailored in the reus.”
It has been proclaimed since Irwin Toy that a wholesale ban will rarely, if ever, be the least intrusive means by which a government can achieve its objective. This has often been taken to mean that statutes, especially rights-restricting penal statutes, require exemptions in order to pass constitutional muster. This is plainly false. A statute can be narrowly tailored in its actus reus or mens rea and be sustained as minimally impairing without an exemption simpliciter. But you were probably looking for proof.
Sections 298, 299 and 300 of the Criminal Code are sufficiently narrowly tailored in the reus – that is, they apply only to truly defamatory speech for which there is knowledge of falsity – such that they need no specific exemptions (see R. v. Lucas, [1998] 1 S.C.R. 439). A recent case from the United States found that a subjective mens rea is required for true threats. Indeed, Canada's s. 264.1 of the Criminal Code has no exemptions per se. It was argued by Justice L'Heureux-Dubé (dissenting in part) that ss. 276 and 277 of the Criminal Code, prohibiting the use of past sexual conduct as evidence of consent, were, as I say, "narrowly tailored in the reus," as they applied only to irrelevant evidence (R. v. Seaboyer, [1991] 2 S.C.R. 577). While s. 276 was struck down in Seaboyer, her narrow reading of the statute and her logic have stood the test of time (R. v. Darrach, [2000] 2 S.C.R. 443). Posner was likely right when he said that 720 ILCS 5/14 need not be narrowly tailored in the reus to private conversations – free speech cuts both ways. Even the legislation in Irwin Toy was upheld on the basis that it was directed only at persons under thirteen years of age – thereby making it sufficiently narrowly tailored without an exemption simpliciter.
Again, it appears that this idea has never been documented, so I thought I should make note of it here.
When there is a violation of the triangle inequality for the ordering ikj, there cannot be a violation for the orderings kij and ijk. In essence, the reason is that the "violating side" i–j cannot be so long as to violate the triangle inequality for ∆ikj while also being so short as to violate ∆kij and ∆ijk. For an unordered triple of points, the maximum number of total violations of the triangle inequality between them is 3, making the maximal raw violation rate 3/3! = 50%.
Proof. Start from the simple premise that each violation rules out two orderings as non-violations. Then, with K violations among the N(N–1)(N–2) ordered triples, the implied violation rate calculation ought to be K / [N(N–1)(N–2) – 2K]. Because the denominator must be positive, N(N–1)(N–2) – 2K > 0, it must also be the case that K < ½ × N(N–1)(N–2), which says that the raw violation rate K / [N(N–1)(N–2)] must be strictly less than 50%.
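The bound can be checked by brute force. Below is a small Python sketch (my own illustration, not from any paper): it constructs an asymmetric distance triple achieving exactly 3 violations out of the 3! = 6 orderings, then confirms by random search that 3 is never exceeded.

```python
import itertools
import random

def count_violations(d):
    """Count violations d(x,y) > d(x,z) + d(z,y) over the 6 orderings
    of three points 0, 1, 2, where d maps ordered pairs to distances."""
    return sum(
        d[(x, y)] > d[(x, z)] + d[(z, y)]
        for x, z, y in itertools.permutations((0, 1, 2))
    )

# Constructed example: three long "forward" sides, three short "backward"
# sides. Exactly the orderings 0-2-1, 1-0-2, 2-1-0 are violated.
big, small = 100.0, 1.0
d = {(0, 1): big, (1, 2): big, (2, 0): big,
     (1, 0): small, (2, 1): small, (0, 2): small}
assert count_violations(d) == 3  # the maximum: 3/6 = 50%

# Random search over asymmetric distance triples: never more than 3.
random.seed(0)
max_seen = 0
for _ in range(50_000):
    trial = {p: random.uniform(0, 1)
             for p in itertools.permutations((0, 1, 2), 2)}
    max_seen = max(max_seen, count_violations(trial))
assert max_seen <= 3
```

The constructed example also shows why the 50% bound is tight: make every clockwise distance huge and every counter-clockwise distance tiny, and exactly half of the orderings violate.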
I am something of an expert in the law of defamation. We sometimes say that it is a strict liability tort: it simply requires that something be communicated to a third party about a person that could lower their reputation in the eyes of others. But this conception of defamation as "strict liability" is so obviously wrong to me because I know of two very simple loopholes to defamation:
1. Say it to their face in private – no communication to a third party.
2. Don't use their name or identify them in your statement – say whatever you want directed at “someone in this room”.
I don't really think we need all of these defences to defamation in light of the simplicity of getting around the reus of the tort. Yes, fine, there is no requisite mens rea, but there is still plenty of room for manoeuvring within the elements of the tort itself. The act of defamation, in my view, should be a low bar to meet. Indeed, it can be actionable per se without even proving damages. I find the tort, like the Criminal Code provisions, to be quite “narrowly tailored in the reus.”
I don't think normal, non-science or non-philosophy people quite understand the small but significant gratification that comes from using Terminal, Shell, Unix, Linux, LaTeX, Pandoc, Make, etc. I am 10x+ more productive because of this workflow. I write notes, screenplays, equations, stats, tables, etc. I was once told by a screenwriter at a cafe that I would have to purchase an "expensive screenwriting software" to write my screenplay using the appropriate formatting tools. Turns out: nope – no need, there's a document class called "screenplay" free in LaTeX – functions and all!
The biggest frustration in the world is updating your paper every time your results change. It's SO annoying. GNU Make "makes" it easy. The Makefile builds a paper start to finish with just make, but a recipe runs only if the time stamp on an "input" changes for a given "output". Perfect! The caveat here is that you need a Mac (or another Unix-like system) and have to be committed to the plaintext lifestyle and mindset, but there are HUGE benefits. Sick of changing the numbers in MS Word every time the data/results change? You can slide your results into a sentence in TeX with \input{stats.txt}. Pandoc accepts LaTeX code too! Pandoc is LaTeX-lite. It was written by a philosopher who wanted LaTeX sans the hassle. I have written a nice and easy template for getting started writing documents in Pandoc. I use it to write all of my papers/assignments now (TeX is too heavy-duty).
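A minimal sketch of the kind of Makefile I mean (the file names here are placeholders, not from any actual project; recall that make recipes must be indented with a tab):

```make
# Rebuild paper.pdf only when one of its inputs has a newer time stamp.
paper.pdf: paper.tex stats.txt
	pdflatex paper.tex

# stats.txt is regenerated whenever the data or the analysis script
# changes; \input{stats.txt} then pulls the fresh numbers into the text.
stats.txt: analysis.py data.csv
	python analysis.py
```

Running make twice in a row does nothing the second time, because every output is newer than its inputs; touch data.csv and only the analysis and the PDF rebuild.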
Maybe Shakespeare did mean for his audiences to think that the ghost in Hamlet was a devil and Hamlet the Vice figure from medieval morality plays; we strain against this interpretation in part because it would diminish the aesthetic appeal of the play, and it is as an aesthetic object that the play mainly interests us.
— Richard Posner, Law & Literature at 276.
Contracts, statutes, constitutions, treaties – all must be interpreted. This shows up in numerous areas. Contractual interpretation is central to contract law – I have written on this blog about the interpretation of the word "sandwich" in exclusivity clauses in commercial leases. The subject of administrative law gives rise to the proper interpretation of statutes by administrative decision-makers. The failure to properly interpret a statute will result in an unreasonable decision upon judicial review. Our fundamental freedoms depend critically on how constitutions are interpreted, where we require a presumption of conformity that favours constitutional interpretations over their counterparts. Treaties are matters of citizenship and not simple contracts, and they must be interpreted broadly and liberally. Simply put, the tool of interpretation is an indispensable part of the lawyer's toolkit for administrative law, criminal law, taxation law, and other areas of public law.
The field of statutory interpretation typically takes the form of formal analysis (e.g. original, technical, plain meanings), canons of interpretation (e.g. Latin maxims), and a set of presumptions (e.g. conformity, tautology, drafting competence). In Canada we place special weight on purposive analysis in many areas of interpretation. We repeat the familiar mantra from Hunter v. Southam – broad, large and liberal, and purposive – in our interpretive statutes. Uniquely, we also have bilingual and bijural rules of interpretation, giving the French and English versions of statutes equal authenticity and taking only their shared meaning. We have also tended to give weight to Canada's international obligations through a presumption of conformity with international law. One could not be a proper Canadian lawyer of statutory interpretation without knowing Elmer Driedger's Modern Principle:
Today there is only one principle or approach, namely, the words of an Act are to be read in their entire context and in their grammatical and ordinary sense harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.
The Canadian tradition of statutory interpretation has lived on with Ruth Sullivan – unsurprisingly, someone with a literature background – the current expert in the field, with her "Big Ruth" and "Baby Ruth" textbooks.
In those books there is some discussion of the use of dictionaries. I have written elsewhere on this blog how the dictionary has led us astray in statutory interpretation of the word "sandwich". It has also led us astray in the interpretation of the word "land" in Hobbs v. Esquimalt and Nanaimo Railway Co., (1899) 29 S.C.R. 450 where the court was challenged to assess whether a sale of land includes mineral rights. The Court finds that it does. But what if the contract was meant to exclude mineral rights? The dictionary would have led us astray, while the contract itself would have shown the intended allocation of mineral rights as reflected in the price of the land.
There is no choice but to be a scholar of statutory interpretation. It is an essential aspect of the profession. The only real, tangible things that a lawyer can do are (1) speak to a judge and (2) interpret a statute. That's not a lawyer joke. You had better know how to do (2) if you want to do (1) properly.
Law and: economics, literature, psychology, history, philosophy, political science, religion, finance, development, statistics, the horse, the list goes on. I might even be credited for adding operations research to the list. It is either the field of Renaissance people or the field of dilettantes. The Law Society of Upper Canada once demanded that we be scholars in classics. The “law and” movement has brought a lot more “ands”, apparently meaning that a good lawyer is to be well-read in virtually every field under the sun.
The law of comparative advantage suggests that lawyers should stick to what they know best: the law. Indeed, my motto coming to law school after leaving economics was “law qua law” – I stand by those words. The practice and study of law demands a certain level of knowledge of the way the law works itself – perhaps the inner workings of the legal system, but maybe also an understanding of the intellectual history of legal ideas. Very seldom is anyone doing anything "cutting-edge" in law. Even something like the right to privacy was first conceived of before 1900, long before technology, but closer to the time that Kafka lived and wrote.
It is difficult to trace the origins of the "law and" movement as a whole, but I suspect that Karl Llewellyn's legal realism played a role in its percolation. Law & literature began with James Boyd White's The Legal Imagination and law & economics with Posner's Economic Analysis of Law, both published in 1973. Economics and literature have come to dominate much of the "law and" discourse, likely because of Posner, who authored texts in both law & literature and law & economics, but also because economics speaks to law's pragmatism and because many writers steeped in the law have written law-in-literature fiction for the amusement of other lawyers, including Shakespeare, Dickens, Kafka, and Goethe, inter alia.
The quest for law qua law seems difficult to wholly embrace in light of the pull towards the "law and" movements. Are we supposed to be practitioners now? Or are we each to become something of a scientist ourselves? That would be a route headed towards dilettantism. It is a difficult balance to strike. It would be easy to be pessimistic and ask "have I learned nothing in law school?" Let me re-inspire you: statutory interpretation, constitutional law, administrative law, conflict of laws, the law of evidence, criminal procedure, jurisprudence, and private law doctrine and theory. These black-letter, doctrinal areas are the exclusive domain of law. They cannot be logically proven or tested in the data. They belong to the legal profession. Law qua law.
I read ~250 Supreme Court of Canada judicial review (JR) of legislation decisions from 1984 to the present. That has culminated in a paper titled "The Irwin Toy Hypothesis." People are constantly asking me "what did you learn from reading this universe of JR cases?" I initially had very little to say other than what I had conceived to be the so-called Irwin Toy hypothesis. Irwin Toy has a lot to say about the judicial review of legislation, which is why I selected it for the title of a paper which studies deference at the Supreme Court of Canada.
But what else can be gleaned? Statutes require exemptions to be narrowly tailored, but not always. That was an overall impression I had. I coined the phrase "narrowly tailored in the reus" to describe those cases in which a statute requires no exemptions per se because of the way the reus is constructed. That's a publishable thought, but it doesn't tell you what reus is constitutionally required or how to spot these violations in the wild.
So I thought I would lay down some cold, hard facts about JR of legislation in Canada:
approximately 44% of cases involve a criminal or penal statute
no statute has ever been struck down twice (only the rape shield [ss. 276, 277, 278], the defence of mental disorder [s. 16(4)], and the dangerous offender regime [Part XXI] have been considered more than once)
wholesale bans will rarely, if ever, be the least intrusive means by which a government can achieve its objective (Irwin Toy), but statutes can be narrowly tailored in the reus (e.g. Lucas, Seaboyer)
male and female justices behave the same, except for a large difference in s. 15 right to equality cases
conservative and liberal appointees behave the same, except for a large difference in s. 2(d) right to freedom of association
The Irwin Toy Hypothesis: penal statutes in violation of ss. 7-12 of the Charter are more likely to be struck down and less likely to be saved
The most important cases I read were the 1980s decisions which shaped the constitutional jurisprudence that would follow thereafter: Hunter, Big M, BC Motor Vehicle, Irwin Toy, the Quebec Secession Reference, Ford v. Quebec, and the like. That is where you should focus your attention. My goal is to bring Irwin Toy back to the fore in the way we think about how we do judicial review of legislation. Perhaps Canada does, in fact, have a de facto tiered system of judicial review.
For example, some political scientists and legal scholars claim that Canada does not have so-called "as-applied" constitutional challenges. This is false, or at least only partially true. A penal statute which violates another right of the Charter (e.g. ss. 2(b) or 8) in the instant case is necessarily in violation of s. 7 for vagueness and overbreadth, since the case at hand demonstrates that it is so. The ONCA wrote in R. v. LeBeau, 1988 CanLII 3271 (ON CA), a case regarding acts of gross indecency contrary to s. 157 of the Criminal Code, that:
A statement of principle made by the Supreme Court of the United States is, in our view, applicable to the facts of the instant case:
“A plaintiff who engages in some conduct that is clearly proscribed cannot complain of the vagueness of the law as applied to the conduct of others. A court should therefore examine the complainant’s conduct before analyzing other hypothetical applications of the law.”
[...]
This court in R. v. Morgentaler, Smoling and Scott (1985), 1985 CanLII 116 (ON CA), 52 O.R. (2d) 353 at p. 388, 22 C.C.C. (3d) 353 at p. 388, 22 D.L.R. (4th) 641 at p. 676, adopted the principle in Village of Hoffman Estates et al. v. Flipside, Hoffman Estates Inc. (1982), 455 U.S. 489 at p. 495, that if a person’s conduct clearly falls within the proscription of a statute, then that person cannot complain of the vagueness of the statute as applied to others. To succeed on the basis of vagueness, a person would have to show that the statute is vague in all its applications as, for example, if there were no specified standard of conduct.
Now, admittedly, the only penal case where this has happened is R. v. Khawaja, which challenged the terrorism offences under both ss. 2(b) and 7 for overbreadth and vagueness (unsuccessfully). Bedford is a successful example of challenges under both ss. 2(b) and 7, but it relies heavily on gross disproportionality and not as much on overbreadth. Wakeling challenged s. 193(2)(e) under both ss. 7 and 8, but the section 7 analysis was subsumed by s. 8. Zundel challenged the vagueness of the Criminal Code provision, but only under s. 2(b). Winko had the necessary ingredients (ss. 7 and 15 challenges for overbreadth), but was not itself an "as-applied" challenge to the instant case. Overbreadth in the instant case was demonstrated successfully in R. v. Heywood, the high watermark of overbreadth.
Thus, a claim of vagueness or overbreadth requires that the claimant's conduct not clearly be "within the proscription of [the] statute"; one cannot argue from other, hypothetical facts, but only from the facts of the instant case.
English is one of the few, if not the only, languages with a single word for law, requiring the qualifiers natural and positive law. Most languages (French, Spanish, Latin, etc.) have two words for law: both "law" and "right". Lex and ius in Latin; ley and derecho in Spanish; loi and droit in French. Typically the word for "right" is the same word as the directional "right", as in a natural-born right, while the word for "law" typically means legislation (positive law). We should be attentive to this peculiarity in the English language when we read texts of another language translated into English.
A typical hypothesis that comes from the media is that refugees have led to right-wing political backlash. One might wonder if this is causal. We can evaluate this rather simply using the gravity 2SLS strategy laid out by Frankel and Romer (AER 2001). The gravity instrumental variables (IV) strategy is completely valid – I proved that by simulation here. We use predicted refugee flows from a gravity regression on exogenous factors, like distance, contiguity, common language, and common colonizers. We then look at the fitted values from that regression to see which are the top destinations for refugees. The top 10 destinations for 2000-2016 in terms of predicted value of refugee flows are: Germany, Austria, Switzerland, Belgium, Netherlands, Syria, Sweden, USA, Canada, Benin. The UK and France are #12 and #13, respectively.
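To make the gravity first stage concrete, here is a minimal Python sketch on synthetic data (the variable names, coefficients, and sample are illustrative assumptions of mine, not the Frankel–Romer data): regress log flows on the exogenous gravity covariates, then rank destinations by the fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic origin-destination pairs (placeholders, not real data).
n = 500
log_dist = rng.normal(8.0, 1.0, n)     # log bilateral distance
contig = rng.binomial(1, 0.10, n)      # shared border
lang = rng.binomial(1, 0.15, n)        # common language
colony = rng.binomial(1, 0.10, n)      # common colonizer

# Assumed "true" gravity relationship plus noise.
log_flow = (10 - 1.2 * log_dist + 0.8 * contig
            + 0.5 * lang + 0.4 * colony + rng.normal(0, 0.5, n))

# First stage: OLS of log flows on the exogenous gravity covariates.
X = np.column_stack([np.ones(n), log_dist, contig, lang, colony])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)

# Fitted values are the predicted flows, i.e. the instrument; ranking
# pairs by them gives the "top destinations" list described above.
predicted = X @ beta
top = np.argsort(predicted)[::-1][:10]  # indices of the top-10 pairs
```

In the actual 2SLS step, these fitted values would instrument for observed refugee flows in the second-stage regression of right-wing vote share on flows; the exclusion restriction is that distance, contiguity, and colonial ties affect political backlash only through the flows they predict.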
Germany, the Netherlands, the USA, the UK, France and Sweden stick out as Western liberal democracies consistent with the hypothesis. Perhaps the right-wing backlash is coming to Canada, once revered for its immigration policy, in the next election.
Update: Carney's Liberals won the 2025 election, but their immigration policy looks to change dramatically from Trudeau's.
James M. Cain's Double Indemnity is about the masculine moral hazard theory of insurance. Joseph Hansen's Fadeout is a more feminine approach to insurance, which portrays the people who need it. Despite this contrast, the two novels are connected by intertextuality. The famous line in Arthur Miller's Death of a Salesman where Willy laments that his insurance policy renders him "worth more dead than alive" comes from James M. Cain's Double Indemnity, which uses the line twice.
The defence of contributory negligence is also reminiscent of noir themes in insurance in the area of tort law. According to economic theory, the defence is really only necessary to prevent under-precaution under strict liability, and not at all under a regime of negligence. One could say the defence of contributory negligence is unconstitutionally harsh, and that a regime of comparative negligence is constitutionally required under section 7 of the Charter or the Fourteenth Amendment of the US Constitution. It offers a real sense of doom and grimness, akin to that portrayed in Double Indemnity and Fadeout. This is true even of a regime of comparative negligence if the plaintiff does not carry insurance, but especially of the defence of contributory negligence. The cruelty of such a regime is perhaps best illustrated by Ursula K. Le Guin's The Ones Who Walk Away from Omelas, which can be thought of as a critique of tort law.
The principle of symmetry in criminal law states that the mens rea must map to each and every element of the actus reus. This means that the mens rea must be surjective (onto) in its application to the actus reus. However, that is not always the case when applied to the consequences of the actus reus. This is why the SCC held in R v. Creighton, [1993] 3 S.C.R. 3, that the principle of symmetry does not rise to the level of a principle of fundamental justice. The SCC wrote in R. v. DeSousa, [1992] 2 S.C.R. 944 at 966 that this need not apply to acts causing death or bodily harm even if there is no foresight of the consequences.
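The "onto" requirement can be stated as a simple set check: symmetry holds iff every element of the actus reus has some fault element attached to it. The element names below are hypothetical, chosen only to illustrate the structure.

```python
# Elements of a hypothetical offence
actus_reus = {"act", "circumstance", "consequence"}

# Which actus reus element each mens rea requirement attaches to
mens_rea_map = {
    "intent_to_act": "act",
    "knowledge_of_circumstance": "circumstance",
}

def is_symmetric(mens_rea_map, actus_reus):
    """Symmetry holds iff the mapping is onto: every actus reus
    element is covered by at least one mens rea requirement."""
    return actus_reus <= set(mens_rea_map.values())

print(is_symmetric(mens_rea_map, actus_reus))  # False: "consequence" has no fault element
```

As in Creighton and DeSousa, the map fails to be surjective precisely at the consequence element: liability attaches to the act and circumstance with fault, while the consequence carries none.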
Kafka's The Trial shows that the State apparatus can be wielded to construe otherwise perfectly normal, consensual behaviour as quasi-criminal. Donald J. Trump's indictment is an example of how the State instrument can be used to target an enemy of the State for political ends. Just as Josef K. is a moral innocent, Trump was exonerated by time and the demos.
We sometimes say in the law, following a principle that emerges from Deuteronomy 19:15, that we cannot convict on a "he said, she said," or on the testimony of a single eyewitness. Treated as a hard and fast rule, this is plainly false. It might be harder to do, but of course we can convict on a single eyewitness. It simply requires that one person's testimony be more credible or coherent than the other's.
Bill C-63 contains a recognizance provision that puts forward a prior restraint on hate speech. The provision is akin to something Philip K. Dick would have dreamed up in Minority Report, except the violation here is not of the presumption of innocence but of free expression. The least intrusive means would be to add s. 319 as a s. 183 intercept offence under the wiretap provisions of the Criminal Code. The recognizance provision in Bill C-63 is unconstitutional.
The setting is 16th-century Venice during the Ottoman-Venetian War. The plot is Shakespeare's Othello. Othello, a Moorish military leader married to the young, white Desdemona, murders her in a fit of sexual rage mechanically driven by Iago's manipulation. He screams "I will chop her into messes!" (4.1.192) before he smothers her. We later learn that Iago "is the cause" (5.2.1-3). It is clear that Othello is "the man who is drunk or in a rage" acting involuntarily, "not knowingly, but in ignorance" (Nicomachean Ethics, Bk. III). Othello is put on trial for murder. Feel free to individualize the test for murder to the facts of Othello.
Answer: the defence is automatism.
Legal theory is dead. All major legal theories – legal formalism, legal realism, law-as-politics – attract support from the data. In my empirical study of the Supreme Court of Canada I find that:
Judges appear to follow doctrinal sources, such as Irwin Toy and BC Motor Vehicle;
Female judges strike down more equality statutes; and
Liberal appointees strike down more labour regulations and penal statutes
This suggests that each of these theories is predictive in its own way, but none is complete as a theory of what judges do. One can always construct a sample small enough that one's preferred theory holds true, but, in the large, all are true to some extent. Legal theory is no longer a horse race between competing theories. No one theory "defeats" the others. All are to be embraced.
Even the 1958 Hart-Fuller debate is played out. "What is law?" is no longer an interesting question much in the same way that the question of "What is knowledge?" became less fruitful in the field of epistemology. Many claim that Hart won the debate, yet entire bodies of law emerge from the moral principles of the natural law: the law of defamation, the duty of care, moral culpability, and contractual promises, just to name a few. Perhaps Dworkin's hybrid theory was closest to truth on that matter. Raz too. Dan Priel, a self-described Fullerian, teaches jurisprudence by asking questions like "how law?" and "why or why not law?".
The future of jurisprudence is to develop complete theories of law. Shapiro's "planning theory of law" and "experimental jurisprudence" are perhaps the best candidates for such theories. They attempt to fully explain law by drawing on all of the major legal theories that precede them. These newer theories add to the richness of what law is, since none of the extant theories spans every branch of the law. A complete theory of law, in my opinion, should explain the content of the capital-L Law itself and not just "what law is" in the abstract.
Rest in peace, legal theory.