### Where Does Scope Come From?

After several false starts, I finally sat down and watched the first of Frank Pfenning's 2012 "Proof theory foundations" talks from the University of Oregon Programming Languages Summer School (OPLSS). I am very glad that I did.

Pfenning opens the talk by noting that he will be covering the "philosophy" branch of the "holy trinity" of Philosophy, Computer Science, and Mathematics. If you want to "construct a logic," or to understand how various logics work, I can't recommend this video enough. Pfenning demonstrates the mechanics of many notions that programmers are familiar with, including the "connectives" (conjunction, disjunction, negation, etc.) and scope.

The connectives are demonstrated by showing the connection between intuitionistic (constructive) logic and the "introduction" and "elimination" rules for each connective, and how the "harmony" between the local "soundness" and "completeness" of those rules fits together to keep a logical system consistent when applied to real-world ideas.
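To give a taste of what these rules look like (my own sketch in standard natural-deduction notation, not a transcription of the lecture), here are the introduction and elimination rules for conjunction. Local soundness is the observation that an introduction immediately followed by an elimination is a "detour" that can be reduced away, so the elimination rules are no stronger than the introduction rule licenses:

```latex
% Conjunction: one introduction rule, two elimination rules.
\frac{A \qquad B}{A \wedge B}\;{\wedge}I
\qquad
\frac{A \wedge B}{A}\;{\wedge}E_1
\qquad
\frac{A \wedge B}{B}\;{\wedge}E_2
```

If you build `A ∧ B` from proofs of `A` and `B` and then immediately project out `A`, you could have just used the original proof of `A`; that reduction is exactly the harmony Pfenning is after.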

Scope is demonstrated along the way as well. It turns out that in logic, as in programming, a sensible notion of scope versus a tricky one can mean the difference between a proof that makes no sense and one you can rest other proofs on. I am very interested in this kind of fundamental kernel - how the smallest and simplest ideas are absolutely necessary for a sound foundation in any logical system. Scope is one of the first intuitions that new programmers build - can we exploit this fact to make the connections between logic, math, and programming clearer to beginners?
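One way to make that connection concrete (my own illustration, leaning on the Curry-Howard correspondence rather than anything in the lecture itself) is to notice that the introduction rule for implication, which assumes a hypothesis that is only available inside one subproof, behaves exactly like a bound variable in a nested function:

```python
# Under Curry-Howard, assuming A to prove B corresponds to a function
# parameter: the assumption's scope in the proof is the variable's
# scope in the program. Here is a "proof" of A -> (B -> A).

def proof_of_a_implies_b_implies_a(a):
    # Assume A: `a` is in scope for everything below, just as the
    # hypothesis A is available throughout this subderivation.
    def inner(b):
        # Assume B: `b` is in scope only here. The assumption A is
        # still visible, because we are inside its scope.
        return a
    # Leaving `inner` discharges the assumption B, concluding B -> A.
    return inner
    # Returning discharges the assumption A, concluding A -> (B -> A).

k = proof_of_a_implies_b_implies_a
print(k("evidence for A")("evidence for B"))  # -> evidence for A
```

Using `a` outside the body of `proof_of_a_implies_b_implies_a` would be a scope error in Python, and the corresponding move in a proof - using a discharged assumption - is exactly the kind of mistake a well-behaved notion of scope rules out.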

I'm working on a longer writeup of the idea, which I hope to publish here with some transcriptions of what Pfenning writes on the board in the video, but for those who are on the fence about watching these lectures, let this post be the push you need. Some audio quality issues during the Q&A aside, this is pure gold - dive in!