
sml-redprl's People

Contributors

arthuraa, cangiuli, david-christiansen, ecavallo, favonia, jonsterling, jozefg, matthewfluet, mortberg, robertharper, scott-fleischman, syohex, timjb, wilcoxjay

sml-redprl's Issues

reflect refiner into the computation system

So, currently theorem declarations are elaborated into definitions like mythm = prove(goal; script). Initially, I was just thinking this was a convenient way to store the data of the proposed proof as an abt, but now I am thinking it would be extremely cool to actually reflect the behavior of the refiner into the computation system, such that prove(goal; script) actually computes to a term that codes up the result of refining the goal with that script.

Then, the action of the "proof checking" elaborator phase would be to evaluate all such declarations, and ensure that they result in no subgoals. So, prove would be sort of an oracle, from that standpoint.
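A very rough sketch of the idea, where Refiner.refine and ProofState.quote are hypothetical names for running the refiner and coding its result back into the computation system:

(* a step rule for prove(goal; script): run the refiner and quote the
   resulting proof state as a term of the computation system *)
fun stepProve sign (goal, script) =
  ProofState.quote (Refiner.refine sign goal script)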

Testing Signature Transformations

Once #12 lands, we'll have a nice way to test all the signature elaboration stuff. It would probably be good to add

  1. A validator for each signature transformation or at least the last one
  2. A few examples of signatures to be parsed and elaborated
  3. Testing glue to make this all run
  4. Integration of all this into the Travis build cycle

That way we don't have a bunch of code in there that's never really been run or tested :)

[Nominal LCF] associativity is wrong

Here's a dilemma:

  1. We want tactic sequencing to associate to the left: so, t1; t2; t3 should be THEN(THEN(t1;t2); t3).
  2. We want name scoping to associate to the right; so, in a <- t1; b <- t2; c <- t3, the name a should be in scope for the rest of the script.
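To see the tension concretely, here is a tiny SML sketch (the datatype is hypothetical, not the actual representation):

(* hypothetical abstract syntax for scripts *)
datatype script =
    TAC of string
  | THEN of script * script            (* t1; t2 *)
  | BIND of string * script * script   (* a <- t1; rest *)

(* 1. left-associated sequencing: t1; t2; t3 *)
val seq = THEN (THEN (TAC "t1", TAC "t2"), TAC "t3")

(* 2. right-nested scoping: each bound name must scope over the
   entire rest of the script *)
val scoped =
  BIND ("a", TAC "t1",
    BIND ("b", TAC "t2",
      BIND ("c", TAC "t3", TAC "id")))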

pretty print signatures and goals

We should be using our own custom pretty printer rather than the one that comes from sml-typed-abts. Probably we should use the sml-unparse library so that we can support nice layout, etc.; it will become important as soon as we add support for infix operators with precedence.

Additionally, the printer should use environments that map (metavariables, symbols, variables) to strings. So, it would take string MetaCtx.dict * string SymCtx.dict * string VarCtx.dict as input.

Currently, in the printer for proof states, we actually go through and generate new "pretty" names for all the holes and actually substitute out the metavariables everywhere with these new ones. That's pretty silly, and very inefficient.
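A sketch of the interface (MetaCtx, SymCtx, VarCtx are the dictionary structures from sml-typed-abts; the other names are hypothetical):

signature PRINTER =
sig
  type abt
  type judgment
  type doc (* layout documents, e.g. from sml-unparse *)

  (* display environments: no substitution into the term required *)
  type env = string MetaCtx.dict * string SymCtx.dict * string VarCtx.dict

  val printTerm : env -> abt -> doc
  val printGoal : env -> judgment -> doc
end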

specify tactic language

So, the idea for the tactic language that I've sketched out in the code is a bit half-baked, but I think it could be really cool. We'll have to re-implement it after I have defined it, since the code at present won't totally suffice.

If the specification takes too long to develop, we might want to temporarily switch to a Coq-style approach to binding, where names aren't bound by the primitive tacticals but are taken as parameters, and introduce a single "fresh" abstractor fresh a in ... that generates a new name. The disadvantage of this approach is that it allows users to do lots of weird badly-scoped things, but it is perhaps true to the idea of tactics as "metaprogramming".

set up SML/NJ build: add .cm files

For every .mlb file, we'll need to put in a .cm file (at least; in some cases, it may require some auxiliary .sml files, depending on how much of the ML Basis system's features we are using).

The reason to do this is that SML/NJ compilations are many times faster than MLton's, and it is nice to get a quick turnaround when playing with the code or fixing a bug.
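For reference, a minimal .cm file mirroring an .mlb might look like this (the member files here are made up):

(* src/sources.cm *)
Group is
  $/basis.cm
  abt.sml
  refiner.sml
  frontend.sml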

This could be a good task for someone who wants to contribute, but doesn't know where to start.

Path types

😉

  • type equality
  • path equality
  • path introduction
  • structural equality
  • path-app / type synthesis
  • path-app "computation" (open head expansion can't apply here, so we need an equational rule)
  • equational rule for projection (i.e. M@0 = P_0 if M : paths({x}.A; P_0,P_1)).

minimal command line frontend

This is an easy task for anyone who wants to take it on—we need to have a command line interface to JonPRL so that you can run jonprl file.jonprl and have it elaborate your development, printing out the results.

There is an example of this in the test suite.

support multi-sorted refinement

Currently, the syntax of signatures and the refiner itself assume that the evidence synthesized by the proving procedure will be of sort EXP; however, we eventually want to support proving the truth of types that refine sorts other than EXP.

enforce arities of CUST operators

As I was working on the elaboration semantics for signatures, I realized that we aren't actually enforcing that CUST operators carry the correct arities. We'll need to introduce a well-formedness check for all terms that are introduced into a signature, which walks the abt to find instances of CUST[ϑ](a), looks up the arity a' assigned to ϑ in the signature, and checks that a == a'. A sketch follows the checklist below.

  • update elaboration spec
  • update code
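Here is a rough sketch of the check, assuming the view constructors $ (operator application) and \ (abstraction) in the style of sml-typed-abts, and hypothetical O.CUST and Signature.lookupArity:

exception ArityMismatch

(* walk the abt; at each CUST[theta](a), compare the carried arity a
   against the arity a' assigned to theta in the signature *)
fun checkArities sign term =
  case Abt.out term of
     theta $ args =>
       ((case theta of
            O.CUST (opid, arity) =>
              if arity = Signature.lookupArity sign opid then ()
              else raise ArityMismatch
          | _ => ());
        List.app (fn (_ \ m) => checkArities sign m) args)
   | _ => ()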

Design for records

If this is going to be useful for anything, we need to have records that are at least as usable as those of Idris or Agda. I think we must not go the route that Nuprl went, since it's really hard to build tactics to check that sort of record, and almost immediately you end up with goals that are pretty difficult.

  1. I'd like to use symbols for field names, not necessarily atoms as in Nuprl—this means, essentially, that field names can never depend on a variable or be chosen dynamically. This will make a lot of things easier, and in practice, I don't think you ever need the full generality of the Nuprl approach.
  2. There's an inherent contradiction between the desired binding structure of a record type on the one hand, and the fact that you want the names of fields to be global—or at least, to be scoped outside the definition of the record. That is, in {foo: S, bar: T[foo]}, foo is apparently playing a role as a variable, but also a symbol to be used in the projection get[foo].

Regarding (2), Nuprl simply uses some horrid punning combined with their display forms, but I consider that a non-starter.

One option that I have been considering is to have record descriptions induce no actual binding structure, but simply associate symbols to types; the "free variables" in the telescope that correspond to previous rows would actually just be symbolic references in the sense of PFPL; we would have something like {foo: S, bar: T[&foo]}.

Then, we add a new form of linear environment, "F := . | a ~ T, F" to the typing and evaluation judgments, and then do something like this:

------------------[row-nil]
H; F >> {} row


H; F >> T type
H, x:T; F, a~x >> Ψ row
--------------------------[row-cons]
H; F >> (a : T) :: Ψ row


--------------------------[fluid-ref-step]
F, a~M, F' != &a :-> M

However, I don't know if this is actually a valid thing to do wrt. our meaning explanation & computation system. I wonder if there is a different formulation of this idea which is expressible in the existing judgmental apparatus...

Any thoughts, @jozefg?

Complete rename to RedPRL

  • rename org
  • rename repo
  • update README
  • rename files
  • purchase redprl.org
  • set up github pages on redprl.org (as separate repo)

conversional calculus

It would be nice to have a conversional calculus, built into the language in the following style:

sort conv
op rwc : (conv)tac // rewrite conclusion
op rwh{τ}[hyp:τ] : (conv)tac // rewrite hypothesis

op match{ϑ}  : (...conv...)conv    [ϑ operator] // match a term structurally
op id : conv // do nothing
op step{n} : conv // step n times
op eval : conv // evaluate to canonical form
op rewrite{τ}[hyp:τ] // rewrite under a hypothesized computational equality

The idea is that you build up a conversional structurally and apply it as you please; there is no need for any advanced unification machinery to make this work, unlike some of the stuff I think we had attempted previously....

It is probably also possible to actually elaborate conv scripts into pure tactic scripts that appeal to the primitive rules for computational equality, so the conversional machinery wouldn't have to be trusted.
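For instance, a hypothetical script in this style:

// evaluate the conclusion to canonical form, then rewrite hypothesis h
// by stepping it twice
rwc(eval); rwh{exp}[h](step{2})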

How does this sound @jozefg?

Kit for more easily writing rules

Rules that involve dependency are so cool! But they are very difficult to write correctly. You have to be very cautious with valences, metavariables, holes, etc.

There are probably a number of utility functions we could write that would factor out a lot of the scarier aspects in most cases. For instance, an "abstract a variable and make a new goal" operation should simultaneously produce the hypothetico-general subgoal along with a pre-made hole for use elsewhere, as sketched below.
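For instance (a sketch; all of these names and types are hypothetical):

(* build a subgoal with the correct valence and simultaneously return
   a pre-made hole for use in the extract *)
val makeGoal : context -> judgment -> goal * hole

(* abstract a variable: produce the hypothetico-general subgoal along
   with its matching hole *)
val abstractGoal : context -> variable * sort -> judgment -> goal * hole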

Add "sort inference" during parsing

Because we're developing a multi-sorted type theory, there are a lot of operators that have to take sorts. However, there's no theoretical reason that the user should have to write them, in most cases. In almost every case, the sort annotations on an operator can be inferred by inspecting the sorts of its operands.

Now, the sort parameters have to be totally fixed once parsing is through, or else it will not be possible to check terms in the ABT framework. So, I'm suggesting that we arrange the parser to have two modes: checking & inference. In "checking mode", the parser can use the supplied sort to guide the interpretation of custom notation, whereas in "inference mode", the parser will fall back on unambiguous concrete syntax and simultaneously synthesize a sort if possible.

This should let us gain a cleaner concrete notation for Red JonPRL developments.
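A sketch of the two modes (the names here are hypothetical):

(* in checking mode, the supplied sort guides the interpretation of
   custom notation; in inference mode, the parser falls back on
   unambiguous concrete syntax and synthesizes a sort *)
datatype parseMode =
    CHECK of sort
  | INFER

val parseTerm : sign -> parseMode -> string -> ast * sort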

the meaning of truth sequents

It is not entirely clear what the semantics for truth sequents should be, but here's one proposal. First, let's recall the lay of the land, since it is a little intricate.

The actual type theory has many judgments, one of which is Y || Ψ != M = N in φ, which is the parametric (in Y) functional (in Ψ) membership sequent. I will assume anyone reading this ticket knows what it means (please consult Type Refinements for the Working Class if not). In this note, I will write Ψ ctx rather than Ψ refines Γ and φ type rather than φ refines τ, just for brevity & simplicity.

The proof theory ("refinement logic") of Red JonPRL has a sequent judgment, written Y || H >> φ true [ext M], like Nuprl (with M an output, and all the other symbols inputs). Whereas the !=-sequent is defined by the meaning explanation, the >>-sequent is defined inductively by a collection of sequent-calculus-style rules. Of course, >> shall have an intended semantics in the type theory, with which it must be sound, but cannot be complete. >> is in some sense a finite approximation of !=—over time, we'll add more rules to refine the approximation, but we'll never actually reach != itself. Now the way that the intended semantics of the proof theory works is that we impose a "postsupposition" (i.e. a condition that we can conclude from the evidence of the judgment).

So the question raised by this ticket is, What is the intended semantics for >>? Let us first consider the simpler case without atoms. Then, it is easy: H >> φ true [ext M] postsupposes H ctx, H != φ type and H != M in φ. Note how this differs from the usual judgments of type theory, which have presuppositions (i.e. where the judgment only has a meaning in case the presuppositions obtain); here, the judgment expresses no presupposition at all, yet we intend to validate its postsuppositions by considering the ways in which we could come to know the judgment (in this case, inductively).

Now, we must extend the explanation to the case with atoms, Y || H >> φ true [ext M]. First, we can confidently state two of the postsuppositions:

  1. Y || H ctx
  2. Y || H != φ type

But what of M? One option would be the simplest, which is to say Y || H != M in φ. However, this does not seem to comport with Allen's supervaluation semantics, which allows the evidence to live in any model that has enough symbols (including ones that have more symbols!). So, this seems to suggest a slight change to the sequent judgment, to something like Y || H >> φ true [ext ρ.M], postsupposing the following:

  1. Y || H ctx
  2. Y || H != φ type
  3. There exists Y' such that ρ : Y --> Y'
  4. Y' || H.ρ != M in φ.ρ

This allows us to generate new symbols in the course of knowing the truth of φ.

Question: Does this look like it should work? If we write up the usual rules for the proof theory, will they be sound according to these semantics? (i.e. will the postsuppositions hold?) I'm thinking it should work, since the renaming ρ will just percolate up the rules in the same way as the extracts—but there may be complications with renaming, since premises might become evident in mutually incompatible worlds, so we may need to rename things to reconcile them.

@jozefg @freebroccolo @vrahli

Refactor Nominal LCF elaborator

I've got a cleaner method of elaboration in the paper draft, I think, which is a bit more compositional. We should try it out...

Compile / serialize elaborated signatures

This gives us a way to save some work—imagine being able to take an already-elaborated signature (such as one from a file) and add a new declaration to it; elaborating the result of this will only incur the cost of elaborating the new definition, since all the old stuff will be skipped.

This gives us a plausible way to have a real notebook-style interface, where the output of each "cell" is just an elaborated signature—allowing us to elaborate developments incrementally rather than repeatedly.

operational semantics

We can implement the operational semantics in a way that resembles the following:

datatype 'a step = 
    STEP of 'a
  | FINAL

(* we can clean up a lot of the gnarly aspects of our old code
   by providing the following: *)
structure StepMonad : MONAD where type 'a t = 'a step
structure StepFunctor : FUNCTOR where type 'a t = 'a step

exception Stuck

(* Represents the judgment [Sig !- M :-> N], subject to the following
   postsupposition: [freeSymbols N] shall be a subset of [freeSymbols M].
   i.e. we are propounding a *scoped dynamics* in Bob's terminology. *)
val step  : sign -> abt -> abt step

The step operation takes a signature, so that it can look up definitional extensions that are declared in the signature.
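An evaluator could then fall out of step as the obvious loop:

(* run a term to canonical form (raising Stuck along the way if the
   term is ill-behaved) *)
fun eval sign m =
  case step sign m of
     STEP n => eval sign n
   | FINAL => m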

@jozefg sound all right? Under the new order, there's no need to worry about universe levels when unfolding definitions, so this should be pretty straightforward.

shorthand for valences?

Might be worth adding a shorthand notation for valences that mention only the exp sort. Like {m}[n]exp, which could be shortened to {m}exp or [n]exp when one of the slots is 0.

emacs mode

We'll want an emacs mode with the following to start:

  • syntax highlighting
  • a way to execute a signature

We can try and bring the old emacs mode up to date, though we'll have to remove some features from it until Red JonPRL catches up.

Longer-term, I'd like to have some kind of server interface, so that we can do things like just re-elaborate a single declaration, remembering the results of the previous unchanged declarations.

I suggest that we should let the emacs mode live in this repository, so that it can be updated every time a change to Red JonPRL is made.

Parser for signatures

We need to implement a parser for developments. The parser will still be a little tricky, because as it goes, it needs to accumulate parsing rules for declared operators (and eventually, custom notations—but we'll leave that for after the basics are done). The idea is that when a declaration like the following is parsed,

Def something[u:σ, v:τ](M : ({σ}[τ].τ)τ, ...) : τ = [
  N
].

... we need to remember how to parse applications of the something operator. So, in the future we must parse something[u,v] as CUST ("something", [(u, σ), (v, τ)], arity), where arity is the representation of ({σ}[τ].τ, ...)τ.

The parser will be slightly less tricky than the old one, since we're abandoning the old "open-ended datatype" approach to custom operators, so there's no need for side effects in the parser. Additionally, we always parse first to ASTs, and only then do we convert them to ABTs, which means that all our parsers will be significantly easier (since we do not need to assign identity to variables and symbols immediately).
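A sketch of the state the parser would thread along (the representation here is hypothetical):

(* each parsed declaration extends a table from operator names to
   their symbol parameters and arity *)
type opTable = (string * ((symbol * sort) list * arity)) list

fun declareOp (name, params, arity) table =
  (name, (params, arity)) :: table

fun lookupOp table name =
  case List.find (fn (n, _) => n = name) table of
     SOME (_, entry) => entry
   | NONE => raise Fail ("undeclared operator: " ^ name)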

Other forms of declaration (like Thm and Tac) will do exactly the same thing as Def; in an evaluator for developments, we will separately execute Thm declarations.

Ideas for derivations/extraction

In JonPRL, we constructed explicit derivations as the syntheses of the sequent judgment, rather than merely extracting programs. This was perhaps useful for a couple of reasons:

  1. It was helpful for debugging.
  2. It helped dispel the myth that PRLs have only programs, but not proofs.
  3. @vrahli was planning to translate the JonPRL derivations to Coq, so that they could be checked against the Nuprl in Coq development.

However, there are some serious problems with producing concrete derivations:

  1. It's a duplication of work and it is not clear what use it is: the refiner is the definition of the logic, and so the proof is already present (but it exists spread out in time, rather than in a single location). In some sense, when you start from a semantical perspective like we do, the proof theory is pretty arbitrary, so it is not clear to me what sense it makes to set it in stone in the grammar of our language.
  2. It is likely to become unwieldy for large proofs. It's very plausible that we could come up with a proof that we can execute, but which we cannot easily store in memory. There's a lovely example of this in the Chalmers Cubical implementation here, where a particular approach to proving univalence results in absolutely outrageous memory usage.
  3. It is a pain to implement!

Does anyone have any thoughts on what ought to be done here? I'm inclined to abandon the construction of explicit derivations, because I don't see much use in it. But I also don't want to scuttle any of @vrahli's intended work; however, I do not know whether there is even any potential anymore for checking Red JonPRL derivations in Coq, because we are actually implementing a different type theory from Nuprl:

  • Ours is a many-sorted framework, unlike Nuprl.
  • We have given a truly different (syntax and) semantics to atoms/symbols from the one in Nuprl.

@jozefg

Add set types, call it "species"?

Next up to be added is the Nuprl set type (i.e. set comprehension). I've been thinking a bit about the name (we have in the past used "subset"), and I am a bit unsatisfied with all the existing ones. "Set" is confusing because of Agda, "subset" is confusing because we don't call our types sets, and "subtype" is confusing because we might expect it to refer to the subtyping relation.

However, Brouwer used the word Spezies (English, species) for precisely a set that is defined by imposing a predicate on already-existing objects.

As far as the rules are concerned, this will be the first significant demonstration of the Dependent LCF machinery, in that we will be able to provide a single refinement rule for species-introduction.

species{σ} : (exp; [σ].exp)exp     for σ sort

Term parser

We'll need to add a parser for JonPRL ASTs.

Add goals other than truth

If we added a goal like H >> IsType{τ}(T) whose evidence is a universe level, it would be possible to write some cool level inference heuristics as a rule or tactic.

This idea also lets us relax the old "PRL" semantics such that the main postsuppositions are only required for the truth sequents.

Add a "rename" operation to signatures

This operation should take every free symbol (basically, the operator ids) and rename it by some function (such as a qualifying prefix, like fn x => "mylib#" ^ x). This shall properly be applied in each declaration, etc.

We can use this as a poor man's support for namespacing. Long-term, we can introduce a new implementation of the SYMBOL signature that includes some path structure in it.

We might want to add an operation to ABT to support mapping a function over the free names in a term, since currently you have to manually substitute each one, which could be fairly inefficient.
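The proposed ABT operation might look like this (hypothetical):

(* map a function over the free names of a term in a single pass,
   rather than substituting each one individually *)
val mapFreeNames : (string -> string) -> abt -> abt

(* poor-man's namespacing: qualify every operator id in a term *)
val renamed = mapFreeNames (fn x => "mylib#" ^ x) term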

Linear types

Can I join the types party? I am learning linear logic right now; how do linear types relate to CTT?

Cool website / propaganda

We need a cool website, in the constructivist style. I made this site for PrlConf, which I think has the basic aesthetic, but I have no idea what I'm doing.

If anyone wants to design and throw together a site for Red JonPRL, they get a free mug with a cool revolutionary slogan on it!

Add discussion of theoretical basis

In the Definition, we should cite and briefly summarize both Syntax and Semantics of Abstract Binding Trees and Type Refinements for the Working Class.

Strategic retreat on multi-sorted type theory (New Syntactic Policy)

The original vision was to construct multi-sorted type theory based on refinements (as in Darin's and my recent paper), in which we would have things like Pi(A; x.B) refine σ ⇀ τ when A refines σ and the family B(x) refines τ, etc. But there is trouble in paradise:

  1. I had also thought that each sort would give rise to its own hierarchy of universes, but now, I do not know if this makes any sense. Each universe refines exp (the sort of type expressions), but it gets confusing beyond that... The first τ-universe contains the types that refine τ, but the second τ-universe contains those types as well as the first τ-universe itself; so there is a universe which contains some types that refine one sort (τ), and some types that refine another sort (exp). At this point, it is totally unclear to me what it means to quantify over types.
  2. Issues with universes aside, if we intend to use Red JonPRL as a program logic, it might make a lot of sense to follow the program we laid out, but I suspect it would not be so useful until we have generalized to a polymorphic or dependently-sorted syntactic framework (as opposed to mere ABTs, which correspond in some sense to a nominal version of the simply typed lambda calculus).

In any case, I think it's time to take a page out of the proceedings of the 10th Congress and make a strategic retreat.

I am proposing the following:

  1. All of the CTT-related stuff should be put into the exp sort; things like tactic scripts, level expressions, etc. shall live in their own sorts, just like they do now.
  2. For now, we should only have a single universe hierarchy, the hierarchy of types that refine exp. Since all type expressions are of sort exp, it is not even clear what a universe hierarchy for another sort would mean. Now, this does mean that only types that refine exp inhabit a universe—but recall that the definition of typehood in CTT is not really inhabitance in a universe, but rather the imposition of a partial equivalence relation on the terms of a particular sort. Fixing a universe hierarchy has more to do with the PRL than it has to do with the actual type theory. So, there are still types that don't refine exp; they just won't inhabit a universe. I don't think this will be a big problem in general, and we can also add special universes in the future that include types refining a different sort. Additionally, there is no reason why we cannot relax the postsupposition of the PRL judgment H >> P from "P shall inhabit a universe" to "P shall be a type".
  3. Computational approximation, equivalence etc. must remain multi-sorted! There is no reason to restrict these to exp.

I spoke with @freebroccolo a bit over IRC last night, but I'm curious if anyone else has feedback. /cc @jozefg @david-christiansen

Rethink hypothesis "visibility"

I don't remember why I had used the "hidden/visible" mechanism for hypotheses in Classic JonPRL rather than just using a primitive squash type like MetaPRL. It may have had something to do with a difficulty getting good refinement rules, but if this was the difficulty, we can certainly deal with it using Dependent LCF.

[nominal-lcf] have one choice sequence per sort

Currently, we're threading a single choice sequence of atoms through the refiner, but in fact, this will lead to trouble as you might bind a symbol of one sort, but have it be used elsewhere as if it were another sort. The right thing to do is, I think, to send around a family of name stores, and have symbol binding in Nominal LCF scripts elaborate to pushing a symbol onto the appropriate choice sequence.

user-defined canonical operators & computation rules

Supporting this would be awesome! If we added PER types, then users could really treat Red JonPRL like a fully open-ended logical framework.

It would be nice, however, to ensure that the computational properties of the theory are not destroyed by a bad definition. For instance, it would be nice to support some syntactic check that approximates the extensionality condition for operators; there ought to be some decidable subset of definitions that we can accept which are always extensional, I would think. We would also want to ensure that the definitions preserve determinism.
