mathlib's Issues

theories/number_theory/pell.lean: unsolved goals

Travis-CI is failing due to the errors below. I'm just making a note of it in case nobody else noticed.

theories/number_theory/pell.lean:186:24: error: solve1 tactic failed, focused goal has not been solved
state:
d x y z w : ℕ,
xy : sq_le x 1 y d,
zw : sq_le z 1 w d,
this :
  0 ≤
    ↑x * (↑x * (↑z * ↑z)) +
      (↑d * (↑d * (↑y * (↑y * (↑w * ↑w)))) +
         (-(↑d * (↑x * (↑x * (↑w * ↑w)))) + -(↑d * (↑y * (↑y * (↑z * ↑z))))))
⊢ 0 ≤
    ↑x * (↑x * (↑z * ↑z)) +
      (↑d * (↑x * (↑y * (↑z * ↑w))) +
         (↑d * (↑x * (↑y * (↑z * ↑w))) +
            (↑d * (↑d * (↑y * (↑y * (↑w * ↑w)))) +
               (-(↑d * (↑x * (↑x * (↑w * ↑w)))) +
                  (-(↑d * (↑x * (↑y * (↑z * ↑w)))) +
                     (-(↑d * (↑x * (↑y * (↑z * ↑w)))) + -(↑d * (↑y * (↑y * (↑z * ↑z))))))))))
theories/number_theory/pell.lean:807:84: error: tactic failed, there are unsolved goals
state:
a : ℕ,
a1 : a > 1,
y2 y1 y0 yn1 yn0 xn1 xn0 ay a2 : ℤ
⊢ yn1 * (ay * a2) + (-y0 + (-(yn0 * ay) + -(xn1 * a2))) =
    y1 * a2 + (yn1 * (ay * a2) + (-y0 + (-(y1 * a2) + (-(yn0 * ay) + -(xn1 * a2)))))

Definition of the ring of adeles of a number field

It dawned on me that we are really close to being able to define the adeles of a number field in Lean, at least as an abstract ring. We don't have polynomials (edit: yes we do) or number fields, so it seems like we are a million miles away, but we now have enough abstract commutative algebra to make everything else very simple. Here is a roadmap.

  1. Define the polynomial ring R[X] in one variable X over a commutative ring R (perhaps as functions f from nat to R for which there exists some bound B=B(f) such that f(n)=0 for n>B, or perhaps as some inductive type), and prove it's a commutative ring. Define the evaluate-at-r map (r in R) from R[X] to R (induction).

Edit: this is already in Lean e.g.
import data.finsupp
#check finsupp.to_comm_ring
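
For comparison, here is a hedged sketch of the "functions with bounded support" model mentioned in step 1 (the name poly is hypothetical and purely illustrative; mathlib's actual implementation goes via finsupp):

def poly (R : Type*) [comm_ring R] : Type* :=
{f : ℕ → R // ∃ B : ℕ, ∀ n > B, f n = 0}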

  2. Define a number field K to be a field which is abstractly isomorphic (as a ring) to the quotient ring Q[X]/M for M some maximal ideal of Q[X] (Q the rationals) (the isomorphism is not strictly speaking part of the data). Define i : Q -> K to be the canonical map (composition of functions) (this is independent of the choice of isomorphism).

  3. Define the algebraic integers of a number field K to be the elements x in K such that there exists some monic polynomial f(X) in Z[X] with f(x)=0.

  4. Prove that the algebraic integers form a ring. The most natural proof involves deducing it from some more abstract and general facts about finitely-generated modules over a Noetherian ring, namely that if M is such a module then M satisfies the ascending chain condition for submodules, and that any submodule of M is also finitely-generated. The point is that x in K is an algebraic integer iff the subring of K generated over Z by x is finitely-generated as a Z-module.

  5. Define the completion of a commutative ring R at an ideal J to be the projective limit of R/J^n, n>=0.

Edit: Kenny Lau has done this.

  6. Now define the finite adeles of K to be a subring of the following product of rings: the product is over the set of maximal ideals of R, the algebraic integers of K, and the term in the product corresponding to a maximal ideal M is the field of fractions K_M of the M-adic completion R_M of R. It is not true that completing an arbitrary integral domain at a prime ideal will give you an integral domain (the fact that this fails is not obvious ring-theoretically but it is obvious if you think about rings geometrically as affine schemes), however it's true in this case. Now R_M is a subring of K_M and an element of this product is a finite adele iff its component at M is in R_M for all but finitely many M.

  7. Define the infinite adeles of K=Q[X]/M to be R[X]/M, where R now denotes the real numbers. It's also a ring.

  8. Finally, define the adeles of K to be the product of the finite and the infinite adeles of K.

tendsto_dist nerfed

Since bb1a9f2, tendsto_dist from metric_space.lean requires the source space to be a metric space (because of a line declaring a metric_space instance on β).

If this was unintended, the easy fix is probably to use another letter for the source space. However, it's not great if Lean took the initiative to add this hypothesis: it means some elaboration or type class inference went wrong (though maybe it's unavoidable).

Duplicate lemma about product uniformity

Lemmas
https://github.com/leanprover/mathlib/blob/e74ff76ccac882018e9c1fd70772fd7c88ae06f5/analysis/topology/uniform_space.lean#L1482-L1485
and
https://github.com/leanprover/mathlib/blob/e74ff76ccac882018e9c1fd70772fd7c88ae06f5/analysis/topology/uniform_space.lean#L1535-L1539
have the same statement, although one proof is shorter than the other.

Obviously I can open a PR deleting one of them if you want, but maybe you want to use the opportunity to make some extra checks and reorganization.

Change list.prod from foldl to foldr

@kckennylau says: Was there any point in time where list.prod [t] was definitionally equivalent to t * 1? And was there any point in time where list.prod (x :: L) = x * list.prod L was rfl?

@johoelzl says: I traced it back; it was always foldl. I ported the definition from Lean 2's library. I think we should change it to foldr; then we get the expected equalities.
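
As a hedged illustration (using a hypothetical prod' rather than touching mathlib's actual list.prod), a foldr-based definition is expected to give both equalities definitionally:

def prod' {α : Type*} [monoid α] (l : list α) : α := l.foldr (*) 1

-- both of these are expected to hold by rfl with the foldr-based definition
example {α : Type*} [monoid α] (t : α) : prod' [t] = t * 1 := rfl
example {α : Type*} [monoid α] (x : α) (L : list α) : prod' (x :: L) = x * prod' L := rfl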

data.nat.cast.cast is too general to be an instance

The module data.nat.cast exports nat.cast as a has_coe instance (nat.cast_coe) to any type which has has_zero, has_one, and has_add instances. The module is transitively imported by data.nat.basic, so there is a good chance that the instance will be in scope for any module which uses mathlib.

The instance is way too general, as any type which has these instances will get a coercion from nat. Furthermore, cast is very inefficient --- it is O(n) where n is the value of the nat.
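
To illustrate the linear cost, here is a hedged sketch of the shape of the definition (a simplified rendering, not mathlib's exact code):

-- each successor step adds one `+ 1`, so evaluating the cast of n takes n steps
def cast' {α : Type*} [has_zero α] [has_one α] [has_add α] : ℕ → α
| 0       := 0
| (n + 1) := cast' n + 1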

Possible solution: remove the has_coe instance and make it local where it is actually required.

Error in install instructions

Step #2 in the installation instructions states a falsehood.

https://github.com/leanprover-community/mathlib/blob/master/docs/elan.md

It says, "(This step can be skipped: VS Code will prompt you to install elan if it can't find a usable copy of Lean.)"

At least on Windows that's not true. If you crank up VS Code, load the Lean extension, and then try to use it, it just complains about a failure to start the server.

It appears to be necessary to run the Lean installer manually, in a terminal.

#find doesn't work very well

import tactic.find

def blah := true -- can't use #find straight after import

#find 0 < _ → 0 < _ → 0 < _ + _ -- no output

#check add_pos
-- add_pos : 0 < ?M_3 → 0 < ?M_4 → 0 < ?M_3 + ?M_4

Beginners struggle to find theorems in Lean and mathlib. Experts know the techniques for their IDE (e.g. guess the name, mash ctrl-space and esc in VS Code), but I have never found #find useful. If there are tips on how to use it effectively, or tweaks that would make it work better, that might make life a bit easier for beginners.
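
For contrast, a hedged data point: a query of the plain-equality shape below does seem to return results in at least some versions, which suggests the problem is specifically with matching under implications.

import tactic.find

#find _ + _ = _ + _ -- expected to list add_comm among others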

Quantifier syntax for filters

My plan is to replace most usages of filter.sets by the following two quantifiers:

import order.filter

protected def filter.eventually {α : Type*} (p : α → Prop) (f : filter α) : Prop :=
{a | p a} ∈ f.sets

protected def filter.frequently {α : Type*} (p : α → Prop) (f : filter α) : Prop :=
{a | ¬ p a} ∉ f.sets

notation `∀ᶠ ` binders ` in ` f `, ` r:(scoped p, filter.eventually p f) := r

notation `∃ᶠ ` binders ` in ` f `, ` r:(scoped p, filter.frequently p f) := r

section
open filter
variables {α : Type*} {f g : filter α} {s : set α} {p : α → Prop}

set_option pp.notation false
example : ∀ᶠ x in f, p x := _
example : ∃ᶠ x in f, p x := _
example : ∃ᶠ x in principal s, p x := _
example : ∃ᶠ x in f ⊔ g, p x := _
example : ∃ᶠ x in ⊤, p x := _
example : f.eventually = (λp:α → Prop, true) := _

end

∀ᶠ is written \forall\^f, ∃ᶠ is written \exists\^f (unfortunately one needs to write and delete a space in the middle, at least in VS Code...). Instead of ∀ᶠ x in f, p x I tried ∀ᶠ x ← f, p x, but this didn't work.

CC: @PatrickMassot @sgouezel

EDITED: changed argument order for eventually and frequently

`ring` tactic fails on some inputs

Writing this down so I don't forget...

The ring tactic has some problem with finding normal forms here. A minimized example:

import tactic.ring
local infix `^` := monoid.pow
example {α} [comm_ring α] (x : α) (m n : ℕ) :
  x^2 + x^2 + x = x^2 + (x^2 + x) := by ring

Duplicate definition of quotient groups

There seem to be two definitions of quotient groups in mathlib: one in group_theory.coset called left_cosets, with some lemmas about left_cosets, and one called quotient_group in group_theory.quotient_group. We should probably choose one and stick to it.

rcases and nested patterns

There seems to be an issue with rcases and nested patterns. Consider the following two examples:

theorem bad (e : ∃ a : unit, true ∧ true) : true :=
  by rcases e with ⟨a, ⟨p, q⟩⟩

theorem good (e : ∃ a : unit, true ∧ true) : true :=
  by rcases e with ⟨a, p, q⟩

Lean prints:

.../example.lean:1:5: error: tactic failed, there are unsolved goals
state:
a : unit,
e_h_right : true
⊢ true
.../example.lean:4:5: error: tactic failed, there are unsolved goals
state:
a : unit,
p q : true
⊢ true

As you can see, the good example provides the expected names for p and q, while the bad example does not.

The point of this issue is primarily to record the problem for posterity. Since the workaround is easy enough, there is no urgent need to fix it.

This issue came from a discussion on Gitter.

Tactic doc cleanup

The file tactic.md has grown a lot. This is good news, but we should really think about organizing it (or maybe even splitting it?) and hunting down coverage gaps. For instance, it seems tauto is not mentioned (I realized this when I wanted to add documentation for the classical version).

Better support for numerical equalities/inequalities in reals

Various people have written Lean code for me over the last few weeks to enable me to verify basic equalities and inequalities (by which I mean both less_than/greater_than and not_equal) in Lean's reals. It currently doesn't quite compile (I've added an axiom at the top as a temporary fix), but if I document its current state here then this might be of help to people. It feels a bit too ropey to be a pull request but Mario asked for some sort of coherent summary of what I wanted/needed and here it is.

import analysis.real

-- I need this because at the time of writing I believe this lemma is broken.
axiom of_rat_lt_of_rat {q₁ q₂ : ℚ} : of_rat q₁ < of_rat q₂ ↔ q₁ < q₂ 

-- Helpful simp lemmas for reals: thanks to Sebastian Ullrich
run_cmd mk_simp_attr `real_simps
attribute [real_simps] of_rat_zero of_rat_one of_rat_neg of_rat_add of_rat_sub of_rat_mul
attribute [real_simps] of_rat_inv of_rat_le_of_rat of_rat_lt_of_rat
@[real_simps] lemma of_rat_bit0 (a : ℚ) : bit0 (of_rat a) = of_rat (bit0 a) := of_rat_add
@[real_simps] lemma of_rat_bit1 (a : ℚ) : bit1 (of_rat a) = of_rat (bit1 a) :=
by simp [bit1, bit0, of_rat_add,of_rat_one]
@[real_simps] lemma of_rat_div {r₁ r₂ : ℚ} : of_rat r₁ / of_rat r₂ = of_rat (r₁ / r₂) :=
by simp [has_div.div, algebra.div] with real_simps

-- I don't understand this code; however it is the only way that I as
-- a muggle know how to access norm_num. Thanks to Mario Carneiro
namespace tactic
meta def eval_num_tac : tactic unit :=
do t ← target,
   (lhs, rhs) ← match_eq t,
   (new_lhs, pr1) ← norm_num lhs,
   (new_rhs, pr2) ← norm_num rhs,
   is_def_eq new_lhs new_rhs,
   `[exact eq.trans %%pr1 (eq.symm %%pr2)]
end tactic

-- Look what we can do now

example : (((3:real)/4)-12)<6 := by simp with real_simps;exact dec_trivial
example : (6:real) + 9 = 15 := by tactic.eval_num_tac
example : (2:real) * 2 + 3 = 7 := by tactic.eval_num_tac
example : (5:real) ≠ 8 := by simp with real_simps;exact dec_trivial
example : (6:real) < 10 := by simp with real_simps;exact dec_trivial 

Travis build - timeout mechanism broken

I believe the offending line is:

        - travis_long "timeout 2400 leanpkg test" | awk 'BEGIN{e=0;c=-1}c&&--c;/error/{if (!e) {c=30;e=1}};{if (!c) {exit 1}}'

@jcommelin Can you see why this would block everything on stdout and stderr?

Using stdlib as an external dependency

I'd like to use subset.antisymm from stdlib/data/set/ in my pet project.

I've specified stdlib as a dependency using the following leanpkg.toml

[package]
name = "picrin-lean"
version = "0.0.0"

[dependencies]
stdlib = {git = "https://github.com/leanprover/stdlib", rev = "6d9072821b60d6a4a7293c943c07fba65041f25e"}

and run leanpkg --configure. This step created a local directory _target with the desired dependency.

But I'm not sure how to import subset.antisymm in my project.

I couldn't find any documentation, and I've tried various combinations of import/open and subset.antisymm with various different prefixes, from both the VS Code extension and the command line, but without luck.

I can provide more detail, and a smaller example, but the specific file I'm talking about (along with leanpkg.toml) is here:

https://github.com/picrin/lean/blob/master/topology.lean#L53
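
For what it's worth, here is a hedged sketch of the kind of import that usually works once the dependency is configured, assuming the lemma lives in data/set/basic.lean under the set namespace (file paths and lemma names may have moved between versions):

import data.set.basic
open set

example {α : Type*} {s t : set α} (h₁ : s ⊆ t) (h₂ : t ⊆ s) : s = t :=
subset.antisymm h₁ h₂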

installation doesn't quite work as expected

I refer to the instructions here:

https://github.com/leanprover/mathlib/blob/master/docs/elan.md

These instructions say something a bit confusing, which appears almost contradictory:

  1. But if you run leanpkg build from inside my_playground, then it will compile only those files that are dependencies of mathlib group_theory/subgroup.lean.
  2. If you want to play more, it's better to compile all your dependencies once and for all. You can do this by going into my_playground and running leanpkg build.

Empirically, if you follow these instructions with an empty ./src directory, nothing happens. I think perhaps the second item should be closer to what's said here:

$ cd _target/deps/mathlib/
$ leanpkg build

(That's step 3 from https://github.com/ImperialCollegeLondon/xena-UROP-2018.)


Accordingly, I'd suggest the following rewording for that item:

If you want to play more, it's better to compile all your potential dependencies once and for all. You can do this by going into my_playground/_target/deps/mathlib/ and running leanpkg build.

squeeze_simp trace message not complete

In the following example

import order.complete_lattice

open lattice

lemma strange {α : Type*} [complete_lattice α] {s t: set α} :
   Inf s ⊔ Inf t = Inf ((λp : α × α, p.1 ⊔ p.2) '' (set.prod s t)) :=
begin
  apply le_antisymm,
  { squeeze_simp,
    sorry },
  { sorry }
end

the simp call suggested in squeeze_simp's trace message does not have the same effect as squeeze_simp itself.

Equivalence of Cauchy sequences

It would be useful to extend the definition of equivalence of Cauchy sequences to sequences that have not been proved to be Cauchy sequences, or to introduce a new definition:

def near (abv : β → α) [is_absolute_value abv]
(f g : ℕ → β) := ∀ ε > 0, ∃ i, ∀ j ≥ i, abv (f j - g j) < ε

It happens a lot that I want to prove something is a Cauchy sequence by proving it is equivalent to another. I am aware of of_near, but it's neater to introduce this as a definition rather than type out ∀ ε ... all the time. Introducing this definition would allow me to state theorems like the following in a much nicer way:

series_cauchy_prod
  {α β : Type*} [discrete_linear_ordered_field α] [ring β]
  {a b : ℕ → β} {abv : β → α} [is_absolute_value abv] :
  is_cau_seq abs (series (λ n, abv (a n))) → is_cau_seq abv (series b) → 
∀ ε : α, 0 < ε → ∃ i : ℕ, ∀ j ≥ i,
  abv (series a j * series b j - series (λ n,  series (λ m, a m * b (n - m)) n) j) < ε

It would also allow me to prove things like near abs (λ n, x) f → x = lim f or near abs f g → lim f = lim g, again without requiring a proof that the functions are Cauchy. I would also be able to prove that this relation is reflexive, symmetric and transitive, which would make a lot of my proofs much neater. I think it would be better to extend the existing definition than to introduce a new one.

Problem in analysis.real

import analysis.real fails with
mathlib/analysis/real.lean:198:32: error: invalid character, ' expected

I am using Lean compiled from the master branch, on Debian sid.

Reorganization

Outcome of the discussion at Lean Together 2019 (Friday afternoon)


analysis
├── instances
│   ├── complex.lean
│   ├── ennreal.lean
│   ├── nnreal.lean
│   ├── polynomial.lean
│   ├── real.lean

├── exponential.lean
├── limits.lean   ~>   specific_limits.lean

├── normed_space
│   ├── bounded_linear_maps.lean
│   └── basic.lean


measure_theory
├── borel_space.lean
├── integration.lean
├── lebesgue_measure.lean
├── measurable_space.lean
├── measure_space.lean
├── outer_measure.lean
├── probability_mass_function.lean


Idea (break up topology)

topology
├── basic.lean         /~> merge continuity and basic, split into `defs`, `basic`, `compact`, `connected` etc.
├── continuity.lean   /
├── continuous_map.lean  ~>   compact_open.lean
├── stone_cech.lean
├── uniform_space
│   ├── completion.lean
│   └── basic.lean    ~> `Cauchy` to completion,
├── metric_space
│   ├── basic.lean
│   └── emetric_space.lean
├── algebra
│   ├── infinite_sum.lean
│   ├── quotient.lean
│   ├── monoid.lean
│   ├── group.lean
│   ├── ring.lean
│   ├── field.lean
│   ├── order.lean

Proposal:

Create a `basic` directory:

basic/
├── logic/ (move schroeder_bernstein to set_theory)
├── data/
│   ├── nat.lean
│   ├── int.lean
│   ├── rat.lean
│   ├── set.lean
│   ├── list.lean
│   ├── multiset.lean
│   ├── finset.lean
├── algebra/
├── order/

Nightly distribution of mathlib appears to be broken

I'm following the instructions on the elan page: use leanpkg +nightly to create a project, then leanpkg add mathlib, then leanpkg build or lean --make. I get lots of errors from the library compilation process, such as this: ... /_target/deps/mathlib/src/meta/coinductive_predicates.lean:83:9: error: invalid definition, a declaration named 'tactic.mk_and_lst' has already been declared

refine_struct not working as advertised?

The example in comments for refine_struct suggests using refine_struct ({ .. } : semigroup α), but I can't get this to work.

Here's my MWE:

import tactic.interactive

variable (α : Type)
def foo : semigroup α := 
begin
  refine_struct ({ .. } : semigroup α),
end

This just reports a failure at the refine_struct. Instead, I expected it to give me two tagged goals.

Implement computable real numbers

Hi,

I'd like to have real numbers that can be used for in-kernel computations. I mean, it would be nice to have, e.g., a function pi.approx : ℚ₊ → ℚ with a proof |pi.approx r - pi| ≤ r.

The main issue is that this function cannot be implemented as approx : ℝ → ℚ₊ → ℚ. I can think of two (not mutually exclusive) approaches.

  1. Define new

    structure creal := (approx : ℚ₊ → ℚ) (sound : ∀ r₁ r₂, |approx r₁ - approx r₂| ≤ r₁ + r₂)

    with a (strong) setoid structure, then migrate some (or most) theorems to deal with this setoid.

  2. Define

    def approximates (x : ℝ) (f : ℚ₊ → ℚ) := ∀ r, |x - real.of_rat (f r)| ≤ r

    then prove lemmas like

    approximates x f → approximates y g → approximates (x + y) (λ r, f (r / 2) + g (r / 2))

TODO: unify `disjoint`

Introduce disjoint on has_mem and unify the list, multiset, and finset instances. The finset instance would be generalized by removing the requirement for decidable_eq.
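
Here is a hedged sketch of one possible shape (the name mem_disjoint is hypothetical, and the real design question is how the element type α would be inferred at use sites):

def mem_disjoint {α β : Type*} [has_mem α β] (s t : β) : Prop :=
∀ ⦃a : α⦄, a ∈ s → a ∈ t → false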

`ext` ignores types when matching

import tactic.ext data.set.basic data.stream.basic
example (s₀ s₁ : set ℕ) : s₀ = s₁ :=
by ext1; guard_target x ∈ s₀ ↔ x ∈ s₁
-- error : ⊢ stream.nth n s₀ = stream.nth n s₁

I think ext1 just goes through all extensionality lemmas and applies anything that typechecks. It should look at the stated type of the objects when determining which extensionality rule to apply.

bundling morphisms

Currently in src/algebra/module.lean we have this:

class is_linear_map (α : Type u) {β : Type v} {γ : Type w}
  [ring α] [add_comm_group β] [add_comm_group γ] [module α β] [module α γ]
  (f : β → γ) : Prop :=
(add  : ∀x y, f (x + y) = f x + f y)
(smul : ∀(c : α) x, f (c • x) = c • f x)

structure linear_map (α : Type u) (β : Type v) (γ : Type w)
  [ring α] [add_comm_group β] [add_comm_group γ] [module α β] [module α γ] :=
(to_fun : β → γ)
(add  : ∀x y, to_fun (x + y) = to_fun x + to_fun y)
(smul : ∀(c : α) x, to_fun (c • x) = c • to_fun x)

i.e. someone (possibly @kckennylau) bundled linear maps. My general impression from the chat is that people are in favour of bundling morphisms between algebraic structures. However, it seems that we have a lot of them: is_ring_hom, is_field_hom, is_semiring_hom, is_monoid_hom, is_add_group_hom, and so on. Bundling all of them at once will, I guess, be a major refactor, but it seems like there are advantages. Bundling them bit by bit -- is this even feasible? Kenny bundled is_field_hom in #717, but the PR got closed. I am opening this issue because I am slightly concerned that the pros and cons of bundling are being scattered randomly around the chat, and it would be good to have one focal point for the discussion.
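
For concreteness, here is a hedged sketch of what one bundled hom could look like (the name monoid_hom' and the exact fields are illustrative, not a proposal for the final API):

structure monoid_hom' (M N : Type*) [monoid M] [monoid N] :=
(to_fun : M → N)
(map_one : to_fun 1 = 1)
(map_mul : ∀ x y, to_fun (x * y) = to_fun x * to_fun y)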

Releases and update-mathlib

A few requests which I think would be very handy.

  1. It would be nice if we could use update-mathlib to update a clone when we have just cloned mathlib using git clone and have not added it as a dependency of a leanpkg project.
  2. A release for every build.
  3. This is not that important, but it might also be handy for update-mathlib to work on community branches, so people can collaborate on projects more easily.

lean-3.4.2 tags could be non-monotonic

There's currently nothing that really guarantees that Travis builds for pushes finish in the expected order, so the lean-3.4.2 tag might not always point to the (logically) latest commit which has passed Travis.

Off-hand, I'm not sure how to fix this easily.

We don't have Cauchy's integral formula

I think it should be noted that we don't have Cauchy's Integral Formula in mathlib. It's getting to the point where I feel like almost all of the standard definitions and theorems in the first and second year undergraduate pure mathematics courses at a typical UK university are either in mathlib already, or could relatively easily be put into mathlib -- apart from complex analysis, which still feels to me a million miles away. Furthermore, as far as I can see there has been no real progress on this and nobody is working on it (although people are beginning to work on real analysis). My head is full of interesting mathematics which I think should be fun to put into mathlib, e.g. group cohomology, perfectoid spaces, adeles of a global field, sheaf cohomology for schemes and so on, and there are a whole bunch of other suggestions at the MathOverflow question "Which mathematical definitions should be formalised in Lean?". But I think it's good to remember that we are currently a long way from a completely standard 2nd year undergraduate theorem whose importance in mathematics cannot be overstated. Every time I think "ooh, let's do a bunch of theorems about modular forms" I then quickly remember that modular forms are complex analytic objects, and without the basic tools in complex analysis such as Cauchy's integral formula, we're never going anywhere.

I understand that one can only learn a limited amount from formalisations of this theorem in other theorem provers, but it might be nice to at least collect some links to where this is done in other places. I feel like in the future this lack of complex analysis will become an impediment to me when I am preaching the gospel in university maths departments. I know we're working on analysis. I know we'll get there in the end. But progress is slow, and judging from Chris' comments on the 2nd year complex analysis course at Imperial, formalising some of the "picture proofs" given might be hard. In what generality should one formulate the result? Just circles, or more exotic closed curves which are piecewise smooth? Do we need the Jordan curve theorem first? These questions are difficult for me to answer. But until we have something which we can call Cauchy's integral formula, we cannot even formalise most of the questions on an exam which every mathematics student at my university has to pass, namely the compulsory 2nd year complex analysis course. This is a humbling reminder.

Protect the master and use the force

It seems we have a problem due to a force-push to master:

(screenshot: mathlib-force-push)

I assume this was not intentional. Accidents happen after all.

Since the force-push was from 1ed7ad1 to f623d34, is it possible to “fix” (at least on the GitHub remote) the problem by force-pushing in the reverse direction?

Can the appropriate person configure the master branch as protected in the GitHub repository settings to disallow force pushes?

Expand the surreals library

We currently have a definition of the surreal numbers in surreal.lean, but there is still more to be done here. The next steps would be to show that the arithmetic operations defined on pSurreal descend to the surreals and make the surreals into a field.

Linting tools

The goal is to sanitize the mathlib code base, but also the code base of any project depending on it.

Here is a short list of code smells that can be detected automatically:

  • unnecessary imports
  • unused parameters or assumptions
  • unused lemmas
  • overlapping type class instances
  • simp rules that may cause simp loops

These could be implemented by a combination of @digama0's olean parser and some Lean metaprogramming.
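
As a hedged illustration of the metaprogramming ingredient, here is a toy tactic that just walks the environment and counts declarations; real linters would inspect each declaration (its type, attributes and references) rather than merely counting:

open tactic

meta def count_decls : tactic unit :=
do env ← get_env,
   -- fold over every declaration in the environment, counting them
   let n := env.fold 0 (λ _ n, n + 1),
   trace ("declarations in the environment: " ++ to_string n)

run_cmd count_decls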

wlog gets slow

The following code takes ~30s to execute because of wlog:

import algebra.module
import data.real.basic
import data.set.intervals

variables {α : Type*}
  [add_comm_group α] [vector_space ℝ α]
  (A : set α) (B : set α) (x : α)

open set

def convex (A : set α) :=
∀ {x y : α} {a b : ℝ}, x ∈ A → y ∈ A → 0 ≤ a → 0 ≤ b → a + b = 1 →
a • x  + b • y ∈ A

lemma convex_Ioo (r s : ℝ): convex (Ioo r s) :=
begin
  intros x y a b hx hy ha hb hab,
  wlog h : x ≤ y using [x y a b, y x b a],
end

problem with `module`

There is a problem with module, due to the fact that a module is a module over a ring.

Here's a minimised example, due to Mario.

class ring' (α : Type*).

class module' (α : out_param $ Type*) (β : Type*) [out_param $ ring' α].

class normed_field' (α : Type*) extends ring' α.

class normed_space' (α : out_param $ Type*) (β : Type*) [out_param $ normed_field' α] extends module' α β.

instance pi.module' {I : Type*} {f : I → Type*}
 {α : out_param Type*} [out_param $ ring' α] [∀ i, module' α $ f i] : module' α (Π i : I, f i) :=
sorry

instance loop (α) [ring' α] : ring' (option α) := sorry

set_option trace.class_instances true

instance fintype.normed_space' {α} [normed_field' α]
  {ι : Type*} {E : ι → Type*} [∀i, normed_space' α (E i)] :
  normed_space' α (Πi, E i) :=
⟨_, _⟩
[class_instances]  class-instance resolution trace
[class_instances] (0) ?x_2 : @module' ?x_0 (Π (i : ι), E i) ?x_1 := @pi.module' ?x_3 ?x_4 ?x_5 ?x_6 ?x_7
[class_instances] (1) ?x_6 : out_param (ring' ?x_5) := @loop ?x_8 ?x_9
[class_instances] (2) ?x_9 : ring' ?x_8 := @loop ?x_10 ?x_11
[class_instances] (3) ?x_11 : ring' ?x_10 := @loop ?x_12 ?x_13
[class_instances] (4) ?x_13 : ring' ?x_12 := @loop ?x_14 ?x_15
...

Sebastian suggested

instance pi.module' {I : Type*} {f : I → Type*}
 {α : Type*} {r : ring' α} [∀ i, module' α (f i)] : module' α (Π i : I, f i) :=

[note {r : ring' α} ]

Some chat about this is here:

https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/tc.20loop.20again

I am making this an issue despite not understanding most of the above, because I just want a place where we can collect our thoughts. I am unclear about whether anything can be done without changing core Lean. I am unclear about whether anything can be done even if we change core Lean. I am unclear about how serious the problem is. Perhaps others would like to jot down some thoughts here; currently what is happening is that there is occasional Zulip chat which gets lost, and this issue is an attempt to have a central point where experts can explain the problem and what can and can't be done about it.

Definition of a scheme

I am interested to know how Lean will cope with the sort of complex mathematical objects which many mathematicians use. My impression (which might be incorrect because of lack of experience) is that whilst there are nowadays some pretty complex proofs typed up in computer proof checkers, they are typically about objects which one could call "basic" in the sense that a first year undergraduate might know about them (finite groups, finite graphs spring to mind). My motivation for seeing complex mathematical objects in Lean is simply that the Formal Abstracts project is going to need definitions of many of the standard complex mathematical objects which mathematicians use, and whilst the project can of course just manage in the short term with axioms and constants, at some point the objects do need to be defined.

Here is an explanation (with coherent and complete URLs containing more details) of what a scheme is. Schemes are the fundamental objects of study in algebraic geometry. They were discovered (in their current form) by Grothendieck in the late 1950s and early 1960s and they revolutionised the theory of algebraic geometry. This issue is a roadmap for implementing the definition of schemes in Lean. For more details I will constantly refer to the stacks project, which is a gigantic (6000 pdf pages and counting, but available in far more friendly form on the web) piece of work overseen by Johan de Jong, which one can nowadays describe as an attempt to be "a readable Bourbaki for algebraic geometry". The work is extremely thorough, and the thoroughness of the work should be extremely helpful for anyone wishing to get involved in implementing algebro-geometric objects in Lean. The foundations of the work are in ZFC set theory, however my understanding is that this should be embeddable in Lean and hence cause no problems.

By "ring" throughout this issue I mean a commutative ring with a 1, and all morphisms of rings send 1 to 1.

Here are some definitions with links to the stacks project, and within the stacks project one should hopefully find all other mathematical definitions one needs. Note that the stacks project is not absolutely overloaded with hyperlinks like Wikipedia, however the search facility is good (in the sense that it seems to work for me, although admittedly I know the basics of the area already).

A sheaf of rings on a topological space is a functor from the category of open sets on the space to the category of rings, which satisfies the sheaf axiom (a glueing axiom). Here are definitions of a presheaf (of sets), a presheaf of rings, a sheaf (of sets) and a sheaf of rings.
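
To give a feel for how this first definition might look, here is a hedged, dependency-free sketch of a presheaf of rings, with the open sets modelled abstractly as a type carrying an inclusion relation; all names and fields are illustrative, not a proposed mathlib design (in particular the sheaf/glueing axiom is omitted):

structure presheaf_of_rings (opens : Type*) (le : opens → opens → Prop) :=
(F        : opens → Type*)
(ring_str : ∀ U, comm_ring (F U))
(res      : ∀ U V, le V U → F U → F V)
(res_id   : ∀ U (h : le U U) (x : F U), res U U h x = x)
(res_comp : ∀ U V W (hVU : le V U) (hWV : le W V) (hWU : le W U) (x : F U),
              res V W hWV (res U V hVU x) = res U W hWU x)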

A locally ringed space is a topological space equipped with a sheaf of rings, such that all the stalks are local rings (a local ring is a ring with a unique maximal ideal).

An affine scheme is a certain kind of locally ringed space which is built from a ring. It is a fundamental object in algebraic geometry. The points of the underlying topological space are the prime ideals of the ring, and the sheaf of rings is built via localizing the original ring.

A scheme is a locally ringed space which has a cover by affine schemes.

These four definitions above (a sheaf of rings, a locally ringed space, an affine scheme, a scheme) seem to me to be four natural steps which one takes when defining a scheme, given what we currently have in Lean (topological spaces, and commutative rings with 1). However there are lots of other intermediate definitions (local rings, sheaves of sets, stalks of a sheaf, prime ideals, the Zariski topology, localisation of a ring at a multiplicative set...) which will also be needed. But I see no obstruction to making these definitions. If anyone is unclear as to what any precise mathematical definitions are involved here, and they can't find answers in the stacks project, then I would be happy to try to help.

I should finish by saying that I am not posting this issue because I desperately need to see schemes in Lean. I am posting this issue as basically a challenge to see if Lean can handle such a complex definition. Note that there are many other definitions in geometry which have a similar flavour (e.g. a smooth manifold is something covered by open balls, a rigid space is something covered by affinoid rigid spaces, a perfectoid space is something covered by affinoid perfectoids, and so on and so on) and it seems to me that schemes are hence both a nice basic test case for other objects defined by "glueing", and also something whose definition is nowadays very well documented on the web.

following docs/elan.md "Scenario 1" instructions gives you an old mathlib

As indicated in the title, if you follow the instructions at https://github.com/leanprover/mathlib/blob/master/docs/elan.md under "Scenario 1: Start a new package", replacing nightly-2018-04-06 by 3.4.1 according to master mathlib's current leanpkg.toml file, the result is that you get a checkout of the lean-3.4.1 branch of mathlib, which has not been updated since June 20. I described the situation in more detail on Zulip. This is potentially confusing to new users and even to me. I can easily diagnose what happened and how to fix it, but for the sake of new users we should make it impossible to end up in this situation in the first place.

I infer from elan.md (and also dimly recall) that there was a time when mathlib master built against a nightly version of Lean. As that is no longer the case, can we simply delete the lean-3.4.1 branch of mathlib? Then leanpkg should automatically choose the master branch, which is the desired behavior, right?

Organize some directories separated by definitions and theorems

Summary

I propose a file organization scheme for some mathlib topic directories as follows:

  1. Put most or all of the definitions* for a particular topic (e.g. list) into one file under some directory (e.g. data/list).
  2. Put the theorems* for that topic into one or more separate files under the same directory (e.g. theorems.lean for an exhaustive collection or mem.lean, union.lean, disjoint.lean, etc. for separate definition-specific collections).

(* By “definitions,” I mean anything written with def, inductive, structure, class, etc. By “theorems,” I mean anything written with theorem, lemma, etc.)

In particular, I'm looking to start with the organization of data/list/basic.lean, data/finset.lean, and data/multiset.lean, which have monolithic files that are growing quite large. These are also the topics I'm most familiar with. Other topics may be organized differently.

Motivation

Consider the current file organization of the data/list directory (ignoring default.lean which isn't pertinent to the discussion):

  1. basic.lean (3362 lines) has a mix of definitions (37) and theorems (418).
  2. perm.lean (756 lines) has a definition (1) and theorems (91).
  3. sort.lean (279 lines) has definitions (3) and theorems (17).

I believe the interspersing of definitions and theorems — in particular, I'm looking at basic.lean — is a barrier to browsing, discovering, and understanding the various components to using lists and proving things with them.

The current organization appears to be oriented toward keeping a definition with a group of theorems involving that definition. However, that means to find either (a) all the definitions for lists or (b) a particular definition, one must scroll or search through the file.

When I started learning Lean and whenever I have started using a new topic in mathlib, I've always first searched for the definitions (names, notation, and the definitions themselves), so that I know what tools exist that I can use to work with the topic. When I've started using a definition and writing proofs about it, I've secondly searched for theorems related to one or more definitions. (This may be my own personal approach and not shared by others. In either case, it would be useful to know what other people do.)

One approach to subdividing a file such as data/list/basic.lean would be to restructure it into separate files, in each of which you find one or two definitions and a collection of theorems. I infer this to be the motivation for making data/list/perm.lean a separate file. This approach presents its own problems: the theorems often (a) involve multiple definitions and (b) use combinations of other theorems. The approach also does not help when trying to view all definitions at once, which is quite useful. Consequently, I don't think this is the best approach.

Proposal

Therefore, I propose to reorganize topics into at least two files under a topic directory: one for definitions and one for theorems. It may be desired to use more than two files — e.g. to split up the theorems — but that is an independent issue from this proposal. The essence of the proposal is to put the definitions together and to separate them from the theorems.

Benefits

As a consequence of this proposal, we will have:

  1. Definitions that are easier to find by browsing (primary benefit)
  2. Theorems that can refer to multiple definitions without concern for the declaration order of definitions and theorems, because the definitions will always be declared before the theorems (secondary benefit)

Issues

  1. Definitions are no longer next to their associated theorems. This might be the main complaint, but I have two responses:
    1. You can now open up the definitions file and the theorems file in an editor and look at them side-by-side.
    2. Since many theorems often refer to multiple definitions, it will always be the case that some theorems will not be near associated definitions.
  2. Why not just put the definitions at the top of a file and the theorems after that? I think it's better to put the definitions in a separate file to reduce the file size and the number of lines you have to scroll to browse the file.
  3. Splitting theorems into multiple files can create issues about which file imports which. However, this can be avoided by not splitting the theorems into separate files. This proposal is only concerned about splitting the definitions from the theorems, not the organization of the theorems themselves.
  4. You might say this issue could be resolved by something in Emacs or a documentation generator. I am not aware of the existence of such solutions, but even if they did exist, I think good code organization will always be useful: there are always people reading the code.

Exceptions

There may be exceptions to the rule. For example, you may have a def that is really only there to be used in the theorems. You may have a topic that has so few lines of code that it may seem wasteful to split it into separate files. (On the other hand, it is advantageous to have a consistent directory structure across topics.) These can be discussed, of course, but I think the number of exceptions is generally smaller than the number of definitions that fit the rule.

Details

There are some details to be worked out. For example, what names do the files get? There is also the matter of seeing a concrete example. I'm willing to do some refactoring myself — to data/list/basic.lean, data/multiset.lean, and data/finset.lean — if this proposal is approved.

Discussion

parsing slow for multiset

lean -M 20000 --make _target/deps/mathlib slows down a lot when it encounters this line,
printing the message

.../_target/deps/mathlib/src/data/multiset.lean: parsing at line 275

It does ultimately get past that, but it looks as though it has stalled and that does not instill confidence.

transporting definitions and theorems along isomorphisms.

There's a construction called Spec which eats a commutative ring R and spits out a topological space Spec R. To a mathematician it is trivially true that if we have an isomorphism f : R -> S then we get an induced isomorphism Spec R -> Spec S. In Lean one would formalise this as follows: if we have an equiv between R and S such that both morphisms are ring homomorphisms, then we get an induced equiv between Spec R and Spec S such that both maps are continuous.

Currently in Lean this seems to be hard to do. I believe Simon Hudon had some ideas about this in the (huge) thread

https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/.22canonically.22.20identifying.20two.20types

Example quote from April 27 -- I asked how to prove in Lean that if A->B->C was an exact sequence of abelian groups and we had isomorphisms A=A' and B=B' and C=C' then the induced exact sequence A'->B'->C' was exact, and Kenny wrote a 70 line proof. But in my mind the proof should be rw [iA,iB,iC] with iA the isomorphism A=A' etc. We could talk about homotopy type theory here, but that is not relevant to this issue.
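
To make the flavour of the boilerplate concrete, here is a hedged sketch of transporting a single piece of structure (a multiplication) along an equiv by hand; this is roughly the kind of step such a tactic would have to generate for every field of every structure involved (the import path is from memory and may differ between mathlib versions):

import data.equiv.basic

def transport_mul {α β : Type*} (e : α ≃ β) [has_mul α] : has_mul β :=
⟨λ x y, e (e.symm x * e.symm y)⟩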

My understanding of that thread was that Simon started talking about a tactic which would prove things like this, and then Scott got involved and suggested things like @[derive transportable] and this:

https://leanprover.zulipchat.com/#narrow/stream/113488-general/subject/.22canonically.22.20identifying.20two.20types/near/125774155

...but then Mario pointed out subtleties and Johannes later on (perhaps in another thread) suggested that perhaps transfer(?) could already do a lot of this, and then I think the momentum died down.

This issue is to put on record that there seems to be a hole in Lean of the "easy in maths" form, and for someone like me who markets Lean to mathematicians, these holes are embarrassing.

docs: issue with tactic_writing.md

I see this error:

don’t know how to synthesize placeholder
context:
a b c : ℤ,
hyp : a = b
⊢ Type ?

The code below, extracted from the file tactic_writing.md, is designed to reproduce the error. Other examples from the file work fine, or else have some narration explaining why they don't work.

open tactic.interactive («have»)
open tactic (get_local infer_type)
open interactive (parse)
open lean.parser (ident)
open lean.parser (tk)
open interactive (loc.ns)
open interactive.types (texpr location)

meta def mul_left (q : parse texpr) : parse location → tactic unit
| (loc.ns [some h]) := do
   e ← tactic.i_to_expr q,
   H ← get_local h,
   `(%%l = %%r) ← infer_type H,
   «have» h ``(%%e*%%l = %%e*%%r)
            ``(congr_arg (λ x, %%e*x) %%H),
            tactic.clear H
| _ := tactic.fail "mul_left takes exactly one location"

-- it gives me an error, why?
example (a b c : ℤ) (hyp : a = b) : c*a = c*b :=
begin
  mul_left c at hyp,
  exact hyp
end

Naming convention for binary theorems with left and right

I first posted about this on Zulip, but I thought it would be useful to put it here before the issue gets lost.

Two different naming schemes have for binary definitions in which two theorems are similar except for the position of a key element

  1. A “positional” naming scheme uses the position of two words w.r.t. each other to indicate which of the two variants is referenced (examples taken from data/finset.lean and simplified):

    theorem insert_subset {a : α} {s t} : insert a s ⊆ t ↔ a ∈ t ∧ s ⊆ t
    theorem subset_insert (a : α) (s) : s ⊆ insert a s
  2. A “sided” naming scheme uses “left” and “right” to distinguish the two variants:

    theorem mem_union_left {a : α} {s₁} (s₂) (h : a ∈ s₁) : a ∈ s₁ ∪ s₂
    theorem mem_union_right {a : α} {s₂} (s₁) (h : a ∈ s₂) : a ∈ s₁ ∪ s₂

In the examples above, it is arguable that the two pairs are different enough to deserve different naming schemes. In the first, the insert is an argument to the subset; in the second, the h parameter refers to a side of the union. Also, one might say that positional names for the second pair do not make sense. This may indeed be how the naming schemes came about. However, let me first compare the two alternatives in the abstract.

Positional

  • Pros:
    • Shorter name
  • Cons:
    • More difficult to determine meaning of name: requires thinking about existence of alternative.

Sided

  • Pros:
    • Less difficult to determine meaning of name
    • Can better indicate bracketing by order of names
  • Cons:
    • Longer name

While I accept that shorter names are attractive, in this case, I would argue that using a sided naming scheme is worth the longer name. I believe the benefit shows up even more in slightly more interesting cases such as the following:

theorem erase_insert_subset (a : α) (s) : erase (insert a s) a ⊆ s
theorem insert_erase_subset (a : α) (s) : s ⊆ insert a (erase s a)

which I think would be renamed to the following using the sided naming scheme:

theorem subset_erase_insert_left (a : α) (s) : erase (insert a s) a ⊆ s
theorem subset_insert_erase_right (a : α) (s) : s ⊆ insert a (erase s a)

With the latter names, it is clearer that the name refers to a subset with either erase (insert a s) a or insert a (erase s a) as an argument to the subset.

Addendum

One might consider the growth of names to be quite unfortunate, and that is understandable. There are means we might consider for reducing that growth. Since _left and _right are such common suffixes, we could instead use abbreviations such as _l and _r. My only concern with _l is that it may not be easily distinguishable from _1 in some fonts. To resolve that, we might consider _L and _R, which are, I think, unambiguous.

Resolution

The resolution of this issue could be as simple as updating docs/naming.md to clarify when to use what naming scheme in the future. Or, alternatively, the issue could be resolved by renaming a number of theorems to match whatever scheme is decided.
