author     Matthew Sotoudeh <matthew@masot.net>    2023-01-04 22:37:10 -0800
committer  Matthew Sotoudeh <matthew@masot.net>    2023-01-04 22:37:10 -0800
commit     8f6cde5e0dbfd58c40b4944b6c5622979bdad6e0 (patch)
tree       d438f5233599697beae90af066dd97796d12dc71
Initial dump
-rw-r--r--   .gitignore                                    2
-rw-r--r--   README.md                                   190
-rw-r--r--   checker.py                                   58
-rw-r--r--   claim.py                                     57
-rw-r--r--   core_pseudo_claims.py                        77
-rw-r--r--   examples/group.z3fc                          51
-rw-r--r--   examples/readme.z3fc                         71
-rw-r--r--   main.py                                      13
-rw-r--r--   parsing.py                                  179
-rw-r--r--   plugins/real_solver_plugins.py              131
-rw-r--r--   plugins/zfc_plugins.py                      129
-rw-r--r--   solver.py                                    75
-rw-r--r--   vim_syntax_highlighting/ftdetect/z3fc.vim     1
-rw-r--r--   vim_syntax_highlighting/syntax/z3fc.vim      31
14 files changed, 1065 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..1b63297
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+__pycache__
+.*.sw*
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..1ec24e4
--- /dev/null
+++ b/README.md
@@ -0,0 +1,190 @@
+# z3fc
+A simple proof assistant, thought of more as a front-end to SMT solvers.
+
+Basic design:
+- FOL-based, using the Vampire theorem prover to check most implications.
+- Mizar-style batch processing: the proof script contains an explicit sequence
+ of claims that are supposed to imply each other, not a sequence of tactics
+ that are manipulating some hidden proof state.
+- Heavy focus on modifying the proof assistant itself: axiom schemas,
+ macros, special syntax, predicates, functions, etc. have to be defined in
+  Python, at the proof assistant level (see the sketch below).
+
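+For example, `main.py` wires everything together roughly as below. The extra
+`divides` predicate is a hypothetical addition, included only to show how a new
+symbol would be introduced:
+```
+from checker import Checker
+from plugins.zfc_plugins import register_zfc_plugins
+from plugins.real_solver_plugins import register_real_solver_plugins
+
+checker = Checker()
+# Prefix symbols: parsed as P(x), successor(x, y), ...
+checker.register_function_parsers(["f", "g"])
+checker.register_predicate_parsers(["P", "Q", "successor"])
+# Symbols that may also appear infix: x + y, x <= y, x in S, ...
+checker.register_function_parsers(["+", "-", "*", "/"], infix_too=True)
+checker.register_predicate_parsers([">", "<", "<=", ">=", "=", "in", "~"], infix_too=True)
+checker.register_predicate_parsers(["divides"], infix_too=True)  # hypothetical
+register_zfc_plugins(checker)           # {FilterSet}, {FinSet}, set notation
+register_real_solver_plugins(checker)   # the Reals constant and {SMTReal}
+checker.check_file("examples/readme.z3fc")
+```
+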
+## Running Example Script
+Install `pysmt`, Vampire, and CVC5. Then:
+```
+$ python3 main.py examples/readme.z3fc
+```
+You can also add z3fc syntax highlighting to vim like so:
+```
+$ mkdir -p ~/.vim/ftdetect
+$ mkdir -p ~/.vim/syntax
+$ cp vim_syntax_highlighting/ftdetect/* ~/.vim/ftdetect
+$ cp vim_syntax_highlighting/syntax/* ~/.vim/syntax
+```
+
+## Core Logic
+The core idea is that a proof script is a sequence of claims (FOL expressions),
+where later claims can mark dependence on earlier ones:
+```
+[Claim 1]
+ forall x. P(x);
+
+[Claim 2]
+ forall x. forall y.
+ P(x) and P(y)
+by [Claim 1];
+```
+Claims consist of a name, a FOL expression, and (optionally) justification.
+The checker iterates over these claims. When it sees the justification
+`by [Claim 1]` for `[Claim 2]`, it attempts to use the automated theorem prover
+Vampire to prove `[Claim 2]` from `[Claim 1]`.
+
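+Concretely, `claim.py` conjoins the justifiers' expressions and asks Vampire
+(via `solver.check_sat`) whether the negated implication is satisfiable; the
+claim is accepted exactly when it is not. A minimal sketch in the internal
+list-based expression format (the variable ids 0, 1, 2 are made up here; the
+`Checker` normally allocates them):
+```
+import solver  # needs pysmt and a `vampire` binary on the PATH
+
+def P(v):  # P applied to one argument, in the internal expression format
+    return ["apply_predicate", "P", [v]]
+
+claim_1 = ["forall", ["varid", 0], P(["varid", 0])]
+claim_2 = ["forall", ["varid", 1], ["forall", ["varid", 2],
+           ["and", P(["varid", 1]), P(["varid", 2])]]]
+
+premises = ["and", ["true"], claim_1]
+# check_sat returns False when the negation is unsatisfiable, i.e. when the
+# premises really do imply the claim.
+assert not solver.check_sat(["not", ["implies", premises, claim_2]])
+```
+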
+This core idea is simple, but Vampire is often not strong enough to prove an
+entire theorem in one step (in a reasonable amount of time). So the approach is
+to use Vampire for mid-sized logical "jumps" in the proof, and a natural
+deduction-like system for the overall structure (similar to the use of
+Sledgehammer-style tactics in other proof assistants).
+
+There are a few more basic operations that enable this.
+
+First, the `{Taut}` pseudo-claim can be used to introduce tautological claims
+without the system interpreting them as axioms:
+```
+[LEM]
+ forall x. P(x) or (not P(x))
+by {Taut};
+```
+
+Second, if you prove some claim directly from an earlier claim, you can always
+introduce the implication of the two claims (which now no longer depends on the
+earlier claim):
+```
+[Claim 1]
+ forall x. P(x);
+
+[Claim 2]
+ forall x. forall y. P(x) and P(y)
+by [Claim 1];
+
+[Claim 3]
+ (forall x. P(x))
+ implies
+ (forall x. forall y. P(x) and P(y))
+by {Implication}, [Claim 2], [Claim 1];
+```
+As shown, we invoke this implication rule by including the "fake" claim
+`{Implication}` in our justification list, along with two other claims. One of
+those two claims must have been used (directly or transitively) to justify the
+other; it becomes the antecedent of the introduced implication.
+
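+In terms of the implementation (see `claim.py` and `core_pseudo_claims.py`
+below), each claim records which earlier claims were used to justify it, and
+`{Implication}` discharges that dependence. A small sketch, assuming `checker`
+is a `Checker` that has already processed the three claims above:
+```
+c1, c2, c3 = (checker.scope[l] for l in ("Claim 1", "Claim 2", "Claim 3"))
+assert c2.has_ancestor(c1)      # [Claim 1] was used to justify [Claim 2] ...
+assert not c3.has_ancestor(c1)  # ... but [Claim 3] no longer depends on it.
+```
+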
+We can also introduce arbitrary constants:
+```
+take any x;
+```
+and a constant can be introduced together with a certain property, provided
+some prior claim guarantees that such an element exists:
+```
+[exists successor]
+ forall x.
+ exists y.
+ successor(x, y);
+
+take any 0;
+
+take 1 s.t.
+ [1 successor 0]
+ successor(0, 1)
+ by [exists successor];
+```
+The above script adds an axiom that every element has a successor, then
+introduces the constant `0`, then instantiates the successor axiom to produce a
+new constant `1` that is the successor of `0` (this fact can be referred to
+later as `[1 successor 0]`).
+
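+Under the hood (see the `constant` branch of `checker.py`), a `take ... s.t.`
+statement allocates a fresh constant, records the named property about it, and
+asks Vampire to prove the corresponding existential from the listed justifiers.
+A rough sketch of the query for the script above, again assuming `checker` has
+already processed the earlier statements:
+```
+from claim import Claim
+
+# What must be provable: [exists successor]  implies  exists y. successor(0, y)
+existential = ["exists", "1",
+               ["apply_predicate", "successor", [["var", "0"], ["var", "1"]]]]
+Claim(existential, checker).try_justify([checker.scope["exists successor"]])
+```
+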
+Finally, any variable can be universally quantified out so long as the claim does
+not rely on any assumptions about that variable, using the `{Generalize}`
+pseudo-claim:
+```
+[<= relates all]
+ forall x. forall y.
+ (x <= y) or (y <= x);
+
+take x;
+[x <= x]
+ (x <= x) or (x <= x)
+by [<= relates all];
+
+[reflexivity]
+ (x <= x)
+by {Generalize}, [x <= x];
+```
+(Note that existential quantification can be introduced by Vampire directly.)
+
+## Real Arithmetic Extension
+There is an example extension that allows proving facts about real numbers by
+calling out to an SMT solver (CVC5). It is invoked with the `{SMTReal}`
+pseudo-claim:
+
+```
+take any x s.t. [X is Real] x in Reals;
+take any y s.t. [Y is Real] y in Reals;
+
+[Test Reals]
+ (x > y) implies ((x + x) > (y + y))
+by {SMTReal}, [X is Real], [Y is Real];
+```
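+
+The `{SMTReal}` pseudo-claim is defined in `plugins/real_solver_plugins.py`. A
+pseudo-claim of this kind is simply a `Claim` subclass registered by name in
+`checker.scope`; whatever its `to_expr(justifiers, claim)` returns is conjoined
+into the premises handed to the prover. A minimal sketch of an (entirely
+hypothetical) pseudo-claim that injects a commutativity axiom:
+```
+from claim import Claim
+
+class CommutativityClaim(Claim):
+    """Hypothetical: {Commutes} adds `forall a. forall b. (a + b) = (b + a)`
+    as an extra premise for whatever claim is being justified."""
+    def __init__(self, checker):
+        self.checker = checker
+        self.expression = None
+        self.assumptions = []
+
+    def to_expr(self, justifiers, claim):
+        a = ["varid", self.checker.count()]
+        b = ["varid", self.checker.count()]
+        eq = ["apply_predicate", "=",
+              [["apply_function", "+", [a, b]],
+               ["apply_function", "+", [b, a]]]]
+        return ["forall", a, ["forall", b, eq]]
+
+def register_commutativity_plugin(checker):
+    checker.scope["Commutes"] = CommutativityClaim(checker)  # used as {Commutes}
+```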
+
+## Set Theory Schemata, Notation
+One notably missing feature of the core syntax described above is any support
+for axiom schemas: roughly, universal quantification over predicates. This is
+essential to ZFC, e.g., the filtering (separation) axiom schema says _for any
+predicate P_ and set `S`, there exists a set `F` consisting of exactly those
+elements of `S` satisfying `P`.
+
+In `z3fc`, we build such axiom schemas directly into the Python side. This
+is done in `plugins/zfc_plugins.py`. In particular, one can use reasonable filtering
+notation of the form `{x in BigSet | P(x)}`:
+```
+take Naturals;
+take 0 s.t. [0 in Naturals] 0 in Naturals;
+take 1 s.t. [0 < 1] 0 < 1;
+[negatives are less than 1]
+ 0 in {x in Naturals | x < 1}
+by [0 in Naturals], [0 < 1], {FilterSet};
+```
+At parsing time, the entire form `{x in Naturals | x < 1}` is replaced by a
+single function call of the form `_filter_set_0(1)`. Then including the
+`{FilterSet}` pseudo-claim adds the assumption
+```
+forall x. forall a.
+  (x in _filter_set_0(a))
+  iff
+  ((x in Naturals) and (x < a))
+```
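+
+The notation itself is handled by a parser plugin: `register_zfc_plugins`
+appends parse functions to `checker.parsers`, and each parse function either
+returns an expression or raises `ParseError` to signal "not my syntax". A
+hypothetical plugin following the same pattern, parsing `|x|` as `abs(x)`:
+```
+from parsing import parse_only_when
+
+# Hypothetical notation: treat `|x|` as the function application abs(x).
+def parse_abs(parse, lexemes):
+    parse_only_when(len(lexemes) >= 3
+                    and lexemes[0] == "|" and lexemes[-1] == "|")
+    return ["apply_function", "abs", [parse(lexemes[1:-1])]]
+
+def register_abs_plugin(checker):
+    checker.parsers.append(parse_abs)
+```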
+
+Similarly, finite sets can be constructed on-the-fly with finite set notation:
+```
+[check fin sets]
+ forall x. forall y. forall z.
+ (z in {x, y}) iff ((z = x) or (z = y))
+ by [Taut], {FinSet};
+```
+Here, the `{x, y}` is replaced at parse time with `_fin_set_2(x, y)`, and
+the `{FinSet}` pseudo-claim adds essentially the `[check fin sets]` assumption
+above.
+
+## Issues/Gotchas
+- You'll need `pysmt` along with a `vampire` binary on your path for all
+ scripts, and a `cvc5` binary for anything using the `{SMTReal}` pseudo-claim.
+- There is no guaranteed parsing precedence. You should use parentheses for
+ anything that could possibly be ambiguous. Otherwise you will get weird
+ errors.
+- The lexing is uber-simple; no symbols other than alphanumerics longer than
+  one character are allowed.
+- The `{SMTReal}` pseudo-claim is an extremely rough POC that basically only
+ works for the example in the README.
+- I'm pretty sure the core logic is inconsistent, but I haven't yet constructed
+  an actual derivation of a contradiction. If you find one, I would be curious
+  to hear about it.
+ The main concern is the `{Generalize}` pseudo-claim and how it determines
+ which variables we can generalize (maybe we need to do better tracking of
+ assumption discharging).
diff --git a/checker.py b/checker.py
new file mode 100644
index 0000000..1780790
--- /dev/null
+++ b/checker.py
@@ -0,0 +1,58 @@
+from parsing import *
+from claim import Claim
+from core_pseudo_claims import *
+
+class Checker:
+ def __init__(self):
+ self.uid = 0
+ self.parsers = DEFAULT_PARSERS
+ self.scope = dict({
+ "Taut": TautClaim(),
+ "Generalize": GeneralizeClaim(self),
+ "Implication": ImplicationRule(),
+ })
+ self.dependent_vars = set()
+
+ def count(self):
+ self.uid += 1
+ return self.uid - 1
+
+ def register_function_parsers(self, names, infix_too=False):
+ for name in names:
+ self.parsers.append(prefix_parser("apply_function", name))
+ if infix_too:
+ self.parsers.append(infix_parser("apply_function", name))
+
+ def register_predicate_parsers(self, names, infix_too=False):
+ for name in names:
+ self.parsers.append(prefix_parser("apply_predicate", name))
+ if infix_too:
+ self.parsers.append(infix_parser("apply_predicate", name))
+
+ def check_file(self, file_name):
+ lexemes = lex(open(file_name, "r").read())
+ parser = Parser(self.parsers)
+ statements = parser.parse(lexemes)
+ for statement in statements:
+ if statement[0] == "claim":
+ label, expr, justifiers = statement[1:]
+ self.scope[label] = Claim(expr, self, label=label)
+ if justifiers:
+ print("Checking claim", f"[{label}]", "...")
+ justifiers = [self.scope[j] for j in justifiers]
+ self.scope[label].try_justify(justifiers)
+ elif statement[0] == "constant":
+ name, claim = statement[1:]
+ self.scope[name] = ["varid", self.count()]
+ if claim:
+ self.dependent_vars.add(self.scope[name][1])
+ label, expr, justifiers = claim[1:]
+ self.scope[label] = Claim(expr, self, label=label)
+ if justifiers:
+ claim = Claim(["exists", name, expr], self)
+ justifiers = [self.scope[j] for j in justifiers]
+ claim.try_justify(justifiers)
+ self.scope[label].assumptions = claim.assumptions
+ else:
+ print(statement)
+ raise NotImplementedError
diff --git a/claim.py b/claim.py
new file mode 100644
index 0000000..6ecfd8d
--- /dev/null
+++ b/claim.py
@@ -0,0 +1,57 @@
+import solver
+
+class Claim:
+ def __init__(self, raw_expr, checker, label=None):
+ raw_scope = checker.scope
+ self.checker = checker
+ self.label = label or f"<anonymous claim {id(self)}>"
+ def visit(expr, scope):
+ if not isinstance(expr, list):
+ return expr
+ elif expr[0] == "var":
+ return scope[expr[1]]
+ elif expr[0] in ("forall", "exists"):
+ scope = scope.copy()
+ scope[expr[1]] = ["varid", checker.count()]
+ return [expr[0], scope[expr[1]],
+ visit(expr[2], scope)]
+ else:
+ return [visit(e, scope) for e in expr]
+ self.expression = visit(raw_expr, raw_scope)
+ self.assumptions = None
+
+ def to_expr(self, justifiers=None, justifying=None):
+ return self.expression
+
+ def try_justify(self, justifiers):
+ judgment_rules = [justifier for justifier in justifiers
+ if isinstance(justifier, JudgmentRule)]
+ if judgment_rules:
+ assert len(judgment_rules) == 1
+ return judgment_rules[0].try_justify(self,
+ [j for j in justifiers if j is not judgment_rules[0]])
+
+ pre = ["true"]
+ for justifier in justifiers:
+ others = [j for j in justifiers if j is not justifier]
+ pre = ["and", pre, justifier.to_expr(others, self)]
+ if solver.check_sat(["not", ["implies", pre, self.expression]]):
+ print("Failed verification!")
+ raise NotImplementedError
+ self.assumptions = justifiers
+
+ def raw_assumptions(self):
+ if self.assumptions is None:
+ return [self]
+ return [raw for assumption in self.assumptions
+ for raw in assumption.raw_assumptions()]
+
+ def has_ancestor(self, ancestor):
+ if self == ancestor:
+ return True
+ return any(assumption.has_ancestor(ancestor)
+ for assumption in (self.assumptions or []))
+
+class JudgmentRule:
+ def try_justify(self, claim, justifiers):
+ raise NotImplementedError
diff --git a/core_pseudo_claims.py b/core_pseudo_claims.py
new file mode 100644
index 0000000..c6fbd48
--- /dev/null
+++ b/core_pseudo_claims.py
@@ -0,0 +1,77 @@
+from claim import *
+
+class TautClaim(Claim):
+ def __init__(self):
+ self.expression = ["true"]
+ self.assumptions = []
+
+class GeneralizeClaim(Claim):
+ def __init__(self, checker):
+ self.checker = checker
+ self.label = "<generalize>"
+ self.expression = None
+ self.assumptions = []
+
+ def to_expr(self, justifiers, claim):
+ """We can generalize a variable if (i) no ultimate assumption involves
+ it, and (ii) no later variable is referenced in the expression which
+ has higher variable id
+ """
+ def free_vars(expr):
+ if not isinstance(expr, list):
+ return set()
+ if expr[0] == "varid":
+ return set({expr[1]})
+ if expr[0] in ("forall", "exists"):
+ var_id = expr[1][1]
+ within = free_vars(expr[2])
+ return within - set({var_id})
+ return set({var_id for arg in expr for var_id in free_vars(arg)})
+
+ expr = ["true"]
+ for justifier in justifiers:
+ subexpr = justifier.to_expr()
+ var_ids = set(free_vars(subexpr))
+ assumption_vars = set({
+ var_id
+ for assumption in justifier.raw_assumptions()
+ for var_id in free_vars(assumption.expression)
+ })
+            # We only universally quantify over variables that are newer
+            # (higher id) than the oldest "blocking" variable: a variable free
+            # in some ultimate assumption, or a dependent variable free in the
+            # expression.
+ younger_than = min(assumption_vars
+ | (self.checker.dependent_vars & var_ids)
+ | {self.checker.uid})
+ for var_id in sorted(var_ids):
+ if var_id <= younger_than:
+ continue
+ subexpr = ["forall", ["varid", var_id], subexpr]
+ expr = ["and", subexpr, expr]
+ return expr
+
+class ImplicationRule(JudgmentRule):
+ def try_justify(self, claim, justifiers):
+ assert len(justifiers) == 2
+
+ pre, post = justifiers
+ if not post.has_ancestor(pre):
+ pre, post = post, pre
+ assert post.has_ancestor(pre)
+
+ checking = ["implies", pre.to_expr(), post.to_expr()]
+ if solver.check_sat(["not", ["implies", checking, claim.to_expr()]]):
+ print("Failed implication!")
+ raise NotImplementedError
+
+ claim.assumptions = []
+ # The assumptions graph is a DAG. We want to maintain that. We go back,
+ # collecting assumptions, recursing on ones so long as they have
+ # something discharged as an ancestor.
+ def visit_assumption(assumption):
+ if assumption.has_ancestor(pre):
+ for parent in (assumption.assumptions or []):
+ visit_assumption(parent)
+ elif assumption not in claim.assumptions:
+ claim.assumptions.append(assumption)
+ visit_assumption(post)
diff --git a/examples/group.z3fc b/examples/group.z3fc
new file mode 100644
index 0000000..6866cf7
--- /dev/null
+++ b/examples/group.z3fc
@@ -0,0 +1,51 @@
+take 0;
+
+[+ is associative]
+ forall x. forall y. forall z.
+ ((x + y) + z) = (x + (y + z));
+
+[0 is id]
+ forall x.
+ ((0 + x) = x) and ((x + 0) = x);
+
+[+ is invertible]
+ forall x. exists xinv.
+ ((x + xinv) = 0) and ((xinv + x) = 0);
+
+# Vampire does a good job of proving basic things for us.
+[cancellation]
+ forall x. forall y. forall z.
+ (((x + y) = (x + z)) or ((y + x) = (z + x)))
+ implies
+ (y = z)
+ by [+ is associative], [0 is id], [+ is invertible];
+
+# Or, if we want, we can break things into smaller steps and prove them more
+# manually.
+take any x;
+take any y;
+take any z;
+take xinv s.t.
+ [xinv inverts x]
+ (xinv + x) = 0
+ by [+ is invertible];
+
+[L1]
+ (x + y) = (x + z);
+
+[L2]
+ ((xinv + x) + y) = ((xinv + x) + z)
+ by [L1], [+ is associative];
+
+[L3]
+ y = z
+ by [L2], [xinv inverts x], [0 is id];
+
+[L4]
+ ((x + y) = (x + z)) implies (y = z)
+ by [Implication], [L3], [L1];
+
+[left cancellation (manually)]
+ forall x. forall y. forall z.
+ ((x + y) = (x + z)) implies (y = z)
+ by [Generalize], [L4];
diff --git a/examples/readme.z3fc b/examples/readme.z3fc
new file mode 100644
index 0000000..c6601e1
--- /dev/null
+++ b/examples/readme.z3fc
@@ -0,0 +1,71 @@
+# Core Logic
+[Claim 1]
+ forall x. P(x);
+
+[Claim 2]
+ forall x. forall y.
+ P(x) and P(y)
+by [Claim 1];
+
+# {Taut} Pseudo-Claim
+[LEM]
+ forall x. P(x) or (not P(x))
+by {Taut};
+
+# {Implication} Pseudo-Claim
+[Claim 3]
+ (forall x. P(x))
+ implies
+ (forall x. forall y. P(x) and P(y))
+by {Implication}, [Claim 2], [Claim 1];
+
+# Variables and the {Generalize} Pseudo-Claim
+take any x;
+
+[exists successor]
+ forall x.
+ exists y.
+ successor(x, y);
+
+take any 0;
+
+take 1 s.t.
+ [1 successor 0]
+ successor(0, 1)
+ by [exists successor];
+
+[<= relates all]
+ forall x. forall y.
+ (x <= y) or (y <= x);
+
+take x;
+
+[x <= x]
+ (x <= x) or (x <= x)
+by [<= relates all];
+
+[reflexivity]
+ (x <= x)
+by {Generalize}, [x <= x];
+
+# Real Arithmetic Extension
+take any x s.t. [X is Real] x in Reals;
+take any y s.t. [Y is Real] y in Reals;
+
+[Test Reals]
+ (x > y) implies ((x + x) > (y + y))
+by {SMTReal}, [X is Real], [Y is Real];
+
+# Set Theory Schemata, Notation
+take Naturals;
+take 0 s.t. [0 in Naturals] 0 in Naturals;
+take 1 s.t. [0 < 1] 0 < 1;
+
+[negatives are less than 1]
+ 0 in {x in Naturals | x < 1}
+by [0 in Naturals], [0 < 1], {FilterSet};
+
+[check fin sets]
+ forall x. forall y. forall z.
+ (z in {x, y}) iff ((z = x) or (z = y))
+by [Taut], {FinSet};
diff --git a/main.py b/main.py
new file mode 100644
index 0000000..beede96
--- /dev/null
+++ b/main.py
@@ -0,0 +1,13 @@
+import sys
+from checker import Checker
+from plugins.zfc_plugins import *
+from plugins.real_solver_plugins import *
+
+checker = Checker()
+checker.register_function_parsers(["f", "g"])
+checker.register_function_parsers(["+", "-", "*", "/"], infix_too=True)
+checker.register_predicate_parsers(["P", "Q", "successor"])
+checker.register_predicate_parsers([">", "<", "<=", ">=", "=", "in", "~"], infix_too=True)
+register_zfc_plugins(checker)
+register_real_solver_plugins(checker)
+checker.check_file(sys.argv[1])
diff --git a/parsing.py b/parsing.py
new file mode 100644
index 0000000..84bf4bc
--- /dev/null
+++ b/parsing.py
@@ -0,0 +1,179 @@
+OP_TYPES = [
+ "claim", # label, expr, justifiers
+ "constant", # name, (claim or None)
+ "forall", # name, expr
+ "exists", # name, expr
+ "apply_function", # fn_name, expr
+ "apply_predicate", # pred_name, expr
+ "var", # name
+ "not", "or", "and", "iff", "implies",
+]
+
+#### LEXING
+def lex(string):
+ string = " ".join(line.split("#")[0] for line in string.split("\n"))
+ lexemes = []
+ while string:
+ if string[0].isspace():
+ string = string.strip()
+ elif string[0] in "({[]})":
+ lexemes.append(string[0])
+ string = string[1:]
+ elif string[0].isalnum() or string[0] == "_":
+ for i, c in enumerate(string):
+ if not (c.isalnum() or c == "_"):
+ lexemes.append(string[:i])
+ string = string[i:]
+ break
+ else:
+ lexemes.append(string)
+ break
+ else:
+ for i, c in enumerate(string):
+ if c.isspace() or c.isalnum() or c == "_":
+ lexemes.append(string[:i])
+ string = string[i:]
+ break
+ else:
+ lexemes.append(string)
+ break
+ return lexemes
+
+#### PARSER HELPERS
+def skipto(lexemes, search_for):
+ paren_depth = 0
+ opens, closes = "({[", ")}]"
+ for i, l in enumerate(lexemes):
+ paren_depth += int(l in opens)
+ paren_depth -= int(l in closes)
+ if paren_depth < 0:
+ raise ParseError
+ if l == search_for and not paren_depth:
+ return lexemes[:i], lexemes[i+1:]
+ raise ParseError
+
+def split_csv(lexemes, delim):
+ try:
+ arg0, rest = skipto(lexemes, delim)
+ return [arg0] + split_csv(rest, delim)
+ except ParseError:
+ return [lexemes] if lexemes else []
+
+#### PARSER CLASS
+class ParseError(Exception):
+ pass
+
+class Parser:
+ def __init__(self, parser_methods):
+ self.parser_methods = parser_methods.copy()
+
+ def parse(self, lexemes):
+ # print(" ".join(lexemes))
+ for parser in self.parser_methods:
+ try:
+ return parser(self.parse, lexemes)
+ except ParseError:
+ continue
+ raise ParseError
+
+def parse_only_when(condition):
+ if not condition:
+ raise ParseError
+
+def parse_statements(parse, lexemes):
+ # Ensure we have at least one ;
+ skipto(lexemes, ";")
+ return [parse(statement)
+ for statement in split_csv(lexemes, ";")]
+
+def parse_parens(parse, lexemes):
+ parse_only_when(lexemes[0] == "(")
+ inner, rest = skipto(lexemes, ")")
+ parse_only_when(not rest)
+ return parse(inner[1:])
+
+def parse_logical_ops(parse, lexemes):
+ for op in ("or", "and", "iff", "implies"):
+ try:
+ lhs, rhs = skipto(lexemes, op)
+ except ParseError:
+ continue
+ return [op, parse(lhs), parse(rhs)]
+ raise ParseError
+
+def parse_negate(parse, lexemes):
+ parse_only_when(lexemes[0] == "not")
+ return ["not", parse(lexemes[1:])]
+
+# [Name Label] expression by [Name 1], [Name 2], ...
+def parse_claim(parse, lexemes):
+ parse_only_when(lexemes[0] == "[")
+ label, lexemes = skipto(lexemes, "]")
+ parse_only_when(lexemes)
+ label = " ".join(label[1:])
+
+ try:
+ lexemes, justifiers = skipto(lexemes, "by")
+ justifiers = split_csv(justifiers, ",")
+ justifiers = [" ".join(justifier[1:-1])
+ for justifier in justifiers]
+ except ParseError:
+ justifiers = []
+ return ["claim", label, parse(lexemes), justifiers]
+
+def parse_constant(parse, lexemes):
+ parse_only_when(lexemes[0] == "take")
+ lexemes = lexemes[1:]
+ if lexemes[0] == "any":
+ lexemes = lexemes[1:]
+ name, lexemes = lexemes[0], lexemes[1:]
+ claim = None
+ if lexemes:
+ parse_only_when(lexemes[:4] == ["s", ".", "t", "."])
+ claim = parse(lexemes[4:])
+ return ["constant", name, claim]
+
+def parse_forall(parse, lexemes):
+ parse_only_when(lexemes[0] == "forall")
+ parse_only_when(lexemes[2] == ".")
+ return ["forall", lexemes[1], parse(lexemes[3:])]
+
+def parse_exists(parse, lexemes):
+ parse_only_when(lexemes[0] == "exists")
+ parse_only_when(lexemes[2] == ".")
+ return ["exists", lexemes[1], parse(lexemes[3:])]
+
+def parse_name(parse, lexemes):
+ parse_only_when(len(lexemes) == 1)
+ parse_only_when(
+ all(c.isalnum() or c == "_" for c in lexemes[0]))
+ return ["var", lexemes[0]]
+
+DEFAULT_PARSERS = [
+ parse_statements,
+ parse_claim,
+ parse_constant,
+ parse_forall,
+ parse_exists,
+ parse_parens,
+ parse_logical_ops,
+ parse_name,
+ parse_negate,
+]
+
+#### CUSTOM PARSER HELPERS
+def prefix_parser(op, name):
+ def do_parse(parse, lexemes):
+ parse_only_when(lexemes[0] == name)
+ parse_only_when(lexemes[1] == "(")
+ inner, rest = skipto(lexemes[1:], ")")
+ parse_only_when(not rest)
+ args = split_csv(inner[1:], ",")
+ return [op, name, [parse(arg) for arg in args]]
+ return do_parse
+
+def infix_parser(op, name):
+ def do_parse(parse, lexemes):
+ lhs, rhs = skipto(lexemes, name)
+ return [op, name, [parse(lhs), parse(rhs)]]
+ return do_parse
diff --git a/plugins/real_solver_plugins.py b/plugins/real_solver_plugins.py
new file mode 100644
index 0000000..52e1f17
--- /dev/null
+++ b/plugins/real_solver_plugins.py
@@ -0,0 +1,131 @@
+import solver
+from claim import Claim
+
+def register_real_solver_plugins(checker):
+ checker.scope["Reals"] = ["varid", checker.count()]
+ checker.scope["SMTReal"] = RealSMTClaim(checker)
+
+class RealSMTClaim(Claim):
+ def __init__(self, checker):
+ self.reals_var = checker.scope["Reals"]
+ self.expression = None
+ self.assumptions = []
+
+ def to_expr(self, justifiers, claim):
+ """Check that, from everything else, we can prove all variables are in
+        the Reals set. Then compile everything to SMT and shunt out to CVC5.
+ """
+ def var_ids(expr):
+ if not isinstance(expr, list) or not expr:
+ return set()
+ if expr[0] == "varid":
+ return set({expr[1]})
+ return set({var_id for arg in expr for var_id in var_ids(arg)})
+
+ other_exprs = [justifier.to_expr() for justifier in justifiers]
+
+ # First: ensure that all involved var ids are in the Reals set
+ pre = ["true"]
+ for expr in other_exprs:
+ pre = ["and", pre, expr]
+
+ found = var_ids(claim.to_expr())
+ for expr in other_exprs:
+ found.update(var_ids(expr))
+ found = found - set({self.reals_var[1]})
+
+ precondition = ["true"]
+ for var_id in sorted(found):
+ precondition = ["and",
+ ["apply_predicate", "in", [["varid", var_id], self.reals_var]],
+ precondition]
+
+ if solver.check_sat(["not", ["implies", pre, precondition]]):
+ print("Failed precondition for using RealSMTClaim; make sure all"
+ "variables are in Reals!")
+ raise NotImplementedError
+
+ if check_sat_reals(["not", ["implies", pre, claim.to_expr()]]):
+ print("Failed SMT verification!")
+ raise NotImplementedError
+
+ return ["false"]
+
+# TODO: dedup with solver.py
+import pysmt.shortcuts as ps
+from pysmt.smtlib.script import smtlibscript_from_formula
+import warnings
+from subprocess import Popen, PIPE
+import pysmt.environment
+from pysmt.typing import FunctionType, BOOL, REAL
+
+def check_sat_reals(expression):
+ pysmt.environment.push_env(env=None)
+ ps.get_env().enable_infix_notation = True
+
+ name_to_var = dict({
+ "=": ps.Equals,
+ })
+ def visit(expr):
+ if expr[0] == "varid":
+ try:
+ return name_to_var[expr[1]]
+ except KeyError:
+ name_to_var[expr[1]] = ps.Symbol(f"v_{expr[1]}", REAL)
+ return name_to_var[expr[1]]
+
+ if expr[0] == "implies":
+ return ps.Implies(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "iff":
+ return ps.Iff(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "and":
+ return ps.And(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "or":
+ return ps.Or(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "not":
+ return ps.Not(visit(expr[1]))
+ if expr[0] == "false":
+ return ps.FALSE()
+ if expr[0] == "true":
+ return ps.TRUE()
+
+ if expr[0] == "exists":
+ return ps.Exists([visit(expr[1])], visit(expr[2]))
+ if expr[0] == "forall":
+ return ps.ForAll([visit(expr[1])], visit(expr[2]))
+
+ if expr[0] == "apply_predicate":
+ if expr[1] in ("<=", ">=", "<", ">"):
+ return eval(f"x {expr[1]} y", {"x": visit(expr[2][0]), "y": visit(expr[2][1])})
+ if expr[1] not in name_to_var:
+ fn_type = FunctionType(BOOL, [REAL for _ in expr[2]])
+ name_to_var[expr[1]] = ps.Symbol(f"{expr[1]}_p", fn_type)
+ return name_to_var[expr[1]](*[visit(arg) for arg in expr[2]])
+
+ if expr[0] == "apply_function":
+ if expr[1] in ("+", "-", "*", "/"):
+ return eval(f"x {expr[1]} y", {"x": visit(expr[2][0]), "y": visit(expr[2][1])})
+ if expr[1] not in name_to_var:
+ fn_type = FunctionType(REAL, [REAL for _ in expr[2]])
+ name_to_var[expr[1]] = ps.Symbol(f"{expr[1]}_f", fn_type)
+ return name_to_var[expr[1]](*[visit(arg) for arg in expr[2]])
+
+ print(expr)
+ raise NotImplementedError
+
+ warnings.filterwarnings("ignore", category=UserWarning)
+ smtlibscript_from_formula(visit(expression), logic="QF_UFNIRA").to_file("/tmp/out.smt2", daggify=False)
+
+ if True:
+ # print("Calling CVC5...")
+ process = Popen(["cvc5", "/tmp/out.smt2"], stdout=PIPE)
+ (output, err) = process.communicate()
+ exit_code = process.wait()
+ is_sat = "unsat" not in output.decode("utf-8")
+ else:
+ raise NotImplementedError
+
+ return is_sat
diff --git a/plugins/zfc_plugins.py b/plugins/zfc_plugins.py
new file mode 100644
index 0000000..c78a799
--- /dev/null
+++ b/plugins/zfc_plugins.py
@@ -0,0 +1,129 @@
+from parsing import *
+from claim import Claim
+
+def register_zfc_plugins(checker):
+ checker.FILTER_SETS = dict()
+ def parse_filterset_(parse, lexemes):
+ return parse_filterset(checker, parse, lexemes)
+ checker.parsers.extend([parse_filterset_, parse_finset])
+ checker.scope["FilterSet"] = FilterSetClaim(checker)
+ checker.scope["FinSet"] = FinSetClaim()
+
+def parse_finset(parse, lexemes):
+ parse_only_when(lexemes[0] == "{")
+ inner, rest = skipto(lexemes, "}")
+ parse_only_when(not rest)
+ args = split_csv(inner[1:], ",")
+ return ["apply_function",
+ f"_fin_set_{len(args)}", [parse(arg) for arg in args]]
+
+def parse_filterset(checker, parse, lexemes):
+ parse_only_when(lexemes[0] == "{")
+ inner, rest = skipto(lexemes, "}")
+ parse_only_when(not rest)
+
+ lhs, rhs = skipto(inner[1:], "|")
+ parse_only_when(lhs[1] == "in")
+
+ prototype_var = parse(lhs[:1])
+ universe = parse(lhs[2:])
+
+ filter_expr = parse(rhs)
+ def var_names(expr):
+ if not isinstance(expr, list):
+ return set()
+ if expr[0] == "var":
+ return set({expr[1]})
+ return set({x for subexpr in expr for x in var_names(subexpr)})
+ filter_vars = var_names(filter_expr)
+ filter_vars = sorted(filter_vars - set({lhs[0]}))
+
+ fn_name = f"_filter_set_{checker.count()}"
+ checker.FILTER_SETS[fn_name] = (prototype_var, filter_vars, filter_expr)
+ return ["apply_function",
+ fn_name, [["var", filter_var] for filter_var in filter_vars]]
+
+def functions(expr):
+ if not isinstance(expr, list) or not expr:
+ return set()
+ if expr[0] == "apply_function":
+ return set({expr[1]})
+ return set({function for arg in expr for function in functions(arg)})
+
+class FilterSetClaim(Claim):
+ def __init__(self, checker):
+ self.checker = checker
+ self.expression = None
+ self.assumptions = []
+
+ def to_expr(self, justifiers, claim):
+ """Goes through all other justifiers, find filter set functions, and do
+ a forall expression.
+ """
+ def filter_sets(expr):
+ return set({function for function in functions(expr)
+ if function.startswith("_filter_set_")})
+
+ found = filter_sets(claim.to_expr())
+ for justifier in justifiers:
+ found.update(filter_sets(justifier.to_expr()))
+
+ expr = ["true"]
+ for filter_set in sorted(found):
+ prototype_var_name, arg_names, filter_expr = self.checker.FILTER_SETS[filter_set]
+ prototype_var = ["varid", self.checker.count()]
+ dummy_args = [["varid", self.checker.count()] for _ in arg_names]
+
+ claim = ["apply_predicate", "in",
+ [prototype_var,
+ ["apply_function", filter_set, dummy_args]]]
+ def fill_expr(expr):
+ if not isinstance(expr, list) or not expr:
+ return expr
+ if expr == prototype_var_name:
+ return prototype_var
+ if expr[0] == "var":
+ return dummy_args[arg_names.index(expr[1])]
+ return [fill_expr(arg) for arg in expr]
+ claim = ["iff", claim, fill_expr(filter_expr)]
+ for dummy_arg in dummy_args:
+ claim = ["forall", dummy_arg, claim]
+ claim = ["forall", prototype_var, claim]
+ expr = ["and", claim, expr]
+ return expr
+
+class FinSetClaim(Claim):
+ def __init__(self):
+ self.expression = None
+ self.assumptions = []
+
+ def to_expr(self, justifiers, claim):
+ """Goes through all other justifiers, find fin set functions, and do
+ a forall expression.
+ """
+ def fin_sets(expr):
+ return set({function for function in functions(expr)
+ if function.startswith("_fin_set_")})
+
+ found = fin_sets(claim.to_expr())
+ for justifier in justifiers:
+ found.update(fin_sets(justifier.to_expr()))
+
+        expr = ["true"]
+        # Grab the checker up front: `claim` is rebound to an expression list
+        # inside the loop, so `claim.checker` would break on later iterations.
+        checker = claim.checker
+        for fin_set in sorted(found):
+            n_args = int(fin_set.split("_")[-1])
+            prototype_var = ["varid", checker.count()]
+            dummy_args = [["varid", checker.count()] for _ in range(n_args)]
+ claim = ["apply_predicate", "in",
+ [prototype_var,
+ ["apply_function", fin_set, dummy_args]]]
+ cond = ["false"]
+ for dummy_arg in dummy_args:
+ cond = ["or", cond,
+ ["apply_predicate", "=", [prototype_var, dummy_arg]]]
+ claim = ["iff", claim, cond]
+ for dummy_arg in dummy_args:
+ claim = ["forall", dummy_arg, claim]
+ claim = ["forall", prototype_var, claim]
+ expr = ["and", claim, expr]
+ return expr
diff --git a/solver.py b/solver.py
new file mode 100644
index 0000000..fdde745
--- /dev/null
+++ b/solver.py
@@ -0,0 +1,75 @@
+import pysmt.shortcuts as ps
+from pysmt.smtlib.script import smtlibscript_from_formula
+from pysmt.logics import UF
+import warnings
+from subprocess import Popen, PIPE
+import pysmt.environment
+from pysmt.typing import Type, FunctionType, BOOL
+
+def check_sat(expression):
+ pysmt.environment.push_env(env=None)
+ ps.get_env().enable_infix_notation = True
+ Universe = Type("Universe")
+
+ # print(expression)
+ name_to_var = dict({
+ "=": ps.Equals,
+ })
+ def visit(expr):
+ if expr[0] == "varid":
+ try:
+ return name_to_var[expr[1]]
+ except KeyError:
+ name_to_var[expr[1]] = ps.Symbol(f"v_{expr[1]}", Universe)
+ return name_to_var[expr[1]]
+
+ if expr[0] == "implies":
+ return ps.Implies(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "iff":
+ return ps.Iff(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "and":
+ return ps.And(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "or":
+ return ps.Or(visit(expr[1]), visit(expr[2]))
+ if expr[0] == "not":
+ return ps.Not(visit(expr[1]))
+ if expr[0] == "false":
+ return ps.FALSE()
+ if expr[0] == "true":
+ return ps.TRUE()
+
+ if expr[0] == "exists":
+ return ps.Exists([visit(expr[1])], visit(expr[2]))
+ if expr[0] == "forall":
+ return ps.ForAll([visit(expr[1])], visit(expr[2]))
+
+ if expr[0] == "apply_predicate":
+ if expr[1] not in name_to_var:
+ fn_type = FunctionType(BOOL, [Universe for _ in expr[2]])
+ name_to_var[expr[1]] = ps.Symbol(f"{expr[1]}_p", fn_type)
+ return name_to_var[expr[1]](*[visit(arg) for arg in expr[2]])
+
+ if expr[0] == "apply_function":
+ if expr[1] not in name_to_var:
+ fn_type = FunctionType(Universe, [Universe for _ in expr[2]])
+ name_to_var[expr[1]] = ps.Symbol(f"{expr[1]}_f", fn_type)
+ return name_to_var[expr[1]](*[visit(arg) for arg in expr[2]])
+
+ print(expr)
+ raise NotImplementedError
+
+ warnings.filterwarnings("ignore", category=UserWarning)
+ smtlibscript_from_formula(visit(expression), logic=UF).to_file("/tmp/out.smt2", daggify=False)
+
+ if True:
+ # print("Calling Vampire...")
+ process = Popen(["vampire", "--input_syntax", "smtlib2", "/tmp/out.smt2"], stdout=PIPE)
+ (output, err) = process.communicate()
+ exit_code = process.wait()
+ is_sat = "Termination reason: Refutation" not in output.decode("utf-8")
+ else:
+ raise NotImplementedError
+
+ return is_sat
diff --git a/vim_syntax_highlighting/ftdetect/z3fc.vim b/vim_syntax_highlighting/ftdetect/z3fc.vim
new file mode 100644
index 0000000..1aead81
--- /dev/null
+++ b/vim_syntax_highlighting/ftdetect/z3fc.vim
@@ -0,0 +1 @@
+au BufRead,BufNewFile *.z3fc set filetype=z3fc
diff --git a/vim_syntax_highlighting/syntax/z3fc.vim b/vim_syntax_highlighting/syntax/z3fc.vim
new file mode 100644
index 0000000..a7a06b4
--- /dev/null
+++ b/vim_syntax_highlighting/syntax/z3fc.vim
@@ -0,0 +1,31 @@
+" https://vim.fandom.com/wiki/Creating_your_own_syntax_files
+" Vim syntax file
+" Based on Celestia Star Catalogs by Kevin Lauder
+" Language: z3fc
+" Maintainer: Matthew Sotoudeh
+" Latest Revision: 04 January 2023
+
+if exists("b:current_syntax")
+ finish
+endif
+
+let b:current_syntax = "z3fc"
+
+" Matches
+syn match literalTag "\[[^\]]*\]"
+syn match literalTag "[{][^}]*[}]"
+syn match basicKeywords "\zsby \ze"
+syn match basicKeywords "\zsforall\ze"
+syn match basicKeywords "\zsexists\ze"
+syn match basicKeywords "\zsiff\ze"
+syn match basicKeywords "\zsimplies\ze"
+syn match basicKeywords "\zs or \ze"
+syn match basicKeywords "\zs and \ze"
+syn match basicKeywords "\zstake \ze"
+syn match basicKeywords " \zss.t.\ze "
+syn match basicKeywords "\zsany\ze [a-zA-Z]*"
+syn match comment "\zs#.*\ze$"
+
+hi def link literalTag Type
+hi def link basicKeywords Keyword
+hi def link comment Comment