From Ramana.Kumar at cl.cam.ac.uk Tue Jul 23 09:37:46 2013 From: Ramana.Kumar at cl.cam.ac.uk (Ramana Kumar) Date: Tue, 23 Jul 2013 11:37:46 +0200 Subject: [opentheory-users] Scalable LCF-style proof translation Message-ID: Cezary Kaliszyk and Alexander Krauss have done a lovely piece of work that was just presented at ITP 2013 (http://cl-informatik.uibk.ac.at/users/cek/docs/kaliszyk-itp13.pdf) about efficient import from HOL Light into Isabelle/HOL. Their approach has many similarities to OpenTheory, and their proof trace format is not too different from the OpenTheory article format; indeed, in his talk Alex said that integration with OpenTheory would be a worthy goal. The performance of their method is very impressive, and certainly faster than using OpenTheory as it is now, so I think it would greatly benefit OpenTheory to adopt and standardise K&K's ideas. I write this message to ask OpenTheory users for their comments on this proposal, and to make a call for volunteers to help with the integration/adoption. As a first step, perhaps we could just make a list of the differences from how OpenTheory currently works, to see what needs to be done. Ramana -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ramana.Kumar at cl.cam.ac.uk Wed Jul 24 12:51:48 2013 From: Ramana.Kumar at cl.cam.ac.uk (Ramana Kumar) Date: Wed, 24 Jul 2013 14:51:48 +0200 Subject: [opentheory-users] Scalable LCF-style proof translation In-Reply-To: References: Message-ID: So, the first observation is a difference in the article formats: Currently, OpenTheory articles are programs for a virtual stack machine. Stack objects may live inside the dictionary and be referenced (by an explicit ref command) by integer keys, or they can be used directly without going into the dictionary. By contrast, K&K articles are programs for a virtual machine without a stack.
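To make the contrast concrete, here is a minimal sketch of the two machine styles processing the same construction of the term `c : bool`. This is a hypothetical miniature with invented command encodings ("push", "lit"), not the real OpenTheory or K&K readers:

```python
# Stack style (OpenTheory-like): commands pop their arguments off a stack,
# so an intermediate object need never be named.
def run_stack(commands):
    stack = []
    for cmd, *args in commands:
        if cmd == "push":
            stack.append(args[0])               # literal name or type
        elif cmd == "constTerm":
            ty = stack.pop()                    # pops a type...
            c = stack.pop()                     # ...and a constant name
            stack.append(("term", c, ty))
    return stack

# Dictionary style (K&K-like): every command names its arguments by
# integer key and stores its result under a fresh key.
def run_dict(commands):
    d = {}
    for key, cmd, *args in commands:
        if cmd == "lit":
            d[key] = args[0]
        elif cmd == "constTerm":
            d[key] = ("term", d[args[0]], d[args[1]])  # args are keys
    return d

stack_result = run_stack([("push", "c"), ("push", "bool"), ("constTerm",)])
dict_result = run_dict([(0, "lit", "c"), (1, "lit", "bool"),
                        (2, "constTerm", 0, 1)])
```

In the dictionary style every intermediate object gets a key and can be reused; in the stack style an object can be consumed without ever being named, at the cost of explicit stack manipulation commands.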
The arguments to commands are given inline with the commands, and are always integer keys (i.e., every object is in the dictionary) or literal names. This simplifies and shortens the article format and processing. There are fewer kinds of object (just terms, types, theorems, and names; whereas OpenTheory also requires numbers and lists) and no stack manipulation commands. (Also, I suspect K&K constants are identical if they have the same name; they aren't generative (identity depends on construction provenance) as in OT. This is more intuitive to my taste.) Is there a good reason for the stack-based approach? I couldn't think of any, but maybe I'm missing something. One thing the stack gives you is the ability to share sub-objects like lists of terms; but I don't think this is worthwhile. (With K&K you of course can still share the main objects (terms,types,thms).) On Tue, Jul 23, 2013 at 11:37 AM, Ramana Kumar wrote: > Cezary Kaliszyk and Alexander Krauss have done a lovely piece of work that > was just presented at ITP 2013 ( > http://cl-informatik.uibk.ac.at/users/cek/docs/kaliszyk-itp13.pdf) about > efficient import from HOL Light into Isabelle/HOL. > > Their approach has many similarities to OpenTheory, and their proof trace > format is not too different from OpenTheory article format, and indeed in > Alex's talk he said integration with OpenTheory would be a worthy goal. > > The performance of their method is very impressive, and certainly faster > than using OpenTheory as it is now, so I think it would greatly benefit > OpenTheory to adopt and standardise K&K's ideas. > > I write this message to ask OpenTheory users for their comments on this > proposal, and to make a call for volunteers to help with the > integration/adoption. As a first step, perhaps we could just make a list of > the differences from how OpenTheory currently works, to see what needs to > be done. > > Ramana >
From joe at gilith.com Fri Jul 26 21:29:25 2013 From: joe at gilith.com (Joe Leslie-Hurd) Date: Fri, 26 Jul 2013 14:29:25 -0700 Subject: [opentheory-users] Scalable LCF-style proof translation In-Reply-To: References: Message-ID: Hi Ramana, Thanks for drawing my attention to this paper; it's nice to see a collection of detailed performance statistics for the OpenTheory style of low-level proof logging. From looking at the article formats, the differences seem to be mainly stylistic, such as OpenTheory using a stack and dictionary whereas K&K use only a dictionary (this reminds me of the difference between JVM and Dalvik bytecodes). In OpenTheory it's up to the reader to decide how to process definitions: they're not inherently generative. The OpenTheory reader implements a purely functional logical kernel, so its definitions are generative, but reading an article into HOL Light would make definitions that update the global symbol table. Given these minor differences, I'm intrigued by the statement in the conclusion: "In our development we defined a new format for proof exchange traces, despite the existence of other exchange formats. We have tried writing the proof trace in the OpenTheory format, and it was roughly 10 times bigger." Cezary, Alex: Would it be possible to share this proof trace in the two formats, so I can see what difference is responsible for the blow-up? Cheers, Joe On Wed, Jul 24, 2013 at 5:51 AM, Ramana Kumar wrote: > So, the first observation is a difference in the article formats: > > Currently, OpenTheory articles are programs for a virtual stack machine. > Stack objects may live inside the dictionary and be referenced (by an > explicit ref command) by integer keys, or they can be used directly without > going into the dictionary. > > By contrast, K&K articles are programs for a virtual machine without a > stack.
> The arguments to commands are given inline with the commands, and are always > integer keys (i.e., every object is in the dictionary) or literal names. > > This simplifies and shortens the article format and processing. There are > fewer kinds of object (just terms, types, theorems, and names; whereas > OpenTheory also requires numbers and lists) and no stack manipulation > commands. > > (Also, I suspect K&K constants are identical if they have the same name; > they aren't generative (identity depends on construction provenance) as in > OT. This is more intuitive to my taste.) > > Is there a good reason for the stack-based approach? I couldn't think of > any, but maybe I'm missing something. > One thing the stack gives you is the ability to share sub-objects like lists > of terms; but I don't think this is worthwhile. (With K&K you of course can > still share the main objects (terms,types,thms).) > > > On Tue, Jul 23, 2013 at 11:37 AM, Ramana Kumar > wrote: >> >> Cezary Kaliszyk and Alexander Krauss have done a lovely piece of work that >> was just presented at ITP 2013 >> (http://cl-informatik.uibk.ac.at/users/cek/docs/kaliszyk-itp13.pdf) about >> efficient import from HOL Light into Isabelle/HOL. >> >> Their approach has many similarities to OpenTheory, and their proof trace >> format is not too different from OpenTheory article format, and indeed in >> Alex's talk he said integration with OpenTheory would be a worthy goal. >> >> The performance of their method is very impressive, and certainly faster >> than using OpenTheory as it is now, so I think it would greatly benefit >> OpenTheory to adopt and standardise K&K's ideas. >> >> I write this message to ask OpenTheory users for their comments on this >> proposal, and to make a call for volunteers to help with the >> integration/adoption. As a first step, perhaps we could just make a list of >> the differences from how OpenTheory currently works, to see what needs to be >> done. 
>> >> Ramana > > > _______________________________________________ > opentheory-users mailing list > opentheory-users at gilith.com > http://www.gilith.com/mailman/listinfo/opentheory-users > From Ramana.Kumar at cl.cam.ac.uk Fri Jul 26 21:47:10 2013 From: Ramana.Kumar at cl.cam.ac.uk (Ramana Kumar) Date: Fri, 26 Jul 2013 23:47:10 +0200 Subject: [opentheory-users] Scalable LCF-style proof translation In-Reply-To: References: Message-ID: On Fri, Jul 26, 2013 at 11:29 PM, Joe Leslie-Hurd wrote: > In OpenTheory it's up to the reader > to decide how to process definitions: they're not inherently > generative. The OpenTheory reader implements a purely functional > logical kernel, so its definitions are generative, but reading an > article into HOL Light would make definitions that update the global > symbol table. > I meant to talk about generativity of term-formation operations, not definitions. For example, in OpenTheory if you use the const command twice on the same name, and then constTerm with the same type on the two resulting consts, the two resulting terms will not be alpha-equivalent. This effectively forces you to put the constTerm into the dictionary as soon as you create it, so you can reuse the same one. From joe at gilith.com Fri Jul 26 22:07:05 2013 From: joe at gilith.com (Joe Leslie-Hurd) Date: Fri, 26 Jul 2013 15:07:05 -0700 Subject: [opentheory-users] Scalable LCF-style proof translation In-Reply-To: References: Message-ID: Hi Ramana, > For example, in OpenTheory if you use the const command twice on the same > name, and then constTerm with the same type on the two resulting consts, the > two resulting terms will not be alpha-equivalent. The two resulting terms should indeed be alpha-equivalent, and if you have seen otherwise please report it as a bug. I made a little example article (appended) that shows this.
Cheers, Joe

_____________________________________

# Tiny example to check that two constructed constant terms
# are alpha-equivalent - Joe Leslie-Hurd
nil
# First construction of constant term `c : bool`
"c"
const
"bool"
typeOp
nil
opType
constTerm
axiom
nil
# Second construction of constant term `c : bool`
"c"
const
"bool"
typeOp
nil
opType
constTerm
# This thm would fail if the two terms were not alpha-equivalent
thm

From rda at lemma-one.com Wed Jul 31 11:12:22 2013 From: rda at lemma-one.com (Rob Arthan) Date: Wed, 31 Jul 2013 12:12:22 +0100 Subject: [opentheory-users] Problems with the hol light Open Theory support Message-ID: I have a couple of problems with the hol light implementation of Open Theory. I followed the instructions at http://src.gilith.com/hol-light.html, but got lots of errors when I tried to load opentheory/all.ml. The problem was a syntax error caused by a missing semi-colon in theorems.ml (see patch below). When I fixed that, opentheory/all.ml loaded without any errors. From a glance at the source, I expected interesting things to appear in opentheory/articles, but there was nothing. What have I missed? Regards, Rob.

--- theorems.ml-	2013-07-31 11:33:20.000000000 +0100
+++ theorems.ml	2013-07-31 11:33:25.000000000 +0100
@@ -426,7 +426,7 @@

 export_thm EXISTS_REFL;;

-let EXISTS_REFL' = ONCE_REWRITE_RULE [EQ_SYM_EQ] EXISTS_REFL;
+let EXISTS_REFL' = ONCE_REWRITE_RULE [EQ_SYM_EQ] EXISTS_REFL;;

 let EXISTS_UNIQUE_REFL = prove
 (`!a:A. ?!x. x = a`,

From joe at gilith.com Wed Jul 31 20:59:02 2013 From: joe at gilith.com (Joe Leslie-Hurd) Date: Wed, 31 Jul 2013 13:59:02 -0700 Subject: [opentheory-users] Problems with the hol light Open Theory support In-Reply-To: References: Message-ID: Hi Rob, Sorry for the broken state of the repo - I really should set up a work-in-progress branch and only push to master when it's in a consistent state.
To export a single theory from the proof logging fork of HOL Light, follow the instructions at http://www.gilith.com/research/opentheory/faq.html#export-from-hol-light The initial start_logging command is used to switch on proof logging, which is off by default since most users don't want to export the whole standard theory library. However, that may indeed be exactly what you want, in which case carry out the following two steps: cd opentheory make theories Then you should see a lot of stuff pile up in opentheory/articles/ Hope that helps, Joe On Wed, Jul 31, 2013 at 4:12 AM, Rob Arthan wrote: > I have a couple of problems with hol light implementation of Open Theory. > > I followed the instructions at http://src.gilith.com/hol-light.html, but got > lots of errors when I tried to load opentheory/all.ml. The problem was a > syntax error caused by a missing semi-colon in theorems.ml (see patch > below). > > When I fixed that, opentheory/all.ml loaded without any errors. From a > glance at the source, I expected interesting things to appear in > opentheory/articles, but there was nothing. What have I missed? > > Regards, > > Rob. > > --- theorems.ml- 2013-07-31 11:33:20.000000000 +0100 > +++ theorems.ml 2013-07-31 11:33:25.000000000 +0100 > @@ -426,7 +426,7 @@ > > export_thm EXISTS_REFL;; > > -let EXISTS_REFL' = ONCE_REWRITE_RULE [EQ_SYM_EQ] EXISTS_REFL; > +let EXISTS_REFL' = ONCE_REWRITE_RULE [EQ_SYM_EQ] EXISTS_REFL;; > > let EXISTS_UNIQUE_REFL = prove > (`!a:A. ?!x. 
x = a`, From krauss at in.tum.de Wed Jul 31 21:49:11 2013 From: krauss at in.tum.de (Alexander Krauss) Date: Wed, 31 Jul 2013 23:49:11 +0200 Subject: [opentheory-users] Scalable LCF-style proof translation In-Reply-To: References: Message-ID: <51F98657.4040208@in.tum.de> Hi Ramana, Hi Joe, Thanks, Ramana, for summarizing our discussions at ITP here on the list. Some more aspects: - When we came up with our format, the goal was really to have the simplest thing possible that achieves good scalability and performance. Choosing OpenTheory right from the start would probably have forced us to make trade-offs to stay compliant with the format, which we did not want to make at this point. However, we do not have sufficient data to say that it couldn't be done similarly with the OpenTheory format as it is now. - So far, we haven't really tried to use the OpenTheory exporter from HOL Light on Flyspeck, mainly because it seemed to rely on all those annotations present in the sources, and having to put these commands into the Flyspeck sources did not seem very attractive. - In any case, I think in the long run it does not make sense to have two formats, and I am more than happy to give up our format if the cost is not too high. On 07/26/2013 11:29 PM, Joe Leslie-Hurd wrote: > Given these minor differences, I'm intrigued by the statement in the conclusion: > > "In our development we defined a new format for proof exchange traces, despite > the existence of other exchange formats. We have tried writing the > proof trace in the > OpenTheory format, and it was roughly 10 times bigger." > > Cezary, Alex: Would it be possible to share this proof trace in the > two formats, so I can see what difference is responsible for the > blow-up?
I must pass this question on to Cezary, who made this specific experiment. I'm not even sure whether this is before or after compression... But just looking at the list of OpenTheory commands, it seems to me that building up substitutions as nested lists on the stack is rather complicated. > On Wed, Jul 24, 2013 at 5:51 AM, Ramana Kumar wrote: >> So, the first observation is a difference in the article formats: >> >> Currently, OpenTheory articles are programs for a virtual stack machine. >> Stack objects may live inside the dictionary and be referenced (by an >> explicit ref command) by integer keys, or they can be used directly without >> going into the dictionary. >> >> By contrast, K&K articles are programs for a virtual machine without a >> stack. >> The arguments to commands are given inline with the commands, and are always >> integer keys (i.e., every object is in the dictionary) or literal names. >> >> This simplifies and shortens the article format and processing. There are >> fewer kinds of object (just terms, types, theorems, and names; whereas >> OpenTheory also requires numbers and lists) and no stack manipulation >> commands. >> (Also, I suspect K&K constants are identical if they have the same name; >> they aren't generative (identity depends on construction provenance) as in >> OT. This is more intuitive to my taste.) In fact they are generative in the sense that new constant objects are constructed each time. Optimizing this does not seem very relevant to me, so I guess we can ignore this for now. >> Is there a good reason for the stack-based approach? I couldn't think of >> any, but maybe I'm missing something. Yes, I also wonder whether there was a motivation beyond the conceptual simplicity that a command is always just a single token. However, I have some further questions: - I wonder which of the offline processing we currently do is actually done similarly by the existing opentheory infrastructure.
By looking at some opentheory tool help texts, I couldn't see the answer to this question. Most of the commands seem to be concerned with package management, which is unrelated. Currently, we do
-- mark the last occurrence of any given object, to ensure deletion
-- strip material that is not relevant for some "exported" theorem
- Do the existing exporters make use of the stack in any significant way apart from what is necessary to construct objects? How does the HOL Light exporter deal with sharing (between terms)? I assume for the HOL4 exporter this is not an issue, since sharing can be observed directly. Is this correct?
- If I want to play with the tools, i.e., export some small theorem from HOL Light, apply some mapping (aka theory interpretation) and re-import into HOL4... Are there any step-by-step instructions that I can follow?
Thanks, Alex
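The last-occurrence marking that Alex describes (so a reader knows when a dictionary entry will no longer be read and can be deleted) can be sketched as a single backward pass over the trace. The following is a hypothetical miniature, not the K&K implementation:

```python
# Hypothetical miniature of last-occurrence marking: each trace step lists
# the dictionary keys it reads; annotate each step with the keys used for
# the last time there, so their entries can be deleted afterwards.
def mark_last_uses(trace):
    seen = set()
    deletions = [None] * len(trace)
    for i in range(len(trace) - 1, -1, -1):   # single backward pass
        deletions[i] = [k for k in trace[i] if k not in seen]
        seen.update(trace[i])
    return deletions

# Steps read keys: step 0 reads {0}, step 1 reads {0, 1}, step 2 reads {1, 2}
marks = mark_last_uses([[0], [0, 1], [1, 2]])
# key 0 is last used at step 1; keys 1 and 2 at step 2
```

In an offline preprocessing tool, such marks could be emitted as explicit deletion annotations in the trace, keeping the reader's dictionary small without any online liveness analysis.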