<h1>Fragments: Posts tagged 'guest'</h1>
<h1>Nirvana</h1>
<p>2023-05-02, Tim Bradshaw</p>
<p>An article constructed from several emails from my friend Zyni, reproduced with her permission. Note that Zyni’s first language is not English.</p>
<!-- more-->
<p>Many people have tried to answer what is so special about Lisp by talking about many things.</p>
<p>Such as interactive development, a thing common now to many languages of course, and if you use Racket with DrRacket not in fact how development usually works there at all. Are we to cast Racket into the outer darkness?<sup><a href="#2023-05-02-nirvana-footnote-1-definition" name="2023-05-02-nirvana-footnote-1-return">1</a></sup></p>
<p>Such as CLOS, a thing specific to Common Lisp: can you not achieve Lisp enlightenment unless you program in Common Lisp? Was Lisp enlightenment impossible before CLOS existed? What stupid ideas. Could you implement CLOS in a language which was not Lisp? Certainly you could.</p>
<p>Such as the CL condition system: a thing also specific to Common Lisp. Something also which could be implemented in any sufficiently dynamic language. Something almost nobody who writes in Common Lisp understands I think.</p>
<p>And so it goes on.</p>
<p>None of this is the answer. None of this is close to the answer. To find the answer ask <em>why</em> did these things arise in Lisp first? What is the property of Lisp which is in fact unique to Lisp and which <em>defines</em> Lisp in strict sense that if any other language had this property <em>it would be a Lisp</em>? To see answer to this you must understand <a href="https://www.tfeb.org/fragments/2022/10/03/bradshaw-s-laws/" title="Bradshaw's law">Bradshaw’s law</a> and my corollary to it:</p>
<p><strong>Bradshaw’s law.</strong> <em>All sufficiently large software systems end up being programming languages.</em></p>
<p><strong>Zyni’s corollary.</strong> <em>At whatever size you think Bradshaw’s law applies, it applies sooner than that.</em></p>
<p>This means that <em>all programming is language construction</em>.<sup><a href="#2023-05-02-nirvana-footnote-2-definition" name="2023-05-02-nirvana-footnote-2-return">2</a></sup> When you write a program you are writing a language in which to express the problem you wish to solve.</p>
<p>Now you can begin understand what is so interesting about Lisp. In almost all programming languages when you solve a problem you define a lot of new words for the language you have, and perhaps you define elaborate classifications of the nouns of the language you will allow. But you can do nothing with the structure of the language you must use because the language will not allow that: it has a fixed grammar handed down by the great and good who designed it who are sometimes not fools. And indeed you are fiercely discouraged from even understanding what it is you are doing: discouraged from understanding that you are building a new language.</p>
<p>And quite soon (sooner than you think and in fact immediately) you find you must actually have new structure, new <em>grammar</em>. But you cannot do this easily both because the language you use does not allow it and also because you do not know what it is you are doing – you do not realise that you are making a language. So probably you use a templating system or something and build an awful horror. Often this horror will have nested languages where inner languages appear in strings in outer languages. Often it will have evaluation rules so obscure and inconsistent that it is impossible for humans to write safe large programs in this language (Unix shells: I look at you). We have all seen these things.</p>
<p>And so you live out your life crawling in the dirt, never understanding what thing it is of which you are making a very bad, very unsafe, very ugly version. Because you have been taught there is only mud so all you do is pile up structures out of mud, to be washed away by the next rain. A little way over is a tribe who knows only straw and they build structures from straw which blow away in the first wind. You hate them; they hate you. Sometimes you have little wars.</p>
<p>What, on the other hand, do you do in Lisp? Well, few days ago I needed a way to express the idea of searching some (very) large structure and being able to fail in a structured way. So after ten minutes work, my program now says things like this:</p>
<pre class="brush: lisp"><code>(defun big-search-thing (thing)
  (attempting
    (quick-and-dirty thing)
    (try-harder thing)))

(defun try-harder (thing)
  (walking-thing (node thing :level 0)
    (attempting
      (first-pass thing)
      (desperate-fallback thing))))

(defun first-pass (thing)
  ...
  (when doom (fail))
  ...)</code></pre>
<p>Well it does not matter what this does and this is not what my program is actually like, but what is clear just by looking is that <em>this language is not Common Lisp</em>. Instead it is Common Lisp extended with at least two new grammatical constructs: <code>attempting</code> with its friend <code>fail</code> which looks like a verb but in fact is a control construct really, and <code>walking-thing</code> which is some kind of new iteration construct perhaps.</p>
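<p>For illustration only, here is one way such an <code>attempting</code> / <code>fail</code> pair <em>might</em> be built, with <code>catch</code>, <code>throw</code> and a dynamic variable. This is a sketch of mine, not the real implementation:</p>
<pre class="brush: lisp"><code>(defvar *failure-tag* nil)

(defun fail ()
  ;; escape from the current attempt, if there is one
  (if *failure-tag*
      (throw *failure-tag* nil)
      (error "no attempt to fail from")))

(defun call/attempts (&rest attempt-thunks)
  ;; Try each thunk in turn: a thunk which calls FAIL is abandoned and
  ;; the next one is tried.  The values of the first thunk which does
  ;; not fail are returned, or NIL if all attempts fail.
  (dolist (thunk attempt-thunks nil)
    (let ((tag (cons nil nil)))         ;unique object to use as catch tag
      (catch tag
        (let ((*failure-tag* tag))
          (return (funcall thunk)))))))

(defmacro attempting (&body attempts)
  `(call/attempts ,@(mapcar (lambda (attempt) `(lambda () ,attempt))
                            attempts)))</code></pre>
<p>Because the tag is bound dynamically, a <code>fail</code> always abandons the innermost enclosing attempt, which is what you want when attempts nest.</p>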
<p>And there is more: when you look at <code>attempting</code> you will find it is implemented by a function which uses a construct called <code>looping</code> which is <em>another</em> extension to Common Lisp. And similarly for <code>walking-thing</code> (which is not really called that) which uses I think four separate new grammatical constructs I do not remember.</p>
<p>And there is more: when I started this essay these constructs were mostly as I showed above, but we have decided this was wrong, so the new language is now somewhat different and somewhat richer. A few more tens of minutes of work, most of it altering the existing programs in the old language to use the new language. The new language is even defined using a language-extending construct which itself is an extension to CL’s provided ones.</p>
<p>And this is how you program in Lisp. <em>In Lisp, writing programs is building languages</em>: in Lisp to solve a problem is to first build a language in which the problem may be solved. And because doing this is so easy in Lisp, this is what you do even for very small problems: you incrementally extend the grammar of the language — not just its lexicon — to create a language in which to describe the problem.</p>
<p>Well, this is not surprising, is it? This is what the laws imply: programming <em>is</em> constructing languages, and this applies even for very small programs. What is surprising is that so few languages encourage this. And because they do not we end up with the horror we all know. Perhaps even this is not surprising: any language which supports this well will have all the characteristics of Lisp, will in fact <em>be</em> a Lisp. So no other languages do this because to do it requires being Lisp. So why is Lisp not more popular? Well, answer is fairly easy but this is discussion for another day, I think.</p>
<p>And now we see why Lisp got features first: because it could. Let us say you wish to explore an object system in Lisp. Well, perhaps you will want a class-defining construct, so you write a macro, <code>define-class</code> or something. And you wish to be able to send messages, so you write a <code>send</code> function and then you modify the readtable so <code>[o message ...]</code> is <code>(send o message ...)</code>. And perhaps you wish some new binding construct for fields so you write <code>with-fields</code> and so, and so.</p>
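<p>None of this is real code from anywhere, but the sort of thing meant might be sketched so (<code>object-method</code> here is an imaginary lookup function):</p>
<pre class="brush: lisp"><code>(defmacro define-class (name supers &rest fields)
  ;; imaginary: just record the class description on the name
  `(setf (get ',name 'class-description)
         (list :supers ',supers :fields ',fields)))

(defun send (object message &rest arguments)
  ;; imaginary dispatch: find the method for MESSAGE and call it
  (apply (object-method object message) object arguments))

;; Teach the reader that [o message ...] is (send o message ...)
(set-macro-character #\[
                     (lambda (stream char)
                       (declare (ignore char))
                       (cons 'send (read-delimited-list #\] stream t))))
(set-macro-character #\] (get-macro-character #\)))</code></pre>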
<p>And now you have a new language. If you were careful you may even have constructed that new language inside a single running Lisp image. And this took, perhaps, some hours. And later, you decide that no, you wish your new language to be different, so you change it. Another few hours. Eventually, in a different world, you call this part of the language ZLOS and there is a standard.</p>
<p>And this is why these linguistic innovations happen in Lisp: because Lisp is a machine for linguistic innovation. It is <em>that</em> feature of Lisp which makes it interesting, and it is <em>only</em> that feature: both because all other features derive from that one and because to have that feature is to be Lisp.</p>
<p>That is all.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-05-02-nirvana-footnote-1-definition" class="footnote-definition">
<p>Do not answer this or I will kill you with a stale loaf of bread. <a href="#2023-05-02-nirvana-footnote-1-return">↩</a></p></li>
<li id="2023-05-02-nirvana-footnote-2-definition" class="footnote-definition">
<p>This is exaggeration: if you define <em>no</em> names in your program you are, perhaps, not constructing a language. <a href="#2023-05-02-nirvana-footnote-2-return">↩</a></p></li></ol></div>
<h1>Measuring some tree-traversing functions</h1>
<p>2023-03-26, Tim Bradshaw</p>
<p>In a <a href="https://www.tfeb.org/fragments/2023/03/13/variations-on-a-theme/" title="Variations on a theme">previous article</a> my friend Zyni wrote some variations on a list-flattening function, some of which were ‘recursive’ and some of which ‘iterative’, managing the stack explicitly. We thought it would be interesting to see what the performance differences were, both for this function and a more useful variant which searches a tree rather than flattening it.</p>
<!-- more-->
<h2 id="what-we-measured">What we measured</h2>
<p>The code we used is <a href="https://github.com/tfeb/zyni-flatten" title="sample code">here</a><sup><a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-1-definition" name="2023-03-26-measuring-some-tree-traversing-functions-footnote-1-return">1</a></sup>. We measured four variations of each of two functions.</p>
<h3 id="list-flattening">List flattening</h3>
<p>All these functions use <a href="https://tfeb.github.io/tfeb-lisp-hax/#collecting-lists-forwards-and-accumulating-collecting" title="collecting"><code>collecting</code></a> to build their results forwards. They live in <a href="https://github.com/tfeb/zyni-flatten/blob/main/flatten-variants.lisp" title="flatten-variants.lisp"><code>flatten-variants.lisp</code></a>.</p>
<ul>
<li><code>flatten/implicit-stack</code> works in the obvious recursive way, with an implicit stack. This uses <a href="https://tfeb.github.io/tfeb-lisp-hax/#applicative-iteration-iterate" title="iterate"><code>iterate</code></a> to express the local recursive function.</li>
<li><code>flatten/explicit-stack</code> uses an explicit stack (called <code>agenda</code> in the code) represented as a vector, and uses <a href="https://tfeb.github.io/tfeb-lisp-hax/#decomposing-iteration-simple-loops" title="looping"><code>looping</code></a> to express iteration.</li>
<li><code>flatten/explicit-stack/adja</code> is like the previous function but it is willing to extend the explicit stack, which it does by using <code>adjust-array</code> and assignment.</li>
<li><code>flatten/explicit-stack/adjb</code> is like <code>flatten/explicit-stack/adja</code> but uses a local tail-recursive function to <em>bind</em> the extended stack rather than assignment.</li>
<li>Finally <code>flatten/consy-stack</code> is very close to Zyni’s original iterative solution: it represents the stack as a list. This version necessarily conses fairly copiously.</li></ul>
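<p>To give the flavour of the explicit-stack approach, here is a sketch (not the code we actually measured, which is in the repo) with the agenda as a vector and the depth maintained by hand:</p>
<pre class="brush: lisp"><code>(defun flatten/explicit-stack-sketch (o &key (agenda-size 1000))
  ;; Sketch only: COLLECTING / COLLECT are from tfeb-lisp-hax, and the
  ;; real functions handle agenda overflow by adjusting, not erroring.
  (declare (optimize (speed 3)))
  (let ((agenda (make-array agenda-size))
        (depth 0))
    (flet ((agenda-push (thing)
             (when (= depth agenda-size)
               (error "agenda overflow"))
             (setf (svref agenda depth) thing)
             (incf depth))
           (agenda-pop ()
             (decf depth)
             (svref agenda depth)))
      (collecting
        (agenda-push o)
        (loop until (zerop depth)
              do (let ((this (agenda-pop)))
                   (typecase this
                     (null)
                     (cons
                      ;; push the cdr first so the car is handled first
                      ;; and the result is built forwards
                      (agenda-push (cdr this))
                      (agenda-push (car this)))
                     (t (collect this)))))))))</code></pre>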
<h3 id="searching-cons-trees">Searching cons trees</h3>
<p>These functions, in <a href="https://github.com/tfeb/zyni-flatten/blob/main/treesearch-variants.lisp" title="treesearch-variants.lisp"><code>treesearch-variants.lisp</code></a>, correspond to the flattening variants, except they are searching for some atomic value in the tree of conses:</p>
<ul>
<li><code>search/implicit-stack</code> uses an implicit stack;</li>
<li><code>search/explicit-stack</code> uses a vector;</li>
<li><code>search/explicit-stack/adja</code> uses a vector and adjusts by assignment;</li>
<li><code>search/explicit-stack/adjb</code> uses a vector and adjusts by binding;</li>
<li><code>search/consy-stack</code> uses a consy stack.</li></ul>
<h3 id="notes-on-the-code">Notes on the code</h3>
<p>The functions all have <code>(declare (optimize (speed 3)))</code> but specifically <em>don’t</em> turn off safety or use implementation-specific settings: we wanted to test code we felt we’d be happy running, and that means code compiled with reasonable settings for safety: if you turn safety off you’re brave, foolish, or both.</p>
<p>We did not compare <code>looping</code> with <code>do</code> or <code>loop</code>: we probably should. However the expansion of <code>looping</code> is pretty straightforward:</p>
<pre class="brush: lisp"><code>(looping ((this o) (depth 0))
  (declare ...)
  ...)</code></pre>
<p>Turns into</p>
<pre class="brush: lisp"><code>(let ((this o) (depth 0))
  (declare ...)
  (block nil
    (tagbody
      #:start
      (multiple-value-setq (this depth) ...)
      (go #:start))))</code></pre>
<p>The only real question here, we think, is whether <code>multiple-value-setq</code> is compiled well: brief inspection implies it is. We should probably still compare the current version with more ‘native CL’ variants.</p>
<p>The variants which use a vector as a stack maintain the current element themselves: that’s because we tested using a fill pointer and <code>vector-push</code> / <code>vector-pop</code> and it was really significantly slower in both implementations.</p>
<h2 id="what-we-did">What we did</h2>
<h3 id="the-lisp-implementations-we-used">The Lisp implementations we used</h3>
<p>We used LispWorks 8.0 and very recent SBCL builds, compiled from the <code>master</code> branch no more than a few days before we ran the tests in mid March 2023.</p>
<p>In the case of SBCL we paid attention to notes and warnings during compilation. The significant one we did <em>not</em> address was that it complained vociferously about not being able to optimize calls to <code>eql</code>: that’s because we don’t know the type of the thing we are searching for: it <em>needs</em> to do the work it is trying to avoid. Apart from this the only warnings were about the computation of the new length of the agenda, which never actually happens in the tests we ran.</p>
<h3 id="the-machines-we-benchmarked-on">The machines we benchmarked on</h3>
<p>We both have M1-based Macbook Airs so this is what we used. In particular we have not run any benchmarks on x64.</p>
<h3 id="what-we-ran">What we ran</h3>
<p><code>make-car-cdr</code>, in <a href="https://github.com/tfeb/zyni-flatten/blob/main/common.lisp" title="common.lisp"><code>common.lisp</code></a>, makes a list where each element is a chain linked by cars, finally terminating in a specified element. Controlling the length of the list and the depth of the chains gives the functions more iterative or more recursive work to do respectively. The benchmarking code then made a series of suitable structures of increasing size and timed many iterations of each function on the same structure, computing the time per call. We then wrote a program in Racket to plot the results on axes of ‘breadth’ (length of the list) and ‘depth’ (depth of the car-linked chain). For the search functions the element being searched for was not in the tree so they had to do as much work as possible.</p>
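<p>For concreteness, here is a sketch of the sort of structure generator described (the real <code>make-car-cdr</code> lives in <code>common.lisp</code> and may differ in detail):</p>
<pre class="brush: lisp"><code>(defun make-car-cdr-sketch (breadth depth element)
  ;; A list of BREADTH elements, each of which is a chain of DEPTH
  ;; conses linked by their cars, bottoming out in ELEMENT.
  (loop repeat breadth
        collect (let ((chain element))
                  (loop repeat depth
                        do (setf chain (cons chain nil)))
                  chain)))</code></pre>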
<p>Life was usually arranged so that the initial agenda was big enough for the functions which used a vector as the agenda, so none of that aspect of them was tested, except for one case below. Apart from that case, the ‘vector stack’ timings refer to <code>flatten/explicit-stack</code> and <code>treesearch/explicit-stack</code>, not the adjustable-stack variants.</p>
<h2 id="some-results">Some results</h2>
<p>We timed 1,000 iterations of each call, for list lengths (breadth in the plots and below) from 30 to 1,000 in steps of 10 and depths (depth in the plots and below) from 10 to 300 in steps of 10, computing times in μs per iteration. Neither of us knows anything about how data like this should be best presented but simply plotting the performance surfaces seemed reasonable. We used bilinear interpolation to make the surface from the points<sup><a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-2-definition" name="2023-03-26-measuring-some-tree-traversing-functions-footnote-2-return">2</a></sup>.</p>
<h3 id="lispworks">LispWorks</h3>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-implicit-vector.svg" alt="Treesearch: implicit compared with vector stack" />
<p class="caption">Treesearch: implicit compared with vector stack</p></div>
<p>This is nicely linear in both breadth and depth, and so quadratic in breadth \(\times\) depth. And it’s easy to see that for LW using the implicit stack is faster than the manually-managed stack.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-vector-consy.svg" alt="Treesearch: vector stack compared with consy stack" />
<p class="caption">Treesearch: vector stack compared with consy stack</p></div>
<p>This compares the vector stack with the consy stack, for treesearch. The consy stack is slightly faster which surprised us. This conses a list as long as the depth of the tree for each ‘leftward’ branch, and then immediately unwinds that and throws the whole list away. So it creates significant garbage, but the allocation and garbage collection overhead together is still faster than using a vector. Consing really is (almost) free.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-flatten.svg" alt="Treesearch compared with flatten, both with implicit stacks" />
<p class="caption">Treesearch compared with flatten, both with implicit stacks</p></div>
<p>Here is more evidence that consing is very cheap: the difference between treesearch (which does not cons) and flatten (which does) is tiny.</p>
<h3 id="sbcl">SBCL</h3>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/sbcl-treesearch-implicit-vector.svg" alt="Treesearch: implicit compared with vector stack" />
<p class="caption">Treesearch: implicit compared with vector stack</p></div>
<p>So here is SBCL. For SBCL explicitly managing the stack as a vector is significantly faster than the implicit stack. Something that is also apparent here is how variable SBCL’s timings are compared with LW’s: we don’t know why that is although we suspect it might be because SBCL’s garbage collector is more intrusive than LW’s. We also don’t know whether this variation is repeatable, or whether it’s due to a single very slow run or something like that.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/sbcl-treesearch-vector-consy.svg" alt="Treesearch: vector stack compared with consy stack" />
<p class="caption">Treesearch: vector stack compared with consy stack</p></div>
<p>For SBCL the consy stack is significantly slower than the vector stack, so for SBCL the vector stack is the fastest.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/sbcl-treesearch-flatten.svg" alt="Treesearch compared with flatten, both with implicit stacks" />
<p class="caption">Treesearch compared with flatten, both with implicit stacks</p></div>
<p>SBCL has a slightly larger difference between treesearch and flatten, with flatten being slower. There are also curious ‘waves’ in the plot as depth increases.</p>
<h3 id="lispworks-compared-with-sbcl">LispWorks compared with SBCL</h3>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-sbcl-treesearch-implicit.svg" alt="Treesearch: SBCL compared with Lispworks, implicit stacks" />
<p class="caption">Treesearch: SBCL compared with Lispworks, implicit stacks</p></div>
<p>LW is significantly faster than SBCL for implicit stacks except for very small depths.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-sbcl-treesearch-best.svg" alt="Treesearch: SBCL compared with Lispworks, best stacks" />
<p class="caption">Treesearch: SBCL compared with Lispworks, best stacks</p></div>
<p>This compares LW using an implicit stack with SBCL using an explicit vector stack. The difference is pretty small now.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-sbcl-flatten-consy.svg" alt="Flatten: SBCL compared with Lispworks, consy stacks" />
<p class="caption">Flatten: SBCL compared with Lispworks, consy stacks</p></div>
<p>This was meant to be the worst-case for both: flattening and a consy stack. But it’s not particularly informative, I think.</p>
<h3 id="the-outer-reaches-lispworks-with-a-deep-tree">The outer reaches: LispWorks with a deep tree</h3>
<p>We did one run with the maximum depth set to 10,000 with a step of 500, and maximum breadth set to 1,000 with a step of 100, averaged over 100 iterations instead of 1,000. This is too deep for LW’s stack, but LW allows stack extension, and we wrote what later became <a href="https://github.com/tfeb/tfeb-lisp-implementation-hax/blob/main/lw/modules/allowing-stack-extensions.lisp">this</a> to extend the stack as required. Note that this happens only during the first recursion into the left-hand branch of the tree so has minimal effect on performance. This also used <code>search/explicit-stack/adjb</code> for the vector stack.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-implicit-vector-deep.svg" alt="Treesearch: implicit compared with vector stack, deep tree" />
<p class="caption">Treesearch: implicit compared with vector stack, deep tree</p></div>
<p>As before the implicit stack is much better for LW. This is much more bumpy than LW was for smaller depths; this might have been because the machine was doing other things while it was running, but we don’t think so.</p>
<h2 id="some-conclusions">Some conclusions</h2>
<p>None of the differences were really large. In particular there’s no enormous advantage from managing the stack yourself.</p>
<p>Consing and the resulting garbage collection really do seem to be very cheap, especially in LispWorks: the days of long GC pauses are long gone.</p>
<p>We were surprised that LispWorks was fairly reliably faster than SBCL: surprised enough that we ran everything several times to be sure. It’s also interesting how much smoother LW’s performance surface is in most cases.</p>
<p>It is possible that our implementations just suck, of course.</p>
<p>Mostly it’s just some pretty pictures.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-03-26-measuring-some-tree-traversing-functions-footnote-1-definition" class="footnote-definition">
<p>All of the functions should be portable CL. Some of the mechanism for expressing dependencies and loading things is not. However it should be easy for anyone to run this if they wish to. <a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-1-return">↩</a></p></li>
<li id="2023-03-26-measuring-some-tree-traversing-functions-footnote-2-definition" class="footnote-definition">
<p>Getting the bilinear interpolation right took longer than anything else, and perhaps longer than everything else put together. <a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-2-return">↩</a></p></li></ol></div>Variations on a themeurn:https-www-tfeb-org:-fragments-2023-03-13-variations-on-a-theme2023-03-13T12:36:33Z2023-03-13T12:36:33ZTim Bradshaw
<p>My friend Zyni wrote a comment to a thread on reddit with some variations on a list-flattening function. We’ve since spent some time thinking about things related to this, which is written up in a following article. Here is her comment so the following article can refer to it. Other than notes at the end the following text is Zyni’s, not mine.</p>
<!-- more-->
<h2 id="httpswwwredditcomrcommonlispcomments11o1wvmcommentjbt9n54utmsourceshareutmmediumweb2xcontext3the-reddit-comment-by-zyni"><a href="https://www.reddit.com/r/Common_Lisp/comments/11o1wvm/comment/jbt9n54/?utm_source=share&utm_medium=web2x&context=3">The reddit comment by Zyni</a></h2>
<p>First of all we all know that CL does not promise to optimize tail recursion: means that tail recursive program may generate recursive not iterative process. So recursive program in CL <em>even if tail recursive</em> is not safe on data of unknown size, assuming stack is limited.</p>
<p>But let us assume as good implementations do that tail recursion is optimized in implementation (no need for general tail calls here but is obvious nice thing if implementations do this). Certainly if we are deploying code in space we know what implementation we use and can check this.</p>
<p>So we look at this supposed wonder of code, which I rewrite slightly to use <a href="https://tfeb.github.io/tfeb-lisp-hax/#applicative-iteration-iterate" title="iterate"><code>iterate</code> macro</a> which is simply Scheme’s named-<code>let</code> to be compatible with later examples:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; original terrible one
  (iterate ftn ((x o) (accumulator '()))
    (typecase x
      (null accumulator)
      (cons (ftn (car x) (ftn (cdr x) accumulator)))
      (t (cons x accumulator)))))</code></pre>
<p>This … is really bad program. It makes an essential mistake that it wishes to build result forwards but lists wish to be built backwards, so it must therefore recurse (not tail) on cdr of structure first. But most list-based structures have little weight in car but much in cdr, so this will fail <em>even on list which is already flat</em>: <code>(flatten (make-list 100000 :initial-element 1))</code> will fail if your example fails.</p>
<p>Any person presenting this code as good example should be ashamed of self.</p>
<p>So first change: we accept that we must build lists backwards but we change program so that tail call is on cdr not car, and reverse result:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; not TR but better on usual assumptions
  (nreverse
   (iterate ftn ((x o) (accumulator '()))
     (typecase x
       (null accumulator)
       (cons (ftn (cdr x) (ftn (car x) accumulator)))
       (t (cons x accumulator))))))</code></pre>
<p>This function will be fine on assumption of structures which have most weight in their cdrs, which often is true.</p>
<p>Well, you say, ugly <code>reverse</code>. OK this is easy: we simply add in a <a href="https://tfeb.github.io/tfeb-lisp-hax/#collecting-lists-forwards-and-accumulating-collecting" title="collecting"><code>collecting</code> macro</a> which allows construction of list forwards, implementation is obvious (tail pointer). Now we have done this we can also reorder calls to be more obvious (car call, not TR, is now first):</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; not TR, better on usual assumptions, no reverse
  (collecting
    (iterate ftn ((x o))
      (typecase x
        (cons
         (ftn (car x))
         (ftn (cdr x)))
        (null)
        (t (collect x))))))</code></pre>
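<p>Implementation of such <code>collecting</code> might be sketched so (the real one, linked above, is more general and defines <code>collect</code> more carefully):</p>
<pre class="brush: lisp"><code>(defmacro collecting (&body forms)
  ;; Sketch only: build the list forwards by keeping a tail pointer.
  (let ((head (gensym "HEAD"))
        (tail (gensym "TAIL")))
    `(let* ((,head (list nil))          ;sentinel cons
            (,tail ,head))
       (flet ((collect (it)
                ;; append IT, advance the tail pointer, return IT
                (setf (cdr ,tail) (list it)
                      ,tail (cdr ,tail))
                it))
         ,@forms
         (cdr ,head)))))</code></pre>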
<p>This is still not fully TR, so will fail on structures which have much weight in car.</p>
<p>Well, of course, we can deal with this as well: we use explicit agenda to move stack onto heap and turn into pure tail recursive version. First one which builds list backwards in obvious way, therefore needs <code>reverse</code> again:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; pure TR
  (iterate ftn ((agenda (list o))
                (accumulator '()))
    (if (null agenda)
        ;; can write own reverse as tail recursive of course if wish
        ;; to be pure of heart
        (nreverse accumulator)
        (destructuring-bind (this . more) agenda
          (typecase this
            (null
             (ftn more accumulator))
            (cons
             (ftn (list* (car this) (cdr this) more) accumulator))
            (t
             (ftn more (cons this accumulator))))))))</code></pre>
<p>Assuming implementation optimizes tail recursion this will flatten completely arbitrary structure limited only by memory.</p>
<p>We can avoid this reversery of course:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; pure TR, no reverse
  (collecting
    (iterate ftn ((agenda (list o)))
      (when (not (null agenda))
        (destructuring-bind (this . more) agenda
          (typecase this
            (null
             (ftn more))
            (cons
             (ftn (list* (car this) (cdr this) more)))
            (t
             (collect this)
             (ftn more))))))))</code></pre>
<p>As before this is limited only by memory assuming implementation optimizes tail calls.</p>
<hr />
<p>Well, I have written Lisp for only couple of years really (but have maths background). But even I can see that this idea of having to put scary label on recursive function is very bad. Instead people using such code should perhaps <em>read it and understand it</em> to see what its problems and advantages are. Radical idea, I know.</p>
<p>Finally idea that stack space is scarce may or may not be true. Example, if we rewrite original version in Racket (first Lisp I used before being lured to dark side):</p>
<pre class="brush: lisp"><code>(define (flatten o)
  (let ftn ([x o] [accumulator '()])
    (cond
      [(null? x) accumulator]
      [(cons? x) (ftn (car x) (ftn (cdr x) accumulator))]
      [else (cons x accumulator)])))</code></pre>
<p>This will happily ‘flatten’ 100,000 element list and is only limited by memory available because Racket does not treat stack same way.</p>
<hr />
<p>Finally here is variant of final version using <a href="https://tfeb.github.io/tfeb-lisp-hax/#decomposing-iteration-simple-loops" title="simple loops"><code>looping</code> macro</a> which does applicative iteration: this is iterative, on any implementation:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; Iterative
  (collecting
    (looping ((agenda (list o)))
      (when (null agenda)
        (return))
      (destructuring-bind (this . more) agenda
        (typecase this
          (null more)
          (cons (list* (car this) (cdr this) more))
          (t (collect this) more))))))</code></pre>
<p><code>looping</code> part of this turns into:</p>
<pre class="brush: lisp"><code>(let ((agenda (list o)))
  (block nil
    (tagbody
      #:start
      (setq agenda
            (progn
              (when (null agenda) (return))
              (destructuring-bind (this . more) agenda
                (typecase this
                  (null more)
                  (cons (list* (car this) (cdr this) more))
                  (t (collect this) more)))))
      (go #:start))))</code></pre>
<p>which is iterative.</p>
<p>I think <code>iterate</code> one is nicer.</p>
<hr />
<h2 id="notes-from-tim">Notes from Tim</h2>
<p>English is Zyni’s third language: she wanted me to fix up the above but I refused as I find the way she writes so charming.</p>
<p>Both of us would like to know how often <code>flatten</code> is actually used: everyone seems to be very keen on it, but we can’t think of any cases where we’ve ever wanted it or anything very much like it.</p>
<p>All of the macros referenced are ‘mine’ in a somewhat loose sense: They’re all published by me, and some of them are mine, some of them were mine but have been made much better by Zyni, some of them are really hers. There are generally comments in the code. Zyni refuses to have anything but a very minimal internet presence for reasons I used to think were absurd but no longer do: you can’t be too careful when your parents and by extension you might be on the wrong side of Putin.</p>
<p>Zyni is not her real name, obviously.</p>
<h1>Macros (from Zyni)</h1>
<p>2022-08-27, Tim Bradshaw</p>
<blockquote>
<p>It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties. — Alfred Whitehead</p></blockquote>
<!-- more-->
<p>Once upon a time, long ago in a world far away, Lisp had many features which other languages did not have. Automatic storage management, dynamic typing, an interactive environment, lists, symbols … and macros, which allow you to seamlessly extend the language you have into the language you want and need.</p>
<p>But that was long long ago in a world far away where giants roamed the earth, trolls lurked under every bridge and, they say, gods yet lived on certain distant mountains.</p>
<p>Today, and in this world, many many languages have automatic storage management, are dynamically typed, have symbols, lists, interactive environments, and so and so and so. More of these languages arise from the thick, evil-smelling sludge that coats every surface each day: hundreds, if not thousands of them, like flies breeding on bad meat which must be swatted before they lay their eggs on your eyes.</p>
<p>Lisp, today and in this world not another, has <em>exactly one</em> feature which still distinguishes it from the endless buzz of these insect languages. That feature is seamless language extension by macros.</p>
<p>So yes, macros are dangerous, and they are hard and they are frightening. They are dangerous and hard and frightening because all powerful magic is dangerous and hard and frightening. They are dangerous because they are a thing which has escaped here from the future and it is the business of the future to be dangerous.</p>
<p>If macros are too dangerous, too hard and too frightening for you, <em>do not use Lisp</em> because <em>macros are what Lisp is about</em>.</p>
<hr />
<p>This originated as a comment by my friend Zyni: it is used with her permission.</p>

<p><em>Field cameras</em> — Tim Bradshaw, 2021-05-11</p>
<p>A comment by my friend, whose <em>nom de guerre</em> is Zyni Moë, reproduced with her permission. Note that Zyni’s first language is not English.</p>
<!-- more-->
<p>Most people are confused about field cameras. They think are best at driving to some scenery pretending to be Ansel Adams except not as good (not actually sure how good he was now, certainly can’t look at his pictures any more). Perhaps in 1990 this was true: today if you actually wanted to copy Adams you would use some digital camera, perhaps Sigma Quattro with Foveon in fancy-high-res mode, still a lot faster than a field camera, image quality better and even with that camera you can take 30 or 100 pictures in the time you can take one with the wooden box.</p>
<p>Completely wrong use for such a camera in 2020. What is the right use? That is easy: street camera. If you want to take street portraits in 21st century no camera is better than a field camera.</p>
<p>You walk around with some official anointed ‘street camera’ (small, expensive, recognisable) then people notice you because it is not any more 1950 and people are aware of cameras now. And they know you are trying to steal their photograph and, mostly, they don’t like that. If it is the most anointed kind of ‘street camera’ they will notice it even more (anyone who thinks these cameras are discreet in any way has not carried one much) and they know that you are not only trying to steal their photographs, you are almost certainly richer than them. People like even less than the stealing of photographs the stealing of photographs by rich men (always it is men).</p>
<p>Instead you can walk around with a wooden box on a tripod and a bag of rattling bits. No-one, ever, refuses to have their picture taken because it is so interesting and strange. Better, offer them a print in return for their picture: now they give you something and you give them something in return. Yes you do not get the same pictures you would with your pretend-discreet camera: you will not get pictures any one of ten thousand thousand people would take, mostly better than you. You will instead get more interesting pictures, pictures only a few hundred people could take better than you and not many even will try.</p>
<p>Of course you have to walk carrying this huge thing over your shoulder and if you are not so rich and can’t afford a fancy carbon tripod it will be heavy. But humans are good at walking if they will only try.</p>
<p>Well I have not done this but my friend has: is how I met him in fact. I have the print which I value above most things, and not just because he made it.</p>
<hr />
<p>This was originally a comment to <a href="https://theonlinephotographer.typepad.com/the_online_photographer/2020/10/how-to-choose-a-4x5.html">this</a>.</p>