<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Manfred Bergmann&apos;s blog</title>
    <updated>2026-03-02T01:00:00+01:00</updated>
    <id>http://retro-style.software-by-mabe.com/</id>
    <author>
        <name>Manfred Bergmann / Software by MaBe</name>
    </author>
    <link href="http://retro-style.software-by-mabe.com/"></link>
    <entry>
        <title type="html"><![CDATA[ ACE BASIC 3.0 - Classes IEEE and More ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/ACE+BASIC+3.0+-+Classes+IEEE+and+More"></link>
        <updated>2026-03-02T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/ACE+BASIC+3.0+-+Classes+IEEE+and+More</id>
        <content type="html"><![CDATA[ <h3>ACE BASIC 3.0</h3>

<p>The previous posts followed ACE BASIC from v2.5 through v2.9 -- AGA screens, GadTools, closures, MUI, structs, RTG graphics, tail-call optimization, and HTTP networking. Version 3.0 is a major release. It introduces an object system with generic functions, switches the floating-point format to IEEE 754, adds type-based pattern matching, atoms, multitasking primitives, and ships a set of new submodules including a JSON parser. There is a lot here, so let's go through it.</p>

<h3>Object System</h3>

<p>This is the headline feature. ACE BASIC now has classes with single inheritance and polymorphic dispatch via generic functions. The new keywords are <code>CLASS</code>, <code>METHOD</code>, <code>EXTENDS</code>, and <code>GENERIC</code>.</p>

<p>If you are familiar with Common Lisp's CLOS or Julia's multiple dispatch, the design will feel natural. Classes define data. Methods are standalone functions that take class instances as parameters. Generic declarations wire up the runtime dispatch.</p>

<h4>Defining a class</h4>

<p>A class groups data members together, much like a struct but with a type identity that enables runtime dispatch:</p>

<pre class="basic"><code>CLASS Disc
    LONGINT tag
    SINGLE radius
END CLASS

CLASS Rect
    LONGINT tag
    SINGLE w
    SINGLE h
END CLASS</code></pre>

<p>Classes contain only data -- no methods are defined inside the class block. Instance creation and member access use the same <code>-&gt;</code> syntax as structs:</p>

<pre class="basic"><code>DECLARE CLASS Disc d
d-&gt;radius = 5.0

DECLARE CLASS Rect r
r-&gt;w = 10.0
r-&gt;h = 3.0</code></pre>

<p>Each instance carries a hidden type descriptor at offset 0. This is what the runtime uses for dispatch.</p>

<h4>Methods and generic dispatch</h4>

<p>Methods are defined outside the class. The first parameter is a typed class instance, which tells the runtime which class this method specialization belongs to:</p>

<pre class="basic"><code>METHOD Mark(Disc c)
    c-&gt;tag = 1
END METHOD

METHOD Mark(Rect r)
    r-&gt;tag = 2
END METHOD</code></pre>

<p>Two methods with the same name, each taking a different class. To enable runtime dispatch, you declare a <code>GENERIC</code>:</p>

<pre class="basic"><code>GENERIC METHOD Mark(CLASS)
    ON Disc
    ON Rect
END GENERIC</code></pre>

<p>The <code>GENERIC</code> declaration says: &quot;<code>Mark</code> is a generic function that dispatches on one class argument. There are specializations for <code>Disc</code> and <code>Rect</code>.&quot; The <code>CLASS</code> placeholder in the signature marks the dispatched parameter. <code>ON</code> lists the concrete types that have specializations.</p>

<p>Now when you call <code>Mark</code>, the runtime checks the actual type of the argument and dispatches to the correct specialization:</p>

<pre class="basic"><code>Mark(d)    '..calls Mark(Disc c), sets d-&gt;tag to 1
Mark(r)    '..calls Mark(Rect r), sets r-&gt;tag to 2</code></pre>

<p>Methods can return typed values, just like FUNCTION:</p>

<pre class="basic"><code>METHOD SINGLE CalcArea(Disc c)
    CalcArea = c-&gt;radius * c-&gt;radius
END METHOD

METHOD SINGLE CalcArea(Rect r)
    CalcArea = r-&gt;w * r-&gt;h
END METHOD

GENERIC SINGLE METHOD CalcArea(CLASS)
    ON Disc
    ON Rect
END GENERIC

SINGLE area
area = CalcArea(d)    '..25.0
area = CalcArea(r)    '..30.0</code></pre>

<p>They can also take additional non-dispatched parameters:</p>

<pre class="basic"><code>METHOD LONGINT Scale(Disc c, LONGINT factor)
    Scale = 100 + factor
END METHOD

METHOD LONGINT Scale(Rect r, LONGINT factor)
    Scale = 200 + factor
END METHOD

GENERIC LONGINT METHOD Scale(CLASS, LONGINT)
    ON Disc
    ON Rect
END GENERIC

LONGINT s
s = Scale(d, 5)    '..105
s = Scale(r, 5)    '..205</code></pre>

<h4>Multiple dispatch</h4>

<p>This is where things get interesting. A generic function can dispatch on more than one class parameter. Consider a collision detection system:</p>

<pre class="basic"><code>CLASS Disc
    SINGLE radius
END CLASS

CLASS Rect
    SINGLE w
    SINGLE h
END CLASS

GENERIC LONGINT METHOD Collide(CLASS, CLASS)
    ON Disc, Disc
    ON Disc, Rect
    ON Rect, Disc
    ON Rect, Rect
END GENERIC

METHOD LONGINT Collide(Disc a, Disc b)
    Collide = 11
END METHOD

METHOD LONGINT Collide(Disc a, Rect b)
    Collide = 12
END METHOD

METHOD LONGINT Collide(Rect a, Disc b)
    Collide = 21
END METHOD

METHOD LONGINT Collide(Rect a, Rect b)
    Collide = 22
END METHOD

DECLARE CLASS Disc c1, c2
DECLARE CLASS Rect r1, r2

LONGINT result

result = Collide(c1, c2)    '..11 (disc-disc)
result = Collide(c1, r1)    '..12 (disc-rect)
result = Collide(r1, c1)    '..21 (rect-disc)
result = Collide(r1, r2)    '..22 (rect-rect)</code></pre>

<p>The runtime dispatches on the types of both arguments simultaneously. Each combination of types maps to a different method specialization. This is genuine multiple dispatch -- the same mechanism found in CLOS and Julia, now available in ACE BASIC.</p>

<h4>Inheritance</h4>

<p>Classes support single inheritance with <code>EXTENDS</code>. Child classes inherit all parent members:</p>

<pre class="basic"><code>CLASS Shape
    LONGINT x
    LONGINT y
END CLASS

CLASS Rect EXTENDS Shape
    LONGINT w
    LONGINT h
END CLASS

CLASS ColorRect EXTENDS Rect
    LONGINT col
END CLASS</code></pre>

<p>The memory layout follows the inheritance chain. <code>Shape</code> takes 12 bytes (4 for the type descriptor, 4 each for x and y). <code>Rect</code> adds w and h for 20 bytes. <code>ColorRect</code> adds col for 24 bytes. Parent members are always at the same offsets, so a <code>Rect</code> can be passed anywhere a <code>Shape</code> is expected.</p>
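<p>If <code>SIZEOF</code> extends to classes the way it does to structs (an assumption on my part), the layout can be checked directly:</p>

<pre class="basic"><code>PRINT SIZEOF(Shape)       '..12 (descriptor + x + y)
PRINT SIZEOF(Rect)        '..20 (+ w, h)
PRINT SIZEOF(ColorRect)   '..24 (+ col)</code></pre>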

<p>Generic dispatch walks the parent chain. If a child class has no specialization for a generic function, the runtime walks up the inheritance tree until it finds one:</p>

<pre class="basic"><code>GENERIC LONGINT METHOD Info(CLASS)
    ON Shape
    ON Rect
END GENERIC

METHOD LONGINT Info(Shape s)
    Info = 1
END METHOD

METHOD LONGINT Info(Rect r)
    Info = 2
END METHOD

DECLARE CLASS Shape s1
DECLARE CLASS Rect r1
DECLARE CLASS ColorRect cr1

Info(s1)     '..returns 1 (direct match: Shape)
Info(r1)     '..returns 2 (direct match: Rect)
Info(cr1)    '..returns 2 (inherited: ColorRect -&gt; Rect)</code></pre>

<p><code>ColorRect</code> has no <code>Info</code> specialization, so the runtime walks up: ColorRect -&gt; Rect, finds a match, and dispatches there. If there were only a <code>Shape</code> specialization, a three-level walk (ColorRect -&gt; Rect -&gt; Shape) would find it.</p>

<h4>Atom dispatch</h4>

<p>The <code>ATOM</code> type (more on this below) can also participate in generic dispatch. This enables pattern matching on symbolic values mixed with class types:</p>

<pre class="basic"><code>CLASS Widget
    LONGINT id
END CLASS

CLASS Knob
    LONGINT id
END CLASS

GENERIC LONGINT METHOD React(CLASS, ATOM)
    ON Widget, #:click
    ON Widget, #:hover
    ON Knob, #:click
END GENERIC

METHOD LONGINT React(Widget w, #:click evt)
    React = 10 + w-&gt;id
END METHOD

METHOD LONGINT React(Widget w, #:hover evt)
    React = 20 + w-&gt;id
END METHOD

METHOD LONGINT React(Knob b, #:click evt)
    React = 30 + b-&gt;id
END METHOD

DECLARE CLASS Widget wg
DECLARE CLASS Knob bt
wg-&gt;id = 1
bt-&gt;id = 2

React(wg, #:click)    '..11
React(wg, #:hover)    '..21
React(bt, #:click)    '..32</code></pre>

<p>Dispatch happens on both the class type and the atom value. This is a natural fit for event handling -- the class identifies the widget, the atom identifies the event kind.</p>

<h3>TYPECASE</h3>

<p>Related to the object system is <code>TYPECASE</code>, which provides type-based pattern matching with variable narrowing:</p>

<pre class="basic"><code>CLASS Animal
    LONGINT legs
END CLASS

CLASS Dog EXTENDS Animal
    LONGINT goodboy
END CLASS

SUB LONGINT CheckDog(Animal a)
  LONGINT result
  result = 0
  TYPECASE a
    CASE Dog
      result = a-&gt;goodboy
    CASE ELSE
      result = a-&gt;legs
  END TYPECASE
  CheckDog = result
END SUB

DECLARE CLASS Dog d
d-&gt;legs = 4
d-&gt;goodboy = 1

DECLARE CLASS Animal a
a-&gt;legs = 99

CheckDog(d)    '..returns 1 (matched Dog, reads goodboy)
CheckDog(a)    '..returns 99 (fell through to ELSE, reads legs)</code></pre>

<p>Inside the <code>CASE Dog</code> branch, the variable <code>a</code> is narrowed to <code>Dog</code> type, so <code>a-&gt;goodboy</code> is accessible even though the SUB parameter is declared as <code>Animal</code>. The matching follows ISA semantics -- a <code>Dog</code> instance matches both <code>CASE Dog</code> and <code>CASE Animal</code>, so order matters. Put specific types first.</p>

<h3>ATOM Type</h3>

<p>Atoms are a new primitive type for lightweight symbolic constants. The literal syntax uses <code>#:</code> followed by a name:</p>

<pre class="basic"><code>ATOM status
status = #:ok

IF status = #:ok THEN
  PRINT "All good"
END IF</code></pre>

<p>Atoms are compile-time constants whose integer values are derived from their names via FNV-1a hashing. They are useful for tagging and dispatch -- anywhere you would otherwise define a set of <code>CONST</code> values. As shown above, atoms can also participate in generic method dispatch, which makes them especially powerful for event-driven patterns.</p>

<p>Atoms can also be dispatched on their own without classes:</p>

<pre class="basic"><code>GENERIC LONGINT METHOD Process(ATOM)
    ON #:ok
    ON #:fail
    ON #:retry
END GENERIC

METHOD LONGINT Process(#:ok result)
    Process = 1
END METHOD

METHOD LONGINT Process(#:fail result)
    Process = -1
END METHOD

METHOD LONGINT Process(#:retry result)
    Process = 0
END METHOD

Process(#:ok)      '..returns 1
Process(#:fail)    '..returns -1
Process(#:retry)   '..returns 0</code></pre>

<h3>IEEE 754 Floating Point</h3>

<p>This is a breaking change, and a necessary one. ACE has used Motorola Fast Floating Point (FFP) since its original release in the early 1990s. FFP is a non-standard 32-bit format that was fast on the 68000 but is incompatible with everything else. No modern toolchain, library, or hardware uses it.</p>

<p>Version 3.0 migrates to IEEE 754 single-precision floating point throughout the compiler and runtime. All float literals, constants, and runtime operations now use the standard format. This means:</p>

<ul>
<li>Float values are compatible with C libraries and OS functions that expect IEEE floats</li>
<li>The VBCC compiler (which replaced GCC in this release) handles IEEE floats natively</li>
<li>Math operations use the <code>mathieeesingbas</code> and <code>mathieeesingtrans</code> libraries instead of the FFP equivalents</li>
<li>Existing programs that rely on specific FFP bit patterns need to be recompiled</li>
</ul>

<p>For most programs, recompiling is all that is needed. The syntax is identical -- <code>SINGLE</code> is still the type, and float literals look the same. The difference is under the hood. This change also fixed a crash when printing float values, caused by K&amp;R float parameter promotion mismatches between the FFP and IEEE calling conventions.</p>
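<p>Nothing changes at the source level -- a sketch of what now happens under the hood:</p>

<pre class="basic"><code>SINGLE x
x = 0.5
' Before 3.0: x held an FFP bit pattern usable only by ACE itself.
' Now: x is an IEEE 754 single (0.5 = $3F000000), so it can be
' handed to C libraries and OS functions that expect float.
PRINT x</code></pre>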

<h3>TASKPROC -- Multitasking Support</h3>

<p>The Amiga is a multitasking operating system, and ACE can now launch Exec tasks. The <code>TASKPROC</code> keyword marks a zero-parameter SUB as a task entry point:</p>

<pre class="basic"><code>SUB BackgroundWork TASKPROC
  ' This runs as a separate Exec task
  ' Automatically saves/restores registers
  ' Calls Wait(0) before returning
END SUB</code></pre>

<p>A <code>TASKPROC</code> SUB automatically saves and restores registers on entry and exit, and calls <code>Wait(0)</code> before returning to signal the parent that it is done. It takes no parameters and cannot be called directly from ACE code -- it is meant to be passed to the new <code>taskutil.b</code> submodule:</p>

<pre class="basic"><code>REM #using ace:submods/taskutil/taskutil.o

#include &lt;submods/taskutil.h&gt;

TaskLaunch("worker", @BackgroundWork, 4096)
' ... do other work ...
TaskTerminate("worker")</code></pre>

<p><code>TaskLaunch</code> creates an Exec task with the given name, entry point, and stack size. <code>TaskGetData</code> retrieves a task's data pointer for inter-task communication. <code>TaskTerminate</code> signals a task to shut down.</p>
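<p>A sketch of how these pieces might combine for inter-task communication -- the exact <code>TaskGetData</code> signature here is my assumption, so check <code>taskutil.h</code>:</p>

<pre class="basic"><code>SUB Worker TASKPROC
  ' ..runs concurrently with the main program..
END SUB

TaskLaunch("worker", @Worker, 4096)

' Hypothetical: fetch the task's data pointer by name
ADDRESS shared
shared = TaskGetData("worker")
' ..exchange data through the pointer..

TaskTerminate("worker")</code></pre>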

<h3>New Submodules</h3>

<p>Version 3.0 ships seven new submodules. The most interesting ones build on the new object system.</p>

<h4>Hashmap</h4>

<p>The <code>hashmap.b</code> submodule implements a CLASS-based string-keyed hashmap with open addressing:</p>

<pre class="basic"><code>REM #using ace:submods/hashmap/hashmap.o

#include &lt;submods/hashmap.h&gt;

DECLARE CLASS Hashmap map
map = HmNew(32)

HmPut(map, "name", "ACE BASIC")
HmPut&(map, "version", 3)

PRINT HmGet$(map, "name")       '..prints "ACE BASIC"
PRINT HmGet&(map, "version")    '..prints 3

HmFree(map)</code></pre>

<p>It stores typed values (string, integer, long, single, address) keyed by string. More examples are in the <code>submods/hashmap/</code> folder.</p>

<h4>Dynamic Array</h4>

<p>The <code>dynarray.b</code> submodule provides a growable, type-tagged indexed collection:</p>

<pre class="basic"><code>REM #using ace:submods/dynarray/dynarray.o

#include &lt;submods/dynarray.h&gt;

DECLARE CLASS Dynarray arr
arr = DaNew(16)

DaAdd&(arr, 10)
DaAdd&(arr, 20)
DaAdd&(arr, 30)

PRINT DaGet&(arr, 0)    '..prints 10
PRINT DaSize(arr)        '..prints 3

DaFree(arr)</code></pre>

<p>It supports iteration, a builder pattern, searching, higher-order functions, sorting, and automatic growth when elements are added beyond the initial capacity. Where the List submodule from v2.8 gives you linked-list semantics, Dynarray gives you indexed random access. More examples are in the <code>submods/dynarray/</code> folder.</p>
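<p>Indexed access means a plain FOR loop suffices for iteration, using only the functions shown above:</p>

<pre class="basic"><code>FOR i% = 0 TO DaSize(arr) - 1
  PRINT DaGet&(arr, i%)    '..10, 20, 30
NEXT i%</code></pre>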

<h4>JSON</h4>

<p>The <code>json.b</code> submodule is a complete JSON parser, generator, and pretty-printer. It uses Hashmap and Dynarray as its intermediate representation:</p>

<pre class="basic"><code>REM #using ace:submods/json/json.o
REM #using ace:submods/hashmap/hashmap.o
REM #using ace:submods/dynarray/dynarray.o

#include &lt;submods/json.h&gt;

ADDRESS root

root = JsonParse("{""name"":""ACE"",""version"":3,""features"":[""objects"",""ieee""]}")

PRINT JsonGetStr$(root, "name")     '..prints "ACE"
PRINT JsonGetLng&(root, "version")  '..prints 3

JsonPrettyPrint(root)
JsonFree(root)</code></pre>

<p>Having JSON support means ACE programs can now parse configuration files, consume web API responses (using the HTTP client from v2.9), or generate structured output. The combination of HTTP client and JSON parser makes it possible to write practical network clients in ACE BASIC. More examples are in the <code>submods/json/</code> folder.</p>

<h4>Other submodules</h4>

<ul>
<li><strong>fad.b</strong> (Files And Directories): Over 20 SUBs for file system operations -- existence checks, metadata queries, path manipulation, and directory iteration. Examples in <code>submods/fad/</code>.</li>
<li><strong>iff.b</strong>: IFF ILBM picture loading, extracted from the built-in compiler commands into a standalone submodule.</li>
<li><strong>testkit.b</strong>: Shared test assertion library used across all submodule test suites, eliminating duplicated test boilerplate.</li>
</ul>

<h3>Bounded String Operations</h3>

<p>This is a safety improvement that happens under the hood. ACE's string operations (<code>LET</code>, <code>MID$</code>, <code>LINE INPUT#</code>, etc.) did not previously check destination buffer sizes. A string longer than the target buffer would silently overwrite adjacent memory -- the kind of bug that causes mysterious crashes hours later.</p>

<p>Version 3.0 adds bounded string operations at the runtime level. The compiler now emits the destination buffer size alongside string assignments, and the runtime's <code>_strncpy</code> and <code>_strncat</code> functions enforce the limit. This applies to string variable assignments, array element assignments, struct member assignments, and <code>LINE INPUT#</code> from files.</p>

<p>There is no syntax change. Existing code benefits automatically when recompiled.</p>
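<p>As an illustration (a sketch -- the point is the clipped copy, not the exact behavior at the boundary):</p>

<pre class="basic"><code>STRUCT Person
  STRING name SIZE 8
END STRUCT

DECLARE STRUCT Person p

' The source exceeds the 8-byte buffer. Previously this would
' silently overwrite whatever followed the struct in memory;
' now the runtime clips the copy at the declared size.
p-&gt;name = "a string longer than eight bytes"</code></pre>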

<h3>Brief Mentions</h3>

<ul>
<li><p><strong>VBCC Toolchain</strong>: The compiler build itself now uses the VBCC compiler instead of GCC. The runtime libraries have been rebuilt with VBCC as well. This simplifies the build process and aligns the whole toolchain around a single compiler.</p></li>
<li><p><strong>EXIT WHILE / EXIT REPEAT</strong>: You can now break out of <code>WHILE...WEND</code> and <code>REPEAT...UNTIL</code> loops early, analogous to the existing <code>EXIT FOR</code>. A small quality-of-life addition.</p></li>
<li><p><strong>FREE Statement</strong>: Per-block memory deallocation. <code>FREE</code> releases memory allocated by <code>ALLOC</code> for a specific block, while <code>CLEAR ALLOC</code> frees everything. This gives you finer control over memory lifetime.</p></li>
<li><p><strong>SUB Tracing</strong>: The <code>-t</code> compiler flag and <code>TRON</code>/<code>TROFF</code> runtime commands let you trace SUB, FUNCTION, and METHOD entry and exit. Useful for debugging complex call chains and generic dispatch.</p></li>
<li><p><strong>CyberGraphX Support</strong>: Screen mode 13 now also works with CyberGraphX in addition to Picasso96, broadening RTG hardware compatibility.</p></li>
<li><p><strong>Struct SUB Parameters</strong>: You can now use struct type names directly as SUB parameter types (e.g. <code>SUB Foo(MyStruct s)</code>), and the compiler generates the pointer setup automatically. No more <code>DECLARE STRUCT</code> boilerplate at the top of every SUB that takes a struct.</p></li>
<li><p><strong>Runtime Optimizations</strong>: Lookup tables, O(1) argument access, and dynamic allocation improvements in the runtime libraries.</p></li>
<li><p><strong>Replaced ami.lib with amiga.lib</strong>: The custom <code>ami.lib</code> has been replaced with the standard <code>amiga.lib</code> for better compatibility, plus dedicated <code>ace_clib.s</code> and <code>ieee_math.s</code> modules for ACE-specific needs.</p></li>
</ul>
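<p>For example, <code>EXIT WHILE</code> mirrors the existing <code>EXIT FOR</code>:</p>

<pre class="basic"><code>LONGINT n
n = 0
WHILE n &lt; 100
  n = n + 1
  IF n = 10 THEN
    EXIT WHILE
  END IF
WEND
'..n is 10 here</code></pre>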

<h3>Conclusion</h3>

<p>Version 3.0 is a very big release. The object system with classes, generic functions, and multiple dispatch turns ACE into a language where you can model problems with proper abstractions. The design follows the multimethod tradition -- classes define data, methods are standalone, dispatch happens at runtime based on actual types. IEEE 754 floats align ACE with every other toolchain and library in existence. TYPECASE and atoms provide clean pattern matching. The new submodules -- hashmap, dynarray, JSON -- show what the object system enables in practice. And bounded string operations make the runtime safer by default.</p>

<p>Combined with the HTTP client from v2.9, ACE can now fetch JSON from a web API, store the results in a hashmap, iterate with a dynamic array, and display them in a MUI interface. That is a long way from where this project started.</p>

<p>The project lives on <a href="https://github.com/mdbergmann/ACEBasic" target="_blank" class="link">GitHub</a>. Bug reports and feature requests are welcome.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ ACE BASIC - Structs RTG and More ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+Structs+RTG+and+More"></link>
        <updated>2026-02-16T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+Structs+RTG+and+More</id>
        <content type="html"><![CDATA[ <h3>Another round of updates</h3>

<p>The previous posts covered ACE BASIC up to v2.8 -- closures, MUI, linked lists, and CubicIDE integration. Version 2.9 is out now with struct enhancements, new string functions, RTG graphics, tail-call optimization, an HTTP client, double-precision floats, and SAGA audio.</p>

<h3>Struct Enhancements</h3>

<p>Structs in ACE have been fairly basic until now -- flat collections of scalar and string fields. Version 2.9 changes that. Structs can now contain typed arrays, embed other structs, hold typed pointers to structs, reference their own type, and contain arrays of structs. This makes it possible to model real data structures without falling back to raw PEEK/POKE arithmetic.</p>

<h4>Typed array members</h4>

<p>Previously, only <code>STRING</code> could use the <code>SIZE</code> keyword to declare a fixed-size buffer inside a struct. Now any base type works:</p>

<pre class="basic"><code>STRUCT Packet
  BYTE header SIZE 4
  LONGINT values SIZE 10
  SHORTINT flags SIZE 8
  SINGLE coords SIZE 3
  STRING name SIZE 32
END STRUCT</code></pre>

<p>Each array member reserves the appropriate amount of space inline in the struct. Access is through indexed notation:</p>

<pre class="basic"><code>DECLARE STRUCT Packet p

p-&gt;values(0) = 100&
p-&gt;values(1) = 200&
p-&gt;coords(0) = 1.5
p-&gt;coords(2) = 3.5

FOR i% = 0 TO 3
  p-&gt;header(i%) = 65 + i%
NEXT i%</code></pre>

<p>The index can be a constant, a variable, or an expression. The compiler generates the correct element size multiplication and offset calculation automatically.</p>

<h4>Nested structs</h4>

<p>Structs can now embed other structs as members. The <code>-&gt;</code> operator chains through the nesting:</p>

<pre class="basic"><code>STRUCT Vec2
  LONGINT x
  LONGINT y
END STRUCT

STRUCT Rect
  Vec2 topLeft
  Vec2 bottomRight
END STRUCT

DECLARE STRUCT Rect r

r-&gt;topLeft-&gt;x = 10&
r-&gt;topLeft-&gt;y = 20&
r-&gt;bottomRight-&gt;x = 100&
r-&gt;bottomRight-&gt;y = 200&</code></pre>

<p>This also works with deeper nesting. A struct that embeds a struct that embeds another struct gives you three levels of <code>-&gt;</code> chaining. The compiler resolves the offsets at compile time, so there is no runtime cost for the nesting.</p>

<h4>Typed struct pointers</h4>

<p>A struct member can be declared as a pointer to a specific struct type. This tells the compiler what type lives at the other end, so you can chain <code>-&gt;</code> through the pointer:</p>

<pre class="basic"><code>STRUCT Inner
  LONGINT x
  LONGINT y
END STRUCT

STRUCT Outer
  Inner *ptr
  LONGINT z
END STRUCT

DECLARE STRUCT Outer o

o-&gt;ptr = ALLOC(SIZEOF(Inner))
o-&gt;ptr-&gt;x = 42&
o-&gt;ptr-&gt;y = 99&</code></pre>

<p>Without typed pointers, you would have to store a plain <code>ADDRESS</code>, cast it manually, and use PEEK/POKE. With typed pointers, the compiler knows the layout and does the offset math for you.</p>

<h4>Self-referential structs and struct arrays</h4>

<p>A struct can contain a pointer to its own type, which is the classic building block for linked lists and trees:</p>

<pre class="basic"><code>STRUCT Node
  STRING name
  Node *next
END STRUCT</code></pre>

<p>And you can embed a fixed-size array of structs inside another struct:</p>

<pre class="basic"><code>STRUCT Vec2
  LONGINT x
  LONGINT y
END STRUCT

STRUCT Polygon
  Vec2 pts SIZE 5
  LONGINT count
END STRUCT

DECLARE STRUCT Polygon poly

poly-&gt;pts(0)-&gt;x = 10&
poly-&gt;pts(0)-&gt;y = 20&
poly-&gt;pts(2)-&gt;x = 50&</code></pre>

<p>The syntax <code>poly-&gt;pts(i)-&gt;field</code> combines struct array indexing with field access in a single expression. This is probably the most complex access pattern ACE supports now, and it works with variable indices and in loops.</p>

<h3>New String Functions</h3>

<p>Version 2.9 adds twelve new string functions, plus an in-place <code>MID$</code> statement form. Here is a quick overview:</p>
<table>
<thead>
<tr><th>Function</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td><code>TRIM$</code> / <code>LTRIM$</code> / <code>RTRIM$</code></td><td>Strip leading/trailing whitespace</td></tr>
<tr><td><code>STARTSWITH</code> / <code>ENDSWITH</code></td><td>Test prefix or suffix (returns boolean)</td></tr>
<tr><td><code>RINSTR</code></td><td>Search for substring from the right</td></tr>
<tr><td><code>REPLACE$</code></td><td>Replace all occurrences of a substring</td></tr>
<tr><td><code>REVERSE$</code></td><td>Reverse a string</td></tr>
<tr><td><code>REPEAT$</code></td><td>Repeat a string N times</td></tr>
<tr><td><code>LPAD$</code> / <code>RPAD$</code></td><td>Pad to a given width</td></tr>
<tr><td><code>FMT$</code></td><td>sprintf-style formatting</td></tr>
<tr><td><code>MID$</code> (statement)</td><td>In-place modification of a substring</td></tr>
</tbody>
</table>
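<p>A few of these in action -- note that the argument order for <code>REPEAT$</code> is my assumption, so consult the manual:</p>

<pre class="basic"><code>PRINT TRIM$("  hello  ")     '..prints "hello"
PRINT REVERSE$("ACE")        '..prints "ECA"
PRINT REPEAT$("ab", 3)       '..prints "ababab"</code></pre>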

<p>The one I find most useful is <code>FMT$</code>. It works like C's <code>sprintf</code> but returns a BASIC string:</p>

<pre class="basic"><code>msg$ = FMT$("%s has %d items", "list", 42)
'..result: "list has 42 items"

hex$ = FMT$("addr: %08x", 255)
'..result: "addr: 000000FF"</code></pre>

<p>It supports <code>%s</code>, <code>%d</code>, <code>%x</code>, <code>%c</code>, and <code>%%</code> with up to eight arguments. This is much cleaner than concatenating strings with <code>STR$</code> calls.</p>

<h3>P96/RTG Screen Support</h3>

<p>Until now, ACE only supported planar Amiga screens -- OCS, ECS, and AGA modes that use bitplane graphics. Version 2.9 adds Picasso96 retargetable graphics with a new screen mode 13. This gives you chunky (linear) framebuffers with 8, 15, 16, 24, or 32-bit color depth. It works on any P96-compatible hardware.</p>

<p>Opening a P96 screen is straightforward:</p>

<pre class="basic"><code>SCREEN 1, 800, 600, 8, 13</code></pre>

<p>Mode 13 tells ACE to use P96 instead of the native chipset. The depth parameter sets the color depth in bits. For 8-bit screens, you get a 256-color palette just like AGA, but the framebuffer is chunky instead of planar. For HiColor and TrueColor depths, there is a new <code>COLOR r,g,b</code> syntax for direct RGB drawing.</p>
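<p>On a TrueColor screen, that might look like this sketch (the exact <code>COLOR</code> component ranges are an assumption):</p>

<pre class="basic"><code>SCREEN 1, 800, 600, 24, 13      ' 24-bit TrueColor via P96
WINDOW 1,"",(0,0)-(799,599),0,1

COLOR 255, 128, 0               ' direct RGB -- no palette involved
LINE (10,10)-(400,300)</code></pre>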

<p>All the standard drawing commands -- <code>LINE</code>, <code>CIRCLE</code>, <code>PRINT</code>, <code>LOCATE</code> -- work on P96 screens because Picasso96 patches the graphics library. But you can also write directly to the framebuffer via <code>POKE</code> for maximum speed:</p>

<pre class="basic"><code>' Get the chunky framebuffer address
bitmapAddr& = SCREEN(4)
frameAddr& = PEEKL(bitmapAddr& + 8)

' Write a pixel at (x, y) on an 8-bit screen
POKE frameAddr& + CLNG(y%) * CLNG(800) + CLNG(x%), colorIndex%</code></pre>

<p>This opens up possibilities for software rendering, chunky-to-chunky blitting, and effects that would be painful on planar screens.</p>

<p>The following example combines direct framebuffer writes (the rainbow gradient) with OS drawing primitives (lines, rectangles, circles) on the same chunky buffer:</p>

<pre class="basic"><code>' P96 Chunky Screen - 800x600, 256 colors
SCREEN 1, 800, 600, 8, 13
WINDOW 1,"",(0,0)-(799,599),0,1

' Rainbow palette
FOR i% = 0 TO 253
  ' ... compute r%, g%, b% for rainbow gradient ...
  PALETTE i%, r%/255, g%/255, b%/255
NEXT

' Direct framebuffer fill with rainbow bars
bitmapAddr& = SCREEN(4)
frameAddr& = PEEKL(bitmapAddr& + 8)
FOR y% = 0 TO 599
  i% = (y% * 254) / 600
  FOR x% = 0 TO 799
    POKE frameAddr& + CLNG(y%) * 800& + CLNG(x%), i%
  NEXT
NEXT

' OS drawing on top: lines, circles, filled shapes
COLOR 255, 0
LINE (20,80)-(380,80)
CIRCLE (200,180),80
CIRCLE (200,180),60,,,,f
LINE (400,80)-(600,180),200,bf</code></pre>

<h3>Tail-Call Optimization</h3>

<p>Recursive functions in ACE use the system stack. Each call pushes a frame, and if the recursion is deep enough, you run out of stack and crash. Version 2.9 adds automatic tail-call optimization (TCO) for self-recursive SUBs with numeric parameters. When the compiler detects that a recursive call is the last thing a SUB does, it replaces the call with a jump back to the top of the SUB, reusing the same stack frame.</p>

<p>Enable it with <code>OPTION O+</code>:</p>

<pre class="basic"><code>OPTION O+

SUB LONGINT Gcd(a&, b&)
  IF b& = 0 THEN
    Gcd = a&
  ELSE
    IF a& &lt; b& THEN
      Gcd = Gcd(b&, a&)
    ELSE
      Gcd = Gcd(a& - b&, b&)
    END IF
  END IF
END SUB

' This subtraction-based variant would overflow the stack without TCO
ASSERT Gcd(1000000, 3) = 1</code></pre>

<p>Without TCO, <code>Gcd(1000000, 3)</code> requires around 333,000 recursive calls, each with its own stack frame -- far more stack than any default allocation provides. With TCO, it uses a single reused frame of about 24 bytes regardless of depth.</p>

<p>The optimization works through the peephole optimizer. After the compiler generates the recursive <code>JSR</code> instruction, the peephole pass recognizes the pattern (restore frame, JSR to self, return) and replaces it with parameter shuffling and a <code>BRA</code> to the function entry. No parser changes were needed.</p>

<p>TCO only applies when the recursive call is in tail position -- there must be no computation after the call. The accumulator pattern is the standard way to write tail-recursive functions:</p>

<pre class="basic"><code>OPTION O+

SUB LONGINT Factorial(n&, acc&)
  IF n& &lt;= 1 THEN
    Factorial = acc&
  ELSE
    Factorial = Factorial(n& - 1, n& * acc&)
  END IF
END SUB

result& = Factorial(12, 1)   '..479001600</code></pre>

<h3>HTTP Client</h3>

<p>Version 2.9 ships an HTTP client submodule that provides networking from ACE BASIC. It supports HTTP and HTTPS (via AmiSSL), chunked transfer encoding, and streaming responses. The implementation is split into three submodules: <code>tcpclient.b</code> for raw TCP sockets, <code>amissl.b</code> for TLS, and <code>httpclient.b</code> for the HTTP protocol layer.</p>

<p>The API is struct-based. You declare connection and request/response structs, then call the appropriate functions:</p>

<pre class="basic"><code>#include &lt;submods/httpclient.h&gt;

DECLARE STRUCT TcpConn conn
DECLARE STRUCT HttpRequest req
DECLARE STRUCT HttpResponse resp

LONGINT status

' High-level: one-call HTTP HEAD
status = HttpHead(req, resp, conn, "http://www.google.com/")
PRINT "Status:"; status    '..prints 200</code></pre>

<p>For more control, there is a low-level API that lets you open a connection, send a request, read headers, and read the body in chunks:</p>

<pre class="basic"><code>rc = HttpOpen(req, conn, "www.example.com", 80, 0)
rc = HttpSendRequest(req, conn, "GET", "/")
status = HttpReadStatus(conn, resp)
bytes = HttpReadBody(conn, resp, buffer, bufferSize)
HttpClose(conn)</code></pre>
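<p>Streaming a large body is then a loop over <code>HttpReadBody</code> (a sketch -- I am assuming a non-positive return value signals end of body):</p>

<pre class="basic"><code>REPEAT
  bytes = HttpReadBody(conn, resp, buffer, bufferSize)
  ' ..process the chunk in buffer..
UNTIL bytes &lt;= 0</code></pre>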

<p>This is the first time ACE BASIC has any kind of network access built in. Fetching data from a web API, downloading files, or even posting data to a server is now possible from BASIC.</p>

<h3>Brief Mentions</h3>

<p>A few more additions worth noting:</p>

<ul>
<li><p><strong>DP-Float submodule</strong>: Double-precision floating-point math via the <code>mathieeedoubbas</code> and <code>mathieeedoubtrans</code> libraries. ACE's native <code>SINGLE</code> type uses Motorola Fast Floating Point (32-bit). The DP-Float submodule gives you 64-bit IEEE doubles with 32 functions covering arithmetic, trigonometry, hyperbolic functions, exponents, and string conversion. Originally written by David Benn (the creator of ACE), now integrated as an external submodule.</p></li>
<li><p><strong>SAGA Sound submodule</strong>: 16-bit audio playback on Vampire V4 hardware using the SAGA chipset. Supports 16 channels, 8 or 16-bit samples, stereo volume control, and sample rates up to 56 kHz.</p></li>
<li><p><strong>Turtle graphics moved to submodule</strong>: The 13 built-in turtle commands (<code>FORWARD</code>, <code>BACK</code>, <code>TURNRIGHT</code>, etc.) have been removed from the compiler and runtime. They now live in the <code>turtle.b</code> submodule. Existing programs just need to add <code>#include &lt;submods/turtle.h&gt;</code> and link the submodule. This keeps the compiler smaller and makes the turtle library easier to maintain independently.</p></li>
<li><p><strong>Buffered File I/O</strong>: The runtime functions behind <code>LINE INPUT #</code>, <code>INPUT #</code>, and <code>INPUT$</code> now use bulk <code>Read</code>+<code>Seek</code> calls instead of reading one character at a time. This yields roughly a 12x throughput improvement for file-reading operations.</p></li>
<li><p><strong>CubicIDE plugin improvements</strong>: The plugin now uses <code>regedit</code> for proper preset and filetype registration. The autocase dictionary, syntax highlighting, and quickinfo have been updated to cover all current keywords and functions.</p></li>
</ul>

<h3>Conclusion</h3>

<p>Version 2.9 makes ACE BASIC significantly more capable for systems programming. Structs now support the kind of nesting and composition you need for working with OS data structures and building your own. RTG support opens up high-resolution, high-color graphics beyond the Amiga chipset. Tail-call optimization makes recursive algorithms practical. And the HTTP client brings network access to ACE BASIC for the first time.</p>

<p>The project lives on <a href="https://github.com/mdbergmann/ACEBasic" target="_blank" class="link">GitHub</a>. Bug reports and feature requests are welcome.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Developing with AI - Understanding the Context ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Developing+with+AI+-+Understanding+the+Context"></link>
        <updated>2026-02-13T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Developing+with+AI+-+Understanding+the+Context</id>
        <content type="html"><![CDATA[ <h3>Intro</h3>

<p>AI coding tools like Claude Code have become part of many developers' daily work. They can write code, run tests, search a codebase, and carry out complex multi-step tasks. But to use them well -- and to avoid surprises in the middle of a session -- you need to understand one key concept: <strong>the context window</strong>.</p>

<p>This post explains what the context is, how it works, why running out of it makes your results worse, and what you can do to stay in control.</p>

<h3>What Is the Context?</h3>

<p>Here is the key idea: <strong>the context is an array</strong>. It is a list of message objects on the client side. This list gets sent to the LLM with every single API call. The LLM itself has no state. It has no memory between calls. Everything it &quot;knows&quot; about your conversation is only there because the client sends it each time.</p>

<p>The array follows a strict pattern where <code>user</code> and <code>assistant</code> messages take turns:</p>

<pre class=""><code>messages = [
  { role: "user",      content: "Please refactor the auth module" },
  { role: "assistant", content: [text blocks, tool_use blocks] },
  { role: "user",      content: [tool_result blocks] },
  { role: "assistant", content: [text blocks, tool_use blocks] },
  ...
]</code></pre>

<p>The <code>content</code> field of each element can be a plain string or an array of typed content blocks. These blocks include:</p>

<ul>
<li><strong>Text blocks</strong>: The actual text from you or the assistant.</li>
<li><strong>Tool use blocks</strong>: When the AI wants to read a file, run a command, or search your code, it creates a tool_use block with the tool name and its parameters.</li>
<li><strong>Tool result blocks</strong>: After the tool runs, its output goes back into the array as a tool_result block in the next user message.</li>
<li><strong>Thinking blocks</strong>: When extended thinking is turned on, the AI's reasoning steps show up as thinking blocks. These are large but get removed from older turns to save space.</li>
</ul>
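<p>To make this concrete, here is the same array sketched in Python after a single tool-using turn. The block shapes follow the Anthropic Messages API; the IDs and file contents are made up:</p>

<pre class="python"><code># A minimal sketch of the message array after one tool-using turn.
# Block shapes follow the Anthropic Messages API; IDs and file
# contents are invented for illustration.
messages = [
    {"role": "user", "content": "What does config.py set LOG_LEVEL to?"},
    {"role": "assistant", "content": [
        {"type": "text", "text": "Let me read that file."},
        {"type": "tool_use", "id": "toolu_01", "name": "read_file",
         "input": {"path": "config.py"}},
    ]},
    # The tool output comes back as the next *user* message -- from the
    # LLM's point of view, tool output is just more input:
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01",
         "content": "LOG_LEVEL = \"DEBUG\""},
    ]},
]

roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'user']</code></pre>

<p>Every one of these blocks -- including the full file contents inside the tool result -- is re-sent to the model on every subsequent call.</p>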

<p>There is also a <strong>system prompt</strong> that is sent along with the array but is not part of it. It holds the AI's main instructions -- what tools it has, how it should behave, what rules to follow. In Claude Code, this system prompt is quite large.</p>

<p>The key point to remember: <strong>this array is the AI's entire short-term memory</strong>. If something is not in the array, the AI does not know about it. If the array gets too long, older content gets shortened or removed. Every tool call, every file read, every command output -- it all goes into this array and takes up space.</p>

<h3>CLAUDE.md -- Instructions That Stay in the Context</h3>

<p>AI coding tools support project-level instruction files that get loaded into the context when a session starts. In Claude Code, this file is called <code>CLAUDE.md</code>. Other tools like Cursor use <code>AGENTS.md</code> or similar names, but the idea is the same.</p>

<p>When a Claude Code session starts, it reads <code>CLAUDE.md</code> files from several places:</p>

<ul>
<li><strong>Project root</strong>: <code>./CLAUDE.md</code> -- shared with your team through version control.</li>
<li><strong>User-level</strong>: <code>~/.claude/CLAUDE.md</code> -- your personal settings for all projects.</li>
<li><strong>Local overrides</strong>: <code>./CLAUDE.local.md</code> -- personal, project-specific, not committed.</li>
<li><strong>Auto memory</strong>: <code>~/.claude/projects/&lt;project&gt;/memory/MEMORY.md</code> -- notes that Claude saves from earlier sessions.</li>
</ul>

<p>These files are added to the context as system reminders. They stay there for the whole session and survive compaction (more on that below). This makes <code>CLAUDE.md</code> the right place for things that should never be forgotten: build commands, coding rules, architecture decisions, test strategies.</p>

<p>But there is a trade-off. Everything in <code>CLAUDE.md</code> uses context space on every API call. If you put 5,000 tokens of instructions in it, that is 5,000 tokens fewer for your actual conversation. So keep it short, and only put things there that are always needed.</p>
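<p>As a concrete (entirely made-up) example, a lean <code>CLAUDE.md</code> might look like this:</p>

<pre class="markdown"><code># CLAUDE.md

## Build &amp; Test
- Build: make build
- Run tests: make test (unit only; integration tests need Docker)

## Rules
- TypeScript strict mode; no `any`
- All DB access goes through src/db/repository.ts

## Architecture
- REST API in src/api, background jobs in src/jobs</code></pre>

<p>Short, factual, always relevant -- nothing in there that a single session could do without.</p>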

<h3>Context Window Limits</h3>

<p>Every LLM has a maximum context window size -- the upper limit on how large the array can be. Current Claude models offer:</p>
<table>
<thead>
<tr><th>Model</th><th>Context Window</th><th>Max Output</th></tr>
</thead>
<tbody>
<tr><td>Claude Opus 4.6</td><td>200K tokens</td><td>128K tokens</td></tr>
<tr><td>Claude Sonnet 4.5</td><td>200K tokens</td><td>64K tokens</td></tr>
<tr><td>Claude Haiku 4.5</td><td>200K tokens</td><td>64K tokens</td></tr>
</tbody>
</table>

<p>There is also a 1M token beta for some models, but the default is 200K. That sounds like a lot, but it fills up faster than you might think. Let's look at what goes into the array during a typical session:</p>

<ul>
<li>System prompt: ~10-15K tokens</li>
<li>CLAUDE.md files: 1-5K tokens</li>
<li>Each file you read: hundreds to thousands of tokens</li>
<li>Each tool call and result: different sizes, but it adds up fast</li>
<li>Each conversation turn: your message plus the AI's answer</li>
<li>Extended thinking: can be very large per turn (but gets removed from older turns)</li>
</ul>

<p>A session where you read ten files, run a few commands, and have some back-and-forth can easily use 100K+ tokens. A complex session that touches many files can hit the limit within an hour.</p>
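<p>A quick back-of-the-envelope calculation makes this concrete. The numbers below are rough assumptions, not measured values:</p>

<pre class="python"><code># Rough context budget for a session in a 200K-token window.
# All numbers are illustrative estimates, not measured values.
WINDOW = 200_000

usage = {
    "system prompt":      13_000,
    "CLAUDE.md files":     3_000,
    "10 files read":      40_000,   # ~4K tokens per file
    "30 tool calls":      30_000,   # ~1K per call + result
    "conversation turns": 20_000,
}

used = sum(usage.values())
print(f"used: {used} tokens ({used * 100 // WINDOW}% of the window)")
# With these estimates, more than half the window is gone after
# a fairly ordinary amount of work.</code></pre>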

<h3>What Happens When You Run Out: Compaction</h3>

<p>When the context array gets close to the window limit, Claude Code triggers <strong>auto-compaction</strong>. This happens at about 83% of the context window (around 167K tokens for a 200K window). Here is what happens:</p>

<ol>
<li>The system makes an extra API call asking the AI to summarize the whole conversation so far.</li>
<li>The summary replaces all previous messages in the array.</li>
<li>The conversation continues with just the summary as history.</li>
</ol>
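<p>The mechanism can be sketched as a toy model in Python -- the token counter and the summarizer here are stand-ins, not the real implementation:</p>

<pre class="python"><code># Toy model of auto-compaction: when usage crosses ~83% of the
# window, the entire history is replaced by one summary message.
WINDOW = 200_000
THRESHOLD = 0.83

def count_tokens(messages):
    # Stand-in: real token counting is model-specific.
    return sum(len(m["content"]) // 4 for m in messages)

def summarize(messages):
    # Stand-in for the extra API call that produces the summary.
    return {"role": "user",
            "content": f"[Summary of {len(messages)} earlier messages]"}

def maybe_compact(messages):
    if count_tokens(messages) / WINDOW >= THRESHOLD:
        return [summarize(messages)]   # original details are gone for good
    return messages</code></pre>

<p>Note the return value in the compaction branch: the new array contains nothing but the summary. Whatever the summary omits no longer exists anywhere.</p>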

<p>This sounds fine in theory. In practice, compaction has real problems:</p>

<p><strong>You will lose information.</strong> A summary cannot keep every detail. Specific variable names, exact error messages, careful decisions from earlier in the session -- these get shortened into approximations. The AI may &quot;forget&quot; things you decided earlier.</p>

<p><strong>It costs money.</strong> The summary step is an extra API call using the same model. You pay for it.</p>

<p><strong>The timing is hard to predict.</strong> Auto-compaction triggers based on token count, not at a good moment in your work. It might happen right in the middle of a complex change across many files, losing track of what was already done and what still needs doing.</p>

<p><strong>Problems can get worse over time.</strong> If important instructions get lost during compaction, the AI may start making mistakes. Those mistakes create more context (error messages, corrections), which leads to more compaction, which loses more context. This is a downward spiral.</p>

<p>You can trigger compaction manually with <code>/compact</code> (and even guide it with <code>/compact focus on the API changes</code>). This gives you more control over what gets kept. But the basic problem stays: once context is compacted, the original details are gone.</p>

<h3>The Goal: Stay Within the Context Window</h3>

<p>The best strategy is simple: <strong>do not let compaction happen</strong>. If you can finish your task within the context window, the AI has full access to everything that was said and done during the session. No summaries, no lost details, no degradation over time.</p>

<p>This means being careful about how you use context:</p>

<ul>
<li>Do not load whole files into the conversation if you only need a few functions. Point the AI at specific line ranges.</li>
<li>Use <code>/context</code> to check your usage. Know where you stand before starting a big task.</li>
<li>Be aware that MCP servers add tool definitions to every request. A few MCP servers can use a lot of context before you even write a single line.</li>
<li>Break large tasks into phases (see below).</li>
</ul>

<p>A good rule of thumb: if you think your task will use more than 80% of the context window, split it into phases. If you are already at 95% and almost done, push through. Otherwise, plan for a clean context reset.</p>

<h3>Multi-Phase Development with State Files</h3>

<p>For tasks too large for a single context window -- a big refactoring, a new feature across many files, a migration -- I find the best approach is <strong>multi-phase development with state files</strong>.</p>

<p>The idea is straightforward:</p>

<ol>
<li><strong>Break the task into phases</strong> that each fit within a context window.</li>
<li><strong>Keep a state file</strong> that holds everything needed to continue from one phase to the next.</li>
<li><strong>Reset the context between phases</strong> by starting a new session and having the AI read the state file.</li>
</ol>

<p>The state file is the key. It works as a handoff document that connects one context to the next. A good state file looks something like this:</p>

<pre class="markdown"><code># Project State: Auth Module Migration

## Goal
Migrate from session-based auth to JWT tokens across the API.

## Completed (Phase 1)
- Created JWT utility module at src/auth/jwt.ts
- Updated User model with refresh token field
- Added token generation to login endpoint
- Tests passing for jwt.ts (14/14)

## In Progress (Phase 2)
- Replacing session checks in middleware (3 of 7 routes done)
- Routes completed: /api/users, /api/projects, /api/settings
- Routes remaining: /api/billing, /api/admin, /api/webhooks, /api/export

## Decisions Made
- Using RS256 algorithm (asymmetric) for token signing
- Access token TTL: 15 minutes
- Refresh token TTL: 7 days
- Storing refresh tokens in database, not Redis

## Known Issues
- /api/admin has custom middleware that needs special handling
- Rate limiter depends on session ID; needs new key strategy

## Next Steps
1. Continue middleware migration for remaining routes
2. Update rate limiter to use JWT subject claim
3. Add token refresh endpoint</code></pre>

<p>When you start a new phase, the conversation is fresh. The AI reads the state file, sees where things stand, and picks up where the last phase stopped -- without carrying the weight of everything that happened before.</p>

<p>This approach has several nice properties:</p>

<ul>
<li><strong>Each phase gets the full context window.</strong> No compaction, no degradation.</li>
<li><strong>The state file is easy to read.</strong> You can check it, edit it, and fix mistakes before the next phase.</li>
<li><strong>It works across sessions, machines, and even different AI tools.</strong> It is just a markdown file.</li>
<li><strong>It forces you to think about how to split tasks.</strong> This usually leads to better results regardless of which tools you use.</li>
</ul>

<p>You can ask the AI to create and update the state file as part of each phase: &quot;Before we finish this phase, update the state file with what we did and what comes next.&quot;</p>

<h3>Subagents: Separate Contexts for Parallel Work</h3>

<p>Claude Code has another way to manage context well: <strong>subagents</strong>. These are separate AI instances that the main agent can give tasks to. The important thing is that each subagent runs in its <strong>own, separate context window</strong>.</p>

<p>When the main agent starts a subagent, here is what happens:</p>

<ol>
<li>A new AI instance is created with a fresh, empty context.</li>
<li>The subagent only gets a task description and its own system prompt -- not the main conversation history.</li>
<li>The subagent works on its own: reading files, searching code, running commands, making many tool calls.</li>
<li>When done, the subagent sends back a <strong>short summary</strong> of what it found to the main agent.</li>
<li>Only that summary goes into the main agent's context array.</li>
</ol>
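<p>As a toy model, the handoff looks like this -- real subagents are separate model instances with their own tool loops, and the file path in the summary is invented:</p>

<pre class="python"><code># Toy model of subagent delegation: the subagent's own transcript
# (file reads, searches, reasoning) never enters the main context;
# only its final summary does.
def run_subagent(task):
    sub_context = [{"role": "user", "content": task}]
    # ...many tool calls accumulate here, in the subagent only...
    sub_context.append({"role": "user", "content": "[20 files read]"})
    summary = "Auth uses JWT middleware in src/auth; see the login flow."
    return summary   # sub_context is discarded

main_context = [{"role": "user", "content": "How does auth work?"}]
result = run_subagent("Explore the codebase and explain authentication")
main_context.append({"role": "assistant", "content": result})

# The main context grew by one message, not by twenty files.
print(len(main_context))  # 2</code></pre>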

<p>This is important: all the work the subagent did -- every file it read, every search it ran, every step of reasoning -- stays in the subagent's own context. It does not fill up the main context. The main agent only gets the final result.</p>

<p>Claude Code has several built-in subagent types:</p>

<ul>
<li><strong>Explore</strong>: Fast code search (runs on a smaller, faster model).</li>
<li><strong>Plan</strong>: Research and design approaches (read-only, no file changes).</li>
<li><strong>General-purpose</strong>: Complex tasks with many steps and full tool access.</li>
<li><strong>Bash</strong>: Command execution in a separate context.</li>
</ul>

<p>The main agent works as a coordinator. It decides when to hand off work, what to hand off, and how to use the results. You can even run several subagents at the same time -- for example, one searches for all uses of an old API while another reads the migration guide.</p>

<p>The practical benefit for context management is quite significant. Think of a task where you need to understand how authentication works across a large codebase. Without subagents, the main agent reads file after file, and each file goes into the main context. Twenty files later, you have used a huge part of your context window just for exploration.</p>

<p>With subagents, the main agent just hands off the work: &quot;Explore the codebase and explain how authentication works.&quot; The Explore subagent reads those twenty files in its own context, puts the findings together, and sends back a two-paragraph summary. The main context gets those two paragraphs instead of twenty files worth of content. Pretty cool.</p>

<p>There are limits. Subagents cannot start other subagents (no nesting). And if many subagents return detailed results, those summaries still use main context space. But when used wisely, subagents are one of the best tools for keeping the main context lean.</p>

<h3>Practical Tips</h3>

<p>A few more strategies worth knowing:</p>

<p><strong>Use CLAUDE.md for lasting context.</strong> Anything that should survive across sessions -- build commands, rules, architecture notes -- goes in CLAUDE.md. It is reloaded on every API call and survives compaction.</p>

<p><strong>Manual compaction is better than auto-compaction.</strong> If you must compact, do it manually at a good stopping point (<code>/compact</code>) instead of letting it trigger at random. You can guide the summary: <code>/compact focus on the database migration progress</code>.</p>

<p><strong>Use git as a checkpoint.</strong> Commit often during AI-assisted sessions. If context gets worse after compaction, you can always start a fresh session and point the AI at the git log to understand what happened.</p>

<p><strong>Check usage with <code>/context</code>.</strong> This command shows you what is using space. Run it before starting a big task.</p>

<p><strong>Structured data survives compaction better than prose.</strong> If you are tracking task lists or test results, use structured formats (markdown tables, JSON) instead of long descriptions.</p>

<h3>Conclusion</h3>

<p>The context window is the basic constraint of AI-assisted development. Understanding it -- knowing that it is an array on the client, that the AI has no state, that every interaction uses space, that compaction loses information -- changes how you work with these tools.</p>

<p>The most effective developers I have seen treat context like a limited resource. They plan their sessions, split large tasks into phases, use state files to pass information between phases, hand off exploration to subagents, and try to avoid hitting the compaction wall.</p>

<p>The tools are powerful. But they work best when you understand what is happening behind the scenes.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ ACE BASIC - Closures MUI and More ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+Closures+MUI+and+More"></link>
        <updated>2026-02-10T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+Closures+MUI+and+More</id>
        <content type="html"><![CDATA[ <h3>Picking up where we left off</h3>

<p>The previous two posts covered ACE BASIC v2.5 (AGA screen support) and v2.6 (GadTools gadgets, ASSERT, 68020 code generation). Development has not slowed down. Versions 2.7 and 2.8 bring functional programming features, a high-level MUI interface, Lisp-style linked lists, double-buffered graphics, and CubicIDE integration. There is a lot of ground to cover, so let's get started.</p>

<h3>Closures and Function Pointers</h3>

<p>The biggest language addition in v2.7 is support for function pointers, partial application, and closures. These are the building blocks for higher-order programming -- passing behavior around as data.</p>

<h4>Function references and INVOKE</h4>

<p>The <code>@</code> operator takes a reference to a SUB and returns an address you can store and call later with <code>INVOKE</code>:</p>

<pre class="basic"><code>DECLARE SUB Hello

funcPtr& = @Hello
INVOKE funcPtr&

SUB Hello
  PRINT "Hello from a function pointer!"
END SUB</code></pre>

<p><code>@Hello</code> produces a long integer that holds the address of the <code>Hello</code> subroutine. <code>INVOKE funcPtr&amp;</code> calls whatever SUB that address points to. This is the simplest form -- no arguments, no return value, just indirect dispatch.</p>

<h4>BIND and partial application</h4>

<p>Things get more interesting with <code>BIND</code>. It captures a function reference together with one or more arguments, producing a closure that remembers the bound values:</p>

<pre class="basic"><code>DECLARE SUB LONGINT AddN(LONGINT n, LONGINT x)

adder& = BIND(@AddN, 5)
result& = INVOKE adder&(10)
PRINT result&               '..prints 15

SUB LONGINT AddN(LONGINT n, LONGINT x)
  AddN = n + x
END SUB</code></pre>

<p><code>BIND(@AddN, 5)</code> creates a closure that captures <code>5</code> as the first argument to <code>AddN</code>. When you <code>INVOKE adder&amp;(10)</code>, it calls <code>AddN(5, 10)</code> and returns 15. The bound value is captured at bind time -- if you change the variable later, the closure still sees the original value.</p>

<p>This is partial application: you fix some arguments now and supply the rest later. (It is often loosely called currying, though currying strictly means turning a multi-argument function into a chain of one-argument functions.)</p>

<h4>Returning closures from SUBs</h4>

<p>You can create closures inside a SUB and return them to the caller. This is the classic &quot;factory&quot; pattern:</p>

<pre class="basic"><code>DECLARE SUB LONGINT AddN(LONGINT n, LONGINT x)
DECLARE SUB LONGINT MakeAdder(LONGINT n)

add5& = MakeAdder(5)
result& = INVOKE add5&(10)
PRINT result&               '..prints 15

SUB LONGINT MakeAdder(LONGINT n)
  MakeAdder = BIND(@AddN, n)
END SUB

SUB LONGINT AddN(LONGINT n, LONGINT x)
  AddN = n + x
END SUB</code></pre>

<p><code>MakeAdder(5)</code> returns a closure that adds 5 to whatever you pass it. The local variable <code>n</code> is captured by value inside the closure, so it survives after <code>MakeAdder</code> returns.</p>

<h4>The INVOKABLE keyword</h4>

<p>Version 2.8 adds the <code>INVOKABLE</code> keyword for SUBs that are meant to be used as callbacks -- particularly for the List library's higher-order functions and similar patterns where closures are passed as <code>ADDRESS</code> parameters:</p>

<pre class="basic"><code>DECLARE SUB LONGINT Transformer(LONGINT v) INVOKABLE
DECLARE SUB LONGINT MapValue(ADDRESS cb, LONGINT in)

cb& = BIND(@Transformer)
result& = MapValue(cb&, 7)
PRINT result&               '..prints 14

SUB LONGINT Transformer(LONGINT v) INVOKABLE
  Transformer = v * 2
END SUB

SUB LONGINT MapValue(ADDRESS cb, LONGINT in)
  MapValue = INVOKE cb(in)
END SUB</code></pre>

<p>When a closure is passed as a generic <code>ADDRESS</code> parameter (as <code>cb</code> in <code>MapValue</code> above), the compiler cannot know at compile time whether it points to a plain SUB or a closure with bound arguments. <code>INVOKABLE</code> generates the calling convention that allows <code>INVOKE</code> to detect this at runtime and do the right thing. Without it, passing a closure as a callback could silently produce wrong results.</p>

<h3>Lisp-Style Linked Lists</h3>

<p>Closures become genuinely useful when you have data structures that accept callbacks. Version 2.8 ships a List submodule that implements Lisp-style linked lists built from cons cells. Each cell holds a typed value (integer, long, single, string, or nested list) and a pointer to the next cell.</p>

<h4>Building lists</h4>

<p>The builder pattern provides a clean way to construct lists:</p>

<pre class="basic"><code>#include &lt;submods/list.h&gt;

ADDRESS myList

LNew
  LAdd&(10)
  LAdd&(20)
  LAdd&(30)
myList = LEnd</code></pre>

<p><code>LNew</code> starts a new list, <code>LAdd&amp;</code> appends a long integer value, and <code>LEnd</code> returns the finished list. The <code>&amp;</code> suffix indicates the type -- <code>LAdd%</code> for integers, <code>LAdd!</code> for singles, <code>LAdd$</code> for strings, <code>LAddList</code> for nested lists.</p>

<p>You can also build lists directly with <code>LCons&amp;</code> (prepend) or <code>LSnoc&amp;</code> (append), but the builder pattern reads more naturally for most cases.</p>

<h4>Higher-order functions</h4>

<p>The real payoff is the set of higher-order functions that operate on lists using closures:</p>

<pre class="basic"><code>DECLARE SUB ADDRESS DoubleValue(ADDRESS carVal, SHORTINT typeTag) INVOKABLE
DECLARE SUB SHORTINT IsEven(ADDRESS carVal, SHORTINT typeTag) INVOKABLE
DECLARE SUB ADDRESS SumValues(ADDRESS acc, ADDRESS carVal, SHORTINT typeTag) INVOKABLE

ADDRESS nums, doubled, evens

'..Build a list: (1 2 3 4 5 6)
LNew
  FOR i% = 1 TO 6 : LAdd&(i%) : NEXT i%
nums = LEnd

'..Map: double every element -&gt; (2 4 6 8 10 12)
doubled = LMap(nums, BIND(@DoubleValue))

'..Filter: keep only even numbers -&gt; (2 4 6)
evens = LFilter(nums, BIND(@IsEven))

'..Reduce: sum all elements -&gt; 21
LONGINT total
total = LReduce(nums, BIND(@SumValues), 0&)

LFree(nums)
LFree(doubled)
LFree(evens)

SUB ADDRESS DoubleValue(ADDRESS carVal, SHORTINT typeTag) INVOKABLE
  LONGINT lngVal
  lngVal = carVal
  DoubleValue = lngVal * 2
END SUB

SUB SHORTINT IsEven(ADDRESS carVal, SHORTINT typeTag) INVOKABLE
  LONGINT lngVal
  lngVal = carVal
  IsEven = (lngVal MOD 2 = 0)
END SUB

SUB ADDRESS SumValues(ADDRESS acc, ADDRESS carVal, SHORTINT typeTag) INVOKABLE
  LONGINT accLng, valLng
  accLng = acc
  valLng = carVal
  SumValues = accLng + valLng
END SUB</code></pre>

<p>Every callback receives the cell's raw value as <code>ADDRESS carVal</code> and a type tag as <code>SHORTINT typeTag</code>. The type tag tells you what kind of value the cell holds (<code>LTypeInt</code>, <code>LTypeLng</code>, <code>LTypeSng</code>, <code>LTypeStr</code>, <code>LTypeList</code>). Since our list contains only long integers, the callbacks here just cast <code>carVal</code> to <code>LONGINT</code> directly. A generic callback would dispatch on <code>typeTag</code> to handle multiple types -- the test suite in the repository shows that pattern.</p>

<p><code>LMap</code> applies a callback to every element and returns a new list. <code>LFilter</code> returns a new list containing only elements for which the callback returns non-zero. <code>LReduce</code> folds the list into a single value using an accumulator. All three take a <code>BIND(@callback)</code> closure, which is where the <code>INVOKABLE</code> keyword matters.</p>

<p>The submodule also provides <code>LForEach</code> for side-effecting iteration, and destructive variants <code>LNmap</code> and <code>LNfilter</code> that modify the list in place. The full API is documented in the <a href="https://github.com/mdbergmann/ACEBasic/tree/master/submods/list" target="_blank" class="link">List submodule README</a>.</p>

<h3>MUI Support</h3>

<p>MUI (Magic User Interface) is the standard third-party GUI toolkit on the Amiga. It provides object-oriented widgets with automatic layout, font sensitivity, user-customizable appearance, and a consistent look across applications. Most serious Amiga applications from the mid-1990s onward use MUI.</p>

<p>Version 2.7 adds a MUI submodule that wraps the raw MUI API into builder-style calls. To appreciate what it does, consider the alternative.</p>

<h4>The raw approach</h4>

<p>Programming MUI directly from ACE BASIC means working with tag arrays and <code>MUI_NewObjectA</code> calls. A minimal &quot;Hello World&quot; window takes around 150 lines of code: you allocate tag items, fill in tag IDs and values, create each MUI object by hand, set up notifications with <code>DoMethodA</code>, run the event loop, and dispose everything. The <a href="https://github.com/mdbergmann/ACEBasic/blob/master/examples/mui/SimpleMUI.b" target="_blank" class="link">SimpleMUI.b</a> example in the repository shows this approach in full.</p>

<h4>The submodule approach</h4>

<p>With the MUI submodule, the same program fits in about 30 lines:</p>

<pre class="basic"><code>#include &lt;submods/MUI.h&gt;

LIBRARY "intuition.library"
LIBRARY "utility.library"

ADDRESS app, win, grp, txt

MUIInit

txt = MUITextCentered("Hello from MUI!")

MUIBeginVGroup
    MUIGroupFrameT("Welcome")
    MUIChild(txt)
grp = MUIEndGroup

win = MUIWindow("Hello MUI", grp)
app = MUIApp("HelloMUI", "$VER: HelloMUI 1.0", win)

IF app &lt;&gt; 0& THEN
    MUINotifyClose(win, app, MUIV_Application_ReturnID_Quit)
    MUIWindowOpen(win)

    WHILE MUIWaitEvent(app) &lt;&gt; MUIV_Application_ReturnID_Quit
    WEND

    MUIDispose(app)
END IF

MUICleanup

LIBRARY CLOSE "utility.library"
LIBRARY CLOSE "intuition.library"</code></pre>

<p><code>MUIInit</code> opens muimaster.library. <code>MUITextCentered</code> creates a text object. <code>MUIBeginVGroup</code>/<code>MUIEndGroup</code> define a vertical layout group. <code>MUIWindow</code> and <code>MUIApp</code> wrap the objects into a window and application. The event loop calls <code>MUIWaitEvent</code> which blocks until something happens and returns an event ID. <code>MUIDispose</code> frees the entire object tree.</p>

<p>The pattern for buttons is similarly concise -- create them with <code>MUIButton</code>, set up click notifications with <code>MUINotifyButton</code>, and dispatch on event IDs in the loop.</p>

<p>The submodule currently provides wrappers for text, buttons, string and integer input fields, checkmarks, cycle gadgets, radio buttons, list views, horizontal and vertical groups, menus, tabs, and hooks. That covers most typical application GUIs.</p>

<p>Here is a screenshot of the MUI File Browser example -- a more complete application built with the submodule:</p>

<p><img src="/static/gfx/blogs/MUIFileBrowser.jpg" alt="MUI File Browser" width="720" /></p>

<h3>Double-Buffered Graphics</h3>

<p>When you draw directly to the visible framebuffer, the display can update mid-frame and show a partially drawn image. This is screen tearing, and it ruins any kind of smooth animation.</p>

<p>The classic solution is double buffering: draw to a hidden back buffer, then swap it with the visible front buffer during the vertical blank interval. Version 2.7 includes a <code>DoubleBuffer.h</code> include file that implements this entirely in ACE BASIC -- no compiler changes were needed.</p>

<h4>How it works</h4>

<p>On the Amiga, the hardware displays whatever bitmap the ViewPort's RasInfo points to, and drawing commands go to whatever bitmap the RastPort points to. Double buffering exploits this separation:</p>

<ol>
<li><code>DbufInit</code> allocates a second bitmap with <code>AllocBitMap</code> (matching the screen's dimensions and depth) and redirects the RastPort to draw into it.</li>
<li><code>DbufSwap</code> makes the back buffer visible by updating <code>RasInfo-&gt;BitMap</code> and calling <code>ScrollVPort</code> to regenerate the copper list, then <code>WaitTOF</code> to sync with the vertical blank. Drawing is then redirected to the previously-displayed buffer.</li>
<li><code>DbufCleanup</code> restores the original bitmap and frees the allocated memory.</li>
</ol>

<p>The bitmap pointer swaps are done with <code>POKEL</code> -- direct memory writes to the RastPort and RasInfo structures at their documented offsets.</p>

<h4>The bouncing ball demo</h4>

<p>Here is the core animation loop from <code>examples/gfx/dbuf_demo.b</code>:</p>

<pre class="basic"><code>#include &lt;ace/DoubleBuffer.h&gt;

SCREEN 1,320,256,4,1
WINDOW 1,,(0,0)-(320,256),32,1

DbufInit

IF NOT DbufReady THEN
  PRINT "Failed to allocate back buffer!"
  WINDOW CLOSE 1
  SCREEN CLOSE 1
  STOP
END IF

SINGLE bx, by, dx, dy
bx = 160 : by = 128 : dx = 3 : dy = 2

WHILE INKEY$ = ""
  LINE (0,0)-(319,255),0,bf         '..clear back buffer
  bx = bx + dx : by = by + dy

  IF bx - 15 &lt; 0 OR bx + 15 &gt;= 320 THEN dx = -dx : bx = bx + dx
  IF by - 15 &lt; 0 OR by + 15 &gt;= 256 THEN dy = -dy : by = by + dy

  CIRCLE (CINT(bx), CINT(by)), 15, 2,,,,F   '..filled ball
  CIRCLE (CINT(bx), CINT(by)), 15, 1        '..outline

  COLOR 3
  LOCATE 1,1
  PRINTS "Double Buffer Demo - Press any key"

  DbufSwap                           '..swap and sync
WEND

DbufCleanup
WINDOW CLOSE 1
SCREEN CLOSE 1</code></pre>

<p>Each frame: clear the back buffer, update positions, draw, swap. The ball bounces smoothly without any tearing.</p>

<p>One important gotcha: <code>DbufCleanup</code> must be called before <code>SCREEN CLOSE</code>. The second bitmap is allocated with the OS <code>AllocBitMap</code> call, which is not tracked by ACE's automatic cleanup. If you skip <code>DbufCleanup</code>, that memory leaks until reboot.</p>

<h3>CubicIDE Integration</h3>

<p><a href="http://www.oxyron.de/html/cubicide.html" target="_blank" class="link">[CubicIDE]</a> (also known as GoldEd Studio) is a popular programmer's editor on the Amiga. Version 2.8 ships with a CubicIDE plugin that adds:</p>

<ul>
<li><strong>Syntax highlighting</strong> for ACE BASIC source files (.b, .bas)</li>
<li><strong>Quick help</strong> -- the bottom bar of the CubicIDE window shows the syntax of the BASIC command under the cursor</li>
<li><strong>Source navigation</strong> in the sidebar for quick jumping between SUBs and functions</li>
<li><strong>Toolbar buttons</strong> for compile, compile-and-run, and submodule compilation</li>
</ul>

<p>Submodule linking is handled by the <code>bas</code> build script via <code>REM #using &lt;path&gt;</code> comments in your source. For example, <code>REM #using ace:submods/mui/MUI.o</code> at the top of your file tells <code>bas</code> to link the MUI submodule when compiling.</p>
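<p>A minimal source header then looks something like this (the second <code>#using</code> path is hypothetical, shown only to illustrate linking multiple submodules):</p>

<pre class="basic"><code>REM #using ace:submods/mui/MUI.o
REM #using ace:submods/list/List.o   '..hypothetical path

#include &lt;ace/MUI.h&gt;

'..program code follows</code></pre>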

<p><img src="/static/gfx/blogs/CubicIDE-ACE.jpg" alt="CubicIDE ACE Plugin" width="720" /></p>

<h3>Brief Mentions</h3>

<p>A few smaller additions worth noting:</p>

<ul>
<li><p><strong>YAP preprocessor</strong>: The legacy APP preprocessor has been replaced by YAP (Yet Another Preprocessor). YAP supports macros, conditional compilation, and include directives with a cleaner syntax. It is now the default for all <code>#include</code> and <code>#define</code> processing.</p></li>
<li><p><strong>ELSEIF keyword</strong>: You can now write <code>IF</code>/<code>ELSEIF</code>/<code>ELSE</code>/<code>END IF</code> chains without nesting. A small quality-of-life improvement that reduces indentation in multi-branch logic.</p></li>
<li><p><strong>Compiler refactoring</strong>: About 1,500 lines of duplicated code were removed from the compiler internals. This does not change any user-facing behavior, but it makes the codebase easier to maintain and extend going forward.</p></li>
</ul>
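<p>As a quick illustration of the new <code>ELSEIF</code> keyword mentioned above -- a multi-branch chain that previously required nested <code>IF</code> blocks (the variable and thresholds are made up):</p>

<pre class="basic"><code>LONGINT score
score = 72

IF score &gt;= 90 THEN
  PRINT "Excellent"
ELSEIF score &gt;= 70 THEN
  PRINT "Good"
ELSEIF score &gt;= 50 THEN
  PRINT "Pass"
ELSE
  PRINT "Fail"
END IF</code></pre>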

<h3>Conclusion</h3>

<p>Versions 2.7 and 2.8 take ACE BASIC in a decidedly more modern direction. Closures and function pointers bring functional programming patterns to a language that has been purely imperative for 30 years. The List submodule demonstrates what that enables. MUI support makes sophisticated GUI applications practical. Double buffering rounds out the graphics story. And CubicIDE integration makes the development workflow smoother on the Amiga itself.</p>

<p>The project lives on <a href="https://github.com/mdbergmann/ACEBasic" target="_blank" class="link">[GitHub]</a>. Bug reports and feature requests are welcome.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ ACE BASIC - GadTools and More ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+GadTools+and+More"></link>
        <updated>2026-01-31T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+GadTools+and+More</id>
        <content type="html"><![CDATA[ <h3>Picking up where we left off</h3>

<p>In the previous post I introduced ACE BASIC v2.5 and its new AGA screen support. Since then, development has continued and version 2.6 is now available. This release focuses on three main areas: a high-level interface for GadTools gadgets, an ASSERT statement for defensive programming, and native 68020 code generation for faster arithmetic.</p>

<p>Let's dive in.</p>

<h3>GadTools Gadget Support</h3>

<p>GadTools is an Amiga system library that provides standardized, Intuition-aware gadgets with a modern look and feel. Before v2.6, using GadTools from ACE required about 230 lines of boilerplate code: opening libraries, setting up visual info structures, creating gadget lists, and carefully managing memory. Now, all of that is handled by a single <code>GADGET</code> statement.</p>

<p>The new syntax supports these gadget types:</p>
<table>
<thead>
<tr><th>
Type</th><th>
Description</th></tr>
</thead>
<tbody>
<tr><td>
BUTTON_KIND</td><td>
Push button</td></tr>
<tr><td>
CHECKBOX_KIND</td><td>
Boolean checkbox</td></tr>
<tr><td>
INTEGER_KIND</td><td>
Numeric input field</td></tr>
<tr><td>
STRING_KIND</td><td>
Text input field</td></tr>
<tr><td>
LISTVIEW_KIND</td><td>
Scrollable list</td></tr>
<tr><td>
MX_KIND</td><td>
Mutual-exclude radio buttons</td></tr>
<tr><td>
CYCLE_KIND</td><td>
Dropdown cycle gadget</td></tr>
<tr><td>
PALETTE_KIND</td><td>
Color palette chooser</td></tr>
<tr><td>
SCROLLER_KIND</td><td>
Scroll bar</td></tr>
<tr><td>
SLIDER_KIND</td><td>
Horizontal or vertical slider</td></tr>
<tr><td>
TEXT_KIND</td><td>
Read-only text display</td></tr>
<tr><td>
NUMBER_KIND</td><td>
Read-only numeric display</td></tr>
</tbody>
</table>

<p>Each gadget is created with a single line that specifies its ID, position, type, and any GadTools tags for customization.</p>

<p><strong>Note:</strong> ACE already had a <code>GADGET</code> command for legacy Intuition gadgets. The syntax is similar but not identical. Legacy gadgets use numeric types (1=BUTTON, 2=STRING, 3=LONGINT, 4=POTX, 5=POTY) and numeric style parameters. GadTools gadgets use <code>_KIND</code> constants and flexible <code>TAG=value</code> pairs for configuration. They also require a label parameter and provide the modern 2.0+ look. Both syntaxes remain available -- use legacy gadgets for simple cases or Kickstart 1.x compatibility, and GadTools gadgets for richer interfaces on AmigaOS 2.0+.</p>

<h3>Example: A GadTools GUI</h3>

<p>Let's walk through a complete example that creates a window with a slider, a string gadget, and a button. This is based on <a href="https://github.com/mdbergmann/ACEBasic/blob/master/examples/gui/GTGadgets.b" target="_blank" class="link">[examples/gui/GTGadgets.b]</a> in the ACE distribution.</p>

<h4>Setting up constants</h4>

<p>First, we define constants for our gadget IDs and the window close event:</p>

<pre class="basic"><code>CONST GAD_SLIDER = 1
CONST GAD_STRING = 2
CONST GAD_BUTTON = 3
CONST WIN_CLOSE = 256</code></pre>

<p>Each gadget needs a unique ID so we can identify which one triggered an event. The special value 256 indicates that the user clicked the window's close button.</p>

<h4>Opening a window</h4>

<pre class="basic"><code>WINDOW 1,"GadTools Gadget Demo",(0,0)-(400,100),30</code></pre>

<p>This opens window 1 with the given title, positioned at (0,0) with a size of 400x100 pixels. The flags value 30 enables the close button, drag bar, depth gadget, and sizing gadget.</p>

<h4>Setting the gadget font (optional)</h4>

<pre class="basic"><code>GADGET FONT "topaz.font", 8</code></pre>

<p>The <code>GADGET FONT</code> command specifies which font GadTools should use for rendering gadget labels and text. Here we use the classic Topaz 8-point font. If omitted, the system default font is used.</p>

<h4>Creating the gadgets</h4>

<p>Now we create our three gadgets:</p>

<pre class="basic"><code>GADGET GAD_SLIDER, ON, "Speed:   ", (100,20)-(300,32), SLIDER_KIND, GTSL_Min=1, GTSL_Max=20, GTSL_Level=5, GTSL_LevelFormat="%2ld", GTSL_MaxLevelLen=2
GADGET GAD_STRING, ON, "Type Here:", (100,40)-(300,54), STRING_KIND, GTST_String="Hello World!", GTST_MaxChars=50
GADGET GAD_BUTTON, OFF, "Click Here", (150,60)-(250,72), BUTTON_KIND</code></pre>

<p>The <code>GADGET</code> statement takes these parameters:</p>

<ul>
<li><strong>ID</strong>: The gadget's unique identifier (our constants)</li>
<li><strong>State</strong>: <code>ON</code> to enable or <code>OFF</code> to initially disable the gadget</li>
<li><strong>Label</strong>: Text displayed next to the gadget</li>
<li><strong>Position</strong>: Bounding rectangle as <code>(left,top)-(right,bottom)</code></li>
<li><strong>Type</strong>: One of the <code>_KIND</code> constants</li>
<li><strong>Tags</strong>: Optional GadTools tag=value pairs for customization</li>
</ul>

<p>The slider uses <code>GTSL_Min</code>, <code>GTSL_Max</code>, and <code>GTSL_Level</code> to set its range and initial value. The <code>GTSL_LevelFormat</code> tag displays the current value using printf-style formatting. The string gadget uses <code>GTST_String</code> for its initial content and <code>GTST_MaxChars</code> to limit input length.</p>

<h4>The event loop</h4>

<p>With the gadgets created, we enter an event loop:</p>

<pre class="basic"><code>LONGINT terminated, gad
terminated = 0

WHILE terminated = 0
  GADGET WAIT 0
  gad = GADGET(1)

  CASE
    gad = GAD_SLIDER : MsgBox "Speed: "+STR$(GADGET(3)),"OK"
    gad = GAD_STRING : MsgBox CSTR(GADGET(2)),"OK"
    gad = GAD_BUTTON : BEEP
    gad = WIN_CLOSE  : terminated = 1
  END CASE
WEND</code></pre>

<p><code>GADGET WAIT 0</code> blocks until the user interacts with a gadget or the window. The parameter 0 means wait indefinitely. After an event, <code>GADGET(1)</code> returns the ID of the gadget that was activated. For sliders and string gadgets, <code>GADGET(3)</code> and <code>GADGET(2)</code> return the current numeric value and string content respectively.</p>

<p>The <code>CASE</code> statement dispatches based on which gadget triggered the event. When the slider changes, we show its value in a message box. When the user presses Enter in the string gadget, we display what they typed. The button just beeps, and the window close event sets the termination flag.</p>

<h4>Cleanup</h4>

<p>Finally, we close everything in reverse order:</p>

<pre class="basic"><code>GADGET CLOSE GAD_BUTTON
GADGET CLOSE GAD_STRING
GADGET CLOSE GAD_SLIDER
WINDOW CLOSE 1
END</code></pre>

<p>That's it -- a complete GadTools GUI in about 30 lines instead of 230.</p>

<h4>Runtime attribute access</h4>

<p>Version 2.6 also adds <code>GADGET SETATTR</code> and <code>GADGET GETATTR</code> for modifying gadget properties at runtime:</p>

<pre class="basic"><code>GADGET SETATTR GAD_SLIDER, GTSL_Level=10   ' Set slider to 10
level& = GADGET GETATTR(GAD_SLIDER, GTSL_Level)  ' Read current value</code></pre>

<p>This allows dynamic UI updates without recreating gadgets.</p>

<h3>The ASSERT Statement</h3>

<p>Defensive programming is about catching errors early. The new <code>ASSERT</code> statement helps with that:</p>

<pre class="basic"><code>ASSERT expression [, "message"]</code></pre>

<p>If the expression evaluates to false (zero), ACE halts execution and prints an error message. If the expression is true (non-zero), execution continues silently.</p>

<pre class="basic"><code>SUB ProcessData(ADDRESS buffer, LONGINT size)
  ASSERT buffer &lt;&gt; 0, "ProcessData: buffer cannot be null"
  ASSERT size &gt; 0, "ProcessData: size must be positive"

  ' ... proceed with processing
END SUB</code></pre>

<p>When an assertion fails, you get immediate feedback about what went wrong and where. The optional message string helps identify the problem without needing a debugger.</p>

<p>ASSERT is particularly useful during development. You can sprinkle assertions throughout your code to verify invariants, check preconditions, and catch logic errors before they cause mysterious crashes. In production, the assertions serve as documentation of your assumptions.</p>
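<p>For example, a hypothetical invariant check after a computation:</p>

<pre class="basic"><code>LONGINT total, i
total = 0
FOR i = 1 TO 10
  total = total + i
NEXT i

'..halt immediately if the arithmetic ever goes wrong
ASSERT total = 55, "invariant violated: sum of 1..10 must be 55"</code></pre>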

<h3>68020 Code Generation</h3>

<p>The Motorola 68000 CPU in the original Amiga does not have native 32-bit multiply and divide instructions. ACE works around this by calling library routines for these operations. This works, but it is slow.</p>

<p>The 68020 and later processors (68030, 68040, 68060, and the Vampire accelerators) do have native 32-bit arithmetic instructions: <code>MULS.L</code>, <code>DIVS.L</code>, and <code>DIVSL.L</code>. Version 2.6 can now generate these directly.</p>

<p>To enable 68020 code generation, compile with the <code>-2</code> flag:</p>

<code>ace">
<pre class="plain"><code>ace -2 myprogram.b</code></pre>

<p>When should you use this? If your target hardware is an A1200, A3000, A4000, or any accelerated Amiga, the -2 flag can significantly speed up integer-heavy code. Loops with multiplication or division inside see the biggest gains -- the native instructions are several times faster than the library calls.</p>

<p>Here is a simple benchmark:</p>

<pre class="basic"><code>DEFLNG a-z

start& = TIMER
FOR i = 1 TO 1000000
  result = i * 7 / 3
NEXT i
elapsed& = TIMER - start&

PRINT "Time: "; elapsed&; " ticks"</code></pre>

<p>On a 68060 at 50 MHz, this loop runs about 3x faster when compiled with -2. On a Vampire V4 the difference is even more pronounced because the FPGA-based CPU executes the native instructions very efficiently.</p>

<p>Note that executables compiled with -2 will not run on a stock 68000 machine. If you need to support all Amigas, compile without the flag and accept the slower library calls. If you know your audience has accelerated hardware, use -2 for the extra speed.</p>

<h3>What's Coming in v2.7</h3>

<p>Development continues. Here is a preview of what's planned for version 2.7:</p>

<ul>
<li><p><strong>MUI (Magic User Interface)</strong>: High-level support for MUI, the popular object-oriented GUI toolkit for AmigaOS. MUI offers sophisticated widgets, automatic layout, and a consistent look across applications.</p></li>
<li><p><strong>INVOKE and BIND</strong>: These new statements enable functional programming patterns. <code>BIND</code> captures the current value of variables and associates them with a subroutine, and <code>INVOKE</code> calls it. This is effectively currying rather than true closures -- the bound values are captured at bind time and do not reflect later changes to the original variables. True closure semantics may be added in a future version.</p></li>
<li><p><strong>Bug fixes</strong>: As always, various fixes and improvements based on user feedback.</p></li>
</ul>
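<p>Based on that description of <code>BIND</code> and <code>INVOKE</code>, here is a purely speculative sketch of how it might look. The v2.7 syntax is not final, so every identifier here is an assumption:</p>

<pre class="basic"><code>LONGINT factor
factor = 3

SUB Scale(LONGINT x)
  SHARED factor
  PRINT x * factor
END SUB

BIND scaler, Scale   '..captures factor = 3 at bind time
factor = 10          '..later changes are not seen by the binding
INVOKE scaler, 7     '..would print 21 (3 * 7), not 70</code></pre>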

<h3>Conclusion</h3>

<p>ACE v2.6 makes GUI programming dramatically easier with built-in GadTools support, adds ASSERT for catching bugs early, and offers 68020 code generation for faster arithmetic on accelerated hardware. Combined with the AGA support from v2.5, ACE is becoming a capable tool for modern Amiga development.</p>

<p>The project lives on <a href="https://github.com/mdbergmann/ACEBasic" target="_blank" class="link">[GitHub]</a>. Bug reports and feature requests are welcome.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ ACE BASIC - AGA Screen Support ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+AGA+Screen+Support"></link>
        <updated>2026-01-27T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/ACE+BASIC+-+AGA+Screen+Support</id>
        <content type="html"><![CDATA[ <h3>A bit of history</h3>

<p>ACE is a freely distributable AmigaBASIC compiler originally written by David Benn. It takes BASIC source code and produces Motorola 68000 assembly, which is then assembled and linked into a native Amiga executable. ACE supports a large subset of AmigaBASIC and adds many features on top: recursion, structures, turtle graphics, shared library access, subprogram modules, and more.</p>

<p>The last official release was in 1998. After that, ACE went silent.</p>

<p>Last year I was looking for a high-level language to program on the Amiga. I evaluated BlitzBasic and a few other BASIC dialects, but ACE stood out. It is simple, produces standalone executables, and gives you direct access to the Amiga operating system. I tried a few things, liked what I saw, and just recently decided to pick up development. The project now lives on <a href="https://github.com/mdbergmann/ACEBasic" target="_blank" class="link">[GitHub]</a> (please file tickets there if you find any bugs).</p>

<h3>What's new in v2.5</h3>

<p>Version 2.5 is the first release under the new stewardship. Here is a summary of what changed:</p>

<ul>
<li><strong>AGA Screen Support (modes 7-12)</strong>: Full support for AGA chipset screens with up to 256 colors (8-bit depth), including HAM8 modes.</li>
<li><strong>Modern toolchain</strong>: vasm and vlink replace the legacy a68k assembler and blink linker.</li>
<li><strong>FFP/vbcc compatibility fix</strong>: Fixed Motorola Fast Floating Point handling in the runtime library for vbcc compatibility.</li>
<li><strong>GNU Makefile build system</strong>: New Makefiles replacing the old AmigaDOS build scripts.</li>
<li><strong>Project housekeeping</strong>: Directory restructuring, documentation consolidated, and a test suite with 35 test cases covering syntax, arithmetic, floats, and control flow.</li>
</ul>

<p>The most visible new feature is AGA support, so let's look at that in more detail.</p>

<h3>AGA Screen Support</h3>

<p>AGA (Advanced Graphics Architecture) is the Amiga's third-generation chipset, found in the A1200, A4000, and CD32. It supports screen modes with up to 256 colors from a 24-bit palette (8 bitplanes), as well as HAM8 which can display up to 262,144 colors.</p>

<p>Previous versions of ACE only supported OCS/ECS screen modes (modes 1-6) with a maximum of 32 colors (5 bitplanes). Version 2.5 adds six new modes:</p>
<table>
<thead>
<tr><th>
Mode</th><th>
Description</th><th>
Max Colors</th></tr>
</thead>
<tbody>
<tr><td>
7</td><td>
Lores AGA</td><td>
256</td></tr>
<tr><td>
8</td><td>
Hires AGA</td><td>
256</td></tr>
<tr><td>
9</td><td>
Super-Hires AGA</td><td>
256</td></tr>
<tr><td>
10</td><td>
HAM8 Lores</td><td>
262,144</td></tr>
<tr><td>
11</td><td>
HAM8 Hires</td><td>
262,144</td></tr>
<tr><td>
12</td><td>
HAM8 Super-Hires</td><td>
262,144</td></tr>
</tbody>
</table>

<p>A new <code>CHIPSET</code> function allows runtime detection of the installed chipset (0 = OCS, 1 = ECS, 2 = AGA), so programs can check for AGA before attempting to open an AGA screen. The <code>PALETTE</code> command now works with all 256 color registers using 24-bit precision via the system's <code>SetRGB32()</code> call on AGA hardware.</p>

<h3>Example: 256 colors on an AGA screen</h3>

<p>Let's walk through a complete example that opens an AGA screen and displays all 256 colors as gradient bars.</p>

<h4>Checking for AGA</h4>

<p>First, we check whether the machine actually has an AGA chipset. If not, the program prints a message and stops.</p>

<pre class="basic"><code>DEFLNG a-z

IF CHIPSET &lt; 2 THEN
  PRINT "This demo requires AGA chipset."
  PRINT "Please run on A1200, A4000, or CD32."
  STOP
END IF</code></pre>

<p><code>CHIPSET</code> returns 0 for OCS, 1 for ECS, and 2 for AGA. The <code>DEFLNG a-z</code> directive at the top makes all variables default to long integers, which is generally a good idea for performance on the 68000.</p>

<h4>Opening the screen</h4>

<p>Next, we open a 256-color AGA lores screen:</p>

<pre class="basic"><code>SCREEN 1,320,200,8,7</code></pre>

<p>The parameters are: screen ID (1), width (320), height (200), depth (8 bitplanes = 256 colors), and mode (7 = AGA lores). Under the hood, ACE uses <code>OpenScreenTagList()</code> with the appropriate AGA mode ID to set up the screen.</p>

<h4>Setting up the palette</h4>

<p>With 256 color registers available, we set up four gradient ramps: red, green, blue, and gray. Each gradient uses 64 colors.</p>

<pre class="basic"><code>'..Colors 0-63: Red gradient
FOR i = 0 TO 63
  PALETTE i, i/63, 0, 0
NEXT i

'..Colors 64-127: Green gradient
FOR i = 0 TO 63
  PALETTE i+64, 0, i/63, 0
NEXT i

'..Colors 128-191: Blue gradient
FOR i = 0 TO 63
  PALETTE i+128, 0, 0, i/63
NEXT i

'..Colors 192-255: Gray gradient
FOR i = 0 TO 63
  PALETTE i+192, i/63, i/63, i/63
NEXT i</code></pre>

<p>The <code>PALETTE</code> command takes a color index and three floating-point values for red, green, and blue intensity in the range 0.0 to 1.0. On AGA hardware this maps to full 24-bit color precision via <code>SetRGB32()</code>. On OCS/ECS the same command uses <code>SetRGB4()</code> with only 12-bit precision.</p>

<h4>Drawing the color bars</h4>

<p>We open a borderless window on the screen and draw four horizontal bars, one for each gradient:</p>

<pre class="basic"><code>WINDOW 1,,(0,0)-(320,200),32,1

PRINT "AGA 256-Color Demo"
PRINT "Mode 7: 320x200, 8 bitplanes"
PRINT

'..Red bar
FOR c = 0 TO 63
  COLOR c
  LINE (c*5,50)-(c*5+4,70),,bf
NEXT c

'..Green bar
FOR c = 0 TO 63
  COLOR c+64
  LINE (c*5,80)-(c*5+4,100),,bf
NEXT c

'..Blue bar
FOR c = 0 TO 63
  COLOR c+128
  LINE (c*5,110)-(c*5+4,130),,bf
NEXT c

'..Gray bar
FOR c = 0 TO 63
  COLOR c+192
  LINE (c*5,140)-(c*5+4,160),,bf
NEXT c</code></pre>

<p><code>COLOR</code> sets the current drawing color to a palette index. <code>LINE (x1,y1)-(x2,y2),,bf</code> draws a filled rectangle (the <code>bf</code> flag stands for &quot;box fill&quot;). Each bar consists of 64 small rectangles, each in a slightly different shade.</p>

<h4>Waiting and cleanup</h4>

<p>Finally, we wait for a keypress and close everything:</p>

<pre class="basic"><code>COLOR 255
LOCATE 23,1
PRINT "Press any key to exit";

WHILE INKEY$="":SLEEP:WEND

WINDOW CLOSE 1
SCREEN CLOSE 1</code></pre>

<p>The <code>SLEEP</code> in the wait loop is important -- without it the program would busy-wait and hog the CPU. On the Amiga, being friendly to other tasks matters.</p>

<p>Here is a screenshot of the demo running on a Vampire V4SA:</p>

<p><img src="/static/gfx/blogs/aga256.jpg" alt="AGA 256-Color Demo" width="720" /></p>

<h3>Conclusion</h3>

<p>AGA support in ACE v2.5 opens up 256-color and HAM8 screen modes for BASIC programmers on A1200, A4000, and CD32 hardware. Combined with runtime chipset detection, programs can gracefully handle different Amiga configurations while taking advantage of the more capable hardware when available.</p>

<p>There is more to come. Version 2.6 already adds GadTools gadget support, an ASSERT statement, and native 68020 code generation. But that is a topic for another post.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Polymorphism and Multimethods ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Polymorphism+and+Multimethods"></link>
        <updated>2023-03-02T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Polymorphism+and+Multimethods</id>
        <content type="html"><![CDATA[ <h3>Polymorphism</h3>

<p>What is Polymorphism and what is it useful for?</p>

<p>In OOP (Object-Oriented Programming) polymorphism is well-known. It allows separating an interface from multiple implementations that can behave differently.</p>

<p>Polymorphism comes from the Greek <code>polús</code> (many) and <code>morphe</code> (form). Multiple forms -- makes sense.</p>

<p>Unless a variable declared with an interface type is statically wired (using <code>new</code> in Java), the concrete object it references is not known at compile time. So which polymorphic method of the interface gets called is determined at runtime. This is called <em>dynamic dispatch</em>.</p>

<p>Let's make a simple example in Scala:</p>

<pre class="scala"><code>trait IPerson {
  def sayHello()
}

class Teacher extends IPerson {
  override def sayHello() {
    println("Hello, I'm a teacher.")
  }
}

class Pupil extends IPerson {
  override def sayHello() {
    println("Hello, I'm a pupil.")
  }
}

class Student extends IPerson {
  override def sayHello() {
    println("Hello, I'm a student.")
  }
}</code></pre>

<p>This implements three different persons which each say 'hello' in their own way. The beauty of this is that when you have an object of type <code>IPerson</code> you don't need to know which concrete implementation it is. It is usually sufficient to know that it supports saying hello via <code>sayHello</code>. This abstraction is great because it decouples the interface from the concrete implementations, which may even be defined in different areas or modules of the application sources.</p>

<p>OO languages like Scala, Java, C#, etc. combine data and behaviour in classes. An additional step in separation and decoupling is to keep data and behaviour apart. While that is possible in OO languages, it is often not the norm, and once a language allows mixing data (state) and behaviour in classes, it takes a lot of discipline to refrain from doing so.</p>

<p>Other languages separate data from behaviour naturally, which enables more decoupled design because data and behaviour can develop orthogonally. Many of those languages implement polymorphism with a concept called <em>multimethods</em>.</p>

<h3>Multimethods</h3>

<p>I chose Common Lisp as the representative to show multimethods (because I like Lisps, and this one in particular :), but Groovy, JavaScript, Python and other languages also support multimethods, either natively or via libraries.</p>

<h4>Single dispatch</h4>

<p>In Common Lisp multimethods are implemented as <em>generic functions</em>. Common Lisp in general has a very powerful object system.</p>

<p>As a first step we create the classes used later in the dispatch:</p>

<pre class="lisp"><code>(defclass person () ())  ;; base
(defclass teacher (person) ())
(defclass pupil (person) ())
(defclass student (person) ())</code></pre>

<p>Similarly as the <code>trait</code> in Scala we first create a generic function definition:</p>

<pre class="lisp"><code>(defgeneric say-hello (person))</code></pre>

<p>Now we can add the concrete methods:</p>

<pre class="lisp"><code>(defmethod say-hello ((person teacher))
  (format t "Hello, I'm a teacher."))

(defmethod say-hello ((person pupil))
  (format t "Hello, I'm a pupil."))

(defmethod say-hello ((person student))
  (format t "Hello, I'm a student."))</code></pre>

<p>At this point we have a complete multimethod setup.<br/>
We can now call the methods and see if it works:</p>

<pre class="plain"><code>CL-USER&gt; (say-hello (make-instance 'teacher))
Hello, I'm a teacher.

CL-USER&gt; (say-hello (make-instance 'student))
Hello, I'm a student.</code></pre>

<p>The runtime system searches for methods it can dispatch to based on the generic function definition. The method implementations can live in different source files or packages/namespaces, which makes this extremely flexible. The lookup does come with a performance penalty, but implementations often apply some form of caching to mitigate it.</p>

<h4>Multi dispatch</h4>

<p>The above is a 'single dispatch' because the dispatching is based on a single parameter, the person class.</p>

<p>Multi dispatch can dispatch on multiple parameters. Let's extend the example a bit to show this:</p>

<pre class="lisp"><code>(defgeneric say-hello (person time-of-day))

(defmethod say-hello ((person teacher) (time-of-day (eql :morning)))
  (format t "Good morning, I'm a teacher."))

(defmethod say-hello ((person teacher) (time-of-day (eql :evening)))
  (format t "Good evening, I'm a teacher."))

(defmethod say-hello ((person pupil) (time-of-day (eql :noon)))
  (format t "Good appetite, I'm a pupil."))

(defmethod say-hello ((person student) (time-of-day (eql :evening)))
  (format t "Good evening, I'm a student."))</code></pre>

<p>Now we have a second parameter, <code>time-of-day</code>, which doesn't represent a clock time but whether it is morning, noon or evening (or some other time of day). Since <code>time-of-day</code> is not a class, we have to use the <code>eql</code> specializer for the dispatching -- but it could just as well be another class.</p>

<pre class="plain"><code>CL-USER&gt; (say-hello (make-instance 'teacher) :evening)
Good evening, I'm a teacher.

CL-USER&gt; (say-hello (make-instance 'teacher) :morning)
Good morning, I'm a teacher.

CL-USER&gt; (say-hello (make-instance 'pupil) :noon)
Good appetite, I'm a pupil.</code></pre>

<p>So the dispatching works, taking both parameters into consideration. Of course this also works with more than two parameters.</p>

<p>The <em>generic functions</em> in Common Lisp have many more features than these simple examples show. For instance, with the method qualifiers <code>:before</code>, <code>:after</code> and <code>:around</code> it is possible to implement aspect-oriented programming. However, that is not the topic of this post.</p>

<h3>Conclusion</h3>

<p>Multimethods and separating data from behaviour allow more decoupled code and a more data-driven programming paradigm. When the data is immutable we move closer to the realm of functional programming. Functional programming and data-driven programming have pros and cons which should be named and weighed when starting a new project.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Global Day of CodeRetreat - recap ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Global+Day+of+CodeRetreat+-+recap"></link>
        <updated>2022-11-07T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Global+Day+of+CodeRetreat+-+recap</id>
        <content type="html"><![CDATA[ <p>Last weekend was GDCR (Global Day of CodeRetreat).</p>

<p>This was my first in-person visit in three years. I was looking forward to it.</p>

<p>It was a smaller event compared to my last visits, but no less interesting. On the contrary: with ~50 people, maybe fewer, it felt more familial. Thank you for hosting.</p>

<p>To give a quick intro of what GDCR is: it is a day of learning. On this day, which happens across the globe, you practice Test-Driven Development. Usually there are four or five sessions of 45 minutes, each followed by a retro session. Every year, and in every session, you implement Conway's Game of Life. You might think: how boring is that? In fact, this thought crosses my mind each time too. But each time I realize it is anything but boring. Why? A few reasons: for each session you pair with someone else. Each session you may use a different set of technologies (each visitor has to prep a laptop with ready tooling). This leads to different discussions in every session. Also, each session has slightly different constraints.</p>

<p>Let me tell you a bit more about those four really great sessions I had:</p>

<p>To find pairs for the first session, the usual strategy is to have everyone stand in a row and sort themselves by years of TDD experience. With 6-7 years I was more on the experienced side. Sometimes there are people who have been doing this far longer -- this time there was one guy doing TDD since 2005. Then the line of people is folded in the middle, so that for the first pairing the more experienced go together with the less experienced.</p>

<p><strong>The first</strong></p>

<p>What a coincidence: two Lispers got together for the first pairing. :)
I think of all those ~50 people there were exactly two Lispers, and those two got paired for the first session. What are the odds?</p>

<p>So we could choose from Common Lisp, Lisp Flavoured Erlang, Emacs Lisp and Clojure. Since all of those are best coded in Emacs, and I hadn't seen much of Emacs Lisp yet, we settled on Emacs Lisp. That allowed a simple setup: just Emacs.</p>

<p>The guy I paired with hadn't done much TDD. Lisps usually have extremely interactive REPLs (Common Lisp is hard to beat here) that allow very interactive, incremental development of code in the REPL, which is then just copied into a source file. He had experience with that. While this is nice, and I do it as well to try out code, it is problematic for two reasons.<br/>
First, code produced this way doesn't necessarily end up covered by automated tests. Second, it is very hard to get the coverage right when writing tests after the fact: it is difficult to recreate the mindset and the context from the time the production code was written. All the thoughts are lost that could otherwise have been captured in tests as specification and documentation.</p>

<p><em>Lessons learned</em>: tests provide spec and documentation. <a href="https://www.gnu.org/software/emacs/manual/html_mono/ert.html" target="_blank" class="link">ert</a> is a nice little test framework for Emacs.</p>

<p><strong>The second</strong></p>

<p>The second pairing was with someone more experienced in TDD. We did Scala with ScalaTest and the <code>AnyFunSpec</code> style, where you write <code>describe</code> blocks with <code>it</code> children for each test. I had to realize that in my day-to-day work I had gotten a bit lazy. This session reminded me that the strictness of TDD -- properly categorizing and describing the tests -- is incredibly valuable.
The funny part of this session: we were incrementally implementing the Game of Life rules. The handout had four rules written down. After implementing those and doing the refactorings, we had production code that was two lines long (compared to many times that in test code). After creating a few more edge-case tests, all tests remained green. We looked at each other: how is this possible? It can't be that simple. Then the session was over. Later we realized that we had indeed forgotten a rule. However, we figured it was not part of the spec written on the handout, so all good. :)</p>

<p><em>Lessons learned</em>: without well-described tests that capture the context and the specs, it may be close to impossible to extract the spec and context from the production code alone later, even with well-chosen function and variable names.</p>

<p><strong>Lunch time</strong></p>

<p>We all had great pizza and conversations over lunch... :)</p>

<p><strong>The third</strong></p>

<p>The third session was with an experienced developer who doesn't do much TDD at work; mostly he writes tests afterwards. This was also a very interesting session. We did Scala again, on my box. There was another constraint for this session: don't talk. (But we ignored it. :) With a language that at least one of the pair doesn't know, not talking makes things difficult and you likely won't get much out of the session.)<br/>
What stood out in this session was the tendency to think too big. We were thinking too many steps ahead instead of just satisfying the test at hand. This happens to me as well: even with TDD you sometimes get stuck writing production code for many, many minutes. When that happens you have likely got lost in details. But it is much more likely to happen when you don't have a fast test cycle.</p>

<p><em>Lessons learned</em>: try to make small increments. That's what TDD is for. Focus on the small thing at hand. This helps to reduce the complexity. Your brain has only so much capacity.</p>

<p><strong>The fourth</strong></p>

<p>I have to say the fourth session was one of the most interesting ones. The constraint was <a href="https://williamdurand.fr/2013/06/03/object-calisthenics/" target="_blank" class="link">Object Calisthenics</a>: no primitives like numbers or strings, only one level of indentation, no else keywords, and so on.<br/>
If you haven't seen it before you might think: what? How else would I program if not with ints, longs, strings and such? Well, it's possible. You wrap them in types, and you make comparisons on types. A language with pattern matching is handy here. But let's get a bit more concrete:</p>

<p>In the Game of Life you have to make comparisons based on the number of living neighbour cells of a cell. 'Normally' you'd have comparisons like:</p>

<pre class="scala"><code>// the true/false defines the new state of the cell.
// does it live or die
if (livingNeighbours &lt; 2) return false
if (livingNeighbours == 3) return true
...</code></pre>

<p>Now, with the given constraints we can't do that. Instead we looked at the input data and tried to categorize it. The categorization is:</p>

<pre class="scala"><code>case class NeighbourCategory(neighbourRange: (Int, Int))
object NeighbourCategory {
  // i.e. &lt; 2 neighbours is under population.
  val UnderPopulation = NeighbourCategory((0, 1))
  // 2 or 3 neighbours is survival
  val Survival = NeighbourCategory((2, 3))
  // &gt;= 4 neighbours is over population
  val OverPopulation = NeighbourCategory((4, 9))
  // exactly 3 is resurection
  val Resurection = NeighbourCategory((3, 3))
}</code></pre>

<p>Those categories as Scala types define the value sets of living neighbours for the comparison we have to make.</p>

<p>Additionally we defined the information whether a cell is alive, or dead like this:</p>

<pre class="scala"><code>case class CellState(value: Boolean)
object CellState {
  val Alive = CellState(true)
  val Dead = CellState(false)
}</code></pre>

<p>Now, this allowed us to do the comparison just on those types, no numbers involved:</p>

<pre class="scala"><code>neighbourCategory match {
  case UnderPopulation =&gt; Dead
  case Survival =&gt; Alive
  ...
}</code></pre>

<p>What benefits could this have?</p>

<p>We didn't actually get to most of the other restrictions because we simply ran out of time. Much of the value of those sessions lies in the approach itself and the discussions around it.</p>

<p><em>Lessons learned</em>: <em>not</em> using primitive types leads to a better understanding of the domain. The domain becomes explicit in the code, and a reader can more easily understand what it is about.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ House automation tooling - Part 4 - Finalized ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+4+-+Finalized"></link>
        <updated>2022-11-01T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+4+-+Finalized</id>
        <content type="html"><![CDATA[ <p>Update (5.2.2024):</p>

<p>The project around house automation has further taken shape and manifested in <a href="https://github.com/mdbergmann/chipi" class="link" target="_blank">[Chipi]</a>.</p>

<p>Things didn't go as planned. But that's what plans are for, right? To change them.</p>

<p>I had planned to write small blog entries for the micro steps taken in development, and I created tags in Git for many of those steps. But unfortunately, due to private and business matters, I couldn't find the time to do it the way I wanted.</p>

<p>But I finished the tooling, and it has been in production in my home, doing its job 24/7, for a few months now. To recall: this tool captures temperature/sensor data from a wood chip boiler and reports it to an openHAB system.<br/>
I've made a few additions to the original spec; for example, I found it important to calculate averages of the captured values and report them additionally at specified time intervals. See below for more info.</p>

<p>So, I'd like to finalize this blog series by writing about some best practices that I used as well as some obstacles I had to solve.</p>

<p>Again, the project can be seen here: <a href="https://github.com/mdbergmann/cl-etaconnector" class="link" target="_blank">[cl-eta]</a></p>

<h4>Noteworthy</h4>

<h5>Sento (cl-gserver) works. Let's use more actors</h5>

<p>This was basically the first real world project that uses <a href="https://github.com/mdbergmann/cl-gserver" target="_blank" class="link">Sento</a>.<br/>
There was one change I had to make to make the actor in use properly testable. The change ensures that the actor-system is fully shut down and all actors stopped at the end of a test (as the last part of a test fixture). Actors not fully stopped at the end of a test can interfere with the next test and produce weird results that are hard to debug.</p>
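
<p>Such a fixture might look like the following sketch, using fiveam's <code>def-fixture</code>. The fixture name is made up, and the exact shutdown call (<code>ac:shutdown</code> with <code>:wait</code>) is an assumption about Sento's API; check the version you run against:</p>

<pre class="lisp"><code>;; Sketch only: a fiveam fixture that guarantees the actor-system is
;; fully shut down after each test, even if the test body signals.
;; ac:shutdown and its :wait option are assumptions about Sento's API.
(fiveam:def-fixture with-test-actor-system ()
  (let ((system (asys:make-actor-system)))
    (unwind-protect
         (&body)                       ; the test body runs here
      (ac:shutdown system :wait t)))) ; block until all actors are stopped</code></pre>

<p>A test then wraps its body in <code>(fiveam:with-fixture with-test-actor-system () ...)</code>, so the cleanup runs no matter how the test ends.</p>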

<p>Aside from that, Sento works well. The tool uses one actor that exposes the complete public function interface. While internally the code is structured in multiple modules, all functions are driven by messaging the actor: from opening the serial port, reading and writing from/to it, and generating averages, to reporting those values to openHAB.</p>

<p>Testing also works well. Even though Sento has no sophisticated test support like, for example, Akka's TestKit, I don't think that is necessary. Sento is simple enough to allow exhaustive testing. Of course, since actors are (or can be) asynchronous, one has to probe repeatedly for responses or state.</p>

<h5>Switching serial library</h5>

<p>As described in the first blog post of the series, I had settled on the serial library <a href="https://github.com/jetmonk/cl-libserialport" class="link" target="_blank">cl-libserialport</a>. As it turned out, this library had a serious memory leak: after 1-3 days things stopped working and I had to restart the REPL. I reported the issue to the maintainer (but unfortunately wasn't able to test the fix). With some minor adaptations I switched to <a href="https://github.com/snmsts/cserial-port" class="link" target="_blank">cserial-port</a>, which has worked well since.</p>

<h5>Fix cl-mock with multi-threading support</h5>

<p>If you look at the tests, where I do Outside-In TDD a lot, I used mocking extensively. However, <a href="https://github.com/Ferada/cl-mock/" class="link" target="_blank">cl-mock</a> didn't work well in multi-threaded environments: function invocations were not properly captured when executed in a different thread than the test runner thread. I was able to fix this issue, and cl-mock now has multi-threading support. I think it's the only CL mocking library that does.</p>

<h5>Integration test using easy-routes</h5>

<p>Eventually I wanted to add proper integration tests that also cover the HTTP reporting to openHAB. So I set up <a href="https://github.com/mmontone/easy-routes" class="link" target="_blank">easy-routes</a>, a REST routing framework based on the Hunchentoot server. The library is easy to use and has a nice DSL. See the integ/acceptance <a href="https://github.com/mdbergmann/cl-etaconnector/blob/master/test/eta-atest.lisp" class="link" target="_blank">test</a>.</p>

<h5>Additional features - generate and report average values</h5>

<p>Instead of relying on openHAB to generate averages, I thought: why not do it here? This thing is running all the time; all values pass through it. So why not capture the data, generate averages and submit those at specified times. This additional feature already runs successfully in production. I used <a href="https://github.com/ciel-lang/cl-cron" class="link" target="_blank">cl-cron</a>, a simple cron library, to specify when and at which intervals average values are reported. This can be daily, weekly, and so on.</p>
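
<p>As a rough sketch, scheduling with cl-cron looks like this. <code>report-averages</code> is a hypothetical stand-in for the actual reporting code, and the keyword arguments are cl-cron's scheduling parameters:</p>

<pre class="lisp"><code>;; Sketch: schedule a daily average report at midnight with cl-cron.
;; report-averages is a hypothetical placeholder for the real reporting code.
(defun report-averages ()
  (format t "reporting averages to openHAB~%"))

;; make-cron-job takes the function plus :minute/:hour/:day-of-week keywords;
;; here: run once per day at 00:00.
(cl-cron:make-cron-job #'report-averages :minute 0 :hour 0)
(cl-cron:start-cron)</code></pre>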

<h5>Testing</h5>

<p>Of course the project was implemented using TDD, and partially Outside-In TDD. Without having run a coverage tool I'd say the coverage should be very high. Testing asynchronous operations is not as straightforward as testing normal function/method calls. But it's not only that: the actor in my case performs a fair number of side-effects whose results can't be captured as message responses. Even where parts of the program were plain modules/packages with just pure functions, they were called as side-effects from the higher-level business logic implemented as an actor. In this case you can only verify and control what the business logic does by setting up mocks that capture how the business-logic module 'drives' the subordinate modules. People sometimes confuse this with testing implementation details. But that is not the case here: it just verifies and controls the in- and output of the unit under test.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ House automation tooling - Part 3 - London-School and Double-Loop ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+3+-+London-School+and+Double-Loop"></link>
        <updated>2022-07-02T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+3+-+London-School+and+Double-Loop</id>
        <content type="html"><![CDATA[ <p>Last post was more research and about prototyping some code related to how the serial communication can work using actors.</p>

<p>In this post we start the project with a first use-case. We'll do this using a methodology called &quot;Outside-in TDD&quot; but with a double test loop.</p>

<h4>Outside-in TDD (London School)</h4>

<p>There are a few variants of TDD. The classic one, which usually works inside-out, is called <em>Classic TDD</em>, also known as the &quot;Detroit School&quot;, because that's where TDD was invented roughly 20 years ago. When you have a use-case to be developed (sometimes a vertical slice through the system, maybe touching multiple layers), Classic TDD starts developing at the inner layers, providing the modules for the layers above.</p>

<p><em>Outside-in TDD</em>, also known as the &quot;London School&quot; (because it was invented in London), goes the opposite direction. It approaches the system from the outside and develops the modules starting at the system boundary layers, working towards the inner layers. If the structures don't exist yet, they are created by imagining how they should be, and the immediately inner layer modules are mocked in a test. The test helps define and probe those structures as a first &quot;user&quot;. Outside-in is known to go well with YAGNI (You Ain't Gonna Need It) because it creates exactly the structures and modules needed for each use-case, and no more. Of course, outside-in TDD is still TDD.</p>

<h4>Double loop TDD</h4>

<p>Here we use outside-in TDD with a double test loop, also known as Double Loop TDD.</p>

<p><figure>
<img src="http://retro-style.software-by-mabe.com/static/gfx/blogs/outer-inner.png" alt="Outer-Inner" />
</figure></p>

<p>Double Loop TDD creates acceptance tests in the outer test loop, usually on a use-case basis. The acceptance test fails until the use-case is fully developed. This has multiple advantages. The acceptance test can verify the integration of components, acting as an integration test. It can also guard against regressions, because the acceptance criteria are high-level and define how the system should work, or what outcome is expected; if it fails, something has gone wrong. This kind of test can be developed in collaboration with QA or product people.</p>

<p>Double Loop TDD was first explained in detail by the authors of the book <a href="http://www.growing-object-oriented-software.com" target="_blank" class="link">Growing Object-Oriented Software, Guided by Tests</a>. This book got so well-known in the TDD practicing community that it is just known as &quot;GOOS&quot;.</p>

<h4>Let's start with this outer test</h4>

<p>Our understanding of the first use-case is that we send a certain command to the boiler which instructs it to send sensor data on a regular basis, like every 30 seconds. The exact details of how this command is sent, or even what it looks like, are not yet relevant. So far we just need a high-level understanding of how the boiler interface works. An expected result of sending this command is that after a few seconds an HTTP REST request goes out to the openHAB system. As a first step we just assume that there is a boundary module that sends the REST request, and we mock that one. Later we might want to remove all mocking from the acceptance test and set up a full web server that simulates the openHAB web server. The acceptance test will likely also go through multiple iterations until it represents what we want and doesn't use any inner module structures directly.</p>

<pre class="lisp"><code>(defvar *path-prefix* "/rest/items/")

(test send-record-package--success--one-item
  "Sends the record ETA interface package
that will result in receiving sensor data packages."
  (with-mocks ()
    (answer (openhab:do-post url data)
      (progn
        (assert (uiop:string-prefix-p "http://" url))
        (assert (uiop:string-suffix-p
                 (format nil "~a/HeatingETAOperatingHours" *path-prefix*)
                 url))
        (assert (floatp data))
        t))

    (is (eq :ok (eta:send-record-package)))
    (is (= 1 (length (invocations 'openhab:do-post))))))</code></pre>

<p>So we're still using Common Lisp (non-Lispers, don't worry: Lisp is easy to read). Throughout the code examples we use the <a href="https://github.com/lispci/fiveam" class="link" target="_blank">fiveam</a> test framework and <a href="https://github.com/Ferada/cl-mock/" class="link" target="_blank">cl-mock</a> for mocking.</p>

<p><code>with-mocks</code> sets up a code block in which we can use mocks. The package <code>openhab</code> will be used for the openHAB connectivity. So, however the internals work, eventually we expect the function <code>do-post</code> (in package <code>openhab</code>, denoted as <code>openhab:do-post</code>) to be called with a URL to the REST resource and the data to be sent. For a first iteration this is OK. This expectation is expressed with <code>answer</code>, which takes two arguments. The first is the function we expect to be called. We don't know yet who calls it, when, or where; it's just clear that it has to be called.<br/>
Effectively, this is what we have to implement in the inner test loops. When the function is expressed as <code>(openhab:do-post url data)</code>, <em>cl-mock</em> pattern-matches on the function arguments and captures them as the variables <code>url</code> and <code>data</code>. This enables us to verify those parameters in the second argument of <code>answer</code>, which represents the return value of <code>do-post</code>. So yes, we also define here and now what this function should return. The return value here is <code>t</code> (like a boolean 'true' in other languages) as the last expression in the <code>progn</code> (<code>progn</code> is just a means to wrap multiple forms where only one is expected; its last expression is returned). The assertions inside the <code>progn</code> verify that the URL looks the way we want and that the data is a float value. Those details may change slightly as we understand more of the system.</p>

<p>Sending data to openHAB is the expected side-effect of this use-case. The action that triggers this is: <code>(eta:send-record-package)</code>. This defines that we want to have a package <code>eta</code> which represents what the user &quot;sees&quot; and interacts with (the UI of this utility will just be the REPL). So we call <code>send-record-package</code> and it should return <code>:ok</code>.<br/>
Finally, we can verify that <code>do-post</code> got called by checking the recorded invocations in <em>cl-mock</em>.<br/>
And of course this test will fail.</p>

<p>It is important that we go in small steps. We could try to get everything perfect the first time, but that doesn't work out; things are too complex to get right on the first attempt. There will be more iterations, and it is OK to change things when appropriate and as more is understood.</p>

<h4>What's next</h4>

<p>Next time we'll dive into the inner loops to satisfy the constraints we have set up here.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Modern Programming ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Modern+Programming"></link>
        <updated>2022-05-14T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Modern+Programming</id>
        <content type="html"><![CDATA[ <h4>Modern programming</h4>

<p>Modern programming is programming that is guided by tests and executed in small/micro steps, incremental and reversible, checking in (to version control) after each successful test. No production code is written without a test. By small steps I don't mean the (once or twice a day) steps used for Continuous Integration, but steps in the range of a few minutes, with a lot of THINKing between them, and maybe pairing with a peer to bounce ideas around.</p>

<p>This stems from years of experience in the agile and craftsmanship movements and communities.<br/>
It makes optimal use of tests as a tool: the tests guide the creation and structuring of code while providing immediate feedback, raising the quality bar for maintainable code and greatly reducing the defect rate. Concentrating on small steps reduces the immediate mental load. And as a side-effect, the tests provide high test coverage.<br/>
Tests done right also act as documentation and examples for how to use the created code.</p>

<p>However, wielding this tool well is not easy. There are many intricacies that are important to apply it successfully (and it may take years to fully master). It requires control of your workflow: who doesn't experienced how easy it is to get carried away and make too many changes at once (when you've lost control)? It requires knowledge of what good design is, so you can refactor when the test code gives indications of bad design.</p>

<h5>When this is modern, what is not modern then?</h5>

<p>Everything else. There is no standard but many variants.</p>

<p>Let me go through a few variants I have experienced. In slight variations, these are probably still the norm in most companies.</p>

<p>When I started working at my first employer, during my studies around the year 2000, automated tests in any incarnation effectively did not exist. I wrote code and then tested everything 'by hand', either alone or in the team, until 'it worked'. Since I was working on microcontrollers in those days, this was very time consuming. At that time I didn't know how to abstract and design code in a way that allows most of it to be unit tested, reducing manual testing, even on hardware, to a minimum.</p>

<p>This way of coding continued for 5 or 6 years, also on various other platforms like macOS (Mac OS X at that time), Windows .NET, and Java. After that I experienced a variant where I thought it might be good to write at least a few tests for code areas where it would be good to know they work, because testing those manually was extremely inconvenient and time consuming, and QA too often came back with findings that could have been caught. Those tests were not part of an automated suite; they were just run on demand. The tests did their work and I was impressed by their effectiveness. But there was still a lot of manual testing.</p>

<p>The next variant came in a phase where code quality and tests were valued more. But tests were still an afterthought. I was working at a banking enterprise at the time. Tests were considered important, but were written after the fact and were not enforced. So you developed code for 3 days and then wrote tests for 3 days to back the code you had written earlier. Quite unsatisfactory for my taste, and again quite a waste of time, because many parts had probably already been tested manually during development. Yes, those tests still have their value. Yet it is likely that while writing them it turned out they were hard to write, and the code needed partial refactoring to become more testable.</p>

<p>When production code is written without the immediate feedback of a test, it is very likely that the code ends up being difficult to test. Code that is difficult to test is difficult to maintain. Tests have the advantage of giving feedback when code is too coupled, uses too many collaborators, mixes levels of abstraction, etc. But it requires some skill and experience to listen to this feedback and use it for the better.</p>

<h5>Returning to the light...</h5>

<p>Another variant, which I have experienced (and am still on my way to mastering) over the last 5 to 7 years, is that of the TDD world. I think Kent Beck isn't so lucky in naming his inventions (XP (eXtreme Programming) might be more popular under a different name; he said so himself some time ago). A better expression of what TDD is might be: <strong>&quot;Development Guided by Tests&quot;</strong> (thanks to Allen Holub for coming up with this). The tight workflow of TDD is Kent Beck's invention. When done right (and that takes a bit of practice), it can unfold all the attributes I mentioned in the first paragraphs.</p>

<p>The <a href="http://manifesto.softwarecraftsmanship.org" class="link" target="_blank">craftspeople</a> all adopted this way of working to raise the quality bar of software.</p>

<p>During the last few years some additions to classic TDD emerged. For example, there is the &quot;London school&quot; of TDD, which advocates outside-in development. There is also ATDD (Acceptance TDD), which is similar to the double test loop TDD that I <a href="/blog/Test-driven+Web+application+development+with+Common+Lisp" class="link" target="_blank">blogged</a> about.</p>

<p>Today all of those variants of programming are still in use. Companies of all sizes do one or the other variant, or a mixture. Often it's up to the developer.<br/>
Having said that, this mostly applies to (business) application development. For system development other optimized workflows may apply.</p>

<p>Some references you might find interesting.</p>

<p>Books:</p>

<ul>
<li><a href="https://www.goodreads.com/book/show/4268826-growing-object-oriented-software-guided-by-tests" class="link" target="_blank">Growing Object-Oriented Software, Guided by Tests</a></li>
<li><a href="https://www.goodreads.com/book/show/387190.Test_Driven_Development" class="link" target="_blank">Test-Driven Development: By Example</a></li>
<li><a href="https://www.goodreads.com/book/show/3735293-clean-code" class="link" target="_blank">Clean Code: A Handbook of Agile Software Craftsmanship</a></li>
</ul>

<p>Videos:</p>

<p>There are many. Those in particular are interesting:</p>

<ul>
<li>Sandro Mancuso: <a href="https://www.youtube.com/watch?v=KyFVA4Spcgg" class="link" target="_blank">Does TDD Really Lead to Good Design?</a></li>
<li>Ian Cooper: <a href="https://www.youtube.com/watch?v=EZ05e7EMOLM" class="link" target="_blank">TDD, Where Did It All Go Wrong</a></li>
</ul>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ House automation tooling - Part 2 - Getting Serial ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+2+-+Getting+Serial"></link>
        <updated>2022-03-21T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+2+-+Getting+Serial</id>
        <content type="html"><![CDATA[ <p>The war in Ukraine is ongoing.<br/>
Stop the war and generally any violence against humans and animals and Gods creation.</p>

<h4>Getting Serial</h4>

<p>In the last post I prepared <a href="https://ccl.clozure.com" class="link" target="_blank">Clozure CL</a> on an iBook with Mac OS X 10.4 (Tiger), including getting <a href="https://www.quicklisp.org" class="link" target="_blank">quicklisp</a> ready. quicklisp is not absolutely necessary, but it helps: otherwise every library you want to use has to be downloaded and loaded manually in the REPL.</p>

<p>In this post I want to check feasibility and prepare the serial communication.<br/>
We'll do some CL coding and use the actor pattern for this proof of concept.</p>

<h5>The adapter</h5>

<p>This iBook, like most modern computers, no longer has a Sub-D serial port. However, the device (the boiler) this software should communicate with has a 9-pin Sub-D connector. So we need a USB-to-serial adapter. There are a number of them available, but we also need drivers for this version of Mac OS. This one, a Keyspan (<a href="https://www.tripplite.com/keyspan-high-speed-usb-to-serial-adapter~USA19HS" class="link" target="_blank">USA19HS</a>), works with this version of Mac OS X and drivers are available.</p>

<h5>Development peer</h5>

<p>OK, in order to 'simulate' the boiler we use an Amiga 1200, which still has a serial port and nice software called 'Term' that can act as a serial peer for development. 'Term' has an Amiga Rexx (ARexx) scripting interface which allows scripting behaviour in Term. Eventually this could be handy for creating a half-automated test environment for system tests.<br/>
For now, however, we only do feasibility work to figure out if and how the serial library works, in order to plan ahead a bit for what has to be done in the serial interface module of the automation tool. This should be the only (sort of) manual testing. From there we structure the code to abstract the serial interface, so we can fake or mock the serial communication, which allows an easier and faster feedback loop for development.</p>

<p><img src="/static/gfx/blogs/a1200_cropped.jpg" alt="A1200" width="720" /></p>

<p>(Since the Amiga has a 25 pin Sub-D interface but the Keyspan adapter has a 9 pin interface I had to build a 25&lt;-&gt;9 pin converter. Of course I could have bought it but I like doing some soldering work from time to time.)</p>

<p><img src="/static/gfx/blogs/serial-adapter.jpg" alt="serial-adapter" width="300" /></p>

<h5>The Common Lisp serial interface library</h5>

<p>There are two CL libraries based on FFI (Foreign Function Interface) that would work. I've experimented with both.</p>

<ol>
<li><a href="https://github.com/snmsts/cserial-port" class="link" target="_blank">cserial-port</a></li>
<li><a href="https://github.com/jetmonk/cl-libserialport" class="link" target="_blank">cl-libserialport</a></li>
</ol>

<p>In my opinion cl-libserialport offers a few more features, so I settled on it. For example, it allows specifying a termination character for the read operation: when it is received, the read returns automatically.<br/>
The disadvantage: cl-libserialport requires an additional C shared library (<a href="https://github.com/sigrokproject/libserialport" class="link" target="_blank">libserialport</a>) to be installed on the system first. cserial-port also uses FFI but works with existing POSIX/Windows library calls; cl-libserialport is a CL layer on top of libserialport.<br/>
On my development machine I can just install this library via <a href="https://brew.sh" class="link" target="_blank">Homebrew</a>. On the target machine (the iBook) I had to download and compile the library, but it is straightforward and no more than: <code>autogen.sh &amp;&amp; make &amp;&amp; make install</code>.</p>

<p>cl-libserialport is not in quicklisp, so in order to still load it in the REPL via quicklisp we have to clone it into <code>~/quicklisp/local-projects</code>; quicklisp will then find it and load it from there. By the way, this is also a nice way to override versions from the quicklisp distribution.</p>
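
<p>Concretely, that amounts to something like this (the clone path is quicklisp's standard <code>local-projects</code> directory):</p>

<pre class="lisp"><code>;; In a shell, clone the library into quicklisp's local-projects folder:
;;   git clone https://github.com/jetmonk/cl-libserialport \
;;       ~/quicklisp/local-projects/cl-libserialport
;;
;; Then, in the REPL, quicklisp prefers the local copy over any dist version:
(ql:quickload "cl-libserialport")</code></pre>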

<p>With all the additional work for cl-libserialport (which is actually not that much and a one-time effort) I hope it pays off by being easier to work with.</p>

<h5>Prototyping some code</h5>

<p>The boiler serial protocol requires sending commands to the boiler and receiving sensor data. One of the commands is a 'start-record' command which instructs the boiler to start sending data every x seconds until it receives a 'stop-record' command. Since it is not possible to send and receive on the serial device at the same time, we have to serialize the sending and receiving of data somehow. One way to do this is to use a queue: we enqueue send and read commands, and when dequeued, the command is executed. Now, this cries out for an actor. Fortunately there is a good actor library for Common Lisp called <a href="https://github.com/mdbergmann/cl-gserver" class="link" target="_blank">cl-gserver</a> which we can utilize to hack together a proof of concept. (If I read correctly, libserialport internally uses semaphores to manage concurrent access to the device resource. Nonetheless, I'd like to use an actor.)</p>

<p>For this we have to initialize the serial interface, set the right baud rate and so on. Then we want to write/send and read/receive data.</p>

<p>The initialization, opening the serial device can look like this:</p>

<pre class="lisp"><code>(defparameter *serial* "/dev/cu.usbserial-143340")
(defparameter *serport* nil)

(defun open-serial (&optional (speed 19200))
  (setf *serport*
        (libserialport:open-serial-port
         *serial*
         :baud speed :bits 8 :stopbits 1 :parity :sp-parity-none
         :rts :sp-rts-off
         :flowcontrol :sp-flowcontrol-none)))</code></pre>

<p>The opened serial device will be stored in <code>*serport*</code>. The baud rate we need is 19200 and there should be no flow control and such. Just plain serial communication.</p>

<p>Now write and read will look like this:</p>

<pre class="lisp"><code>(defun read-serial ()
  (libserialport:serial-read-octets-until
   *serport*
   #\}
   :timeout 2000))

(defun write-serial (data)
  (libserialport:serial-write-data *serport* data))</code></pre>

<p>The read function utilizes the termination character because I already know that the boiler data uses start and end characters <code>{</code> and <code>}</code>. The timeout is used to terminate the read command in case there is no data available to read. When we queue the commands and a write follows a read, the write is delayed by at most 2 seconds. This might be acceptable in production because sending new commands doesn't need to be immediate.</p>

<p>Now let's see what the actor can look like in a simple way that works for this example:</p>

<pre class="lisp"><code>(defun receive (actor msg state)
  (case (car msg)
    (:init
     (open-serial))
    (:read
     (progn
       (let ((read-bytes (read-serial)))
         (when (&gt; (length read-bytes) 0)
           (format t "read: ~a~%" read-bytes)
           (format t "read string: ~a~%" (babel:octets-to-string read-bytes))))
       (tell actor msg)))
    (:write
     (write-serial (cdr msg))))
  (cons nil state))

(defvar *asys* (asys:make-actor-system))
(defparameter *serial-act* (ac:actor-of
                            *asys*
                            :receive (lambda (a b c) (receive a b c))))</code></pre>

<p>The last part creates the actor-system and a <code>*serial-act*</code> actor. Messages sent to the actor are pairs of a key (<code>:init</code>, <code>:read</code> or <code>:write</code>) and a payload. The payload is only used for <code>:write</code>, to transport the string to be written, and can be <code>nil</code> otherwise.</p>

<p>For the <code>:receive</code> key argument to the <code>actor-of</code> function we could just use <code>#'receive</code>, but then we couldn't make adjustments to the <code>receive</code> function and have them applied immediately when evaluated. <code>#'receive</code>, which is actually <code>(function receive)</code>, captures the function object at the time of evaluation, so later redefinitions of <code>receive</code> are not picked up. Wrapping the call in a lambda defers the lookup of <code>receive</code> to call time.</p>
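<p>As a minimal, self-contained sketch of this late-binding difference (the names are made up for illustration):</p>

<pre class="lisp"><code>(defun greet () :hello)

(defparameter *static-fn* #'greet)           ;; captures the function object now
(defparameter *late-fn* (lambda () (greet))) ;; looks up GREET at call time

(defun greet () :bye)                        ;; redefine GREET

(funcall *static-fn*) ;; => :HELLO, the previously captured definition
(funcall *late-fn*)   ;; => :BYE, the new definition</code></pre>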

<p>To initialize the serial device we do: </p>

<pre class="lisp"><code>(tell *serial-act* '(:init . nil))</code></pre>

<p>To write to the serial device we do:</p>

<pre class="lisp"><code>(tell *serial-act* '(:write . "Hello World"))</code></pre>

<p>Having done that we see in the 'Term' application the string &quot;Hello World&quot;. So this works.</p>

<p><img src="/static/gfx/blogs/term-hello.jpg" alt="term-hello" width="720" /></p>

<p>The read has one special behavior: sending <code>'(:read . nil)</code> will not only read from the device but also enqueue the same command again, because we want to test receiving data continuously while mixing in writes or other commands in between. This should reflect reality pretty well.</p>

<p>When I type something in the 'Term' program the REPL will print the read data:</p>

<pre class="nohighlight"><code>SERIAL&gt; (tell *serial-act* '(:write . "Hello World"))
T
SERIAL&gt; (tell *serial-act* '(:read . nil))
T
SERIAL&gt; 
read: #(13)
read string: 
read: #(104 101 108)
read string: hel
read: #(108 111 32 102 114 111 109 32 116 101 114 109)
read string: lo from term
read: #(105 110 97 108)
read string: inal
read: #(125)
read string: }
; No values
SERIAL&gt; (tell *serial-act* '(:write . "Hello World2"))
T
SERIAL&gt; 
read: #(102)
read string: f
read: #(111 111 125)
read string: oo}
; No values</code></pre>

<p>So this seems to work. I need to think about the next step now. Since I'd like to develop outside-in with a double test loop, the next thing to do is figure out a use-case and create a test for it that sets the bounds of what should be developed in smaller increments.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ House automation tooling - Part 1 - CL on MacOSX Tiger ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+1+-+CL+on+MacOSX+Tiger"></link>
        <updated>2022-03-07T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/House+automation+tooling+-+Part+1+-+CL+on+MacOSX+Tiger</id>
<content type="html"><![CDATA[ <p>In light of current events in the world - I mean the war in Ukraine - this article is totally unimportant.<br/>
Please stop this war and all wars in this world. Let's live together in peace. Let's be kind to one another.</p>

<h4>The project</h4>

<p>The goal of this project is to create a tool that can read sensor data from my ETA wood chip boiler (main heating) and push this data to my <a href="https://www.openhab.org" class="link" target="_blank">openHAB</a> system. I use openHAB as a hub for various other data. It has database integrations and can do visualization from the stored data.</p>

<p>This ETA boiler has a serial interface over which one can communicate with the boiler and retrieve temperature and other data. It also allows controlling the boiler, to a certain degree, by sending commands to it.</p>

<p>The tooling will be done in <a href="https://common-lisp.net" class="link" target="_blank">Common Lisp</a>.</p>

<p>For the hardware I want to utilize an old PowerPC iBook I still have lying around. So, Common Lisp should run on this old version of Mac OSX 10.4 (Tiger), including a library that can use the serial port. The data will eventually be sent via HTTP to a REST interface of openHAB. For that we will probably use <a href="https://edicl.github.io/drakma/" class="link" target="_blank">drakma</a>.</p>
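<p>Pushing a sensor value to openHAB could then look something along these lines (a rough sketch; host, port and item name are placeholders, and the exact REST endpoint should be verified against the openHAB documentation):</p>

<pre class="lisp"><code>(defun push-item-state (item value)
  ;; PUT the new state as plain text to the openHAB item resource
  (drakma:http-request
   (format nil "http://openhab.local:8080/rest/items/~a/state" item)
   :method :put
   :content-type "text/plain"
   :content (format nil "~a" value)))

;; e.g. (push-item-state "BoilerTemperature" 72.5)</code></pre>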

<p>Does the tool need a GUI? I thought so, maybe to show the current sensor data and to have buttons for sending commands. However, it turned out that a GUI is not that easy. I did look into TK (via the LTK Common Lisp bindings), but that didn't work out-of-the-box with the pre-installed TCL/TK 8.4 version. Since I use CCL we could probably use the Cocoa bindings it provides. So maybe I'll do that as a last step. For now we just use the REPL. That should be fully sufficient as a UI.</p>

<p>In this first part of the project I want to choose the Common Lisp implementation and spend some time getting <a href="https://www.quicklisp.org/beta/" class="link" target="_blank">quicklisp</a> and <a href="https://gitlab.common-lisp.net/asdf/asdf" class="link" target="_blank">ASDF</a> working, in order to download and work with additional libraries in a convenient way.</p>

<h4>TDD and the 'double loop'</h4>

<p>Once this initial research and proof of concept (with CL on this hardware and the serial port) is done, we'll continue developing this tool <em>guided by tests</em> (TDD). As in this <a href="/blog/Test-driven+Web+application+development+with+Common+Lisp" class="link" target="_blank">blog article</a>, we'll also try to do an outer acceptance test loop.</p>

<h4>Finding the Common Lisp implementation</h4>

<p>I've settled on CCL (Clozure Common Lisp). I did briefly look at an older version of SBCL but that went nowhere. CCL version 1.4 works nicely on this version of Mac OSX. Those older CCL versions can be downloaded <a href="https://ccl.clozure.com/ftp/pub/release/" class="link" target="_blank">here</a>.</p>

<h4>quicklisp and ASDF</h4>

<p>Now, in order to have some convenience I want quicklisp to work on this version of CCL. It doesn't work out-of-the-box because of the outdated ASDF version.</p>

<p>When we go through the standard quicklisp installation procedure <code>(quicklisp-quickstart:install)</code> the installation attempt bails out at this error:</p>

<pre class="nohighlight"><code>Read error between positions 173577 and 179054 in 
/Users/manfred/quicklisp/asdf.lisp.
&gt; Error: Could not load ASDF "3.0" or newer
&gt; While executing: ENSURE-ASDF-LOADED, in process listener(1).
&gt; Type :POP to abort, :R for a list of available restarts.
&gt; Type :? for other options.
</code></pre>

<p>There is no restart available that could overcome this. At this point, however, we already have an unfinished quicklisp installation at <code>~/quicklisp</code>.</p>

<p>What we try now is to </p>

<ul>
<li>download the latest version of ASDF from <a href="https://asdf.common-lisp.dev/archives/asdf.lisp" class="link" target="_blank">https://asdf.common-lisp.dev/archives/asdf.lisp</a></li>
<li>replace the old asdf.lisp version in the quicklisp folder with the new one (you can also rename the old one to 'asdf_old.lisp' or similar).</li>
</ul>

<p>Then, while being at the REPL we compile the new ASDF version:</p>

<ul>
<li>compile asdf: <code>(compile-file #P&quot;~/quicklisp/asdf.lisp&quot;)</code></li>
</ul>

<p>We are thrown into the debugger because this version of CCL does not have an exported function <code>delete-directory</code>. But there is a restart available (2) that allows us to 'Create and use the internal symbol CCL::DELETE-DIRECTORY'. Choosing this restart we can overcome the missing function error. It is possible though that this will limit the functionality of quicklisp or ASDF.</p>

<pre class="nohighlight"><code>&gt; Error: Reader error: No external symbol named "DELETE-DIRECTORY" 
&gt; in package #&lt;Package "CCL"&gt; .
&gt; While executing: CCL::%PARSE-TOKEN, in process listener(1).
&gt; Type :GO to continue, :POP to abort, :R for a list of available restarts.
&gt; If continued: Create and use the internal symbol CCL::DELETE-DIRECTORY
&gt; Type :? for other options.
? :R
&gt;   Type (:C &lt;n&gt;) to invoke one of the following restarts:
0. Return to break level 1.
1. #&lt;RESTART CCL:ABORT-BREAK #x294556&gt;
2. Create and use the internal symbol CCL::DELETE-DIRECTORY
3. Retry loading #P"/Users/manfred/quicklisp/asdf.lisp"
4. Skip loading #P"/Users/manfred/quicklisp/asdf.lisp"
5. Load other file instead of #P"/Users/manfred/quicklisp/asdf.lisp"
6. Return to toplevel.
7. #&lt;RESTART CCL:ABORT-BREAK #x294C0E&gt;
8. Reset this thread
9. Kill this thread

:C 2</code></pre>

<p>From here the compilation of ASDF can resume. When done we have a binary file right next to the lisp source file. We'll see shortly how to use it.</p>

<p>Now we have to finish our quicklisp installation by:</p>

<ul>
<li>manually loading 'setup.lisp': <code>(load #P&quot;~/quicklisp/setup.lisp&quot;)</code></li>
</ul>

<p>Once this is through we can instruct quicklisp to create an init file for CCL and add initialization code for quicklisp to it. This init file is loaded by CCL on every startup. We do this by calling:</p>

<ul>
<li><code>(ql:add-to-init-file)</code></li>
</ul>

<p>When this is done we can close the repl and modify the created <code>~/.ccl-init.lisp</code> init file by adding:</p>

<ul>
<li><code>#-asdf (load #P&quot;~/quicklisp/asdf&quot;)</code></li>
</ul>

<p>to the top of the file. This instruction will load the compiled binary of asdf (notice we use 'asdf' here instead of 'asdf.lisp' for the <code>load</code> function). The <code>#-</code> is a lisp reader instruction that basically says: if <code>asdf</code> is not part of <code>*features*</code> evaluate the following expression.</p>

<p>When the repl is fully loaded we can check <code>*features*</code>:</p>

<pre class="lisp"><code>? *features*
(:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX 
:ASDF-UNICODE :PRIMARY-CLASSES :COMMON-LISP :OPENMCL :CCL :CCL-1.2 :CCL-1.3 :CCL-1.4
:CLOZURE :CLOZURE-COMMON-LISP :ANSI-CL :UNIX :OPENMCL-UNICODE-STRINGS
:OPENMCL-NATIVE-THREADS :OPENMCL-PARTIAL-MOP :MCL-COMMON-MOP-SUBSET
:OPENMCL-MOP-2 :OPENMCL-PRIVATE-HASH-TABLES :POWERPC :PPC-TARGET :PPC-CLOS
:PPC32-TARGET :PPC32-HOST :DARWINPPC-TARGET :DARWINPPC-HOST :DARWIN-TARGET
:DARWIN-HOST :DARWIN-TARGET :POWEROPEN-TARGET :32-BIT-TARGET :32-BIT-HOST
:BIG-ENDIAN-TARGET :BIG-ENDIAN-HOST :DARWIN)</code></pre>

<p>There we are.</p>

<p>Let's check the installation by loading a library:</p>

<ul>
<li>load cl-gserver: <code>(ql:quickload :cl-gserver)</code></li>
</ul>

<pre class="nohighlight"><code>To load "cl-gserver":
  Load 1 ASDF system:
    cl-gserver
; Loading "cl-gserver"
[package cl-gserver.logif]........................
[package cl-gserver.atomic].......................
[package cl-gserver.config].......................
[package cl-gserver.wheel-timer]..................
[package cl-gserver.utils]........................
[package cl-gserver.actor]........................
[package cl-gserver.dispatcher]...................
[package cl-gserver.queue]........................
[package cl-gserver.messageb].....................
[package cl-gserver.eventstream]..................
[package cl-gserver.actor-system].................
[package cl-gserver.actor-context]................
[package cl-gserver.future].......................
[package cl-gserver.actor-cell]...................
[package cl-gserver.agent]........................
[package cl-gserver.tasks]........................
[package cl-gserver.router].......................
[package cl-gserver.agent.usecase-commons]........
[package cl-gserver.agent.hash]...................
[package cl-gserver.agent.array].
(:CL-GSERVER)</code></pre>

<p>The library was fully loaded and compiled properly. Ready for use.</p>

<p>Next stop is getting the serial port working.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Common Lisp - Oldie but goldie ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Common+Lisp+-+Oldie+but+goldie"></link>
        <updated>2021-12-18T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Common+Lisp+-+Oldie+but+goldie</id>
<content type="html"><![CDATA[ <p>This article is meant as a brief introduction to Common Lisp. Brief, because Common Lisp is a rather large and complex system with many features. I will try to concentrate on the basics and some exceptional features that stand out for me. I started writing it for myself in order to understand certain concepts better, like symbols. But it might be useful for others as well.</p>

<p><a id="orgbd1d8a8"></a></p>

<h4>How did I come to Common Lisp</h4>

<p>I have been working with various languages and runtimes since the start of my career 22 years ago. At the beginning of 2019 I wanted to find something else to look closely into that is not JVM based (which is what I have mostly been working with for close to 20 years, starting with Java 1.1).<br/>
For some reason, which I can't recall, I had never really been introduced to Lisps. I also can't recall why in 2019 I thought I should take a look at Lisps. So I took a look at Clojure first. Clojure is a great language but it was again on the JVM. I wanted something native, or at least some other runtime. After some excursions into Erlang, Elixir, LFE (Lisp Flavoured Erlang) and Scheme (the first three are extremely interesting as well) I finally found Common Lisp and didn't regret it.</p>

<p><a id="org1626589"></a></p>

<h4>Brief history</h4>

<p>First drafts of Common Lisp appeared in 1981/82. While mostly a successor of Maclisp, it tried to unify and standardize Maclisp and its various other successors. In 1994 Common Lisp became an ANSI standard.</p>

<p><a id="org6d8e1e5"></a></p>

<h4>Age advantages</h4>

<p>Since then the standard hasn't changed. That can of course be seen as a bad thing, when things don't change. But actually I believe it is a good thing. Common Lisp is even today surprisingly 'modern' and has many features of today's languages, partially even more features than 'modern' languages. And what it doesn't have can be added in the form of libraries so that it feels like part of the language.<br/>
Common Lisp is a quite large and complex package. After this long time there are of course some dusty corners. But all in all it is still very attractive and has an active community.<br/>
Because the standard hasn't changed since 1994, any code written since then in a portable way should still compile and run on newer compilers and runtime implementations (of which there are a few, see below).</p>

<h4>Content</h4>

<ul>
<li><a href="#orgbd62e83" class="link">Basics</a>

<ul>
<li><a href="#org0669756" class="link">Lists</a></li>
<li><a href="#orgfeceb96" class="link">Functions</a>

<ul>
<li><a href="#org7ce6981" class="link">Mandatory arguments</a></li>
<li><a href="#org59e6b50" class="link">Optional arguments</a></li>
<li><a href="#orgde5876c" class="link">Key arguments</a></li>
<li><a href="#org26456fd" class="link">Rest arguments</a></li>
<li><a href="#orga703ee0" class="link">Mixing arguments</a></li>
</ul></li>
<li><a href="#orgac51641" class="link">Lambdas</a></li>
<li><a href="#org2266afa" class="link">Macros</a></li>
<li><a href="#org0fb504c" class="link">Packages</a></li>
<li><a href="#org6d16428" class="link">Symbols</a>

<ul>
<li><a href="#org08a2a66" class="link">Unbound symbols</a></li>
<li><a href="#orgbe23e88" class="link">Bound symbols</a></li>
<li><a href="#orgcff0d95" class="link">The Lisp reader</a></li>
</ul></li>
</ul></li>
<li><a href="#org0e6fc54" class="link">Types</a>

<ul>
<li><a href="#org5bec702" class="link">Everything has a type</a></li>
<li><a href="#org19d1a44" class="link">Create new types</a></li>
<li><a href="#org7f1b0b1" class="link">Check for types</a>

<ul>
<li><a href="#org8c56ab4" class="link">check-type</a></li>
<li><a href="#orgaf49559" class="link">declaim</a></li>
</ul></li>
</ul></li>
<li><a href="#org8f1ca12" class="link">Error handling</a>

<ul>
<li><a href="#org21a3108" class="link">Conditions</a></li>
<li><a href="#orgecc7ced" class="link">unwind-protect</a></li>
<li><a href="#org3abe3e7" class="link">Handle condition with stack unwind</a></li>
<li><a href="#org50c740a" class="link">Restarts / Handle condition without stack unwind</a></li>
</ul></li>
<li><a href="#org0761f12" class="link">CLOS and object-oriented programming</a></li>
<li><a href="#orgab03075" class="link">Multi dispatch</a></li>
<li><a href="#orga5282f9" class="link">Debugging</a></li>
<li><a href="#orgb5c1978" class="link">Library management with Quicklisp</a></li>
<li><a href="#org36aa9a1" class="link">Runtimes/compilers (CCL, SBCL, ECL, Clasp, ABCL | LispWorks, Allegro)</a></li>
<li><a href="#org833736f" class="link">Image based</a>

<ul>
<li><a href="#org1b55d4e" class="link">image snapshot</a></li>
<li><a href="#org6ad489a" class="link">load from image</a></li>
</ul></li>
<li><a href="#org5f6afae" class="link">Functional programming</a></li>
<li><a href="#org1ce34e8" class="link">Resources</a></li>
</ul>

<p><a id="orgbd62e83"></a></p>

<h3>Basics</h3>

<p>Let me run through some of the basic features of Common Lisp. Those basic features are likely also available in other languages. Common Lisp has some unique features that I'll be talking about later.  </p>

<p><a id="org0669756"></a></p>

<h4>Lists</h4>

<p>Since the name 'Lisp' is an abbreviation of 'List Processing', we should have a quick look at lists. Lists are the cornerstone of the Lisp language because every Lisp construct/form is a list, also called an s-expression. A list is bounded by parentheses <code>(</code> and <code>)</code>. So <code>'(1 2 3 4)</code> is a list of the numbers 1, 2, 3 and 4. This list represents data. Lists representing data are usually quoted. Quoted means that this list, and the list elements, are not evaluated. The <code>'</code> in front of the first parenthesis denotes a quoted list. But <code>(myfunction &quot;abc&quot;)</code> is also a list, representing code, which is evaluated. By convention the first list element must either be a function name, a macro operator, a lambda expression or a special operator. <code>if</code> for example is a special operator. The other list elements are usually function or operator arguments. Lists can be nested. In most cases Lisp programs are trees of lists.</p>
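<p>The same parenthesized syntax is thus either data or code, depending on quoting:</p>

<pre class="lisp"><code>'(+ 1 2 3)      ;; => (+ 1 2 3), quoted: plain data, not evaluated
(+ 1 2 3)       ;; => 6, unquoted: evaluated as code
(list '+ 1 2 3) ;; => (+ 1 2 3), building the same data list at runtime</code></pre>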

<p><a id="orgfeceb96"></a></p>

<h4>Functions</h4>

<p>Functions are nothing special. Every language knows them. A simple function definition (which does nothing) looks like this:</p>

<pre class="lisp"><code>(defun my-fun ())
(my-fun)</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: NIL</code></pre>

<p>A function in Common Lisp always returns something, even if not explicitly. This simple function just returns <code>NIL</code>, which in Common Lisp has two meanings. a) it has a boolean meaning of <code>false</code> and b) it means the empty list equal to <code>'()</code>.  </p>

<p>Common Lisp provides a very sophisticated set of features to structure function arguments.  </p>

<p><a id="org7ce6981"></a></p>

<h5>Mandatory arguments</h5>

<p>Mandatory arguments are simply added to the list construct following the function name. This list construct that represents the arguments is commonly known as <em>lambda list</em>. In the following example <code>arg1</code> and <code>arg2</code> are mandatory arguments.</p>

<pre class="lisp"><code>(defun my-fun (arg1 arg2)
  (list arg1 arg2))
(my-fun "Hello" "World")</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| Hello | World |</code></pre>

<p><a id="org59e6b50"></a></p>

<h5>Optional arguments</h5>

<p>Optional arguments are defined using the <code>&amp;optional</code> keyword:</p>

<pre class="lisp"><code>(defun my-fun (arg1 &optional opt1 (opt2 "buzz" opt2-p))
  (list arg1 opt1 opt2 opt2-p))
(list
 (my-fun "foo")
 (my-fun "foo" "bar")
 (my-fun "foo" "bar" "my-buzz"))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| foo | NIL | buzz    | NIL |
| foo | bar | buzz    | NIL |
| foo | bar | my-buzz | T   |</code></pre>

<p>The first optional <code>opt1</code> does not have a default value, so if not given it'll be <code>NIL</code>. The second optional <code>opt2</code>, when not given, is populated with the default value &quot;buzz&quot;. The <code>opt2-p</code> predicate indicates whether the <code>opt2</code> parameter has been given or not. Sometimes this is useful in subsequent code.</p>

<p><a id="orgde5876c"></a></p>

<h5>Key arguments</h5>

<p><code>key</code> arguments are similar to named arguments in other languages. The ordering of <code>key</code> arguments is not important and is not enforced. They are defined with the <code>&amp;key</code> keyword:</p>

<pre class="lisp"><code>(defun my-fun (&key key1 (key2 "bar" key2-p))
  (list key1 key2 key2-p))
(list
 (my-fun)
 (my-fun :key1 "foo")
 (my-fun :key1 "foo" :key2 "buzz"))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| NIL | bar  | NIL |
| foo | bar  | NIL |
| foo | buzz | T   |</code></pre>

<p><code>key</code> arguments are optional. Similarly to <code>&amp;optional</code> arguments, a default value can be configured, as well as a predicate that indicates whether the parameter was provided or not. Defining <code>key2-p</code> is optional.</p>

<p><a id="org26456fd"></a></p>

<h5>Rest arguments</h5>

<p><code>rest</code> arguments are arguments that have not already been captured by mandatory, optional, or key arguments. So they form a rest, which is available in the function body as a list. In the example below they are defined by the <code>&amp;rest</code> keyword. <code>rest</code> arguments are sometimes useful for passing on to the <code>APPLY</code> function.</p>

<pre class="lisp"><code>(defun my-fun (arg1 &optional opt1 &rest rest)
        (list arg1 opt1 rest))
(list
 (my-fun "foo" :rest1 "rest1" :key1 "buzz")
 (my-fun "foo" "opt1" :rest1 "rest1" :key1 "buzz"))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| foo | :REST1 | (rest1 :KEY1 buzz)        |
| foo | opt1   | (:REST1 rest1 :KEY1 buzz) |</code></pre>
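<p>As a small sketch of passing a <code>rest</code> list on to <code>APPLY</code> (the function name is made up):</p>

<pre class="lisp"><code>(defun max-of (first &rest more)
  ;; APPLY spreads the MORE list into individual arguments of MAX
  (apply #'max first more))

(max-of 3 1 4 1 5) ;; => 5</code></pre>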

<p><a id="orga703ee0"></a></p>

<h5>Mixing arguments</h5>

<p>As you can see it is possible to mix <code>optional</code>, <code>key</code> and <code>rest</code> arguments. However, some care must be taken when mixing <code>optional</code> and <code>key</code>, because the key of a <code>key</code> argument could be taken as an <code>optional</code> argument. Similarly with <code>rest</code> and <code>key</code> arguments, as can be seen in the examples above. In most use-cases you'd have either <code>optional</code> or <code>key</code> together with mandatory arguments.</p>
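<p>A small sketch of the pitfall when mixing <code>&amp;optional</code> and <code>&amp;key</code>:</p>

<pre class="lisp"><code>(defun mixed (a &optional opt &key key)
  (list a opt key))

(mixed 1 2 :key 3) ;; => (1 2 3), as intended
(mixed 1 :key 3)   ;; :KEY is consumed as OPT, leaving an odd number of
                   ;; arguments for the &key parsing, which signals an error</code></pre>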

<p><a id="orgac51641"></a></p>

<h4>Lambdas</h4>

<p>Lambdas are anonymous functions created at runtime. Other than that they are similar to <code>defun</code>s, regular/named functions. They can be used in place of a function name like this:</p>

<pre class="lisp"><code>((lambda (x) x) "foo")  ;; returns "foo"</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: foo</code></pre>

<p>In this case the lambda is immediately evaluated: the function is applied to the value &quot;foo&quot;, represented as the argument x, and returns x.<br/>
In other cases, i.e. when a lambda is bound to a variable, one needs to invoke the lambda using <code>funcall</code>:</p>

<pre class="lisp"><code>(let ((my-fun (lambda (x) x)))
  (funcall my-fun "foo"))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: foo</code></pre>

<p>This is in contrast to Scheme, or other Lisp-1s, where <code>my-fun</code> could also be used in place of the function name and would just be evaluated as a function.<br/>
Common Lisp is a Lisp-2, which means that there are separate environments for variables and functions. In the above example <code>my-fun</code> is a variable. In order to evaluate it as a function one has to use <code>FUNCALL</code>.</p>

<p>Lambdas are first-class objects in Lisp which means they can be created at runtime, bound to variables and passed around as function arguments or function results:</p>

<pre class="lisp"><code>(defun my-lambda ()
  (lambda (y) y))
(list (type-of (my-lambda)) 
      (funcall (my-lambda) "bar"))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| function | bar |</code></pre>

<p>The &quot;Lambda-calculus&quot; (Alonzo Church, 1930s) is a mathematical formal system based on variables, function abstractions (lambda expressions) and function application using substitution. It can express any kind of computation and is Turing machine equivalent (it can be used to simulate a Turing machine).<br/>
So if one would stack/nest lambda expressions within lambda expressions and so on, where a lambda expression is bound to a variable and its computation again substitutes a variable, you would have such a lambda-calculus.<br/>
This is of course not very practical and hard to read, but this alone would be enough to calculate anything that is computable.</p>
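<p>As a tiny illustration of such nesting, a lambda returning a lambda, applied step by step via substitution:</p>

<pre class="lisp"><code>;; the outer application substitutes X with "a" and returns the inner lambda,
;; the second application then substitutes Y with "b"
(funcall (funcall (lambda (x) (lambda (y) (list x y))) "a") "b")
;; => ("a" "b")</code></pre>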

<p><a id="org2266afa"></a></p>

<h4>Macros</h4>

<p>Macros are an essential part of Common Lisp. One should not confuse Lisp macros with C macros, which just do textual replacement. Common Lisp macros are extremely powerful.<br/>
In short, macros are constructs that generate and/or manipulate code. Lisp macros still stand out in contrast to other languages because Lisp macros generate and manipulate ordinary Lisp code, whereas other languages use an AST (Abstract Syntax Tree) representation of the code and hence their macros must deal with the AST. In Lisp, Lisp is the AST. Lisp is homoiconic.</p>

<p>Macros are not easy to distinguish from functions. In programs one cannot see the difference. Many functions could be replaced by macros, but functions can usually not replace macros. There is a fundamental difference between the two.<br/>
The arguments to macros are passed in quoted form, meaning they are not evaluated (remember the lists as data above), whereas parameters to functions are first evaluated and the results passed to the function. The output of a macro is also quoted code. For example, let's recreate the <code>when</code> macro:</p>

<pre class="lisp"><code>(defmacro my-when (expr &body body)
  `(if ,expr ,@body))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: MY-WHEN</code></pre>

<p>When using the macro it prints:</p>

<pre class="nohighlight"><code>CL-USER&gt; (my-when (= 1 0)
           (print "Foo"))
NIL
CL-USER&gt; (my-when (= 1 1)
           (print "Foo"))
"Foo"</code></pre>

<p>The macro expands the <code>expr</code> and <code>body</code> arguments. Macros always (should) generate quoted Lisp code, that's why the result of a macro should be a quoted expression. Quoted expressions are not evaluated, they are just plain data (a list), so the macro expression can be replaced with the macro body wherever the macro is used.<br/>
We can expand the macro (using <code>MACROEXPAND</code>) to see what it would be replaced with. Let's have a look at this:</p>

<pre class="nohighlight"><code>CL-USER&gt; (macroexpand-1 '(my-when (= 1 1)
                          (print "Foo")))
(IF (= 1 1) (PRINT "Foo"))</code></pre>

<p>So we see that <code>my-when</code> is replaced with an <code>if</code> special form. As we said, a quoted expression is not evaluated, so if we used the <code>expr</code> argument directly in the quoted expression we would just get <code>(IF EXPR ...)</code>. But we want <code>expr</code> to be expanded here so that the right <code>if</code> form is created with what the user defined as the <code>if</code> test expression. The <code>,</code> 'escapes' the quoted expression and will expand the following form. <code>,expr</code> is thus expanded to <code>(= 1 1)</code> and <code>,@body</code> to <code>(print &quot;Foo&quot;)</code>. The <code>@</code> is special as it unwraps (splices) a list of expressions. Since the body of a macro can denote many forms, they are wrapped into a list for the <code>&amp;body</code> argument and hence have to be unwrapped again on expansion. I.e.:</p>

<pre class="lisp"><code>(my-when t
  (print "Foo")
  (print "Bar"))</code></pre>

<p>Here the two print forms represent the body of the macro and are wrapped into a list for the <code>&amp;body</code> argument like:</p>

<pre class="lisp"><code>((print "Foo")
 (print "Bar"))</code></pre>

<p>The <code>@</code> will remove the outer list structure.  </p>

<p><strong>when are macros expanded?</strong>  </p>

<p>Macros are expanded during the 'macro expansion' phase, which happens before compilation. So the Lisp compiler sees only the macro-expanded code.</p>

<p><a id="org0fb504c"></a></p>

<h4>Packages</h4>

<p>Packages are constructs, or namespaces, to separate and structure data and code, similar to other languages. <code>DEFPACKAGE</code> declares a new package. <code>IN-PACKAGE</code> makes the named package the current package. Any function, macro or variable definitions are then first of all local to the package they are defined in. Function, macro or variable definitions can be exported, which means they are then visible for/from other packages. A typical example of a package with some definitions would be:</p>

<pre class="lisp"><code>(defpackage :foo
  (:use :cl)
  (:import-from #:bar
                #:bar-fun
                #:bar-var)
  (:export #:my-var
           #:my-fun))
(in-package :foo)
    
(defparameter my-var "Foovar")
(defun my-fun () (print "Foofun"))
(defun my-internal-fun () (print "Internal"))</code></pre>

<p>A package is essentially a lookup table in which names of functions, variables, etc., represented as symbols (more on symbols later), refer to the objects that implement them. The function <code>MY-FUN</code> would be referred to with the package-qualified name <code>foo:my-fun</code>. The exported symbols are the public interface of the package. Using a double colon one can also refer to internal symbols, like <code>foo::my-internal-fun</code>, but that should be done with care since it accesses implementation details.<br/>
It is also possible to import specific symbols (functions, variables, etc.) from other packages using the <code>IMPORT</code> function or the <code>:import-from</code> option of <code>DEFPACKAGE</code>. Any package listed under <code>:use</code> is inherited by the defined package, so all symbols exported by those packages can be used without package qualification.  </p>
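<p>Assuming the package <code>FOO</code> from above is defined, this is how another package would access its symbols (the package name <code>BAZ</code> is made up for illustration):</p>

<pre class="lisp"><code>(defpackage :baz (:use :cl))
(in-package :baz)

(foo:my-fun)            ;; exported: single colon, package-qualified
(print foo:my-var)      ;; exported variable
(foo::my-internal-fun)  ;; internal: double colon, use with care</code></pre>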

<p><a id="org6d16428"></a></p>

<h4>Symbols</h4>

<p>Symbols are used almost everywhere in Common Lisp. They reference data and are data themselves: they can name variables or functions, and when used as data they serve as identifiers or enum-like values.  </p>
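<p>As a small, made-up illustration of symbols used as data, they can serve as enum-like values and be dispatched on with <code>CASE</code> (which compares with <code>EQL</code>; that works for symbols):</p>

<pre class="lisp"><code>;; Hypothetical example: symbols as enum-like values.
(defun traffic-action (light)
  (case light
    (red    "stop")
    (yellow "wait")
    (green  "go")))

(traffic-action 'red)  ;; =&gt; "stop"</code></pre>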

<p>We can create symbols by just typing <code>'foo</code> in the REPL. This creates a symbol with the name &quot;FOO&quot; (notice the uppercase). We can also create symbols with the function <code>INTERN</code>.  </p>

<p>Let's have a look at the structure of symbols by creating one from a string with the <code>INTERN</code> function.  </p>

<p><a id="org08a2a66"></a></p>

<h5>Unbound symbols</h5>

<pre class="lisp"><code>(intern "foo")</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: |foo|</code></pre>

<p>This symbol <code>foo</code> was created in the current package (<code>*PACKAGE*</code>). We can have a look at <code>*PACKAGE*</code> (in Emacs by just evaluating <code>*PACKAGE*</code> and clicking on the result):</p>

<pre class="nohighlight"><code>#&lt;PACKAGE #x30004000001D&gt;
--------------------
Name: "COMMON-LISP-USER"
Nick names: "CL-USER"
Use list: CCL, COMMON-LISP
Used by list: 
2 present symbols.
0 external symbols.
2 internal symbols.
1739 inherited symbols.
0 shadowed symbols.</code></pre>

<p>We'll see that there are 2 internal symbols. One of them is our newly created symbol <code>foo</code>. Let's drill further down to the internal symbols.</p>

<pre class="nohighlight"><code>#&lt;%PACKAGE-SYMBOLS-CONTAINER #x3020014B3FCD&gt;
--------------------
All internal symbols of package "COMMON-LISP-USER"

A symbol is considered internal of a package if it's
present and not external---that is if the package is
the home package of the symbol, or if the symbol has
been explicitly imported into the package.
    
Notice that inherited symbols will thus not be listed,
which deliberately deviates from the CLHS glossary
entry of `internal' because it's assumed to be more
useful this way.
    
  [Group by classification]
   
Symbols:                Flags:
----------------------- --------
foo                     --------</code></pre>

<p>So <code>foo</code> is listed as symbol. Let's look at <code>foo</code> in detail (in Emacs we can click on <code>foo</code>).</p>

<pre class="nohighlight"><code>#&lt;SYMBOL #x3020012F958E&gt;
--------------------
Its name is: "foo"
It is unbound.
It has no function value.
It is internal to the package: COMMON-LISP-USER [export] [unintern]
Property list: NIL</code></pre>

<p>Here we see the attributes of symbol <code>foo</code>. Symbols can be bound to variables, or they can have a function value (Common Lisp is a Lisp-2, which means it separates variables from function names. In a Lisp-1, like Scheme, one cannot have the same name for a variable and function), in which case they refer to a variable or function. Our symbol is neither, it's just a plain symbol.  </p>
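<p>Because Common Lisp is a Lisp-2, one and the same symbol can even name a variable and a function at the same time; a quick sketch:</p>

<pre class="lisp"><code>(defvar foo "I am the variable value")
(defun foo () "I am the function value")

foo                     ;; value slot    =&gt; "I am the variable value"
(foo)                   ;; function slot =&gt; "I am the function value"
(symbol-value 'foo)     ;; explicit access to the value slot
(symbol-function 'foo)  ;; explicit access to the function slot</code></pre>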

<p>We can get the name of the symbol by:</p>

<pre class="lisp"><code>(symbol-name (intern "foo"))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: foo</code></pre>

<p><a id="orgbe23e88"></a></p>

<h5>Bound symbols</h5>

<p>Whenever we define a global variable (as opposed to a lexical variable introduced with <code>LET</code>) or a function, we bind a symbol to that variable or function. Let's do this:</p>

<pre class="lisp"><code>;; create a variable definition in the current package
(defvar *X* "foo")</code></pre>

<p>When we look again in the current package <code>*PACKAGE*</code> we see an additional symbol:</p>

<pre class="nohighlight"><code>#&lt;%PACKAGE-SYMBOLS-CONTAINER #x3020014B3FCD&gt;
...
Symbols:                Flags:
----------------------- --------
*X*                     b-------
foo                     --------</code></pre>

<p>And it is flagged with &quot;b&quot;, meaning it is bound; see below.</p>

<pre class="nohighlight"><code>#&lt;SYMBOL #x30200145E2EE&gt;
--------------------
Its name is: "*X*"
It is a global variable bound to: "foo" [unbind]
It has no function value.
It is internal to the package: COMMON-LISP-USER [export] [unintern]
Property list: NIL</code></pre>

<p>The same can be done with functions. Defining a function with <code>DEFUN</code> will create a symbol in the current package whose function object is the function. Let's create a function: <code>(defun foo-fun ())</code> and look at the symbol:</p>

<pre class="nohighlight"><code>#&lt;%PACKAGE-SYMBOLS-CONTAINER #x3020015C0E8D&gt;
--------------------
Symbols:                Flags:
----------------------- --------
FOO-FUN                 -f------
    
#&lt;SYMBOL #x3020014D1C4E&gt;
--------------------
Its name is: "FOO-FUN"
It is unbound.
It is a function: #&lt;Compiled-function FOO-FUN #x3020014D0A8F&gt; [unbind]</code></pre>

<p><a id="orgcff0d95"></a></p>

<h5>The Lisp reader</h5>

<p>When a Lisp file, or some input at the REPL, is read, it is at first just a sequence of characters. The <em>reader</em> turns what it reads into objects and symbols, and stores the symbols (using <code>INTERN</code>) into the current package. It also applies rules for how a character sequence is converted into a symbol name; by default all characters are upcased, so e.g. a function name &quot;foo&quot; creates a symbol with the name <code>FOO</code>.<br/>
It is also possible to have literal, case-sensitive symbol names. We saw that when we created the symbol <code>|foo|</code> above: the reader prints vertical bars around &quot;foo&quot; to indicate that the symbol name is literally &quot;foo&quot;, because we bypassed the case-conversion rules by calling <code>INTERN</code> directly. Had we written <code>(intern &quot;FOO&quot;)</code> we wouldn't see the vertical bars.  </p>
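<p>The reader's case conversion can be observed directly in the REPL:</p>

<pre class="lisp"><code>(eq 'foo (intern "FOO"))  ;; =&gt; T, the reader upcased foo to FOO
(eq 'foo (intern "foo"))  ;; =&gt; NIL, |foo| is a different symbol
(eq 'foo '|FOO|)          ;; =&gt; T, the bars just make the name literal</code></pre>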

<p>Let's make an example with a function. Say, we are in a package <code>MY-P</code> and we define a function:</p>

<pre class="lisp"><code>(defun my-fun () "fun")</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: MY-FUN</code></pre>

<p>The REPL responds with <code>MY-FUN</code>; this is the symbol returned by the function definition and added to the package. When we now want to execute the function we write <code>(my-fun)</code>. When the reader reads &quot;my-fun&quot;, it uses <code>INTERN</code> to either create the symbol or retrieve it if it already exists. Here it is retrieved, because <code>DEFUN</code> already created the symbol (implicitly, through the reader) and attached a function object to it. That attached function object is then executed.  </p>

<p><a id="org0e6fc54"></a></p>

<h3>Types</h3>

<p>Even though Common Lisp is not statically typed it has types. In fact everything in Common Lisp has a type.  </p>

<p><a id="org5bec702"></a></p>

<h4>Everything has a type</h4>

<p>And there are no primitive (non-object) values as there are in Java.</p>

<pre class="lisp"><code>(defun my-fun ())
(list
 (type-of 5)
 (type-of "foo")
 (type-of #\a)
 (type-of 'foo)
 (type-of #(1 2 3))
 (type-of '(1 2 3))
 (type-of (cons 1 2))
 (type-of (lambda () "fun"))
 (type-of #'my-fun)
 (type-of (make-condition 'error)))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| (INTEGER 0 1152921504606846975) |
| (SIMPLE-BASE-STRING 3)          |
| STANDARD-CHAR                   |
| SYMBOL                          |
| (SIMPLE-VECTOR 3)               |
| CONS                            |
| CONS                            |
| FUNCTION                        |
| FUNCTION                        |
| ERROR                           |</code></pre>

<p><a id="org19d1a44"></a></p>

<h4>Create new types</h4>

<p>There are different ways to create new types. One is to simply create a new structure or class: new structure types are created with <code>DEFSTRUCT</code>, and <code>DEFCLASS</code> creates a new class type.</p>

<pre class="lisp"><code>(defstruct address 
  (street "" :type string)
  (streetnumber nil :type integer)
  (plz nil :type integer))
(type-of (make-address 
          :street "my-street"
          :streetnumber 1
          :plz 51234))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: ADDRESS</code></pre>

<p>The <code>:type</code> specified in <code>DEFSTRUCT</code> is optional, but when provided the type is checked when creating a new structure.<br/>
<code>DEFCLASS</code> can be used instead of <code>DEFSTRUCT</code>. If you build object-oriented software and want full inheritance, use <code>DEFCLASS</code>: structs only support single inheritance (via <code>:include</code>), while classes support multiple inheritance. Classes can also be redefined at runtime, with existing instances being updated, which structs can't do.  </p>
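<p>A minimal sketch of an address as a class, extended via inheritance (the class and slot names are made up):</p>

<pre class="lisp"><code>(defclass address ()
  ((street :initarg :street :accessor street)
   (plz    :initarg :plz    :accessor plz)))

;; Subclassing: the domain of DEFCLASS.
(defclass company-address (address)
  ((company :initarg :company :accessor company)))

(street (make-instance 'company-address
                       :street "my-street" :plz 51234 :company "ACME"))
;; =&gt; "my-street"</code></pre>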

<p><code>DEFTYPE</code> allows creating new types as combinations of existing types. Let's create a new type that represents the numbers greater than 10 and up to 50.</p>

<pre class="lisp"><code>(defun 10-50-number-p (n)
  (and (numberp n)
       (&gt; n 10)
       (&lt;= n 50)))
(deftype 10-50-number ()
  `(satisfies 10-50-number-p))</code></pre>

<p>This snippet creates a predicate function that ensures the argument is a number greater than 10 and at most 50 (excluding 10, including 50). The type definition then uses <code>SATISFIES</code> with the given predicate function to check the type. So we can then say:</p>

<pre class="lisp"><code>(list
 (typep 10 '10-50-number)
 (typep 11 '10-50-number)
 (typep 50 '10-50-number)
 (typep 51 '10-50-number))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| NIL | T | T | NIL |</code></pre>

<p>The results show that the middle two values satisfy the type and the outer two do not.  </p>

<p><a id="org7f1b0b1"></a></p>

<h4>Check for types</h4>

<p>Types can be checked at runtime, or partially at compile time (SBCL has some static type-checking capability). Checking types usually makes most sense for function parameters but can be done anywhere.  </p>

<p><a id="org8c56ab4"></a></p>

<h5>check-type</h5>

<p><code>CHECK-TYPE</code> is used for runtime checks. Considering the <code>10-50-number</code> type from above, it can be used as follows:</p>

<pre class="lisp"><code>(defun add-10-50-nums (n1 n2)
  (check-type n1 10-50-number)
  (check-type n2 10-50-number)
  (+ n1 n2))</code></pre>

<p>If we call this as <code>(add-10-50-nums 10 11)</code> we get a type error raised:</p>

<pre class="nohighlight"><code>The value 10 is not of the expected type 10-50-NUMBER.
   [Condition of type TYPE-ERROR]</code></pre>

<p>Under the hood, <code>CHECK-TYPE</code> is essentially a wrapper around <code>ASSERT</code>, with a restart that allows storing a new value.  </p>

<p><a id="orgaf49559"></a></p>

<h5>declaim</h5>

<p>With <code>DECLAIM</code> one can make global declarations for variables and functions. To declare the type of our function we'd write:</p>

<pre class="lisp"><code>(declaim (ftype (function (10-50-number 10-50-number) 10-50-number) add-10-50-nums))
(defun add-10-50-nums (n1 n2)
  (+ n1 n2))</code></pre>

<p>This declares the input and output types of the function <code>ADD-10-50-NUMS</code>. However, it does not add runtime type checks, and whether the types are checked at compile time depends on the Common Lisp implementation: SBCL checks them, CCL doesn't, in which case the declaration serves as documentation only.  </p>

<p>It's not very readable though. The library <a href="https://github.com/ruricolist/serapeum/blob/master/REFERENCE.md#types" class="link">Serapeum</a> adds some syntactic sugar to make this nicer. For example, the <code>DECLAIM</code> from above can be written as:</p>

<pre class="lisp"><code>(-&gt; add-10-50-nums (10-50-number 10-50-number) 10-50-number)</code></pre>

<p><a id="org8f1ca12"></a></p>

<h3>Error handling</h3>

<p>Common Lisp has a unique error-handling feature: restarts. We will see some examples later; let's first look at what conditions are.  </p>

<p><a id="org21a3108"></a></p>

<h4>Conditions</h4>

<p>Conditions are objects of type <code>condition</code>. The CLHS says: &quot;an object which represents a situation&quot;. So conditions are far more than errors; any situation can be conveyed by a condition. While a condition itself can represent a situation like an error, there are multiple ways to raise a condition and multiple ways to handle one, depending on the need. For example: an error condition can be merely signaled (using <code>SIGNAL</code>), in which case not much happens if the condition is not handled at all; <code>SIGNAL</code> just returns <code>NIL</code> in that case. However, when an error condition is raised using <code>ERROR</code>, it must be handled, otherwise the runtime brings up the debugger. There is also <code>WARN</code>, which prints a warning message if the condition is not handled.  </p>
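<p>A quick sketch of the different ways to raise a condition (the condition name is made up):</p>

<pre class="lisp"><code>(define-condition my-condition () ())

(signal 'my-condition)        ;; no handler anywhere: returns NIL
(warn "something looks off")  ;; prints a warning, then returns NIL
;; (error 'my-condition)      ;; unhandled: drops into the debugger</code></pre>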

<p><a id="orgecc7ced"></a></p>

<h4>unwind-protect</h4>

<p><code>UNWIND-PROTECT</code> is similar to a try-finally in other languages, Java for example. It guarantees that a clean-up form runs even when the protected form exits abnormally, for example because a condition unwinds the stack.</p>

<pre class="lisp"><code>(defun do-stuff ())
(defun clean-up ())
    
(unwind-protect
     (do-stuff)  ;; may raise a condition
  (clean-up))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
: NIL</code></pre>

<p><a id="org3abe3e7"></a></p>

<h4>Handle condition with stack unwind</h4>

<p><code>HANDLER-CASE</code> is a bit more sophisticated than <code>UNWIND-PROTECT</code>: it allows differentiating on the raised condition and handling each case differently. This is comparable to a try-catch (in Java or elsewhere). This is nothing special really, so let's move on to restarts.  </p>
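<p>Still, for completeness, a <code>HANDLER-CASE</code> sketch that differentiates on the condition type; the <code>:no-error</code> clause runs only when no condition was raised:</p>

<pre class="lisp"><code>(handler-case (error 'type-error :datum "5" :expected-type 'number)
  (type-error (c)
    (format t "caught a type-error: ~a~%" c))
  (error (c)
    (format t "caught some other error: ~a~%" c))
  (:no-error (result)
    (format t "no error, result: ~a~%" result)))</code></pre>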

<p><a id="org50c740a"></a></p>

<h4>Restarts / Handle condition without stack unwind</h4>

<p>Restarts are a unique feature of Common Lisp that I have not seen elsewhere (though that doesn't necessarily mean much). They allow handling conditions without unwinding the stack. If a condition is not handled by a handler, the runtime drops you into the debugger with restart options, where the user can choose an available way to continue. Let's make a very simple example to show how it works:</p>

<pre class="lisp"><code>(define-condition my-err1 () ())
(define-condition my-err2 () ())
(define-condition my-err3 () ())
(define-condition my-err4 () ())
    
(defun lower (err-cond)
  (restart-case
      (error err-cond)
    (restart-case1 (&optional arg)
      (format t "restart-case1 arg:~a~%" arg))
    (restart-case2 (&optional arg)
      (format t "restart-case2 arg:~a~%" arg))
    (restart-case3 (&optional arg)
      (format t "restart-case3 arg:~a~%" arg))))
    
(defun higher ()
  (handler-bind
      ((my-err1 (lambda (c)
                  (format t "condition: ~a~%" c)
                  (invoke-restart 'restart-case1 "foo1")))
       (my-err2 (lambda (c)
                  (format t "condition: ~a~%" c)
                  (invoke-restart 'restart-case2 "foo2")))
       (my-err3 (lambda (c)
                  (format t "condition: ~a~%" c)
                  (invoke-restart 'restart-case3 "foo3"))))
    (lower 'my-err1)
    (lower 'my-err2)
    (lower 'my-err3)
    (lower 'my-err4)))</code></pre>

<p>In the example <code>HIGHER</code> calls <code>LOWER</code>. <code>LOWER</code> immediately raises a condition with <code>ERROR</code>; normally you'd of course have some other code here that raises the conditions instead. To set up restarts one uses <code>RESTART-CASE</code> wherever there is potentially a way to get out of a situation without losing the context. A <code>RESTART-CASE</code> actually looks very similar to a <code>HANDLER-CASE</code>. The restart cases can take arguments that are passed in from a calling module; in our case here the restart cases just dump a string to stdout.<br/>
The magic in <code>HIGHER</code> that actually invokes the restarts is <code>HANDLER-BIND</code>. It makes it possible to automatically invoke restarts by differentiating on the condition. The restart cases are invoked with <code>INVOKE-RESTART</code>, which also allows passing arguments to the restart handler that could form the basis for resuming the computation. If no condition handler is bound, the condition bubbles further up the call chain, so it's possible to bind condition handlers on different levels, where a higher level possibly has more oversight to decide which restart to use.<br/>
Executing <code>HIGHER</code> gives the following output:</p>

<pre class="nohighlight"><code>CL-USER&gt; (higher)
condition: Condition #&lt;MY-ERR1 #x302001398D9D&gt;
restart-case1 arg:foo1
condition: Condition #&lt;MY-ERR2 #x30200139886D&gt;
restart-case2 arg:foo2
condition: Condition #&lt;MY-ERR3 #x30200139833D&gt;
restart-case3 arg:foo3</code></pre>

<p>This output is from calling the <code>LOWER</code> function with the condition types <code>MY-ERR1</code>, <code>MY-ERR2</code> and <code>MY-ERR3</code>. When we call <code>LOWER</code> with <code>MY-ERR4</code> we are dropped into the debugger, because there is no condition handler for <code>MY-ERR4</code>. But in this case that's exactly what we want. The debugger now offers the three restarts we have set up (plus some standard ones). So we see:</p>

<pre class="nohighlight"><code>Condition #&lt;MY-ERR4 #x302001445A7D&gt;
   [Condition of type MY-ERR4]
    
Restarts:
 0: [RESTART-CASE1] #&lt;RESTART RESTART-CASE1 #x251B7B8D&gt;
 1: [RESTART-CASE2] #&lt;RESTART RESTART-CASE2 #x251B7BDD&gt;
 2: [RESTART-CASE3] #&lt;RESTART RESTART-CASE3 #x251B7C2D&gt;
 3: [RETRY] Retry SLY mREPL evaluation request.
 4: [*ABORT] Return to SLY's top level.
 5: [ABORT-BREAK] Reset this thread
 --more--
    
Backtrace:
 0: (LOWER MY-ERR4)
 1: (HIGHER)
 2: (CCL::CALL-CHECK-REGS HIGHER)
 3: (CCL::CHEAP-EVAL (HIGHER))
 4: ((:INTERNAL SLYNK-MREPL::MREPL-EVAL-1))
 --more--</code></pre>

<p>We could now manually choose one of our restarts to have the program continue in a controlled way, for example by retrying some operation with a different set of parameters.  </p>

<p><a id="org0761f12"></a></p>

<h3>CLOS and object-oriented programming</h3>

<p>CLOS (Common Lisp Object System) is the object-oriented class system (or framework) in Common Lisp. It has a separate name, but it is part of the Common Lisp standard and part of every Common Lisp runtime. In very basic terms, it allows defining classes using <code>DEFCLASS</code>. CLOS supports multiple inheritance. Classes are structures keeping state, but they don't carry behavior as such (and that's a good thing); behavior is added to classes with generic functions. There is some default behavior, like <code>INITIALIZE-INSTANCE</code>, <code>PRINT-OBJECT</code>, etc., which is defined as generic functions. This default behavior of classes is defined by <strong>meta-classes</strong>: classes that define classes. A pretty powerful thing, as it would allow me to create my own base-class behavior. Comparing this to Java, one could very remotely say it is like creating a new <code>Object</code> class that behaves differently than the default <code>Object</code> class.  </p>

<p>Methods on generic functions can be overridden. This is done by providing method definitions (<code>DEFMETHOD</code>) that specialize their parameters on concrete object types which are subclasses of some class. Say I have a class Person and a method definition that works on Person objects. To override this method I'd define a method that works on, say, an Employee object type, a subclass of Person. It's then possible to also call the implementation of the superclass using <code>CALL-NEXT-METHOD</code> (see the 'Multi dispatch' chapter; <code>float</code> is a subtype of <code>number</code>). Though overriding behavior like that is something one should try to avoid these days. Composition over inheritance is popular, and not without reason: very deep inheritance graphs are considered problematic. For one, it's harder to reason about the methods and what they do; for another, inheritance couples code more tightly than composition does.  </p>
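<p>The Person/Employee example might look like this as a sketch:</p>

<pre class="lisp"><code>(defclass person () ())
(defclass employee (person) ())  ;; Employee is a subclass of Person

(defgeneric describe-role (obj))

(defmethod describe-role ((obj person))
  "a person")

(defmethod describe-role ((obj employee))
  ;; CALL-NEXT-METHOD invokes the less specific PERSON method.
  (format nil "an employee, and also ~a" (call-next-method)))

(describe-role (make-instance 'employee))
;; =&gt; "an employee, and also a person"</code></pre>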

<p><a id="orgab03075"></a></p>

<h3>Multi dispatch</h3>

<p>Multiple (dynamic) dispatch is not something all languages have (some do), but it's quite handy. In Common Lisp it is tied to generic functions. Let's have a look:</p>

<pre class="lisp"><code>(defgeneric print-my-object (obj))
    
(defmethod print-my-object ((obj number))
  (format nil "printing number: ~a~%" obj))
    
(defmethod print-my-object ((obj float))
  (format nil "printing float number: ~a, ~a~%" obj (call-next-method)))
    
(defmethod print-my-object ((obj string))
  (format nil "printing string: ~a~%" obj))
    
(defmethod print-my-object ((obj keyword))
  (format nil "printing keyword: ~a~%" obj))
    
(list
 (print-my-object "foo")
 (print-my-object :foo)
 (print-my-object 5)
 (print-my-object .5))</code></pre>

<pre class="nohighlight"><code>+RESULTS:
| printing string: foo                             |
| printing keyword: FOO                            |
| printing number: 5                               |
| printing float number: 0.5, printing number: 0.5 |</code></pre>

<p>Isn't this cool? This works with objects of classes defined with <code>DEFCLASS</code>, structures defined with <code>DEFSTRUCT</code>, even conditions; in fact with objects of any type, including built-in ones. There is just an implicit type check happening on the argument. But there is a certain performance downside: the runtime has to determine which method to call by comparing types at runtime.  </p>
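<p>The example above dispatches on a single argument. Generic functions can specialize several parameters at once, which is actual multiple dispatch; a sketch with made-up methods:</p>

<pre class="lisp"><code>(defgeneric collide (a b))

(defmethod collide ((a number) (b number))
  "number meets number")

(defmethod collide ((a number) (b string))
  "number meets string")

(defmethod collide ((a string) (b string))
  "string meets string")

(list (collide 1 2)       ;; =&gt; "number meets number"
      (collide 1 "two")   ;; =&gt; "number meets string"
      (collide "1" "2"))  ;; =&gt; "string meets string"</code></pre>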

<p><a id="orga5282f9"></a></p>

<h3>Debugging</h3>

<p>As a TDD'er (Test-Driven Development) I don't use debugging facilities much, in Common Lisp or in other languages: the TDD increments are so small and the feedback so immediate that I have rarely needed a debugger in recent years.<br/>
However, there are two facilities I'd like to mention. One I use sometimes: <code>TRACE</code>. Trace allows tracing specific functions with their inputs and outputs. Say we have a function <code>FOO</code>:</p>

<pre class="lisp"><code>(defun foo (arg)
  (format nil "hello ~a" arg))</code></pre>

<p>We can now enable the tracing of it by saying <code>(trace foo)</code>.<br/>
When we now call <code>FOO</code> we'll see:</p>

<pre class="nohighlight"><code>CL-USER&gt; (foo "world")
0&gt; Calling (FOO "world") 
&lt;0 FOO returned "hello world"
"hello world"</code></pre>

<p>Another thing which I'd like to mention is <code>BREAK</code>. <code>BREAK</code> enters the debugger when placed in the source code. When we have the function:</p>

<pre class="lisp"><code>(defun foo (arg)
  (break))</code></pre>

<p>and call <code>FOO</code> the debugger will open and we can get a glimpse at the stack trace and can inspect the variables.</p>

<pre class="nohighlight"><code>Break
   [Condition of type SIMPLE-CONDITION]
    
Restarts:
 0: [CONTINUE] Return from BREAK.
 1: [RETRY] Retry SLY mREPL evaluation request.
 2: [*ABORT] Return to SLY's top level.
 3: [ABORT-BREAK] Reset this thread
 4: [ABORT] Kill this thread
    
Backtrace:
 0: (FOO "world")
 1: ((CCL::TRACED FOO) "world")
 2: (CCL::CALL-CHECK-REGS FOO "world")
 3: (CCL::CHEAP-EVAL (FOO "world"))
 4: ((:INTERNAL SLYNK-MREPL::MREPL-EVAL-1))
 --more--</code></pre>

<p>In Sly/Slime the Backtrace elements can be opened and further inspected. This is quite handy sometimes.  </p>

<p><a id="orgb5c1978"></a></p>

<h3>Library management with Quicklisp</h3>

<p>Library (dependency) management came quite late to Common Lisp. Apache Maven in the Java world has existed since 2004 and was probably one of the first of its kind; <a href="https://www.quicklisp.org/beta/" class="link">Quicklisp</a> exists since 2010 (as far as I could research). Nowadays remote and local library version management is common and even supports GitHub (or Git) repository URLs directly. Quicklisp, however, is different: while others let you choose arbitrary versions, Quicklisp is distribution based, remotely comparable to the package management of Linux distributions. This has pros and cons. The pro is consistency: a library's transitive dependencies are all resolved from the same distribution. In the Java world many speak of the jar-hell, which arises when your project depends on a library directly while another direct dependency pulls in a different version of that same library, so you end up with multiple versions in the classpath (the first one found by the class loader wins). This cannot happen with a Quicklisp distribution. If you do need different versions than the distribution provides, there are two ways: a) <a href="https://github.com/fukamachi/qlot" class="link">Qlot</a>, which allows locking certain versions for a project, or b) cloning individual projects into Quicklisp's 'local-projects' folder; projects in there take precedence over what the distribution offers. So updated (or downgraded) versions can still be used without getting into jar-hell.</p>

<p>One other nice thing about Quicklisp is that you can load libraries directly in the REPL and just use them. Once Quicklisp is installed and made available at REPL start, you can say <code>(ql:quickload :cl-gserver)</code> and it will load the library into the image, ready to use. This is a big plus; it makes it extremely simple to just try out some code in the REPL.  </p>

<p><a id="org36aa9a1"></a></p>

<h3>Runtimes/compilers (CCL, SBCL, ECL, Clasp, ABCL | LispWorks, Allegro)</h3>

<p>Common Lisp is available in quite a few implementations, all with different features. Historically there were many implementations, several of which started at universities. Some were and are open-source, some were commercial but have been open-sourced, and some remain commercial. Some are still maintained, some are not and will only work on older systems.<br/>
The currently most popular one, I would say, is <a href="http://www.sbcl.org/" class="link">SBCL</a>, a fork of <a href="https://cmucl.org/" class="link">CMUCL</a>. SBCL is fast and can do static type checks (see above). I use SBCL myself for production. For development I use <a href="https://ccl.clozure.com/" class="link">CCL</a>. CCL is not as strict as SBCL; developing with it is a bit smoother IMO, but that can also lead to weird effects sometimes. Its compiler is said to be faster than SBCL's, which I think is true, but the produced code is by far not as fast as SBCL's. CCL descends from the commercial product MCL (Macintosh Common Lisp). In fact I still have a version of MCL on my old PowerMac with MacOS 9, which still runs fine. But CCL is not limited to Apple; it works on Windows and Linux, too.<br/>
<a href="https://common-lisp.net/project/ecl/main.htm" class="link">ECL</a>, for Embeddable Common Lisp, probably has the largest supported hardware and OS base; there aren't many systems where ECL is not available. Due to what ECL is geared for, namely being easily embedded in applications, it doesn't work with images (see 'Image based'). It is also quite slow, but it is actively maintained and certainly has its use cases.<br/>
<a href="https://github.com/clasp-developers/clasp" class="link">Clasp</a> is relatively new. I believe it reuses some of ECL but is otherwise different: it uses the LLVM backend with the goal of easily using LLVM-based libraries (like C++ libraries). Clasp, as far as I have followed the project, has been usable for a good while, but you have to compile it yourself (which isn't difficult). More work is being done on performance optimizations.<br/>
<a href="https://abcl.org/" class="link">ABCL</a> started out as a scripting engine for a Java editor application. It has come a long way and is today a full-featured Common Lisp that runs on the JVM. It even implements JSR-223 (the Java scripting API) and has nice, though not as good, Java interop compared to Clojure. It is not super fast, but very robust thanks to the battle-proven Java runtime.<br/>
There are more, less actively maintained implementations of Common Lisp, like <a href="https://clisp.sourceforge.io/" class="link">Clisp</a> or <a href="https://www.gnu.org/software/gcl/" class="link">GCL</a>.<br/>
Then there are the commercial products <a href="https://franz.com/products/allegrocl/" class="link">Allegro CL</a> and <a href="http://www.lispworks.com/index.html" class="link">LispWorks</a>. Both come with sophisticated IDEs and many features but are not cheap. Check them out; limited but free editions are available.  </p>

<p><a id="org833736f"></a></p>

<h3>Image based</h3>

<p>Common Lisp is (usually) an image-based system. The only other image-based system I know is Smalltalk; I haven't seen this in younger languages and runtimes. What is it? When you start a Common Lisp system, usually the REPL, then everything you do, like creating variables and functions, creates or manipulates state in the runtime memory; so far no different from other runtimes. The difference is that Common Lisp allows creating a snapshot (an image) of that runtime memory with all its state and storing it to disk. It's then possible to start the REPL with that image and all state is recovered; you could even reconnect to servers, reopen files, and so on. Since everything is just variables and functions structured in packages, the REPL can load multiple applications, so you can prepare ready-made images that give you a head start when beginning to work. In fact, all Common Lisps that support images start with an image when running the REPL; it's just an empty, or default, image.  </p>

<p><a id="org1b55d4e"></a></p>

<h4>image snapshot</h4>

<p>To give this a quick run, create a variable like this: <code>(defparameter *foo* &quot;Hello World&quot;)</code>. Then save the image; in CCL: <code>(ccl:save-application filename)</code> (this may differ on other implementations).  </p>
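<p>For comparison, SBCL's counterpart is <code>SAVE-LISP-AND-DIE</code>, which saves the image and then exits the process:</p>

<pre class="lisp"><code>;; SBCL equivalent of CCL's SAVE-APPLICATION; exits after saving.
(sb-ext:save-lisp-and-die "foo-sbcl.core")</code></pre>

<p>The saved core can then be loaded with <code>sbcl --core foo-sbcl.core</code>.</p>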

<p><a id="org6ad489a"></a></p>

<h4>load from image</h4>

<p>To load the image, start CCL with <code>-I</code>, like <code>ccl -I foo-ccl.image</code>.<br/>
Then evaluate the variable <code>*foo*</code> and you'll see &quot;Hello World&quot;.  </p>

<p><a id="org5f6afae"></a></p>

<h3>Functional programming</h3>

<p>If you are interested in functional programming with Common Lisp then I'd want to redirect you to my <a href="/blog/Functional+Programming+in+(Common)+Lisp" class="link">blog post</a> on it.  </p>

<p><a id="org1ce34e8"></a></p>

<h3>Resources</h3>

<p>Much of the information here comes from my own experience and the web pages mentioned and linked above, but also from books and references like:  </p>

<ul>
<li><a href="https://gigamonkeys.com/book/" class="link">Practical Common Lisp</a></li>
<li><a href="https://lispcookbook.github.io/cl-cookbook/" class="link">Common Lisp Cookbook</a></li>
<li><a href="http://www.lispworks.com/documentation/HyperSpec/Front/Help.htm" class="link">Common Lisp HyperSpec</a></li>
</ul>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Functional Programming in (Common) Lisp ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Functional+Programming+in+(Common)+Lisp"></link>
        <updated>2021-05-29T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Functional+Programming+in+(Common)+Lisp</id>
        <content type="html"><![CDATA[ <h3>Intro</h3>

<p>Functional programming (FP) has again been getting popular in recent years. Why again? Because the FP paradigm is very old: early languages of the 1950s/60s, like Lisp and APL, were already functional. Later, imperative languages like C, Pascal, etc. replaced functional languages to some degree; for some years now, functional languages have been gaining popularity again.</p>

<p>This article will describe some key concepts of functional programming. While many predominantly functional programming languages exist (like ML, Erlang, Haskell, Clojure), which lock you into a functional style of programming, this article will describe some techniques that allow functional programming in multi-paradigm languages. The language of choice for this article is Common Lisp, a well-established industry standard that, despite its age, provides a broad range of modern language features. The techniques apply to any language that provides lambdas and functions as objects (like Java, Python, C#, Scala, etc.), which is a key requirement for functional programming.</p>

<h3>What is the difference between the two styles?</h3>

<h4>Functional programming</h4>

<p>As the name suggests, FP is all about functions. About composing functions and about applying those functions to data to transform the data. Functions can be anonymous functions created at runtime or named functions. FP is declarative. It is more important to say <em>what</em> should be done than <em>how</em>. For instance, FP languages have higher-order functions that operate on lists. You tell this function <em>what</em> transformation you want to have for each list element by supplying a function that is called with each list element and <strong>transforms</strong> it, rather than manually writing a loop construct (<code>while</code>, <code>for</code>, etc.) that iterates over a list and implicitly does the transformation as part of the loop construct.<br/>
Pure functions are a key element of FP. Pure functions are <em>referentially transparent</em>, which means that they always produce the same output for the same input; such a function call could therefore be replaced with just its output value (<a href="https://en.wikipedia.org/wiki/Referential_transparency" target="_blank" class="link">wiki</a>). This means that functions that have side-effects are not referentially transparent. Referential transparency also implies that the parameters supplied to the function (the input set) are not changed. A pure function produces new data but doesn't change the old data that was used to produce it. Functions that don't alter the input set are <em>non-destructive</em>.<br/>
Additionally, this implies some immutability of data structures, whether the data structures are immutable by design or functions simply don't mutate the input data structures (the latter requires a lot of discipline from the programmer). These characteristics are often mentioned together with data-driven development. Pure functions, as you can imagine, are thread-safe and can therefore be easily used in multi-threaded environments.<br/>
The practical benefits of the above: it's a lot easier to reason about pure functions, they tend to be easier to maintain (especially in multi-core and multi-threaded environments), and, subjectively, they make for a nicer way to code. It's, of course, still possible to write code that no one understands and that is hard to maintain; the programmer's skill at writing readable and maintainable code remains imperative.<br/>
There are also disadvantages: FP consumes more memory and requires more CPU cycles, because the immutability of objects and functions producing new data structures (rather than modifying existing ones) mean that more data is produced. More memory allocations are necessary, and more memory must be cleaned up, which costs garbage-collector time. With current multi-core hardware, however, those disadvantages are negligible. Developer time and a maintainable code base that is easy to reason about are much more valuable for a business.<br/>
For people interested in mathematics, FP involves quite a bit of it, with <a href="https://en.wikipedia.org/wiki/Category_theory" target="_blank" class="link">category theory</a>, morphisms, functors, etc. But this is not part of this article.</p>
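
<p>As a tiny illustration of the difference (the function names and the VAT rate are invented for the example), compare a pure function with an impure one:</p>

<pre class="lisp"><code>;; pure: same output for the same input, no side-effects
(defun add-vat (price)
  (* price 119/100))

;; impure: reads and mutates external state
(defvar *total* 0)
(defun add-to-total (price)
  (incf *total* price))</code></pre>

<p><code>(add-vat 100)</code> always returns 119, while each call to <code>add-to-total</code> changes <code>*total*</code> and thus returns a different result over time.</p>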

<h5>Design considerations</h5>

<p>From a software design perspective, FP programs usually have a pure core where only data transformations take place in the form of pure functions (a 'functional core'). But even FP programs have to deal with side-effects and, of course, there is state. Languages deal with this differently. Haskell, for example, uses monads to deal with side-effects and state. Erlang/Elixir have efficient user-space processes where state is locked in. Side-effects, state, and threading should happen at the system's boundaries. The <a href="https://en.wikipedia.org/wiki/Hexagonal_architecture_%28software%29" target="_blank" class="link">hexagonal architecture</a> is an architectural style that fits nicely with FP: an 'imperative shell' around a 'functional core'.</p>
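
<p>A minimal, hypothetical sketch of this split in Common Lisp (all names are invented for illustration):</p>

<pre class="lisp"><code>;; 'functional core': a pure transformation of an order plist
(defun apply-discount (order rate)
  (list :id (getf order :id)
        :price (* (getf order :price) (- 1 rate))))

;; 'imperative shell': side-effects live at the boundary
(defun process-order (order)
  (let ((discounted (apply-discount order 1/10)))
    (format t "processed: ~a~%" discounted) ; I/O side-effect
    discounted))</code></pre>

<p><code>apply-discount</code> can be tested and reasoned about in isolation; only <code>process-order</code> touches the outside world.</p>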

<h5>Assignments</h5>

<p>Due to immutability, pure FP languages allow assignment to a variable only once. Erlang and Elixir, for example, have a <code>=</code> operator, but it's not an assignment operator as in languages like C or Java. It is a match operator, similar to mathematics, where <code>x = y</code> means that <code>x</code> is equal to <code>y</code>. Let's have a quick look at Erlang:</p>

<pre class="erlang"><code>4&gt; X = 1.
1
5&gt; Y = 2.
2
6&gt; X = Y.
** exception error: no match of right hand side value 2</code></pre>

<p>The <code>X</code> and <code>Y</code> variables here take the corresponding values <code>1</code> and <code>2</code> because, at this point, they are unbound. Since they are unbound, the <code>=</code> matches and binds <code>X</code> to <code>1</code>.<br/>
But when both variables are bound and <code>=</code> is used, the match fails because 1 is <em>not</em> equal to 2.</p>

<p>Elixir is a little less strict on the last part:</p>

<pre class="elixir"><code>iex(1)&gt; x = 1
1
iex(2)&gt; y = 2
2
iex(3)&gt; x = y
2</code></pre>

<p>As you can see, the last part works in Elixir. But there is still an important difference to a normal assignment. The new <code>x</code> has a different memory location than the previous <code>x</code>, which means the value in the old memory location is not altered.</p>

<h5>Immutability</h5>

<p>Immutability is also a key characteristic in FP languages. FP languages provide immutable datatypes like tuples, lists, arrays, maps, trees, etc.<br/>
The functions provided to operate on these datatypes are pure functions. For example, removing items from or adding items to a list creates a new list.<br/>
This behavior is built into FP languages (ML, Haskell, Erlang/Elixir, to name but a few). So, using those languages, you are operating in an immutability bubble. To maintain immutability inside the bubble, a 'shallow' copy of the data is usually sufficient. This is much more efficient for the runtime system and means that functions operating on datatypes create a copy of the datatype but share the instances that the datatype holds. For instance, adding a new head to a list (<code>cons</code>) will create a new list constructed from the given head element with the given input list as the tail.</p>

<pre class="lisp"><code>CL-USER&gt; (defvar *list* '(1 2 3 4))
*LIST*
CL-USER&gt; (defun prepend-head (head list)
           (cons head list))
PREPEND-HEAD
CL-USER&gt; (prepend-head 0 *list*)
(0 1 2 3 4)
CL-USER&gt; *list*
(1 2 3 4)</code></pre>

<p>What is not built-in are deep copies that are necessary when one leaves the immutable bubble, i.e., to do I/O.</p>

<h5>Design patterns (GoF)</h5>

<p>Just a few words on this. Some of the design patterns of object-oriented programming, as captured in the Gang of Four book, also apply to FP, but their implementation is, in many cases, much simpler: a strategy pattern in FP is just a higher-order function, and a visitor pattern, in a simple form, could just be a reduce function.</p>
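
<p>As a minimal sketch (the names are invented for illustration), the strategy pattern reduces to passing a function:</p>

<pre class="lisp"><code>CL-USER&gt; (defun checkout (amount pricing-strategy)
           ;; the 'strategy' is just a function passed in
           (funcall pricing-strategy amount))
CHECKOUT
CL-USER&gt; (checkout 100 (lambda (a) (* a 9/10)))  ; 10% discount strategy
90
CL-USER&gt; (checkout 100 #'identity)               ; no-discount strategy
100</code></pre>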

<h4>Imperative programming</h4>

<p>Imperative programming (IP), on the other hand, is more about mutating the state of a machine with every instruction/statement. I would say that Object-Oriented Programming (OOP) is also imperative, the difference being that OOP allows you to give more structure to the program. From a memory and CPU perspective, the IP paradigm is more efficient. And I've read somewhere (can't remember where, but it makes sense to me) that IP replaced FP in the latter half of the last century because memory was expensive and CPUs were not fast, and IP clearly has an advantage there when the value of a memory location is just changed instead of a new memory location being allocated and the old one having to be cleaned up.<br/>
But in today's multi-core and multi-threaded computing world, state is a problem. It's not possible to do without state, but how it is dealt with is important, and it is dealt with differently in FP.</p>

<h4>Functional style in multi-paradigm language, Common Lisp</h4>

<p>Common Lisp is a Lisp (obviously). Lisps have always had functions as first-class elements of the language. Functions can be anonymous (lambdas) or named, they can be created at runtime, and they can be passed around in the same way as strings or other objects. Every function returns something, even if it's just <code>nil</code> (for an empty function).<br/>
Common Lisp allows you to program in multiple paradigms. Historically, it had to capture, consolidate, and modernize all the Lisp implementations floating around at the time. So, it has a rather large but well-balanced feature set, and it allows both OOP (with <a href="https://en.wikipedia.org/wiki/Common_Lisp_Object_System" target="_blank" class="link">CLOS</a>) and FP. Common Lisp doesn't have the very strict functional characteristics, like the above-mentioned assignment restrictions. Functions in Common Lisp can have side effects, and Common Lisp has assignments in the form of <code>set</code>, <code>setq</code> and <code>setf</code>. For FP, this means that the programmer has to be more disciplined in how they program and which elements of the language they use. But it's, of course, possible. This applies in a similar way to Java, Scala, C#, etc.<br/>
When we look at the important characteristics of FP, they are:</p>

<ul>
<li>first-class functions</li>
<li>pure functions</li>
<li>non-destructive functions</li>
<li>immutable data structures</li>
</ul>

<p>Then, we have to be a bit careful about which of all the things available in Common Lisp we use. All built-in data structures in Lisp are mutable. Lists are a bit of an exception because they are usually used in a non-destructive way. The function <code>cons</code> (construct), for example, creates a new list by prepending a new element to the head of a list; the old list is not modified or destroyed. But <code>delete-if</code> (in contrast to <code>remove-if</code>) is destructive because it modifies the input list. The array/vector and hashmap data structures and their functions are destructive and shouldn't be used when doing FP. So, when developing pure and non-destructive functions (e.g. for a functional core), you need to make sure you use the right higher-order functions for operating on lists or other data structures. Depending on the data structure, it could also mean manually making deep copies of the input parameters, operating on the copy, and returning the copy. <code>modf</code> can help with this; more on that below.</p>
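
<p>The difference between the non-destructive <code>remove-if</code> and the destructive <code>delete-if</code> is easy to see in the REPL:</p>

<pre class="lisp"><code>CL-USER&gt; (defparameter *nums* (list 1 2 3 4))
*NUMS*
CL-USER&gt; (remove-if #'evenp *nums*)  ; non-destructive: returns a new list
(1 3)
CL-USER&gt; *nums*                      ; the original is untouched
(1 2 3 4)
CL-USER&gt; (delete-if #'evenp *nums*)  ; destructive: may modify the cons
(1 3)                                ; cells reachable from *NUMS*</code></pre>

<p>After the <code>delete-if</code>, the list structure reachable from <code>*nums*</code> may have been modified in an implementation-dependent way; that's why destructive functions require care.</p>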

<h5>Immutable data structures - FSet</h5>

<p>The immutable data structures library <a href="https://common-lisp.net/project/fset/Site/index.html" target="_blank" class="link">FSet</a> should be a good fit when doing FP in Common Lisp. FSet defines a large set of functions for all sorts of operations.</p>

<p>There are more alternatives offering this functionality. See, for example, <a href="https://github.com/ndantam/sycamore" target="_blank" class="link">Sycamore</a>.</p>

<p>Other languages, like Java and Scala, offer a range of immutable data structures out of the box.</p>
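
<p>A quick sketch of how working with FSet might look (assuming FSet is loaded, e.g. via Quicklisp; the exact printed representation of maps may differ):</p>

<pre class="lisp"><code>CL-USER&gt; (defparameter *m* (fset:map (:x 1) (:y 2)))
*M*
CL-USER&gt; (fset:lookup (fset:with *m* :x 5) :x)  ; WITH returns a NEW map
5
CL-USER&gt; (fset:lookup *m* :x)                   ; the original is unchanged
1</code></pre>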

<h5>Custom immutable types</h5>

<p>Immutable maps are commonly used in FP instead of classes or structure types. They have one disadvantage. They don't create a specific type. In some FP languages, like Erlang/Elixir, it's possible to dispatch a function based on destructuring of the function arguments, like a map or list. But Elixir also allows you to define a type for a map structure, which then can be used for dispatching on a function level.<br/>
In Common Lisp, it would be cool to use <em>generic functions</em> for FP too, because they allow dynamic/multiple dispatch and are generally a nice feature. But they can't destructure lists or maps in function arguments. They can only dispatch on a type or on equality of objects using <code>eql</code>. Neither works well when using FSet with just maps or sets.<br/>
So, in addition to the data structures available in FSet, the standard structure type in Common Lisp (<code>defstruct</code>) could still be usable. It defines a new type, so we can use it with generic functions, we can check equality on the slots/instance vars with <code>equalp</code>, and we can declare the slots/instance vars <code>:read-only</code>, which prevents changing the slot/variable values. <code>defstruct</code> automatically generates a 'copier' function that copies the structure. This copy is just a flat copy, and it doesn't allow you to change values while copying. Let's have a quick look at some of the structure items:</p>

<pre class="lisp"><code>CL-USER&gt; (defstruct foo (bar "" :read-only t))
FOO
CL-USER&gt; (defstruct bar (foo "" :read-only t))
BAR</code></pre>

<p>The option <code>:read-only</code> has the effect that <code>defstruct</code> doesn't generate 'setter' functions for the slot. (It is still possible to change the slot values using a lower-level API, e.g. the <code>slot-value</code> function, but the public interface disallows it.)</p>
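
<p>For example, with the <code>foo</code> structure defined above, the reader is generated but no <code>(setf foo-bar)</code> setter exists (the exact error message depends on the implementation):</p>

<pre class="lisp"><code>CL-USER&gt; (foo-bar (make-foo :bar "hello"))  ; reader works
"hello"
CL-USER&gt; (setf (foo-bar (make-foo)) "new")
;; =&gt; error: the function (SETF FOO-BAR) is undefined</code></pre>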

<p>The next snippet shows how the dynamic dispatch works with the created new structure types.</p>

<pre class="lisp"><code>CL-USER&gt; (defgeneric m-dispatch (arg))
#&lt;STANDARD-GENERIC-FUNCTION M-DISPATCH #x3020034B2EEF&gt;

CL-USER&gt; (defmethod m-dispatch ((arg foo))
           (format t "me: foo~%"))
#&lt;STANDARD-METHOD M-DISPATCH (FOO)&gt;

CL-USER&gt; (defmethod m-dispatch ((arg bar))
           (format t "me: bar~%"))
#&lt;STANDARD-METHOD M-DISPATCH (BAR)&gt;

CL-USER&gt; (m-dispatch (make-foo :bar "bar"))
me: foo
NIL
CL-USER&gt; (m-dispatch (make-bar :foo "foo"))
me: bar
NIL</code></pre>

<p>The above shows the dynamic dispatch on the different structure types <code>'foo</code> and <code>'bar</code>. This works quite well. To use the structure type in FP, we'd 'just' have to come up with a copy function that allows changing the values when copying the object.</p>

<p>In Scala, this works quite nicely with case classes where a copy of an immutable object can be performed like this:</p>

<pre class="scala"><code>case class MyObject(arg1: String, arg2: Int)

val myObj1 = MyObject("foo", 1)

val myObj2 = myObj1.copy(arg1 = "bar", arg2 = 2)</code></pre>

<h6>Modf to the rescue</h6>

<p>The <a href="https://github.com/smithzvk/modf" target="_blank" class="link">Modf</a> library does exactly that for Common Lisp. <code>modf</code> has to be used instead of <code>setf</code>. But it works in the same way as <code>setf</code>, except that it creates a new instance of the structure instead of modifying the existing structure. Let's see this in action:</p>

<pre class="lisp"><code>CL-USER&gt; (defstruct foo (x 1) (y 2))
FOO
CL-USER&gt; (defparameter *foo* (make-foo))
*FOO*
CL-USER&gt; *foo*
#S(FOO :X 1 :Y 2)
CL-USER&gt; (modf (foo-x *foo*) 5)
#S(FOO :X 5 :Y 2)
CL-USER&gt; *foo*
#S(FOO :X 1 :Y 2)</code></pre>

<p>Following this little example, we can see that <code>modf</code> doesn't touch the original <code>*foo*</code> instance but creates a new one with <code>x = 5</code>. This is pretty cool. It's getting better. This also works for standard CLOS objects:</p>

<pre class="lisp"><code>CL-USER&gt; (defclass my-class () 
           ((x :initform 1)
            (y :initform 2)))
#&lt;STANDARD-CLASS MY-CLASS&gt;
CL-USER&gt; (defparameter *my-class* (make-instance 'my-class))
*MY-CLASS*
CL-USER&gt; *my-class*
#&lt;MY-CLASS #x302002101F8D&gt;
CL-USER&gt; (slot-value *my-class* 'x)
1 (1 bit, #x1, #o1, #b1)
CL-USER&gt; (slot-value *my-class* 'y)
2 (2 bits, #x2, #o2, #b10)
CL-USER&gt; (modf (slot-value *my-class* 'x) 5)
#&lt;MY-CLASS #x302002250CFD&gt;
CL-USER&gt; (slot-value * 'x)
5 (3 bits, #x5, #o5, #b101)
CL-USER&gt; (slot-value *my-class* 'x)
1 (1 bit, #x1, #o1, #b1)</code></pre>

<p>We can see from the memory reference that a new instance was created: <code>#x302002101F8D</code> vs. <code>#x302002250CFD</code>.</p>

<p>So now we basically have our immutable custom types. The only important thing to remember, which again requires discipline, is to use <code>modf</code> instead of <code>setf</code>.</p>

<p>Even though <code>modf</code> also works on the built-in data structures like lists, arrays and hashmaps, I would probably still tend to use a library like FSet.</p>

<p>One thing to mention here is that <code>modf</code> only makes a 'shallow' copy of the data, which means that only a new instance of the 'container' is created while the internal objects (if they are references) are shared.</p>

<h5>More things that help doing FP</h5>

<h6>Function composition</h6>

<p>Common Lisp does not have a construct to compose functions other than just nesting function calls like <code>(f (g (h x)))</code>. But that is not pleasant to read, and it actually reverses the logical order of the function calls. The de-facto standard Common Lisp library <a href="https://common-lisp.net/project/alexandria/" target="_blank" class="link">alexandria</a> has a function for that which can be used like <code>(compose f g h)</code>:</p>

<pre class="lisp"><code>CL-USER&gt; (funcall (alexandria:compose #'1+ #'1+ #'1+) 1)
4 (3 bits, #x4, #o4, #b100)</code></pre>

<p>This generates a composition function of the three functions <code>1+</code> like: <code>(1+ (1+ (1+ 1)))</code> which then can be called using <code>funcall</code> or provided as a higher-order function. The call order is still from right to left.</p>

<p>There is another, alternative way of composing functions, which comes from Clojure. It's actually more a piping than a composition. Elixir knows this as the <code>|&gt;</code> operator; in Clojure it's called 'threading'. In Common Lisp, there are three third-party libraries that implement this. The one used here is <a href="https://github.com/phoe/binding-arrows/" target="_blank" class="link">binding-arrows</a>. There are a few more operators (macros) available for threading, with slightly different features than the <code>-&gt;</code> used here. I like this a lot and use it often.</p>

<pre class="lisp"><code>CL-USER&gt; (binding-arrows:-&gt;
           1
           1+
           1+
           1+)
4 (3 bits, #x4, #o4, #b100)</code></pre>

<p>The normal 'thread' arrow <code>-&gt;</code> passes the previous value as the first argument to the next function. There is also a <code>-&gt;&gt;</code> arrow operator which passes the value as the last argument.</p>
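
<p>A small example of the <code>-&gt;&gt;</code> variant, which threads the value into the last argument position (both <code>mapcar</code> and <code>reduce</code> take the list last):</p>

<pre class="lisp"><code>CL-USER&gt; (binding-arrows:-&gt;&gt; '(1 2 3)
           (mapcar #'1+)   ; =&gt; (2 3 4)
           (reduce #'+))   ; =&gt; 9
9</code></pre>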

<h6>Pattern matching</h6>

<p>Pattern matching is kind of a standard for languages with FP features. In Common Lisp, pattern matching is not part of the standard language features, but the <a href="https://github.com/guicho271828/trivia" target="_blank" class="link">Trivia</a> library fills that gap. Trivia has an amazing feature set. It can match (and capture) on all native Common Lisp data structures, including structure and class slots. There are extensions for pattern matching on regular expressions and also for the before-mentioned FSet library. It can be expanded with new patterns relatively easily. The documentation is OK but could be more extensive and better structured.</p>

<p>Here is a simple example:</p>

<pre class="lisp"><code>;; matching on an FSet map
(match (map (:x 5) (:y 10))
   ((fset-map :x x :y y)
    (list x y)))
=&gt; (5 10)
          
;; matching on a list with capturing the tail
(match '(1 2 3)
   ((list* 1 tail)
    tail))
=&gt; (2 3)</code></pre>

<h6>Currying</h6>

<p><a href="https://en.wikipedia.org/wiki/Currying" target="_blank" class="link">Currying</a> is something you see in most FP languages. It is a way to decompose one function with multiple arguments into a sequence of functions with fewer arguments. In practical terms, it reduces the dimension of available inputs to a function. For example, say you have the function <code>coords</code> that takes two arguments and produces a coordinate in an x-y coordinate system. With currying, we can lock one dimension, x or y.</p>

<p>Say we have the function:</p>

<pre class="lisp"><code>CL-USER&gt; (defun coords (x y)
           (cons x y))
COORDS</code></pre>

<p>Now, I want to lock the x coordinate to a value, say 1:</p>

<pre class="lisp"><code>CL-USER&gt; (curry #'coords 1)
#&lt;COMPILED-LEXICAL-CLOSURE (:INTERNAL CURRY) #x3020022BB83F&gt;</code></pre>

<p>The <code>curry</code> function here creates a new function that locks the x coordinate to 1 and now supports only one argument. Calling this now produces:</p>

<pre class="lisp"><code>CL-USER&gt; (funcall * 2)
(1 . 2)
CL-USER&gt; (funcall ** 5)
(1 . 5)</code></pre>

<p>(<code>*</code> denotes the last, <code>**</code> the second-from-last result in the REPL.)<br/>
So, currying decomposed the <code>coords</code> function call into two function calls. But the curried function can be stored and reused. It represents only a single dimension of the original two-dimensional set.</p>

<p>Common Lisp also doesn't have currying built in. But it's easy to create. The following function performs the trick above:</p>

<pre class="lisp"><code>CL-USER&gt; (defun curry (fun &rest cargs)
           (lambda (&rest args)
             (apply fun (append cargs args))))
CURRY</code></pre>

<p>There is no need to create this yourself, though: it is also part of the Alexandria library, which additionally provides <code>rcurry</code> to curry from the right.</p>
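
<p>Applied to the <code>coords</code> function from above, the difference between currying from the left and from the right looks like this:</p>

<pre class="lisp"><code>CL-USER&gt; (funcall (alexandria:curry #'coords 1) 2)   ; locks x = 1
(1 . 2)
CL-USER&gt; (funcall (alexandria:rcurry #'coords 1) 2)  ; locks y = 1
(2 . 1)</code></pre>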

<h4>Conclusion</h4>

<p>It is possible to do functional programming in languages that are not made for pure FP. It is important to separate the areas where side-effects may happen (the 'imperative shell') from those where they may not (the 'functional core'). Functions in the 'functional core' should be pure functions that don't modify input parameters. Using immutable data structures is a big help in doing that. But immutable data structures are not always available. In that case, you have to manually copy mutable data structures and operate on the copies. This requires discipline. In multi-threaded environments, it might still be worth the effort for the gain in simplicity and ease of reasoning.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Patterns - Builder-make our own ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Patterns+-+Builder-make+our+own"></link>
        <updated>2021-03-13T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Patterns+-+Builder-make+our+own</id>
        <content type="html"><![CDATA[ <p>Add-on to the post about the <a href="/blog/Patterns+-+Builder" class="link" target="_blank">Builder</a> pattern.<br/>
In this post we'll create our own simple Common Lisp builder DSL using macros.</p>

<p>Macros are a crucial component of Common Lisp, making the language so enormously extensible. The term 'macro' is a bit overloaded, because many things are called 'macro' that have little to do with Lisp macros. C macros, for example, are just simple textual replacements. Other languages have macros as well nowadays. The difference is that Lisp macros are just Lisp code, while other languages work on a separate AST (Abstract Syntax Tree) representation of the code, which is much more complicated to deal with. In Lisp, the s-expressions you write effectively are the AST.</p>

<p>And yet, it's not all that easy. There is a fundamental difference between normal functions and macros, and this difference and its consequences can take a while to grasp. Macros are executed at compile time (or macro-expansion time) and their parameters are not evaluated, while functions are executed at runtime and their parameters are evaluated before the function is applied. I'm still trying to wrap my head around it. I can create simple macros, but I'm not an expert.</p>

<p>Let's have a look.</p>

<p>I want to use the builder like this:</p>

<pre class="lisp"><code>(build 'person p
  (set-name p "Manfred")
  (set-lastname p "Bergmann")
  (set-age p 27)
  (set-gender p "m"))</code></pre>

<p>The return of this is a new instance of <code>person</code> with the parameters set on the instance. So this <code>build</code> thing has to create an instance of the class <code>'person</code> which is represented by the variable <code>p</code>, evaluate all those <code>set-xyz</code> thingies and at last return the instance <code>p</code>.</p>

<p>We can easily come up with a simple macro that does this:</p>

<pre class="lisp"><code>(defmacro build (clazz var &body body)
  `(let ((,var (make-instance ,clazz)))
     ,@body
     ,var))</code></pre>

<p>The parameter <code>clazz</code> is the class to create (here <code>'person</code>), <code>var</code> is the variable name we want to use for the instance, and <code>body</code> is all the expressions inside <code>build</code> (<code>set-name</code>, etc.). What the macro creates is a 'quoted' (quasi-quoted) expression. Quoted expressions are not evaluated; effectively they are just data, a list. When we use the <code>build</code> macro, the compiler replaces <code>build</code> and everything inside it with the quoted expression. After the compiler has expanded the macro, it looks like this:</p>

<pre class="lisp"><code>(let ((p (make-instance 'person)))
  (set-name p "Manfred")
  (set-lastname p "Bergmann")
  (set-age p 27)
  (set-gender p "m")
  p)</code></pre>

<p>When we look again at the macro and compare the two, we see that the compiler actually used the macro arguments and replaced <code>,clazz</code>, <code>,var</code> and <code>,@body</code> with them. So this is what the <code>,</code> does in combination with the backtick (quasi-quote). The <code>,</code> tells the compiler to interpolate <code>'person</code> in place of <code>,clazz</code>, <code>p</code> in place of <code>,var</code>, and the list of body expressions given to the <code>build</code> macro in place of <code>,@body</code>. The <code>@</code> sign here means 'splice' and is needed because the body expressions are a list, like <code>((expr1) (expr2) (expr3))</code>, but we don't want the list, just the expressions inside it. So 'splice' removes the outer list.</p>
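
<p>You can inspect the expansion yourself in the REPL. This works even though the setters don't exist yet, because macro expansion doesn't evaluate the body (the exact printed form may vary by implementation):</p>

<pre class="lisp"><code>CL-USER&gt; (macroexpand-1 '(build 'person p
                           (set-name p "Manfred")))
(LET ((P (MAKE-INSTANCE 'PERSON)))
  (SET-NAME P "Manfred")
  P)
T</code></pre>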

<p>Now, this is all good and nice. But it doesn't work. The setters <code>set-name</code>, etc. are not known to Lisp; they are not regular functions or macros. Slot access functions are auto-generated on classes, but using them in the builder macro doesn't look nice and is too much typing. What would already work with the macro as is:</p>

<pre class="lisp"><code>(build 'person p
  (setf (slot-value p 'name) "Manfred")
  (setf (slot-value p 'lastname) "Bergmann")
  (setf (slot-value p 'age) 27)
  (setf (slot-value p 'gender) "m"))</code></pre>

<p>So we'll have to create those setter functions ourselves. A bit more DSL to create.</p>

<p>It would be cool if those setters (and also getters) could be auto-generated whenever we define a new class. So we want to define a class that automatically generates setters and getters, like this:</p>

<pre class="lisp"><code>(defbeanclass person () (name lastname age gender))</code></pre>

<p><code>defbeanclass</code> doesn't exist. The rest of the syntax is equal to <code>defclass</code>. So we'll create a macro that can do this:</p>

<pre class="lisp"><code>(defmacro defbeanclass (name
                        direct-superclasses
                        direct-slots
                        &rest options)
  `(progn
     (defclass ,name ,direct-superclasses ,direct-slots ,@options)
     (generate-beans ,name)
     (find-class ',name)))</code></pre>

<p>This macro basically just wraps the default <code>defclass</code> macro. <code>generate-beans</code> is another macro that generates the setters and getters; we'll look at it shortly. Finally, <code>find-class</code> is responsible for returning the generated class. (There might be a better way to do this.)</p>

<p><code>generate-beans</code> (you might remember Java) looks like this:</p>

<pre class="lisp"><code>(defmacro generate-beans (clazz)
  (cons 'progn
        (loop :for slot-symbol
                :in (mapcar #'slot-definition-name
                            (class-direct-slots 
                              (class-of (make-instance clazz))))
              :collect
              `(defbean ,slot-symbol))))</code></pre>

<p>This adds something new. Macros can have code that is evaluated at compile time (or macro expansion time) and code that is generated by the macro. The 'quote' makes the difference. Let's see shortly what this macro generates. The unquoted code in there, in particular the <code>loop</code>, is executed at compile time and generates a list of quoted <code>defbean</code> expressions, one for each slot (name, age, gender, etc.).</p>

<p>Macro expanded this looks like:</p>

<pre class="lisp"><code>(progn (defbean name) (defbean lastname) (defbean age) (defbean gender))</code></pre>

<p>(if someone knows a way to remove the <code>(cons 'progn</code>, please ping me.)</p>

<p>Cool. So <code>generate-beans</code> creates beans for each slot. But <code>defbean</code> is yet another macro. It does the real work of creating the setter and getter functions for a slot definition.</p>

<pre class="lisp"><code>(defmacro defbean (slot-symbol)
  (let* ((slot-name (symbol-name slot-symbol))
         (getter-name (intern (concatenate 'string "GET-" slot-name)))
         (setter-name (intern (concatenate 'string "SET-" slot-name))))
    `(progn
       (defun ,getter-name (obj)
         (slot-value obj ',slot-symbol))
       (defun ,setter-name (obj value)
         (setf (slot-value obj ',slot-symbol) value)))))</code></pre>

<p>This macro again has some code that must execute at macro-expansion time. We have to compute the getter and setter names and 'intern' them into the Lisp environment so that they are known. If we didn't do this but just expanded the <code>defun</code>s, we would get errors at runtime that the functions are not known. The 'interning' makes the connection between the function name (as used in <code>defun</code>) and the 'interned' symbol of the function name in the Lisp environment. After all this, the macro expands to (example for the name getter/setter):</p>

<pre class="lisp"><code>(progn (defun get-name (obj) (slot-value obj 'name))
       (defun set-name (obj value) (setf (slot-value obj 'name) value)))</code></pre>

<p>Looking more closely, this generates exactly the <code>setf</code> slot access we had above and wanted to replace.<br/>
So we can now define classes that auto-generate getters and setters the way we want to use them in the builder.</p>

<p>When we fully macro expand <code>defbeanclass</code>:</p>

<pre class="lisp"><code>(progn
  (defclass person () (name lastname age gender))
  (progn
    (progn
      (defun get-name (obj) (slot-value obj 'name))
      (defun set-name (obj value) (setf (slot-value obj 'name) value)))
    (progn
      (defun get-lastname (obj) (slot-value obj 'lastname))
      (defun set-lastname (obj value) (setf (slot-value obj 'lastname) value)))
    (progn
      (defun get-age (obj) (slot-value obj 'age))
      (defun set-age (obj value) (setf (slot-value obj 'age) value)))
    (progn
      (defun get-gender (obj) (slot-value obj 'gender))
      (defun set-gender (obj value) (setf (slot-value obj 'gender) value))))
  (find-class 'person))</code></pre>

<p>We see that what the macro generates is just ordinary Lisp code. And yet, on top of it, we have extended the language with new functionality.</p>
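<p>To see the result in action: after evaluating the expansion above, the generated accessors can be used like ordinary functions (a small usage sketch):</p>

<pre class="lisp"><code>(let ((p (make-instance 'person)))
  (set-name p "Alice")
  (get-name p))
;; =&gt; "Alice"</code></pre>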

<p>Cheers</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Patterns - Builder ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Patterns+-+Builder"></link>
        <updated>2021-02-24T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Patterns+-+Builder</id>
        <content type="html"><![CDATA[ <p>The last blog <a href="/blog/Patterns+-+Abstract-Factory" class="link" target="_blank">post</a> was about the Abstract-Factory pattern. We have seen that in Common Lisp there is hardly any pattern visible.</p>

<p>One could say patterns are code constructs that are repetitive. Almost like a language within a language. Paul Graham once asked: &quot;Are patterns a language smell?&quot;</p>

<h3>Builder</h3>

<p>Today we look at the Builder pattern. Like the Abstract Factory, the Builder is a creational pattern: it helps create instances of objects. The difference to the Abstract Factory is that a Builder is tightly coupled to the class it creates. In return, it can hide details of the class that only the Builder, being in the same package, has access to. There can be different Builders that create instances of the same class but with different configurations. If we wanted to do this with the classes directly, we'd have to open them up. A Builder can also hide the complexity of creating objects while providing a simpler interface to the user.</p>

<h4>Example in Scala</h4>

<p>First, we will look at some Scala code.<br/>
We want to create an object (a dungeon) like this:</p>

<pre class="scala"><code>val dungeon = new CastleDungeonBuilder()
  .setDifficulty(VeryDifficult)
  .addMonsters(15)
  .addSpecialItems(5)
  .get()</code></pre>

<p>First we create a Builder. It is a special kind of Builder that builds a castle dungeon. We set a difficulty, add monsters and some special items that the dungeon object should place somewhere.</p>

<p>The CastleDungeonBuilder looks like this:</p>

<pre class="scala"><code>class CastleDungeonBuilder extends IDungeonBuilder {
  override protected val theDungeon = new Dungeon(CastleDungeonKind)

  def addMonsters(n: Int): IDungeonBuilder = {
    // add nice monsters
    val filteredMonsters = Monsters.filter(m =&gt; m.creepyFactor &lt; 5)
    theDungeon.monsters = (0 until n)
      .map(_ =&gt; filteredMonsters(new Random().nextInt(filteredMonsters.size)))
      .toList
    this
  }
}</code></pre>

<p>As part of creating the Builder instance it creates a <code>Dungeon</code> instance. This <code>CastleDungeonBuilder</code> has a speciality: the monsters it adds are nice monsters with a low 'creepy factor'. There is also a <code>CellarDungeonBuilder</code> that adds monsters with a 'creepy factor' &gt;= 5 (on a scale from 0 to 10) -- the right monsters for a cellar.<br/>
The method <code>addMonsters</code> also hides some complexity from the user. It only lets the user say how many monsters to add, while the Builder sets a collection of pre-configured monster instances on the dungeon instance.</p>

<p>The abstract Builder (which <code>CastleDungeonBuilder</code> and <code>CellarDungeonBuilder</code> inherit from) actually only does some generic configuration. It looks like this:</p>

<pre class="scala"><code>trait IDungeonBuilder {
  protected val theDungeon: Dungeon

  def setDifficulty(difficulty: Difficulty): IDungeonBuilder = {
    theDungeon.difficulty = difficulty
    this
  }
  def addMonsters(n: Int): IDungeonBuilder = {
    theDungeon.monsters = 
      (for (i &lt;- 0 until n) 
       yield Monster(new Random().nextInt(3), new Random().nextInt(10))).toList
    this
  }
  def addSpecialItems(n: Int): IDungeonBuilder = {
    theDungeon.specialItems = 
      (for (i &lt;- 0 until n) 
       yield SpecialItem(new Random().nextInt(7))).toList
    this
  }
  def get(): Dungeon = theDungeon
}</code></pre>

<p>This is the <code>Dungeon</code> class itself:</p>

<pre class="scala"><code>class Dungeon(private val _kind: DungeonKind) {
  private var _difficulty: Difficulty = Difficulty.NotDifficultAtAll
  private var _monsters: List[Monster] = Nil
  private var _specialItems: List[SpecialItem] = Nil

  def difficulty: Difficulty = _difficulty
  private[dungeon]
  def difficulty_=(d: Difficulty): Unit = _difficulty = d

  // the lists are immutable, so they can be handed out as-is
  def monsters: List[Monster] = _monsters
  private[dungeon]
  def monsters_=(list: List[Monster]): Unit = _monsters = list

  def specialItems: List[SpecialItem] = _specialItems
  private[dungeon]
  def specialItems_=(list: List[SpecialItem]): Unit = _specialItems = list
}</code></pre>

<p>While it allows querying the properties, it doesn't allow setting them except from within the same package. So the Builder must be defined in the same package as the <code>Dungeon</code> class.</p>

<h5>The poor-man's Builder</h5>

<p>Scala allows named and optional parameters in functions and constructors. A poor-man's Builder pattern in Scala could simply use those features on object creation, together with auxiliary constructors. This doesn't provide the abstraction of a Builder or the encapsulation of the object properties, but it can be sufficient in some cases.</p>
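<p>For comparison, the poor-man's approach in Common Lisp would be keyword <code>:initarg</code>s with defaults on the class itself (a minimal sketch with a hypothetical <code>simple-dungeon</code> class, not the dungeon code used later):</p>

<pre class="lisp"><code>(defclass simple-dungeon ()
  ((difficulty :initarg :difficulty :initform 'not-difficult-at-all
               :reader difficulty)
   (monsters   :initarg :monsters   :initform nil
               :reader monsters)))

;; named and optional 'parameters' on object creation
(make-instance 'simple-dungeon :difficulty 'very-difficult)</code></pre>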

<h4>Example in Common Lisp</h4>

<p>In Common Lisp we could certainly build a similar structure for Builders, with separate classes and so on. But that's not needed. We can get the same features, the same level of abstraction and encapsulation, by using multi-methods.</p>

<p>Let's also start with how we want the object to be created. I'd like to use the 'threading' (<code>-&gt;</code>) operator known from Clojure. I find it quite nice, and it is just some syntactic sugar:</p>

<pre class="lisp"><code>(let ((dungeon (-&gt; (make-dungeon :type 'cellar)
                   (set-difficulty 'very-difficult)
                   (add-monsters 15)
                   (add-special-items 5))))
  ;; do something with dungeon
  )</code></pre>
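<p>Common Lisp has no built-in <code>-&gt;</code>, but a minimal sketch of such a threading macro could look like this (my own sketch, not part of the article's codebase); it expands into nested calls, inserting each intermediate result as the first argument of the next form:</p>

<pre class="lisp"><code>(defmacro -&gt; (initial &rest forms)
  "Thread INITIAL through FORMS as the first argument of each."
  (reduce (lambda (acc form)
            (if (listp form)
                `(,(car form) ,acc ,@(cdr form))
                `(,form ,acc)))
          forms
          :initial-value initial))

;; (macroexpand-1 '(-&gt; x (f 1) (g))) =&gt; (G (F X 1))</code></pre>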

<p>This first creates a dungeon object of <code>'cellar</code> type, then sets the difficulty and adds monsters and special items. There are two different things at play here. <code>make-dungeon</code> is a simple factory function. The <code>set-*</code> and <code>add-*</code> functions are generic functions that we use to form a builder protocol. Each returns the dungeon object so that the 'threading' (or piping) can be done:</p>

<pre class="lisp"><code>;; builder protocol
(defgeneric set-difficulty (dungeon difficulty))
(defgeneric add-monsters (dungeon amount))
(defgeneric add-special-items (dungeon amount))</code></pre>

<p>As with the Builders we created in Scala, those generic function definitions should be in the same package as the dungeon class and the factory function. If we want to apply a different set of monsters for different dungeon types, we have to do two things: first, define sub-classes for those dungeon types; second, provide different implementations of the <code>add-monsters</code> builder protocol. Let's have a look at the classes and the factory function:</p>

<pre class="lisp"><code>(defclass dungeon ()
  ((difficulty :initform 'not-difficult-at-all)
   (monsters :initform nil :reader monsters)
   (special-items :initform nil :reader special-items)))
(defclass castle-dungeon (dungeon) ())
(defclass cellar-dungeon (dungeon) ())

(defun make-dungeon (&key type)
  (make-instance (ecase type
                   (castle 'castle-dungeon)
                   (cellar 'cellar-dungeon))))</code></pre>

<p>The specialization of the <code>add-monsters</code> generic function on the class type does the trick:</p>

<pre class="lisp"><code>;; specialized for 'castle-dungeon
(defmethod add-monsters ((obj castle-dungeon) amount)
  (with-slots (monsters) obj
    ;; set a bunch of nice looking monsters
    (setf monsters
          (filter-monsters-by-creepy-factor 5 #'&lt; amount *monsters*)))
  obj)

;; specialized for 'cellar-dungeon
(defmethod add-monsters ((obj cellar-dungeon) amount)
  (with-slots (monsters) obj
    ;; set a bunch of creepy monsters
    (setf monsters
          (filter-monsters-by-creepy-factor 5 #'&gt;= amount *monsters*)))
  obj)</code></pre>

<p>Common Lisp automatically dispatches on the class of the specialized parameter -- here the first one. This is called multiple dispatch (or multi-methods), because methods can in fact specialize on more than one argument. So a different <code>add-monsters</code> implementation is called depending on whether the dungeon was created with type <code>'castle</code> or <code>'cellar</code>.
There is otherwise not really a lot more to it. All we did here was use the language's features.</p>

<h4>Summary</h4>

<p>The Builder pattern in many object-oriented languages requires separate builder classes around a class they should create. This is used for abstraction and data encapsulation which would not be easily possible without the Builder.</p>

<p>In Common Lisp dedicated Builder classes are not needed. But dedicated dungeon classes are required to allow the multi-methods to do their work. This structure can also be recognized as a pattern, but it is a simpler one.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Patterns - Abstract-Factory ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Patterns+-+Abstract-Factory"></link>
        <updated>2021-02-07T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Patterns+-+Abstract-Factory</id>
        <content type="html"><![CDATA[ <p>Peter Norvig (a prominent figure in the Lisp world) claimed that many design patterns are either not needed or much simpler in Lisp, or in dynamic languages generally. See <a href="http://norvig.com/design-patterns/design-patterns.pdf" class="link" target="_blank">this PDF</a>.</p>

<p>In this series of blog posts I'd like to go through some of the well known design patterns and make a comparison between the implementation in Scala and Common Lisp.<br/>
<a href="https://scala-lang.org/" class="link" target="_blank">Scala</a> is a statically typed, multi-paradigm language running on the Java Virtual Machine.<br/>
<a href="https://common-lisp.net/" class="link" target="_blank">Common Lisp</a> is a dynamically typed, multi-paradigm language running natively on many platforms.</p>

<h3>Abstract Factory</h3>

<p>Abstract Factory is a common creational pattern where the details of object creation are abstracted and hidden behind a creation 'facade'. A factory generally hides the details of object creation. For example, when creating an object is complex, the user of a factory need not and should not be aware of those details; the factory hides them.<br/>
Another important feature is that a factory can hide the concrete class implementation of the object it creates. The created object just has to comply with an interface/protocol. This has the benefit of less coupling. It also makes it possible to separate the source code dependencies, so that a module that uses a factory does not need a source code dependency on the class implementation but only on the interface/protocol.</p>

<p>An Abstract Factory goes a step further in that it manages a set of factories -- or, put differently, it is an abstraction over a set of factories. For example, say you have a GUI framework that allows creating buttons. It should work the same way no matter which toolkit is the backend. The Abstract Factory is usually configured at application startup with the right concrete factory implementation. This also allows configuring a mock or fake factory in a test environment.<br/>
An Abstract Factory is, in a way, open-closed: new button types and button factories can be added without affecting the existing buttons and factories.</p>

<p>In a static language like Scala usually two parallel class hierarchies are needed, one for the GUI button implementation and one for the factory that creates the button.</p>

<h4>Example in Scala</h4>

<pre class="scala"><code>trait IButton
class AbstractButton extends IButton

class GtkButton extends AbstractButton
class QtButton extends AbstractButton</code></pre>

<pre class="scala"><code>trait IButtonFactory {
  def makeButton(): IButton
}

class GtkButtonFactory extends IButtonFactory {
  def makeButton(): IButton = new GtkButton
}
class QtButtonFactory extends IButtonFactory {
  def makeButton(): IButton = new QtButton
}

object ButtonFactory extends IButtonFactory {
  // configured at startup with a concrete factory
  var factoryInstance: IButtonFactory = _

  def makeButton(): IButton = {
    factoryInstance.makeButton()
  }
}</code></pre>

<p>A user will now only use <code>ButtonFactory.makeButton()</code> to create buttons. It implements the same protocol as the concrete factories but it doesn't create a button itself, rather it delegates the creation to a concrete factory that has been configured.</p>

<h4>Example in Common Lisp</h4>

<p>In Common Lisp something similar could easily be created using CLOS (the Common Lisp Object System). But there is a simpler way. It is not necessary to maintain two parallel hierarchies; just the buttons are needed.</p>

<p>In Common Lisp classes are designated by a symbol. For instance, a class &quot;foo&quot; is designated by the symbol <code>'foo</code>:</p>

<pre class="lisp"><code>(defclass foo () ())

(make-instance 'foo)</code></pre>

<p>But the class definition does not need to be known at the call site when creating an instance. <code>find-class</code> can look the class up at run-time (assuming the class exists in the environment):</p>

<pre class="lisp"><code>(make-instance (find-class 'foo))
#&lt;FOO #x3020014BB27D&gt;</code></pre>

<p>So the factory, which creates the button instance, also does not need a source dependency on the concrete implementation of the button class. This gives us the separation, and we can set a default button class at run-time during startup, which could be read from a configuration file.</p>

<p>Then a simple factory function which creates an instance of the button is fully sufficient:</p>

<pre class="lisp"><code>(in-package :my-button-factory)

;; could be `(find-class 'qt-button)`, configured by startup code.
(defparameter *button-class* nil)

(defun make-button ()
  (make-instance *button-class*))</code></pre>

<p>We also need the buttons: </p>

<pre class="lisp"><code>(defclass abstract-button () ())
(defclass qt-button (abstract-button) ())
(defclass gtk-button (abstract-button) ())</code></pre>

<p>In a test we can easily set a mock or fake class for <code>*button-class*</code>.</p>
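<p>Because <code>*button-class*</code> is a special variable, a test can simply rebind it dynamically for its extent (a sketch assuming the button classes and factory function from above):</p>

<pre class="lisp"><code>(defclass fake-button (abstract-button) ())

;; the dynamic binding is visible inside make-button
(let ((*button-class* (find-class 'fake-button)))
  (assert (typep (make-button) 'fake-button)))</code></pre>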

<p>New button implementations can easily be added without affecting existing buttons or the factory.</p>

<h4>Summary</h4>

<p>The parallel factory hierarchy is not necessary in Common Lisp. Neither is there really a pattern here that would be worth describing. It is so simple.</p>

<p>To be fair, to some degree a similar approach is possible in Scala/Java using reflection, where new instances can be created from a class object. For example:</p>

<pre class="scala"><code>classOf[Foo].getDeclaredConstructors()(0).newInstance()</code></pre>

<p>But handling this is quite cumbersome and far less convenient than in Common Lisp, in particular if there are different constructors. This approach also leaves the type-safe realm that Scala provides: what <code>newInstance()</code> creates is just an <code>Object</code>, which requires a manual cast.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Lazy-sequences - part 2 ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Lazy-sequences+-+part+2"></link>
        <updated>2021-01-13T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Lazy-sequences+-+part+2</id>
        <content type="html"><![CDATA[ <p>Lazy (evaluated) sequences - part 2.</p>

<p>In the last <a href="http://retro-style.software-by-mabe.com/blog/Lazy-sequences" class="link" target="_blank">blog post</a> I've talked about lazy sequences that are generated by a generator. The generator needs state to remember what number (or thing) it has generated before in order to generate the next. A consumer simply asks the generator about the next 'thing'.</p>

<p>This way of implementing lazily evaluated sequences has two negative consequences (thanks for pointing this out, Rainer). 1) There is not really a list or sequence data structure (not even a 'lazy' one :). The consumer builds a result data structure (a list) by repeatedly asking the generator for the requested number of items. 2) It is not possible to re-use a lexically scoped generator like the one below: because it keeps state, it will continue counting.</p>

<pre class="lisp"><code>(let ((generator (range)))
  (print (take 3 generator))
  (print (take 2 generator)))
(0 1 2)
(3 4)
;; where we would expect:
(0 1 2)
(0 1)</code></pre>

<p>So we will look at proper <em>lazy sequences</em>. What I'm writing about is fully covered in the book &quot;Structure and Interpretation of Computer Programs&quot;, <a href="https://sarabander.github.io/sicp/html/3_002e5.xhtml" class="link" target="_blank">chapter 3.5</a>. I have changed the names of some functions slightly to make them comparable to the naming scheme of the last blog post.</p>

<p><strong>Primitives</strong></p>

<p>As you might know, a <code>cons</code> cell is the basis for lists. A <code>cons</code> is <em>cons</em>tructed of two cells. In Lisp the left part is called <code>car</code> and the right part is called <code>cdr</code> (the names are remnants of long-obsolete implementation details: 'contents of the address register' and 'contents of the decrement register'). By combining conses it is possible to build linked lists: the <code>cdr</code> is again a <code>cons</code>. The <code>car</code> then represents the head of the list and the <code>cdr</code> the tail.</p>
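<p>To briefly recap these basics:</p>

<pre class="lisp"><code>(car (cons 1 2))               ;; =&gt; 1
(cdr (cons 1 2))               ;; =&gt; 2
(cons 1 (cons 2 (cons 3 nil))) ;; =&gt; (1 2 3), a linked list</code></pre>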

<p>Now, in lazy sequences the computation of the <code>cdr</code> part is deferred until needed. (I've called this <code>cons</code> wrapper simply <code>lazy-cons</code>):</p>

<pre class="lisp"><code>(defmacro lazy-cons (a b)
  `(cons ,a (delay ,b)))</code></pre>

<p>This builds a normal <code>cons</code> from two values, except that the second, the <code>cdr</code>, is not evaluated yet. <code>delay</code> simply wraps <code>b</code> into a lambda:</p>

<pre class="lisp"><code>(defmacro delay (exp)
  `(lambda () ,exp))</code></pre>

<p>So if we macroexpand this we get just:</p>

<pre class="lisp"><code>(cons a
      (lambda () b))</code></pre>

<p>But it is good to create a new layer of meaning, so I want to create a few primitives that hide these details and make working with lazy sequences feel as natural as working with a normal <code>cons</code>.</p>

<p>Both of the definitions above have to be macros, because otherwise both the <code>delay</code> form and <code>b</code> would be evaluated immediately when passed into <code>lazy-cons</code>. But that evaluation should be deferred until wanted.</p>

<p>In order to access <code>car</code> and <code>cdr</code> of the <code>lazy-cons</code> we introduce two more primitives, <code>lazy-car</code> and <code>lazy-cdr</code>:</p>

<pre class="lisp"><code>(defun lazy-car (lazy-seq)
  (car lazy-seq))</code></pre>

<pre class="lisp"><code>(defun lazy-cdr (lazy-seq)
  (force (cdr lazy-seq)))</code></pre>

<p><code>lazy-car</code> just calls <code>car</code>. We could certainly use <code>car</code> directly, but to be consistent and to complete the metaphor for the lazy sequence we'll add both.</p>

<p><code>lazy-cdr</code> does something additional, and this is a key element. When accessing the <code>cdr</code> of the list we now en<em>force</em> its computation. <code>force</code> is very simple: where <code>delay</code> wrapped the expression into a lambda, we now unwrap it by <em>funcall</em>ing that lambda to compute the expression. So <code>force</code> looks like this:</p>

<pre class="lisp"><code>(defun force (delayed-object)
  (funcall delayed-object))</code></pre>

<p>These five definitions are the base primitives for constructing lazy sequences. Now let's create a new <code>range</code> function -- which is not a generator anymore.</p>

<p><strong>Generate</strong></p>

<pre class="lisp"><code>(defun range (&key (from 0))
  (lazy-cons
   from
   (range :from (1+ from))))</code></pre>

<p>This <code>range</code> implementation doesn't need state. It simply constructs and returns a <code>lazy-cons</code>. The special feature is that the <code>cdr</code> is again a call to <code>range</code> with an incremented <code>from</code> parameter. In effect, calling <code>force</code> on the <code>lazy-cons</code> will construct the next <code>lazy-cons</code> and so on.</p>

<p>If we mentally go through a call chain to construct the values 0, 1, 2, 3 we'd have to:</p>

<pre class="plain"><code>call (range :from 0)
=&gt; (cons 0 &lt;delayed range call&gt;)
=&gt; lazy-car = 0

force lazy-cdr which calls range :from 1
=&gt; (cons 1 &lt;delayed range call&gt;)
=&gt; lazy-car = 1

force lazy-cdr which calls range :from 2
=&gt; (cons 2 &lt;delayed range call&gt;)
=&gt; lazy-car = 2

force lazy-cdr which calls range :from 3
=&gt; (cons 3 &lt;delayed range call&gt;)
=&gt; lazy-car = 3</code></pre>

<p>And so on. A consumer like <code>take</code> has to walk this call chain, iteratively or recursively, to collect the lazily computed values.</p>

<p><strong>Consume</strong></p>

<p><code>take</code> generates a list, so it must collect all lazily computed values. How does it do that? By recursively creating conses.</p>

<pre class="lisp"><code>(defun take (n lazy-seq)
  (if (= n 0)
      nil
      (cons (lazy-car lazy-seq)
            (take (1- n) (lazy-cdr lazy-seq)))))</code></pre>

<p>So <code>take</code> creates conses from the <code>lazy-car</code> (the head) and a recursive call to <code>take</code> with the forced tail, which then constructs the result list we're after.</p>

<pre class="lisp"><code>(take 5 (range))
(0 1 2 3 4)</code></pre>

<p>That was pretty simple so far. A library (or language) that supports lazy sequences usually provides more functionality, like filtering or mapping.</p>

<p><strong>Filtering</strong></p>

<p>Filtering is an important part of this: it allows creating specialized lazy sequences. For example, we can create a lazy sequence from which <code>take</code> collects just even numbers.</p>

<pre class="lisp"><code>(defun even-numbers ()
  (lazy-filter #'evenp (range)))</code></pre>

<pre class="lisp"><code>(take 5 (even-numbers))
(0 2 4 6 8)</code></pre>

<p>This <code>lazy-filter</code> function is very flexible, allowing arbitrary filter functions (just like a filter function on normal lists). The implementation of <code>lazy-filter</code> must again create a <code>lazy-cons</code> to be completely transparent to the consumer functions. This also allows composing filter functions.</p>

<pre class="lisp"><code>(defun lazy-filter (pred lazy-seq)
  (cond
    ((null lazy-seq)
     nil)
    ((funcall pred (lazy-car lazy-seq))
     (lazy-cons (lazy-car lazy-seq)
                (lazy-filter pred (lazy-cdr lazy-seq))))
    (t
     (lazy-filter pred (lazy-cdr lazy-seq)))))</code></pre>

<p>When the passed-in <code>lazy-seq</code> parameter (which is a <code>lazy-cons</code>) is empty, just return <code>nil</code> (the empty list). When applying the predicate to the <code>lazy-car</code> yields true, return a new <code>lazy-cons</code> with the <code>lazy-car</code> as head and a delayed call to <code>lazy-filter</code> on the 'forced' tail. Otherwise the element has to be filtered out, so call <code>lazy-filter</code> again with the next 'forced' tail.</p>

<p>A similar thing can be done for mapping. But I'll leave that for you to read up on in SICP.</p>
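<p>To give a taste of it, a <code>lazy-map</code> can be sketched with the same shape as <code>lazy-filter</code>, using the primitives defined above (my sketch, not the SICP code):</p>

<pre class="lisp"><code>(defun lazy-map (fn lazy-seq)
  (if (null lazy-seq)
      nil
      (lazy-cons (funcall fn (lazy-car lazy-seq))
                 (lazy-map fn (lazy-cdr lazy-seq)))))

(take 3 (lazy-map #'1+ (range)))
;; =&gt; (1 2 3)</code></pre>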
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Lazy-sequences ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Lazy-sequences"></link>
        <updated>2021-01-07T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Lazy-sequences</id>
        <content type="html"><![CDATA[ <p>Lazy (evaluated) sequences are sequences whose elements are generated on demand. Many languages have them built in or available as libraries.</p>

<p>If you don't know what this is then here is an example:</p>

<pre class="lisp"><code>(take 5 (range :from 100))
(100 101 102 103 104)</code></pre>

<p><code>take</code> takes the first 5 elements from the generator <code>range</code>, which starts counting at 100. Each 'take' makes the <code>range</code> generator compute a new value rather than computing 5 elements up-front.</p>

<p>That's why it is called 'lazy'. The elements of the sequence are computed when needed. In a very simple form lazy evaluated sequences can be implemented using a generator that we call <code>range</code> and a set of consumers, like <code>take</code>. The generator can be implemented in a stateful way using a 'let over lambda', like this:</p>

<pre class="lisp"><code>(defun range (&key (from 0))
  (let ((n from))
    (lambda ()
      (prog1 n
        (incf n)))))</code></pre>

<p>The <code>range</code> function returns a lambda which has closed over the <code>n</code> variable (such a function is also called a 'closure'). When we call the lambda it returns <code>n</code> and increments it as a last step. (The <code>prog1</code> form evaluates all of its forms but returns the value of the first.)</p>
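<p>Calling the closure returned by the <code>range</code> function above a few times shows the captured state advancing:</p>

<pre class="lisp"><code>(let ((gen (range :from 10)))
  (list (funcall gen) (funcall gen) (funcall gen)))
;; =&gt; (10 11 12)</code></pre>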

<p>So we can formulate a <code>take</code> function like this:</p>

<pre class="lisp"><code>(defun take (n gen)
  (loop :repeat n
        :collect (funcall gen)))</code></pre>

<p><code>take</code> has two arguments: the number of elements to 'take' and the generator, which is our lambda from <code>range</code>. This is a very simple example, but effectively this is how it works.</p>

<p>If you are looking for good libraries for Common Lisp then I can recommend the following two:</p>

<ol>
<li><a href="https://github.com/cbeo/gtwiwtg" class="link">gtwiwtg</a>: a new kid on the block.</li>
<li><a href="http://series.sourceforge.net/" class="link">Series</a>: a well known and solid library.</li>
</ol>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Thoughts about agile software development ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Thoughts+about+agile+software+development"></link>
        <updated>2020-11-17T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Thoughts+about+agile+software+development</id>
        <content type="html"><![CDATA[ <p>Thoughts about agile software development (agility).</p>

<p>Some time ago I had a discussion with someone on Twitter about &quot;Agile&quot; (notice it's spelled with a capital &quot;A&quot;, as a proper noun).</p>

<p>I'm not sure how it came to it -- there was a bit of back and forth -- but I explained that there is no magic in doing agile software development, or in agility in general. It's not a secret. It's actually very simple: mostly common sense, and many people apply it (without knowing) every day in their lives.</p>

<p>At the core of agility is feedback -- very frequent feedback. This is the major difference to waterfall and other processes. So you have to make sure you place feedback loops wherever you want to adjust decisions and make changes: from the very low levels, like the feedback loop of test-driven development, up to higher levels, like the feedback of frequent continuous integration/delivery and deployment, and of course the primary feedback loop with a client/customer who tries out a newly integrated story (this client can also be the product owner of your company).</p>

<p>This feedback implicitly allows you to deliver <em>'Working software'</em> frequently. The feedback is also at the core of the relationship with your customers, towards <em>'Customer collaboration'</em>, and is the source of <em>'Responding to change'</em>. But that's not all: feedback from your fellow colleagues in QA or the dev team also allows you to change quickly and is at the heart of <em>'Individuals and interactions'</em>.</p>

<p>So, this guy then said I was 'out of reality' (not very nice), because I suggested something so simple and pragmatic. Weird. But we came to a conclusion eventually.</p>

<p>As <a href="https://www.youtube.com/watch?v=a-BOSpxYJ9M" class="link" target="_blank">Dave Thomas puts it</a>: 'agile' is an adjective. You can't sell adjectives. But you can sell nouns.<br/>
A whole &quot;Agile&quot; industry has grown since the &quot;Manifesto for Agile Software Development&quot; was written, and many people and consulting companies make a lot of money explaining to clients how &quot;Agile&quot; works. So of course &quot;Agile&quot; must be something magic, something inexplicable that has to be explained to companies by consultants, for a lot of money.</p>

<p>But after all it's as simple as (again from Dave Thomas, not literally):<br/>
- see where you are<br/>
- make a small step in the direction you want to go<br/>
- see how that went (feedback)<br/>
- review and adjust<br/>
- repeat</p>

<p>Do that towards your clients/customers when collaborating on a product, or a feature.<br/>
Do that in the dev team by applying TDD, and generally by requesting and providing feedback for code changes, features, etc.<br/>
Do that when interacting dev &lt;-&gt; QA team.<br/>
etc.</p>

<p>So far so good. Here comes the challenge.<br/>
Doing this in practice is not easy, because usually several people have to see the value in it, and most of them have to pull in the same direction and spend effort to apply it.
Effectively it requires imposing discipline on yourself for how you work. That is not easy either.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Test-driven Web application development with Common Lisp ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Test-driven+Web+application+development+with+Common+Lisp"></link>
        <updated>2020-10-04T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Test-driven+Web+application+development+with+Common+Lisp</id>
        <content type="html"><![CDATA[ <p>The intention of this article is to:</p>

<ul>
<li>give a tutorial for the workflow of developing outside-in (or top-down) with tests-first</li>
<li>give an introduction to creating web applications in <a href="https://common-lisp.net/" class="link">Common Lisp</a> including some of the available libraries and frameworks</li>
<li>explain a bit about test-driven development in general</li>
</ul>

<p><strong>Outside-in with tests-first</strong></p>

<p>Neither outside-in (or top-down) development nor tests-first is new; both have probably been practiced for about as long as there have been computer languages. The Smalltalk community did tests-first in the 80's. Kent Beck then developed the workflow and discipline of test-driven development (TDD) a little later.<br/>
Combining the two makes sense. The idea is that you have two test cycles. An outer test loop represents an integration or acceptance test, where new test cases (which represent features or parts of a feature) are added incrementally, but only once the previous test case passes; the outer test case fails until the feature is completely integrated and all components have been added. The inner test loops represent all the unit tests that are developed in a TDD style for the components to be added.</p>

<p>Adding features incrementally in the context of outside-in means that a feature is developed as a vertical slice of the application rather than building layer by layer horizontally.<br/>
This is what we will go through in this article for a single feature of a web application developed from scratch.<br/>
At this point I'd like to recommend the book &quot;Growing Object-Oriented Software, Guided by Tests&quot; which talks at length about this topic.</p>

<p>The application here will also be developed incrementally and iteratively. Following along, you should end up with a working application. The iterations shown here don't represent TDD iterations; TDD iterations are much smaller steps, which are hard to show in writing, so for this article it made little sense to reproduce them exactly. The important thing is to convey the general workflow.</p>

<p><strong>Common Lisp</strong></p>

<p>I wish I had found the Common Lisp world (or the Lisp world in general) earlier. I only found Common Lisp in early 2019; I otherwise work mostly in the Java/Scala ecosystem, and have for almost 20 years. Of course I have looked at many other languages and runtimes.<br/>
There are many 'new' computer languages these days that in fact offer nothing new and are insignificant iterations of something that existed before. Hardly anything truly new in computer languages has come up in the last 40 years or so.<br/>
The other thing is: if you want to learn something about programming languages there is no way around having a deeper look at Lisp. The Lisp language is brilliantly simple and expressive. A really practical and productive variant is Common Lisp.<br/>
Common Lisp is a representative of the Lisp family that has pretty much every language feature you could think of. It's not statically typed the way Haskell or OCaml (ML family) are (I don't want to get into the dynamic vs. static typing debate now). But what I can say is that both variants have existed for more than 40 years and each has its pros and cons.</p>

<p><strong>Content overview</strong></p>

<p>As already said, we will go through the development in a test-driven outside-in approach where we will slice vertically through the application and implement a feature with a full integration test and inner unit tests. We will have a look at the following things:</p>

<ul>
<li><a href="#the-web" class="link">Getting to the web / Intro</a></li>
<li><a href="#starting" class="link">Project start</a></li>
<li><a href="#feature" class="link">Adding the blog feature</a>

<ul>
<li><a href="#blog-feature_outer-test-loop-index" class="link">The outer test loop</a>

<ul>
<li><a href="#blog-feature-start_the_server" class="link">Starting the server, for real</a></li>
<li><a href="#blog-feature_asdf-system" class="link">ASDF - a quick detour</a></li>
</ul></li>
<li><a href="#blog-feature_inner-test-loops-first" class="link">The inner test loops</a>

<ul>
<li><a href="#blog-feature_url-routing" class="link">URL routing / introducing the MVC controller</a></li>
<li><a href="#blog-feature_blog-controller-first" class="link">The blog controller</a>

<ul>
<li><a href="#blog-feature_mvc-detour" class="link">MVC - a quick detour</a></li>
<li><a href="#blog-feature_tdd-detour" class="link">TDD - a quick detour</a></li>
<li><a href="#blog-feature_tdd_cheat" class="link">TDD - the cheating</a></li>
<li><a href="#blog-feature_reflection" class="link">Taking a step back and reflect</a></li>
<li><a href="#blog-feature_outer-loop-revisit" class="link">Revisit the outer test loop</a></li>
<li><a href="#blog-feature_ctrl-update-asd" class="link">Updating the ASDF system</a></li>
</ul></li>
<li><a href="#blog-feature_blog-view" class="link">The blog view</a>

<ul>
<li><a href="#blog-feature_view-test" class="link">Testing the view</a></li>
<li><a href="#blog-feature_view-roundup" class="link">Roundup</a></li>
</ul></li>
</ul></li>
<li><a href="#blog-feature_deployment" class="link">Some words on deployment</a></li>
</ul></li>
<li><a href="#conclusion" class="link">Conclusion</a></li>
</ul>

<h3><a name="the-web"></a>Getting to the web / Intro</h3>

<p>I had the opportunity to work with a few web frameworks in the Java world. From pure markup extension frameworks like JSP over MVC frameworks like <a href="https://www.playframework.com/" class="link">Play</a> or <a href="https://grails.org/" class="link">Grails</a> to component based server frameworks like <a href="https://vaadin.com/" class="link">Vaadin</a> and <a href="https://tapestry.apache.org/" class="link">Tapestry</a>, until I finally settled on <a href="https://wicket.apache.org/" class="link">Wicket</a>, which I have worked with since 2008.</p>

<p>The frameworks I worked with are usually based on the Java Servlet technology specification (which more or less represents an abstraction over the HTTP server plus some session handling), which they pretty much all have in common. On top of the Java Servlets sit the web frameworks, which all enforce certain workflows, patterns and principles. The listed frameworks are a mixture of pure view frameworks and frameworks that also provide data persistence. They provide a routing mechanism and everything needed to build the user interface (UI). Some do explicit separation according to MVC with appropriate folder structures for 'views', 'controllers' and 'models' while others do this less explicitly. Of course many of those frameworks are opinionated to some degree. But since they usually have many contributors and maintainers, the opinions are flattened out and less pronounced.</p>

<p>The Common Lisp ecosystem regarding web applications is very diverse. There are many framework approaches considering the relatively small community of Common Lisp. The server abstraction exists in the form of an opinionated abstraction layer called <a href="https://github.com/fukamachi/clack" class="link">Clack</a> which allows using a set of available HTTP servers.<br/>
These are the frameworks I have had a look at: <a href="http://40ants.com/weblocks/" class="link">Weblocks</a>, <a href="http://borretti.me/lucerne/" class="link">Lucerne</a>, <a href="https://shirakumo.github.io/radiance/" class="link">Radiance</a>, <a href="http://8arrow.org/caveman/" class="link">Caveman2</a>.</p>

<p>The listed frameworks are either based on Clack or directly on the de facto standard HTTP server <a href="https://edicl.github.io/hunchentoot/" class="link">Hunchentoot</a>. Pretty much all frameworks allow defining REST-style and static routes.<br/>
I am not aware of a framework that adds or enforces MVC ('model', 'view', 'controller'). So if you want MVC you'll have to come up with something yourself (which we'll do here in a very simple form).<br/>
The HTML generation is either based on a Django clone called <a href="https://mmontone.github.io/djula/" class="link">Djula</a> or is done using one of the brilliant HTML generation libraries for Common Lisp, <a href="https://github.com/edicl/cl-who" class="link">cl-who</a> (for HTML 4) and <a href="https://github.com/ruricolist/spinneret" class="link">Spinneret</a> (for HTML 5). Those libraries are HTML DSLs that allow you to write 'HTML' as Lisp code, so it is compiled and can be type checked and debugged (if needed). Very powerful.<br/>
I think the only framework that enforces the use of Djula is Lucerne. The others don't lock you in.<br/>
All frameworks also do some convenience wrapping of the request/response for easier access to parameters.<br/>
The only one that creates some 'model' abstractions for views is Weblocks. The only one that adds data persistence is Caveman2. But this is just some glue code that you get as convenience. The same libraries can be used in other frameworks.</p>
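<p>To give a feel for such a DSL, here is a minimal cl-who sketch (the markup is made up for illustration and is not part of the application built in this article):</p>

<pre class="lisp"><code>;; illustrative only; assumes cl-who is loaded,
;; e.g. via (ql:quickload :cl-who)
(cl-who:with-html-output-to-string (s)
  (:div :class "post"
    (:h1 "Hello")
    (:p "Plain Lisp code: compiled, checkable, debuggable.")))</code></pre>

<p>The result is an ordinary HTML string, and since the markup is ordinary Lisp code, functions, macros and loops can be mixed freely into it.</p>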

<p>The most complete one, to me, seemed to be Caveman2. It also sets up configuration and creates test and production environments. But the documentation situation is not so good for Caveman2 (and/or <a href="http://8arrow.org/ningle/" class="link">Ningle</a>, which Caveman2 is based on). I really had a hard time finding things. The other frameworks' documentation is better. However, since the frameworks for a large part glue together libraries, it is possible to look at the documentation for those libraries directly. The documentation for the Hunchentoot server, cl-who, Spinneret, etc. is sufficiently complete.</p>

<p>The web application we will be developing during this article is based on an old web page design of mine that I'd like to revive. The web application will primarily be about a 'blog' feature that allows blog posts to be written in HTML or Markdown and stored as files. The application will pick them up and convert them on the fly (in the case of Markdown).</p>

<p>The web application is based on the following libraries (web application relevant only):</p>

<ul>
<li>a simple self-made MVC like structure</li>
<li><a href="https://edicl.github.io/hunchentoot" class="link">Hunchentoot</a> HTTP server</li>
<li><a href="https://github.com/joaotavora/snooze" class="link">Snooze</a> REST routing library. This library is implemented with plain CLOS and hence can be easily unit tested. We'll see later how this works. I didn't find this easily possible with any other routing definitions of the other frameworks.</li>
<li><a href="https://github.com/edicl/cl-who" class="link">cl-who</a> for HTML generation, because this old web page is heavy on HTML 4. Otherwise I had used Spinneret.</li>
<li><a href="https://github.com/3b/3bmd" class="link">3bmd</a> for Markdown to HTML conversion.</li>
<li><a href="https://github.com/VitoVan/xml-emitter" class="link">xml-emitter</a> for generating XML. Used for the Atom feed generation.</li>
<li><a href="https://github.com/dlowe-net/local-time" class="link">local-time</a> for dealing with date and time formats. Conversions from timestamp to string and vice versa.</li>
<li><a href="https://github.com/sharplispers/log4cl" class="link">log4cl</a> a logging library.</li>
<li><a href="https://github.com/lispci/fiveam" class="link">fiveam</a> as unit test library.</li>
<li><a href="https://github.com/Ferada/cl-mock/" class="link">cl-mock</a> a mocking library</li>
</ul>

<p>The project is hosted on <a href="https://github.com/mdbergmann/cl-swbymabeweb" class="link">GitHub</a>, so you can check out the sources yourself. The live web page is available <a href="http://retro-style.software-by-mabe.com/blog" class="link">here</a>.</p>

<h3><a name="starting"></a>Project start</h3>

<p>Since this was my first web project with Common Lisp I had to do some research on how to integrate and run the server, add routes, etc. This is where the scaffolding that frameworks like Caveman2 produce is appreciated.</p>

<p>But, once you know how that works you can start a project from scratch. Along the way you can create a template for future projects. (This can also be in combination with one of the mentioned frameworks.)</p>

<p>That means we don't have a lot of setup to start with. We create a project folder and a <em>src</em> and <em>tests</em> folder therein. That's it. We'll add an ASDF based project/system definition as we go along.</p>

<p>To get started and since we use a test-driven approach we'll start with adding an integration (or acceptance) test for the blog feature.</p>

<p>In order to add tests that are part of a full test suite we'll start creating an overall 'all-tests' test suite. Create a new Lisp buffer/file and add the following code and save it as <em>tests/all-tests.lisp</em>:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.tests
  (:use :cl :fiveam)
  (:export #:run!
           #:all-tests
           #:nil
           #:test-suite))

(in-package :cl-swbymabeweb.tests)

(def-suite test-suite
  :description "All catching test suite.")

(in-suite test-suite)</code></pre>

<p>This is an empty <em>fiveam</em> test package that just defines an empty test suite. It will help us later when creating the ASDF test system as we can point it to this 'all-tests' suite and it'll automatically run all tests of the application.</p>

<h3><a name="feature"></a>Adding the blog feature</h3>

<p>We will exercise the integration test cycle with the <em>blog</em> page. There are a few use cases for the blog page, of which we pick one to go through. The tests need to make sure that all components involved in serving this page are properly integrated and operational.</p>

<h4><a name="blog-feature_outer-test-loop-index"></a>The outer test loop</h4>

<p>As already said, we have two test cycles, an outer and an inner one. The outer test cycle represents the integration or acceptance tests while the inner cycles represent the unit tests. While working on the unit tests it is possible to go back to the outer test for verification. But the goal is to have the outer test fail until all the inner work is done, so that the outer test can act as a guide and a safety net. The outer test cases are added incrementally, feature by feature (or parts of a feature), while all code is developed and refined iteratively in the TDD workflow.</p>

<p><figure>
<img src="/static/gfx/blogs/outer-inner.png" alt="Outer-Inner" />
</figure></p>

<p>The blog index page is shown when a request goes to the path <em>/blog</em>. On this path the last available blog post is to be selected and displayed.<br/>
Let's start with the integration test and create a new Lisp buffer/file, save it as <em>tests/it-routing.lisp</em> and add the following code:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb-test
  (:use :cl :fiveam)
  (:local-nicknames (:dex :dexador))
  (:import-from #:cl-swbymabeweb
                #:start
                #:stop))
(in-package :cl-swbymabeweb-test)

(def-suite it-routing
  :description "Routing integration tests."
  :in cl-swbymabeweb.tests:test-suite)

(in-suite it-routing)

(def-fixture with-server ()
  (start :address "localhost")
  (sleep 0.5)
  (unwind-protect 
       (&body)
    (stop)
    (sleep 0.5)))

(test handle-blog-index-route
  "Test integration of blog - index."
  (with-fixture with-server ()
    (is (str:containsp "&lt;title&gt;Manfred Bergmann | Software Development | Blog"
                         (dex:get "http://localhost:5000/blog")))))</code></pre>

<p>Let's go through it. It creates a new test package and a new test suite. The <code>:in cl-swbymabeweb.tests:test-suite</code> adds this test suite to the <em>all-tests</em> test suite that we've created before.</p>

<p>The test <code>handle-blog-index-route</code> is a full-cycle integration test that uses the dexador HTTP client to run a request against the server and expects a certain page title to be part of the result HTML. Of course, more assertions should be added to make this a proper acceptance test. The intention of the test, and of the feature, should be fully clear at this stage. For simplicity we'll more or less just test the routing and the overall integration of components. This test, though, doesn't give any hint about the architecture of the application or about the inner components. The architecture is carved out step by step by following the flow of calls or data (outside-in).</p>

<p>Since fiveam does not support <em>before</em> or <em>after</em> setup/cleanup functionality, we have to work around this using a fixture defined by <code>def-fixture</code>. The fixture will <code>start</code> and <code>stop</code> the HTTP server and in between run the code that is the <em>body</em> of <code>with-fixture</code>. We also wrap the calls in <code>unwind-protect</code> in order to force shutting down the server as a cleanup step even if the <code>&amp;body</code> raises an error. Otherwise the stack would unwind while the HTTP server kept running, which would have consequences for the next test we run.</p>

<p>Now, as part of adding this test we define a few things that don't exist yet. For example, we define a package called <code>cl-swbymabeweb</code> from which we import <code>start</code> and <code>stop</code>. Those <code>start</code> and <code>stop</code> functions obviously start and stop the web server, so the package <code>cl-swbymabeweb</code> should be an application entry package that does those things.<br/>
This is part of what tests-first and TDD do: the test acts as the first user of the production code and so defines what the interface should look like from an API user's perspective.</p>

<p>When evaluating this buffer/file (I use <code>sly-eval-buffer</code> in Sly, or <code>C-c C-k</code> when the file was saved) we realize (from error messages) that there are some missing packages. So in order to at least get this compiled we have to load the dependencies using <em>quicklisp</em>. Here this would be <code>:dexador</code>, <code>:fiveam</code> and <code>:str</code> (string library).<br/>
We also have to create the defined package <code>cl-swbymabeweb</code> and add stubs (for now) for the <code>start</code> and <code>stop</code> functions. That's what we do now. Create a new buffer/file, add the following code as the minimum code to make the integration test compile, evaluate it and save it under <em>src/main.lisp</em>.</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb
  (:use :cl)
  (:export #:start
           #:stop))

(in-package :cl-swbymabeweb)

(defun start (&key address))
(defun stop ())</code></pre>

<p>We can now go into the test package in the REPL by doing <code>(in-package :cl-swbymabeweb-test)</code> and run the test where we will see the following output:</p>

<pre class="nohighlight"><code>CL-SWBYMABEWEB-TEST&gt; (run! 'handle-blog-index-route)

Running test HANDLE-BLOG-INDEX-ROUTE X
 Did 1 check.
    Pass: 0 ( 0%)
    Skip: 0 ( 0%)
    Fail: 1 (100%)

 Failure Details:
 --------------------------------
 HANDLE-BLOG-INDEX-ROUTE in IT-ROUTING [Test integration of blog - index.]: 
      Unexpected Error: #&lt;USOCKET:CONNECTION-REFUSED-ERROR #x30200389ACBD&gt;
Error #&lt;USOCKET:CONNECTION-REFUSED-ERROR #x30200389ACBD&gt;.
 --------------------------------</code></pre>

<p>So, of course. Dexador is trying to connect to the server, but there is no server running. The <code>start/stop</code> functions are only stubs. This is OK. It is expected.</p>

<p><a name="blog-feature-start_the_server"></a><em>Start the server, for real</em></p>

<p>In order for the integration test to do its job and test the full integration we still have a bit more work to do before we move on. At the very least the HTTP server should be working. Let's do that now:</p>

<p>Add the following to <em>src/main.lisp</em> on top of the <code>start</code> function:</p>

<pre class="lisp"><code>(defvar *server* nil)</code></pre>

<p>For the <code>start</code> function we'll change the signature like this in order to be able to also specify a different port: <code>&amp;key (port 5000) (address &quot;0.0.0.0&quot;)</code>. Finally we'll now start the server like so in <code>start</code>:</p>

<pre class="lisp"><code>(defun start (&key (port 5000) (address "0.0.0.0") &allow-other-keys)
  (log:info "Starting server.")
  (when *server*
    (log:info "Server is already running."))
  (unless *server*
    (setf *server*
          (make-instance 'hunchentoot:easy-acceptor
                         :port port
                         :address address))    
    (hunchentoot:start *server*)))</code></pre>

<p>This code will make sure that there is no server instance currently being set and if not it will create a server instance and start it.</p>

<p>As a general dependency we use <em>log4cl</em>, a logging framework.</p>

<p>The <code>stop</code> function can be implemented like this:</p>

<pre class="lisp"><code>(defun stop ()
  (when *server*
    (log:info "Stopping server.")
    (prog1
        (hunchentoot:stop *server*)
      (log:debug "Server stopped.")
      (setf hunchentoot:*dispatch-table* nil)
      (setf *server* nil))))</code></pre>

<p>After 'quickloading' <em>log4cl</em> and <em>hunchentoot</em> and running the test again we will see the following output instead:</p>

<pre class="nohighlight"><code>CL-SWBYMABEWEB-TEST&gt; (run! 'handle-blog-index-route)

Running test HANDLE-BLOG-INDEX-ROUTE 
 &lt;INFO&gt; [21:35:21] cl-swbymabeweb (start) - Starting server.
::1 - [2020-09-07 21:35:22] "GET /blog HTTP/1.1" 404 339 "-" 
"Dexador/0.9.14 (Clozure Common Lisp Version 1.12  DarwinX8664); Darwin; 19.6.0"
X
 &lt;INFO&gt; [21:35:22] cl-swbymabeweb (stop) - Stopping server.
 Did 1 check.
    Pass: 0 ( 0%)
    Skip: 0 ( 0%)
    Fail: 1 (100%)

 Failure Details:
 --------------------------------
 HANDLE-BLOG-INDEX-ROUTE in IT-ROUTING [Test integration of blog - index.]: 
      Unexpected Error: #&lt;DEXADOR.ERROR:HTTP-REQUEST-NOT-FOUND #x3020032527FD&gt;
An HTTP request to "http://localhost:5000/blog" returned 404 not found.

&lt;html&gt;&lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;h1&gt;Not Found&lt;/h1&gt;
The requested URL /blog was not found on this server.&lt;p&gt;&lt;hr&gt;&lt;address&gt;
&lt;a href='http://weitz.de/hunchentoot/'&gt;Hunchentoot 1.3.0&lt;/a&gt; 
&lt;a href='http://openmcl.clozure.com/'&gt;
(Clozure Common Lisp Version 1.12  DarwinX8664)&lt;/a&gt; 
at localhost:5000&lt;/address&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;.
 --------------------------------</code></pre>

<p>This looks a lot better. The test still fails, which is good and expected. But the server works and responds with 404 for a request to <a href="http://localhost:5000/blog" class="link">http://localhost:5000/blog</a>.</p>

<p>The test will fail until the server responds with some HTML that contains the expected page title. In order to have the right page title we'll still have some work to do. So now is the time to move towards the inner test loops and develop the inner components in a TDD style. The inner unit tests should of course all pass.</p>

<p><a name="blog-feature_asdf-system"></a><em><a href="https://common-lisp.net/project/asdf/" class="link">ASDF</a> - a quick detour</em></p>

<p>But before we do that, and while we still remember which files and libraries we added to make this all work, we should set up an ASDF system that we'll expand as we go along.</p>

<p>For a quick recap, ASDF is the de facto standard for defining Common Lisp systems (or projects if you want). It allows defining library dependencies, source dependencies, tests, and a lot of other metadata.</p>

<p>So create a new buffer/file, save it as <em>cl-swbymabeweb.asd</em> in the root folder of the project and add the following:</p>

<pre class="lisp"><code>(defsystem "cl-swbymabeweb"
  :version "0.1.1"
  :author "Manfred Bergmann"
  :depends-on ("hunchentoot"
               "uiop"
               "log4cl"
               "str")
  :components ((:module "src"
                :components
                ((:file "main"))))
  :description ""
  :in-order-to ((test-op (test-op "cl-swbymabeweb/tests"))))

(defsystem "cl-swbymabeweb/tests"
  :author "Manfred Bergmann"
  :depends-on ("cl-swbymabeweb"
               "fiveam"
               "dexador"
               "str")
  :components ((:module "tests"
                :components
                ((:file "all-tests")
                 (:file "it-routing" :depends-on ("all-tests"))
                 )))
  :description "Test system for cl-swbymabeweb"
  :perform (test-op (op c)
                    (symbol-call :fiveam :run!
                                 (uiop:find-symbol* '#:test-suite
                                                    '#:cl-swbymabeweb.tests))))</code></pre>

<p>This defines the necessary ASDF system and test system to fully load the project so far. When the project is in a folder where ASDF can find it (like <em>~/common-lisp</em>) it can be loaded into the image by:</p>

<pre class="lisp"><code>;; load (and compile if necessary) the production code
(asdf:load-system "cl-swbymabeweb")

;; load (and compile if necessary) the test code
(asdf:load-system "cl-swbymabeweb/tests")

;; run the tests
(asdf:test-system "cl-swbymabeweb/tests")</code></pre>

<p>Notice <code>test-system</code> vs. <code>load-system</code>. Since Common Lisp (CL) is image based, ASDF is a facility that can load a full project into the CL image. Keeping the system definition up to date is a bit cumbersome because loading the system must be performed on a clean image to really see whether it works and whether all dependencies are named properly. I usually do this by issuing <code>sly-restart-inferior-lisp</code>, then loading the system and the test system, and finally testing the test system. Once that works it is quite easy to continue working on a project, which merely requires:</p>

<ol>
<li>open Emacs</li>
<li>run Sly/Slime REPL</li>
<li><code>load-system</code> (also the test system if tests should be run) of the project to work on.</li>
</ol>

<p>Up to here we have a directory structure like this:</p>

<pre class="nohighlight"><code>.
├── cl-swbymabeweb.asd
├── src
│   └── main.lisp
└── tests
    ├── all-tests.lisp
    └── it-routing.lisp</code></pre>

<p>I need to mention that the ASDF systems we defined explicitly name the source files and dependencies. ASDF can also work in a different mode where it determines source dependencies according to the <code>:use</code> directives of the packages defined across the files (I tend to use one package per file). This mode then just requires the root source file definition and it can sort out the rest. Look in the ASDF documentation for <em>package-inferred-system</em> if you are interested.</p>
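<p>A hedged sketch of what that could look like (the file and package names here are illustrative, not the actual layout of this project):</p>

<pre class="lisp"><code>;; cl-swbymabeweb.asd -- package-inferred-system variant (sketch)
(defsystem "cl-swbymabeweb"
  :class :package-inferred-system
  :depends-on ("cl-swbymabeweb/src/main"))

;; src/main.lisp -- the package name mirrors the file path;
;; its :use/:import-from clauses double as the dependency list
(defpackage :cl-swbymabeweb/src/main
  (:use :cl :cl-swbymabeweb/src/routes))</code></pre>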

<h4><a name="blog-feature_inner-test-loops-first"></a>The inner test loops</h4>

<p>Now we will move on to the inner components. The first component that is hit by a request is the routing. We have to define which requests and request paths are handled by what and how. As mentioned earlier, most frameworks come with a routing mechanism that allows defining routes. We will use <a href="https://github.com/joaotavora/snooze" class="link">Snooze</a> for this. The difference between Snooze and other URL routing frameworks is more or less that in Snooze routes are defined using plain Lisp functions and HTTP conditions are just Common Lisp conditions. The author says: <em>&quot;Since you stay inside Lisp, if you know how to make a function, you know how to make a route. There are no regular expressions to write or extra route-defining syntax to learn.&quot;</em> The other good thing is that the routing can easily be unit tested.</p>
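<p>To illustrate the point, here is a hedged sketch of such a route, adapted from the Snooze README (<code>find-post</code> is a hypothetical helper, not something built in this article):</p>

<pre class="lisp"><code>;; a route is just a function; `id' is bound from the URL path,
;; e.g. GET /post/42 calls this handler with id = 42
(snooze:defroute post (:get :text/html id)
  (or (find-post id)
      (snooze:http-condition 404 "No post with id ~a" id)))</code></pre>

<p>Signalling <code>http-condition</code> is just signalling a Common Lisp condition, which Snooze translates into the corresponding HTTP response.</p>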

<h5><a name="blog-feature_url-routing"></a>URL routing / introducing the MVC controller</h5>

<p><figure>
<img src="/static/gfx/blogs/router-cut.png" alt="Router cut" />
</figure></p>

<p>Of course we will start with a test for the routing. There will also be a new architectural component in play, the <em>MVC controller</em>. The URL routing is still a component heavily tied to the system boundary as it has to deal with HTTP inputs, outputs and response codes. In order to apply separation of concerns and the single responsibility principle (SRP) we make the routing responsible for collecting all relevant input from the HTTP request and passing it on to the <em>controller</em>. At this stage we have to establish a contract between the router and the <em>controller</em>. So we define the input and the expected output of the <em>controller</em> as we see fit from our perspective in the router. The output also includes the errors the <em>controller</em> may raise. All this is primarily carved out while developing the routing tests.</p>

<p>So let's put together the first routing test. Create a new buffer/file, save it as <em>tests/routes-test.lisp</em> and put the following in:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.routes-test
  (:use :cl :fiveam :cl-mock :cl-swbymabeweb.routes)
  (:export #:run!
           #:all-tests
           #:nil))
(in-package :cl-swbymabeweb.routes-test)

(def-suite routes-tests
    :description "Routes unit tests"
    :in cl-swbymabeweb.tests:test-suite)

(in-suite routes-tests)

(test blog-route-index
  "Tests the blog index route.")</code></pre>

<p>This just defines an empty test. But we require a new library called <a href="https://github.com/Ferada/cl-mock/" class="link">cl-mock</a>. It is a mocking framework.</p>

<p>Why do we need mocking here? Well, we want to use a collaborating component, the <em>controller</em>. But we want to defer the implementation of the <em>controller</em> until it is necessary, and that is not now. The mock allows us to define the interface to the <em>controller</em> without having to implement it. This also lets us stay focused on the routing and on the <em>controller</em> interface definition. We don't need to be distracted by any <em>controller</em> implementation details.</p>

<p>In order to get the test package compiled we have to <em>quickload</em> two things, that is <code>snooze</code> and <code>cl-mock</code>. We also have to create the 'package-under-test' package. This can for now simply look like so (save as <em>src/routes.lisp</em>):</p>

<pre class="lisp"><code>(defpackage cl-swbymabeweb.routes
  (:use :cl :snooze))
(in-package :cl-swbymabeweb.routes)</code></pre>

<p>Once the test code compiles and we can actually run the empty test (use <code>in-package</code> and <code>run!</code> as above) we can move on to implementing more of the test.</p>

<p>One thing to remember is to update the ASDF system definition with the new files and library dependencies we added. However, in order not to interrupt the workflow I'd like to defer that until we have a clear head again. The best time is probably when we are done with the unit tests for the routes.</p>

<p>Now, let's add the following to the test function <code>blog-route-index</code>:</p>

<pre class="lisp"><code>  (with-mocks ()
    (answer (controller.blog:index) (cons :ok ""))

    (with-request ("/blog") (code)
      (is (= 200 code))
      (is (= 1 (length (invocations 'controller.blog:index))))))</code></pre>

<p><code>with-mocks</code> is a macro that comes with <em>cl-mock</em>. Any mocking code must be wrapped inside it. To actually mock a function call we use the <code>answer</code> macro, which is also part of <em>cl-mock</em>. The use of <code>answer</code> in our test code basically means: <em>answer</em> a call to the function <code>(controller.blog:index)</code> with the result <code>(cons :ok &quot;&quot;)</code>. Since the <em>controller</em> does not yet exist, we define its interface here and now. This is how we want the <em>controller</em> to work. We decided that there should be a dedicated <em>controller</em> for the <em>blog</em> family of pages. We also decided that if there is no query parameter we want the <code>index</code> function of the <em>controller</em> to deliver an appropriate result. The result should be a <code>cons</code> consisting of an <em>atom</em> (the <em>car</em>) and a string (the <em>cdr</em>). The <em>car</em> indicates a success or failure result (the exact failure atoms we don't know yet). The <em>cdr</em> contains a string with either the generated HTML content or a failure description. <code>answer</code> doesn't call the function, it just records what has to happen when the function is called.<br/>
Let's move on: the <code>with-request</code> macro (below) is copied from the <em>snooze</em> sources. It takes a request path and fills the <code>code</code> parameter with the result of the route handler. In the body of the <code>with-request</code> macro we can verify the <code>code</code> with an expected code. Also we want to verify that the request handler actually called the <em>controller</em> index function by checking the number of <code>invocations</code> that <em>cl-mock</em> recorded.</p>
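<p>Stated as plain Lisp, the contract we just carved out amounts to something like the following sketch (the real controller is developed test-first later; the strings are placeholders):</p>

<pre class="lisp"><code>;; success: (:ok . html-string) -- failure: (:some-error . reason-string)
(defun index ()
  (cons :ok "the generated HTML"))

;; the route handler can then dispatch on the car of the result:
(defun handle-result (result)
  (case (car result)
    (:ok (cdr result))           ; deliver the HTML
    (otherwise "error page")))   ; map failures to an error response</code></pre>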

<p>Now to compile and run the test there are a few things missing. First of all the <code>with-request</code> macro. Copy the following to <em>routes-test.lisp</em>:</p>

<pre class="lisp"><code>(defmacro with-request ((uri
                         &rest morekeys
                         &key &allow-other-keys) args
                        &body body)
  (let ((result-sym (gensym)))
    `(let* ((snooze:*catch-errors* nil)
            (snooze:*catch-http-conditions* t)
            (,result-sym
              (multiple-value-list
               (snooze:handle-request
                ,uri
                ,@morekeys)))
            ,@(loop for arg in args
                    for i from 0
                    when arg
                      collect `(,arg (nth ,i ,result-sym))))
       ,@body)))</code></pre>
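<p>To make the mechanics concrete, here is a small, hypothetical usage sketch (not part of the test suite). The names in the second argument list are bound positionally to the return values of <code>snooze:handle-request</code>, the first one being the HTTP status code:</p>

<pre class="lisp"><code>;; Hypothetical usage sketch of with-request.
;; `code` binds the first return value of snooze:handle-request;
;; a second name would bind the second return value.
(with-request ("/blog") (code payload)
  (format t "status : ~a~%" code)
  (format t "payload: ~a~%" payload))</code></pre>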

<p>Also we need a stub of the <em>controller</em>. Create a new buffer/file, save it as <em>src/controllers/blog.lisp</em> and add the following:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.controller.blog
  (:use :cl)
  (:export #:index)
  (:nicknames :controller.blog))

(in-package :cl-swbymabeweb.controller.blog)

(defun index ())</code></pre>

<p>The test and the overall code should now compile. When running the test we see that the HTTP result code is 404 instead of the expected 200. We also see that </p>

<pre class=""><code>(LENGTH (INVOCATIONS 'CL-SWBYMABEWEB.CONTROLLER.BLOG:INDEX))</code></pre>

<p>evaluated to 0, which means that the <em>controller</em> index function was not called.<br/>
This is good, because there is no route defined in <em>src/routes.lisp</em> yet. In contrast to the outer-loop tests, which we shouldn't make pass immediately, we should of course solve this one. So let's add a route now to make the test 'green'. Add this to <em>routes.lisp</em>:</p>

<pre class="lisp"><code>(defroute blog (:get :text/html)
  (controller.blog:index))</code></pre>

<p>This defines a route with the path <em>/blog</em>. It specifies that it must be a <em>GET</em> request and that the output has a content type of <em>text/html</em>.<br/>
When we now evaluate the new route and run the test again, both <code>is</code> assertions pass.</p>

<p>At this point we should add a failure case as well. What could be a failure for the <em>index</em> route? The index is supposed to take the last available blog entry and deliver it. Having no blog entry is not an error, I would say; rather, the HTML content that the controller delivers should be empty, or should contain a simple string saying &quot;there are no blog entries&quot;. So the only error that could be returned here is some kind of internal error that was raised somewhere and bubbles up through the controller to the route handler.</p>

<p>Let's add an additional test:</p>

<pre class="lisp"><code>(test blog-route-index--err
  "Tests the blog index route. internal error"
  (with-mocks ()
    (answer (controller.blog:index) (cons :internal-error "Foo"))

    (with-request ("/blog") (code)
      (is (= 500 code))
      (is (= 1 (length (invocations 'controller.blog:index)))))))</code></pre>

<p>Running the test has one assertion failing. The <em>code</em> is actually 200, but we expect it to be 500. In order to fix this we have to add some error handling and differentiate between <code>:ok</code> and <code>:internal-error</code> results from the <em>controller</em>. Let's do this; change the route definition to:</p>

<pre class="lisp"><code>(defroute blog (:get :text/html)
  (let ((result (controller.blog:index)))
    (case (car result)
      (:ok (cdr result))
      (:internal-error (http-condition 500 (cdr result))))))</code></pre>

<p>This makes all existing tests green. But I'd like to add another test for a scenario where the controller result is undefined, or at least not what the router expects. I'd like to prepare for anything unforeseen that might happen in this route handler, so that the response is a well-defined error code. So add this test:</p>

<pre class="lisp"><code>(test blog-route-index--err-undefined-controller-result
  "Tests the blog index route.
internal error when controller result is undefined."
  (with-mocks ()
    (answer (controller.blog:index) nil)

    (with-request ("/blog") (code)
      (is (= 500 code))
      (is (= 1 (length (invocations 'controller.blog:index)))))))</code></pre>

<p>When executing this test the response code is 204, which represents 'no content'. And indeed, this is correct. When the <em>controller</em> result is <code>nil</code>, the route handler also returns <code>nil</code>: there is no <code>case</code> clause that handles this <em>controller</em> result, so the function falls through without an explicit return value, which makes it return <code>nil</code>. So we have to change the router code a bit to handle this and other cases. Change the route definition to:</p>

<pre class="lisp"><code>(defroute blog (:get :text/html)
  (handler-case
      (let ((result (controller.blog:index)))
        (case (car result)
          (:ok (cdr result))
          (:internal-error (http-condition 500 (cdr result)))
          (t (error "Unknown controller result!"))))
    (error (c)
      (let ((error-text (format nil "~a" c)))
        (log:error "Route error: " error-text)
        (http-condition 500 error-text)))))</code></pre>

<p>The outer <code>handler-case</code> catches any error that may happen and produces a proper 500 code. Additionally it logs the error text. The <code>case</code> has been extended with an 'otherwise' clause which signals an error condition that is caught by the outer <code>handler-case</code>.<br/>
When running the tests again we should be fine.</p>
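<p>The fall-through behavior that previously produced the 204 can be reproduced in isolation at the REPL: with a <code>nil</code> result, <code>(car nil)</code> is <code>nil</code>, no clause matches, and the <code>case</code> form evaluates to <code>nil</code>:</p>

<pre class="lisp"><code>;; A nil controller result matches no case clause,
;; so the whole form evaluates to NIL.
(let ((result nil))
  (case (car result)
    (:ok (cdr result))
    (:internal-error "error")))
;; => NIL</code></pre>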

<p>The tests of the router don't actually test the string content (the <em>cdr</em>) of the <em>controller</em> result because it's irrelevant to the router. It's important to test only the responsibilities of the unit under test. Tests that go beyond the responsibilities, or the public interface, of the unit under test lead to more rigidity, and the likelihood is much higher that such tests fail and must be fixed when changes are made to production code elsewhere.</p>

<p>We are now done with this feature slice in the router. It is a good time to bring the ASDF system definition up to date. Add the new library dependencies <em>snooze</em> and <em>cl-mock</em>, and change the <code>:components</code> section to look like this:</p>

<pre class="lisp"><code>  :components ((:module "src"
                :components
                ((:module "controllers"
                  :components
                  ((:file "blog")))
                 (:file "routes")
                 (:file "main"))))</code></pre>

<p>This adds the 'controllers' sub-folder as a sub-component that can name additional source files under it. When done, restart the REPL, load both systems and run <code>test-system</code> again. At this point the output should look like this:</p>

<pre class="nohighlight"><code> Did 7 checks.
    Pass: 6 (85%)
    Skip: 0 ( 0%)
    Fail: 1 (14%)</code></pre>
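<p>For reference, the reload-and-test cycle from the REPL might look like this (the system names here are assumptions; adjust them to your actual <em>.asd</em> definitions):</p>

<pre class="lisp"><code>;; Hypothetical REPL session; system names are assumptions.
(asdf:load-system :cl-swbymabeweb)        ;; load the production system
(asdf:load-system :cl-swbymabeweb-tests)  ;; load the test system
(asdf:test-system :cl-swbymabeweb)        ;; run the fiveam test suite</code></pre>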

<p>The only expected failing test is the integration test, though the failure reason is still 404 'Not Found'. This is because we have not yet registered the route with the HTTP server. But I'd like to postpone this until we have implemented the <em>controller</em>.</p>

<h5><a name="blog-feature_blog-controller-first"></a>The blog controller</h5>

<p>Before we implement the tests for the <em>controller</em> and the <em>controller</em> code itself, we have to think a bit about the position of this component in relation to the other components, and about what the responsibilities of the blog <em>controller</em> are. In the MVC pattern it has the role of controlling the <em>view</em>. It also has the responsibility to generate the data, the <em>model</em>, that the <em>view</em> needs to do its job. The <em>view</em> is responsible for producing a representation in a desired output format. The <em>model</em> usually consists of a) the data to present to the user and b) the attributes that control the <em>view</em> components, e.g. visibility (enabled/disabled, etc.).<br/>
In our case we want the <em>view</em> to create an HTML page representation that contains the text and images of a blog entry, the blog navigation, and all the rest of the page. So the <em>model</em> must contain everything that the <em>view</em> needs in order to generate all this.<br/>
Let's have a look at this diagram:</p>

<p><figure>
<img src="/static/gfx/blogs/class-deps-diag.png" alt="Class dependencies" width="375" />
</figure></p>

<p>The blog <em>controller</em> should not have to deal with loading the blog entry data from file. This is the responsibility of the <em>blog repository</em>. Extracting this functionality into a separate component has a few advantages. It keeps the <em>controller</em> code small and clean, maintaining <em>single responsibility</em>. This is reflected in the tests; they will be simple and clean as well. The <em>blog-repo</em> will have to be mocked for the tests. The interface to the <em>blog-repo</em> will also be defined while implementing the tests for the blog <em>controller</em>, on a use-case basis. The <em>blog-repo</em>, being in a separate area of the application, will have its own model that carries the data of the blog entries. The <em>controller</em>'s job will be to map all relevant data from the <em>blog-repo</em> model to the <em>view</em> model. A <em>model</em> separation is important here for orthogonality. Both parts, the <em>blog-repo</em> and the <em>controller</em>/<em>view</em> combo, should be able to move and develop separately and at their own pace. Of course the <em>controller</em>, as a user of the <em>blog-repo</em>, has to adapt to changes of the <em>blog-repo</em> interface and model. But this should only be necessary in one direction.<br/>
The purpose of the <em>blog-repo-factory</em> is to be able to switch the kind of <em>blog-repo</em> implementation for different environments. It allows us to make the <em>controller</em> use a different <em>blog-repo</em> for test and production environments. The <em>controller</em> will only access the <em>blog-repo</em> through the <em>blog-repo-facade</em>, a simplified interface that hides away all the inner workings of the blog repository. So the <em>controller</em> will only use two things: the <em>blog-repo-facade</em> and the blog repo model. This simplified interface to the blog repository will also be simple to mock in the <em>controller</em> tests, as we will see shortly.</p>

<p>The arrows in this diagram mark the direction of dependencies. The <em>router</em> has an inward dependency on the <em>controller</em>. The <em>controller</em> in turn has a dependency on the <em>view</em> and <em>model</em> because it must create both. The <em>controller</em> also has a dependency on the <em>blog-repo-facade</em> and on the model of the <em>blog-repo</em>. But none of these should have a dependency on the <em>controller</em>. Nor should the <em>controller</em> know about anything happening in the <em>router</em> or deal with HTTP codes directly. That is the responsibility of the <em>router</em>.</p>

<p><a name="blog-feature_mvc-detour"></a><em>MVC - a quick detour</em></p>

<p>The MVC pattern at first wasn't actually a pattern, or at least not officially known as one. It was added to Smalltalk as a way to program UIs in the late 70s, and only later was MVC adopted by other languages and frameworks. It allows decoupling and a separation of concerns. Different teams can work on <em>view</em>, <em>controller</em> and <em>model</em> code. It also allows better testability and higher reusability of the code. Use-cases grouped as MVC have high cohesion and low coupling.</p>

<p>It's interesting: the 70s to 90s were amazing times. Pretty much all technological advancements of programming languages and the patterns of computer science date from this time frame. Structured programming, object-oriented programming, functional programming, statically typed languages (Standard ML) and type inference (Hindley-Milner) were invented then. It was a time of open-minded exploration and ideas.</p>

<p>-- <em>detour end</em></p>

<p><figure>
<img src="/static/gfx/blogs/controller-cut.png" alt="Controller cut" />
</figure></p>

<p>Again, we start with test code, the plain blog <em>controller</em> test package. Save this as <em>tests/blog-controller-test.lisp</em>:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.blog-controller-test
  (:use :cl :fiveam :cl-mock)
  (:export #:run!
           #:all-tests
           #:nil))
(in-package :cl-swbymabeweb.blog-controller-test)

(def-suite blog-controller-tests
  :description "Tests for blog controller"
  :in cl-swbymabeweb.tests:test-suite)

(in-suite blog-controller-tests)</code></pre>

<p>The first test is already a bummer. It is slightly more complex than anything we have had so far, but it won't get much more complex. It's hard to convey in writing how this develops slowly when applying TDD's red-green-refactor phases. So I'm just pasting the complete test with all the additional data it needs. But of course this was developed in the classic TDD style.</p>

<pre class="lisp"><code>(defparameter *expected-page-title-blog*
  "Manfred Bergmann | Software Development | Blog")

(defparameter *blog-model* nil)

(test blog-controller-index-no-blog-entry
  "Test blog controller index when there is no blog entry available"

  (setf *blog-model*
        (make-instance 'blog-view-model
                       :blog-post nil))
  (with-mocks ()
    (answer (blog-repo:repo-get-latest) (cons :ok nil))
    (answer (blog-repo:repo-get-all) (cons :ok nil))
    (answer (view.blog:render *blog-model*) *expected-page-title-blog*)

    (is (string= (cdr (controller.blog:index)) *expected-page-title-blog*))
    (is (= 1 (length (invocations 'view.blog:render))))
    (is (= 1 (length (invocations 'blog-repo:repo-get-all))))
    (is (= 1 (length (invocations 'blog-repo:repo-get-latest))))))</code></pre>

<p>The first, simpler test assumes that no blog entry exists. Let's go through it:<br/>
Two new things come into play: 1) the <em>blog-repo-facade</em>, represented here by the <em>blog-repo</em> package, and 2) the blog <em>view</em> package <code>view.blog</code>. The blog <em>view</em>'s <code>render</code> function will produce HTML output. We will mock the <em>view</em> generation and <code>answer</code> with the pre-defined <code>*expected-page-title-blog*</code>. The blog <em>view</em> will also need a <em>model</em>, represented by the <code>*blog-model*</code> parameter.<br/>
Again we need to set up mocks using the <code>with-mocks</code> macro. The <code>answer</code> calls represent the interfaces and function calls the <em>controller</em> should make to the <em>blog-repo</em> in order to retrieve all blog entries (<code>repo-get-all</code>), which is internally triggered through the call to <code>repo-get-latest</code>. So the way the <em>blog-repo</em> works is that in order to retrieve the latest entry, all entries must have been collected before. Again we define the output interface of the <em>blog-repo</em> to be a <code>cons</code> with an <em>atom</em> as <em>car</em> and a result as <em>cdr</em>. The two <em>blog-repo</em> facade calls both return <code>:ok</code> but contain an empty result. This is not an error. The <em>view</em> has to render appropriately, which is not tested here. Also, again, the <em>controller</em> tests only test the expected behavior of the <em>controller</em>, which is more or less: generate the <em>model</em> for the view, pass it to the <em>view</em>, and take a response from the <em>view</em>.<br/>
The <code>view.blog:render</code> function takes the blog <em>model</em> as parameter and should return some HTML which contains the expected page title. <code>*blog-model*</code> is a class instance which is initialized more or less empty here (<code>nil</code> represents empty).</p>

<p>The assertions make sure that a call to <code>controller.blog:index</code> actually returns the expected page title as <em>cdr</em> and also that all expected functions have been called.</p>

<p>In order for this to compile we have to add a few things. Create a new buffer/file and add the following code for the stub of the <em>blog-repo-facade</em> and save it as <em>src/blog-repo.lisp</em>:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.blog-repo
  (:use :cl)
  (:nicknames :blog-repo)
  (:export ;; facade for repo access
           #:repo-get-latest
           #:repo-get-all))

(in-package :cl-swbymabeweb.blog-repo)

(defun repo-get-latest ()
  "Retrieves the latest entry of the blog.")

(defun repo-get-all ()
  "Retrieves all available blog posts.")</code></pre>

<p>Also we have to add the <em>view</em> stub. Create a new buffer/file, save it as <em>src/views/blog.lisp</em> and add the following:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.view.blog
  (:use :cl)
  (:nicknames :view.blog)
  (:export #:render
           #:blog-view-model))

(in-package :cl-swbymabeweb.view.blog)

(defclass blog-view-model ()
  ((blog-post :initform nil
              :initarg :blog-post
              :reader model-get-blog-post)
   (all-blog-posts :initform '()
              :initarg :all-blog-posts
              :reader model-get-all-posts)))

(defun render (view-model))</code></pre>

<p>This package also defines the <em>model</em> class. The <code>blog-view-model</code> is the aggregated <em>model</em> that is passed in from the <em>controller</em>. The <code>blog-post</code> and <code>all-blog-posts</code> slots represent the 'to-be-displayed' blog entry and all available blog entries, the latter being relevant for the navigation view component. To have a separation from the <em>blog-repo</em> model data (orthogonality) we will add separate model classes that are used only for the <em>view</em>. We will do this shortly.<br/>
Considering the dependencies, we have two options of where to define the <em>model</em>: either right here (which works for a simple model class), or in a separate package. In case we have more, and more complex, model classes the latter would be the better approach.</p>

<p>There is one addition we have to make to the test package. Add the following right below the <code>:export</code>:</p>

<pre class="lisp"><code>  (:import-from #:view.blog
                #:blog-view-model)</code></pre>

<p>We explicitly import only the things we really need. This is the minimal code to get everything compiled, but the tests should still fail. So we have to add some logic to the <code>index</code> function of the <em>controller</em>.</p>

<p>In order for the blog <em>controller</em> code in <em>src/controllers/blog.lisp</em> to know the <em>view</em> functions we should import the <code>:view.blog</code> package, and then we have to implement some code to make the tests pass.</p>

<p><a name="blog-feature_tdd-detour"></a><em>TDD - a quick detour</em></p>

<p>We just have to look at the test expectations and implement them. An aspect of TDD I haven't talked about yet is the faking and cheating one can do in order to get the tests green (pass) as quickly as possible. When the tests pass we can refactor and replace the cheating with 'real' code. Until now I have presented you full implementations of the production code that fit the test expectations. But a TDD cycle moves in fast-paced iterations with only small changes: from red to green, then refactor, then from green to red with a new test case to restart the cycle. The step from red to green can contain cheating, because we want feedback as quickly as possible about whether what we did is good or not. When we cheat, 'good' just means that we meet the current test expectation. Iteration by iteration we add new expectations that at some point can't be cheated anymore. This workflow, which switches between test code and production code very rapidly, and the immediate feedback we get, put <em>us</em> into a kind of symbiosis with the test code and the production code. The realization and the feeling of the code building up this way is enormously satisfying. The fact that you can concentrate on small fractions of code but know that there is an outer protection (the outer test loop) is a big relief.</p>

<p>-- <em>detour end</em></p>

<p><a name="blog-feature_tdd_cheat"></a><em>The cheating</em></p>

<p>So now I'll introduce a bit of cheating that just makes the one existing test case and all its assertions pass. As I said, this actually builds up in much smaller steps. And eventually, of course, we need to get rid of the cheating.</p>

<p>So, replace the <code>index</code> function with the following implementation and also add the other small functions.</p>

<pre class="lisp"><code>(defun index ()
  (let ((lookup-result (blog-repo:repo-get-latest))
        (all-posts-result (blog-repo:repo-get-all)))
    (make-controller-result
     :ok
     (view.blog:render
      (make-view-model (cdr lookup-result) (cdr all-posts-result))))))

(defun make-controller-result (first second)
  "Convenience function to create a new controller result.
But also used for visibility of where the result is created."
  (cons first second))

(defun make-view-model (the-blog-entry all-posts)
  (make-instance 'blog-view-model
                 :blog-post nil
                 :all-blog-posts nil))</code></pre>

<p>This is partly cheated insofar as the view model is generated with hardcoded <code>nil</code> values, just as the tests expect. When we compile this we get warnings about the unused variables <code>the-blog-entry</code> and <code>all-posts</code>. Those warnings should be taken seriously; we'll fix them shortly.<br/>
To better <em>reveal the intention</em> of how the <em>controller</em> works and the output it generates, we add a function <code>make-controller-result</code> that creates the result (which after all is just a <code>cons</code>).<br/>
When we run the tests they all pass:</p>

<pre class="nohighlight"><code>Running test BLOG-CONTROLLER-INDEX-NO-BLOG-ENTRY ....
 Did 4 checks.
    Pass: 4 (100%)
    Skip: 0 ( 0%)
    Fail: 0 ( 0%)</code></pre>

<p>When we now add another test case we will have no other choice but to remove the cheating in order to make the tests pass. We will see this now. For the new test case we need quite a bit of additional production code, even if it's (more or less) just stubs, because we need to get things compiled in order to even run the new test. Add the following test case:</p>

<pre class="lisp"><code>(defparameter *blog-entry* nil)
;; 12 o'clock on the 20th September 2020
(defparameter *the-blog-entry-date* (encode-universal-time 0 0 12 20 09 2020))

(test blog-controller-index
  "Test blog controller for index which shows the latest blog entry"

  (setf *blog-entry*
        (blog-repo:make-blog-entry "Foobar"
                                   *the-blog-entry-date*
                                   "&lt;b&gt;hello world&lt;/b&gt;"))
  (setf *blog-model*
        (make-instance 'blog-view-model
                       :blog-post
                       (blog-entry-to-blog-post *blog-entry*)
                       :all-blog-posts
                       (mapcar #'blog-entry-to-blog-post (list *blog-entry*))))
  (with-mocks ()
    (answer (blog-repo:repo-get-latest) (cons :ok *blog-entry*))
    (answer (blog-repo:repo-get-all) (cons :ok (list *blog-entry*)))
    (answer (view.blog:render model-arg)
      (progn
        (assert
         (string= "20 September 2020"
                  (slot-value (slot-value model-arg 'view.blog::blog-post)
                              'view.blog::date)))
        (assert
         (string= "20-09-2020"
                  (slot-value (slot-value model-arg 'view.blog::blog-post)
                              'view.blog::nav-date)))
        *expected-page-title-blog*))

    (is (string= (cdr (controller.blog:index)) *expected-page-title-blog*))
    (is (= 1 (length (invocations 'view.blog:render))))
    (is (= 1 (length (invocations 'blog-repo:repo-get-all))))
    (is (= 1 (length (invocations 'blog-repo:repo-get-latest))))))</code></pre>

<p>The parameter <code>*blog-entry*</code> is set up with the <em>model</em> from the <em>blog-repo</em>, which we still have to define. Otherwise this is similar to the previous test case. The difference is that we now expect the <em>blog-repo</em> to actually give us blog entries, which are mapped to the <em>view</em> model and passed on to the <em>view</em> to generate the display. We also use a new feature of the <code>answer</code> macro: it can do pattern matching on the provided function parameter, so we can validate the <em>date</em> and <em>nav-date</em> formatting (we will add the model for this shortly). We also pre-define a timestamp with the <code>*the-blog-entry-date*</code> parameter, which we require to be stable for the test case.<br/>
Now let's add the missing code to get this compiled. Stay close, as we have to modify a few files.</p>

<p>To <em>src/blog-repo.lisp</em> add the following class which represents the blog <em>model</em>:</p>

<pre class="lisp"><code>(defclass blog-entry ()
  ((name :initform ""
         :type string
         :initarg :name
         :reader blog-entry-name
         :documentation "the blog name, the filename minus the date.")
   (date :initform nil
         :type fixnum
         :initarg :date
         :reader blog-entry-date
         :documentation "universal timestamp")
   (text :initform ""
         :type string
         :initarg :text
         :reader blog-entry-text
         :documentation "The HTML representation of the blog text.")))
         
(defun make-blog-entry (name date text)
  (make-instance 'blog-entry :name name :date date :text text))         </code></pre>

<p><code>make-blog-entry</code> is a convenience function to more easily create a <code>blog-entry</code> instance. This class has three slots. <code>name</code> represents the name of the blog entry. <code>date</code> is the date (a timestamp of type <code>fixnum</code>) of the last update of the blog entry. And <code>text</code> is the HTML representation of the blog entry text. We don't go into detail about the blog <code>text</code>; the <em>blog-repo</em> takes care of that. What is important is that it delivers the text in a representational format that is immediately usable. There may be different strategies at play in the <em>blog-repo</em> that convert from different sources to HTML. As initially pointed out, the goal is to allow plain HTML and Markdown texts. So at this point the <em>blog-repo</em> is a black box for us. We use the data as is.</p>
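<p>A short sketch of how such an entry is created and read back via the <code>reader</code> accessors (the data here is just an example):</p>

<pre class="lisp"><code>;; Sketch: creating a blog-entry and reading its slots
;; through the reader accessors defined on the class.
(let ((entry (make-blog-entry "my-first-post"
                              (encode-universal-time 0 0 12 20 09 2020)
                              "&lt;p&gt;Hello&lt;/p&gt;")))
  (blog-entry-name entry)   ;; => "my-first-post"
  (blog-entry-text entry))  ;; => "&lt;p&gt;Hello&lt;/p&gt;"</code></pre>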

<p>Then we have to add some additional <code>export</code>s to this package so that the class itself and the <code>reader</code> accessors can be used from importing packages.</p>

<pre class="lisp"><code>(:export #:make-blog-entry
         #:blog-entry-name
         #:blog-entry-date
         #:blog-entry-text
         ;; facade for repo access
         #:repo-get-latest
         #:repo-get-all)</code></pre>

<p>In <em>src/controllers/blog.lisp</em> we need the following additions:</p>

<pre class="lisp"><code>(defun blog-entry-to-blog-post (blog-entry)
  "Converts `blog-entry' to `blog-post'.
This function makes a mapping from the repository 
blog entry to the view model blog entry."
  (log:debug "Converting post: " blog-entry)
  (when blog-entry
    (make-instance 'blog-post-model
                   :name (blog-entry-name blog-entry)
                   :date (format-timestring nil
                                            (universal-to-timestamp
                                             (blog-entry-date blog-entry))
                                            :format
                                            '((:day 2) #\Space
                                              :long-month #\Space
                                              (:year 4)))
                   :nav-date (format-timestring nil
                                                (universal-to-timestamp
                                                 (blog-entry-date blog-entry))
                                                :format
                                                '((:day 2) #\-
                                                  (:month 2) #\-
                                                  (:year 4)))
                   :text (blog-entry-text blog-entry))))</code></pre>

<p>I'll explain in a bit what this does. Suffice it to say for now that this is the function that maps the data from a <code>blog-entry</code> data structure to the <code>blog-post-model</code> data structure (which we'll define next) as used in the <code>blog-view-model</code>.<br/>
This function uses date-time formatting, so we need an import for the functions <code>format-timestring</code> and <code>universal-to-timestamp</code>. Those are date-time conversion functions that allow a Common Lisp <code>get-universal-time</code> timestamp to be converted to a string using a defined format. Import and quickload the package <code>local-time</code> for that. Additionally the <em>controller</em> needs to import <code>:blog-repo</code> so that it has access to <code>blog-entry</code> and the <em>reader</em> accessors.</p>
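<p>The two <code>:format</code> lists can be tried in isolation at the REPL; the resulting strings are exactly the ones the test asserts:</p>

<pre class="lisp"><code>;; Requires the local-time system: (ql:quickload "local-time")
(let ((ts (local-time:universal-to-timestamp
           (encode-universal-time 0 0 12 20 09 2020))))
  (local-time:format-timestring
   nil ts :format '((:day 2) #\Space :long-month #\Space (:year 4)))
  ;; => "20 September 2020"
  (local-time:format-timestring
   nil ts :format '((:day 2) #\- (:month 2) #\- (:year 4))))
;; => "20-09-2020"</code></pre>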

<p>We also need to define another view model class that represents the blog entry to be displayed. Add the following to <em>src/views/blog.lisp</em>:</p>

<pre class="lisp"><code>(defclass blog-post-model ()
  ((name :initform ""
         :type string
         :initarg :name)
   (date :initform ""
         :type string
         :initarg :date)
   (nav-date :initform ""
             :type string
             :initarg :nav-date)
   (text :initform ""
         :type string
         :initarg :text)))</code></pre>

<p>This class is relatively close to the <em>blog-repo</em> class <code>blog-entry</code>. The <em>controller</em> function <code>blog-entry-to-blog-post</code> maps from one to the other. The <em>view</em> has a different responsibility than the <em>blog-repo</em>. For example, the <code>blog-post-model</code> has an additional slot, <code>nav-date</code>. It is used in the 'recents' navigation and must present the blog post create/update date in a different string format than the one shown in the full blog post display. We generally use the <code>string</code> type for the <code>date</code> and <code>nav-date</code> slots here because the instance that controls how something is displayed is the <em>controller</em>.<br/>
So <code>blog-entry-to-blog-post</code> makes a full mapping from a <code>blog-entry</code> to a <code>blog-post-model</code> with everything the <em>view</em> actually needs. With this we make the <em>view</em> a relatively dumb component that just shows what the <em>controller</em> wants. The <em>controller</em> test also defines the date formats to be used. Those formats and the date strings displayed by the <em>view</em> are validated in the <code>answer</code> call. Let's have a look at this in more detail:</p>

<pre class="lisp"><code>(answer (view.blog:render model-arg)
  (progn
    (assert
     (string= "20 September 2020"
              (slot-value (slot-value model-arg 'view.blog::blog-post)
                          'view.blog::date)))
    (assert
     (string= "20-09-2020"
              (slot-value (slot-value model-arg 'view.blog::blog-post)
                          'view.blog::nav-date)))
    *expected-page-title-blog*))</code></pre>

<p>The <code>answer</code> macro captures the function call arguments, so we can give the argument a name and check its values. In our case we want to assert that the date strings have the correct format, which are two different formats; the <em>nav-date</em>, for example, has to be a bit more condensed than the standard <em>date</em> format. Since <code>answer</code> still has to return something, we use <code>progn</code>, which returns its last expression. Since we did not export the slots of <code>blog-view-model</code> and <code>blog-post-model</code>, we use the double colon <code>::</code> to access them. We didn't export those symbols because no one except the <em>view</em> itself needs to access them. This is a bit of a grey area, because we tap into a private area of the <em>model</em> data structures. On the other hand, it is good to control and verify the format of the date strings. So we accept possible test failures when the structure of the <em>model</em> changes.</p>
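<p>As a small, isolated illustration of the double-colon access, using only the class definition from above: the class name <code>blog-view-model</code> is exported, its slot names are not, so <code>slot-value</code> needs the <code>::</code> syntax to name the slot symbol:</p>

<pre class="lisp"><code>;; Sketch: reaching an unexported slot symbol from outside the package.
(let ((model (make-instance 'view.blog:blog-view-model)))
  (slot-value model 'view.blog::blog-post))
;; => NIL (the slot's initform)</code></pre>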

<p>With the last addition the code now compiles, so we can run the new test. It of course fails with:</p>

<pre class="nohighlight"><code> BLOG-CONTROLLER-INDEX in BLOG-CONTROLLER-TESTS 
 [Test blog controller for index which shows the latest blog entry]: 
      Unexpected Error: #&lt;SIMPLE-ERROR #x3020039689ED&gt;
NIL has no slot named CL-SWBYMABEWEB.VIEW.BLOG::DATE..</code></pre>

<p>This is logical, because we still have our cheat in place that creates a <em>view</em> model with hard-coded <code>nil</code> values. Because of that, the mocks don't have any effect.</p>

<p>To make the tests pass we have to add the proper implementation of the <code>make-view-model</code> function in the <em>controller</em> code (see above). Replace the function with this:</p>

<pre class="lisp"><code>(defun make-view-model (the-blog-entry all-posts)
  (make-instance 'blog-view-model
                 :blog-post
                 (blog-entry-to-blog-post the-blog-entry)
                 :all-blog-posts
                 (mapcar #'blog-entry-to-blog-post all-posts)))</code></pre>

<p>This now passes the <em>blog-repo</em> blog entry through the mapping function, for the single <code>blog-post</code> slot as well as for all available blog posts, generated by <code>mapcar</code> for <code>all-blog-posts</code>. Compiling this also removes the warnings we had previously, as the two function arguments are now used. Running the tests again now gives us a nice:</p>

<pre class="nohighlight"><code>Running test BLOG-CONTROLLER-INDEX ....
 Did 4 checks.
    Pass: 4 (100%)
    Skip: 0 ( 0%)
    Fail: 0 ( 0%)</code></pre>

<p><a name="blog-feature_reflection"></a><em>Taking a step back to reflect</em></p>

<p>The MVC blog <em>controller</em> is a relatively complex and central piece of this application. Let's take a step back for a moment and recap what we have done and how we should continue.</p>

<p>The <em>controller</em> uses two collaborators to do its work: the <em>blog-repo</em> and the <em>view</em>. Both are not part of the <em>controller</em> unit and hence must be tested separately. The <em>controller</em>, as the driver of the functionality, wants to control how it talks to the two collaborators. So the <em>controller</em> tests define the interface, which is then implemented in the <em>controller</em> code and in both the <em>blog-repo</em> and the <em>view</em>. For now the collaborators are only implemented as stubs so that the mocking can be applied. Since the <em>controller</em> is the only 'user' of the two collaborating components, it can more or less freely define the interface as it requires. If there were more 'users' there would be more 'pulling' from each of them, so that eventually the interface would be more of a compromise, or would simply have more functionality. We have also decided that the <em>controller</em> controls what the <em>view</em> displays and how. This was done for the date formats that are displayed in the view.</p>

<p>Before we move on to working on the <em>view</em> we should recheck the outer loop test to verify that it still fails. Also now is a good time to register the routing on the HTTP server.</p>

<p><a name="blog-feature_outer-loop-revisit"></a><em>Revisit the outer test loop</em></p>

<p>The registration of the routes on the HTTP server is a missing piece of the full integration. The error the test provokes should get more accurate the closer we get to the end. So we're closing that gap now. To recall: the integration test raises a 404 'Not Found' error on the <em>/blog</em> route. That can be 'fixed' now, because we have implemented the route.</p>

<p>Extend the <em>src/routes.lisp</em> with the following function:</p>

<pre class="lisp"><code>(defun make-routes ()
  (make-hunchentoot-app))</code></pre>

<p>This function must also be exported: <code>(:export #:make-routes)</code>.<br/>
Then in <em>src/main.lisp</em> import this function:</p>

<pre class="lisp"><code>(:import-from #:cl-swbymabeweb.routes
              #:make-routes)</code></pre>

<p>and change the <code>start</code> function to this (partly):</p>

<pre class="lisp"><code>  (unless *server*
    (push (make-routes)
          hunchentoot:*dispatch-table*)
    (setf *server*
          (make-instance 'hunchentoot:easy-acceptor
                         :port port
                         :address address))    
    (hunchentoot:start *server*)))</code></pre>

<p>The <code>make-routes</code> function creates route definitions that can be applied to the Hunchentoot HTTP server's <code>*dispatch-table*</code>. This is a feature of <em>snooze</em>; it can do this for other servers as well.</p>

<p>Running the integration test now will give a different result.</p>

<pre class="nohighlight"><code>&lt;ERROR&gt; [13:20:51] cl-swbymabeweb.routes routes.lisp (blog get text/html) - 
Route error: CL-SWBYMABEWEB.ROUTES::ERROR-TEXT: 
"The value \"Retrieves the latest entry of the blog.\" 
is not of the expected type LIST."</code></pre>

<p>This is a bit odd. What's going on?<br/>
Looking at it more closely, it makes sense. This is in fact great: it shows us that many parts are still missing for a full integration of all components. The text 'Retrieves the latest entry of the blog.' is returned by the function <code>repo-get-latest</code> of the <em>blog-repo</em> facade. Since the function defines no explicit return, it implicitly returns the value of the only form in its body, which here is the docstring. But later, in the controller, the return value of <code>repo-get-latest</code> is expected to be a <code>cons</code> (the error says LIST, but lists are built from <code>cons</code> cells), which it is not. So when trying to access elements of the <code>cons</code> with <code>car</code> or <code>cdr</code> we see this error.<br/>
This tells us that there is still quite a bit of work left. We will later fix the integration test without fully implementing the <em>blog-repo</em>.</p>
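<p>As a minimal illustration of why the docstring ends up as the return value (a sketch, not project code; the stub name is made up): when a string is the only form in a function body, it is simply the last evaluated expression and thus the return value.</p>

<pre class="lisp"><code>;; A stub whose body is only a string returns that string.
(defun repo-stub ()
  "Retrieves the latest entry of the blog.")

;; (repo-stub) evaluates to the string itself, not a cons.
;; (car (repo-stub)) therefore signals a type error, while
;; (car (cons :ok nil)) => :OK works as the controller expects.</code></pre>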

<p><a name="blog-feature_ctrl-update-asd"></a><em>Updating the ASDF system</em></p>

<p>This is also a good time to bring the ASDF system up to date. Add all the new files and library dependencies. Add the <em>src/views/blog.lisp</em> component similarly to how we added the <em>controller</em> component. Also add <em>src/blog-repo.lisp</em>; it should be defined before the <em>view</em> and the <em>controller</em> definitions due to the direction of the dependency. Also add the <em>local-time</em> library dependency.</p>

<p>To check if the system definitions work you can always do these four steps:</p>

<ol>
<li>restart the inferior lisp process.</li>
<li>load the default system (<code>asdf:load-system</code>) and fix missing components if it doesn't compile.</li>
<li>load the test system. Fix missing components if necessary.</li>
<li>test the test system (<code>asdf:test-system</code>).</li>
</ol>
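<p>The ASDF additions described above could look roughly like this. This is only a sketch: the exact module layout and the dependency list are assumptions based on the libraries used in this post, so check the actual project for the real definition.</p>

<pre class="lisp"><code>;; Sketch of the relevant parts of the .asd file (names assumed).
(asdf:defsystem "cl-swbymabeweb"
  :depends-on ("hunchentoot" "snooze" "cl-who" "local-time" "log4cl")
  :components ((:module "src"
                :serial t   ; keep definition order: repo -> view -> controller
                :components ((:file "blog-repo")
                             (:module "views"
                              :components ((:file "blog")))
                             (:module "controllers"
                              :components ((:file "blog")))
                             (:file "routes")
                             (:file "main")))))</code></pre>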

<h5><a name="blog-feature_blog-view"></a>The blog view</h5>

<p>We are free to generate the view representation as we like. We can use any library we want, and could even mix them. What the <em>controller</em> expects the <em>view</em> to deliver is HTML as a string. This is the only contract we have. What are our options to generate HTML? We could use:</p>

<ol>
<li>a templating library. Templating libraries are based on template files which represent pages or smaller page components. A page template is usually composed of smaller component templates. Templates also allow inheritance. Templating libraries usually provide 'language' constructs that allow 'structured programming' (to some extent) in the template, like <em>for</em> loops, <em>if/else</em>, etc. The template library then evaluates the template and expands the language constructs to HTML code:

<ul>
<li><a href="https://mmontone.github.io/djula/" class="link">Djula</a>: This one is close to Python's Django and provides a custom template language.</li>
<li><a href="https://gitlab.common-lisp.net/mraskin/cl-emb" class="link">cl-emb</a>: This is a templating library that allows using Common Lisp code in the template. Its use case is not limited to HTML.</li>
</ul></li>
<li>a DSL that allows writing Lisp code which looks close to HTML constructs. There are a few libraries in Common Lisp that do this. Generally, DSLs are relatively easy to create in Common Lisp using macros:

<ul>
<li><a href="https://github.com/edicl/cl-who" class="link">cl-who</a>: This library has a long history. It is very mature. However, it is limited to HTML 4.</li>
<li><a href="https://github.com/ruricolist/spinneret" class="link">spinneret</a>: This is a younger library that concentrates on HTML 5.</li>
</ul></li>
</ol>

<p>There are a few more options for both variants. If you are curious, have a look at <a href="https://github.com/CodyReichert/awesome-cl#html-generators-and-templates" class="link">awesome-cl</a>.<br/>
We are choosing cl-who for this project, mainly because I like the expressiveness of Lisp code; if I can write HTML this way, all the better. In addition, since it is Lisp code, I get almost immediate feedback about the validity of the code in the editor when compiling a snippet. For larger projects, however, the templating variant may be the better choice because of the separation it provides between the HTML and the backing code, even though the template language weakens this separation. It might also be easier for non-coders to work with HTML template files.</p>

<p><a name="blog-feature_view-test"></a><em>Testing the view</em></p>

<p>If we recall, we had created a package for the <em>view</em> in which we just created a stub of the <code>render</code> function, and we also defined the view <em>model</em> classes that the <em>controller</em> instantiates and fills with data. But we had not created tests for this, because the <code>render</code> function of the <em>view</em> was mocked in the <em>controller</em> test.<br/>
Resuming here, we first create a test. Testing the <em>view</em> is a bit tricky. In the most simple form you can only make string comparisons against the generated HTML. Some frameworks come with sophisticated test functionality that goes far beyond testing the HTML string representation (Apache Wicket is such a candidate). A stopgap solution to allow more convenient testing is to use some form of HTML parser utility wrapped behind a framework facade. But we're not developing a framework here (though that could be a nice project), and no existing Common Lisp web framework has such functionality. So we're stuck with doing some string comparisons.</p>

<p>Let's start with a test package for the <em>view</em>. Create a new buffer/file, add the following and save it as <em>tests/blog-view-test.lisp</em>:</p>

<pre class="lisp"><code>(defpackage :cl-swbymabeweb.blog-view-test
  (:use :cl :fiveam :view.blog)
  (:export #:run!
           #:all-tests
           #:nil))
(in-package :cl-swbymabeweb.blog-view-test)

(def-suite blog-view-tests
  :description "Blog view tests"
  :in cl-swbymabeweb.tests:test-suite)

(in-suite blog-view-tests)</code></pre>

<p>This is nothing new. Now let's create a first test. To keep things simple we start with a test that expects a certain bit of HTML to be generated, like the header page title. But we have to supply an instantiated <em>model</em> object to the <code>render</code> function, so a bit of test setup is needed. This is how it looks:</p>

<pre class="lisp"><code>(defparameter *expected-blog-page-title*
  "Manfred Bergmann | Software Development | Blog")

(defparameter *blog-view-empty-model*
  (make-instance 'blog-view-model
                 :blog-post nil
                 :all-blog-posts nil))

(test blog-view-nil-model-post
  "Test blog view to show empty div when there is no blog post to show."
  (let ((page-source (view.blog:render *blog-view-empty-model*)))
    (is (str:containsp *expected-blog-page-title* page-source))))</code></pre>

<p>This test for now has a single assertion. We only check for the existence of the header page title, which is defined in the parameter <code>*expected-blog-page-title*</code>. The <em>model</em> passed to the <code>render</code> function contains neither a blog entry nor a list of blog entries for the navigation element.<br/>
A proper web framework should provide enough self-tests and building blocks that one can assume an HTML page is structurally correct. We don't do this kind of checking here, to keep things simple. A library that could be used for this is <a href="https://shinmera.github.io/plump/" class="link">Plump</a>, an HTML parser library.</p>

<p>When we run this test it of course fails, because the page title is certainly not included in the <code>render</code> output. In fact, the <code>render</code> output is <code>nil</code>. Let's change that.</p>

<p>The following production code will make the test pass. One addition though: we have to <code>:use</code> the <em>cl-who</em> library in the <em>blog-view</em> package because we're now going to use the DSL this library provides. We also have to <em>quickload</em> this library first and, of course, eventually add it to the <em>.asd</em> file.</p>

<pre class="lisp"><code>(defparameter *page-title* "Manfred Bergmann | Software Development | Blog")

(defun render (view-model)
  (log:debug "Rendering blog view")
  (with-page *page-title*))

(defmacro with-page (title &rest body)
  `(with-html-output-to-string
       (*standard-output* nil :prologue t :indent t)
     (:html
      (:head
       (:title (str ,title))
       (:meta :http-equiv "Content-Type"
              :content "text/html; charset=utf-8"))
      (:body
       ,@body))))</code></pre>

<p>Adding this code makes the test pass. What does it do? First of all, the <code>render</code> function uses a self-made building block, the macro <code>with-page</code>. The macro takes two things: 1) the page <em>title</em>, and 2) additional code nested as the <em>body</em> of the macro. Looking at the macro we see the use of cl-who. The <code>with-html-output-to-string</code> macro allows embedding a <em>body</em> of DSL structures that are similar to HTML tags. The only difference is that instead of XML tags that enclose an element, we use Lisp syntax to do the same thing. So, for example, a <code>:html</code> form can again nest other code in its <em>body</em>, the same way a <code>&lt;html&gt;&lt;/html&gt;</code> XML tag does. Since this is Lisp code it is compiled like any other Lisp code and hence can be validated by the compiler, at least as far as the Lisp macro/function structure goes. The <code>(:body ,@body)</code> allows adding more components to the <code>:body</code>, which represents the <code>&lt;body&gt;&lt;/body&gt;</code> HTML tag. The current use of the <code>with-page</code> macro could also look like this:</p>

<pre class="lisp"><code>(with-page "my page title"
  (:a :href "http://my-host/foo" :class "my-link-class" (str "my-link-label")))</code></pre>

<p>This would be translated to:</p>

<pre class="html"><code>&lt;html&gt;
    &lt;!-- head, title, etc. --&gt;
    &lt;body&gt;
        &lt;a href="http://my-host/foo" class="my-link-class"&gt;my-link-label&lt;/a&gt;
    &lt;/body&gt;
&lt;/html&gt;</code></pre>

<p>With <em>cl-who</em> in combination with Lisp macros it is easy to build pages, or smaller page components, as reusable building blocks that can be nested and composed where needed. This is probably the reason why few, if any, Common Lisp libraries provide this kind of thing out of the box: it's so easy to create from scratch. And after all, depending on what you create and in which context, pre-defined macros and framework building blocks may not represent the domain language you want or need in your application.</p>
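<p>For instance, a small reusable component could be sketched like this (the component name and CSS class are made up for illustration). cl-who's <code>htm</code> operator re-enters the HTML DSL, so such macros compose inside any <code>with-html-output</code> body:</p>

<pre class="lisp"><code>;; A hypothetical reusable navigation-link component.
(defmacro nav-link (href label)
  `(htm
    (:div :class "nav-entry"
          (:a :href ,href (str ,label)))))

;; Used inside `with-page` like any other cl-who form:
;; (with-page "my page title"
;;   (nav-link "http://my-host/foo" "my-link-label"))</code></pre>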

<p><a name="blog-feature_view-roundup"></a><em>Roundup</em></p>

<p>Actually, I'd like to stop here. You've got a glimpse of how generating HTML components using <em>cl-who</em> and macros works, and how the testing can be done. There is of course a lot more work to be done for this application. If you are curious, the full <a href="https://github.com/mdbergmann/cl-swbymabeweb" class="link">project</a> is on GitHub.</p>

<p>But one last thing is missing before we can recap: the integration test is still failing. The view code we just added should suffice to make it pass, but as said above in <a href="#blog-feature_outer-loop-revisit" class="link">Revisit the outer test loop</a>, the <em>blog-repo</em> in its incomplete form returns something unusable. So we have to 'fix' that. Change the <em>blog-repo</em> facade functions to this:</p>

<pre class="lisp"><code>(defun repo-get-latest ()
  "Retrieves the latest entry of the blog."
  (cons :ok nil))

(defun repo-get-all ()
  "Retrieves all available blog posts."
  (cons :ok nil))</code></pre>

<p>This will make the integration test pass, but again, this is a fake. </p>

<pre class="nohighlight"><code>CL-SWBYMABEWEB-TEST&gt; (run! 'handle-blog-index-route)

Running test HANDLE-BLOG-INDEX-ROUTE 
 &lt;INFO&gt; [14:39:31] cl-swbymabeweb main.lisp (start) - Starting server.
::1 - [2020-10-03 14:39:31] "GET /blog HTTP/1.1" 200 305 "-" 
"Dexador/0.9.14 (Clozure Common Lisp Version 1.12  DarwinX8664); Darwin; 19.6.0"
.
 &lt;INFO&gt; [14:39:31] cl-swbymabeweb main.lisp (stop) - Stopping server.
 Did 1 check.
    Pass: 1 (100%)
    Skip: 0 ( 0%)
    Fail: 0 ( 0%)</code></pre>

<p>It suffices for an integration test, because the <em>blog-repo</em> is part of this integration. But of course, when adding more integration tests we would have to come up with something else, where the <em>blog-repo</em> is fully developed. Check out the full code in the GitHub project.</p>

<p>With this integration test passing we have finalized a full vertical slice of a feature implementation. All relevant components were integrated, even if only as much as needed for this feature (or part of a feature). Further outer integration tests (which also represent feature integrations) may extend or change the interface to the added components.</p>

<h4><a name="blog-feature_deployment"></a>Some words on deployment</h4>

<p>There are many ways of deploying a web application in Common Lisp. You could, for example, just open a REPL, load the project using ASDF or Quicklisp (when it's in an appropriate local folder), and run the server starter as we did in the integration test.</p>

<p>Another option is to make a simple wrapper script that could look like this:</p>

<pre class="lisp"><code>(ql:quickload :cl-swbymabeweb)  ;; requires this project to be in a local folder
                                ;; findable by Quicklisp

(defpackage cl-swbymabeweb.app
  (:use :cl :log4cl))
(in-package :cl-swbymabeweb.app)

(log:config :info :sane :daily "logs/app.log" :backup nil)

;; run server here</code></pre>

<p>Just save this file as <code>app.lisp</code> in the root folder of the project.<br/>
Then start your Common Lisp like this: <code>sbcl --load app.lisp</code>.</p>
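<p>The <code>;; run server here</code> placeholder could, for example, start the server and then keep the process alive. This is only a sketch; the exported start function and its keyword arguments are assumptions based on the <code>start</code> function shown earlier in <em>src/main.lisp</em>:</p>

<pre class="lisp"><code>;; Start the server (name and arguments assumed from the earlier `start`
;; function) and block the main thread so `sbcl --load app.lisp` doesn't exit.
(cl-swbymabeweb:start :port 4000 :address "0.0.0.0")
(loop (sleep 60))</code></pre>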

<p>There are also ways to run a remote session using Slynk or Swank, where you can also do remote debugging, etc.</p>

<h3><a name="conclusion"></a>Conclusion</h3>

<p>We've implemented part of a feature of a web application doing a full vertical slice of the application design using an outside-in test-driven approach. While doing that using Common Lisp we've used and looked at test libraries as well as other libraries that help making web applications easier.</p>

<p>But we also didn't talk about many things that are relevant for web applications: how to configure logging in the web server, how to add static routes, how to use sessions and per-session localization of strings, or how to use JavaScript via the awesome <a href="https://common-lisp.net/project/parenscript/reference.html" class="link">Parenscript</a> package, which allows writing JavaScript in Common Lisp. There are other references on the web that address those things. Maybe I will also blog about one of these sometime in the future.</p>

<p>So long, thanks for reading.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Wicket UI in the cluster - the alternative ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Wicket+UI+in+the+cluster+-+the+alternative"></link>
        <updated>2020-07-09T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Wicket+UI+in+the+cluster+-+the+alternative</id>
        <content type="html"><![CDATA[ <p>After the <a href="/blog/Wicket+UI+in+the+Cluster+-+know+how+and+lessons+learned" class="link">first</a> and the <a href="/blog/Wicket+UI+in+the+cluster+-+reflection" class="link">second</a> part.
The third part of clustering with <a href="https://wicket.apache.org" class="link">Apache Wicket</a> is about an alternative.</p>

<p>Let me list the following options when you want to run Wicket clustered:</p>

<p><strong>1. Using a load-balancer that supports HTTP protocol or sticky sessions</strong></p>

<p>With &quot;sticky sessions&quot; I mean that the load-balancer forwards requests to the server that created the session, based on the session id. This requires that the request is decrypted at the load-balancer in order to look at the HTTP headers for the jsessionid. Then the request is either encrypted again when being sent to the server, or it is sent unencrypted. Performance-wise the latter is preferred. Decryption at the LB can work if you have control over the certificates. For a multi-tenant application with a lot of domains or sub-domains this can be a deal-breaker, as it's hardly manageable to deal with the certificates of all tenants on the load-balancer.</p>

<p><strong>2. A stateful TCP load-balancer</strong></p>

<p>This LB creates a session on a MAC or source IP address basis and forwards requests from the same source to the same server. There is no need to decrypt the request. The session on the TCP load-balancer usually has a timeout, which should be kept in sync with the HTTP server session timeout. This variant requires a bit of maintenance on the LB side, and the LB has to keep state for the sessions, which adds complexity to the load-balancer.</p>

<p>Both of those variants usually still require that the session is synchronised between the servers to prepare for the case that one server goes down either wanted or unwanted.</p>

<p><strong>3. A stateless TCP load-balancer</strong></p>

<p>This works when the session is stored in a common place, like a database to which each server has access.
Each read and write of the session is done on the database. As you can imagine, this is very slow. Caching the session on the server for performance reasons is problematic because with a stateless LB each request can theoretically hit a different server, and even a slightly out-of-date session can break the Wicket application.</p>

<p><strong>Now the alternative</strong></p>

<p>The alternative works with a stateless load-balancer. It involves a bit of additional coding. Also you need a session synchronisation mechanism. But it's a lot faster than the database variant.</p>

<p>The idea is that the server that created the session handles all requests related to this session, unless it goes down, of course. With a stateless LB it is likely that a request is forwarded to a server that did not create the session. Even if the session is synchronised across the servers, the synchronisation might be too slow, so stale session data might be used. We can't rely on it. Instead, the server where the request first hits will proxy the request to the server that created the session. This of course requires inter-server communication on an HTTP port, preferably unencrypted.</p>

<p>For that to work, the hostname of the server where the session was created must be stored in the session (or the actual session is wrapped in another object where the additional data is stored). Additionally, when a request hits the server it must check (in the synchronised session object) where the session was created. If 'here' then pass through the request (let me mention Servlet filter-chain), if not 'here', get the hostname from the session object and proxy the request.</p>

<p>The additional unencrypted proxying should be relatively inexpensive. The more servers there are, the more likely it is that a request must be proxied.<br/>
There are a few edge cases that need a bit of attention, like when, immediately after creating the session, a second request (within a second or so) goes to a different server but the session object wasn't synchronised yet.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - Mars Rover Kata Outside-in in Common Lisp ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+Mars+Rover+Kata+Outside-in+in+Common+Lisp"></link>
        <updated>2020-05-03T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+Mars+Rover+Kata+Outside-in+in+Common+Lisp</id>
        <content type="html"><![CDATA[ <p>This implementation of the <a class="link" href="https://kata-log.rocks/mars-rover-kata" target="_blank">Mars Rover Kata</a> in Common Lisp gives an introduction to the Actor library <a class="link" href="https://github.com/mdbergmann/cl-gserver" target="_blank">cl-gserver</a>.</p>
<p>So the Rover is implemented as an Actor, but the design is test-driven. I.e.: what collaborators do we have there and how are they associated?</p>
<p>We will finally have carved out a 'reporting' facility where the rover reports its state to.</p>
<p><iframe src="https://www.youtube.com/embed/vbISgthxugY" width="750" height="420" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ MVC Web Application with Elixir ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/MVC+Web+Application+with+Elixir"></link>
        <updated>2020-02-16T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/MVC+Web+Application+with+Elixir</id>
        <content type="html"><![CDATA[ <p>
    I did explore 
    <a class="link" href="https://elixir-lang.org" target="_blank">Elixir</a>
     in the last half year.
    <br/>
     It's a fantastic language. Relatively young but already mature. It runs on the solid and battle-proven Erlang VM.
</p>
<p>
    Now I thought it is time to have a look at the web framework 
    <a class="link" href="https://www.phoenixframework.org" target="_blank">Phoenix</a>
    .
</p>
<p>
    After reading a few things and trying my way through 
    <a class="link" href="https://pragprog.com/book/phoenix14/programming-phoenix-1-4" target="_blank">Programming Phoenix</a>
     I didn't really understand what's going on underneath this abstraction that Phoenix has built. There seemed to be a lot of magic happening. So I wanted to understand that first.
</p>
<p>Of course a lot of brilliant work has gone into Phoenix. However, for some key components like the web server, the request routing, the template library Phoenix more or less just does the glueing.</p>
<p>
    But for me it was important to understand how the web server is integrated, and how defining routes and handlers work.
    <br/>
     So the result of this exploration was a simple MVC web framework.
</p>
<p>
    It is actually quite easy to develop something simple from scratch. Of course this cannot compete with Phoenix and it should not.
    <br/>
     However for simple web pages this might be fully sufficient and it doesn't require a large technology stack.
</p>
<p>
    So I'd like to go though this step by step while crafting the code as we go. The web application will contain a simple form where I want to put in reader values of my house, like electricity or water. When those are submitted they are transmitted to my 
    <a class="link" href="https://www.openhab.org" target="_blank">openHAB</a>
     system. So you might see the name 'HouseStatUtil' more often. This is the name of the project.
</p>
<p>Those are the components we will have a look at:</p>
<ul>
    <li>the web server</li>
    <li>the request routing and how to define routes</li>
    <li>how to add controllers and views and the model they use</li>
    <li>the HTML rendering</li>
    <li>string resource localisation</li>
</ul>
<p>
    For reference, the complete project is on 
    <a class="link" href="https://github.com/mdbergmann/elixir_house_stat_util" target="_blank">Github</a>
    .
</p>
<h4 id="toc_1">Project setup</h4>
<p>
    You use the usual 
    <code>mix</code>
     tooling to create a new project.
</p>
<p>
    Then we'll need some dependencies (extract from 
    <code>mix.exs</code>
    ):
</p>
<div>
    <pre class="elixir"><code>defp deps do
  [
    {:plug_cowboy, "~&gt; 2.1.0"},
    {:eml, git: "https://github.com/zambal/eml.git"},
    {:gettext, "~&gt; 0.17.1"},
    {:mock, "~&gt; 0.3.4", only: :test}
  ]
end</code></pre>
</div>
<p>
    As you probably know, if you don't specify 
    <code>git:</code>
     in particular 
    <code>mix</code>
     will pull the dependencies from 
    <a class="link" href="https://hex.pm" target="_blank">hex</a>
    . But 
    <code>mix</code>
     can also deal with github projects.
</p>
<ul>
    <li>
        <a class="link" href="https://hexdocs.pm/plug_cowboy/Plug.Cowboy.html" target="_blank">plug_cowboy</a>
        : so 
        <a class="link" href="https://elixirschool.com/en/lessons/specifics/plug/" target="_blank">Plug</a>
         is an Elixir library that makes building web applications easy. Plugs can be seen as plugins. And so is 
        <code>plug_cowboy</code>
         a 'Plug' that bundles the Erlang web server 
        <a class="link" href="https://ninenines.eu" target="_blank">Cowboy</a>
        .
    </li>
    <li>
        <a class="link" href="https://github.com/zambal/eml" target="_blank">Eml</a>
        : is a library for generating HTML in the form of Elixir language constructs, a DSL. But as we will see later, Elixir macros are very powerful (almost as powerful as Common Lisp macros). We will build our own HTML DSL abstraction, which should make it easy to use any backend library to generate HTML.
    </li>
    <li>
        <a class="link" href="https://hexdocs.pm/gettext/Gettext.html" target="_blank">gettext</a>
        : is the default localization framework in Elixir. We will see later how that works.
    </li>
    <li>
        <a class="link" href="https://github.com/jjh42/mock" target="_blank">mock</a>
        : since we do Test-Driven Development (TDD) of course we need a mocking framework. A library for unit tests is not necessary. This is part of the core Elixir.
    </li>
</ul>
<h4 id="toc_2">The web server</h4>
<p>
    <a class="link" href="https://ninenines.eu" target="_blank">Cowboy</a>
     is probably the most well known and used web server in the Erlang world. But we don't have to deal with the details that much.
</p>
<p>We have to tell the Erlang runtime to start Cowboy as a separate 'application' in the VM. The term 'application' is a bit misleading. You should see this more as a module or component.</p>
<p>Since in Erlang most things are actor based, and you can have a hierarchy and eventually a tree of actors that are spawned in an 'application' (or component) you have to at least make sure that those components are up and running before you use them.</p>
<p>
    So we'll have to add this to 
    <code>application.ex</code>
     which is the application entry point and should be inside the 'lib/' folder.
</p>
<p>This is how it looks for my application:</p>
<div>
    <pre class="elixir"><code>require Logger

def start(_type, _args) do
  children = [
    Plug.Cowboy.child_spec(
      scheme: :http,
      plug: HouseStatUtil.Router,
      options: [port: application_port()])
  ]

  opts = [strategy: :one_for_one, name: HouseStatUtil.Supervisor]
  pid = Supervisor.start_link(children, opts)
  Logger.info("Server started")

  pid
end

defp application_port do
  System.get_env()
  |&gt; Map.get("PORT", "4001")
  |&gt; String.to_integer()
end</code></pre>
</div>
<p>
    The first thing to note is that we use the Elixir 
    <code>Logger</code>
     library. So we need to 
    <code>require</code>
     it. (As a side note, usually you do use 
    <code>import</code>
     or 
    <code>alias</code>
     to import other modules. But 
    <code>require</code>
     is needed when the component defines macros.)
</p>
<p>
    The 
    <code>start</code>
     function is called by the runtime. Now we have to define the 'children' processes we want to have started. Here we define the 
    <code>Plug.Cowboy</code>
     as a child.
</p>
<p>
    The line 
    <code>plug: HouseStatUtil.Router</code>
     defines the request router. We'll have a look at this later.
</p>
<p>
    <code>Supervisor.start_link(children, opts)</code>
     will then start the children actors/processes.
</p>
<h4 id="toc_3">Request routing and how to define routes</h4>
<p>
    The 
    <code>HouseStatUtil.Router</code>
     is the next step. We need to tell Cowboy how to deal with requests that come in. In most web applications you have to define some routing, or define page beans that are mapped to some request paths.
</p>
<p>In Elixir this is pretty slick. The language allows calling functions without parentheses, like so:</p>
<div>
    <pre class="elixir"><code>get "/" do
    # do something
end</code></pre>
</div>
<p>
    This could be written with parentheses as well: 
    <code>get("/") do</code>
</p>
<p>Here is the complete router module:</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.Router do
  use Plug.Router

  alias HouseStatUtil.ViewController.ReaderPageController
  alias HouseStatUtil.ViewController.ReaderSubmitPageController

  plug Plug.Logger

  plug Plug.Parsers,
    parsers: [:urlencoded],
    pass: ["text/*"]

  plug :match
  plug :dispatch

  get "/" do
    {status, body} = ReaderPageController.get(conn.params)
    send_resp(conn, status, body)    
  end

  post "/submit_readers" do
    IO.inspect conn.params
    {status, body} = ReaderSubmitPageController.post(conn.params)
    send_resp(conn, status, body)
  end

  match _ do
    send_resp(conn, 404, "Destination not found!")
  end
end</code></pre>
</div>
<p>Let's go through it.</p>
<p>
    <code>use Plug.Router</code>
     is the key element here. This will make this module a router. This also specifies the request types 
    <code>get</code>
    , 
    <code>post</code>
     and so on.
</p>
<p>
    <code>conn</code>
     is a connection structure which has all the data about the connection and the request, like the header and query parameters and so on. 
    <code>conn.params</code>
     is a combination of payload and query parameters.
</p>
<p>
    Each route definition must send a response to the client. This is done with 
    <code>send_resp/3</code>
    . It takes three parameters: the connection structure, a status code, and a response body (the payload).
</p>
<p>
    All the 
    <code>plug</code>
     definitions are executed as a chain for each request. This means every request is URL decoded (the request path at least) and must have a content-type of 'text/*'.
</p>
<p>
    <code>plug :match</code>
     does the matching on the paths. The last 
    <code>match _ do</code>
     is a 'catch all' match which here sends a 404 error back to the client.
</p>
<p>As you can see we have two routes. Each route is handled by a view controller. The only thing we pass to the view controller are the connection parameters.</p>
<h5 id="toc_4">Serving static content</h5>
<p>
    Most web sites need to serve static content like JavaScript, CSS or images. That is no problem. The 
    <a class="link" href="https://hexdocs.pm/plug/Plug.Static.html" target="_blank">Plug.Static</a>
     does this. As with the other plugs you just define this, maybe before 
    <code>plug :match</code>
     like so:
</p>
<div>
    <pre class="elixir"><code>plug Plug.Static, from: "priv/static"</code></pre>
</div>
<p>The 'priv' folder referenced by this relative path sits in your project folder at the same level as the 'lib' and 'test' folders. You can then add subfolders to 'priv/static' for images, CSS and JavaScript and use the appropriate paths in your HTML. For an image this would be:</p>
<div>
    <pre class="html"><code>
&lt;img src="images/foo.jpg" alt=""/&gt;
    </code></pre>
</div>
<h4 id="toc_5">Testing the router</h4>
<p>
    Of course the router can be tested, and the router test can nicely act as an integration test.
    <br/>
     Add one route test after another. Each will fail until you have implemented and integrated the rest of the components (view controller and view), but it acts as a north star: when it passes you can be sure that all components are integrated properly.
</p>
<p>Here is the test code of the router:</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.RouterTest do
  use ExUnit.Case, async: true
  use Plug.Test

  alias HouseStatUtil.Router

  @opts HouseStatUtil.Router.init([])

  test "get on '/'" do
    conn = :get
    |&gt; conn("/")
    |&gt; Router.call(@opts)

    assert conn.state == :sent
    assert conn.status == 200
    assert String.contains?(conn.resp_body, "Submit values to openHAB")
  end

  test "post on /submit_readers" do
    conn = :post
    |&gt; conn("/submit_readers")
    |&gt; Router.call(@opts)

    assert conn.state == :sent
    assert conn.status == 200
  end
end</code></pre>
</div>
<p>
    There is a bit of magic that is being done by the 
    <code>Plug.Test</code>
    . It allows you to specify the 
    <code>:get</code>
     and 
    <code>:post</code>
     requests as in the tests.
</p>
<p>
    After the 
    <code>Router.call(@opts)</code>
     has been made we can inspect the 
    <code>conn</code>
     structure and assert on various things. For the 
    <code>conn.resp_body</code>
     we only have a chance to assert on some existing string in the HTML output.
</p>
<p>
    This can be done better. A good example is 
    <a class="link" href="https://wicket.apache.org" target="_blank">Apache Wicket</a>
    , a Java based web framework that has excellent testing capabilities. But the situation is similar on most of the MVC based frameworks. Since they are not component based the testing capabilities are somewhat limited.
</p>
<p>Nonetheless we'll try to make it as good as possible.</p>
<p>The next step is the view controllers.</p>
<h4 id="toc_6">How to define controllers and views and the model</h4>
<h4 id="toc_7">The view controller</h4>
<p>
    As you have seen above, each route uses its own view controller. I thought that a view controller can handle 
    <code>get</code>
     or 
    <code>post</code>
     requests on a route, so that handling multiple 'views' related to a path can be combined in one view controller. But you can structure this as you wish; there is no rule.
</p>
<p>
    As a first step I defined a 
    <code>behaviour</code>
     for a view controller. It looks like this:
</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.ViewController.Controller do
  @callback get(params :: %{binary() =&gt; any()}) :: {integer(), binary()}
  @callback post(params :: %{binary() =&gt; any()}) :: {integer(), binary()}
end</code></pre>
</div>
<p>
    It defines two functions whose parameters are 'spec'ed as a map of strings -&gt; anything (
    <code>binary()</code>
     is an Erlang type for string-like data; I could also use an Elixir 
    <code>String.t()</code>
     here). Those functions return a tuple of an integer (the status) and again a string (the response body).
</p>
<p>
    I thought that the controller should actually define the status since it has to deal with the logic to render the view and process the form parameters, maybe call some backend or collaborator. So if anything goes wrong there the controller knows it.
    <br/>
     This is clearly a debatable design decision. We could argue that the controller should not necessarily know about HTTP status codes.
</p>
<p>Here is the source for the controller:</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.ViewController.ReaderPageController do
  @behaviour HouseStatUtil.ViewController.Controller

  alias HouseStatUtil.ViewController.Controller
  alias HouseStatUtil.View.ReaderEntryUI
  import HouseStatUtil.View.ReaderPageView

  @default_readers [
    %ReaderEntryUI{
      tag: :elec,
      display_name: "Electricity Reader"
    },
    %ReaderEntryUI{
      tag: :water,
      display_name: "Water Reader"
    }
  ]

  @impl Controller
  def get(_params) do
    render_result = render(
      %{
        :reader_inputs =&gt; @default_readers
      }
    )

    case render_result do
      {:ok, body} -&gt; {200, body}
      {:error, body} -&gt; {500, body}
    end
  end

  @impl Controller
  def post(_params), do: {400, ""}
end</code></pre>
</div>
<p>
    You see that this controller implements the 
    <code>behaviour</code>
     specification in the 
    <code>get</code>
     and 
    <code>post</code>
     functions. This can optionally be marked with 
    <code>@impl</code>
     to make it more visible that those are the implemented behaviours.
    <br/>
     A 
    <code>post</code>
     is not allowed for this controller and just returns error 400.
</p>
<p>
    The 
    <code>get</code>
     function is the important thing here. The response body for 
    <code>get</code>
     is generated by the view's 
    <code>render/1</code>
     function. So we have a view definition here imported as 
    <code>ReaderPageView</code>
     which specifies a 
    <code>render/1</code>
     function.
</p>
<p>
    The view's 
    <code>render/1</code>
     function takes a model (a map) where we here just specify some 
    <code>:reader_inputs</code>
     definitions. Those are later rendered as a table with checkbox, label and text field.
</p>
<p>
    The 
    <code>render/1</code>
     function returns a tuple of 
    <code>{[ok|error], body}</code>
    . In case of 
    <code>:ok</code>
     we return a success response (200) with the rendered body.
</p>
<p>So we already have the model in play here, used by both controller and view. In this case the controller creates the model that the view uses to render.</p>
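<p>The <code>ReaderEntryUI</code> entries that make up this model are plain structs. A minimal sketch of the module (field names taken from their usage above; the real module may carry more fields):</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.View.ReaderEntryUI do
  # One entry per reader shown on the page.
  defstruct tag: nil, display_name: ""
end</code></pre>
</div>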
<h4 id="toc_8">Generating HTML in the controller</h4>
<p>For simple responses it's not absolutely necessary to create a view. The controller can easily generate simple HTML (in the way we describe later) and just return it. However, this should stay simple and short so as not to clutter the controller source code. After all, it's the view's responsibility to do that.</p>
<h4 id="toc_9">A view controller with submit</h4>
<p>
    To support a submit you certainly have to implement the 
    <code>post</code>
     function. The 
    <code>post</code>
     function in the controller will receive the form parameters as a map. This is what it looks like:
</p>
<div>
  <pre class="elixir"><code>%{
  "reader_value_chip" =&gt; "",
  "reader_value_elec" =&gt; "17917.3",
  "reader_value_water" =&gt; "",
  "selected_elec" =&gt; "on"
}</code></pre>
</div>
<p>The keys of the map are the 'name' attributes of the form components.</p>
<p>Since we only want to send selected reader values to openHAB we have to filter the form parameter map for those that were selected, which here is only the electricity reader ('reader_value_elec').</p>
<p>
    Here is the source of the 'submit_readers' 
    <code>post</code>
     controller handler:
</p>
<div>
  <pre class="elixir"><code>def post(form_data) do
  Logger.debug("Got form data: #{inspect form_data}")

  post_results = form_data
  |&gt; form_data_to_reader_values()
  |&gt; post_reader_values()

  Logger.debug("Have results: #{inspect post_results}")

  post_send_status_tuple(post_results)
  |&gt; create_response
end</code></pre>
</div>
<p>
    More sophisticated frameworks like Phoenix do some pre-processing and deliver the form parameters in pre-defined or standardised structure types.
    <br/>
     We don't have that, so there might be a bit of manual parsing required. But we're developers, right?
</p>
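<p>To illustrate that manual parsing: a helper like <code>form_data_to_reader_values/1</code> from the pipeline above could filter the map for the selected readers roughly like this (a sketch only; the real implementation also builds <code>ReaderValue</code> structs):</p>
<div>
  <pre class="elixir"><code># Sketch: collect the values of readers whose "selected_&lt;tag&gt;" checkbox was "on".
defp form_data_to_reader_values(form_data) do
  form_data
  |&gt; Enum.filter(fn {key, value} -&gt;
    String.starts_with?(key, "selected_") and value == "on"
  end)
  |&gt; Enum.map(fn {"selected_" &lt;&gt; tag, _} -&gt;
    {tag, Map.get(form_data, "reader_value_" &lt;&gt; tag)}
  end)
end</code></pre>
</div>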
<h4 id="toc_10">Testing the controller</h4>
<p>
    Since the controller is just a simple module it should be easy to test it. Of course it depends a bit on the dependencies of your controller if this is more or less easy. At least the controller depends on a view component where a 
    <code>render/1</code>
     function is called with some model.
</p>
<p>But the controller test shouldn't test the rendering of the view. We basically just test a bidirectional pass-through here: one direction is the generated model going to the view's render function, the other is the view's render result being mapped to a controller result.</p>
<p>To avoid actually rendering the view in the controller test, we can mock the view's render function.</p>
<p>
    In my case here I have a trivial test for the 
    <code>ReaderPageController</code>
     which just should render the form and doesn't require mocking (we do some mocking later).
</p>
<div>
  <pre class="elixir"><code>
defmodule HouseStatUtil.ViewController.ReaderPageControllerTest do
  use ExUnit.Case
      
  alias HouseStatUtil.ViewController.ReaderPageController
      
  test "handle GET" do
    assert {200, _} = ReaderPageController.get(%{})
  end

  test "handle POST returns error" do
    assert {400, _} = ReaderPageController.post(%{})
  end
end</code></pre>
</div>
<p>
    The 
    <code>get</code>
     test just delivers an empty model to the controller, which effectively means that no form components are rendered except the submit button.
    <br/>
     The 
    <code>post</code>
     is not supported on this controller and hence should return a 400 error.
</p>
<h4 id="toc_11">Mocking out collaborators</h4>
<p>
    The situation is a bit more difficult for the submit controller 
    <code>ReaderSubmitPageController</code>
    . This controller actually sends the entered and parsed reader results to the openHAB system via a REST interface. So the submit controller has a collaborator called 
    <code>OpenHab.RestInserter</code>
    . This component uses the 
    <a class="link" href="https://github.com/edgurgel/httpoison" target="_blank">HTTPoison</a>
     HTTP client library to submit the values via REST.
    <br/>
     I don't want to pull in those dependencies in the controller test, so this is a good case to mock the 
    <code>RestInserter</code>
     module.
</p>
<p>
    The first thing we have to do is 
    <code>import Mock</code>
     to have the defined functions available in the controller test.
</p>
<p>As an example I have a success test case and an error test case to show how the mocking works.</p>
<p>The tests work on this pre-defined data:</p>
<div>
  <pre class="elixir"><code>@reader_data %{
  "reader_value_chip" =&gt; "",
  "reader_value_elec" =&gt; "1123.6",
  "reader_value_water" =&gt; "4567",
  "selected_elec" =&gt; "on",
  "selected_water" =&gt; "on"
}
@expected_elec_reader_value %ReaderValue{
  id: "ElecReaderStateInput",
  value: 1123.6,
  base_url: @openhab_url
}
@expected_water_reader_value %ReaderValue{
  id: "WaterReaderStateInput",
  value: 4567.0,
  base_url: @openhab_url
}</code>
  </pre>
</div>
<p>
    This defines submitted reader form data where reader values for water and electricity were entered and selected. So we expect that the 
    <code>RestInserter</code>
     function is called with the 
    <code>@expected_elec_reader_value</code>
     and 
    <code>@expected_water_reader_value</code>
    .
</p>
<h5>A success case</h5>
<div>
  <pre class="elixir"><code>test "handle POST - success - with reader selection" do
  with_mock RestInserter,
    [post: fn _reader -&gt; {:ok, ""} end] do
      
    assert {200, _} = ReaderSubmitPageController.post(@reader_data)
      
    assert called RestInserter.post(@expected_elec_reader_value)
    assert called RestInserter.post(@expected_water_reader_value)
  end
end</code></pre>
</div>
<p>
    The key part here is the 
    <code>with_mock </code>
    . The module to be mocked is the 
    <code>RestInserter</code>
    .
    <br/>
     The line 
    <code>[post: fn _reader -&gt; {:ok, ""} end]</code>
     defines the function to be mocked, which here is the 
    <code>post/1</code>
     function of 
    <code>RestInserter</code>
    . We define the mocked function to return 
    <code>{:ok, ""}</code>
    , which simulates a 'good' case. Within the 
    <code>do end</code>
     we eventually call the controller's post function with the pre-defined submitted form data that normally would come in via the Cowboy plug.
</p>
<p>
    Then we want to assert that 
    <code>RestInserter</code>
    's 
    <code>post/1</code>
     function has been called twice with both the expected electricity reader value and the expected water reader value.
</p>
<h5>A failure case</h5>
<div>
  <pre class="elixir"><code>test "handle POST - with reader selection - one error on submit" do
  with_mock RestInserter,
    [post: fn reader -&gt;
      case reader.id do
        "ElecReaderStateInput" -&gt; {:ok, ""}
        "WaterReaderStateInput" -&gt; {:error, "Error on submitting water reader!"}
      end
    end] do

    {500, err_msg} = ReaderSubmitPageController.post(@reader_data)
    assert String.contains?(err_msg, "Error on submitting water reader!")

    assert called RestInserter.post(@expected_elec_reader_value)
    assert called RestInserter.post(@expected_water_reader_value)
  end
end</code></pre>
</div>
<p>
    The failure test case is a bit more complex. Based on the reader value data that the 
    <code>RestInserter</code>
     is called with, we decide that the mock should return success for the electricity reader but fail for the water reader.
</p>
<p>
    Now, when calling the controller's post function we expect it to return an internal error (500) with the error message that we defined the 
    <code>RestInserter</code>
     to return with.
</p>
<p>
    And of course we also assert that the 
    <code>RestInserter</code>
     was called twice.
</p>
<p>Still pretty simple, isn't it?</p>
<h4 id="toc_12">The view</h4>
<p>The view is responsible for rendering the HTML and converting it to a string that is passed back to the controller.</p>
<p>As for the controller, we define a behaviour for this:</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.View.View do
  @type string_result :: binary()

  @callback render(
    assigns :: %{binary() =&gt; any()}
  ) :: {:ok, string_result()} | {:error, string_result()}
end</code></pre>
</div>
<p>
    This behaviour defines the 
    <code>render/1</code>
     function along with input and output types. Erlang and Elixir are not statically typed, but you can define types which are verified with Dialyzer after compilation.
</p>
<p>
    So the input for the 
    <code>render/1</code>
     function defines 
    <code>assigns</code>
     which is a map of string -&gt; anything entries. This map represents the model to be rendered.
    <br/>
     The result of 
    <code>render/1</code>
     is a tuple of either 
    <code>{:ok, string}</code>
     or 
    <code>{:error, string}</code>
     where the 'string' is the rendered HTML.
    <br/>
     This is the contract for the render function.
</p>
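<p>A view module then only has to implement <code>render/1</code> to satisfy this contract. A trivial, hypothetical sketch (the real views build their HTML with the DSL described below):</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.View.HelloView do
  @behaviour HouseStatUtil.View.View

  @impl HouseStatUtil.View.View
  def render(_assigns) do
    # Return the rendered HTML string wrapped in the contract's :ok tuple.
    {:ok, "&lt;h2&gt;Hello&lt;/h2&gt;"}
  end
end</code></pre>
</div>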
<h4 id="toc_13">Testing the view</h4>
<p>
    Testing the view is even simpler than the controller, because it is less likely that some collaborator must be mocked or faked here.
    <br/>
     As said earlier, classic MVC frameworks, also Phoenix, ASP MVC or 
    <a class="link" href="https://www.playframework.com" target="_blank">Play</a>
     mostly only allow testing rendered views for the existence of certain strings.
    <br/>
     This is different in 
    <a class="link" href="https://wicket.apache.org" target="_blank">Wicket</a>
    : Wicket is component based and keeps an abstract view representation in memory, where it is possible to test for the existence of components and certain model values rather than strings in the rendered output.
</p>
<p>Anyhow, here is an example of a simple test case that checks a heading in the rendered output:</p>
<div>
    <pre class="elixir"><code>test "has form header" do
  {render_result, render_string} = render()

  assert render_result == :ok
  assert String.contains?(
    render_string,
    h2 do "Submit values to openHAB" end |&gt; render_to_string()
  )
end</code></pre>
</div>
<p>
    As you can see the 
    <code>render/1</code>
     function is called without a model. This will not render the form components, but certain other things that I know should be part of the HTML string. So we can check for them using a 
    <code>String.contains?</code>
    .
</p>
<p>
    You might realise that I've used some constructs that I will explain in the next chapter. For the string comparison I create an 
    <code>h2</code>
     HTML tag the same way as the view creates it, and check that it is part of the rendered view.
</p>
<p>Here is another test case that checks for the rendered empty form:</p>
<div>
    <pre class="elixir"><code>test "Render form components, empty reader inputs" do
  {render_result, render_string} = render()

  assert String.contains?(
    render_string,
    form action: "/submit_readers", method: "post" do
      input type: "submit", value: "Submit"
    end |&gt; render_to_string
  )
end</code></pre>
</div>
<p>The empty form, which contains the submit button only, is created in the test and expected to be part of the rendered view. Similarly we could pass in a proper model so that the reader value entry text fields and the rest are rendered as well.</p>
<p>Creating those HTML tags using Elixir language constructs is pretty slick, isn't it? I'll talk about this now.</p>
<h4 id="toc_14">How to do the HTML rendering</h4>
<p>
    Let me start with this. I know Phoenix uses 
    <a class="link" href="https://hexdocs.pm/eex/EEx.html" target="_blank">EEx</a>
    , the default templating library of Elixir (EEx stands for 'Embedded Elixir'). But, I do prefer (for this little project at least) to create HTML content in Elixir source code as language constructs, a DSL.
</p>
<p>Taking the form example from above I want to create HTML like this:</p>
<div>
    <pre><code>form action: "/submit_readers", method: "post" do
  input type: "checkbox", name: "selected_" &lt;&gt; to_string(reader.tag)
  input type: "submit", value: "Submit"
end</code></pre>
</div>
<p>... and so forth. This is pretty cool, and it is plain Elixir syntax.</p>
<h4 id="toc_15">Using a HTML DSL to abstract HTML generation</h4>
<p>I want to stay flexible about which backend generates the HTML. With only a few macros we can create our own DSL that acts as a frontend and lets us write HTML with Elixir language constructs.</p>
<p>
    This became a blog post of its own. Read about how to create an HTML DSL with Elixir 
    <a class="link" href="../blog?title=Creating+a+HTML+domain+language+with+Elixir+using+macros" target="_blank">here</a>
    .
</p>
<h4 id="toc_17">Localisation</h4>
<p>
    So the controller, view and HTML generation are quite different from how Phoenix does it. The localisation, though, is similar: both just use the 
    <a class="link" href="https://hexdocs.pm/gettext/Gettext.html" target="_blank">gettext</a>
     module of Elixir.
</p>
<p>
    The way this works is pretty simple. You just create a module in your sources that 'uses' 
    <code>Gettext</code>
    .
</p>
<div>
    <pre class="elixir"><code>defmodule HouseStatUtil.Gettext do
  use Gettext, otp_app: :elixir_house_stat_util
end</code></pre>
</div>
<p>
    This new module acts as a gettext wrapper module for your project. Import it wherever you want to use one of the gettext functions: 
    <code>gettext/1</code>
    , 
    <code>ngettext/3</code>
    , 
    <code>dgettext/2</code>
    . For example, 
    <code>gettext("some key")</code>
     searches for a string key of "some key" in the localisation files.
    <br/>
     The localisation files must be created using the 
    <code>mix</code>
     tool.
</p>
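<p>Used from a view module this might look like the following (assuming the wrapper module from above):</p>
<div>
    <pre class="elixir"><code>import HouseStatUtil.Gettext

# Looks up the key in the localisation files for the current locale;
# if no translation exists, gettext falls back to the key itself.
gettext("Submit values to openHAB")</code></pre>
</div>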
<p>
    So the process is to use the gettext function in your code where needed and then call 
    <code>mix gettext.extract</code>
     which then extracts the gettext keys used in the source code into localisation resource files.
    <br/>
     There is a lot more info on the gettext web page. Check it out.
</p>
<h4 id="toc_18">Outlook and recap</h4>
<p>
    Doing a simple web application framework from scratch is quite easy. If you want to do more by hand and have more control over how things work, this seems a viable way. However, the larger the web application gets, the more concepts you have to carve out that would eventually compete with Phoenix. At that point it might be worth using Phoenix right away. In a professional context I would use Phoenix anyway, because that project has gone through the major headaches already and is battle proven.
    <br/>
     Nonetheless this was a nice experience and exploration.
</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Creating a HTML domain language in Elixir with macros ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Creating+a+HTML+domain+language+in+Elixir+with+macros"></link>
        <updated>2020-02-15T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Creating+a+HTML+domain+language+in+Elixir+with+macros</id>
        <content type="html"><![CDATA[ <p>In this post we'll do a bit of exploration with <a class="link" href="https://elixir-lang.org" target="_blank">Elixir</a> macros and create our own little HTML DSL that will be part of a larger exploration project that develops a simple MVC based web framework.</p>
<p>This DSL should have a frontend and a backend that actually generates the HTML representation. For now it uses <a class="link" href="https://github.com/zambal/eml" target="_blank">Eml</a> to generate the HTML representation and the to_string conversion.<br /> However, it would be possible to also create an implementation that uses <a class="link" href="https://hexdocs.pm/eex/EEx.html" target="_blank">EEx</a> as a backend. And we could switch the backend without the API user having to change their code.</p>
<p>So here is what we have to do to create a HTML DSL.</p>
<p>First we need a collection of tags. I have hardcoded them into a list:</p>
<div>
<pre class="elixir"><code>  @tags [:html, :head, :title, :base, :link, :meta, :style,
         :script, :noscript, :body, :div, :span, :article, ...]</code></pre>
</div>
<p>Then I want to allow defining tags in two styles: a one-liner style, and a style with a multi-line body to express multiple child elements.</p>
<div>
<pre class="elixir"><code># one-liner
span id: "1", class: "span-class", do: "my span text"

# multi-liner
div id: "1", class: "div-class" do
  span do: "my span text"
  span do: "my second text"
end</code></pre>
</div>
<p>We need two macros for this. The <code>do:</code> in the one-liner is seen just as an attribute to the macro, so we have to strip out the <code>do:</code> attribute and use it as the body. The macro for this looks like this:</p>
<div>
<pre class="elixir"><code>  defmacro tag(name, attrs \\ []) do
    {inner, attrs} = Keyword.pop(attrs, :do)
    quote do: HouseStatUtil.HTML.tag(unquote(name),
                                     unquote(attrs), do: unquote(inner))
  end</code></pre>
</div>
<p>First we extract the value for the <code>:do</code> key from the <code>attrs</code> list, and then pass the <code>name</code>, the remaining <code>attrs</code> and the extracted body as <code>inner</code> to the actual macro, which does the real work:</p>
<div>
<pre class="elixir"><code>  defmacro tag(name, attrs, do: inner) do
    parsed_inner = parse_inner_content(inner)
    
    quote do
      %E{tag: unquote(name),
         attrs: Enum.into(unquote(attrs), %{}),
         content: unquote(parsed_inner)}
    end
  end

  defp parse_inner_content({:__block__, _, items}), do: items
  defp parse_inner_content(inner), do: inner</code></pre>
</div>
<p>Here we get the first glimpse of Eml (the <code>%E{}</code> in there is an Eml structure type to create HTML tags). The helper function differentiates between having an AST as the inner block or non-AST elements. But I don't want to go into more detail here.<br /> Instead I recommend reading the book <a class="link" href="https://pragprog.com/book/cmelixir/metaprogramming-elixir" target="_blank">Metaprogramming Elixir</a> by Chris McCord, which deals a lot with macros and explains how they work.</p>
<p>But something is still missing. We now have a <code>tag</code> macro. With this macro we can create HTML tags like this:</p>
<div>
<pre class="elixir"><code>tag "span", id: "1", class: "class", do: "foo"</code></pre>
</div>
<p>But that's not yet what we want. One step is missing. We have to create macros for each of the defined HTML tags. Remember the list of tags from above. Now we take this list and create macros from the atoms in the list like so:</p>
<div>
<pre class="elixir"><code>for tag <- @tags do
  defmacro unquote(tag)(attrs, do: inner) do
    tag = unquote(tag)
    quote do: HouseStatUtil.HTML.tag(unquote(tag), unquote(attrs), do: unquote(inner))
  end
 
  defmacro unquote(tag)(attrs \\ []) do
    tag = unquote(tag)
    quote do: HouseStatUtil.HTML.tag(unquote(tag), unquote(attrs))
  end
end
</code></pre>
</div>
<p>This creates three macros for each tag. I.e. for <code>span</code> it creates: <code>span/0</code>, <code>span/1</code> and <code>span/2</code>. The first two are because the <code>attrs</code> are optional but Elixir creates two function signatures for it. The third is a version that has a <code>do</code> block.</p>
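<p>For <code>span</code> the generated arities can then be used like this (a quick sketch):</p>
<div>
<pre class="elixir"><code>span                                 # span/0 -- empty tag, no attributes
span id: "1", class: "span-class"    # span/1 -- attributes only
span id: "1" do "my span text" end   # span/2 -- with a do block</code></pre>
</div>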
<p>With all this put together we can create HTML as Elixir language syntax. Checkout the full <a class="link" href="https://github.com/mdbergmann/elixir_house_stat_util/blob/master/lib/house_stat_util/html.ex" target="_blank">module source</a> in the github repo.</p>
<h4 id="toc_16">Testing the DSL</h4>
<p>Of course we test this. This is a test case for a one-liner tag:</p>
<div>
<pre class="elixir"><code>  test "single element with attributes" do
    elem = input(id: "some-id", name: "some-name", value: "some-value")
    |&gt; render_to_string

    IO.inspect elem

    assert String.starts_with?(elem, "&lt;input")
    assert String.contains?(elem, ~s(id="some-id"))
    assert String.contains?(elem, ~s(name="some-name"))
    assert String.contains?(elem, ~s(value="some-value"))
    assert String.ends_with?(elem, "/&gt;")
  end</code></pre>
</div>
<p>This should be backend agnostic. So no matter which backend generated the HTML we want to see the test pass.</p>
<p>Here is a test case with inner tags:</p>
<div>
<pre class="elixir"><code>  test "multiple sub elements - container" do
    html_elem = html class: "foo" do
      head
      body class: "bar"
    end
    |&gt; render_to_string

    IO.inspect html_elem

    assert String.ends_with?(html_elem, 
      ~s())
  end</code></pre>
</div>
<p>The source file has more tests, but that should suffice as examples.</p>
<p>That was it. Thanks for reading.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - Game of Life in Common Lisp ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+Game+of+Life+in+Common+Lisp"></link>
        <updated>2019-07-01T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+Game+of+Life+in+Common+Lisp</id>
        <content type="html"><![CDATA[ <p>This time it's the Game of Life in Common Lisp.</p>
<p>Since I've tried out <a class="link" href="https://clojure.org" target="_blank">Clojure</a> (see the episode on <a class="link" href="../blog?title=Game%2Bof%2BLife%2Bwith%2BTDD%2Bin%2BClojure%2Band%2BEmacs" target="_blank">YouTube</a>) I've discovered a whole new world. The world of <a class="link" href="https://common-lisp.net" target="_blank">Common Lisp</a>.<br />It's been around for so long and I really don't know why I haven't looked at this before.</p>
<p>In my attempt to bring TDD closer to developers (to build better-designed software with fewer defects) I've made a TDD session in <a class="link" href="http://www.growing-object-oriented-software.com" target="_blank">Outside-In</a> style with Common Lisp.<br />Outside-In can use 'classicist' TDD or 'London style' TDD, where more is done in the red phase of 'red-green-refactor' and mocking is used to carve out design.</p>
<p>Naturally, the Common Lisp coding is done in the awesome <a class="link" href="https://www.gnu.org/software/emacs/" target="_blank">Emacs</a> editor with the <a class="link" href="https://github.com/joaotavora/sly" target="_blank">Sly</a>&nbsp;integrated development environment.</p>
<p>Common Lisp geeks will certainly spot some deficiencies in my use of the language, so any tips are welcome.<br />But the purpose is to show that this development style is certainly possible in Common Lisp, because CL allows developing with a short feedback loop.</p>
<p>&nbsp;<iframe src="https://www.youtube.com/embed/-7QRrUpWR34" width="750" height="420" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - classicist vs. London Style ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+classicist+vs.+London+Style"></link>
        <updated>2019-06-27T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+classicist+vs.+London+Style</id>
        <content type="html"><![CDATA[ <p>OK, I don't want/need to explain TDD.</p>
<p>As for 'Outside-in', it is a development approach where you start developing at the boundary of a system on a use-case basis.<br />This can be a web service, a web page, a CLI interface or something else.<br />You could say that it's a vertical slice through the system where you add the behavior for the use-case.</p>
<p>But you start your coding with an integration test which expects the right outcome; since nothing is coded yet it will fail until the very end.<br />The integration test makes sure that all components are eventually properly wired together and can produce the side-effect or direct outcome that is expected.</p>
<h4>So what is 'classicist'?</h4>
<p>With 'classicist' we mean the original TDD approach: the red-green-refactor cycle with triangulation, where the production code is developed in small steps.<br />In between (in the refactor step) you do refactorings and carve out collaborators, find abstractions, etc.,<br />but your tests should not be changed once they are green. The refactorings you do are internal, not externally visible.<br />Your tests implicitly test the behavior of helper classes like collaborators.<br />You don't usually do a lot of mocking, in particular not of the collaborators.</p>
<h4>'London style'&nbsp;</h4>
<p>'London style' is different in that you explicitly think about any collaborations and helper classes while you write the test.<br />So you do more during the 'red' step, and therefore the refactor step is shorter than in classicist.<br />As a consequence you have to mock out those collaborators, because you know about them and want to control them.<br />On a new system you can carve out a lot of the architecture and design this way.</p>
<p>So basically, while 'classicist' drives design passively and as a refactoring, 'London style' drives it actively through mocking.</p>
<p>Some say that this ('London style') actually tests internals, which you should avoid.<br />But I think we have to look at this from a different perspective.<br />As a tester and designer I would want to know which classes collaborate and use other classes. And this is satisfied by the mocking.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Wicket UI in the cluster - reflection ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Wicket+UI+in+the+cluster+-+reflection"></link>
        <updated>2019-05-10T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Wicket+UI+in+the+cluster+-+reflection</id>
<content type="html"><![CDATA[ <p>In the last blog post I've talked about the technicalities of running Wicket clustered.</p>
<p>But there are more things to consider.&nbsp;</p>
<p dir="auto">The session handling of server based web applications is usually done by the web application server&nbsp;which in the Java world runs on top of a Servlet container.<br />This might be Jetty, Tomcat or a commercial one.</p>
<p dir="auto">The Servlet specification covers the session and cookie handling.<br />Since Wicket runs on the application server, it doesn't itself have to deal with storing and loading sessions. This is all done by the application server.<br />Wicket just 'uses' the session that the application server provides.</p>
<p dir="auto">In a cluster environment it is also the responsibility of the application server to create an environment where it replicates the session.</p>
<p dir="auto">As outlined in the previous article it is possible to follow certain best practices to help Wicket keep the session small, like using <code>LoadableDetachableModel</code>s.</p>
<p dir="auto">But the application server relies on the load-balancer providing a certain level of session stickiness.<br />Looking at the technology stack there are just too many drawbacks when we have to assume non-stickiness, or when stickiness doesn&rsquo;t work reliably.</p>
<p dir="auto">There are a few scenarios for how session replication can work.</p>
<p dir="auto">If the session is stored in a database that all cluster nodes can use at the same time it might be possible that the application works without LB session stickiness.<br />But this is quite a performance hit since there is usually a long way to the database and the session data has to be serialized and deserialized.<br />In this scenario you cannot really use the second-level cache of the application server because session data might differ slightly when switching from one cluster node to another in a rapid succession.</p>
<p dir="auto">It is also possible to store the session in memory using a technology like Hazelcast, <a class="link" href="https://ignite.apache.org/use-cases/caching/web-session-clustering.html" target="_blank">Apache Ignite</a> or something similar.<br />In this scenario the session is stored locally in a second-level cache and then replicated in the background to other cluster nodes.<br />However, the session replication might not be immediate, which means that non-stickiness will not work properly here: the session might not have replicated yet when the LB switches nodes during a request, which will lead to unexpected behavior or page load errors.</p>
<p dir="auto">&nbsp;</p>
<p dir="auto">So it is highly recommended to use the application server's second-level cache for performance reasons and to use session stickiness on the load-balancer.<br />To avoid SSL offloading, load-balancers usually can also be configured to use certain nodes on an IP address or region basis.</p>
<p dir="auto">For more info take a look here:&nbsp;<a href="https://ci.apache.org/projects/wicket/guide/8.x/single.html#_lost_in_redirection_with_apache_wicket" target="_blank">lost_in_redirection_with_apache_wicket</a></p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Wicket UI in the Cluster - know how and lessons learned ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Wicket+UI+in+the+Cluster+-+know+how+and+lessons+learned"></link>
        <updated>2019-04-29T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Wicket+UI+in+the+Cluster+-+know+how+and+lessons+learned</id>
        <content type="html"><![CDATA[ <p>While working on a Wicket UI cluster support feature in the last weeks I covered quite a bit of new territory that I was only partly aware of, even after ~9 years of doing Wicket. And I had to do quite a bit of research to collect know-how from different sources.</p>
<p>In this post I&rsquo;d like to share what I have learned and things I want to emphasize that should be applied.</p>
<p>(I&rsquo;m doing coding in Scala. So some things are Scala related but should generally apply to Java as well.)</p>
<h4>&nbsp;</h4>
<h4>Model separation</h4>
<p>If your application has the potential to get bigger with multiple layers you should separate your models (and not only your models). Honor separation of concerns (<code>SoC</code>) and single responsibility (<code>SRP</code>). Create dedicated models at an architectural, module or package boundary (where necessary) and map your models. Apply orthogonality.</p>
<p>If you don&rsquo;t it&rsquo;ll hit you in the face at some point. And you are lucky if it does only once.</p>
<p>You can imagine the disadvantages of not using dedicated models: changes to the model affect every part of the application where it's directly used, which makes the application rigid.</p>
<p>After all &rsquo;soft&rsquo;ware implies being &lsquo;soft&rsquo; as in flexible and easy to change.</p>
<p>In regards to Wicket or other &lsquo;external interfaces&rsquo; the problem is that a loaded model is partly stored in instance variables of Wicket components. The domain model can contain a ton of data and you have no control over what gets serialized and what not without changing your domain model, which you shouldn&rsquo;t do to satisfy the requirements of an external interface.</p>
<p>Because in a cluster environment those components must now be (de)serialized to be distributed across the cluster nodes, and there is no cache anymore, this is:<br />a) a performance hit, and<br />b) uses up quite some network bandwidth when the session changes a few times per second.</p>
<p>The approach should be to create a dedicated model for a view, because most probably not all data of a domain model is visualized. Further, when the domain model is used directly, submitting form data goes straight back to the domain model. Instead a dedicated &lsquo;submit form&rsquo; model can be created that only holds the data of the submit and can be merged back into the domain model on a higher level that can better control when, where and how this is done (i.e. applying additional validations, etc.) This certainly takes a bit more time but is worth the effort in the longer run.</p>
<h4>&nbsp;</h4>
<h4>Use LoadableDetachableModel</h4>
<p><code>LoadableDetachableModel</code>s load the model when a request is made and &lsquo;forget&rsquo; it after the response was generated, and before the state is saved to the session. Which means that model data is not stored to the session but reloaded from scratch more often. One has to keep in mind that the session can change multiple times per request/response cycle, in particular if JavaScript based components load their data lazily. In a cluster environment, without the Servlet container&rsquo;s second-level cache (see below), it is better to load the data on a request basis instead of serializing and deserializing large amounts of data which have to be synchronized between cluster nodes. Usually the application has a general caching mechanism on a higher level which makes loading the data acceptable.</p>
<p>Preferably no model is stored in the components at all but only the state of the components as such. With this the session size can be contained at a few kBytes.</p>
<p>This is something the Wicket developer has to sensibly consider when developing a component.</p>
<p>In Wicket models can be chained. I like using <code>CompoundPropertyModel</code>s. But you can still use a <code>LoadableDetachableModel</code> by chaining them together:</p>
<pre><code>new CompoundPropertyModel[Foo](new LoadableDetachableModel[Foo]() {
  override def load(): Foo = myModelObject // reload the model object on each request
})
</code></pre>
<h4>&nbsp;</h4>
<h4>Extend from <code>Serializable</code> (or use Scala <code>case class</code>es) for any classes used as UI models</h4>
<p>This should be obvious. Any class that should be serializable requires implementing the <code>Serializable</code> interface.</p>
<p>In Wicket you can also inherit from <code>IClusterable</code>, which is just a marker trait inheriting from <code>Serializable</code>.</p>
<h4>&nbsp;</h4>
<h4>Add <code>Serializable</code> to abstract parent classes if there is a class hierarchy</h4>
<p>I&rsquo;ve had a few cases where serialized classes could not be deserialized. The reason was that when you have a class hierarchy the abstract base class must also inherit from <code>Serializable</code>.</p>
<p>The deserialization of the code below fails even though class <code>Bar</code> inherits from <code>Serializable</code>. Class <code>Foo</code> also <strong>must</strong> inherit from <code>Serializable</code>:</p>
<pre><code>@SerialVersionUID(1L)
abstract class Foo(val var1: String, val var2: Int)

class Bar(var1: String, var2: Int) extends Foo(var1, var2) with Serializable
</code></pre>
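<p>The rule behind this can be checked in plain Java, independent of Wicket or Scala. A minimal sketch (hypothetical <code>Shape</code>/<code>Circle</code> classes): writing the object succeeds, but reading it back fails because the nearest non-serializable superclass has no accessible no-arg constructor. Making the base class <code>Serializable</code>, as described above, avoids the problem.</p>

```java
import java.io.*;

// Non-serializable base class WITHOUT a no-arg constructor (hypothetical example).
abstract class Shape {
    final String name;
    Shape(String name) { this.name = name; }
}

// The subclass is Serializable, but its non-serializable parent has no
// accessible no-arg constructor, so deserialization fails.
class Circle extends Shape implements Serializable {
    private static final long serialVersionUID = 1L;
    Circle() { super("circle"); }
}

public class SerializableHierarchyDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Circle()); // writing succeeds
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ois.readObject();              // reading fails here
        } catch (InvalidClassException e) {
            System.out.println("deserialization failed: no valid constructor");
        }
    }
}
```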
<h4>&nbsp;</h4>
<h4>Add <code>@SerialVersionUID</code>, always</h4>
<p>Wicket components, including the model classes, are serializable by default. But to keep compatibility across temporarily different versions of the app when updating a cluster node, add a <code>@SerialVersionUID</code> annotation to your component classes (that is the Scala annotation; in Java it is a <code>static final long</code> field). Also add this to every model data class.</p>
<p>When omitting this annotation, the serial version is derived by Java for each compilation and hence versions are incompatible with each other even if no code changes were made. So add this annotation to specify a constant version.</p>
<p>Add this to your IDE&rsquo;s class template mechanism. Any class created should have this annotation. It doesn&rsquo;t hurt when it&rsquo;s there but not used.</p>
<p>If you want to know more about this, and how to create compatible versions of classes read this: <a class="small" href="https://docs.oracle.com/javase/8/docs/platform/serialization/spec/version.html" target="_blank">https://docs.oracle.com/javase/8/docs/platform/serialization/spec/version.html</a></p>
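<p>The effect of pinning the version can be made visible with <code>java.io.ObjectStreamClass</code>. A small sketch (class names are illustrative):</p>

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

// With an explicit serialVersionUID the stream version stays constant
// across recompilations.
class PinnedModel implements Serializable {
    private static final long serialVersionUID = 1L;
    String field;
}

// Without it, the JVM derives a UID from the class shape, so any structural
// change breaks compatibility with previously serialized data.
class UnpinnedModel implements Serializable {
    String field;
}

public class SerialVersionUidDemo {
    public static void main(String[] args) {
        // Prints the declared constant: 1
        System.out.println(ObjectStreamClass.lookup(PinnedModel.class).getSerialVersionUID());
        // Prints a compiler-derived hash value
        System.out.println(ObjectStreamClass.lookup(UnpinnedModel.class).getSerialVersionUID());
    }
}
```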
<h4>&nbsp;</h4>
<h4>No Scala Enumeration, causes trouble at deserialization</h4>
<p>Use <code>Enumeratum</code> instead or just a combination of Scala <code>case class</code> plus some constant definitions on the companion object.</p>
<h4>&nbsp;</h4>
<h4>Enumeratum, add no arg constructor with <code>abstract class</code></h4>
<p>The code below doesn&rsquo;t deserialize if the auxiliary no-arg constructor is missing, so keep that in mind:</p>
<pre><code>@SerialVersionUID(1L)
sealed abstract class MyEnum(val displayName: String) extends EnumEntry {
  def this() = this("")
}
</code></pre>
<h4>&nbsp;</h4>
<h4>Use Wicket <code>RenderStrategy.ONE_PASS_RENDER</code></h4>
<p>By default Wicket uses a POST-REDIRECT-GET pattern implementation. This is to avoid the &lsquo;double-submit&rsquo; problem.</p>
<p>However, in cluster environments it&rsquo;s possible that the GET request goes to a different cluster node than the POST request and hence this could cause trouble.</p>
<p>So either you have to make certain that the cluster nodes got synchronized between POST and GET or you configure Wicket to the render strategy <code>ONE_PASS_RENDER</code>.</p>
<p><code>ONE_PASS_RENDER</code> basically returns the page markup as part of the POST response.</p>
<p>See here for more details: <a class="link" href="https://ci.apache.org/projects/wicket/apidocs/8.x/index.html?org/apache/wicket/settings/RequestCycleSettings.RenderStrategy.html" target="_blank">https://ci.apache.org/projects/wicket/apidocs/8.x/index.html?org/apache/wicket/settings/RequestCycleSettings.RenderStrategy.html</a></p>
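<p>As a sketch, the corresponding setting goes into <code>Application#init()</code> (shown here in Java against the Wicket 8 API):</p>

```java
// Config fragment inside your WebApplication subclass (Wicket 8).
@Override
protected void init() {
    super.init();
    // Return the rendered markup directly in the POST response
    // instead of redirecting to a GET request.
    getRequestCycleSettings()
        .setRenderStrategy(RequestCycleSettings.RenderStrategy.ONE_PASS_RENDER);
}
```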
<h4>&nbsp;</h4>
<h4>Use Wicket <code>HttpSessionStore</code></h4>
<p>By default Wicket uses a file based session page store where the serialized pages are written to. Wicket stores those to support the browser back button and to render older versions of the page when the back button is pressed.</p>
<p>In a cluster setup the serialized pages must be stored in the session so that the pages can be synchronized between the cluster nodes.</p>
<p>In Wicket version 8 you do it like this (in <code>Application#init()</code>):</p>
<pre><code>setPageManagerProvider(new DefaultPageManagerProvider(this) {
  override def newDataStore() = {
    new HttpSessionDataStore(getPageManagerContext, new PageNumberEvictionStrategy(5))
  }
})
</code></pre>
<p>The <code>PageNumberEvictionStrategy</code> defines how many versions of one page are stored.</p>
<h4>&nbsp;</h4>
<h4>Disable the Servlet containers second-level cache</h4>
<p>Jetty (or generally Servlet containers) usually uses a second-level cache (<code>DefaultSessionCache</code>) where session data, in form of the runtime objects, is stored for quick access without going through the (de)serialization.</p>
<p>In a cluster environment however this can cause issues because what the second-level cache contains is likely to be different on each cluster node and hence wrong states may be pulled out of it when the load-balancer is delegating to a different node for a request.</p>
<p>So it is better to not use a second-level cache. In Jetty you do this by setting up a <code>NullSessionCache</code>. To this <code>NullSessionCache</code> you also have to provide the backing <code>SessionDataStore</code> where the session data is written and read from.</p>
<p>You set this up per <code>ServletContextHandler</code> (Jetty 9.4):</p>
<pre><code>val sessionHandler = new SessionHandler
handler.setSessionHandler(sessionHandler) // `handler` is the ServletContextHandler

val sessionCache = new NullSessionCacheFactory().getSessionCache(handler.getSessionHandler)
val sessionStore = // set your `SessionDataStore` implementation here

sessionCache.setSessionDataStore(sessionStore)
sessionHandler.setSessionCache(sessionCache)
</code></pre>
<p>You have different options for the <code>SessionDataStore</code> implementation. Jetty provides a <code>JDBCSessionDataStore</code> which stores the session data into a database.</p>
<p>But there are also implementations for Memcached or Hazelcast, etc.</p>
<h4>&nbsp;</h4>
<h4>Serialization considerations</h4>
<p>There are other options than the Java object serialization. I&rsquo;d like to name two which are supported by Wicket:</p>
<ul>
<li><a class="link" href="https://github.com/wicketstuff/core/tree/master/serializer-kryo2" target="_blank">https://github.com/wicketstuff/core/tree/master/serializer-kryo2</a></li>
<li><a class="link" href="https://github.com/wicketstuff/core/tree/master/serializer-fast2" target="_blank">https://github.com/wicketstuff/core/tree/master/serializer-fast2</a></li>
</ul>
<p>Both provide more performance and flexibility on serialization than the default Java serializer and should be considered.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - Mars Rover Kata classicist in Scala ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+Mars+Rover+Kata+classicist+in+Scala"></link>
        <updated>2019-04-23T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+Mars+Rover+Kata+classicist+in+Scala</id>
        <content type="html"><![CDATA[ <p>Hey. <br />So I've performed the <a class="link" href="http://kata-log.rocks/mars-rover-kata" target="_blank">Mars Rover Kata</a>&nbsp;in a classicist TDD style outside-in.</p>
<p>&nbsp;</p>
<p>&nbsp;Interested? Check it out on YouTube:</p>
<p>&nbsp;</p>
<p>&nbsp;<iframe src="https://www.youtube.com/embed/3kGpDv3VXMQ" width="750" height="420" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Burning your own Amiga ROMs (EPROMs) ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Burning+your+own+Amiga+ROMs+(EPROMs)"></link>
        <updated>2019-01-26T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Burning+your+own+Amiga+ROMs+(EPROMs)</id>
<content type="html"><![CDATA[ <p>With the release of the latest AmigaOS version (3.1.4) the package you could buy included ROM images to be used either for maprom (depending on your accelerator card's tool support) or for burning to a ROM.</p>
<p>Maprom is probably preferred, because it's more flexible, but not always possible. For instance the A3440 card can't do maprom. Or if you have no accelerator at all you can't do maprom either.</p>
<p>That leaves only a few options: buy the ROM, have someone burn it, or burn it yourself.</p>
<p>Here I want to show how it works to burn it yourself.</p>
<p>&nbsp;</p>
<p>What you need:</p>
<p>- an EPROM programmer. I have chosen the low cost GQ-4x4 USB programmer.</p>
<p>- to program the EPROMs used&nbsp;in an Amiga you have to get a 16-Bit 40/42 pin ZIF adapter board for the burner:<br />ADP-054 16 Bit EPROM 40/42 pin ZIF adapter</p>
<p>- a UV eraser to erase the EPROMs in case something goes wrong.</p>
<p>- then you need EPROMs. The type used in the A500/A600/A2000 is the 27C400. I found the following to work, which can be ordered on eBay: AMD27C400</p>
<p>- for burning ROMs for the A1200/A4000 you need 27C800 / AMD27C800 EPROMs, two of them to burn one ROM.</p>
<p>- and certainly a ROM image you want to burn.</p>
<p>&nbsp;</p>
<p>Sometimes there are good offers at Amazon or eBay for a complete package (except the EPROMs).<br />You shouldn't pay more than &euro;150 for the GQ-4x4, the adapter board and the eraser.</p>
<p>Here is a picture of the device with attached adapter board with an EPROM inside.</p>
<p><img src="/static/gfx/blogs/GQ-Burn-0_small.jpg" alt="GQ programmer with adapter board and EPROM" width="480" height="360" /></p>
<p>Then you need to download the software for the burner. That is a) the burner software itself, named "GQUSBprg" (the latest version as of this writing is 7.21), and b) the USB driver 3.0.<br /><br />Both can be downloaded here:&nbsp;<a class="link" href="http://mcumall.com/store/device.html" target="_blank">http://mcumall.com/store/device.html</a></p>
<p>Once you have connected the burner and installed the software, we can start.<br />Now open the burner software. Make sure that no EPROM is inserted.</p>
<p>1. The first step is to select the device, i.e. the EPROM to burn.<br /><br />Make sure you choose either AM27C400 or 27C400.<br /><br /><img src="/static/gfx/blogs/GQ-Burn-1-SelectDevice_small.jpg" alt="" width="480" height="312" /></p>
<p>2. Next we'll make a voltage check to see if the burner has all voltages in order to properly burn the EPROM.<br /><br />I found that while you can attach a power supply to the burner, it is not required. <strong>The USB provides enough power.</strong><br /><br /><img src="/static/gfx/blogs/GQ-Burn-2-VoltageCheck_small.jpg" alt="" width="480" height="312" /></p>
<p>3. Load the ROM image into the buffer.<br /><br />When you load the image make sure you choose .bin (binary).<br /><br /><strong>!!! This is important, or otherwise the programmed ROM won't work.</strong><br /><strong>After you loaded the ROM image, you have to make sure to swap bytes.</strong><br /><strong>This can be done in the 'Command' menu of the software.</strong><br /><br /><img src="/static/gfx/blogs/GQ-Burn-3-LoadRomAsBinary_small.jpg" alt="" width="480" height="312" /></p>
<p>4. Now you have to put in your EPROM into the ZIF slot.<br /><br />Make sure it sits tight and doesn't move anymore.</p>
<p>5. Make a blank check to see if the EPROM is empty.<br /><br /><img src="/static/gfx/blogs/GQ-Burn-6-BlankCheck-small.jpg" alt="" width="480" height="313" /></p>
<p>6. When the EPROM is blank we can write it.<br /><br /><img src="/static/gfx/blogs/GQ-Burn-7-Write_small.jpg" alt="" width="480" height="411" /></p>
<p>When the write process has finished, you're done.<br /><br />You can take the EPROM out, put it into the Amiga, and it should work.</p>
<p><strong>Some notes:</strong><br />This whole process of writing the ROM was partly a real pain, because the GQ burner would just stop writing at some address. In fact I had to get the package replaced, including the adapter board.<br /><br />I had first tried it in a virtual machine (VMware Fusion on Mac), but this doesn't work: the GQ programmer detaches and re-attaches to the USB bus during some of the operations, and that doesn't seem to work reliably in a VM.</p>
<p><a class="twitter-follow-button link" href="https://twitter.com/mdbergmann?ref_src=twsrc%5Etfw" data-show-screen-name="false" data-show-count="false">Follow @mdbergmann</a></p>
<br/>
<br/>
<p>Update:</p>
<p>The Amiga 4000 can only use 512k EPROMs, hence only the 27C400 will work. The Amiga 1200 can also use the 27C800 (1MB). Regarding the byte-swap: if your ROM image is already byte-swapped, you don't need to do it here. Some ready-to-burn ROM images already have this. However, if you want to burn ROM images that are meant for maprom or UAE, you have to byte-swap.</p>
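<p>To make clear what the byte-swap does: every 16-bit word of the image has its two bytes exchanged. A small illustrative sketch in Java (not the GQ software):</p>

```java
// Byte-swap a ROM image: exchange the two bytes of every 16-bit word.
public class ByteSwap {

    static byte[] byteSwap(byte[] image) {
        byte[] out = new byte[image.length];
        for (int i = 0; i + 1 < image.length; i += 2) {
            out[i] = image[i + 1]; // low byte of the word first...
            out[i + 1] = image[i]; // ...then the high byte
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] rom = {0x11, 0x22, 0x33, 0x44};
        byte[] swapped = byteSwap(rom);
        System.out.printf("%02x %02x %02x %02x%n",
                swapped[0], swapped[1], swapped[2], swapped[3]); // 22 11 44 33
    }
}
```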
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - Game of Life in Clojure and Emacs ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+Game+of+Life+in+Clojure+and+Emacs"></link>
        <updated>2019-01-05T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+Game+of+Life+in+Clojure+and+Emacs</id>
<content type="html"><![CDATA[ <p>Check out the screencast about implementing the <a class="link" href="http://kata-log.rocks/game-of-life-kata" target="_blank">Game of Life</a> in <a class="link" href="https://clojure.org" target="_blank">Clojure</a> with a TDD (Test-Driven Development) approach in the <a class="link" href="https://www.gnu.org/software/emacs/" target="_blank">Emacs</a> editor with the <a class="link" href="https://cider.readthedocs.io/en/latest/" target="_blank">CIDER</a> plugin.</p>
<p>&nbsp;</p>
<p><iframe src="https://www.youtube.com/embed/GtuP8byblT4" width="750" height="420" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>&nbsp;</p>
<p>P.S: Happy new Year!!!</p>
<p>&nbsp;</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - Outside-in with Wicket and Scala-part 2 ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+Outside-in+with+Wicket+and+Scala-part+2"></link>
        <updated>2018-12-24T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+Outside-in+with+Wicket+and+Scala-part+2</id>
        <content type="html"><![CDATA[ <p>This is the second and last part of the series. It shows the login.</p>
<p>Since I forgot to show how the registration works in the browser after we've implemented it all I'm showing this in the beginning together with two book introductions.</p>
<p>Have fun.</p>
<p>&nbsp;</p>
<p><iframe src="https://www.youtube.com/embed/os5_pFh75-I" width="750" height="420" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>&nbsp;</p>
<p>And... Merry Christmas to all of you.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ TDD - Outside-in with Wicket and Scala-part 1 ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/TDD+-+Outside-in+with+Wicket+and+Scala-part+1"></link>
        <updated>2018-12-04T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/TDD+-+Outside-in+with+Wicket+and+Scala-part+1</id>
        <content type="html"><![CDATA[ <p>This YouTube video demonstrates the Outside-In approach first explained in the book "<a class="link" href="http://www.growing-object-oriented-software.com" target="_blank">Growing Object-Oriented Software Guided by Tests</a>".</p>
<p>It'll create a simple Wicket based web application that works its way down to the domain and creates design by mocking.</p>
<p>Be aware that this is not intended for beginners of TDD.</p>
<p>The first part will implement the user registration.</p>
<p>&nbsp;</p>
<p><iframe src="https://www.youtube.com/embed/C2_LbKSWgQM" width="750" height="420" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Floating Point library in m68k Assembler on Amiga ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Floating+Point+library+in+m68k+Assembler+on+Amiga"></link>
        <updated>2018-08-09T02:00:00+02:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Floating+Point+library+in+m68k+Assembler+on+Amiga</id>
        <content type="html"><![CDATA[ <h4>Part 1 - some theory</h4>
<p>Someone told me lately: &ldquo;If you haven&rsquo;t developed a floating-point library, go home and do it. It&rsquo;s a nice weekend project.&rdquo;</p>
<p>I followed this advice. <br />So, while I was away on vacation I&rsquo;ve developed this using pen-and-paper and only later wrote and tested it on one of my Amigas.</p>
<p>I must say, it took longer than a weekend. :) But it was a great experience to see how those numbers are generated and handled, and how they 'jitter' at the last bits of precision.</p>
<p>The Amiga offers a number of programming languages, including C/C++, more high-level languages like Pascal or Oberon, and some Basic dialects like AMOS, BlitzBasic and others.<br /> But I thought assembler would be nice. The Motorola 68000 series has a very nice assembly language.<br /> I know it from my old Amiga times but never really did a lot with it, so I&rsquo;m not an expert in assembler. Hence the assembler code introduced here might not be efficient or optimised.<br />I took the assembler specs with me as a print-out and studied them while developing the code.</p>
<p>(I&rsquo;m posting the full assembler source code at the end of the post. <br />It was developed using the &lsquo;DevPac&rsquo; assembler. A well known macro assembler for the Amiga.)</p>
<p>As the first part of this blog I&rsquo;d like to write a little about the theory of floating-point numbers.<br /> But I&rsquo;m assuming that you know what &lsquo;floating-point&rsquo; numbers are.</p>
<p>One of the floating-point standards is IEEE 754.<br /> It standardises how the number is represented in memory/CPU registers and how it is calculated.<br /> The single-precision IEEE 754 format is 32 bits wide.<br /> The binary representation is defined as (from high bit to low bit):<br /> - 1 bit for the sign (-/+)<br /> - 8 bits for the exponent<br /> - 23 bits for the mantissa</p>
<p>The sign is pretty clear, it says whether the number is positive or negative.</p>
<p>The 8 bit exponent basically encodes the &lsquo;floating-point&rsquo; shift value to the left and right.<br /> Shifting to the left means that a negative exponent has to be encoded, shifting to the right a positive one.<br /> In order to encode positive and negative values in 8 bits, a so-called &lsquo;biased representation&rsquo; is used. With an &lsquo;excess&rsquo; value of 127 it&rsquo;s possible to encode exponents from &ndash;126 to 127.</p>
<p>The 23 bit mantissa combines the integer part of the floating-point number and the fraction part.</p>
<p>The integer part in the mantissa can go through a &lsquo;normalisation&rsquo; process, which means that the first &lsquo;1&rsquo; in a binary form of the number matters. And everything before that is ignored, considering the number is in a 32 bit register.<br /> So only the bits from the first &lsquo;1&rsquo; to the end of the number are taken into the mantissa.<br /> The &lsquo;hidden bit&rsquo; assumes that there is always a &lsquo;1&rsquo; as the first bit of a number.<br /> So that IEEE 754 says that this first &lsquo;1&rsquo; does not need to be stored, hence saving one bit for the precision of the fraction part.</p>
<p>Let&rsquo;s take the number 12.45.<br /> In binary it is approximately: <code>1100,0111001100&hellip;</code> (the fraction part is non-terminating)<br /> The left side of the comma, the integer part, has the binary value:<br /> <code>1100</code> = <code>1*2^3 + 1*2^2 + 0*2^1 + 0*2^0</code> = 12<br /> The fraction part, right side of the comma, uses the place values <code>2^-1, 2^-2, 2^-3, &hellip;</code>:<br /> <code>0111001100&hellip;</code> = <code>0*2^-1 + 1*2^-2 + 1*2^-3 + 1*2^-4 + 0*2^-5 + &hellip;</code> &asymp; 0.45</p>
<p>That is how it would be stored in the mantissa.<br /> Considering the &lsquo;hidden bit&rsquo;, the leading bit of the integer part does not need to be stored, hence one more bit is available for the fraction part.<br /> Later, when the number must be converted back into decimal, it is important to know the bit size of the integer part (positive exponent) or, in case the integer part is 0, how many digits were shifted right to the first 1 bit of the fraction part (negative exponent).</p>
<p>There is more to it; read up on it here if you want: https://en.wikipedia.org/wiki/IEEE_754</p>
<h3>Part 2 - the implementation - dec2bin (decimal to binary)</h3>
<p>We make a few simplifications to the IEEE 754 standard, so this implementation is not fully compliant:<br /> - the &lsquo;hidden bit&rsquo; is not hidden :)<br /> - no normalisation, which means we don&rsquo;t have negative exponents, because we don&rsquo;t look into the delivered fraction part for the first &lsquo;1&rsquo;.</p>
<p>Now, how does it work in practice to get a decimal number into the computer in IEEE 754 representation?<br /> The library developed here assumes that the integer part (left side of the comma) and the fraction part (right side of the comma) are delivered in separate CPU registers, because we do not have a &lsquo;float&rsquo; number type in which they could be delivered combined.<br /> It would certainly work to use one register, the upper word for the integer part and the lower word for the fraction part. 16 bits for each would in many cases fully suffice. But for simplicity, let&rsquo;s take separate registers.</p>
<p>Say, the number is: <code>12.45</code>.<br /> Then <code>12</code> (including the sign) would be delivered in register d0.<br /> The fraction part, <code>45</code> in d1.<br /> The binary floating point number output will be delivered back in register d7.</p>
<p>Converting the integer part into binary form is pretty trivial. We just copy the value <code>12</code> into a register and that&rsquo;s it; a CPU register already holds its value in binary form. Hence, the input register d0 already contains the binary representation of the number <code>12</code>.</p>
<p>As the next step we have to calculate the bit length of that number, because it is later stored in the exponent.<br /> The algorithm is to shift register d0 left bit-by-bit until the top bit (bit 31) is a &lsquo;1&rsquo;, counting how many times we shifted.<br /> Subtracting that shift count from 32 (the bit length of the register) gives the bit length of the integer value.</p>
<p>Here is the assembler code for that:</p>
<pre><code>    ; d0 copied to d6
    ; if int_part (d6) = 0 then no need to do anything
    cmpi.l  #0,d6
    beq .loop_count_int_bits_end
    
    ; now shift left until we find the first 1
    ; counter in d2
.loop_count_int_bits
    btst.l  #$1f,d6     ; top bit (bit 31) set?
    bne.s   .loop_count_int_bits_done
    addq    #1,d2       ; inc counter
    lsl.l   #1,d6
    bra .loop_count_int_bits

.loop_count_int_bits_done

    move.l  #32,d3
    sub.l   d2,d3       ; 32 - shift count = int part bit length
    move.l  d3,d2

.loop_count_int_bits_end
</code></pre>
<p>Register d2 now holds the result: the bit length of the integer part.</p>
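<p>The same counting loop can be sketched in C (a sketch of the idea, not the actual library code; the function name <code>int_bit_length</code> is mine):</p>

```c
#include <stdint.h>

/* Count how many bits the integer part occupies, the same way the
   assembler loop does: shift left until the top bit (bit 31) is set,
   then subtract the shift count from 32.  Returns 0 for 0. */
static int int_bit_length(uint32_t v) {
    if (v == 0)
        return 0;
    int shifts = 0;
    while ((v & 0x80000000u) == 0) {  /* top bit not yet set */
        v <<= 1;
        shifts++;
    }
    return 32 - shifts;
}
```

<p>For the value 12 (binary <code>1100</code>) this returns 4, matching what the assembler leaves in d2.</p>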
<p>The fraction part is a little more tricky; bringing it into binary form requires some thought.<br /> Effectively the fraction part bit values in binary form are (right of the comma): <code>2^-1, 2^-2, 2^-3, 2^-4, ...</code><br /> We set up the convention that the fraction value must use 4 digits; <code>45</code> then will be expanded to <code>4500</code>.<br /> 4 digits is not that much, but it suffices for this proof-of-concept.</p>
<p>The algorithm that translates the fraction into binary form depends on the number of digits.<br /> It is as follows (assuming a 4 digit fraction part):</p>
<ol>
<li>fraction part &gt;= 5000?</li>
<li>if yes then mark a &lsquo;1&rsquo; and subtract 5000</li>
<li>if no then mark a &lsquo;0&rsquo;</li>
<li>multiply the remaining fraction by 2<br /> (equivalent to shifting 1 bit to the left)</li>
<li>repeat</li>
</ol>
<p>This loop can be repeated until there are no more bits in the fraction part. Or rather, the loop only repeats for the number of &lsquo;free&rsquo; fraction bits left in the mantissa.<br /> Remember, we have 23 bits for the mantissa. From those we need some to store the integer part; the rest is used for the fraction part.</p>
<p>The threshold value, 5000 here, depends on the number of digits of the fraction part.<br /> If the number of digits is 1 the threshold is 5.<br /> If the number of digits is 2 the threshold is 50.<br /> And so forth: the threshold is half of 10^nDigits, i.e. <code>5 * 10^(nDigits - 1)</code>.</p>
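<p>The threshold rule can be written as a tiny helper (a C sketch; the name <code>threshold</code> is mine):</p>

```c
/* Threshold for an n-digit decimal fraction: half of 10^n,
   i.e. 5 * 10^(n-1).  For 4 digits that is 5000. */
static long threshold(int n_digits) {
    long t = 5;
    for (int i = 1; i < n_digits; i++)
        t *= 10;
    return t;
}
```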
<p>Here is the code to convert the fraction into binary value:</p>
<pre><code>    ; now prepare fraction in d1

.prepare_fract_bits

    ; the algorithm is to:
    ; check if d1 &gt;= 5000 (4 digits)
    ; if yes -&gt; mark '1' and subtract 5000
    ; if no  -&gt; mark '0'
    ; shift left (times 2)
    ; repeat until no more available bits in the mantissa, which here is d3

    move.l  #5000,d4    ; threshold
.loop_fract_bits
    subi.l  #1,d3       ; d3 is position of the bit that represents 5000
    clr.l   d6
    cmp.l   d4,d1
    blt .fract_under_threshold
    sub.l   d4,d1
    bset    d3,d6
.fract_under_threshold
    or.l    d6,d7
    lsl.l   #1,d1       ; d1 * 2
    cmpi.l  #0,d3       ; are we done?
    bgt .loop_fract_bits

.prepare_fract_bits_end
</code></pre>
<p>The above code positions the fraction bits directly in the output register d7. Only as many bits are generated as there is space available in the mantissa.</p>
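<p>In C the fraction conversion looks like this (my own sketch of the same loop, not the library code; the name <code>frac_bits</code> is mine):</p>

```c
#include <stdint.h>

/* Convert a 4-digit decimal fraction (e.g. 4500 for .45) into binary
   fraction bits, mirroring the assembler loop: compare against 5000,
   set a bit and subtract on a hit, then double the remaining value.
   nbits is how many mantissa bits are still free for the fraction. */
static uint32_t frac_bits(uint32_t frac, int nbits) {
    uint32_t out = 0;
    for (int bit = nbits - 1; bit >= 0; bit--) {
        if (frac >= 5000) {
            out |= 1u << bit;  /* this fraction bit is a '1' */
            frac -= 5000;
        }
        frac *= 2;             /* shift left = times 2 */
    }
    return out;
}
```

<p>With 4500 and 19 free bits this produces the bit pattern <code>0111001100110011001</code>, i.e. the start of the repeating binary expansion of 0.45.</p>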
<p>Now we have the mantissa complete.</p>
<p>What&rsquo;s missing is the exponent.<br /> We know the size of the integer part; it is saved in register d2.<br /> That must now be encoded into the exponent.<br /> What we do is add the integer part bit size to 127, the &lsquo;excess&rsquo; value, and write the 8 bits at the right position of the output register d7:</p>
<pre><code>    ; at this point we have the mantissa complete
    ; d0 still holds the source integer part
    ; d2 still holds the exp. data
    ; (int part size, which is 0 for d0 = 0 because we don't hide the 'hidden bit')
    ; d7 is the result register
    ; all other registers may be used freely
    
    ; if d0 = 0 goto end
    cmpi.l  #0,d0
    beq .prepare_exp_bits_end
    
.prepare_exp_bits
    ; Excess = 127
    move.l  #127,d0     ; we don't need d0 any longer
    add.l   d2,d0       ; size of int part on top of excess
    move.l  #23,d3
    lsl.l   d3,d0       ; shift into right position
    or.l    d0,d7
            
.prepare_exp_bits_end
</code></pre>
<p>Notice, there is a special case: if the integer part delivered in d0 is 0, then we&rsquo;ll make the exponent 0, too.</p>
<p><em>The test</em></p>
<p>That&rsquo;s basically it for the decimal to binary operation.<br /> The output register d7 contains the floating point number.</p>
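<p>Under the same simplifications, the whole dec2bin operation can be sketched in C (my own sketch, not the library code; it reproduces the bit pattern that the assembler test below compares against):</p>

```c
#include <stdint.h>

/* Simplified dec2bin from this post as a C sketch: no hidden bit, no
   normalisation.  ip is the integer part (assumed to fit into the
   23-bit mantissa), frac the 4-digit decimal fraction part. */
static uint32_t dec2bin(uint32_t ip, uint32_t frac) {
    /* bit length of the integer part */
    int n = 0;
    for (uint32_t v = ip; v != 0; v >>= 1)
        n++;
    /* integer bits go to the top of the 23-bit mantissa */
    uint32_t mant = ip << (23 - n);
    /* fraction bits fill whatever mantissa space is left */
    for (int bit = 23 - n - 1; bit >= 0; bit--) {
        if (frac >= 5000) {
            mant |= 1u << bit;
            frac -= 5000;
        }
        frac *= 2;
    }
    /* exponent: excess 127 plus the integer bit length; 0 stays 0 */
    uint32_t exp = (ip == 0) ? 0 : (127u + (uint32_t)n);
    return (exp << 23) | mant;
}
```

<p>For 12.45 (<code>dec2bin(12, 4500)</code>) this yields <code>%01000001111000111001100110011001</code>, i.e. 0x41E39999.</p>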
<p>Test code for that is straightforward.<br /> The dec2bin operation is coded as a subroutine in a separate source file. We can now easily create a test source file and include the dec2bin routine.<br /> Like so:</p>
<pre><code>    ; dec2bin test code
    
    move.l  #12,d0      ; integer part =&gt; 1100
    move.l  #4500,d1    ; fract part
    
    ; subroutine expects d0, d1 to be filled
    ; result: the IEEE 754 number is in d7
    bsr dec2bin

    move.l  #%01000001111000111001100110011001,d3   ; this what we expect
    cmp.l   d3,d7
    beq assert_pass
    
    move.l  #1,d3
    bra assert_end
    
assert_pass
    move.l  #0,d3
    
assert_end
    illegal

        
    ;include
    ;
    include "dec2bin.i"
</code></pre>
<p>The test code compares the subroutine output with a manually set up binary number that we expect.<br /> If the comparison succeeds, a 0 is written to register d3.<br /> Otherwise a 1.</p>
<h3>Part 3 - the implementation - bin2dec (binary to decimal)</h3>
<p>We want to convert back from the binary float number to the decimal representation with the integer part (with sign) and the fraction part in separate output registers.<br /> And we want to assert that we get back what we initially put in.</p>
<p>In register d0 we expect the floating point number as input.<br /> In d6 will be the integer part output.<br /> In d7 the fraction part output.</p>
<p>Let&rsquo;s start by extracting the exponent, because we need the integer part bit length that is encoded there.</p>
<p>We&rsquo;ll make a copy of the input register to operate on, because we mask out everything but the exponent bits.<br /> Then we&rsquo;ll right-align those and subtract 127 (the &lsquo;excess&rsquo;).<br /> The result is the integer part bit length.<br /> However, if the exponent is 0 we can skip this part.</p>
<pre><code>.extract_exponent
    move.l  d0,d1
    andi.l  #$7f800000,d1   ; mask out all but exp
    move.l  #23,d2
    lsr.l   d2,d1           ; right align
    
    ; if int part = 0
    cmpi.w  #0,d1
    beq .extract_sign
    subi.w  #127,d1
    
    ; d1 is now the size of int part
</code></pre>
<p>As the next step we&rsquo;ll extract the integer part bits.<br /> Again we make a copy of the input register.<br /> Then we mask out all but the mantissa, 23 bits.<br /> It is already right-aligned, but we want to shift out all the fraction bits until only the integer bits are left in this register.<br /> Finally we copy this to the output register d6.</p>
<pre><code>.extract_mantisse_int
    move.l  d0,d2       ; copy
    andi.l  #$007fffff,d2   ; mask out all but mantissa
    move.l  #23,d3
    sub.l   d1,d3       ; what we figured out above (int part size)
    lsr.l   d3,d2       ; right align
    move.l  d2,d6       ; result
    
    ; d6 now contains the int part
</code></pre>
<p>We also have to extract the sign bit and merge it with the integer part in register d6.</p>
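<p>The exponent and integer-part extraction can be sketched together in C (my illustration of the two steps above, not the library code; the name <code>extract_int_part</code> is mine):</p>

```c
#include <stdint.h>

/* Pull the integer part back out of the simplified format:
   exponent - 127 gives the integer bit length, and the top bits of
   the 23-bit mantissa are the integer part itself. */
static uint32_t extract_int_part(uint32_t bits) {
    uint32_t exp = (bits >> 23) & 0xFF;  /* mask out all but the exponent */
    if (exp == 0)
        return 0;                        /* int part was 0 */
    uint32_t n = exp - 127;              /* integer part bit length */
    uint32_t mant = bits & 0x7FFFFF;     /* 23 mantissa bits */
    return mant >> (23 - n);             /* shift out the fraction bits */
}
```

<p>Feeding in the encoding of 12.45 from the dec2bin test (0x41E39999) gives back 12.</p>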
<p>The next, more tricky step is converting the fraction part of the mantissa back into a decimal representation.<br /> Basically it is the opposite of the operation above.</p>
<p>First we have to extract the mantissa bits again, similarly as we did in the last step.</p>
<p>What do the &lsquo;1&rsquo; bits in the fraction mantissa represent?<br /> Effectively each &lsquo;1&rsquo; represents the value 5000 (in our case of 4 digits), scaled by its place value.<br /> Remember the fraction bit place values right of the comma: <code>2^-1, 2^-2, 2^-3, ...</code></p>
<p>I.e.: assuming these bits: <code>11001</code> the fraction value is: <code>1/2 + 1/4 + 1/32 = .78125</code></p>
<p>Now, if each &lsquo;1&rsquo; stands for 5000 we have the following: <code>5000/2 + 5000/4 + 5000/32</code><br /> But that&rsquo;s not all. We have to carry the remainder of each division into the next step, and we have to multiply each quotient by 2 to get back to our initial input.</p>
<p>Here is the code:</p>
<pre><code>    clr.l   d7          ; prepare output    
    clr.l   d1          ; used for division remainder
    move.l  #1,d4       ; divisor (1, 2, 4, 8, ...
                        ; equivalent to 2^-1, 2^-2, 2^-4, ...)
.loop_fract
    subi.l  #1,d2       ; d2 current bit to test for '1'
    lsl.l   #1,d4       ; divisor - multiply by 2 on each loop
    cmpi.w  #0,d4       ; loop end? if 0 we shifted out of the word boundary
    beq .loop_fract_end

    btst.l  d2,d3       ; if set we have to divide
    beq .loop_fract     ; no need to divide if 0
    move.l  #5000,d5    ; we divide 5000
    add.l   d1,d5       ; add remainder from previous calculation
    divu.w  d4,d5       ; divide
    clr.l   d6          ; clear for quotient
    add.w   d5,d6       ; copy lower 16 bit of the division result (the quotient)
    lsl.l   #1,d6       ; *2
    add.l   d6,d7       ; accumulate the quotient
    and.l   #$ffff0000,d5   ; the new remainder
    move.l  #16,d1      ; number of bits to shift remainder word
    lsr.l   d1,d5       ; shift
    move.l  d5,d1       ; copy new remainder
    bra .loop_fract

.loop_fract_end
</code></pre>
<p>If we look at the <code>divu.w</code> operation, it only allows a 16 bit denominator, and we only use denominators that are powers of 2.<br /> Effectively that is our precision limit.<br /> Even if we had more fraction bits in the mantissa, we couldn&rsquo;t actually use them to accumulate the result.<br /> So we have some precision loss.</p>
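<p>The division-based back conversion can also be sketched in C (my sketch of the scheme above, not the library code; the name <code>frac_to_dec</code> is mine):</p>

```c
#include <stdint.h>

/* Turn fraction bits back into a 4-digit decimal value: each set bit
   contributes 5000 divided by a doubling divisor, the quotient is
   doubled and accumulated, and the remainder is carried into the
   next division.  nbits is the number of fraction bits. */
static uint32_t frac_to_dec(uint32_t bits, int nbits) {
    uint32_t result = 0, rem = 0, divisor = 1;
    for (int bit = nbits - 1; bit >= 0; bit--) {
        divisor *= 2;                    /* 2, 4, 8, ... */
        if (bits & (1u << bit)) {
            uint32_t val = 5000 + rem;   /* carry previous remainder */
            result += (val / divisor) * 2;
            rem = val % divisor;         /* remainder for the next step */
        }
    }
    return result;
}
```

<p>For the example bits <code>11001</code> this yields 7812, i.e. .78125 truncated to 4 digits, showing exactly where the precision is lost.</p>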
<p>Let&rsquo;s add a test case.</p>
<pre><code>    ; test code for dec2bin2dec
    ;
    
    move.l  #12345,d0       ; integer part
    move.l  #5001,d1    ; fract part
    
    ; subroutine expects d0, d1 to be filled
    ; result: the IEEE 754 number is in d7
    bsr dec2bin

    move.l  d7,d0       ; input for the back conversion
        
    bsr bin2dec

    cmpi.l  #12345,d6
    bne error
    
    cmpi.l  #5001,d7
    bne error

    moveq   #0,d0       ;success    
    illegal

error
    moveq   #1,d0       ;error
    illegal
    
    
    include "dec2bin.i"
    include "bin2dec.i"
</code></pre>
<p>Since we have now both operations, we can use dec2bin and bin2dec in combination.</p>
<p>We provide input for dec2bin, then let the result run through bin2dec and compare original input to the output.</p>
<p>I must say that there is indeed a precision loss. The last (fourth) digit can be off by up to 5, so we have a precision loss of up to 5 ten-thousandths.</p>
<p>That can clearly be improved. But for this little project this result is acceptable.</p>
<p>In the next parts I&rsquo;d like to implement operations for addition, subtraction, division and multiplication.<br /> Also rounding, ceil and floor operations could be implemented. The foundation is in place now.</p>
<p>Here are the sources: <a class="link" href="https://github.com/mdbergmann/fp-lib-m68k" target="_blank">m68k-fp-lib on GitHub</a></p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Cloning Compact Flash (CF) card for Amiga ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Cloning+Compact+Flash+(CF)+card+for+Amiga"></link>
        <updated>2017-12-25T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Cloning+Compact+Flash+(CF)+card+for+Amiga</id>
        <content type="html"><![CDATA[ <p>While in the process of putting my A1000 with Vampire into production I was wondering how I could clone the Compact Flash (CF) card of my Vampireized A600.<br /> <br /> The A600 is nicely set up and has everything on it that I need. So I&rsquo;d like to clone the CF card and put it into my A1000.<br /> <br /> Certainly the screenmode settings will have to be adjusted later, because the A1000 uses a 16:10 22&quot; display while the A600 has a 4:3 19&quot; display attached. But that should not pose a problem at all.<br /> <br /> On Linux or Mac it is possible to use the <code>dd</code> command to make a backup and restore it on another CF card.<br /> <br /> Here is how to make a backup: <br /><code>sudo dd if=/dev/disk5 of=~/Desktop/amiga.img bs=1m</code><br /> <br /> <code>disk5</code> is the device here, but it may be different for you. Open Disk Utility on Mac or check with '<code>diskutil list</code>' for your CF card device.<br /> <br /> Finally, once you have made the image, you restore it onto another CF card: <br /> <code>sudo dd if=~/Desktop/amiga.img of=/dev/disk5 bs=1m</code><br /> <br /> And&hellip; hurray, it boots.</p>
<figure><img src="/static/gfx/blogs/A1000_Vamp.jpg" alt="A1000 with Vampire" width="500" />
<figcaption>A1000 with Vampire</figcaption>
</figure>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Writing tests is not the same as writing tests ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Writing+tests+is+not+the+same+as+writing+tests"></link>
        <updated>2017-12-08T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Writing+tests+is+not+the+same+as+writing+tests</id>
        <content type="html"><![CDATA[ <p>When I went to university to study computer science, automated tests were not mentioned in any form. This was ~15 years ago.<br /><br />My first encounter that taught me that unit tests are important and (depending on the environment) the easiest way of writing tests was reading the book &ldquo;Clean Code&rdquo; [<a class='link' href='https://www.goodreads.com/book/show/3735293-clean-code' target='_blank'>Clean Code</a>] by Robert C. Martin.<br /><br />That got me going in the right direction. During the next few years I heard about writing &ldquo;tests first&rdquo; and TDD (Test-Driven Development). But I couldn&rsquo;t really imagine how one would write a test first, before any production code. How is that supposed to work?<br /><br />But the topic was so interesting that after a while I read the book &ldquo;Test-Driven Development&rdquo; by Kent Beck. He explains the discipline of TDD step by step, which helped me a lot. However, it still felt odd and I couldn&rsquo;t yet adopt the practice. But it didn&rsquo;t let go of me either. Somehow I was pulled towards it.<br /><br />Finally I have adopted it. And after practicing it for ~two years I have to say that, once I got used to it, it&rsquo;s one of the best practices I was able to learn in my life as a professional developer.<br /><br />Because aside from validating the production code, TDD has more advantages, which come almost automatically:</p>
<ul>
<li>with TDD you will end up having a good test coverage. That in turn makes you detect <strong>regression</strong> (changes to the code which might have broken something elsewhere).</li>
<li>you don&rsquo;t have to be scared for <strong>refactorings</strong>. Refactorings should always be done to clean up the code and to make things simpler. Having a good test coverage you don&rsquo;t have to worry too much about doing refactorings.</li>
<li>it is <strong>documentation</strong> for the production code, because tests <em>use</em> the production code. Hence other developers just need to look at the tests to find out how something works.</li>
<li>it enforces a better <strong>structure</strong> to your production code (at least that is what I have experienced). When tests get too complicated this is usually a sign that the production is too complicated as well. Then you should refactor. Refactor out components and create separate tests for them. Reduce your dependencies, etc. That will lead to less tightly coupled components and code.</li>
</ul>
<p>Make no mistake: if you do not write your tests first you will end up with a test suite that has a lot of holes. When you run it and it is &ldquo;green&rdquo; it doesn&rsquo;t really tell you a lot. Since you have not really covered everything, there is still a lot of potential for something to be broken.<br /><br />Software tests are similar to empirical tests in science: you cannot prove that a piece of software is bug free. The more tests you write and the better your coverage is, the more you can assume that you don&rsquo;t have a lot of bugs. From that you have to judge whether you trust your test suite enough to release or not. When do you trust your test suite? When you have applied TDD and have good coverage.</p>
 ]]></content>
    </entry>
    <entry>
        <title type="html"><![CDATA[ Dependency Injection in Objective-C... sort of ]]></title>
        <link href="http://retro-style.software-by-mabe.com/blog/Dependency+Injection+in+Objective-C...+sort+of"></link>
        <updated>2011-01-20T01:00:00+01:00</updated>
        <id>http://retro-style.software-by-mabe.com/blog/Dependency+Injection+in+Objective-C...+sort+of</id>
        <content type="html"><![CDATA[ <p>This post will be about Dependency Injection (DI) in Objective-C. DI, or the pattern behind it, IoC (Inversion of Control), is well known in the Java world. There are quite a few frameworks available, like Spring, EJB, Guice, etc.<br /> On the Mac I didn't find anything like it, so I've implemented a proof of concept.<br /> <br /> The goal was to inject a service class instance into another object, let&rsquo;s say a consumer of that service. Also it should be possible for the service object to be mocked and a different instance of the service to be injected for unit testing.<br /> <br /> Let&rsquo;s see. What we need first is some kind of registration facility where we can register classes by name. When an instance of that class is asked for, a new instance will be created and returned. Alternatively an instance of a class can be set for a name; then this instance will be returned instead. With this we can set mock objects for unit testing while in production the real class is used.<br /> <br /> This is the &ldquo;DependencyRegistration&rdquo; facility&rsquo;s interface:</p>
<pre><code>@interface DependencyRegistration : NSObject {
    NSMutableDictionary *classRegistrations;
    NSMutableDictionary *objectInstances;
}
+ (DependencyRegistration *)registrator;

- (void)addRegistrationForClass:(Class)aClass withRegName:(NSString *)aRegName;
- (void)removeClassRegistrationForRefName:(NSString *)aRegName;
- (void)clearClassRegistrations;

- (void)addObject:(id)anObject forRegName:(NSString *)aRegName;
- (void)clearObjectForRegName:(NSString *)aRegName;
- (void)clearAllObjects;

- (id)objectForRegName:(NSString *)aRegName;
@end
</code></pre>
<p><br /> Here are some relevant parts of the implementation:</p>
<pre><code>- (void)addRegistrationForClass:(Class)aClass withRegName:(NSString *)aRegName {
    [classRegistrations setObject:aClass forKey:aRegName];
}

- (void)addObject:(id)anObject forRegName:(NSString *)aRegName {
    [objectInstances setObject:anObject forKey:aRegName];
}

- (id)objectForRegName:(NSString *)aRegName {
    id anObject = [objectInstances objectForKey:aRegName];
    if(!anObject) {
        Class class = [classRegistrations objectForKey:aRegName];
        anObject = [[[class alloc] init] autorelease];
    }
    return anObject;
}
</code></pre>
<p><br /> This facility is implemented as a singleton.<br /> As you can see, a class or an instance of a class can be associated with a registration name. The</p>
<pre><code>-objectForRegName:</code></pre>
<p>method either creates an object from a registered class or uses a class instance if one has been set.<br /> <br /> Now how is this going to be of use? Let&rsquo;s continue. The next thing we need is a service protocol and a service class that implements this protocol:</p>
<pre><code>@protocol MyServiceLocal
- (NSString *)sayHello;
@end
</code></pre>
<p><br /> The protocol should be placed outside of the service class implementation, in a separate header file, something like &ldquo;Services.h&rdquo;.</p>
<pre><code>#import 
@interface MyService : NSObject &lt;MyServiceLocal&gt; {
}
- (NSString *)sayHello;
@end

@implementation MyService
- (id)init {
    return [super init];
}
- (void)finalize {
    [super finalize];
}
- (NSString *)sayHello {
    return @"Hello";
}
@end
</code></pre>
<p><br /> I&rsquo;ve mixed interface and implementation here, which normally are separated into .h and .m files.<br /> Good, we have our service.<br /> Now we create a consumer of that service that gets the service injected.</p>
<pre><code>#import 
@interface MyConsumer : NSObject {
    id myServiceInstance;
}
- (NSString *)letServiceSayHello;
@end

@interface MyConsumer ()
@property (retain, readwrite) id myServiceInstance;
@end

@implementation MyConsumer
@synthesize myServiceInstance;

- (id)init {
    if(self = [super init]) {
        self.myServiceInstance = INJECT(MyServiceRegName);
    }
    return self;
}
- (NSString *)letServiceSayHello {
    NSString *hello = [myServiceInstance sayHello];
    NSLog(@"%@", hello);
    return hello;
}
@end
</code></pre>
<p><br /> This is the consumer.<br /> The interesting part is the INJECT(MyServiceRegName). Now where does this come from? The INJECT is just a #define. The MyServiceRegName is also a #define which specifies a common name for a service registration. We can add this to the DependencyRegistration class like this:</p>
<pre><code>#define INJECT(REGNAME)     [[DependencyRegistration registrator] objectForRegName:REGNAME]
#define MyServiceRegName     @"MyService"
</code></pre>
<p><br /> In fact all service registration names could be collected in this class, but they could also live someplace else.<br /> The INJECT define does nothing else than get the DependencyRegistration singleton and call the -objectForRegName: method, which will either return an instance of a registered class or an already set object instance.<br /> <br /> The injection here occurs in an initialisation method.<br /> It could also be done via a setter or an init parameter like:</p>
<pre><code>[consumer setMyService:INJECT(MyServiceRegName)];
[[Consumer alloc] initWithService:INJECT(MyServiceRegName)];
</code></pre>
<p><br /> The way this is implemented, either every consumer gets a new instance of the service or all get the same instance, depending on whether an instance has been set in the DependencyRegistration object or not.<br /> <br /> Now let&rsquo;s create a unit test to see if it&rsquo;s working:</p>
<pre><code>#import &lt;SenTestingKit/SenTestingKit.h&gt;
#import 
@interface MyConsumerTest : SenTestCase {
    DependencyRegistration *registrator;
}
@end

@implementation MyConsumerTest
- (void)setUp {
    registrator = [DependencyRegistration registrator];
    [registrator addRegistrationForClass:[MyService class] withRegName:MyServiceRegName];
}

- (void)testSayHello {
    MyConsumer *consumer = [[[MyConsumer alloc] init] autorelease];
    STAssertNotNil(consumer, @"");
    NSString *hello = [consumer letServiceSayHello];
    STAssertEqualObjects(hello, @"Hello", @""); // compare string contents, not pointers
}
@end
</code></pre>
<p><br /> You will see that it works when you execute this test. Here just a class name is registered, which means that a new class instance is created and injected into the consumer.<br /> <br /> There is plenty of room for improvement here.<br /> In terms of Java, what we have is either an application-scope object (when a service instance has been added via -addObject:forRegName:) or a request-scope object (when no service instance has been added and one is created each time) passed to the caller.<br /> <br /> Well, after all the DependencyRegistration class is not much more than an Abstract Factory for multiple class types.<br /> <br /> <br /> Cheers</p>
 ]]></content>
    </entry>
</feed>