Consider the 4-step process S = abcd and the 3-step process T = xyz. The processes run concurrently and can interleave at any point, but each process must execute its own steps in order.
So abxcyzd is a valid interleaving, but baxcyzd is not. Also, steps in different processes can occur simultaneously: abxcyzd is different from a[bx]cyzd. How many valid interleavings are there?
I ran into this problem while studying models of communicating processes. I looked for an elegant mathematical formula for this, and not finding any, decided to brute force it in Raku instead.
Solving the problem
The first insight is that the actual names of the steps don’t matter. We’re not trying to list all of the interleavings, just count them. So we can represent the starting state of the system as (4, 3): abcd has four steps remaining and xyz has three. Let ct(S, T) be the total number of interleavings of processes S and T. From state (4, 3), three things can happen next:
S takes a step. The new state is (3, 3).
T takes a step. The new state is (4, 2).
S and T take simultaneous steps. The new state is (3, 2).
From there, we recursively repeat the process: ct(4, 3) = ct(3, 3) + ct(4, 2) + ct(3, 2).
Now for the base cases. If S and T are empty, there’s only one possible program execution: the one where nothing happens. ct(0, 0) = 1.
What if S=1000 and T=0? While we still have a lot of steps to go, there’s only one possible way to “interleave it”: execute S sequentially. ct(x, 0) = ct(0, x) = 1.
Now let’s put it all into a script:
multi sub gc($s, $t) {
    if $s | $t == 0 {
        1
    }
    else {
        sum samewith( $s - 1, $t - 1 ),
            samewith( $s - 1, $t ),
            samewith( $s, $t - 1 );
    }
}
The only “Raku tricks” here are samewith, which is just an easy way to do self-recursion, and the junction in $s|$t == 0. Let’s try it!
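The output itself isn’t reproduced in this excerpt; with the multi above in scope, a quick check would look like this:

```raku
say gc(4, 3);  # 129
```

129 is the Delannoy number D(3, 4), which matches the observation about Delannoy numbers at the end of this post.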
You’re not here for a math problem, you’re here to see Raku. So let’s make the problem more complicated. Instead of two processes, we have N, and any subset of them can take a step simultaneously. The same overall logic applies: for ct(3, 2, 1) we’d calculate:

ct(3, 2, 1) = ct(2, 2, 1) + ct(3, 1, 1) + ct(3, 2, 0) + ct(2, 1, 1) + ct(2, 2, 0) + ct(3, 1, 0) + ct(2, 1, 0)
Whew! So there are a couple of new problems here. The first is that for N processes, we have to sum 2^N – 1 terms. That’s not something we can hardcode. Second, some processes are “exhausted early”. But if there’s more than one process left, we can’t just replace the whole state with a 1. ct(2, 1, 0) still needs to be computed, and we shouldn’t sum in a ct(2, 1, -1) term.
Instead, we need to find every possible combination of processes with at least one step left. As we’ll see later, we specifically want the indices of these processes. Fortunately, Raku makes this easy:
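The snippet the following bullet points describe isn’t reproduced in this excerpt; its core, sketched for a concrete state (3, 2, 0), looks like this:

```raku
my @val = 3, 2, 0;

# grep with the :k adverb yields the *indices* of matching elements,
# and combinations(1..*) yields every non-empty combination of them.
my @combos = gather for @val.grep(* > 0, :k).combinations(1..*) {
    take $_;
}

say @combos;  # [(0) (1) (0 1)]
```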
gather and take turn a block into a sequence generator. The for loop now returns every value that’s taken as a single sequence.
$new[$_] takes advantage of “slices”: positional lookups can take a list of indices and return those elements in order. Critically, this returns the element containers, meaning we can mutate the original (cloned) array.
>>-- is a hyperoperator over postfix --, decrementing each element corresponding to the combination.
In the actual algorithm we don’t need to retrieve the new values, just their gcs, so we can instead take the samewith call and sum over the results.
Finally, we can’t just check $s|$t == 0: with N processes, we instead want to check that at most one process has steps remaining. This is easy with junctions:
multi sub gc($val) {
    if $val.none | $val.one > 0 {
        return 1
    }
    sum gather for $val.grep(* > 0, :k).combinations(1..*) {
        my $new = $val.clone;
        $new[$_]>>--;
        take samewith($new);
    }
}
We can make this cleaner and more robust, but this is already a good showcase of what Raku can do.
As mentioned before, this matters for estimating the number of behaviors of a concurrent system. This will always be an upper bound: most systems have additional restrictions on how processes may interleave (such as mutexes).
If we restrict ourselves to two processes, the number of interleavings is described by the Delannoy numbers (OEIS). For more than two processes, this paper describes how to calculate “higher-dimension Delannoy paths”. Looking at it, I’m not surprised I couldn’t find a simple equation!
My goal for this post is to convince you – yes, you, current reader – that you both can and should present a talk about Raku for The Perl & Raku Conference this year. Statistically, I’m assuming that you’re not currently planning to give a Raku talk, so I’ve got my work cut out for me. But I’m confident that I can change your mind.
First, I’d like to persuade you that you’re capable of giving a Raku talk. After that, we’ll get into why you should. So, what does it take to present a Raku talk at TPRC?
How to give a Raku talk at TPRC
In brief, to give a TPRC talk, all you need to do is: pick a topic, submit your talk proposal, prepare your talk, attend TPRC, and present your talk. But let’s be less brief and break that down, step by step.
You might worry that you need to be a Raku expert to give a talk. Not so. Sure, tales of advanced Raku hacking can make for nice conference talks. But so can the perspective of someone who just learned Raku and can talk about their first impressions of the language – after all, we all want Raku to be approachable to newcomers and to present a good first impression. Everyone working on the language and documentation tries to imagine ourselves as new users, but if you actually are a new user, then your perspective on Raku is instantly valuable.
In fact, I’m prepared to argue that you have something worth sharing with other Rakoons regardless of your experience level with Raku. To prove it to you, here’s a list of potential talk titles, suitable for anyone from the newest Rakoon to the most grizzled of old timers:
I didn’t know Raku before writing this talk: Raku first impressions
Learning Raku as my first programming language
Coming to Raku from JavaScript [or whatever other language you came from]
Lessons for Raku from functional [or Object-Oriented, or statically typed] programming languages
A new Rakoon’s perspective on the Raku documentation
What I learned writing my first Raku CLI script
My favorite Raku features and why they’re awesome
What I wish I knew then: lessons from my first 10 Raku scripts
How to explain Raku’s strengths to Python [or other language] programmers
Why I failed to release a Raku module
Lessons from writing my first Raku module
My new Raku module is neat and you should use it
Raku documentation: a contributor’s perspective
How to use this cool but complicated Raku module: a user’s perspective
Beginner compiler hacking: my first contributions to Rakudo
This Raku feature is cool and underappreciated – here’s how to use it in your code
Combine these Raku features for even more expressive code
Level up your compiler hacking by venturing into NQP
Raku best practices (or, my coding conventions and why I wish you’d follow them)
How to use this cool but complicated Raku module: the maintainer’s perspective
Writing high-performance Raku code: what I’ve learned
Performance improvements to Rakudo: what I’ve done
I’m literally Larry Wall: State of the Butterfly 2025
Please help yourself to any of those ideas – even if this post is the very first you’ve heard of Raku, I bet you could take at least one.
But here’s the real takeaway: regardless of your experience level, you’ve likely done something Raku-related recently – and sharing how that went would make a great Raku talk. Even – especially! – if the answer to “how that went” is “not great” and you have criticisms of Raku or suggestions for how to spare the next Rakoon the same not-great experience.
Step 1: Submit your talk
Once you have a topic, the next step is to submit it on PaperCall. Note that this step comes before writing your talk – all you need is a title and a very brief (300 characters) talk description. So don’t wait; submit your talk as soon as you have the idea!
You’ll also need to decide which length to request for your talk: 20 minutes or 50 minutes (each of which includes both your talk and Q&A after your talk). That’s something that only you can judge based on your topic and how much you have to say about it.
But if you’re on the fence, here’s my advice: Go for the longer talk. The Q&A is often the best part and, besides, sometimes tales grow in the telling and you discover that you have more to say than you thought. Better to have a 50 minute slot with 25 minutes for questions than a 20 minute one with negative 5.
After submitting your talk, you should hear back from conference organizers fairly soon. In general, the organizers make every effort to include as many talks as possible, so I wouldn’t be too nervous about whether your talk will be accepted.
Step 2: Write your talk
Okay, on to actually writing your talk. First, let’s talk format.
For most presenters, writing a talk often means making a slide deck, but it doesn’t need to – never forget that just talking to your audience is a perfectly valid option. If you decide to make slides, you can do so with LibreOffice Impress, Keynote, Google Slides, or any presentation software of your choice. Or, if you prefer to live in the terminal, lookatme or slides are fine too.
My main advice is not to overthink it. It’s tempting – trust me, I know. You’re talking to someone who once wrote a presentation by first building a Raku→HTML preprocessor, using that HTML to generate slides with Reveal.js, and then got distracted adding new features to Reveal.js. And, you know what? It wasn’t worth it. Lately, I’ve been making pretty basic slides with Impress and I doubt anyone cares one way or the other.
(Tip: if, like me, you’re tempted to do something fancy to have properly highlighted Raku code in your slides, remember that there’s nothing wrong with using screenshots of highlighted code instead – once it’s on the slide, it’ll look just the same.)
Now, on to the actual content – a realm where the “don’t overthink it” advice applies at least as strongly. If you’ve followed my advice so far, you’ve got something to say about the topic you picked (like reporting how your recent Raku experiences went). So just say that. Plan to be conversational; explain your topic just like you were talking to a friend. You don’t need to try to sound smart; you don’t need to avoid criticizing Raku or the Raku community. (Though you do need to avoid personal attacks, deadnaming, explicit images, racist language, or anything else that violates the Raku Code of Conduct. Rakoons are serious about making sure Raku talks are -Ofun for everyone and we take CoC enforcement seriously.)
In general, remember that you’re writing a talk for a friendly audience of other Rakoons; explain things to them like you’d explain to a group of friends and you’ll be well on your way to having a great talk.
Even though this step is titled “write your talk”, there’s one more task after you’ve written your slides or other talk materials: practice! This step is technically optional, but strongly, strongly recommended. I recommend running through your full talk at least twice – more is better, of course.
And, if at all possible, at least one run-through should have an audience who hasn’t heard the talk before. This person doesn’t need to be able to follow the details of the talk (non-Rakoons/non programmers are fine!) and could be watching over a video call instead of in person – the point is just to trigger that part of your brain that recognizes when you’re being watched and ensure you’ve had that experience for this talk before you get to TPRC.
The benefits of rehearsing with an audience might sound trivial, but it genuinely makes a tremendous difference. In fact, I believe in the power of that sort of rehearsal so much that I’d be happy to be your live audience. If you don’t have anyone else to practice your talk in front of, send me an email and we’ll set up a video call.
With at least a couple of rehearsals under your belt, you’re almost ready to give your talk.
Step 3: Attend TPRC
But first you have to actually get to the conference. For many of you, this could be the biggest hurdle and the one where I can offer the least useful advice.
This year, The Perl & Raku Conference will be in Greenville, South Carolina (USA). The Raku community is global – perhaps a consequence of our best-in-class Unicode support – and I recognize that some of you are on the other side of an ocean from Greenville.
If that describes you, then I understand if attending TPRC (which, at least this year, is an absolute requirement for presenting a talk) comes at too high a cost, in money or time. Even for people who don’t have an ocean between them and Greenville, the logistics of getting there can be a challenge.
But I’ll offer a few facts that might help. First, remember that – since you’re giving a talk – your conference ticket will be free, which could help keep the total costs down even with travel costs. If air travel is in your budget, you should look into flights for multiple airports. In addition to flying directly to Greenville, you can also fly to Atlanta, Georgia (about 2.5 hours away). The benefit of flying into Atlanta is that its airport happens to be the US airport with the absolute most connections – that is, the most cities that you can fly directly from. This not only simplifies flight logistics but can also (sometimes!) significantly reduce the total cost.
If flying isn’t in your budget, I’d also encourage you to consider driving from a longer distance than you might otherwise. Speaking personally, I’ll mention that I previously drove about 14 hours to a TPRC – and attending more than justified that drive. Plus, if you can extend your trip a bit, the area around Greenville is a great spot to spend some time in the outdoors.
Finally, if you just can’t make it to Greenville but would otherwise want to give a conference talk, please consider recording a talk for YouTube instead. True, there’s not a way to remotely present a recorded talk at TPRC, but there’s still a lot of value in preparing and presenting a talk. If you go this route, please let me know and I’ll do my best to ensure that your recorded talk gets the attention it deserves – maybe it could even be linked to from the conference-talk playlist (though I can’t personally guarantee that).
Step 4: Present your talk & take questions
Once you’re in Greenville, the only thing that’s left is to give your talk. If you’ve followed my advice so far, this step should be the easiest one of all. All you need to do is to deliver your talk exactly the way you’ve rehearsed it. I recognize that some people have anxieties around public speaking, but the good news is that TPRC is about the friendliest audience you could possibly have: everyone there is guaranteed to be interested in your topic and is rooting for you to succeed. In my experience presenting at various venues, the audience for Raku talks at TPRC is pretty much a best case scenario.
When it comes to Q&A, there’s equally little to be worried about. Yes, you may get some questions that you’re not expecting and don’t know how to answer – but it’s 100% okay to say so. No one will be interested in gotcha moments or exposing some mistake; they’ll just be asking out of genuine curiosity. Many times, the questions can spark genuinely interesting conversations, so just treat each question as the potential beginning to a conversation among friends and everything will go great. In fact, that gets to the biggest piece of advice I have for the presentation as a whole – approach the whole thing as a conversation and you’ll be on the correct (and low-stress!) path.
So, there you have it: come up with a topic, submit your proposal, prepare your talk, attend TPRC, and present your talk. There’s really no reason why you – yes, you! – can’t present a Raku talk at TPRC.
Why
That just leaves one question: given that you can present a talk, should you?
Yes. Yes, you should give a Raku talk at TPRC. Why? Because it will benefit you and because it will benefit the Raku community as a whole. Here’s how.
Good for you
Giving a Raku talk will make you better off in three main ways: you’ll learn from it, you’ll have fun, and you’ll connect with other Rakoons. Let’s discuss each of those in turn.
First, you’ll learn a ton from presenting your Raku talk. Trust me, no matter how well you think that you understand a topic, you’ll never truly master it until you’re able to explain it to others – which is exactly what you’re doing when you prepare and deliver a TPRC talk. Even before you arrive, the simple act of writing your presentation will force you to think through the topic from the perspective of your audience – that is, from the perspective of people who lack your particular expertise (even if that “expertise” is your unique perspective as a beginner in Raku!).
Moreover, remember what I said about the Q&A being conversational? You’ll also learn from the questions and feedback you get from your talk. In the vast majority of cases, these conversations won’t be limited to the formal Q&A period; as you interact with other Rakoons at the conference, they’ll continue to discuss your talk – and these conversations can be incredibly educational.
After all, the Raku community is small enough that (unless you’re incredibly lucky!), the conference is probably the only place where you can have face-to-face conversations with other programmers who are just as interested in Raku as you are. It’s entirely possible that you’ll leave the conference with ideas for your next Raku project or new techniques that improve your Raku code.
(Note that I’m passing over all the ways that you’ll learn from the other Raku talks and from discussing them with fellow attendees since these benefits depend on attending TPRC and not specifically on presenting a talk. But, for the record, just attending is a tremendous learning opportunity too and well worth it even if you ignore my advice about giving a talk.)
Next, presenting a talk is just plain fun. As I’ve mentioned a few times, the Raku community is pretty awesome (or, to use the Raku terminology, “optimized for fun” aka -Ofun); getting to hang out with fellow Rakoons – people who are typically weird in the same ways anyone who has read this far most likely is – is a real treat. That’s true as a conference attendee, too. But it’s doubly true as a presenter, since your conference talk provides a built-in conversation topic with anyone who saw your talk; the perfect ice breaker.
Talking about how much fun the conference can be might sound like a momentary benefit, but it’s really not. From personal experience, the enjoyable conversations you’ll have at TPRC – about your talk, about other talks, or just about Raku in general – can provide incredible motivation for future Raku hacking. And that motivation lasts far past the end of the conference itself.
Finally, giving a Raku talk is a great way to integrate into the Raku community. Much of coding is inherently a solo activity – for all that pull requests, collaborative coding, and code reviews bring other people in, the actual writing of code is something you do alone. Of course, we Rakoons share our work, whether that’s via GitHub, r/rakulang, the #raku IRC channel, or any of the other online spaces we inhabit. But there’s really no substitute for face-to-face meetings; for the chance to put a real-life person to the usernames and project maintainers you’ve interacted with online.
This flows in both directions – at the same time you’re meeting people, they’re meeting you too. And you benefit from both knowing and being known; in either direction, all future interactions will be colored by having met, by mutually seeing each other as three-dimensional human beings rather than flat online avatars.
And while this benefit is most dramatic for the people you meet at the conference, it’s not at all limited to them. As I mentioned, your talk will be posted to YouTube and thus will be seen by many Rakoons who aren’t able to attend TPRC. Obviously, you won’t be able to meet these Rakoons, but they’ll still get to see you as a person, not just a name. You’ll benefit from that parasocial bond in each future interaction with the community as a whole – and maybe even from the higher profile that comes from being not just a member of the community but a meaningful part of the conversation.
Bottom line: presenting a Raku talk will be good for you in the short-term (it’s fun and you’ll learn a ton); in the medium term (you’ll be motivated to put what you’ve learned into practice and will have an easier time working with others as you do so); and in the long term (you’ll play a larger role in the Raku community and have a higher profile in our community).
Good for Raku
The personal benefits to you come paired with benefits to the Raku community as a whole – something I’m guessing you care at least a little bit about if you’ve read this far.
Most obviously, other Rakoons will get to hear your unique perspective and the actual content of your talk. Just as you learn from us, we’ll learn from you. Similarly, conversations with you will motivate us, too. And we all benefit from seeing each other as real people rather than avatars. More broadly, if you’re interested enough in Raku to give a conference talk, you’re -Ofun too and we’ll all be better off for your decision to present a talk.
Further, you’ll help Raku in a way that goes beyond existing Rakoons. When your talk is posted online, it will be visible to the Raku-curious as well and will play a real part in raising Raku’s overall profile. I know that some of my first exposure to Raku came from watching conference talks about the language and I’m sure that I’m far from alone. Putting your own 2¢ out there could be exactly what convinces someone to check out the language; each talk we post plays a real role in growing the community around the language we all love.
Fin
So, there you have it. You can give a Raku talk. You should give a Raku talk. Sign up today!
Yes, the year is almost over, and Santa Claus needs to get his work done, helped of course by many volunteers. But sometimes it is a bit too much. Well, Santa Claus is a tough man, I know that for sure; he can handle stress! The other day, however, he came rushing in, terrified, pale around the nose. That was something new for me. He said there was a huge shortage of advent articles and that we needed to write several documents, ‘and quick, thank you very much!’
Luckily, the department was already working on just such a piece. It is all about the Raku language binding to the native libraries of Gnome. This has been done before; examples are GTK::Scintilla and GTK::Simpler by azawi, GTK::Simple by Jonathan Worthington (currently maintained by Richard Hainsworth), and GtkLayerShell by CIAvash.
About Gtk version 4
This department already wrote the Gnome::Gtk3 repository and its dependencies. The modules found there are all generated from the C source files. However, there was growing concern that the generator had become too complex to handle all the variations showing up in the C source code, and the idea of producing a binding for Gtk version 4 made it all but impossible. The other problem was that installation of the packages took a long time. So other paths were investigated. To make a long story short, this resulted in the making of Gnome::Gtk4 and a rewrite of all the dependencies. Some packages were removed, like Gnome::Cairo; others were added, like Gnome::Pango, Gnome::Gsk4 and Gnome::Graphene.
Small appetizer
use Gnome::Gtk4::Window:api<2>;

with my Gnome::Gtk4::Window $window .= new-window {
    .set-title('My First Window');
    .show;
}
Simple, isn’t it? But it doesn’t do much yet. Besides, we will take a jump into the deep end and make it a bit more complex by using an ApplicationWindow instead of a Window. First a few pointers: in the first line you see a not-much-used tag, ‘:api<2>’, added to the module name. This is necessary to load the proper set of modules on which Gnome::Gtk4::Window depends. The second detail is the initialization in the with statement: the new() calls used in Gnome::Gtk3 and its dependencies have been changed to longer names, in this case .new-window().
Now, let’s make a program that does something. The result is the same as the 01-hello-world.raku example from the GTK::Simple package. For us, it was the start of understanding the native bindings and testing the packages. When you compare the programs, you will notice that the code below is more low-level with all the pros and cons.
First, import the needed modules, and notice the ‘:api<2>‘ tags.
use Gnome::Gio::T-ioenums:api<2>;
use Gnome::Gtk4::Button:api<2>;
use Gnome::Gtk4::Grid:api<2>;
use Gnome::Gtk4::Application:api<2>;
use Gnome::Gtk4::ApplicationWindow:api<2>;
Then some conveniences to make names somewhat shorter.
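The aliases themselves are elided in this excerpt. Based on the bare names used in the code further down (Application, Button, Grid, ApplicationWindow), they were presumably constants along these lines:

```raku
my constant Application       = Gnome::Gtk4::Application;
my constant ApplicationWindow = Gnome::Gtk4::ApplicationWindow;
my constant Button            = Gnome::Gtk4::Button;
my constant Grid              = Gnome::Gtk4::Grid;
```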
Announce the class we need to use (HelloWorldApp). Normally the department would write the class in a separate file and then import it; here, for the example, we keep all the code in one file. After that, we initialize the class using an application ID, a string which is usually a reversed domain name. Then the call to .run() will show everything after the user interface is built. It stays there until a .quit() routine is called, returning a status.
class HelloWorldApp { ... }

with my HelloWorldApp $app .= new(:app-id<org.gtk.example>) {
    my Int $status = .run;
    say "Exit status: $status";
}
Now we’ll write the HelloWorldApp class. First, we will save the Application instance which we will use elsewhere. Notice that it handles the ‘.run()‘ method called from outside.
class HelloWorldApp {
    has Application $!app handles <run>;
The ‘BUILD()’ submethod initializes the application and sets two signal handlers: one to do the work of building the user interface, and the other to stop the machinery when one of the decoration buttons on the title bar is pressed. The callback routines, ‘.do-work()’ and ‘.app-shutdown()’, are defined later in the class (hence the use of self).
    submethod BUILD ( Str :$app-id ) {
        say 'start the works';
        $!app .= new-application( $app-id, G_APPLICATION_FLAGS_NONE);
        with $!app {
            .register-signal( self, 'do-work', 'activate');
            .register-signal( self, 'app-shutdown', 'shutdown');
        }
    }
In the following method, the user interface is built. We start by creating the buttons. When the buttons are pressed, the routines ‘.b1-press()’ and ‘.b2-press()’ are called; those methods are also defined in this class. The button $button2 is made insensitive so that it does not react when clicked.
    method do-work ( ) {
        with my Button $button2 .= new-with-label('Goodbye') {
            .register-signal( self, 'b2-press', 'clicked');
            .set-sensitive(False);
        }

        with my Button $button1 .= new-with-label('Hello World') {
            .register-signal( self, 'b1-press', 'clicked', :$button2);
        }
Then a grid is created with some space around its content, and the buttons are added to this grid. The format is ‘.attach( object, x, y, nbr-cols, nbr-rows)’, so the buttons will appear stacked vertically in one column.
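The grid code itself is elided in this excerpt. As a sketch of what it plausibly looked like (the .new-grid, .set-margin-* and .attach calls are assumptions based on the Gnome::Gtk4 naming conventions described above):

```raku
        # Hypothetical sketch: create the grid, add some margin, and
        # stack the two buttons in a single column.
        with my Gnome::Gtk4::Grid $grid .= new-grid {
            .set-margin-top(10);
            .set-margin-bottom(10);
            .set-margin-start(10);
            .set-margin-end(10);
            .attach( $button1, 0, 0, 1, 1);   # column 0, row 0
            .attach( $button2, 0, 1, 1, 1);   # column 0, row 1
        }
```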
Finally, we add the application window in which we insert the grid.
        my ApplicationWindow $win .= new-applicationwindow($!app);
        with $win {
            .set-title('Two Buttons');
            .set-child($grid);
            .show;
        }
    }
The rest of the methods are needed to handle the buttons’ click events and to stop the application. Note the use of ‘Button()’ as opposed to ‘Button’: the named argument ‘:$_native-object’ is, as the name suggests, a native object that must be coerced into the Raku class Button. The named argument ‘:$button2’ is provided because we passed it in the call ‘.register-signal( self, 'b1-press', 'clicked', :$button2)’ for $button1 above.
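The handler methods themselves are elided in this excerpt. Purely as a hypothetical sketch (the signatures and bodies are assumptions based on the description above, not the post’s actual code), they could look something like:

```raku
    # Hypothetical sketch; parameter names assumed from the prose above.
    method b1-press ( Button() :$_native-object, Button :$button2 ) {
        $_native-object.set-sensitive(False);  # 'Hello World' goes insensitive
        $button2.set-sensitive(True);          # 'Goodbye' becomes responsive
    }

    method b2-press ( Button() :$_native-object ) {
        $!app.quit;                            # stop the application
    }

    method app-shutdown ( ) {
        say 'shutting down';
    }
}
```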
A window shows up with two buttons [1]. The lower one is not sensitive. When you click the ‘Hello World’ button, you will see the ‘Goodbye’ button become responsive and the ‘Hello World’ button become insensitive. The application exits when either the ‘X’ button in the top right corner of the window [2] or the ‘Goodbye’ button is pressed.
So then, this is a nice document for Santa Claus; he will be happy, and hopefully you will be too.
To install the distributions, you only need to install the Gtk4 distribution like so (note the quotes, which prevent the shell from treating the angle brackets as a redirect):

zef install 'Gnome::Gtk4:api<2>'
Of course, you must take care that the required native libraries are installed. Unfortunately, we have no experience with operating systems other than Linux.
1. The colors will depend on the theme you are using on your computer. The department uses a dark theme, so the image of the window and buttons is dark.
2. This button is placed in the title bar and may have a different symbol depending on your theme.
Raku has superb support for Iterables — for example, map is almost a basis for whatever operation you’d like to perform on your iterable, and there is the Iterator protocol, which is kind of a NAND gate you can use to build any circuitry over your iteration logic.
But wouldn’t it be even finer to have more abilities on Raku’s iterables (and also Strings!) that are packed in, or ab5tracted out, to high-level functions that hopefully speak for themselves in their names? I hope so!
The rest of this post will be a series of little “challenges”: do a task on a given Iterable/String using Raku’s batteries, and we’ll present alternative black boxes along the way.
Task: Given a list of numbers, e.g., [2, 14, 111, 1787], double the primes in it and leave the non-primes as is.
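The post’s solutions are elided in this excerpt; the plain map baseline it compares against would be something like:

```raku
my @nums = 2, 14, 111, 1787;
# Double each prime, pass non-primes through unchanged.
say @nums.map({ .is-prime ?? $_ * 2 !! $_ });  # (4 14 111 3574)
```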
The supposed advantage is that it’s somewhat more concise, and the name may tell the reader what it’s doing more quickly than the map call does. Also, the speed of execution should be higher, given that it’s designed for a specific task. These will be the themes all along.
Okay, this was not super exciting to wrap in a function. But we move on:
Task: Given an iterable, e.g., [1, 2, 4], infinitely yield its elements by wrapping to beginning, i.e., cycling.
Solutions: We can list-repeat the given iterable infinite amount of times with the xx operator. But it won’t flatten automatically, e.g., it’d yield [1, 2, 4], [1, 2, 4], ..., so we slap in a flat:
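The snippet itself is elided in this excerpt; the xx-plus-flat approach described above looks like this:

```raku
my @it = 1, 2, 4;
# xx * repeats the list lazily, flat turns the lists into one stream.
say flat(@it xx *).head(8);  # (1 2 4 1 2 4 1 2)
```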
In a quick benchmark (100 runs) over a 10,000-length array with 50,000 elements requested from the stream, cycle turned out to be ~30% faster! Well, this should be kind of expected, as it’s specifically tailored for the exact task.
Task: Given an iterable, e.g., [7, 8, 9], yield index-value tuples, e.g., (0, 7), (1, 8), (2, 9). The starting index should be adjustable, e.g., if we want to enumerate from 1, then it should yield (1, 7), (2, 8), (3, 9).
Solutions: If not for the arbitrary starting index, this task is nothing but a .kv call. But it doesn’t recognize a starting index other than 0. map can do it though:
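The map solution itself is elided in this excerpt; one way to write it, shown here with a starting index of 1:

```raku
my @it = 7, 8, 9;
# .kv flattens to (0, 7, 1, 8, 2, 9); an arity-2 block consumes two at a time.
say @it.kv.map(-> $i, $v { ($i + 1, $v) });  # ((1 7) (2 8) (3 9))
```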
The need for the index to start from a nonzero value (most often 1) arises because human indexing is 1-based: for example, in a loop you’d like to log "{$n}th iteration..." and neither starting from “0th iteration…” nor writing {$n + 1} feels aesthetically pleasing (to me at least), hence a thin wrapper.
Task: Given an iterable, detect if all of its elements are the same or not.
For example, [] and [2, 2, 2] should give True (former is vacuously true) and [4, 8, 8, 8 ... (ad infinitum)] should yield False. Note that the algorithm should short-circuit, i.e., once it finds 2 different elements, it should stop to report False.
Solutions: Since this is an aggregation, i.e., we want a single value out of multiple values (many-to-one if you will), map is not the go-to tool.
A naive attempt would be to get the .unique values of the Iterable: if it has cardinality 1, return True; otherwise False. But it would fail on “obviously” False infinitely-long samples, because it would need to exhaust the Iterable to construct the unique values, only to measure their count and throw the values themselves away (which we didn’t really care about to begin with).
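For a finite iterable, that naive version is a one-liner (note <= 1 rather than == 1, so the empty case comes out vacuously True):

```raku
say [2, 2, 2].unique.elems <= 1;   # True
say [4, 8, 8].unique.elems <= 1;   # False
# ...but on an infinite input this would try to exhaust the whole sequence.
```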
So, alternatively, we can check if all the elements of the Iterable are the same as the first one. This does seem doable in a lazy manner (unfortunately Junctions don’t short-circuit yet, so they are not to be used here):
>>> my \it = [4, |(8 xx *)]
>>> it.grep(* != it[0]).not
False
We are using double negation: one in the predicate to grep, one afterwards to query its emptiness. Thankfully, grep is lazy: the chained .not call only needs to know whether there were any matches, so as soon as a match is found, the whole expression returns False. In essence, we are applying De Morgan’s rule to mimic what the all-junction would do if it short-circuited. (Conveniently, it also works for an empty iterable, since there is nothing to grep and it boils down to ().Seq.not, which is True.)
Our wrapper for this is is-all-same:
>>> my \it = [4, |(8 xx *)]
>>> it.&is-all-same
False
Now we don’t have to double-negatively think anymore 🙂
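Here is a minimal sketch of what such a wrapper could look like (the module’s actual implementation may differ):

```raku
sub is-all-same(\it) {
    # No element differing from the first means all are the same.
    # grep is lazy, and .not only needs to know whether a first
    # match exists, so this short-circuits on infinite input.
    it.grep({ $_ !eqv it.first }).not
}

say [2, 2, 2].&is-all-same;        # True
say [].&is-all-same;               # True  (vacuously)
say [4, |(8 xx *)].&is-all-same;   # False (short-circuits)
```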
Task: Given an iterable and one or more index-value pairs, “insert” the new value(s) at the desired index(es). For example, ["e", "f"] and 0 => "d" should yield ("d", "e", "f").Seq.
Should also work for strings, e.g., "sing" with 1 => "tr" should yield "string".
Solutions: We can first Hashify the given pairs, then map the kv-ed iterable and slip the new value together with the old one:
>>> my \it = ["e", "f"]
>>> my @pairs = 0 => "d"
>>> my %ph = @pairs;
>>> it.kv.map({ %ph{$^idx}:exists ?? |(%ph{$idx}, $^val) !! $^val })
("d", "e", "f").Seq
# For strings, we comb -> operation -> join
>>> %ph = @pairs = 1 => "tr"
>>> "sing".comb.kv.map({ #`[same as above] }).join
"string"
The special case of 0 => $new-value is somewhat common, i.e., prepending a value to a given iterable. The direct way to go could be ($new-value, |@iterable), but this has the (undesired) effect of copying the entire iterable, whereas sometimes (oftentimes) all we want is for the new value to be yielded first, followed by whatever the iterable has; we don’t need to incur an O(N) operation here.
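One lazy way to sketch such a prepend (the sub name here is hypothetical) is gather/take, which streams the original iterable instead of copying it up front:

```raku
sub prepended($new, \it) {
    gather {
        take $new;        # yield the new value first...
        take $_ for it;   # ...then stream the iterable lazily
    }
}

# Works even with an infinite source, since nothing is copied:
say prepended(0, 1 .. *).head(4); # (0 1 2 3)
```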
Task: Given an iterable, group the consecutive values into buckets using the given criterion (e.g., sameness) and under a possible transformation (values themselves by default, or, e.g., .keys of the items).
For example, [1, 2, 2, 6, 6, 6, 2] should yield (1 => (1,), 2 => (2, 2), 6 => (6, 6, 6), 2 => (2,)). Note that the last 2 is apart from the first group, i.e., consecutiveness is broken, so it’s grouped separately.
This is actually how Unix’s uniq works (on a stream of data). An example with a transformation could be [a => 2, a => 3, b => 3] with *.key to get (a => (a => 2, a => 3), b => (b => 3,)). This parameter is called “as” in many routines in Raku, e.g., classify.
An example with a criterion for comparison could be to use =:= to ensure container-level equality. This parameter is called “with” in many routines in Raku, e.g., unique.
Solutions: Left as an exercise for the reader 🙂 We propose group-conseq to achieve this:
# Elements themselves are the groupers by default
>>> [3, 4, 4, 5, 4].&group-conseq
(3 => (3,), 4 => (4, 4), 5 => (5,), 4 => (4,)).Seq
# They are all the same, really
>>> [1, -1, 1, -1, 1, -1].&group-conseq(as => &abs)
(1 => (1, -1, 1, -1, 1, -1)).Seq
# Respect the container for sameness
>>> my $a = 7
>>> ($a, $a, 7).&group-conseq(with => &[=:=])
(7 => (7, 7), 7 => (7,)).Seq
# Case insensitive detection of consecutive duplicates in a string; typos?
>>> my $s = "how aree youU?"
>>> $s.&group-conseq(as => &lc).grep(*.value > 1)
(e => (e, e), u => (u, U)).Seq
Task: Given an iterable, take values from it until the given predicate fails. For example, [2, 5, 17, 8, 3, 13] with &is-prime should give (2, 5, 17).Seq.
Solutions: We can last the map if we see the predicate fail:
>>> [2, 5, 17, 8, 3, 13].map({ last if !.&is-prime; $_ })
(2, 5, 17).Seq
So we have seen several mundane-looking tasks (that actually come up in real scenarios more often than you might expect) and some solutions from the batteries of Raku, as well as ab5tracted-out counterparts of them.
All the functions presented here are actually from the third-party module Iter::Able (and they also support String and Iterator inputs whenever applicable) — there are currently ~30 functions implemented, and the aim is to implement many more, drawing inspiration from other languages and libraries, as well as our imaginations. I thank you for your presence.
While designing an event system for a personal project, I ran across a requirement which I knew could be implemented elegantly with Raku’s metaprogramming capabilities. Specifically, I wanted both sync and async events, with the sync events allowing mutation of fields (e.g. for cancellation), and the async ones being merely informational and thus immutable.
First, I needed an Event role to group eventy behavior and be inherited by all the event classes. Then, choosing mixins over inheritance, I decided that sync and async would also be roles (Sync and Async) not related to Event in the type hierarchy, but rather mixed into instances of Event‘s inheritors as appropriate.
# no relations between these types!
role Event is export {}
role Sync is export {}
role Async is export {}
Using mixins for the sync/async distinction here eliminates the need to write two versions of each event class, one doing a theoretical SyncEvent role and the other a theoretical AsyncEvent role (which would in turn both do the Event god-role). Instead, we write the inheriting class once and reuse it—a much cleaner design!
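To make the design concrete, here is a hedged sketch of what instance-level mixing looks like (PingEvent is a hypothetical event class, not from the real project):

```raku
role Event {}
role Sync  {}
role Async {}

class PingEvent does Event {}  # hypothetical event class

# `but` mixes a role into a single *instance* at runtime,
# leaving the class itself untouched:
my $sync-ev  = PingEvent.new but Sync;
my $async-ev = PingEvent.new but Async;

say $sync-ev  ~~ Sync;   # True
say $async-ev ~~ Sync;   # False
say $async-ev ~~ Event;  # True
```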
However, this presents the issue indicated in the topic. Because sync—but not async—events are meant to be mutable by event handlers, how do we write each event class once yet have differing behavior without duplicating logic in every method (shown below)? After all, the Sync and Async roles cannot have advance knowledge of the exact structure of what they’ll be mixed into, so they can’t provide the differentiating functionality.
class ConfigLoadEvent does Event {
has Int $!timeout;
# Horribly tedious!
method get-timeout { $!timeout; }
method set-timeout(Int $timeout) {
fail unless self ~~ Sync;
$!timeout = $timeout;
}
}
The solution is a type-aware (specifically mixin-aware) trait which I decided to name sync-rw. This will be applied to attributes of the various classes doing the Event role which have some data that sync handlers can change but which ought not to be touched by async handlers.
Like Raku’s built-in rw trait, this trait allows us to run some code at compile time to alter runtime behavior in and around the language object we applied said trait to. “Language object” could mean subroutine, class, method, parameter, and so on. In this case, we want a trait specifically for use with object attributes, i.e., stuff declared with the has declarator.
The rw trait can also be applied in other places such as parameter declarations.
With that said, what precisely should sync-rw do to the attribute? Actually, nothing. That wasn’t a trick question, either: the fact is, we needn’t change anything about the attribute itself, but rather the code that gets generated for the class containing the attribute.
For those unfamiliar, Raku generates an accessor for you when you use the .twigil on an attribute, as shown below. By default, this accessor returns a readonly container. With the rw trait, it returns a writeable container, again as demonstrated below. Note that when I say “accessor” I don’t mean anything special at a language level; I just mean a method that happens to provide access to an attribute. This results in what looks like direct attribute access à la C structs but is in fact a method call whose name happens to match the attribute name and which happens to give you access to the attribute.
class Foo {
has $!secret; # no accessor
has $.bar;
has $.baz is rw;
}
my Foo $foo .= new: :2bar, :2baz;
say $foo.bar;
$foo.baz = 3;
say $foo.baz(); # parens to demonstrate this is a method
This ability to return a writeable container is not unique to code generated for classes by the compiler. You, the user, can get this functionality with return-rw. That and the Metaobject Protocol (MOP) are about all we need to write our sync-rw trait.
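Here is a tiny, self-contained illustration of return-rw at the user level:

```raku
my $x = 10;

# `is rw` marks the routine as returning a writable container;
# return-rw hands back the container itself rather than a copy.
sub access is rw { return-rw $x }

access() = 42;  # assigning through the sub call mutates $x
say $x;         # 42
```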
multi trait_mod:<is>(Attribute $attr, :$sync-rw!) is export {
$attr.package.^add_method:
$attr.name.substr(2),
anon method :: {
if self ~~ Sync {
return-rw $attr.get_value: self;
}
elsif self ~~ Async {
return $attr.get_value: self;
}
else {
die "{ self.^name } with attributes marked `is sync-rw`"
~ ' must have either Sync or Async mixed in';
}
};
}
Let’s break down this code. First, trait_mod:<is> is how we refer to the is pseudo-operator.
While you can treat is like an operator in some ways, such as adding a new candidate like we are here, it is special-cased because it needs to do special (compile-time) things.
We’re using a multi here because there are already built-in definitions for is, and we want to add another candidate. This candidate takes an Attribute (a meta-class representing an object attribute) as its left-hand argument and the bare term sync-rw as its right-hand argument. The colon makes $sync-rw a named parameter, which normally we would pass with a colon on the caller side, but is has some syntax sugaring, so we needn’t write is :sync-rw. The exclamation makes it required (named params are optional by default).
We’re exporting this routine because we want it available to other code units. (Yes, export is also a trait!) Looking at the body of the routine, our first line gets the package in which the Attribute lives, which will be the class that has the attribute. We use a meta-method call (indicated by the caret) to add a method to that class. Another way of saying “meta-method call” is “method call on the object’s meta representation.”
The first parameter for add_method is the new method’s name. We take $attr‘s name, strip the first two characters (which will be the sigil and twigil like $.), and use that for the method name. Just like the built-in accessor we get from the . twigil, this will make the generated method’s name match the attribute name.
If Raku generates a method, and we’re also generating a method with the same name, will the two code generation processes interfere? Nope. Raku’s built-in accessor generation only happens if there is not already a method defined with that name, allowing the user to define a custom accessor (or method which does something entirely different) without it getting steamrolled. The way we’re defining the method here is out of the ordinary, but the result is the same, and Raku will refrain from generating the default accessor.
The second parameter is an anonymous method object. The anon declarator prevents the symbol from being installed in any scope or symbol table. The root package :: is used in place of a name. While you can give anon methods a name (which only the method itself would know), we don’t need or want a proper name here.
The body of the method checks to see whether the invocant (self, the object on which the method will be called at runtime) is Sync, Async, or neither. If it’s Sync, we return-rw; if it’s Async, we do a regular return; and if it’s neither, we generate an error.
We retrieve the actual value with $attr.get_value: self. It needs to be passed the self instance because Attributes represent a part of your code. They know what class they’re in, but they’re at a class (or package) level, not an instance level. Once we give it the instance, $attr knows how to retrieve the relevant value from that instance.
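A standalone illustration of this class-level/instance-level split, using a small hypothetical class:

```raku
class Point { has $.x }

# Attributes live at the class level; note the name is '$!x'
# even for $.-declared attributes.
my $attr = Point.^attributes.first({ .name eq '$!x' });

# Only when given an instance can the Attribute produce a value:
say $attr.get_value(Point.new(x => 3)); # 3
```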
As an aside, we don’t strictly need the Event role for this minimized example. For the real thing, we can and should check inside our trait_mod:<is> candidate that the attribute’s package (class) does the Event role, throwing an error otherwise.
And that’s all the code we need. I spent a while explaining the why and how, but the final volume of code is barely over a dozen significant lines. For a task that delves into metaprogramming, Raku gets us to the solution shockingly fast and with nearly zero boilerplate code.
Now, we’ll throw the role and trait definitions together in an Event.rakumod file so we can test it with the below .rakutest file.
use Test;
use lib '.';
use Event;
class Foo does Event {
has $.attr is sync-rw;
}
# The value can be updated when the instance is Sync
my $rw = (Foo but Sync).new(attr => 'old');
lives-ok { $rw.attr = 'new' };
ok $rw.attr eq 'new';
# But not when Async
my $non-rw = (Foo but Async).new(attr => 'old');
throws-like { $non-rw.attr = 'new'; }, X::AdHoc,
message => 'Cannot assign to a readonly variable or a value';
throws-like { Foo.new(attr => 'invalid').attr; }, X::AdHoc,
message => 'Foo with attributes marked `is sync-rw` must have either Sync or Async mixed in';
And there we have it! All the tests pass, and we have a generic solution which is easily reused and prevents a lot of nasty code duplication. One could argue that meta-object fiddling is nasty in itself, but it’s O(1) nastiness compared to the O(n) nastiness that code duplication would be. In the end, I find this a rather elegant interface because it parallels Raku’s built-in rw trait and requires so little effort on the usage side. Slap is sync-rw on and that’s it.
Hopefully this serves as a lesson in language design, too. Such an elegant yet easy-to-implement solution is only possible because Raku exposes to the user (nearly) all the same tools the language designers have, with a concise and straightforward interface out of the box—no third-party tools needed. If you’ve ever done reflection in Java, you’ll understand how much there is to appreciate here.
Finally, I’d like to give credit to guifa from the Raku IRC for the exact form of the trait code. Prior to their suggestion, I had something much messier because I didn’t know what was available and how well things would DWIM.
Many of us have been in a situation where we need to whip up a quick presentation. We might enjoy doing a talk on the subject material, but the thought of having to make presentation slides can take some joy out of it.
One of my requirements was to make a presentation that can be written in plain text, and publicly shared over the internet (not the web, we’ll get to that part later).
When I saw people using Gemini for all sorts of things, I started wondering if it would work for presentations as well.
What is Gemini?
Gemini is an “internet communication protocol for accessing remote documents”. Think of it as a lightweight ‘web’ where instead of HTML documents served over HTTP, we have Gemtext documents served over the Gemini protocol.
We can see the Gemtext spec is extremely lightweight, and deliberately does not have a lot of features.
However, this has not stopped people from using it. There is an entire world of Gemini and Gopher to be explored. There are a lot of Gemini capsules and Gopher holes out there that are worth checking out!
Writing our presentation
Let’s create a presentation on, say, functional programming, and start with a slide.
We’ll need some way of navigating forwards and backwards, so this can be done with links.
Our first slide may look something like this.
## Map
> "I have a bunch of things, I want to transform them all to some other bunch of things."
=> gemini://test.samebchase.com/presentations/fp-intro/map.png Map
=> gemini://test.samebchase.com/presentations/fp-intro/7.gmi next
=> gemini://test.samebchase.com/presentations/fp-intro/5.gmi prev
Hmm, this part of adding forward and backward links can get annoying and error prone to type out manually. If only there were some fun programming language with which all this drudgery could be automated… 😉
Adding more slides
Now, we need to decide upon a way to add more slides. Let’s use a totally arbitrary separator like === to denote the boundary between slides. We can split the file on this separator, and generate the individual gemtext files.
Raku to the rescue
#!/usr/bin/env raku
# Example structure of presentation.gmi
# All the gemtext you want on a slide separated by ===
#`{
===
## Title for slide 1
* Point 1
* Point 2
===
## Title for slide 2
* Point 1
* and so on...
===
}
sub MAIN() {
my $presentation-name = 'fp-intro';
my $output-dir = "generated";
my $url-prefix = "gemini://test.samebchase.com/presentations/$presentation-name";
my $presentation-cont = slurp "presentation.gmi";
my @slides = $presentation-cont.split('===');
for @slides.kv -> $idx, $slide {
my $slide-cont = $slide ~ "\n";
my $filename;
# We add an index.gmi as the first slide so that when the dir
# is opened this file gets served the same way as index.html
if $idx == 0 {
$filename = "index.gmi";
$slide-cont ~= "=> $url-prefix/{$idx.succ}.gmi start\n";
}
else {
$filename = "$idx.gmi";
# Show the next slide link for all but the last one
if $idx < @slides.end {
$slide-cont ~= "=> $url-prefix/{$idx.succ}.gmi next\n";
}
# For the second slide, the previous slide link is the
# index.gmi
if $idx == 1 {
$slide-cont ~= "=> $url-prefix/index.gmi prev\n";
}
else {
$slide-cont ~= "=> $url-prefix/{$idx.pred}.gmi prev\n";
}
}
my $file-path = "$output-dir/$filename";
spurt $file-path, $slide-cont;
say "generated: $file-path";
}
}
That’s all the code we need to generate the individual gemtext files for each slide in the presentation.
Uploading the files to the node
We need to copy these generated .gmi files to a directory that your Gemini server is serving as part of your Gemini capsule (a Gemini ‘website’, so to speak). I used good old rsync to copy all those files to the node where the Gemini server is configured and running.
The presentation is ready! I placed it in a folder called presentations/fp-intro, so we can open that using any Gemini client.
Viewing the presentation
I like using Lagrange, so let’s launch that and point it here!
Viewing the presentation served over Gemini using the Lagrange Gemini client.
This is what it looks like. You can also use any of the other available Gemini clients as well. There are graphical clients, command line clients, clients that run in Emacs and clients that run on mobile phones. Use whatever you like.
Why (not) use Raku for this?
This is all rather straightforward and can be done in any programming language.
You may wonder: “Why can’t I just use $MAINSTREAM-PRODUCTIVITY-LANG for this task?” I contend that, for situations where massively popular and well-regarded libraries are not required, why not make it fun for yourself by trying out something new? You may end up having a good time, and learning a few neat tricks in the process.
As we have seen, with a bit of Raku to automate things, we can turn the process of making presentations from a chore to something (-O)fun.
Enjoy the holidays!
(Thanks to Elizabeth Mattijsen for helping with proofreading this article.)
Exercise for the reader: Setting up a Gemini Capsule
We did not cover how to set up a Gemini capsule, which is out of scope for this article. It is analogous to setting up a web server (like NGINX), along with the certificates required for enabling TLS on a publicly accessible node.
Once this required setup is done, we need to upload the gemtext files to a directory that the Gemini server is configured to serve.
There are plenty of Gemini server implementations in a wide array of languages including this one written in Raku.
The goal of this post is to demonstrate some Raku features by accomplishing something useful in about 100 lines of Raku. We’re going to avoid using libraries, and instead use only what’s available to us in the core language.
Our task requires wrapping the GitHub CLI and generating an HTML dashboard. We’ll query the recently updated issues for some core Raku repositories, parse the CLI’s output into structured data, and write out templated HTML to a file.
Along the way, we’ll learn about
Running a subprocess and capturing its output
Opening and writing to files
String substitution
Raku’s quoting constructs
Raku is a powerful language with lots of syntax, so we will try to proceed slowly, explaining each part of our script as we go. We will provide links to the official documentation where appropriate.
(One of the repositories we’ll query is raku/nqp, the “Not Quite Perl” intermediate representation.)
Let’s get started.
Part 1: Calling Another Program
The command line utility we’ll use is the GitHub CLI.
The GitHub CLI can fetch up to 1000 issues at a time with a command like the following.
gh issue list -R raku/doc-website --state closed,open --search 'sort:updated-desc'
We’re only going to concern ourselves with 50, and sort them by recently updated. We include the closed as well as the open issues, because an issue being recently closed is important information for Raku contributors.
The only argument we need to parameterize is -R. We have 5 repos to pass to -R, so let’s make an array.
my @repos = <<
raku/doc-website
raku/doc
moarvm/moarvm
rakudo/rakudo
raku/nqp
>>;
Next, let’s loop through that array and shell out to gh with the run builtin subroutine.
for @repos -> $repo {
say "getting issues for $repo";
run("gh", "issue", "list", "-R", "$repo", "--search", "sort:updated-desc",
"--limit", "50", "--state", "all");
}
This would be enough if we wanted to run this script locally and look at the output in our own terminal, but we want more!
Part 2: Program Structure
To keep our script organized, we’ll break it up into functions, or subroutines as Raku calls them. For now we’re going to leave some details out and focus on the program’s structure.
We’ll start by defining sub MAIN(), a special subroutine that specifies the entrypoint to our program. Yes, it’s all caps.
#!/usr/bin/env raku
sub MAIN() {
# program starts here...
}
We can define another function get-issues that takes a repo name as a parameter. We’ll be calling this inside a loop. This function will call gh, parse its output, and return structured data.
sub get-issues($repo) {
# encapsulate calling gh and parsing output
}
Finally, we’ll create a write-document function that accepts an open file handle and a hash of all the data we’ve gathered into memory.
sub write-document($h, %data) {
# Iterate our data and write it to a file handle
}
So far, I’ve avoided specifying types on either parameters or return values. Raku allows gradual typing, and enforces types with a mix of compile-time and run-time checks. We’ll add some types later.
Part 3: Capturing Output
Let’s explore the implementation of the get-issues function. We need to capture the output of gh. Previously we shelled out with a bare run call, which dumps its output to our terminal. Let’s clean this up and capture the output instead.
my @cmd-line = << gh issue list -R $repo --search "sort:updated-desc" --limit 50 --state "all" >>;
my Proc $proc = run @cmd-line, :out;
Our @cmd-line variable uses the << >> array style, which will still let us interpolate $repo, but use space-separated elements.
Furthermore, we pass the :out named argument to run, which captures the process’ stdout.
And we also add the builtin class Proc as a type annotation. This is for you, dear reader, to reinforce the fact that the run subroutine returns a Proc.
Now it’s time to do something with the output. The default output of gh issue list is newline-delimited. The lines method transforms our output into an array of strings. One line of output for each issue.
my @lines = $proc.out.lines;
Each line of output looks like this.
4536 OPEN Run and Edit examples docs, wishlist 2024-12-01T00:04:33Z
Conveniently, the output is tab-delimited.
Let’s put it all together and finish our get-issues function.
sub get-issues($repo) {
my @cmd-line = << gh issue list -R $repo --search "sort:updated-desc" --limit 50 --state "all" >>;
my Proc $proc = run @cmd-line, :out;
my @lines = $proc.out.lines;
my @issues;
for @lines -> $line {
my @fields = $line.split("\t");
my %issue = (
id => @fields[0].Int,
status => @fields[1],
summary => @fields[2],
tags => @fields[3],
updated-at => DateTime.new(@fields[4])
);
@issues.push(%issue)
# ignore any parsing errors and continue looping
CATCH { next }
}
return @issues;
}
To summarize, we shell out to gh issue list, loop through all the output, and accumulate the data into an array of hashes. See the Hash documentation for all the wonderful ways to build and manipulate hashes.
For good measure, we’ve coerced id into an Int (with an .Int method call) and parsed the updated-at date string into the builtin DateTime type (with the new class constructor).
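Given the tab-separated sample line shown earlier, those two coercions behave like this:

```raku
# A sample line of gh output, tab-delimited (as in the article):
my @fields = "4536\tOPEN\tRun and Edit examples\tdocs, wishlist\t2024-12-01T00:04:33Z".split("\t");

say @fields[0].Int + 1;             # 4537 (a real Int, so arithmetic works)
say DateTime.new(@fields[4]).year;  # 2024 (ISO 8601 string parsed)
```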
Back in our MAIN, we can make use of our fully-implemented get-issues routine. For each $repo, we add to our %data object.
my @repos = <<
raku/doc-website
raku/doc
moarvm/moarvm
rakudo/rakudo
raku/nqp
>>;
my %data;
for @repos -> $repo {
my @issues = get-issues($repo);
%data{$repo} = @issues;
}
Our %data hash ends up with the keys being the repo name, and the associated value is the array of issues for that repo.
Part 4: Rendering an HTML File
We have our data. Let’s template it as HTML and write it to a file.
There are many ways to open a file in Raku, but they’re all going to give you back an IO::Handle object.
For no particular reason, we’ll use the standalone builtin open. The :w adverb opens the file for writing, truncating it if it already exists.
my $filename = "report.html";
my $fh = open $filename, :w;
write-document($fh, %data);
$fh.close;
Actually, on second thought, let’s spice things up. We can do the same thing but use given, which lets us avoid naming our file handle, and instead access it as the topic variable $_.
given open $filename, :w {
write-document($_, %data);
.close
}
All that’s left to do is implement write-document.
The responsibility of our write-document routine is to write html to a file, and there are several ways of writing to a file. We will use the standalone spurt routine. The first argument to spurt will be our IO::Handle to the open file, and the second argument will be strings, our fragments of templated HTML.
Since the start of our HTML document is a fairly long string, we can use HEREDOC-style quoting. The various quoting constructs that Raku provides give us much of the power of string templating languages without requiring any additional libraries.
Everything between q:to/END/ and the closing END is treated as a single string argument to spurt. We used the q:to form since we didn’t need to interpolate any variables.
When we do need to interpolate variables, we can use the qq:to form and wrap our variables in curly brackets.
Let’s loop through our nested %data hash to fill out the templated middle part of our HTML document. We’ll see qq:to and data interpolation in action.
for %data.kv -> $repo, @issues {
spurt $h, qq:to/END/;
<section>
<h1>{$repo}</h1>
<details open>
<summary>Most recent issues (show/hide)</summary>
END
for @issues -> %issue {
# destructure values from %issue
my ($issue-number, $summary, $status) = %issue<id summary status>;
# HTML escape (minimal)
# & becomes &amp;
# < becomes &lt;
# > becomes &gt;
$summary.=subst(/ '&' /, '&amp;'):g;
$summary.=subst(/ '<' /, '&lt;'):g;
$summary.=subst(/ '>' /, '&gt;'):g;
spurt $h, qq:to/END/;
<div class="issue-container">
<div class="issue-id"><a href="https://github.com/{$repo}/issues/{$issue-number}">{$issue-number}</a></div>
<div class="issue-summary">{$summary}</div>
<div class="issue-status">{$status}</div>
</div>
END
}
# Section end
spurt $h, q:to/END/;
</details>
</section>
<hr/>
END
}
We also did a little bit of HTML escaping on the $summary string. Note that the .= is an in-place modification of the $summary string, using method call assignment. Every Str has a subst method, and we’re just calling that and assigning to ourselves in one go. The reason we need to do that is to escape some characters that will frequently appear in issue summaries, but are bad news for rendering to HTML. This isn’t a totally injection-safe solution, but it’s good enough for our purposes.
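In isolation, the method-call-assignment escaping looks like this:

```raku
my $summary = "a < b & c";

# Escape & first, so we don't re-escape the entities we produce.
$summary .= subst('&', '&amp;', :g);
$summary .= subst('<', '&lt;',  :g);

say $summary; # a &lt; b &amp; c
```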
Finally we can end our write-document routine with closing tags.
Two Raku know-how topics: plotting chess positions and LLM persona making and interaction
A general know-why topic: reasons for LLM-deficiencies while using FEN
Remark: Playing chess (really) well with LLMs is not the focus of this article.
Remark: This article was originally written as a Jupyter notebook. A few types of “magic” cells are used; hence, some of the cells below start with comment lines that are actually magic cell specifications. Here are a few such specs: #%js, #% chat cm, #% markdown, etc.
TL;DR
The making of nice and tunable chess position (JavaScript) plots is facilitated by Raku.
Marveling at the chess position plots is encouraged.
The main message of the notebook/document is the section “Possible explanations”.
Asking LLMs to play chess via FEN is counter-productive.
FEN “compresses” field data, and LLMs are not good at predicting those kinds of sequential compressions.
On chess playing
There are plenty of (research) articles, posts, software packages, or coding projects about LLMs-and-chess. (For example, see [DM1, NC1, NCp1, MLp1].)
At least half a dozen of those articles and posts proclaim much better chess playing results than the examples given below.
In this article the focus is not on playing chess well with LLMs. The article’s main message is that Raku can be used to facilitate chess playing, exploration, or puzzle solving (with LLMs or otherwise).
LLM persona — ChessMaster
Here is the prompt of an LLM persona for playing chess:
#% chat cm prompt, model=gpt-4o, max-tokens=4096, temperature=0.25
You are a ChessMaster engaged in a text-based chess game. You play the opponent, usually with the black chess pieces and pawns.
The game should be easy to follow and adhere to standard chess rules.
Here is how you behave:
- Use Forsyth-Edwards Notation (FEN) strings to communicate the state of the gameplay.
- Ensure you play by standard chess rules.
- Ensure the game interprets and executes these moves correctly.
- Allow the player to input using:
1. Algebraic Notation (AN) input, for example, "e2 to e4" to move a pawn
2. Descriptive input, like, "bishop from a3 to f8"
- When the user gives you AN string you respond with three strings:
1. FEN position after user's AN input is played
2. Your move given as AN string
3. FEN position after your AN move is played
- If asked provide clear feedback to the player after each move, including the move itself.
- For example, "You moved: e2 to e4" and "Opponent moved: e7 to e5".
- If you take a figure of the user make sure say that in your responses.
- If asked provide player guidance
- Include in-game guidance resources or instructions that explain chess rules, move input, and special conditions like check and checkmate.
- Proclaim when the user gives you AN move that is wrong, noncompliant with the standard chess rules.
- When the user asks you to start a game without specifying a FEN string:
- You give the standard initial position as start.
- You play the black opponent, the user plays the white, except if the user says that he plays the black side.
Chat object created with ID : cm.
A few observations on the prompt above:
The prompt is carefully crafted.
Other research efforts often use much shorter prompts like “You are chess master.”
FEN is explicitly postulated as the way to communicate the “game field.”
Both AN and plain English are “allowed” as user inputs.
Remark: Below the LLM persona is referred to as “ChessMaster” or “CM”.
Chess position plotting setup
This section shows the setup for plotting chess positions while using a Raku Chatbook, [AAp2].
Load packages:
use Data::Importers;
use FEN::Grammar;
use JavaScript::D3;
use JavaScript::D3::Chess; # Needed only to show tuning parameters
Priming the Jupyter notebook for rendering JavaScript plot code:
This section gives several methods for extracting FEN strings from (free) text strings, i.e., chess LLM outputs.
A function to check whether the argument is a FEN string or not:
my sub is-fen-string(Str:D $txt) {
my $match = FEN::Grammar.parse($txt);
return $match.defined;
}
Here is a way to use that verification function:
my sub fen-specs-by-predicate(Str:D $txt) {
    # First the faster check: backtick- or line-delimited candidates
    my @res = do with $txt.match(
        / $<delim>=('`'|\v) $<fen>=(<-[\v`]>+) $<delim> /, :g
    ) {
        $/.values.map: {
            $_<fen>.Str.trim => &is-fen-string($_<fen>.Str.trim)
        }
    }
    @res .= grep({ $_.value });
    return @res».key if @res;

    # Slow(er) check over the whole text
    my @res2 = do with $txt.match(
        / $<fen>=(.*) <?{ is-fen-string($<fen>.Str) }> /, :g
    ) {
        $/.values.map({ $_<fen>.Str.trim });
    }
    else {
        Nil
    }
    @res2
}
Remark: The functions is-fen-string and fen-specs-by-predicate are useful if 3rd party chess notation packages are used. (Not necessarily Raku-grammar based.)
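When FEN::Grammar is not at hand, a rough structural stand-in for the predicate can still be useful for quick experiments. The fen-candidates sub below is a hypothetical sketch, not real FEN validation: it only extracts backtick-delimited candidates and checks for eight '/'-separated rank-like groups:

```raku
# Hypothetical simplified extractor (structural check only;
# FEN::Grammar above does the real validation): keep backtick-
# delimited candidates that start with eight '/'-separated
# groups of piece letters and digits.
sub fen-candidates(Str $txt) {
    $txt.comb(/ '`' <( <-[`\v]>+ )> '`' /)
        .grep(/ ^ [<[pnbrqkPNBRQK1..8]>+] ** 8 % '/' /)
}
```

Here `<( ... )>` marks the part of the match that comb returns, so the backticks themselves are dropped.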
Here is a fast and simple use of a Raku grammar to extract FEN strings:
my sub fen-specs(Str:D $txt) {
return do with $txt.match( /<FEN::Grammar::TOP>/, :g) {
$/».Str
}
}
Here are examples using both approaches:
my $txt = q:to/END/;
You moved: b2 to b3
FEN after your move: `rnbqkbnr/pppppppp/8/8/8/1P6/P1PPPPPP/RNBQKBNR b KQkq - 0 1`
Now, it's my turn. I'll play:
Opponent moved: e7 to e5
FEN after my move: `rnbqkbnr/pppp1ppp/8/4p3/8/1P6/P1PPPPPP/RNBQKBNR w KQkq e6 0 2`
Your move!
END
say "fen-specs-by-predicate : ", fen-specs-by-predicate($txt).raku;
say "fen-specs : ", fen-specs($txt).raku;
fen-specs-by-predicate : ["rnbqkbnr/pppppppp/8/8/8/1P6/P1PPPPPP/RNBQKBNR b KQkq - 0 1", "rnbqkbnr/pppp1ppp/8/4p3/8/1P6/P1PPPPPP/RNBQKBNR w KQkq e6 0 2"]
fen-specs : ("rnbqkbnr/pppppppp/8/8/8/1P6/P1PPPPPP/RNBQKBNR b KQkq - 0 1", "rnbqkbnr/pppp1ppp/8/4p3/8/1P6/P1PPPPPP/RNBQKBNR w KQkq e6 0 2")
Example play
This section has an example play (again within a Raku chatbook, [AAp2]).
Starting a game:
#% chat cm > markdown
Let us start a new game.
Great! Let's start a new game. Here is the standard initial position:
FEN: `rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1`
It's your turn. You can make your move using algebraic notation or descriptive input.
Plotting the FEN strings from the LLM output:
#%js
js-d3-chessboard(fen-specs(cbpaste), |%opts)
User first move:
#% chat cm > markdown
b2 to b3
You moved: b2 to b3
FEN after your move: `rnbqkbnr/pppppppp/8/8/8/1P6/P1PPPPPP/RNBQKBNR b KQkq - 0 1`
Now, it's my turn.
My move: e7 to e5
FEN after my move: `rnbqkbnr/pppp1ppp/8/4p3/8/1P6/P1PPPPPP/RNBQKBNR w KQkq - 1 2`
Your move!
Plotting the FEN strings from the LLM output:
#%js
js-d3-chessboard(fen-specs(cbpaste), |%opts)
Another user move:
#% chat cm > markdown
Bishop from c1 to a3
You moved: Bishop from c1 to a3
FEN after your move: `rnbqkbnr/pppp1ppp/8/4p3/8/1P6/B1PPPPPP/RN1QKBNR b KQkq - 2 2`
Now, it's my turn.
My move: Bf8 to a3 (capturing your bishop)
FEN after my move: `rnbqk1nr/pppp1ppp/8/4p3/8/1P6/b1PPPPPP/RN1QKBNR w KQkq - 0 3`
Your move!
User move and ChessMaster move:
#%js
js-d3-chessboard(fen-specs(cbpaste), |%opts)
The chess positions clearly show the wrong LLM move (or FEN string). Trying to fix it:
#% chat cm
No, your move is wrong -- you took my white bishop with your black one.
Apologies for the oversight. Let's correct that.
Your move: Bishop from c1 to a3
FEN after your move: `rnbqkbnr/pppp1ppp/8/4p3/8/1P6/B1PPPPPP/RN1QKBNR b KQkq - 2 2`
Now, it's my turn.
My move: Bf8 to a3 (capturing your bishop)
FEN after my move: `rnbqk1nr/pppp1ppp/8/4p3/8/1P6/b1PPPPPP/RN1QKBNR w KQkq - 0 3`
Your move!
Well, the LLM fix preserved the same wrong moves:
#%js
js-d3-chessboard(fen-specs(cbpaste), |%opts)
Alternative development
Several other games were played and, although they developed differently, in all of them ChessMaster gave wrong FEN strings within the first six moves.
Chess position recognition
It is interesting to review the abilities of the vision (or “omni”) LLM models to recognize chess positions and produce the (correct) corresponding notation strings.
Remark: (Some of) the LLM-vision models explicitly state that they are not good at recognizing chess positions. (For example, see OpenAI’s “Vision/Limitations”.)
#%js
my $url = 'https://i.sstatic.net/uKnenm.png';
js-d3-image-display($url)
Here the image’s corresponding Base64 string is obtained and rendered:
#%markdown
my $imgBase64 = data-import($url, format => 'md-image');
Remark: The Raku chatbook visualizations via JavaScript and URLs are faster than using Base64 strings.
Here is an LLM vision application (using the image’s URL):
#% markdown
my $chessDescr = llm-vision-synthesize('Describe the following image:', $url);
The image shows a chess position with the following pieces:
- White pieces:
- Rook on e8
- Rook on a1
- Bishop on a8
- Queen on g1
- King on e1
- Pawn on f2
- Pawn on g3
- Black pieces:
- Knight on e4
- Pawn on g4
- King on f3
The board is mostly empty, with pieces strategically placed. White is in a strong position with multiple attacking options.
We can see from the output above that the positions of both the white and black figures are correctly recognized by the (default) LLM vision model. (Currently, that model is “gpt-4o”.)
Here we ask an LLM to convert the textual description into Raku code:
my $chessSpec = llm-synthesize([
    'Convert the following chess position description into a JSON array of dictionaries.',
    'Conversion examples:',
    '- "Black rook on e8" converts to `{x: "e", y: "8", z: "r"}`',
    '- "White bishop on a4" converts to `{x: "a", y: "4", z: "B"}`',
    'The key "z" has small letters for the blacks, and capital letters for the whites.',
    'DESCRIPTION:',
    $chessDescr,
    llm-prompt('NothingElse')('JSON')
    ],
    e => llm-configuration('chatgpt', model => 'gpt-4o', temperature => 0.4),
    form => sub-parser('JSON'):drop
);
[{x => e, y => 8, z => R} {x => a, y => 1, z => R} {x => a, y => 8, z => B} {x => g, y => 1, z => Q} {x => e, y => 1, z => K} {x => f, y => 2, z => P} {x => g, y => 3, z => P} {x => e, y => 4, z => n} {x => g, y => 4, z => p} {x => f, y => 3, z => k}]
Tabulate the result:
#%html
$chessSpec ==> to-html(field-names => <x y z>)
x  y  z
e  8  R
a  1  R
a  8  B
g  1  Q
e  1  K
f  2  P
g  3  P
e  4  n
g  4  p
f  3  k
Here we compare the original image with the image corresponding to its LLM derived description (converted to a Raku array of maps):
#%js
js-d3-image-display($url)
~ "\n\n" ~
js-d3-chessboard(
$chessSpec, |%opts, title => 'Text description to JSON'
)
By comparing the images it can be seen that a correct LLM-vision recognition JSON spec was obtained.
Remark: The FEN string of the image is B3R3/8/8/8/4n1p1/5kP1/5P2/R3K1Q1:
#%js
my $fenOrigSpec = 'B3R3/8/8/8/4n1p1/5kP1/5P2/R3K1Q1';
js-d3-chessboard($fenOrigSpec, |%opts, title => 'Image FEN')
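For cross-checking such positions, the piece-placement field of a FEN string can be unpacked into the same array-of-maps shape used for $chessSpec above. This is a minimal sketch: the hypothetical fen-to-spec handles only the placement field, not the trailing FEN fields:

```raku
# Unpack a FEN piece-placement field into {x, y, z} maps,
# matching the JSON spec format used earlier. Placement field
# only -- strip any trailing ' w KQkq - 0 1' part first.
sub fen-to-spec(Str $placement) {
    my @spec;
    for $placement.split('/').kv -> $i, $rank-str {
        my $rank = 8 - $i;          # FEN lists rank 8 first
        my $file = 0;
        for $rank-str.comb {
            when /\d/ { $file += $_ }   # digits aggregate empty squares
            default {
                @spec.push: %( x => ('a'.ord + $file).chr, y => ~$rank, z => $_ );
                $file++;
            }
        }
    }
    @spec
}

say fen-to-spec('B3R3/8/8/8/4n1p1/5kP1/5P2/R3K1Q1').elems;  # 10 pieces
```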
Using FEN strings is counter-productive
In this subsection we demonstrate that LLMs have a hard time converting correctly from or to FEN strings.
Here are defined LLM-configurations and functions that are used below (more than once):
# Configurations
my $conf4o = llm-configuration('chatgpt', model => 'gpt-4o', temperature => 0.4);
my $confGemini = llm-configuration('gemini', model => 'gemini-1.5-pro-latest', temperature => 0.2);

# Vision function
my sub fen-vision(Str:D $src, *%args) {
    llm-vision-synthesize([
        'Describe the following chess position using Forsyth-Edwards notation.',
        llm-prompt('NothingElse')('FEN')
        ],
        $src,
        |%args)
}

# Conversion function; strips Markdown code fences from the LLM reply
my sub fen-from-descr(Str:D $src, *%args) {
    llm-synthesize([
        'Convert the following chess position description into a FEN spec.',
        $src,
        llm-prompt('NothingElse')('FEN')
        ],
        |%args).subst(/^ '```' \w* | '```' $/, '', :g).trim
}
Get FEN string corresponding to the LLM-obtained image description:
my $fenTxt = &fen-from-descr($chessDescr, e => $conf4o);
my $fenTxtByGemini = &fen-from-descr($chessDescr, e => $confGemini);
($fenTxt, $fenTxtByGemini)
(rB2R2/8/8/8/4n1p1/5kP1/5P2/R3K1Q1 w - - 0 1 r1bR2k1/8/8/8/4n1p1/5Pp1/5P1Q/R3K3 w - - 0 1)
Instead of a “plain English” description, request a FEN string to be returned for the given image:
my $fenImg = fen-vision($url, e => $conf4o)
8/B3R3/8/8/4n1p1/5kP1/5P2/R3K1Q1 b - - 0 1
Using one of the latest Gemini vision/omni models (with Gemini’s vision models, image Base64 strings have to be used instead of image URLs):
my $fenImgByGemini = fen-vision($imgBase64, e => $confGemini)
8/8/8/8/5n2/8/4kpp1/R1B1K2Q w - - 0 1
Here is a comparison of the original image (top left) and the chess position plots based on the LLM recognitions above:
#% js
my %opts2 = %opts.clone; %opts2<width> = 350; %opts2<title-font-size> = 12;
[
    'Original image FEN' => $fenOrigSpec,
    'Text description to FEN (OpenAI, wrong)' => $fenTxt,
    'Text description to FEN (Gemini, wrong)' => $fenTxtByGemini,
    'Text description to JSON (OpenAI, correct)' => $chessSpec,
    'Image to FEN (OpenAI, wrong)' => $fenImg,
    'Image to FEN (Gemini, wrong)' => $fenImgByGemini,
].map({
    js-d3-chessboard($_.value, |%opts2, title => $_.key)
}).join("\n")
Here are a few observations:
The FEN-titled images show several mistakes: chess figures are misplaced and/or omitted.
E.g., in some of them White has no king.
The tried LLMs do not faithfully convert into FEN specs:
Plain-text chess position descriptions
Chess position images
Conjectures
We can conjecture that:
Due to FEN’s aggregational nature, LLMs have a hard time deriving correct FEN representations.
FEN is not sufficiently well represented in the LLMs’ training data.
I.e. not enough chess games in the training data use FEN.
We decided that the second conjecture is hard to investigate and confirm. Hence, below we focus on the first conjecture.
Possible explanations
In this section we ask one of ChatGPT’s reasoning models — “o1-preview” — to prove or disprove the conjecture formulated above.
#% openai > markdown, model=o1-preview, max-tokens=4096, temperature=0.5
Reason, prove, dis-prove, or explain the following conjecture:
> The chess Forsyth-Edwards Notation (FEN) has "aggregational nature", that is why LLMs have hard time
> deriving correct FEN representations from textual descriptions or from images.
Do not talk in generic terms. Be as concrete as possible.
Be moderately concise.
Two of the LLM responses are given below.
Shorter explanation
Yes, the aggregational nature of the Forsyth-Edwards Notation (FEN) does contribute to the difficulty Large Language Models (LLMs) have in deriving correct FEN representations from textual descriptions or images.
In FEN, empty squares within a rank are aggregated and represented by a single digit (1-8), indicating consecutive empty squares. For example, if there are three empty squares between pieces on a rank, they are represented by the digit ‘3’. This requires precise counting and aggregation of empty squares, which poses challenges for LLMs.
Consider a textual description: “On the 5th rank, there is a rook on the a-file and a knight on the h-file.” To convert this into FEN for that rank, the model must:
Place the rook on ‘a5’.
Count the empty squares between ‘a5’ and ‘h5’, which totals six.
Place the knight on ‘h5’.
Represent this rank in FEN as ‘R6N’.
This aggregation demands that the model accurately count the number of empty squares and maintain their sequence, a task that involves exact numerical computation and positional tracking.
LLMs, however, generate text based on learned patterns and are not inherently equipped for precise counting or maintaining exact positional information over sequences. They may miscount the number of empty squares or misplace pieces when trying to aggregate, leading to incorrect FEN output.
For example, an LLM might incorrectly generate ‘R5N’ instead of ‘R6N’, miscounting the empty squares and thus producing an incorrect representation of the board.
Furthermore, when deriving FEN from images, the model must first interpret the visual information into positional data and then perform the aggregation. Each step compounds the difficulty, especially since LLMs are primarily language-based and not designed for image recognition or detailed numerical tasks.
In contrast, if the notation were explicit—listing each square without aggregation (e.g., representing empty squares with ‘1’s)—the LLM’s task would be simplified. It would not need to perform counting or aggregation, reducing the chances of error. For instance, the rank could be written as ‘R111111N’, directly mapping each square without the need to count and compress empty squares.
Therefore, the aggregational nature of FEN introduces a layer of complexity that requires precise counting and positional reasoning, areas where LLMs currently have limitations. This contributes significantly to their difficulty in accurately deriving FEN representations from textual descriptions or images.
Longer explanation
The conjecture states that the Forsyth-Edwards Notation (FEN) has an “aggregational nature,” and this characteristic is the reason why Large Language Models (LLMs) have difficulty deriving correct FEN representations from textual descriptions or images. Let’s explore this statement in detail to determine its validity.
Understanding FEN and Its Aggregational Nature
FEN is a standard notation for describing a particular position of a chess game. It provides all the necessary information to reconstruct a game position, including:
Piece Placement: The positions of all the pieces on the board.
Active Color: Which player’s turn it is (white or black).
Castling Availability: Whether each side can castle kingside or queenside.
En Passant Target Square: If an en passant capture is possible.
Halfmove Clock: The number of halfmoves since the last capture or pawn move (for the fifty-move rule).
Fullmove Number: The number of the full move.
The piece placement portion of FEN uses a specific syntax:
The board is divided into ranks from 8 (top) to 1 (bottom), and each rank is listed from the ‘a’ file (left) to the ‘h’ file (right).
Pieces are represented by letters (‘K’ for king, ‘Q’ for queen, etc.), uppercase for White and lowercase for Black.
Empty squares are represented by digits from ‘1’ to ‘8’, indicating consecutive empty squares.
Ranks are separated by slashes (‘/’).
Example:
The starting position in FEN is:
rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
Aggregational Nature Explained
The term “aggregational nature” refers to how FEN aggregates consecutive empty squares into a single digit and compresses the board’s information. Instead of listing each square individually, FEN combines adjacent empty squares and uses counts to represent them. This requires:
Counting: Determining the number of consecutive empty squares.
Grouping: Aggregating these counts within each rank.
Sequence Management: Resetting counts when a piece is encountered.
This aggregation reduces redundancy and creates a compact representation but adds complexity to the generation process because it involves precise counting and structuring.
Challenges for LLMs in Generating FEN
LLMs like GPT-4 are designed to predict and generate text based on patterns learned from large datasets. While they excel at understanding natural language and generating coherent narratives, they face challenges with tasks that require:
Precise Counting and Arithmetic: LLMs are not inherently numerically accurate. They can struggle with exact counts, especially over larger sequences.
Strict Formatting Rules: Adhering to rigid syntactic structures without deviation is difficult. Minor errors can occur, disrupting the validity of the entire notation.
Complex Aggregation: Aggregating data accurately across multiple dimensions (e.g., empty squares across ranks) is non-trivial for LLMs.
Deriving FEN from Textual Descriptions
When attempting to convert a textual description of a chess position into FEN, an LLM must:
Parse the Description: Understand the positions of all pieces and empty squares.
Count Empty Squares: Accurately count and aggregate empty squares in each rank.
Apply Syntax Rules: Format the notation correctly, including slashes, piece symbols, and counts.
Include Additional Information: Correctly identify active color, castling rights, en passant targets, and move counts.
Each of these steps requires precision and adherence to specific rules, which are areas where LLMs may falter.
Deriving FEN from Images
Deriving FEN from images adds another layer of complexity:
Visual Recognition: The LLM must first accurately identify each piece and its location from the image.
Spatial Mapping: Map visual data to board coordinates.
Aggregation: As with textual descriptions, aggregate empty squares appropriately.
Image recognition isn’t within the native capabilities of text-based LLMs without the integration of specialized computer vision systems.
Is the Aggregational Nature the Main Cause?
While the aggregational aspect of FEN contributes to the difficulty, it’s not the sole reason LLMs struggle:
Counting Limitations: LLMs generally have difficulty with tasks requiring exact counts. The need to count empty squares exacerbates this issue.
Symbolic Representation: FEN uses specific symbols and cases (uppercase for White, lowercase for Black), requiring attention to detail.
Context Length: Keeping track of all pieces and empty squares across 64 squares stretches the LLM’s ability to maintain context without error.
Error Propagation: A small mistake in one part of the FEN string can invalidate the entire notation.
Conclusion
The conjecture holds merit. The “aggregational nature” of FEN indeed contributes to the challenges LLMs face when deriving FEN representations from textual descriptions or images. The aggregation requires precise counting and grouping, tasks that are inherently difficult for LLMs due to their architecture and training focus.
Therefore, while LLMs are powerful tools for natural language understanding and generation, their current limitations in precise counting, strict formatting adherence, and aggregation make tasks like generating accurate FEN strings particularly challenging.
Potential Solutions
To mitigate these challenges, specialized approaches could be employed:
Augmented Models: Combining LLMs with tools or modules designed for counting and formatting could improve accuracy.
Training Data: Providing more examples of FEN generation during training might help the LLM learn patterns specific to FEN.
Pre-Processing Steps: Breaking down the task into smaller, manageable steps (e.g., handling one rank at a time) could reduce complexity.
Final Thoughts
Understanding the limitations of LLMs in tasks requiring strict precision and aggregation is crucial. While they are adept at many language tasks, recognizing and addressing their weaknesses allows for better application and the development of complementary tools to overcome these challenges.
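As a closing aside, the empty-square aggregation that both explanations single out is mechanically simple to express in Raku; here is a self-contained sketch of the round trip on a single rank, using the 'R111111N' ↔ 'R6N' example from above:

```raku
# Aggregate runs of '1's into a count digit (the FEN form),
# and expand a count digit back into '1's.
sub compress-rank(Str $rank --> Str) {
    $rank.subst(/'1'+/, { .chars }, :g)
}
sub expand-rank(Str $rank --> Str) {
    $rank.subst(/\d/, { '1' x $_ }, :g)
}

say compress-rank('R111111N');  # R6N
say expand-rank('R6N');         # R111111N
```

The counting that trips up LLMs is a one-line substitution for a program, which is part of why delegating notation work to code (as with FEN::Grammar above) is the safer route.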
The elves were keen to have an attractive Christmas tree this year. As usual, they wanted to build it with raku, their favourite programming language.
They dug back into the archives of Christmas past and came up with a pretty good way to make a tree – Santa and the Magic Tree. But honestly they felt that it is a bit old fashioned and needs something more dynamic.
HTMXmas
Some more digging and they found some great ideas about using HTMX with raku’s Cro here and here. This was beginning to look cool. According to Carson Gross – the inventor of HTMX – this was the best present ever: they could move the centre of gravity of web development from front-end JavaScript (bleh) back to server-side Raku (\o/) and keep their tree dynamic and modern at the same time.
Fragmas
(This is getting beyond a joke … ed.)
One of the coolest things that Carson said (and he said a lot of cool things) was:
Template fragments are a relatively rare Server Side Rendering (SSR) template library feature that allow you to render a fragment or partial bit of the content within a template, rather than the entire template. This feature is very handy in Hypermedia Driven Applications because it allows you to decompose a particular view for partial updates internally without pulling fragments of the template out to separate files for rendering, creating a large number of individual template files.
By keeping all the HTML in a single file, it is also easier to reason about how a feature works. This follows the Locality of Behavior design principle.
But – until now – no fragments were to be found. So the elves wished and wished, and Mrs Claus heard their little voices and was sad and spoke to Santa and lo – they magically appeared! Yes – Fragments in Cro in time for Christmas!
Here’s the Pull Request that was merged 2 weeks ago and here are the docs. We are still waiting for a new release of Cro (8.9.0?) to push that change to the zef repo. But the elves couldn’t wait, so they got it from the GitHub HEAD:
Let’s see how that looks in practice. First here is the Cro Template file – note the <:fragment svg($_)> tag and also the use of Cro Templates conditionals <?.bauble>…
And now, in Merry-Cromas.rakumod we specify the Cro Routes that our buttons will use to swap in the appropriate fragment(s) – here is where we specify the data for the conditionals to process as a raku hash using pair syntax, like { :baubles, :stars }.
You can see that the Fragment syntax provides a way for the Baubles and the Star to be set in situ within the svg tags rather than hoicked out to a separate file. And yet the get routes can just pick out the fragment they need and, with the help of HTMX, insert it dynamically into the web page.
This is a simple example; the approach is even more powerful when a website consists of many hundreds of lines of HTML, CSS and JavaScript. It also minimises the traffic that is sent over the wire – faster, more responsive sites are the result.
Bauble Up
Here’s the final result in action:
Illuminati
The elves have shared all the code: getting-started instructions and a working Cro site that you can run locally with just a few commands are provided here for you to enjoy the fun…
Programming for a living used to be an active conversation between yourself, the computer, and your colleagues. This Christmas, a new guest is joining the programming party: the LLM.
Large Language Models (LLMs) can talk a lot and, just like your eccentric uncle at Christmas, occasionally blurt out something bonkers. But do we want to invite LLMs to the programming party? And if we do, where should they sit? More importantly, how can they help things flow?
Finding flow while programming is the art of staying in the Goldilocks zone – working on challenges that are not too hard and not too easy.
Raku and Perl are both highly expressive languages with rich operators. Their low and long learning curves enable programmers to pick a place that matches their current skill level and stay in flow.
There is a potential downside, however, for a language like Raku that is optimised for (O)fun. New programmers sometimes encounter code too far beyond their skill level, pushing them outside their Goldilocks zone. For these programmers fear can become a flow stopper. And as Master Yoda wisely said, “Fear leads to anger. Anger leads to hate. Hate leads to suffering.”
Fortunately, Father Christmas is smiling brighter than ever this Christmas! LLMs are a gift that can help lift more programmers up the learning curve and into the Goldilocks zone: less fear, more flow.
Expressive languages, like Raku, can offer LLM-powered utilities as part of their tooling, making it easier for new programmers to leap up the learning curve. Here’s a simple example.
Remember Your First Encounter with a Regular Expression?
Now, instead of thinking “WAT?!”, just run the Raku wat command-line utility to help explain it.
The wat Utility Can Help
shell> wat "[12]\d{3}-[01]\d-[0-3]\d ([^ \[]*?) \[([^\]]*?)\]:.*"
This code extracts log entries in the format "timestamp [source]: message".
Use it to parse log files and extract specific information.
wat works with code snippets or can explain entire programs for you.
shell> wat WAT/CLI.rakumod
This code summarizes code using the LLM::DWIM module.
To use:
- Run `wat <filename>` to summarize a program
- Run `wat "$some-code.here();"` to explain code
- Run `wat config` to change settings
- Run `wat help` to show help
# keeping the prompt short and sweet
my $prompt = qq:to/PROMPT/;
You are an expert programmer.
Produce friendly, simple, structured output.
Summarize the following code for a new programmer.
Limit the response to less than $response-char-limit characters.
Highlight how to use this code:
PROMPT
One programmer’s WAT is another’s DWIM (Do What I Mean). Under the hood, the WAT::CLI module relies on the dwim routine from Brian Duggan’s LLM::DWIM module.
say dwim $prompt ~ $program-content;
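Since dwim calls out to an LLM, it helps to check the prompt assembly locally first. Here is a small sketch with assumed values for $response-char-limit and the code snippet (neither is from the actual WAT::CLI module):

```raku
# Build the prompt with the character limit interpolated
# (qq:to interpolates; plain q:to would not), then append
# the code to be explained. No LLM call happens here.
my $response-char-limit = 500;           # assumed setting
my $prompt = qq:to/PROMPT/;
    You are an expert programmer.
    Produce friendly, simple, structured output.
    Summarize the following code for a new programmer.
    Limit the response to less than $response-char-limit characters.
    Highlight how to use this code:
    PROMPT
my $program-content = 'say [+] 1..10;';  # assumed snippet
say $prompt ~ $program-content;
```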
The LLM::DWIM module lets you choose the LLM you prefer and is powered by Anton Antonov’s excellent LLM modules.
Context Matters
Current LLMs work best with sufficient context – so they can associate problems and solutions. Without context, LLMs can stumble. Just try giving wat an empty context " " and it’s like that eccentric uncle at Christmas after one too many!
Getting Started
To have a play with wat this Christmas, install it with Raku’s package manager zef:
shell> zef install WAT
Why Not Build Your Own LLM-Powered Raku Tool?
Here are some ideas for Raku utilities to help your programming to flow.
Given the Source Code of a Program as part of the prompt …
Comment Injector
Inject useful comments
Comment Translator
Translate comments into your spoken language
Reviewer
Suggest potential improvements
Module Documentor
Document a module with RakuDoc
Unit Test Maker
Generate unit tests
CLI Maker
Generate a command-line interface to a program
API Scaffolder
Translate between OpenAPI spec ↔ program
Code Translator
Translate code from another language to Raku (e.g., Python → Raku)
SEO Docs
Document the problems the program solves and generate HTML for SEO and further ingestion by LLMs (e.g., nomnom)
Given the Output of a Shell Command as part of the prompt …
Ticket Subtasks
Generate a set of subtasks required to close a ticket
Test Error Summary
Summarise the output of many tests with next steps to fix
Git Log Summary
Generate a stand-up report for your next Scrum meeting
Got something in mind that will help your own programming? Thanks to Raku’s rich set of LLM modules and command-line support it’s easy to do.
Merry Christmas, and happy coding with Raku + LLM powered utilities!